AI Assisted Coding

It is no exaggeration to say that artificial intelligence is transforming our lives and the way we do business. In less than a year AI has shown what it can do, and it is clear that this is only the beginning.

Although there has been a long history of research and development in AI, the general public has only occasionally come across the term – but since the launch of ChatGPT (based on GPT-3.5) in November 2022 at the latest, we have been witnessing the real “AI explosion” everyone is talking about: language models, image and video generators, music… Let’s see how it has evolved.

A BIT OF AI HISTORY…

It all started with the Turing machine… Created by the ill-fated English mathematician Alan Turing, the Turing machine is an abstract, simplified model of the computer.

Later, game programs based on fixed input-output reactions appeared – the most famous being chess programs, the first of which was created by Alan Turing in 1947. Here, “only” sufficient computing power is needed for the machine to handle all combinations of the known (fixed) options and move sequences. These programs became more and more advanced over time, until in 1997 IBM’s Deep Blue beat the then reigning world chess champion Garry Kasparov.

The next step was the creation of neural networks and genetic algorithms. As their names suggest, these are models and algorithms derived from biology and evolution.

  • A neural network is a network of simple, numerically described units that maps inputs (activations) to outputs. It is not a cognitive system and cannot learn on its own: it has to be trained, and the training parameters determine when the process stops at an acceptable level.
  • Genetic algorithms are a class of search techniques for finding an optimum, or an element with a given property. They are specialized evolutionary algorithms whose techniques are borrowed from evolutionary biology (a minimal sketch follows this list).

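To make the second bullet more concrete, here is a minimal genetic algorithm sketch in Python; the bit-string “problem”, population size and mutation rate are illustrative assumptions, not taken from any particular system:

    import random

    # Minimal genetic algorithm: evolve a random bit string towards all 1s.
    TARGET_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 100, 0.02

    def fitness(individual):
        # Fitness = number of 1 bits; the optimum is a string of all 1s.
        return sum(individual)

    def crossover(a, b):
        # Single-point crossover, the operator borrowed from biology.
        point = random.randint(1, TARGET_LEN - 1)
        return a[:point] + b[point:]

    def mutate(individual):
        # Flip each bit with a small probability.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in individual]

    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        # Selection: the fitter half of the population becomes the parent pool.
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]
        # Recombination and mutation produce the next generation.
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]
        if max(fitness(ind) for ind in population) == TARGET_LEN:
            break

    print("best fitness:", max(fitness(ind) for ind in population))

The same select–recombine–mutate loop is what more sophisticated, real-world evolutionary optimizers build on.
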
Moving closer to the present, Big Data – a technological environment for processing large and complex data sets – first appeared as a term in 1998 and flourished in the early 2000s. Big Data is characterized by the three V’s – Volume, Velocity and Variety: data volumes so large, arriving so fast and so diverse that they cannot be managed and analyzed with traditional structured data models and manual work.

The next step was the appearance of chatbots. Although the first chatbot, ELIZA, was created in 1966, chatbots only really took off in the second half of the 2010s, mostly on corporate websites as a first-level customer service tool. They are based on pre-programmed cases, with limited ability to learn and adapt to changing environments. They are usually specialized in specific areas of expertise.

And so we slowly arrive at the present: perhaps the best known of the Large Language Models (LLMs), ChatGPT, developed by OpenAI, was launched in November 2022. Since then, several similar services have appeared or gained prominence, such as Bing AI, Azure OpenAI Service, GitHub Copilot and Google Bard. The aim of large language models is to improve language expressiveness and to make human-machine interaction more natural.

“AI is your friend”

How many times have we heard, in response to a question, “Google is your friend”? This annoying phrase is slowly being replaced by a modernized version, “AI is your friend” – because alongside the well-known internet search engines (Google and others), AI search has also appeared, for example through AI tools integrated into Bing or Opera search engines and browsers.

As an IT company, we will focus for the rest of this article on how AI tools, including large language models, can support the work of developers through AI assisted coding.

AI ASSISTED CODING – A FEW TIPS

In June 2023, GitHub published a survey of 500 US-based software developers: 92% of them already use some kind of AI-based tool in their work (and, of course, for personal use). Although the survey was conducted in the US, the proportion among European developers is likely to be similar.

The main objectives of AI adoption for developers are:

  • Learning and professional development: AI makes it easier to look things up and to dig deeper into a professional topic
  • Increased productivity and efficiency: standard coding tasks can be “delegated” to AI
  • Focus on creation: tedious tasks can be safely left to AI, so the developer can concentrate on the truly creative development work

We at DSS have been using ChatGPT since its release. Based on our experience so far, we have been able to formulate some general tips and guidelines for AI assisted coding (more precisely, using ChatGPT):

ChatGPT (or any similar LLM) can be used for the following purposes:

  • collecting general information on any topic
  • answering specific questions and requests
  • clarifying a context

We do not, however, share with it:

  • personal data
  • trade secrets in any form (code, documentation, correspondence) – since anything we submit may be used by the service

It is also not advisable to use ChatGPT to search for specific information, data or references – a simple Google search is more suitable for that, and can easily lead you, for example, to a manufacturer’s website with the exact data and specifications you are looking for.

As for specific tasks related to the work of developers, the following types of tasks can benefit from the use of AI:

  • Clarifying general questions and basics about a new or less familiar business area (domain), so that the expected functionality and the development task become clear – or at least so that the questions can be formulated more precisely
  • Generating an SQL script for a specific target task (see the sketch after this list)
  • Pre-generating a skeleton of classes and interfaces at source-code level based on design patterns (e.g. in C# or Java); generating other scripts and commands (e.g. PowerShell and batch commands for DevOps processes)
  • Investigating specific technical problems and looking for alternative solutions; context-dependent analysis of open source code
  • Developing custom solutions with LLM AI API integration (e.g. LangChain)
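
Here is a minimal sketch of the SQL-generation and API-integration items above. It assumes the openai Python package (v1-style client) and an API key in the OPENAI_API_KEY environment variable; the table schema, prompt and model name are our own illustrative choices, not part of any official example:

    # Ask an LLM to generate an SQL script via the OpenAI chat completions API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Given a table orders(id, customer_id, order_date, total_amount), "
        "write an SQL query that returns the top 10 customers by total "
        "spending in 2023."
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model choice is an assumption
        messages=[
            {"role": "system", "content": "You are a helpful SQL assistant."},
            {"role": "user", "content": prompt},
        ],
        temperature=0,  # keep the generated script as deterministic as possible
    )

    print(response.choices[0].message.content)  # review before running it!

In line with the tips above, the generated script should always be reviewed and tested before it is run against a real database.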

Although it is obvious, it is often forgotten that ChatGPT’s training data ends in September 2021 – so it is not worth asking it for information or suggestions that are likely to have a more recent answer or solution.

And once we have received an answer to our question or request, it still has to be interpreted:

  • The AI doesn’t know when it’s wrong: it can easily mix up information (it may even get elementary school-level math problems wrong) – and it can easily be swayed by a challenging follow-up question
  • It also matters how and when you ask a question: because of token-based processing, wording, word order and even inflection make a difference
  • Basic information from the AI is typically fine, but answers to specific questions should be treated with caution and clarified with follow-up questions – the more specific the question, the more it is worth verifying the answer
  • It does not necessarily give the exact solution, only something that leads towards it – AI answers and solutions should not be adopted verbatim, only after interpretation

In summary, as AI is evolving very rapidly (and so are ChatGPT and similar language models), the following is worth keeping in mind:

  • What AI doesn’t know today, it may well know tomorrow
  • Continuous hands-on use and practical experience are necessary
  • Experimenting with and incorporating new developments is important

AN OUTLOOK

Trying to predict the future of AI is tantamount to staring into a crystal ball, but we can still outline a few trends:

Further rapid progress

Due to strong competition, LLMs are evolving rapidly: the goal is to be cheaper, faster and “smarter” (e.g. stateful API, GPT-4). Moreover, let’s not forget that AI is not only about LLMs: there are speech and voice recognition applications, AI-assisted translation, image and video recognition and editing, speech synthesis, music editing… the sky’s the limit.

Further integration of applications and tools

Further integration of applications/services and devices/hardware has been announced and is expected to continue in the near future – for example, in the following areas:

Connecting value-added, customized services

  • Microsoft Azure OpenAI Service: Microsoft offers OpenAI’s models, including ChatGPT, as a managed Azure service
  • OpenAI ChatGPT web browsing: this feature feeds Bing search results into ChatGPT, so that it can work with data more recent than September 2021
  • The big players are building their own systems, such as Meta AI, xAI or Google Bard – we don’t know exactly how they use their own data in-house: Google might focus on optimizing search results and map suggestions, while Meta might work on chats and posts, e.g. ads and suggesting topics of interest to users

AI-enabled smart devices (watches, TVs, cars… anything smart)

 Domain-specific solutions have also been considered, for example:

  • In education: interactive electronic learning materials, classroom AI assistant
  • In healthcare: combination of visual and textual analysis (MRI, X-ray, skin and other photos), support of clinical diagnostics
  • In the legal field: AI support for questions of interpretation and for finding precedents

High-level machine intelligence

After today’s generative AI, the next step is expected to be so-called High-Level Machine Intelligence (HLMI), which the majority of respondents in the 2019 GitHub survey predicted would be available within 10 years (remember, that’s only six years from now!). The end goal, however, is “real” or “true” AI, which thinks in a fully human-like way and works without human intervention or maintenance.

The evolution of hardware

The rapid evolution of AI is also bringing with it the evolution of processors and other hardware: more knowledge means more data, which requires more storage and more computing power.

Legal regulations

Last but not least, legal regulation of AI applications is also to be expected, as AI-based applications offer many opportunities for both the misuse of personal data and copyright infringement – the first lawsuits are already underway.