A year under the sign of artificial intelligence development

Andrzej Chybicki: projects involving the use of artificial intelligence are a significant part of our work

The end of the year is a time for summaries. A lot of interesting things have happened in the world of IT, so in this article we decided to focus on AI. The development of artificial intelligence, and its presence in the media, accelerated on an unprecedented scale. Tools based on Large Language Models (LLMs) were popularized and made widely available to users from various industries, not just technological ones. We summarized the year with Andrzej Chybicki, the CEO of Inero Software. Here are the five events he identified as the key moments of the past year. 

Fact 1: OpenAI – artificial intelligence becomes widely accessible 

OpenAI played a tremendous role in popularizing the field of artificial intelligence in the context of human language understanding. In 2022, they released ChatGPT, and in the following months, they presented new, improved models. These advancements not only improved the performance of existing applications but also opened new avenues for AI in healthcare, environmental science, administration, marketing, and more.  

In 2023, ChatGPT saw remarkable advancements: the move to the GPT-4 model improved accuracy and allowed for more nuanced conversations, user interactions became more personalized through custom instructions, language support was expanded for global accessibility, and integration with external applications broadened. OpenAI emphasized ethical considerations and bias reduction, added web browsing for up-to-date content, introduced voice and image capabilities for richer multimedia interaction, and improved the tool's robustness and reliability. Additionally, custom GPTs allowed ChatGPT to be tailored for specific industries, with specialized functionalities and knowledge, marking a significant leap in AI technology and user-centric applications. 

Expert Insight 

OpenAI's ChatGPT was the first widely recognized large language model application. In the coming years, we are likely to see various versions of LLMs designed for specific applications – in fact, this has already been happening for a few months. OpenAI, despite being a pioneer, at least in terms of recognizability, does not always offer the best model for every task. The direction of development is certainly popularization, much as it was with computers (LLMs becoming as commonplace as PCs), and specialization, meaning language models designed for specific applications or even for particular organizations or people.  
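To illustrate the specialization trend described above, the sketch below shows how a developer can narrow a general-purpose model to a single domain simply through a system prompt sent over the OpenAI API. It is a minimal sketch assuming the openai Python package (1.x interface) and an API key in the environment; the model name and prompt texts are illustrative assumptions, not a recommendation.

  # Minimal sketch: specializing a general-purpose LLM for one narrow task via a system prompt.
  # Assumes the openai Python package (1.x interface) and OPENAI_API_KEY set in the environment;
  # the model name and prompt texts below are illustrative assumptions.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-4",  # any chat-capable model can be substituted here
      messages=[
          # The system message narrows the general model to one domain,
          # in the spirit of the specialized LLMs discussed above.
          {"role": "system", "content": "You are an assistant for a logistics company. "
                                        "Answer only questions about shipment tracking."},
          {"role": "user", "content": "Where is shipment 1042 right now?"},
      ],
  )

  print(response.choices[0].message.content)

A fuller specialization would also bring in an organization's own data (for example through fine-tuning or retrieval), but this prompt-level approach is usually the first step.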

 

Fact 2: GitHub Copilot – a leader in AI/LLM implementation 

One of the key roles in the development of artificial intelligence is played by Microsoft, which collaborates with OpenAI. Over the past year, Microsoft has continued to refine its vision of Microsoft Copilot. Let’s focus on the solution for developers: GitHub Copilot. In 2023, it underwent significant changes and enhancements. Here are the key updates: 

In 2023, GitHub Copilot introduced several significant enhancements to bolster its role in AI-driven software development: 

  • GitHub Copilot Chat, now generally available and powered by OpenAI’s GPT-4, provides more accurate code suggestions and explanations, using natural language to aid developers across many programming languages. It is integrated with both the GitHub platform and its mobile app, supporting coding, pull requests, and documentation. 
  • GitHub Copilot Enterprise was introduced to tailor the tool to specific organizational needs, helping developers quickly adapt to their organization’s codebase and streamlining tasks like documentation and pull request reviews, with the aim of boosting enterprise-level productivity and security. 
  • The GitHub Copilot Partner Program was launched, integrating Copilot with various third-party developer tools and services and thereby creating a broad ecosystem that enhances the capabilities of developers using AI. 
  • GitHub unveiled new AI-powered security features in its Advanced Security suite, including a real-time vulnerability prevention system and application security testing features to detect and remediate code vulnerabilities and leaked secrets, further securing the software development process. 
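As a simple illustration of the natural-language-driven workflow these updates support, the sketch below shows the kind of prompt-style comment a developer writes and the sort of completion Copilot typically proposes. The function and its body are illustrative and written by hand here, not actual Copilot output.

  # Illustration only: the comment acts as a natural-language prompt, and the function body
  # is the sort of completion GitHub Copilot typically proposes (hand-written, not real output).

  # Return the gross total of an order: the sum of item prices plus a flat 23% VAT.
  def order_gross_total(item_prices: list[float], vat_rate: float = 0.23) -> float:
      net_total = sum(item_prices)
      return round(net_total * (1 + vat_rate), 2)

  print(order_gross_total([19.99, 5.50]))  # 31.35

Copilot Chat extends the same idea to a conversational form, letting the developer ask follow-up questions about a suggestion or request an explanation of existing code.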

  

Expert Insight 

Thanks to its collaboration with OpenAI, Microsoft became a leader in AI/LLM implementation worldwide in 2023. Microsoft’s strategy in this area is based on using LLMs to support (but not replace) as many activities and processes carried out with Microsoft products as possible. Particularly important was ensuring an appropriate level of SLA (aligned with other Azure services) and data security. Among the most significant changes, apart from the aforementioned GitHub Copilot (which aims to support developers in coding), are the Copilot plugins available in practically all of the company’s flagship products (Word, Excel, PowerPoint, Outlook). 

In December 2023, Microsoft also presented Copilot Studio, a solution that enables the creation of low-code/no-code IT systems with significant support from the OpenAI model. In practice, this allows easy extension of existing Azure low-code solutions such as Azure Agents with conversational bots or AI-supported database adapters. Although Copilot Studio is not yet available in its final form, Microsoft clearly communicates the development direction and the advantages that developers, engineers, and users can gain from using it. From presentations by Microsoft representatives, it can be inferred that Microsoft’s goal is to lower the entry threshold for creating and deploying new, advanced AI solutions, since using low-code platforms does not require as deep technical knowledge as traditional coding. In the coming years, we can expect widespread interest in these solutions, and not only from the largest companies using MS Azure. Among experts, the question today is no longer “whether to use AI” but how to implement it so as not to fall behind the competition. Those organizations that create a coherent strategy for incorporating AI-based products into their processes in the coming years will be able to benefit significantly from the revolution that is already taking place. 

  

Fact 3: The European AI Act: A Regulatory Milestone 

On 14 June 2023, the European Parliament adopted its negotiating position on the AI Act. Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems. The AI Act sets different rules for different AI risk levels. 

The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed. 

Unacceptable risk 

Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include: 

  • Cognitive behavioral manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behavior in children 
  • Social scoring: classifying people based on behavior, socioeconomic status or personal characteristics 
  • Real-time and remote biometric identification systems, such as facial recognition 

Some exceptions may be allowed: For instance, “post” remote biometric identification systems where identification occurs after a significant delay will be allowed to prosecute serious crimes but only after court approval. 

High risk 

AI systems that negatively affect safety or fundamental rights will be considered high-risk and will be divided into two categories: 

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts. 

2) AI systems falling into eight specific areas that will have to be registered in an EU database: 

  • Biometric identification and categorisation of natural persons 
  • Management and operation of critical infrastructure 
  • Education and vocational training 
  • Employment, worker management and access to self-employment 
  • Access to and enjoyment of essential private services and public services and benefits 
  • Law enforcement 
  • Migration, asylum and border control management 
  • Assistance in legal interpretation and application of the law. 

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. For more information, visit the European Parliament website. 

*source: https://www.europarl.europa.eu 

Expert Insight 

Ensuring the security and confidentiality of data is certainly one of the most important issues concerning the implementation of AI solutions. Many experts point out that, despite the good intentions of the European Commission, the proposed rules may reduce the competitiveness of the European AI market and, in effect, widen the gap between Europe and the leaders in this field (i.e., the USA and China). I personally share these concerns. A good example is the similar situation that occurred over a decade ago, when cloud computing was being adopted. At that time, the EU also created a regulation governing the rules of data access and confidentiality (GDPR), which to this day is the regulatory basis in this area. At the same time, the cloud solutions most widely used in the EU are those developed in the USA, where the priority was the free development of technology, and only secondarily the legal framework. Unfortunately, many indications suggest that a similar situation might occur with AI. 

 

Fact 4: Gemini – a new model from Google 

Without a doubt, the launch of Gemini was the most prominent premiere in the latter part of 2023, generating significant buzz. It is a result of large-scale collaborative efforts by teams across Google. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information including text, code, audio, image, and video. 

Gemini 1.0 was trained to recognize and understand text, images, audio, and more at the same time, so it better understands nuanced information and can answer questions relating to complicated topics. This makes it especially good at explaining reasoning in complex subjects like math and physics. 

During the presentation accompanying the release of the Gemini API for developers, a lot of time was dedicated to Google AI Studio, a free, browser-based tool for prototyping prompts and generating starter code. The second focus was Vertex AI, a more advanced platform that allows for “both training and deploying ML (machine learning) models and AI applications.” Google offers the option of transferring a preliminary project developed in AI Studio to Vertex AI in order to add features available within the larger Google Cloud platform. 
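As a rough illustration of what working with the new API looks like, the sketch below calls a Gemini model from Python. It is a minimal sketch assuming the google-generativeai package released alongside the API and an API key generated in AI Studio; the model name and configuration details are assumptions and may differ in the current SDK.

  # Minimal sketch of calling the Gemini API from Python, assuming the google-generativeai
  # package; the model name and setup details are assumptions.
  import google.generativeai as genai

  genai.configure(api_key="YOUR_API_KEY")  # key generated in AI Studio

  model = genai.GenerativeModel("gemini-pro")  # text model; a vision-capable variant was also announced
  response = model.generate_content("Explain in two sentences what a multimodal model is.")

  print(response.text)

A project prototyped this way can later be moved to Vertex AI when the broader capabilities of Google Cloud are needed, as described above.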

Expert Insight 

Google has officially joined the large language model (LLM) race. The most intriguing aspect of what they propose is that their model will operate in three versions: Ultra (the most feature-rich), Pro, and Nano, with the latter being designed for mobile phones. It’s still unclear whether Nano will run entirely on client devices (smartphones) or if it will simply be a thin client and a kind of extension of Google Assistant. It’s also worth emphasizing that Google, like Microsoft, will offer Gemini services as elements of its flagship products, such as Google Sheets, Google Docs, and others. 

  

Fact 5: Advancements in Natural Language Processing (NLP) 

2023 witnessed remarkable progress in the field of Natural Language Processing. Researchers and companies globally made significant strides in improving the accuracy and versatility of NLP models. These advancements have led to more sophisticated understanding and generation of human language by machines, paving the way for more intuitive and natural human-computer interactions. This year saw the deployment of advanced NLP in various applications, from customer-service chatbots to complex data analysis tools, revolutionizing how we interact with technology daily. This progress in NLP technology not only enhanced existing applications but also opened new possibilities for AI in fields such as education, content creation, and multilingual communication. 

Expert Insight 

AI technologies are increasingly breaking the barrier of understanding natural language, gradually blurring the line between the structured data traditionally used in IT systems and human knowledge. It seems that the creation of AGI (Artificial General Intelligence), a machine matching or even surpassing the average human in many respects, is now just a matter of time. The challenge for the worlds of science, business, and politics will be to direct the development of AI so that it serves humanity in the broadest sense and does not create the threats that many (probably rightly) fear. 

The last 12 months have been rich in interesting AI releases. The presentation of new large language models has opened up a range of possibilities for applying them to everyday tasks, both in software development and in the work of creative teams. European authorities are trying to keep up with these changes and adapt legal regulations to the current state of technology. In the coming months, we will certainly see more releases, as leading players such as Google and Microsoft compete to create solutions that utilize artificial intelligence. 

 

 
