AI Assistants: Opportunities and Challenges for Business Growth

INCONE60 Green - Digital and green transition of small ports
Andrzej Chybicki: projekty związane z wykorzystaniem sztucznej inteligencji to znacząca część naszych projektów
AI Assistants: Opportunities and Challenges for Business Growth

Gartner predicts that by 2028, at least 15% of daily professional decisions will be made autonomously by agent-based artificial intelligence. In theory, this technology promises the development of more adaptive software systems capable of performing a wide range of tasks. However, in practice, many uncertainties remain, particularly regarding data security, compliance with internal procedures, and legal regulations. In this article, we discuss the limitations (especially in the context of the EU) and focus on the challenges of implementing AI Agents.

The rapid development of AI-based tools suggests that the practical application of “agentic” artificial intelligence for performing time-consuming tasks will revolutionize many industries, particularly those where automation and data-driven decision-making can significantly enhance efficiency.

While exploring these possibilities, it is equally important to consider the limitations.

Limitations of Using AI Agents

Dependence on Data Quality

AI requires large datasets of high quality to operate effectively. Incorrect, incomplete, or manipulated data can result in flawed decisions, which, in critical areas like medicine or finance, may have severe consequences.

The study “The Effects of Data Quality on Machine Learning Performance” examines the impact of six traditional data quality dimensions on the performance of fifteen popular machine learning algorithms. The results indicate that incomplete, erroneous, or inadequate training data can lead to unreliable models that make incorrect decisions.

For AI applications to be dependable, the training and testing datasets must meet high standards across several quality dimensions, such as accuracy, completeness, and consistency.

While this may be evident to AI/ML professionals, evaluating data quality within an organization is not always straightforward. It is essential to ensure that the information intended for use by an AI agent, such as OpenAI, is comprehensive and free from offensive content. Assessing these aspects early in the process can help prevent potential issues during implementation.

Data Security

The program, which in this case is designed to serve as a helper or assistant, is tasked with delivering reliable and verified information. Its analysis relies on the data we discussed earlier. Naturally, the best learning materials are real documents and data processed within the organization.

However, many doubts and concerns arise when it comes to providing the Assistant with sensitive company data.

The current state of legal regulations, regardless of our opinions, sets the framework for the operation of AI systems, particularly in the context of GDPR compliance. For example, agents based on Azure OpenAI infrastructure ensure full compliance with regulations when properly configured by the user, thanks to their integration with cloud services. Solutions like Azure AI Studio, which enable the creation of agents within a cloud environment, are also worth mentioning.

 

On the other hand, local solutions—based on models such as Llama, Bielik, or other open language models (e.g., the somewhat controversial DeppSeek)—offer full control over data and its processing but require greater technical investment, such as dedicated servers. It’s important to emphasize, however, that compliance with regulations, including GDPR, depends on the proper configuration and implementation of systems by the user, whether cloud-based or local.

The dynamic development of this field suggests that in a few months, these capabilities will be even more advanced, offering new options for both local and cloud-based solutions, says Andrzej Chybicki, CEO of Inero Software.

Manage user access and permissions with Keycloak

We provide full implementation, maintenance, and training

The rapid development of AI-based tools suggests that the practical application of “agentic” artificial intelligence for performing time-consuming tasks will revolutionize many industries, particularly those where automation and decision-making based on large datasets can significantly enhance efficiency.

While considering the possibilities, it is also important to remain mindful of the limitations.

Cybersecurity and Authorization

User authentication poses a challenge for AI systems, requiring special attention in terms of security. While authorization is typically handled in earlier stages, there are methods to bypass this process, which can leave systems vulnerable to manipulation.

Example:

An AI agent that interacts with users can be susceptible to phishing attacks if not adequately secured. For example, an attacker might craft a query that imitates communication from an authorized user. If the agent fails to verify the context or input data, it could respond to such a query by disclosing sensitive information or performing unauthorized actions.

This highlights that prior user authorization, while critical, is not always sufficient to fully secure the system. It is essential to implement additional measures, such as contextual analysis, advanced anomaly detection systems, and robust safeguards for data processed by the AI agent. Such an approach is the only way to minimize the risks of phishing attacks and manipulation of AI agents.

In practice, user authorization should always occur before granting access to the AI assistant. In a cloud environment, such as Azure, additional security measures can be applied, such as using Active Directory or Keycloak, to reduce the risk of attacks. If data cannot be processed outside the organization, it is advisable to use local authentication and data storage solutions.

From a security perspective, however, the development of phishing-resistant mechanisms is crucial, as insufficient safeguards in AI agents can lead to serious threats, adds Andrzej Chybicki.

Z perspektywy bezpieczeństwa kluczowe jest jednak rozwijanie mechanizmów odpornych na phishing, ponieważ brak odpowiednich zabezpieczeń w agentach AI może prowadzić do poważnych zagrożeń- dodaje Andrzej Chybicki  

In summary, agentic artificial intelligence has the potential to revolutionize the way professional decisions are made, particularly in tasks based on analyzing large datasets. However, these possibilities come with significant challenges, such as data quality, information security, and compliance with legal regulations.

We will help you implement AI tools in your organization

Ready to unlock the potential of AI?