Andrzej Chybicki: The groundbreaking change brought by MCP and LLM-based technologies is the reduction of the communication barrier between humans and IT systems

The new open-source Model Context Protocol (MCP) introduces a standard that enables the integration of AI assistants with key systems such as content repositories, business tools, and development environments. It is the first widely available solution to provide secure, bidirectional communication between data sources and AI-driven tools. We invite you to read our interview with Andrzej Chybicki, CEO of Inero Software, in which we delve deeper into the practical applications of MCP.

Thanks to MCP, applications can dynamically adapt their functionality to the changing needs of users.

Marta Kuprasz: How does the Model Context Protocol (MCP) enhance real-time context management? Does it allow applications to adapt their functionality to changing user needs?

Andrzej Chybicki: The Model Context Protocol introduces a new approach to context management, focusing on standardizing the integration of organizational knowledge with large language models (LLMs). Traditionally, organizational knowledge has been fragmented and stored in various formats, making effective collaboration with AI systems challenging. MCP is the first solution from LLM providers that systematizes how this data is connected to the models, eliminating previous limitations caused by the lack of tools facilitating information flow.

Thanks to MCP, applications can dynamically adapt their functionality to changing user needs by leveraging up-to-date, multi-layered contextual data. The protocol enables smoother and more responsive AI system operations by moving beyond the traditional approach focused solely on model optimization. MCP simplifies data and application integration, allowing organizations to better utilize their knowledge resources and deliver more personalized user experiences.

The MCP standard ensures secure communication between modules.

MK: What key features of MCP’s modularity and scalability allow for seamless integration with various models and applications?

AC: One of the key aspects of MCP’s modularity and scalability is the use of ready-made components, such as the MCP Server and MCP Client, which significantly simplify the integration process. The MCP Server can be installed on a local machine or on a central server, providing access to local resources such as files or databases. This allows organizations to easily incorporate existing data into the ecosystem managed by the MCP protocol without having to build complex infrastructure from scratch.
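For readers who want to see what such a ready-made component looks like in practice, below is a minimal sketch of a local MCP Server built with the official MCP Python SDK; the server name, directory path, and tool names are illustrative assumptions, not part of the standard itself.

```python
# Minimal MCP Server sketch exposing a local directory to an AI assistant,
# assuming the official MCP Python SDK (pip install "mcp").
from pathlib import Path

from mcp.server.fastmcp import FastMCP

# Name shown to connecting clients; "local-files" is an arbitrary example.
mcp = FastMCP("local-files")

DATA_DIR = Path("./data")  # hypothetical local resource shared with the model


@mcp.tool()
def list_documents() -> list[str]:
    """Return the names of files in the shared local directory."""
    return [p.name for p in DATA_DIR.iterdir() if p.is_file()]


@mcp.tool()
def read_document(name: str) -> str:
    """Return the text content of a single file from the shared directory."""
    return (DATA_DIR / name).read_text(encoding="utf-8")


if __name__ == "__main__":
    # By default the server communicates with the MCP Client over stdio.
    mcp.run()
```

An MCP Client (for example, Claude Desktop) can launch this script as a local server and call its tools whenever the model needs the underlying files.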

The MCP standard itself ensures secure communication between modules, leveraging mechanisms based on the SSH protocol. This means all transmitted data is protected, and the communication process between the client and server is fully automated and compliant with security standards. MCP eliminates the need for manual management of tasks like encryption or authentication, enabling users to focus on utilizing the data while the standard ensures reliable and secure information exchange.

The open-source community is already beginning to develop tools dedicated to monitoring MCP.

MK: What real-time monitoring mechanisms does MCP offer, and how do they contribute to better data management and increased transparency of shared resources?

AC: The MCP platform itself does not provide native tools for monitoring the MCP Server. However, thanks to its open architecture, it is possible to utilize existing open-source tools and libraries to configure logging, error tracking, and performance monitoring. For example, in Python, there are numerous libraries that can be easily adapted to monitor the operation of the MCP Server. Additionally, platforms like Grafana (which we’ve written about previously) allow for the visualization and analysis of performance data, providing practical tools for management.

Moreover, the open-source community is already beginning to create tools specifically dedicated to monitoring MCP. One example is the GitHub repository https://github.com/tinybirdco/mcp-tinybird/tree/main/mcp-server-analytics, which offers a solution based on the TinyBird library for analyzing data from the MCP Server. While we have not yet had the chance to test this tool ourselves, its existence demonstrates how flexible and adaptable MCP can be in the area of monitoring. Such projects highlight the wide range of configuration possibilities that MCP offers, enabling organizations to fully tailor the system to their specific needs.
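As a concrete illustration of the Python-based approach mentioned above, the sketch below adds basic logging and error tracking around a tool exposed by an MCP Server using only the standard library. This is an editorial example, assuming the official MCP Python SDK; it is not a monitoring feature built into MCP itself.

```python
# Illustrative sketch: wrapping an MCP Server tool with logging and
# error tracking via Python's standard library (not an MCP-native feature).
import logging
import time
from pathlib import Path

from mcp.server.fastmcp import FastMCP

logging.basicConfig(
    filename="mcp_server.log",  # hypothetical log file for later analysis
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("mcp-monitoring")

mcp = FastMCP("monitored-server")


@mcp.tool()
def word_count(path: str) -> int:
    """Count words in a local text file; calls, latency and errors are logged."""
    started = time.perf_counter()
    try:
        return len(Path(path).read_text(encoding="utf-8").split())
    except Exception:
        log.exception("word_count(%r) failed", path)
        raise
    finally:
        elapsed_ms = (time.perf_counter() - started) * 1000
        log.info("word_count(%r) finished in %.1f ms", path, elapsed_ms)


if __name__ == "__main__":
    mcp.run()
```

The resulting log file can then be shipped to an external stack and visualized in tools such as Grafana.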

One of the greatest advantages of MCP is its openness to integration with popular tools.

MK: How does MCP support integration with popular tools like Google Drive, Slack, or GitHub without compromising data security?

AC: One of the greatest advantages of MCP is its openness to integration with popular tools such as Google Drive, Slack, or GitHub, all while maintaining uncompromised data security. Alongside the launch of MCP, Anthropic released a comprehensive repository of example implementations under the MIT license, available at modelcontextprotocol.io/examples.

This open repository allows users to quickly download ready-to-use integrations, significantly speeding up the process of implementing MCP in real-world applications. Examples include integrations with Google Drive and Slack, as well as more specialized solutions, such as connections with Google Maps that unlock new possibilities for LLM-based systems in logistics, or Docker integrations that enable monitoring of container status and logs using Claude.
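To show how one of these ready-to-use integrations is typically consumed, here is a sketch of an MCP Client that launches the reference filesystem server from that repository via npx and lists the tools it exposes; the package name and the shared directory path are assumptions made for the example.

```python
# Sketch of an MCP Client connecting to the reference filesystem server,
# assuming the official MCP Python SDK and Node.js/npx being available.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp/shared-docs"],
)


async def main() -> None:
    # Launch the example server as a subprocess and talk to it over stdio.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("Tools exposed by the filesystem server:")
            for tool in tools.tools:
                print(f"- {tool.name}: {tool.description}")


if __name__ == "__main__":
    asyncio.run(main())
```

The same pattern applies to the other example servers in the repository; only the launch command and arguments change.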

Thanks to models like Claude and protocols like MCP, computers are beginning to “understand” information conveyed in human-like communication.

MK: What are the practical benefits for end users resulting from the adoption of MCP? Is it an improvement in the relevance of model responses or an increase in the efficiency of AI systems?

AC: We are currently in the early stages of adopting solutions based on large language models (LLMs), and MCP is one such example. Many organizations and end users are still trying to understand how to optimally harness the potential offered by AI technologies, both in the realm of LLMs and the broader pursuit of artificial general intelligence (AGI). Companies are experimenting with AI integration into their processes, attempting to define measurable benefits of deployment, and learning how to effectively mitigate potential risks associated with the widespread use of these technologies. This learning process encompasses the entire ecosystem, from providers and integrators to end users.

The most groundbreaking change brought by MCP and LLM-based technologies is the reduction of the communication barrier between humans and IT systems. Thanks to models like Claude and protocols like MCP, computers are beginning to “understand” information conveyed through typical human communication—whether in conversations, texts, images, or even gestures. This means that IT technologies can now be deployed in areas where it was previously unfeasible or unprofitable, such as niche industries, local organizations, or even households.

From an end-user perspective, this translates into more relevant and useful responses from AI systems and increased efficiency through more intuitive and human-centered interaction with IT systems. Who knows—perhaps in a few years, we will no longer rely on typical graphical user interfaces (GUIs) like web pages, forms, or tables. Instead, we may simply interact with computers conversationally, not just through AI assistants but across most IT solutions.