Introduction

AI agents are becoming increasingly integral to AI’s growth and to new technological advancements. They are applications that mirror human-like attributes to interact, reason, and make suitable decisions in order to achieve specific goals with sophisticated autonomy, performing multiple tasks in real time, something that was not possible with LLMs alone.

In this article, we will look at AI agents in detail and learn how to build them using LlamaIndex and MonsterAPI. LlamaIndex provides a suite of tools and abstractions for easily developing AI agents, while MonsterAPI supplies the LLM APIs we will use to build agentic applications, with real-world examples and demos.

Learning Objectives

  • Learn the concept and architecture of agentic AI applications in order to apply them to real-world problem scenarios.
  • Appreciate the difference between large language models and AI agents in terms of their core capabilities, features, and advantages.
  • Understand the core components of AI agents and how they interact with each other during agent development.
  • Explore the wide range of AI agent use cases across industries and learn how to apply these concepts.

This article was published as a part of the Data Science Blogathon.

What are AI Agents?

AI agents are autonomous systems designed to mimic human behaviors, allowing them to perform tasks that resemble human thinking and observation. Agents act in an environment, in conjunction with LLMs, tools, and memory, to perform various tasks. AI agents differ from large language models in how they work and how they generate outputs. Let’s explore the key attributes of AI agents and compare them with LLMs to understand their unique roles and functionalities.

  • AI agents think like humans: AI agents decide which tools to use to perform specific functions and produce a given output, for example a search engine, a database lookup, or a calculator.
  • AI agents act like humans: like humans, AI agents plan actions and use tools to achieve specific outputs.
  • AI agents observe like humans: using planning frameworks, AI agents react, reflect, and take actions suitable for a given input. Memory components allow AI agents to retain previous steps and actions so they can efficiently produce the desired outputs.

Let’s look at the core difference between LLMs and AI agents to clearly distinguish between both.

Feature | LLMs | AI Agents
Core capability | Text processing and generation | Perception, action, and decision-making
Interaction | Text-based | Real-world or simulated environments
Applications | Chatbots, content generation, language translation | Virtual assistants, automation, robotics
Limitations | Lack real-time access to information; can generate incorrect information | Require significant compute resources; complex to develop and build

Working with AI Agents

Agents are built from a set of components, mainly a memory layer, tools, models, and a reasoning loop, that work in orchestration to accomplish a given task or set of tasks that the user wants to solve. For example, a weather agent can extract real-time weather data from a voice or text command given by the user. Let’s learn more about each component used to build AI agents:

  • Reasoning Loop: The reasoning loop sits at the core of an AI agent. It plans actions, makes decisions while processing the inputs, and refines outputs to produce the desired result at the end of the loop.
  • Memory Layer: Memory is a crucial part of AI agents for remembering plans, thoughts, and actions throughout the processing of user inputs. Memory can be short-term or long-term depending on the problem (see the short memory sketch after this list).
  • Models: Large language models help synthesize and generate results in ways humans can interpret and understand.
  • Tools: These are external, built-in functions that agents utilize to perform specific tasks, such as retrieving data from databases and APIs, fetching real-time weather data, or performing calculations with a calculator.
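
As a concrete example of the memory layer, LlamaIndex, which we use later in this article, ships a ready-made short-term memory component. A minimal sketch, assuming the llama-index package installed in the setup step below:

from llama_index.core.memory import ChatMemoryBuffer

# a short-term buffer that keeps roughly the last 3,000 tokens of
# conversation; older turns are evicted once the limit is exceeded
memory = ChatMemoryBuffer.from_defaults(token_limit=3000)

Such a buffer can then be passed to a chat engine or agent via its memory argument, so the agent remembers earlier turns of the conversation.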

Interaction Between Components

The Reasoning Loop continuously interacts with both the Model and the Tools. The loop uses the model’s outputs to inform decisions, while the tools are employed to act on those decisions.

This interaction forms a closed loop where data flows between the components, allowing the agent to process information, make informed decisions, and take appropriate actions seamlessly.
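
To make this loop concrete, here is a minimal, framework-free sketch. The decide function stands in for the model and the tools dictionary stands in for real tool integrations; both are illustrative placeholders, not a real library API:

# a minimal, illustrative reasoning loop: the "model" is a plain function
# that either returns a final answer or names a tool to call next
def reasoning_loop(task, decide, tools, memory, max_steps=5):
    memory.append(f"task: {task}")
    for _ in range(max_steps):
        action, payload = decide(task, memory)
        if action == "answer":
            return payload
        # act: call the chosen tool and remember the observation
        tool_name, tool_input = payload
        observation = tools[tool_name](tool_input)
        memory.append(f"{tool_name}({tool_input}) -> {observation}")
    return "stopped: step limit reached"

# toy "model": asks the calculator once, then answers from memory
def toy_decide(task, memory):
    for note in memory:
        if note.startswith("calculator"):
            return "answer", note.split("-> ")[-1]
    return "tool", ("calculator", "2 + 2")

tools = {"calculator": lambda expr: str(eval(expr))}  # eval is for demo only
print(reasoning_loop("What is 2 + 2?", toy_decide, tools, memory=[]))  # -> 4

Frameworks like LlamaIndex implement production-grade versions of this loop (for example, ReAct-style agents) so we don’t have to write it by hand.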

Let’s look at the use cases of AI agents, and then we will walk through live code examples of AI agents using MonsterAPI.

Usage Patterns in AI Agents

LlamaIndex provides high-level tools and classes to develop AI agents without worrying about execution and implementation.

For the reasoning loop, LlamaIndex provides function-calling agents that integrate well with LLMs, ReAct agents, vector stores, and advanced agents, making it possible to take working agentic applications from prototype to production.

In LlamaIndex, agents are developed in the following pattern (we will look at full AI agent development in a later section of the blog):

from llama_index.agent.openai import OpenAIAgent
from llama_index.llms.openai import OpenAI
from llama_index.core.tools import FunctionTool

# define a function and wrap it as a tool the agent can call
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

tools = [FunctionTool.from_defaults(fn=multiply)]

# initialize the LLM
llm = OpenAI(model="gpt-3.5-turbo-0613")

# initialize an OpenAI agent with the tools
agent = OpenAIAgent.from_tools(tools, llm=llm, verbose=True)
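
Once assembled, the agent can be queried conversationally, and it decides on its own when to call the tool; for example (a hypothetical query, output will vary):

# the agent chooses to call the multiply tool to answer
response = agent.chat("What is 121 multiplied by 3?")
print(response)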

Use Cases of AI Agents

AI agents have a wide range of real-world use cases: they accomplish common tasks, improve time efficiency, and enhance revenue for businesses. Some of the common use cases are as follows:

  • Agentic RAG: Building a context-augmented system that leverages business-specific datasets to improve the relevance and accuracy of answers to user queries.
  • SQL Agent: Text-to-SQL is another use case where agents use LLMs and databases to generate SQL queries automatically and return user-friendly results, without the user writing any SQL (see the sketch after this list).
  • Workflow assistant: Building an agent that can work alongside common workflow tools such as weather APIs, calculators, and calendars.
  • Code assistant: An assistant that helps developers review, write, and improve code.
  • Content curation: AI agents can suggest personalized content such as articles and blog posts, and can also summarize the information for users.
  • Automated trading: AI agents can extract real-time market data, including sentiment analysis, and trade automatically to maximize profit for businesses.
  • Threat detection: AI agents can monitor network traffic, identify potential security threats, and respond to cyber-attacks in real time, enhancing an organization’s cybersecurity posture.
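
To make the text-to-SQL use case concrete, here is a minimal sketch using LlamaIndex’s NLSQLTableQueryEngine. The SQLite file city_stats.db and its city_stats table are hypothetical placeholders:

from sqlalchemy import create_engine
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

# connect to a (hypothetical) SQLite database containing a city_stats table
engine = create_engine("sqlite:///city_stats.db")
sql_database = SQLDatabase(engine, include_tables=["city_stats"])

# the engine translates a natural-language question into SQL, runs it,
# and phrases the result back in plain language (uses the default configured LLM)
query_engine = NLSQLTableQueryEngine(sql_database=sql_database, tables=["city_stats"])
response = query_engine.query("Which city has the highest population?")
print(response)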

Building Agentic RAG using LlamaIndex and MonsterAPI

In this section, we will build an agentic RAG application with LlamaIndex tools and MonsterAPI for accessing large language models. Before diving into the code, let’s take a quick look at the MonsterAPI platform.

Overview of MonsterAPI

MonsterAPI is an easy-to-use no-code/low-code tool that simplifies deployment, fine-tuning, testing, evaluation, and error management for large language model-based applications, including AI agents. It costs less than comparable cloud platforms and can be used for FREE for personal projects or research work. It supports a wide range of models, including text generation, image generation, and code generation models. In our example, the MonsterAPI model API works with a custom dataset stored in a LlamaIndex vector store to produce answers augmented with the newly added dataset.

Step 1: Install Libraries and Set up the Environment

First, we will install the necessary libraries and modules, including MonsterAPI LLMs, LlamaIndex agents, embeddings, and vector stores, for further development of the agent. Also, sign up on the MonsterAPI platform for FREE to get an API key for accessing the large language models.

# install necessary libraries
%pip install llama-index-llms-monsterapi
!python3 -m pip install llama-index --quiet
!python3 -m pip install monsterapi --quiet
!python3 -m pip install sentence_transformers --quiet

!pip install llama-index-embeddings-huggingface
!python3 -m pip install pypdf --quiet
!pip install pymupdf

import os
from llama_index.llms.monsterapi import MonsterLLM
from llama_index.core.embeddings import resolve_embed_model
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
import fitz  # PyMuPDF

# set up your FREE MonsterAPI key to access the models
os.environ["MONSTER_API_KEY"] = "YOUR_API_KEY"

Step 2: Set up the Model using MonsterAPI

Once the environment is set up, load an instance of Meta’s Llama-3-8B-Instruct model using LlamaIndex to call the model API. Test the model by running an example query against it.

Why use the Llama-3-8B-Instruct model?

Llama-3-8B-Instruct is one of the latest models released by Meta, and it outperforms models in its class on many benchmarks such as MMLU, knowledge reasoning, and reading comprehension. It is an accurate and efficient model for practical purposes with relatively modest compute requirements.

# create a model instance
model = "meta-llama/Meta-Llama-3-8B-Instruct"

# set a MonsterAPI instance for model
llm = MonsterLLM(model=model, temperature=0.75)

# ask a general query to the LLM to ensure the model is loaded
result = llm.complete("What's the difference between AI and ML?")
print(result)

Step 3: Load the Documents and Set up a VectorStoreIndex for the AI Agent

Now we will load the documents and store them in a VectorStoreIndex object from LlamaIndex. Once the data is vectorized and stored, we can send queries to the LlamaIndex query engine, which combines the MonsterAPI LLM instance, the VectorStoreIndex, and memory to generate a suitable response.

# store the data in your local directory 
!mkdir -p ./data
!wget -O ./data/paper.pdf https://arxiv.org/pdf/2005.11401.pdf
# load the data using LlamaIndex's directory loader
documents = SimpleDirectoryReader(input_dir="./data").load_data()

# load the MonsterAPI LLM and the embedding model
llm = MonsterLLM(model=model, temperature=0.75)
embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
splitter = SentenceSplitter(chunk_size=1024)

# vectorize the documents using a splitter and embedding model
index = VectorStoreIndex.from_documents(
    documents, transformations=[splitter], embed_model=embed_model
)

# set up a query engine
query_engine = index.as_query_engine(llm=llm)

# ask a query to the RAG agent to access the custom data and produce an accurate answer
response = query_engine.query("What is Retrieval-Augmented Generation?")
print(response)

Output screenshot: the RAG query response generated by the agentic RAG setup.

Finally, we have developed our RAG agent, which uses custom data to answer user queries that traditional models cannot answer accurately. As shown above, the query engine combines the newly added documents in the LlamaIndex vector store with the MonsterAPI LLM to answer questions grounded in that data.
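
To push this further in the agentic direction, the query engine itself can be wrapped as a tool and handed to a ReAct agent, so the LLM decides when to consult the document. A minimal sketch, reusing the query_engine and llm objects from above (the tool name and description are illustrative):

from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.agent import ReActAgent

# wrap the RAG query engine as a tool the agent can choose to call
rag_tool = QueryEngineTool(
    query_engine=query_engine,
    metadata=ToolMetadata(
        name="rag_paper",
        description="Answers questions about the Retrieval-Augmented Generation paper.",
    ),
)

# a ReAct agent that reasons step by step and calls the tool when needed
agent = ReActAgent.from_tools([rag_tool], llm=llm, verbose=True)
print(agent.chat("Summarize the key idea behind Retrieval-Augmented Generation."))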

Conclusion

AI agents are transforming the way we interact with AI technologies: they are assistants and tools that mimic human-like thinking and behavior to perform tasks autonomously.

We learned what AI agents are, how they work, and many of their real-world use cases. Agents mainly comprise a memory layer, a reasoning loop, models, and tools, which together achieve the desired tasks without much human intervention.

By leveraging powerful frameworks like LlamaIndex and MonsterAPI, we can build capable agents that can retrieve, augment, and generate personalized context-specific answers to users in any domain or industry. We also saw a hands-on agentic RAG example that can be used for many applications. As these technologies continue to evolve, the possibilities for creating more autonomous and intelligent applications will increase manyfold.

Key Takeaways

  • Learned about autonomous agents and how their working methodology mimics human behavior to increase productivity and improve task performance.
  • Understood the fundamental differences between large language models and AI agents, and their applicability to real-world problem scenarios.
  • Gained insights into the four major components of AI agents (the reasoning loop, tools, models, and the memory layer), which form the foundation of any AI agent.

Frequently Asked Questions

Q1. Does LlamaIndex have agents?

A. Yes, LlamaIndex provides built-in support for developing AI agents, with tools such as function calling, ReAct agents, and LLM integrations.

Q2. What is an LLM agent in LlamaIndex?

A. An LLM agent in LlamaIndex is a semi-autonomous piece of software that uses tools and LLMs to perform a task or series of tasks to achieve the end user’s goals.

Q3. What’s the major difference between LLM and AI agent?

A. Large language models (LLMs) interact mostly through text and text processing, while AI agents leverage tools, functions, and memory within an environment to execute actions and accomplish tasks autonomously.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


