There have been many amazing developments in AI over the last few years. ChatGPT first reached the market in November 2022, a remarkable breakthrough that made headlines around the world. ChatGPT and other AI products are now driving demand for software developers.

More recently, we have also heard about some of the newer developments in AI. Just today, Microsoft announced that it is introducing new AI employees that can handle queries.

But one of the biggest developments is the advent of RAG. Keep reading to learn how it is shaping our future.

RAG Is the Newest Shiny Toy in AI

When we’re talking about AI and Retrieval Augmented Generation (RAG), it helps to think of an LLM as a person.

We’ve all heard the phrase “Jack of all trades, master of none,” and it applies to large language models (LLMs). In their default form, LLMs are generalists. IBM has a great overview of them.

If you want an LLM to participate in a business and either create productive output or make decisions – to move beyond generalist – you need to teach it about your business, and you need to teach it a lot. The list is long, but as a baseline you need to teach it the basic skills to do a job, the organization and its processes, and the desired outcome and potential problems, and you need to feed it the context required to solve the problem at hand. You also need to provide it with the tools to either effect a change or learn more. This is one of the newest ways that AI can help businesses.

In this way, the LLM is very like a person. When you hire someone, you start by finding the skills you need; then you help them understand your business, educate them on the business processes they work within, give them targets and goals, train them on their job, and give them the tools to do it.

For people, this is all achieved with formal and informal training, as well as providing good tools. For a Large Language Model, this is achieved with RAG. So, if we want to leverage the benefits of AI in any organization, we need to get very good at RAG.

So what’s the challenge?

One of the limitations of modern large language models is the amount of contextual information that can be provided for each task you want the LLM to perform.

RAG provides that context, so preparing a succinct and accurate context is crucial. It’s this context that teaches the model the specifics of your business and of the task you’re asking of it. Give an LLM the correct question and the correct context, and it will give an answer or make a decision as well as a human being (if not better).
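The retrieve-then-generate pattern behind RAG can be sketched in a few lines. Everything below is illustrative: the document store is a plain Python list, the relevance score is simple word overlap standing in for embedding similarity, and the assembled prompt would be handed to whichever LLM you use.

```python
# Illustrative sketch of retrieval-augmented generation: pick the most
# relevant documents for a question, then build a prompt around them.
# The store, scoring, and prompt template are toy stand-ins, not a real API.

def score(query: str, doc: str) -> int:
    """Toy relevance score: shared-word count (real systems use embeddings)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine the retrieved context with the user's question."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Use only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 14 days of a return request.",
    "Our support desk is open 9am-5pm GMT on weekdays.",
    "Enterprise contracts renew annually unless cancelled in writing.",
]
question = "How long do refunds take?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)  # This prompt would then be sent to the LLM of your choice.
```

The key point is that the model never sees the whole knowledge base; it sees only the few documents retrieval judged relevant, which is why the quality of that curation matters so much.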

It’s important to make the distinction that people learn by doing; LLMs don’t learn naturally, they are static. To teach an LLM, you need to create that context, as well as a feedback loop that updates the RAG context so it does better next time.
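Because the model itself stays static, the “learning” happens in the context store. A minimal sketch of that feedback loop, with an assumed in-memory store and record format (both illustrative, not any specific product):

```python
# Minimal sketch: since the LLM is static, improvement comes from writing
# human-reviewed corrections back into the store that retrieval draws from.
# The ContextStore class and Q/A record format are assumptions for illustration.

class ContextStore:
    def __init__(self, docs: list[str]):
        self.docs = list(docs)

    def add_correction(self, question: str, corrected_answer: str) -> None:
        # Persist the reviewed answer as a new retrievable document, so
        # the next retrieval for a similar question can surface it.
        self.docs.append(f"Q: {question} A: {corrected_answer}")

store = ContextStore(["Invoices are emailed on the 1st of each month."])

# A reviewer spots a wrong answer and records the correction:
store.add_correction(
    "When are invoices sent?",
    "On the 1st of each month, or the next business day if that is a weekend.",
)
print(len(store.docs))  # 2
```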

How efficiently that context is curated is key to the performance of the model, and it is directly correlated with cost: the heavier the lift to create the context, the more expensive the project becomes in both time and money.

Equally, if that context isn’t accurate, you’ll spend far longer correcting, tweaking, and improving the model rather than getting results straight off the bat.

This makes AI a data problem.

Creating the context needed for LLMs is hard because it needs lots of data – ideally, everything your business knows that might be relevant. That data then needs to be distilled down to the most relevant information. No mean feat in even the most data-driven organization.

In reality, most businesses have neglected large parts of their data estate for a long time, especially the less structured data designed to teach humans (and therefore LLMs) how to do the job.

LLMs and RAG are bringing an age-old problem even further to light: data exists in silos that are complicated to reach.

When you consider that we’re now looking at unstructured data as well as structured data, we’re looking at even more silos. The context needed to get value from AI means the scope of data is no longer just pulling numbers from Salesforce. If organizations are going to see true value in AI, they also need the training materials used to onboard humans, PDFs, call logs – the list goes on.

For organizations, starting to hand over business processes to AI is daunting, but the organizations best able to curate contextual data will be best placed to achieve it.

At its core, ‘LLM + context + tools + human oversight + feedback loop’ is the formula for AI accelerating just about any business process.
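That formula can be sketched as a single loop. Every component below is a stub standing in for a real retriever, model, tool belt, and review step; all of the names are illustrative assumptions, not any particular product’s API.

```python
# Sketch of the 'LLM + context + tools + human oversight + feedback loop'
# shape. Each function is a stub: swap in a real retriever, model call,
# tool invocations, and review UI in practice.

feedback_log: list[tuple[str, str]] = []

def retrieve(question: str) -> list[str]:
    return ["Refunds are processed within 14 days."]  # stub retriever (context)

def llm(question: str, context: list[str]) -> str:
    # Stub model; tools would also be exposed to the model at this step.
    return f"Based on '{context[0]}', here is the answer to: {question}"

def human_review(draft: str) -> str:
    return draft  # stub: a reviewer would approve or correct the draft here

def store_feedback(question: str, final: str) -> None:
    feedback_log.append((question, final))  # feeds the next retrieval cycle

def run_task(question: str) -> str:
    context = retrieve(question)     # context
    draft = llm(question, context)   # LLM + tools
    final = human_review(draft)      # human oversight
    store_feedback(question, final)  # feedback loop
    return final

answer = run_task("How long do refunds take?")
print(answer)
```

Each pass through `run_task` leaves the feedback log a little richer, which is exactly the loop that lets a static model get better at your business over time.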

Matillion has a long and storied history of helping customers be productive with data. For more than a decade, we’ve been evolving our platform – from BI to ETL, and now to the Data Productivity Cloud – adding building blocks that enable our customers to make the most of the latest technological advancements in data productivity. AI and RAG are no exceptions. We’ve been adding building blocks to our tool that let customers assemble and test RAG pipelines: preparing data for the vector stores that power RAG, assembling that all-important context for the LLM, and providing the tools needed to give feedback on and assess the quality of LLM responses.

We’re opening up access to RAG pipelines without the need for hard-to-come-by data scientists or huge amounts of investment, so that you can harness LLMs that are no longer just a ‘jack of all trades’ but a valuable and game-changing part of your organization.




