Introduction

Mistral has released its first multimodal model, Pixtral-12B-2409. It is built on Mistral’s 12-billion-parameter Nemo 12B and can accept both images and text as input. Let’s look at the model, how it can be used, how well it performs, and everything else you need to know.

What is Pixtral-12B?

Pixtral-12B is a multimodal model derived from Mistral’s Nemo 12B, with an added 400M-parameter vision adapter. The model can be downloaded from a torrent file or from Hugging Face under an Apache 2.0 license. Let’s look at some of the technical features of the Pixtral-12B model:

  • Model Size: 12 billion parameters
  • Layers: 40 layers
  • Vision Adapter: 400 million parameters, utilizing GeLU activation
  • Image Input: Accepts 1024 x 1024 images via URL or base64, segmented into 16 x 16 pixel patches
  • Vision Encoder: 2D RoPE (Rotary Position Embeddings) enhances spatial understanding
  • Vocabulary Size: Up to 131,072 tokens
  • Special Tokens: img, img_break, and img_end
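To get a feel for how the image input translates into tokens, here is a minimal sketch (my own illustration, not Mistral’s code) of how many 16 x 16 patches a 1024 x 1024 image produces:

```python
def count_image_patches(width: int, height: int, patch_size: int = 16) -> int:
    """Number of patches when an image is tiled into patch_size x patch_size squares."""
    return (width // patch_size) * (height // patch_size)

# A 1024 x 1024 image yields a 64 x 64 grid of patches
print(count_image_patches(1024, 1024))  # 4096
```

Each patch is embedded and fed to the vision encoder, so higher-resolution inputs translate directly into more image tokens in the sequence.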

How to Use Pixtral-12B-2409?

As of September 13, 2024, the model is not available on Mistral’s Le Chat or La Plateforme, so it cannot be used through the chat interface or the API directly. However, we can download the model through a torrent link and use it, or even fine-tune the weights to suit our needs. We can also use the model with the help of Hugging Face. Let’s look at both options in detail:

Torrent link: Users can copy this link

I’m using an Ubuntu laptop, so I’ll use the Transmission application (it’s pre-installed on most Ubuntu systems). You can use any other torrent client to download the open-source model.

  • Click “File” at the top left and select the “Open URL” option, then paste the link you copied.
  • Click “Open” to start downloading the Pixtral-12B model; a folder containing the model files will be downloaded.

Hugging Face

This model demands significant GPU memory, so I suggest using the paid version of Google Colab or a Jupyter Notebook on RunPod. I’ll be using RunPod for the demo of the Pixtral-12B model. If you’re using a RunPod instance with a 40 GB disk, I suggest the A100 PCIe GPU.

We’ll be using Pixtral-12B with the help of vLLM. Make sure to install the following:

!pip install vllm

!pip install --upgrade mistral_common

Go to https://huggingface.co/mistralai/Pixtral-12B-2409 and agree to the terms to access the model. Then go to your profile, click “Access Tokens,” and create one. While creating the token, ensure the required permission boxes are checked:

Now run the following code and paste the access token to authenticate with Hugging Face:

from huggingface_hub import notebook_login

notebook_login()

This will take a while, as roughly 25 GB of model weights are downloaded:

from vllm import LLM
from vllm.sampling_params import SamplingParams

model_name = "mistralai/Pixtral-12B-2409"
sampling_params = SamplingParams(max_tokens=8192)

llm = LLM(model=model_name, tokenizer_mode="mistral", max_model_len=70000)

prompt = "Describe this image"
image_url = "https://images.news18.com/ibnlive/uploads/2024/07/suryakumar-yadav-catch-1-2024-07-4a496281eb830a6fc7ab41e92a0d295e-3x2.jpg"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

I asked the model to describe the following image, which is from the T20 World Cup 2024:

outputs = llm.chat(messages, sampling_params=sampling_params)
print('\n' + outputs[0].outputs[0].text)

From the output, we can see that the model identified the image as being from the T20 World Cup, and it could distinguish the separate frames within the same image to explain what was happening.
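The feature table above notes that images can also be passed as base64. As a hedged sketch, here is one way to encode a local file into a data URL that can be swapped in for the HTTP URL in the messages payload (image_to_data_url is a helper defined here for illustration, not part of vLLM):

```python
import base64

def image_to_data_url(path: str, mime: str = "image/jpeg") -> str:
    """Read a local image file and encode it as a base64 data URL."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

# Usage: replace the HTTP URL in the messages payload with a local image, e.g.
# image_url = image_to_data_url("match_photo.jpg")
```

This is useful when the image lives on your machine rather than at a public URL.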

prompt = "Write a story describing the whole event that might have happened"
image_url = "https://images.news18.com/ibnlive/uploads/2024/07/suryakumar-yadav-catch-1-2024-07-4a496281eb830a6fc7ab41e92a0d295e-3x2.jpg"

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    },
]

outputs = llm.chat(messages, sampling_params=sampling_params)
print('\n' + outputs[0].outputs[0].text)

When asked to write a story about the image, the model gathered context about the setting’s characteristics and what exactly happened in the frame.

Conclusion

The Pixtral-12B model significantly advances Mistral’s AI capabilities, blending text and image processing to expand its use cases. Its ability to handle high-resolution 1024 x 1024 images with a detailed understanding of spatial relationships and its strong language capabilities make it an excellent tool for multimodal tasks such as image captioning, story generation, and more.

Despite its powerful features, the model can be further fine-tuned to meet specific needs, whether improving image recognition, enhancing language generation, or adapting it for more specialized domains. This flexibility is a crucial advantage for developers and researchers who want to tailor the model to their use cases.

Frequently Asked Questions

Q1. What is vLLM?

A. vLLM is a library optimized for efficient inference of large language models, improving speed and memory usage during model execution.

Q2. What’s the use of SamplingParams?

A. SamplingParams in vLLM control how the model generates text, specifying parameters like the maximum number of tokens and sampling techniques for text generation.

Q3. Will the model be available on Mistral’s Le Chat?

A. Yes, Sophia Yang, Head of Mistral Developer Relations, mentioned that the model would soon be available on Le Chat and La Plateforme.

I’m a tech enthusiast, graduated from Vellore Institute of Technology. I’m working as a Data Science Trainee right now. I am very much interested in Deep Learning and Generative AI.


