DeepSeek V3: The $5.5M-Trained Model That Beats GPT-4o & Llama 3.1


Model                     Arena-Hard   AlpacaEval 2.0
DeepSeek-V2.5-0905        76.2         50.5
Qwen2.5-72B-Instruct      81.2         49.1
LLaMA-3.1 405B            69.3         40.5
GPT-4o-0513               80.4         51.1
Claude-Sonnet-3.5-1022    85.2         52.0
DeepSeek-V3               85.5         70.0
  1. Arena-Hard Performance:
    • DeepSeek-V3 ranks highest with 85.5, narrowly surpassing Claude-Sonnet-3.5 (85.2) and significantly outperforming DeepSeek-V2.5 (76.2).
    • This shows its exceptional ability to generate well-rounded, context-aware responses in difficult scenarios.
  2. AlpacaEval 2.0 Performance:
    • DeepSeek-V3 leads with 70.0, far ahead of Claude-Sonnet-3.5 (52.0), the second-best performer.
    • This demonstrates significant improvements in user preference and overall quality of open-ended outputs, showcasing better alignment with user expectations.
  3. Comparison with Competitors:
    • Qwen2.5 (Arena-Hard: 81.2, AlpacaEval: 49.1):
      • Performs reasonably well on Arena-Hard but falls behind significantly in user preference, indicating weaker alignment with user-friendly response styles.
    • GPT-4o-0513 (Arena-Hard: 80.4, AlpacaEval: 51.1):
      • Competitive on both metrics but doesn’t match the user-centered quality of DeepSeek-V3.
    • LLaMA-3.1 (Arena-Hard: 69.3, AlpacaEval: 40.5):
      • Scores lower on both benchmarks, highlighting weaker open-ended generation capabilities.
    • DeepSeek-V2.5 (Arena-Hard: 76.2, AlpacaEval: 50.5):
      • The leap from V2.5 to V3 is substantial, indicating major upgrades in response coherence and user preference alignment.

You can also refer to this to understand the evaluation better:

deepseek evaluations

Link to the DeepSeek V3 GitHub

Aider Polyglot Benchmark Results

aider polyglot

Here are the Aider Polyglot Benchmark Results, which evaluate models on their ability to complete coding tasks correctly. The evaluation is divided into two output formats:

  • Diff-like format (shaded bars): Tasks where outputs resemble code diffs or small updates.
  • Whole format (solid bars): Tasks requiring the generation of an entire response.

Key Observations

  1. Top Performers:
    • o1-2024-11-12 leads the benchmark with nearly 65% accuracy in the whole format, showing exceptional performance across tasks.
    • DeepSeek Chat V3 Preview and Claude-3.5 Sonnet-2024-1022 follow closely, with scores in the range of 40–50%, demonstrating solid task completion in both formats.
  2. Mid-Performers:
    • Gemini-exp-1206 and Claude-3.5 Haiku-2024-1022 score moderately in both formats, highlighting balanced but average performance.
    • DeepSeek Chat V2.5 and Flash-2.0 sit in the lower mid-range, showing weaker task resolution abilities compared to the leading models.
  3. Lower Performers:
    • y-lightning, Qwen2.5-Coder 32B-Instruct, and GPT-4o-mini 2024-07-18 have the lowest scores, with accuracies below roughly 15%. This indicates significant limitations in handling both diff-like and whole-format tasks.
  4. Format Comparison:
    • Models generally perform slightly better in the Whole format than the Diff-like format, implying that full-response generation is handled better than smaller, incremental changes.
    • The shaded bars (diff-like format) are consistently lower than their whole-format counterparts, indicating a consistent gap in this specific capability.

DeepSeek Chat V3 Preview’s Position:

  • Ranks among the top three performers.
  • Scores around 50% in the whole format and slightly lower in the diff-like format.
  • This shows strong capabilities in handling complete task generation but leaves room for improvement in diff-like tasks.

Insights:

  • The benchmark highlights the diverse strengths and weaknesses of the evaluated models.
  • Models like o1-2024-11-12 show dominance across both task formats, whereas others like DeepSeek Chat V3 Preview excel primarily in full-task generation.
  • Lower performers indicate a need for optimization in both nuanced and broader task-handling capabilities.

This ultimately reflects the versatility and specialized strengths of different AI systems in completing benchmark tasks.

DeepSeek V3’s Chat Website & API Platform

  1. You can interact with DeepSeek-V3 through the official website: DeepSeek Chat.
DeepSeek platform
  2. Additionally, they offer an OpenAI-Compatible API on the DeepSeek Platform: Link.
    API usage is paid and billed per token:
DeepSeek api price
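
If you already use the OpenAI Python SDK, switching to DeepSeek only requires pointing the client at DeepSeek's base URL. Here is a minimal sketch; the endpoint and model name follow DeepSeek's API documentation, while the key placeholder and prompt are illustrative:

from openai import OpenAI

# Point the standard OpenAI client at DeepSeek's OpenAI-compatible endpoint
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",  # obtain this from the DeepSeek Platform
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",  # DeepSeek-V3 chat model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize DeepSeek-V3 in one sentence."},
    ],
)
print(response.choices[0].message.content)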

How to Run DeepSeek V3?

If you prefer not to use the chat UI and want to work with the model directly, there's an alternative for you: all of the DeepSeek-V3 weights are released on Hugging Face as safetensors files.
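
For example, you can pull the checkpoint with the huggingface_hub library. This is only a sketch: the repo id is deepseek-ai/DeepSeek-V3, the local path is up to you, and the download runs to several hundred gigabytes:

from huggingface_hub import snapshot_download

# Download the full DeepSeek-V3 checkpoint (hundreds of GB of safetensors shards)
snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3",
    local_dir="/path/to/DeepSeek-V3",
)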

Model Size and Hardware Requirements:

The model is massive, with 671 billion parameters, making it challenging to run on standard consumer-grade hardware. If your hardware isn't powerful enough, it's recommended to use the DeepSeek platform for direct access, or to wait for a Hugging Face Space if one becomes available.
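
For a rough sense of scale: 671 billion parameters stored in FP8 (one byte each) already amount to roughly 671 GB of weights, and a BF16 copy (two bytes each) roughly doubles that, before accounting for activations or the KV cache.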

How to Run Locally?

If you have sufficient hardware, you can run the model locally using frameworks such as the DeepSeek-Infer demo, SGLang, LMDeploy, TensorRT-LLM, or vLLM, on NVIDIA GPUs, AMD GPUs, or Huawei Ascend NPUs.
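
As one example, a recent vLLM build with DeepSeek-V3 support can serve the model behind an OpenAI-compatible endpoint. Treat this as a sketch: the tensor-parallel size and context length below are placeholders that depend entirely on your hardware.

vllm serve deepseek-ai/DeepSeek-V3 --tensor-parallel-size 8 --trust-remote-code --max-model-len 8192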

Note that the released checkpoint is in FP8. If your chosen framework expects BF16 weights instead, you can convert them with the script below; keep in mind that the BF16 copy roughly doubles the storage and memory footprint.

Here’s how you can convert FP8 weights to BF16:

Conversion script if you need bf16

cd inference
python fp8_cast_bf16.py --input-fp8-hf-path /path/to/fp8_weights --output-bf16-hf-path /path/to/bf16_weights

Setup Process with DeepSeek-Infer Demo

Hugging Face’s transformers library does not directly support the model yet. To set it up, you’ll need to:

Clone the DeepSeek AI GitHub repository:

git clone https://github.com/deepseek-ai/DeepSeek-V3.git

Install the required dependencies:

cd DeepSeek-V3/inference
pip install -r requirements.txt

Download the Hugging Face checkpoints and run the model locally.
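
At the time of writing, the repository's README documents a workflow roughly like the following for the DeepSeek-Infer demo. Treat it as a sketch and check the README for the exact flags, config file names, and number of nodes; $RANK and $ADDR are placeholders for your cluster settings.

# Convert the downloaded Hugging Face checkpoint into the demo's sharded format
python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16

# Launch interactive generation across the participating nodes
torchrun --nnodes 2 --nproc-per-node 8 generate.py --node-rank $RANK --master-addr $ADDR --ckpt-path /path/to/DeepSeek-V3-Demo --config configs/config_671B.json --interactive --temperature 0.7 --max-new-tokens 200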

Refer to this: DeepSeek V3

Recommendation:

  • If you have powerful hardware, try running the model locally to fully explore its potential.
  • Otherwise, use the DeepSeek.com chat UI or their API platform for seamless access.

LLM DeepSeek Plugin

You can also use the llm-deepseek plugin (for the llm command-line tool) to access the model. The leading ! below is notebook syntax; drop it when running in a regular shell:

# Step 1: Install the llm package
!pip install llm
# Step 2: Install the DeepSeek plugin
!llm install llm-deepseek
# Step 3: Set the DeepSeek API key
!llm keys set deepseek
# Paste your API key when prompted
# Step 4: Run the DeepSeek Chat Model
!llm -m deepseek-chat 'five great names for a pet ocelot'
# Step 5: Run the DeepSeek Coder Model
!llm -m deepseek-coder 'how to reverse a linked list in python'
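
The llm package also exposes a Python API, so you can call the same models from a script. A minimal sketch, assuming the plugin is installed and the API key has been stored as above; the prompt is illustrative:

import llm

# Uses the key stored via `llm keys set deepseek`
model = llm.get_model("deepseek-chat")
response = model.prompt("Explain mixture-of-experts models in two sentences.")
print(response.text())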

DeepSeek V3 Experimentations

On the DeepSeek platform, you can enable web search and a deep-reasoning mode to tackle complex problems, similar to what OpenAI offers.

Deepseek search

Here are some prompts and outputs:

First Experiment

Prompt: Generate an SVG of a kid and a football

Output:

Here’s a simple SVG representation of a kid playing with a football. You can
copy and paste this code into an HTML file or an SVG viewer to see the
image:

height="200">

  

  

  

  

  

  

  

  

  

  

  

  

  

  

Explanation:

The kid is represented with simple shapes: a circle for the head, a rectangle
for the body, and lines for the arms and legs.

The football is an ellipse with two white lines to mimic the classic football
design.

You can adjust the colours, sizes, and positions to customize the image
further.

Let me know if you'd like to enhance or modify this!

Output

Second Experiment

Prompt: Create the classic snake game

Output:

Creating the classic Snake game involves using HTML, CSS, and JavaScript. Below is a simple implementation:

<!DOCTYPE html>
<html>
<head>
  <title>Snake Game</title>
  <style>
    body {
      display: flex;
      justify-content: center;
      align-items: center;
      height: 100vh;
      margin: 0;
      background-color: #000;
    }
    canvas {
      border: 1px solid #fff;
    }
  </style>
</head>
<body>
  <!-- canvas element and game script omitted -->
</body>
</html>
