That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.
There's a lot of stuff in the new release.
DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like many of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:
DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.
I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.
This is an interesting flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).
Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?
(I also just spotted that the MIT license files say "Copyright (c) 2023 DeepSeek", so they may need to pay a bit more attention to how they copied those in.)
Licensing aside, these distilled models are fascinating beasts.
Running DeepSeek-R1-Distill-Llama-8B-GGUF
Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.
I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.
First I fetched and ran the model using Ollama itself:
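(A sketch of the command, assuming the Q8_0 quantization tag on Unsloth's Hugging Face repository; Unsloth publishes several quantization levels:)

```bash
# Pull and run the GGUF model straight from Hugging Face via Ollama
# (the Q8_0 tag is an assumed quantization; others such as Q4_K_M also exist)
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```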
This downloads around 8.5GB of model data and starts an interactive chat interface.
Once the model has been fetched, LLM can talk to it too. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
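Those logged prompts and responses are easy to pull back out again with LLM's built-in logs command:

```bash
# Show the most recent logged prompt and response
llm logs -n 1

# Print the path to the SQLite database (handy for opening it in Datasette)
llm logs path
```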
I installed the plugin and ran it like this:
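(A sketch under the same assumption about the Q8_0 model tag:)

```bash
# Install the llm-ollama plugin so LLM can route prompts through Ollama
llm install llm-ollama

# Run a prompt against the model fetched above
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```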
Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:
Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.
First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're usually found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.
[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the </think> tag:
The Joke:
A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place dishes on the table, causing a few spills.
After a series of comical mishaps, the walrus looks up and says with a smile, "This isn't so fishy anymore." The pelican smirks and says, "Maybe not, but we do have a lot of krill in our tea!"
Explanation:
- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comedic chaos.