Yangpu Rolls Out Corporate Innovations: Create Your DeepSeek in Just 5 Minutes — March 24, 2025
At the recent Yangpu Technology Innovation Conference, themed "Building the Digital Yangpu New Quality Innovation Belt", a flurry of corporate innovation projects was unveiled. Notably, the debut of DaoCloud d.run's DeepSeek R1 model captured significant attention.
With a single click in the "Model Experience - Text Model" section, users can seamlessly access both the full and distilled versions of the stable DeepSeek R1 model. Deploying the 1.5B, 14B, and 32B distilled versions is equally straightforward: no GPU purchases, driver installations, network or storage configuration, environment setup, or model downloads are required. By eliminating these cumbersome steps, the platform lets users stand up their own personalized DeepSeek in a mere five minutes.
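Once such a model service is running, it is typically consumed through an OpenAI-compatible chat API. The snippet below is a minimal sketch of how a client might build the request body; the endpoint URL and the model identifier are placeholders for illustration, not actual d.run values, and the real ones would come from the platform console.

```python
import json

# Placeholder values for illustration only; the real endpoint and model
# name are shown in the d.run console after the service is created.
API_BASE = "https://example.d.run/v1"   # hypothetical endpoint
MODEL = "deepseek-r1-distill-14b"       # hypothetical model identifier

def build_chat_request(prompt: str, temperature: float = 0.6) -> dict:
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "stream": False,
    }

payload = build_chat_request("Explain what a distilled model is in one sentence.")
print(json.dumps(payload, indent=2))
```

The same payload shape works for both the full and the distilled versions; only the model identifier changes.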
The d.run platform accommodates a diverse array of large-model architectures, including MoE, and ships with comprehensive tooling spanning the full lifecycle of pretraining, fine-tuning, and inference, delivering a genuinely plug-and-play experience. Naturally, the reliability of model services hinges on the underlying computational power. Leveraging cloud-native technology, d.run adeptly orchestrates computational resources, pushing GPU utilization beyond 80%. It also supports a range of prominent chip architectures, both domestic and international, including NVIDIA, Huawei Ascend, MetaX, and Enflame, and by dynamically assessing a model's requirements it helps users automatically identify the most suitable compute configuration. Whether you are a beginner or a seasoned developer, d.run offers a stable and dependable DeepSeek R1 experience.
When intelligent scheduling meets a powerful reasoning model, d.run is supercharging the evolution of DeepSeek R1.
When we look at the open-source release of DeepSeek as a window into innovation within the AI infra domain, it becomes clear that DaoCloud has been a trailblazer for a full decade. The old adage "constant dripping wears away a stone" holds true here. After ten years of relentless effort, DaoCloud has not only seized the opportunities for innovation in AI infra but has also sprinted ahead of the pack, achieving robust growth within the global tech ecosystem. To put it simply, DaoCloud has meticulously constructed a formidable open-source technology matrix in the realm of AI orchestration and scheduling.
Consider the competition in AI infra. Kubernetes has emerged as the de facto standard for orchestrating AI computing power, and its continuous technological advancement has become the beating heart driving the industry forward. DaoCloud has carved out a significant voice within the Kubernetes community, ranking third globally and first in China in contributions. Building on the Kubernetes ecosystem, DaoCloud has independently open-sourced or made core contributions to a suite of groundbreaking technologies. KWOK, for instance, offers a lightweight way to simulate large-scale clusters and stress-test schedulers, and has been eagerly adopted by global AI powerhouses like NVIDIA and OpenAI. Spiderpool brings advanced RDMA network optimization to the table, well suited to AI workloads. Meanwhile, HAMi revolutionizes heterogeneous computing device management by slicing GPU compute into increments as fine as 1%, pushing utilization rates to unprecedented levels.
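As a rough illustration of the fine-grained slicing HAMi enables, a Kubernetes pod can request a fraction of a GPU along the following lines. This is a sketch based on HAMi's published resource-name conventions; treat the exact keys and values as assumptions to verify against the HAMi documentation for your installed version.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sliced-gpu-demo        # hypothetical pod name
spec:
  containers:
  - name: worker
    image: nvidia/cuda:12.4.0-base-ubuntu22.04
    resources:
      limits:
        nvidia.com/gpu: 1       # one virtual GPU slice
        nvidia.com/gpucores: 10 # roughly 10% of the card's compute
        nvidia.com/gpumem: 4096 # device memory cap in MB (~4 GiB)
```

Several such pods can then share a single physical card, which is how utilization climbs well past what one-pod-per-GPU scheduling allows.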
To further bolster inference capabilities at the upper layers, the community has witnessed the birth of AIBrix, the first enterprise-level inference system built on Kubernetes. DaoCloud stands as a core contributor to this project, collaborating with a powerhouse lineup of open-source industrial and academic partners. Together with ByteDance, Google, the University of Michigan, the University of Illinois Urbana-Champaign, and the University of Washington, DaoCloud is focused on optimizing inference efficiency in production environments and fast-tracking the transition of large-scale model services into an era of high-efficiency implementation. Looking ahead, as the industry grapples with the distinct demands of distributed inference scenarios, DaoCloud will remain steadfast in its commercial strategy of "open core + enterprise extension". This approach will see the company transition from breaking through technological barriers to empowering entire ecosystems, ensuring that the value of open source transcends the confines of the tech community and reaches deep into the heart of industries everywhere.