FAQ
Our new vGPU products are a variation of our popular root servers with high-quality hardware: in addition to guaranteed CPU performance, they also offer dedicated GPU resources. This makes them particularly suitable for compute-intensive workloads such as AI inference, video encoding, or data analysis.
This early access campaign gives you exclusive, early access to our new vGPU products: you can be among the first to buy and test them. We would also be delighted to hear your feedback and will send you a non-binding feedback form.
Check whether you have entered a valid e-mail address or whether the confirmation e-mail is in the spam folder. If you have any problems, you can contact us here.
Yes, the number of vGPU servers within this early access campaign is limited. The principle of first come, first served applies. In addition, only one order per customer is permitted during the early access phase in order to ensure fair distribution.
If you have registered for the early access newsletter up to and including May 14, you will receive an e-mail notification from us as soon as the new products are available for early access. The sales launch for early access subscribers is scheduled for May 15. From then on, you can use the code contained in the e-mail to get your hands on one of the strictly limited new products. The ordering process itself works as usual, and you will receive access to your ordered server as soon as possible. Please note: as a new customer, this process may take a little longer due to the necessary verification or prepayment.
If you have fully registered for early access, you will receive an individual code with which you can purchase one of the two plans.
However, the number of new vGPU servers that we allocate during the early access phase is very limited. The servers will be allocated on a "first come, first served" basis, so we cannot guarantee that every subscriber will be able to purchase one.
The Nvidia drivers are not publicly available and officially support only Ubuntu, which is why we currently offer only an Ubuntu image for the vGPU servers.
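Once your server is provisioned, you can verify that the vGPU is visible inside the guest. Below is a minimal sketch, assuming the pre-installed Ubuntu image already ships the Nvidia driver and the standard `nvidia-smi` utility; the query fields shown are ordinary `nvidia-smi` options, not anything specific to our images.

```python
# Minimal check that the vGPU is visible inside the guest.
# Assumes the Nvidia driver and nvidia-smi are already present in the Ubuntu image.
import subprocess

def gpu_summary() -> str:
    """Return GPU name, total memory and driver version as reported by nvidia-smi."""
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,driver_version",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(gpu_summary())  # prints one CSV line per visible GPU
```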
We have tested the new servers with a selection of compact open-source language models from various architectures. Below is a list of LLMs that are compatible with the specified server plans; a minimal usage sketch follows the lists.
RS 2000 vGPU 7 + RS 4000 vGPU 14
- llama3.2:1b
- llama3.2:3b
- llama3.1:8b
- mistral:7b
- gemma3:1b
- gemma3:4b
- phi3:3.8b
- deepseek-r1:1.5b
- deepseek-r1:7b
- deepseek-r1:8b
- qwen2.5:0.5b
- qwen2.5:1.5b
- qwen2.5:3b
- qwen2.5:7b
- qwen2.5-coder:0.5b
- qwen2.5-coder:1.5b
- qwen2.5-coder:3b
- qwen2.5-coder:7b
- qwen:0.5b
- qwen:1.8b
- qwen:4b
- qwen:7b
- gemma:2b
- gemma:7b
- qwen2:0.5b
- qwen2:1.5b
- qwen2:7b
- gemma2:2b
- llama2:7b
- tinyllama:1.1b
- starcoder2:3b
- starcoder2:7b
- dolphin3:8b
RS 4000 vGPU 14 only
- gemma3:12b
- deepseek-r1:14b
- phi4:14b
- qwen2.5:14b
- qwen2.5-coder:14b
- qwen:14b
- llama2:13b
- phi3:14b
- mistral-nemo:12b
- starcoder2:15b
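The model tags above follow the Ollama naming scheme. As one possible way to try them, here is a minimal sketch that queries a locally running Ollama server via its /api/generate endpoint; it assumes you have installed Ollama yourself on the vGPU server, that it is listening on its default port 11434, and that the chosen model has already been pulled. The model tag and prompt are only placeholders.

```python
# Minimal sketch: ask a locally running Ollama server to generate text with
# one of the listed models. Assumes Ollama is installed and the model has
# already been pulled (e.g. `ollama pull llama3.2:3b`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def generate(model: str, prompt: str) -> str:
    """Send a single non-streaming generation request and return the response text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # llama3.2:3b is one of the models listed for both plans above.
    print(generate("llama3.2:3b", "Summarize what a vGPU is in one sentence."))
```

Whether a given model fits depends mainly on the GPU memory of the plan, which is why the larger models in the second list are recommended for the RS 4000 vGPU 14 only.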