- IPMI Remote Access
- Redundant Power Supply
- 64 GB DDR5 RAM
- 2x 1 TB NVMe SSD Storage
- 2x 1 GBit/s Network
Our GPU server systems are individually configurable and can therefore be tailored precisely to your needs. Because server GPUs cover such a wide range of applications, maximum flexibility in the choice of hardware is essential. High single-thread performance is often required as well, which is why our GPU servers are based on the latest AMD and Intel CPUs. Combined with server-grade motherboards, these deliver high stability and strong I/O performance.
Graphics cards (GPUs) in dedicated server systems are ideal for compute- and graphics-intensive workloads such as AI & machine learning, video processing & encoding, scientific calculations, or data analysis in conjunction with databases. The GPU offloads work from the CPU and, thanks to its far greater number of cores, is often 10-100x faster for these tasks.
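As a rough illustration, here is a minimal sketch (assuming a Python environment with PyTorch and a CUDA-capable driver, which is not part of a default installation) that times the same matrix multiplication once on the CPU and once on the GPU:

```python
# Minimal sketch: compare a large matrix multiplication on CPU and GPU.
# Assumption: PyTorch with CUDA support is installed on the server.
import time

import torch

def time_matmul(device: str, size: int = 4096) -> float:
    """Multiply two size x size matrices on the given device and return seconds."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # make sure setup work has finished
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the GPU kernel to complete
    return time.perf_counter() - start

if __name__ == "__main__":
    cpu_seconds = time_matmul("cpu")
    print(f"CPU: {cpu_seconds:.3f} s")
    if torch.cuda.is_available():
        gpu_seconds = time_matmul("cuda")
        print(f"GPU: {gpu_seconds:.3f} s (~{cpu_seconds / gpu_seconds:.0f}x faster)")
```

The exact speedup depends on the GPU model, the workload, and how well it parallelizes.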
Available configurations range from 64 GB to 128 GB DDR5 RAM, each with 2x 1 TB NVMe SSD storage and 2x 1 GBit/s network connectivity.
None of the products meet all your criteria? Perhaps our dedicated server overview will help you find something more suitable. If you have any questions or special requirements (high availability, cluster solutions, special hardware, etc.), our support team will be happy to help!
Our dedicated servers with graphics cards offer flexibility and stability. The hardware can be individually configured, so you are not bound by rigid hardware specifications. GPU servers are increasingly being used for AI workloads and terminal server/remote desktop services!
GPU dedicated servers offer special features that clearly distinguish them from classic CPU-based servers. They are optimized for highly parallel, computationally intensive tasks. Depending on the application, either high-performance or more modest graphics cards may be the better fit. Our graphics card servers are based on NVIDIA® RTX™ graphics cards. We also offer special graphics cards for use in AI environments; these carry the designation “ADA” in their product name.
The other server components are based on our Performance Server series. These impress with their high single-core performance, which can be very relevant for many AI applications, depending on the software used. These systems support up to 192 GB of DDR5 RAM and are therefore ideal for GPU-based server systems. In addition, high-quality server components with permanent management access (IPMI) are used.
The carbon footprint of the Internet, with its servers and data centers, keeps growing. In addition to using energy-efficient hardware, we also actively support nature: for every new rating - whether positive or negative - we have two new trees planted.
We offer the following Linux operating systems via automatic installation on our GPU dedicated servers. If you cannot find your operating system in the overview, an ISO installation is available.
Server management made easy: automatic installation, IP/rDNS management, rescue boot, IPMI remote access, and much more!
Cold aisle containment at the FRA4 site. Cold air flows out of the openings in the floor and cools the hardware installed in the racks on the left and right sides.
Rear cabling of our premium dedicated servers at FRA4 nLighten. The cabling layout varies with how homogeneous the products in a rack are and how often they are rotated.
Some of our premium servers and colocation spaces at the FRA4 nLighten site. Depending on customer requirements, this can result in very heterogeneous rack assignments.
Rear view of our Mini Server products installed at the FRA4 site.
Redundant routing stack consisting of technically different routers with nearly identical performance characteristics - FRA1 site.
Our servers are managed via our self-developed administration panel. Installation, reboots, monitoring, IP management, and traffic analysis become a breeze.
Our co-user administration also gives your colleagues quick access to all servers. The shared servers and permissions can be set separately for each co-user.
For installation, we offer all common operating systems, either as an automatic installation or as a manual installation with SSH/VNC access. If your desired operating system is not in our lineup, you can use our ISO installer or contact our support to have it added to the automatic installations.
In case of problems or misconfigurations, you can temporarily start a non-invasive rescue system. This resides exclusively in memory and allows SSH access to your server and the data on it.
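For example, a minimal sketch (assuming the third-party paramiko library; the IP address and password below are placeholders for the values shown in the panel) that connects to a booted rescue system over SSH and inspects the disks:

```python
# Minimal sketch: SSH into the rescue system and inspect the server's disks.
# Assumptions: paramiko is installed locally; host and credentials are placeholders.
import paramiko

RESCUE_HOST = "203.0.113.10"   # placeholder: your server's IP address
RESCUE_USER = "root"           # rescue systems typically log in as root
RESCUE_PASSWORD = "changeme"   # placeholder: the password shown in the panel

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(RESCUE_HOST, username=RESCUE_USER, password=RESCUE_PASSWORD)

# Confirm we are in the in-memory rescue kernel and list disks and partitions.
for command in ("uname -a", "lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT"):
    _, stdout, stderr = client.exec_command(command)
    print(f"$ {command}\n{stdout.read().decode()}{stderr.read().decode()}")

client.close()
```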
For our servers, we rely on hardware with IPMI support. IPMI data can also be queried directly in our server administration panel, so all important system parameters of your server are always available. Even reboots are possible without any problems in an emergency.
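Independently of our panel, IPMI can also be queried directly, for example with the standard ipmitool utility. A minimal Python sketch (the address and credentials are placeholders for your server's IPMI access data):

```python
# Minimal sketch: query a remote BMC over the LAN interface using ipmitool.
# Assumptions: ipmitool is installed; host, user, and password are placeholders.
import subprocess

IPMI_HOST = "203.0.113.20"   # placeholder: IPMI address of your server
IPMI_USER = "admin"          # placeholder
IPMI_PASS = "changeme"       # placeholder

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the remote BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", IPMI_HOST, "-U", IPMI_USER, "-P", IPMI_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Query the power state and the sensor readings (temperatures, fans, voltages).
print(ipmi("chassis", "power", "status"))
print(ipmi("sensor", "list"))

# In an emergency, a hard reset is possible as well:
# print(ipmi("chassis", "power", "cycle"))
```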
You also have full visibility at the network level. We provide a detailed overview of your server system's traffic usage. Grouping multiple servers into a collective traffic pool - for example, for analysis purposes - is also possible.
A GPU server is a dedicated server equipped with one or more powerful graphics cards (GPUs). These servers are optimized for computationally intensive tasks such as artificial intelligence (AI), 3D rendering, scientific simulations, and video encoding.
GPU servers are often used for the following tasks:
- AI & machine learning (training and inference)
- 3D rendering
- Video processing & encoding
- Scientific calculations and simulations
- Data analysis in conjunction with databases
Our GPU-based server systems exclusively use server GPUs. These are graphics cards that are suitable for continuous operation in data centers and can be installed in appropriate rack enclosures. Various NVIDIA graphics cards are available, such as the NVIDIA Quadro RTX 5000 ADA for AI applications.
Conventional servers rely mainly on CPUs. GPU servers additionally contain one or more graphics cards that are optimized for parallel computing. This enables massively higher computing power for certain tasks – especially in AI and graphics processing.
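As a small illustration, the GPUs visible on a server can be listed with the nvidia-smi tool that ships with the NVIDIA driver; a minimal Python sketch with no additional packages:

```python
# Minimal sketch: list the GPUs on a server via nvidia-smi.
# Assumption: the NVIDIA driver (and thus nvidia-smi) is installed.
import subprocess

query = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,memory.total,utilization.gpu",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)

for line in query.stdout.strip().splitlines():
    name, memory, utilization = (field.strip() for field in line.split(","))
    print(f"{name}: {memory} VRAM, currently {utilization} load")
```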
In general, virtualization can be performed on any of our servers. However, NVIDIA may require an additional license to use certain GPU features in a virtual environment.
Ada is the codename for the architecture of the latest generation of NVIDIA GPUs, succeeding the previous generation codenamed Ampere. These are enterprise graphics cards that are more focused on AI, real-time rendering, and hybrid workloads.
Our graphics card servers are operated exclusively in highly certified data centers in Frankfurt am Main.
Yes – especially in AI environments, a strict WAN/LAN separation may be necessary. If your AI is only used for internal business purposes and none of its data may be transferred to the public internet, we recommend using a firewall solution. We offer affordable PaaS solutions for protecting environments with critical data.
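As a simplified illustration of the idea (the LAN address below is a placeholder, and this does not replace a dedicated firewall), an internal inference endpoint can be bound to the private interface only, so it is never exposed on the public WAN side:

```python
# Minimal sketch: serve an internal endpoint on the LAN interface only.
# Assumption: 10.0.0.5 is a placeholder for the server's private (LAN) IP.
from http.server import BaseHTTPRequestHandler, HTTPServer

LAN_ADDRESS = "10.0.0.5"   # placeholder: internal LAN IP, not the public address
PORT = 8080

class InternalOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Placeholder response; a real setup would proxy to the inference backend.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"internal inference endpoint\n")

if __name__ == "__main__":
    # Listen on the LAN address only; the public WAN interface is never bound.
    HTTPServer((LAN_ADDRESS, PORT), InternalOnlyHandler).serve_forever()
```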
In general, we do not monitor which services our customers run. As a rule, we do not know what purposes the servers are used for, so mining is not ruled out.
For a small AI environment, e.g., for local AI experiments, model inference (not training), and possibly fine-tuning smaller models, VRAM (graphics memory) and computing power (especially tensor performance) are crucial. The RTX 2000 ADA graphics card is a good entry-level option and handles inference of medium-sized models such as LLaMA 2 7B.
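As a rough back-of-the-envelope sketch (our own simplified assumptions, not a benchmark), the VRAM required for a 7B-parameter model at different weight precisions can be estimated like this:

```python
# Rough estimate of VRAM needs for 7B-parameter inference (assumption, not a benchmark).
PARAMS = 7e9                      # 7 billion parameters
BYTES_PER_PARAM = {"fp16": 2, "int8": 1, "int4": 0.5}
OVERHEAD_GIB = 2.0                # rough allowance for activations and the KV cache

for precision, nbytes in BYTES_PER_PARAM.items():
    weights_gib = PARAMS * nbytes / 1024**3
    total_gib = weights_gib + OVERHEAD_GIB
    print(f"{precision}: ~{weights_gib:.1f} GiB weights, ~{total_gib:.1f} GiB total")

# fp16: ~13.0 GiB weights -> tight on a 16 GB card
# int8: ~6.5 GiB weights  -> comfortable
# int4: ~3.3 GiB weights  -> leaves room for larger context windows
```

In practice, quantized inference (int8/int4) is therefore the usual choice on GPUs of this class.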