devid.info News

Nvidia Introduces Pioneering Cloud GPUs in the VGX Series

Not long ago Nvidia introduced its VGX platform, which aims to bring secure GPU virtualization to any PC system. That was not the manufacturer's only surprise: yesterday it announced a new cloud GPU solution, the first of its kind in the graphics industry. The new GPUs are built on the Kepler architecture and are targeted mainly at GPU-accelerated VDI deployments. The launch of these cloud solutions should make GPU virtualization practical in a wide range of environments.


The new series of graphics processing units features a dedicated hardware unit known as the VGX MMU (memory management unit). This technology allows the Nvidia VGX hypervisor to share a single GPU among several clients at once while maintaining consistent performance, broad application compatibility, low-latency remote display, and stable operation. In addition, the Kepler architecture includes a high-performance H.264 encoder capable of encoding up to four streams simultaneously, delivering high image quality at 720p resolution and 30 frames per second.
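To put the encoder figures in perspective, the short sketch below works out the aggregate pixel throughput implied by four simultaneous 720p streams at 30 frames per second. It only rearranges the numbers quoted above; the 1280x720 frame size is an assumed interpretation of "720p", and nothing here comes from an Nvidia API.

    # Aggregate encode load implied by the article's figures: four
    # simultaneous H.264 streams at 720p (assumed 1280x720) and 30 fps.
    WIDTH, HEIGHT = 1280, 720   # assumed frame size for "720p"
    FPS = 30                    # frames per second, per the article
    STREAMS = 4                 # simultaneous streams, per the article

    pixels_per_stream = WIDTH * HEIGHT * FPS    # pixels encoded per second, one stream
    pixels_total = pixels_per_stream * STREAMS  # pixels encoded per second, all streams

    print(f"Per stream:   {pixels_per_stream / 1e6:.1f} Mpixels/s")  # 27.6 Mpixels/s
    print(f"Four streams: {pixels_total / 1e6:.1f} Mpixels/s")       # 110.6 Mpixels/s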

The VGX series will comprise two models: the Nvidia VGX K1 and the Nvidia VGX K2. Both are dual-slot boards designed for the PCI-e 3.0 x16 bus and equipped with passive cooling. The first card, the VGX K1, houses four chips with 768 CUDA cores in total and carries 16 GB of DDR3 memory. Its power consumption does not exceed 130 W, and it receives auxiliary power through a 6-pin connector.

The second card, the VGX K2, is based on two chips with 3072 CUDA cores in total. It accommodates up to 8 GB of GDDR5 memory and has a 225 W TDP. Additional power is delivered from the PSU through an 8-pin connector.
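For quick reference, the sketch below collects the figures quoted for both cards into a small Python structure and derives a per-chip core count by dividing each stated total by the number of chips. It is only a restatement of the specifications above, not additional data from Nvidia.

    # Published figures for the two VGX cards, as quoted in the article.
    VGX_CARDS = {
        "VGX K1": {"chips": 4, "cuda_cores_total": 768,
                   "memory": "16 GB DDR3", "power_w": 130, "aux_power": "6-pin"},
        "VGX K2": {"chips": 2, "cuda_cores_total": 3072,
                   "memory": "8 GB GDDR5", "power_w": 225, "aux_power": "8-pin"},
    }

    for name, spec in VGX_CARDS.items():
        per_chip = spec["cuda_cores_total"] // spec["chips"]  # derived, not quoted
        print(f"{name}: {spec['chips']} chips x {per_chip} CUDA cores, "
              f"{spec['memory']}, {spec['power_w']} W, {spec['aux_power']} aux power")
    # VGX K1: 4 chips x 192 CUDA cores, 16 GB DDR3, 130 W, 6-pin aux power
    # VGX K2: 2 chips x 1536 CUDA cores, 8 GB GDDR5, 225 W, 8-pin aux power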
18 October 2012, 13:41