Web-Services Giants Use Artificial Intelligence to Make Smarter Applications, Driving Explosive Growth in Machine Learning Workloads
Santa Clara, CA, USA, 13th November 2015. NVIDIA today announced an end-to-end hyperscale data center platform that lets web-services companies accelerate their huge machine learning workloads.
The NVIDIA hyperscale accelerator line consists of two accelerators. One lets researchers more quickly innovate and design new deep neural networks for each of the increasing number of applications they want to power with artificial intelligence (AI). Another is a low-power accelerator designed to deploy these networks across the data center. The line also includes a suite of GPU-accelerated libraries.
Together, they enable developers to use the powerful Tesla Accelerated Computing Platform to drive machine learning in hyperscale data centers and create unprecedented AI-based applications.
"The artificial intelligence race is on," said Jen-Hsun Huang, co-founder and CEO of NVIDIA. "Machine learning is unquestionably one of the most important developments in computing today, on the scale of the PC, the internet and cloud computing. Industries ranging from consumer cloud services to automotive and health care are being revolutionized as we speak.
"Machine learning is the grand computational challenge of our generation. We created the Tesla hyperscale accelerator line to give machine learning a 10X boost. The time and cost savings to data centers will be significant," he said.
These new hardware and software products are designed specifically to accelerate the flood of web applications that are racing to incorporate AI capabilities. Ground-breaking advances in machine learning have made it possible to use AI techniques to create smarter applications and services.
Machine learning is being used to make voice recognition more accurate. It enables automatic object and scene recognition in video or photos with the ability to tag for later search. It makes possible facial recognition in videos or photos, even when the face is partially obscured. And it powers services that are aware of individual tastes and interests, which can organize schedules, deliver relevant news stories and respond to voice commands accurately and in a conversational tone.
All of this is made possible by machine learning. The challenge is twofold: obtaining the daunting amount of supercomputing power needed to innovate and train the growing number of deep neural networks, and providing the processing needed to instantly respond to the billions of queries from consumers using these services. The NVIDIA hyperscale accelerator line was created to accelerate these workloads and dramatically increase the throughput of data centers.
These new additions to the NVIDIA Tesla platform include:
- NVIDIA® Tesla® M40 GPU - the most powerful accelerator designed for training neural networks
- NVIDIA Tesla M4 GPU - a low-power, small form-factor accelerator for machine learning inference, as well as streaming image and video processing
- NVIDIA Hyperscale Suite - a rich suite of software optimized for machine learning and video processing
NVIDIA Tesla M40 GPU Accelerator
The NVIDIA Tesla M40 GPU accelerator allows data scientists to save days, even weeks, of time while training their deep neural networks against massive amounts of data for higher overall accuracy. Key features include:
- Optimized for Machine Learning - Reduces training time by 8X compared with CPUs (just over a day vs. 10 days for a typical AlexNet training run).
- Built for 24/7 reliability - Designed and tested for high reliability in data center environments.
- Scale-out performance - Supports NVIDIA GPUDirect, allowing fast multi-node neural network training.
NVIDIA Tesla M4 GPU Accelerator
The NVIDIA Tesla M4 accelerator is a low-power GPU purpose-built for hyperscale environments and optimized for demanding, high-growth web services applications, including video transcoding, image and video processing, and machine learning inference. Key features include:
- Higher throughput - Transcodes, enhances and analyzes up to 5X more simultaneous streams compared with CPUs.
- Low power consumption - With a user-selectable power profile, the Tesla M4 draws 50-75 watts of power and delivers up to 10X better energy efficiency than a CPU for video processing and machine learning algorithms.
- Video processing - Harnesses the widely used FFmpeg software to accelerate video transcoding and video processing.
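As a rough sketch of how a hyperscale service might harness FFmpeg for GPU transcoding, the snippet below assembles a command line using FFmpeg's `h264_nvenc` hardware encoder. This assumes an FFmpeg build with NVENC support; the file names and the 720p target are placeholders.

```python
import os
import shutil
import subprocess

def build_transcode_cmd(src: str, dst: str, height: int = 720) -> list:
    """Assemble an FFmpeg command that resizes a video and encodes it
    with NVIDIA's NVENC hardware H.264 encoder (h264_nvenc)."""
    return [
        "ffmpeg", "-y",
        "-i", src,                    # input file
        "-vf", f"scale=-2:{height}",  # resize, preserving aspect ratio
        "-c:v", "h264_nvenc",         # GPU H.264 encoder (NVENC builds only)
        "-c:a", "copy",               # pass the audio stream through untouched
        dst,
    ]

cmd = build_transcode_cmd("input.mp4", "output_720p.mp4")
# Only run where FFmpeg and the input file are actually present.
if shutil.which("ffmpeg") and os.path.exists("input.mp4"):
    subprocess.run(cmd, check=True)
```

Offloading the encode to NVENC frees CPU cores to feed many such streams in parallel, which is where the "up to 5X more simultaneous streams" claim comes from.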
NVIDIA Hyperscale Suite
The new NVIDIA Hyperscale Suite includes tools for both developers and data center managers, specifically designed for web services deployments, including:
- cuDNN - the industry's most popular software library for processing deep neural networks used in AI applications.
- GPU-accelerated FFmpeg multimedia software - Accelerates the transcoding and processing of video and multimedia streams.
- NVIDIA GPU REST Engine - Enables the easy creation and deployment of low-latency accelerated web services spanning dynamic image resizing, search acceleration, image classification and other tasks.
- NVIDIA Image Compute Engine - GPU-accelerated service with a REST API that resizes images up to 5X faster than a CPU.
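To illustrate the kind of low-latency REST interaction these services enable, the snippet below builds a request URL for a hypothetical image-resize endpoint. The host, the `/v1/resize` route and its query parameters are illustrative assumptions, not the actual GPU REST Engine or Image Compute Engine API.

```python
from urllib.parse import urlencode

def resize_request_url(base: str, image_id: str, width: int, height: int) -> str:
    """Build a GET URL for a hypothetical GPU-backed resize endpoint.
    The route and parameter names here are illustrative only."""
    query = urlencode({"id": image_id, "w": width, "h": height})
    return f"{base}/v1/resize?{query}"

url = resize_request_url("http://gpu-rest.example.com", "cat.jpg", 640, 480)
# Fetching the resized image would then be a plain HTTP GET, e.g.
# urllib.request.urlopen(url).read()
```

The point of exposing GPU work behind a stateless HTTP interface is that web front ends can call it like any other microservice, while the engine batches requests on the GPU for throughput.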
In the latest showing of industry support for the Tesla Accelerated Computing Platform, Mesosphere announced that it is collaborating with NVIDIA to add support for GPU technology to Apache Mesos and the Mesosphere Datacenter Operating System (DCOS). The move will make it easier for web-services companies to build and deploy accelerated data centers for their next-generation applications.
Bikal, an NVIDIA technology user, applies GPU acceleration to video and data processing. Offloading video analytics to the GPU reduces the load on the CPU and RAM, which lowers power consumption and increases performance. The company's big data analytics are GPU-compatible, so its algorithms can benefit from deep learning and detect anomalies for better decision making. Bikal recently used IBM's POWER8 platform to make these analytics deployable on its existing infrastructure.
For further information and advice please contact us on firstname.lastname@example.org