NVIDIA A100 Deep Learning Benchmarks for TensorFlow | Exxact Blog
Titan V Deep Learning Benchmarks with TensorFlow
Turing FP16 Discussion : r/nvidia
Caffe2 adds 16 bit floating point training support on the NVIDIA Volta platform | Caffe2
Revisiting Volta: How to Accelerate Deep Learning - The NVIDIA Titan V Deep Learning Deep Dive: It's All About The Tensor Cores
Train With Mixed Precision :: NVIDIA Deep Learning Performance Documentation
Testing AMD Radeon VII Double-Precision Scientific And Financial Performance – Techgage
NVIDIA A100 | AI and High Performance Computing - Leadtek
Mixed-Precision Programming with CUDA 8 | NVIDIA Technical Blog
FP16 Throughput on GP104: Good for Compatibility (and Not Much Else) - The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation
AMD FidelityFX Super Resolution FP32 fallback tested, native FP16 is 7% faster - VideoCardz.com
Choose FP16, FP32 or int8 for Deep Learning Models
Nvidia Unveils Pascal Tesla P100 With Over 20 TFLOPS Of FP16 Performance - Powered By GP100 GPU With 15 Billion Transistors & 16GB Of HBM2
Mysterious "GPU-N" in research paper could be GH100 NVIDIA Hopper GPU with 100GB of HBM2 VRAM, 8576 CUDA Cores, and 779 TFLOPs of FP16 compute - NotebookCheck.net News