semiconductors/gpus

chipsandcheese.com   (2025-03-15)

Hello you fine Internet folks,

open.substack.com   (2025-01-28)

Getting 'low level' with Nvidia and AMD GPUs

wccftech.com   (2024-11-23)

Intel's first Arc B580 GPUs based on the Xe2 "Battlemage" architecture have been leaked & they look quite compelling.

www.theregister.com   (2024-04-17)

Datacenter GPUs and some consumer cards now exceed performance limits

www.theregister.com   (2024-04-17)

Beijing will be thrilled by this nerfed silicon

open.substack.com   (2024-04-12)

GPT-4 Profitability, Cost, Inference Simulator, Parallelism Explained, Performance TCO Modeling In Large & Small Model Inference and Training

www.nextplatform.com   (2024-04-05)

While a lot of people focus on the floating point and integer processing architectures of various kinds of compute engines, we are spending more and more

wccftech.com   (2024-03-29)

Lenovo, the firm emerging as a driving force behind AI computing, has expressed tremendous optimism about AMD's Instinct MI300X accelerator.

www.nytimes.com   (2024-02-07)

Chafing at their dependence, Amazon, Google, Meta and Microsoft are racing to cut into Nvidia’s dominant share of the market.

chipsandcheese.com   (2023-07-28)

AMD, Nvidia, and Intel have all diverged their GPU architectures to separately optimize for compute and graphics.

www.semianalysis.com   (2023-07-09)

Quarterly Ramp for Nvidia, Broadcom, Google, AMD, AMD Embedded (Xilinx), Amazon, Marvell, Microsoft, Alchip, Alibaba T-Head, ZTE Sanechips, Samsung, Micron, and SK Hynix

www.tomshardware.com   (2023-06-30)

GDDR7 is getting closer, says Micron.

www.extremetech.com   (2023-06-30)

Though it'll arrive just in time for a mid-cycle refresh from AMD, Nvidia, and Intel, it's unclear if there will be any takers just yet.

www.semianalysis.com   (2023-06-22)

Micron $MU looks very weak in AI

www.nextplatform.com   (2023-06-14)

The great thing about the Cambrian explosion in compute that has been forced by the end of Dennard scaling of clock frequencies and Moore’s Law lowering

spectrum.ieee.org   (2023-06-02)

GPUs may dominate, but CPUs could be perfect for smaller AI models

venturebeat.com   (2023-05-12)

Google's new machines combine Nvidia H100 GPUs with Google’s high-speed interconnections for AI tasks like training very large language models.

www.tomshardware.com   (2023-03-21)

Faster masks, less power.

www.cnbc.com   (2023-02-25)

The $10,000 Nvidia A100 has become one of the most critical tools in the artificial intelligence industry.

timdettmers.com   (2023-01-20)

Here, I provide an in-depth analysis of GPUs for deep learning/machine learning and explain which GPU is best for your use case and budget.

www.nextplatform.com   (2022-01-06)

There are two types of packaging that represent the future of computing, and both will have validity in certain domains: Wafer scale integration and

www.nextplatform.com   (2021-12-08)

Nvidia has staked its growth in the datacenter on machine learning. Over the past few years, the company has rolled out features in its GPUs aimed at neural

github.com   (2021-12-07)

Fast parallel CTC.

www.nextplatform.com   (2021-12-07)

The modern GPU compute engine is a microcosm of the high performance computing datacenter at large. At every level of HPC – across systems in the

www.nextplatform.com   (2021-06-26)

Like its U.S. counterpart, Google, Baidu has made significant investments to build robust, large-scale systems to support global advertising programs. As

www.eetimes.com   (2021-06-26)

Its second analog AI chip is optimized for different card sizes, but still aimed at computer vision workloads at the edge.

www.nextplatform.com   (2021-06-24)

Current custom AI hardware devices are built around super-efficient, high performance matrix multiplication. This category of accelerators includes the

tedium.co   (2021-03-30)

What makes a GPU a GPU, and when did we start calling it that? Turns out that’s a more complicated question than it sounds.

www.techspot.com   (2021-03-19)

AMD is one of the oldest designers of large scale microprocessors and has been the subject of polarizing debate among technology enthusiasts for nearly 50 years. Its...

www.nextplatform.com   (2021-03-18)

One of the main tenets of the hyperscalers and cloud builders is that they buy what they can and they only build what they must. And if they are building

rocmdocs.amd.com   (2021-03-15)

AMD ROCm documentation

hardware.slashdot.org   (2021-01-04)

Long-time Slashdot reader UnknowingFool writes: AMD filed a patent on using chiplets for a GPU, with hints on why it has waited this long to extend its CPU strategy to GPUs. The latency between chiplets poses more of a performance problem for GPUs, and AMD is attempting to solve the problem with a ...

venturebeat.com   (2020-09-16)

Micron's GDDR6X is one of the star components in Nvidia's RTX 3070, 3080, and 3090 video cards. It's so fast it should boost gaming past the 4K barrier.

www.nextplatform.com   (2020-06-01)

When you have 54.2 billion transistors to play with, you can pack a lot of different functionality into a computing device, and this is precisely what

www.pyimagesearch.com   (2020-03-11)

In this tutorial, you will learn how to get started with your NVIDIA Jetson Nano, including installing Keras + TensorFlow, accessing the camera, and performing image classification and object detection.

www.reddit.com   (2019-12-23)

This post has been split into a two-part series to work around Reddit’s per-post character limit. Please find Part 2 in the…

semiengineering.com   (2016-10-12)

Making Waves in Deep Learning: How deep learning applications will map onto a chip.

www.nextplatform.com   (2016-10-10)

A new crop of applications is driving the market along some unexpected routes, in some cases bypassing the processor as the landmark for performance and