semiconductors/gpus

www.linkedin.com   (2025-03-26)

The Future of AI Accelerators: A Roadmap of Industry Leaders

The AI hardware race is heating up, with major players like NVIDIA, AMD, Intel, Google, Amazon, and more unveiling their upcoming AI accelerators. Here’s a quick breakdown of the latest trends:

Key Takeaways:
- NVIDIA Dominance: NVIDIA continues to lead with a robust roadmap, extending from H100 to future Rubin and Rubin Ultra chips with HBM4 memory by 2026-2027.
- AMD’s Competitive Push: AMD’s MI300 series is already competing, with MI350 and future MI400 models on the horizon.
- Intel’s AI Ambitions: Gaudi accelerators are growing, with Falcon Shores on track for a major memory upgrade.
- Google & Amazon’s Custom Chips: Google’s TPU lineup expands rapidly, while Amazon’s Trainium & Inferentia gain traction.
- Microsoft & Meta’s AI Expansion: Both companies are pushing their AI chip strategies with Maia and MTIA projects, respectively.
- Broadcom & ByteDance Join the Race: New challengers are emerging, signaling increased competition in AI hardware.

What This Means: With the growing demand for AI and LLMs, companies are racing to deliver high-performance AI accelerators with advanced HBM (High Bandwidth Memory) configurations. The next few years will be crucial in shaping the AI infrastructure landscape.

$NVDA $AMD $INTC $GOOGL $AMZN $META $AVGO $ASML $BESI

chipsandcheese.com   (2025-03-15)

Hello you fine Internet folks,

open.substack.com   (2025-01-28)

Getting 'low level' with Nvidia and AMD GPUs

open.substack.com   (2024-04-12)

GPT-4 Profitability, Cost, Inference Simulator, Parallelism Explained, Performance TCO Modeling In Large & Small Model Inference and Training

www.nextplatform.com   (2024-04-05)

While a lot of people focus on the floating point and integer processing architectures of various kinds of compute engines, we are spending more and more

www.nytimes.com   (2024-02-07)

Chafing at their dependence, Amazon, Google, Meta and Microsoft are racing to cut into Nvidia’s dominant share of the market.

chipsandcheese.com   (2023-07-28)

AMD, Nvidia, and Intel have all diverged their GPU architectures to separately optimize for compute and graphics.

www.extremetech.com   (2023-06-30)

Though it'll arrive just in time for mid-cycle refresh from AMD, Nvidia, and Intel, it's unclear if there will be any takers just yet.

www.nextplatform.com   (2023-06-14)

The great thing about the Cambrian explosion in compute that has been forced by the end of Dennard scaling of clock frequencies and Moore’s Law lowering

www.cnbc.com   (2023-02-25)

The $10,000 Nvidia A100 has become one of the most critical tools in the artificial intelligence industry.

www.nextplatform.com   (2022-01-06)

There are two types of packaging that represent the future of computing, and both will have validity in certain domains: Wafer scale integration and

www.nextplatform.com   (2021-12-08)

Nvidia has staked its growth in the datacenter on machine learning. Over the past few years, the company has rolled out features in its GPUs aimed at neural

github.com   (2021-12-07)

Fast parallel CTC.
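The repo linked above (likely Baidu's warp-ctc, whose description is exactly "Fast parallel CTC") implements the CTC loss with GPU parallelism. For orientation, the underlying forward algorithm is a small dynamic program over a blank-extended label sequence; a minimal, unbatched pure-Python sketch in log space (blank at index 0 is an assumption, not something the repo description states):

```python
import math

BLANK = 0  # assumed blank index

def ctc_log_likelihood(log_probs, labels):
    """CTC forward pass: log P(labels | frames), where log_probs[t][k] is the
    per-frame log-probability of symbol k (alphabet includes the blank)."""
    # Extend labels with blanks: ^ a ^ b ^ ... (allows optional blanks between labels)
    ext = [BLANK]
    for l in labels:
        ext += [l, BLANK]
    T, S = len(log_probs), len(ext)
    NEG_INF = float("-inf")

    def logaddexp(a, b):
        if a == NEG_INF:
            return b
        if b == NEG_INF:
            return a
        m = max(a, b)
        return m + math.log(math.exp(a - m) + math.exp(b - m))

    # alpha[t][s]: log-probability of emitting the first s+1 extended symbols
    # using the first t+1 frames.
    alpha = [[NEG_INF] * S for _ in range(T)]
    alpha[0][0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[0][1] = log_probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]                      # stay on the same symbol
            if s > 0:
                a = logaddexp(a, alpha[t - 1][s - 1])  # advance one symbol
            # Skip over a blank only when the neighbouring labels differ
            if s > 1 and ext[s] != BLANK and ext[s] != ext[s - 2]:
                a = logaddexp(a, alpha[t - 1][s - 2])
            alpha[t][s] = a + log_probs[t][ext[s]]
    # Valid paths end on the last label or the trailing blank
    total = alpha[T - 1][S - 1]
    if S > 1:
        total = logaddexp(total, alpha[T - 1][S - 2])
    return total
```

The parallel implementations exploit that each time step's alpha row depends only on the previous row, so all S cells of a row can be computed concurrently on a GPU.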

www.nextplatform.com   (2021-12-07)

The modern GPU compute engine is a microcosm of the high performance computing datacenter at large. At every level of HPC – across systems in the

www.nextplatform.com   (2021-06-26)

Like its U.S. counterpart, Google, Baidu has made significant investments to build robust, large-scale systems to support global advertising programs. As

www.eetimes.com   (2021-06-26)

Its second analog AI chip is optimized for different card sizes, but still aimed at computer vision workloads at the edge.

www.nextplatform.com   (2021-06-24)

Current custom AI hardware devices are built around super-efficient, high performance matrix multiplication. This category of accelerators includes the
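The kernel these accelerators are built around is worth making concrete: an m×k by k×n dense matrix multiply performs about 2·m·n·k floating-point operations on only m·k + k·n + m·n values, and that high arithmetic-per-byte ratio is what systolic arrays and tensor cores exploit. A naive illustrative sketch (not how any accelerator implements it):

```python
def matmul(A, B):
    """Naive dense matrix multiply: C[i][j] = sum over p of A[i][p] * B[p][j].
    The triple loop does ~2*m*n*k flops on m*k + k*n inputs; reuse of each
    input element across a whole row/column is what matrix engines exploit."""
    m, k = len(A), len(A[0])
    k2, n = len(B), len(B[0])
    assert k == k2, "inner dimensions must match"
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for p in range(k):
            a = A[i][p]  # hoisted: reused across the whole output row
            for j in range(n):
                C[i][j] += a * B[p][j]
    return C
```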

www.nextplatform.com   (2021-03-18)

One of the main tenets of the hyperscalers and cloud builders is that they buy what they can and they only build what they must. And if they are building

hardware.slashdot.org   (2021-01-04)

Long-time Slashdot reader UnknowingFool writes: AMD filed a patent on using chiplets for a GPU with hints on why it has waited this long to extend their CPU strategy to GPUs. The latency between chiplets poses more of a performance problem for GPUs, and AMD is attempting to solve the problem with a ...

www.nextplatform.com   (2020-06-01)

When you have 54.2 billion transistors to play with, you can pack a lot of different functionality into a computing device, and this is precisely what

www.pyimagesearch.com   (2020-03-11)

In this tutorial, you will learn how to get started with your NVIDIA Jetson Nano, including installing Keras + TensorFlow, accessing the camera, and performing image classification and object detection.

www.nextplatform.com   (2016-10-10)

A new crop of applications is driving the market along some unexpected routes, in some cases bypassing the processor as the landmark for performance and