Google's First Processor Tensor: Everything You Need To Know


Google announced on Monday that it would launch its latest Pixel smartphones in the fall with one significant innovation: the new lineup will be powered by a chip developed in-house for the first time.

Google has developed a custom-built System on a Chip (SoC), Tensor, to power its Pixel phones, CEO Sundar Pichai tweeted:

“So excited to share our new custom Google Tensor chip, which has been 4 yrs in the making ( for scale)! Tensor builds off of our two decades of computing experience, and it’s our biggest innovation in Pixel to date. Will be on Pixel 6 + Pixel 6 Pro in fall,”

Following Apple (M1), Huawei (Kirin), and Samsung (Exynos), Google has now joined the in-house SoC club. The advantages of Tensor include:


More Computing Power

Google’s Pixel phones use computational photography and ML to capture images (Night Sight, for example). The tech giant has also introduced powerful speech recognition models for its devices. These features require high computational power and low latency for the best performance. Tensor can bring complex AI innovations to Pixel smartphones.

Unlocking New AI Features

Robust processors are a prerequisite for running heavy AI workloads, and Tensor chips give Google the freedom to bring in new ML-based features without worrying about performance.


More Layers To Hardware Security

Tensor’s new security core and Titan M2 will work as an added layer of protection. Titan M from Google is a custom-built chip that protects sensitive data such as passcodes, enables encryption, and secures transactions in apps.

The company has not provided technical specifications for its new processor but said the Tensor chip could help bring “entirely new features, plus improvements to existing ones” to Pixel users.

Unlike Apple’s iPhones, which are powered by its A-series chips, most Android phones worldwide run on a mix of processors, mainly from Qualcomm and MediaTek; Samsung uses in-house chips for some of its Android phones.


Samsung Will Be Manufacturing Tensor Chip For Google

Google did not disclose who will manufacture the Tensor chip for Pixel, but sources familiar with the matter say that Samsung will handle production using its advanced 5-nanometer process technology. Samsung declined to comment, but the company said last week that it plans to accelerate its foundry business this year, focusing on 5- and 4-nanometer processes. Samsung already uses its 5-nanometer tech to make Qualcomm’s Snapdragon 888 mobile chipset, used by Samsung and China’s Xiaomi.

The processor is crucial to a phone’s performance and battery life. Despite owning the Android OS, Google has been unable to put a dent in the smartphone market. With the all-new Tensor chips, the Mountain View giant is looking to revitalize its smartphone segment. Of late, Google has delivered a string of innovations in artificial intelligence and machine learning.


LaMDA From Google

Like many current language models, such as BERT and GPT-3, the Language Model for Dialogue Applications (LaMDA) from Google is built on Transformer, a neural network architecture open-sourced by Google Research in 2017. Right now, it is trained on text, but it could have future applications in conversational AI, Google Maps, and more.
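At the core of the Transformer architecture that LaMDA, BERT, and GPT-3 share is scaled dot-product attention. A minimal NumPy sketch of that single operation (the shapes and values here are illustrative toys, not anything Google has published):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q @ K^T / sqrt(d_k)) @ V -- the core Transformer op."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity between queries and keys
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of value rows

# Toy example: 3 tokens, each with a 4-dimensional embedding
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Real models stack many of these attention layers with learned projections for Q, K, and V; the sketch only shows the arithmetic at the center.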

AI In Google Maps

Google Maps is getting two new AI features: eco-friendly routes, which suggest fuel-efficient routes to users, and safer routing, which factors in real-time weather and traffic conditions.
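The idea behind eco-friendly routing is a trade-off: prefer the lowest-fuel route when it costs only a small amount of extra time. Google has not published its routing model, so the following is a purely illustrative sketch with made-up route data:

```python
# Hypothetical route candidates with estimated travel time and fuel use.
routes = [
    {"name": "fastest", "minutes": 30, "fuel_liters": 3.2},
    {"name": "eco",     "minutes": 33, "fuel_liters": 2.5},
]

def pick_eco_friendly(routes, max_extra_minutes=5):
    """Prefer the lowest-fuel route if it costs at most a few extra minutes."""
    fastest = min(routes, key=lambda r: r["minutes"])
    eco = min(routes, key=lambda r: r["fuel_liters"])
    if eco["minutes"] - fastest["minutes"] <= max_extra_minutes:
        return eco
    return fastest

print(pick_eco_friendly(routes)["name"])  # eco
```

The `max_extra_minutes` threshold is an invented knob; the real system presumably weighs fuel, time, and traffic in a far richer model.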

Vertex AI

Vertex AI is a managed ML platform for deploying and maintaining AI models. The platform allows users to design, deploy, and scale machine learning models more quickly, using pre-trained and custom tooling within a unified AI platform. Moreover, it integrates easily with popular open-source frameworks, including TensorFlow, scikit-learn, and PyTorch.

Little Patterns

The tech giant introduced a new feature in Google Photos that employs machine learning to translate photos into numbers, which it then compares for visual and conceptual similarity.
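Translating photos into numbers amounts to embedding each image as a feature vector, then comparing vectors with a similarity measure. A hedged sketch using cosine similarity, with made-up embedding vectors (Google has not published the actual model or its dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 8-dimensional embeddings produced by some image model
photo_a = [0.9, 0.1, 0.0, 0.4, 0.2, 0.7, 0.1, 0.3]
photo_b = [0.8, 0.2, 0.1, 0.5, 0.1, 0.6, 0.0, 0.4]  # visually similar scene
photo_c = [0.0, 0.9, 0.8, 0.0, 0.7, 0.0, 0.9, 0.1]  # unrelated scene

print(cosine_similarity(photo_a, photo_b) > cosine_similarity(photo_a, photo_c))  # True
```

Photos whose embeddings point in similar directions score close to 1.0 and can be grouped into a pattern; unrelated photos score much lower.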



MUM From Google

The Multitask Unified Model (MUM) is a new AI algorithm built on a Transformer architecture and trained across 75 different languages. MUM can understand information across text and images, and could expand to audio and video in the future.
