    Google Launches TensorFlow 2.21 And LiteRT: Faster GPU Performance, New NPU Acceleration, And Seamless PyTorch Edge Deployment Upgrades
    AI News

March 7, 2026 · 4 Mins Read

    Google has officially released TensorFlow 2.21. The most significant update in this release is the graduation of LiteRT from its preview stage to a fully production-ready stack. Moving forward, LiteRT serves as the universal on-device inference framework, officially replacing TensorFlow Lite (TFLite).

    This update streamlines the deployment of machine learning models to mobile and edge devices while expanding hardware and framework compatibility.

    LiteRT: Performance and Hardware Acceleration

When deploying models to edge devices (like smartphones or IoT hardware), inference speed and battery efficiency are the primary constraints. LiteRT addresses both with updated hardware acceleration:

    • GPU Improvements: LiteRT delivers 1.4x faster GPU performance compared to the previous TFLite framework.
    • NPU Integration: The release introduces state-of-the-art NPU acceleration with a unified, streamlined workflow for both GPU and NPU across edge platforms.

    This infrastructure is specifically designed to support cross-platform GenAI deployment for open models like Gemma.


    Lower Precision Operations (Quantization)

    To run complex models on devices with limited memory, developers use a technique called quantization. This involves lowering the precision—the number of bits—used to store a neural network’s weights and activations.

    TensorFlow 2.21 significantly expands the tf.lite operators’ support for lower-precision data types to improve efficiency:

    • The SQRT operator now supports int8 and int16x8.
    • Comparison operators now support int16x8.
    • tfl.cast now supports conversions involving int2 and int4.
    • tfl.slice has added support for int4.
    • tfl.fully_connected now includes support for int2.
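The arithmetic behind this technique can be sketched in a few lines. The following is a minimal, self-contained illustration of symmetric int8 post-training quantization; the helper names are illustrative and are not part of the TensorFlow or LiteRT API.

```python
def quantize_int8(weights):
    """Map float weights onto the signed 8-bit range [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float values from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.82, -1.27, 0.05, 0.4]
codes, scale = quantize_int8(weights)   # codes: [82, -127, 5, 40]
approx = dequantize(codes, scale)       # close to the original weights

# Each int8 code needs 1 byte instead of the 4 bytes of a float32 weight,
# a 4x memory saving; int4 and int2 push the same idea further at the
# cost of additional precision loss.
```

Lower-precision types like int4 and int2 apply the same scale-and-round idea with a narrower code range, which is why operator-level support (cast, slice, fully_connected) matters for running large models in limited memory.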

    Expanded Framework Support

    Historically, converting models from different training frameworks into a mobile-friendly format could be difficult. LiteRT simplifies this by offering first-class PyTorch and JAX support via seamless model conversion.

    Developers can now train their models in PyTorch or JAX and convert them directly for on-device deployment without needing to rewrite the architecture in TensorFlow first.

    Maintenance, Security, and Ecosystem Focus

Google is shifting its TensorFlow Core resources toward long-term stability. The development team will now focus exclusively on:

  • Security and bug fixes: Quickly addressing security vulnerabilities and critical bugs by releasing minor and patch versions as required.
  • Dependency updates: Releasing minor versions to support updates to underlying dependencies, including new Python releases.
  • Community contributions: Continuing to review and accept critical bug fixes from the open-source community.

  These commitments apply to the broader enterprise ecosystem, including TF.data, TensorFlow Serving, TFX, TensorFlow Data Validation, TensorFlow Transform, TensorFlow Model Analysis, TensorFlow Recommenders, TensorFlow Text, TensorBoard, and TensorFlow Quantum.

    Key Takeaways

    • LiteRT Officially Replaces TFLite: LiteRT has graduated from preview to full production, officially becoming Google’s primary on-device inference framework for deploying machine learning models to mobile and edge environments.
    • Major GPU and NPU Acceleration: The updated runtime delivers 1.4x faster GPU performance compared to TFLite and introduces a unified workflow for NPU (Neural Processing Unit) acceleration, making it easier to run heavy GenAI workloads (like Gemma) on specialized edge hardware.
    • Aggressive Model Quantization (int4/int2): To maximize memory efficiency on edge devices, tf.lite operators have expanded support for very low-precision data types, including int8/int16x8 for SQRT and comparison operations, alongside int4 and int2 support for the cast, slice, and fully_connected operators.
    • Seamless PyTorch and JAX Interoperability: Developers are no longer locked into training with TensorFlow for edge deployment. LiteRT now provides first-class, native model conversion for both PyTorch and JAX, streamlining the pipeline from research to production.


    Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.


