In a move that could democratize AI development, Tether—best known for the USDT stablecoin—has released a software framework designed to train sophisticated language models on everyday devices. Announced this week, the technology allows engineers to fine-tune models with up to a billion parameters directly on a smartphone, a process the company says can take under two hours.
The system, part of Tether's QVAC platform, leverages Microsoft's BitNet architecture and LoRA techniques to drastically cut memory and processing demands. According to Tether, this reduces VRAM requirements by nearly 78% compared to standard models, enabling larger AI models to run on consumer-grade chips from AMD, Intel, Apple, and Qualcomm. The framework also supports models as large as 13 billion parameters on mobile hardware.
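Tether has not published implementation details, but the two techniques named above are well documented elsewhere. A minimal NumPy sketch can show where the memory savings come from: BitNet-style quantization stores weights as ternary values {-1, 0, +1} plus a single scale, and LoRA trains two small low-rank matrices instead of the full weight matrix. All names, shapes, and the rank value here are illustrative assumptions, not Tether's code.

```python
import numpy as np

def ternary_quantize(W):
    """BitNet b1.58-style quantization (illustrative): weights collapse to
    {-1, 0, +1} int8 values plus one floating-point scale per tensor."""
    scale = np.mean(np.abs(W)) + 1e-8  # absmean scaling
    Wq = np.clip(np.round(W / scale), -1, 1)
    return Wq.astype(np.int8), scale

def lora_trainable_params(d_in, d_out, rank):
    """LoRA trains two small matrices, A (rank x d_in) and B (d_out x rank),
    instead of updating the full d_out x d_in weight matrix."""
    return rank * d_in + d_out * rank

rng = np.random.default_rng(0)
W = rng.normal(size=(4096, 4096)).astype(np.float32)

Wq, scale = ternary_quantize(W)
assert set(np.unique(Wq)).issubset({-1, 0, 1})  # only three weight values

full = W.size                                    # full fine-tuning
lora = lora_trainable_params(4096, 4096, rank=8) # adapter-only training
print(f"trainable params: full={full:,}, LoRA r=8: {lora:,} "
      f"({100 * lora / full:.2f}% of full)")
```

For this hypothetical 4096x4096 layer, the LoRA adapter trains well under 1% of the parameters the full matrix would require, which is the kind of reduction that makes on-phone fine-tuning plausible.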
The performance gains extend beyond training to inference: Tether reports that mobile GPUs execute BitNet models several times faster than standard CPUs. This opens the door to on-device learning and federated learning, in which devices collaboratively improve a shared AI model without sending private data to the cloud.
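The federated learning idea can be sketched with the classic federated averaging (FedAvg) pattern, which the article does not attribute to Tether specifically; this is a generic illustration. Each device takes a gradient step on its own private data, and only the resulting weights, never the raw data, are averaged by a coordinator. The model, data, and learning rate here are toy assumptions.

```python
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """One gradient-descent step on a device's private least-squares data.
    The raw data (X, y) never leaves the device."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(updates, sizes):
    """Coordinator aggregates client weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])          # shared signal all devices observe
clients = []
for _ in range(3):                      # three simulated devices
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(200):                    # communication rounds
    updates = [local_update(w, data) for data in clients]
    w = federated_average(updates, [len(d[1]) for d in clients])

print(np.round(w, 2))                   # approaches the shared weights
```

The key property is that the aggregation step sees only model parameters, so each device's dataset stays local, which is the privacy argument behind federated learning on consumer hardware.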
Tether's push into AI infrastructure reflects a broader convergence of crypto and machine learning sectors. This year, firms like HIVE Digital Technologies and Core Scientific have reported significant growth and financing tied to AI and high-performance computing. Simultaneously, the rise of autonomous AI agents within crypto is accelerating, with recent launches from Coinbase, Alchemy, and World enabling these programs to verify identity and execute blockchain transactions. The industry's compute resources are increasingly being redirected from traditional mining to power this new wave of intelligent applications.
Source: CoinTelegraph
