Tracking the Body, Tracking the Law: How the U.S., EU, and APAC Govern Wearable AI

By Anastasia Vener
Wearable AI is changing how personal health and biometric data are collected, interpreted, and used. The U.S. focuses on rapid innovation through deregulation and federal initiatives such as the AI Action Plan, while the EU emphasizes safety, transparency, and user rights under the AI Act and GDPR. In the Asia-Pacific region, China and South Korea are moving toward binding oversight, whereas Singapore, Japan, Australia, and India follow more flexible or emerging guidance. These differing approaches highlight both the opportunities of wearable AI and the regulatory challenges it creates worldwide.
Anastasia Vener is a multilingual Senior Litigation Counsel turned AI and privacy governance strategist. A CIPP/US-certified advisor, she leverages her litigation, regulatory, and contract management expertise to help enterprises navigate AI/ML governance and data protection challenges. She bridges legal, product, and business teams to enable responsible innovation and is a frequent speaker and Deputy Member of The L Suite (TechGC), contributing thought leadership on AI, privacy, and emerging tech.
Introduction
From Silicon Valley to Seoul, wearable devices are reshaping how societies collect, interpret, and govern health data. Once niche fitness trackers, AI-powered wearables such as Oura, Fitbit, and Huawei Health now analyze sleep and heart rate variability and can even predict illness through machine learning. Nearly half of U.S. adults use a health-tracking wearable, adoption in Europe is around one in four, and uptake continues to accelerate across the Asia-Pacific region. Yet regulation still trails innovation. The EU emphasizes rights and transparency through the AI Act and GDPR, while the U.S. pursues rapid growth under a deregulation-driven AI Action Plan. Across APAC, countries are taking diverse paths: China and South Korea are moving toward binding oversight, Singapore and Australia offer more flexible frameworks, and India’s policies are still taking shape. This global patchwork underscores wearable AI’s vast potential while raising urgent questions about privacy, bias, and accountability.
United States: Innovation First, Oversight Later
The United States’ approach to AI governance has entered a new phase under President Trump’s America’s AI Action Plan, released in 2025 to replace the Biden-era executive order on “Safe, Secure, and Trustworthy AI.” The plan positions artificial intelligence as a driver of national competitiveness rather than a technology requiring precautionary regulation. Its three pillars are accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security, all signaling a clear shift toward deregulation. Federal agencies are directed to review and remove rules that may slow AI development or deployment.
For AI-powered wearables, including devices that collect biometric or health-related data, this new federal direction carries significant implications. The plan prioritizes speed and commercial growth but provides little clarity about consumer safeguards. Oversight is fragmented among several federal bodies. The Food and Drug Administration regulates devices only when they make medical claims. The Federal Trade Commission oversees deceptive or unfair data practices. The National Institute of Standards and Technology issues voluntary AI risk management frameworks. None of these agencies comprehensively addresses consumer-grade wearable AI.
Data protection laws remain incomplete. HIPAA safeguards health information only when it is handled by covered entities such as healthcare providers and insurers or their business associates, meaning most consumer wearables fall outside its scope. Several states, including California and Colorado, have privacy laws granting individuals rights to access or delete personal data, but these rules vary widely, creating a patchwork of obligations. The net effect is rapid technological adoption paired with uncertainty around data ownership, algorithmic accountability, and user rights. The U.S. framework favors market growth over coordinated oversight, leaving wearable AI largely self-governing.
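To make that patchwork concrete, here is a minimal Python sketch of how a wearable vendor might branch on a user’s state of residence before honoring access or deletion requests. The rights table, state entries, and function names are illustrative placeholders, not an authoritative legal mapping.

```python
# Minimal sketch (hypothetical rules table): the U.S. "patchwork" forces
# wearable makers to branch on the user's jurisdiction. Entries below are
# illustrative placeholders, not a complete or authoritative legal mapping.
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacyRights:
    access: bool        # right to access collected personal data
    deletion: bool      # right to request deletion
    opt_out_sale: bool  # right to opt out of data sale or sharing

# Hypothetical, non-exhaustive table; real obligations differ in scope,
# thresholds, and definitions of "sensitive" or "biometric" data.
STATE_RIGHTS: dict[str, PrivacyRights] = {
    "CA": PrivacyRights(access=True, deletion=True, opt_out_sale=True),
    "CO": PrivacyRights(access=True, deletion=True, opt_out_sale=True),
}

# Conservative default in this toy table for states without a
# comprehensive privacy statute: no statutory rights assumed.
DEFAULT = PrivacyRights(access=False, deletion=False, opt_out_sale=False)

def rights_for(state: str) -> PrivacyRights:
    """Return the assumed rights bundle for a user's state of residence."""
    return STATE_RIGHTS.get(state.upper(), DEFAULT)

if __name__ == "__main__":
    for state in ("CA", "TX"):
        print(state, rights_for(state))
```

Even this toy version suggests why vendors often apply the strictest applicable standard everywhere rather than maintaining fifty divergent code paths.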
European Union: Rights-Based and Risk-Focused
The European Union has taken a distinct approach, emphasizing safety, user rights, and accountability. Central to this framework is the EU Artificial Intelligence Act, which entered into force in August 2024. The Act uses a risk-based classification system, placing certain applications, including biometric and health-monitoring AI, in the “high-risk” category. Providers of high-risk systems must meet obligations for transparency, human oversight, data quality, and risk management. Rules for general-purpose AI began applying in August 2025. Full compliance for many high-risk systems, including some wearable AI applications, will be phased in through August 2027, providing companies with time to prepare.
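As a rough illustration of the Act’s risk-based logic, the toy triage below sorts a hypothetical wearable feature into simplified tiers. The tier boundaries are assumptions drawn from this article’s characterization, not the Act’s actual annexes, and real classification requires legal analysis.

```python
# Illustrative only: a toy triage mirroring the AI Act's risk-tier idea for
# wearable features. Boundaries are simplified assumptions, not the Act's
# annexes; actual classification is a legal determination.
from enum import Enum

class RiskTier(Enum):
    HIGH = "high-risk"        # strict obligations: oversight, data quality
    LIMITED = "limited-risk"  # mainly transparency duties
    MINIMAL = "minimal-risk"  # no specific obligations

def classify_wearable_feature(uses_biometric_id: bool,
                              makes_health_predictions: bool,
                              chats_with_user: bool) -> RiskTier:
    # Assumption taken from the article: biometric identification and
    # health-monitoring AI land in the high-risk category.
    if uses_biometric_id or makes_health_predictions:
        return RiskTier.HIGH
    # Systems that merely interact with users carry transparency duties.
    if chats_with_user:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify_wearable_feature(uses_biometric_id=False,
                                makes_health_predictions=True,
                                chats_with_user=True))  # RiskTier.HIGH
```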
Complementing the AI Act is the General Data Protection Regulation, which has applied across the EU since 2018. GDPR protects biometric and health-related data as special categories of personal information, requiring explicit consent, transparency, and user rights such as access, correction, and deletion. For wearable AI, this means devices collecting biometric signals or generating health predictions must follow strict rules for data protection, even before the AI Act’s full obligations take effect.
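A minimal sketch, assuming an in-memory store, of how explicit consent could gate collection of special-category data while supporting access and erasure requests. The class, method names, and record format are hypothetical; a real implementation would also need to cover lawful-basis records, retention, and portability.

```python
# Minimal sketch, in-memory only: explicit consent gates collection of
# special-category (biometric/health) data; handlers cover access and
# erasure. Hypothetical API, not a complete GDPR implementation.
class BiometricStore:
    def __init__(self) -> None:
        self._consent: set[str] = set()      # user IDs with explicit consent
        self._records: dict[str, list] = {}  # user ID -> biometric readings

    def record_consent(self, user_id: str) -> None:
        self._consent.add(user_id)

    def ingest(self, user_id: str, reading: dict) -> None:
        # GDPR-style gate: no explicit consent, no collection.
        if user_id not in self._consent:
            raise PermissionError("explicit consent required for biometric data")
        self._records.setdefault(user_id, []).append(reading)

    def access_request(self, user_id: str) -> list:
        # Right of access: return everything held about the user.
        return list(self._records.get(user_id, []))

    def erasure_request(self, user_id: str) -> None:
        # Right to erasure: delete data and withdraw consent.
        self._records.pop(user_id, None)
        self._consent.discard(user_id)

store = BiometricStore()
store.record_consent("u1")
store.ingest("u1", {"hrv_ms": 52, "sleep_hours": 7.1})
print(store.access_request("u1"))   # [{'hrv_ms': 52, 'sleep_hours': 7.1}]
store.erasure_request("u1")
print(store.access_request("u1"))   # []
```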
Together, GDPR and the AI Act create a regulatory environment that prioritizes precaution and human rights. Companies must demonstrate that algorithms operate reliably, minimize bias, and protect individual privacy. Compliance can be challenging for startups and small companies, but the framework ensures consumers benefit from strong protections and legal recourse. The EU model contrasts sharply with the United States, illustrating the trade-offs between rights-based governance and rapid innovation while positioning the region as a global leader in ethical AI.
APAC: Innovation at Speed, Governance in Transition
Across the Asia-Pacific region, governments are moving rapidly from voluntary frameworks toward binding AI governance, especially in sectors like health and biometrics where wearable devices operate.
In China, recent updates to biometric and AI regulations have tightened controls on systems that use facial recognition or generate health insights. Companies must secure consent, register large-scale data collection, and comply with emerging national AI standards. Mandatory rules for labeling AI-generated content also came into effect in September 2025, signaling a shift toward more formal oversight of wearable and health-monitoring technologies.
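In the spirit of those labeling rules, here is a toy sketch of tagging an AI-generated health insight with a visible label and machine-readable metadata. The label text, metadata fields, and function name are assumptions for illustration, not China’s prescribed format.

```python
# Toy sketch: attach a visible label and machine-readable metadata to an
# AI-generated insight. Illustrative format only, not the official standard.
import json

def label_ai_output(text: str, model_name: str) -> str:
    meta = {"ai_generated": True, "model": model_name}
    return f"[AI-generated] {text}\n<!-- {json.dumps(meta)} -->"

print(label_ai_output("Your sleep quality improved this week.", "demo-model"))
```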
South Korea will implement its Basic Act on the Development of Artificial Intelligence in January 2026. The law explicitly defines “high-impact AI systems,” including wearable devices used for health monitoring or biometric identification, and requires human oversight, risk assessment, and transparent documentation. This represents the region’s strongest move toward binding compliance for wearable AI.
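One way to picture those obligations is the minimal compliance record a vendor might keep per high-impact feature, sketched below. The field names and annual refresh policy are illustrative assumptions, not language quoted from the statute.

```python
# Sketch of a minimal compliance record reflecting the obligations named
# above (human oversight, risk assessment, documentation). Field names and
# the annual refresh policy are assumptions, not statutory text.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HighImpactAIRecord:
    system_name: str
    purpose: str             # e.g., "sleep-stage classification"
    human_overseer: str      # person accountable for oversight
    last_risk_assessment: date
    mitigations: list[str] = field(default_factory=list)

    def is_current(self, today: date, max_age_days: int = 365) -> bool:
        """Assumed policy: risk assessments refreshed at least annually."""
        return (today - self.last_risk_assessment).days <= max_age_days

rec = HighImpactAIRecord(
    system_name="SleepSense",                 # hypothetical product
    purpose="sleep-stage classification",
    human_overseer="compliance@example.com",  # hypothetical contact
    last_risk_assessment=date(2025, 6, 1),
    mitigations=["bias audit", "on-device processing"],
)
print(rec.is_current(date(2026, 1, 15)))  # True
```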
Singapore continues to champion an innovation-friendly model. Its Model AI Governance Framework and updated guidance on generative AI emphasize transparency, fairness, and human-centric design. Regulatory sandboxes enable wearable AI developers to test products under supervision, striking a balance between innovation and accountability.
Japan passed the AI Promotion Act in mid-2025, establishing national principles for safety, reliability, and transparency. While largely foundational, the law lays the groundwork for future obligations for wearable AI, particularly for devices processing health or biometric data.
Australia has not yet enacted a technology-specific AI law but is expanding oversight through sector-specific reforms. Its Therapeutic Goods Administration regulates AI-based medical devices, including health-monitoring wearables, and consultations are underway on responsible AI in healthcare. Voluntary guardrails introduced in 2025 indicate that formal regulation may soon follow.
India remains at an early stage of AI governance but is one of the fastest-growing wearable markets globally, with adoption exceeding 30 percent of adults in 2025. The proposed Digital India Bill seeks to bring connected devices, including smartwatches, under stricter regulatory supervision, while the government explores AI frameworks for incident reporting and risk classification.
These developments show that APAC is transitioning from innovation-led deployment to structured oversight. Companies operating across the region must navigate a spectrum of regulatory approaches, from strict enforcement in China and South Korea, through flexible guidance in Singapore and Australia, to emerging frameworks in India, all while balancing innovation with legal accountability.
Conclusion
Wearable AI is transforming health tracking, but global governance remains uneven. The U.S. prioritizes rapid innovation, the EU enforces rights and safety, and the Asia-Pacific region is moving toward structured oversight, with China and South Korea leading the shift. As AI becomes increasingly embedded in personal health devices, global coordination, ethical safeguards, and inclusive policymaking are essential. Only through transparency, accountability, and equity can wearable AI achieve its promise without compromising public trust.
_____________________________________________________________
Collaborate with us!
As always, we appreciate you taking the time to read our blog post.
If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact WAI editors Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).
Silvia A. Carretta and Dina Blikshteyn
- Editors
