Four Questions Chinese Courts Are Answering About AI

By Dawn Yu
China has no single AI governance statute. What it has, increasingly, is a body of decided cases, and read together, they are doing the work that legislation has not yet done. Four questions recur across the docket, and the answers courts are giving have direct implications for how AI products are built, deployed, and owned:
What should AI not do?
Who is accountable?
What kind of oversight is sufficient?
What about vulnerable users?
This article is written by Dawn Yu, a patent attorney and Shareholder at Jiaquan IP Law in China, focusing on cross-border patent strategy, patent invalidation proceedings, and trade secret protection. She works with technology companies and international partners on patent disputes and innovation protection, especially in areas such as AI, medical devices, and advanced manufacturing. She also led the development of the firm’s AI-assisted tool for monitoring CNIPA post-grant proceedings.
I. What should AI not do?
The most consequential behavioral question is where the law draws the line between competitive innovation and misappropriation. The Beijing Intellectual Property Court's "Character Transformation" ruling ((2023) Jing 73 Min Zhong 3802) answered it in a way that matters well beyond the specific facts: the defendant had not copied the plaintiff's output; it had copied the trained model parameters that produced the output. The court found this sufficient for unfair competition liability. The governance implication is significant. If training investment is legally protected independently of the final product, then the entire AI development pipeline (data curation, compute expenditure, model architecture) acquires competitive value that the law will defend. This reshapes how organizations should think about model security, not just IP licensing.
A parallel line of cases, the Zhao portrait ruling and the Li voice ruling, extended this logic to human identity. Both involved AI outputs that had been technically processed to distance them from the original person. In those cases, the courts applied a "recognizability standard": if a reasonable person can identify the individual behind the synthesis, the technical distance is legally irrelevant. Together with the model parameters case, these decisions establish that Chinese courts are willing to protect the inputs to AI systems, not just the outputs. This closes what many developers had assumed was a safe gap between training data and legal exposure.
II. Who is accountable?
The accountability question has two distinct dimensions that the case law is beginning to resolve in tandem. The first is corporate liability. The Ultraman ((2023) Hu 0110 Min Chu 16054) and Hatsune Miku ((2023) Hu 0110 Min Chu 43656) cases both held that the natural persons behind the infringing entities were personally liable. The courts' willingness to pierce the corporate veil is a signal that AI-related legal risk cannot be structurally quarantined. As AI products multiply and the individuals making product decisions become harder to identify, this principle will matter more, not less: someone inside the organization must own each consequential AI decision, and that person is exposed to liability when infringement occurs.
The second dimension is rights ownership in an era of human-machine collaboration. The Ada case ((2022) Zhe 0192 Min Chu 9983) and the influencer liability ruling together sketch a framework: performance rights in AI-mediated human output vest in the human contributor unless expressly assigned in writing, while commercial beneficiaries of AI-generated content bear review obligations regardless of whether they operated the tool. The governance effect is to push legal responsibility toward the parties best positioned to exercise control, and to make that responsibility non-delegable. Outsourcing the generation of content, or wrapping a human performance in a digital persona, does not transfer the attendant legal risk.
III. What kind of oversight is sufficient?
This is where the case law is moving fastest, and where the shift has the deepest structural implications. The traditional approach to AI compliance is output-focused: audit what the system produces and intervene when it causes harm. The Liao case dismantled that model. The defendant's AI output was examined and found not to infringe, yet the defendant was still held liable, because the data collection and processing that preceded the output had independently violated personal information rights. Harm, the court held, can occur entirely within the pipeline, before any output is generated or seen by anyone.
The Chen v. Shanghai Yi ruling ((2024) Hu 0114 Min Chu 1326) adds a second dimension: courts are not merely resolving disputes, they are actively extending regulatory reach. By conditioning resolution on algorithm registration and compliance commitments, the court used private litigation as a vehicle for bringing a non-compliant actor into the public regulatory framework. This judicial-regulatory hybrid is a preview of how AI governance will operate in China: not purely through ex ante regulation, nor purely through ex post litigation, but through a combination of the two that makes compliance a condition of operating without legal jeopardy.
IV. What about vulnerable users?
The final question is the least settled doctrinally, but potentially the most consequential socially. AI companion and social products create sustained, emotionally engaging interactions. For minors and other vulnerable users, the risk of dependency or manipulation is material. Courts and regulators are beginning to treat anti-addiction mechanisms and ethical interaction design not as product differentiators but as legal baselines. The direction of travel is clear even if the precise standard is not yet fixed.
The liability question sits at the same frontier. When an AI system produces false output that causes concrete harm, the question of who compensates for that harm remains genuinely unresolved. What the case law suggests, by analogy, is that courts will ask not only what the output was, but what oversight was in place when it was generated. Organizations in regulated sectors, such as healthcare, financial services, and legal services, to name a few, should not read the current doctrinal gap as protection. They should read it as an indication that the standard, when it arrives, will be applied retrospectively to conduct that is happening now.
What emerges from reading these cases together is not a patchwork of isolated rulings but a coherent direction: in the absence of legislation, Chinese courts are building an AI governance framework from the ground up, case by case, and they are consistently choosing to extend protection upstream to training data, to human contributors, to the AI pipeline itself. Organizations that have calibrated their compliance programs to outputs alone are already behind.
Key Takeaways
The entire AI pipeline is legally exposed, not just its outputs. Trained model parameters, biometric training data, and data collection practices are each independently actionable.
Technical distance from the original does not create legal distance. Courts apply a recognizability standard to synthesized likeness and voice; obfuscation is not a compliance strategy.
AI legal risk cannot be structurally quarantined. Personal liability follows corporate misuse. Someone inside the organization must own each consequential AI decision.
Commercial benefit creates review obligations. Parties who profit from AI-generated content bear oversight duties regardless of whether they operated the tool.
Courts are acting as regulators. Judicial recommendations requiring algorithm registration signal that litigation is becoming a vector for regulatory enforcement, not just dispute resolution.
Current doctrinal gaps are not safe harbors. In regulated industries, the standard for hallucination liability will likely be applied retrospectively to conduct happening today.
_____________________________________________________________
Collaborate with us!
As always, we appreciate you taking the time to read our blog post.
If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog in 2026! To explore this opportunity, please contact WAI editors Silvia A. Carretta - WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co) or Dina Blikshteyn (dina@womeninai.co).
Silvia A. Carretta and Dina Blikshteyn
- Editors
