
The Digital Jury: AI-Generated Evidence and the Blurry Boundaries of Truth in International Arbitration

 

Introduction: The Digital Transformation of Dispute Resolution


AI-generated evidence is increasingly present in international arbitration, offering efficiency but posing serious legal risks arising from deepfakes, bias, and hallucinations. These threats undermine evidentiary reliability, procedural fairness, and award enforceability. While some jurisdictions and arbitral bodies have issued guidelines, Vietnam lacks a clear framework addressing this issue.


Given its ambition to establish itself as a regional arbitration hub, Vietnam faces both regulatory and reputational risks. This article examines these challenges and proposes responses, including setting admissibility standards, improving arbitrator training on AI, and promoting transparency in AI-assisted arbitration.


I.              The Use of AI-Generated Evidence in Arbitration


AI’s role in arbitration extends well beyond automation, encompassing advanced applications that shape the information presented to arbitral tribunals. To assess the associated risks, it is essential to classify the types of AI intervention.


In the pre-hearing stage, evidence gathering and assessment, traditionally time-consuming processes, have been transformed by AI-driven e-discovery systems. Using machine learning and natural language processing (NLP), these tools efficiently scan, organize, and analyze vast digital datasets, identifying relevant materials and significantly reducing the time and costs associated with manual document review.[1] These technologies are particularly useful in complex commercial disputes, allowing arbitrators to handle large datasets efficiently and reduce human error. Unlike traditional arbitration, which involves lengthy witness examinations and reviews, AI can rapidly analyze evidence and generate draft decisions in a fraction of the time.[2]


Beyond efficiency, AI shapes legal strategy and drafting. Predictive models analyze past awards, case law, and arbitrator profiles to estimate success and simulate outcomes. Large Language Models (LLMs), built on transformers and trained on vast text corpora, handle tasks like summarization, translation, and reasoning, and are fine-tuned to align with user intent and ethical standards.[3] However, these systems are only as reliable as their training data; biased or incomplete datasets can yield misleading outputs, creating unfair informational advantages and compromising fairness.


Generative AI, using deep learning and Generative Adversarial Networks (GANs), can produce synthetic evidence, reconstruct contracts, fabricate correspondence, or create realistic audio and video. GANs iteratively train a generator and a discriminator against each other to achieve high realism, enabling sophisticated real-world applications.[4] The instability of this adversarial process has led researchers to develop methods to stabilize GAN training, improve data diversity, and refine evaluation techniques. Beyond technical issues, GANs also raise serious ethical concerns, as they can generate deepfake content frequently used for misinformation and manipulation.[5]
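For readers curious how the generator-versus-discriminator mechanism described above works in principle, the following is a deliberately simplified numerical sketch. Every element, the one-dimensional data, the single linear units, the learning rate, and the gradient clipping, is an arbitrary illustrative choice; real GANs use deep neural networks and far more elaborate training schemes.

```python
import numpy as np

# Toy 1-D GAN sketch, purely to illustrate the adversarial loop described
# above. All sizes, rates, and the target distribution are illustrative
# assumptions; real GANs use deep networks, not single linear units.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

g_w, g_b = 0.1, 0.0   # generator: fake = g_w * z + g_b
d_w, d_b = 0.1, 0.0   # discriminator: P(real) = sigmoid(d_w * x + d_b)
lr = 0.05

for _ in range(2000):
    real = rng.normal(4.0, 1.0)   # sample from the "genuine" distribution
    z = rng.normal()              # noise fed to the generator
    fake = g_w * z + g_b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        err = sigmoid(d_w * x + d_b) - label      # cross-entropy gradient
        d_w -= lr * np.clip(err * x, -1.0, 1.0)   # clipped for stability
        d_b -= lr * np.clip(err, -1.0, 1.0)

    # Generator step: adjust so the discriminator scores fakes as real.
    err = sigmoid(d_w * fake + d_b) - 1.0
    g_w -= lr * np.clip(err * d_w * z, -1.0, 1.0)
    g_b -= lr * np.clip(err * d_w, -1.0, 1.0)

# The trained generator produces samples meant to imitate the real data.
samples = g_w * rng.normal(size=5) + g_b
print(samples)
```

The point of the sketch is structural: each party to the "game" improves only by exploiting the other's weaknesses, which is precisely why the resulting outputs can become difficult to distinguish from genuine material.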


While useful for document reconstruction, these capabilities threaten arbitral integrity. Fabricated evidence undermines authenticity, reliability, and due process, making it increasingly difficult for tribunals to distinguish truth from AI-generated content.


II.            Core Evidentiary Risks and Challenges Posed by AI-Generated Content: A Deep Dive into Legal Theory


The emergence of AI-generated evidence challenges the foundational principles of arbitration, which traditionally values flexibility and party autonomy. Although arbitral tribunals enjoy broad discretion to admit or exclude evidence, unconstrained by strict domestic evidentiary rules,[6] this very flexibility becomes a vulnerability when faced with advanced digital forgeries and opaque algorithmic processes.


1.     Authenticity Undermined: A Crisis of Provenance in the AI Era


The most immediate and critical threat from AI evidence is the challenge to its authenticity. In the digital age, the core question has shifted from ‘Is this relevant?’ to ‘Is this authentic?’ The spread of deepfakes and other forged materials created using advanced GANs and similar technologies fundamentally undermines the integrity of evidence.[7] Convincing deepfakes, synthetic emails, or fabricated records can be nearly indistinguishable from genuine evidence, especially for non-technical arbitrators. This creates a crisis of trust and shifts the burden of proof: instead of challengers proving falsity, parties submitting AI-generated evidence may need to establish its authenticity and provenance. Moreover, such evidence often lacks a verifiable chain of custody.[8] AI outputs from LLMs or GANs remain hidden in a “black box”, making traceability and verification difficult. Reliance on digital forensics raises costs and complexity, limiting access to justice despite AI’s efficiency gains.


2.     Bias, Hallucinations, and the Lack of Explainability


AI poses deeper risks to reliability and fairness than simple forgery. Its outputs depend on training data, so embedded biases, from historical, cultural, or socio-economic sources, can be reproduced and amplified.[9] AI tools can also “hallucinate” false facts and precedents, risking unfair outcomes and overburdening tribunals with verification tasks.[10]


Finally, advanced AI often suffers from the “Black Box Problem”, where complex neural networks and high-dimensional data processing make outputs difficult for humans to understand.[11] If evidence relies on opaque AI outputs, parties may be unable to assess their accuracy, and counsel cannot conduct meaningful cross-examination,[12] risking the AI output being treated as unquestionable truth.


3.     Manipulation, Fraud, and the Deficiency of Procedural Frameworks


The ease of creation, difficulty of detection, and lack of transparency in AI-generated evidence heighten risks of manipulation and fraud, exacerbated by the absence of clear legal frameworks. Most major arbitration rules (ICC, LCIA, UNCITRAL) were drafted for an era of physical evidence and reflect outdated notions of “documents” and “witness testimony.”


Similarly, neither Vietnamese law nor major arbitral rules provide specific guidance on AI-generated evidence. This silence forces tribunals to rely on overly broad discretion to address complex technological issues, leading to inconsistent decisions and eroding trust in the arbitral process. The absence of a clear standard for “Technological Due Process”,[13] ensuring transparency, accountability, and fairness in automated decision-making, creates a significant procedural gap.


III.         Managing AI Evidence Risks in the Vietnamese Arbitration Landscape


The rise of AI-generated evidence presents significant challenges for arbitration globally, and Vietnam is no exception. While Vietnamese commercial arbitration has matured under the Law on Commercial Arbitration 2010 (henceforth, the LCA 2010),[14] the rapid adoption of AI technologies exposes gaps in legal guidance and technical capacity that threaten both procedural fairness and international enforceability.


Article 46 of the LCA 2010 gives tribunals broad powers to collect evidence, including witness testimony and expert assessments. Yet, it does not address AI-generated evidence, which involves complex, opaque algorithms outside the reach of traditional rules.


That said, it is possible to apply the provisions under Articles 93 and 94 of Vietnam’s Civil Procedure Code 2015,[15] where admissible evidence includes documents, electronic data, witness statements, expert reports, and other legally recognized sources. While AI-generated data could technically fall under “electronic data”, there are no clear standards for verification, reproducibility, or disclosure of methodology and metadata.


As a result, this lack of specificity has several consequences:


First, the broad discretion granted to arbitral tribunals can become a weakness when assessing opaque AI-generated outputs. This lack of transparency risks inconsistent rulings and reduced predictability. Moreover, if tribunals overstep their authority, parties may claim violations of Vietnam’s core arbitration principles of independence, impartiality, objectivity, and legal compliance.[16] Such breaches may trigger annulment under Article 68(2)(b) of the LCA 2010, which applies when a tribunal acts contrary to the parties’ agreement or the law. In complex cases, tribunals often rely on soft-law instruments such as the IBA Rules on Evidence.[17] However, neither the LCA 2010 nor the Vietnam International Arbitration Centre (VIAC) Rules expressly authorize tribunals to apply such standards, creating a risk that courts will annul awards for inconsistency with Vietnamese public policy. For instance, the Hanoi People’s Court annulled an award after finding that the tribunal’s reliance on the IBA Rules to reject evidence, despite Vietnamese law governing the dispute, violated fundamental legal principles.[18] The Court held that, by relying on the IBA Rules to disregard the Respondent’s factual witness statements, which had been submitted but not presented at the hearing, the tribunal breached Article 56(2) of the LCA 2010 and failed to base its decision on the available evidence as required under Vietnamese law and the VIAC Rules. Consequently, Vietnamese tribunals exercise caution in excluding evidence, limiting procedural efficiency.


Second, procedural fairness is at risk when a party cannot effectively challenge AI-generated evidence. Under Article 68(2)(b) of the LCA 2010, an award may be annulled if proceedings are inconsistent with the parties’ agreement or the law. Such risks undermine both the finality and enforceability of arbitral awards, inviting annulment claims based on unfair procedure.


Third, limited AI expertise among Vietnamese arbitrators hinders detection and assessment of AI evidence, increasing risks of unreliable evidence, misconduct claims, and award annulment.

Last but not least, a further challenge lies in cross-border enforcement under the New York Convention 1958. Even if an award based on AI-generated evidence withstands scrutiny in the domestic forum, its enforceability in foreign jurisdictions remains uncertain.[19] Article V(1)(b) of the Convention allows refusal of enforcement if a party was “unable to present its case,” and awards based on unverifiable or biased AI evidence may also raise due process and public policy concerns under Article V(2)(b). Vietnamese courts have shown comparable strictness in Decision No. 1768/QĐ-PQTT (2020), where an award was annulled because a company’s power of attorney (henceforth, POA) lacked the required consular authentication.[20] The Court annulled the award under Article 68 of the LCA 2010, finding that accepting a non-authenticated POA violated fundamental principles of law. This underscores that foreign documents must be consularly authenticated and signals similar risks for AI-generated evidence, which may be deemed inadmissible or unenforceable under Vietnamese law.


Domestic procedural risks and enforcement issues undermine confidence in Vietnamese arbitration and may deter investors. While AI evidence improves efficiency, it can threaten fairness. Vietnam should set clear standards, enhance digital literacy, and regulate AI to protect its arbitration system’s integrity.


IV.          Toward Strategic Integration: From Reliability Test to Global Integration


To balance efficiency and fairness in international arbitration, Vietnam must adopt a multi-layered strategy for regulating AI-generated evidence, through legislative reform, institutional training, and stronger international cooperation.


A.   Establishing Mandatory Technical Standards and Verification Mechanisms


Vietnam should move beyond the general relevance standard and adopt mandatory technical criteria for AI-generated evidence, ideally through amendments or guidelines supplementing the LCA 2010.


A “Reliability Test,” modeled on the U.S. Daubert Standard but tailored to technology, could guide tribunals in assessing AI evidence.[21] Under Daubert v. Merrell Dow Pharmaceuticals (1993), judges act as gatekeepers, evaluating the methodology’s validity based on:


i)      Testability: whether the theory or technique can be and has been empirically tested, as methods that cannot be replicated lack scientific validity;

ii)    Peer review and publication: whether it has been subject to scrutiny and validation through reputable scientific channels;

iii)  Error rate: the known or potential margin of error, which helps determine the evidence’s probative value;

iv)   Standards and controls: whether established standards govern the method’s application to ensure consistency and reliability; and

v)    General acceptance: the degree to which the method is accepted within the relevant scientific community, signaling its overall credibility and trustworthiness.[22]


In sum, the Daubert Standard demands a rigorous yet flexible scientific assessment of expert methodology, ensuring that only evidence grounded in valid and reliable principles reaches the decision-maker.
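Purely by way of illustration, the five factors above can be imagined as a screening checklist a tribunal might apply to proffered AI evidence. The factor labels, the boolean scoring, and the passing threshold below are hypothetical choices invented for this sketch; they are not drawn from the Daubert decision, Vietnamese law, or any arbitral rule.

```python
# Illustrative sketch only: a hypothetical checklist loosely modeled on the
# five Daubert factors discussed above. Names, scores, and the threshold
# are assumptions made for illustration, not part of any legal standard.

DAUBERT_FACTORS = (
    "testability",          # can the method be empirically tested/replicated?
    "peer_review",          # validated through reputable scientific channels?
    "error_rate",           # is the margin of error known and acceptable?
    "standards_controls",   # do established standards govern its application?
    "general_acceptance",   # accepted in the relevant scientific community?
)

def screen_evidence(assessment: dict, threshold: int = 4) -> bool:
    """Return True if the evidence satisfies at least `threshold` of the
    five factors (a hypothetical cut-off chosen for this sketch)."""
    satisfied = sum(1 for f in DAUBERT_FACTORS if assessment.get(f, False))
    return satisfied >= threshold

# Example: an AI forensic tool that is testable, peer reviewed, and has a
# documented error rate, but lacks governing standards and wide acceptance.
example = {
    "testability": True,
    "peer_review": True,
    "error_rate": True,
    "standards_controls": False,
    "general_acceptance": False,
}
print(screen_evidence(example))  # → False (only 3 of 5 factors satisfied)
```

The value of such a structure is not the arithmetic but the discipline: it forces the proponent of AI evidence to address each reliability factor expressly rather than relying on the tribunal's general discretion.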


B.    International Collaboration and Leadership from Vietnam International Arbitration Centre (VIAC)


Given the transnational nature of arbitration, Vietnam should align with the United Nations Commission on International Trade Law (UNCITRAL) and similar initiatives to ensure consistency and enforceability, while VIAC should proactively update its Rules or issue guidelines on AI evidence, following CIArb and The Silicon Valley Arbitration and Mediation Center (SVAMC).

The Chartered Institute of Arbitrators (CIArb) pioneered guidance on technology in arbitration, including AI-focused updates.[23] Its framework emphasizes balance, proportionality, and party transparency when using technology. Part I sets out four key principles: arbitrators’ powers, proportionate use, fairness and transparency, and security; Part II offers detailed guidance on cybersecurity. The CIArb guidelines provide a practical foundation for managing complex electronic and AI-generated evidence and for ongoing development in technology-driven arbitration.[24]


Similarly, SVAMC’s Guidelines on AI in Arbitration 2024 provide a principle-based framework to harness AI’s benefits while safeguarding fairness, confidentiality, and due process. The Guidelines are structured in three chapters:[25]


i)      All participants must understand AI, protect confidentiality, and disclose substantial AI contributions;[26]

ii)    Parties and counsel must use AI competently and avoid manipulating evidence;[27]

iii)  Arbitrators must not delegate decisions to AI and must ensure fairness and transparency.[28]


Alongside CIArb initiatives, SVAMC offers a practical model for VIAC to proactively set binding standards through rules or guidelines, promoting consistent and fair AI evidence handling without waiting for legislative reform.


V.            Conclusion: AI as a Tool for Justice, Not a Source of Injustice


AI-generated evidence is reshaping arbitration, offering efficiency but risking fairness through deepfakes, bias, and hallucinations. Vietnam should adopt clear, technology-aware, and globally aligned rules, enhance digital literacy, and empower VIAC to ensure reliability and transparency. A proactive approach will strengthen trust in Vietnamese arbitration, though AI’s evolving nature means such risks can only be mitigated, not eliminated.


[1] D. Shoukat, ‘Using AI in International Arbitration: From Predictive Analytics to Automated Awards’ (2025) 3(2) Legal Research & Analysis 6, 7 <https://doi.org/10.69971/lra.3.2.2025.87> accessed 7 October 2025.

[2] I. S. Szalai, ‘Stranger Disputes: When Artificial Intelligence Turns Arbitration Upside Down’ (2025) 25(2) Pepperdine Dispute Resolution Law Journal 133, 147-149 <https://digitalcommons.pepperdine.edu/drlj/vol25/iss2/1> accessed 7 October 2025.

[3] H. Naveed et al, ‘A Comprehensive Overview of Large Language Models’ (2024) preprint submitted to Elsevier <https://arxiv.org/pdf/2307.06435> accessed 7 October 2025.

[4] P. Purwono et al, ‘Understanding Generative Adversarial Networks (GANs): A Review’ (2025) 3(1) Control Systems and Optimization Letters <https://ejournal.csol.or.id/index.php/csol/article/view/170> accessed 7 October 2025.

[5] Ibid. 

[6] K. Pilkov, ‘Evidence in International Arbitration: Criteria for Admission and Evaluation’ (2014) 80(2) Arbitration: the International Journal of Arbitration, Mediation and Dispute Management 147 <https://repository.ndippp.gov.ua/bitstream/handle/765432198/242/Pilkov_2014_Evidence_in_International_Arbitration.pdf?sequence=1&isAllowed=y> accessed 6 October 2025.

[7] A. Dash, J. Ye, G. Wang, ‘A Review of Generative Adversarial Networks (GANs) and Its Applications in a Wide Variety of Disciplines: From Medical to Remote Sensing’ (2024) 12 IEEE Access 18330, doi:10.1109/ACCESS.2023.3346273, accessed 6 October 2025.

[8] S. Nath et al, ‘Digital Evidence Chain of Custody: Navigating New Realities of Digital Forensics’ (2024) IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA) 11, doi:10.1109/TPS-ISA62245.2024.00012, accessed 6 October 2025.

[9] M. Magál, A. Calthrop, K. Limond, ‘Artificial Intelligence in Arbitration: Evidentiary Issues and Prospects’ (2025) Global Arbitration Review, The Guide to Evidence in International Arbitration <https://globalarbitrationreview.com/guide/the-guide-evidence-in-international-arbitration/3rd-edition/article/artificial-intelligence-in-arbitration-evidentiary-issues-and-prospects> accessed 7 October 2025.

[10] B. Scott, ‘A case of ‘AI hallucination’ in the air’ (2023) Leiden Law Blog <https://www.leidenlawblog.nl/articles/a-case-of-ai-hallucination-in-the-air> accessed 7 October 2025.

[11] Y. Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31 Harvard Journal of Law & Technology, 901.

[12] S. Jagusch KC, ‘Cross-examination of fact witnesses: the common law perspective’ (2025) Global Arbitration Review, The Guide to Advocacy <https://globalarbitrationreview.com/guide/the-guide-advocacy/seventh-edition/article/cross-examination-of-fact-witnesses-the-common-law-perspective> accessed 7 October 2025.

[13] D. K. Citron, ‘Technological Due Process’ (2007) 85 Washington University Law Review 1249–1313, University of Maryland Legal Studies Research Paper No 2007-26 <https://ssrn.com/abstract=1012360> accessed 7 October 2025.

[14] The Law No.54/2010/QH12 on Commercial Arbitration was passed by the National Assembly of the Socialist Republic of Vietnam on 17 June 2010.

[15] Article 93 of the Civil Procedure Code of Vietnam (Code No. 92/2015/QH13), enacted on November 25, 2015, and effective from July 1, 2016, establishes the fundamental principles and procedures for handling civil cases.

[16] Article 4(2) of the LCA 2010.

[17] The IBA Rules on the Taking of Evidence in International Arbitration are a set of rules for managing evidence in international arbitration proceedings, adopted by the International Bar Association (IBA) Council on December 17, 2020.

[18] Decision No. 11/2019/QD-PQTT issued by the Hanoi People’s Court on 14 November 2019.

[19] The Convention on the Recognition and Enforcement of Foreign Arbitral Awards, commonly known as the New York Convention, was adopted by a United Nations diplomatic conference on 10 June 1958 and entered into force on 7 June 1959.

[20] M. Dang, N. Đỗ, T. Phạm, ‘The Vietnamese Courts Have Spoken: Consular Authentication of Foreign Powers of Attorney Is a Must to Initiate a Vietnamese Arbitration’ (2023) Kluwer Arbitration Blog <https://legalblogs.wolterskluwer.com/arbitration-blog/the-vietnamese-courts-have-spoken-consular-authentication-of-foreign-powers-of-attorney-is-a-must-to-initiate-a-vietnamese-arbitration/> accessed 7 October 2025.

[21] Legal Information Institute, ‘Daubert Standard’ (Cornell Law School)

[22] A. S. Smith, J. B. Jones, ‘The Daubert Standards for Admissibility of Evidence Based on the Personality Assessment Inventory’ (2024) 17(2) Psychological Injury and Law 105.

[23] C. Morgan, D. Misri, E. Kantor, ‘AI-volution in Arbitration: the New Chartered Institute of Arbitrators (CIArb) Guidelines’ Herbert Smith Freehills Kramer Notes (2025) <https://www.hsfkramer.com/notes/arbitration/2025-03/ai-volution-in-arbitration-the-new-chartered-institute-of-arbitrators-guidelines> accessed 7 October 2025.

[24] A. C. Eernisse, ‘Arbitration Tech Toolbox: Interview with Dr Gordon Blanke on the New CIArb Technology Guideline’ (2022) Kluwer Arbitration Blog <https://legalblogs.wolterskluwer.com/arbitration-blog/arbitration-tech-toolbox-interview-with-dr-gordon-blanke-on-the-new-ciarb-technology-guideline/> accessed 7 October 2025.

[25] Silicon Valley Arbitration & Mediation Center – SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration (“Guideline”), opened for public consultation on 31 August 2023 and published on 30 April 2024.

[26] Chapter 1 comprises Guidelines 1, 2, and 3.

[27] Chapter 2 comprises Guidelines 4 and 5.

[28] Chapter 3 comprises Guidelines 6 and 7.

 
 
 
