The federal judiciary in the United States is moving to tighten rules for AI-generated evidence. In mid-2025 the Judicial Conference’s advisory committee on evidence rules advanced a proposed new Federal Rule of Evidence (Rule 707) governing machine-generated evidence. The proposed Rule 707 would hold machine-generated outputs to the same admissibility standards that apply to human expert testimony. If no human expert presents the evidence, the court may admit it only if it meets the reliability and relevance criteria of Rule 702(a)–(d), the standard for expert opinions.
In effect, courts would have to scrutinize the AI system’s methods and training data (e.g. whether the data was representative and validated) before accepting its conclusions. The proposed rule text states: “When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)–(d)”. The proposal, now open for public comment through early 2026, aims to ensure AI tools are used responsibly in federal courts.
Legal scholars have noted that the proposal addresses growing concerns about the opacity and potential bias of automated systems, particularly in forensic and analytical contexts. If adopted, Rule 707 could significantly influence how parties deploy AI-derived evidence, increasing the need for technical disclosure and expert validation. Similar debates are emerging in other U.S. courts as they grapple with the evidentiary status of machine-generated outputs.
