1 of 21

2 of 21

Clarifying Validation: Collaborative Approach to a Consistent Terminology

This project is addressing the inconsistent use of the term "validation" across disciplines. We are compiling relevant publications and aim to create a concise resource to clarify terminology and support a peer-reviewed publication on the topic.

We will advocate for a more harmonized, context-aware approach to the term "validation", particularly considering the growing influence of AI/ML technologies in diagnostics.

The aim of the paper will be to raise awareness of the inconsistent and context-dependent use of the word, promote collaboration for better definitions, and ensure clearer, more consistent performance metrics in laboratory practice and related fields.

Please send any relevant publications ahead of this meeting. We look forward to this discussion.

3 of 21

Meeting #1

  • Recording (YouTube) + screenshot (see next slide)
  • This meeting (01/06/2025) addressed the inconsistent use of the term "validation" across subspecialties and disciplines. We discussed compiling relevant publications and creating a resource in support of a peer-reviewed publication.

4 of 21

5 of 21

Let’s pick a project page cover image

  • MCQ
  • OWL
  • ARROW

6 of 21

Collection of Relevant Publications

  • Screenshot of table
  • Add file for download

7 of 21

VALIDATION: TERM

8 of 21

Overview of the Term

  • Based on the literature review and supplementary files, the term "validation" has been used inconsistently across disciplines, particularly in the context of AI/ML, medical devices, pharmaceuticals, and regulatory science. Below is an overview of different definitions of validation across various fields:

9 of 21

Overview

  • Paste screenshot + Add Slide-deck for download

10 of 21

Key Takeaways

  1. No universal definition of validation exists across disciplines; it varies by field, objective, and regulatory context.
  2. AI validation involves both technical (accuracy, robustness) and regulatory (safety, ethics) considerations.
  3. Validation is critical in healthcare, pharmaceuticals, engineering, and AI governance to ensure safety, reliability, and compliance.
  4. Ongoing efforts (e.g., TRIPOD+AI 2024, British Standard BS30440, FDA AI Validation Guidelines) aim to standardize AI validation approaches.

11 of 21

12 of 21

13 of 21

What happened

  • Received many publications
  • Thank you all (special thanks to AD and SB)
  • Sorted by themes = domains
  • Extracted key elements

  • Broader scope: npj Digital Medicine or Nature Digital Health

14 of 21

15 of 21

16 of 21

Comments 1 – 4/4/25

  • AK: grouping looks good => key areas… aim… towards recommendations for each area; try to standardize the term for each group
  • MH: echo perspectives; “Venn diagram” => different stakeholder groups => lay out different vantage points
  • NZ: timely topic; ISO/AWI:2405-2 activities (image analysis standard; “-1” general principles in general laboratories vs. “-2” AI applications) => contribute to different usages of the term; did not talk about product/launch = postmarket surveillance <= great point: “postmarket validation”
  • SB: Recommendations for validation… what does not qualify as validation (e.g., calling the optimization of an AI model “validation”). What is within and what is outside the scope of the term. TRAIN > TUNE (optimization) > TESTING
  • Development of the model = “verify”: confirm that the tools do what they are supposed to do.
  • AD: engineering/software => verification (e.g., unit testing) vs. validation (performance metrics)
  • AK: jargon/technical term: 5-fold cross-validation
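The terminology collision noted above (TRAIN > TUNE > TESTING, and the "validation set" of k-fold cross-validation) can be sketched in a minimal, hypothetical Python example. The 70/15/15 split ratios and function name are illustrative assumptions, not from the discussion:

```python
import random

def train_tune_test_split(samples, seed=0):
    """Partition samples into TRAIN / TUNE / TEST subsets (70/15/15).

    Note the jargon collision: in ML usage the TUNE partition is often
    called the "validation set", even though no regulatory-sense
    validation happens there -- only optimization (hyperparameter tuning).
    """
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(0.70 * len(shuffled))
    n_tune = int(0.15 * len(shuffled))
    return (shuffled[:n_train],                  # TRAIN: fit model parameters
            shuffled[n_train:n_train + n_tune],  # TUNE: ML's "validation" set
            shuffled[n_train + n_tune:])         # TEST: held-out performance check

train, tune, test = train_tune_test_split(range(100))
```

The point the sketch makes is that "validation" here names a data partition used for optimization, which is exactly what SB argues should fall outside the regulatory scope of the term.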

17 of 21

Comments 2 – 4/4/25

  • NZ: who is active in the role => blinded/methods => introduce terms
  • CLIA does not use the term “validation”
  • Are we proposing to not use validation as a stand-alone term = specify context
  • NZ: use of similar terms = interdisciplinary problem (US vs. non-US; e.g. AI act)
  • SB: regulatory team asks “is this validated” => engineering
  • SandraB: public health research => special procedures (e.g. pandemic) ML/AI – emphasize the context (sector-specific)
  • AD: can use validation when it comes with prefixes; adopt the software definition: validation when locked down – verification when it is not. “example paragraph”
  • JS: Other example: CLSI / CAP / CLIA / CFR / ISO definitions of verification, DX/IVDR => ISO used software => conflicting => “method evaluation”
  • JS: Method evaluation => performance specification of a test system

18 of 21

Comments 3 – 4/4/25

  • AD: “define working” => load images vs. format; working could mean loss minimization
  • MH: working means different things to different people
  • JS: CLIA covers AI => CFR = establishing performance characteristics.

§ 493.1253 Standard: Establishment and verification of performance specifications.

(2) Establishment of performance specifications. Each laboratory that modifies an FDA-cleared or approved test system, or introduces a test system not subject to FDA clearance or approval (including methods developed in-house and standardized methods such as text book procedures), or uses a test system in which performance specifications are not provided by the manufacturer must, before reporting patient test results, establish for each test system the performance specifications for the following performance characteristics, as applicable:

(i) Accuracy.

(ii) Precision.

(iii) Analytical sensitivity.

(iv) Analytical specificity to include interfering substances.

(v) Reportable range of test results for the test system.

(vi) Reference intervals (normal values).

(vii) Any other performance characteristic required for test performance.
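As a hedged illustration of how performance characteristics like those above translate into numbers, the sketch below computes accuracy plus diagnostic sensitivity and specificity from binary confusion-matrix counts. The counts are invented for illustration; note also that the CFR's "analytical" sensitivity and specificity refer to detection limit and interference, which are distinct from the diagnostic (true-positive/true-negative rate) senses computed here:

```python
def performance_characteristics(tp, fp, tn, fn):
    """Compute illustrative performance metrics from binary
    confusion-matrix counts (hypothetical values only)."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,  # fraction of correct calls
        "sensitivity": tp / (tp + fn),  # diagnostic sensitivity (TPR)
        "specificity": tn / (tn + fp),  # diagnostic specificity (TNR)
    }

# Illustrative counts, not from any real test system
metrics = performance_characteristics(tp=90, fp=5, tn=95, fn=10)
```

Establishing such characteristics before reporting patient results is what the CFR requires of the laboratory, regardless of whether the activity is labeled "validation" or "verification".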

19 of 21

20 of 21

Clarifying Validation

21 of 21

Clarifying Validation Project Team

  • Amanda Dy – University of Toronto, Canada
  • Sandra Bütow – RKI Berlin, Germany
  • Brandon Gallas – FDA, OSEL, USA
  • Kim Blenman – Yale University, USA
  • Dave McClintock – Mayo Clinic, USA
  • April Khademi – University of Toronto, Canada
  • Roberto Salgado – Antwerp/Peter MacCallum, Belgium/Australia
  • Joe Lennerz – BostonGene, USA
  • Staci Kearney – Elevations, USA
  • Doc deBaca – Sysmex/CAP, USA
  • Kevin Schap – CAP, USA
  • Matthew Hanna – UPMC, USA
  • Jansen Seeheult – Mayo Clinic, USA
  • Norman Zerbe – Charité Berlin, Germany
  • Shannon Bennet – Mayo Clinic, USA
  • Fabienne Lucas – University Seattle, USA