Artificial intelligence (AI), and in particular its application machine learning (ML), is playing an increasingly important role in the development and manufacture of medicinal products, as well as in the regulatory environment. For example, it may be used to help identify new therapeutic molecules, design clinical trials or assist with release-related analytical testing of ready-to-use medicinal products.
However, the high complexity of AI/ML models and their lack of transparency present challenges, particularly as regards data quality assessment, model transparency, quality control and possible bias. Regulatory authorities are currently developing framework conditions for the safe and effective use of AI applications without risk to patient safety. Since AI applications are in constant development, authorities have to supervise their use closely over the entire life cycle of medicinal products and medical devices to ensure that innovation and compliance remain aligned.
Guidelines for the assessment of AI-generated elements
Authorisation applications must contain complete documentation that complies with the current state of science and technology. When assessing AI-generated elements, Swissmedic takes account of the guidelines and directives of international organisations and partner authorities such as the WHO, the International Council for Harmonisation (ICH)[1], the International Medical Device Regulators Forum (IMDRF), the EMA and the US FDA as expressions of the current state of science and technology. As a member of international organisations, Swissmedic plays an active role in drafting and evolving the corresponding foundation documents.
[1] Federal law may refer to specific versions of international guidelines.