Thank you everyone for joining our recent webinar on AI Technology for Regulatory Intelligence & Surveillance! If you missed it, you can still find the link at the end of this post.
We were amazed by the volume and depth of questions during the Q&A session—thank you for your enthusiasm! To provide the detailed answers you deserve, we’ve divided the responses into two parts.
In Part 1, we explored how AI enhances medical device quality, identified emerging trends in regulatory surveillance, recommended key AI tools, and discussed principles of good AI practice.
In Part 2 (the final part), we shift the focus to key challenges and strategies for implementing AI in regulatory surveillance. We’ll cover:
Overcoming organisational challenges in adopting AI
The importance of explainability in regulatory compliance
Metrics for evaluating the effectiveness of AI systems
Motivating experts to document knowledge for AI systems
Building trust in AI tools and understanding their limitations
Q&A Part 2: AI Technology for Regulatory Intelligence & Surveillance
What are the main challenges organisations face when implementing AI for regulatory surveillance, and how can they overcome them?
One major challenge organisations face when implementing AI for regulatory surveillance is employees using AI outside their expertise. This challenge is particularly important to address because it can lead to inaccurate outputs and compliance risks, ultimately undermining the reliability of the process. Here’s how organisations can address this issue effectively:
AI should only be applied by employees with sufficient expertise to verify/validate the output.
Establish a corporate AI Policy.
Encourage the open use and disclosure of AI (a "nothing to hide" policy).
Declare the use of AI on documents.
What role does explainability in AI play in regulatory compliance, and how can organisations ensure transparency?
Explainable AI (XAI) significantly enhances regulatory compliance by providing clarity on the application of AI and ensuring that users can understand the AI output. This is critical for maintaining accountability and transparency in regulatory processes.
To ensure transparency and compliance, organisations can leverage Explainable AI (XAI) in the following ways:
Ensuring Accountability: Organisations are accountable for their AI-influenced decisions. XAI enables organisations to communicate how AI was applied, thereby providing evidence of compliance.
Evidence: XAI helps users understand and provide a rationale for AI output, supporting compliance with regulations and standards.
What metrics or KPIs should companies use to evaluate the effectiveness of AI-driven regulatory intelligence systems?
To evaluate AI-driven regulatory intelligence, companies should focus on KPIs that track efficiency and regulatory compliance.
Here are some key metrics companies should consider:
Predictable timelines for regulatory preparations and submissions.
Response times to regulatory changes.
Reductions in non-conformances.
Reductions in changes to documentation.
Reductions in regulatory costs (resubmission, revalidation, label changes).
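As a minimal sketch of how one of these KPIs could be tracked, the snippet below computes the average response time to regulatory changes from a small log. All field names and data are hypothetical, for illustration only:

```python
from datetime import date

# Hypothetical log of regulatory changes and the dates the team responded.
change_log = [
    {"change": "MDR guidance update", "published": date(2024, 1, 10), "responded": date(2024, 1, 24)},
    {"change": "Label requirement revision", "published": date(2024, 3, 2), "responded": date(2024, 3, 9)},
    {"change": "New harmonized standard", "published": date(2024, 5, 15), "responded": date(2024, 6, 5)},
]

def average_response_days(log):
    """Average number of days from publication of a change to the team's response."""
    deltas = [(entry["responded"] - entry["published"]).days for entry in log]
    return sum(deltas) / len(deltas)

print(f"Average response time: {average_response_days(change_log):.1f} days")
```

Tracking this figure over successive quarters gives a concrete trend line for the "response times to regulatory changes" KPI.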
How do we motivate experts to record or document their knowledge so it can be leveraged by others in the organisation?
Effective documentation of expert knowledge is essential for organisational value and regulatory compliance. It enables valuable insights to be shared and used across teams, improving efficiency and ensuring compliance. To motivate experts to document their knowledge in ways that can be easily shared, organisations need to highlight the benefits and establish clear processes. Here’s how to encourage knowledge documentation:
Recognition and Incentives: Recognise and reward experts for their contributions to knowledge documentation, ensuring they feel valued for their input.
Ease of Documentation: Provide user-friendly tools and platforms that make the documentation process simple and efficient, such as templates, AI-assisted documentation tools, and collaborative platforms.
Integration into Workflow: Make documentation an integral part of every activity and incorporate it into the regular workflow to ensure it becomes a routine practice.
Demonstrate Impact: Show experts how their documented knowledge is being used and how it positively impacts the organisation, making their contributions more tangible and appreciated.
Cultural Shift: Foster a culture of knowledge sharing, where not just experts, but everyone, is encouraged to document and share insights, creating a more collaborative environment.
Evidence:
Apply AI to analyse documents and identify gaps, fostering continuous improvement and ensuring no critical knowledge is left undocumented.
Use AI to manage and support expert knowledge, aiding in troubleshooting and product development, thus adding further value to the organisation.
By focusing on these strategies, organisations can ensure that the three pillars of Regulatory Affairs—Claim, Argument, and Evidence—are upheld through effective knowledge documentation. Without documented evidence, expert insights lose much of their value, making tangible documentation essential for organisational success.
Will having a regulatory domain-specific small language model (SLM) provide more robust regulatory information?
Yes, an SLM scoped to a company's specific regulatory activity will significantly:
Reduce data collection
Improve focus and accuracy
Ease verification and validation
Trust is a significant issue in adopting AI in the RA community. Similar to how people did not start flying immediately when aircraft were introduced, how can we build trust in these tools to encourage usage?
Note that, unlike flying, where passengers must trust the pilot, with an AI tool the user remains responsible for both the input and the output.
By starting with specific, verifiable tasks and maintaining transparency and user involvement, organisations can gradually build trust in AI tools.
Start Small and Simple:
Apply AI to Specific Tasks: Implement AI for tasks where the benefits are clear and the output can be immediately verified by the user.
Communicate the Use of AI: Clearly communicate the role of AI in these specific tasks.
I feel humans cannot be entirely replaced by AI, even with extensive training. What is your opinion on this?
While AI is a powerful tool that can augment human capabilities and enhance efficiency, it is not a replacement for human perception, cognition, and oversight. AI can lead to more effective and informed outcomes, but the feedback loop—verifying the AI output—still requires human perception, cognition, and interpretation.
Please elaborate on what is meant by "tooling around LLMs is critical." What does "tooling" mean in practical terms?
"Tooling around LLMs" refers to the specialised tools and systems required to effectively use, manage, and enhance Large Language Models (LLMs). In practical terms, it involves tools for fine-tuning models, data preprocessing, deployment, performance monitoring, and ensuring security and compliance. These tools are essential to optimise LLMs for real-world applications and ensure they function efficiently and accurately.
Can you share examples of successful AI applications in regulatory surveillance within the healthcare industry?
AI is becoming an essential tool in regulatory surveillance, particularly in ensuring compliance within the healthcare and medical device industries. Below are examples of AI applications using frameworks like the European Medical Device Regulation (MDR) and tools like the MDCG 2020-13 Clinical Evaluation Report Template.
MDCG 2020-13 Clinical Evaluation Report Template - Section C:
Device Description:
AI Check: Does the Device Description address the requirements? Ensure the device description meets regulatory standards and includes all required information.
Classification:
AI Check: Enter MDR 2017/745 Annex VIII criteria and the device data into the tool. Ensure correct device classification based on risk and intended use. Are the classification rules listed correctly?
Previous Generations of the Device and Similar Devices (if applicable):
AI Check: Enter data about the device and similar devices (perhaps using the 510(k) database). Create a table comparing the device with similar devices on the market to assess regulatory history.
Clinical Evaluation Plan:
AI Check: What gaps are there, if any, with MDR 2017/745 Annex XIV Part A Section 1a? Ensure the clinical evaluation plan addresses all necessary requirements per MDR.
Harmonized Standards:
Are there harmonized standards relevant to the clinical evaluation of the device under evaluation? AI Check: Are harmonized standards correctly referenced? Verify that applicable harmonized standards are referenced correctly in the clinical evaluation.
State of the Art (SOTA):
AI Check: Use AI to create a table demonstrating SOTA with other identified devices. Compare the device to others on the market to demonstrate its compliance with current industry standards and innovation.
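The AI checks above can be sketched as a simple checklist runner over CER sections. In this illustration, the check questions mirror the MDCG 2020-13 items listed, and `run_ai_check` is a stub standing in for a real model call; all names are hypothetical:

```python
# Hypothetical checklist of AI checks per CER section (subset of the items above).
cer_checks = {
    "Device Description": "Does the device description address the requirements?",
    "Classification": "Are the MDR 2017/745 Annex VIII classification rules listed correctly?",
    "Clinical Evaluation Plan": "Are there gaps against MDR 2017/745 Annex XIV Part A Section 1a?",
    "Harmonized Standards": "Are harmonized standards correctly referenced?",
}

def run_ai_check(section: str, question: str, section_text: str) -> dict:
    """Stub: replace with a real LLM call. Flags empty sections as missing."""
    return {"section": section, "question": question,
            "finding": "pass" if section_text else "missing"}

def review_cer(cer_sections: dict) -> list:
    """Run every check against the supplied CER sections and collect findings."""
    return [run_ai_check(name, q, cer_sections.get(name, ""))
            for name, q in cer_checks.items()]

findings = review_cer({"Device Description": "Sterile single-use catheter ..."})
for f in findings:
    print(f"{f['section']}: {f['finding']}")
```

Structuring the checks this way keeps each AI question tied to a specific regulatory requirement, so every finding can be verified by the responsible expert.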
Conclusion: AI is streamlining regulatory processes in the healthcare industry by automating tasks like device classification, clinical evaluation checks, and compliance with harmonized standards. Tools like Hoodin help ensure companies stay compliant by offering AI-driven insights and automation.
✨ Thank you for your great questions and enthusiastic participation! We hope these insights prove valuable.
If you haven’t registered yet for Part 2: The Workshop, where we’ll dive deeper into the practical applications and advanced strategies of AI for regulatory intelligence and surveillance—don’t miss out! Spots are limited for these sessions, so we encourage you to register early to secure your place.
Subscribe to our blog to stay informed and access additional resources, videos, and much more.