<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:g-custom="http://base.google.com/cns/1.0" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
  <channel>
    <title>practical-qara</title>
    <link>https://www.practicalqara.com</link>
    <description />
    <atom:link href="https://www.practicalqara.com/feed/rss2" type="application/rss+xml" rel="self" />
    <item>
      <title>AI in MedTech</title>
      <link>https://www.practicalqara.com/ai-in-medtech</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
           AI in MedTech - Should we be worried?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Seldom does a day pass without hearing the acronym "AI" or having our lives touched by artificial intelligence in some way. It is one of the few acronyms that everyone seems to understand. Intentionally or not, we likely all interact with some form of AI in our daily lives whether through a simple Google search that generates AI-driven results, or by deliberately using AI to reword something we have written.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Like COVID-19, the AI revolution has caught us somewhat off guard. We are having to adjust how we work and live to embrace it. But unlike COVID-19 (conspiracy theories aside), AI, as the name implies, is artificial. Arguably, we should be in full control of its seemingly uncontrolled proliferation in society. Yet the reverse appears true: something artificial is trying to control us. So how did we get here?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           For context, I am no IT expert, just someone who has worked in the MedTech quality and regulatory space for decades. Like many of you, I am coming to grips with what AI means and what lies ahead. While the term "artificial intelligence" might strike some as an oxymoron, I thought it prudent to explore some background.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           First: AI is not new.
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
             It has been around for nearly 90 years. In 1936, Alan Turing conceptualised the "Turing Machine"—a device capable of reading data and solving problems using algorithms. The term "artificial intelligence" didn't enter common vocabulary until the 1950s; the Oxford English Dictionary dates it to 1955 and defines it as:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           "The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this."
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           This was later expanded to include:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           "…software used to perform tasks or produce output previously thought to require human intelligence, especially by using machine learning to extrapolate from large collections of data."
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The key word here is intelligence, defined as "the faculty of understanding; intellect." Intellect, in turn, is "that faculty of the mind by which a person knows and reasons; power of thought; understanding; analytic intelligence."
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           All pretty deep.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Looking at the updated Oxford definition above, it states:
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            "…software used to perform tasks or produce output
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
           previously thought
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            to require human intelligence."
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
            "Previously thought"? Are we now suggesting that AI's tasks aren't truly intelligent but merely computational—driven purely by input?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Reflecting on the deeper definition of intellect, the key terms are reasoning and power of thought. Reasoning and thought rely on knowledge, yet knowledge alone does not equal intelligence. Someone with vast knowledge may simply have a strong memory, not necessarily the ability to critically process, question, or extrapolate. Knowledge serves as the foundation for analysis and justification. However, the ability to retain knowledge, analyse, reason, and process information varies greatly among individuals. It is not one-size-fits-all.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The rise of AI is directly tied to software's ability to analyse information at lightning speed. Tasks that might take us hours or days can now be completed in seconds. But AI is not magic. It is software, rooted in zeros and ones, that detects patterns in data to draw conclusions or make predictions. AI merely simulates and processes existing knowledge. That is arguably not intelligence. It cannot reason or justify.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           AI in Healthcare
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
            AI's use in MedTech has also been around for decades. In 2002, during a Master's degree in Medical Diagnostics, I was assigned a project on Artificial Neural Networks. I found it fascinating and saw huge potential—but with my quality and regulatory hat on, I could see clinical risks, particularly in disease diagnosis. I highlighted those risks. Despite being challenged by a few academics, I earned a very good mark because I could argue my points with reasoning and justification, points grounded in real-world experience as a medical device developer and user.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Fast forward 20+ years, and AI in MedTech is becoming more prominent. As noted, we are playing catch-up. Recent publications demonstrate this:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            2019:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             OECD AI Principles (fairness, innovation, accountability, transparency)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            2021:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             EU proposal for the AI Act (entered into force 01 August 2024)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            2023:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             BS/AAMI 34971 (applying ISO 14971 to machine learning in AI)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            2023:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             ISO 42001 (AI Management System)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            2022:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             US Blueprint for an AI Bill of Rights (ethics and bias)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            2025 (draft):
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             FDA guidance Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           With the exception of the FDA guidance and BS/AAMI 34971 (both specific to medical devices), these documents are non-industry-specific.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Notably, ISO 42001 does not include the word quality, yet it contains many elements of a medical device QMS (document control, internal audit, etc.) and is expected to be integrated within it. Like the OECD document, it promotes transparency, fairness, accountability, and security, but takes a deeper dive into risk management. Rightly so.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           BS/AAMI 34971 specifically addresses risks arising from machine learning. It provides guidance on applying ISO 14971 to regulated AI medical technologies, without replacing it.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The FDA draft guidance outlines key information for marketing submissions. One of the FDA's strengths I've always appreciated is its commitment to transparency. The guidance underscores a Total Product Lifecycle (TPLC) approach for AI-integrated devices, highlighting transparency and bias mitigation. It adopts a patient/user-centred focus, requiring details on:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Intended users and user interface
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Instructions and information for users
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            User training requirements
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Device performance and AI training data (including representativeness)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Post-market surveillance (PMS) and performance monitoring
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           To further transparency, the FDA also requires a Public Submission Summary, offering stakeholders insight into design and validation.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           The EU AI Act
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
            The EU AI Act is comprehensive. It aims to provide a regulatory framework for AI within the EU, ensuring safety while respecting rights and values, using a risk-based approach. Implementation is staged over 6–36 months. As of 02 February 2025, obligations on prohibited AI practices and AI literacy are mandatory. Obligations for high-risk AI systems, including medical devices and IVDs, take effect on 02 August 2026. Only two guidance documents have been published so far; others are expected by August 2026.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Unlike the FDA guidance, the Act places obligations on both providers and deployers of AI systems:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Provider:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             develops an AI system or places it on the market under its own name.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Deployer:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             uses an AI system under its authority (excluding personal non-professional use).
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Article 26 requires deployers to:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ol&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Ensure use in accordance with instructions for use (IFU)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Assign human oversight to high-risk AI systems
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Monitor use and associated risks
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Report serious incidents to provider, importer, distributor, and market surveillance authority
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Undertake a data protection assessment
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Inform affected workers and representatives before use
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Inform natural persons how the AI system's output impacts them
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ol&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           These responsibilities aim at transparency, i.e. ensuring users and patients are aware AI is being used and how it may affect them. In theory, that is positive, but given my many years in MedTech, the following concerns arise:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Use in accordance with IFU:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             In reality, IFUs for MedTech devices are rarely read in clinical facilities. Do we really expect medical professionals and home users to read them? How will compliance be monitored?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Human oversight:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             This implies a "trust but verify" approach. How much oversight? By whom? Who decides? If constant verification is needed, what is the point of the AI?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Monitor risks:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Risk assessment is subjective. User A may assign lower risk than User B for the same product. If the manufacturer monitors itself, is that not marking its own homework while trying to eliminate bias?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Report serious incidents:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
              Is it realistic to expect users to know who the importer, distributor, and market surveillance authority are? A consultant using an AI device will not be the one reporting incidents. Hospitals are chaotic environments.
            &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Data protection assessment:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Who would do this? How would they know the software's limitations regarding data security? Full disclosure from providers is required, but the more risks they disclose, the less likely they are to sell. Commercial reality.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Inform workers and representatives:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Who determines what information is disclosed? How is this monitored over time as workers and users change?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Inform natural persons:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             As above. How will users or patients acknowledge consent to AI use and accept associated risks?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            The vision is commendable, but experience tells me reality may differ. Transparency is good, but it relies heavily on labelling and product information. This challenges ISO 14971's principle that information for safety is the last resort of risk mitigation, and for good reason: IFUs are rarely, if ever, read.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           To date, those classified as deployers have had no obligation to understand regulatory requirements for bringing a device to market. Any failure when used as intended has largely been on the provider. Do deployer responsibilities now mean a clinical facility may be held accountable under an Act they likely know nothing about? Do we expect nurses, doctors, and patients to read and understand the Act? Having worked in QARA for decades, I can say it is not an easy read. Plenty of caffeine was required and something stronger later in the evening.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           I have major concerns that placing such obligations on deployers may give providers an easy escape from mitigating risks 'as far as possible', preferably by design, as required by EU MDR/IVDR. A defence of "they didn't read the IFU" or "they didn't use it as intended" becomes more justifiable after an adverse event.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The FDA takes a more pragmatic approach. They state that transparency is context-dependent. They encourage designers to consider:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Where will the device be used, and what are the conditions?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            What else might users be doing simultaneously?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            How timely is the application of information?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            In what settings will the device output be viewed?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Will the users who interpret output be the same as those who operate the device?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           This is music to my ears. Anyone who has sat with me in a risk assessment session knows I always emphasise context of use, user profiles, what else users are doing (your device is not their focus!), and environmental limitations. Context is key—not only in AI transparency but in any MedTech risk assessment.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Machine Learning
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
             AAMI TIR66:2017 defines machine learning as the "function of a system that can learn from input data instead of strictly following a set of specific instructions." So, like us, AI "learns." Its brain is uploaded with existing knowledge (data) and computes based on input quality.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
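  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           To make that definition concrete, here is a minimal sketch in Python (assuming scikit-learn; the temperature readings and labels are made up purely for illustration). A conventional program follows a rule its programmer wrote; a machine-learning model infers the rule from labelled input data:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;pre&gt;&#xD;
# A minimal sketch, assuming scikit-learn is installed; the readings
# and labels below are made up purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# A conventional program strictly follows a specific instruction:
def rule_based(temp_c):
    return "fever" if temp_c >= 38.0 else "normal"  # threshold chosen by a programmer

# A machine-learning model instead infers the rule from labelled input data:
temps = [[36.5], [36.9], [37.2], [38.1], [38.6], [39.4]]
labels = ["normal", "normal", "normal", "fever", "fever", "fever"]
model = DecisionTreeClassifier().fit(temps, labels)  # the "learning" step

print(rule_based(38.3))            # fever
print(model.predict([[38.3]])[0])  # fever, but learned from the data
&lt;/pre&gt;&#xD;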
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Everyone reading this can recall good and bad teachers at school. I think back to algebra class. Everyone received the same teaching input. Some grasped it immediately; most, including myself, did not. Was that intelligence, or just different ways of learning and processing information? The input was the teaching. The desired output was full understanding by all pupils. It didn't work.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           After class, many of us tried to teach ourselves, with limited success. Then one student went home, and his civil engineer father explained how algebra was used and why—he put it into context. Let's call him Pupil A. Pupil A then explained the reasoning to Pupil B, who grasped it and explained to others. Some grasped it; others did not. We all started with the same knowledge, but our ability to draw conclusions varied. The initial teaching input did not consider context of use or user variability.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           This is pertinent to AI in MedTech. Critical factors for safety and effectiveness include:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Method of training
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Training data
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Understanding user/patient variability (skills, customs, race, predispositions)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Understanding use environments
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Understanding that risk mitigation by labelling really is a last resort
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Above all, data, the input, is king. All these factors are inextricably linked and carry many permutations of risk. For an AI product to work according to its intended use, all associated risks must be fully understood and "built in."
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Take an AI diagnostic system for melanoma. If the algorithm is trained on white Europeans but destined for a global population, the data is not representative. As a pale Scot who turns beetroot red under a full moon, I differ hugely from my Mediterranean wife. Our skin cancer risk profiles and ease of detection are totally different. Similarly, an AI device for diabetes trained on people in the Far East would not necessarily be appropriate for US or Western European populations.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
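  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           As a minimal illustration of that representativeness point (the counts and target mix below are entirely made up), one simple check is to compare the subgroup make-up of the training set against the population the device is intended for:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;pre&gt;&#xD;
# A minimal sketch; the skin-type counts and target mix are entirely
# made-up numbers, purely for illustration.
from collections import Counter

train_skin_types = ["I", "II", "II", "II", "II", "III", "III", "III"]  # hypothetical
target_mix = {"I": 0.10, "II": 0.25, "III": 0.30, "IV": 0.20, "V": 0.10, "VI": 0.05}

counts = Counter(train_skin_types)
n = len(train_skin_types)
for skin_type, expected in target_mix.items():
    observed = counts.get(skin_type, 0) / n
    # crude flag: observed share is under half of the intended-population share
    flag = " (under-represented)" if expected > 2 * observed else ""
    print(f"type {skin_type}: train {observed:.2f} vs target {expected:.2f}{flag}")
&lt;/pre&gt;&#xD;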
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           I am not anti-AI. 
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Far from it. AI has enormous potential in society and MedTech, but it needs careful management. Quality in healthcare AI does not happen by chance. It is only as good as the data used to feed it. It builds on existing quality concepts like risk analysis and training, but here, training is of the algorithm, not people, and the permutations of harm increase profoundly.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            MedTech practice is grounded in risk. "First, do no harm" acknowledges that every intervention carries risk. Effective management is essential, yet many companies lack robust understanding of their devices' true clinical and usability risks or the realities of the clinical environment. AI will introduce unprecedented complexity and new risk categories. Consider an AI system that trains continuously during clinical use: Who verifies this learning? Who is accountable? Ultimately, it comes down to controlling inputs, outputs, and variability.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Thankfully, BS/AAMI 34971 addresses some concerns by guiding what to consider when identifying risks. It states that people involved in risk assessment must have relevant knowledge of the data used to train, test, and validate the system. But this requires correct input data at the time of use (e.g., ethnicity, age). The algorithm may work, but ongoing training data may be erroneous, and this will be somewhat out of the developer's control.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Curious about numbers of AI-based devices approved/cleared,
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            I ironically used AI to do the task for me. Why? It saved hours of trawling databases, and if the numbers are somewhat inaccurate, no one gets harmed.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           In an unscientific study, I asked three AI platforms (Deepseek, Grok, ChatGPT): "How many AI-enabled medical devices have been cleared by the US FDA to date?" and "How many have been approved under EU MDR/IVDR?"
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           For FDA clearance:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            ChatGPT: 1400–1450 (to April 2026)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Grok: 1430–1451 (to mid-April 2026)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Deepseek: 1356 (to end of March 2026)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Deepseek and Grok provided further detail (radiology predominant). ChatGPT provided the most information on trends over time.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           For the EU, unsurprisingly, no platform could provide exact numbers due to the lack of an operational public database. Deepseek and ChatGPT stated numbers were unavailable and gave reasons. Grok provided estimates from multiple sources.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The same query yielded different results because algorithms and training differ. Using AI saved me hours or days of work, and I was grateful. It replaced a manual, tedious task that didn't require much intellect, just knowing where to look (knowledge). Time was the winner. Despite differing outputs, I was impressed. But these results also show that AI in MedTech is growing; even the FDA now uses AI in technical reviews.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Should we be scared?
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            To a point, yes. At a recent MedTech expo, many expressed concerns about the speed of AI's takeover, especially in MedTech. The human reasoning element seems absent. Computational power is quick and efficient, but reasoning power is debatable. We will rely less on highly trained clinicians for diagnosis or procedures. It is not all binary zeros and ones: there are 0.27s, 0.35s, 0.52s, 0.86s. Those grey areas rely on clinical oversight, justification, reasoning, and thought. Ask any clinician whether they have the same knowledge they had when they originally qualified; they will undoubtedly say no. Most of their clinical reasoning has been built on experience away from the textbook.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
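  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           To illustrate those grey areas, here is a minimal sketch with arbitrary, hypothetical thresholds (not taken from any regulation or guidance): an AI output is rarely a clean zero or one, so a sensible deployment routes mid-range scores to a clinician for that reasoning and judgement:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;pre&gt;&#xD;
# A minimal sketch; the thresholds are arbitrary and hypothetical,
# not taken from any regulation or guidance.
def triage(score):
    if score >= 0.90:
        return "report as positive (still subject to clinical confirmation)"
    if score >= 0.10:
        return "grey zone: refer to a clinician for reasoning and judgement"
    return "report as negative (spot-checked through performance monitoring)"

for s in (0.27, 0.35, 0.52, 0.86, 0.95):
    print(s, "->", triage(s))
&lt;/pre&gt;&#xD;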
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Multidisciplinary team meetings have long been common practice in medicine. They discuss grey areas, gather opinions that matter, and include those who have seen something before that triggers alarm bells or offers an alternative perspective. Clinicians know the risks: they spent years training, are bound by the Hippocratic oath, and undergo periodic peer assessments. Should AI replace that?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Curious for clinical insight, I gathered a clinician's viewpoint from consultant gynaecologist Dr. Maria Vella, who said:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           "We increasingly hear how, in a cash-strapped health service, AI can save money, increase productivity, and improve accuracy especially in diagnostics. AI will undoubtedly help in many clinical scenarios, but its use needs careful regulation and governance.
           &#xD;
      &lt;br/&gt;&#xD;
      
            Areas like breast imaging (screening mammography) could deploy AI successfully, processing large volumes at lower cost. Dermatology is another trial area.
           &#xD;
      &lt;br/&gt;&#xD;
      
          The downside: human bodies have subtle differences. The experienced clinician's eye is essential to differentiate normal variants from early pathological changes. Using AI in diagnostics could, at best, increase clinician workload (anything not 'bog-standard' normal requires investigation, increasing patient anxiety). At worst, subtle early changes, critical for identifying disease, could be ignored. Any process deploying these systems needs careful review and appropriate governance before becoming the default service."
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           To conclude:
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             The potential for AI in MedTech is enormous. It will save time, save money, and I hope save lives. But we need to proceed with caution, and a lot of it. The AI Act, FDA guidance, and others are steps in the right direction, but we seem to be running before we can walk. My fear is that society is being driven by AI; it is controlling us, not the other way around. The appeal of time and cost savings in cash-strapped health systems is real, but so are the risks. We must not lose sight of the bigger picture. 
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           We may be ready for AI to provide recommendations in MedTech that are verified by trained clinicians. But there are genuine concerns about AI devices that train on the job with no human validation check, and those that make final clinical decisions or perform surgical procedures. Lives are at stake. I hope that in the rush to adopt AI in MedTech we avoid a disaster that forces us to reassess its use.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Technological advancement only creates possibility. We cannot just drop AI into practice and assume a positive clinical impact. Positive impact in any MedTech requires deliberate design based on thorough risk management, and a fully weighted benefit-risk assessment throughout the entire lifecycle, not just getting the product to market. This is especially the case for AI.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;br/&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The tech revolution is here and it is big. But unless we close the gaps, we will miss out on the value.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/79b92f3e/dms3rep/multi/AI+in+Medtech.png" length="2532781" type="image/png" />
      <pubDate>Tue, 14 Apr 2026 10:23:33 GMT</pubDate>
      <guid>https://www.practicalqara.com/ai-in-medtech</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/79b92f3e/dms3rep/multi/AI+in+Medtech.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/79b92f3e/dms3rep/multi/AI+in+Medtech.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>Quality and Regulatory hurdles in a Robotic Start-Up. In the Beginning…</title>
      <link>https://www.practicalqara.com/quality-and-regulatory-hurdles-in-a-robotic-start-up-in-the-beginning</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Quality and Regulatory hurdles
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Stephen Smith, Co-Founder of Practical QARA, discusses some of the QA and RA hurdles in getting a surgical robot to market. The blog is based on real-world experience.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           By nature, medical device start-ups are often spin-outs characterised by academics and people who may not have had exposure to the joys of getting a new device to market, and that’s OK. The drawback is that they often do not realise how hard it is, especially for a device as complex as a surgical robotic system.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           So, let’s say you have a great idea for a new robotic device, or any medical device for that matter. You have thought about the technology, know roughly what type of surgery you think the system is capable of, and set about building a prototype. You build your prototype, everything is good, so you start to think about upscaling and selling. You desperately need that return on investment and to keep that positive news flowing to the investors. Time is of the essence; we get it, we have been there with robotic companies and many other start-ups.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           To quote Elon Musk, ‘Prototypes are easy, production is hard’. He is right, but are medical device prototypes that easy? Practically, yes. It is relatively easy to knock up a few in a lab and get them working how you think they need to be working. The fun comes when you want to up-scale and sell them. Such challenges increase exponentially with device complexity. Both authors have seen many start-ups fail as they do not consider key elements at the prototype stage, such as:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Can you actually manufacture it at scale? Building a prototype in a lab by R &amp;amp; D scientists and engineers is one thing; being able to mass-produce with reproducibility and reliability is quite another.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
             Do you
            &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            really
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
             know what the customer wants? It is often totally different to what you think they want.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Who will be using your robot? Yes, obviously surgeons, but what type of surgeons? Surgeons with experience in robotics, general surgeons, specialty surgeons, surgeons who have just qualified?  Do surgical practices differ according to territories in which you want to sell it?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
            What is the kind of environment that the device will be used in, the extremes of those environments, the chaos?
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Introducing Quality Assurance (QA) and Regulatory Affairs (RA).
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            And then there is the often neglected aspect of QA and RA. First, a little perspective and food for thought. ‘First, do no harm’ is the Hippocratic Oath by which medics must abide in practice, and we think it is fair to say that nobody could argue with this; we are all glad this is the case. In essence, doctors/surgeons perform three tasks: they
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Diagnose,
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             they
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Treat
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            , and they
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Monitor.
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             They go through many years of training and theoretical and practical examinations before they are let loose on patients, such that the patient does not come to any harm under their care. And again, we are all kind of glad they do. They are also governed by bodies such as the General Medical Council in the UK. In essence,
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           medical devices
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             perform the same functions, they
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Diagnose
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            , they
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Treat
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            , they
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Monitor
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           , yet we as device developers and manufacturers do not need to undergo any medical training or undertake exams in order to diagnose, treat or monitor those on the receiving end of our devices. Now think about how much medics rely on devices either to make their decisions in their entirety or to aid them. Scary?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Let’s now introduce QA and RA. Whilst QA and RA arguably are different disciplines, they are inextricably linked in practice.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           With an objective of ‘first, do no harm’, they are essentially there to make sure our devices do what we say they do, and that patients will not be in a worse state after using our device than before.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Let’s start with Quality Assurance where the word ‘Assurance’ is key. How will you ‘assure’ the quality of your system? Quality is about knowing what the customer wants and having a product that meets those requirements,
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           first time
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            ,
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           every time
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           . ‘Assuring’ quality can only come about by reducing sources of variability in the design and the manufacturing process; so we first need to know where these sources of variability originate, how to address them, and how we will verify our fix. There is also the not-insignificant requirement of a Quality Management System (QMS). As the title implies, this system is supposed to assure quality for all products, covering aspects such as design control, supplier control and internal audits – yes, you will need to do these too. Your quality system will also need to be certified. Fortunately, the need for a QMS seems to be widely acknowledged at an early stage by execs, but having one which is compliant is one thing; having one which is compliant and effective (I won't bog you down) is quite another.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
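  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           As a small, concrete example of verifying a fix (the measurements and specification limits below are made up), a before-and-after process capability check is one common way to show that a source of variability really has been reduced:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;pre&gt;&#xD;
# A minimal sketch, with made-up measurements and spec limits: a
# before/after process-capability (Cpk) check on a dimension in mm.
from statistics import mean, stdev

def cpk(samples, lsl, usl):
    m, s = mean(samples), stdev(samples)
    return min(usl - m, m - lsl) / (3 * s)

lsl, usl = 9.5, 10.5                                # hypothetical spec limits (mm)
before = [9.6, 10.4, 9.8, 10.3, 9.7, 10.2]          # wide spread
after = [9.95, 10.05, 10.00, 9.98, 10.02, 10.01]    # after the fix
print(f"Cpk before: {cpk(before, lsl, usl):.2f}, after: {cpk(after, lsl, usl):.2f}")
&lt;/pre&gt;&#xD;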
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Regulatory Affairs is primarily about gaining approval so your device can be marketed. There are many regulations, and the more complex your device, the more regulations you will have to comply with. Regulations are getting more stringent. Take the introduction of the EU Medical Device Regulation (2017/745), for instance. This has resulted in a significant increase in the level of pre-market scrutiny required to get your device to market. And where there is increased scrutiny, the costs also increase.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Regulations will vary according to where you want to sell, but taking the EU as an example, your robot and documentation will need to demonstrate electrical safety, most commonly to the EN60601 series of standards. If it has batteries in back-up systems and is sold in the EU, you will need to comply with the Battery Regulation 2023/1542. As an invasive device you will need to show biocompatibility to the ISO10993 series, usability to the IEC62366 series, software to the EN62304 standard, clinical investigations to ISO14155, risk to ISO14971, the WEEE regulations, the RoHS regulations… to name a few: but you get the picture. Getting regulatory clearance for a robot is no mean feat and will take time... and a lot of money.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           The Pitfalls
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           You may think your robot prototype is great, and it probably is. It may work great in the lab, and may have worked great in your animal and cadaver studies, but... if it does not meet biocompatibility or electrical safety requirements, it isn’t going anywhere near a patient. Many companies fail to consider the requirements of standards or regulations in good time and/or fail to keep up with regulatory changes. Central to your device will be clinical evaluations. How will you do these? What are your determinations of safety and effectiveness? What are your end points? You’ll also need ethical approvals: is the population of your trial(s) representative of your target population?
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
            
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The list goes on, and will need to be considered at the very early stages.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Central to QA and RA is a four-letter word ending in K that is pivotal to success, or failure – RISK. Getting back to the Hippocratic Oath mentioned above, in order to ‘do no harm’ we must know what the risks of our device are and mitigate those risks accordingly. Regulations are ultimately there to protect the patient and user from risk, and there are many different types of risk. ISO14971 is
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           the
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      
           standard for medical device risk, and whilst not strictly deemed obligatory, it is expected, so you had better have a pretty solid rationale for not adhering to it. Central to this standard is the concept of ‘Generally Acknowledged State of the Art’. You will be expected to have conducted a thorough analysis of, and to understand, the Generally Acknowledged State of the Art in relation to the Intended Use and Indications of your robot, and how your device will present a favourable benefit-risk assessment against it. It is all about understanding and demonstrating device safety.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The severity of harm, if your robot malfunctions, can be catastrophic, and one malfunction can put an end to your venture in an instant. As these are highly complex devices, the potential points of failure will run into the thousands, if not tens of thousands.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Your robotic surgical instruments are where the rubber meets the road. Whilst they may not be the most complex part of your system, they probably present the highest risks. Each instrument alone is highly complex and will present some of the biggest challenges. Each cable, mechanical joint and electrical connection will present single points of failure. But you will also need to consider multiple points of failure. As an example, if the user misses the port (it happens!) and the device bends back on itself, this may cause a cable to break. But will that one break in a cable result in additional strain on the others? Will an unforeseen surge in electrical current during electrocautery result in weakening of cables? Will too high a torque from the robotic arm impact any haptic feedback through the instrument, or result in parts becoming detached from the instrument?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           There are countless permutations and combinations of device failure, and whilst we cannot be expected to foresee each one, we will be expected to have made a thorough attempt and to demonstrate due diligence in the process.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
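  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           To put a rough number on ‘countless’ (the parts count below is purely hypothetical), the combinations to consider explode as soon as you look beyond single points of failure:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;pre&gt;&#xD;
# A minimal sketch; 2,000 is a purely hypothetical count of cables,
# joints and connections, chosen only to show the scale of the problem.
from math import comb

single_points = 2000
for k in (1, 2, 3):
    print(f"{k}-point failure combinations: {comb(single_points, k):,}")
# 1 -> 2,000   2 -> 1,999,000   3 -> 1,331,334,000
&lt;/pre&gt;&#xD;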
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Aside from risks relating to the device failing from a technical perspective, consideration of usability risks and the operating environment is equally critical and will receive scrutiny from regulators, for good reason. Whilst it is relatively safe to assume the users are all trained to a similar level clinically, there are other considerations, such as: are they as tech-savvy as each other? One user might love technology and be a huge fan of robotics, having used them for years... whilst another may still be coming to terms with the steam engine, being a complete technophobe and avoiding technology like the plague.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Both surgeons will present completely different risk profiles to the device but will likely fit your user profile. It is essential that you get input from extremes of users when analysing your risk.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           And how do we deal with risks once identified, and then verify any mitigations put in place? If, for example, in the case of a fault, it was decided that an audio alarm would sound... how many other alarms or noises will be present in the operating environment? How will the surgeon know how to react on being made aware of an alarm? If a visual alarm, how is it distinguishable from other flashing warnings? What if the user is colourblind?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Put simply, if you do not comprehensively analyse risks and include sound, verified risk mitigations in your design, it will come back to bite you by either not getting past regulatory scrutiny or, worse, resulting in an adverse event and recall.
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Not getting approval in the first place is actually the lesser of the two evils.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           Whilst investors might not be best pleased with a delay to approval, they will be less pleased with an adverse event where a patient is harmed.
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
           I have seen devices and start-ups fail simply because they neglected to adequately consider design risk, user needs, user variability, the use environment, and multiple points of failure.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           What We Suggest.
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           With a combined 50+ years in devices, we have been there, done it, and have the scars and grey hairs to prove it. With that in mind, we offer the following advice:
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ul&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Consider the Regulator(s) as a CUSTOMER you need to satisfy as much as those buying the system. If there is a requirement to make your surgical graspers from a certain type of material, that is a design input. If a regulation or guidance requires your instrument cables to have a specific tensile strength, that is a design input. If an audible warning must reach a certain decibel level for the use environment, that is a design input... you get the picture (see the traceability sketch after this list).
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Know your user(s). You may think you know how the device will be used, or want it used in a certain way... but the harsh reality is, it won't be. Operating theatres can be chaotic, and your robot is not the centre of attention. It needs to be easy to use for all users, including the technophobes. If you want something broken in ways you cannot conceive, give it to a medic. If you want to see the device used incorrectly as well as broken in inconceivable ways, give it to two medics. Usability is critical.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Know your applicable regulations, standards, and guidance documents from the get-go. Quality and regulatory requirements are dynamic, and you need to keep abreast of changes and trends. Many standards are not cheap either, so budget for them. Guidance documents are plentiful and useful; use them. Whilst they are termed 'guidance' and are therefore not obligatory, many inspectors treat them as mandatory.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Many requirements in regulations and standards can be interpreted and applied in different ways, and you need a sound rationale showing how you interpreted and applied them. QA and RA can help with this, but it is not ultimately their responsibility. Whilst they might hold scientific, engineering, or medical degrees... they are not experts in everything and cannot be expected to understand all of the technical jargon. But they will help you.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Unfortunately, there is a lot of inconsistency in what reviewers, inspectors, and auditors deem acceptable, and most have their own preconceived ideas. So be prepared to present your case in a clear, concise manner. Whilst they may know the area they are inspecting, they will not know the product as well as you do. Know your risks and mitigations, and be prepared to defend them.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Lead times to regulatory clearance can be long as well as very expensive, so it is essential that you build realistic timelines into your project plans and budgets. Keep in mind that quoted times to clearance are often best case; they will likely take much longer, with added costs for additional reviews. In the EU and UK, Notified Bodies and Approved Bodies are commercial entities. They charge a fortune, with questionable reliability and consistency, and they make more money from more visits and reviews... so it pays to get it right first time.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;span&gt;&#xD;
        
             Importantly, QA and RA personnel are not the enemy! Despite how it may appear at times, they want the same outcome as you do and are only doing their job. If they say something needs to change, it is because, based on their experience, they believe a regulator will require it. It is not the QARA team you need to convince. Remember, they do not make the rules, nor do they necessarily agree with many of them. Their job is not an easy one and they take a lot of flak, so be nice to them. (Please.)
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ul&gt;&#xD;
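  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           To illustrate the 'regulator as customer' point above, here is a minimal sketch of a design-input traceability record in Python. The requirement sources, values, and report references are hypothetical placeholders, not real citations.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;pre&gt;&#xD;
# Minimal design-input traceability sketch. Sources and report
# references are hypothetical placeholders, not real citations.

design_inputs = [
    {
        "source": "hypothetical guidance on instrument cables",
        "input": "cable tensile strength of at least X N",
        "verification": "tensile test report TR-001",
    },
    {
        "source": "hypothetical alarm guidance for the use environment",
        "input": "audible alarm level suited to theatre noise",
        "verification": None,   # gap: no verification assigned yet
    },
]

# Flag any design input not yet traced to a verification --
# the kind of gap a reviewer will otherwise find for you.
for item in design_inputs:
    if item["verification"] is None:
        print("UNVERIFIED:", item["input"], "| from:", item["source"])
  &lt;/pre&gt;&#xD;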
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Meeting QA and RA requirements is more than a tick-box exercise. A poor device may still meet the regulations, just as a poor quality system can still be certified. But a poor device will not sell and may result in an adverse event, and a poor quality system will bury you in bureaucracy. Get it right from the start. It all starts with
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           design.
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Whilst the above is targeted toward a surgical robot, it applies to any medical device or IVD. A blog on devices from a clinician's point of view, an area that is often overlooked, will follow soon.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/79b92f3e/dms3rep/multi/Screenshot+2025-05-27+at+14.44.10.png" length="490907" type="image/png" />
      <pubDate>Wed, 06 Nov 2024 13:45:22 GMT</pubDate>
      <guid>https://www.practicalqara.com/quality-and-regulatory-hurdles-in-a-robotic-start-up-in-the-beginning</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/79b92f3e/dms3rep/multi/Screenshot+2025-05-27+at+14.44.10.png">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/79b92f3e/dms3rep/multi/Screenshot+2025-05-27+at+14.44.10.png">
        <media:description>main image</media:description>
      </media:content>
    </item>
  </channel>
</rss>
