Machine learning AI: What makes it different?

Systems that simply imitate humans are not new to the medical field, of course. Since transistor-based electronics became commercially widespread in the 1960s, computational medical devices have increasingly mimicked human behavior and actions. Automatic blood pressure monitors (sphygmomanometers) imitate the actions of trained clinicians in detecting and reporting the Korotkoff sounds that signify systolic and diastolic blood pressures. Portable defibrillators evaluate heart waveforms to determine when defibrillation is necessary and can then act to deliver the needed defibrillation.

Devices like these, by supplementing or in some instances replacing direct clinician involvement, have already expanded the availability of care outside of healthcare facilities, to homes and workplaces, as well as to areas and regions where trained clinicians are rare or absent. Such technologies, however, do not act independently of human reasoning, but instead utilize previously validated clinical protocols to diagnose medical conditions or deliver therapy. They do not “think” for themselves in the sense of understanding, making judgements, or solving problems[1]; rather, they are static rules-based systems,[2] programmed to produce specific outputs based on the values of received inputs.

While such systems can be very sophisticated, the rules they employ are static: they are not created or modified by the systems themselves. Their algorithms are developed from documented and approved clinical research and then validated to produce expected (i.e., predictable) results. In this respect, rules-based AI systems differ from the computational and electronic medical devices in use since the 1960s mainly in their complexity.
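To make this concrete, here is a minimal sketch of a static rules-based algorithm, loosely inspired by an automatic blood pressure monitor. The function name, categories, and threshold values are illustrative assumptions for this post, not a validated clinical protocol.

```python
# A hypothetical static rules-based algorithm. The rules are fixed at
# design time and never change in response to the data the device sees;
# the thresholds below are illustrative, not validated clinical limits.

def classify_blood_pressure(systolic_mmhg: int, diastolic_mmhg: int) -> str:
    """Map a reading to a category using fixed, pre-programmed rules."""
    if systolic_mmhg >= 140 or diastolic_mmhg >= 90:
        return "high"
    if systolic_mmhg < 90 or diastolic_mmhg < 60:
        return "low"
    return "normal"

# The same inputs always yield the same output: the device cannot revise
# these rules itself, so its behavior is fully predictable and testable.
print(classify_blood_pressure(150, 95))  # -> high
```

However complex such a system grows, its behavior can always be traced back to rules a human wrote, documented, and validated.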

There are other types of AI that utilize large data sets and complex statistical methodologies to discover new relationships between inputs, actions, and outcomes. These data-driven or machine learning systems[3] are not explicitly programmed to provide pre-determined outputs; rather, they are heuristic, with the ability to learn and make judgements. In short, machine learning AI systems, unlike simple rules-based systems, are cognitive in some sense and can modify their outputs accordingly. For the purposes of this blog post, we have separated data-driven/machine learning AI into two groups: locked models, which are unable to change without external intervention, and continuous learning (or adaptive) models, which modify outputs automatically in real time. In practice, there are likely to be several levels of change control for AI, ranging from traditional, already familiar approaches to accelerated approaches that may need additional levels of control.

The more sophisticated of these data-driven systems (i.e., super-intelligent AI) can surpass human cognition in their ability to process enormous, complicated data sets and to engage in higher levels of abstraction. Utilizing multiple layers of statistical analysis and deep learning/neural networks, these systems act as black boxes,[4] producing protocols and algorithms for diagnosis or therapy that are not readily understandable by clinicians or explicable to patients.

Data-driven machine learning AI systems can be further divided into locked models and continuous learning models:

  • Locked models[5] employ algorithms that are developed from training data using machine learning and then fixed, so that neither the internal algorithms nor the system outputs change automatically (though changes can be made in a controlled, stepwise manner).
  • Continuous learning models (or adaptive models)[6] use newly received data to test the assumptions underlying their operation in real-world use; when potential improvements are identified, the systems are programmed to automatically modify their internal algorithms and update their external outputs (a simplified code sketch contrasting the two approaches follows this list).
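To illustrate where this change-control boundary sits, the sketch below contrasts the two under deliberately simplified, hypothetical assumptions: the “model” is a single decision threshold learned from training data, and the update rule is an invented nudging heuristic rather than anything drawn from a real diagnostic system.

```python
# Hypothetical contrast between a locked and a continuous learning model.
# The "model" is just a decision threshold placed midway between the
# average measurements of two training groups; real systems are far more
# complex, but the change-control difference is the same.

class LockedModel:
    """Trained once; the learned threshold never changes in the field."""

    def __init__(self, healthy: list[float], diseased: list[float]):
        # One-off training step: fix the threshold from the training data.
        self.threshold = (sum(healthy) / len(healthy)
                          + sum(diseased) / len(diseased)) / 2

    def predict(self, measurement: float) -> str:
        return "diseased" if measurement >= self.threshold else "healthy"


class ContinuousLearningModel(LockedModel):
    """Same decision rule, but the threshold adapts to confirmed outcomes."""

    def update(self, measurement: float, confirmed_label: str,
               learning_rate: float = 0.05) -> None:
        # Nudge the threshold toward each newly confirmed case, so the
        # deployed model drifts away from its originally validated state.
        direction = -1.0 if confirmed_label == "diseased" else 1.0
        self.threshold += learning_rate * direction * abs(measurement - self.threshold)


healthy, diseased = [4.1, 4.6, 5.0], [7.9, 8.4, 9.1]
locked = LockedModel(healthy, diseased)
adaptive = ContinuousLearningModel(healthy, diseased)
adaptive.update(6.0, "diseased")              # new field data modifies the adaptive model
print(f"locked:   {locked.threshold:.3f}")    # unchanged since training
print(f"adaptive: {adaptive.threshold:.3f}")  # has drifted after the field update
```

The locked model's threshold can still change, but only through an explicit external step (retraining followed by revalidation), whereas the adaptive model rewrites itself in use, which is precisely what raises the additional change-control questions noted above.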

The special characteristics of machine learning and deep learning AI systems differentiate them from rules-based systems and more traditional medical devices in specific ways. First, they learn: these systems not only treat patients but can assess the results of treatment, both for individuals and across populations, and make predictions about how treatment could be improved to achieve better patient outcomes. Second, they are capable of autonomy: some of these systems have the potential to change (and presumably improve) their processes and outputs without direct clinical oversight or traditional validation. Third, because of their sophisticated computational abilities, the predictions these systems develop may, to some degree, be inexplicable to patients and clinicians. Combined, these characteristics blur the essential nature of the devices themselves, changing them from simple tools used under the direction of clinicians into systems capable of making autonomous clinical judgements about treatment.



[1] One definition of “think” in the Cambridge Dictionary is “to use one’s mind to understand.”

[2] Daniels et al., Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care (white paper), Duke-Margolis Center for Health Policy, 2019, p. 10.

[3] Ibid. For the purposes of this blog post, the terms data-driven and machine learning are synonymous, as are the terms continuous learning and adaptive models.

[4] The metaphor of the “black box” is used widely and with different connotations, but with respect to AI we are not talking simply about a lack of visibility into mechanisms or calculations, but also about the inscrutability of the basic rationale for performance.

[5] The term “locked” with respect to AI has been defined as “a function/model that was developed through data-based AI methods, but does not update itself in real time (although supplemental updates can be made to the software on a regular basis).” [Source: Daniels et al., Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care.] A “locked” data-driven algorithm, even if externally validated, is not a rules-based algorithm, because that locked AI algorithm is not based on current, rules-based medical knowledge.

[6] Daniels et al., 2019, p. 12.

This is an excerpt from the BSI/AAMI white paper “Machine learning AI in medical devices: adapting regulatory frameworks and standards to ensure safety and performance”. To browse our collection of medical device white papers, please visit the Insight page on the Compliance Navigator website.

The Compliance Navigator blog is issued for information only. It does not constitute an official or agreed position of BSI Standards Ltd or of the BSI Notified Body. The views expressed are entirely those of the authors.