Artificial intelligence and machine learning (AI/ML) are increasingly transforming the healthcare sector. From spotting malignant tumors to reading CT scans and mammograms, AI/ML-based technology can be faster and more accurate than traditional devices – or even the best doctors. But along with the benefits come new risks and regulatory challenges.

Given the significant value of these adaptive systems, a fundamental question for regulators today is whether authorization should be limited to the version of the technology that was submitted and evaluated as safe and effective, or whether to permit the marketing of an algorithm whose greater value lies in its ability to learn and adapt to new conditions.

The researchers examined the risks associated with this update problem, identifying the specific areas that require focus and ways in which the challenges could be addressed. The key to strong regulation, they say, is to prioritize continuous risk monitoring.

As regulators move forward, the researchers recommend that they develop new processes to continuously monitor, identify, and manage the associated risks. They suggest key elements that could support this, some of which may in the future themselves be automated using AI/ML — possibly with AI/ML systems monitoring each other.

While the paper draws largely from the FDA’s experience in regulating biomedical technology, the lessons and examples have broad relevance as other countries consider how to shape their own regulatory architecture. They are also important for any business that develops AI/ML-embedded products and services.