The use of artificial intelligence also poses risks. Researchers from the USA are now calling for warnings for AI systems, similar to those for prescription drugs.
AI systems are becoming increasingly sophisticated and are therefore being used more and more in safety-critical settings, including healthcare. Researchers in the USA are now calling for these systems to be deployed appropriately in order to ensure their "responsible use" in the healthcare system.
In a commentary in the journal Nature Computational Science, MIT Professor Marzyeh Ghassemi and Boston University Professor Elaine Nsoesie therefore call for warnings similar to those on prescription medications.
Do AI systems in healthcare need warnings?
Devices and medications used in the US healthcare system must first go through a certification process, handled for example by the federal Food and Drug Administration (FDA). Once they have been approved, they continue to be monitored.
However, models and algorithms – with and without AI – largely bypass this approval and long-term monitoring, Ghassemi criticizes. "Many previous studies have shown that predictive models need to be evaluated and monitored more carefully," she explains in an interview.
This applies especially to newer generative AI systems. Existing research has shown that these systems are "not guaranteed to work appropriately, robustly or unbiased". This can lead to biases that remain undetected due to a lack of monitoring.
This is what the labeling of AI could look like
Ghassemi and Nsoesie are therefore calling for responsible-use instructions for artificial intelligence. These could follow the FDA's approach to prescription labels.
As a society, we have now understood that no pill is perfect – there is always some risk. We should have the same understanding of AI models. Every model – with or without AI – is limited.
These labels could make clear when, where, and how an AI model is intended to be used. They could also include information about the period in which the models were trained and on which data.
According to Ghassemi, this is important because AI models trained on data from a single location tend to perform worse when they are used elsewhere. If users have access to information about the training data, for example, it could sensitize them to "potential side effects" or "undesirable reactions".
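To make the idea more concrete, below is a minimal sketch of how such a responsible-use label might be represented as a machine-readable data structure, similar in spirit to a model card. All field names and example values are hypothetical illustrations, not part of the authors' proposal.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ResponsibleUseLabel:
    """Hypothetical 'label' for a clinical AI model.

    The fields are illustrative only; they mirror the kinds of information
    a label could carry: intended use, training data provenance, and known
    limitations (the model's "potential side effects").
    """
    model_name: str
    intended_use: str                # what the model is meant to be used for
    approved_settings: List[str]     # where it was validated for use
    training_data_sources: List[str] # which data it was trained on
    training_period: str             # when that training data was collected
    known_limitations: List[str] = field(default_factory=list)


# Example label, analogous to a prescription insert (values are invented)
label = ResponsibleUseLabel(
    model_name="sepsis-risk-v2",
    intended_use="Early warning of sepsis risk in adult ICU patients",
    approved_settings=["Adult ICU, tertiary care hospitals"],
    training_data_sources=["Single-site EHR data from one hospital"],
    training_period="2015-2019",
    known_limitations=[
        "Trained at a single site; performance may degrade elsewhere",
        "Not evaluated on pediatric patients",
    ],
)

# Surface the "warnings" wherever the model is deployed
for limitation in label.known_limitations:
    print(f"Warning: {limitation}")
```

A structured format like this would let deployment tooling surface the warnings automatically, much as a pharmacy prints the package insert with every prescription.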
As a tech industry expert, I believe that AI systems do need warnings similar to those on medications. While AI systems can greatly benefit society in numerous ways, they also have the potential to cause harm if not used properly. Just as medications carry warnings about potential side effects and proper usage, AI systems should come with warnings about their limitations, potential biases, and ethical considerations.
By providing warnings, users can be better informed about the risks and limitations of AI systems, allowing them to make more informed decisions about how to use and interact with these technologies. Additionally, warnings can help to promote transparency and accountability in the development and deployment of AI systems, ultimately leading to more responsible and ethical use of these powerful technologies.
Overall, I believe that incorporating warnings into AI systems is an important step towards ensuring their safe and responsible use, and ultimately maximizing the benefits that they can bring to society.