Background: Artificial intelligence has become an important tool in medicine, particularly in radiology. Objective: The aim of this article was to investigate legal liability for AI-assisted radiographic diagnosis errors under French, Jordanian, and UAE law. Methods: The authors conduct a comparative analysis of medical liability in the UAE, France, and Jordan, focusing on how diagnostic errors are handled and on the role artificial intelligence plays in the diagnosis, examining what each jurisdiction requires to establish wrongdoing before liability attaches. Results: In the United Arab Emirates, fault is closely tied to the concept of “gross medical error,” such as the misuse of a tool that causes serious harm. In France, liability is assessed against the professional standard of care and the practitioner’s professional competence. In Jordan, the legal framework is outdated and reflects an ongoing regulatory transition; the criminal liability framework, comprising the Penal Code and the Medical Liability Law, protects patients more than it protects physicians. Although these systems are intended to balance patient protection with the interests of caregivers, AI systems remain imperfect and cannot be relied upon uncritically, which strains that balance. Conclusion: Because AI-assisted diagnosis is an important emerging technology, several recommendations should be adopted, including a review of existing regulations to provide an adequate legal framework for this technology.
Key words: Artificial intelligence, radiology, criminal liability, medical law, comparative legal analysis