A new whitepaper coauthored by researchers at the Vector Institute for Artificial Intelligence examines the ethics of AI in surgery, making the case that surgery and AI carry similar expectations but diverge with respect to ethical understanding. Surgeons are faced with moral and ethical dilemmas as a matter of course, the paper points out, whereas ethical frameworks in AI have arguably only begun to take shape.

In surgery, AI applications are largely confined to machines performing tasks controlled entirely by surgeons. AI might also be used in a clinical decision support system, and in these circumstances, the burden of responsibility falls on the human designers of the machine or AI system, the coauthors argue.

Privacy is a foremost ethical concern. AI learns to make predictions from large data sets (patient data, in the case of surgical systems), a practice often described as being at odds with privacy preservation. The Royal Free London NHS Foundation Trust, a division of the U.K.'s National Health Service based in London, provided Alphabet's DeepMind with data on 1.6 million patients without their consent. Separately, Google, whose health data-sharing partnership with Ascension drew scrutiny in November 2019, abandoned plans to publish scans of chest X-rays over concerns that they contained personally identifiable information.

Laws at the state, local, and federal levels aim to make privacy a mandatory part of compliance management. Hundreds of bills addressing privacy, cybersecurity, and data breaches are pending or have already been passed across the 50 U.S. states, territories, and the District of Columbia. Arguably the most comprehensive of them, the California Consumer Privacy Act, was signed into law in June 2018. That's on top of the national Health Insurance Portability and Accountability Act (HIPAA), which requires companies to seek authorization before disclosing individual health information. And international frameworks like the EU's General Data Protection Regulation (GDPR) aim to give consumers greater control over personal data collection and use.

But the whitepaper coauthors argue the measures adopted to date are limited by jurisdictional interpretations and offer incomplete models of ethics. For instance, HIPAA focuses on health care data from patient records but doesn't cover data generated outside of covered entities, for example by life insurers or fitness-tracker apps. Moreover, while the duty of patient autonomy implies a right to explanations of decisions made by AI, frameworks like GDPR mandate only a "right to be informed" and appear to lack language establishing well-defined safeguards against AI decision-making.

Beyond this, the coauthors sound the alarm about the potential effects of bias on AI surgical systems. Training data bias, which concerns the quality and representativeness of the data used to train an AI system, could dramatically skew preoperative risk stratification. Underrepresentation of certain demographics might likewise produce inaccurate assessments, driving flawed decisions about which patient is treated first or offered extensive ICU resources. And contextual bias, which occurs when an algorithm is employed outside the context of its training, could cause a system to ignore nontrivial caveats like whether a surgeon is right- or left-handed.
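
To make the underrepresentation failure mode concrete, here is a minimal sketch, with entirely synthetic data and an assumed 95/5 cohort split (none of it from the whitepaper), showing how a model trained mostly on a majority subgroup can score well on that group while quietly underperforming on a minority subgroup whose risk profile differs:

```python
# Hypothetical sketch: how underrepresentation in training data can surface
# as a per-subgroup performance gap. Data, model, and split are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    # Two clinical features; `shift` changes the risk relationship for a subgroup.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# The majority subgroup dominates training (95%); the minority subgroup
# follows a different risk relationship and is underrepresented (5%).
X_maj, y_maj = make_cohort(9500, shift=0.2)
X_min, y_min = make_cohort(500, shift=1.5)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Held-out cohorts of equal size reveal the gap that a pooled metric hides.
for name, (X, y) in {"majority": make_cohort(2000, 0.2),
                     "minority": make_cohort(2000, 1.5)}.items():
    print(f"{name} accuracy: {model.score(X, y):.3f}")
```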

Methods to mitigate this bias exist, including ensuring variance in the data set, guarding against overfitting to training data, and keeping humans in the loop to examine new data as the system is deployed. The coauthors advocate these measures, along with broad transparency, to prevent patient autonomy from being undermined. "Already, an increasing reliance on automated decision-making tools has reduced the opportunity of meaningful dialogue between the healthcare provider and patient," they wrote. "If machine learning is in its infancy, then the subfield tasked with making its inner workings explainable is so embryonic that even its terminology has yet to recognizably form. However, several fundamental properties of explainability have started to emerge … [that argue] machine learning should be [simulatable], decomposable, and algorithmically transparent."
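
As a rough illustration of two of those mitigations, the sketch below pairs a simple overfitting check (the train-versus-validation accuracy gap) with a human-in-the-loop gate that defers uncertain predictions to a clinician. The data, model choice, and 0.4 to 0.6 confidence band are illustrative assumptions, not details from the whitepaper:

```python
# Hypothetical sketch of two mitigations: an overfitting check and a
# human-in-the-loop gate that routes low-confidence cases to a clinician.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)
model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# A large train/validation gap is a warning sign of overfitting.
gap = model.score(X_tr, y_tr) - model.score(X_val, y_val)
print(f"train/validation accuracy gap: {gap:.3f}")

# Human-in-the-loop: defer to a clinician when the model is uncertain
# (assumed confidence band of 0.4 to 0.6, purely for illustration).
proba = model.predict_proba(X_val)[:, 1]
needs_review = (proba > 0.4) & (proba < 0.6)
print(f"{needs_review.mean():.1%} of cases flagged for clinician review")
```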

Despite AI's shortcomings, particularly in the context of surgery, the coauthors argue the harms AI can prevent outweigh the costs of adoption. In thyroidectomy, for example, there is a risk of permanent hypoparathyroidism and recurrent laryngeal nerve injury. It might take thousands of procedures performed with a new method before a statistically significant change in complication rates becomes observable, more than any individual surgeon is likely to accumulate, at least not in a short time frame. However, a repository of AI-based analytics aggregating those thousands of cases from hundreds of sites would be able to discern and communicate the significant patterns.
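
Some back-of-the-envelope arithmetic shows why. Assuming, purely for illustration (these rates are not from the whitepaper), that a new technique lowers a complication rate from 2.0% to 1.5%, the standard two-proportion sample-size approximation puts the requirement at roughly 10,800 procedures per arm, far beyond a single surgeon's caseload:

```python
# Approximate per-arm sample size needed to detect a drop in a complication
# rate. The 2.0% -> 1.5% rates below are illustrative assumptions.
Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.8416  # power = 0.80

def n_per_arm(p1: float, p2: float) -> int:
    """Standard two-proportion sample-size approximation."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((Z_ALPHA + Z_BETA) ** 2 * variance / (p1 - p2) ** 2) + 1

print(n_per_arm(0.020, 0.015))  # ~10,800 procedures per arm
```

At a few hundred such procedures per surgeon per year, only a registry pooling cases across many sites could reach that volume in any reasonable time.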

“The continued technological advancement in AI will sow rapid increases in the breadths and depths of their duties. Extrapolating from the progress curve, we can predict that machines will become more autonomous,” the coauthors wrote. “The rise in autonomy necessitates an increased focus on the ethical horizon that we need to scrutinize … Like ethical decision-making in current practice, machine learning will not be effective if it is merely designed carefully by committee — it requires exposure to the real world.”
