Barriers to AI in Healthcare

Introduction

Artificial Intelligence is everywhere in today’s society — it speaks to us through our phones, recommends new shows for us to watch, and it filters out content that is irrelevant to us. It’s so ubiquitous that most of us go about our daily routines without comprehending its role in our lives. But it hasn’t yet altered the face of the healthcare industry in the same ways it’s revolutionized our shopping. Why is that?

There’s more than one answer. As I see it, there are at least three good reasons you don’t see widespread AI use in healthcare today. Given enough time, though, I believe we will overcome all of these barriers.

To be clear, the type of AI I am discussing in this article is the kind that acts in place of a healthcare professional. Passive Artificial Intelligence, which simply supports providers in their decision-making, is already being heavily researched and is changing the way we approach healthcare.

Federal Regulations

Photo by Tingey Injury Law Firm on Unsplash

One of the largest hurdles that AI has to overcome to be relevant in the healthcare space is the multitude of federal regulations designed to protect consumers. While there are many governing bodies unique to different countries, I will narrow the scope of this topic to the U.S. FDA. According to the official FDA guidelines, there are three classes of medical devices [1, 2].

Class I—This category of device is defined as minimal risk, which is to say, the designer of the product can easily demonstrate to the FDA that the device in question either poses no threat of harm to consumers or very closely resembles one that has already been approved by the FDA. Approximately 47% of medical devices are in this category.

Class II—This category of device is defined as moderate risk. About 43% of medical devices are in this category. You can think of this category as medical devices which resemble pre-existing products but have some unique feature to them that could potentially hurt the consumer. One example of this would be a powered wheelchair, as it closely resembles prior art (i.e. non-powered wheelchairs) but has electronic components which, if they malfunction, could harm the user.

Class III—This is reserved for the remaining 10% of medical devices that pose a high risk to consumers. These kinds of devices can kill people if they malfunction (e.g. pacemakers).

Autonomous Artificial Intelligence applications in healthcare mostly reside within Class III. A device that a nurse can use to identify melanoma without needing to consult an expert? An algorithm that automatically detects breast cancer? A neural network that prioritizes patients for doctors? All Class III.

While it could be argued that each of these examples might be used to assist medical personnel rather than replace experts, there is no way to guarantee that such devices won’t quietly override the judgment of healthcare professionals. Sure, the radiologist could manually look over patient imaging like she is supposed to, but when the tool seems to be right most of the time, she may grow complacent about exercising her own judgment—and that can cost lives.

But that leads me to the next hurdle: even if the FDA approves these medical devices, will healthcare providers and their patients trust them?

Patient and Provider Trust

Photo by National Cancer Institute on Unsplash

Let’s start by walking through an example of a bad implementation of AI in healthcare. Imagine you are a doctor. Like the rest of your colleagues, you spent over a decade taking challenging university classes, struggling through your residencies, and otherwise working your fingers to the bone to succeed in your profession. After years of toil, you finally made it. You’re a respected healthcare professional at a reputable hospital, you frequently read up on the latest innovations in your field, and you know how to prioritize the needs of your patients.

Suddenly, Artificial Intelligence, which you are familiar with from the many scholarly journals you read, starts being used in your hospital. In this particular case, maybe it predicts the length of a patient’s stay so you can better plan for clinical trials. You notice that, from time to time, it is just dead wrong in its predictions. You even start to suspect the algorithm may have suffered one of the many issues which plague AI, such as model drift. You don’t trust it and begin to override its judgment in favor of your own — you’re the doctor, after all, and this is what you’ve studied for! It’s your job to give excellent care to your patients, and ultimately, you feel the algorithm doesn’t allow you to do that.

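The scenario above mentions model drift: the gradual mismatch between the data a model was trained on and the data it now sees in production. As a rough illustration, here is a minimal sketch of one way a team might watch for it, comparing a feature’s training-time distribution against recent inputs with a two-sample Kolmogorov-Smirnov test. The feature, numbers, and threshold are all hypothetical, not from any real deployment.

```python
# A minimal sketch of flagging model drift, assuming we kept the
# training-time values of each input feature and can compare them
# against recent production inputs. Everything here is illustrative.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_values, recent_values, alpha=0.01):
    """Flag drift when a two-sample KS test rejects the hypothesis
    that both samples come from the same distribution."""
    _, p_value = ks_2samp(train_values, recent_values)
    return p_value < alpha

# Hypothetical example: the patient population has aged since training.
train_ages = np.random.default_rng(1).normal(55, 12, 5000)
recent_ages = np.random.default_rng(2).normal(63, 12, 300)
print(feature_has_drifted(train_ages, recent_ages))  # True: distributions differ
```

A check like this doesn’t fix drift, but surfacing it early is the difference between a model that quietly degrades and one whose operators know when to retrain it.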

What happens when Artificial Intelligence tries to do more of your job for you? Will you trust it?

The heart of the problem in the above scenario is the opaqueness of the model described. In that situation, the developer behind the algorithm didn’t consider that both doctors and patients want to know the why as well as the what. Even if the Artificial Intelligence implementation described above actually gave very accurate assessments of a patient’s length of stay, it never tried to back up the reasoning behind its prediction.

A better model would incorporate something like SHAP values, which you can read more about in this excellent article by Dr. Dataman [3]. Essentially, they allow the model to provide what’s called local feature importance, which in plain English means an estimation of why this particular case has the predicted outcome that it does. Even though it doesn’t change the algorithm itself in any way, it gives both the provider and the patient insight into its judgment, and in an evidence-based industry like healthcare, that is invaluable.

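To make that concrete, here is a minimal sketch of computing SHAP values for a toy length-of-stay regressor. It assumes the shap and xgboost packages are available; the feature names and synthetic data are invented for illustration and come from no real clinical dataset.

```python
# A minimal sketch of local feature importance with SHAP on a toy
# length-of-stay model. All features and data are synthetic.
import numpy as np
import pandas as pd
import shap
import xgboost

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(18, 90, 500),
    "comorbidity_score": rng.uniform(0, 5, 500),
    "prior_admissions": rng.integers(0, 10, 500),
})
# Synthetic target: length of stay (days), driven mostly by comorbidities.
y = 2 + 0.05 * X["age"] + 1.5 * X["comorbidity_score"] + rng.normal(0, 1, 500)

model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

# TreeExplainer decomposes each individual prediction into additive
# per-feature contributions: local feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For patient 0, this shows *why* the model predicts the stay it does,
# e.g. most of the predicted extra days come from comorbidity_score.
print(dict(zip(X.columns, shap_values[0])))
```

Even a rough decomposition like this gives the provider something to interrogate instead of a bare number.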

When Artificial Intelligence explains its decision-making process, it is easier for patients and providers to trust it.

Indeed, AI has vast capabilities to aid in clinical decision-making. It’s capable of picking up on complex patterns that only become apparent when patient data is viewed in aggregate—things that would be impossible for us to reasonably expect a human doctor to detect. And while it can’t take the place of a personal healthcare provider, it can provide decision support and help health workers see trends they wouldn’t otherwise have noticed. But this decision support is only possible when the algorithm explains itself.

The Ethics of AI-Driven Healthcare

Photo by Arnold Francisca on Unsplash

Artificial Intelligence is a complex topic, not just in its implementation but also in the ethical dilemmas it poses to us. In 2015, Google’s machine learning-driven photo tagger sparked controversy when it incorrectly labeled a black woman as a gorilla [4]. The social media backlash was justifiably enormous. A single misclassification was enough to draw the eyes of the world to the state of artificial intelligence. Who is responsible when things like this happen? How are we supposed to respond?

Unfortunately, issues like this are not uncommon even five years later. Just within the past month, MIT has had to take down the widely used Tiny Images data set because of racist and offensive content discovered inside it [5]. How many algorithms now in use learned from this data?

These issues may, on their face, seem irrelevant to AI in healthcare, but I bring them up for two reasons:

  1. They demonstrate that, even despite our best intentions, biases can manifest themselves inside our models.
  2. These biases frequently become apparent only once the model has already been released into the world and has made a mistake.

Can we confidently say that this will not also happen in the healthcare space? I don’t believe we can. As researchers, we have more work to do to filter out the inherent biases in our data, our pre-processing techniques, our models, and ourselves. The biggest barrier to AI in healthcare is the lack of a guarantee of any given model’s equitability, safety, and effectiveness in all potential use cases.

There are already great efforts to improve this situation. Libraries like Aequitas are making it easier than ever for developers to test the biases of their models and their data [6]. Along with this, researchers and developers alike are becoming more aware of the effects of model bias, which will lead to further development of tools, techniques, and best practices for detecting and handling model biases. AI may not be ready for prime time in healthcare today, but I, along with many others, will be working hard to get it there. Given the proper care and attention, I believe that AI has the power to change the face of healthcare as we know it.

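To give a flavor of what such an audit involves, here is a small hand-rolled sketch of one metric a toolkit like Aequitas automates: comparing false-positive rates across patient groups. The column names and data are invented; a real audit would use the library itself and cover many more metrics.

```python
# A hand-rolled sketch of one fairness check that toolkits like
# Aequitas automate: group-wise false-positive rates. The groups,
# labels, and predictions below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [ 0,   1,   0,   0,   0,   1,   0,   0 ],  # ground truth
    "prediction": [ 0,   1,   0,   1,   1,   1,   1,   0 ],  # model output
})

def false_positive_rate(g: pd.DataFrame) -> float:
    """Share of true negatives the model incorrectly flags as positive."""
    negatives = g[g["label"] == 0]
    return float((negatives["prediction"] == 1).mean())

fpr = df.groupby("group").apply(false_positive_rate)
print(fpr)  # group A: 1 of 3 negatives flagged; group B: 2 of 3
```

A disparity like that, surfaced before deployment rather than after a headline, is exactly the kind of evidence an equitability guarantee would have to rest on.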

About Me

My name is Josh Cardosi, and I am a Master’s student studying Artificial Intelligence applications in healthcare. You can read more about how I got here in this article. While the issues I talked about above are very real and will need to be addressed, I strongly believe that we will overcome them and, in doing so, change the state of healthcare for the better. I believe it will lead to better health service utilization, decreased patient mortality, and higher patient and provider confidence in treatment plans.

Feel free to connect with me on LinkedIn. I love reading your messages and chatting about AI in healthcare or machine learning in general.

[1] Classify Your Medical Device (2020), U.S. Food and Drug Administration

[2] What’s the Difference Between the FDA Medical Device Classes? (2020), BMP Medical

[3] Dataman, Explain Your Model with the SHAP Values (2019), Towards Data Science

[4] J. Snow, Google Photos Still Has a Problem with Gorillas (2018), MIT Technology Review

[5] K. Johnson, MIT takes down 80 Million Tiny Images data set due to racist and offensive content (2020), Venture Beat

[6] The Bias and Fairness Audit Toolkit for Machine Learning (2018), Center for Data Science and Public Policy

Translated from: https://towardsdatascience.com/barriers-to-ai-in-healthcare-41892611c84a
