Bias in Artificial Intelligence (AI) is the most dangerous factor in the development of AI algorithms. Yes, even more important than that ubiquitous fear immortalized in movies — that robots will kill us all.

Those most at risk for suffering at the hands of bias in AI algorithms are the most vulnerable members of our society. This is no different in healthcare.

According to a study published in the American Medical Association Journal of Ethics,

“In contrast to human bias, algorithmic bias occurs when an AI model, trained on a given dataset, produces results that may be completely unintended by the model creators.”

This differentiates bias in an AI algorithm from bias in a non-AI-based algorithm. In a non-AI-based algorithm, a software developer can determine the source of the resulting bias and change the formula to remove it.

In an AI algorithm, the software arrives at the clinical decision using a method hidden from the software developer. This phenomenon, called black-box learning, makes it more difficult to detect and fix the bias in an algorithm.
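
To make the contrast concrete, here is a minimal sketch (assuming Python with scikit-learn and NumPy; the features, weights, and data are hypothetical, not any real clinical system). The hand-written formula exposes every term for a developer to inspect and edit, while the fitted model offers predictions without a readable decision path:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_based_risk(age, blood_pressure):
    # Transparent, non-AI formula: if a term turns out to encode bias,
    # the developer can see it and delete or reweight it directly.
    return 0.02 * age + 0.01 * blood_pressure

print(rule_based_risk(60, 130))  # the "why" is the formula itself

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                   # two hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical outcome

# Black-box counterpart: the decision logic lives in hundreds of fitted
# trees rather than an editable formula.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:1]))        # a prediction, but no stated reason
print(model.feature_importances_)  # only a coarse, aggregate hint at "why"
```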

Benefits of AI

At a fundamental level, a software program that can operate autonomously increases efficiency. A program that quickly learns from its mistakes translates to a diagnostician that is far faster and smarter than a human diagnostician (also known as a doctor).

Better diagnosticians mean fewer medical errors. It means shorter wait times for diagnoses. It means personalized treatment plans. Over time and with scale, this should result in humans who live longer and with fewer illnesses.

And yes, it will also mean that we will need fewer doctors.

This result of AI is worthy of a separate discussion. In short, if you’re reading this article, you’ll likely always have the option, in your lifetime, of a human doctor or of an algorithm that human doctors are overseeing.

The transition to machine learning algorithms has long been underway, but like automation processes before it, AI technology will need humans to oversee it for many years to come.

So how does AI show up in your care?

Data-driven clinical decision support

AI-based clinical decision support (CDS) systems have already been in use in several specialties for different medical tasks.

These software algorithms incorporate either encoded human knowledge or large amounts of data (the latter approach is called machine learning) to provide a clinical recommendation.

The advantage of knowledge-driven CDS systems is that the basis or logic for their recommendations is easy to discern. However, data-driven CDS systems may arrive at innovative clinical recommendations but the logic may be much less transparent.
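
A minimal sketch of the two styles (assuming Python with scikit-learn; the guideline threshold, feature, and data are hypothetical). The knowledge-driven path can cite the rule behind its recommendation; the data-driven path returns only a label, its rationale buried in learned weights:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def knowledge_driven_cds(hba1c):
    # Knowledge-driven: the recommendation cites the encoded rule,
    # so the basis for it is easy to discern.
    if hba1c >= 6.5:
        return "flag for diabetes workup", "basis: HbA1c >= 6.5 (hypothetical guideline)"
    return "no action", "basis: HbA1c < 6.5"

# Data-driven: fit a model on (synthetic) historical data instead.
rng = np.random.default_rng(1)
X = rng.normal(6.0, 1.0, size=(200, 1))                    # hypothetical HbA1c values
y = (X[:, 0] + rng.normal(0, 0.5, 200) > 6.5).astype(int)  # hypothetical outcomes
model = LogisticRegression().fit(X, y)

print(knowledge_driven_cds(7.1))                  # recommendation plus its logic
print("data-driven:", model.predict([[7.1]])[0])  # label only, no cited rule
```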

Administrative tasks

This is one of the most practical uses for AI algorithms in many industries. Outsourcing billing or medical record maintenance to algorithms frees up time for healthcare staff to focus on patient care. It also reduces overall healthcare costs.

Large-scale data analysis for pattern identification

AI can analyze large amounts of historical data to tease out novel insights that can predict the future course of a person’s health. AI is the key to providing personalized healthcare that incorporates many types of personal data.

Data amassed from a person’s genomics, their medical record, and even their smartwatch can alert the person and their doctor to medical risks and hopefully someday predict their long-term outlook.
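
As a rough illustration only, here is one way such heterogeneous sources might be pooled into a single record that a simple alert rule can act on; every field name and threshold below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class HealthProfile:
    genetic_risk_variant: bool   # e.g. a flag from a genomics report
    prior_blood_clots: bool      # from the medical record
    resting_heart_rate: float    # from a smartwatch

def risk_alert(p: HealthProfile) -> bool:
    # Hypothetical rule: alert when multiple independent signals align.
    signals = [p.genetic_risk_variant, p.prior_blood_clots, p.resting_heart_rate > 100]
    return sum(signals) >= 2

print(risk_alert(HealthProfile(True, True, 72.0)))     # True: two signals align
print(risk_alert(HealthProfile(False, False, 105.0)))  # False: one signal alone
```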

Screening

Everyone has a radiologist. If you have ever received an X-ray or ultrasound, your radiologist analyzed your images and issued a report to the doctor who ordered your exam.

One issue that comes up for radiologists, and where AI is of tremendous help, is routine studies. Routine means there is no urgent need for a diagnosis, such as chest X-rays ordered at annual physicals.

A problem arises when a catastrophic abnormality, such as a markedly enlarged aorta, turns up in a person who is complaining of mild backache. This means the X-ray needs urgent attention. But if the list of routine X-rays is long (as commonly occurs in most hospitals), the X-ray may not be interpreted for hours or sometimes 1–2 days.

A software algorithm trained on data from thousands of X-rays will be able to screen for and detect this urgent abnormality before the human radiologist can get to it. This also frees the radiologist to focus on emergency examinations.
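
A minimal sketch of that triage idea: assume a screening model has already assigned each routine study an urgency score (the scores and study names below are invented), and the worklist is simply reordered so flagged studies are read first:

```python
# Hypothetical routine worklist with model-assigned urgency scores.
worklist = [
    {"study": "annual physical chest X-ray #1", "urgency_score": 0.02},
    {"study": "mild backache chest X-ray",      "urgency_score": 0.97},  # enlarged aorta
    {"study": "annual physical chest X-ray #2", "urgency_score": 0.05},
]

# Sort descending by score so the flagged study is read first instead of
# waiting hours or days in a first-come-first-served queue.
for item in sorted(worklist, key=lambda s: s["urgency_score"], reverse=True):
    print(f"{item['urgency_score']:.2f}  {item['study']}")
```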

The overarching appeal of algorithms for screening, diagnosis, and treatment is also based on the fact that unlike us petty, pathetic, biased humans, software algorithms are objective and free of human error.

Or are they?

Pitfalls of AI

The problems of AI in healthcare fall into three categories.

  • Use of already-biased data to train a software algorithm.
  • A software algorithm whose data contains no bias but whose design or organization is biased.
  • A software algorithm that is incorrectly applied and thus produces bias.

Let’s look at some examples.

Biased data

You might be wondering: Why would anyone put flawed data into an algorithm?

  1. Humans are biased. It’s a longstanding, unfortunate feature of healthcare, as it is of many systems in the US (including the technology industry).
  2. Unconscious bias, or being unaware that one lacks objectivity, results in data containing hidden bias.

For example, when Serena Williams began having chest pain and difficulty breathing after giving birth, she knew she was having a pulmonary embolism. She’d had them before.

Serena also had a history of developing blood clots in her leg veins, a common precursor to a pulmonary embolism. When she told her nurse, the nurse assumed she was confused as a side effect of her pain medication. Serena’s doctor ordered a sonogram of her legs instead of a CT scan of her chest.

Pulmonary embolisms are a common cause of sudden death. Serena had the privilege and knowledge to advocate for herself. She was also lucky that her emboli were small enough not to kill her during the delay before the doctor finally ordered the correct test.

Serena’s case isn’t unique. But many Black mothers are not so lucky.

Black mothers with high levels of education and socioeconomic access are more likely to have severe postpartum complications than white mothers, according to a 2016 study of maternal morbidity in NYC.

Now imagine you want to train an AI algorithm to screen for patients at risk for postpartum complications using data from childbirths at an NYC hospital. Your data will include visits of Black mothers who receive biased treatment from healthcare professionals in the hospital.

This subset of your data isn’t labeled racist, of course. Patients rarely get the opportunity to provide this kind of feedback. The next time these Black women need care, they may opt to visit a different hospital, in the hopes of escaping bias.

In contrast to human bias, algorithmic bias occurs when an AI model, trained on a given dataset, produces results that may be completely unintended by the model creators.

If your data only uses visits from this single hospital, the algorithm will not select this high-risk group because they’ll show fewer hospital visits in total. Yet this is precisely the group that needs more resources to reduce the risk of postpartum complications.
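
A toy demonstration of this sampling effect, with invented patients and visit counts: a patient who splits care across hospitals looks like a low utilizer in any single hospital's dataset, even when their true utilization is identical:

```python
from collections import Counter

# Invented visit log: patient_B has the same number of visits as patient_A
# but spreads them across hospitals (here, to escape biased treatment).
all_visits = [
    ("patient_A", "hospital_1"), ("patient_A", "hospital_1"),
    ("patient_A", "hospital_1"), ("patient_A", "hospital_1"),
    ("patient_B", "hospital_1"), ("patient_B", "hospital_2"),
    ("patient_B", "hospital_2"), ("patient_B", "hospital_3"),
]

true_counts = Counter(p for p, _ in all_visits)
observed = Counter(p for p, h in all_visits if h == "hospital_1")  # single-site training data

for patient in sorted(true_counts):
    print(patient, "true visits:", true_counts[patient],
          "visits seen by hospital_1:", observed[patient])
# patient_B's utilization looks far lower than it really is, so a model
# trained only on hospital_1 data under-ranks their risk.
```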

Biased algorithm design

A study in the journal Science revealed racial bias in a widely used algorithm designed to determine which patients in a healthcare system would benefit from additional high-risk care management programs. These programs include specialized health care professionals and other costly resources. Software that can identify the people who would benefit from these resources the most is desirable.

The sickest patients will benefit the most from the added resources. The algorithm used health care cost as a proxy for health care needs. In isolation, this is a reasonable assumption because the sicker a person is, the higher their health care needs are, hence more money is required for their care.

The dataset shows that health care costs for Black people are slightly lower than for white people, but it doesn’t take several factors into account.

  • Black people may not see doctors as regularly, due to an inherent cultural distrust of a system that has been biased against them.
  • They may have insufficient insurance coverage that restricts access to specialists or expensive medication.
  • Their care may be fragmented: they see healthcare professionals across different hospitals and health care systems, like the Black mothers in the example above.

This isn’t limited to this specific piece of software. Using cost as a predictor of health needs is a widely used practice in algorithms.
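
A toy illustration of the proxy problem, with invented numbers: two patients have identical underlying need, but one, facing the access barriers listed above, generates lower cost and is therefore ranked lower:

```python
# Hypothetical patients: equal true need, unequal spending.
patients = [
    {"id": "P1", "true_need": 8, "annual_cost": 10_000},
    {"id": "P2", "true_need": 8, "annual_cost": 6_000},  # same need, less access
]

def cost_proxy_score(p):
    return p["annual_cost"]  # cost standing in for health care needs

ranked = sorted(patients, key=cost_proxy_score, reverse=True)
print([p["id"] for p in ranked])  # P1 outranks P2 despite equal true need
```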

Biased application of an algorithm

Consider an algorithm designed to screen for signals of an impending heart attack. If the algorithm is trained on a dataset comprised only of white men and is used only for this patient population, there’s no bias.

If it’s incorrectly applied to women, who often show different symptoms when having a heart attack, the algorithm will fail to detect some women who are about to have a heart attack.
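
A synthetic sketch of this failure mode (assuming Python with scikit-learn and NumPy; the distributions are stand-ins, not clinical data): a detector fit on one subgroup's symptom signature scores near-perfectly on that subgroup but misses cases whose signature differs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Training subgroup: events cluster around one symptom signature (mean 3).
X_train = np.vstack([rng.normal(0, 1, (500, 1)), rng.normal(3, 1, (500, 1))])
y_train = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X_train, y_train)

# Second subgroup: the same event presents with a different signature (mean -3).
X_other = rng.normal(-3, 1, (500, 1))

print("detection rate, training-like group:",
      model.score(np.full((500, 1), 3.0), np.ones(500)))  # ~1.0
print("detection rate, differing group:   ",
      model.score(X_other, np.ones(500)))                 # ~0.0: cases missed
```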

To be clear, AI/Machine Learning technology is already in wide use in algorithms that affect our healthcare decisions. It is NOT currently operating without human supervision.

It’s important for people who interact with healthcare to know that bias may play a role in decisions being made for them.

Unfortunately, the onus often falls to sick patients and their families to advocate for themselves. In the case of AI algorithms, advocacy means asking how determinations are made (for example, in the insurance process).

It’s also crucial for physicians and other healthcare workers to know bias can exist in software programs.

Assuming that objectivity is inherent in AI-based software programs is naive and will continue to perpetuate medical error.

Source: https://medium.com/swlh/machine-learning-medicine-the-pitfalls-of-ai-in-healthcare-996839cc9b97
