Can We Make Artificial Intelligence More Ethical?

Why should you read this article?

What are the most pressing issues when it comes to ethics in AI and robotics? How will they affect the way we live (and work)? Sooner or later these issues will concern you, whether you work in the field or not. Here we will go through the main ideas of the paper Robot ethics: Mapping the issues for a mechanized world, along with some of my own input. You will not come away with many answers, but you will probably start asking the right questions.

What is a robot, really?

Although this question might seem a bit too basic, it is important to outline a precise definition of what a robot actually is (and thus what is not one).

There are some obvious cases, such as a high-level AI-enhanced autonomous military drone (which is probably considered a robot by any reasonable definition) and a regular remote-controlled old-school car (which is usually not considered to be a robot). But what about the grey area? Is a human-controlled drone that can eventually find its way back to its owner a robot?

The answer to what counts as a robot is not straightforward, so there is no consensus around it. The paper therefore proposes a working definition to facilitate the discussion:

“a robot is an engineered machine that senses, thinks, and acts.”

This definition implies that robots must be equipped with sensors and with some sort of intelligence to guide their actions, and it also includes biological and virtual robots.

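To make the definition a bit more concrete, here is a minimal sketch of the sense-think-act loop it describes. Everything in it (the Robot class, the fake sensor, the distance threshold) is hypothetical and purely illustrative; it is not taken from the paper.

```python
# A minimal, hypothetical sketch of a "sense, think, act" loop.
# All names and numbers are illustrative, not from the paper.
import random


def read_distance_sensor() -> float:
    """Stand-in for a real sensor: distance to the nearest obstacle, in metres."""
    return random.uniform(0.0, 5.0)


class Robot:
    SAFE_DISTANCE = 1.0  # metres; an arbitrary threshold for this sketch

    def sense(self) -> float:
        return read_distance_sensor()

    def think(self, distance: float) -> str:
        # "Thinking" here is a single rule; real robots may use planning or ML.
        return "turn" if distance < self.SAFE_DISTANCE else "move_forward"

    def act(self, action: str) -> None:
        print(f"Executing action: {action}")


if __name__ == "__main__":
    robot = Robot()
    for _ in range(3):  # a few iterations of the loop
        reading = robot.sense()
        decision = robot.think(reading)
        robot.act(decision)
```

Under this definition, the remote-controlled car fails the "think" step, while the drone that finds its own way home arguably passes all three.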

When it comes to robots, where are we now?

Now that we are on the same page, what are robots doing today, and what will they be able to do in the future? The answers to these questions change quite fast these days but, generally speaking, robots are used mostly for repetitive tasks that do not require a lot of judgement (such as vacuuming). They are also very useful when it comes to dangerous tasks, such as mine detection and bomb defusing.

A successful example is the ubiquitous Roomba vacuum cleaner, which accounts for almost half of the world’s service robots. The robots that pose the most pressing ethical issues, however, work in other areas:

Photo by Lenny Kuhne on Unsplash
Labour automation

Probably one of the most commented-on consequences of the recent advances in AI is that human labour is quickly being replaced by robots, which can be less prone to error, do not suffer from fatigue and emotional issues, and are usually cheaper to maintain in the long run. Will this really happen, though? Or is it just a way to make good headlines? Well, the answer probably lies in the middle. Some jobs will definitely die due to automation in the next few years, but this has been happening since way before AI came to exist. Examples of jobs replaced by machines a long time ago include the “bowling alley pinsetter”, young boys who set up the bowling pins for clients, and “human alarm clocks”, responsible for waking people up by knocking on their windows. Even though this fear of being replaced looks recent, it has actually been around for ages and, so far, it has not come to pass.

One reason is that, although there is a lot of hype behind the capabilities of AI, current robots are still far from being able to do some of the most mundane activities we do these days. Another important reason is that most robots have a very narrow scope: they are usually really good at performing a hyper-specific task, whereas most jobs require a more generalist approach. Finally, new jobs are being created all the time, many of which are related to building and operating robots.

So, to answer our question: yes, many jobs will no longer exist and will be replaced by robots. This is actually a constant process that has been happening for centuries, and it will continue to happen. Human labour, however, will remain relevant for a long time, just in different ways. It is important for us, then, to prepare for this new world and to understand which skills will be the most needed in it. I believe those are either highly technical skills, related to coding and AI, or activities that require a human approach and a very diverse skillset, which are quite hard to automate.

Photo by Michael Marais on Unsplash
Military

Military robots come in many shapes, from bomb-defusing cars to weapon-equipped drones. On one hand, they can save lives, by replacing soldiers when it’s time for dangerous work. On the other hand, there’s the obvious question: is it ethical to use so much power and technology to kill people? The second, not-so-obvious question is: should we let AI decide who to kill?

Even though it might sound absurd at first, robots can actually think before they shoot: they don’t get scared, they don’t panic, they don’t have prejudices (or at least they shouldn’t). They can also be less prone to error than humans. This can, however, tilt the odds even further in favour of military powerhouses: imagine a war between the U.S. and Venezuela in which the North Americans control robots from far away, risking nothing, while the South Americans are exterminated without a chance of fighting back. This scenario is actually a possibility for the future.

We can’t really state what is or is not ethical here, since it depends too much on cultural factors, but it is definitely worthwhile to take future regulatory risks into consideration when developing new AI. The field is evolving fast and regulation struggles to keep up, but keep in mind that just because something is not regulated now, that doesn’t mean this will still be the case in 5 years.

Companionship

This might be one of the most controversial applications of AI: sex robots are getting more and more realistic. You can choose not only how your robot looks, but also how it will react to your advances. Many questions have come up because of those features, such as: is it OK to make a robot that looks a lot like a celebrity (or someone you know)? What about one that resists any sexual interaction, so that a rape can be simulated?

On one hand, there might be serious psychological consequences, not yet understood, for people who use these toys. They could also reinforce unhealthy sexual behaviour and expectations. On the other hand, they might be exactly what some people need to let off steam, meaning they will be less likely to act out their unwanted fantasies with another human being.

All of these are new issues, not yet addressed, but there might be many interesting questions to be answered in fields such as Philosophy or Psychology.

What factors should we look at?

Safety and errors

AI is prone to error mainly due to two factors: it is made by humans, so the actual code might contain bugs or logic flaws; and it is often based on probability, meaning that even when the code is perfectly written, its actions are based on imperfect information and will always entail some degree of risk.

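The second point is worth making concrete. Here is a minimal, hypothetical sketch (the model, the threshold, and all numbers are invented for illustration): a system whose logic is bug-free still makes mistakes, simply because it acts on noisy, probabilistic predictions.

```python
# A minimal, hypothetical sketch of the second failure mode described above:
# code with no bugs that still makes mistakes, because it acts on
# probabilistic predictions. All names and numbers are illustrative only.
import random

random.seed(42)

ALERT_THRESHOLD = 0.8  # act only when the model is at least 80% confident


def noisy_model(is_gun: bool) -> float:
    """Stand-in for a real classifier: returns P(object is a gun).

    The prediction is correlated with the truth, but noisy.
    """
    base = 0.9 if is_gun else 0.1
    return min(1.0, max(0.0, base + random.gauss(0.0, 0.2)))


errors = 0
trials = 1000
for _ in range(trials):
    truth = random.random() < 0.05       # 5% of objects really are guns
    p_gun = noisy_model(truth)
    decision = p_gun >= ALERT_THRESHOLD  # the "perfectly coded" rule
    if decision != truth:
        errors += 1

print(f"Error rate despite bug-free logic: {errors / trials:.1%}")
```

Lowering the threshold trades missed detections for false alarms; no setting drives the error rate to zero.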

These two sources of failure are, for the moment, inevitable. We should, however, always compare the expected levels of error of machines and humans. For instance, it is not unusual for cops to mistake normal objects such as umbrellas or drills for guns, and end up shooting innocent people. Should we stop using human cops? Every year, around 1.35 million people are killed in road accidents around the world. Should we stop people from driving?

These are just a few examples that illustrate that, from a utilitarian perspective, it doesn’t really matter if machines make mistakes, as long as they don’t cause more damage than humans in the same activity. In order to ensure this, of course, new technology should be thoroughly tested in a safe environment first. Measures to reduce possible damage should be taken (for example, equipping robot cops with non-lethal weapons first), and the innovation should only be implemented when its error level is lower than the human level by a considerable amount, in order to ensure a safety margin.

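That safety-margin argument can be written down as a simple decision rule. The function and the margin value below are hypothetical, just to make the reasoning explicit; the error rates would come from the kind of testing described above.

```python
# Hypothetical decision rule for the safety-margin argument above.
# The margin of 0.5 (the machine must be at least twice as safe) is arbitrary.

def should_deploy(machine_error_rate: float,
                  human_error_rate: float,
                  safety_margin: float = 0.5) -> bool:
    """Deploy only if the machine's error rate is well below the human one."""
    return machine_error_rate <= human_error_rate * safety_margin


# Example: humans err in 1% of cases, the machine in 0.3% -> deploy.
print(should_deploy(machine_error_rate=0.003, human_error_rate=0.01))  # True

# Merely matching human performance is not enough under this rule.
print(should_deploy(machine_error_rate=0.009, human_error_rate=0.01))  # False
```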

However, we, as a society, are still quite uncomfortable with the idea of completely replacing humans with robots in certain activities. It just “feels too risky”. That is completely fine, since we should, indeed, approach the unknown carefully. My hope, however, is that in the future, once people are more used to this idea, the decision will be based more on the comparison between the two levels of error than on the moral implications. This type of reasoning can literally save lives down the road.

Photo by Giammarco Boscaro on Unsplash
Law and ethics

Recent technological advances require new legislation to answer moral questions that have never been asked before. When a robot makes a mistake, who should be held accountable? Its owner? The team of developers who made it? When we really start mixing biological tissue and robotics, will these robots be considered people, or something in between? Will they have rights? When we start adding microchips to people’s brains, where is the line between an actual person and a cyborg?

And what about international law? When a robot is designed in California, assembled in China and exported all over the world, whose ethics will it follow? Will a developer in Silicon Valley be willing to program a robot to kill protesters in Hong Kong? Will the developer be held accountable in the US if this robot actually kills someone in China?

To be honest, we are still far from that level of sophistication, but we might still see it in our lifetime and be impacted by new legislation. Who would have thought, for instance, 40 years ago, that data and privacy would be such important topics, protected (or not) by so many different laws?

Social impact

All the issues we have addressed so far will have a significant impact on our society, but I would say the two most relevant ones are the loss of jobs and the changes in human relationships.

As I said, technological advances have been happening for ages now, replacing humans in many activities and killing old jobs while new career opportunities arise. That is fine: it allows us to work on more significant endeavours, while robots get the boring ones. That does not mean, however, that new jobs will come to replace old ones forever. It might be the case that, at a certain point in the future, less overall human work is needed, and people become less relevant. Guess who those people are? Probably poor, uneducated people from peripheral countries, since they have the jobs that are the most easily automated. What happens then?

Many people bet on Universal Basic Income: the idea that the government would give a minimum level of income to everyone. This would mean that, even if you lost your job, you would still have somewhere to live and something to eat. This money would possibly come from taxing big companies that have saved a lot by automating their production. Would it work? We can’t know for sure, but it definitely looks promising. But what then? Would people just stop working and move on with their lives? Although this might seem like a dream to many people, our society was not built around this sort of life, and many people need to work to feel they have a purpose in life. Not an easy equation to solve.

The other major impact of robots and AI will be on human relationships: when we reach the point where it gets hard to distinguish between an AI and an actual human, it will become quite easy to develop feelings for robots. If you have ever seen the film Her, you know what I mean. Could this mean reducing the time spent with other people? Is this a bad thing? Will we start forming unrealistic expectations of other people, based on our experiences with robots? Will people be allowed to marry robots?

Conclusion

As I said, this article contains more questions than answers, but it would be irresponsible (and quite presumptuous too) to try and give too many answers: they would all probably be wrong.

I hope, however, that this has given you a bit of food for thought, either just for its own sake or, if you work in the field, so that you can start incorporating these questions into your next project meetings at work.

If you would like to go further, I recommend two books by Yuval Noah Harari, which you have probably heard of before: Homo Deus and 21 Lessons for the 21st Century. They discuss some of these same questions and many more, about the future of humankind. If you want to learn more about how robots engage in creative activities such as songwriting, check out this article on Creativity and AI.

“It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be … This, in turn, means that our statesmen, our businessmen, our everyman must take on a science fictional way of thinking” — Isaac Asimov

This article is loosely based on a paper published in Artificial Intelligence that addresses some of the most important issues surrounding artificial intelligence and robotics: Robot ethics: Mapping the issues for a mechanized world by Patrick Lin, Keith Abney and George Bekey, with some of my personal input as well.

Feel free to reach out to me on LinkedIn if you would like to discuss further, it would be a pleasure (honestly).

Original article: https://towardsdatascience.com/can-we-make-artificial-intelligence-more-ethical-a0fb7efcb098
