Our Machine Learning Algorithms Are Magnifying Bias and Perpetuating Social Disparities

AI Ethics and Considerations

Shortly after I began my machine learning courses, it dawned on me that there is an absurd exaggeration in the media concerning the state of artificial intelligence. Many are under the impression that artificial intelligence is the study of developing conscious robotic entities soon to take over planet earth. I typically brace myself whenever someone asks what I study, since my response is often met prematurely with a horrified gasp or an angry confrontation. And understandably so.

Conscious Robotic Entities Soon to Take Over?

However, the reality is that machine learning is not a dangerous magic genie, nor is it any form of a conscious entity. For simplicity’s sake, I typically say that the essence of AI is math. Some say it’s just ‘glorified statistics’. Or as Kyle Gallatin has so eloquently put it, ‘machine learning is just y=mx+b on crack.’

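To make the ‘y = mx + b’ quip concrete, here is a minimal sketch (my own illustration, with made-up data) of fitting a straight line to a handful of points using NumPy; this kind of error-minimizing parameter fit is the core of many machine learning models.

```python
import numpy as np

# Made-up data: five (x, y) points that roughly follow a line.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares fit of y = m*x + b: solve for the slope m and intercept b
# that minimize the squared error between predictions and observations.
A = np.vstack([x, np.ones_like(x)]).T
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"fitted line: y = {m:.2f}x + {b:.2f}")
```

There is nothing conscious going on here: just two numbers chosen to minimize a squared error, which is a useful mental model to hold against the more breathless headlines.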

Of course, this is a simplification since machine learning pulls from many disciplines such as computer science, neuroscience, mathematics, the scientific method, etc. But the point is that the media is suffused with verbiage that makes it feel as though we are in direct danger of being taken over by artificially intelligent beings.

The truth is, we are not. But there are many other insidious issues in the production of machine learning that often go overlooked. Rachel Thomas, a co-founder of fast.ai, has mentioned that she, along with other machine learning experts, believes that the ‘hype about consciousness in AI is overblown’ but ‘other (societal) harms are not getting enough attention’. Today, I want to elaborate on one of these societal harms that Rachel addresses: that ‘AI encodes and magnifies bias’.

The Real Hazard of Machine Learning: Garbage In, Garbage Out

The most unsettling aspect of this — the idea of AI magnifying bias — is that the very promise of machine learning in the automation of social processes is to hold the highest degree of neutrality. It is well known that doctors can be biased when making diagnoses in healthcare, and that a jury can be biased when handing down a sentence in criminal justice. Machine learning should ideally synthesize a large number of variables in the record and provide a neutral assessment.

“But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot.” — Brian Resnick

We expect the model to be objective and fair; it is this shattered illusion of objectivity that makes the entire ordeal feel insidious and particularly disappointing.

So how does this happen?

“Garbage in, garbage out” is a well-known computer science axiom meaning that poor-quality input produces poor-quality output. Typically, ‘non-garbage’ input would refer to clean, accurate, and well-labeled training data. However, we can now see that our garbage input could very well be a polished, accurate representation of our society as it has acted in the past. The real hazard in machine learning has less to do with robotic conscious entities and more to do with another type of conscious entity — human beings. When societally biased data is used to train a machine learning model, the insidious outcome is a discriminatory model that reproduces the very societal biases we aim to eliminate.

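Here is a small synthetic sketch of that failure mode (my own illustration; the scenario, numbers, and the scikit-learn model choice are all assumptions): applicants' skill is independent of group membership, but the historical hiring labels penalize one group, and a model trained on those labels learns the penalty.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Group membership (0 or 1) and a skill score that is independent of group.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical "hired" labels: driven by skill, but with an unfair penalty
# applied to group 1. This is the societal bias baked into the data.
logit = skill - 1.5 * (group == 1)
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased labels. Group is used as a feature here only to make
# the effect visible; a correlated proxy feature would behave similarly.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Equally skilled candidates from each group get different predicted chances.
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1])  # group 1 scores noticeably lower
```

The model is not malicious; it is faithfully summarizing the biased history it was shown, which is precisely the problem.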

Higher Accuracy != Better Social Outcomes

The issue extends beyond prediction and toward perpetuation; we create a type of reinforcement loop.

For example, let’s say that a business owner wants to predict which of their customers would be likely to buy certain products so they could offer a special bundle. They go on to ask a data scientist to build a predictive algorithm and use it to advertise to the selected group. At this point, the model is not simply predicting which customers will purchase — it is reinforcing that behavior.

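A toy simulation of that loop (all numbers and the update rule are invented for illustration): only customers the model already scores highly are shown the offer, only customers shown the offer can buy, and each "retraining" round narrows the targeted group further.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
true_interest = rng.random(n)      # each customer's actual interest in the bundle
score = np.full(n, 0.5)            # the model's initial score for every customer

for round_ in range(5):
    targeted = score >= 0.5        # only "promising" customers see the offer
    bought = targeted & (rng.random(n) < true_interest)
    # "Retrain": nudge scores toward what was observed. Customers who were never
    # shown the offer cannot buy, so their scores can only drift downward.
    score = 0.7 * score + 0.3 * bought
    print(f"round {round_}: targeted={int(targeted.sum()):4d}, buyers={int(bought.sum()):4d}")
```

Customers the model never targets never get a chance to generate a positive example, so the model's first guess gradually hardens into policy.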

While innocuous in this example, this can lead to harmful outcomes for social processes. This is exactly what led to these unanticipated headlines:

Figure II. Photo by author. Citations at the end of the article.

Again, if our application is directed towards medical care, with the purpose of predicting which group should get more attention based on prior data, then we are not simply predicting for the sake of optimization; we are actively magnifying and perpetuating prior disparities.

So do we abolish machine learning because we knew it would lead to world destruction?

In short, no. But perhaps we should reimagine the way we practice machine learning. As previously mentioned, when I first began to practice machine learning, the exaggerated, commonplace fear of artificial intelligence developing consciousness began to amuse me a bit. I thought that perhaps the worst thing that could happen would be misuse, as with any tool we have, although misuse of a physical tool is perhaps more apparent than misuse of a digital one.

However, the short film ‘Slaughterbots’, published by Alter on YouTube, provoked a lot of thought regarding ethics and the possible dangers of autonomous artificial intelligence. The primary reason the Future of Life Institute created the film was to communicate the following idea: “Because autonomous weapons do not require individual human supervision, they are potentially scalable weapons of mass destruction — unlimited numbers could be launched by a small number of people.”

In the context of this short film, the drones were exploited with the intent to harm. However, could disastrous unintentional repercussions arise from the use of AI systems? What would happen if we created an AI to optimize for a loosely defined goal under loosely defined constraints, without any supervisory precautions, and then realized it was more than we bargained for? What if we created a system with the best of intentions for social good but wound up with catastrophic and irreversible damage? The lack of consciousness becomes irrelevant, and yet it does nothing to minimize the potential harm.

Then I began stumbling across relevant resources that challenged the current standard model of artificial intelligence and addressed these issues, which is what ultimately led to this blog post.

Inverse Reinforcement Learning

The first was ‘Human Compatible’ by Stuart Russell, which argues that the standard model of AI is problematic because it leaves no room for intervention. In the current standard model, we focus on optimizing our initially set metrics without any human-in-the-loop supervision. Russell challenges this with a hypothetical situation: after some time, we realize that the consequences of our initial goals weren’t exactly what we wanted.

Instead, Stuart proposes that rather than using our AI systems to optimize for a fixed goal, we create them with the flexibility to adapt to our potentially vacillating goals. This means programming a level of uncertainty into the algorithm, so that it cannot be completely certain it knows our goals and will deliberately ask whether it needs to be redirected or switched off. This is known as ‘Inverse Reinforcement Learning.’

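Here is a toy sketch of that idea (my own illustration, not Russell's formal model; the goals, payoffs, and asking cost are invented): the agent keeps a probability over which goal the human actually wants, and compares the expected value of acting now with the expected value of asking first.

```python
# Toy illustration: an agent that is uncertain about the human's true goal
# decides whether to act now or to pause and ask. All goals, payoffs, and
# the asking cost below are invented for illustration only.

goal_probs = {"maximize_engagement": 0.6, "protect_wellbeing": 0.4}

# Payoff to the human of each candidate action under each possible goal.
payoffs = {
    "show_more_ads": {"maximize_engagement": 5, "protect_wellbeing": -8},
    "do_nothing":    {"maximize_engagement": 0, "protect_wellbeing":  0},
}

ASK_COST = 1  # small cost of interrupting the human

def ev_act_now():
    # Best single action given the current uncertainty over goals.
    return max(
        sum(p * payoffs[action][g] for g, p in goal_probs.items())
        for action in payoffs
    )

def ev_ask_first():
    # Asking reveals the true goal, after which the agent picks the best
    # action for that goal; this is the value of information minus the cost.
    return sum(
        p * max(payoffs[action][g] for action in payoffs)
        for g, p in goal_probs.items()
    ) - ASK_COST

if ev_ask_first() > ev_act_now():
    print("Agent: I'm not sure which goal you want. May I check before acting?")
else:
    print("Agent acts without asking.")
```

Because the agent's uncertainty about the objective is explicit, deferring to the human can be the highest-value move rather than a failure mode, which is the spirit of what Russell is proposing.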

Below you can see the difference between the common reinforcement learning goals and inverse reinforcement learning goals:

(Figure: reinforcement learning goals vs. inverse reinforcement learning goals; source)

With traditional reinforcement learning, the goal is to find the best behavior or action to maximize reward in a given situation. For example, in the domain of self-driving cars, the model receives a small reward for every moment it remains centered on the road and a negative reward if it runs a red light. The model moves through the environment trying to find the best course of action to take to maximize reward. Therefore, a reinforcement learning model is fed a reward function and attempts to find the optimal behavior.

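A minimal sketch of what such a hand-written reward function might look like (the state fields, thresholds, and reward values are invented for illustration; real driving stacks are far more involved):

```python
from dataclasses import dataclass

@dataclass
class State:
    distance_from_center: float  # meters from the lane center
    ran_red_light: bool

def reward(state: State) -> float:
    """Small bonus for staying centered, large penalty for running a red light.
    A reinforcement learning algorithm then searches for a policy that
    maximizes the sum of these rewards over time."""
    r = 0.0
    if abs(state.distance_from_center) < 0.5:
        r += 0.1          # small reward for each step spent near the center
    if state.ran_red_light:
        r -= 100.0        # large negative reward for running a red light
    return r

print(reward(State(distance_from_center=0.2, ran_red_light=False)))  # 0.1
print(reward(State(distance_from_center=0.2, ran_red_light=True)))   # -99.9
```

The designer writes the reward down explicitly; the learning algorithm's only job is to find behavior that maximizes it.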

However, sometimes the reward function is not obvious. To account for this, inverse reinforcement learning is fed a set of behaviors and tries to find the optimal reward function. Given these behaviors, what does the human really want? The initial goal of IRL was to uncover the reward function under the assumption that the given behavior is the most favorable behavior. However, we know that this isn’t always the case. Following this logic, the process may help us unveil the ways in which humans are biased, which would, in turn, allow us to correct future mistakes through awareness.

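Stated a bit more formally (this is the standard textbook formulation, added here for clarity), reinforcement learning and inverse reinforcement learning simply swap what is given and what is sought:

```latex
% Reinforcement learning: the reward r is given, a policy is sought.
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]

% Inverse reinforcement learning: demonstrations from an expert policy \pi_E
% are given, and a reward \hat{r} is sought under which the expert is
% (near-)optimal:
\text{find } \hat{r} \ \text{ such that } \
\mathbb{E}_{\pi_E}\!\left[\sum_{t} \gamma^{t}\, \hat{r}(s_t, a_t)\right]
\;\ge\;
\mathbb{E}_{\pi}\!\left[\sum_{t} \gamma^{t}\, \hat{r}(s_t, a_t)\right]
\quad \text{for all policies } \pi.
```

If the demonstrated behavior is not actually ideal, as the author notes, the recovered reward will encode whatever preferences, biases included, the demonstrator acted on.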

Biased COMPAS Algorithm

Another timely and relevant resource is the episode ‘Racism, the criminal justice system, and data science’ from Linear Digressions. In this episode, Katie and Ben tactfully discuss COMPAS, an algorithm whose name stands for Correctional Offender Management Profiling for Alternative Sanctions. Judges in a few US states may legally use the algorithm during sentencing to predict the likelihood of a defendant committing another crime.

Dressel et al., Science Advances, EAAO55850, 2018 (source)

However, various studies have challenged the accuracy of the algorithm, uncovering racially discriminatory results even though race is not an input. Linear Digressions explores potential reasons that racially biased results arise and wraps up with a lingering string of powerful, thought-provoking ethical questions:

What is a fair input for an algorithm? Is a more accurate algorithm fair if it introduces injustice when you consider the overall context? Where do the inputs come from? In what context will the output be deployed? When inserting algorithms into processes that are already complicated and challenging, are we spending enough time examining the context? What are we attempting to automate, and do we really want to automate it?

This last string of questions, which Katie so neatly presents at the end of the episode, is wonderfully pressing and left a lasting impression on me, given that I am such a huge proponent of machine learning for social good. I am positive that these considerations will become an integral part of each complicated, data-driven social problem I aim to solve using an algorithm.

Final Thoughts and Reflections

These models have hurt many people on a large scale while providing a false sense of security and neutrality, but perhaps what we can gain from this is an acknowledgement of the undeniable underrepresentation in our data. The lack of data for certain minority groups is evident when the algorithms plainly do not work for these groups.

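One practical starting point is to audit representation and performance per group rather than in aggregate. A minimal sketch with made-up labels and predictions (the group names, sizes, and error rates are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up evaluation set: group A dominates, group B is scarce.
groups = np.array(["A"] * 900 + ["B"] * 100)
y_true = rng.integers(0, 2, size=groups.size)

# Pretend model: accurate on the well-represented group, much worse on the other.
error_rate = np.where(groups == "A", 0.10, 0.35)
flip = rng.random(groups.size) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

for g in ["A", "B"]:
    mask = groups == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: n={int(mask.sum()):4d}, accuracy={acc:.2f}")
```

Aggregate accuracy here would look respectable (roughly 0.87) while quietly failing the underrepresented group, which is exactly why headline metrics are not enough.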

It is our responsibility, at the very least, to recognize bias so that we can push initiatives forward to reduce it.

Moreover, in “What Do We Do About the Biases in AI,” James Manyika, Jake Silberg and Brittany Presten present six ways in which management teams can maximize fairness with AI:

  1. Remain up to date on the research surrounding artificial intelligence and ethics
  2. Establish a process that can reduce bias when AI is deployed
  3. Engage in fact-based conversations around potential human biases
  4. Explore ways in which humans and machines can integrate to combat bias
  5. Invest more effort in bias research to advance the field
  6. Invest in diversifying the AI field through education and mentorship

Overall, I am very encouraged by the capability of machine learning to aid human decision-making. Now that we are aware of the bias in our data, it is our responsibility to take action to mitigate these biases so that our algorithms can truly provide a neutral assessment.

In light of these unfortunate events, I am hopeful that in the coming years there will be more dialogue on AI regulation. There are already wonderful organizations, such as AI Now, that are dedicated to research on the social implications of artificial intelligence. It is now our responsibility to continue this dialogue and move toward a more transparent and just society.

Articles Used for Figure II:

  1. AI is sending people to jail — and getting it wrong
  2. Algorithms that run our lives are racist and sexist. Meet the women trying to fix them
  3. Google apologizes after its Vision AI produced racist results
  4. Healthcare Algorithms Are Biased, and the Results Can Be Deadly
  5. Self-Driving cars more likely to drive into black people
  6. Why it’s totally unsurprising that Amazon’s recruitment AI was biased against women

Translated from: https://towardsdatascience.com/our-machine-learning-algorithms-are-magnifying-bias-and-perpetuating-social-disparities-6beb6a03c939
