Dr. Alex Hanna is a sociologist and research scientist working on machine learning fairness and ethical AI at Google.

Before that, she was an Assistant Professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto. She received her PhD in Sociology from the University of Wisconsin-Madison.

To learn more about Dr. Alex Hanna’s background and work, you can check out her personal website and follow her on Twitter.

1. What’s your role within the company?

I’m a research scientist within the Ethical AI team at Google. The Ethical AI team focuses on ensuring that AI is deployed in ethical and fair ways. People on our team have been focused on a few different domains: fairness in algorithmic systems, transparency in models and data, and the various ways of reporting at all of those levels.

I’m the team’s first sociologist, and a lot of what I do is focusing on understanding the assumptions of data that are used in machine learning systems, where the data comes from, and the sorts of considerations that are given when it comes to the data that machine learning models are trained on.

2. What’s your background? How did you get involved with this work and end up where you are today?

My PhD is in Sociology, and I started getting involved with AI ethics around 2017, when I attended a multidisciplinary workshop in the Netherlands with a few people who are in the space. I started wanting to get involved a bit more, and so I started to read the literature on it.

Before I was at Google, I was a professor at the University of Toronto’s Institute of Communication, Culture, Information and Technology. Then I came to Google, first in a different position; but then, I moved into the research direction, as I was already doing that work, which is how I found my way into the role.

Google is very collaborative. I really appreciated that I’ve had the opportunity to work with a lot of different people, and to do work that I found quite important, and to publish it in venues which I thought were important. When I was at Toronto, I was not really having the sort of conversations that I was wanting to have, so that move has been excellent.

The team is really great — it’s got people from a wide variety of backgrounds, and is racially and gender diverse, probably more than any other team I’ve ever worked on, inside academia or outside.

3. How do you operate within the company and what is your day to day work like?

Our team is a research team that sits within Google Brain. Google Brain is oriented such that you can create things or do research that doesn’t necessarily have to be connected to product; but because of who we are and what we work on, we’re often in many conversations about products and policy. So we can do both original research that may not have a direct bearing on products and what products do, and also work that has very serious policy and product implications.

There’s also a lot of different stuff that we do both internally and externally related to policies around fairness and data ethics that people have come to our team to ask about. One thing I’m working on right now is thinking about how we annotate for gender in machine learning systems.

Right now, machine learning systems such as most facial recognition or facial analysis systems look at a face and make some sort of judgment about gender, which is nonsensical because you can’t judge gender from someone’s face; gender identity is an internal state, and what you’re getting at is simply something more like gender expression. Google has already stepped away from building a public gender-classification API, and has removed the gender terms from its vision APIs.

This and similar things are in no small part due to our team. So we’re continuing that work, and we’re trying to come up with internal and external guidelines; because you shouldn’t annotate gender in order to build a classifier, and you need to take into consideration what the purpose of the system would be, and how it can potentially have detrimental downstream effects. So that’s one project I’m involved with.

We focus on original research, but with real product and policy impact.

4. What’s a concrete example of a positive change you or your team influenced?

Model Cards is a framework that we published two years ago at ACM FAT* (now ACM FAccT). It’s a way of reporting on models: their performance across different demographic groups, and the particular ethical considerations for those models. That work was published in an academic venue, but within the last year it has been adopted in many different places. For instance, Google Cloud has two public APIs that now have public model cards, so anybody can go in and look at how those models do across different population subgroups, and see reports of what they do and how well they do it.

Example of Google’s Model Cards on Face Detection

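Conceptually, a model card is structured reporting attached to a model: what it is for, where it falls short, and how it performs across subgroups. The sketch below is a minimal illustration in plain Python; the field names are hypothetical and do not follow Google’s actual Model Card schema or toolkit.

```python
# Hypothetical, minimal model-card structure -- illustrative only,
# not Google's published Model Card schema.
from dataclasses import dataclass, field


@dataclass
class SubgroupMetric:
    subgroup: str   # e.g. an age bracket or skin-tone category
    metric: str     # e.g. "precision" or "false_positive_rate"
    value: float


@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    limitations: str
    subgroup_metrics: list = field(default_factory=list)

    def worst_subgroup(self, metric: str):
        """Return the subgroup with the lowest value for a given metric."""
        rows = [m for m in self.subgroup_metrics if m.metric == metric]
        return min(rows, key=lambda m: m.value) if rows else None


card = ModelCard(
    model_name="face-detector-v1",
    intended_use="Detect face bounding boxes; not for inferring identity or gender.",
    limitations="Performance varies with lighting, pose, and image resolution.",
)
card.subgroup_metrics += [
    SubgroupMetric("subgroup_a", "precision", 0.94),
    SubgroupMetric("subgroup_b", "precision", 0.88),
]
print(card.worst_subgroup("precision").subgroup)  # subgroup_b
```

The point of structuring the report as data rather than free text is that disaggregated metrics can be queried and compared, for instance to flag the worst-served subgroup.
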
The model card work has also led to new technical infrastructure, namely Fairness Indicators, which allows the statistics that are part of the framework to be computed more automatically. The framework also outlines the steps that are necessary if you’re going to do this work, and what you need to consider; it’s not just pushing a button, seeing how your thing does, and then walking away from it. You have to think deeply about the model and how it’s being used in practice. So that itself is something that’s very appealing to particular teams.

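The core computation behind this kind of tooling is slicing an evaluation metric by subgroup. Here is a stripped-down sketch of that idea, using made-up data and plain Python rather than the actual Fairness Indicators library:

```python
# Per-subgroup metric slicing -- a toy sketch of the idea behind
# fairness-reporting tools, not the Fairness Indicators API.
from collections import defaultdict


def false_positive_rate_by_group(records):
    """Compute FPR per subgroup from (group, label, prediction) triples.

    FPR = FP / (FP + TN), computed over the negative-labeled examples
    of each group. Returns a {group: fpr} mapping.
    """
    fp = defaultdict(int)
    tn = defaultdict(int)
    for group, label, pred in records:
        if label == 0:          # only true negatives contribute to FPR
            if pred == 1:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}


# Toy data: (subgroup, true label, model prediction)
data = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]
rates = false_positive_rate_by_group(data)
for g in sorted(rates):
    print(g, rates[g])
# group_a 0.5
# group_b 0.0
```

A real pipeline would compute many such metrics (false negative rate, precision, calibration) with confidence intervals, but slicing by subgroup is the essential step.
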
5. What’s surprised you the most about doing this work or being in this role?

I’ve been happily surprised that there’s a team that I’ve found that is very interdisciplinary, and where I’m still very welcome and happy. And this is my first job outside of academia, so I guess I was surprised that there’s a lot of interest in this kind of work.

I guess it’s not surprising that some of the more logical conclusions of this work clash with the imperatives of working in a corporation in a capitalist economy, but I also think that part of this work is about trying to get people to the point of realizing that.

For instance, Manny Moss, Jake Metcalf, and danah boyd wrote this article that talks about what they call ethics owners in tech corporations, that is, people in my type of roles. They’ve tried to go through what this looks like across different companies by performing qualitative interviews.

Image of the article header (“Owning Ethics”) authored by Jacob Metcalf, Emanuel Moss and danah boyd

And it confirms my intuitions when they say that there are limitations — what they’re effectively saying is that ethics owners are trying to do X, and guide product and policy development in particular ways, but market fundamentalism and tech solutionism resist those pushes.

6. What have been your biggest challenges and takeaways? Do you think it’s possible for tech companies to care about ethics and take it seriously? Will we ever get to a point where ethicists are embedded across teams?

The challenge of building ethical tech is that we are embedded in a system of racial surveillance capitalism. No one necessarily goes out and intentionally tries to make an AI that works worse for people with darker skin, or to return search results that show sexualized images when you search for Black or Asian girls. As Ruha Benjamin and Safiya Noble, among other critical tech and race scholars, have shown, these machines are racist in their effects but typically not in their intent. And that manifests in myriad ways, because Google Search and AI are sociotechnical systems.

Video: Joy Buolamwini’s Ted Talk, “How I’m Fighting Bias in Algorithms”, November 2016

It’s not just about the training data that these systems are based upon; it’s that these systems are developed within this social and economic system. However, the incentives to act ethically can’t be left only to negative sanctions like brand risk. That’s why there needs to be regulation for data protection and AI: because it presents a way to actually force Silicon Valley’s hand.

7. How do you deal with situations where your voice or work isn’t taken seriously? Do you have advice for how to deal with that?

I suppose I just get louder. But also finding people who will be advocates or champions. I think that’s a good strategy, [though] building those networks takes time.

8. What’s an area in this space that you wish you could make more of an impact in or want to see improve?

I think that a lot of the vision of this work is still that people consider it to be highly technical, when it shouldn’t be considered technical. There’s plenty of work that’s been done in this space that really comes from different fields. Researchers need to be humble, especially technical researchers.

And of course I’m going to say this, because much of the research I do is non-technical, but there’s a plethora of work that’s been done in science and technology studies and sociology and whatnot that has a lot of bearing on this, and I feel like those ideas need to be taken seriously.

If you just think it’s a technical problem, you’re missing the forest for the trees.

9. Who do you look up to? Who inspires you?

A lot of the folks driving the AI ethics conversations are women, especially Black women, women of color, and queer folks. A lot of the folks who are doing the heavy lifting, or who have the most nuanced and creative views, are from those communities.

This is a funny thing to say, but I really respect my manager, Timnit Gebru (Twitter). In 2016 she went to NeurIPS and it was all these white guys, and afterwards she started Black in AI as a group to advocate for more black AI researchers, and it’s awesome. It took an immense amount of work. And instead of just being like, I don’t want to work in AI, she was like, let’s just change this entire field. So I really respect her and her advocacy. She’s been an immense advocate for me in social science research, and is just a great advocate in general.

Video: “Trends in Fairness and AI Ethics with Timnit Gebru” — The TWIML AI Podcast, January 2020

I have immense amounts of respect for the organizing work that Tawana Petty has done in Detroit, both with the Detroit Community Technology Project and the Our Data Bodies project, along with the rest of her collaborators in ODB: Tamika Lewis, Mariella Saba, Seeta Peña Gangadharan, and Kim Reynolds. Mariella is also involved with the Stop LAPD Spying Coalition, who have been out in front fighting policing and surveillance technologies.

I also really respect danah boyd, who I know has been doing this work for years, and Joan Donovan (Twitter), who started the Critical Internet Studies Slack workspace, is someone who’s also been at this for quite some time. And my friends who have known me for like a decade, like Anna Lauren Hoffmann (Twitter), who helped bring me into this work. She’s a professor at the University of Washington.

10. What advice do you have for students, new grads or tech workers who want to get involved but don’t know how to start?

I think what they should do is work on assessing the field. Some part of that is trying to understand what kind of subsection of the work really interests them — it could be something that’s technical, legal scholarship, social theory, political philosophy, or critical race theory, or sociology of gender.

There are lots of different angles to come at it from, and there’s just so much work that still needs to be done, especially in translating some of the concepts from one field to another.

Folks need to explore the different angles and find what they’re interested in or passionate about, and then from there, read what they find, follow citation trails, and find the scholars, organizers, and activists who are doing work in this space.

Thanks for checking out the interview and supporting the series!

This project was started by Tiffany Jiang and Shelly Bensal out of curiosity. Even before we graduated and started working in tech, we asked ourselves: Who are the ethicists working within tech companies today? Which companies offer such roles or teams? How much of the work is self-initiated? Lastly, what does “responsible innovation” or “ethics” work entail exactly?

We hope this series can serve as a helpful resource to students, new grads, or anybody wishing to do work in this space but who doesn’t know how to get involved. If you have any thoughts or comments you want to share with us, we’d love to hear them. Let us know if there’s someone you’d like us to interview next!

Twitter: (@EthicsModels) | Email: ethicsmodels.project@gmail.com.

Translated from: https://medium.com/ethics-models/interview-with-dr-alex-hanna-researcher-on-googles-ethical-ai-team-28f61d8b3a33
