Why Is Regulating Artificial Intelligence So Tricky?

Artificial Intelligence is an umbrella term for a rapidly evolving, highly competitive technological field. It is often used loosely and has come to cover so many different approaches that even some experts cannot say, in plain terms, exactly what artificial intelligence is. This makes the rapidly growing field of AI tricky to navigate and even more difficult to regulate properly.


The point of regulation should be to protect people from physical, mental, environmental, social, or financial harm caused by the actions or negligence of others. (Some may add requirements like “fairness” or “transparency”, or expand the protection to animals, plants, institutions, historic landmarks, etc. For this article, let’s stick to the general point described above). Regulation doesn’t guarantee that accidents won’t happen. But, if something were to go wrong, there has to be a fix. This requires both explainability (to know why the error occurred) and determinism (to assure the fix works every time) in the solution.


Imagine if someone asked, “When should we start putting regulations on computer software?” That’s not a very precise question. What kind of software? Video games? Spreadsheets? Malware? Similarly, artificial intelligence can be implemented in many ways. It’s important to distinguish types and use cases. Below, some basic types are briefly described.


1. Automation — A lot of what is called “AI” today is simply what was called “automation” a decade ago. Someone notices a pattern in their daily work with some variables, and writes up a program to repeat that work for them. There’s no learning by the program. The “intelligence” is provided by the developer when coding. Occasionally, some patterns change or new variables appear, requiring the developer to update the code. If you’ve ever created a “macro” (or cleverly used redstone in Minecraft), then you’ve automated your work.

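To make this concrete, here is a minimal sketch of an automated chore in Python. The file name and column are hypothetical; the point is that the developer noticed the pattern and hard-coded it, and the program learns nothing.

```python
import csv

# A hypothetical daily chore, automated: sum a column from a report.
# All the "intelligence" (which file, which column, what to do with it)
# is supplied by the developer, not learned by the program.
def total_weekly_sales(path="weekly_sales.csv"):
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row["amount"])  # hard-coded rule: add up "amount"
    return total

if __name__ == "__main__":
    print(total_weekly_sales())
```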

2. Modeling — A step more sophisticated than mere automation, modeling requires a developer to understand the problem enough to consider edge cases, variables, and patterns not yet seen. The better the model, the more of the possibility space is covered without going overboard to handle cases that will never be encountered. Models are also not capable of learning. Again, the “intelligence” is provided by the developer. Models are static and require manual effort to improve over time. These work best in deterministic, well-defined problem domains where information is fully known, such as chess. All the rules are clearly understood. Pieces move in exactly specified ways. Everyone sees the full board — nothing is hidden. The variable is the opponent’s choices, but they are restricted to a finite set of possibilities. Brute-force methods that test all possibilities before selecting, or search algorithms (e.g. A*, pronounced “A star”) that reduce the number of possibilities to test, find optimal game play. People have also applied models to non-deterministic (i.e. stochastic) problems where not all the information is available. This is what the weather service does when telling you that there is an 80% chance of rain tomorrow. There’s no way for them to know where every molecule is, or their velocity, yet there are multiple “weather models” that provide a reasonable accounting of the possibilities. Notice that the results are returned as probabilities instead of absolutes like in chess. Similarly, quants (“quantitative analysts”) create models to predict the stock market.

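As an illustration of the search techniques mentioned above, here is a minimal A* sketch in Python on a made-up grid, using Manhattan distance as the heuristic. It finds an optimal path cost in a fully known, deterministic world — exactly the kind of problem where models shine.

```python
import heapq

# A* search on a small grid. 0 = open cell, 1 = wall. The grid,
# start, and goal below are made up for illustration.
def a_star(grid, start, goal):
    def h(p):  # admissible heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]  # priority queue of (f = g + h, g, position)
    best_g = {start: 0}
    while frontier:
        f, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g                   # cost of an optimal path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None                        # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))   # -> 6 (around the wall)
```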

3. Machine Learning — Providing the software a way to change its internal models starts to remove the human from some parts of the solution. Essentially, data becomes the majority of the system’s programming. Humans, however, still create machine learning models, select what they believe is relevant data to use for training, and iterate and interpret results until the answers fit the developer’s belief of what a good answer should look like. All this introduces human bias into the solution. Examples of machine learning techniques include “deep learning”, “convolutional neural networks”, “support vector machines”, and “random forests”. These solutions address problem domains that are stochastic in nature, and/or have missing or hidden information. Games of chance, use cases like stock market predictions, actuarial sciences, and complicated “big data” are all good candidates for machine learning techniques.

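For a sense of how little hand-written logic such a system contains, here is a minimal random-forest sketch using scikit-learn, with synthetic data standing in for a real training set. The behavior of the trained model comes almost entirely from the data, not from rules the developer wrote.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)           # the "programming" happens here, via data
print(model.score(X_test, y_test))    # accuracy on held-out examples
```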

4. Artificial General Intelligence (AGI) — This is the holy grail of, well, everything. Building machines that learn and think generally and can adapt to environmental changes on their own without human involvement is the endgame of computing, to pick just one field. There may seem to be a large gap from 3 to 4, but the reality is that all the other solutions are just variations of what already exists.


Let’s consider what regulation would mean for each type. Keep in mind that regulation requires people to review and understand what is happening between the inputs and outputs of a system enough to assure the solution can do no harm. Regulation is built on trust. We trust that the regulators both understand and are competent in doing their jobs. We can’t automate regulation; otherwise we are stuck in an endless loop, regulating the software that regulates the software that regulates the software… see? Regulation requires humans, trust, and competence.


Regulations currently exist for the first two types of AI. The third and fourth types become tricky.


Regulating Automation — These programs rely exclusively on the work of people. These types of programs are already regulated in industries and critical applications. For example, the autopilot software on an aircraft must pass stringent certifications, as do many medical devices. Regulation can consist of auditing the software to uncover malware, bugs, or deficiencies. Basically, this pairs the responsibilities of one person (the developer) with those of another (the independent auditor). The assumption is that a second person’s check will catch any irregularities or issues. (This is a trivialized view of what actually occurs. Ideally, there are multiple testing teams at different stages of development and deployment.) Regulation works here because humans can understand these systems.

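As a toy illustration of the second person’s check, here is a regression test of the sort an independent auditor might run against an automated routine. The routine itself is hypothetical; the point is the pattern of verifying outputs against known-good cases.

```python
# The routine under audit (hypothetical): a simple automated calculation.
def apply_discount(price, rate):
    return round(price * (1 - rate), 2)

# The auditor's check: exercise the routine against known-good cases.
# In practice this would be full test suites run by independent teams
# at multiple stages of development and deployment.
def test_apply_discount():
    assert apply_discount(100.0, 0.15) == 85.0
    assert apply_discount(19.99, 0.0) == 19.99
    assert apply_discount(0.0, 0.5) == 0.0

test_apply_discount()
print("audit checks passed")
```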

Regulating Modeling — These programs also rely exclusively on the work of people. Therefore, they lend themselves well to regulation. Modeling is a step more sophisticated than automation, so this does get trickier. But, it is still within the realm of trustworthy people competently executing their regulatory duties. The financial models used by banks, for example, are very highly regulated to ensure they work without bias. Modelers must prove to regulators that their models don’t discriminate in, say, loan provisioning based on ethnicity.

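One way such a proof can look in practice is a disparate-impact check. The sketch below compares approval rates across groups against the four-fifths threshold commonly used in U.S. anti-discrimination guidance; the decisions and group labels are made up.

```python
# Hypothetical audit: compare approval rates across groups. A group whose
# rate falls below 80% of the highest group's rate (the "four-fifths rule",
# one common threshold) is flagged for review.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def four_fifths_check(decisions_by_group):
    rates = {g: approval_rate(d) for g, d in decisions_by_group.items()}
    reference = max(rates.values())
    return {g: (r, r / reference >= 0.8) for g, r in rates.items()}

print(four_fifths_check({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved -> passes
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved -> fails the check
}))
```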

Regulating Machine Learning — Since these techniques are implemented specifically to tackle problems that are too difficult for mere mortals to understand, regulating them requires something different from the prior two types. Regardless of how trustworthy and competent the regulators are, they won’t fully understand the internals of the majority of these machine learning solutions. At least not for any interesting, real-world problems. Definitely not for any non-deterministic solutions. Even treating a specific technique as an understandable model ignores the behavior of that model under load from unvalidated data. Perhaps regulation means that all training data must be verified and validated prior to digestion by the algorithm? That negates the point of doing machine learning in the first place. Autonomous vehicles are an example of where this type of AI needs regulation. A January 2020 AP News article about multiple Tesla crashes suggests evidence-based results as a way to determine safety; in it, the executive director of the Center for Auto Safety in Washington said,


“At some point, the question becomes: How much evidence is needed to determine that the way this technology is being used is unsafe? In this instance, hopefully these tragedies will not be in vain and will lead to something more than an investigation by NHTSA.”


The article goes on to say:


“Levine and others have called on the agency to require Tesla to limit the use of Autopilot to mainly four-lane divided highways without cross traffic. They also want Tesla to install a better system to monitor drivers to make sure they’re paying attention all the time. Tesla’s system requires drivers to place their hands on the steering wheel. But federal investigators have found that this system lets drivers zone out for too long.”


3 crashes, 3 deaths raise questions about Tesla’s Autopilot


That’s an appropriate measure for regulating stochastic machine learning systems. It’s less about the known limitations of the software, and more about the ways it ought (and ought not) to be used.


Alternative deterministic and explainable machine learning algorithms exist. These lend themselves better to regulation. If regulatory laws are required for specific software use cases, then the solutions must be implemented using these fully explainable technologies. They are absolutely necessary for mission-critical applications that attempt to replace type-one AIs in industries that are already highly regulated.

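As one example of such an explainable technique, here is a sketch of a shallow decision tree in scikit-learn. Its complete, deterministic rule set can be printed and read by a human auditor — something a deep neural network cannot offer.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree: every prediction follows a short, printable
# chain of threshold rules that a regulator can inspect end to end.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))  # the full rule set, rendered as readable text
```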

Regulating Artificial General Intelligence — For there to be any chance of regulating AGI solutions, the components of that solution must be completely deterministic and explainable. Compared to the others, this type would seem to be the most challenging to regulate. But, consider the goal: these systems are ultimately intended to work like human minds. At that point, regulations would revert to the regular laws to which we hold people accountable. But, there won’t be a light-switch moment. Before reaching full human-level intelligence, these systems will first evolve through much humbler abilities. They may progress through the equivalents of snail, mouse, squirrel, dog, and monkey minds. If these systems are allowed to act on their decisions, some randomness must enter the algorithm. This is simply due to decision science, and not a feature of the AI/AGI. Deterministic and stochastic pathways can be kept separable, and therefore regulated independently.

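A minimal sketch of that separability claim, with hypothetical policies: the deterministic pathway is a fully auditable rule, while the stochastic pathway isolates the randomness behind a seedable generator so that decisions can be logged and replayed.

```python
import random

# Deterministic pathway: a fully explainable rule a regulator can audit.
def deterministic_policy(state):
    return max(state["actions"], key=lambda a: a["score"])

# Stochastic pathway: all randomness is isolated here, behind a generator
# that can be seeded, logged, and replayed for independent review.
def stochastic_policy(state, rng):
    return rng.choice(state["actions"])

state = {"actions": [{"name": "left", "score": 0.2},
                     {"name": "right", "score": 0.9}]}
print(deterministic_policy(state)["name"])                   # always "right"
print(stochastic_policy(state, random.Random(42))["name"])   # reproducible draw
```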

Regulations for AI systems already exist. This trend will continue. Unfortunately, our society tends to be reactive instead of proactive. It is likely that regulations will only be implemented after harmful events occur. When these tragedies occur, we shouldn’t rush to blame the technology or the developer alone. Users who misuse the technology and lawmakers who fail to educate themselves about it must also shoulder the blame.


A driver that doesn’t follow the Tesla Autopilot instructions of staying awake, keeping hands on the steering wheel, or operating it only on highways is using the system outside of its design. If an accident occurs, that driver ought to be held responsible. It’s not enough to claim ignorance of the limitations and blame the engineers. Nor are the Tesla marketers absque culpa for naming the system “Autopilot”, giving consumers an overhyped sense of its functionality. It is within these very human deceptions, whether self-made or as active participants, that lawmakers can impose controls. They must, however, be willing to work hard at understanding the technology so that they can delineate where the technology’s limitations end and human limitations begin.


Translated from: https://medium.com/swlh/why-is-regulating-artificial-intelligence-so-tricky-f202c967c2b4
