Preface: This article attempts to discuss the definition and development history of machine vision, moving from the complex to the simple:


1 What Is Machine Vision

What is Computer Vision? Computer Vision has a dual goal.

From the biological science point of view, computer vision aims to come up with computational models of the human visual system.

From the engineering point of view, computer vision aims to build autonomous systems that can perform some of the tasks the human visual system can perform (and even surpass it in many cases). Many vision tasks involve extracting 3D and temporal information from time-varying 2D data, such as that obtained by one or more television cameras, and more generally understanding such dynamic scenes.

Of course, the two goals are intimately related. The properties and characteristics of the human visual system often give inspiration to engineers who are designing computer vision systems. Conversely, computer vision algorithms can offer insights into how the human visual system works. In this paper we shall adopt the engineering point of view.


2 The History of Machine Vision

2.1 1950s

To understand where we are today in the field of computer vision, it is important to know the history and the key milestones that shaped this rapidly evolving field. Computer vision emerged in the 1950s with research along three distinct lines: replicating the eye, replicating the visual cortex, and replicating the rest of the brain, listed here in increasing order of difficulty. One early breakthrough came in 1957 in the form of the Perceptron machine, a giant device tangled with wires that was the invention of psychologist and computer vision pioneer Frank Rosenblatt. The same year also marked the birth of the pixel. In the spring of 1957, National Bureau of Standards scientist Russell Kirsch took a photo of his son Walden and scanned it into a computer. To fit the image into the computer's limited memory, he divided the picture into a grid. This five-centimeter-square photo was the first digital image ever created.

2.2 1960s

In 1963, MIT graduate student Larry Roberts submitted a PhD thesis outlining how machines could perceive solid three-dimensional objects by breaking them down into simple two-dimensional figures. Roberts' "blocks world" provided the basis for future computer vision research. Another key moment in the field was the founding of the Artificial Intelligence Lab at MIT. One of its co-founders, Marvin Minsky, famously instructed a graduate student in 1966 to connect a camera to a computer and have it describe what it sees, as a summer project. Fast forward 50 years, and we are still working on it. Around the time ARPANET went live in the fall of 1969, Bell Labs scientists Willard Boyle and George Smith were busy inventing the charge-coupled device (CCD). The CCD, which converts photons into electrical impulses, quickly became the preferred technology for capturing high-quality digital images, and in October 2009 the two were awarded the Nobel Prize in Physics for their invention.

2.3 1970s

The 1970s saw the first commercial application of computer vision technology: optical character recognition (OCR). Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including the extraction of edges from images, the labeling of lines, the modeling and representation of objects as interconnections of smaller structures, optical flow, and motion estimation. The first commercial digital camera appeared in 1975. The next decade saw studies based on more rigorous mathematical analysis and the quantitative aspects of computer vision.

2.4 1980s

2.5 1990s

By the 1990s, research in projective 3D reconstruction had led to a better understanding of camera calibration, which in turn evolved into methods for sparse 3D reconstruction of scenes from multiple images. Progress was also being made in stereo imaging and multi-view stereo techniques. At the same time, variations of graph-cut algorithms were used to solve image segmentation, enabling higher-level interpretation of coherent regions of images. This decade also marked the first time statistical learning techniques were used in practice to recognize faces. Toward the end of the 1990s, a significant change came about with the increased interaction between computer graphics and computer vision, including image-based rendering, image morphing, view interpolation, panoramic image stitching, and early light-field rendering.

In 2001, two computer scientists, Paul Viola and Michael Jones, triggered a revolution in face detection, which brought computer vision into the spotlight; their face detection framework was later successfully incorporated into digital cameras. In 2004, David Lowe published the famous Scale-Invariant Feature Transform (SIFT), a breakthrough solution to the correspondence problem. Neural networks got their game on around 2005, when training them was made many times faster, much cheaper, and more accurate by using off-the-shelf GPUs of the kind popularly used in gaming consoles.

Progress in computer vision accelerated further thanks to the Internet, as large annotated datasets of images became available online. These datasets helped drive the field forward by posing difficult challenges, contributed to the rapid growth in importance of learning methods, and helped benchmark and rank computer vision techniques. The ImageNet project, a large visual database designed for use in visual object recognition research, started in 2009. Since 2010, the ImageNet Large Scale Visual Recognition Challenge has pitted people against computers to see who does a better job of identifying images. It is not an exaggeration to say that the artificial intelligence boom we see today could be attributed to a single event: the announcement of the 2012 ImageNet challenge results. A research team from the University of Toronto submitted a deep convolutional neural network architecture called AlexNet, still referenced in research to this day, which beat the state of the art by a whopping 10.8 percentage-point margin. Convolutional neural networks later became the neural network of choice for many data scientists, as they require very little hand-crafted pre-processing compared to other image processing algorithms. In the last few years, CNNs have been successfully applied to identify faces, objects, and traffic signs, as well as to power vision in robots and self-driving cars.

In 2014, a team of researchers at the University of Montreal introduced the idea that machines can learn faster by having two neural networks compete against each other. One network attempts to generate fake data that looks like the real thing, while the other tries to discriminate the fake from the real. Over time both networks improve, until the generator produces data so realistic that the discriminator cannot tell the difference. Generative adversarial networks (GANs) are considered one of the most significant breakthroughs in computer vision of the past few years.
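To make the adversarial setup concrete, the original GAN paper (Goodfellow et al., 2014) formalizes the competition as a two-player minimax game between the generator G and the discriminator D:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

The discriminator D is trained to assign high probability to real samples x and low probability to generated samples G(z), while the generator G is trained to fool it.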
In 2015, Facebook announced that its DeepFace facial recognition algorithm identifies the correct person 97.35 percent of the time, putting it more or less on par with humans. While computer vision was barely mentioned in the news before 2015, coverage of the topic has grown by more than 500 percent since then, and today computer vision makes the news quite often.


References:

1 History of machine vision

https://www.pulsarplatform.com/blog/2018/brief-history-computer-vision-vertical-ai-image-recognition/

2 Machine vision timeline

https://www.verdict.co.uk/computer-vision-timeline/

More than 50 years ago Marvin Minsky made the first attempt to mimic the human brain, triggering further research into computers' ability to process information and make intelligent decisions. Over the years, the process of automating image analysis led to the programming of algorithms. However, it was only from 2010 onward that deep learning techniques accelerated. In 2012 Google Brain built a neural network of 16,000 computer processors that could recognise pictures of cats using a deep learning algorithm.

Listed below are the major milestones in the computer vision theme, as identified by GlobalData.

1959 – The first digital image scanner was invented by transforming images into grids of numbers.

1963 – Larry Roberts, the father of CV, described the process of deriving 3D info about solid objects from 2D photographs.

1966 – Marvin Minsky instructed a graduate student to connect a camera to a computer and have it describe what it sees.

1980 – Kunihiko Fukushima built the ‘neocognitron’, the precursor of modern Convolutional Neural Networks.

1991-93 – Multiplex recording devices were introduced, together with covert video surveillance for ATMs.

2001 – Two researchers at MIT introduced the first face detection framework (Viola-Jones) that works in real-time.

2009 – Google started testing robot cars on roads.

2010 – Google released Goggles, an image recognition app for searches based on pictures taken by mobile devices.

2010 – To help tag photos, Facebook began using facial recognition.

2011 – Facial recognition was used to help confirm the identity of Osama bin Laden after he was killed in a US raid.

2012 – Google Brain’s neural network recognized pictures of cats using a deep learning algorithm.

2015 – Google launched TensorFlow, its open-source machine learning system.

2016 – Google DeepMind’s AlphaGo algorithm beat the world Go champion.

2017 – Waymo sued Uber for allegedly stealing trade secrets.

2017 – Apple released the iPhone X, advertising face recognition as one of its primary new features.

2018 – Alibaba’s AI model scored better than humans in a Stanford University reading and comprehension test.

2018 – Amazon sold its real-time face recognition system Rekognition to police departments.

2019 – The Indian government announced a facial recognition plan allowing police officers to search images through a mobile app.

2019 – The US added four of China’s leading AI start-ups to a trade blacklist.

2019 – The UK High Court ruled that the use of automatic facial recognition technology to search for people in crowds is lawful.

2020 – Intel will launch the Intel Xe graphics card pushing into the GPU market.

2025 – By this time, regulation of facial recognition will significantly diverge between China and the US/Europe.

2030 – At least 60% of countries globally will be using AI surveillance technology (it is currently 43% according to CEIP).

This is an edited extract from the Computer Vision – Thematic Research report produced by GlobalData Thematic Research.

3 Computer vision history

https://computer-vision-ai.com/blog/computer-vision-history/

To track the key dates in computer vision history, one should be ready to learn a lot about the boom of technologies in the 20th century in general. Computer vision was once considered a futuristic dream. Then it was labeled as a slowly emerging technology. Today, computer vision is an established interdisciplinary field.

The history of computer vision is a telling example of how one science can impact other fields over a short period of time. Computer vision made image processing, photogrammetry, and computer graphics possible. On top of that, the history of computer vision demonstrates how spheres only remotely related to computers are influenced too. For instance, modern computer vision algorithms help to facilitate agriculture, retail shopping, postal services, etc.

In this post, we have decided to shed light on how computer vision evolved as a science. The main question we raise is: who contributed to the growth of computer vision technology, and how? To answer it, we have prepared a short chronological overview of the milestones in computer vision history. Get ready to discover why the first attempts to teach computers to "see" were doomed. We will also examine the most influential works in computer vision published in the 20th century. Feel free to join us, whether you are a professional computer vision developer or just an enthusiastic admirer of computers!

First Success (the 1950s – 1960s)

Constructing computer vision systems seemed an unrealistic task for scientists in the middle of the 20th century. Neither engineers nor data analysts were fully equipped to extract information from images, let alone videos. As a result, the very idea of visual information being transformed into editable, systematized data seemed unrealistic. In practice, it meant that all image recognition services, including space imagery and X-ray analysis, demanded manual tagging.

(Source – Brain Wars: Minsky vs. Rosenblatt)

A group of scholars led by Allen Newell was also interested in investigating the connection between electronic devices and data analysis. The group included Herbert Simon, John McCarthy, Marvin Minsky, and Arthur Samuel. Together, these scientists came up with the first AI programs, such as the Logic Theorist and the General Problem Solver, which were used to teach computers to play checkers, speak English, and solve simple mathematical tasks. Inspired by these results, the scholars became over-enthusiastic about the potential of computers. The famous Summer Vision Project is a telling example of how unrealistic most expectations about computer vision were back in the 1960s.

(Source – Vision memo)

Another important name in the history of computer vision is Larry Roberts. In his thesis "Machine Perception of Three-Dimensional Solids" (issued in 1963), Roberts outlined basic ideas about how one could extract 3D information from 2D imagery. Known for his "block-centered" approach, Roberts laid the foundations for further research on computer vision technology.

(Source – Machine Perception Of Three-Dimensional Solids)

The potential of computers and artificial intelligence (aka AI) mesmerized the scientific community in the middle of the 20th century. As a result, more computer labs got funded, mostly by the US Department of Defense. However, the goals that scientists set during the 1960s were too ambitious: they included fully solving advanced computer vision and achieving professional, mistake-free machine translation. It is no wonder that quite soon (by the mid-1970s) the boom caused by AI research in general, and computer vision studies in particular, started to fade. The growing criticism of AI and computer vision technology resulted in the first "AI winter".

Computer Vision History: AI Winter (1970s)

"AI winter" is a term used to describe a period when AI research was criticized and underfunded. There were several reasons why computer-related studies came under skeptical scrutiny in the 1970s, namely:

  1. Too high and often unrealistic expectations about the potential of AI and computer vision. Some of the goals set at the time have still not been reached, and under military, commercial, and philosophical pressure, research in AI and computer vision slowed down.
  2. The lack of scientific support on an international level. Although computer vision laboratories had started to appear in the 1960s, most computer vision scholars worked as individuals or in isolated groups.
  3. Insufficient computing capacity. Since any visual data analysis implies processing huge amounts of images, characters, and videos, it can be performed only on advanced devices with relatively high computing capacity.

All the above-mentioned factors combined led to a decrease in the quantity and quality of computer vision studies.

Despite the sharp cutback in research, the 1970s also marked computer vision's first recognition as a commercial field. In 1974, Kurzweil Computer Products offered its first optical character recognition (aka OCR) program, aimed at overcoming the handicap of blindness by giving blind people access to media printed in any font. Unsurprisingly, this intelligent character recognition quickly won the attention of the public sector, industry, and individual researchers.

(Source – People of the Assoc. for Computing Machinery: Ray Kurzweil)

Computer Vision History: Vision by Marr (1980s)

One of the main advocates of computer vision was David Marr. Thanks to his "Vision: A Computational Investigation into the Human Representation and Processing of Visual Information", published posthumously in 1982, computer vision was brought to a whole new level.

(Source – Vision: Marr Lecture Index)

What Marr offered was a simple and elegant way to build 3D models from 2D (and 2.5D) sketches of an image. During the first stage of the analysis, Marr applied edge detection and image segmentation techniques to the initial image in order to create 2D representations, the so-called "primal sketches". These primal sketches were then further processed with the help of binocular stereo to obtain 2.5D sketches. The final stage of Marr's analysis developed full 3D models out of the 2.5D sketches.

Although modern computer vision scientists consider Marr's approach too complicated and insufficiently goal-driven, his work remains one of the biggest breakthroughs in the history of computer vision.

Mathematical Tools (the 1990s – until today)

With computers getting cheaper and faster and the amount of data constantly increasing, computer vision history took a turn towards mathematical algorithms. Most modern studies in computer vision apply linear algebra, projective and differential geometry, and statistics to solve numerous tasks connected with image and video recognition and 3D modeling.

A telling example of how computer vision is influenced by mathematics is the eigenface approach. Using the covariance matrix and the findings of L. Sirovich and M. Kirby (1987), Matthew Turk and Alex Pentland created the eigenface automated face recognition system. Based on probability theory and statistics, this system can be applied not only to identify existing faces but also to generate new ones.
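As a sketch of the idea, the snippet below computes eigenfaces with plain NumPy, using an SVD of the mean-centered image matrix (a standard, numerically stable route to the eigenvectors of the covariance matrix). The dataset, image size, and number of components are placeholder assumptions for illustration, not values from Turk and Pentland's work.

```python
# A minimal eigenfaces sketch with NumPy. The face data here is random
# placeholder data standing in for real, aligned grayscale face images.
import numpy as np

rng = np.random.default_rng(0)
n_faces, h, w = 100, 32, 32            # hypothetical dataset dimensions
faces = rng.random((n_faces, h * w))   # each row: one flattened face image

# Center the data: eigenfaces are the eigenvectors of the covariance
# matrix of the mean-subtracted images, computed here via SVD.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)

k = 16                                  # keep the top-k principal components
eigenfaces = vt[:k]                     # shape (k, h*w), one eigenface per row

# Project a face into "face space" and reconstruct it from k coefficients.
weights = eigenfaces @ (faces[0] - mean_face)
reconstruction = mean_face + weights @ eigenfaces

# Recognition: compare weight vectors, e.g. by Euclidean distance.
probe_weights = eigenfaces @ (faces[1] - mean_face)
print(f"distance in face space: {np.linalg.norm(weights - probe_weights):.3f}")
```

Recognition then reduces to comparing low-dimensional weight vectors instead of raw pixel arrays, which is what made the approach practical on the hardware of the time.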

The Lucas-Kanade method is one more example of how computer vision history is connected with mathematics. Developed by Bruce D. Lucas and Takeo Kanade, the method assumes that the optical flow is essentially constant within a small neighborhood of each pixel, so the motion there can be identified by solving the optical flow equation over that neighborhood by least squares.
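Below is a minimal sketch of that least-squares step for a single window over two synthetic grayscale frames (real implementations run this per pixel over small neighborhoods, usually with image pyramids). Each pixel contributes one brightness-constancy equation Ix·u + Iy·v + It = 0, and the stacked system is solved for the flow (u, v); the frames and shift here are made-up test data.

```python
# Minimal single-window Lucas-Kanade flow estimate, using NumPy only.
import numpy as np

# Synthetic placeholder frames: a smooth pattern shifted 0.5 px to the right.
y, x = np.mgrid[0:16, 0:16].astype(float)
frame1 = np.sin(x / 3.0) + np.cos(y / 4.0)
frame2 = np.sin((x - 0.5) / 3.0) + np.cos(y / 4.0)

# Spatial and temporal derivatives (simple finite differences).
ix = np.gradient(frame1, axis=1)
iy = np.gradient(frame1, axis=0)
it = frame2 - frame1

# Brightness constancy per pixel: ix*u + iy*v + it = 0.
# Stack one equation per pixel and solve the overdetermined system.
a = np.stack([ix.ravel(), iy.ravel()], axis=1)   # shape (256, 2)
b = -it.ravel()
(u, v), *_ = np.linalg.lstsq(a, b, rcond=None)
print(f"estimated flow: u={u:.2f}, v={v:.2f}")   # u should be close to 0.5
```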

John F. Canny is another scholar who made a solid contribution to the history of computer vision: he developed the Canny edge detector, an instrument that identifies the vast majority of image edges reliably.
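For illustration, the sketch below runs OpenCV's implementation of the detector on a synthetic test image. The threshold1/threshold2 parameters are the low and high hysteresis thresholds; the values chosen here are arbitrary assumptions, not canonical settings.

```python
# Running the Canny edge detector via OpenCV on a synthetic test image.
import cv2
import numpy as np

img = np.zeros((128, 128), dtype=np.uint8)
cv2.rectangle(img, (32, 32), (96, 96), 255, -1)   # white square on black
img = cv2.GaussianBlur(img, (5, 5), 0)            # soften the edges slightly

# threshold1/threshold2 are the low/high hysteresis thresholds.
edges = cv2.Canny(img, threshold1=50, threshold2=150)
print("edge pixels found:", int(np.count_nonzero(edges)))
```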

(Source – Canny edge detector demos)

As computer vision technology becomes comparatively inexpensive, more enterprises have implemented it in their manufacturing cycles. Computers are trained to scan products to check their quality, sort mail, and tag images and videos. Almost any phase of manufacturing can be tracked with the help of industrial machine vision technology.

However, as computer vision history demonstrates, human involvement is still needed to train computers for image tagging and video tracking. To tag, sort, and identify both still images and moving objects, any computer vision system has to be given enough examples that are already marked and categorized. In other words, the future in which computers take over the world is a bit more distant than it may seem to the average computer enthusiast.


4 Machine vision timeline (Phase 1 Vision)

https://www.phase1vision.com/resources/timeline
