Your first article for getting started with brain-computer interfaces. It is well written; the original title is “A Beginner’s Guide to Brain-Computer Interface and Convolutional Neural Networks”.
We will translate it into Chinese in due course for our readers. The article itself is quite readable, with no particularly obscure sentences, so feel free to work through it on your own first.
(https://towardsdatascience.com/a-beginners-guide-to-brain-computer-interface-and-convolutional-neural-networks-9f35bd4af948)
The author’s homepage is: https://towardsdatascience.com/@alexandregonfalonieri

Roadmap
Part 1:
The big picture of brain-computer interface and AI + Research papers
Part 2:
In-depth explanation of neural networks used with BCI

Can the mind connect directly with artificial intelligence, robots and other minds through brain-computer interface (BCI) technologies to transcend our human limitations?
For some, it is a necessity for our survival. Indeed, we would need to become cyborgs to remain relevant in an age of artificial intelligence.

Definition
Brain-Computer Interface (BCI): devices that enable their users to interact with computers by means of brain activity alone, this activity generally being measured by electroencephalography (EEG).

Electroencephalography (EEG): physiological method of choice to record the electrical activity generated by the brain via electrodes placed on the scalp surface.

Functional magnetic resonance imaging (fMRI): measures brain activity by detecting changes associated with blood flow.

Functional Near-Infrared Spectroscopy (fNIRS): the use of near-infrared spectroscopy (NIRS) for the purpose of functional neuroimaging. Using fNIRS, brain activity is measured through hemodynamic responses associated with neuron behaviour.

Convolutional Neural Network (CNN): a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data.

Visual Cortex: part of the cerebral cortex that receives and processes sensory nerve impulses from the eyes

History
Sarah Marsh, a Guardian news reporter, said “Brain-computer interfaces (BCI) aren’t a new idea. Various forms of BCI are already available, from ones that sit on top of your head and measure brain signals to devices that are implanted into your brain tissue.” (source)

Most BCIs were initially developed for medical applications. According to Zaza Zuilhof, Lead Designer at Tellart, “Some 220,000 hearing impaired already benefit from cochlear implants, which translate audio signals into electrical pulses sent directly to their brains.” (source)

The article titled “The Brief History of Brain Computer Interfaces” gives us plenty of information about the history of BCI. Indeed, the article says: “In the 1970s, research on BCIs started at the University of California, which led to the emergence of the expression brain–computer interface. The focus of BCI research and development continues to be primarily on neuroprosthetics applications that can help restore damaged sight, hearing, and movement. The mid-1990s marked the appearance of the first neuroprosthetic devices for humans. BCI doesn’t read the mind accurately, but detects the smallest of changes in the energy radiated by the brain when you think in a certain way. A BCI recognizes specific energy/frequency patterns in the brain.

June 2004 marked a significant development in the field when Matthew Nagle became the first human to be implanted with a BCI, Cyberkinetics’s BrainGate™.

In December 2004, Jonathan Wolpaw and researchers at New York State Department of Health’s Wadsworth Center came up with a research report that demonstrated the ability to control a computer using a BCI. In the study, patients were asked to wear a cap that contained electrodes to capture EEG signals from the motor cortex — part of the cerebrum governing movement.

BCI has had a long history centered on control applications: cursors, paralyzed body parts, robotic arms, phone dialing, etc.
Recently Elon Musk entered the industry, announcing a $27 million investment in Neuralink, a venture with the mission to develop a BCI that improves human communication in light of AI. And Regina Dugan presented Facebook’s plans for a game changing BCI technology that would allow for more efficient digital communication.”

According to John Thomas, Tomasz Maszczyk, Nishant Sinha, Tilmann Kluge, and Justin Dauwels “A BCI system has four major components: signal acquisition, signal preprocessing, feature extraction, and classification.” (source)
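To make those four components concrete, here is a minimal Python sketch of how such a pipeline might be wired together on simulated data. The sampling rate, filter band, feature choice, and classifier are illustrative assumptions, not part of the quoted description.

```python
# A minimal, illustrative BCI pipeline: signal acquisition -> preprocessing ->
# feature extraction -> classification. Shapes and parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def acquire(n_trials=100, n_channels=8, n_samples=2 * FS):
    """Stand-in for signal acquisition: simulate EEG-like data and labels."""
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, size=n_trials)  # two mental states to discriminate
    return X, y

def preprocess(X, low=8.0, high=30.0):
    """Band-pass filter each channel (8-30 Hz covers common motor rhythms)."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, X, axis=-1)

def extract_features(X):
    """Very simple features: log band power per channel."""
    return np.log(np.var(X, axis=-1))

X, y = acquire()
features = extract_features(preprocess(X))
clf = LinearDiscriminantAnalysis().fit(features, y)
print("training accuracy:", clf.score(features, y))
```

In a real system the acquisition step would read from EEG hardware rather than simulate data, and accuracy would be estimated on held-out recordings rather than on the training set.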

Why does it matter?
According to Davide Valeriani, Post-doctoral Researcher in Brain-Computer Interfaces at the University of Essex, “The combination of humans and technology could be more powerful than artificial intelligence. For example, when we make decisions based on a combination of perception and reasoning, neurotechnologies could be used to improve our perception. This could help us in situations such as when seeing a very blurry image from a security camera and having to decide whether to intervene or not.” (source)

What are these brain-computer interfaces actually capable of?
For Zaza Zuilhof, it depends on who you ask and whether or not you are willing to undergo surgery. “For the purpose of this thought experiment, let’s assume that healthy people will only use non-invasive BCIs, which don’t require surgery. In that case, there are currently two main technologies, fMRI and EEG. The first requires a massive machine, but the second, with consumer headsets like Emotiv and Neurosky, has actually become available to a more general audience.” (source)

However, BCI can also be a promising interaction tool for healthy people, with potential applications in multimedia, VR, and video games, among many other fields.

Davide Valeriani said that “The EEG hardware is totally safe for the user, but records very noisy signals. Also, research labs have mainly focused on using it to understand the brain and to propose innovative applications without any follow-up in commercial products, so far… but it will change.

Musk’s company is the latest. Its “neural lace” technology involves implanting electrodes in the brain to measure signals. This would allow getting neural signals of much better quality than EEG — but it requires surgery. Recently, he stated that brain-computer interfaces are needed to confirm humans’ supremacy over artificial intelligence.” (source)

This technology is still dangerous! Indeed, we made computers, so we know exactly how they work and how to “modify” them. However, we didn’t make our brains, and we still don’t really know how they work, much less how to “invade” them safely and successfully. We’ve made great progress, but not enough yet.

How Your Brain Works Now, And What’s To Come
In simple terms, your brain is divided into two main sections:

  • The limbic system
  • The neocortex.

The limbic system is responsible for our primal urges, as well as those related to survival, such as eating and reproducing. Our neocortex is the most advanced area, and it’s responsible for logical functions that make us good at languages, technology, business, and philosophy.

The human brain contains about 86 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Every time we think, move, or feel, neurons are at work. Indeed, the brain generates a huge amount of neural activity. Basically, small electric signals that move from neuron to neuron do the work.

There are many signals that can be used for BCI. These signals can be divided into two categories:

  • Spikes
  • Field potentials

We can detect those signals, interpret them and use them to interact with a device.

According to Boris Reuderink, Machine Learning Consultant at Cortext, “One of the bigger problems in brain-computer interfaces is that the brain signals are weak and very variable. This is why it is difficult to train a classifier, and use it the next day, let alone use it on a different subject.” (source)

In order to insert the neural lace, a tiny needle containing the rolled-up mesh is placed inside the skull. The mesh is then injected and unfurls upon injection, encompassing the brain.

Artificial intelligence and machine learning have received great attention for the development of BCI applications to solve difficult problems in several domains, in particular the medical and robotics fields. AI/ML has since become the most efficient tool for BCI systems. (source)

Let’s try to elaborate on these aspects a bit more below. Each of these aspects has its own field of research.

Signal Production
There are two ways of producing these brain signals:

  • Actively
  • Passively

According to Sjoerd Lagarde, Software Engineer at Quintiq, “Actively generating signals has the advantage that signal detection is easier, since you have control over the stimuli; you know for example when they are presented. This is harder in the case where you are just reading brain-waves from the subject.”

Signal Detection
There are different ways to detect brain signals. The best known are EEG and fMRI, but there are others as well. EEG measures the electrical activity of the brain, fMRI the blood flow in the brain. Each of these methods has its own advantages and disadvantages. Some have better temporal resolution (they can detect brain activity as it happens), while others have better spatial resolution (they can pinpoint the location of the activity).

The idea remains largely the same for other types of measuring techniques.

Signal Processing
One of the issues we run into when dealing with brain data is that the data tends to contain a lot of noise. When using EEG, for example, things like grinding of the teeth will show up in the data, as will eye movements. This noise needs to be filtered out.
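As a rough illustration of that cleaning step (a sketch under assumed values, not a prescribed recipe), the code below band-pass filters epoched EEG and drops epochs whose peak-to-peak amplitude looks like an eye blink or muscle artifact.

```python
# Illustrative cleaning step: band-pass filter epoched EEG, then reject epochs
# whose peak-to-peak amplitude suggests an eye blink or muscle artifact.
# Sampling rate, band edges, and threshold are assumptions for this sketch.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # Hz, assumed

def bandpass(epochs, low=1.0, high=40.0):
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def reject_artifacts(epochs, threshold_uv=150.0):
    """Drop epochs in which any channel exceeds the peak-to-peak threshold."""
    ptp = epochs.max(axis=-1) - epochs.min(axis=-1)   # per epoch, per channel
    keep = (ptp < threshold_uv).all(axis=-1)          # every channel must be clean
    return epochs[keep], keep

# epochs: (n_epochs, n_channels, n_samples), amplitudes in microvolts
epochs = np.random.randn(50, 8, 2 * FS) * 20.0
clean, keep_mask = reject_artifacts(bandpass(epochs))
print(f"kept {clean.shape[0]} of {epochs.shape[0]} epochs")
```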

The data can now be used for detecting actual signals. When the subject is actively generating signals, we usually know what kind of signals we want to detect. One example is the P300 wave, a so-called event-related potential that shows up when an infrequent, task-relevant stimulus is presented. This wave appears as a large peak in your data, and you might try different techniques from machine learning to detect such peaks.
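As one possible illustration, a P300 experiment is often analysed by cutting the recording into epochs around each stimulus, comparing the average target and non-target responses, and training a classifier on a post-stimulus window. The sketch below does this on simulated data with an artificial “P300” bump; the window, sampling rate, and classifier choice are assumptions.

```python
# Illustrative P300 analysis: compare average target vs non-target epochs and
# classify single epochs from the mean amplitude in a post-stimulus window.
# The data here is simulated, with an artificial "P300" bump injected.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

FS = 250                                          # sampling rate (assumed)
window = slice(int(0.25 * FS), int(0.45 * FS))    # ~250-450 ms after the stimulus

rng = np.random.default_rng(1)
epochs = rng.standard_normal((200, 8, FS))        # (n_epochs, n_channels, n_samples)
labels = rng.integers(0, 2, size=200)             # 1 = target stimulus, 0 = non-target
epochs[labels == 1, :, window] += 1.0             # fake P300 bump, for the demo only

# The grand averages show the large peak described in the text
target_avg = epochs[labels == 1].mean(axis=0)
nontarget_avg = epochs[labels == 0].mean(axis=0)
print("peak average difference:", np.abs(target_avg - nontarget_avg).max())

# One machine-learning option among many: a linear classifier on window means
features = epochs[:, :, window].mean(axis=-1)
scores = cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```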

Signal Transduction
When you have detected the interesting signals in your data, you want to use them in some way that is helpful to someone. The subject could, for example, use the BCI to control a mouse by means of imagined movement. One problem you will encounter here is that you need to use the data you receive from the subject as efficiently as possible, while keeping in mind that BCIs can make mistakes. Current BCIs are relatively slow and make mistakes once in a while (for instance, the computer thinks you imagined left-hand movement, while in fact you imagined right-hand movement).” (source)
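One simple way to live with those mistakes, sketched below with assumed class names and an arbitrary confidence threshold, is to act on the classifier’s output only when it is confident enough and otherwise do nothing.

```python
# Illustrative "use the detected signal" step: turn classifier probabilities
# into cursor commands, and do nothing when the prediction is too uncertain,
# so that the occasional mistake costs less. Names and threshold are assumptions.
def decode_command(probabilities, classes=("left", "right"), threshold=0.8):
    """Return a cursor command, or None if the classifier is not confident."""
    best = max(range(len(classes)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return None                      # ambiguous: better to wait than to err
    return {"left": "move cursor left", "right": "move cursor right"}[classes[best]]

print(decode_command([0.55, 0.45]))      # None: ambiguous imagined movement
print(decode_command([0.05, 0.95]))      # "move cursor right"
```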

In the case of the Neural Lace, it integrates itself with the human brain. It creates a perfect symbiosis between human and machine.

These two sections work symbiotically with one another. An AI layer or third interface could lie on top of them, plugging us into a very new and advanced world and giving us the ability to stay on par with our AI robot friends.

This connection could give us access to increased memory storage, amazing machine learning capabilities and yes, telepathic-type communication with someone else without the need to speak.

“You have a machine extension of yourself in the form of your phone and your computer and all your applications . . . by far you have more power, more capability than the President of the United States had 30 years ago,” Elon Musk

Types of BCI
According to Amit Ray, Author of Compassionate Artificial Intelligence, “The most sophisticated BCIs are “bi-directional” BCIs (BBCIs), which can both record from and stimulate the nervous system.

Brain-computer interfaces can be classified into three main groups:

  • Invasive
  • Semi-invasive
  • Non-invasive

In invasive techniques, special devices have to be used to capture data (brain signals); these devices are inserted directly into the human brain through critical surgery. In semi-invasive techniques, devices are inserted into the skull, on top of the brain. In general, non-invasive devices are considered the safest and lowest-cost type. However, they can only capture “weaker” brain signals because of the obstruction of the skull. Here, the detection of brain signals is achieved through electrodes placed on the scalp.

There are several ways to develop a noninvasive brain-computer interface, such as EEG (electroencephalography), MEG (magnetoencephalography), or MRT (magnetic resonance tomography). An EEG-based brain-computer interface is the preferred type of BCI for study. EEG signals are processed and decoded into control signals that a computer or a robotic device can readily interpret. The processing and decoding operation is one of the most complicated phases of building a good-quality BCI. In fact, this task is so difficult that science institutions and software companies regularly organize competitions on EEG signal classification for BCI.

Convolutional Neural Network and BCI
CNN is a type of artificial neural network inspired by the visual cortex. It can learn the appropriate features from the input data automatically by optimizing the weight parameters of each filter through forward and backward propagation, in order to minimize the classification error.

The human auditory cortex is arranged in a hierarchical organization, similar to the visual cortex. In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system. Earlier regions, such as the primary visual cortex, react to simple features such as color or direction. Later stages enable more complex tasks such as object recognition.

One advantage of using deep learning techniques is that they require minimal pre-processing, since optimal settings are learned automatically. In CNNs, feature extraction and classification are integrated into a single structure and optimized automatically. Moreover, fNIRS time-series data from human subjects can be fed to a CNN. As the convolution is performed in a sliding-window manner, the feature-extraction process of the CNN retains the temporal information of the time-series data obtained by fNIRS.
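To make that concrete, below is a minimal 1D convolutional network in PyTorch for multichannel time-series (fNIRS or EEG) classification. The channel count, window length, and layer sizes are illustrative assumptions, not the architecture of any particular study.

```python
# A minimal 1D CNN for multichannel time-series (e.g. fNIRS or EEG) classification.
# Channel count, window length, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TimeSeriesCNN(nn.Module):
    def __init__(self, n_channels=8, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: filters slide over time, so the learned
            # features keep the temporal structure of the signal.
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, n_channels, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

# Forward and backward propagation adjust the filter weights to reduce the
# classification error, as described above.
model = TimeSeriesCNN()
x = torch.randn(4, 8, 500)                # 4 windows, 8 channels, 500 samples
labels = torch.tensor([0, 1, 0, 1])
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()
print(loss.item())
```

Because the filters slide along the time axis, the learned features keep the temporal structure of the signal, which is the property the paragraph above points out.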

However, one of the biggest issues in BCI research is the non-stationarity of brain signals. This issue makes it difficult for a classifier to find reliable patterns in the signals, resulting in poor classification performance.” (source)

How can you start learning about BCI from scratch?
Hosea Siu, Aerospace engineering PhD student, said that “For direct “brain” interfaces, you need a set of EEG electrodes, and for peripheral nervous system interfaces, you need EMG electrodes.

Once you can get that data into your computer, you’ll need to do some signal conditioning: things like filtering for the frequency of the signal you’re looking for, and filtering out environmental noise (60 Hz noise from electrical lines is common in the US…).
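A notch filter at the mains frequency is a typical first conditioning step; the sketch below assumes a 250 Hz sampling rate and US mains at 60 Hz.

```python
# Illustrative conditioning step: notch out 60 Hz mains interference
# (use 50 Hz outside the Americas). The sampling rate is an assumed value.
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 250          # sampling rate in Hz (assumed)
MAINS = 60.0      # powerline frequency in the US

b, a = iirnotch(w0=MAINS, Q=30.0, fs=FS)          # narrow notch at 60 Hz

t = np.arange(10 * FS) / FS
signal = np.random.randn(t.size) + np.sin(2 * np.pi * MAINS * t)  # noisy recording
conditioned = filtfilt(b, a, signal)
print("std before/after:", signal.std(), conditioned.std())
```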

After that, you need to think about what you’re actually trying to have the system do. Do you need it to detect a particular change in your EEG patterns when you think about the color blue? Or do you need it to detect a change in your EMG when you’re moving a finger? What about the computer? Should it run a program? Type some text?

Think about how you’re going to label your data. How will the computer know initially that a particular signal is meaningful?
This is supervised learning. Choose your preferred classification method, get lots of labeled data, and train your system. You can use methods like cross-validation to check if your trained models are doing what you think they’re supposed to.
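Below is a small sketch of that supervised-learning step, assuming you already have one feature vector and one label per trial. The label names, classifier, and train/test split are arbitrary choices, and cross-validation, as mentioned above, could be used instead of a single split.

```python
# Illustrative labelling and supervised training: each recorded trial is given a
# label from the experiment log, a classifier is trained on part of the data,
# and accuracy is checked on held-out trials. Names and shapes are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

LABEL_MAP = {"rest": 0, "think_of_blue": 1}        # how each trial was annotated

rng = np.random.default_rng(0)
features = rng.standard_normal((120, 16))          # one feature vector per trial
labels = np.array([LABEL_MAP["rest"], LABEL_MAP["think_of_blue"]] * 60)  # from the log

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```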

After all of this, you might have something that looks like a brain-computer interface.” (source)

Where can I find datasets for machine learning on brain-computer interfaces?

You can find several publicly available EEG datasets on the following websites:

  • Free EEG database (freely and publicly available ERP data)
  • Berlin Brain-Computer Interface

Recent advances in artificial intelligence and reinforcement learning, together with neural interfacing technology and the application of various signal processing methodologies, have enabled us to better understand and then utilize brain activity for interacting with computers and other devices.

For more information

  • https://www.theguardian.com/technology/2018/jan/01/elon-musk-neurotechnology-human-enhancement-brain-computer-interfaces
  • https://www.core77.com/posts/72957/When-Brain-Computer-Interfaces-Go-Mainstream-Will-Dystopian-Sci-Fi-Be-Our-Only-Guidance
  • http://www.brainvision.co.uk/blog/2014/04/the-brief-history-of-brain-computer-interfaces/
  • https://ieeexplore.ieee.org/document/8122608
  • https://observer.com/2017/04/elon-musk-wants-to-merge-man-and-machine-artificial-intelligence-eeg-neurotechnology/
  • https://medium.com/dxlab-design/how-will-brain-computer-interfaces-change-your-life-aa89b17c3325
  • https://team.inria.fr/potioc/bci-2/
  • https://pdfs.semanticscholar.org/5088/ab0900ef7d06023796f651f4ee5fa0fb36a0.pdf
  • https://www.quora.com/What-is-a-good-machine-learning-project-involving-brain-computer-interfaces
  • https://www.quora.com/How-do-current-brain-computer-interfaces-work
  • https://amitray.com/brain-computer-interface-compassionate-ai/
  • https://www.quora.com/How-can-I-start-learning-about-brain-computer-interface-from-scratch

#This article is reposted by the BCIduino brain-computer interface open-source community (WeChat official account “BCIduino脑机接口社区”). The BCIduino community was founded by master’s and doctoral students from Beihang University, Cornell University, Peking University, Capital Medical University, and other institutions. You are welcome to scan the code below to join the group (if the code has expired, add WeChat: cheitech and ask to join), and to purchase the BCIduino EEG module (search for it on Taobao).
