From: https://liudongdong1.github.io/

level: ACM Embedded Networked Sensor Systems, CCF_B
author: Yinggang Yu, Dong Wang, Run Zhao, Qian Zhang (Shanghai Jiao Tong University)
date: 2019.11
keyword:

  • RFID, wireless sensing, ongoing gesture recognition, adversarial learning

Paper: RFID ongoing Gesture


RFID-Based Real-time Recognition of Ongoing Gesture with Adversarial Learning

Summary

  1. The experiments use one reader and multiple tags. A CNN extracts phase and RSSI features separately, the two feature maps are concatenated, an LSTM turns the fused features into a per-step gesture probability distribution vector, and an SVM classifier then decides which gesture is being performed.

Research Objective

  • Application Area:

    • gesture-driven applications: as input for video games, the latency between the completion of a gesture and its recognition is a paramount and unavoidable issue
  • Purpose: fuse multimodal RFID data and extract spatio-temporal information to enable a general, pervasive, environment-independent, user-invariant, and real-time gesture-driven interactive system

Problem Statement

  • Existing RFID-based gesture recognition methods are sensitive to dynamic environments and degrade significantly under user diversity and environment variety.
  • Recent works on gesture detection are designed to recognize a gesture only after it has been completed, so latency remains.

previous work:

  • Gesture Recognition

    • wearable-sensor based: wrist-worn devices containing inertial sensors are utilized to recognize eating gestures [28], identify smoking gestures [27], translate sign language [39, 41], and identify fine-grained interactive gestures [12]; they require users to charge the devices. (iPhones will deploy UWB; what about achieving gesture recognition with UWB?)
    • camera based: Leap Motion or Kinect are used to build gesture recognition systems that translate sign language at both word and sentence levels with high accuracy [9] and recognize continuous sign language with weakly supervised learning [5]; drawbacks: privacy, light sensitivity, NLOS
    • wireless based: ultrasound, WiFi, RFID, and mmWave enable active gesture recognition [19, 20, 36], sign language translation [22], keystroke recognition [1], and limb-level gesture detection [3, 6, 29, 44]; these work well in relatively limited and controlled environments and achieve high accuracy on particular users, but performance may degrade in unstable or uncontrolled environments
  • Ongoing gesture detection:
    • [33] proposes a smart-watch-based early gesture detection technique
    • [14] uses deep learning techniques to predict twenty-five hand gestures online from videos
    • [15] designs computer-vision-based early event detectors that enable sign language translation and emotion recognition
  • Domain adversarial learning: learn robust representations that are discriminative on the source domain while invariant across domains [10, 32]
    • [42] combines CNN and RNN with adversarial learning to extract sleep-specific and subject-invariant features from RF signals to predict sleep stages
    • [24] utilizes multi-task optimization to reduce variance across speakers while keeping representations senone-discriminative
    • [31] builds a robust end-to-end speech recognition framework using a generative adversarial network in a data-driven way
    • Differing from the above, EUIGR takes the ongoing gesture into consideration, and the training objectives of the user domain, environment domain, and gesture domain undertake different tasks, namely classification and sequence labeling respectively ??

Methods

  • RFID Communication Model:

  • Define parameters notations:

  • system overview: ?? How does the discriminator network guide the prediction? Is it just a bidirectional LSTM plus the three loss functions defined below? I do not fully understand this part.

【Question 1】How to fuse multimodal RFID data?

  • Data Collector Module:

    • Unwrapping Phase:
    • Resampling: the tag reply sequence is not uniform in the time domain, because tags respond to the reader at random and packets are lost due to tag conflicts and noise. (How is the resampling frequency chosen?) A Hampel identifier is employed to filter out abnormal values in the stream before resampling (see the sketch after this list).
    • Constructing RFID Clips: the sliding window length is 20 and the sliding step is 10. (The paper does not justify this setting; how long does a gesture typically take, and what would happen if the window size were chosen according to the sampling frequency?)
    • one reader with n tags: the sliding-window data form the clip representation
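
A minimal sketch of this data-collector pipeline (Hampel outlier removal, uniform resampling, sliding-window clips). The Hampel window, the 20 Hz resampling rate, and all function names are my assumptions; only the 20/10 window/step values come from the notes above.

```python
import numpy as np
from scipy.interpolate import interp1d

def hampel(x, half_window=3, n_sigmas=3.0):
    """Replace outliers with the local median (Hampel identifier)."""
    x = x.copy()
    k = 1.4826  # scale factor turning MAD into a standard-deviation estimate
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        med = np.median(x[lo:hi])
        sigma = k * np.median(np.abs(x[lo:hi] - med))
        if sigma > 0 and abs(x[i] - med) > n_sigmas * sigma:
            x[i] = med
    return x

def resample_uniform(t, v, fs=20.0):
    """Linearly interpolate an irregular tag-reply stream onto a uniform time grid."""
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    return t_uniform, interp1d(t, v, kind="linear")(t_uniform)

def make_clips(stream, win=20, step=10):
    """Cut a (T, n_channels) stream of phase/RSSI values into overlapping RFID clips."""
    return np.stack([stream[s:s + win] for s in range(0, len(stream) - win + 1, step)])

# toy usage: one tag's phase readings with an artificial spike outlier
t = np.sort(np.random.uniform(0, 5, 120))
phase = np.unwrap(np.random.uniform(0, 2 * np.pi, 120))
phase[60] += 30
t_u, phase_u = resample_uniform(t, hampel(phase))
clips = make_clips(phase_u[:, None])   # shape: (n_clips, 20, 1)
```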

  • Feature Extractor Module: $F_R^{(t)} = FE(C^{(t)}; \theta_{fe})$, where $\theta_{fe}$ denotes the set of all parameters in the feature extractor FE. ?? How are the two branches merged and flattened? Is the 2-layer CNN filter a (1×d) 2D convolutional kernel? What are the details of the fully connected layer? Can I design the network from this description alone? (See the sketch below.)
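
Since the paper's description leaves the layer details open (hence the questions above), here is one plausible PyTorch sketch of a 2-layer CNN feature extractor with (1×d) kernels applied separately to the phase clip and the RSSI clip, whose flattened outputs are concatenated and fused by a fully connected layer. Every layer size and name here is a guess, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Hypothetical FE(C; theta_fe): fuse the phase and RSSI branches of one RFID clip."""
    def __init__(self, n_tags=8, win=20, d=3, feat_dim=128):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=(1, d), padding=(0, d // 2)), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=(1, d), padding=(0, d // 2)), nn.ReLU(),
                nn.Flatten(),
            )
        self.phase_branch, self.rssi_branch = branch(), branch()
        self.fc = nn.Linear(2 * 32 * n_tags * win, feat_dim)  # merge by concatenation

    def forward(self, phase, rssi):            # each: (batch, 1, n_tags, win)
        f = torch.cat([self.phase_branch(phase), self.rssi_branch(rssi)], dim=1)
        return self.fc(f)                       # F_R^{(t)}: (batch, feat_dim)

# toy usage
fe = FeatureExtractor()
f_r = fe(torch.randn(4, 1, 8, 20), torch.randn(4, 1, 8, 20))  # -> (4, 128)
```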

【Question 2】How to recognize a gesture before it is completed?

  • Gesture Sequence Labeler: model the temporal dependencies of the RFID input sequences using an RNN (LSTM)


$$
\begin{aligned}
i_t &= \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)\\
f_t &= \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)\\
\tilde{C}_t &= \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)\\
o_t &= \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)\\
C_t &= f_t * C_{t-1} + i_t * \tilde{C}_t\\
h_t &= o_t * \tanh(C_t)
\end{aligned}
$$

These are the basic equations of the LSTM cell ($\sigma$ is the sigmoid activation; see the NumPy sketch below).
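
As a sanity check on the reconstructed equations, one LSTM cell step in NumPy (standard formulation; the dimensions are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step; W maps the concatenated [h_{t-1}, x_t] to the four gates."""
    z = np.concatenate([h_prev, x_t])
    i, f, g, o = np.split(W @ z + b, 4)      # input, forget, candidate, output
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c_t = f * c_prev + i * np.tanh(g)        # C_t = f*C_{t-1} + i*~C_t
    h_t = o * np.tanh(c_t)                   # h_t = o * tanh(C_t)
    return h_t, c_t

# toy usage: hidden size 5, input size 3
H, D = 5, 3
W, b = np.random.randn(4 * H, H + D), np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_step(np.random.randn(D), h, c, W, b)
```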

  • the probability of the $k^{th}$ gesture: (I do not fully understand this formula; it gives the probability of a single gesture at each time step.)

$$G_P^{(t,k)} = \frac{\exp\big((h_t W_p + b_p)^{(k)}\big)}{\sum_{k'} \exp\big((h_t W_p + b_p)^{(k')}\big)}$$

  • Highly general: $\theta_{sl}$ is the parameter set of the gesture sequence labeler; $G_P^{(t)}$ is a K×1 prediction probability vector over all gestures

$$G_P^{(t)} = LSTM_g(F_R^{(t)}; \theta_{sl})$$

Loss function (I did not initially understand this formula; it is the cross-entropy between the predicted and ground-truth per-step gesture distributions, averaged over the N training sequences and the $T_n$ steps of each sequence):
$$L_g(\theta_{fe}, \theta_{sl}) = -\frac{1}{N}\sum_{n=1}^{N}\frac{1}{T_n}\sum_{t=1}^{T_n}\sum_{k=1}^{K} G_T^{(t,k)}\log G_P^{(t,k)}$$

  • An SVM classifier is used instead of a probability threshold to decide the gesture (see the sketch below).
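
A minimal sketch of replacing a probability threshold with an SVM: at every step the labeler emits a K-dimensional probability vector $G_P^{(t)}$, and an SVM trained on such vectors decides which gesture (if any) is ongoing. K, the synthetic training data, and the use of scikit-learn are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

K = 6  # number of gesture classes (assumed)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# pretend these are per-step probability vectors produced by the LSTM labeler,
# each labeled with the gesture that was actually ongoing at that step
train_probs = softmax(np.random.randn(500, K))
train_labels = np.random.randint(0, K, size=500)

svm = SVC(kernel="rbf", C=1.0)
svm.fit(train_probs, train_labels)

# at inference time, classify the ongoing gesture from the current step's vector
g_p_t = softmax(np.random.randn(1, K))
print("predicted gesture:", svm.predict(g_p_t)[0])
```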

【Question 3】How to extract environment- and user-invariant features? The data streams of the same gesture performed by diverse users in different positions may differ both spatially and temporally. The features generated by the feature extractor should be as unrelated to users and environments as possible, so two domain discriminators are applied: a user discriminator and an environment discriminator, which map the representations $F_R$ to user predictions and position predictions. User diversity and environment discrepancy are modeled with BLSTM classifiers.

BLSTM ensures the inference about user or position depends on the full sequence with a better capability of temporal modeling.

  • User discriminator: $U_p = BLSTM_u(F_R; \theta_{ud})$

    • Loss Function: $L_u(\theta_{fe}, \theta_{ud}) = -\frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{J} U_T^{(j)}\log U_p^{(j)}$
  • Environment discriminator: $E_p = BLSTM_e(F_R; \theta_{ed})$
    • Loss Function: $L_e(\theta_{fe}, \theta_{ed}) = -\frac{1}{N}\sum_{n=1}^{N}\sum_{j=1}^{J} E_T^{(j)}\log E_p^{(j)}$

Optimizing the parameters $\theta_{fe}$ and $\theta_{sl}$:

The features extracted from the same kind of gesture performed by different users or in different positions are required to follow the same distribution; the purpose of the two domain discriminators is therefore opposite to the final objective. ?? Is the BLSTM discovering relationships across the whole gesture sequence? I still do not fully understand this (see the gradient-reversal sketch below).
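
My reading of the adversarial setup (a sketch, not the paper's code): the two discriminators are trained to predict user/position from $F_R$, while the feature extractor is trained so that they fail; a gradient-reversal layer is a common way to wire this up, as below in PyTorch. The BLSTM discriminator, shapes, and domain counts are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda on the way back."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

class Discriminator(nn.Module):
    """BLSTM over the feature sequence, classifying the domain (user or position)."""
    def __init__(self, feat_dim=128, hidden=64, n_domains=10):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_domains)
    def forward(self, f_r):                      # f_r: (batch, T, feat_dim)
        out, _ = self.blstm(f_r)
        return self.head(out[:, -1])             # classify from the final step

user_disc = Discriminator(n_domains=8)           # 8 users (assumed)
ce = nn.CrossEntropyLoss()

f_r = torch.randn(4, 12, 128, requires_grad=True)    # stand-in for extractor output
user_labels = torch.randint(0, 8, (4,))

# L_u trains the discriminator; the reversed gradient pushes the extractor
# toward user-invariant features (overall objective combines L_g, L_u, L_e)
loss_u = ce(user_disc(grad_reverse(f_r)), user_labels)
loss_u.backward()
```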

Evaluation

  • General Performance:

  • Comparison to Other Methods:

  • Data Fusion Influence:

  • UI Model: training method: the User Invariant (UI) model is trained using the samples from all but one volunteer and tested using the samples from the remaining volunteer, repeating this process for every distinct volunteer. (Question: with this leave-one-out style training, are the held-out volunteer's samples ever mixed into training? The paper does not clearly describe how training and test samples are allocated.)

Feature distribution during UI model training:

  • EI Model: all samples are collected from gestures performed by several users at 12 positions over three weeks. Samples of two users from the same position belong to different user domains but to the same environment, and are regarded as performed by the same participant.

  • Real Time:

Notes (to study further)

  • RFID is low-cost, small-sized, and battery-free, so it is widely employed.
  • T-distributed stochastic neighbor embedding (t-SNE) visualization: a nonlinear dimensionality-reduction algorithm, well suited to reducing high-dimensional data to 2 or 3 dimensions for visualization. http://www.datakit.cn/blog/2017/02/05/t_sne_full.html
  • What are the advantages of a BLSTM network??
  • Hampel: applies a Hampel filter to the input vector x to detect and remove outliers. For each sample of x, the function computes the median of a window composed of the sample and its six surrounding samples (three on each side), and estimates the standard deviation of each sample about its window median using the median absolute deviation. If a sample differs from the median by more than three standard deviations, it is replaced with the median. If x is a matrix, hampel treats each column as an independent channel.
  • paper 22, 3, 6, 9, 24
  • LLRP [7]
  • RFID is widely used in activity recognition thanks to its stable low-level physical characteristics, such as phase and RSS, which intuitively delineate tag movements.
  • Gesture recognition builds a friendly and straightforward bridge between humans and computers compared with text-based and graphical user interfaces.

level: CCF_B IEEE International Conference on Sensing, Communication and Networking (SECON)
author: Shigeng Zhang, Chengwei Yang
date: 2016
keyword:


Paper: ReActor


ReActor: Real-time and Accurate Contactless Gesture Recognition with RFID

Summary

  1. Uses machine learning instead of DTW to distinguish different gestures, based on statistics of the signal profile that characterize coarse-grained features and wavelet (transform) coefficients of the signal profile that characterize fine-grained local features.

Problem Statement

previous work:

  • RF-Finger [12] uses 35 tags to classify different hand gestures with convolutional neural networks
  • Contact-based Gesture Recognition
    • uWave [20] uses a single three-axis accelerometer to recognize personalized gestures.
    • FEMD [21] uses the Kinect sensor to classify ten different gestures.
    • The Magic Ring [7] recognizes different gestures by attaching a ring to the user's finger.
    • Femo [14] recognizes the user's activity during body exercise and assesses the quality of exercise movements.
    • ShopMiner [17] and CBid [22] monitor customers' behaviors by attaching RFID tags to goods in a supermarket and recognizing different behavior patterns by tracing the motions of tags.
    • [23] combines Kinect-based activity recognition and RFID-based user identification to improve the quality of augmented reality applications.
    • [16] proposes an approach to detecting the user's coarse-grained gestures by attaching tags to goods, which supports online commenting on goods quality.
    • IDSense [24] enables smart interaction between the user and objects by developing an RFID-based activity detection system.
    • [15, 25] use deep learning to recognize users' body activities.
  • Contactless Gesture Recognition
    • WiGest [26] detects basic primitive gestures in a device-free manner.
    • E-eyes detects the user's activities at home based on channel state information (CSI).
    • WiFinger [18] detects fine-grained hand gestures based on CSI changes.
    • GRfid [11] uses dynamic time warping to match different gestures.

Methods

  • Environment:

  • system overview:

  • Signal preprocessing

    • phase unwrapping
    • phase ambiguity processing
    • signal smoothing using Savitzky-Golay (S-G) filter [32]
    • signal normalization (what is the purpose of this step? see the sketch after this list)
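
A minimal sketch of this preprocessing chain on one tag's phase stream (phase-ambiguity handling omitted; the S-G window/order and the zero-mean, unit-variance normalization are my guesses):

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(phase_raw):
    """Unwrap -> smooth (Savitzky-Golay) -> normalize to zero mean and unit variance."""
    phase = np.unwrap(phase_raw)                          # remove 2*pi jumps
    phase = savgol_filter(phase, window_length=11, polyorder=3)
    return (phase - phase.mean()) / (phase.std() + 1e-9)  # suppress background offset

clean = preprocess(np.random.uniform(0, 2 * np.pi, 200))
```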

  • Gesture Segmentation:

    • Varri method

  • Attribute Extract

    • Static Attributes:

      • the mode, median, first quartile, third quartile, and arithmetic mean, which reflect the central tendency of the data
      • the max, min, variance, standard deviation, and third-order central moment, which reflect the dispersion of the data
      • skewness and kurtosis, which reflect the distribution shape
    • Wavelet Decomposition Coefficient Attributes (see the sketch after this list):
      • Data interpolation: handles the non-uniform sampling issue
      • Wavelet coefficient calculation: uses the Daubechies wavelet as the wavelet base to decompose the signal profile of each gesture
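
A sketch of the two attribute families on one segmented gesture profile, using pywt with a Daubechies wavelet; the interpolation length and decomposition level are assumptions, not the paper's values.

```python
import numpy as np
import pywt
from scipy import stats
from scipy.interpolate import interp1d

def static_attributes(x):
    """Central tendency, dispersion, and shape statistics of one signal profile."""
    vals, counts = np.unique(np.round(x, 2), return_counts=True)
    mode = vals[counts.argmax()]                 # mode of the (rounded) values
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return [mode, med, q1, q3, x.mean(),
            x.max(), x.min(), x.var(), x.std(), stats.moment(x, moment=3),
            stats.skew(x), stats.kurtosis(x)]

def wavelet_attributes(x, n_points=128, wavelet="db4", level=3):
    """Interpolate to a fixed length, then take wavelet decomposition coefficients."""
    xi = interp1d(np.linspace(0, 1, len(x)), x)(np.linspace(0, 1, n_points))
    coeffs = pywt.wavedec(xi, wavelet, level=level)
    return np.concatenate(coeffs)

segment = np.random.randn(90)                    # one segmented gesture profile
features = np.concatenate([static_attributes(segment), wavelet_attributes(segment)])
```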

Evaluation

  • Environment:

    • Dataset:
  • the Gestures to Recognise

  • the Impact of Tag Number
  • Recognition Latency
  • Impact of Gesture Speed
  • Impact of Operation Distance

Conclusion

Notes 去加强了解

  • Varri method [29] segments different activities: it uses a sliding window that combines an amplitude measure and a frequency measure of the signal (see the sketch after this list)
  • Savitzky-Golay (S-G) filter: a method based on local polynomial least-squares fitting in the time domain; it preserves the shape and width of the raw signal while filtering out noise
  • Signal normalization: can magnify the signal changes caused by gestures and meanwhile suppress the impact of background signals by mapping them to values around zero
  • Learn what the wavelet transform does and why it is used
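
A rough sketch of my reading of the Varri-style measure (not the exact formula from [29]): per sliding window, compute an amplitude measure (sum of absolute values) and a frequency measure (sum of absolute first differences), and look for jumps in their weighted sum.

```python
import numpy as np

def varri_measure(x, win=32, step=16, w_amp=1.0, w_freq=1.0):
    """Per-window amplitude + frequency measure used to locate activity changes."""
    scores = []
    for s in range(0, len(x) - win + 1, step):
        seg = x[s:s + win]
        amp = np.sum(np.abs(seg))               # amplitude measure
        freq = np.sum(np.abs(np.diff(seg)))     # frequency (variation) measure
        scores.append(w_amp * amp + w_freq * freq)
    return np.array(scores)

signal = np.concatenate([0.05 * np.random.randn(200),      # idle
                         np.sin(np.linspace(0, 20, 200)),   # gesture
                         0.05 * np.random.randn(200)])      # idle
scores = varri_measure(signal)
boundaries = np.where(np.abs(np.diff(scores)) > scores.std())[0]
```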

level: CCF_C WCNC (IEEE Wireless Communications and Networking Conference)
author: Dong Wang (Shanghai Jiao Tong University)
date: 2018


Paper: SGRS


SGRS: A Sequential Gesture Recognition System using COTS RFID
  • system overview:

Paper1《RF-Based Fall Monitoring Using Convolutional Neural Networks》

cited: keyword: Fall Detection, Device-free, Deep learning

Phenomenon&Challenge:

  1. These revelations have led to new passive sensors that infer falls by analyzing Radio Frequency (RF) signals in homes.
  2. They typically train and test their classifiers on the same people in the same environments, and cannot generalize to new people or new environments
  3. they cannot separate motions from different people and can easily miss a fall in the presence of other motions.

RelatedWork:

  1. proposed systems that transmit a low power RF signal and analyze its reflections off people’s bodies to infer falls [4, 15, 35, 41, 45, 56, 58, 59].

  2. State-of-the-art RF-based fall detection systems can be divided into two categories: The first category is based on Doppler radar [15, 22, 45]. These solutions exploit the relationship between the Doppler frequency and motion velocity. They associate falls with a spike in the Doppler frequency due to a fast fall motion. The second category is based on WiFi channel state information (CSI) [41, 56, 58, 59]. While this category differs in its input signal, it typically relies on the same basic principle.

  3. convolutional neural networks (CNNs) [31], which have demonstrated the ability to extract complex patterns from various types of signals, such as images and videos [16, 20, 30, 51, 52, 57, 60, 62, 63].

  4. Wearable devices include accelerometers [12, 33], smart phones [1, 12], RFID [10], etc. Among non-wearable technologies, cameras [32, 39] are accurate but invade people's privacy; audio and vibration based sensors [5, 34] have relatively low accuracy due to interference from the environment [38]; pressure mats and pulling cords work only when the fall occurs near the installed device [37].

  5. past papers on RF-based fall monitoring [15, 35, 41, 45, 56, 58, 59]

  6. Convolutional Neural Networks (CNN) have been the main workhorse of recent breakthroughs in understanding images [30], videos [28, 55] and audio signals [7, 53]: object detection [30], image segmentation [43], speech synthesis [53], machine translation [26], and AlphaGo [47].

Contribution:

  1. Dealing with complex falls and fast non-fall movements
  2. Generalization to new homes and new people:
  3. Detect falls in the presence of other motion
  4. first convolutional neural network architecture for RF-based fall detection. Our CNN design extracts complex spatio-temporal information about body motion from RF signals. As a result, it can characterize complex falls and fast non-fall motions, separate a fall from other motions in the environment, and generalize to new environments and people.
  5. multi-function design that combines fall detection with the ability to infer stand-up events and fall duration
  6. an extensive empirical evaluation with multiple sources of motion

Innovation&consolution:

  1. an RF-based fall detection system that uses convolutional neural networks governed by a state machine
  2. works with new people and environments unseen in the training set
  3. FMCW can separate RF reflections based on distance from the reflecting body, and the vertical and horizontal arrays separate reflections based on their elevation and azimuthal angles

Chart&Analyse:

  1. Combine two CNNs: the first detects a fall event while the second detects a stand-up event. The two networks are coordinated via a state machine that tracks the transition of a person from a normal state to a fall state, and potentially back to a normal state (see the sketch below).
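
A toy sketch of coordinating two detectors with a state machine; the CNNs are abstracted as boolean callables and the states are my summary of the description above, not the paper's implementation.

```python
class FallStateMachine:
    """Tracks NORMAL -> FALL (fall detector fires) -> NORMAL (stand-up detector fires)."""
    def __init__(self, fall_detector, standup_detector):
        self.fall_detector = fall_detector        # callable: RF window -> bool
        self.standup_detector = standup_detector  # callable: RF window -> bool
        self.state = "NORMAL"
        self.fall_duration = 0

    def step(self, rf_window):
        if self.state == "NORMAL" and self.fall_detector(rf_window):
            self.state, self.fall_duration = "FALL", 0
        elif self.state == "FALL":
            self.fall_duration += 1               # windows spent in the fall state
            if self.standup_detector(rf_window):
                self.state = "NORMAL"
        return self.state, self.fall_duration

# toy usage with dummy detectors
sm = FallStateMachine(lambda w: w > 0.9, lambda w: w < 0.1)
for w in [0.2, 0.95, 0.5, 0.5, 0.05]:
    print(sm.step(w))
```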

Paper6《TagFree Activity Identifification with RFIDs》

cited: keyword: Network mobility; Sensor networks

Phenomenon&Challenge:

  1. the accuracy of the readings can be noticeably affected by multipath, which unfortunately is inevitable in an indoor environment and is complicated with multiple reference tags.
  2. human activity identification has become a key service in many IoT applications, such as healthcare and smart homes [1].
  3. the peak amplitudes may dramatically change in a short time, which could be filtered out as noises for activity identification.

RelatedWork:

  1. TagFree can further facilitate various smart home applications, e.g., activity-based temperature adjustment in homes or exercise assistant equipment in gyms.

  2. Unfortunately, the activity information inferred from the raw RSSI can be quite unreliable and inaccurate for small movement.

  3. Radio Frequency Identification (RFID) is a promising technology due to its low cost, small form size, and batterylessness, making it widely used in a range of mobile applications, including detection of human-object interaction [25], people/object tracking [31], and more complex activity identification.

  4. Previous solutions exploited the changes of RSSI (received signal strength indicator) [35][5][19] incurred by human actions. Yet RSSI is insensitive to small body movement, and thus it is difficult to achieve high-precision identification.

  5. LSTM networks have been successfully applied to many tasks such as handwriting [9] and speech recognition [10].

  6. RF-compass [24] presented a WiFi-based approach to classify a predefined set of nine gestures; E-eyes [28] proposed a location-oriented activity identification system, which utilized WiFi signals to recognize in-home human activities; Ding et al. [6] further developed FEMO, which uses the frequency shifts of the movements to determine what exercise a user is performing.

Innovation&consolution:

  1. TagFree gathers massive angle information as spectrum frames from multiple tags, and preprocesses them to extract key features. It then analyzes their patterns through a deep learning framework (Convolutional Neural Network (CNN) [15] and Long Short-Term Memory (LSTM) network [13]).
  2. Our experiments suggest that both the backscattered signal power and angle are highly related to human activities, impacting multiple paths with different levels.
  3. DataProcessing
    1. Phase calibration: different frequencies induce different initial phase offsets at the reader, so a mechanism is designed to calibrate the phase difference between frequencies.
    2. Multipath decoupling: ??? In practice the AoA estimation may not work well because of the multipath effect. The M highest peaks have the greatest power [20] and correspond to the estimated directions of arrival of the signal source, with angles θ1, ..., θM (see the sketch below).
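
The AoA estimation here points to MUSIC ("Multiple emitter location and signal parameter estimation" in the reading list below). A generic MUSIC pseudo-spectrum sketch for a uniform linear array, with illustrative parameters rather than TagFree's actual configuration:

```python
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5, angles=np.linspace(-90, 90, 181)):
    """X: (n_antennas, n_snapshots) complex samples. Returns the pseudo-spectrum over angles."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    En = eigvecs[:, : M - n_sources]             # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))
        spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)

# toy usage: one source at ~30 degrees seen by a 4-antenna array
M, N, theta0 = 4, 200, np.deg2rad(30)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(theta0))
X = np.outer(a0, np.random.randn(N)) + 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
angles, P = music_spectrum(X, n_sources=1)
print("estimated AoA:", angles[P.argmax()], "degrees")
```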

Chart&Analyse:

Shortcoming:

  1. Multiple emitter location and signal parameter estimation (paper to read)
  2. A novel connectionist system for unconstrained handwriting recognition
  3. Speech recognition with deep recurrent neural networks
  4. A platform for free-weight exercise monitoring with RFIDs
  5. Robot object manipulation using RFIDs
  6. Device-free location-oriented activity identification using fine-grained WiFi signatures
  7. Beyond short snippets: Deep networks for video classification
  8. RF-IDraw: virtual touch screen in the air using RF signals
  9. Relative Localization of RFID Tags using Spatial-Temporal Phase Profiling
  10. Deep Learning for RFID-Based Activity Recognition
  11. 22,25,6

Paper7《Through-Wall Human Pose Estimation Using Radio Signals》

Phenomenon&Challenge:

  1. infeasible when the person is fully occluded, behind a wall or in a different room.
  2. In particular, there is no labeled data for this task. It is also infeasible for humans to annotate radio signals with keypoints.
  3. some body parts may not reflect much RF signals towards our sensor, and hence may be de-emphasized or missing in some heatmaps, even though they are not occluded

RelatedWork:

  1. Computer Vision: Human pose estimation from RGB images generally falls into two main categories: top-down and bottom-up methods. Top-down methods [16, 14, 29, 15] first detect each person in the image and then apply a single-person pose estimator to each person to extract keypoints. Bottom-up methods [10, 31, 20], on the other hand, first detect all keypoints in the image and then use post-processing to associate the keypoints belonging to the same person.
  2. Wireless Systems: Recent years have witnessed much interest in localizing people and tracking their motion using wireless signals. The literature can be classified into two categories. The first category operates at very high frequencies (e.g., millimeter wave or terahertz) [3]. The second category uses lower frequencies, around a few GHz, and hence can track people through walls and occlusions.

Contribution:

  1. the system uses synchronized wireless and visual inputs, extracts pose information from the visual stream, and uses it to guide the training process.
  2. We first perform non-maximum suppression on the keypoint confidence maps to obtain discrete peaks of keypoint candidates. To associate keypoints of different persons, we use the relaxation method proposed by Cao et al. [10] and use Euclidean distance for the weight of two candidates.

Innovation&consolution:

  1. we make the network learn to aggregate information from multiple snapshots of RF heatmaps so that it can capture different limbs and model the dynamics of body movement.

Chart&Analyse:

Shortcoming&confusing:

  1. First, the human body is opaque at the frequencies of interest – i.e., frequencies that traverse walls

  2. the operating distance of a radio is dependent on its transmission power

  3. Limited activity recognition

  4. Papers to read:

  5. Realtime multi-person 2D pose estimation using part affinity fields

  6. 3D convolutional neural networks for human action recognition

  7. Learning spatiotemporal features with 3D convolutional networks

  8. Temporal segment networks: Towards good practices for deep action recognition

  9. Microsoft COCO: Common objects in context

Paper8《Compressive Representation for Device-Free Activity Recognition with Passive RFID Signal Strength》

Chart&Analyse:

Paper《Sharing the Load: Human-Robot Team Lifting Using Muscle Activity》

RelatedWork:

  1. Human-Robot Interaction using modalities such as vision, speech, force sensors, and gesture tracking datagloves [8], [9], [10], [11], [12].
  2. Using Muscle Signals for Robot Control: EMG can yield effective human-robot interfaces, but also demonstrates associated challenges such as noise, variance between users, and complex muscle dynamics.

Contribution:

  1. an algorithm to continuously estimate a lifting setpoint from biceps activity, roughly matching a person’s hand height while also providing a closed-loop control interface for quickly commanding coarse adjustments;
  2. a plug-and-play rolling classifier for detecting up or down gestures from biceps and triceps activity
  3. an end-to-end system integrating these pipelines to collaboratively lift objects with a robot using only muscle activity associated with the task;

Innovation&consolution:

  1. The setpoint algorithm aims to estimate changes in the person’s hand height while also creating a task-based control interface.

Chart&Analyse:

Shortcoming&Question:

  1. Don't understand the setpoint algorithm.
  2. After feature extraction, what do I get? How are up/down gestures detected from muscle activity???

Paper《Emotion Recognition using Wireless Signals》

Phenomenon&Challenge:

  1. Emotion recognition is an emerging field that has attracted much interest from both the industry and the research community [52, 16, 30, 47, 23].

  2. measure inner feelings [14, 48, 21]

  3. Recent research has shown that such RF reflections can be used to measure a person's breathing and average heart rate without body contact [7, 19, 25, 45, 31].

  4. RF signals reflected off a person’s body are modulated by both breathing and heartbeats.

  5. heartbeats in the RF signal lack the sharp peaks which characterize the ECG signal, making it harder to accurately identify beat boundaries

  6. the difference in inter-beat intervals (IBI) is only a few tens of milliseconds.

  7. the shape of a heartbeat in RF reflections is unknown and varies depending on the person's body and exact posture with respect to the device

RelatedWork:

  1. Existing approaches for inferring a person's emotions either rely on audiovisual cues, such as images and audio clips [64, 30, 54], or require the person to wear physiological sensors like an ECG monitor [28, 48, 34, 8].
  2. Emotion Recognition: first, they extract emotion-related signals (e.g., audio-visual cues or physiological signals); second, they feed these signals into a classifier in order to recognize emotions. Existing approaches for extracting emotion-related signals fall under two categories: audiovisual techniques and physiological techniques.
  3. RF-based Sensing: transmit an RF signal and analyze its reflections to track user locations [5], gestures [6, 50, 56, 10, 61, 3], activities [59, 60], and vital signs [7, 19, 20].
  4. Past work that does not require users to hold their breath has an average error of 30-50 milliseconds [13, 40, 27].

Contribution:

  1. demonstrates the feasibility of emotion recognition using RF reflections off one's body.
  2. introduces a new algorithm for extracting individual heartbeats from RF reflections off the human body.
  3. Mitigating the Impact of Breathing && Heartbeat Segmentation
  4. EMOTION CLASSIFICATION: Feature Selection and Classification

Innovation&consolution:

Chart&Analyse:

Code:

Shortcoming&Question:

  1. Skipped the evaluation section without reading it in detail.

  2. What are IBI features???

  3. Papers to read: 1-norm support vector machines (Advances in Neural Information Processing Systems)

  4. Machine emotional intelligence: Analysis of affective physiological state

  5. Comparison of detrended fluctuation analysis and spectral analysis for heart rate variability in sleep and sleep apnea, 2003

  6. Sample entropy analysis of neonatal heart rate variability, 2002

  7. Emotion recognition based on physiological changes in music listening, 2008

  8. Physiological signals based human emotion recognition: a review, 2011

  9. An introduction to variable and feature selection, 2003

Paper《Interacting with Soli: Exploring Fine-Grained Dynamic Gesture Recognition in the Radio-Frequency Spectrum》

cited: keyword: gesture recognition; wearables; deep learning; radar sensing

Phenomenon&Challenge:

  1. Sensing in the electro-magnetic spectrum eschews spatial information for temporal resolution. Capturing a superposition of reflected energy from multiple parts of the hand, such as the palm or fingertips, the signal is therefore not directly suitable to reconstruct the spatial structure or the shape of objects in front of the sensor.

RelatedWork:

  1. Google Soli resolves motion at a very fine level and allows for segmentation in range and velocity spaces rather than image space.

Contribution:

  1. a novel end-to-end trained stack of convolutional and recurrent neural networks (CNN/RNN) for RF signal based dynamic gesture recognition
  2. an in-depth analysis of sensor signal properties and highlight inherent issues in traditional frame-level approaches

Chart&Analyse:

Paper《Compressive Representation for Device-Free Activity Recognition with Passive RFID Signal Strength》

cited: keyword: —Activity recognition, RFID, compressive sensing

Phenomenon&Challenge:

  1. RSSI is quite complicated in real environments due to signal reflection, diffraction, and scattering, especially for passive RFID tags.

RelatedWork:

  1. Many efforts have been made to learn human activities by mining a broad range of signal sources, such as videos and images [6], radio frequency of wearable or wireless sensors [7], [8], Wi-Fi [9], and even object vibration fluctuations [10].
  2. Fall Detection
  3. Sleep Monitoring
  4. Ambulatory Monitoring: posture recognition and monitoring are critical in medical care.

Contribution:

  1. The system interprets what a person is doing by deciphering signal fluctuations using radio-frequency identification (RFID) technology and machine learning algorithms.

  2. Compressive sensing: a dictionary-based approach that can learn a set of compact and informative dictionaries of activities using an unsupervised subspace decomposition.

  3. Propose a dictionary learning approach to uncover the structural information between RSSI signals of different activities by learning compact and discriminative dictionaries per activity.

  4. Model each predefined human activity by learning discriminative dictionaries and their corresponding sparse coefficients using features extracted and selected from raw RSSI streams.

  5. Develop a compressive sensing dictionary-based learning approach to uncover structural information among RFID signals of different activities.

  6. Propose a lightweight but effective feature selection method to assist the extraction of more discriminative signal patterns from noisy RFID streams.

Chart&Analyse:

  1. The variations of signal strength reflect different patterns, which can be exploited to distinguish different activities.

  2. I did not fully understand how the system framework processes the data: individual segments are handled with a sliding window, but it is unclear how the sliding window can segment the stream when the start point of an activity is unknown.

Shortcoming&Confusion:

  1. Sparse coding is a common technique to model data vectors as sparse linear combinations (i.e., sparse representations) of basis elements, and has been widely used in image processing and computer vision applications [23], [24], [25]. I don't know sparse coding yet (see the sketch below).
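
Since sparse coding is flagged as unfamiliar, a tiny scikit-learn illustration of the idea (learn a dictionary D so that each signal x is approximated by D times a sparse code); this is generic dictionary learning, not the paper's per-activity dictionary design.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# pretend each row is an RSSI feature vector from one activity window
X = np.random.randn(200, 40)

dl = DictionaryLearning(n_components=20, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, random_state=0)
codes = dl.fit_transform(X)          # sparse coefficients alpha, shape (200, 20)
D = dl.components_                   # learned dictionary atoms, shape (20, 40)

x_hat = codes @ D                    # sparse reconstruction of the signals
print("nonzeros per code:", (codes != 0).sum(axis=1).mean())
```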

Paper《RF-Based 3D Skeletons》

cited: keyword: RF Sensing, 3D Human Pose Estimation, Machine Learning, Neural Networks, Localization, Smart Homes

Phenomenon&Challenge:

  1. images have high spatial resolution whereas RF signals have low spatial resolution, even when using multi-antenna systems
  2. only a few body parts are visible to the radio [1]
  3. Existing datasets for inferring 3D poses from images are limited to one environment or one person (e.g., Human3.6M [7])

RelatedWork:

  1. Novel algorithms have led to accurate localization within tens of centimeters [19, 34]. Advanced sensing technologies have enabled people tracking based on the RF signals that bounce off their bodies, even when they do not carry any wireless transmitters [2, 17, 35]. Various papers have developed classifiers that use RF reflections to detect actions like falling, walking, sitting, etc. [21, 23, 32].

  2. **Wireless Systems:** Different papers localize the people in the environment [2, 17], monitor their walking speed [15, 31], track their chest motion to extract breathing and heartbeats [3, 39, 41], or track arm motion to identify a particular gesture [21, 23].

  3. **Computer Vision:** 2D pose estimation has achieved remarkable success recently [6, 8, 11, 13, 16, 22, 33]. Advances in 3D human pose estimation remain limited due to the difficulty and ambiguity of recovering 3D information from 2D images.

Innovation&consolution:

  1. RF-Pose3D provides a significant leap in RF-based sensing and enables new applications in gaming, healthcare, and smart homes
  2. model the relationship between the observed radio waves and the human body, as well as the constraints on the location and movement of different body parts.
  3. Sensing the 3D Skeleton: common deep learning platforms (e.g., Pytorch, Tensorflow) do not support 4D CNNs, so we leverage the properties of RF signals to decompose 4D convolutions into a combination of 3D convolutions performed on two planes and the time axis. We also decompose CNN training and inference to operate on those two planes.
  4. Scaling to Multiple People: one could run past localization algorithms, locate each person in the scene, and zoom in on signals from that location. The drawbacks of such an approach are: 1) localization errors will lead to errors in skeleton estimation, and 2) multipath effects can create fictitious people. Instead of zooming in on people in the physical space, the network first transforms the RF signal into an abstract domain that condenses the relevant information, then separates the information pertaining to different individuals in the abstract domain.
  5. Testing: given an image of people, identify the pixels that correspond to their keypoints [6]. A coordinated system of 12 cameras is developed: 2D skeletons are collected from each camera, and an optimization problem based on multi-view geometry finds the 3D location of each keypoint of each person.

Chart&Analyse:

Shortcoming&Confusion:

  1. openpose[6]
  2. I only understand the general idea; the concrete design details are unclear, so this is far from enough to reproduce the system.

Paper《RF-Dial: an RFID-based 2D Human-Computer Interaction via Tag Array 》

Contribution:

  1. Propose a novel scheme of 2D human-computer interaction by attaching a tag array to the surface of an ordinary object, thus turning it into an intelligent HCI device.

  2. To track the rigid transformation, including translation and rotation, we build a geometric model to depict the relationship between the phase variations of the tag array and the rigid transformation of the tagged object. By referring to the fixed topology of at least two tags from the tag array, we are able to accurately estimate the 2D rigid-body motion of the object.

  3. Implemented a prototype system of RF-Dial with COTS RFID and evaluated its performance in a real environment.

Innovation&consolution:


Chart&Analyse:

Paper《Multi-Target Intense Human Motion Analysis and Detection Using Channel State Information》

lab:Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, School of Software,

meeting: In Proceedings of the 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), 12-15 October 2017.

cited: keyword:

Phenomenon&Challenge:

  1. intense human motion usually has the characteristics of intensity, rapid change, irregularity, large amplitude, and continuity

RelatedWork:

  1. Camera-based human motion detection: crowd counting, gesture recognition [3], target tracking [4], violence detection
  2. Wi-Fi-based passive human detection: many research studies have realized passive human detection by leveraging the variance of the Received Signal Strength Indicator (RSSI) at the receiver [8-11].
  3. Wi-Fi-based activity recognition

Contribution:

  1. Find the pattern of the relationship between human motion and CSI variation. Then extract features from CSI to depict different human motions, and use machine learning methods to detect intense human motion among human activities.
  2. Analyze the signal variation difference under LOS and NLOS conditions, and then identify the current wireless link status.
  3. A human motion detection system that can be deployed on Wi-Fi APs.

Innovation&consolution:


Chart&Analyse:

Code:

Shortcoming&Confusion:

  1. Position-independent indicator:
  2. Multiple targets settings:
  3. People counting:
  4. Training-free human motion detection

Paper《Enabling Contactless Detection of Moving Humans with Dynamic Speeds Using CSI》

RelatedWork:

  1. RSS-based detection
  2. CSI-based detection
  3. Detection as prerequisite.

Contribution:

  1. Passive human detection leveraging the full information of CSI.
  2. A novel unified feature using the eigenvalues of the correlation matrix of CSI (see the sketch after this list). The feature holds excellent properties for device-free detection due to its stability for both amplitude and phase and its irrelevance to specific power parameters that vary over different links and over time and space.
  3. Space diversity provided by multi-antennas supported by modern MIMO communication systems enables more accurate and robust detection.
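
My rough reading of the proposed feature, sketched for one CSI window (subcarriers by packets); the normalization and the window size are assumptions, not the paper's exact definition.

```python
import numpy as np

def csi_eigen_feature(csi_window):
    """csi_window: (n_subcarriers, n_packets) complex CSI.
    Returns the largest eigenvalue of the sample correlation matrix of the window."""
    H = csi_window / (np.abs(csi_window).mean() + 1e-12)   # crude power normalization
    R = H @ H.conj().T / H.shape[1]                         # correlation matrix
    return float(np.linalg.eigvalsh(R).max())

# toy comparison: nearly static channel vs. strongly varying channel
static_csi = np.exp(1j * 0.1 * np.random.randn(30, 100))
moving_csi = np.exp(1j * np.cumsum(np.random.randn(30, 100), axis=1))
print(csi_eigen_feature(static_csi), csi_eigen_feature(moving_csi))
```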

Innovation&consolution:

  1. Using a linear transformation on the phase information of CSI, we apply phase differences across antennas as a new feature.

Chart&Analyse:

Paper《HuAc: Human Activity Recognition Using Crowdsourced WiFi Signals and Skeleton Data》

RelatedWork:

  1. Kinect-based activity recognition: skeleton joints overlapping and position-dependence factors.
  2. WiFi-based activity recognition: the WiFall system [2] detects fall behavior by learning the specific CSI pattern. E-eyes [9] recognizes walking activity and in-place activity by adopting the moving variance of CSI and a fingerprint technique. CARM [10] shows the correlation between CSI values and human activity by constructing CSI-speed and CSI-activity models.

Contribution:

  1. We propose the HuAc system to recognize human activity and also construct a WiFi-based activity recognition dataset named WiAR as a benchmark to evaluate the performance of existing activity recognition systems. We use the kNN, Random Forest, and Decision Tree algorithms to verify the effectiveness of the WiAR dataset.
  2. We detect the start and end of the activity using the moving variance of CSI. Moreover, we leverage
