Features, Data, Text Processing

1. Features

  1. Examples of Features

    e.g. Home Type, Marital Status, Income Level

  2. Properties of Features
Distinctness:
  • = ≠
Order:
  • < > ≤ ≥
Meaningful differences:
  • + -
(e.g. 08 Oct 2018 is three days after 05 Oct 2018)
Meaningful ratios:
  • × ÷
(e.g. Tom (18 years) is twice as old as John (9 years))
  3. Types of Features
Nominal (Categorical Qualitative):
Any permutation of data
  • Property: Distinctness
e.g. gender, eye colour, postal codes
Ordinal (Categorical Qualitative):
An order preserving change of values. i.e., new_value = f(old_value), where f is a monotonic function
  • Properties: Distinctness & Ordered
e.g. school level (primary/secondary), grades
Interval (Numeric Quantitative) [addition/subtraction]:
new_value = a*old_value + b, where a and b are constants
  • Properties: Distinctness & Ordered & Meaningful differences
e.g. calendar dates, temperatures (Celsius or Fahrenheit)
Ratio (Numeric Quantitative) [multiplication/division]:
new_value = a*old_value
  • Properties: Distinctness & ordered & meaningful differences/ratios
e.g. length, time, counts
  4. Discrete vs. Continuous

Nominal, Ordinal, Interval and Ratio features can be represented by discrete or continuous values

Discrete Values (including Binary):
  • Finite or countable set of values
  • Typically represented as integers
e.g. course ID, postal codes
Continuous Values
  • Real values
  • Typically represented as floats
e.g. temperature, weight, height
Feature        | Binary, Discrete, or Continuous?                        | Nominal, Ordinal, Interval, Ratio?
Postal code    | Discrete                                                | Nominal
Gender         | Binary                                                  | Nominal
Height/Weight  | Continuous                                              | Ratio
Student ID     | Discrete                                                | Nominal; Ordinal (if IDs assigned in sequence)
Grading system | Binary (P/F), Discrete (A+, …, F), Continuous (scores)  | Ordinal; Ratio (scores)
Date           | Discrete (MM/YY), Continuous (time)                     | Interval

2. Data

  1. Dataset Characteristics
Dimensionality (no. of features)
  • Challenges of high-dimensional data, “Curse of dimensionality”
Sparsity
  • Advantage for computation time and space (only non-zero values need to be stored/processed)
e.g. In bag-of-words, most words will be zero (not used)
[Bag-of-words: disregarding grammar and even word order, but keeping multiplicity – literally a "bag" of words]
Resolution
  • Patterns depend on the scale
e.g. Travel patterns on scale of hours, days, weeks
  2. Possible Issues with the Dataset
  • A low-quality dataset/features leads to a poor model
    e.g. A classifier built with poor data/features may incorrectly diagnose a patient as being sick when he/she is not
  • Possible issues with the dataset/features:
Noise
  • For features, noise refers to random error/variance in original values

e.g. Recording of a concert with background noise
e.g. Check-in data on social media with GPS errors

Outliers
  • Anomalous objects:
    Observations with characteristics that are considerably different than most other observations in the data set
  • Anomalous Values:
    Feature values that are unusual with respect to typical values for that feature
Noise VS Outliers
  • Noise: Due to random errors/variance in the data collection/measurement process --> we want to remove them (noise reduction/removal)

    e.g. blurry images, noisy recordings

  • Outlier: Due to interesting events, which may have good/bad consequences --> we want to identify/detect them (anomaly detection)

    e.g. sudden increase in web traffic, large and odd online purchases

Missing values
  • Reasons for missing values

    • Incomplete data collection

      e.g. People not providing annual income

    • Features not applicable to certain observations

      e.g. Annual income not applicable to children

  • Types of missing values

    • Missing Completely at Random (MCAR)
      Missing values are a completely random subset

      e.g. Data collection/survey is randomly lost.

    • Missing at Random (MAR)
      Missing values related to some other features

      e.g. Older adults not providing annual income

    • Missing Not at Random (MNAR)
      Missing values related to unobserved features

      e.g. Missing income that depends on age, when age itself was not collected

  • What to do with missing values?

    • Eliminate observations or variables
      Okay for MCAR values, but may not be okay for MAR and MNAR values
      Note: we need to understand the effects of this elimination!

    • Estimate missing values

      e.g. Using averages in a time series or spatial data

    • Ignore the missing value during analysis

      e.g. KNN computing distances using only the features that have values
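A small pandas sketch of the first two options (the toy columns and values are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "age":    [25, 40, None, 31, 58],
    "income": [3000, None, 5200, None, 8000],
})

# Option 1: eliminate observations with missing values (okay for MCAR)
dropped = df.dropna()

# Option 2: estimate (impute) missing values, e.g. with the column mean
imputed = df.fillna(df.mean(numeric_only=True))

print(dropped, imputed, sep="\n\n")
```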

Duplicate data

Data set may include data objects that are duplicates, or almost duplicates of one another --> Major issue when merging data from heterogeneous sources.

e.g. Same person with multiple email addresses

  • Data cleaning
    The process of dealing with duplicate data issues
Wrong/Inconsistent data

Features may contain wrong or inconsistent values

e.g. User-provided street name and postal code not matching
e.g. Negative values for weight, height, age, etc.

  • Ways to overcome wrong/inconsistent data

    • More stringent data collection

      e.g. drop-down list for specific data input

    • Detect potentially wrong data values

      e.g. allowable range for specific features

    • Correction of wrong/inconsistent values

      e.g. correct postal code based on block number and street name

  3. Classification Problem and Dataset

Consider a dataset with the issues of noise, outliers, duplicate observations, missing values and wrong/inconsistent data.
What would be the possible problems of applying the k-nearest neighbors (KNN) algorithm to this dataset? [KNN assigns an observation to the majority class label of its k nearest neighbors.]

  • Noise/Outliers:
    If the k value is too small, KNN may be overly sensitive to noise/outliers
  • Duplicate observations:
    The k nearest neighbors may all be duplicates
  • Missing/wrong/inconsistent:
    Distance measure may be inaccurate
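For reference, a minimal sketch of KNN with majority voting (Euclidean distance assumed), which makes it easy to see how noisy neighbors or duplicates could dominate the vote:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from x to every training observation
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest neighbors
    nearest = np.argsort(dists)[:k]
    # Majority vote over their class labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[1.0, 1.0], [1.1, 0.9], [5.0, 5.0], [5.2, 4.8]])
y_train = np.array(["A", "A", "B", "B"])
print(knn_predict(X_train, y_train, np.array([1.2, 1.0]), k=3))  # -> "A"
```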
  4. Data Processing
Aggregation

Combining two or more features (or observations) into a single feature (or observation)

  • Purpose

    • Data reduction
      Reduce the no. of features or observations
    • Change of scale
      e.g. Cities aggregated into regions, states, countries, etc.
      e.g. Days aggregated into weeks, months, or years.
    • More “stable” data
      Aggregated data tends to have less variability
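A small pandas sketch of aggregation as a change of scale, rolling invented daily sales up to months:

```python
import pandas as pd
import numpy as np

# Daily observations (synthetic data for illustration)
daily = pd.DataFrame({
    "date":  pd.date_range("2018-10-01", periods=90, freq="D"),
    "sales": np.random.default_rng(0).integers(50, 150, size=90),
})

# Aggregate days into months: fewer observations, less variability
monthly = daily.set_index("date")["sales"].resample("MS").sum()
print(monthly)
```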

Sampling

Sampling is the main technique employed for data reduction. --> often used for both the preliminary investigation of the data and the final data analysis.

  • Why use data sampling?

    • Expensive or time-consuming to obtain/collect the entire set of relevant data

      e.g. Random survey instead of a census of the entire population

    • Expensive or time-consuming to process the entire set of relevant data
      (Less of an issue these days with distributed computing)

  • Key principle for effective sampling:

    • Using a sample will work almost as well as using the entire data set, if the sample is representative
    • A sample is representative if it has approximately the same properties (of interest) as the original set of data.
  • Types of Sampling

    • Simple Random Sampling
      There is an equal probability of selecting any particular item.

      Sampling without replacement:
      As each item is selected, it is removed from the population.

      Sampling with replacement:
      Objects are not removed from the population as they are selected for the sample. --> The same object can be picked up more than once.

    • Stratified sampling
      Split the data into several partitions; then draw random samples from each partition.
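A brief pandas sketch of these sampling schemes (column names are illustrative):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "value": np.arange(1000),
    "group": np.repeat(["A", "B", "C", "D"], 250),  # strata
})

# Simple random sampling without replacement
srs_wo = df.sample(n=100, replace=False, random_state=42)

# Simple random sampling with replacement (the same row can appear twice)
srs_w = df.sample(n=100, replace=True, random_state=42)

# Stratified sampling: draw 25 rows from each partition ("group")
stratified = df.groupby("group", group_keys=False).sample(n=25, random_state=42)
```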

Dimensionality Reduction

As the number of features (dimensions) increases, more data (observations) is needed for an accurate classifier (model), since data gets increasingly sparse in the space it occupies.

  • Purpose

    • Avoid curse of dimensionality
    • Reduce amount of time and memory required by data mining algorithms
    • Allow data to be more easily visualized
    • May help to eliminate irrelevant features or reduce noise
  • Techniques

    • Data projection from high-dimensional to low-dimensional space.

      • Linear algebra techniques (e.g., Principal Component Analysis, Singular Value Decomposition)
    • Feature selection
      • More later
  • Motivation

    • Clustering
      One way to summarize a complex real-valued data point with a single categorical variable.
    • Dimensionality reduction
      • Another way to simplify complex high-dimensional data
      • Summarize data with a lower dimensional real valued vector
  • Principal Component Analysis (PCA)
    Find the projection that maximizes the variance (see the code sketch after this list).

    • Covariance

      • Variance and Covariance:

        • Measure of the “spread” of a set of points around their center of mass(mean).
      • Variance:
        • Measure of the deviation from the mean for points in one dimension.
      • Covariance:
        • Measure of how much two dimensions vary from their means with respect to each other (to see if there is a relation between the two dimensions).

    • Eigenvector and Eigenvalue
      The eigenvectors of the covariance matrix give the directions of the principal components; the corresponding eigenvalues give the variance captured along each direction.
    • More about eigenfaces
      We initially have n images, each with b*c pixels. The value of each pixel can be one feature, so we represent each image in b*c-dimensional space; call this d, where d = b*c. After applying PCA we get p principal components, i.e. p eigenvectors, each d-dimensional (d = b*c). We can plot these p principal components/eigenvectors as images; these are called eigenfaces.
    • Representation and Reconstruction

    • Reconstruction
      This reconstruction is blurred and lossy because we use only p principal components.
      If you use all the principal components, you can reconstruct the image without losing any information.
  • Multidimensional Scaling
    Find the projection that best preserves inter-point distances.
  • LDA (Linear Discriminant Analysis)
    Maximize the component axes for class separation.
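A minimal NumPy sketch of PCA as described above: center the data, form the covariance matrix, take the top-p eigenvectors (principal components), project, and reconstruct:

```python
import numpy as np

def pca(X, p):
    # Center the data (subtract the mean of each feature)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Covariance matrix of the features
    cov = np.cov(Xc, rowvar=False)
    # Eigendecomposition; eigenvalues = variance along each eigenvector
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the p eigenvectors with the largest eigenvalues
    order = np.argsort(eigvals)[::-1][:p]
    components = eigvecs[:, order]                  # shape (d, p)
    scores = Xc @ components                        # low-dimensional representation
    reconstruction = scores @ components.T + mean   # lossy if p < d
    return scores, reconstruction

X = np.random.default_rng(0).normal(size=(100, 5))
scores, X_hat = pca(X, p=2)
```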
Feature subset selection --> Another way to reduce the dimensionality of data
  • Redundant & Irrelevant features:

    • Redundant features:
      Duplicate much of the information contained in one or more other features

      e.g. Income level, CPF contributions and income tax

    • Irrelevant features:
      Contain no useful information for the data science task at hand

      e.g. NRIC for predicting a person’s chances of falling sick

  • Feature Selection Approaches

    • Embedded Approaches
      Feature selection embedded as part of the classification algorithm

      e.g. Selection of features when building decision trees

    • Filter Approaches
      Independent feature selection process, before applying the algorithm

      e.g. Filtering features based on their correlation with class labels

    • Wrapper Approaches
      Search for the best feature subset for a specific algorithm, treating it as a black box

      e.g. Recursive feature elimination

    Filter Approach                | Wrapper Approach
    Not tagged to any algorithm    | Tagged to a specific algorithm, but usually performs well for that algorithm
    Less computationally expensive | Computationally expensive
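As an illustration of the wrapper approach, a hedged scikit-learn sketch using recursive feature elimination on synthetic data (scikit-learn assumed available):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic data: 10 features, only 3 informative
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Wrapper approach: the estimator is used as a black box to rank features
selector = RFE(estimator=LogisticRegression(max_iter=1000),
               n_features_to_select=3)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the selected features
```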
Feature creation

Create new attributes that can capture the important information in a data set much more efficiently than the original attributes

  • Three general methodologies:

    • Feature extraction

      e.g. Extracting edges from images

    • Feature construction

      e.g. Dividing mass by volume to get density

    • Mapping data to new space

      e.g. Fourier and wavelet analysis

Discretization

Converting a continuous attribute into an ordinal attribute.

  • A potentially infinite number of values are mapped into a small number of categories.
  • Many classification algorithms work best if both the independent and dependent variables have only a few values.

e.g. Instead of using (continuous) total savings, we have (discrete) savings levels like low, medium, high
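A short pandas sketch of exactly this discretization (the bin edges are arbitrary assumptions):

```python
import pandas as pd

savings = pd.Series([120, 8500, 430, 25000, 3100, 76000])

# Map continuous total savings to ordinal savings levels
levels = pd.cut(savings,
                bins=[0, 1000, 10000, float("inf")],
                labels=["low", "medium", "high"])
print(levels)
```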

Binarization

Mapping a categorical attribute into one or more binary variables
(Typically used for association analysis, e.g. Market basket analysis: buy broccoli and oil --> buy garlic)
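And a corresponding sketch of binarization via one-hot encoding:

```python
import pandas as pd

df = pd.DataFrame({"basket": ["broccoli", "oil", "garlic", "oil"]})

# One binary column per category value
binary = pd.get_dummies(df["basket"], prefix="buys")
print(binary)
```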

3. Text Processing

  1. Tokenization
From Text to Tokens to Terms
  • Tokenization: Segmenting text into tokens
    [Token: a sequence of characters, in a particular document at a particular position]
Simple Tokenization

Analyze text into a sequence of discrete tokens (words)

  • Sometimes punctuation (e-mail), numbers (1999), and case (Republican vs republican) can be a meaningful part of a token. (However, frequently they are not.)

  • Simplest approach is to ignore all numbers and punctuation and use only case-insensitive unbroken strings of alphabetic characters as tokens.

  • More careful approach:

    • Separate ? ! ; : " ’ [ ] ( ) < >
    • Care with . - why? when?
    • Care with … ??
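A minimal sketch of that simplest approach (case-insensitive, alphabetic-only tokens via a regular expression); note how it mangles e-mail, a.out and 1999 exactly as discussed above:

```python
import re

def simple_tokenize(text):
    # Lowercase, then keep only unbroken strings of alphabetic characters
    return re.findall(r"[a-z]+", text.lower())

print(simple_tokenize("He's happy; e-mail me at a.out in 1999!"))
# ['he', 's', 'happy', 'e', 'mail', 'me', 'at', 'a', 'out', 'in']
```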
Tokenization
  • Apostrophes are ambiguous

    • Possessive constructions

      e.g. the book’s cover => the book s cover

    • Contractions:

      e.g. He’s happy => He is happy
      e.g. aren’t => are not

    • Quotations:

      e.g. ‘Let it be’ => Let it be

  • Whitespaces in proper names or collocations:

    e.g. San Francisco => San_Francisco

  • Hyphenations:

    e.g. co-education => co-education
    e.g. state-of-the-art => state of the art? state_of_the_art?
    e.g. lowercase, lower-case, lower case => lower_case
    e.g. Hewlett-Packard => Hewlett_Packard? Hewlett Packard?

  • Period:

    • Abbreviations

      e.g. Mr., Dr.

    • Acronyms:

      e.g. U.S.A

    • File names:

      e.g. a.out

  • Numbers

    e.g. 3/12/91
    e.g. Mar. 12, 1991
    e.g. 55 B.C.
    e.g. B-52
    e.g. 100.2.86.144

  • Unusual strings (that should be recognized as tokens)

    e.g. C++, C#, B-52, C4.5, M*A*S*H

  • Tokenizing HTML
    Should text in HTML tags not typically seen by the user be included as tokens?

    e.g. Words appearing in URLs; e.g. Words appearing in the 'meta text' of a message

    • Simplest approach is to exclude all HTML tag info (between '<' and '>') from tokenization.
Tokenization is language dependent

Need to know the language of the document/query --> Language identification (based on classifiers trained with short character subsequences as features) is highly effective.

  • Example:

    • French
      Reduced definite articles, postposed clitic pronouns

      e.g. l'ensemble, un ensemble, donne-moi

    • German
      Compound nouns need a compound splitter (usually implemented by finding segments that match against dictionary entries).

      e.g. Computerlinguistik
      e.g. Lebensversicherungsgesellschaftsangestellter (life insurance company employee)

    • Arabic and Hebrew
      Written right to left, but with certain items like numbers written left to right.
      Words are separated, but letter forms within a word form complex ligatures.

    • Chinese and Japanese
      Have no spaces between words. Further complicated in Japanese, with multiple alphabets intermingled (e.g. dates/amounts in multiple formats).

      • Word tokenization in Chinese (also called word segmentation)
        Chinese words are composed of characters (generally 1 syllable and 1 morpheme each); the average word is 2.4 characters long.
        Standard baseline segmentation algorithm: Maximum Matching (also called Greedy).
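A toy sketch of Maximum Matching (greedy) segmentation with a tiny made-up dictionary; real segmenters use much larger lexicons and statistical models:

```python
def max_match(text, dictionary, max_len=4):
    """Greedy left-to-right segmentation: take the longest dictionary word
    starting at the current position, falling back to a single character."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

dictionary = {"北京", "大学", "北京大学", "生", "学生"}
# Greedy output: ['北京大学', '生']; the alternative reading 北京 / 大学生 is not found
print(max_match("北京大学生", dictionary))
```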
Stopwords

It is typical to exclude high-frequency words.

e.g. Functions words: “a”, “the”, “in”, “to”;
e.g. Pronouns: “I”, “he”, “she”, “it”

Stopwords are language dependent. For efficiency, store stopword strings in a hashtable to recognize them in constant time.

e.g. simple Python dictionary
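For example, a minimal constant-time stopword filter using a Python set (the stopword list here is tiny and purely illustrative; NLTK ships fuller per-language lists):

```python
# A (deliberately tiny) stopword set; membership tests are O(1) on average
STOPWORDS = {"a", "the", "in", "to", "i", "he", "she", "it"}

def remove_stopwords(tokens):
    return [t for t in tokens if t.lower() not in STOPWORDS]

print(remove_stopwords(["He", "sat", "in", "the", "garden"]))
# ['sat', 'garden']
```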

Normalization

Token Normalization: reducing multiple tokens to the same canonical term, such that matches occur despite superficial differences.

  • Accents and diacritics in French

    e.g. résumé vs. resume

  • Umlauts in German

    e.g. Tuebingen vs. Tübingen
    (often best to normalize to a de-accented term, e.g. Tuebingen, Tübingen, Tubingen => Tubingen)

  • Case-Folding
    Reduce all letters to lower case.
    Allow:

    • allow Automobile at beginning of sentences to match automobile.
    • allow user-typed ferrari to match Ferrari in documents.

    but may lead to unintended matches:

    • the Fed vs. fed.
    • Bush, Black, General Motors, Associated Press, …
  • Heuristics

    • Lowercase only some tokens
      e.g. words at the beginning of sentences
      e.g. all words in a title where most words are capitalized
  • Truecasing
    Use a classifier to decide when to fold

    • trained on many heuristic features.
  • British vs American spellings:

    e.g. colour vs color

  • Multiple formats for dates, times:

    e.g. 09/30/2013 vs. Sep 30, 2013.

  • Asymmetric expansion

    e.g. Enter: window Search: window, windows
    e.g. Enter: windows Search: Windows, windows, window
    e.g. Enter: Windows Search: Windows
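A small sketch combining two of the normalizations above, case-folding and de-accenting via Unicode decomposition (this handles accents/umlauts like résumé or Tübingen, but not rewritings like ü => ue or asymmetric expansion):

```python
import unicodedata

def normalize_token(token):
    # Case-folding: reduce all letters to lower case
    token = token.lower()
    # De-accenting: decompose (é -> e + accent), then drop combining marks
    decomposed = unicodedata.normalize("NFD", token)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(normalize_token("Résumé"), normalize_token("Tübingen"))  # resume tubingen
```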

Lemmatization

Reduce inflectional/variant forms to the base form --> Direct impact on vocabulary size

e.g. am, are, is => be
e.g. car, cars, car’s, cars’ => car
e.g. the boy’s cars are different colors => the boy car be different color

  • How to do this?

    • Need a list of grammatical rules + a list of irregular words

      e.g. children => child, spoken => speak, …

    • Practical implementation: use WordNet's morphstr function
      (Python: NLTK.stem)
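A short NLTK sketch of lemmatization (assumes NLTK is installed and the WordNet corpus downloaded; without a POS tag the lemmatizer treats words as nouns):

```python
import nltk
from nltk.stem import WordNetLemmatizer

nltk.download("wordnet", quiet=True)  # one-off corpus download
lemmatizer = WordNetLemmatizer()

print(lemmatizer.lemmatize("cars"))             # car
print(lemmatizer.lemmatize("is", pos="v"))      # be
print(lemmatizer.lemmatize("spoken", pos="v"))  # speak
```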

Stemming

Reduce tokens to the “root” form of words to recognize morphological variation

e.g. “computer”, “computational”, “computation” all reduced to same token “compute”

Correct morphological analysis is language specific and can be complex

Stemming “blindly” strips off known affixes (prefixes and suffixes) in an iterative fashion

Stemming was frequently used in IR

  • Porter Stemmer
    A simple procedure for removing known affixes in English without using a dictionary --> Can produce unusual stems that are not English words

    e.g. “computer”, “computational”, “computation” all reduced to same token “comput”

    May conflate (reduce to the same token) words that are actually distinct. --> Does not recognize all morphological derivations.

    • Typical rules in Porter

      e.g. sses => ss
      e.g. ies => i
      e.g. ational => ate
      e.g. tional => tion

    • Porter Stemmer Errors

      • Errors of “commission”

        e.g. organization, organ => organ
        e.g. police, policy => polic
        e.g. arm, army => arm

      • Errors of “omission”

        e.g. cylinder, cylindrical
        e.g. create, creation
        e.g. Europe, European

  • Other stemmers
    e.g. Lovins stemmer

    Stemming is language- and often application-specific (open source and commercial plug-ins.)

    Does it improve IR performance?

    • mixed results for English: improves recall, but hurts precision.

      e.g. operative (dentistry) ⇒ oper

    • Definitely useful for languages with richer morphology (e.g. Spanish, German, Finnish; 30% gains).
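For comparison with the lemmatizer, a quick NLTK Porter stemmer sketch reproducing the examples above (NLTK assumed installed):

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["computer", "computational", "computation", "organization", "organ"]:
    print(word, "=>", stemmer.stem(word))
# computer => comput, computational => comput, computation => comput,
# organization => organ, organ => organ
```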

4. Text Representation

  1. Natural Language Processing
Some basic terms
  • Syntax:
    the allowable structures in the language

    e.g. sentences, phrases, affixes (-ing, -ed, -ment, etc.).

  • Semantics
    the meaning(s) of texts in the language.
  • Part-of-Speech (POS)
    the category of a word

    e.g. noun, verb, preposition etc.

  • Bag-of-words (BoW): a featurization that uses a vector of word counts (or binary) ignoring order.
  • N-gram: for a fixed, small N (2-5 is common), an n-gram is a consecutive sequence of N words in a text.
  2. Bag-of-Words Featurization

Assuming we have a dictionary mapping words to a unique integer id, a bag-of-words featurization of a sentence could look like this:

[The vector holds the number of occurrences of each word]

Note that the original word order is lost, replaced by the order of the ids.
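A minimal sketch of this featurization, using a plain Python dictionary as the word-to-id mapping (the toy vocabulary is invented):

```python
# Toy dictionary mapping words to unique integer ids
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}

def bag_of_words(tokens, vocab):
    # Vector of word counts; the original word order is lost
    vec = [0] * len(vocab)
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1
    return vec

print(bag_of_words("the cat sat on the mat".split(), vocab))  # [2, 1, 1, 1, 1]
```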

  3. Document Collection
A collection of n documents can be represented in the vector space model by a term-document matrix.
  • An entry in the matrix corresponds to the “weight” of a term in the document; zero means the term has no significance in the document or it simply doesn’t exist in the document.
  4. N-grams

Because word order is lost, the sentence meaning is weakened.
A different sentence can have quite a different meaning but exactly the same BoW vector.

BUT word order is important, especially the order of nearby words.

N-grams capture this by modeling tuples of consecutive words.
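A tiny sketch of extracting n-grams from a token sequence:

```python
def ngrams(tokens, n):
    # Consecutive tuples of n words
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the white house is white".split()
print(ngrams(tokens, 2))
# [('the', 'white'), ('white', 'house'), ('house', 'is'), ('is', 'white')]
```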

N-gram Features

Typically, it is advantageous to use multiple n-gram features in machine learning models for text.

e.g. unigrams + bigrams (2-grams) + trigrams (3-grams).

The unigrams have higher counts and are able to detect influences that are weak, while bigrams and trigrams capture strong influences that are more specific.

e.g. “the white house” will generally have very different influences
from the sum of influences of “the”, “white”, “house”.

N-gram Size
  • N-grams pose some challenges in feature set size.
  • If the original vocabulary size is |V|, the number of possible 2-grams is |V|^2, while for 3-grams it is |V|^3.
  • Luckily, natural language n-grams (including single words) have a power-law frequency structure. This means that most of the n-gram occurrences you see involve common n-grams, so a dictionary that contains the most common n-grams will cover most of the n-grams you encounter.
  • Because of this you may see values like this:
    • Unigram dictionary size: 40,000
    • Bigram dictionary size: 100,000
    • Trigram dictionary size: 300,000
    With coverage of > 80% of the features occurring in the text.
N-gram Language Models

N-grams can be used to build statistical models of texts --> n-gram language models --> associate a probability with each n-gram, such that the sum over all n-grams (for fixed n) is 1.

e.g. You can then determine the overall likelihood of a particular sentence:

  The cat sat on the mat

Is much more likely than

  The mat sat on the cat
Skip-grams

We can also analyze the meaning of a particular word by looking at the contexts in which it occurs.

The context is the set of words that occur near the word, i.e. at displacements of …,-3,-2,-1,+1,+2,+3,… in each sentence where the word occurs.

A skip-gram is a set of non-consecutive words (with a specified offset) that occur in some sentence.
We can construct a BoSG (bag of skip-gram) representation for each word from the skip-gram table.

Then with a suitable embedding (DNN or linear projection) of the skip-gram features, we find that word meaning has an algebraic structure:

Man + (King – Man) + (Woman – Man) = Queen

  5. TF-IDF Weighting
Term frequency
  • More frequent terms in a document are more important, i.e. more indicative of the topic.

    tf_ij = frequency of term i in document j

  • normalize term frequency (tf) by dividing by the frequency of the most common term in the document:

    tf_ij = f_ij / max_i{f_ij}

Inverse Document Frequency
  • Terms that appear in many different documents are less indicative of overall topic.

    (document frequency of term i)
    df_i = number of documents containing term i

    (inverse document frequency of term i --> an indication of a term’s discrimination power)
    idf_i = log2(N / df_i)

    (N: total number of documents. Log is used to dampen the effect relative to df.)

TF-IDF Weighting
  • A typical combined term importance indicator is TF- IDF weighting:

    w_ij = tf_ij × idf_i = tf_ij × log2(N / df_i)

  • A term occurring frequently in the document but rarely in the rest of the collection is given high weight.
  • Many other ways of determining term weights have been proposed.
    Experimentally, TF-IDF has been found to work well.
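A compact sketch that computes these weights over a toy document collection (base-2 log, as above):

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]
N = len(docs)

# Document frequency: number of documents containing each term
df = Counter()
for doc in docs:
    df.update(set(doc))

def tf_idf(doc):
    counts = Counter(doc)
    max_f = max(counts.values())
    # w_ij = (f_ij / max_i f_ij) * log2(N / df_i)
    return {t: (f / max_f) * math.log2(N / df[t]) for t, f in counts.items()}

print(tf_idf(docs[0]))
```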
