Text processing is a very common task in Python, and this article pulls together 25 worked examples covering text extraction and NLP.

It is a long read, so bear with it; if you can't, bookmark it for later. It will come in handy.

  • Extract PDF content

  • Extract Word content

  • Extract web page content

  • Read JSON data

  • Read CSV data

  • Remove punctuation from a string

  • Remove stop words with NLTK

  • Correct spelling with TextBlob

  • Word tokenization with NLTK and TextBlob

  • Stem the words of a sentence or phrase with NLTK

  • Lemmatize a sentence or phrase with NLTK

  • Find the frequency of each word in a text file with NLTK

  • Create a word cloud from a corpus

  • NLTK lexical dispersion plot

  • Convert text to numbers with CountVectorizer

  • Create a document-term matrix with TF-IDF

  • Generate N-grams for a given sentence

  • Specify a bigram vocabulary with sklearn CountVectorizer

  • Extract noun phrases with TextBlob

  • How to compute a word-word co-occurrence matrix

  • Sentiment analysis with TextBlob

  • Language translation with Goslate

  • Language detection and translation with TextBlob

  • Get definitions and synonyms with TextBlob

  • Get a list of antonyms with TextBlob

1. Extract PDF content

# pip install PyPDF2
import PyPDF2

# Create a pdf file object.
pdf = open("test.pdf", "rb")

# Create a pdf reader object.
pdf_reader = PyPDF2.PdfFileReader(pdf)

# Check the total number of pages in the pdf file.
print("Total number of Pages:", pdf_reader.numPages)

# Create a page object and extract its text.
page = pdf_reader.getPage(200)
print(page.extractText())

# Close the file object.
pdf.close()
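Note that PdfFileReader, numPages, getPage and extractText belong to the legacy PyPDF2 1.x API. In the maintained successor package pypdf (and in PyPDF2 3.x) the same steps look roughly like this; a minimal sketch assuming the same test.pdf:

# pip install pypdf  (the maintained successor to PyPDF2)
from pypdf import PdfReader

reader = PdfReader("test.pdf")
print("Total number of Pages:", len(reader.pages))
page = reader.pages[0]      # pages are 0-indexed
print(page.extract_text())  # extract the text of that page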

2. Extract Word content

# pip install python-docx
import docx

def main():
    try:
        # Create a Word document reader object.
        doc = docx.Document('test.docx')
        fullText = []
        for para in doc.paragraphs:
            fullText.append(para.text)
        data = '\n'.join(fullText)
        print(data)
    except IOError:
        print('There was an error opening the file!')
        return

if __name__ == '__main__':
    main()

3. Extract web page content

# pip install bs4
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

req = Request('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1',
              headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()

# Parse the page
soup = BeautifulSoup(webpage, 'html.parser')

# Format the parsed html
strhtm = soup.prettify()

# Print the first 500 characters
print(strhtm[:500])

# Extract the title and a meta tag value
print(soup.title.string)
print(soup.find('meta', attrs={'property': 'og:description'}))

# Extract anchor tag values
for x in soup.find_all('a'):
    print(x.string)

# Extract paragraph tag values
for x in soup.find_all('p'):
    print(x.text)
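If the requests library is available, the fetch step can be a little shorter; a minimal sketch of the same scrape (assumes pip install requests):

# pip install requests
import requests
from bs4 import BeautifulSoup

resp = requests.get('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1',
                    headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(resp.text, 'html.parser')
print(soup.title.string)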

4. Read JSON data

import requests
import json

r = requests.get("https://support.oneskyapp.com/hc/en-us/article_attachments/202761727/example_2.json")
res = r.json()

# Extract a specific node's content.
print(res['quiz']['sport'])

# Dump the data back out as a string.
data = json.dumps(res)
print(data)
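When the JSON lives in a local file rather than behind a URL, the standard library alone is enough; a minimal sketch assuming a hypothetical example.json with the same structure:

import json

# Hypothetical local file with the same structure as the URL above.
with open('example.json', 'r', encoding='utf-8') as f:
    res = json.load(f)

print(res['quiz']['sport'])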

5. Read CSV data

import csv

with open('test.csv', 'r') as csv_file:
    reader = csv.reader(csv_file)
    next(reader)  # Skip the header row
    for row in reader:
        print(row)
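A variant worth knowing: csv.DictReader consumes the header row itself and yields each record as a dict keyed by column name, which usually reads better than positional indexing. A minimal sketch against the same test.csv:

import csv

with open('test.csv', 'r') as csv_file:
    for row in csv.DictReader(csv_file):  # the header row becomes the keys
        print(row)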

6. Remove punctuation from a string

import re
import string

data = "Stuning even for the non-gamer: This sound track was beautiful!\
It paints the senery in your mind so well I would recomend\
it even to people who hate vid. game music! I have played the game Chrono \
Cross but out of all of the games I have ever played it has the best music! \
It backs away from crude keyboarding and takes a fresher step with grate\
guitars and soulful orchestras.\
It would impress anyone who cares to listen!"

# Method 1: regex
# Remove the special characters from the string.
no_specials_string = re.sub('[!#?,.:";]', '', data)
print(no_specials_string)

# Method 2: translate()
# Make a translator object that strips all punctuation.
translator = str.maketrans('', '', string.punctuation)
data = data.translate(translator)
print(data)

7. Remove stop words with NLTK

from nltk.corpus import stopwords

# nltk.download('stopwords')  # required once before first use

data = ['Stuning even for the non-gamer: This sound track was beautiful!\
It paints the senery in your mind so well I would recomend\
it even to people who hate vid. game music! I have played the game Chrono \
Cross but out of all of the games I have ever played it has the best music! \
It backs away from crude keyboarding and takes a fresher step with grate\
guitars and soulful orchestras.\
It would impress anyone who cares to listen!']

# Remove stop words
stopwords = set(stopwords.words('english'))

output = []
for sentence in data:
    temp_list = []
    for word in sentence.split():
        if word.lower() not in stopwords:
            temp_list.append(word)
    output.append(' '.join(temp_list))

print(output)

8. Correct spelling with TextBlob

from textblob import TextBlob

# The sample text deliberately contains misspellings to correct.
data = "Natural language is a cantral part of our day to day life, and it's so antresting to work on any problem related to langages."

output = TextBlob(data).correct()
print(output)
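TextBlob can also score individual candidates: Word.spellcheck() returns (candidate, confidence) pairs, which is useful when you want to inspect suggestions rather than auto-correct. A small sketch:

from textblob import Word

# spellcheck() returns a list of (candidate, confidence) tuples;
# for 'langages' it should rank 'languages' highly.
print(Word('langages').spellcheck())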

9. Word tokenization with NLTK and TextBlob

import nltk
from textblob import TextBlob

# nltk.download('punkt')  # required once for word_tokenize

data = "Natural language is a central part of our day to day life, and it's so interesting to work on any problem related to languages."

nltk_output = nltk.word_tokenize(data)
textblob_output = TextBlob(data).words

print(nltk_output)
print(textblob_output)

Output:

['Natural', 'language', 'is', 'a', 'central', 'part', 'of', 'our', 'day', 'to', 'day', 'life', ',', 'and', 'it', "'s", 'so', 'interesting', 'to', 'work', 'on', 'any', 'problem', 'related', 'to', 'languages', '.']
['Natural', 'language', 'is', 'a', 'central', 'part', 'of', 'our', 'day', 'to', 'day', 'life', 'and', 'it', "'s", 'so', 'interesting', 'to', 'work', 'on', 'any', 'problem', 'related', 'to', 'languages']

10. Stem the words of a sentence or phrase with NLTK

from nltk.stem import PorterStemmer

st = PorterStemmer()
text = ['Where did he learn to dance like that?',
        'His eyes were dancing with humor.',
        'She shook her head and danced away',
        'Alex was an excellent dancer.']

output = []
for sentence in text:
    output.append(" ".join([st.stem(i) for i in sentence.split()]))

for item in output:
    print(item)

print("-" * 50)
print(st.stem('jumping'), st.stem('jumps'), st.stem('jumped'))

Output:

where did he learn to danc like that?
hi eye were danc with humor.
she shook her head and danc away
alex wa an excel dancer.
--------------------------------------------------
jump jump jump
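PorterStemmer is the oldest of NLTK's stemmers; SnowballStemmer('english') (also known as Porter2) is a common drop-in alternative that fixes several Porter quirks. A quick sketch:

from nltk.stem import SnowballStemmer

sb = SnowballStemmer('english')  # Porter2, a refinement of the Porter algorithm
print(sb.stem('dancing'), sb.stem('dances'), sb.stem('danced'))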

11. Lemmatize a sentence or phrase with NLTK

from nltk.stem import WordNetLemmatizer

# nltk.download('wordnet')  # required once for the WordNet corpus

wnl = WordNetLemmatizer()
text = ['She gripped the armrest as he passed two cars at a time.',
        'Her car was in full view.',
        'A number of cars carried out of state license plates.']

output = []
for sentence in text:
    output.append(" ".join([wnl.lemmatize(i) for i in sentence.split()]))

for item in output:
    print(item)

print("*" * 10)
print(wnl.lemmatize('jumps', 'n'))
print(wnl.lemmatize('jumping', 'v'))
print(wnl.lemmatize('jumped', 'v'))

print("*" * 10)
print(wnl.lemmatize('saddest', 'a'))
print(wnl.lemmatize('happiest', 'a'))
print(wnl.lemmatize('easiest', 'a'))

Output:

She gripped the armrest a he passed two car at a time.
Her car wa in full view.
A number of car carried out of state license plates.
**********
jump
jump
jump
**********
sad
happy
easy
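lemmatize() defaults to treating every word as a noun, which is why verbs like "gripped" and "passed" came through unchanged above. The usual fix is to feed it POS tags from nltk.pos_tag; a rough sketch (the tag-mapping helper is our own):

import nltk
from nltk.stem import WordNetLemmatizer

# Requires: nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

def wordnet_pos(treebank_tag):
    # Map Penn Treebank tags to the single-letter POS codes WordNet expects.
    if treebank_tag.startswith('J'):
        return 'a'
    if treebank_tag.startswith('V'):
        return 'v'
    if treebank_tag.startswith('R'):
        return 'r'
    return 'n'

wnl = WordNetLemmatizer()
sentence = 'She gripped the armrest as he passed two cars at a time.'
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
print(' '.join(wnl.lemmatize(word, wordnet_pos(tag)) for word, tag in tagged))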

12. Find the frequency of each word in a text file with NLTK

import nltk
from nltk.corpus import webtext

nltk.download('webtext')
wt_words = webtext.words('testing.txt')
data_analysis = nltk.FreqDist(wt_words)

# Keep only the words that are longer than 3 characters.
filter_words = dict([(m, n) for m, n in data_analysis.items() if len(m) > 3])

for key in sorted(filter_words):
    print("%s: %s" % (key, filter_words[key]))

data_analysis = nltk.FreqDist(filter_words)
data_analysis.plot(25, cumulative=False)

Output:

[nltk_data] Downloading package webtext to
[nltk_data]     C:\Users\amit\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\webtext.zip.
1989: 1
Accessing: 1
Analysis: 1
Anyone: 1
Chapter: 1
Coding: 1
Data: 1
...
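As a shortcut, FreqDist subclasses collections.Counter, so most_common() gives the top words directly without the manual sort used above:

# Assuming the data_analysis FreqDist built above:
for word, frequency in data_analysis.most_common(10):
    print(word, frequency)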

13. Create a word cloud from a corpus

import nltk
from nltk.corpus import webtext
from wordcloud import WordCloud
import matplotlib.pyplot as plt

nltk.download('webtext')
wt_words = webtext.words('testing.txt')  # Sample data
data_analysis = nltk.FreqDist(wt_words)

filter_words = dict([(m, n) for m, n in data_analysis.items() if len(m) > 3])

wcloud = WordCloud().generate_from_frequencies(filter_words)

# Plot the word cloud
plt.imshow(wcloud, interpolation="bilinear")
plt.axis("off")
plt.show()

14. NLTK lexical dispersion plot

import nltk
from nltk.corpus import webtext
import matplotlib.pyplot as plt

words = ['data', 'science', 'dataset']

nltk.download('webtext')
wt_words = webtext.words('testing.txt')  # Sample data

# Offsets of every occurrence of each target word.
points = [(x, y) for x in range(len(wt_words))
          for y in range(len(words)) if wt_words[x] == words[y]]

if points:
    x, y = zip(*points)
else:
    x = y = ()

plt.plot(x, y, "rx")
plt.yticks(range(len(words)), words, color="b")
plt.ylim(-1, len(words))
plt.title("Lexical Dispersion Plot")
plt.xlabel("Word Offset")
plt.show()
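NLTK also ships this plot ready-made: wrapping the tokens in nltk.Text exposes dispersion_plot(), which draws the same picture in two lines:

import nltk
from nltk.corpus import webtext

# Built-in equivalent of the manual scatter plot above.
text = nltk.Text(webtext.words('testing.txt'))
text.dispersion_plot(['data', 'science', 'dataset'])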

15. Convert text to numbers with CountVectorizer

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Sample data for analysis
data1 = "Java is a language for programming that develops a software for several platforms. A compiled code or bytecode on Java application can run on most of the operating systems including Linux, Mac operating system, and Linux. Most of the syntax of Java is derived from the C++ and C languages."
data2 = "Python supports multiple programming paradigms and comes up with a large standard library, paradigms included are object-oriented, imperative, functional and procedural."
data3 = "Go is typed statically compiled language. It was created by Robert Griesemer, Ken Thompson, and Rob Pike in 2009. This language offers garbage collection, concurrency of CSP-style, memory safety, and structural typing."

df1 = pd.DataFrame({'Java': [data1], 'Python': [data2], 'Go': [data3]})

# Initialize
vectorizer = CountVectorizer()
doc_vec = vectorizer.fit_transform(df1.iloc[0])

# Create dataFrame (on sklearn < 1.0 use get_feature_names() instead)
df2 = pd.DataFrame(doc_vec.toarray().transpose(),
                   index=vectorizer.get_feature_names_out())

# Change column headers
df2.columns = df1.columns
print(df2)

Output:

Go  Java  Python
and           2     2       2
application   0     1       0
are           1     0       1
bytecode      0     1       0
can           0     1       0
code          0     1       0
comes         1     0       1
compiled      0     1       0
derived       0     1       0
develops      0     1       0
for           0     2       0
from          0     1       0
functional    1     0       1
imperative    1     0       1
...

16. Create a document-term matrix with TF-IDF

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Sample data for analysis
data1 = "Java is a language for programming that develops a software for several platforms. A compiled code or bytecode on Java application can run on most of the operating systems including Linux, Mac operating system, and Linux. Most of the syntax of Java is derived from the C++ and C languages."
data2 = "Python supports multiple programming paradigms and comes up with a large standard library, paradigms included are object-oriented, imperative, functional and procedural."
data3 = "Go is typed statically compiled language. It was created by Robert Griesemer, Ken Thompson, and Rob Pike in 2009. This language offers garbage collection, concurrency of CSP-style, memory safety, and structural typing."

df1 = pd.DataFrame({'Java': [data1], 'Python': [data2], 'Go': [data3]})

# Initialize
vectorizer = TfidfVectorizer()
doc_vec = vectorizer.fit_transform(df1.iloc[0])

# Create dataFrame (on sklearn < 1.0 use get_feature_names() instead)
df2 = pd.DataFrame(doc_vec.toarray().transpose(),
                   index=vectorizer.get_feature_names_out())

# Change column headers
df2.columns = df1.columns
print(df2)

Output:

Go      Java    Python
and          0.323751  0.137553  0.323751
application  0.000000  0.116449  0.000000
are          0.208444  0.000000  0.208444
bytecode     0.000000  0.116449  0.000000
can          0.000000  0.116449  0.000000
code         0.000000  0.116449  0.000000
comes        0.208444  0.000000  0.208444
compiled     0.000000  0.116449  0.000000
derived      0.000000  0.116449  0.000000
develops     0.000000  0.116449  0.000000
for          0.000000  0.232898  0.000000
...
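The scores make sense once you look at the learned IDF weights: terms that appear in more documents are discounted the most. The fitted vectorizer exposes them via its idf_ attribute; a small follow-up sketch, assuming the vectorizer fitted above is still in scope:

import pandas as pd

# Words appearing in more documents get smaller weights under
# sklearn's smoothed idf = ln((1 + n) / (1 + df)) + 1.
idf = pd.Series(vectorizer.idf_, index=vectorizer.get_feature_names_out())
print(idf.sort_values().head(10))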

17. Generate N-grams for a given sentence

NLTK

import nltk
from nltk.util import ngrams

# Generate n-grams from a sentence.
def extract_ngrams(data, num):
    n_grams = ngrams(nltk.word_tokenize(data), num)
    return [' '.join(grams) for grams in n_grams]

data = 'A class is a blueprint for the object.'

print("1-gram: ", extract_ngrams(data, 1))
print("2-gram: ", extract_ngrams(data, 2))
print("3-gram: ", extract_ngrams(data, 3))
print("4-gram: ", extract_ngrams(data, 4))

TextBlob

from textblob import TextBlob

# Generate n-grams from a sentence.
def extract_ngrams(data, num):
    n_grams = TextBlob(data).ngrams(num)
    return [' '.join(grams) for grams in n_grams]

data = 'A class is a blueprint for the object.'

print("1-gram: ", extract_ngrams(data, 1))
print("2-gram: ", extract_ngrams(data, 2))
print("3-gram: ", extract_ngrams(data, 3))
print("4-gram: ", extract_ngrams(data, 4))

Output:

1-gram:  ['A', 'class', 'is', 'a', 'blueprint', 'for', 'the', 'object']
2-gram:  ['A class', 'class is', 'is a', 'a blueprint', 'blueprint for', 'for the', 'the object']
3-gram:  ['A class is', 'class is a', 'is a blueprint', 'a blueprint for', 'blueprint for the', 'for the object']
4-gram:  ['A class is a', 'class is a blueprint', 'is a blueprint for', 'a blueprint for the', 'blueprint for the object']
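Under the hood an n-gram is just a sliding window of n tokens, so a dependency-free version is a one-liner (note that a bare split() keeps the final period glued to 'object.', unlike the tokenizers above):

def extract_ngrams(data, num):
    tokens = data.split()  # naive whitespace tokenization
    return [' '.join(tokens[i:i + num]) for i in range(len(tokens) - num + 1)]

print(extract_ngrams('A class is a blueprint for the object.', 2))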

18. Specify a bigram vocabulary with sklearn CountVectorizer

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# Sample data for analysis
data1 = "Machine language is a low-level programming language. It is easily understood by computers but difficult to read by people. This is why people use higher level programming languages. Programs written in high-level languages are also either compiled and/or interpreted into machine language so that computers can execute them."
data2 = "Assembly language is a representation of machine language. In other words, each assembly language instruction translates to a machine language instruction. Though assembly language statements are readable, the statements are still low-level. A disadvantage of assembly language is that it is not portable, because each platform comes with a particular Assembly Language"

df1 = pd.DataFrame({'Machine': [data1], 'Assembly': [data2]})

# Initialize, counting bigrams only
vectorizer = CountVectorizer(ngram_range=(2, 2))
doc_vec = vectorizer.fit_transform(df1.iloc[0])

# Create dataFrame (on sklearn < 1.0 use get_feature_names() instead)
df2 = pd.DataFrame(doc_vec.toarray().transpose(),
                   index=vectorizer.get_feature_names_out())

# Change column headers
df2.columns = df1.columns
print(df2)

Output:

Assembly  Machine
also either                    0        1
and or                         0        1
are also                       0        1
are readable                   1        0
are still                      1        0
assembly language              5        0
because each                   1        0
but difficult                  0        1
by computers                   0        1
by people                      0        1
can execute                    0        1
...

19. Extract noun phrases with TextBlob

from textblob import TextBlob

# Extract noun phrases
blob = TextBlob("Canada is a country in the northern part of North America.")
for nouns in blob.noun_phrases:
    print(nouns)

Output:

canada
northern part
america
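TextBlob's default noun-phrase chunker is FastNPExtractor; ConllExtractor, trained on the CoNLL-2000 corpus, is an alternative that often yields different (sometimes better) phrases. A hedged sketch, which needs nltk.download('conll2000') on first use:

from textblob import TextBlob
from textblob.np_extractors import ConllExtractor

# Alternative chunker; requires the conll2000 corpus to be downloaded.
blob = TextBlob("Canada is a country in the northern part of North America.",
                np_extractor=ConllExtractor())
print(blob.noun_phrases)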

20. How to compute a word-word co-occurrence matrix

import numpy as np
import nltk
from nltk import bigrams
import itertools
import pandas as pd

def generate_co_occurrence_matrix(corpus):
    vocab = set(corpus)
    vocab = list(vocab)
    vocab_index = {word: i for i, word in enumerate(vocab)}

    # Create bigrams from all words in the corpus
    bi_grams = list(bigrams(corpus))

    # Frequency distribution of bigrams ((word1, word2), num_occurrences)
    bigram_freq = nltk.FreqDist(bi_grams).most_common(len(bi_grams))

    # Initialise the co-occurrence matrix
    # co_occurrence_matrix[current][previous]
    co_occurrence_matrix = np.zeros((len(vocab), len(vocab)))

    # Loop through the bigrams, taking the current and previous word
    # and the number of occurrences of the bigram.
    for bigram in bigram_freq:
        current = bigram[0][1]
        previous = bigram[0][0]
        count = bigram[1]
        pos_current = vocab_index[current]
        pos_previous = vocab_index[previous]
        co_occurrence_matrix[pos_current][pos_previous] = count

    co_occurrence_matrix = np.matrix(co_occurrence_matrix)

    # Return the matrix and the index
    return co_occurrence_matrix, vocab_index

text_data = [['Where', 'Python', 'is', 'used'],
             ['What', 'is', 'Python', 'used', 'in'],
             ['Why', 'Python', 'is', 'best'],
             ['What', 'companies', 'use', 'Python']]

# Flatten the lists into a single token list
data = list(itertools.chain.from_iterable(text_data))
matrix, vocab_index = generate_co_occurrence_matrix(data)

data_matrix = pd.DataFrame(matrix, index=vocab_index,
                           columns=vocab_index)
print(data_matrix)

Output:

best  use  What  Where  ...    in   is  Python  used
best         0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   1.0
use          0.0  0.0   0.0    0.0  ...   0.0  1.0     0.0   0.0
What         1.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   0.0
Where        0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   0.0
Pythonused   0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   1.0
Why          0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   1.0
companies    0.0  1.0   0.0    1.0  ...   1.0  0.0     0.0   0.0
in           0.0  0.0   0.0    0.0  ...   0.0  0.0     1.0   0.0
is           0.0  0.0   1.0    0.0  ...   0.0  0.0     0.0   0.0
Python       0.0  0.0   0.0    0.0  ...   0.0  0.0     0.0   0.0
used         0.0  0.0   1.0    0.0  ...   0.0  0.0     0.0   0.0[11 rows x 11 columns]
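Note that the matrix above is directional: entry [current][previous] counts how often previous immediately precedes current. If word order should not matter, one simple follow-up is to symmetrize it by adding the transpose:

# Order-free co-occurrence: count (a, b) and (b, a) together.
symmetric_matrix = matrix + matrix.T
print(symmetric_matrix)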

21. Sentiment analysis with TextBlob

from textblob import TextBlob

# polarity ranges from -1.0 (most negative) to 1.0 (most positive).
def sentiment(polarity):
    if polarity < 0:
        print("Negative")
    elif polarity > 0:
        print("Positive")
    else:
        print("Neutral")

blob = TextBlob("The movie was excellent!")
print(blob.sentiment)
sentiment(blob.sentiment.polarity)

blob = TextBlob("The movie was not bad.")
print(blob.sentiment)
sentiment(blob.sentiment.polarity)

blob = TextBlob("The movie was ridiculous.")
print(blob.sentiment)
sentiment(blob.sentiment.polarity)

Output:

Sentiment(polarity=1.0, subjectivity=1.0)
Positive
Sentiment(polarity=0.3499999999999999, subjectivity=0.6666666666666666)
Positive
Sentiment(polarity=-0.3333333333333333, subjectivity=1.0)
Negative

22. Language translation with Goslate

import goslate

text = "Comment vas-tu?"
gs = goslate.Goslate()

translatedText = gs.translate(text, 'en')
print(translatedText)

translatedText = gs.translate(text, 'zh')
print(translatedText)

translatedText = gs.translate(text, 'de')
print(translatedText)
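Goslate scrapes Google Translate's free web endpoint and tends to break whenever Google changes it. If it fails, one hedged alternative is the deep-translator package (assumes pip install deep-translator):

# A hedged alternative to Goslate, not part of the original article.
from deep_translator import GoogleTranslator

print(GoogleTranslator(source='auto', target='en').translate("Comment vas-tu?"))
print(GoogleTranslator(source='auto', target='de').translate("Comment vas-tu?"))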

23. Language detection and translation with TextBlob

from textblob import TextBlob

# Note: detect_language() and translate() call the Google Translate API;
# they are deprecated (and removed in newer TextBlob releases), so this
# snippet only works on older versions of the library.
blob = TextBlob("Comment vas-tu?")

print(blob.detect_language())

print(blob.translate(to='es'))
print(blob.translate(to='en'))
print(blob.translate(to='zh'))

Output:

fr
¿Como estas tu?
How are you?
你好吗?

24. Get definitions and synonyms with TextBlob

from textblob import Word

text_word = Word('safe')

# WordNet definitions of the word
print(text_word.definitions)

# Collect synonyms from every synset the word belongs to
synonyms = set()
for synset in text_word.synsets:
    for lemma in synset.lemmas():
        synonyms.add(lemma.name())

print(synonyms)

Output:

['strongbox where valuables can be safely kept', 'a ventilated or refrigerated cupboard for securing provisions from pests', 'contraceptive device consisting of a sheath of thin rubber or latex that is worn over the penis during intercourse', 'free from danger or the risk of harm', '(of an undertaking) secure from risk', 'having reached a base without being put out', 'financially sound']
{'secure', 'rubber', 'good', 'safety', 'safe', 'dependable', 'condom', 'prophylactic'}

25. Get a list of antonyms with TextBlob

from textblob import Word

text_word = Word('safe')

# Collect antonyms from every lemma that has one
antonyms = set()
for synset in text_word.synsets:
    for lemma in synset.lemmas():
        if lemma.antonyms():
            antonyms.add(lemma.antonyms()[0].name())

print(antonyms)

Output:

{'dangerous', 'out'}

END
