1. Open a new file in Notepad++ and copy the following text into it:

{

    "corpora": {"semeval-2016-2017-task3-subtaskBC": {"num_records": -1,"record_format": "dict","file_size": 6344358,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/semeval-2016-2017-task3-subtaskB-eng/__init__.py","license": "All files released for the task are free for general research use","fields": {"2016-train": ["..."],"2016-dev": ["..."],"2017-test": ["..."],"2016-test": ["..."]},"description": "SemEval 2016 / 2017 Task 3 Subtask B and C datasets contain train+development (317 original questions, 3,169 related questions, and 31,690 comments), and test datasets in English. The description of the tasks and the collected data is given in sections 3 and 4.1 of the task paper http://alt.qcri.org/semeval2016/task3/data/uploads/semeval2016-task3-report.pdf linked in section \u201cPapers\u201d of https://github.com/RaRe-Technologies/gensim-data/issues/18.","checksum": "701ea67acd82e75f95e1d8e62fb0ad29","file_name": "semeval-2016-2017-task3-subtaskBC.gz","read_more": ["http://alt.qcri.org/semeval2017/task3/","http://alt.qcri.org/semeval2017/task3/data/uploads/semeval2017-task3.pdf","https://github.com/RaRe-Technologies/gensim-data/issues/18","https://github.com/Witiko/semeval-2016_2017-task3-subtaskB-english"],"parts": 1},"semeval-2016-2017-task3-subtaskA-unannotated": {"num_records": 189941,"record_format": "dict","file_size": 234373151,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/semeval-2016-2017-task3-subtaskA-unannotated-eng/__init__.py","license": "These datasets are free for general research use.","fields": {"THREAD_SEQUENCE": "","RelQuestion": {"RELQ_CATEGORY": "question category, according to the Qatar Living taxonomy","RELQ_DATE": "date of posting","RELQ_ID": "question indentifier","RELQ_USERID": "identifier of the user asking the question","RELQ_USERNAME": "name of the user asking the question","RelQBody": "body of question","RelQSubject": "subject of question"},"RelComments": [{"RelCText": "text of answer","RELC_USERID": "identifier of the user posting the comment","RELC_ID": "comment identifier","RELC_USERNAME": "name of the user posting the comment","RELC_DATE": "date of posting"}]},"description": "SemEval 2016 / 2017 Task 3 Subtask A unannotated dataset contains 189,941 questions and 1,894,456 comments in English collected from the Community Question Answering (CQA) web forum of Qatar Living. These can be used as a corpus for language modelling.","checksum": "2de0e2f2c4f91c66ae4fcf58d50ba816","file_name": "semeval-2016-2017-task3-subtaskA-unannotated.gz","read_more": ["http://alt.qcri.org/semeval2016/task3/","http://alt.qcri.org/semeval2016/task3/data/uploads/semeval2016-task3-report.pdf","https://github.com/RaRe-Technologies/gensim-data/issues/18","https://github.com/Witiko/semeval-2016_2017-task3-subtaskA-unannotated-english"],"parts": 1},"patent-2017": {"num_records": 353197,"record_format": "dict","file_size": 3087262469,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/patent-2017/__init__.py","license": "not found","description": "Patent Grant Full Text. 
Contains the full text including tables, sequence data and 'in-line' mathematical expressions of each patent grant issued in 2017.","checksum-0": "818501f0b9af62d3b88294d86d509f8f","checksum-1": "66c05635c1d3c7a19b4a335829d09ffa","file_name": "patent-2017.gz","read_more": ["http://patents.reedtech.com/pgrbft.php"],"parts": 2},"quora-duplicate-questions": {"num_records": 404290,"record_format": "dict","file_size": 21684784,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/quora-duplicate-questions/__init__.py","license": "probably https://www.quora.com/about/tos","fields": {"question1": "the full text of each question","question2": "the full text of each question","qid1": "unique ids of each question","qid2": "unique ids of each question","id": "the id of a training set question pair","is_duplicate": "the target variable, set to 1 if question1 and question2 have essentially the same meaning, and 0 otherwise"},"description": "Over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line contains a duplicate pair or not.","checksum": "d7cfa7fbc6e2ec71ab74c495586c6365","file_name": "quora-duplicate-questions.gz","read_more": ["https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs"],"parts": 1},"wiki-english-20171001": {"num_records": 4924894,"record_format": "dict","file_size": 6516051717,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/wiki-english-20171001/__init__.py","license": "https://dumps.wikimedia.org/legal.html","fields": {"section_texts": "list of body of sections","section_titles": "list of titles of sections","title": "Title of wiki article"},"description": "Extracted Wikipedia dump from October 2017. Produced by `python -m gensim.scripts.segment_wiki -f enwiki-20171001-pages-articles.xml.bz2 -o wiki-en.gz`","checksum-0": "a7d7d7fd41ea7e2d7fa32ec1bb640d71","checksum-1": "b2683e3356ffbca3b6c2dca6e9801f9f","checksum-2": "c5cde2a9ae77b3c4ebce804f6df542c2","checksum-3": "00b71144ed5e3aeeb885de84f7452b81","file_name": "wiki-english-20171001.gz","read_more": ["https://dumps.wikimedia.org/enwiki/20171001/"],"parts": 4},"text8": {"num_records": 1701,"record_format": "list of str (tokens)","file_size": 33182058,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/text8/__init__.py","license": "not found","description": "First 100,000,000 bytes of plain text from Wikipedia. 
Used for testing purposes; see wiki-english-* for proper full Wikipedia datasets.","checksum": "68799af40b6bda07dfa47a32612e5364","file_name": "text8.gz","read_more": ["http://mattmahoney.net/dc/textdata.html"],"parts": 1},"fake-news": {"num_records": 12999,"record_format": "dict","file_size": 20102776,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/fake-news/__init__.py","license": "https://creativecommons.org/publicdomain/zero/1.0/","fields": {"crawled": "date the story was archived","ord_in_thread": "","published": "date published","participants_count": "number of participants","shares": "number of Facebook shares","replies_count": "number of replies","main_img_url": "image from story","spam_score": "data from webhose.io","uuid": "unique identifier","language": "data from webhose.io","title": "title of story","country": "data from webhose.io","domain_rank": "data from webhose.io","author": "author of story","comments": "number of Facebook comments","site_url": "site URL from BS detector","text": "text of story","thread_title": "","type": "type of website (label from BS detector)","likes": "number of Facebook likes"},"description": "News dataset, contains text and metadata from 244 websites and represents 12,999 posts in total from a specific window of 30 days. The data was pulled using the webhose.io API, and because it's coming from their crawler, not all websites identified by their BS Detector are present in this dataset. Data sources that were missing a label were simply assigned a label of 'bs'. There are (ostensibly) no genuine, reliable, or trustworthy news sources represented in this dataset (so far), so don't trust anything you read.","checksum": "5e64e942df13219465927f92dcefd5fe","file_name": "fake-news.gz","read_more": ["https://www.kaggle.com/mrisdal/fake-news"],"parts": 1},"20-newsgroups": {"num_records": 18846,"record_format": "dict","file_size": 14483581,"reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/20-newsgroups/__init__.py","license": "not found","fields": {"topic": "name of topic (20 variant of possible values)","set": "marker of original split (possible values 'train' and 'test')","data": "","id": "original id inferred from folder name"},"description": "The notorious collection of approximately 20,000 newsgroup posts, partitioned (nearly) evenly across 20 different newsgroups.","checksum": "c92fd4f6640a86d5ba89eaad818a9891","file_name": "20-newsgroups.gz","read_more": ["http://qwone.com/~jason/20Newsgroups/"],"parts": 1},"__testing_matrix-synopsis": {"description": "[THIS IS ONLY FOR TESTING] Synopsis of the movie matrix.","checksum": "1767ac93a089b43899d54944b07d9dc5","file_name": "__testing_matrix-synopsis.gz","read_more": ["http://www.imdb.com/title/tt0133093/plotsummary?ref_=ttpl_pl_syn#synopsis"],"parts": 1},"__testing_multipart-matrix-synopsis": {"description": "[THIS IS ONLY FOR TESTING] Synopsis of the movie matrix.","checksum-0": "c8b0c7d8cf562b1b632c262a173ac338","checksum-1": "5ff7fc6818e9a5d9bc1cf12c35ed8b96","checksum-2": "966db9d274d125beaac7987202076cba","file_name": "__testing_multipart-matrix-synopsis.gz","read_more": ["http://www.imdb.com/title/tt0133093/plotsummary?ref_=ttpl_pl_syn#synopsis"],"parts": 3}},"models": {"fasttext-wiki-news-subwords-300": {"num_records": 999999,"file_size": 1005007116,"base_dataset": "Wikipedia 2017, UMBC webbase corpus and statmt.org news dataset (16B tokens)","reader_code": 
"https://github.com/RaRe-Technologies/gensim-data/releases/download/fasttext-wiki-news-subwords-300/__init__.py","license": "https://creativecommons.org/licenses/by-sa/3.0/","parameters": {"dimension": 300},"description": "1 million word vectors trained on Wikipedia 2017, UMBC webbase corpus and statmt.org news dataset (16B tokens).","read_more": ["https://fasttext.cc/docs/en/english-vectors.html","https://arxiv.org/abs/1712.09405","https://arxiv.org/abs/1607.01759"],"checksum": "de2bb3a20c46ce65c9c131e1ad9a77af","file_name": "fasttext-wiki-news-subwords-300.gz","parts": 1},"conceptnet-numberbatch-17-06-300": {"num_records": 1917247,"file_size": 1225497562,"base_dataset": "ConceptNet, word2vec, GloVe, and OpenSubtitles 2016","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/conceptnet-numberbatch-17-06-300/__init__.py","license": "https://github.com/commonsense/conceptnet-numberbatch/blob/master/LICENSE.txt","parameters": {"dimension": 300},"description": "ConceptNet Numberbatch consists of state-of-the-art semantic vectors (also known as word embeddings) that can be used directly as a representation of word meanings or as a starting point for further machine learning. ConceptNet Numberbatch is part of the ConceptNet open data project. ConceptNet provides lots of ways to compute with word meanings, one of which is word embeddings. ConceptNet Numberbatch is a snapshot of just the word embeddings. It is built using an ensemble that combines data from ConceptNet, word2vec, GloVe, and OpenSubtitles 2016, using a variation on retrofitting.","read_more": ["http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14972","https://github.com/commonsense/conceptnet-numberbatch","http://conceptnet.io/"],"checksum": "fd642d457adcd0ea94da0cd21b150847","file_name": "conceptnet-numberbatch-17-06-300.gz","parts": 1},"word2vec-ruscorpora-300": {"num_records": 184973,"file_size": 208427381,"base_dataset": "Russian National Corpus (about 250M words)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/word2vec-ruscorpora-300/__init__.py","license": "https://creativecommons.org/licenses/by/4.0/deed.en","parameters": {"dimension": 300,"window_size": 10},"description": "Word2vec Continuous Skipgram vectors trained on full Russian National Corpus (about 250M words). The model contains 185K words.","preprocessing": "The corpus was lemmatized and tagged with Universal PoS","read_more": ["https://www.academia.edu/24306935/WebVectors_a_Toolkit_for_Building_Web_Interfaces_for_Vector_Semantic_Models","http://rusvectores.org/en/","https://github.com/RaRe-Technologies/gensim-data/issues/3"],"checksum": "9bdebdc8ae6d17d20839dd9b5af10bc4","file_name": "word2vec-ruscorpora-300.gz","parts": 1},"word2vec-google-news-300": {"num_records": 3000000,"file_size": 1743563840,"base_dataset": "Google News (about 100 billion words)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/word2vec-google-news-300/__init__.py","license": "not found","parameters": {"dimension": 300},"description": "Pre-trained vectors trained on a part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. 
The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality' (https://code.google.com/archive/p/word2vec/).","read_more": ["https://code.google.com/archive/p/word2vec/","https://arxiv.org/abs/1301.3781","https://arxiv.org/abs/1310.4546","https://www.microsoft.com/en-us/research/publication/linguistic-regularities-in-continuous-space-word-representations/?from=http%3A%2F%2Fresearch.microsoft.com%2Fpubs%2F189726%2Frvecs.pdf"],"checksum": "a5e5354d40acb95f9ec66d5977d140ef","file_name": "word2vec-google-news-300.gz","parts": 1},"glove-wiki-gigaword-50": {"num_records": 400000,"file_size": 69182535,"base_dataset": "Wikipedia 2014 + Gigaword 5 (6B tokens, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-wiki-gigaword-50/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 50},"description": "Pre-trained vectors based on Wikipedia 2014 + Gigaword, 5.6B tokens, 400K vocab, uncased (https://nlp.stanford.edu/projects/glove/).","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-wiki-gigaword-50.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "c289bc5d7f2f02c6dc9f2f9b67641813","file_name": "glove-wiki-gigaword-50.gz","parts": 1},"glove-wiki-gigaword-100": {"num_records": 400000,"file_size": 134300434,"base_dataset": "Wikipedia 2014 + Gigaword 5 (6B tokens, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-wiki-gigaword-100/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 100},"description": "Pre-trained vectors based on Wikipedia 2014 + Gigaword 5.6B tokens, 400K vocab, uncased (https://nlp.stanford.edu/projects/glove/).","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-wiki-gigaword-100.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "40ec481866001177b8cd4cb0df92924f","file_name": "glove-wiki-gigaword-100.gz","parts": 1},"glove-wiki-gigaword-200": {"num_records": 400000,"file_size": 264336934,"base_dataset": "Wikipedia 2014 + Gigaword 5 (6B tokens, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-wiki-gigaword-200/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 200},"description": "Pre-trained vectors based on Wikipedia 2014 + Gigaword, 5.6B tokens, 400K vocab, uncased (https://nlp.stanford.edu/projects/glove/).","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-wiki-gigaword-200.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "59652db361b7a87ee73834a6c391dfc1","file_name": "glove-wiki-gigaword-200.gz","parts": 1},"glove-wiki-gigaword-300": {"num_records": 400000,"file_size": 394362229,"base_dataset": "Wikipedia 2014 + Gigaword 5 (6B tokens, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-wiki-gigaword-300/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 300},"description": "Pre-trained vectors based on Wikipedia 2014 + Gigaword, 5.6B tokens, 400K 
vocab, uncased (https://nlp.stanford.edu/projects/glove/).","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-wiki-gigaword-300.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "29e9329ac2241937d55b852e8284e89b","file_name": "glove-wiki-gigaword-300.gz","parts": 1},"glove-twitter-25": {"num_records": 1193514,"file_size": 109885004,"base_dataset": "Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-twitter-25/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 25},"description": "Pre-trained vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased (https://nlp.stanford.edu/projects/glove/).","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-twitter-25.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "50db0211d7e7a2dcd362c6b774762793","file_name": "glove-twitter-25.gz","parts": 1},"glove-twitter-50": {"num_records": 1193514,"file_size": 209216938,"base_dataset": "Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-twitter-50/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 50},"description": "Pre-trained vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased (https://nlp.stanford.edu/projects/glove/)","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-twitter-50.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "c168f18641f8c8a00fe30984c4799b2b","file_name": "glove-twitter-50.gz","parts": 1},"glove-twitter-100": {"num_records": 1193514,"file_size": 405932991,"base_dataset": "Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-twitter-100/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 100},"description": "Pre-trained vectors based on  2B tweets, 27B tokens, 1.2M vocab, uncased (https://nlp.stanford.edu/projects/glove/)","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-twitter-100.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "b04f7bed38756d64cf55b58ce7e97b15","file_name": "glove-twitter-100.gz","parts": 1},"glove-twitter-200": {"num_records": 1193514,"file_size": 795373100,"base_dataset": "Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased)","reader_code": "https://github.com/RaRe-Technologies/gensim-data/releases/download/glove-twitter-200/__init__.py","license": "http://opendatacommons.org/licenses/pddl/","parameters": {"dimension": 200},"description": "Pre-trained vectors based on 2B tweets, 27B tokens, 1.2M vocab, uncased (https://nlp.stanford.edu/projects/glove/).","preprocessing": "Converted to w2v format with `python -m gensim.scripts.glove2word2vec -i <fname> -o glove-twitter-200.txt`.","read_more": ["https://nlp.stanford.edu/projects/glove/","https://nlp.stanford.edu/pubs/glove.pdf"],"checksum": "e52e8392d1860b95d5308a525817d8f9","file_name": 
"glove-twitter-200.gz","parts": 1},"__testing_word2vec-matrix-synopsis": {"description": "[THIS IS ONLY FOR TESTING] Word vecrors of the movie matrix.","parameters": {"dimensions": 50},"preprocessing": "Converted to w2v using a preprocessed corpus. Converted to w2v format with `python3.5 -m gensim.models.word2vec -train <input_filename> -iter 50 -output <output_filename>`.","read_more": [],"checksum": "534dcb8b56a360977a269b7bfc62d124","file_name": "__testing_word2vec-matrix-synopsis.gz","parts": 1}}
}

2. Save the file from Notepad++ as information.json in the directory named in the error message (e.g. C:\Users\admin\gensim-data\). In Notepad++'s Save As dialog you can choose the file type; make sure the extension ends up as .json, not .txt. You can sanity-check the saved file with the short script below.
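
A minimal sketch of such a check (a stray character from the copy-paste, or a hidden .txt extension, are the usual culprits). It assumes the cache directory is ~/gensim-data, matching the path in the error message; adjust if yours differs:

import json
import os

# Path of the cache file gensim's downloader falls back to.
path = os.path.join(os.path.expanduser("~"), "gensim-data", "information.json")
with open(path, encoding="utf-8") as f:
    info = json.load(f)  # raises json.JSONDecodeError if the paste got corrupted
print(len(info["corpora"]), "corpora and", len(info["models"]), "models described")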

3. Now check that it worked:

import gensim.downloader

# With information.json in place, info() has a local cache to fall back
# on when it cannot fetch the metadata list remotely.
print(list(gensim.downloader.info()['models'].keys()))
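
If the file is in place, this should print the model names from the JSON above, beginning with ['fasttext-wiki-news-subwords-300', 'conceptnet-numberbatch-17-06-300', 'word2vec-ruscorpora-300', ...].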

No more errors, haha — success!!!
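
To go one step further, you can let the downloader actually fetch one of the models listed above. This sketch assumes your connection to github.com now works; glove-twitter-25 is the smallest word-vector set in the list (file_size of about 105 MB), so it makes a quick smoke test:

import gensim.downloader

# Downloads glove-twitter-25 into ~/gensim-data on first use, then loads it as KeyedVectors.
vectors = gensim.downloader.load("glove-twitter-25")
print(vectors.most_similar("tweet", topn=3))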
