Methods for Scraping Twitter Data (Part 1)

2024-05-13 14:08:55

Scraping Tweets Directly from Twitter's Search Page – Part 1

EDIT – Since I wrote this post, Twitter has updated how you get the next list of tweets for your result. Rather than using scroll_cursor, it uses max_position. I've written about the change in more detail here.

In fairly recent news, Twitter has started indexing its entire history of Tweets, going all the way back to 2006. Hurrah for data scientists! However, even with this news (at the time of writing), their search API is still restricted to the past seven days of Tweets. While I doubt this will be the case permanently, as a useful exercise this post presents how we can search Twitter for Tweets without necessarily using their API. Besides the indexing, there are the added advantages that Twitter is a little more liberal with rate limits and that you don't require any authentication keys.

This post is split into two parts: the first looks at what we can extract from Twitter and how we might start to go about it; the second will be a tutorial on how we can implement this in Java.

Right, to begin, let's say we want to search Twitter for all tweets related to the query "Babylon 5". You can access Twitter's advanced search without being logged in: https://twitter.com/search-advanced

If we take a look at the URL that’s constructed when we perform the search we get:

https://twitter.com/search?q=Babylon%205&src=typd

As we can see, there are two query parameters: q (our URL-encoded query) and src (assumed to be the source of the query, i.e. typed). However, by default Twitter returns top results rather than all, so if you click on All on the displayed page, the URL changes to:

https://twitter.com/search?f=realtime&q=Babylon%205&src=typd

The difference here is the f=realtime parameter, which appears to specify that we receive Tweets in realtime as opposed to a subset of top Tweets. Useful to know, but currently we're only getting the first 25 Tweets back. If we scroll down, though, we notice that more Tweets are loaded on the page via AJAX. Logging all XMLHttpRequests in whatever dev tool you choose to use, we can see that every time we reach the bottom of the page, Twitter makes an AJAX call to a URL similar to:

https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd&include_available_features=1&include_entities=1&last_note_ts=85&scroll_cursor=TWEET-553069642609344512-553159310448918528-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

On further inspection, we see that the response is JSON, which is very useful! Before we look at the response though, let's have a look at that URL and some of its parameters.

First off, it's slightly different to the default search URL: the path is /i/search/timeline as opposed to /search. Secondly, while we notice our familiar parameters q, f, and src from before, there are several additional ones. The most important new one is scroll_cursor, which is what Twitter uses to paginate the results. If you remove scroll_cursor from that URL, you end up with your first page of results again.
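To make this concrete, here is a minimal Java sketch of how such a timeline URL could be assembled. The buildTimelineUrl helper and its parameter names are my own; only the URL shape comes from the request observed above.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class TwitterSearchUrl {

    // Assemble the /i/search/timeline URL observed above. Pass null as the
    // cursor for the first page; pass the scroll_cursor from the previous
    // response to fetch the next page.
    static String buildTimelineUrl(String query, String scrollCursor) {
        try {
            // URLEncoder encodes spaces as '+'; swap to %20 to match the
            // URL the browser produces.
            String q = URLEncoder.encode(query, "UTF-8").replace("+", "%20");
            String url = "https://twitter.com/i/search/timeline?f=realtime&q=" + q
                    + "&src=typd&include_available_features=1&include_entities=1";
            if (scrollCursor != null) {
                url += "&scroll_cursor=" + scrollCursor;
            }
            return url;
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(buildTimelineUrl("Babylon 5", null));
    }
}
```

Calling buildTimelineUrl("Babylon 5", null) produces the first-page URL; feeding back each response's scroll_cursor walks through the result pages.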

Now let's take a look at the JSON response that Twitter provides:

{
  "has_more_items": boolean,
  "items_html": "...",
  "is_scrolling_request": boolean,
  "is_refresh_request": boolean,
  "scroll_cursor": "...",
  "refresh_cursor": "...",
  "focused_refresh_interval": int
}

Not all of these fields are important for this post; the ones worth noting are has_more_items, items_html, and scroll_cursor.

has_more_items – A boolean indicating whether or not there are any more results after this query.

items_html – The block of HTML containing the tweets, which Twitter appends to the bottom of its timeline. It requires parsing, but there is a good amount of information in there to be extracted, which we will look at in a minute.

scroll_cursor – A pagination value that allows us to extract the next page of results.

Remember our scroll_cursor parameter from earlier on? For each search request you make to Twitter, the value of this key in the response provides you with the next set of tweets, allowing you to call Twitter repeatedly until either has_more_items is false or the scroll_cursor you receive is the same as the one you sent.
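That stopping logic can be sketched in Java as below. The field-extraction helpers here are naive string scans, purely for illustration; real code would parse the response with a proper JSON library.

```java
public class ScrollCursor {

    // Naive extraction of a string-valued field from a JSON response.
    // A real implementation would use a JSON parser instead.
    static String stringField(String json, String name) {
        String key = "\"" + name + "\":\"";
        int start = json.indexOf(key);
        if (start < 0) return null;
        start += key.length();
        int end = json.indexOf('"', start);
        return json.substring(start, end);
    }

    static boolean hasMoreItems(String json) {
        return json.contains("\"has_more_items\":true");
    }

    // Continue paginating only while Twitter reports more items and the
    // cursor is still advancing.
    static boolean shouldContinue(String json, String previousCursor) {
        String next = stringField(json, "scroll_cursor");
        return hasMoreItems(json) && next != null && !next.equals(previousCursor);
    }

    public static void main(String[] args) {
        String json = "{\"has_more_items\":true,\"items_html\":\"...\","
                + "\"scroll_cursor\":\"TWEET-123-456\"}";
        System.out.println(stringField(json, "scroll_cursor"));
        System.out.println(shouldContinue(json, "TWEET-000-111"));
    }
}
```

In a scraping loop you would fetch a page, test shouldContinue, and if it passes, feed the new scroll_cursor into the next request.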

Now that we know how to access Twitter's own search functionality, let's turn our attention to the tweets themselves. As mentioned before, items_html in the response is where all the tweets are. However, it comes as a block of HTML, since Twitter injects that block at the bottom of the page each time the call is made. The HTML inside is a list of li elements, each element a Tweet. I won't post the HTML for one here, as even a single tweet has a lot of HTML in it, but if you want to look at it, copy the items_html value (omitting the quotes around the HTML content) and paste it into something like JSBeautifier to see the formatted results for yourself.

If we look over the HTML, aside from the tweet's text, there is actually a lot of useful information encapsulated in this data packet. The most important item is the Tweet ID itself, which, if you check, is actually in the root li element. Now, we could stop here, as with that ID you can query Twitter's official API and, if it's a public Tweet, get all kinds of information. However, that would defeat the purpose of not using the API, so let's see what we can extract from what we already have.

The table below shows various CSS selector queries that you can use to extract the information with.

Embedded Tweet Data

Selector                                                                                    Value
div.original-tweet[data-tweet-id]                                                           The ID of the Tweet
div.original-tweet[data-screen-name]                                                        The author's Twitter handle
div.original-tweet[data-name]                                                               The name of the author
div.original-tweet[data-user-id]                                                            The user ID of the author
span._timestamp[data-time]                                                                  Timestamp of the post
span._timestamp[data-time-ms]                                                               Timestamp of the post in ms
p.tweet-text                                                                                Text of the Tweet
span.ProfileTweet-action--retweet > span.ProfileTweet-actionCount[data-tweet-stat-count]    Number of Retweets
span.ProfileTweet-action--favorite > span.ProfileTweet-actionCount[data-tweet-stat-count]   Number of Favourites

That's quite a sizeable amount of information in that HTML. From looking through, we can extract details about the author, the timestamp of the tweet, its text, and the number of retweets and favourites.
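As a rough illustration of applying those selectors, the sketch below pulls a couple of the data-* attributes out of a single tweet's HTML with a regular expression. The sample markup and its values are invented for illustration, and a real implementation would use an HTML parser with CSS-selector support (such as jsoup) rather than a regex.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TweetAttributes {

    // Pull the value of a data-* attribute out of a tweet's HTML.
    // Regexes are fragile on real-world HTML; this only illustrates what
    // the selectors in the table above target.
    static String attribute(String html, String name) {
        Matcher m = Pattern.compile(name + "=\"([^\"]*)\"").matcher(html);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Simplified stand-in for one li from items_html (attribute names
        // match the markup described above; the values are made up).
        String li = "<div class=\"tweet original-tweet\" data-tweet-id=\"553069642609344512\""
                + " data-screen-name=\"example\" data-name=\"Example User\" data-user-id=\"12345\">"
                + "<p class=\"tweet-text\">Babylon 5 was great</p></div>";
        System.out.println(attribute(li, "data-tweet-id"));
        System.out.println(attribute(li, "data-screen-name"));
    }
}
```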

What have we learned here? To summarize: we know how to construct a Twitter search URL, what the response to that query looks like, and what information we can extract from that response. The second part of this tutorial (to follow shortly) will introduce some code showing how we can implement the above.
