Web scraping with R

by Hiren Patel

An introduction to web scraping using R

With the e-commerce boom, businesses have gone online. Customers, too, look for products online. Unlike the offline marketplace, a customer can compare the price of a product available at different places in real time.

Therefore, competitive pricing is something that has become the most crucial part of a business strategy.

In order to keep prices of your products competitive and attractive, you need to monitor and keep track of prices set by your competitors. If you know what your competitors’ pricing strategy is, you can accordingly align your pricing strategy to get an edge over them.

Hence, price monitoring has become a vital part of the process of running an e-commerce business.

You might wonder how to get hold of the data to compare prices.

The top 3 ways of getting the data you need for price comparison

1. Feeds from Merchants

As you might be aware, there are several price comparison sites available on the internet. These sites reach a sort of understanding with businesses, wherein they get the data directly from them and use it for price comparison.

These businesses put into place an API, or utilize FTP to provide the data. Generally, a referral commission is what makes a price comparison site financially viable.

2. Product feeds from third-party APIs

On the other hand, there are services which offer e-commerce data through an API. When such a service is used, the third party pays for the volume of data.

3. Web Scraping

Web scraping is one of the most robust and reliable ways of getting web data from the internet. It is increasingly used in price intelligence because it is an efficient way of getting the product data from e-commerce sites.

You may not have access to the first and second option. Hence, web scraping can come to your rescue. You can use web scraping to leverage the power of data to arrive at competitive pricing for your business.

Web scraping can be used to get current prices in the current market scenario, and in e-commerce more generally. We will use web scraping to get data from an e-commerce site. In this blog, you will learn how to scrape the names and prices of products on Amazon, across all categories, for a particular brand.

Extracting data from Amazon periodically can help you keep track of the market trends of pricing and enable you to set your prices accordingly.

Table of contents

  1. Web scraping for price comparison

  2. Web scraping in R

  3. Implementation

  4. End note

1. Web scraping for price comparison

As the market wisdom says, price is everything. The customers make their purchase decisions based on price. They base their understanding of the quality of a product on price. In short, price is what drives the customers and, hence, the market.

Therefore, price comparison sites are in great demand. Customers can easily navigate the whole market by looking at the prices of the same product across the brands. These price comparison websites extract the price of the same product from different sites.

Along with price, price comparison websites also scrape data such as the product description, technical specifications, and features. They present the whole gamut of information on a single page, side by side for comparison.

This answers the question the prospective buyer has asked in their search. Now the prospective buyer can compare the products and their prices, along with information such as features, payment, and shipping options, so that they can identify the best possible deal available.

Pricing optimization has a real impact on the business: such techniques can enhance profit margins by 10%.

E-commerce is all about competitive pricing, and it has spread to other business domains as well. Take the case of travel. Now even travel-related websites scrape the price from airline websites in real time to provide the price comparison of different airlines.

The main challenge here is to keep the data up to date, since prices keep changing on the source sites. Price comparison sites typically refresh prices either with cron jobs or at view time; which approach is used depends on the site owner's configuration.

This is where this blog can help you — you will be able to work out a scraping script that you can customize to suit your needs. You will be able to extract product feeds, images, prices, and all other relevant details regarding a product from a number of different websites. With this, you can build a powerful database for a price comparison site.

2. Web scraping in R

Price comparison becomes cumbersome because getting web data is not that easy — there are technologies like HTML, XML, and JSON to distribute the content.

So, in order to get the data you need, you must effectively navigate through these different technologies. R can help you access data stored in these technologies. However, it requires a bit of in-depth understanding of R before you get started.

What is R?

Web scraping is an advanced task that not many people perform. Web scraping with R is, certainly, technical and advanced programming. An adequate understanding of R is essential for web scraping in this way.

To start with, R is a language for statistical computing and graphics. Statisticians and data miners use R a lot due to its evolving statistical software, and its focus on data analysis.

One reason R is such a favorite among this set of people is the quality of plots that can be produced, including mathematical symbols and formulae wherever required.

R is wonderful because it offers a vast variety of functions and packages that can handle data mining tasks.

rvest, RCrawler, etc. are R packages used for data collection.

In this segment, we will see what kinds of tools are required to work with R to carry out web scraping. We will see it through the use case of the Amazon website, from which we will try to get product data and store it in JSON form.

Requirements

In this use case, knowledge of R is essential, and I am assuming that you have a basic understanding of R. You should be familiar with at least one R interface, such as RStudio. The base R installation interface is fine.

If you are not aware of R and the other associated interfaces, you should go through this tutorial.

Now let’s understand how the packages we’re going to use will be installed.

Packages:

1. rvest

Hadley Wickham authored the rvest package for web scraping in R. rvest is useful in extracting the information you need from web pages.

Along with this, you also need to install the selectr and xml2 packages.

Installation steps:

install.packages('selectr')
install.packages('xml2')
install.packages('rvest')

rvest contains the basic web scraping functions, which are quite effective. Using the following functions, we will try to extract the data from web sites.

  • read_html(url): scrapes the HTML content from a given URL

  • html_nodes(): identifies HTML wrappers

  • html_nodes(".class"): calls node based on CSS class

  • html_nodes("#id"): calls node based on <div> id

  • html_nodes(xpath="xpath"): calls node based on XPath (we'll cover this later)

  • html_attrs(): identifies attributes (useful for debugging)

  • html_table(): turns HTML tables into data frames

  • html_text(): strips the HTML tags and extracts only the text
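
To see these functions in action without hitting a live site, here is a minimal sketch that parses an inline HTML string (the snippet and its id/class names are made up for illustration; the rvest and xml2 packages must be installed):

```r
library(rvest)

# parse a small, made-up HTML fragment instead of a live page
page <- read_html('<div>
                     <h1 id="title">OnePlus 6</h1>
                     <p class="price">Rs. 34,999</p>
                   </div>')

html_text(html_nodes(page, "#title"))        # select by id
html_text(html_nodes(page, ".price"))        # select by CSS class
html_attrs(html_nodes(page, "h1"))           # inspect a node's attributes
html_text(html_nodes(page, xpath = "//p"))   # the same price node, via XPath
```

The same calls work unchanged when `page` comes from a real URL.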

2. stringr

stringr comes into play when you think of tasks related to data cleaning and preparation.

There are four essential sets of functions in stringr:

  • stringr functions are useful because they enable you to work with the individual characters within the strings in character vectors
  • there are whitespace tools which can be used to add, remove, and manipulate whitespace
  • there are locale-sensitive operations whose behavior differs from locale to locale
  • there are pattern-matching functions. These functions recognize four parts of a pattern description. Regular expressions are the standard, but there are other tools as well
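
A small sketch of each of the four sets in action (the sample string is made up for illustration; the stringr package must be installed):

```r
library(stringr)

s <- "  OnePlus 6 (Mirror Black, 64GB) \n"

str_sub(s, 3, 9)                 # individual characters: extracts "OnePlus"
str_trim(s)                      # whitespace tools: strips spaces and the newline
str_to_upper(s, locale = "en")   # a locale-sensitive operation
str_detect(s, "\\d+GB")          # pattern matching with a regular expression: TRUE
```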

Installation

install.packages('stringr')

3. jsonlite

What makes the jsonlite package useful is that it is a JSON parser/generator which is optimized for the web.

It is vital because it enables an effective mapping between JSON data and the crucial R data types. Using this, we are able to convert between R objects and JSON without loss of type or information, and without the need for any manual data wrangling.

This works really well for interacting with web APIs, or if you want to create ways through which data can travel in and out of R using JSON.
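
As a quick sketch of that lossless mapping (the toy data frame is made up for illustration; the jsonlite package must be installed):

```r
library(jsonlite)

df <- data.frame(Title = "OnePlus 6", Price = 34999)

json <- toJSON(df)
cat(json)               # a data frame becomes a JSON array of row objects

df2 <- fromJSON(json)   # and parses back into a data frame
df2$Price == df$Price   # TRUE: the value survives the round trip
```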

Installation

install.packages('jsonlite')

Before we jump-start into it, let’s see how it works:

It should be clear at the outset that each website is different, because the coding that goes into a website is different.

Web scraping is the technique of identifying and using these patterns of coding to extract the data you need. Your browser renders the website from HTML. Web scraping is simply about parsing the HTML your browser makes available to you.

Web scraping generally follows a set process:

  • Access a page from R
  • Instruct R where to "look" on the page
  • Convert data into a usable format within R using the rvest package

Now let’s go to implementation to understand it better.

3. Implementation

Let’s implement it and see how it works. We will scrape the Amazon website for the price comparison of a product called “One Plus 6”, a mobile phone.

You can see it here.

Step 1: Loading the packages we need

We need to be in the console, at the R command prompt, to start the process. Once we are there, we need to load the packages required, as shown below:

#loading the packages
> library(xml2)
> library(rvest)
> library(stringr)

Step 2: Reading the HTML content from Amazon

#Specifying the url for the desired website to be scraped
url <- 'https://www.amazon.in/OnePlus-Mirror-Black-64GB-Memory/dp/B0756Z43QS?tag=googinhydr18418-21&tag=googinkenshoo-21&ascsubtag=aee9a916-6acd-4409-92ca-3bdbeb549f80'

#Reading the html content from Amazon
webpage <- read_html(url)

In this code, we read the HTML content from the given URL, and assign that HTML into the webpage variable.

Step 3: Scrape product details from Amazon

Now, as the next step, we will extract the following information from the website:

Title: The title of the product.
Price: The price of the product.
Description: The description of the product.
Rating: The user rating of the product.
Size: The size of the product.
Color: The color of the product.

This screenshot shows how these fields are arranged.

Next, we will make use of HTML tags, like the title of the product and price, for extracting data using Inspect Element.

In order to find out the class of the HTML tag, use the following steps:

=> go to chrome browser => go to this URL => right click => inspect element

NOTE: If you are not using the Chrome browser, check out this article.

Based on CSS selectors such as class and id, we will scrape the data from the HTML. To find the CSS class for the product title, we need to right-click on title and select “Inspect” or “Inspect Element”.

As you can see below, I extracted the title of the product with the help of html_nodes in which I passed the id of the title — h1#title — and webpage which had stored HTML content.

I could also get the title text using html_text and print it with the help of the head() function.

#scrape title of the product
> title_html <- html_nodes(webpage, 'h1#title')
> title <- html_text(title_html)
> head(title)

The output is shown below:

As you can see, the title comes with extra spaces and \n characters.

The next step would be to remove spaces and new line with the help of the str_replace_all() function in the stringr library.

# remove all spaces and new lines
str_replace_all(title, "[\r\n]", "")

Output:

Now we will need to extract the other related information of the product following the same process.

Price of the product:

# scrape the price of the product
> price_html <- html_nodes(webpage, 'span#priceblock_ourprice')
> price <- html_text(price_html)

# remove spaces and new lines
> price <- str_replace_all(price, "[\r\n]", "")

# print price value
> head(price)

Output:

Product description:

# scrape product description
> desc_html <- html_nodes(webpage, 'div#productDescription')
> desc <- html_text(desc_html)

# replace new lines and spaces
> desc <- str_replace_all(desc, "[\r\n\t]", "")
> desc <- str_trim(desc)
> head(desc)

Output:

Rating of the product:

# scrape product rating
> rate_html <- html_nodes(webpage, 'span#acrPopover')
> rate <- html_text(rate_html)

# remove spaces, newlines and tabs
> rate <- str_replace_all(rate, "[\r\n]", "")
> rate <- str_trim(rate)

# print rating of the product
> head(rate)

Output:

Size of the product:

# scrape size of the product
> size_html <- html_nodes(webpage, 'div#variation_size_name')
> size_html <- html_nodes(size_html, 'span.selection')
> size <- html_text(size_html)

# remove tabs from text
> size <- str_trim(size)

# print product size
> head(size)

Output:

Color of the product:

# scrape product color
> color_html <- html_nodes(webpage, 'div#variation_color_name')
> color_html <- html_nodes(color_html, 'span.selection')
> color <- html_text(color_html)

# remove tabs from text
> color <- str_trim(color)

# print product color
> head(color)

Output:

Step 4: We have successfully extracted data from all the fields, which can be used to compare the product information from another site.

Let's combine them into a data frame and inspect its structure.

#Combining all the lists to form a data frame
product_data <- data.frame(Title = title, Price = price, Description = desc, Rating = rate, Size = size, Color = color)

#Structure of the data frame
str(product_data)

Output:

In this output we can see all the scraped data in the data frames.

Step 5: Store data in JSON format:

As the data is collected, we can carry out different tasks on it, such as comparing, analyzing, and arriving at business insights. Based on this data, we can also think of training machine learning models.

The data will be stored in JSON format for further processing.

Follow the given code and get the JSON result.

# Include the 'jsonlite' library to convert to JSON form.
> library(jsonlite)

# convert the data frame into JSON format
> json_data <- toJSON(product_data)

# print output
> cat(json_data)

In the code above, I have included the jsonlite library in order to use the toJSON() function to convert the data frame object into JSON form.

At the end of the process, we have stored the data in JSON format and printed it. If we wish, the data can also be stored in a CSV file or in a database for further processing.
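
For instance, the product_data data frame built in Step 4 could be written to a CSV file like this (the file name is just an example):

```r
# write the scraped data frame to a CSV file instead of (or alongside) JSON
write.csv(product_data, "product_data.csv", row.names = FALSE)

# read it back later for further processing
product_data <- read.csv("product_data.csv", stringsAsFactors = FALSE)
```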

Output:

Following this practical example, you can also extract the same data for the product from https://www.oneplus.in/6 and compare it with Amazon to work out the fair value of the product. In the same way, you can use the data to compare it with other websites.

4. End note

As you can see, R can give you great leverage in scraping data from different websites. With this practical illustration of how R can be used, you can now explore it on your own and extract product data from Amazon or any other e-commerce website.

A word of caution for you: certain websites have anti-scraping policies. If you overdo it, you will be blocked and you will begin to see captchas instead of product details. Of course, you can also learn to work your way around the captchas using different services available. However, you do need to understand the legality of scraping data and whatever you are doing with the scraped data.
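
One simple precaution, sketched below, is to throttle your requests by pausing between pages (the URLs here are placeholders, and the delay is an assumption you should tune for the site you target):

```r
library(rvest)

# hypothetical list of product pages to scrape
urls <- c("https://www.amazon.in/dp/XXXXXXXXXX",
          "https://www.amazon.in/dp/YYYYYYYYYY")

for (u in urls) {
  webpage <- read_html(u)
  # ... extract title, price, etc. as shown above ...
  Sys.sleep(5)   # pause between requests to reduce load on the site
}
```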

Feel free to send to me your feedback and suggestions regarding this post!

Translated from: https://www.freecodecamp.org/news/an-introduction-to-web-scraping-using-r-40284110c848/
