Building a Multi-Functionality Voice Assistant in 10 Minutes

Nowadays people don’t have time to manually search the internet for information or the answers to their questions; rather, they expect someone to do it for them, just like a personal assistant who listens to the commands provided and acts accordingly. Thanks to Artificial Intelligence, this personal assistant can now be available to everyone in the form of a voice assistant, which is much faster and more reliable than a human. An assistant that is even capable of accomplishing difficult tasks like placing an order online, playing music, or turning on the lights, just by listening to the user’s command.

In this article, I am going to show you how to build a voice assistant that responds to basic user queries. After reading this article, you shall get a basic idea about what web scraping is and how it can be used to build a voice assistant.

Note: You will need to have a basic understanding of the Python language to follow this article.

Table of Contents

  1. Voice Assistant
  2. Web Scraping
  3. Implementation

Voice Assistant

Voice Assistant is a software agent that can perform tasks or services for an individual based on commands or questions. In general, voice assistants react to voice commands and give the user relevant information about his/her queries.

The assistant can understand and react to specific commands given by the user, like playing a song on YouTube or checking the weather. It will search and/or scrape the web to find the response to the command in order to satisfy the user.

Presently, voice assistants are already able to process product orders, answer questions, and perform actions like playing music or starting a simple phone call with a friend.

The implemented voice assistant can perform the following tasks:

  1. Provide weather details
  2. Provide corona updates
  3. Provide the latest news
  4. Search the meaning of a word
  5. Take notes
  6. Play YouTube videos
  7. Show location on Google Maps
  8. Open websites on Google Chrome

When building a voice assistant, there are two important libraries that you should consider. Python’s SpeechRecognition package helps the voice assistant understand the user. It can be implemented as follows:

import speech_recognition as sr

# initialises the recognizer
r1 = sr.Recognizer()

# uses the microphone to take the input
with sr.Microphone() as source:
    print('Listening..')
    # listens to the user
    audio = r1.listen(source)
    # recognises the audio and converts it to text
    audio = r1.recognize_google(audio)

Python’s pyttsx3 package helps the voice assistant respond to the user by converting the text to audio. It can be implemented as follows:

import pyttsx3

# initialises pyttsx3 (the 'sapi5' driver is Windows-specific)
engine = pyttsx3.init('sapi5')

# converts the text to audio
engine.say('Hello World')
engine.runAndWait()

Web Scraping

Web scraping refers to the extraction of data from websites. The data on websites is unstructured. Web scraping helps collect this unstructured data and store it in a structured form.

Some applications of web scraping include:

Scraping social media such as Twitter to collect tweets and comments for performing sentiment analysis.

Scraping e-commerce websites such as Amazon to extract product information for data analysis and predicting market trends.

Scraping email addresses to collect email IDs and then send bulk emails for marketing and advertising purposes.

Scraping Google Images to create datasets for training a Machine Learning or Deep Learning model.

Although it can be done manually, Python’s library Beautiful Soup makes it easier and faster to scrape the data.

To extract data using web scraping with python, you need to follow these basic steps:

  1. Find the URL of the website that you want to scrape
  2. Extract the entire code of the website
  3. Inspect the website and find the data you want to extract
  4. Filter the code using HTML tags to get the desired data
  5. Store the data in the required format
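As a minimal sketch of the five steps above, here is Beautiful Soup run against a hard-coded HTML snippet instead of a live request; the URL and markup are purely illustrative:

```python
from bs4 import BeautifulSoup

# Step 1: the URL you want to scrape (placeholder only).
url = 'https://example.com/weather'

# Step 2: normally requests.get(url).text would fetch the page;
# here we use a static snippet so the example runs offline.
html = '<html><body><div class="temp">26°</div></body></html>'

# Steps 3-4: parse the code and filter it using HTML tags/attributes.
soup = BeautifulSoup(html, 'html.parser')
temperature = soup.find('div', attrs={'class': 'temp'}).text

# Step 5: store the data in the required format.
data = {'temperature': temperature}
print(data)  # {'temperature': '26°'}
```

The same pattern, pointed at a real page via requests, is what the functions below do.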

Implementation

Let us start by importing the following libraries in your Python notebook, as shown below:

import requests
from bs4 import BeautifulSoup
import re
import speech_recognition as sr
from datetime import date
import webbrowser
import pyttsx3

Now let us create our main function, which consists of a series of if-else statements that tell the assistant how to respond under certain conditions.

engine = pyttsx3.init('sapi5')
r1 = sr.Recognizer()

with sr.Microphone() as source:
    print('Listening..')
    engine.say('Listening')
    engine.runAndWait()
    audio = r1.listen(source)
    audio = r1.recognize_google(audio)
    if 'weather' in audio:
        print('..')
        words = audio.split(' ')
        print(words[-1])
        scrape_weather(words[-1])
    elif 'covid' in audio:
        print('..')
        words = audio.split(' ')
        corona_updates(words[-1])
    elif 'meaning' in audio:
        print('..')
        words = audio.split(' ')
        print(words[-1])
        scrape_meaning(words[-1])
    elif 'take notes' in audio:
        print('..')
        take_notes()
        print('Noted!!')
    elif 'show notes' in audio:
        print('..')
        show_notes()
        print('Done')
    elif 'news' in audio:
        print('..')
        scrape_news()
    elif 'play' in audio:
        print('..')
        words = audio.split(' ')
        print(words[-1])
        play_youtube(audio)
    elif 'open' in audio:
        print('..')
        words = audio.split('open')
        print(words[-1])
        link = str(words[-1])
        link = re.sub(' ', '', link)
        engine.say('Opening')
        engine.say(link)
        engine.runAndWait()
        link = f'https://{link}.com'
        print(link)
        webbrowser.open(link)
    elif 'where is' in audio:
        print('..')
        words = audio.split('where is')
        print(words[-1])
        link = str(words[-1])
        link = re.sub(' ', '', link)
        engine.say('Locating')
        engine.say(link)
        engine.runAndWait()
        link = f'https://www.google.co.in/maps/place/{link}'
        print(link)
        webbrowser.open(link)
    else:
        print(audio)
        print('Sorry, I do not understand that!')
        engine.say('Sorry, I do not understand that!')
        engine.runAndWait()

Case 1: If the user wants to know about the Weather, he/she can ask the assistant “Hey! What is the weather today in Mumbai?”

Since the word “weather” is present in the audio, the function scrape_weather(words[-1]) will be called with “Mumbai” as the parameter.

Let us take a look at this function.

def scrape_weather(city):
    url = 'https://www.google.com/search?q=accuweather+' + city
    page = requests.get(url)
    soup = BeautifulSoup(page.text, 'lxml')
    links = [a['href'] for a in soup.findAll('a')]
    link = str(links[16])
    link = link.split('=')
    link = str(link[1]).split('&')
    link = link[0]
    page = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(page.content, 'lxml')

    time = soup.find('p', attrs={'class': 'cur-con-weather-card__subtitle'})
    time = re.sub('\n', '', time.text)
    time = re.sub('\t', '', time)
    time = 'Time: ' + time

    temperature = soup.find('div', attrs={'class': 'temp'})
    temperature = 'Temperature: ' + temperature.text

    realfeel = soup.find('div', attrs={'class': 'real-feel'})
    realfeel = re.sub('\n', '', realfeel.text)
    realfeel = re.sub('\t', '', realfeel)
    realfeel = 'RealFeel: ' + realfeel[-3:] + 'C'

    climate = soup.find('span', attrs={'class': 'phrase'})
    climate = 'Climate: ' + climate.text

    info = 'For more information visit: ' + link

    print('The weather for today is: ')
    print(time)
    print(temperature)
    print(realfeel)
    print(climate)
    print(info)
    engine.say('The weather for today is: ')
    engine.say(time)
    engine.say(temperature)
    engine.say(realfeel)
    engine.say(climate)
    engine.say('For more information visit accuweather.com')
    engine.runAndWait()

We shall use the website “accuweather.com” to scrape all the weather-related information. The function requests.get(url) sends a GET request to the URL, and the entire HTML code of the response is parsed using BeautifulSoup(page.text, 'lxml').

Once the code is extracted, we shall inspect it to find the data of interest. For example, the numeric value of the temperature is present in the following format:

<div class = "temp">26°</div>

can be extracted using

soup.find('div', attrs = {'class':'temp'})

Similarly, we extract the time, the real feel, and the climate, and using engine.say(), we make the assistant respond to the user.

Case 2: If the user wants current COVID-19 updates, he/she can ask the assistant “Hey! Can you give me the COVID updates of India?” or “Hey! Can you give me the COVID updates of the World?”

Since the word “covid” is present in the audio, the function corona_updates(words[-1]) will be called with “India” or “World” as the parameter.

Let us take a look at this function.

def corona_updates(audio):
    url = 'https://www.worldometers.info/coronavirus/'
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'lxml')
    totalcases = soup.findAll('div', attrs={'class': 'maincounter-number'})
    total_cases = []
    for total in totalcases:
        total_cases.append(total.find('span').text)

    world_total = 'Total Coronavirus Cases: ' + total_cases[0]
    world_deaths = 'Total Deaths: ' + total_cases[1]
    world_recovered = 'Total Recovered: ' + total_cases[2]
    info = 'For more information visit: ' + 'https://www.worldometers.info/coronavirus/#countries'

    if 'world' in audio:
        print('World Updates: ')
        print(world_total)
        print(world_deaths)
        print(world_recovered)
        print(info)
    else:
        country = audio
        url = 'https://www.worldometers.info/coronavirus/country/' + country.lower() + '/'
        page = requests.get(url)
        soup = BeautifulSoup(page.content, 'lxml')
        totalcases = soup.findAll('div', attrs={'class': 'maincounter-number'})
        total_cases = []
        for total in totalcases:
            total_cases.append(total.find('span').text)

        total = 'Total Coronavirus Cases: ' + total_cases[0]
        deaths = 'Total Deaths: ' + total_cases[1]
        recovered = 'Total Recovered: ' + total_cases[2]
        info = 'For more information visit: ' + url
        updates = country + ' Updates: '

        print(updates)
        print(total)
        print(deaths)
        print(recovered)
        print(info)
        engine.say(updates)
        engine.say(total)
        engine.say(deaths)
        engine.say(recovered)
        engine.say('For more information visit: worldometers.info')
        engine.runAndWait()

We shall use the website “worldometers.info” to scrape all the corona-related information. The function requests.get(url) sends a GET request to the URL, and the entire HTML code of the response is parsed using BeautifulSoup(page.content, 'lxml').

Once the code is extracted, we shall inspect the code to find the numerical values of the total corona cases, total recovered and total deaths.

These values are present inside a span within a div having the class “maincounter-number”, as shown below.

<div id="maincounter-wrap" style="margin-top:15px"><h1>Coronavirus Cases:</h1><div class="maincounter-number"><span style="color:#aaa">25,091,068 </span></div></div>

These can be extracted as follows.

totalcases = soup.findAll('div', attrs={'class': 'maincounter-number'})
total_cases = []
for total in totalcases:
    total_cases.append(total.find('span').text)

We first find all the div elements having class “maincounter-number”. Then we iterate through each div to obtain the span containing the numerical value.

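This extraction loop can be verified offline by running it against the HTML fragment shown above, hard-coded here so no network request is needed:

```python
from bs4 import BeautifulSoup

# The HTML fragment shown above, as a static string.
html = ('<div id="maincounter-wrap" style="margin-top:15px">'
        '<h1>Coronavirus Cases:</h1>'
        '<div class="maincounter-number">'
        '<span style="color:#aaa">25,091,068 </span></div></div>')

soup = BeautifulSoup(html, 'html.parser')
total_cases = []
for total in soup.findAll('div', attrs={'class': 'maincounter-number'}):
    total_cases.append(total.find('span').text)

print(total_cases)  # ['25,091,068 ']
```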
Case 3: If the user wants to know about the News, he/she can ask the assistant “Hey! Can you give me the news updates?”

Since the word “news” is present in the audio, the function scrape_news() will be called.

def scrape_news():
    url = 'https://news.google.com/topstories?hl=en-IN&gl=IN&ceid=IN:en'
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    news = soup.findAll('h3', attrs={'class': 'ipQwMb ekueJc RD0gLb'})
    for n in news:
        print(n.text)
        print('\n')
        engine.say(n.text)
    print('For more information visit: ', url)
    engine.say('For more information visit google news')
    engine.runAndWait()

We shall use Google News to scrape the news headlines. The function requests.get(url) sends a GET request to the URL, and the entire HTML code of the response is parsed using BeautifulSoup(page.content, 'html.parser').

Once the code is extracted, we shall inspect the code to find the headlines of the latest news.

These headlines are present inside the anchor tag of each h3 element having the class “ipQwMb ekueJc RD0gLb”, as shown below.

<h3 class="ipQwMb ekueJc RD0gLb"><a href="./articles/CAIiEA0DEuHOMc9oauy44TAAZmAqFggEKg4IACoGCAoww7k_MMevCDDW4AE?hl=en-IN&amp;gl=IN&amp;ceid=IN%3Aen" class="DY5T1d">Rhea Chakraborty arrest: Kubbra Sait reminds ‘still not a murderer’, Rhea Kapoor says ‘we settled on...</a></h3>

We first find all the h3 elements having the class “ipQwMb ekueJc RD0gLb”. Then we iterate through each element to obtain the text (the news headline).
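The same loop can be checked offline against a simplified h3 element; the headline text here is made up for illustration:

```python
from bs4 import BeautifulSoup

# A simplified version of the h3 markup shown above, with a dummy headline.
html = ('<h3 class="ipQwMb ekueJc RD0gLb">'
        '<a href="./articles/xyz" class="DY5T1d">Sample headline</a></h3>')

soup = BeautifulSoup(html, 'html.parser')
headlines = [h3.text for h3 in
             soup.findAll('h3', attrs={'class': 'ipQwMb ekueJc RD0gLb'})]
print(headlines)  # ['Sample headline']
```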

Case 4: If the user wants to know the Meaning of any word, he/she can ask the assistant “Hey! What is the meaning of scraping?”

Since the word “meaning” is present in the audio, the function scrape_meaning(words[-1]) will be called with “scraping” as the parameter.

Let us take a look at this function.

def scrape_meaning(audio):
    word = audio
    url = 'https://www.dictionary.com/browse/' + word
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    meanings = soup.findAll('div', attrs={'class': 'css-1o58fj8 e1hk9ate4'})
    meaning = [x.text for x in meanings]
    first_meaning = meaning[0]
    for x in meaning:
        print(x)
        print('\n')
    engine.say(first_meaning)
    engine.runAndWait()

We shall use the website “Dictionary.com” to scrape the meanings. The function requests.get(url) sends a GET request to the URL, and the entire HTML code of the response is parsed using BeautifulSoup(page.content, 'html.parser').

Once the code is extracted, we shall inspect the code to find all html tags containing the meaning of the word passed as the parameter.

These values are present inside div elements having the class “css-1o58fj8 e1hk9ate4”, as shown below.

<div value="1" class="css-kg6o37 e1q3nk1v3"><span class="one-click-content css-1p89gle e1q3nk1v4" data-term="that" data-linkid="nn1ov4">the act of a person or thing that <a href="/browse/scrape" class="luna-xref" data-linkid="nn1ov4">scrapes</a>. </span></div>

We first find all the div elements having the class “css-1o58fj8 e1hk9ate4”. Then we iterate through each element to obtain the text (the meaning of the word) present inside the div.

Case 5: If the user wants the assistant to Take Notes, he/she can ask the assistant “Hey! Can you take notes for me?”

Since the phrase “take notes” is present in the audio, the function take_notes() will be called.

Let us take a look at this function.

def take_notes():
    r5 = sr.Recognizer()
    with sr.Microphone() as source:
        print('What is your "TO DO LIST" for today')
        engine.say('What is your "TO DO LIST" for today')
        engine.runAndWait()
        audio = r5.listen(source)
        audio = r5.recognize_google(audio)
        print(audio)
        today = str(date.today())
        with open('MyNotes.txt', 'a') as f:
            f.write('\n')
            f.write(today)
            f.write('\n')
            f.write(audio)
            f.write('\n')
            f.write('......')
            f.write('\n')
        engine.say('Notes Taken')
        engine.runAndWait()

We start by initialising the recogniser and asking the user for their ‘To-Do list’. We then listen to the user and recognise the audio using recognize_google. We then open a file named “MyNotes.txt” and jot down the notes given by the user along with the date.

We will then create another function named show_notes(), which will read out today’s notes/To-Do list from the file named “MyNotes.txt”.

def show_notes():
    with open('MyNotes.txt', 'r') as f:
        task = f.read()
    task = task.split('......')
    engine.say(task[-2])
    engine.runAndWait()
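Because both functions only depend on the file layout, you can exercise them without a microphone. This sketch (using its own demo file name rather than ‘MyNotes.txt’) appends one dated entry and reads the latest one back via the same ‘......’ separator:

```python
from datetime import date

# Hypothetical file name for this demo; the article itself uses 'MyNotes.txt'.
path = 'DemoNotes.txt'

# Append one entry in the same layout take_notes() writes:
# a blank line, the date, the note text, then the '......' separator.
with open(path, 'a') as f:
    f.write('\n' + str(date.today()) + '\n' + 'buy groceries' + '\n......\n')

# Read the latest entry back the way show_notes() does:
# split on the separator and take the second-to-last chunk.
with open(path) as f:
    task = f.read().split('......')
print(task[-2])  # the newest note: today's date plus 'buy groceries'
```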

Case 6: If the user wants to Play YouTube Video, he/she can ask the assistant “Hey! Can you play Hypnotic?”

Since the word “play” is present in the audio, the function play_youtube(audio) will be called with the recognised command containing “Hypnotic” passed as the parameter.

Let us take a look at this function.

def play_youtube(audio):
    url = 'https://www.google.com/search?q=youtube+' + audio
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
    }
    engine.say('Playing')
    engine.say(audio)
    engine.runAndWait()
    page = requests.get(url, headers=headers)
    soup = BeautifulSoup(page.content, 'html.parser')
    link = soup.findAll('div', attrs={'class': 'r'})
    link = link[0]
    link = link.find('a')
    link = str(link)
    link = link.split('"')
    link = link[1]
    webbrowser.open(link)

We will use Google Videos to search for the video title and open the first result, which is present in the div element having class ‘r’, to play the YouTube video.

Case 7: If the user wants to Search for Location, he/she can ask the assistant “Hey! Where is IIT Bombay?”

Since the phrase “where is” is present in the audio, the following code will be executed.

(This code is present inside the if-else block of the main function.)

    elif 'where is' in audio:
        print('..')
        words = audio.split('where is')
        print(words[-1])
        link = str(words[-1])
        link = re.sub(' ', '', link)
        engine.say('Locating')
        engine.say(link)
        engine.runAndWait()
        link = f'https://www.google.co.in/maps/place/{link}'
        print(link)
        webbrowser.open(link)

We will join the location provided by the user with the Google Maps link and then use webbrowser.open(link) to open the link locating ‘IIT Bombay’.

Case 8: If the user wants to Open a Website, he/she can ask the assistant “Hey! Can you open Towards Data Science?”

Since the word “open” is present in the audio, the following code will be executed.

(This code is present inside the if-else block of the main function.)

    elif 'open' in audio:
        print('..')
        words = audio.split('open')
        print(words[-1])
        link = str(words[-1])
        link = re.sub(' ', '', link)
        engine.say('Opening')
        engine.say(link)
        engine.runAndWait()
        link = f'https://{link}.com'
        print(link)
        webbrowser.open(link)

We will join the website name provided by the user with the standard URL format, i.e. https://{website name}.com, to open the website.
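The URL construction in this branch can be checked in isolation; the sample command below is illustrative:

```python
import re

# A sample recognised command (made up for illustration).
audio = 'can you open towards data science'

# Same steps as the elif branch above: split on 'open',
# strip the spaces, and build the URL.
words = audio.split('open')
link = re.sub(' ', '', str(words[-1]))
url = f'https://{link}.com'
print(url)  # https://towardsdatascience.com
```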

So that is how we create a simple voice assistant. You can modify the code to add more features like performing basic mathematical calculations, telling jokes, creating reminders, changing desktop wallpapers, etc.

You can find the whole code in my GitHub repository linked below.

Translated from: https://medium.com/@sakshibutala12/building-a-multi-functionality-voice-assistant-in-10-minutes-3e5d87e164f0
