
There’s one particular event that cheers up my Mondays. No, it’s not having classes or work — I’m talking about the update of the amazing Spotify ‘Weekly Discovery’ playlist. It’s amazing the combination of songs you’ve never heard but still love.


There are several articles online explaining the AI behind this beloved Weekly Discovery technique used by Spotify. Here’s one of my favourites:


Putting together my recent programming skills with my not so recent Spotify love and addiction, I tried to build my own recommendation system, as well as explore some fun Data Analysis along the way.


Here is my Project git folder in case you’d like to check the code. It is divided into several notebooks.


Setting up the Spotify API

The witchery starts with using the Spotify API to create an app within the Spotify developer environment. After creating a developer account and an application, I’m able to access data on any public playlist out there. I followed the tutorial below.


https://machinelearningknowledge.ai/tutorial-how-to-use-spotipy-api-to-scrape-spotify-data/

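For reference, here’s a minimal sketch of what that setup looks like with the spotipy library, assuming you already created an app and copied its credentials from the developer dashboard (CLIENT_ID and CLIENT_SECRET are placeholder names, not values from the original project):

import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Client-credentials flow: enough for reading public playlists
client_credentials_manager = SpotifyClientCredentials(client_id=CLIENT_ID,
                                                      client_secret=CLIENT_SECRET)
sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)

This sp client is the object used by the functions further down.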

My first access to the API involved getting two playlists: liked and disliked songs. (Spoiler alert: these were then used for supervised learning)


I built a master function that converts any playlist into a dataframe:


def master_function(uri):
    username = uri.split(':')[2]
    playlist_id = uri.split(':')[4]

    # Page through the playlist 100 tracks at a time (API limit per request)
    results = {'items': []}
    for n in range(0, 3000, 100):
        new = sp.user_playlist_tracks(username, playlist_id, offset=n)
        results['items'] += new['items']

    playlist_tracks_data = results
    playlist_tracks_id = []
    playlist_tracks_titles = []
    playlist_tracks_artists = []
    for track in playlist_tracks_data['items']:
        playlist_tracks_id.append(track['track']['id'])
        playlist_tracks_titles.append(track['track']['name'])
        # collect all artists of the song, then keep only the first (main) one
        artist_list = []
        for artist in track['track']['artists']:
            artist_list.append(artist['name'])
        playlist_tracks_artists.append(artist_list[0])

    # fetch the audio features of every track and stack them into a dataframe
    df = pd.DataFrame([])
    for i in range(0, len(playlist_tracks_id)):
        features = sp.audio_features(playlist_tracks_id[i])
        features_df = pd.DataFrame(features)
        df = df.append(features_df)

    df['title'] = playlist_tracks_titles
    df['main_artist'] = playlist_tracks_artists
    df = df[['id', 'title', 'main_artist',
             'danceability', 'energy', 'key', 'loudness',
             'mode', 'acousticness', 'instrumentalness',
             'liveness', 'valence', 'tempo',
             'duration_ms', 'time_signature']]
    return df

Exploratory Data Analysis

Luckily for us, Spotify provides us with a way to do that — the Audio Feature Object aka Features!!


Let’s take a look at each feature.


TEMPO. Simply put, how many beats per minute (BPM) does each song have?

Tempos are also related to different Genres: Hip Hop 85–95 BPM, Techno 120–125 BPM, House & Pop 115–130 BPM, Electro 128 BPM, Reggaeton >130 BPM, Dubstep 140 BPM


0-Disliked; 1- Liked Songs
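
For context, a distribution comparison like the one above can be plotted with a short sketch such as the following, assuming the data dataframe holds the audio features of both playlists plus the binary Like column (0 = disliked, 1 = liked):

import seaborn as sns
import matplotlib.pyplot as plt

# Overlay the tempo distributions of liked and disliked songs
sns.kdeplot(data[data['Like'] == 1]['tempo'], label='Liked')
sns.kdeplot(data[data['Like'] == 0]['tempo'], label='Disliked')
plt.xlabel('Tempo (BPM)')
plt.legend()
plt.show()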

ENERGY. A measure of intensity and activity.

  • This is the first of Spotify’s more subjective metrics.
  • Energy represents a perceptual measure of intensity and activity.
  • Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale.

What artists are driving my Energy taste down?

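One way to answer that — a rough sketch, assuming liked_df is the liked-songs dataframe produced by the master function (the name is illustrative, not from the original notebook) — is to rank artists by their average energy:

# Average energy per artist, lowest first
low_energy_artists = (liked_df.groupby('main_artist')['energy']
                              .mean()
                              .sort_values()
                              .head(10))
print(low_energy_artists)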

Thanks, Nick fka Chet Faker.

DANCEABILITY.

Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity.


It looks like my ‘disliked’ songs follow a distribution skewed towards higher levels of danceability, while my liked songs follow a more normal-looking distribution on this feature, showing that I enjoy a wide range of danceability levels.


Checking what artists are driving the lower levels of danceability.
We found the guilty one.

ENERGY vs DANCEABILITY

From the graph, we can see that I do enjoy songs with a normal level of danceability but low energy — the image shows two very distinct clusters.

What is driving High danceability but low Energy?


LO-FI music! Makes sense — I love this type of music.
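
A quick way to inspect that cluster is to filter the liked songs on both features — a sketch only; the 0.7 and 0.4 thresholds are arbitrary cut-offs, not values from the original analysis:

# Songs that are danceable but low-energy
chill = liked_df[(liked_df['danceability'] > 0.7) & (liked_df['energy'] < 0.4)]
print(chill[['title', 'main_artist', 'danceability', 'energy']].head(10))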

KEY.

The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.


I mapped the keys to their real notation.
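
The mapping itself is a small dictionary lookup; a minimal sketch using the standard Pitch Class notation described above:

# Pitch Class notation: 0 = C, 1 = C#/Db, ..., 11 = B; -1 = no key detected
key_mapping = {-1: 'No key', 0: 'C', 1: 'C#/Db', 2: 'D', 3: 'D#/Eb',
               4: 'E', 5: 'F', 6: 'F#/Gb', 7: 'G', 8: 'G#/Ab',
               9: 'A', 10: 'A#/Bb', 11: 'B'}
data['key_name'] = data['key'].map(key_mapping)
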
We can see that key A is used more often in both liked and disliked songs.

ACOUSTICNESS.

A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.


Very skewed towards more acoustic songs.

VALENCE.

This is one of the most interesting metrics that Spotify produces: A measure describing the musical positiveness conveyed by a track.


  • Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric).
  • Tracks with low valence sound more negative (e.g. sad, depressed, angry).
Am I a sad person? No, it’s just LDR again.

LOUDNESS. The overall loudness of a track in decibels (dB).

Loudness values are averaged across the entire track and are useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.


Fyi. Did you know Spotify adjusts for loudness?


Supervised Learning

You can find the full code here.


The first step is to split our data into a training set and a testing set. I used the sklearn function train_test_split(), which splits the data according to the test_size fraction specified in the call. The code below holds out 10% of the data for testing.


from sklearn.model_selection import train_test_split

features = ['danceability', 'energy', 'key', 'loudness', 'mode',
            'acousticness', 'instrumentalness', 'liveness',
            'valence', 'tempo', 'duration_ms', 'time_signature']
X = data[features]
y = data['Like']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.1)

Logistic Regression

The first model that I tried was logistic regression. I got 80% accuracy.


from sklearn.linear_model import LogisticRegression
from sklearn import metrics

lr_model = LogisticRegression()
lr_model.fit(X_train, y_train)
lr_pred = lr_model.predict(X_test)
score = metrics.accuracy_score(y_test, lr_pred) * 100
print("Accuracy using Logistic Regression: ", round(score, 3), "%")

Decision Tree Classifier

A decision tree classifier is often the easiest to visualize. It is essentially a tree of decisions based on the features, so you can trace a path down and see visually how it makes its decisions.


from sklearn.tree import DecisionTreeClassifier, export_graphviz
import graphviz

model = DecisionTreeClassifier(max_depth=8)
model.fit(X_train, y_train)
score = model.score(X_test, y_test) * 100
print("Accuracy using Decision Tree: ", round(score, 2), "%")

# visualize the tree
export_graphviz(model, out_file="tree.dot", class_names=["disliked", "liked"],
                feature_names=data[features].columns, impurity=False, filled=True)
with open("tree.dot") as f:
    dot_graph = f.read()
graphviz.Source(dot_graph)
Tree visualization
# feature importance
import numpy as np
import matplotlib.pyplot as plt

def plot_feature_importances(model):
    n_features = data[features].shape[1]
    plt.barh(range(n_features), model.feature_importances_, align='center')
    plt.yticks(np.arange(n_features), features)
    plt.xlabel("Feature importance")
    plt.ylabel("Feature")
    plt.title("Spotify Features Importance - Decision Tree")

plot_feature_importances(model)

K-Nearest Neighbors (KNN)

The K-Nearest Neighbors classifier looks at the neighbours of a data point in order to determine what the output is. This approach gave a slightly better accuracy than the previous classifiers.


from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

knn = KNeighborsClassifier(n_neighbors=2)
knn.fit(X_train, y_train)
knn_pred = knn.predict(X_test)
score_test_knn = accuracy_score(y_test, knn_pred) * 100
print("Accuracy using KNN: ", round(score_test_knn, 3), "%")

Random Forest

The random forest is a model made up of many decision trees. Rather than just simply averaging the prediction of trees (which we could call a “forest”), this model uses sampling and random subsets to build trees and split nodes, respectively. Accuracy of 87%!


from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
score = metrics.accuracy_score(y_test, y_pred) * 100
print("Accuracy with Random Forest:", round(score, 4), "%")

Check Results

After checking within my test datasets which songs had been incorrectly classified, I decided to compare against external playlists.


To do so, I used the best classifier (Random Forest) with predict_proba instead of predict. This gives a probability as the output instead of a simple binary label.


pred = clf.predict_proba(felipe_df[features])[:, 1]
felipe_df['prediction'] = pred
print("How similar is it to my taste?", round(felipe_df['prediction'].mean() * 100, 3), "%")

Unsupervised Learning

Recall that in supervised machine learning you have input variables (X) and an output variable (Y ) and you use an algorithm to learn the mapping function from the input to the output. In contrast, in unsupervised machine learning, you only have input data (X) and no corresponding output variables.


The goal here is to extract knowledge from whatever patterns we are able to find — no liked/disliked labels are used. Unsupervised learning problems can be further grouped into clustering and association problems.


PCA: Principal Component Analysis

Principal Component Analysis (PCA) emphasizes variation and brings out strong patterns in a dataset. In other words, it takes all the variables and represents them in a smaller space while preserving the nature of the original data as much as possible.


  • The first principal component will encompass as much of the dataset variation as possible in one dimension,
  • The second component will encompass as much of the remaining variation as possible while remaining orthogonal to the first, and so on.
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

pca = PCA(n_components=3, random_state=42)
df_pca = pd.DataFrame(data=pca.fit_transform(features_scaled), columns=['PC1', 'PC2', 'PC3'])

# heat map of how much each original feature contributes to each component
plt.matshow(pca.components_, cmap='viridis')
plt.yticks([0, 1, 2], ["First component", "Second component", "Third component"])
plt.colorbar()
plt.xticks(range(len(data[features].columns)), data[features].columns, rotation=60, ha='left')
plt.xlabel("Feature")
plt.ylabel("Principal components")
PCA with 3 components

Plotting in 3D: I chose danceability for colour differentiation since, as per above, we can conclude that it is an important feature.


import numpy as np
import plotly.express as px

# Plot the PCA projection in 3-D
px.scatter_3d(df_pca_fix,
              x='PC1',
              y='PC2',
              z='PC3',
              title='Principal Component Analysis Projection (3-D)',
              color='danceability',
              size=np.ones(len(df_pca_fix)),
              size_max=5,
              height=600,
              hover_name='title',
              hover_data=['main_artist'],
              color_continuous_scale=px.colors.cyclical.mygbm[:-6])

We can see each song’s position and its distance to other songs based on the transformed audio features. Most points are concentrated in the greenish areas. The mapping also confirms that danceability does correlate with PC2 to some extent. ‘Am I boy? Am I a girl? Do I really care’ (fyi, it’s a liked song) sits on the opposite side from Helloween’s far less danceable “Halloween”.


Clustering with K-Means

The main idea behind k-means clustering is that we choose how many clusters we would like to create (typically we call that number k). We choose this based on domain knowledge (maybe we have some market research on the number of different types of groups we expect to see in our customers?), based on a ‘best-guess’, or randomly.


In the end you are left with areas that identify into which cluster a newly assigned point would be classified.


from sklearn.cluster import KMeans

# Let's start with 2 clusters
kmeans = KMeans(n_clusters=2)
model = kmeans.fit(features_scaled)
data_2 = data.copy()
data_2['labels'] = model.labels_
data_2['labels'].value_counts()  # check how many observations fall in each cluster
Plotting Clusters

What’s the optimal number of clusters? Having too many clusters might mean that we haven’t actually learned much about the data — the whole point of clustering is to identify a relatively small number of groups of similar samples in the dataset. Too few clusters might mean that we are artificially grouping unlike samples together.


There are many different methods for choosing the appropriate number of clusters, but one common method is calculating a metric for each number of clusters, then plotting the error function vs the number of clusters.


  • Yellowbrick’s KElbowVisualizer: implements the “elbow” method of selecting the optimal number of clusters by fitting the K-Means model with a range of values for K.


from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer

X = features_scaled

# Instantiate the clustering model and visualizer
model = KMeans()
visualizer = KElbowVisualizer(model, k=(1, 10))
visualizer.fit(X)        # Fit the data to the visualizer
visualizer.show()        # Finalize and render the figure
We see that the model is best fitted with 3 clusters — the “elbow” in the graph.

I used two other methods for unsupervised learning: Gaussian Mixture Models and HAC (Hierarchical Agglomerative Clustering). Please refer to the notebook.

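For completeness, here is a minimal sketch of how those two approaches can be fitted with scikit-learn on the same scaled features; the notebook remains the reference for the actual parameters used:

from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering

# Gaussian Mixture Model with 3 components
gmm = GaussianMixture(n_components=3, random_state=42)
gmm_labels = gmm.fit_predict(features_scaled)

# Hierarchical Agglomerative Clustering with 3 clusters
hac = AgglomerativeClustering(n_clusters=3)
hac_labels = hac.fit_predict(features_scaled)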

FINAL PRODUCT

Throughout this project, I have investigated the Spotify API, done Exploratory Data Analysis on disliked vs liked songs, as well as with my Musical Journey, and even dug into several supervised and unsupervised ML techniques.


Finally, I produced a final product where the user can input their (public) playlist URI and get an Exploratory Data Analysis of the songs’ features, as well as new song recommendations. The recommendations are produced with unsupervised learning techniques, since we can assume the shared playlist is a collection of liked songs only, so we can’t apply labelling techniques.


I’ve put together a super big function that can be found here. Let’s investigate the most important parts.


1- Use the master function that takes the playlist URI and transforms it into a dataframe.


2- Create a super big playlist with musical diversity that will work as a library for music recommendations.

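A rough sketch of how such a library can be assembled, reusing the master function over a list of diverse public playlists (the URIs below are placeholders, not the ones used in the project):

# Hypothetical URIs of large, genre-diverse public playlists
library_uris = ['spotify:user:spotify:playlist:XXXXXXXXXXXX',
                'spotify:user:spotify:playlist:YYYYYYYYYYYY']

all_songs = (pd.concat([master_function(uri) for uri in library_uris], ignore_index=True)
               .drop_duplicates(subset='id')
               .reset_index(drop=True))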

3- Perform PCA technique with 3 components on the two playlists.

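A minimal sketch of what this step might look like, assuming all_songs_scaled and user_scaled are the scaled feature matrices of the library and of the user’s playlist (illustrative names, not from the original notebook): fit PCA on the library, then project the user’s playlist into the same space.

from sklearn.decomposition import PCA

# Fit PCA on the big library, then project the user's playlist into the same 3-D space
pca = PCA(n_components=3, random_state=42)
df_pca_all_songs = pd.DataFrame(pca.fit_transform(all_songs_scaled), columns=['PC1', 'PC2', 'PC3'])
df_pca = pd.DataFrame(pca.transform(user_scaled), columns=['PC1', 'PC2', 'PC3'])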

Output example for friend’s playlist

4- Get recommendations with PCA and Nearest Neighbors. Export to PDF.


from scipy.spatial import KDTree

columns = ['PC1', 'PC2', 'PC3']
kdB = KDTree(df_pca_all_songs[columns].values)
neighbours = kdB.query(df_pca[columns].values, k=1)[-1]

# recommendations output: 30 songs you might like
recomendations = all_songs[all_songs.index.isin(neighbours[:31])]
recomendations_output = recomendations[['title', 'main_artist']]
recomendations_output.columns = ['Song Title', 'Artist']
Example of Recommendations (30 songs)

5- EDA with the Obamas’ playlist, Pitchfork’s top albums and songs, and the Billboard Top 100.


from sklearn.preprocessing import MinMaxScaler
import plotly.express as px

# scale every feature to [0, 1] so the radar axes are comparable
data_scaled = pd.DataFrame(MinMaxScaler().fit_transform(data[features]),
                           columns=data[features].columns)
data_scaled['Playlist'] = data['Playlist']

# average each feature per playlist and reshape to long format for plotting
df_radar = data_scaled.groupby('Playlist').mean().reset_index() \
    .melt(id_vars='Playlist', var_name="features", value_name="avg") \
    .sort_values(by=['Playlist', 'features']).reset_index(drop=True)

fig = px.line_polar(df_radar,
                    r="avg",
                    theta="features",
                    title='Mean Values of Each Playlist Features',
                    color="Playlist",
                    line_close=True,
                    line_shape='spline',
                    range_r=[0, 0.9],
                    color_discrete_sequence=px.colors.cyclical.mygbm[:-6])
fig.show()
Polar Graph

Hipster or Mainstream? Compare your taste to Billboard and Pitchfork


High Fidelity TV Show
import seaborn as sns
import matplotlib.pyplot as plt

def big_graph(feature, label1="", label2="", label3=""):
    sns.kdeplot(data[data['Playlist'] == 'Your Songs'][feature], label=label1)
    sns.kdeplot(data[data['Playlist'] == 'Pitchfork'][feature], label=label2)
    sns.kdeplot(data[data['Playlist'] == 'Billboard Top 100'][feature], label=label3)
    plt.title(feature)
    plt.grid(b=None)

plots = []
plt.figure(figsize=(16, 16))
plt.suptitle("Hipster or Mainstream?", fontsize="x-large")
for i, f in enumerate(features):
    plt.subplot(4, 4, i + 1)
    # only label the last plot of each row (and the final one) to avoid clutter
    if ((i + 1) % 4 == 0) or (i + 1 == len(features)):
        big_graph(f, label1="Your Songs", label2="Pitchfork", label3="Billboard Top 100")
    else:
        big_graph(f)
    plots.append(plt.gca())
# the code is the same for the Obamas
You vs Pitchfork vs Billboard Top 100

You an Obama? Compare your taste with the Obamas.


You vs The Obamas

6- Put together a Final Report PDF. Et voilà!

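A minimal sketch of how the figures generated above could be collected into a single PDF with matplotlib’s PdfPages; the actual report layout lives in the notebook:

from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt

with PdfPages('final_report.pdf') as pdf:
    for fig_num in plt.get_fignums():   # every figure created above
        pdf.savefig(plt.figure(fig_num))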

Linkedin: https://www.linkedin.com/in/ritasousapereira/


If you liked the article, make sure to check the full GitHub repo with all the code so you can grab it and explore! Also, if you have any suggestions regarding the project workflow, or if you notice I did something wrong, please let me know in the comments.


Original article: https://medium.com/@ritasousabritopereira4/my-spotify-journey-towards-a-recommendation-system-2fac1701dda4
