Being a sneakerhead is a culture of its own, with its own industry. Every month the biggest brands introduce a few select limited edition sneakers, which are sold through a lottery system called a "raffle". This has created a new market of its own, where people who win sneakers in the lottery sell them at higher prices to people who wanted the shoes even more. One can find many websites, such as stockx.com and goat.com, for reselling untouched limited edition sneakers.

But the problem with reselling sneakers is that not every limited edition sneaker is a success, and not every one returns big profits. One has to study the "hype" and "popularity": which model is a hot topic and being discussed more than the others. If one can gauge that well, profits can reach up to 300%.

I found a way to measure that "hype" or popularity of certain models by doing Instagram analysis, studying the hashtags related to the sneakers, and finding out which sneaker is a unicorn.

Data Scraping and Preparing the Data

The Instagram API doesn't let you study likes and comments on other profiles, so instead of using the API I used data scraping. To scrape data from Instagram you will need a hash query URL like this.

url='https://www.instagram.com/graphql/query/?query_hash=c769cb6c71b24c8a86590b22402fda50&variables=%7B%22tag_name%22%3A%22azareth%22%2C%22first%22%3A2%2C%22after%22%3A%22QVFCVDVxVUdMLWlnTlBaQjNtcUktUkR4M2dSUS1lSzkzdGVkSkUyMFB1aXRadkE1RzFINHdzTmprY1Yxd0ZnemZQSFJ5Q1hXMm9KZGdLeXJuLWRScXlqMA%3D%3D%22%7D' 

As you can see, the keyword azareth in that URL is my hashtag. You can simply change that keyword to any hashtag you want to get data for.

Let us select some hashtags for the Air Jordan 1 "Fearless" sneakers: #airjordanfearless, #fearless, #jordanbluefearless, #fearlessjordan, #aj1fearless, #ajonefearless, #airjordanonefearless

# Creating a dataframe with a hashtag column
import pandas as pd

airjordanfearless = ["airjordanfearless", "fearless", "jordanbluefearless", "fearlessjordan", "aj1fearless", "ajonefearless", "airjordanonefearless"]
airjordanfearless = pd.DataFrame(airjordanfearless)
airjordanfearless.columns = ["hashtag"]

# Creating a url column holding the query URL for each hashtag
url = 'https://www.instagram.com/graphql/query/?query_hash=c769cb6c71b24c8a86590b22402fda50&variables=%7B%22tag_name%22%3A%22azareth%22%2C%22first%22%3A2%2C%22after%22%3A%22QVFCVDVxVUdMLWlnTlBaQjNtcUktUkR4M2dSUS1lSzkzdGVkSkUyMFB1aXRadkE1RzFINHdzTmprY1Yxd0ZnemZQSFJ5Q1hXMm9KZGdLeXJuLWRScXlqMA%3D%3D%22%7D'
airjordanfearless["url"] = url

# Replace the placeholder hashtag ("azareth") in the query URL with each row's hashtag
airjordanfearless['url'] = airjordanfearless['hashtag'].apply(lambda x: url.replace('azareth', x.lower()))

After we have a dataframe, it's time to see what we can do with the Instagram hash query. We can find the total likes, total comments, and total posts related to a certain hashtag, and these parameters can help us predict the "hype" and "popularity" of the sneakers.

We will need the urllib and requests libraries to open the URL and retrieve the values we require, like total likes, total comments, or even the images themselves.

import urllib.request
import requests

# Opening each url, decoding the response, and searching for the parameters
# edge_media_preview_like, edge_media_to_comment, edge_hashtag_to_media
airjordanfearless['totalikes'] = airjordanfearless['url'].apply(lambda x: urllib.request.urlopen(x).read().decode('UTF-8').rfind("edge_media_preview_like"))
airjordanfearless['totalcomments'] = airjordanfearless['url'].apply(lambda x: urllib.request.urlopen(x).read().decode('UTF-8').rfind("edge_media_to_comment"))
airjordanfearless['totalposts'] = airjordanfearless['url'].apply(lambda x: urllib.request.urlopen(x).read().decode('UTF-8').rfind("edge_hashtag_to_media"))
airjordanfearless['releaseprice'] = 160
airjordanfearless
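
Instead of searching the raw response text, the same aggregate numbers can also be read from the parsed JSON that the hash query returns. Below is a minimal sketch using requests; the field names (data → hashtag → edge_hashtag_to_media, with per-post like/comment counts under each edge's node) follow the legacy Instagram GraphQL layout and are an assumption here, not something taken from the original notebook:

import requests

def hashtag_counts(query_url):
    # Fetch one hash-query URL and pull aggregate counts out of the JSON.
    # Assumes the legacy Instagram GraphQL layout; adjust the keys if the response differs.
    data = requests.get(query_url).json()
    media = data["data"]["hashtag"]["edge_hashtag_to_media"]
    posts = media["count"]          # total posts tagged with the hashtag
    edges = media["edges"]          # the most recent posts returned by the query
    likes = sum(e["node"]["edge_media_preview_like"]["count"] for e in edges)
    comments = sum(e["node"]["edge_media_to_comment"]["count"] for e in edges)
    return likes, comments, posts

# e.g. hashtag_counts(airjordanfearless["url"][0])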

In order to create training data, I made similar data frames for some selected sneakers: Yeezy 700 Azareth, Nike X Sacai Blazar, Puma Ralph Sampson OG, Nike SB Dunk X Civilist, and the Nike Space Hippie collection.

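Each of those frames can be built with the same steps as the airjordanfearless block above. Here is a minimal sketch of a helper wrapping those steps; the function name and arguments are mine (not from the original notebook), and it reuses the url template and the pandas/urllib imports defined earlier:

def build_hashtag_frame(hashtags, release_price):
    # One row per hashtag, with the hashtag swapped into the query URL,
    # then the same rfind-based scrape as above for likes, comments and posts.
    frame = pd.DataFrame(hashtags, columns=["hashtag"])
    frame["url"] = frame["hashtag"].apply(lambda tag: url.replace('azareth', tag.lower()))
    for col, marker in [("totalikes", "edge_media_preview_like"),
                        ("totalcomments", "edge_media_to_comment"),
                        ("totalposts", "edge_hashtag_to_media")]:
        frame[col] = frame["url"].apply(lambda u: urllib.request.urlopen(u).read().decode("UTF-8").rfind(marker))
    frame["releaseprice"] = release_price
    return frame

# e.g. yeezyazareth = build_hashtag_frame(["yeezyazareth", "yeezy700azareth"], release_price=...)  # hypothetical hashtags; fill in the retail price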

I took the mean values of total likes, comments, and posts across all hashtags of each sneaker to create the training data. The max resale prices of these sneakers were taken from goat.com.

traindata = {
    'name': ['yeezyazareth', 'airjordanfearless', 'sacainikeblazar', 'pumaralphsamson', 'nikedunkcivilist', 'nikespacehippie'],
    'likes': [yeezyazareth.totalikes.mean(), airjordanfearless.totalikes.mean(), sacainikeblazar.totalikes.mean(),
              pumaralphsamson.totalikes.mean(), nikedunkcivilist.totalikes.mean(), nikespacehippie.totalikes.mean()],
    'comment': [yeezyazareth.totalcomments.mean(), airjordanfearless.totalcomments.mean(), sacainikeblazar.totalcomments.mean(),
                pumaralphsamson.totalcomments.mean(), nikedunkcivilist.totalcomments.mean(), nikespacehippie.totalcomments.mean()],
    'post': [yeezyazareth.totalposts.mean(), airjordanfearless.totalposts.mean(), sacainikeblazar.totalposts.mean(),
             pumaralphsamson.totalposts.mean(), nikedunkcivilist.totalposts.mean(), nikespacehippie.totalposts.mean()],
    'releaseprice': [yeezyazareth.releaseprice[1], airjordanfearless.releaseprice[1], sacainikeblazar.releaseprice[1],
                     pumaralphsamson.releaseprice[1], nikedunkcivilist.releaseprice[1], nikespacehippie.releaseprice[1]],
    'maxresaleprice': [361, 333, 298, 115, 1000, 330],  # max resale price data taken from goat.com
    'popular': [1, 1, 1, 0, 2, 1]
}
df = pd.DataFrame(traindata, columns=['name', 'likes', 'comment', 'post', 'releaseprice', 'maxresaleprice', 'popular'])
df

Data Training and ANN Model Building

DATA TRAINING

1- The hash query returns the most recent photos on Instagram for a given hashtag, which reduces the chance of old-model sneaker photos getting into the data. This is what we want, since the "hype" or "popularity" of a certain sneaker is best estimated from the most recent photos, so we can know which sneakers are being talked about right now and could have higher resale values.

2- To account for hashtags overlapping across photos (which is quite possible), I take the mean counts of total likes, comments, and posts to train the data and predict resale prices.

3- To validate the model, instead of splitting the data into train and test sets, we can simply build an x_test from the hashtags of a recently released sneaker and compare our predictions with the actual ongoing resale price.

Artificial Neural Network

For X, I took the variables "likes", "comment", "post", and "releaseprice", and for Y/labels I used "maxresaleprice", so that the model learns by itself how to weight the neurons as it maps the x variables onto the "maxresaleprice" y data, finding a pattern between likes, comments, and number of posts on Instagram and the max resale price.

The reason being, more likes, comments, and posts on Instagram related to a particular sneaker reflect its hype and popularity among Instagram users, and the model can find accurate weights to determine the relation between these signals and the resale price.

import numpy as np

x = df[["likes", "comment", "post", "releaseprice"]]
x = np.asarray(x)
y = np.asarray(df.maxresaleprice)
y
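# With the six sneakers above these arrays are tiny:
#   x.shape == (6, 4)  -> four features per sneaker, matching input_shape=[4,] in the model below
#   y.shape == (6,)    -> one max resale price per sneaker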

Model Tuning

Learning rate — I selected a low learning rate of 0.001, in order to let the model find weights and take gradient steps without overshooting the minima.

Loss function — I selected MSE as the loss, as I am trying to find relations between continuous variables, so this is a type of regression.

Activation function — ReLU was the best option, as it turns all negative values into 0 (the scraped Instagram data shows -1 where a value is really 0) and passes the exact value through if it is higher than 0.

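A quick throwaway illustration of that behaviour (not part of the original notebook):

relu = lambda v: max(0.0, v)   # ReLU: negative inputs are clipped to zero
relu(-1)    # 0.0 -> a value scraped as -1 contributes nothing
relu(326)   # 326 -> positive values pass through unchanged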

Layers and neurons — I experimented with the numbers of neurons and layers to find the best combination, where gradients do not blow up, loss is minimized, and the model can find patterns and weights well within 50 epochs.

from keras.models import Model
from keras.layers import Input
from keras.layers import Dense
from keras.models import Sequential
import tensorflow as tf

model = Sequential()
model.add(Dense(10, input_shape=[4,], activation='relu'))
model.add(Dense(30, activation='relu'))
model.add(Dense(1, activation='relu'))

mse = tf.keras.losses.MeanSquaredError()
model.compile('Adam', loss=mse)
model.optimizer.lr = 0.001
model.fit(x, y, epochs=50, batch_size=10, verbose=1)

RESULTS

I did not create training data big enough to split between train and test sets. So in order to verify the results, I simply created an x_test data frame for some recent releases, the same way I showed you above, and compared my model's predictions with resale prices on goat.com. Here is an example with the Nike Dunk Low SB Black, which had a resale price of 326 euros on goat.com.

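A minimal sketch of that check, reusing the hypothetical build_hashtag_frame helper from earlier; the hashtags and the release price below are placeholders, not figures from the original article:

# Build a frame for the new release exactly as before, then feed its mean counts to the model
release_price = 100   # placeholder: use the sneaker's actual retail price
nikedunklowblack = build_hashtag_frame(["nikesbdunklowblack", "nikedunklowblack"], release_price)  # hypothetical hashtags
x_test = np.asarray([[nikedunklowblack.totalikes.mean(),
                      nikedunklowblack.totalcomments.mean(),
                      nikedunklowblack.totalposts.mean(),
                      release_price]])
model.predict(x_test)   # compare this predicted max resale price with the ~326 EUR listed on goat.com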

For the complete Jupyter notebook and code, you can view my repository on github.com: https://github.com/Alexamannn/Instagram-analysis-to-predict-Sneaker-resale-prices-with-ANN

Translated from: https://medium.com/analytics-vidhya/instagram-analysis-to-predict-limited-edition-sneakers-resale-price-with-ann-5838cbecfab3

