From: https://news.ycombinator.com/item?id=9613810

thisisdave 7 days ago

Has LeCun changed his position on open access? He'd previously "pledged to no longer do any volunteer work, including reviewing, for non-open access publications." [1] A major part of his reasoning was, "Why should people pay $31 to read individual papers when they could get them for free if they were published by JMLR?"

I had assumed that meant he wouldn't write for them either (and thus wouldn't enlist other people to volunteer as reviewers when the final product would cost $32 to read).

[1] https://plus.google.com/+YannLeCunPhD/posts/WStiQ38Hioy

reply

paulsutter 6 days ago

The publishers aren't the problem, and the authors aren't the problem. We the readers are the problem. Seriously. Let me explain.

I have to admit, when I saw "LeCun, Hinton, in Nature" I thought "That must be an important article, I need to read it". I haven't read every single paper by LeCun or Hinton. The Nature name affected me. That's why it's rational to publish there.

There's still no effective alternative to the journal system for identifying which papers are important to read. There have been attempts; Google Scholar and CiteSeer are examples.

A voting system like Hacker News wouldn't work, because Geoff Hinton's vote should count for a lot more than my vote. PageRank solved that problem for web pages (a link from Yahoo indicates more value than a link from my blog). How can scientific publication move to such a system?
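
For concreteness, here's what a PageRank-style weighting over an endorsement graph could look like (a toy NumPy sketch with made-up data, not a proposal for real infrastructure):

    import numpy as np

    def pagerank(adj, damping=0.85, iters=100):
        """Power iteration on an endorsement graph."""
        n = adj.shape[0]
        col_sums = adj.sum(axis=0)
        col_sums[col_sums == 0] = 1.0       # guard against papers that endorse nothing
        m = adj / col_sums                  # each endorser splits their weight evenly
        rank = np.full(n, 1.0 / n)
        for _ in range(iters):
            rank = (1 - damping) / n + damping * (m @ rank)
        return rank

    # Papers 0..3; adj[i, j] = 1 means paper j's authors endorse paper i.
    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [1, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=float)
    print(pagerank(adj))  # endorsements from highly-ranked papers count for more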

reply

hackuser 6 days ago

> How can scientific publication move to such a system?

Complete amateur speculation: Scientists' professional societies could create elite, free, online journals, with a limited number of articles per month (to ensure only the best are published there), openly stating that they intend these to be the new elite journals in their respective fields.

reply

aheilbut 6 days ago

PLoS has tried to do that, with journals like PLoS Biology.

In principle, AAAS is in the best position to pull something like this off, but as the publisher of Science, they're sadly very committed to preserving the status quo...

reply

joelthelion 6 days ago

PLoS is actually a pretty nice success. It's more than an experiment at this point.

reply

grayclhn 6 days ago

Things are moving in that direction. Some of the top statistics journals encourage authors to put their papers on arXiv, for example.[1] Creating "new elite journals" takes much more time than you'd think, though. I work in economics, and I'm aware of two new open-access journals that are very good but definitely not comparable to Science or Nature (or, really, to the equivalents of Science and Nature in econ). One is five years old and the other is ten, and they've probably got as high a reputation now as they're likely to ever have.

[1]: http://www.imstat.org/publications/eaccess.htm

reply

apl 6 days ago

People are trying to accomplish precisely that: it's called eLife. The Wellcome Trust, HHMI, and the Max Planck Society are backing it; adoption has been moderately successful.

So there's hope!

reply

dcre 6 days ago

Karma could be used to weight votes.
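
Even something this simple could help (toy Python; log-weighting is just one plausible way to damp very high karma):

    import math

    # Hypothetical (voter, karma) pairs for one submission.
    votes = [("alice", 12000), ("bob", 150), ("carol", 98000)]

    # Log-scale karma so a single very-high-karma account can't dominate outright.
    score = sum(math.log1p(karma) for _, karma in votes)
    print(round(score, 2))  # ~25.9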

reply

letitgo12345 7 days ago

To be fair, it's a review article in Nature -- so it's meant to advertise deep learning to scientists in other fields. His actual research is still publicly accessible.

reply

grayclhn 6 days ago

"Review" in the context of your link means "referee." I don't think that most researchers would view writing a review article for Nature as "volunteer work" because the career benefits of the publication are pretty high.

So, going by the link, I don't think this is a change in his position on open access, but I also don't think that his position involved as much self-sacrifice as you'd assumed.

edit: I don't know the field well, but these review articles usually recycle a lot of material from review articles the author wrote a year or two before. The contents of the article might be basically available for free already.

reply

thisisdave 6 days ago

> I don't think that most researchers would view writing a review article for Nature as "volunteer work"

I was referring more specifically to the fact that Nature had to enlist volunteers to referee the paper he wrote.

I was curious whether he was okay with that even though he wouldn't volunteer himself, whether his position on non-open journals had changed, or whether there was some other explanation.

reply

Teodolfo 6 days ago

He won't do things for the non-open journals without getting something in return. In this case, he gets a Nature paper in return. He presumably would be willing to edit one for sufficient pay as well; he just won't do it for free.

reply

rfrey 7 days ago

$32 seems steep for one article. Does anyone know if Nature allows authors to publish on their websites? Nothing so far on Hinton's, LeCun's, or Bengio's page.

reply

gwern 7 days ago

https://www.dropbox.com/s/fmc3e4ackcf74lo/2015-lecun.pdf / http://sci-hub.org/downloads/d397/lecun2015.pdf

reply

dmazin 7 days ago

gwern saves the day once again.

reply

chriskanan 7 days ago

This link allows you to access the entire document using ReadCube: http://www.nature.com/articles/nature14539.epdf?referrer_acc...

It doesn't allow you to print or to save the article.

reply

nabla9 7 days ago

http://libgen.in/scimag/?s=10.1038/nature14539

reply

cowpig 7 days ago

Having to pay for this at all is problematic.

reply

robotresearcher 7 days ago

Why? Because the authors are not paid for their work, or for some other reason?

reply

foxbarrington 7 days ago

But think of how much all the research costs Nature.

reply

IanCal 6 days ago

Their editor-in-chief estimates their costs at roughly $30,000-$40,000 per article. That's much higher than other publishers, it seems, probably in part due to their high rejection rate (92% in 2011).

http://www.nature.com/news/open-access-the-true-cost-of-scie...

reply

joelthelion 6 days ago

That could be reduced a lot if they had an incentive to do it.

The work that counts, i.e. the research and the peer review, is done for free and not compensated at all by the publisher.

reply

IanCal 6 days ago

I expect prices for everything could be reduced with enough effort, but I think all businesses have an incentive to reduce internal costs. They apparently publish roughly 1,000 articles per year[0], so that suggests internal costs in the range of $30-40 million.

> The work that counts, ie. the research and the peer-review, are free and not compensated at all by the publisher.

The peer review is done for free, yes. I don't think I'd class the research as free, though, unless you mean free to the publisher; the scientists are still generally paid.

Again, I'm not arguing for paid journals, just pointing out that they do have costs to run.

[0] http://lib.hzau.edu.cn/xxfw/SCIzx/Document/249/Image/2010101...

reply

gwern 6 days ago

"_Nature_ says that it will not disclose information on margins."

reply

IanCal 6 days ago

Yes, we don't know the profit per article, but we do know some of their internal costs.

reply

cowpig 7 days ago

Hurray for science behind paywalls!

reply

IanCal 6 days ago

Here's a link where you can read it: http://www.nature.com/articles/nature14539.epdf?referrer_acc...

reply

yzh 7 days ago

There is a whole section on machine intelligence in this issue. Just curious: is there more CS-related research in Nature now than before?

reply

chriskanan 7 days ago

Nature and Science both generally have only a few CS-focused papers per year. This issue has a special theme (Machine learning and robots), so it breaks that general pattern. I regularly read Nature and Science for general science news, commentary, etc.

reply

yzh 7 days ago

Thanks.

reply

paulsutter 6 days ago

If you only have time to read one paper on deep learning, read this one.

A few quotes:

"This rather naive way of performing machine translation has quickly become competitive with the state-of-the-art, and raises serious doubts about whether understanding a sentence requires anything like the internal symbolic expressions that are manipulated by using inference rules. It is more compatible with the view that everyday reasoning involves many simultaneous analogies that each contribute plausibility to a conclusion"

"The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning."

"Problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed). At the pixel level, images of two Samoyeds in different poses and in different environments may be very different from each other, whereas two images of a Samoyed and a wolf in the same position and on similar backgrounds may be very similar to each other. A linear classifier, or any other ‘shallow’ classifier operating on raw pixels could not possibly distinguish the latter two, while putting the former two in the same category.... The conventional option is to hand design good feature extractors, which requires a considerable amount of engineering skill and domain expertise. But this can all be avoided if good features can be learned automatically using a general-purpose learning procedure. This is the key advantage of deep learning."

"Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance"
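
That last quote's compositional hierarchy is exactly what stacked convolution and pooling layers compute. A toy 1-D sketch (NumPy, with hand-picked filters rather than learned ones, just to show the mechanics):

    import numpy as np

    def conv1d(x, w):
        """Valid convolution: each output is a local combination of inputs."""
        n = len(x) - len(w) + 1
        return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

    def relu(x):
        return np.maximum(x, 0)

    def maxpool(x, size=2):
        """Pooling: the output barely changes if a feature shifts within a
        window, which is the invariance the quote mentions."""
        return np.array([x[i:i + size].max() for i in range(0, len(x) - size + 1, size)])

    x = np.array([0., 0., 1., 1., 0., 0., 1., 1.])  # raw "pixels"
    edge = np.array([-1., 1.])                       # layer 1: an edge detector
    h1 = maxpool(relu(conv1d(x, edge)))              # edges -> motifs
    motif = np.array([1., 1.])                       # layer 2: combines edges
    h2 = maxpool(relu(conv1d(h1, motif)))            # motifs -> parts
    print(h1, h2)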

reply

deepnet 6 days ago

Don't miss Chris Olah's blog, where Figure 1 was copied from:

http://colah.github.io/

Very, very visually insightful on the nature of neural nets, convnets, deep nets...

reply

evc123 7 days ago

Nature should just use targeted advertising to make their journals free, similar to the way Google/Facebook make their services free using targeted ads.

reply

grayclhn 6 days ago

I think you're overestimating the number of people that read Nature.

reply

itistoday2 6 days ago

Why do these articles on RNNs and "Deep Learning" never mention Hierarchical Temporal Memory?

https://en.wikipedia.org/wiki/Hierarchical_temporal_memory#D...

reply

paulsutter 6 days ago

Jeff Hawkins takes the position that only Numenta sees the human brain as a temporal prediction engine based on sparse, hierarchical memory.

But actually, RNNs are great for recognizing and predicting temporal sequences (as we saw in the Karpathy post), RNNs use a sparse representation, and RNNs can be extended with hierarchical memory [1].

The big difference is that the neural network crowd is getting some spectacular results, and Numenta, well, maybe they'll show more progress in the future.

Jeff Hawkins is super smart and a good guy, and he might get more done if he acknowledged the commonalities in the approaches rather than having to invent it all separately at Numenta. I really don't mean to be critical. Jeff inspired my own interest in machine intelligence.

[1] page 442, "Over the past year, several authors have made different proposals to augment RNNs with a memory module. Proposals include the Neural Turing Machine in which the network is augmented by a ‘tape-like’ memory that the RNN can choose to read from or write to, and memory networks, in which a regular network is augmented by a kind of associative memory. Memory networks have yielded excellent performance on standard question-answering benchmarks. The memory is used to remember the story about which the network is later asked to answer questions."
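
For anyone who hasn't seen the Karpathy post: the core recurrence behind all of this is tiny. A sketch with random, untrained weights (the real thing learns Wxh/Whh/Why by backpropagation through time):

    import numpy as np

    rng = np.random.default_rng(0)
    vocab, hidden = 4, 8                     # toy sizes
    Wxh = rng.normal(0, 0.1, (hidden, vocab))
    Whh = rng.normal(0, 0.1, (hidden, hidden))
    Why = rng.normal(0, 0.1, (vocab, hidden))

    def step(h, x_id):
        """One RNN step: new hidden state plus a distribution over next symbols."""
        x = np.zeros(vocab)
        x[x_id] = 1.0                        # one-hot input symbol
        h = np.tanh(Wxh @ x + Whh @ h)       # the state carries the sequence so far
        logits = Why @ h
        p = np.exp(logits - logits.max())
        return h, p / p.sum()

    h = np.zeros(hidden)
    for x_id in [0, 1, 2]:                   # feed a toy sequence
        h, p = step(h, x_id)
    print("p(next symbol):", p.round(3))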

reply

bra-ket 6 days ago

These communities have had different goals from the very beginning: pattern recognition vs. real intelligence, which is what HTM is about. Hawkins describes this gap well in his book.

But there is some cross-pollination. See the recent projects by Stan Franklin's lab on Sparse Distributed Memory and composite representations; they're a step towards integration with deep learning: http://ccrg.cs.memphis.edu/assets/papers/theses-dissertation...
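
For reference, Kanerva-style Sparse Distributed Memory is simple enough to sketch in a few lines (toy sizes and radius, made up for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n_bits, n_locations, radius = 64, 500, 26

    addresses = rng.integers(0, 2, (n_locations, n_bits))  # fixed hard locations
    counters = np.zeros((n_locations, n_bits))             # trainable content

    def nearby(addr):
        """Hard locations within Hamming radius of the query address."""
        return (addresses != addr).sum(axis=1) <= radius

    def write(addr, data):
        counters[nearby(addr)] += 2 * data - 1   # +1 for a 1 bit, -1 for a 0 bit

    def read(addr):
        return (counters[nearby(addr)].sum(axis=0) > 0).astype(int)

    pattern = rng.integers(0, 2, n_bits)
    write(pattern, pattern)                      # auto-associative storage
    noisy = pattern.copy()
    noisy[:3] ^= 1                               # corrupt a few bits
    print((read(noisy) == pattern).mean())       # recall should be ~1.0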

On the other hand, check out the work by Volodymyr Mnih from DeepMind (https://www.cs.toronto.edu/~vmnih/); reinforcement learning with "visual attention" is a step towards the consciousness models of the HTM/SDM/LIDA camp.

reply

paulsutter 6 days ago

I was also under the misapprehension that deep learning is just about classification, but that isn't true.

Yes, reinforcement learning is the path to general intelligence, and the deep learning community is showing impressive progress on that front as well. The DeepMind demo [1] and the recent robotics work at Berkeley [2] are good examples.

Thanks for the link to Stan Franklin's work. I'm glad to hear there is work to integrate the two approaches.

[1] https://www.youtube.com/watch?v=EfGD2qveGdQ

[2] http://newscenter.berkeley.edu/2015/05/21/deep-learning-robo...
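
The core loop behind that kind of reinforcement learning is small; DeepMind's contribution was pairing it with a deep network as the value function. A tabular Q-learning sketch on a made-up chain world:

    import numpy as np

    n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
    q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.3
    rng = np.random.default_rng(0)

    for _ in range(2000):                 # episodes
        s = 0
        while s != n_states - 1:          # reward only at the rightmost state
            a = rng.integers(n_actions) if rng.random() < eps else int(q[s].argmax())
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2

    print(q.argmax(axis=1)[:-1])          # learned policy for states 0-3: go right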

reply

akyu 6 days ago

It seems like there's a kind of taboo against HTM in the ML community. I guess it stems from their lack of impressive benchmarks, but I think that's kind of missing the point when it comes to HTM. Maybe HTM isn't the right solution, but I think there is a lot to be learned by using neocortex-inspired models, and HTM is at least a solid step in that direction. And the work Numenta has contributed on sparse coding shouldn't be overlooked.

reply

davmre 6 days ago

I don't think most ML people are actively hostile to HTM, just indifferent until they show some results. For example, Yann LeCun in his Reddit AMA: http://www.reddit.com/r/MachineLearning/comments/25lnbt/ama_...

reply

dwf 6 days ago

Extraordinary claims demand extraordinary evidence. Numenta has plenty of the former and nil as regards the latter.

If they had even invented some believably interesting task, done fair comparisons with other methods, and shown that HTM succeeds where others fail, it would be considered worth a look by the wider machine learning community.

reply

ajays 6 days ago

I've yet to find one significant benchmark dataset where HTM beats other methods. One.

reply

Teodolfo 6 days ago

Because it doesn't meet academic standards for publishing in NIPS and ICML, the most prestigious machine learning conferences.

Edit: To clarify, research papers generally cite other peer-reviewed research papers in similar venues preferentially. ML papers should mostly be citing ML papers in high-quality, peer-reviewed venues. HTM doesn't have papers like this to cite.

reply

jphilip147 6 days ago

Very helpful review.

reply
