This question arose from comments about the various kinds of progress in computing over the last 50 years or so.

Some of the other participants asked me to pose it as a question to the whole forum.

The basic idea here is not to bash the current state of things, but to try to understand the process of coming up with fundamentally new ideas and principles.

I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"


Current Answer

BitTorrent. It completely turned on its head what previously seemed like an obvious, immutable rule: that the time it takes one person to download a file over the Internet grows in proportion to the number of people downloading it. It also addresses the flaws of earlier peer-to-peer solutions, particularly around "leeching", in a way that is organic to the solution itself.

BitTorrent elegantly turns what is normally a drawback (many users trying to download a single file at the same time) into an advantage, distributing the file geographically as a natural part of the download process. Its strategy for optimizing bandwidth use between two peers discourages leeching as a side effect: enforced throttling is in the best interest of every participant.

It is the kind of idea that, once someone else has invented it, seems simple, if not obvious.
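To make the reciprocity idea concrete, here is a minimal sketch in the spirit of BitTorrent's choking strategy, not its actual implementation: by preferring to upload to the peers that have recently uploaded to us, leeching is discouraged as a side effect. The Peer fields and slot counts are invented for illustration.

    # Toy sketch of a tit-for-tat style unchoke decision (not real BitTorrent code).
    from dataclasses import dataclass
    import random

    @dataclass
    class Peer:
        name: str
        bytes_received_from: int = 0   # how much this peer has sent us recently
        interested: bool = True        # peer wants pieces we have

    def choose_unchoked(peers, regular_slots=3, optimistic_slots=1):
        """Pick which peers we will upload to in the next interval."""
        candidates = [p for p in peers if p.interested]
        # Reward the best recent uploaders to us (reciprocity).
        best = sorted(candidates, key=lambda p: p.bytes_received_from, reverse=True)
        unchoked = best[:regular_slots]
        # Occasionally give a random newcomer a chance ("optimistic unchoke"),
        # so new peers can bootstrap into the reciprocity cycle.
        rest = [p for p in candidates if p not in unchoked]
        unchoked += random.sample(rest, min(optimistic_slots, len(rest)))
        return unchoked

    if __name__ == "__main__":
        swarm = [Peer("a", 900), Peer("b", 50), Peer("c", 0), Peer("d", 400)]
        print([p.name for p in choose_unchoked(swarm)])

A peer that never uploads ends up at the bottom of the sort and rarely gets unchoked, which is the "throttling in everyone's best interest" described above.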

Other Answers

The first true multimedia personal computer, the Amiga: the first 32-bit personal computer with preemptive multitasking, the first with hardware graphics acceleration, the first with multi-channel sound. In many ways it was more useful and capable than the multi-core, multi-gigahertz Windows boxes that are popular today.

Instant messaging had been around for a long time (mid-to-late 1960s), but IRC did not appear until 1988.

On top of that, video communication (say, Windows Live Messenger, or Skype, or ...) really has changed the way we communicate ;) and is fairly recent.


<edit> (see VideoConferencing: 1968, alt text http://wpcontent.answers.com/wikipedia/en/thumb/6/64/On_Line_System_Videoconferencing_FJCC_1968.jpg/180px-On_Line_System_Videoconferencing_FJCC_1968.jpg, as Alan Kay himself pointed out in the comments:

"Please look again at what Engelbart was demonstrating in 1968 (including live video chat and screen sharing). Lo, guessing really doesn't work as well as looking things up. That is why most people make poor assumptions about when things were invented.")

Consider it rubbed in my face ;), and deservedly so.

Note: the "webcam" (video setup) of that era wasn't exactly meant for the average living room ;)

</edit>


[... the answer continues:]

The popularization of the webcam (alt text http://wpcontent.answers.com/wikipedia/commons/thumb/c/c5/Logitech_Quickcam_Pro_4000.jpg/180px-Logitech_Quickcam_Pro_4000.jpg) also helped. It began in 1991, when the first such camera, called the CoffeeCam, was pointed at the Trojan Room coffee pot in the Computer Science Department of Cambridge University.

So: post-1980: 2 out of 3: IRC and the webcam.

Computer worms were researched in the early 1980s at Xerox Palo Alto Research Center.

From John Shoch and Jon Hupp's "The 'Worm' Programs - Early Experience with a Distributed Computation" (Communications of the ACM, March 1982, Volume 25, Number 3, pp. 172-180):

In The Shockwave Rider, J. Brunner developed the notion of an omnipotent "tapeworm" program running loose through a network of computers - an idea which may seem rather disturbing, but which is also quite beyond our current capabilities. The basic model, however, remains a very provocative one: a program or a computation that can move from machine to machine, harnessing resources as needed, and replicating itself when necessary. In a similar vein, we once described a computational model based upon the classic science-fiction film, The Blob: a program that started out running in one machine, but as its appetite for computing cycles grew, it could reach out, find unused machines, and grow to encompass those resources. In the middle of the night, such a program could mobilize hundreds of machines in one building; in the morning, as users reclaimed their machines, the "blob" would have to retreat in an orderly manner, gathering up the intermediate results of its computation. Holed up in one or two machines during the day, the program could emerge again later as resources became available, again expanding the computation. (This affinity for nighttime exploration led one researcher to describe these as "vampire programs.")

To quote Alan Kay: "The best way to predict the future is to invent it."
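For flavor, here is a toy single-process simulation of the "blob" behaviour described in the quote: expand onto idle machines, do work there, and retreat, gathering partial results, as users reclaim their machines. The Machine class, the is_idle() check, and the random day/night cycle are invented for the sketch and bear no relation to the real Xerox worm code.

    # Toy simulation of the "blob" model: grow onto idle machines, retreat when reclaimed.
    import random

    class Machine:
        def __init__(self, name):
            self.name = name
            self.user_active = False   # a user at the keyboard reclaims the machine

        def is_idle(self):
            return not self.user_active

    def run_blob(machines, work_units):
        """Spread work across idle machines; retreat as they are reclaimed."""
        occupied = {}      # machine -> partial result accumulated on that machine
        gathered = []      # partial results already pulled back in
        while work_units:
            # Expand: claim any idle machine we are not already running on.
            for m in machines:
                if m.is_idle() and m not in occupied:
                    occupied[m] = 0
            # Do one slice of work on each occupied machine.
            for m in occupied:
                if work_units:
                    occupied[m] += work_units.pop()
            # Retreat: a returning user reclaims the machine, so gather its result.
            for m in [m for m in occupied if not m.is_idle()]:
                gathered.append(occupied.pop(m))
            # Simulate users arriving and leaving (the day/night cycle).
            for m in machines:
                m.user_active = random.random() < 0.3
        # All work handed out; collect whatever is still out on idle machines.
        return gathered + list(occupied.values())

    if __name__ == "__main__":
        net = [Machine(f"machine-{i}") for i in range(5)]
        print(sum(run_blob(net, list(range(1, 101)))))   # 5050, however it was split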

Natural language processing. I first came across it in the early 1990s with a program from Symantec called Q&A, which let you query a database by typing questions in English. I am still impressed by it to this day.
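As a rough illustration only (not how Symantec's Q&A actually worked), here is a toy keyword-spotting "English query" over a small in-memory table; the sample data and patterns are invented.

    # Crude stand-in for natural-language querying: keyword spotting over records.
    import re

    people = [
        {"name": "Ada",   "department": "research", "salary": 90000},
        {"name": "Brian", "department": "sales",    "salary": 55000},
        {"name": "Carol", "department": "research", "salary": 72000},
    ]

    def answer(question):
        q = question.lower()
        rows = people
        # Extremely naive keyword spotting, standing in for real parsing.
        for dept in ("research", "sales"):
            if dept in q:
                rows = [r for r in rows if r["department"] == dept]
        m = re.search(r"over \$?(\d+)", q)
        if m:
            limit = int(m.group(1))
            rows = [r for r in rows if r["salary"] > limit]
        if "who" in q or "which" in q:
            return [r["name"] for r in rows]
        return rows

    print(answer("Who in research earns over $80000?"))   # ['Ada']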

I think the best ideas invented since the 1980s will be the ones we are unaware of, either because they are so small and ubiquitous as to go unnoticed, or because their popularity has not really taken off yet.

An example of the former is clicking and dragging to select a portion of text. I believe this first appeared on the Macintosh in 1984. Before that, you had separate buttons for picking the beginning and the end of a selection. Quite onerous.
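A minimal sketch of the interaction as described: mouse-down anchors the selection, dragging extends it, and mouse-up ends it. The class and the character-offset model are assumptions for illustration; real event plumbing is left out.

    # Sketch of click-and-drag text selection over character offsets in a buffer.
    class DragSelection:
        def __init__(self):
            self.anchor = None   # where the button went down
            self.end = None      # where the pointer currently is

        def mouse_down(self, offset):
            self.anchor = self.end = offset

        def mouse_drag(self, offset):
            if self.anchor is not None:
                self.end = offset

        def mouse_up(self, offset):
            self.mouse_drag(offset)

        def range(self):
            """Selected range as (start, end), regardless of drag direction."""
            if self.anchor is None:
                return None
            return (min(self.anchor, self.end), max(self.anchor, self.end))

    sel = DragSelection()
    sel.mouse_down(12); sel.mouse_drag(40); sel.mouse_up(37)
    print(sel.range())   # (12, 37)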

An example of the latter is (perhaps) visual programming languages. I don't mean something like HyperCard; I mean things like Max/MSP, Prograph, Quartz Composer, Yahoo Pipes, and so on. At the moment they really are niche, but I see nothing, other than mindshare, keeping them from being as expressive and powerful as standard programming languages.

Visual programming languages effectively enforce the functional-programming property of referential transparency, which is a very useful property for code to have. And the way they enforce it isn't artificial; it simply falls out of the metaphor they use.
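A tiny sketch of why the metaphor does the enforcing, loosely in the spirit of dataflow patchers like Max/MSP or Quartz Composer (not their actual engines): each node's output depends only on what arrives on its input wires, so referential transparency comes with the wiring.

    # Minimal dataflow "patch": nodes are pure functions of their wired inputs.
    class Node:
        def __init__(self, fn, *inputs):
            self.fn = fn            # a pure function of the input values
            self.inputs = inputs    # upstream nodes feeding this one

        def evaluate(self):
            # A node can only see what arrives on its input wires, so evaluating
            # it twice with the same upstream values always gives the same result.
            return self.fn(*(node.evaluate() for node in self.inputs))

    class Const(Node):
        def __init__(self, value):
            super().__init__(lambda: value)

    # "Patch": (3 + 4) * 10, built by wiring nodes together instead of writing code.
    add = Node(lambda a, b: a + b, Const(3), Const(4))
    scale = Node(lambda x: x * 10, add)
    print(scale.evaluate())   # 70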

VPLs make programming possible for people who otherwise wouldn't program: people with language difficulties, such as dyslexics, or simply laypeople who need a quick time-saver. Professional programmers may sneer at this, but personally I think it would be great if programming became a truly ubiquitous skill, like literacy.

As things stand, though, VPLs are a niche interest and haven't really gone mainstream.

What We Should Be Doing Differently

All computer science majors should be required to double major, coupling the CS major with one of the humanities: painting, literature, design, psychology, history, English, whatever. A lot of the problem is that the industry is populated with people who have a really narrow and unimaginative understanding of the world, and who therefore can't begin to imagine a computer working any significantly differently than it already does. (If it helps, you can imagine that I'm talking about someone other than you, the person reading this.) Mathematics is great, but in the end it's just a tool for achievement. We need experts who understand the nature of creativity and who also understand technology.

But even if we have them, there needs to be an environment where doing something new has a chance of being worth the risk. It's 100 times more likely that anything truly new gets rejected out of hand, rather viciously (the Newton is an example of this). So we need a much higher tolerance for failure. We should not be afraid to try an idea that has failed in the past. We should not fully reject our own failures, and we should learn to recognize when we have failed. We should not see failure as a bad thing, and so we shouldn't lie to ourselves or to others about it. We should just get used to it, because it is just about the only constant in this ever-changing industry. Post-mortems are useful in this regard.

One of the more interesting things about Smalltalk, I think, was not the language itself but the process used to arrive at its design: an iterative design process, going through many, many revisions, while very carefully and critically identifying the flaws of the existing system and finding solutions to them in the next one. The more perspectives, and the broader the perspectives, we have on the situation, the better we can judge where the mistakes and problems are. So don't just study computer science. Study as many other academic subjects as you can get yourself interested in.