The Future of Distributed Computing
(Originally Published January 19, 2002)
Yes, it can. But only when this one computer program runs on thousands of computers around the globe.
This is the world of distributed computing. The same simple peer-to-peer technology that underlay the Internet's original design is being used, as we speak, to do all this and more.
This is the same technology used by Napster and Gnutella to transfer MP3 music and other files from one user to the next. The main difference between the two is that Napster relied on static central servers (as do most of the distributed computing projects discussed above, but that is another matter).
How does it work?
Simple, really. A central computer (or group of computers) coordinates a database of work to be processed, be it models for cancer treatments, signals received from galaxies away by high-powered telescopes, or candidate answers to a mathematical problem.
Individual users download a program to their personal computer (usually a home or work PC). This usually non-intrusive program runs in the background, or when the computer is idle, processing the data it receives. When the answers are calculated, the personal computer sends the results back to the central server.
This harnesses the processing power of thousands of computers, effectively building one big “distributed” computer across the globe, connected by nothing more than a basic Internet connection. Even dial-up users can take part: they download the data while connected, then disconnect while their computer processes away (when it finishes, the program simply waits until the user next reconnects to send the results back).
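The coordinator/client cycle described above can be sketched in a few lines of Python. This is a hypothetical, in-memory sketch, not the code of any real project: the “network” is just a queue, and the number-crunching is a stand-in calculation.

```python
import queue

work_units = queue.Queue()   # the central server's database of pending work
results = {}                 # results reported back to the server

# The server splits a big problem into small work units.
for unit_id, numbers in enumerate([[1, 2, 3], [4, 5, 6], [7, 8, 9]]):
    work_units.put((unit_id, numbers))

def client_process():
    """One volunteer PC: fetch a unit, crunch it during idle time, report back."""
    while not work_units.empty():
        unit_id, numbers = work_units.get()
        answer = sum(n * n for n in numbers)   # stand-in for real number-crunching
        results[unit_id] = answer              # "upload" the result to the server

client_process()
print(results)  # {0: 14, 1: 77, 2: 194}
```

In a real project many such clients run at once, each grabbing whichever unit the server hands out next; that is the whole trick.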
In short, it is both simple and brilliant.
Technology and Commercial Applications
Most of the projects operational today are not-for-profit. Some offer prizes, and some are merely “net philanthropy”: contributing, in a small way, to solving the issues dogging mankind. While most people cannot afford large donations to organizations funding cancer research, most people who have a PC can afford to contribute some idle computing time to the cause.
Some projects are even controversial in nature, such as cracking security codes. Organizations like the Electronic Frontier Foundation support many projects, including more controversial ones.
But is there a corporate application here? Perhaps the appeal of this technology to technophiles is that it is so “un-corporate,” and almost “frontier-like” in its application, while at the same time revolutionary — really much like the Internet itself.
Many of the not-for-profit projects use the Open Source model, a model whereby individual programmers contribute their time to the project, so corporations generally do not profit. As with the Linux operating system, the world’s largest Open Source project, some corporations could make money in more support-oriented roles.
However, there are some corporations lingering on the outskirts. Many have designed their own proprietary peer-to-peer software (both client and server) that can be purchased and used by research institutes and the like to run distributed computing projects.
Some have taken this a step further and actually act as agents for the projects, in turn offering to pay Internet users for their computers' idle time. This author found one such project, but discovered it was not currently “on.” This business model has been examined by many prominent publications, including The Economist, and it was largely deemed too early to tell whether there is a serious commercial application or not.
Your author believes there is a very serious application, quite apart from the research projects the technology currently assists.
First and foremost, the fundamental peer-to-peer application of the Internet is the one concerning the names and numbers at its very heart. Currently the Internet operates off a few handfuls of central-registry computers that hold all the names (such as PaulHolmes.ca) and numbers (such as 100.100.10.1) on the Internet.
This technology is not beyond comprehension. Many private companies and organizations use it internally, for their own purposes. One such organization, OpenNIC, has set up its own root servers for the Internet and makes decisions about their operation democratically, in challenge to the monopoly of ICANN, the not-for-profit California corporation that operates the Internet's root servers (but that too is another matter).
The central computers on the ICANN root distribute name and number information to sub-servers, which pass it down the pipe to “sub-sub-servers,” until it reaches the ISP nearest the user. Distributed technology at its finest, underpinning every single thing on the Internet (and turning a tidy profit for domain registration companies to boot). E-mail is another fundamental function of the Internet that utilizes peer-to-peer technology.
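That root-to-sub-server hand-off can be pictured as a walk down a tree. The sketch below assumes a drastically simplified two-level hierarchy (root hands off to a top-level-domain server, which holds the final records); the names and addresses besides the article's own example are illustrative only.

```python
# Toy model of the hierarchical name distribution described above.
# Real resolution involves many more levels and caching along the way.

ROOT = {
    "ca": {                                  # the ".ca" top-level-domain sub-server
        "PaulHolmes.ca": "100.100.10.1",     # name/number pair from the article
    },
    "org": {
        "somegroup.org": "10.20.30.40",      # purely illustrative record
    },
}

def resolve(name):
    """Walk from the root down to the sub-server that holds the record."""
    tld = name.rsplit(".", 1)[-1].lower()
    tld_server = ROOT.get(tld, {})           # root hands off to the TLD sub-server
    return tld_server.get(name)              # sub-server returns the number, or None

print(resolve("PaulHolmes.ca"))  # 100.100.10.1
```

The point of the structure is that no single machine needs the whole database; each level only knows who to ask next.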
The central-server “corporate website” will also soon become passé. Many corporations have jumped on the “peer-to-peer bandwagon” and begun developing applications (loosely called web services) that distribute websites and other media content from several potential locations. This will revolutionize the World Wide Web, at the very least by reducing the chances of seeing “Page Unavailable” messages.
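The “several potential locations” idea boils down to trying each copy in turn and serving the first one that answers, so a single dead server no longer means “Page Unavailable.” Here is a minimal sketch; the mirror list, host names, and the outage are all simulated (a real client would fetch over HTTP).

```python
# Simulated mirrors: None stands for a server that is down.
mirrors = {
    "mirror-a.example": None,
    "mirror-b.example": "<html>home page</html>",
    "mirror-c.example": "<html>home page</html>",
}

def fetch(hosts):
    """Try each location in turn; return the first copy that can be reached."""
    for host in hosts:
        content = mirrors.get(host)            # stand-in for an HTTP request
        if content is not None:
            return host, content
    raise RuntimeError("Page Unavailable")     # only if every mirror is down

host, page = fetch(["mirror-a.example", "mirror-b.example", "mirror-c.example"])
print(host)  # mirror-b.example
```

With enough copies scattered around, the odds of all of them being down at once become vanishingly small, which is exactly the revolution the paragraph above describes.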
Interestingly, the Internet was largely non-corporate before 1996 or so. Then software and media companies came on board fast and furious, bringing their ingrained “stand-alone” software heritage with them, and developed “central-server” technologies like the World Wide Web to try to make money. Only now are they re-thinking these technologies to utilize the fundamentals of the Internet that have for so long been ignored.
So what applications will utilize distributed computing in the future? In short, every application on the Internet.
But in the meantime, be a cyber-philanthropist, and help solve some real-world problems.