Today’s computers are the result of many decades of evolution in techniques and technologies. From this process new techniques and technologies have sprung forth, and some of them are only just starting on their evolutionary journey. In time they will change the face of computing, but there’s a road bump to get over first.
The Evolution of Software
Computing started as theory [1] and became reality with hardware in the first half of the 20th century. “Software” development started with physically changing valves in the early 1940s; then came machine code, then assembly, and then we moved on to higher-level languages, making software development easier. BASIC was specifically designed to be easy and became very popular as a result, being supplied with most early personal computers. Visual Basic pulled off the same trick, and I think we will see the same happen again, even for large applications.
Another part of this trend has been the addition of ever more powerful libraries. You don’t need to worry about the specifics of data structures any more; you can just call up a library which manages it all for you. We now have software components which go a level higher, giving you specific functionality for all manner of different operations. Add to this the proliferation of virtual machines, application servers and middleware and you can see the programmer is doing less and less of the programming and more and more of the joining up of dots.
The next stage is to drop the complexities of Java and C# and do it all in scripting languages like Python [2], which allow easier, faster and cheaper software development. Microsoft are already doing this with the .NET version of Visual Basic, and it is on its way for Java with projects like Jython [3].
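As a rough illustration (my own sketch, not taken from any particular project) of why scripting languages lower the barrier, here is a complete Python program that counts the most common words in a file; the equivalent Java of the day would need a class, explicit types, stream handling and exception boilerplate before it did anything useful.

import sys

def word_counts(path):
    # Read the file, split it into words and tally them in a dictionary.
    counts = {}
    for word in open(path).read().lower().split():
        counts[word] = counts.get(word, 0) + 1
    # Sort by frequency, most common first.
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for word, count in word_counts(sys.argv[1])[:10]:
        print(word, count)

Run the same style of code under Jython [3] and it can call Java libraries directly.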
As application servers and middleware become more powerful, how long will it be before the vendors start shipping actual applications? All you will do then is customise the application to your own needs; you’ll still need to understand logic, but there will probably be tools to help you do even that. Programming could become very different in the future and open to a lot more people; the specialist skills that development requires today will be needed less and less, at least for business development.
I think the long-term trend for the software development industry is not looking good, but the trend for the home developer, the open sorcerer, is very different, quite the opposite in fact. I can see business development becoming so incredibly easy, and thus incredibly boring, that many developers will take to open source development simply for the challenge, so they can tackle complex problems in the language of their choice.
All software will be free (as in do what you want)
Patents do not last forever: everything patented today will be freely available in twenty years’ time. As software advances, all the techniques being invented will eventually become free and open for use by everyone. The difference then between open and closed source will be one of functionality rather than one of technique.
As open source software advances it will keep catching up with the proprietary vendors; there will come a time when you will not be able to tell the difference, at least in terms of functionality. The differences of integration and consistency will remain, but as more companies become involved in open source development the needs of users will be fed into the development process, and open source products will become as integrated and consistent as closed source products are today.
Linux will continue to grow but as it becomes more business-like I can see the potential for the more adventurous developers moving on to other platforms simply for the challenge. Arguably this is already happening and you don’t need to look far [4] to see that these days there is a proliferation of different Operating System projects.
All hardware will be free
The same applies to hardware patents; these too will become free for everyone to use.
I don’t see everyone making their own multi-million-transistor CPUs in their bedrooms any time soon, but with the increasing availability of open source tools and larger, faster FPGAs*, creating a CPU in your bedroom will become easier.
*An FPGA (Field Programmable Gate Array) is a “blank” chip you can wire up in software to do pretty much anything.
One day you may even be able to code new CPU specifications, and software will automatically create the CPU design for you and then program it onto an FPGA for you to use; of course it will also create a compiler so you can program and test your design. As with many future developments, this is already in development (or at least being considered).
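To give a flavour of what “coding a CPU specification” might mean, here is a toy sketch, entirely hypothetical and written in Python: the instruction set is described once as a table, and a tiny simulator runs programs against it. A generation tool could, in principle, derive both an FPGA design and an assembler from the same table; here we only simulate.

# Hypothetical sketch: a toy CPU described as data.
# instruction name -> operation on a register bank
ISA = {
    "LOAD": lambda regs, a, b: regs.__setitem__(a, b),                # rA <- immediate
    "ADD":  lambda regs, a, b: regs.__setitem__(a, regs[a] + regs[b]),
    "SUB":  lambda regs, a, b: regs.__setitem__(a, regs[a] - regs[b]),
}

def run(program, num_regs=4):
    # Execute a list of (opcode, a, b) tuples on a bank of registers.
    regs = [0] * num_regs
    for opcode, a, b in program:
        ISA[opcode](regs, a, b)
    return regs

# r0 = 5, r1 = 3, r0 = r0 - r1
print(run([("LOAD", 0, 5), ("LOAD", 1, 3), ("SUB", 0, 1)]))   # [2, 3, 0, 0]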
A (very) brief history of computer advancement
Much of the history of personal computers has been radical, advanced new technologies appearing (Apple I / II, Macintosh, Amiga, NeXT, BeOS) and then being copied by the competitors, especially the now ubiquitous Wintel PC. The pace of development has slowed in the last decade; we see constant advancement and innovation of sorts, but this is the incremental drip feed of evolution. Who in this day and age is building computers as radical, as advanced, as those platforms?
The Engine
Just as cars have engines, so does the technology industry, but it is not software developers or hardware designers: it is the semiconductor process engineers who have driven this industry for the past four decades. Gordon Moore noticed their progress back in the 1960s in what became known as “Moore’s law”, and they’ve been going solidly ever since. Perhaps I should call them “magicians” – how many other fields of human endeavour have seen such advancement in such a short time? They get little credit for it, but without them none of the other advances would have taken place; none of the above platforms or their technologies would even exist.
If it wasn’t for them we would not have the ever faster computers we are used to; we would not have the continuing advances in technology. Most of the performance of CPUs is not the result of advanced architectures; it is the result of ever finer process geometries being used: transistors get smaller, their threshold voltages are lowered and as a result their switching speed goes up. If it wasn’t for this continuing advancement we’d still be using CPUs whose clocks would be measured in kHz, not MHz or GHz.
The same progress has led to ever higher capacity and ever lower cost memories. I have an Amiga A1000 on this desk; it came as standard with 256 Kilobytes of RAM, and beside it is a digital camera which can take a memory card with a capacity 32,000 times higher [5].
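The ratio is easy to check, taking the card in [5] as 8 Gigabytes:

a1000_ram = 256 * 1024        # 256 Kilobytes, in bytes
cf_card = 8 * 1024 ** 3       # 8 Gigabytes, in bytes
print(cf_card // a1000_ram)   # 32768, roughly the 32,000 quoted above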
Because of ever more powerful semiconductor technologies we can have ever more advanced CPUs and ever higher memory capacities. Because of these, ever more sophisticated software can be written: faster, friendlier and more capable with every generation. Unfortunately, however, there is a problem: one day, sooner or later, the engine is going to stop.
The Engine Stops
One day you are going to walk into a computer shop two years running to buy the fastest CPU, and both times you’ll get the same CPU; Intel, AMD and IBM will not have produced a faster one. That may be inconceivable today but it will happen. It may be 20 years yet, maybe even longer, but it will happen.
It’s been predicted for many years, but time and again the process engineers have found ways around the road blocks and kept going. Eventually even they are going to hit immutable physical limits. Eventually we will all find out that Moore’s law is not a law after all.
Of course they will keep trying; perhaps we will see the number of layers increased so chips can be built upwards. This is already done to a degree today, but could you double the number of transistors by building upwards? I bet they’ll try.
They’ll try different materials and techniques as well, no doubt. Even after Moore’s law stops, chips will keep advancing. The Cray 3 [6] used gallium arsenide instead of silicon for its transistors; perhaps they’ll try that and we’ll see power budgets going completely bananas – the Cray 3 used 90,000 Watts!
On the other hand, technologies sitting on the sidelines today could come to the fore. It’s astonishing what’s out there even now: using superconductor technology, an 8-bit CPU operating at 15.2GHz has recently been developed [7]. What’s more, despite its stratospheric clock speed it uses just 0.0016 Watts: 18,000 times less power than Intel’s low-power Pentium M, or a colossal 56,000,000 times less power than the Cray 3.
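Those ratios are worth a quick sanity check; the Pentium M figure below is simply the one implied by the 18,000 times claim, not a measured number:

cray3 = 90000.0                  # Watts, as quoted above
superconductor_cpu = 0.0016      # Watts, the 15.2GHz 8-bit CPU [7]
print(round(cray3 / superconductor_cpu))        # 56250000, the ~56,000,000x figure
print(round(superconductor_cpu * 18000, 1))     # 28.8 W implied for the Pentium M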
One thing that could make a very big difference is finding a way to cut the price of manufacturing chips. An unfortunate side effect of Moore’s law is the ever increasing cost of manufacturing plants (known as fabs); these costs may kill off progress long before the physical limits are ever hit.
It could come to a sudden halt, brought on by simple physical impossibility, or, more likely, the engine will stutter out. Either way the engine will stop, and from that day on computing will be in a different world.
Is this the end, or just the beginning?
Ironically, the end of Moore’s law could actually be a good thing: all this advancement has allowed software developers to get lazy. The art of efficient programming has all but died; abstractions and easy programming techniques rule these days. Did you know it was possible to have a GUI on an Apple II? Did you know it is possible to do real-time 3D on a Commodore 64? When programmers are restricted they can solve seemingly impossible problems. When Moore’s law runs out they are going to hit exactly these sorts of problems.
Modern desktop computers are extraordinarily inefficient. We could be in for a programming renaissance as developers find ways to fight the bloat and bring the computing power already present in modern systems to the surface.
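To pick a trivial, made-up example of what gets left on the table: the same question answered two ways in Python can differ by orders of magnitude, and most code never bothers to ask which way it is using.

import time

items = list(range(1000000))
as_set = set(items)

start = time.time()
found = 999999 in items       # linear scan through a million entries
list_time = time.time() - start

start = time.time()
found = 999999 in as_set      # single hash lookup
set_time = time.time() - start

print("list:", round(list_time, 6), "set:", round(set_time, 6))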
But this is evolution, not revolution. A revolution will come, and it will come when a number of powerful technologies combine to create something that today we would not recognise as a computer: something new, but strangely familiar.
Software becomes hardware
There are other ways of improving performance than better programming; moving software functions into hardware is one of them. A future CPU could contain a set of general purpose cores, special purpose cores and a big FPGA.
FPGAs are probably the future of hardware computing: general purpose hardware is too slow, while special purpose hardware is inflexible and takes up room. FPGAs cut between these lines; they allow very high speed computation but have the advantage of being reprogrammable.
We don’t know when FPGA hardware will replace high speed CPUs (it’s been predicted for years) but when the engine stops we will find the advances it has provided us with are quite enough to be getting on with. The rapid progress made in the last 40 years will allow the next stage of computer revolution to begin.
An FPGA-based computer will be nothing like anything on your desktop today. There are no registers or pipelines as in normal CPUs; in fact there’s not much more than a big bunch of gates, and you have to figure out how to get them to do your work. Sure, there are tools available, but these are really for electronic engineers, not programmers. If you like a challenge…
FPGAs make our computers more powerful than ever, and a lot more difficult to program. Just how exactly does a (insert language of choice) programmer write software for an FPGA? Luckily there are more familiar languages that can be used, such as Stream-C [8], so you don’t have to go learning hardware description languages such as Verilog or VHDL.
The technology for programming FPGAs will move into the mainstream. Rather than having software libraries to do everything, you will have hardware libraries: when a specific problem is encountered the CPU will program the relevant library into hardware and use it, and this will boost performance into the stratosphere.
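What a “hardware library” might look like to the programmer is pure guesswork at this stage, but one plausible shape is a registry the runtime consults: if a hardware configuration exists for an operation it is loaded onto the FPGA, otherwise a plain software routine runs instead. The sketch below is hypothetical from end to end; the fpga handle and its methods are invented for illustration.

class HardwareLibrary:
    def __init__(self, fpga=None):
        self.fpga = fpga       # imaginary handle to a reconfigurable device
        self.ops = {}          # operation name -> (bitstream, software fallback)

    def register(self, name, bitstream, fallback):
        self.ops[name] = (bitstream, fallback)

    def run(self, name, *args):
        bitstream, fallback = self.ops[name]
        if self.fpga is not None and bitstream is not None:
            self.fpga.configure(bitstream)    # reprogram the fabric...
            return self.fpga.execute(*args)   # ...and run the operation in hardware
        return fallback(*args)                # otherwise stay in software

# With no FPGA attached everything quietly falls back to software.
lib = HardwareLibrary()
lib.register("dot_product", bitstream=None,
             fallback=lambda xs, ys: sum(x * y for x, y in zip(xs, ys)))
print(lib.run("dot_product", [1, 2, 3], [4, 5, 6]))   # 32

The interesting part is that the calling program need not know or care which path was taken.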
Computers will become adaptive, optimising themselves for the task you are doing. Want to encode a movie to MPEG? It’ll transform itself into a giant MPEG encoder then tear through the entire movie as fast as the storage system can deliver it. When FPGA based computing becomes widespread who will need fast general purpose processors any more?
However, learning to program FPGAs isn’t a question we are going to be worrying about, because we are not going to be doing the programming; someone, or rather something, else is.
Artificial Intelligence is a long-held goal of scientists and science fiction authors alike; it’s got its ticket and has boarded the bus. The road may be long, but like everything else I’ve written about, much of the technology is here today. However, unlike anything else I’ve covered, AI will have a much greater impact on computing and wider society. AI is going to change everything.
There is a revolution coming. I didn’t say it’s going to be nice.
—————————-
References
[1] http://news.bbc.co.uk/2/hi/uk_news/330480.stm
[2] The Python programming language. http://www.python.org/
[3] Write Java in Python. http://www.jython.org/ (JPL is the beginnings of a project to do the same with Perl.)
[4] A website just for Operating Systems 😉 http://www.osnews.com
[5] 8 Gigabyte CompactFlash card. http://www.infosyncworld.com/news/n/4605.html
[6] Cray 3. http://www.scd.ucar.edu/computers/gallery/cray/cray3/graywolf.html
[7] 15GHz! I did say all the action was at 8 bits… Microprocessor Watch Vol 116.
[8] Program hardware with a variant of C. http://www.eedesign.com/story/OEG20021018S0060
Copyright (c) Nicholas Blachford February 2004
Disclaimer: This series is about the future and as such is nothing more than informed speculation on my part. I suggest future possibilities and actions which companies may take but this does not mean that they will take them or are even considering them.
Hi Nicholas, we enjoyed reading your articles. You write well and have a solid grasp of many of the moving parts, which makes your suggestions and thoughts worth considering.
Thanks!
R&B
Everything will be free, and you will have a machine as powerful as a Cray in every fridge. Dream on.
That series of articles is really terrible and disregards so many fundamentals (technical, business, law, etc) that it’s just laughable.
>> Computers will become adaptive, optimising themselves for the task you are doing. Want to encode a movie to MPEG? It’ll transform itself into a giant MPEG encoder then tear through the entire movie as fast as the storage system can deliver it.
general purpose cpus will be ultra powerful (thus negating most asics)
asics will evolve from general purpose cpus given programmable logic
so what is it? the only conclusion i can draw from this series is that in the future there will be stuff
the future of computers…
http://www.theregister.co.uk/content/53/35824.html
is eugenia’s pda ;D
…
[ http://osnews.com/story.php?news_id=6107 ]
Basic was specifically designed to be easy and became very popular as a result being supplied with most early personal computers. VisualBasic did the same trick and I think we will see the same happen again even for large applications.
Visual Basic took what was already there, then added drag & drop/point & click GUI development and simple component development. This point would’ve fit in much better with your eventual point in this section, but you seem to have missed it. Something else VB eventually proves to most developers is that complex applications need a language better suited to handling the complexity than VB. You can write large, complex applications in VB, but they quickly become a pain to maintain.
Another part of this trend has been the addition of ever more powerful libraries. You don’t need to worry about the specifics of data structures anymore, you can just call up a library which manages it all for you. We now have software components which go to a higher level giving you specific functionality for all manner of different operations. Add to this the proliferation of virtual machines, application servers and middleware and you see the programmer is doing less and less of the programming and more and more of joining up the dots.
Except, of course, that the programming to “join up the dots” is sometimes more complex than the programming required for the individual dots. Beyond that, someone’s still going to be programming better dots. If you don’t know anything about data structures, how do you know which one you need for the job at hand?
The next stage is to drop the complexities of Java and C# and do it all in scripting languages like Python [2] which allow easier, faster and cheaper software development. Microsoft are already doing this with the .net version of Visual Basic and it is on it’s way for Java with projects like Jython [3].
How can VB.Net be an advancement beyond C# when the two operate in the same realm, with the same libraries and most of the same language features? The biggest difference between the two is the syntax, which makes C# much easier to use than VB for those familiar with C or C++, and consequently also makes C# lend itself to more complex programming somewhat better than VB. While I sometimes enjoy interactive and scripting languages like Python, I have to say that Python itself is probably a poor example of a language advancing easier/faster/cheaper programming, since many developers never deal with enforced use of space and indentation, and these have actual meaning in python.
As application servers and middleware become more powerful how long will it be before the vendors start shipping actual applications? All you will do then is customise the application to your own needs, you’ll still need to understand logic but there will probably even be tools that even help you do that. Programming could become very different in the future and open to a lot more people, the specialist skills that development requires will be less and less required, at least for business development.
Some vendors will do this, and will probably help drive the industry to your envisioned goal, but overall most of them see more money in the services industry. People make very good money selling hard-to-understand services that have to be integrated into an environment, and then selling the integration services to go with it.
I think the long term trend for the software development industry is not looking good, but the trend for the home developer, the open sorcerer, is very different, quite the opposite in fact. I can see business development becoming so incredibly easy and thus incredibly boring that many developers will take to open source development simply for the challange, so they can tackle the complex problems in the language of their choice.
I can only wish that business development would become incredibly easy. I could then spend far less time actually working and far more time coming up with new ideas, making myself more useful to the company. As it stands, if Microsoft’s drive is even moderately successful, the trend is towards being able to use the language of your choice with any given application, server, or service. You don’t need to learn VBA any more to talk to Office, you can do it in VB, C#, C++, or any number of other languages (even Perl). Most programmers spend more time learning libraries and interfaces rather than languages when they’re working. The 2 weeks I spent learning VB when I started working for this company (and the subsequent years spent learning it) has been left in the dust with the release of C#, allowing me to leverage the years I spent with C and C++, while still dropping into VB if I need to when a particular language element is giving me a hard time, and can be done more easily in that language. Even better, I can fire up C++ and explicitly drop in and out of managed code to isolate the less portable sections, rather than dealing with MFC and other MS-specific libraries, especially now that VC++ is so much more compatible with ISO C++ than it was before.
Learning a language becomes an easy task after you have a solid understanding of the features of that language (from some previous language, perhaps). It’s learning the new interfaces, the new libraries that come along with each new piece of hardware or software brought into your environment that takes up most of the time of many business programmers. Making software more flexible and more adaptable to changes in the environment is what developers have to spend their time on today, and as time goes by it will quickly become the difference between a working developer and one trying to find a new job.
“Software” development started with physically changing valves in the early 1940s
just for the record :
http://irb.cs.tu-berlin.de/~zuse/Konrad_Zuse/en/Rechner_Z1.html
Software development with higher-level languages started amazingly early in the history of computing:
http://irb.cs.tu-berlin.de/~zuse/Konrad_Zuse/plank.html
More at the Unesco Memory of the World Register http://www.unesco.org/webworld/mdm/2001/nominations_2001/germany/zu…
But the article series was interesting, sometimes astounding.
cheers,
frank
>> Computers will become adaptive, optimising themselves for the task you are doing. Want to encode a movie to MPEG? It’ll transform itself into a giant MPEG encoder then tear through the entire movie as fast as the storage system can deliver it.
general purpose cpus will be ultra powerful (thus negating most asics)
asics will evolve from general purpose cpus given programmable logic
so what is it? the only conclusion i can draw from this series is that in the future there will be stuff
Yes, in the future, there will be stuff. The great thing is how easily this particular point is disproven, as MPEG-2 decoders (and encoders) are already common on video cards as it is. Computers are evolving into multiple specialized processors rather than general purpose processors that are reprogrammable for particular tasks. Not to mention that we have things like parallel processing and (obviously) multitasking becoming more prevalent. Why would I want my CPU to become a huge MPEG encoder when I can put a TV decoder card in my computer with an MPEG encoder built in that can already handle the quality of current programming? MPEG isn’t going to be something you do as quickly as your storage system can handle it; instead it will be raw video and audio, because storage is becoming cheaper every day, and anyone who wants the best possible quality will move to lossless compression first, then compressionless encoding second (if such lossless compression proves to be too little benefit compared to the processor cycles it takes up, or proves to actually have some loss in it).
I’d much rather watch a movie in real time while it encodes it and sends it to my drive in the background (much like I do with music now) than have my entire computer reconfigure itself to the one task of encoding the movie. I think most people would agree with me on this.
The trend will be that fewer people care about technology and more about ‘coolness’, including UI coolness. Oh well, perhaps in the future computers won’t have to do anything but just look cool and do all the same stuff – text notes and web browsing.
i think that this series is excellent. the author did a great job and he’s very clever. all the ideas are clearly presented and the content is simply awesome.
well done, keep going!
daniel from buenos aires
PainKilleR has solid arguments, while the author does not.
People who don’t know what they’re talking about should shut up. My 2 cents ;P
I’d much rather watch a movie in real time while it encodes it and sends it to my drive in the background (much like I do with music now) than have my entire computer reconfigure itself to the one task of encoding the movie. I think most people would agree with me on this.
I watch my other movies, streaming them over the net while my box encodes away. But I put my encoding in the background so it only uses the extra CPU cycles I’m not using.
It might be nice to watch a movie while it is encoding, but I’d rather set up a batch process to encode all my DVDs as soon as I rip them and use my cluster of systems to handle the CPU requirements.
Linux and DVD::Rip do this nicely.
FPGA: I still don’t understand why this hasn’t caught on faster. Xilinx should build a consumer PC based around it.
I always thought the only way to have the Amiga back for real would be to have it 100% FPGA based. Imagine an Amiga without the limitations of the custom chips but with all their advantages.
If the PCI bus were a lot faster, a PCI FPGA card would also make for a nice BeBox with a hardware media kit and translators.
Hey, Nicholas, the article is terrific indeed, but that’s “Part 5” according to the title. I guess I’ve missed the other 4 parts ) Where can I grab them?
I’m pretty eager to point the users of my community to this article, but I guess it’d be not very wise to start from point 5 though.
The series of articles are marked as “editorials”. Simply navigate to our “editorials” section from our menu and grab the other 4 articles.
More likely we will go to quantum-based computers. Though they are still in their infancy, I do think that they will eclipse regular or ‘classical’ computers in time.
Yes, there has yet to be a functional general purpose CPU built using quantum technology, but the possibility of being able to double the processing power of a computer by adding only a single atom to the processor is too much to shove aside.
Not that I expect them to come out anytime soon.
bbrv wrote:
Thanks
You’re very welcome 🙂
Everything will be free, and you will have a machine as powerful as a Cray in every fridge.
The Cray 1 did 66 MegaFlops; it’s difficult to find something that _slow_ these days. Hint: Cray’s advantage was not just high clock speed.
so what is it? the only conclusion i can draw fomr this series is that in the future there will be stuff
A mixture: some tasks are better done on a general purpose CPU, some better on an FPGA, some better on an ASIC (ASICs are lower power).
PainKiller
Good points, thank you.
but…
Why would I want my CPU to become a huge MPEG encoder when I can put a TV decoder card in my computer with an MPEG encoder built
The point about the entire computer turning into an encoder is just an example, it could be only part encoder.
What if you now decide TV and MPEG suck and decide to use HDTV and H.264? Your card gets replaced while my FPGA just gets reconfigured…
Oh well, perhaps in the future computer won’t have to do anything but just look cool and do all the same stuff – text notes and web browsing.
Many industries have been getting away with this for years!
because only in my wildest dreams can i imagine all that.
If those are your wildest dreams I’d hate to see your boring ones…
Pretty much everything I described there exists TODAY. It’s just not on your desktop yet.
Where’s part 4? 3? 2?
…or search on Blachford
More likely we will go to quantum based computers
I’m sceptical of these: if you could double your processing power with an atom, how do you know it’s given you the correct result?
Quantum effect transistors on the other hand are a different matter and I can see them appearing.
Others:
Thanks
>>Where’s part 4? 3? 2?
>…or search on Blachford
Or, alternatively:
http://www.osnews.com/topic.php?icon=5
or even more alternatively:
http://www.osnews.com/search.php?search=blachford
🙂
Thanx Eugenia & Nicholas Blachford. That was quite helpful for me. Wonderful articles, I should note again!
I’m sceptical of these, if you could double your processing power with an atom how do you know it’s given you the correct result?
I’m sure that transistors that use quantum effects will happen, because they are already at that scale.
But creating a processor that works only by quantum effects may be the only way to go forward eventually. Physics limits the ability of ‘normal’ transistors because of the rules they work by. Current processes work by ignoring or working against the quantum forces, which are undeniable at the size level that processors are at today.
It’s hard to compare, but it’s like going from classical physics to quantum physics. Nothing makes sense until you relearn everything you thought you knew.
Yes, I feel like I am reading PC Magazine. This was terrible. All software is not free 20 years after its creation. Companies aren’t going to all be open; they’re moving the other way, toward continuing license fees!
An 8 bit processor is by no means a supercomputer. Do you know how large a number 8 bits can represent?
This is just mind boggling. Aaaaah, it hurts to read it.
Oh, and programming goes at least back to textile machines during the 19th century.
– As an FPGA developer (in VHDL & Verilog), I can say that FPGAs will never be as fast as custom chips for a given application. For example, on equivalent technologies, an FPGA-made general purpose CPU will always be slower than a special purpose chip, as the programmable interconnect takes up space on the chip and adds time delays.
– To date, FPGAs are preferred for low quantities, as the initial cost of an ASIC is very high. The high volume of FPGA chip production makes advanced technologies more affordable than an equivalent-performance ASIC made on a less advanced process (for example, comparing a 90nm FPGA with a 150nm ASIC).
– Most FPGAs are reconfigurable on the fly. This can be used for advanced signal processing, for example (like a programmable modem or video codecs), but I don’t think it will really be usable for general purpose processing, as the reconfiguration of a processor is an awfully complex task. What could be done is a configurable hardware “emulator”: with the same hardware, switching between an Amiga, an Atari, an Apple ][, a Commodore 64, … the goal would not be absolute maximum number crunching performance.
– The trend of FPGA manufacturers today for high performance CPUs is to integrate a general purpose CPU into the FPGA fabric (PowerPC for Xilinx, ARM for Altera) rather than trying to build up a full-fledged CPU directly in programmable logic. It can nevertheless be done for legacy hardware (say an 8051 or a 6502 in the chip), not very horsepower-intensive tasks, or very specific processing (you could make a 13-bit-word processor counting in Gray code with compressed data transmission and base-7 floating point…).
– For a given FPGA, a CPU made in the programmable fabric will take more area and will be slower (so more expensive) than a fixed, hard-wired CPU.
– The evolution of semiconductor technologies will slow down significantly when the skyrocketing cost of the fabs, the unmanageable rise in power dissipation, and the lower initial quantities of ASIC designs intersect.
[ Anyone for correcting my English language mistakes ? ]
* Why would I want my CPU to become a huge MPEG encoder
* when I can put a TV decoder card in my computer with an
* MPEG encoder built
The point about the entire computer turning into an encoder is just an example, it could be only part encoder.
What if you now decide TV and MPEG suck and decide to use HDTV and H.264? Your card gets replaced while my FPGA just gets reconfigured…
Fortunately for me, current TV decoder cards can handle HDTV resolutions and MPEG-4 (w/ H.264). Even with reprogrammable FPGA (and I have to say referring to reprogrammable chips as FPGA can be confusing, since many of the Intel Pentium line of CPUs are on FPGA sockets), you either have to be well versed in a programming language that can be used with the chip, or someone else has to be, and has to release the code for what you want to do.
As time has gone by, we’ve moved further away from generic chipsets, though complex specialized chips have become more programmable for different tasks. The problem there, though, is that the range of the programmability of specialized chips has so far been limited to particular areas, such as programmable shaders on graphics chips and programmable logic and I/O chips that handle low-level, high-throughput applications with very specific routines.
Perhaps in this case, though, I’m speaking more specifically in my own areas of knowledge and have missed something blatantly obvious. For instance, I’m only really aware of what current decoder cards can handle because I’ve been looking at them for the purpose of building a computer for my TV (rather than buying something like a TiVo that will be too specialized to really fulfill my needs), and most of my run-ins with programmable chips have come in my line of work, which relies heavily on replacing older specialized I/O solutions with PC-based solutions (though we still usually have to rely on PCI (and up until very recently ISA) I/O boards simply because of the number of I/O lines being handled and the general lack of other PC-based solutions to the problem).
Personally, i have respect for people like you who try to speculate on the future of computing. It is _extremely_ difficult, especially -and more and more- when it is about the longer run. Having read all of your previous articles, i’d like to urge you to re-evaluate your analysis before you post it. For example, by letting someone else read the article before you post it (my own favorite technique when i post an article, partly because of possible grammar mistakes as well).
When i read your article, i stumbled on the following proposition:
“All software will be free (as in do what you want)”
If i understand this correctly, you claim both free as in speech and free as in beer. How exactly do you think this will be realized then? Which economic model, if there is any at all? Who will pay the programmers? How will the programmers be “employed”? For example, do you think programmers will employ themselves in a sort of grassroots company system, or do you think this will become something like a gift economy, or […]? What would be the trend, and why?
“Patents do not last forever, Everything patented today will be freely available in twenty years time.”
Ok, but how can you be so sure about this? Regarding copyright, terms have been extended right before they’d expire. The very same _could_ become true for patents. So i’m wondering how you can be so sure about this assertion? Without any further analysis on whether, e.g., my assertion regarding non-expiration will turn out true or false, it is my opinion that you are not in a position to state what i quoted earlier.
“As software advances all the techniques being invented will eventually become free and open for use by everyone.”
What do you define as “technique”? Source code, because it explains _exactly_ how a program works, falls under the definition of “technique” from my point of view (imo it’s an art first of all); i don’t see how all source will become open. If you agree on the assertion that source code is a technique, i’m looking forward to your analysis on why all source code will be “open” or “available” for everybody between now and “eventually”.
“The difference then between open and closed source will be one of functionality rather than one of technique.”
Right. Here i can conclude, because you say the difference between open source and closed source won’t be one of technique, that you don’t see “source code” as part of the definition of “technique”. We disagree on that then. Perhaps we can discuss why you don’t see source code as a technique while i do. More interesting, however: when all techniques are open, isn’t the functionality of open and closed source about the same too?
You see? I think this is a chicken-and-egg problem.
Finally, what i find extremely unfortunate is that you do not mention Trusted Computing in this regard. You do mention patents but do not state how to solve this. You ignore the Trusted Computing threat, even though these two are seen as dangers to the FLOSS world by FLOSS advocates, civil rights groups, et al. You _ignore_ these points. Lately i saw and read a few articles and a video regarding Eben Moglen, and he addresses both and points out how important both are. A third threat i see is the development of quantum computing vs cryptography. I fear that at some point people will think they’re safe with crypto and rely on crypto, while secret services & governments know how to easily “crack” it using quantum computing which has by then been developed in secret, while it is not yet known to the general public that it has been developed already.
That said, and excuse my wording if you feel offended, imo your vision is (regarding _this_ very point, which i find intriguing) rather a dream than a realistic analysis of the future, though i mostly welcome any further analysis on this subject.
Personally, i have respect for people like you who try to speculate on the future of computing. It is _extremely_ difficult, especially -and more and more- when it is about the longer run.
Thank you, just wait for part 6 😀
For example, by letting someone else read the article before you posing
Not a bad idea, but I don’t really have anyone to send them to. I am very careful checking them though. I do sometimes get complaints about my grammar, but that’s a British vs American thing.
“All software will be free (as in do what you want)”
If i understand this correctly, you claim both Free as in speech and free as in beer.
That’s in reference to the patents; I don’t mean all companies will suddenly open their code, but all the techniques that were patented previously will be free to use.
By “technique” I mean algorithm, not source code. It is possible to implement an algorithm in different languages in which case the source is completely different.
Who will pay the programmers?
I’m not saying it’s good or bad, just that it will happen.
Open Source gives programmers “freedom” while at the same time removing their bread and butter. Also even in the same language you could have two different pieces of source to implement a single algorithm.
I for one disagree that all source should be free; the idea that you can make money from services is nonsense, as it only works for big complex programs. Single programmers will find this very difficult, especially if they don’t like doing support. You raise a very good point, but at the moment there is no answer to it.
“Patents do not last forever, Everything patented today will be freely available in twenty years time.”
Ok, but how can you be so sure about this?
I can’t, but current law means patents expire after 20 years. I think that is a ludicrously long time for a rapidly evolving industry. It could be extended but I think that’s a seriously bad idea.
Finally, what i find extremely unfortunate is that you do not mention Trusted Computing in this regard.
Could you explain your point further? I’m not sure what you mean.
For Government breaking codes, sure…
I’m not convinced on quantum computing, but the government have big systems, that’s for sure. Did you know the NSA have their own fab producing Alphas? (Well, that’s the rumour.)
The British Government sold many countries German code machines after the Second World War; they didn’t, of course, tell these countries that they’d long since figured out how to crack them…
P.S. Comments will be closed here at some point; feel free to send me an e-mail if you wish to continue.
What’s the bump already? It’s unclear. Now, if you wanted to say that it will be nice to personally track CMM and value at home for the languages and capabilities around, that might be one thing. If you think hardware innovation has to hit a wall because gate count can’t double easily, that’s unfounded. Ditto with pricing; just because India likes it doesn’t mean the fun’s over! (Well, until we find 20M cranky Indians to agree.)
If free firmware proliferates and machines fill each other up with junk, I can see a clear need to buy a new one from that extension of software. If the one unit caching all the RC5 and RSA3 keys for all the others fails, you go out and get the Agent Smith chip.
Once it is all working again, you buy solar cells that also cool the chips to solve the serious problems with overheating in 3D chips (water in the vias or nay) and quite probably treat human waste (another problem set to roost at the same time.)