Science: What will computers be like?

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
Here's a thought...

Paying the PC shop to build my new PC for me and being in control of its specs got me wondering earlier...

What will the assortment of RAM and processors be like in the future?
For gaming? For daily life? For work? For... anything? Like suggested in this article I read...

http://sciindustry.com/Computers.html

This portrays what computers will assist with in our daily lives in the future...

But knowing us lot, we would probably be playing Orbiter 2020P1 in super hi-def level 30 textures and the Ravenstar Mk-VII, with super-interactive touchscreen monitors and over 500 fps.

So what do you think computers will be like... and capable of in the future?

My opinion...

10 years - super processors, so games (like Orbiter) can handle the crazy stuff above.

20 years - computers help us control our money, house, health, work and everything else.

50 years - complete interactivity: being able to surf internet shops not by clicking but by immersing yourself in a fully interactive 3D environment. eBay will be like a real-life supermarket, all simulated by a headset, goggles and sensors of some sort, and any material goods will be able to instantly materialise for use at the touch of a finger (no mouse in a 3D interactive environment), whether it be food, clothing or accessories.

100+ years - something similar to the Holodeck, or like that movie (Gamer, I think it's called?).

http://www.imdb.com/media/rm1878165504/tt1034032

500+ years - so our population doesn't have to suffer through what our fragile planet will probably have become by then, all people will abandon their material form for a life of data and variables, living completely inside a super-massively multiplayer super-server computer. We will have a choice: become an online data stream, or go into space and look for a new planet.

(I don't support this; I'm all for space exploration and finding new habitable planets. This is a
'just if' because of the topic this is on.)

What do you guys think?

:cheers:

EDIT

This looks quite interesting too...

http://www.microsoft.com/surface/en/us/default.aspx

...Maybe my 50-year mark should be a lot earlier :O
 
Last edited:

Pyromaniac605

Toast! :D
Joined
Aug 15, 2010
Messages
1,774
Reaction score
0
Points
0
Location
Melbourne
To be honest I'd change 10 to 5 and 20 to 10; after all, processing power doubles every year, I think (I could be wrong here). As for the 50-year mark, that should also probably be a bit closer.
I hope I don't have to wait 100 years for a holodeck, and as for becoming data... just no... I'm sorry, but that's just ridiculous.
 

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
as for becoming data... just no... I'm sorry, but that's just ridiculous.

I agree, I don't like the idea either; it's just an idea from the Matrix kind of reality theory.

I was just thinking, though: what if someone hacked that kind of server? All the money and god-like powers a person could handle. A good idea on one or two points, but a bad idea on many others; for example, who would be able to control such a thing? And it's just morally wrong, I would think.

I agree that processing power is dramatically increasing among household computers and my year marks are probably way off, and a real-life holodeck in our lifetimes would be pretty sweet.

:cheers:
 
Last edited:

Turbinator

New member
Joined
Dec 12, 2009
Messages
1,145
Reaction score
0
Points
0
Location
Tellurian
[ame="http://www.youtube.com/watch?v=SxFIk-kJNnQ"]U-Touch 138" Video Wall Demo - YouTube[/ame]
 

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
That's pretty darn amazing!

Immersing oneself in something like that U-Touch or the Microsoft Surface so it becomes a 4D environment (visual and other senses), with super gaming specs (for Orbiter 2020), would be MY ideal future for computers in the next 10-20 years.

Other people's ideas may differ.

:cheers:
 

Turbinator

New member
Joined
Dec 12, 2009
Messages
1,145
Reaction score
0
Points
0
Location
Tellurian
Wow, now that would be something.


Take a look at this graph though:
[Graph: Intel processor clock speeds, 1970-2006]
 

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
It's going down? D: If that's true, I think the problem may be that more cores is the wrong approach to improved computing!

This kind of tech needs a different type of CPU or 'core replacement' in my opinion; I think you cannot keep evolving this tech without reaching a maximum... but 5 GHz processing! When did that happen!

But what could replace it, if it is starting to reach the max?

Or am I talking a load of :censored:? Or did I misunderstand that graph?
(I don't know computers very well, which is bad when I'm trying for a B in both Electronics and Computers/IT at school.)

:cheers:
 
Last edited:

Turbinator

New member
Joined
Dec 12, 2009
Messages
1,145
Reaction score
0
Points
0
Location
Tellurian
Everyone in the industry knew this would eventually happen, but no one was really sure when. It ended up happening earlier than expected -- I think the industry consensus was that it was going to top out at about 10 GHz, which is why Intel made a huge (and ultimately failed) bet on the "Netburst" architecture. It was designed to run optimally at about 6 GHz, but they never got that high, and at low clock rates it made too many concessions.

They're up against physical limits: the speed of light, Planck's constant, the size of atoms, and a couple of others. One big problem is tunneling, quantum leakage. As transistors get smaller, and as you use smaller voltages, there's a greater and greater chance that an electron will jump from the source to the drain even when the FET is "off". The smaller the FET, the more of that you'll see.

You can prevent that by using higher voltages. If the hill is taller, the chance of an electron tunneling is lower. But if the voltage is higher, then it means you have to use more charge in the gate, which makes the switching time slower. But if you don't do that, eventually there comes a point where the quantum leakage approaches the level of a normal signal, and then you can't tell if the FET is "on" or "off".

Also, using a higher voltage means you use more power, and cooling is a real issue.

We haven't outright topped out yet; it's still possible to make more gains. But we're near the limit of what's possible with MOSFETs. And we're also near the limit of what we can buy with making the devices smaller. Right now it's down to the point where some insulating layers are less than 10 atoms thick. At a certain point when you're trying to get smaller, you start running into granularity issues, and we're near that point.

There are two alternate approaches which could conceivably yield vastly higher switching rates, but both are radically different from anything we're currently using. One is light gates. (I don't know what the official name of this is.) The other is Josephson junctions. There has been research into both for decades, but neither is remotely close to being ready for prime time. (A "big" device for either right now is 50 gates. There's a non-trivial issue scaling them up, not to mention the weird operational environment needed for Josephson junctions.)

However, a clock rate stall in MOSFET technology doesn't mean that processors will cease to increase in compute power. There's a lot that can be done in terms of architectural changes to increase compute power without requiring increased clock speeds. Increasing parallelism is the ticket, and that's why dual-core and quad-core processors are becoming more and more common. But there are other things, too.
 

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
That's gonna give me nightmares! D:

I'm not quite sure what any of that meant, but I'm sure as hell gonna research most of it now. (As I said, I'm not good with the really technical side of computers; hopefully one last year of secondary school taking Electronics and IT as a BTEC will aid that.) This stuff really interests me!

From what I gathered, the reason processing power in the current style of processors is stalling is that computer chips are becoming too small to handle the output of such a processor?

And changes to the style of computers and the way they work will increase computers' capabilities in the future, but until then we can still squeeze a couple more GHz out of the standard quads, duos and other multi-core processors?

But a limit on processing power will eventually come due to the current physics of computing and the physics of atoms themselves?

Jeez, this computer stuff goes much deeper than I thought, and I thought I was averagely smart and good with computers. (Well, better than all my friends; all they know is Facebook and how to operate a mouse.)

Or I may have once again just talked a load of :censored:

But thanks for the insight!

:cheers:

PS
I want to become an IT tech guy now and learn all about processors and stuff after reading that HUGE post. Thanks for all that info Turbinator, it was really insightful! :p :) To GOOGLE for research!
 
Last edited:

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
Very well said. For the near term we're going to see significant enhancements to multi-processor hardware and software. Other enhancements will come from new instruction sets and new interfaces (from the CPU to memory and the like) - perhaps optical.

Something manufacturers always keep in the background as an emergency buffer is cache. Performance jumps of great significance will happen when they make use of 128 MB caches and so on.

Parallel buses are also something of a hot research topic.

It will also be nice to see the southbridge absorbed into the microprocessor. The more that gets put onto one substrate the better.

Currently, in the home computing arena, we have hardware that is woefully underutilized. Programmers are still trying to figure out scheduling between multiple cores.
 
Last edited:

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
instruction sets...

Interesting... like the more they advance, the more we will have to learn in order to use such processing power? Or do you mean something else?

It will also be nice to see the southbridge absorbed into the microprocessor. The more that gets put onto one substrate the better.

What's a southbridge?

Currently, in the home computing arena, we have hardware that is woefully underutilized. Programmers are still trying to figure out scheduling between multiple cores.

So... we have the tech for much better gaming, but it's not available for the household computer?

Or am I... once again, completely misunderstanding this technology?

:cheers:
 
Last edited:

Keatah

Active member
Joined
Apr 14, 2008
Messages
2,218
Reaction score
2
Points
38
The existing tech sitting on your desk is often underutilized. Programmers could do better at using the hardware.

http://en.wikipedia.org/wiki/Southbridge_(computing)

The richer and more complex an instruction set becomes, the more work a CPU is capable of doing in a clock cycle, in general.

[ame="http://en.wikipedia.org/wiki/Instruction_set"]Instruction set - Wikipedia, the free encyclopedia[/ame]
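
A rough illustration of that idea (my own sketch, in C with x86 SSE intrinsics, so it assumes an SSE-capable CPU and compiler - the function names are made up): one "rich" SIMD instruction does the work of four ordinary additions in a single instruction.

#include <xmmintrin.h>  /* SSE intrinsics */

/* Straightforward C: four separate element-by-element additions. */
void add4_plain(const float *a, const float *b, float *out)
{
    for (int i = 0; i < 4; ++i)
        out[i] = a[i] + b[i];
}

/* SSE: one ADDPS instruction adds all four floats at once. */
void add4_sse(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);             /* load 4 floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));  /* 4 adds, 1 instruction */
}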
 

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
Ah, I did misunderstand (no surprise :p )

Thanks for the IS link, I have a basic understanding of the way CPUs work now :)
Before that I learnt all I knew about CPUs from the movie 'Tron' :p (yeah, laugh it up :) )

As for the southbridge page, I think I understand what it is...

But for what you said earlier about putting it all on one substrate... putting a whole motherboard chipset onto one chip/substrate/microprocessor would be pretty hard? Or not, considering the speed computer tech is evolving? I think that's what I mean :/ I probably said many stupid things there that made no sense. :p

:cheers:
 

Artlav

Aperiodic traveller
Addon Developer
Beta Tester
Joined
Jan 7, 2008
Messages
5,790
Reaction score
780
Points
203
Location
Earth
Website
orbides.org
Preferred Pronouns
she/her
putting a whole motherboard chipset onto one chip/substrate/microprocessor would be pretty hard?
That describes a modern embedded system - smartphones and the like - where almost everything is in a single chip.
Eventually it should be more efficient to put the rest in it too - memory and SSD included.

So, there will be a single chip and a board to wire out the peripherals.


So far, keeping memory and storage outside the chip is still more efficient than putting them inside.
 

N_Molson

Addon Developer
Addon Developer
Donator
Joined
Mar 5, 2010
Messages
9,279
Reaction score
3,248
Points
203
Location
Toulouse
The future is probably biotechnology: associating living nerve cells with electronics.

"Hey, don't forget to fill the nutriments tank of the computer before leaving the house" !
 

guitarist473

The UMMU that can play guitar
Joined
Feb 1, 2009
Messages
196
Reaction score
0
Points
0
Location
Long Eaton, Nottinghamshire England
That describes a modern embedded system - smartphones and the like - where almost everything is in a single chip.

Oh, so it isn't hard :p

The future is probably biotechnology: associating living nerve cells with electronics.

Going back to the original topic... :p this is what I was thinking: some sort of nanotech system integrated with human biology to easily maintain a healthy human system.

"Hey, don't forget to fill the nutriments tank of the computer before leaving the house" !

I hope there's a coffee nutrient tank involved for an electronic hands-free cuppa on the go :p

:cheers:
 

N_Molson

Addon Developer
Addon Developer
Donator
Joined
Mar 5, 2010
Messages
9,279
Reaction score
3,248
Points
203
Location
Toulouse
Going back to the original topic... this is what I was thinking: some sort of nanotech system integrated with human biology to easily maintain a healthy human system.

No, I wasn't thinking about human cells. Actually, a few experiments have been done with animal cells AFAIR, and they gave interesting results. The idea is to overcome mechanical limits using living components to improve computer performance, not to turn humans into cyborgs. :p
 

Jarvitä

New member
Joined
Aug 5, 2008
Messages
2,030
Reaction score
3
Points
0
Location
Serface, Earth
The current trend of lowering the main clock speed is NOT indicative of computing technology's development slowing down or even reversing. If you look at the transistor count, feature size and ANY performance benchmark, you'll see that computing technology is still advancing exponentially, as it has been for the last 100 years, and it shows no sign of stopping. By 2020, the computational equivalent of a human brain will be available at a price accessible to middle-sized corporations. By 2040, the computational equivalent of all of humanity will be available for the price of today's personal computer.

Looking at the exponential trend of hardware development, it's pretty obvious hardware isn't going to be the limit. The thing limiting us is software development. An artificial general intelligence could probably be made to run on today's fastest supercomputers, but we simply have no idea how to begin building one. This task may get somewhat easier as we can afford to throw more hardware at it, but not much.

However, once you have an artificial, general, superhuman intelligence, all bets are off. The current exponential improvement will seem slower than continental drift once we get software to improve the hardware, which in turn allows it to improve the software, and so on ad infinitum. The moment we construct a superhuman intelligence, we aren't needed for further development.
 

Urwumpe

Not funny anymore
Addon Developer
Donator
Joined
Feb 6, 2008
Messages
37,605
Reaction score
2,327
Points
203
Location
Wolfsburg
Preferred Pronouns
Sire
I suspect the idea of synergistic processing units will become more widespread soon - special processing units that are less capable than, for example, a typical CPU core by themselves, but that by their huge number and design can do mathematical operations on large sets of data really quickly and in parallel. For example, processing data streams.

Technologies like OpenCL will likely pave the path towards this: with OpenCL you can write computing kernels that run on anything that supports OpenCL - the Cell processor of your PlayStation 3, smartphones, GPUs, CPUs, anything.
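
As a tiny sketch of the idea (my own example, not tied to any particular device - the kernel name is made up): an OpenCL C kernel that multiplies two arrays element by element. Every work-item handles one index, so thousands of them can run in parallel on a GPU.

// OpenCL C kernel: out[i] = a[i] * b[i] for every element.
// get_global_id(0) tells each work-item which index it owns.
__kernel void mul_streams(__global const float *a,
                          __global const float *b,
                          __global float *out)
{
    size_t i = get_global_id(0);
    out[i] = a[i] * b[i];
}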

The problem of multi-core CPUs will remain: writing software for them is much harder than it was for single-core CPUs. You need to synchronize many different kinds of processes and processors, and despite all attempts to assist the programmer, the main problem is in the head of the programmer. You need to get used to juggling dozens of threads and processing kernels at the same time.
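
A toy example of why this is hard (my own sketch in plain C with POSIX threads): two threads bump the same counter a million times each. Without the mutex, "counter++" is not one atomic step, so updates get lost and the final count usually comes out short.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; ++i) {
        pthread_mutex_lock(&lock);    /* remove the locking and the  */
        counter++;                    /* result is usually wrong     */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%ld\n", counter);         /* 2000000 with the mutex */
    return 0;
}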

Also, to make one thing clear: Complex Instruction Set Computing (CISC) has NEVER been able to make calculations faster, quite the contrary. The only working way to speed up CPUs at the same clock rate has been Reduced Instruction Set Computing (RISC) combined with much longer instruction pipelines. While your programs then get larger and more instructions have to be processed, your CPU can not only execute the much simpler instructions faster (microcode, for example, is not necessary then), but can also optimize the execution in the pipeline better. RISC instruction sets are also more easily designed with a constant instruction word length, another factor that speeds things up, since you can read the next instructions and prepare them for execution faster. Also, the fewer transistors you have to use for executing one instruction in one cycle, the shorter the cycle can be - you can operate at slightly higher clock rates with the same basic technology.

This can also be extended by the Very Long Instruction Word (VLIW) concept. In this, you have multiple simplified instructions in one big instruction word, ideally of constant length. These instructions are no longer executed in series; instead, multiple instructions are calculated at once by parallel ALUs.

The textbook example is the linear equation:

y = a*x + b

A classic CPU would produce something like this:

LOAD A, x
MUL A, a
ADD A, b
STORE y, A

With A being the so-called accumulator register (AH, AL, AX, or EAX in x86 processors, depending on data type).

A popular alternative exists in some number-cruncher architectures, the three-operand instruction set:

MUL R0,x,a
ADD y,R0,b

Just two instructions, but very, very slow to decode.

A classic RISC processor would do it like this:

LOAD R0, x
LOAD R1, a
LOAD R2, b
MUL R0, R1
ADD R0, R2
STORE y, R0

All operations take place in the CPU registers (and RISC processors have many of them). One such instruction can also be pretty small. There is also an extreme form of RISC in which you even need to have the addresses of the variables to load in registers and calculate them yourself, but I leave that example to the reader.

The VLIW version would look essentially like the RISC example, but with one big difference: instructions can be grouped so that multiple units execute them in parallel - for example 4 units, like AMD GPUs do.

For this simple mathematical example, in which most operations depend on each other, there is no big improvement, but take for example the dot product of two vectors:

d = x1*x2 + y1*y2 + z1*z2

In classic RISC:

LOAD R0, x1
LOAD R1, x2
MUL R0, R1
LOAD R2, y1
LOAD R3, y2
MUL R2, R3
LOAD R4, z1
LOAD R5, z2
MUL R4, R5
ADD R0, R2
ADD R0, R4
STORE d, R0

Simple, isn't it? 12 clock cycles in your CPU (assuming one instruction per cycle). But in VLIW, with 4 units, you could do something like this:

[LOAD R0, x1; LOAD R1, x2; LOAD R2, y1; LOAD R3, y2]
[MUL R0, R1; MUL R2, R3; LOAD R4, z1; LOAD R5, z2]
[MUL R4, R5; ADD R0, R2; NOP; NOP]
[ADD R0, R4; NOP; NOP; NOP]
[STORE d, R0; NOP; NOP; NOP]

Just 5 clock cycles now, despite the final operations in this example being hard to parallelize - thus the many infamous NOP or "No OPeration" instructions (in the good old days, you already plastered your programs with these NOPs to synchronize with slow hardware). But it takes little imagination to see how the 8 idle slots in this example could already be used for calculating the next instructions. Don't worry, the translator from assembly language to machine code will do that mapping of instructions to ALUs in the VLIW word for you.

A quantum computer would be different in its instruction set... but programming quantum computers is also currently purely hypothetical; we are still working with hardwired ones. It is also doubtful that the uncertainties you want from a quantum computer would make it very good for controlling program flow, so I suspect classic digital CPUs will not die out that fast - and thus the concepts of programming will remain the same.
 