According to Moore’s law, the number of transistors in a dense integrated circuit doubles roughly every two years, which means higher performance and also lower cost. There are limits to this growth rate, however, and we are slowly but surely approaching them. Here’s a look at some promising technologies for future development.
When looking at the future, it’s often a good idea to start with a little bit of history, so that’s what we are going to do now. We start in 1939, with the engineer Russell Ohl’s research into semiconductors and photovoltaics, which ultimately led to the development of silicon-based solar cells and junction transistors.
Building on Ohl’s work, Bell Labs constructed the first silicon transistor in 1954, which was marketed by Texas Instruments. It wasn’t until 1971, however, that the underlying silicon-gate process technology was used in the manufacturing of CPUs.
And that’s where Moore’s law started. The law states that the number of transistors in a dense integrated circuit will double roughly every two years, which in turn doubles performance while also driving down the cost of making chips. That held true for about four decades, until around 2012, when the rate of development began to slow significantly. This wasn’t the first deviation from the predicted rate, though: for the first few decades, development was actually faster than predicted, with the number of transistors doubling about every 18 months.
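The doubling described above is simple compound growth, so it is easy to put rough numbers on it. A quick sketch (the starting count of 2,300 transistors, the Intel 4004 of 1971, is my illustrative choice; the two doubling periods come from the text):

```python
def transistors(start_count, years, doubling_period_years):
    """Project a transistor count forward by compound doubling."""
    return start_count * 2 ** (years / doubling_period_years)

# One decade of growth from the Intel 4004's ~2,300 transistors:
every_2y = transistors(2300, 10, 2.0)   # doubling every 2 years
every_18m = transistors(2300, 10, 1.5)  # doubling every 18 months

print(round(every_2y))   # 73600 — five doublings
print(round(every_18m))  # roughly three times that
```

The gap between the two curves widens fast, which is why a shift from an 18-month to a 2-year doubling period was already a noticeable slowdown.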
Processors are still made from silicon, which in turn is refined from sand, a material we are not about to run out of any time soon. Yet CPU technology has developed and condensed a lot. From a typical process size of 90nm as recently as the mid-2000s, transistors have already shrunk to 14nm and are predicted to shrink further by 2021, down to 7 or perhaps even 5nm. At the same time, it is getting ever more difficult to create these miniature circuits: the closer we get to the 5nm mark, the more effects such as quantum tunnelling (particles passing through barriers that are supposed to contain them) come into play, limiting the minimum transistor size. Therefore, microchip manufacturers are constantly looking for alternative materials or even biological components.
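To get a feel for what those node shrinks buy, here is a back-of-the-envelope sketch under the idealised assumption that transistor density scales with the inverse square of the feature size (real processes only loosely follow this):

```python
def density_gain(old_nm, new_nm):
    """Idealised density gain from a node shrink: halving the feature
    size fits four times as many transistors into the same area."""
    return (old_nm / new_nm) ** 2

print(round(density_gain(90, 14), 1))  # 90nm -> 14nm: ~41x denser
print(round(density_gain(14, 5), 1))   # 14nm -> 5nm: ~7.8x more on top
```

Each step yields less than the last in absolute nanometres, yet the squared relationship keeps the gains large, which is exactly why manufacturers chase every remaining nanometre despite the tunnelling problem.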
The most promising technologies for future development in this respect are as follows.
Going 3D with Silicon

One possible change is to stay with silicon as a material but rethink the way it is used. Instead of attempting to fit a growing number of transistors into ever smaller spaces, you could take a structural approach, stacking multiple silicon layers on top of one another.
Some benefits of a three-dimensional approach are as follows:
- More functionality fits into a small space.
- Reduced fabrication cost.
- Circuit layers can be built with different processes, or even on different types of wafers.
- The average wire length is reduced by 10–15% (this applies to the longer interconnects).
- Power consumption is reduced by a factor of 10–100.
- Wide-bandwidth buses between functional blocks in different layers.
A typical example of how this technology might be applied is a processor and memory in a 3D stack.
Carbon Nanotubes

Carbon nanotubes are manufactured from sheets of carbon with a thickness of only one atom, rolled into tubes measuring between 1 and 2nm in diameter. Carbon nanotubes conduct electricity better than most other materials, and a nanotube transistor performs five times better than a conventional silicon transistor while consuming only a fifth of its energy. So, what’s the catch? Currently, the biggest obstacle in the way of switching to nanotube transistors is the high R&D cost and the expensive equipment involved in mass production. Nanotube transistors are definitely not to be counted out, then, but might not have their big break for some years.
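The two quoted figures (five times the performance at a fifth of the energy) can be combined into the energy-delay product, a common transistor efficiency metric. A trivial sketch of that arithmetic:

```python
# Figures from the text: a nanotube transistor switches ~5x faster
# and uses ~1/5 the energy of a comparable silicon transistor.
speedup = 5.0          # relative switching speed (delay shrinks 5x)
energy_ratio = 1 / 5   # relative energy per operation

# Energy-delay product: energy * delay. Both factors improve 5x,
# so the combined metric improves 25x.
edp_improvement = speedup / energy_ratio
print(edp_improvement)  # 25.0
```

Seen through that lens, the appeal is clear: the gains multiply rather than merely add, which is why the technology stays interesting despite the fabrication cost.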
Rethinking the CPU Architecture
The structure of today’s CPUs is founded on the von Neumann architecture, in which the CPU sits at the centre of the computer and handles all communication between its different parts. Another possible approach, then, is to rethink the CPU architecture itself. Large manufacturers like Intel, Nvidia and AMD all produce chips with integrated graphics built around a GPU (graphics processing unit). GPUs are massively parallel circuits, i.e. they consist of relatively simple components assembled in huge quantities, and so possess thousands of cores as opposed to the CPU’s few. Also unlike CPUs, GPUs are nowhere near the point where the laws of physics will limit their exponential growth.
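The many-simple-cores idea can be sketched in plain Python: a workload where the same small kernel is applied independently to every element parallelises trivially across workers. This is only an analogy, with Python threads standing in for GPU cores, not actual GPU code:

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    # Each "core" runs the same simple operation on its own element.
    return x * x

data = range(1000)

# Many lightweight workers, one shared kernel, no coordination needed
# between elements -- the shape of a GPU-friendly problem.
with ThreadPoolExecutor(max_workers=32) as pool:
    result = list(pool.map(kernel, data))

print(result[:5])  # [0, 1, 4, 9, 16]
```

The design point is that the kernel has no dependencies between elements; workloads that branch heavily or depend on previous results remain better suited to a few powerful CPU cores.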
Quantum Computing

Classical computing is based on bits: single pieces of information that can exist in one of two states, 1 or 0. Quantum computers, however, aren’t binary like traditional ones, because they are based on qubits, which can exist in multiple states at the same time. In other words, quantum computers are able to consider multiple options simultaneously. This in turn means that quantum technology could help us solve problems and analyse complex processes that would stump our existing regular computers.
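That "multiple states at once" idea can be illustrated with a minimal state-vector simulation in plain Python (no quantum hardware involved): two amplitudes represent the states |0> and |1>, and the Hadamard gate turns a definite state into an equal superposition:

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a, b),
    where a and b are the amplitudes of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

qubit = (1.0, 0.0)       # starts as a definite |0>, like a classical bit
qubit = hadamard(qubit)  # now an equal superposition of |0> and |1>

# Squared amplitude magnitudes give the measurement probabilities.
probs = [round(abs(amp) ** 2, 10) for amp in qubit]
print(probs)  # [0.5, 0.5] -- both outcomes equally likely until measured
```

A classical simulation like this needs one amplitude per basis state, so its memory doubles with every added qubit, which hints at why real quantum hardware could outrun classical machines on certain problems.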
Why are we not using them all the time, then, you may ask. The answer is that quantum computers are pretty hard to produce. The technologies underlying regular computers based on ordinary bits have been around for a while and are well researched, but the same does not hold true for qubit technology.
To illustrate: for a start, there is no commonly agreed-upon approach to producing qubits. There are multiple proposed techniques, which variously involve trapping ions, electrons and other small particles, using superconductors to produce quantum circuits, or using photons and optical technology to get the job done. While these are all valid suggestions, they share a common downside: with the resources and technologies currently at our disposal, these approaches could work on a small scale, but would not stand up to mass production. So, arguably, this puts a lid on using quantum computers outside of research for the time being.
So, where do we stand?
Currently, Moore’s law still basically holds true, but it is slowing down. With the way we produce transistors at the moment, we are definitely approaching the absolute physical limit on their minimum size. But then, we still have about 10 to 20 years before we reach it, and by then, I am sure, one or more of the above-mentioned technologies will prove a promising next step.