Recyclable microchip design
by Wolfgang Nebel
"I believe there is a worldwide need for perhaps five computers," said IBM founder Thomas J. Watson in 1943, assessing the market opportunities for computers. Why is there a computer today in practically every office and in many private homes, and why are around 10 billion microcomputers in operation worldwide, most of them working unnoticed? What assumptions did Watson make, and what unforeseeable events made this development possible? This article provides insights into the technological development of computer design and manufacture.
The centrepiece of a modern computer is its microprocessor. This 1-2 cm² slice of silicon houses up to 10 million transistors, the smallest switching units, which carry out the computer's calculation and storage operations. The electrical behaviour of a transistor depends on its geometry and is described by non-linear differential equations. In digital computers, however, transistors are simply regarded as controllable switches that connect the "drain" and "source" terminals to each other depending on the value at the "gate" control input. By connecting several transistors to each other and to the positive or negative supply voltage, logic operations and memories can be realised.
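This digital abstraction can be made concrete in a small Python sketch (the article itself contains no code, so this is purely illustrative): each transistor is modelled as an ideal switch, ignoring the non-linear differential equations, and a NAND gate is built from two "pull-up" and two "pull-down" transistors in the usual CMOS arrangement.

```python
# Illustrative sketch: transistors as ideal controllable switches
# (the digital abstraction described in the text), not as devices
# governed by non-linear differential equations.

def nmos(gate: bool) -> bool:
    """An NMOS transistor conducts (drain connects to source) when its gate is high."""
    return gate

def pmos(gate: bool) -> bool:
    """A PMOS transistor conducts when its gate is low."""
    return not gate

def cmos_nand(a: bool, b: bool) -> bool:
    """CMOS NAND gate: two PMOS transistors in parallel pull the output
    towards the positive supply; two NMOS transistors in series pull it
    towards ground."""
    pull_up = pmos(a) or pmos(b)       # parallel PMOS network
    pull_down = nmos(a) and nmos(b)    # series NMOS network
    assert pull_up != pull_down        # exactly one network conducts
    return pull_up

def inverter(a: bool) -> bool:
    """NAND is universal: tying both inputs together yields an inverter."""
    return cmos_nand(a, a)

for a in (False, True):
    for b in (False, True):
        print(a, b, cmos_nand(a, b))
```

Only when both inputs are high do the series NMOS transistors conduct and pull the output low; in every other case the output is high, which is exactly the NAND truth table.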
Realising complex functions and processes in a computer from such simple components requires decomposing those functions into a large number of yes/no decisions. It follows that the functional power of a computer grows with the number of transistors, since more functional units become available, some of which can work simultaneously. The computing speed also depends on the clock frequency at which the computer's functional units operate. In today's microprocessors this can be up to 300 MHz, i.e. the switching states of the computer change up to 300 million times per second.
Having shown that transistors are the "raw material" of microprocessors, so to speak, it is clear that Thomas J. Watson could not have foreseen today's development in 1943, because the transistor was only invented in 1947 by three American scientists at Bell Laboratories, namely Walter Brattain, John Bardeen and William Shockley, who later received the Nobel Prize for their invention. Until then, computers could only be built using electromechanical relays or electron tubes. The main advantage of transistors over these components is that a large number of transistors can be produced on a single integrated circuit, the microchip or IC for short, in a very small space, cost-effectively and largely automatically. Another advantage is the lower power consumption of integrated circuits compared to tubes and relays. The significance of this factor becomes clear if you believe the anecdote according to which the lights in Philadelphia flickered every time the tube-based ENIAC computer was switched on.
The advances in manufacturing technology are breathtaking. No other industry has yet managed to improve its manufacturing process at anywhere near the same speed. The invention of the transistor was followed in 1958 by the development of the first integrated circuits, i.e. silicon modules containing several transistors, and in 1971 by the first microprocessor, the Intel 4004, clocked at 108 kHz, which already contained around 2,300 transistors. The latest successor in the Intel processor series, the Pentium II, which came onto the market in 1997, consists of approx. 7.5 million transistors and is clocked at up to 300 MHz. A comparison with the automotive industry illustrates this miniaturisation: if it had been possible to miniaturise cars to the same degree, all the cars in Germany would together fit into an ordinary garage, their total value would roughly equal that of a single luxury-class vehicle, and the legendary VW Beetle would weigh only 0.2 g, with a correspondingly positive effect on fuel consumption. In addition, between 1971 and 1997 the top speed would have increased by a factor of 2,800 to over 500,000 km/h. An end to this development, or even just a slowdown, is not expected within the next 15 years; on the contrary, technologists estimate that microprocessors with over 550 million transistors will be on the market in 2010.
However, the vast majority of microprocessors in use do not work in computers in the usual sense, but perform control, monitoring or signal-processing tasks in technical systems, often unnoticed by the user. Examples are the engine management of a car, the control of a washing machine or a video recorder, the voice coding in a mobile phone or the stabilisation of an aircraft; in the latest generation of the Boeing 777, for instance, 1,000 microcomputers are at work. Such control systems are known as "embedded systems". Since product differentiation in mechanical engineering, the automotive sector and telecommunications rests primarily on properties such as comfort, safety, cost and environmental compatibility, semiconductors serve as a high-tech raw material: an irreplaceable prerequisite for the value-creation pyramid of the five most important German economic sectors.
How do you design 7.5 million transistors?
If we assume that it takes 10 minutes to design one transistor correctly, and if we ignore for a moment that the main problem, apart from designing the individual transistors, is organising them into a meaningful whole, it is easy to estimate that around 750 engineers would have to work for a year to develop such a complex circuit. Even if this were organisationally and financially feasible, the undertaking would be doomed to failure, because owing to rapid obsolescence the next generation of computers would already have to be developed within that same year. Clearly, such complex systems for a short-lived market with rapidly falling prices can only be developed through extensive automation of the most labour-intensive design steps and through an adapted organisation. As early as the 1970s, computer programs were used to record and check the production documents for integrated circuits. Each advance in technology has necessarily been followed by advances in design processes and tools.
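The back-of-the-envelope estimate can be checked directly. The number of working minutes per engineer-year is an assumption for illustration (roughly 220 working days of 8 hours); the article's round figure of 750 engineers corresponds to roughly 100,000 minutes per engineer-year.

```python
# Back-of-the-envelope check of the "750 engineers for a year" estimate.
# The 220 days x 8 hours figure is an assumed working year, not from the article.

transistors = 7_500_000
minutes_per_transistor = 10
minutes_per_engineer_year = 220 * 8 * 60   # 105,600 minutes, an assumption

total_minutes = transistors * minutes_per_transistor   # 75,000,000 minutes
engineer_years = total_minutes / minutes_per_engineer_year
print(round(engineer_years))   # on the order of 700-750 engineer-years
```

Whatever the exact working-year assumption, the order of magnitude is the point: manual transistor-by-transistor design of a Pentium-class chip would consume several hundred engineer-years.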
A usually successful strategy for solving complex problems is to structure them into sub-problems that can be solved independently. This subdivision proceeds over several design or abstraction levels until each individual sub-problem can be assigned a solution directly. The decomposition phase is then followed by assembly, in which the partial solutions are aggregated into an overall solution. The aim of this "divide et impera" method is to master the organisational problem. If each transistor still has to be calculated individually at the lowest of these design levels, the development time itself is not shortened; the process merely becomes manageable. Reducing development time and costs additionally requires reusing components that have already been developed. However, if their internal structure has to be understood in detail each time, the gain is small. Only the reuse of abstract components, of which one knows what they do without having to understand exactly how they do it, can bring the hoped-for gain in development efficiency. A first step in this direction was taken in the 1980s with cell libraries containing transistor structures for realising logic functions. The development continued at the beginning of this decade with hardware description languages and automatic circuit synthesis at the register-transfer level. Each new generation of technology has thus been accompanied by a rise in the lowest abstraction level that must be handled manually, relieving developers of routine work and allowing them to concentrate their creativity on the more interesting architectural design.
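The layering described above can be sketched in miniature in Python (an illustration, not a real design flow): a "cell library" level of logic functions, a component level that reuses those cells without re-deriving their transistor structure, and an architecture level that reuses the component in turn.

```python
# Miniature illustration of hierarchical design and reuse across
# abstraction levels, as described in the text.

def xor_gate(a, b): return a != b      # "cell library" level: logic
def and_gate(a, b): return a and b     # functions whose transistor
def or_gate(a, b):  return a or b      # structure stays hidden

def full_adder(a, b, carry_in):
    """Component level: built only from library cells."""
    s = xor_gate(xor_gate(a, b), carry_in)
    carry_out = or_gate(and_gate(a, b), and_gate(carry_in, xor_gate(a, b)))
    return s, carry_out

def ripple_adder(xs, ys):
    """Architecture level: reuses the full adder as an abstract component,
    knowing what it does but not how. Inputs are little-endian bit lists."""
    carry, out = False, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 + 5 = 8, as little-endian bit lists: [1,1,0] + [1,0,1] -> [0,0,0,1]
print(ripple_adder([True, True, False], [True, False, True]))
```

Each level is debugged once and then trusted: the adder's author never recalculates a gate, let alone a transistor, which is exactly the efficiency gain the text attributes to reusing abstract components.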
The "Green Dot" for engineering expertise
If a circuit developer uses logic blocks from a cell library to realise a specific function instead of developing the function from scratch, he reuses the engineering work once performed to create the library. The same applies to the user of a computer-aided circuit synthesis tool, who reuses the design expertise gained from the experience of previous circuit developers and automated in the tool. The difference from recycling in Germany's dual system is that there, as a rule, high-quality raw materials are recycled into lower-value new products, whereas in microelectronics valuable expertise is reused to develop higher-value new products. The aims of reuse are to reduce development time and costs, to concentrate engineering creativity on new challenges and to improve product quality.
While the importance of reuse for the first of these aims is obvious, the intended increase in product quality needs explanation. Development errors are unavoidable in integrated circuits of the complexity class mentioned; the exchange of faulty Pentium chips is a recent example. For this reason, great importance is attached to checking the correctness of a design. This verification is carried out through extensive simulations of the circuit and, in some cases, automatic verification. Despite intensive computer support, however, it can never be proven that a realistic circuit is free of errors; confidence in the circuit merely grows as long simulations reveal no further errors. At a certain level of confidence, the design is then released. A circuit that has already been used many thousands of times in devices without errors occurring, i.e. that has proven itself in practical use, therefore deserves a higher level of confidence in its correctness when it is reused.
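The simulation-based approach can be sketched as follows (a toy illustration, with a hypothetical 8-bit adder as the design under test): random stimuli are applied to both the design and a reference specification, and the outputs are compared. Every passing run raises confidence, but as the text notes, no finite number of simulations proves the absence of errors.

```python
# Toy sketch of simulation-based verification: compare a design-under-test
# against a reference specification on random stimuli. The 8-bit adder is
# a hypothetical example, not a circuit from the article.

import random

def spec_add(a: int, b: int) -> int:
    """Reference specification: what the unit is supposed to compute."""
    return (a + b) & 0xFF   # 8-bit addition, result wraps around

def circuit_add(a: int, b: int) -> int:
    """Design under test (here trivially correct by construction)."""
    return (a + b) & 0xFF

random.seed(0)   # reproducible stimuli
failures = 0
for _ in range(10_000):
    a, b = random.randrange(256), random.randrange(256)
    if circuit_add(a, b) != spec_add(a, b):
        failures += 1
print("failures:", failures)
```

Even 10,000 passing cases cover only a fraction of the 65,536 possible input pairs here; for a circuit with millions of transistors, exhaustive simulation is hopeless, which is why field-proven reused components carry extra weight.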
Another comparison with the automotive industry may illustrate that the more mature a development process is, the harder it becomes to raise the recycling rate further. In a first phase, recycling means that only screws, nuts, switches and bolts are used as standardised components. Next come the engine, transmission, axles and parts of the interior, before finally entire floor assemblies are shared (recycled) across several vehicles. As this shows, the more complex a recycled component is, the more limited its applicability to products other than the one originally intended. The same applies to the recycling of microelectronic designs. While logic gates are so universal that they are needed in practically every digital circuit, this is not the case for components at higher levels of abstraction. Here the components are themselves so complex and so tailored to specific applications that they can be used generally only in exceptional cases. In addition, there are numerous ways of realising these functional units, which differ in their suitability for particular applications owing to different cost and performance characteristics. One solution is to develop modules specifically for reusability, or to create the possibility of adapting reusable components to new requirements.
The problem is exacerbated by the fact that manufacturing technology in the semiconductor industry is subject to constant further development, meaning that inflexible modules tailored to a specific technology can only be reused for a short period of time.
The requirements for a design method that promotes the reusability of complex modules can therefore be summarised as follows: the components must be described in such a way that their function can be understood without knowledge of their internal structure; they must be automatically synthesisable for arbitrary target technologies; they must be able to communicate flexibly with other components; and their functionality must be adaptable to new requirements without compromising previously verified properties in an uncontrolled manner.
With the introduction of special programming languages for hardware design, so-called hardware description languages, together with synthesis tools and simulators, microelectronics development is becoming ever more similar to software development, which likewise has programming languages, compilers and debuggers. Since the software world also produces large systems of several million lines of code, and complexity problems and time and cost pressures prevail there too, it makes sense to examine software engineering methods for their suitability in the hardware domain. Inherent differences between the two domains must be taken into account, such as the massive parallelism of hardware, different product life cycles and the different cost structure arising from the necessary hardware production.
The current programming paradigm in software engineering is object-oriented programming. It allows the properties of objects that will later be used in a system to be specified in classes, including the functions, called methods, that an object can execute. To use such an object, it is then no longer necessary to know its internal structure; this remains hidden from the user, and the object is accessed exclusively via the agreed methods. If a variant of such an object is required, it can be created as a new class using an inheritance mechanism, in which case all properties of the object that are not explicitly changed are retained. Finally, a concrete system is set up by creating, i.e. instantiating, objects of the previously defined classes, which then exchange messages with each other, calling each other's methods and exchanging values.
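The mechanisms just described can be shown in a few lines of Python (the counter component is a hypothetical example; the project itself targets Objective VHDL, not Python):

```python
# Illustration of the object-oriented mechanisms described in the text,
# using a hypothetical hardware-like component as the example class.

class Counter:
    """A class specifies an object's properties and its methods.
    The internal state (_value) stays hidden behind the agreed methods."""
    def __init__(self, width: int):
        self._width = width
        self._value = 0

    def tick(self):                    # a method the object can execute
        self._value = (self._value + 1) % (2 ** self._width)

    def read(self) -> int:             # access only via agreed methods
        return self._value

class DownCounter(Counter):
    """A variant created by inheritance: only tick() is redefined;
    all properties not explicitly changed are retained."""
    def tick(self):
        self._value = (self._value - 1) % (2 ** self._width)

# Instantiation: concrete objects of the previously defined classes.
up, down = Counter(width=4), DownCounter(width=4)
up.tick()
down.tick()
print(up.read(), down.read())   # 1 15 (the 4-bit down counter wraps to 15)
```

The user of `Counter` never touches `_value` directly, and `DownCounter` inherits the constructor and `read()` unchanged: exactly the hiding and inheritance properties that make such components attractive for reuse.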
The basic requirements of a design method for reuse in microelectronics are thus fulfilled by object-oriented programming. For this reason, the OFFIS Institute is developing object-oriented language extensions for the internationally standardised hardware description language VHDL, based on principles developed in the Department of Computing Science at the University of Oldenburg. Well-known industrial companies from the telecommunications sector (Deutsche Telekom AG, France Telecom, the Spanish Telefónica and Italtel) as well as other research institutes and CAD vendors are participating in the research project, which is funded by the European Union under the ESPRIT programme. The project's first result, a translation program available this autumn, will translate circuit descriptions written in the new Objective VHDL language into standard VHDL, thus connecting to the established industrial design flow and opening up the new, reuse-promoting object-oriented design method to industrial testing.
The author
Prof. Dr Wolfgang H. Nebel (40) studied electrical engineering at the University of Hanover and obtained his doctorate in Computing Science at the University of Kaiserslautern. He then worked for over six years at Philips Semiconductors in Hamburg, most recently as Head of CAD Software Development. Nebel received appointments to the universities of Linz and Oldenburg. At the University of Oldenburg he has headed the "Design of Integrated Circuits" department at the Faculty of Computing Science since 1993, has been Dean of the Faculty since 1996, and is involved in the OFFIS Institute and the newly founded Institute for "Complex Integrated Systems and Microsensor Technology". Nebel's research interests: the design of embedded systems, with a special focus on reusability and low energy consumption. He is a member of numerous committees and scientific organisations, including as Chair of the IFIP Special Interest Group VHDL and of several programme committees of international conferences.