Table of Contents
Foreword
Introduction
Chapter 1. Photolithography
1.1. Introduction
1.2. Principles and technology of scanners
1.3. Lithography processes
1.4. Immersion photolithography
1.5. Image formation
1.6. Lithography performance enhancement techniques
1.7. Contrast
1.8. Bibliography
Chapter 2. Extreme Ultraviolet Lithography
2.1. Introduction to extreme ultraviolet lithography
2.2. The electromagnetic properties of materials and the complex index
2.3. Reflective optical elements for EUV lithography
2.4. Reflective masks for EUV lithography
2.5. Modeling and simulation for EUV lithography
2.6. EUV lithography sources
2.7. Conclusion
2.8. Appendix: Kramers–Kronig relationship
2.9. Bibliography
Chapter 3. Electron Beam Lithography
3.1. Introduction
3.2. Different equipment, its operation and limits: current and future solutions
3.3. Maskless photolithography
3.4. Alignment
3.5. Electron-sensitive resists
3.6. Electron–matter interaction
3.7. Physical effect of electronic bombardment in the target
3.8. Physical limitations of e-beam lithography
3.9. Electron energy loss mechanisms
3.10. Database preparation
3.11. E-beam lithography equipment
3.12. E-beam resist process
3.13. Bibliography
Chapter 4. Focused Ion Beam Direct-Writing
4.1. Introduction
4.2. Main fields of application of focused ion beams
4.3. From microfabrication to nanoetching
4.4. The applications
4.5. Conclusion
4.6. Acknowledgements
4.7. Bibliography
Chapter 5. Charged Particle Optics
5.1. The beginnings: optics or ballistics?
5.2. The two approaches: Newton and Fermat
5.3. Linear approximation: paraxial optics of systems with a straight optic axis, cardinal elements, matrix representation
5.4. Types of defect: geometrical, chromatic and parasitic aberrations
5.5. Numerical calculation
5.6. Special cases
5.7. Appendix
5.8. Bibliography
Chapter 6. Lithography resists
6.1. Lithographic process
6.2. Photosensitive resists
6.3. Performance criteria
6.4. Conclusion
6.5. Bibliography
List of Authors
Index
First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc. Adapted and updated from Lithography published 2010 in France by Hermes Science/Lavoisier © LAVOISIER 2010
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd | John Wiley & Sons, Inc. |
27-37 St George's Road | 111 River Street |
London SW19 4EU | Hoboken, NJ 07030 |
UK | USA |
www.iste.co.uk | www.wiley.com |
© ISTE Ltd 2011
The rights of Stefan Landis to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Cataloging-in-Publication Data
Landis, Stefan.
Lithography / Stefan Landis.
p. cm.
Summary: “Lithography is now a complex tool at the heart of a technological process for manufacturing micro and nanocomponents. A multidisciplinary technology, lithography continues to push the limits of optics, chemistry, mechanics, micro and nano-fluids, etc. This book deals with essential technologies and processes, primarily used in industrial manufacturing of microprocessors and other electronic components”-- Provided by publisher.
Includes bibliographical references and index.
ISBN 978-1-84821-202-2 (hardback)
1. Microlithography. I. Title.
TK7872.M3L36 2010
621.3815'31--dc22
2010040731
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-202-2
“An image is a pure creation of spirit.” (Pierre Reverdy)
Today, in a world of eternal representation, we are the observers of the theater of the grand image for as far as the eye can see, a theater which incessantly unfolds in the marvelous recording box that is our brain. Though we see them, the touch and even the substance of illustrations sometimes escape us completely, so much so that we can almost not differentiate between representative illusion and the physical reality of beings and things. Yet, the representation of the world in our eyes is not the same as the one that we want to transpose, to put into images. There, the reality of that which is visible is captured by our brains, which makes copies which are sometimes faithful, sometimes untrue. To produce these images we have, since the dawn of mankind, resorted to sometimes extremely complex alchemies, where invention has struggled with various materials, as a result of which we have been able to leave behind our illustrated drawings, the prints of our lives and of our societies.
For some 32,000 years man has not stopped etching, doodling, drawing, copying, painting, reproducing – for nothing, for eternity – producing millions of infinite writings and images which are the imperishable memory of his genius. How did he do it, with which materials, on what, and why? The alchemy of representation, in its great complexity, deserves a pause, so that we can try to understand, for example, how today's images reach us in a kind of gigantic whirlwind, whereas 200 years ago these things were still rather sober. Or how else could we go from an image that we can look at, to an image that is difficult to see, or to one that we cannot even see with the naked eye? Whereas now we throw things away, in the past images were preciously preserved. Are the images which we do try to preserve today not the same as the ones we were preserving yesterday?
It is amongst the cavemen that the figure I call the image maker can first be made out. Collating their visions, their dreams and their beliefs on cave walls, these first imagicians undoubtedly bequeathed to us the only widely known account of this period. In their wake, we can better trace the formal evolution of the visual representation of nature and things, that inevitable invention through which we endeavor to capture the spirit in an artefact.
Man had to train long and hard to finally tame and durably transmit the images of the world which surrounded him. The techniques employed across the ages to make and convey these images, the materials, the pigments, the bindings, the instruments and the mediums, either natural, chemical or manufactured, not only conditioned the appearance of the image itself but also its durability.
Cave paintings, coins, palaces, churches, are just some of the mediums which have left us with invaluable visual evidence of more or less remote pasts, sometimes essential for putting together the history of humanity. If we consider the manufacturing and the trading of images from the beginning, and in its totality, we can distinguish two major periods: the longest, the pre-photographic; and the post-photographic, which began in the first half of the 19th Century, and which is therefore extremely recent. Admittedly, our eyes can see but they cannot take photographs. The images that they collect are transitory fragments in a “bandwidth”, a time kept in the memory, often lost, far from any material existence, and for which any attempt at verbal transcription falls short of reality. For other animals, sight is part of a sub-conscious effort to survive. For man, by contrast, sight is a conscious, irreplaceable instrument for appreciating the outside world, one which is an integral part of his own physical and mental development. For us, to see is natural. However, representing what we see calls upon a certain kind of initiation. How were the first painters of history introduced to engraving and drawing? How were they able to find or invent the tools and materials needed to succeed?
The tools, materials and shapes are precisely the three essential ingredients needed to build, needed to formalize the representation of the visible. Footprints on sand, for example, undoubtedly the first prints left by man, were already kinds of natural images of the body, and most probably were the root of the original idea to make images. The tool here was man's own foot, with its shape, using a soft and flexible material, a support able to keep an image. Thus, without any doubt, the earth and sand were among the first image mediums, even before other sketches came to cover other materials, and other surfaces.
The various attempts leading to the reproduction and spreading of visible images or texts, little by little, drove man to develop very clever techniques, sometimes born out of chance, or sometimes by increasingly elaborate research. The first stone engravings (from before 600 BC) precede, by a long time, the first examples of wood engravings (c. 200 AD), or metal engravings made by a direct method, then etchings, or the invention of typographical characters, and, finally, lithography itself, which has been, from the 19th Century onwards, a practically irreplaceable means of reproduction, and remains an essential part of the book and publicity industries, even today.
The document media have also diversified and evolved incessantly since the beginning. Stone, bone or ivory, terracotta, glass, skins, leaves, wood, parchment, paper, celluloid, vinyl, are just some of the aids bequeathed to us, with greater or lesser clarity or brittleness, the precious evidence of life and the history of mankind.
In 1796, 43 years before the invention of photography, the lithographic reproduction technique was invented by Aloïs Senefelder in Germany. Developed through the first half of the 20th Century, it brought, without question, the most important graphic revolution in the worlds of text reproduction and printed images. In this respect, we can consider two very great periods in the history of print: one, the pre-lithographic period, and the other which began with lithography in all of its forms. Here, two distinct lithographic fields start to truly develop: on one side, the advanced forms of the graphics industry (and the photolithographic industry); and, on the other side, a completely innovative form of artistic expression, now freed from the technical constraints of engraving and now able to devote itself with joy to those much freer forms of graphics, with drawings made (or transferred) directly onto the lithographic support itself. These two domains participated, together, in the technical developments which led finally to the offset printing methods used overwhelmingly today and which profit from these most advanced technologies.
As far as the photographic reproduction of images was concerned, one major challenge was the faithful reproduction of half-tones. This problem was solved in 1884 by Meisenbach, the inventor of the linear screen, which was quickly applied to typographical image reproduction and then successively to photolithography and to offset printing. The photographic support itself already contained the seeds and the “secret” of the visibility of half-tones, in the very fineness of the grain of photosensitive emulsions. But to print them, it was necessary to find a way of transcribing them into a printing matrix, initially in black and white, and then later in color. An interesting characteristic is that the various screens which we have just alluded to, in particular the finest or ultra-fine (more than 80 lines/cm) or the most recent digital grids forming an ultra-fine mesh of random dots, have always tended to blend in, to the point of becoming invisible to the naked eye. The printed images our eyes can see are actually optical illusions. Today, if we look closely at a beautiful reproduction of an engraving by Dürer, or at a painting by Velázquez, for example, it is impossible to distinguish the dots of the printing screens from which they are made. Already in the 19th Century, commercial chromolithography had used clever methods to create half-tones, either with the matrix's own granulation (stones or grained metal), or with dots drawn very finely with a pen, which simultaneously allowed ranges and mixtures of colors, of which there are some sublime examples. In the art field, it is nowadays necessary to use a microscope with a magnification of ×30 to determine the true nature of a printing technique.
As early as the first half of the 20th Century, we saw the first steps of a very new aid to knowledge. Indeed, 1936 and the publication of a founding article by Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem”, mark the true starting point of the creation of programmable computers. But it was especially from the 1980s that the use of computers became widespread and, little by little, essential to the world of information and imagery. From then on, texts and images have been created by each and every one of us, with no need to be preserved in a physical, material way, but instead held on other media which we would not have dared even to imagine 30 years earlier. The image, which is still the product of another optical illusion, while keeping its own graphic originality, from now on needs no hardware support to be visible. It has its own light, can be modified at will, engraved, printed, and sent to the entire world at the single touch of a button. The image, in this case, is created in all its subtleties of color and light, not by a material screen, but by something which replaces it virtually: a succession of dots invisible to the eye (pixels) which are now at the origin of the texts and images digitally recorded on our computers.
During the second half of the 20th Century, the American Jack Kilby invented the very first integrated circuit (in 1958), another artefact in the service of knowledge transmission which is at the root of modern data processing, and the mass production of electronic chips with integrated transistors began not much later. For his work and his some 60 patents, Kilby received the Nobel Prize in Physics in 2000. All these circuits are used in a more or less direct way nowadays, in information recording and image handling and storage. The big family of integrated circuits and microprocessors continues to move forward, and with them has come another new technology, microscopic photolithography, which makes new plate sensitization techniques possible and, thanks to the use of masks and light beams, the engraving of circuit supports in smaller and smaller micro-relief (such as, for example, the various chip-cards with integrated circuits, whether analog or digital).
At the beginning of the third millennium, another “image” architecture was already on the horizon, in a nanosphere with still vague contours, which curiously made us swing from a visible optical illusion towards an invisible physical reality. Indeed, from micro-photolithography to polymeric materials nanostructured by nanoimprint lithography, the miniaturization of three-dimensional engraved spaces took a giant leap forward. Micro-dimensions are already virtually invisible to the naked eye; nano-dimensions require a scanning electron microscope to be seen.
Lithography has thus exceeded the old domains of printed texts and of the “macro-image” with which we were more familiar, to reach other limits, in a new nano-imagery resolutely emerging from a dream world.
Ultra-miniaturized circuits, texts and images can, from now on, be conceived in infinitesimal spaces, and it may even be possible to think that millions of images, for example, could in the future easily be stored in less than one square meter of recording space.
However, we still know little about the stability and perennial nature of these digital media. How will the enormous mass of documentation recorded each day, all the images and mixed texts, be preserved? What will become of them in the coming centuries? We, who have already benefitted from many “recordings” of the past, also have a shared responsibility for the way in which we leave our imprints for future generations. From now on, we dare to hope, copying and the successive multiplication of documents will allow a kind of systematic and unlimited preservation of writings and images for the future.
1 Foreword written by Jörge DE SOUSA NORONHA.
The microelectronic industry is remarkable for its exponential growth over recent decades. At the heart of this success is “Moore's law”, a simple technical and economic assessment according to which it is always possible to integrate more and more functions into a circuit at reduced costs. This observation, made in the mid-1960s, has been transformed into a passionate obligation to fulfill its own prophecy, and has focused the efforts of an entire generation of microelectronics researchers and engineers.
Anyone talking about greater integration density is thinking about increasing our capacity to precisely define and place increasingly smaller components, building and using materials to support them. Lithography is succeeding in this arena, using increasingly sophisticated techniques, and is essential to the progress of the semiconductor industry because it allows a reduction in the size of patterns as well as an increase in the integration density of the integrated circuits at an economically acceptable cost.
The issue of dimension is considered so central to all microelectronic improvements that the industry names each process generation, or technological node, after a dimension which characterizes the technology; often, the half-pitch of the densest interconnection level is used. For a 45 nm technology, for example, the minimum period of the interconnection pattern is 90 nm. Doubling the integration density of a circuit means scaling its linear dimensions by a factor of about 0.7: the typical nominal dimensions of advanced technologies follow one another at this rate, from 90 nm to 65 nm, then 45 nm, 32 nm, 22 nm, etc.
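The 0.7 factor is simple geometry: doubling the number of components per unit area halves the area available to each, so every linear dimension shrinks by 1/√2. As a short check:

```latex
% Doubling density halves the area per component:
%   A' = A/2 \implies L'^2 = L^2/2 \implies L' = L/\sqrt{2}
\frac{L'}{L} = \frac{1}{\sqrt{2}} \approx 0.707
% Applied repeatedly:
%   65 \times 0.707 \approx 46,\quad
%   45 \times 0.707 \approx 32,\quad
%   32 \times 0.707 \approx 23 \;(\mathrm{nm})
```

The commercial node names round these values, which is why the published sequence (90, 65, 45, 32, 22 nm) only approximately follows the exact 1/√2 ratio.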
From a very simplistic point of view, the fabrication of integrated circuits concatenates and alternates two types of processing on the wafer (Figure I.1); either:
– a functional layer is deposited and patterned by a lithographic process: the material is localized by removing the extra material from the non-selected areas (subtractive process); this is the case, for example, for contact holes through an insulating layer; or
– a specific area is defined where a technological process is locally applied, the confinement system being removed at the end of the step (additive process): this is the case for ionic implantation or localized electro-deposition.
The efficiency of the lithographic process depends on only a few fundamental parameters:
– the capability of printing even the smallest patterns, or resolution;
– the precise alignment of each layer of a circuit;
– the capacity to obtain repeatable patterns, of a controlled geometrical shape;
– the capacity to control fabrication costs as a function of the products' typology.
A greater integration density implies that ever smaller patterns must be manufacturable, hence the focus on the ultimate resolution of lithography techniques. Patterns of just a dozen nanometers no longer surprise anyone, and even atomic resolutions are now achievable under today's most sophisticated experimental conditions.
Optical lithography remains the production technique of choice. Though its abandonment was repeatedly predicted as the physical limits of the micron, and then of 100 nm, were crossed, it is still today the preferred technique for mass production at 32 nm, thanks to the numerous innovations of the past 20 years.
In optical lithography, a polymer layer called a photosensitive resist is deposited on a wafer. This resist is composed of a matrix which is transparent at the exposure wavelength and contains photosensitive compounds. When the image of the patterns of a mask is projected onto the wafer (and onto the photosensitive resist), the exposed areas undergo a photochemical reaction which, when completed correctly, either enables the dissolution of the resist in those areas (in the case of positive resists) or prevents it (in the case of negative resists). We can therefore obtain perfectly delimited areas where the substrate is bare, alongside areas still protected by the resist, allowing a subsequent local treatment. At the end of the process, the resist is removed from the wafer. During the fabrication of integrated circuits, this step is repeated several dozen times, hence the central role of lithography in microelectronics.
In order to understand simply how this technique reaches its highest resolution, we can refer to the standard formula giving the resolution, R:

R = k1 × λ / NA

in which λ is the wavelength of the exposure light, NA the numerical aperture of the projection optics and k1 a factor depending on the technological process. Each of these factors corresponds to a way of improving the image resolution.
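As an illustration, the formula can be evaluated for a few representative exposure settings; the wavelength, NA and k1 combinations below are typical published figures, assumed here only to give orders of magnitude.

```python
# Illustrative evaluation of the resolution formula R = k1 * lambda / NA.
# The wavelength/NA/k1 combinations below are typical textbook values,
# assumed here for illustration only.

def resolution(wavelength_nm: float, na: float, k1: float) -> float:
    """Return the minimum printable feature size R in nanometers."""
    return k1 * wavelength_nm / na

# i-line stepper: lambda = 365 nm, modest aperture, relaxed k1
print(resolution(365, 0.60, 0.60))

# dry ArF scanner: lambda = 193 nm, NA = 0.93, aggressive k1
print(resolution(193, 0.93, 0.35))

# immersion ArF: water raises the usable NA to about 1.35
print(resolution(193, 1.35, 0.30))   # ~43 nm
```

The third line shows why immersion at 193 nm, combined with a low k1, reaches well below the exposure wavelength itself.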
Improvements were first made by decreasing the exposure wavelength λ. This was why, in the 1980s, the first tools started using different radiations from a mercury lamp (λ = 436 nm, called g-line radiation; 405 nm, or h-line; and then 365 nm, or i-line), usually using reduction projection optics based on quartz. Each wavelength change was accompanied by process changes that were major at the time, but which, in retrospect, could now be qualified as minor.
The first transition came in the 1990s with the use of deep ultraviolet excimer lasers, first with 248 nm (with a KrF laser) and then 193 nm (with an ArF laser), and allowed feature size resolution below the 0.1 µm limit to be reached. However, this evolution required major changes in either projection optics (use of CaF2 in addition to quartz) or in the choice of the transparent matrix of the photosensitive resist.
The natural evolution would have been towards F2 lasers (λ = 157 nm), which require projection optics made mainly of CaF2, a material whose birefringence has proven to be a major obstacle: in the decade after 2000, after many years of development, industry finally concluded that it was illusory to continue down this path for mass production.
Reducing the k1 factor then appeared very promising. This is achieved first by improving the resist process, for example by increasing its contrast through nonlinear phenomena or by controlling the diffusion of the photosensitive compound. By optimizing illumination techniques (annular, quadrupolar, etc.), it is also possible to gain resolution and process control, but often at the price of favoring certain shapes or pattern orientations.
It has been, above all, by mastering diffraction phenomena, and thus influencing the exposure light phases, that progress has been the most spectacular: it has been acknowledged that it is now possible to go beyond the Rayleigh criterion and print patterns even smaller than the exposure wavelength. From laboratory curiosities, these techniques have now become the workhorse of the microelectronics industry and are now known under the name “Resolution Enhancement Techniques”.
In a very schematic manner, for a given illumination and resist process, we try to calculate what the patterns and phase-differentiated areas on a mask should be in order to obtain an image on the wafer which matches the image initially conceived by the circuit designers. These inverse calculations are extremely complex and demand very powerful computers (in some cases taking up to several days, which affects the cycle time of prototypes of new circuits). The goal is to take into account the proximity effects between close patterns (hence a combinatorial explosion of the calculation time), using the most precise optical models possible (and, as technologies improve, it is important to take into account not only intensity and phase but also light polarization). The resulting pattern on a mask becomes particularly complex, and the cost of a mask set for a new circuit can exceed several million dollars for the most advanced technologies, which can become a major obstacle for small production volumes.
Despite this complexity, it is increasingly difficult to find a solution for arbitrary patterns (called random logic patterns, even though this term is inappropriate). The idea arose to simplify the problem by splitting the layout into groups of patterns that are as periodic as possible (and therefore easier to process) and obtaining the desired design on the wafer by multiple exposures. This approach, despite its significant production costs, has become common in the most advanced technologies.
The numerical aperture (NA) of the projection tool has also been increased, even though we know that an increase in NA can only be made to the detriment of the depth of field. NA has indeed grown over recent years, decreasing the size of the exposed field. This is why patterns are “photo-repeated”: the exposure of a field a few centimeters in size is repeated over the entire wafer (the tool used is called a photo-repeater or “stepper”); the area exposed at any one time was later reduced further by scanning a light-slit over the exposure field (the tool then being called a “scanner”). Unfortunately, lithography remained limited by the numerical aperture, which cannot exceed 1 in air.
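To give an order of magnitude of the stepping involved, a simple sketch follows; the 26 mm × 33 mm field size is a typical modern full-field value, assumed here purely for illustration.

```python
import math

# Rough upper-bound estimate of how many exposure fields tile a wafer,
# ignoring edge losses. Field size 26 mm x 33 mm is a typical full-field
# scanner value, assumed here for illustration.
WAFER_DIAMETER_MM = 300
FIELD_W_MM, FIELD_H_MM = 26, 33

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
field_area = FIELD_W_MM * FIELD_H_MM

# Roughly 80 fields per wafer, each exposed in turn by stepping
# (and, on a scanner, by sweeping the slit) across the wafer.
print(wafer_area / field_area)
```

In practice the count is lower because partial edge fields are often skipped, but the estimate shows why exposure throughput per field matters so much.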
Researchers then returned to their old optical knowledge: by adding a layer of liquid (with a higher index than air) between the final lens of the exposure tool and the resist, this limit could be overcome. This “immersion lithography” was not established without difficulty: the defect density generated by the process was at first high, not to mention the increased complexity of the lithographic tool. The conjunction of the major difficulties encountered in 157 nm lithography and the need to keep decreasing dimensions made this technique viable, and it is starting to be used for mass production.
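The gain follows from the definition NA = n·sin θ: in air (n = 1) the aperture can never exceed 1, while an immersion liquid raises the ceiling in proportion to its index. A minimal sketch, assuming the commonly quoted value of about 1.44 for the refractive index of water at 193 nm:

```python
import math

# NA = n * sin(theta): the index n of the medium between the last lens
# and the resist caps the achievable numerical aperture.
def numerical_aperture(n_medium: float, half_angle_deg: float) -> float:
    return n_medium * math.sin(math.radians(half_angle_deg))

# Dry tool: even a 72-degree half-angle keeps NA below 1.
print(numerical_aperture(1.00, 72))

# Immersion: water (n ~ 1.44 at 193 nm, an assumed illustrative value)
# with the same half-angle pushes NA well above the dry limit.
print(numerical_aperture(1.44, 72))
```

Commercial immersion tools stop around NA = 1.35, below the theoretical water ceiling, because the lens half-angle cannot be increased indefinitely.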
The next step was to increase the refractive index of the liquid above that of water, and that of the projection optics (the lenses) above that of quartz. However, as in the case of 157 nm, this approach is blocked by major material problems, and the future of this path beyond the resist-water-quartz system seems highly compromised.
Many believe that a major decrease in the exposure wavelength would significantly relax the constraints that apply to lithography. Hence there has been a unique worldwide effort to develop Extreme UltraViolet (EUV) lithography, using a 13.5 nm wavelength. However, despite an enormous effort over the past two decades, this technology remains blocked by major problems of source power and of industrial facilities able to produce defect-free masks. Initially foreseen for introduction at the 90 nm node, it now has difficulty addressing even 22 nm technologies. As a result, aspects initially considered peripheral, such as high numerical aperture optics, are coming back to the forefront, even though other technological problems remain unresolved for industrial manufacturing.
Complexity has considerably increased the cost of lithography for the fabrication of integrated circuits for the most advanced technologies. The newest immersion scanners, in addition to their environment (resist coating track, metrology) easily cost over $50 million each, and it would not be surprising if a price of $100 million was reached with EUV, hence the large amount of research into alternative technologies to optical lithography in order to either significantly decrease the cost or to address very specific applications that do not necessarily need the most advanced lithographic tools.
One alternative technique was established a long time ago: electron beam (often called “e-beam”) lithography. This technique is limited neither by wavelength nor by depth of field, which makes it very attractive. The absence of a mask is an additional advantage in view of the never-ending increase in mask prices, especially for small volume production. The disadvantage of this technique is that pattern printing can only be achieved sequentially (the electron beam writes in the resist pixel after pixel), which does not allow high enough productivity for mass production. In addition, e-beam can no longer claim superiority in terms of resolution and alignment precision, because of the continuous progress of optical lithography. However, new projects are being developed, among them the idea of massively multiplying the number of independently controlled beams (tens of thousands of beams are mentioned): productivity would then increase significantly, with the prospect of application to small volume production. Beyond this application, electron beam lithography remains a preferred tool for research activities, combining flexibility, dimension control and affordable price. It can also be used to precisely repair circuits (or to print specific patterns on demand), using either an electron or an ion beam.
Other alternative techniques offer interesting prospects for precise applications:
– nanoimprint lithography, similar to the techniques used to fabricate CDs or DVDs from a master. This enables nanoscale resolutions to be achieved, and could emerge as a contender technology if there were only one lithographic level. It has also been shown that this technique could be used to print three-dimensional patterns. The stacking of dozens of layers in integrated circuits is still to be demonstrated industrially, in particular in terms of alignment precision and defect density due to fabrication;
– near-field lithography is still the perfect tool when aiming for ultimate resolution (potentially positioning atoms one by one). In its current state it suffers from the same intrinsic limitations as electron lithography (low productivity), as well as from difficult set-up at the ultimate resolutions, but it could open up real prospects with tip arrays of the Millipede type;
– X-ray lithography was, for a long period after the major efforts of the 1980s, not considered adequate to become an industrial technique. Source weakness (even though synchrotrons are huge experimental systems), the difficulty of fabricating transparent masks and the absence of reduction optics have heavily handicapped the future of this technique. However, it remains useful for specific applications (such as the LIGA technique1), given its great depth of field, which can be exploited in microsystems.
A special note should be made about self-organizing techniques. These rely on a simple fact: nature seems to be able to generate complex structures from apparently simple reactions. More specifically, local interactions can induce unexpected or even complex emergent behaviors: this is called self-organization. Convincing examples of periodic structures generated by these techniques are regularly reported in the scientific literature; however, it is hard to see how to exploit them to produce future low-cost microprocessors. Thus, two directions now exist:
– the use of these phenomena to locally improve process quality. For example, resists based on copolymers could help reduce the line roughness of lithographic patterns; and
– the notion of “directed self-assembly” or “templated self-assembly”, which is the most important direction for more complex structures. This is about defining and implementing boundary conditions which, using local self-organization forces, could generate the desired complex structures.
Finally, it is important to remember that the fabrication cost aspect of these emerging technologies remains completely speculative, since the technical solutions to be implemented on an industrial scale are still unknown.
This focus on ultimate resolution as the connecting thread of this book should not hide other technical elements that are also critical to lithography's success. Popular accounts often forget that the capacity to superimpose two patterns accurately contributes greatly to the capacity to integrate many components in a circuit. Indeed, if patterns are misaligned, an area around each pattern has to be freed to ensure the functionality of the circuit, thus reducing the integration density (Figure I.2). Achieving alignment with a precision equal to a fraction of the minimum pattern size (a few nm), and measuring it, represents a challenge that lithography has so far been able to meet.
The functionality of a circuit will depend on the precision with which the patterns on the wafer are printed. Metrology is a key element in mastering the production yield, and the demands regarding precision, information integrity and measurement speed keep growing. Previously, optical microscopy techniques were enough to measure, in a relative way, the two critical parameters of a lithographic step, namely the dimension of the patterns and their alignment in relation to the underlying layers. As dimensions have decreased, standard optical techniques have been replaced by different approaches:
– the use of an electron beam microscope (and more recently near-field techniques) enabled a natural extension to the smallest dimensions;
– light scattering from periodic patterns (for example scatterometry) gives access to more complete information on the dimensions and shape of the patterns, even though the interpretation of the results remains uncertain. A move towards shorter wavelengths (for example SAXS with X-rays) opens up new perspectives (as well as some advantages, for example substrate transparency).
However, the challenges keep increasing. A relative measurement is no longer sufficient to guarantee a circuit's performance, and the possibility of absolute metrology on a nanometric scale remains an open question. In addition, the shape of the pattern is increasingly complex: a three-dimensional measurement is essential, at least when considering mass production, even if the techniques used are still in the embryonic stages. Finally, proximity effects between patterns make the measurement indicators less representative of the complexity of a circuit: the metrology of a statistical collection of meaningful objects in a complex circuit is a field of research that is still wide open.
It is important to mention a technical field which, even if not part of lithography in the strictest sense, is connected to it to a large extent: the measurement of physical defects in a production process. Indeed, the analysis and measurement of defectivity is of interest in two different respects:
– for defects with an identified physical signature, techniques similar to lithography can be applied, since the task is to acquire an image with optical techniques (in the broad sense, including charged particle beams) and to process it in order to extract meaningful information; and
– lithography is unique in the way that, in the case of the detection of a defect during this step, it is usually possible to rework the wafer and thus avoid the permanent etching of the defect into the circuit.
In conclusion, lithography has undergone several decades of unimaginable progress, bypassing predicted physical limits thanks to the ingenuity of microelectronics researchers and engineers. Even if questions emerge about the economic viability of decreasing dimensions at all costs, major steps forward are expected in the coming years, whether in the resolutions achieved, the integration density or the capacity to produce cheap complex structures.
1 Introduction written by Michel BRILLOUËT.
1 LIGA is a German acronym for Lithographie, Galvanoformung, Abformung (Lithography, Electroplating, Molding).
Since the beginning of the microelectronics industry, optical lithography has been the preferred technique for mass fabrication of integrated circuits, as it has always been able to meet the requirements of microelectronics, such as resolution and high productivity.
In addition, optical lithography has adapted to technology changes over time. Moreover, it is expected to be able to be used up to the 45 nm, 32 nm [ITR] and maybe even the 22 nm technology nodes (Figure 1.1).
The principle of this technique is to transfer the image of patterns inscribed on a mask onto a silicon wafer coated with a photoresist (Figure 1.2). The image is optically reduced by a factor M, where M is the projection optics reduction factor, which generally equals 4–5. The different elements of a lithography tool are detailed below.
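As a minimal numerical illustration of the reduction factor (the function name and values below are ours, not from the text), the projection optics simply divide the mask dimensions by M:

```python
def printed_size(mask_size_nm: float, M: float = 4.0) -> float:
    """Feature size printed on the wafer for a mask pattern of
    mask_size_nm, given the projection reduction factor M."""
    return mask_size_nm / M

# With M = 4, a 360 nm pattern on the reticle prints as a 90 nm feature.
print(printed_size(360.0))  # 90.0
```

This is why mask patterns can be drawn M times larger than the features actually printed, relaxing the mask manufacturing constraints.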
However, due to the continuous decrease of chip dimensions, the tools used in optical lithography have now become very complex and very expensive. It is thus necessary to consider using low-cost alternative techniques in order to reach the resolutions forecast in the International Technology Roadmap for Semiconductors (ITRS) (Figure 1.1).
An optical projection lithography tool consists of a light source, a mask (or reticle) containing the drawing of the circuit to be made, and an optical system designed to project the reduced image of that mask onto the photoresist coating the substrate (Figure 1.2). The purpose of this chapter is to introduce the principles and performances of optical lithography, as well as alternative techniques called “new generation” techniques.
During exposure, the resist is chemically altered, but only in the areas that receive light. It then undergoes a baking step that makes the exposed zones either soluble or insoluble during the development step.
In the case of a “positive” photoresist, the exposed part is dissolved. There are also “negative” photoresists, for which only the non-exposed zones are soluble in the developer solution. The resist is thus structured like the patterns present on the mask: this defines the device's future process level.
The patterns thus defined can then be transferred to the underlying material during an etch step. The resist remaining after development is used as an etch mask: the areas protected by the resist are not etched. The resist can also serve as a mask for selective ion implantation in the open areas. All these steps are shown in Figure 1.3.
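The positive/negative resist behavior described above amounts to a small truth table, which can be sketched as follows (a toy illustration; the function and names are ours):

```python
def removed_by_developer(exposed: bool, resist_type: str) -> bool:
    """True if the resist at a given location dissolves during development."""
    if resist_type == "positive":
        return exposed          # exposed zones are dissolved
    elif resist_type == "negative":
        return not exposed      # only non-exposed zones are soluble
    raise ValueError("resist_type must be 'positive' or 'negative'")

# A positive resist opens the exposed areas; a negative resist keeps them.
print(removed_by_developer(True, "positive"))   # True
print(removed_by_developer(True, "negative"))   # False
```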
Illumination consists of a source and a condenser. The source must be powerful, as it sets the exposure time for a given dose and thus helps determine the tool's throughput, a major economic factor. It must work at a wavelength for which photoresists have been optimized. Furthermore, it has to be quasi-monochromatic, as the optics are only efficient within a very narrow spectral range.
In order to improve the performances (such as resolution) of lithography tools, as discussed below, it is necessary to reduce the source wavelength. To meet these criteria, different sources have been used over time, from mercury vapor lamps (436 nm g-line, 405 nm h-line and 365 nm i-line) to ultraviolet-emitting lasers and, nowadays, deep ultraviolet laser radiation at 248 nm and 193 nm. The source is followed by a condenser made of a set of lenses, mirrors, filters and other optical elements. Its role is to collect and filter the light emitted by the source and to focus it at the entrance pupil of the projection optics (Figure 1.2). This type of illumination, called “Köhler” illumination, has the characteristic of imaging the source into the pupil of the projection lens rather than onto the mask, as is the case with critical or Abbe-type illumination. This ensures good illumination uniformity on the mask.
It will be seen later that the illumination geometry (circular, annular, bipolar) of such a projection lithography system can vary to improve the imaging performance. This is the widely used concept of partial coherence which is part of the image-shaping process.
The mask is a critical part of the lithography tool, as the patterns defined on it are to be reproduced on the wafer. The quality of the integrated circuits directly depends on the mask set used, in terms of dimensions, flatness, drawing precision and defect control. The mask manufacturing process is an important aspect of the technology.
As stated in the ITRS for 32 nm technology node masks, expected in 2013 (Figure 1.4), it is predicted that CD uniformity (in other words, the achieved size of the patterns) will have to be controlled to within 1 nm and that the defect size will have to be kept below 20 nm. In addition, pattern drawing on the mask becomes increasingly complex as the diffraction limit gets closer.
These days, in order to improve the performances of lithography, Optical Proximity Corrections (OPCs) are made by optimizing the patterns' shape on the mask.
As will be mentioned later, this is part of a whole set of reticle enhancement techniques (RETs). Thus the cost of a mask becomes an important parameter that must not be neglected in the final cost of a chip. As many masks as there are levels (several dozens) are required, and this is why much effort has been put into developing new maskless lithography techniques.
The simplest masks used in lithography are binary masks. They consist of a substrate made of a material that is transparent at the exposure wavelength, typically 6 inch-square, ¼ inch-thick fused silica plates for the 193 nm and 248 nm wavelengths. The patterns are etched into a chrome layer a few tens of nanometers thick, which is absorbent at those wavelengths.
The mask is composed of either transparent or absorbent areas, hence the term “binary”. It is an amplitude mask, that is to say it only alters the amplitude of the wave going through it. That way, the electric field amplitude that goes through the silica does not change, whereas the field amplitude going through the chrome equals zero after the mask.
There is another type of mask that uses both the amplitude and phase of the wave in the image-shaping process: the phase shift mask (PSM). This type of mask was first introduced in 1982 to improve lithographic performances [LEV 82]. Like those of a binary mask, the patterns of a PSM are made out of chrome on a transparent fused silica substrate. In the case of a PSM, a material is added whose purpose is to shift the phase of the incident wave. There are two types of phase shift masks: the alternating phase shift mask, for which the phase-shifting material and the chrome coexist, and the attenuated phase shift mask, for which the pattern is designed to attenuate the amplitude and shift the phase of the wave going through it. The attenuated PSM is typically used as an RET. How this type of mask impacts lithographic performances is explained later.
Projection lithography was developed in the 1970s along with the development of efficient refractive lenses, in other words optical elements working in transmission. Previously, images were made by contact or by proximity with 1:1 scale masks. The projection reduction factor M was introduced with projection lithography; today, M typically equals 4. Having a reduction factor greater than 1 is an advantage, as it does not require the mask patterns to be the same size as the printed patterns, which relaxes some of the constraints of the mask manufacturing process.
Since their creation, projection optics have become increasingly complex in order to improve their performance, whilst increasing their numerical aperture: they are now composed of more than 40 elements and can be up to 1 m high and with a weight of approximately 500 kg (Figure 1.5). In fact, just like the wavelength, the numerical aperture is an important parameter which, as will be studied later, preconditions the resolution of the lithography tool.
Let us introduce here the concept of numerical aperture. The numerical aperture of a lens or an imaging device is defined as follows:

NA = nim sin θmax

where nim is the refractive index of the medium, and θmax the maximum half-angle of the light cone on the image or object side, depending on whether the numerical aperture is seen from the object or the image side, as represented in Figure 1.6. Indeed, an optical element has two numerical apertures linked to each other by the lens magnification: one on the image side and one on the object side.
The object and image numerical apertures are proportional. Their ratio equals M, the reduction factor of the projection optics:

NAimage / NAobject = M
When the lens is in air, according to the relationship above, its numerical aperture is determined only by its collection angle and, therefore, depends on its diameter. It is a genuine technological challenge for optical engineers to make high-quality lenses, free of aberrations and transparent at the illumination wavelengths. Many improvements have been achieved in this field and it is now possible to find very efficient lenses with a very high numerical aperture (greater than 0.8). It will be shown later that the emergence of immersion lithography encouraged the development of even more complex lenses and immersion media with higher refractive indexes, leading to higher numerical apertures.
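These relationships can be sketched numerically (the half-angle below is an illustrative value of ours; the refractive index of water at 193 nm, about 1.44, is the figure usually cited for immersion lithography):

```python
import math

def numerical_aperture(n_medium: float, theta_max_deg: float) -> float:
    """NA = n * sin(theta_max), following the definition above."""
    return n_medium * math.sin(math.radians(theta_max_deg))

theta = 53.0  # collection half-angle in degrees (illustrative)
na_air = numerical_aperture(1.0, theta)     # ~0.80 in air
na_water = numerical_aperture(1.44, theta)  # ~1.15 with water immersion
# The image- and object-side apertures are linked by the reduction
# factor M: NA_image = M * NA_object.
print(round(na_air, 2), round(na_water, 2))
```

With the same collection angle, replacing air by a higher-index medium pushes the numerical aperture above 1, which is the key idea behind immersion lithography.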
A 200 mm wafer usually holds about 70 exposure fields, each one corresponding to the image of the mask. To cover a whole wafer, it is necessary to reproduce the image of the mask several times. This is called “photorepeating”.
There are two kinds of lithography tool used for the photorepeating step. The first, known as a “stepper”, reproduces the reduced image of the mask on one field; the wafer is then moved in two directions to expose the other fields. The second tool, called a “scanner”, was introduced later and is the tool used today. With this type of tool, the mask image is projected through a slit while the mask and the substrate are scanned synchronously. It allows large fields in the scanning direction without changing the optical system (Figure 1.6).
However, this system can produce some difficulties, such as vibration and synchronization issues between the mask and the wafer.
The typical features of the most evolved scanners are summed up in Table 1.1.
One should not forget that all these considerations about theoretical resolution shrinkage do not take into account the technological feasibility of the lithographic process. In fact, the finer the resolution, the harder it is to control CDs. In practice, defocus strongly impacts the patterns and increases their sensitivity to other process errors. In the same way, a dose setting error degrades the patterns and can bring them out of specification. A process tolerance criterion is usually defined, for instance the “on wafer” CD varying by at most ±10% around the target CD. This defines a focus range, the depth of focus, and a dose range, the exposure latitude. In microelectronics, an imaging process is usually characterized by simultaneously varying focus and dose in order to evaluate the process depth of focus (DOF) and exposure latitude.
The focus-dose matrix obtained can be visualized using Bossung curves [BOS 77]. These curves represent the printed critical dimension as a function of the focus for different exposure doses. Figure 1.7 shows an example of that type of curve for dense lines and a 120 nm target CD in the following illumination conditions: a binary mask, where NA = 0.75, σ = 0.6.
From these curves, the process window can be deduced, that is, the focus and dose ranges for which the CDs obtained meet the predefined specifications. Plotting the exposure latitude as a function of the DOF or the defocus gives a good representation of the coupled effects of defocus and dose on the lithography process.
The best configuration is obtained with a wide exposure latitude and a high DOF, as this ensures a larger process window. However, decreasing the dimensions makes the process window shrink. At first this problem was avoided by improving focus control or substrate flatness. Now, parameters that influence the imaging process have to be modified to get past such constraints. Improving the photoresists helped improve the process windows at first but, as the process becomes less tolerant, resolution or reticle enhancement techniques must be used.
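The ±10% criterion and the resulting process window can be illustrated with a deliberately simplified CD model (the functional form and numbers below are invented for illustration and are not taken from Figure 1.7):

```python
def cd_model(focus_um: float, dose_mj: float,
             target_cd: float = 120.0, best_dose: float = 30.0) -> float:
    """Toy Bossung-style model: the printed CD grows quadratically with
    defocus and shrinks as the dose increases (positive-resist behavior)."""
    return target_cd * (best_dose / dose_mj) * (1.0 + 0.5 * focus_um ** 2)

def in_process_window(focus_um: float, dose_mj: float,
                      target_cd: float = 120.0, tol: float = 0.10) -> bool:
    """Apply the +/-10% CD tolerance criterion to the modeled CD."""
    cd = cd_model(focus_um, dose_mj, target_cd)
    return abs(cd - target_cd) <= tol * target_cd

print(in_process_window(0.0, 30.0))  # True: best focus, nominal dose
print(in_process_window(0.6, 30.0))  # False: defocus pushes CD out of spec
```

Scanning such a model over a grid of focus and dose values reproduces, qualitatively, how Bossung curves are reduced to a process window: the set of (focus, dose) pairs for which the CD stays within specification.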