Table of Contents
Foreword
Introduction
Chapter 1. X-ray Lithography: Fundamentals and Applications
1.1. Introduction
1.2. The principle of X-ray lithography
1.3. The physics of X-ray Lithography
1.4. Applications
1.5. Appendix 1
1.6. Bibliography
Chapter 2. NanoImprint Lithography
2.1. From printing to NanoImprint
2.2. A few words about NanoImprint
2.3. The fabrication of the mold
2.4. Separating the mold and the resist after imprint: de-embossing
2.5. The residual layer problem in NanoImprint
2.6. Residual layer thickness measurement
2.7. A few remarks on the mechanical behavior of molds and flow properties of the NanoImprint process
2.8. Conclusion
2.9. Bibliography
Chapter 3. Lithography Techniques Using Scanning Probe Microscopy
3.1. Introduction
3.2. Presentation of local-probe microscopes
3.3. General principles of local-probe lithography techniques
3.4. Classification of surface structuring techniques using local-probe microscopes
3.5. Lithographic techniques with polymer resist mask
3.6. Lithography techniques using oxidation-reduction interactions
3.7. "Passive" lithography techniques
3.8. Conclusions and perspectives
3.9. Bibliography
Chapter 4. Lithography and Manipulation Based on the Optical Properties of Metal Nanostructures
4.1. Introduction
4.2. Surface plasmons
4.3. Localized plasmon optical lithography
4.4. Delocalized surface plasmon optical lithography
4.5. Conclusions, discussions and perspectives
4.6. Bibliography
Chapter 5. Patterning with Self-Assembling Block Copolymers
5.1. Block copolymers: a nano-lithography technique for tomorrow?
5.2. Controlling self-assembled block copolymer films
5.3. Technological applications of block copolymer films
5.4. Bibliography
Chapter 6. Metrology for Lithography
6.1. Introduction
6.2. The concept of CD in metrology
6.3. Scanning electron microscopy (SEM)
6.4. 3D atomic force microscopy (AFM 3D)
6.5. Grating optical diffractometry (or scatterometry)
6.6. What is the most suitable technique for lithography?
List of Authors
Index
First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc. Adapted and updated from La nanolithographie published 2010 in France by Hermes Science/Lavoisier © LAVOISIER 2010
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd, 27-37 St George's Road, London SW19 4EU, UK (www.iste.co.uk)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA (www.wiley.com)
© ISTE Ltd 2011
The rights of Stefan Landis to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Cataloging-in-Publication Data
Nanolithography English
Nano-lithography / edited by Stefan Landis.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-84821-211-4
1. Microlithography. 2. Nanotechnology. I. Landis, Stefan. II. Title.
TK7872.M3N3613 2011
621.3815'31--dc22
2010046516
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-211-4
“An image is a pure creation of spirit.” (Pierre Reverdy)
Today, in a world of eternal representation, we are the observers of the theater of the grand image for as far as the eye can see, a theater which incessantly unfolds in the marvelous recording box that is our brain. Though we see them, the touch and even the substance of illustrations sometimes escape us completely, so much so that we can scarcely differentiate between representative illusion and the physical reality of beings and things. Yet, the representation of the world in our eyes is not the same as the one that we want to transpose, to put into images. There, the reality of that which is visible is captured by our brains, which make copies that are sometimes faithful, sometimes untrue. To produce these images we have, since the dawn of mankind, resorted to sometimes extremely complex alchemies, where invention has struggled with various materials, as a result of which we have been able to leave behind our illustrated drawings, the prints of our lives and of our societies.
For some 32,000 years man has not stopped etching, doodling, drawing, copying, painting, reproducing – for nothing, for eternity – producing millions of infinite writings and images which are the imperishable memory of his genius. How did he do it, with which materials, on what, and why? The alchemy of representation, in its great complexity, deserves to be examined at a slower pace, so that we can try to understand, for example, how today's images reach us in a kind of gigantic whirlwind, whereas 200 years ago these things were still rather sober. Or how else could we go from an image that we can look at, to an image that is difficult to see, or to one that we cannot even see with the naked eye? Whereas now we throw things away, in the past images were preciously preserved. Are the images which we try to preserve today not the same as the ones we were preserving yesterday?
It is amongst the cavemen that the figure I call the image maker first takes shape. Collating their visions, their dreams and their beliefs on cave walls, these first imagicians undoubtedly bequeathed to us the only widely known account of this period. In their wake, we can better evaluate the formal evolution of the visual representation of nature and things, this inevitable invention in which we endeavor to capture the spirit through an artefact.
Man had to train long and hard to finally tame and durably transmit the images of the world which surrounded him. The techniques employed across the ages to make and convey these images, the materials, the pigments, the bindings, the instruments and the mediums, either natural, chemical or manufactured, not only conditioned the appearance of the image itself but also its durability.
Cave paintings, coins, palaces, churches, are just some of the mediums which have left us with invaluable visual evidence of more or less remote pasts, sometimes essential for putting together the history of humanity. If we consider the manufacturing and the trading of images from the beginning, and in its totality, we can distinguish two major periods: the longest, the pre-photographic; and the post-photographic, which began in the first half of the 19th Century, and which is therefore extremely recent. Admittedly, our eyes can see but they cannot take photographs. The images that they collect are transitory fragments in a “band-width”, a time kept in the memory, often lost, far from any material existence, and for which any attempt at verbal transcription falls short of reality. For other animals, sight is part of a sub-conscious effort to survive. For man, by contrast, sight is a conscious, irreplaceable instrument for appreciating the outside world, and an integral part of his own physical and mental development. For us, to see is natural. However, representing what we see calls upon a certain kind of initiation. How were the first painters of history introduced to engraving and drawing? How were they able to find or invent the tools and materials needed to succeed?
The tools, materials and shapes are the three essential ingredients needed to build and formalize the representation of the visible. Footprints on sand, for example, undoubtedly the first prints left by man, were already kinds of natural images of the body, and most probably were the root of the original idea to make images. The tool here was man's own foot, with its shape, using a soft and flexible material, a support able to keep an image. Thus, without any doubt, the earth and sand were among the first image mediums, even before other sketches came to cover other materials, and other surfaces.
The various attempts leading to the reproduction and spreading of visible images or texts little by little drove man to develop very clever techniques, sometimes born out of chance, sometimes out of increasingly elaborate research. The first stone engravings (from before 600 BC) precede, by a long time, the first examples of wood engravings (c. 200 AD), or metal engravings made by a direct method, then etchings, or the invention of typographical characters, and, finally, lithography itself, which has been, from the 19th Century onwards, a practically irreplaceable means of reproduction, and remains an essential part of the book and advertising industries, even today.
The document media have also diversified and evolved incessantly since the beginning. Stone, bone or ivory, terracotta, glass, skins, leaves, wood, parchment, paper, celluloid, vinyl, are just some of the aids bequeathed to us, with greater or lesser clarity or brittleness, the precious evidence of life and the history of mankind.
In 1796, 43 years before the invention of photography, the lithographic reproduction technique was invented by Aloïs Senefelder in Germany. Developed during the first half of the 19th Century, it brought, without question, the most important graphic revolution to the worlds of text reproduction and printed images. In this respect, we can consider two very great periods in the history of print: the pre-lithographic period, and the one which began with lithography in all of its forms. Here, two distinct lithographic fields truly began to develop: on one side, the advanced forms of the graphics industry (and the photolithographic industry); on the other, a completely innovative form of artistic expression, freed from the technical constraints of engraving and able to devote itself with joy to much freer forms of graphics, with drawings made (or transferred) directly onto the lithographic support itself. These two domains participated, together, in the technical developments which led finally to the offset printing methods used overwhelmingly today, and which profit from these most advanced technologies.
As far as the photographic reproduction of images was concerned, one major challenge was the faithful reproduction of half-tones. This problem was solved in 1884 by Meisenbach, the inventor of the linear screen, which was quickly applied to typographical image reproduction and then successively to photolithography and to offset printing. The photographic support itself already contained the seeds and the “secret” of the visibility of half-tones, in the very fineness of the grain of photosensitive emulsions. But to print them, it was necessary to find a way of transcribing them into a printing matrix, initially in black and white, and later in color. An interesting characteristic is that the various screens we have just alluded to, in particular the finest or ultra-fine ones (more than 80 lines/cm) or the most recent digital grids forming an ultra-fine mesh of random dots, have always tended to blend in, to the point of becoming invisible to the naked eye. The printed images our eyes can see are actually optical illusions. Today, if we look closely at a beautiful reproduction of an engraving by Dürer, or at a painting by Velázquez, for example, it is impossible to distinguish the dots of the printing screens from which they are made. Already in the 19th Century, commercial chromolithography used clever methods to create half-tones, either with the proper granulation of the matrix (stones or granulated metal), or with dots drawn very finely with a pen, which simultaneously allowed ranges and mixtures of colors, of which there are some sublime examples. In the art field, it is nowadays necessary to use a microscope with a magnification of ×30 to determine the true nature of a printing technique.
The first half of the 20th Century also saw the first steps of a very new aid to knowledge. Indeed, 1936, with the publication of Alan Turing's founding article “On Computable Numbers, with an Application to the Entscheidungsproblem”, marks the true starting point of the creation of programmable computers. But it was especially from the 1980s onwards that the use of computers became democratized and, little by little, essential to the world of information and imagery. From then on, texts and images could be created by anyone, with no need to be preserved in a physical, material way, but instead held on media which we would not have dared even to imagine 30 years earlier. The image, which is still the product of another optical illusion, while keeping its own graphic originality, from now on needs no hardware support to be visible. It has its own light, can be modified at will, engraved, printed, and sent to the entire world at the single touch of a button. The image, in this case, is created in all its subtleties of color and light, not by a material screen, but by something which replaces it virtually: a succession of dots invisible to the eye (pixels) which are now at the origin of the texts and images digitally recorded on our computers.
During the second half of the 20th Century, the American Jack Kilby invented the first integrated circuit (in 1958), another artefact in the service of knowledge transmission which is at the root of modern data processing; the mass production of electronic chips with integrated transistors began not much later. For this work and his some 60 patents, Kilby received the Nobel Prize in Physics in 2000. All these circuits are used in a more or less direct way nowadays in information recording and in image handling and storage. The big family of integrated circuits and microprocessors continues to move forward, and with it has come another new technology, microscopic photolithography, which makes new plate sensitization techniques possible and, thanks to the use of masks and light beams, the engraving of circuit supports in smaller and smaller micro-relief (such as, for example, the various chip-cards with integrated circuits, whether analog or digital).
At the beginning of the third millennium, another “image” architecture was already on the horizon, in a nanosphere with still vague contours, which curiously makes us swing from a visible optical illusion towards an invisible physical reality. Indeed, from micro-photolithography to polymeric materials nanostructured by nanolithographic printing, the miniaturization of 3D engraved spaces took a giant leap forward. Micro-dimensions are already virtually invisible to the naked eye; nano-dimensions need a scanning electron microscope to be seen.
Lithography has thus exceeded the old domains of printed texts and of the “macro-image” with which we were more familiar, to reach other limits, in a new nano-imagery resolutely emerging from a dream world.
Ultra-miniaturized circuits, texts and images can, from now on, be conceived in infinitesimal spaces, and it may even be possible to think that millions of images, for example, could in the future easily be stored in less than one square meter of recording space.
However, we still know little about the stability and perennial nature of these digital media. How will the enormous mass of documentation recorded each day, all the images and mixed texts, be preserved? What will become of them in the coming centuries? We, who have already benefitted from many “recordings” of the past, also have a shared responsibility for the way in which we leave our imprints for future generations. From now on, we dare to hope, copying and the successive multiplication of documents will allow a kind of systematic and unlimited preservation of writings and images for the future.
Jörge DE SOUSA NORONHA
The microelectronic industry is remarkable for its exponential growth over recent decades. At the heart of this success is “Moore's law”, a simple technical and economic assessment according to which it is always possible to integrate more and more functions into a circuit at reduced costs. This observation, made in the mid-1960s, has been transformed into a passionate obligation to fulfill its own prophecy, and has focused the efforts of an entire generation of microelectronics researchers and engineers.
Anyone talking about greater integration density is thinking about increasing our capacity to precisely define and place increasingly smaller components, building and using materials to support them. Lithography is succeeding in this arena, using increasingly sophisticated techniques, and is essential to the progress of the semiconductor industry because it allows a reduction in the size of patterns as well as an increase in the integration density of the integrated circuits at an economically acceptable cost.
The issue of dimension is considered so central to all microelectronic improvements that the industry names each generation of the process, or each technological node, after a dimension which characterizes the technology; often, the half-pitch of the densest interconnect level is used. For a 45 nm technology, for example, the minimum period of the interconnection pattern is 90 nm. Doubling the integration density of a circuit means multiplying its linear dimensions by about 0.7 (i.e. 1/√2): the nominal dimensions of advanced technologies follow one another at this rate, from 90 nm to 65 nm, then 45 nm, 32 nm, 22 nm, etc.
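As a quick illustration of this 0.7× scaling rule (the starting value and node names are taken from the text; the rounding is our own, not any official roadmap), a few lines of Python reproduce the sequence:

```python
# Doubling integration density halves the pattern area, so linear
# dimensions shrink by a factor of 1/sqrt(2), roughly 0.7, per node.
start_nm = 90
nodes = [round(start_nm / 2 ** (i / 2)) for i in range(1, 5)]
print(nodes)  # [64, 45, 32, 22]: close to the industry's 65/45/32/22 nm names
```

The small discrepancies (64 versus 65) reflect the fact that node names are marketing labels rounded from the geometric sequence, not exact dimensions.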
From a very simplistic point of view, the fabrication of integrated circuits concatenates and alternates two types of processing on the wafer (Figure I.1); either:
– a functional layer is deposited, then localized by a lithographic process which removes the extra material from the non-selected areas (subtractive process): this is the case, for example, for contact holes through an isolating layer; or
– a specific area is defined where a technological process is locally applied, the confinement system being removed at the end of the step (additive process): this is the case for ionic implantation or localized electro-deposition.
The efficiency of the lithographic process depends on only a few fundamental parameters:
– the capability of printing even the smallest patterns, or resolution;
– the precise alignment of each layer of a circuit;
– the capacity to obtain repeatable patterns, of a controlled geometrical shape;
– the capacity to control fabrication costs as a function of the type of product.
A greater integration density implies that the very smallest patterns must be able to be manufactured, hence the focus on ultimate resolution for lithography techniques. Patterns of just a dozen nanometers do not surprise anyone anymore, and even atomic resolutions are now achievable, with today's more sophisticated experimental conditions.
Optical lithography remains the preferred production choice. Although its abandonment was predicted each time a physical limit was approached, first the micron and then 100 nm, it remains today the preferred technique for mass production at 32 nm, thanks to the numerous innovations of the past 20 years.
In optical lithography, a polymer layer called a photosensitive resist is deposited on a wafer. This resist is composed of a matrix which is transparent to the exposure wavelength and contains photosensitive compounds. When the image of the patterns from a mask is projected onto the wafer (and onto the photosensitive resist), the areas exposed are submitted to a photochemical reaction which, if completed correctly, enables the dissolution of the resist in those areas (in the case of positive resists), or prevents dissolution (in the case of negative resists). We can therefore obtain perfectly delimited areas for which the substrate is bare, and have areas still protected by the resist, allowing a subsequent local treatment. At the end of the process, the resist is removed from the wafer. During the fabrication of integrated circuits, this step is repeated several dozen times, hence the central role of lithography in microelectronics.
In order to understand simply how this technique reaches its highest resolution, we can refer to the standard formula giving the resolution, R:

R = k1 λ / NA

in which λ is the wavelength of the exposure light, NA the numerical aperture of the projection optics and k1 a factor depending on the technological process. Each of these factors corresponds to a way of improving the image resolution.
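To put rough numbers on this formula (the values below are illustrative assumptions for an ArF immersion tool, λ = 193 nm, NA = 1.35, aggressive k1 = 0.25, and are not taken from the text):

```python
def resolution_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Minimum printable feature size R = k1 * lambda / NA, in nanometers."""
    return k1 * wavelength_nm / na

# Assumed ArF immersion settings: 193 nm light, NA = 1.35, k1 = 0.25
print(round(resolution_nm(0.25, 193.0, 1.35), 1))  # 35.7
```

Each lever of the formula appears here directly: lowering λ or k1, or raising NA, shrinks R; the paragraphs that follow describe the practical cost of pulling each one.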
Improvements were first made by decreasing the exposure wavelength λ. This was why, in the 1980s, the first tools used the various emission lines of a mercury lamp (λ = 436 nm, called g-line radiation; 405 nm, or h-line; then 365 nm, or i-line), usually with quartz-based reduction projection optics. Each wavelength change was accompanied by process changes that were major at the time but which, in retrospect, could now be qualified as minor.
The first transition came in the 1990s with the use of deep ultraviolet excimer lasers, first with 248 nm (with a KrF laser) and then 193 nm (with an ArF laser), and allowed feature size resolution below the 0.1 µm limit to be reached. However, this evolution required major changes in either projection optics (use of CaF2 in addition to quartz) or in the choice of the transparent matrix of the photosensitive resist.
The natural evolution would have been towards F2 lasers (λ = 157 nm), which require projection optics made mainly of CaF2, a material whose birefringence proved to be a major obstacle: in the decade after 2000, after many years of development, industry finally concluded that it was illusory to continue down this path for mass production.
Reducing the k1 parameter then appeared very promising. This is achieved first by improving the resist process, for example by increasing its contrast through nonlinear phenomena or by controlling the diffusion of the photosensitive compound. By optimizing illumination techniques (annular, quadrupolar, etc.), it is also possible to gain resolution and process control, but often at the price of favoring certain shapes or pattern orientations.
It has been, above all, by mastering diffraction phenomena, and thus influencing the exposure light phases, that progress has been the most spectacular: it has been acknowledged that it is now possible to go beyond the Rayleigh criterion and print patterns even smaller than the exposure wavelength. From laboratory curiosities, these techniques have now become the workhorse of the microelectronics industry and are now known under the name “Resolution Enhancement Techniques”.
In a very schematic manner, for a given illumination and resist process, the aim is to calculate what the patterns and phase-differentiated areas on a mask should be in order to achieve an image on the wafer which matches the image initially conceived by the circuit designers. These inverse calculations are extremely complex and demand very powerful computers (in some cases taking up to several days, which affects the cycle time of prototypes of new circuits). The goal is to take into account the proximity effects between close patterns (hence a combinatorial explosion of the calculation time), using the most precise optical models possible (and, as the technologies improve, it becomes important to take into account not only intensity and phase but also light polarization). The resulting pattern on a mask becomes particularly complex, and the cost of a mask set for a new circuit can exceed several million dollars for the most advanced technologies, which can become a major obstacle for small production volumes.
Despite this complexity, it is increasingly difficult to find a solution for arbitrary patterns (called random logic patterns, even though this term is inappropriate). The idea therefore arose to simplify the problem by splitting the layout into groups of more periodic patterns (which are easier to process) and to obtain the desired design on the wafer by multiple exposures. This approach, despite its significant production costs, has become common in the most advanced technologies.
Additionally, the numerical aperture (NA) of the projection tool has been increased, even though we know that an increase in NA can only be made to the detriment of the depth of field. As NA has increased over recent years, the size of the exposed field has decreased. This is why patterns were “photo-repeated”, by repeating the exposure of a field a few centimeters in size over the entire wafer (the tool used is called a photo-repeater or “stepper”); the area exposed was then reduced a little more by scanning a light-slit over the exposure field (using a tool called a “scanner”). Unfortunately lithography seemed limited by the numerical aperture, which cannot exceed 1 for exposure in air.
Researchers then returned to their old optical knowledge: by adding a layer of liquid (with a higher index than air) between the final lens of the exposure tool and the resist, this limit could be exceeded. This “immersion lithography” was not established without difficulties: the defect density generated by the process was at first high, not to mention the increased complexity of the lithographic tool. But the conjunction of the major difficulties encountered in 157 nm lithography and the need to keep decreasing dimensions made this technique viable, and it is now starting to be used for mass production.
The next step was to increase the refractive index of the liquid above that of water, and that of the projection optics (the lenses) above that of quartz. However, as in the case of 157 nm lithography, this approach is blocked by major material problems, and the future of this path beyond the resist-water-quartz system seems highly compromised.
Many believe that a major decrease in the exposure wavelength would significantly relax the constraints that apply to lithography. Hence there has been a unique worldwide effort to develop Extreme UltraViolet (EUV) lithography, using the 13.5 nm wavelength. However, despite an enormous effort over the past two decades, this technology remains blocked by major problems of source power and of industrial facilities able to produce defect-free masks. Initially foreseen for introduction at the 90 nm node, it is struggling to address even 22 nm technologies. As a result, aspects initially considered peripheral, such as high numerical aperture optics, have come back to the forefront, even though other technological problems remain unresolved for industrial manufacturing.
Complexity has considerably increased the cost of lithography for the fabrication of integrated circuits for the most advanced technologies. The newest immersion scanners, in addition to their environment (resist coating track, metrology) easily cost over $50 million each, and it would not be surprising if a price of $100 million was reached with EUV, hence the large amount of research into alternative technologies to optical lithography in order to either significantly decrease the cost or to address very specific applications that do not necessarily need the most advanced lithographic tools.
One alternative technique was established a long time ago: electron beam (often called “e-beam”) lithography. This technique is limited neither by wavelength nor by depth of field, which makes it very attractive. The absence of a mask is an additional advantage given the never-ending increase in mask prices, especially for small volume production. The disadvantage of this technique is that pattern printing can only be achieved sequentially (the electron beam writes in the resist pixel after pixel), which does not allow high enough productivity for mass production. In addition, e-beam can no longer claim superiority in terms of resolution and alignment precision, because of the continuous progress of optical lithography. However, new projects are being developed, among which is the idea of massively multiplying the number of independently controlled beams (tens of thousands of beams are mentioned): productivity would then increase significantly, with the prospect of application to small volume production. Beyond this application, electron beam lithography remains a preferred tool for research activities, combining flexibility, dimensional control and affordable price. It can also be used to precisely repair circuits (or to print specific patterns on demand), using either an electron or an ion beam.
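A back-of-envelope calculation makes the throughput argument concrete. Every number below (pixel size, per-beam pixel rate, beam count) is an assumption chosen for illustration, not a figure from the text:

```python
import math

# Assumed figures, for illustration only
wafer_radius_mm = 150          # a 300 mm production wafer
pixel_nm = 20                  # assumed writing pixel size
pixel_rate_hz = 100e6          # assumed 100 MHz pixel rate per beam
n_beams = 10_000               # "tens of thousands of beams"

# Total number of pixels to write over the whole wafer
wafer_area_nm2 = math.pi * (wafer_radius_mm * 1e6) ** 2
pixels = wafer_area_nm2 / pixel_nm ** 2

hours_single = pixels / pixel_rate_hz / 3600
minutes_multi = hours_single * 60 / n_beams
print(f"{hours_single:.0f} h with one beam, {minutes_multi:.1f} min with {n_beams} beams")
```

Under these assumptions a single beam needs hundreds of hours per wafer, hopeless for production, while ten thousand parallel beams bring the write time down to minutes, which is why massively multi-beam architectures are the form in which e-beam hopes to reach small volume manufacturing.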
Other alternative techniques offer interesting prospects for precise applications:
– NanoImprint lithography, similar to the techniques used to fabricate CDs or DVDs from a master. This enables nanoscale resolutions to be achieved, and could emerge as a contender technology if there were only one lithographic level. It has also been shown that this technique could be used to print 3D patterns. The stacking of dozens of layers in integrated circuits is still to be demonstrated industrially, in particular in terms of alignment precision and defect density due to fabrication.
– Near-field lithography is still the perfect tool when aiming for ultimate resolution (potentially positioning atoms one by one). It currently suffers from the same intrinsic limitations as electron beam lithography (low productivity), as well as from delicate adjustment when reaching ultimate resolutions, but it could open up real prospects with tip arrays of the Millipede type.
– X-ray lithography was, for a long period after the major efforts of the 1980s, not considered adequate to become an industrial technique. The weakness of sources (short of using synchrotrons, which are huge experimental facilities), the difficulty of fabricating transparent masks and the absence of reduction optics have heavily handicapped this technique. However, it remains useful for specific applications (such as the LIGA technique1), given its great depth of field, which can be exploited in microsystems.
A special note should be made about self-organizing techniques. These rely on a simple fact: nature seems able to generate complex structures from apparently simple reactions. More specifically, local interactions can induce unexpected, even complex, emergent behaviors: this is called self-organization. Convincing examples of periodic structures generated by these techniques regularly appear in the scientific literature; however, it is hard to see how to exploit them to produce future low-cost microprocessors. Thus, two directions now exist:
– the use of these phenomena to locally improve process quality. For example, the use of resists based on copolymers could help reduce the line-edge roughness of lithographic patterns; and
– the notion of “directed self-assembly” or “templated self-assembly”, which is the most important direction for more complex structures. This involves defining and implementing boundary conditions which, by exploiting local self-organization forces, can generate the desired complex structures.
Finally, it is important to remember that the fabrication cost aspect of these emerging technologies remains completely speculative, since the technical solutions to be implemented on an industrial scale are still unknown.
This focus on ultimate resolution as the connecting thread of this book should not hide other technical elements that are also critical to lithography's success. Thus, popular literature often forgets that the capacity to superimpose two patterns accurately contributes greatly to the capacity to integrate many components in a circuit. Indeed, if patterns are misaligned, an area around each pattern has to be freed to ensure the functionality of the circuit, thus reducing the integration density (Figure I.2). Achieving alignment with a precision equal to a fraction of the minimum pattern size (a few nm), and measuring it, represents a challenge that lithography has so far been able to meet.
The functionality of a circuit depends on the precision with which the patterns on the wafer are printed. Metrology is a key element in mastering the production yield, while the demands regarding precision, information integrity and measurement speed keep growing. Previously, optical microscopy techniques were enough to measure, in a relative way, the two critical parameters of a lithographic step, namely the dimension of the pattern and its alignment relative to the underlying layers. As dimensions have decreased, standard optical techniques have been replaced by different approaches:
– the use of the scanning electron microscope (and more recently near-field techniques) enabled a natural extension to the smallest dimensions;
– light scattering from periodic patterns (scatterometry) gives access to more complete information on the dimensions and shape of the patterns, even though the interpretation of the results remains uncertain. A move towards shorter wavelengths (for example small-angle X-ray scattering, SAXS) opens up new perspectives (as well as some advantages, for example substrate transparency).
However, the challenges to be met keep increasing. A relative measurement is no longer sufficient to guarantee a circuit's performance, and absolute metrology at the nanometer scale remains an open problem. In addition, measuring the 3D shape of the pattern is increasingly essential, at least for mass production, even if the techniques used are still in their embryonic stages. Finally, proximity effects between patterns make the measurement indicators less representative of the complexity of a circuit: the metrology of a statistical collection of meaningful objects in a complex circuit is a field of research that is still wide open.
It is important to mention a technical field which, even if not part of lithography in the strictest sense, is to a large extent connected to it: the measurement of physical defects in a production process. Indeed, two different aspects of the analysis and measurement of defectivity are interesting:
– For defects with an identified physical signature, techniques similar to those of lithography can be applied, since the task consists of acquiring an image with optical techniques (in the broad sense, including charged particle beams) and processing it in order to extract meaningful information.
– Lithography is unique in that, if a defect is detected during this step, it is usually possible to rework the wafer and thus avoid the defect being permanently etched into the circuit.
In conclusion, lithography has undergone several decades of unimaginable progress, by-passing presumed physical limits thanks to the ingenuity of microelectronics researchers and engineers. Even if questions arise about the economic viability of decreasing dimensions at all costs, major steps forward are expected in the coming years, whether in terms of the resolution achieved, the integration density or the capacity to produce complex structures cheaply.
1 Introduction written by Michel BRILLOUËT.
1 LIGA is a German acronym for Lithographie, Galvanoformung, Abformung (Lithography, Electroplating, Molding).
The invention of X-ray proximity lithography [SPE 72] dates back to the early 1970s, when the declared objective was to overcome the resolution of the lithographic techniques then employed in the semiconductor industry. At that time, UV projection lithography was the leading technology, having reached a resolution on the scale of one micrometer [WIL 29]. Nevertheless, a very problematic future was forecast for UV lithography, given the expected requirements of the microelectronics industry to achieve sizes as small as 250 nm. In fact, the physical barrier represented by diffraction was believed to be insuperable. Therefore, the quite obvious idea of using radiation of shorter wavelengths for exposures was seen as the only viable option to keep pace with Moore's law [MOO 65] and the semiconductor industry roadmaps [ITRS] for device miniaturization.
However, shifting to shorter wavelengths, down to the region of Extreme UV (EUV) which extends between approximately 30 and 250 eV, raises new problems. One fundamental problem is represented by the low transparency of most materials. At these photon energies, the radiation is so strongly absorbed by any dense material that it is difficult or even impossible to find suitable materials to be used as transparent substrates for photomasks and for the sophisticated demagnification optics of the projection system.
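As a back-of-the-envelope illustration of this transparency problem, the transmitted fraction through a slab of material follows the Beer-Lambert law, T = exp(−t/Λ), where t is the thickness and Λ the attenuation length. The sketch below uses deliberately rough, assumed attenuation lengths (illustrative values, not tabulated data) simply to contrast the EUV and soft X-ray regimes:

```python
import math

def transmission(thickness_um, att_length_um):
    """Beer-Lambert transmission T = exp(-t / L_att) for a slab of
    thickness t and attenuation length L_att (both in micrometers)."""
    return math.exp(-thickness_um / att_length_um)

# Illustrative (assumed) attenuation lengths, to show the trend: at EUV
# photon energies (~100 eV) L_att is tens of nanometers in most solids,
# while at ~1 keV (soft X-rays) it can reach the micrometer scale.
membrane_thickness_um = 2.0                       # a typical membrane
print(transmission(membrane_thickness_um, 0.05))  # EUV-like: effectively opaque
print(transmission(membrane_thickness_um, 5.0))   # X-ray-like: useful fraction
```

With an EUV-like attenuation length of tens of nanometers, even a 2 µm membrane transmits essentially nothing, whereas at X-ray energies a substantial fraction of the beam survives; this is the quantitative core of the argument for moving to shorter wavelengths.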
One viable solution to the low transparency problem considered was that of exploring even shorter wavelengths. It has been known, since Röntgen's discoveries concerning the properties of X-rays, that the region of soft to hard X-rays offers a sufficient penetration depth in materials. This possibility of selecting materials with sufficient transparency is likely to have played an important role in convincing the pioneers of X-ray lithography to undertake the development of a new lithographic technique based on the use of electromagnetic radiation in the spectral region of X-rays. One might have expected that jumping from the spectral region of Deep UV ( λ~200 nm) to that of X-rays with at least two orders of magnitude shorter wavelengths ( λ~1 nm) would have ensured a “resolution reserve” for all the technological nodes ahead in the microelectronics industry. This fact in itself would have represented an enormous advantage for X-ray lithography, compared to DUV lithography which requires the complete renewal of fabrication facilities at every new technological node. It was this fact, therefore, that motivated a large initial effort devoted to establishing X-ray lithography (XRL) as the “next generation” lithography.
Between the 1970s and the end of the century, very intense activity was reported in the field of X-ray lithography, in particular in the development of exposure systems (steppers) [SEI 98, SIL 97], in the optimization of different protocols for mask fabrication [RAV 96, ROM 97, SHI 96, ROU 90, WAN 04], in the establishment of the theoretical background, and in the development of codes for quantitative analysis and simulation [AIG 98, GRI 04, ZUM 97, PRE 97]. In the meantime, mainly driven by their use in the study of the physics of matter, in chemistry and in biology, third generation synchrotron radiation sources [BIL 05] were reaching a high level of maturity as high brilliance sources of nearly collimated X-ray beams over a wide range of energies. These sources offer almost ideal performance for X-ray lithography and represented a major improvement with respect to X-ray tubes [MAR 95].
The way seemed paved for a brilliant future for XRL: it had all the crucial elements necessary to satisfy industrial requirements and accompany microelectronics for many years along the innovation steps forecast by Moore's law. Almost all the crucial elements were matched by XRL: all except one! No refractive lens exists in the X-ray region that is capable of focusing X-ray radiation with high efficiency, and this fact has a series of consequences that will become evident below.
Unlike UV projection lithography, where a system of lenses is used to project a demagnified image of the mask pattern, no equivalent scheme is possible for a lithographic technology based on X-rays. Fresnel lenses, also known as “zone plates”, can be used to focus X-rays but are limited in diameter (~1 mm) and have a multiplicity of focal spots, corresponding to different diffractive orders, with rather low efficiency (~10%, up to 30% in the best cases) [FEN 07]. Moreover, zone plates are highly chromatic optical devices, with a focal length that scales as λ-1. Using them to build a projection system would imply the use of monochromatic X-rays to ensure the formation of a demagnified pattern image, in focus, on the substrate. All these problems make X-ray radiation incompatible with the concept of projection lithography.
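The chromaticity of zone plates can be made concrete with the standard first-order relation f ≈ D·ΔrN/λ, where D is the plate diameter and ΔrN the width of the outermost zone. A minimal sketch, with assumed (purely illustrative) zone plate parameters:

```python
def zone_plate_focal_length(diameter_m, outer_zone_width_m, wavelength_m):
    """First-order focal length of a Fresnel zone plate:
    f = D * dr_N / lambda, so f scales as 1/lambda (strongly chromatic)."""
    return diameter_m * outer_zone_width_m / wavelength_m

# Assumed example geometry: 1 mm diameter, 100 nm outermost zone width.
D, dr = 1e-3, 100e-9
for wavelength_nm in (0.8, 1.0, 1.2):
    f = zone_plate_focal_length(D, dr, wavelength_nm * 1e-9)
    print(f"lambda = {wavelength_nm} nm  ->  f = {100 * f:.1f} cm")
```

A 20% change in wavelength shifts the focal plane by centimeters, which is why a zone-plate-based projection system would require monochromatic illumination.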
The development of XRL thus has to accept proximity as its working configuration, i.e. the mask is placed in close proximity to the surface of the target substrate, without interposed optical elements. In this case, a one-to-one replica of the mask pattern is obtained by a simple shadow printing process. The fact that XRL is a proximity lithography has two main consequences for mask fabrication and alignment. Firstly, in UV projection lithography, mask fabrication is simplified by the fact that the pattern has to be written on a larger scale, to pre-compensate for the rescaling by the demagnifying optics during exposure; in modern steppers, this relaxes the typical resolution at which a mask is written by a factor of 4 or 5. Secondly, the tolerances (mask distortion and placement errors, and alignment accuracy) are relaxed by the same factor. In X-ray lithography, the features are instead printed at the same scale as on the mask, which makes the lithographic steps for producing the mask much more challenging, and makes the entire process of pattern replication more prone to placement errors or pattern distortion in the mask, which are transferred onto the target substrate at the same scale.
In fact, during the development of XRL technology, several problems began to emerge, the most severe of which related mainly to the X-ray mask. The latter typically consists of a pattern in a strongly X-ray absorbing material (Au, W, Ta) supported by a thin membrane of SiC, diamond or SiNx that, for transparency reasons, is just a few micrometers thick and extends over areas of several square centimeters (in order to fit the entire pattern of a chip in a single undivided window). Given the small ratio between thickness and lateral dimension, the membranes are inevitably prone to distortion. In particular, the main problem is the distortion of the pattern caused by residual stress in the absorber deposited on the membrane. An additional source of distortion is thermal expansion [DZI 96], induced by the heat deposited in the mask by the absorbed radiation.
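The thermal contribution alone gives a feeling for how tight the problem is: a uniform temperature rise ΔT displaces features across a field of width L by ΔL = α·L·ΔT. A minimal sketch using representative, order-of-magnitude expansion coefficients (assumed values, not measured data for any particular membrane):

```python
def thermal_expansion_nm(alpha_per_K, field_mm, delta_T_K):
    """Pattern displacement dL = alpha * L * dT across a field of
    width L for a uniform temperature rise dT, returned in nm."""
    return alpha_per_K * (field_mm * 1e-3) * delta_T_K * 1e9

# Representative (order-of-magnitude, assumed) expansion coefficients:
for name, alpha in [("SiC", 4e-6), ("diamond", 1e-6)]:
    dL = thermal_expansion_nm(alpha, field_mm=20, delta_T_K=1.0)
    print(f"{name}: ~{dL:.0f} nm over a 20 mm field per 1 K")
```

Even a 1 K rise over a 20 mm field displaces features by tens of nanometers, so sub-kelvin thermal control of the mask is needed to stay within nanometer-scale placement budgets.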
Compensating for all sources of pattern distortion proved to be extremely challenging. In fact, the positioning and registration accuracy between two subsequent lithographic levels, over areas of several square centimeters, has to remain within the required margin of error, a requirement that, for today's microelectronics standards, is of the order of ~10 nm.
A further problem, again related to the mask, is its lifetime. Masks are required to last for months, reproducing the same pattern several thousand to several million times. The damage caused by protracted exposure to ionizing radiation, and the risks connected with handling and operation (typically the membrane has to be kept at a distance of 5-10 µm from the substrate in order to keep diffraction effects low), create a serious threat to the long-term survival of the mask. In fact, the membrane can easily be broken by a dust particle present on the wafer when the mask is moved towards it for exposure.
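The 5-10 µm gap quoted above is a compromise driven by diffraction: in proximity printing, the edge blur of a printed feature grows roughly as √(λg) for wavelength λ and mask-wafer gap g. A minimal sketch of this scaling:

```python
import math

def diffraction_blur_nm(wavelength_nm, gap_um):
    """Order-of-magnitude edge blur in proximity printing,
    delta ~ sqrt(lambda * g), for wavelength lambda and gap g."""
    return math.sqrt(wavelength_nm * 1e-9 * gap_um * 1e-6) * 1e9

for gap in (5, 10):
    print(f"gap {gap} um: blur ~ {diffraction_blur_nm(1.0, gap):.0f} nm")
```

At λ ~ 1 nm, a 10 µm gap already blurs edges by roughly 100 nm, which is why the fragile membrane must fly so close to the wafer when sub-100 nm features are targeted.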
For several years, XRL was evaluated as a next-generation lithography, with attempts to circumvent all its problems and limitations by implementing solutions of increasing complexity. In the meantime, UV projection lithography has continued to serve the purposes of the semiconductor industry, down to the 45 nm node (where ‘45 nm’ refers to the average half-pitch of a memory cell manufactured at that technology level), and will presumably continue down to the 32 nm node with the aid of a variety of additional techniques, such as larger lenses, effective wavelength reduction by liquid immersion and double patterning. Eventually, the discouraging battle with UV projection lithography was lost and XRL was abandoned as unfit for the purposes of the semiconductor industry.
However, this is just one, and probably not the final, chapter of the XRL story. As has often happened in the history of science and technology, when all attempts have been made to develop a technology for a very ambitious target, even if that target is not achieved, the technological solutions developed can sometimes be unexpectedly used to target new objectives in different fields of application. XRL failed to satisfy the requirements of the electronics industry. Nonetheless, XRL showed absolutely remarkable properties that make it uniquely suited to many specific purposes in nanodevice design and fabrication. XRL can certainly be considered the best lithographic technique with respect to penetration depth in thick resists and high aspect ratio patterns. It is a unique technique for generating certain types of 3D micro- and nanostructures by single or multiple tilted exposures on multi-layer resists. Already, in the fields of micromechanics and optics, LIGA technology, which is based on X-ray lithography with hard X-rays (λ ~ 0.1 nm), has been used effectively and offers advantages over competing technologies.
X-ray lithography may thus find niche applications arising from the peculiarities of this technique, avoiding competition with DUV lithography in the field of microelectronics but opening a new frontier of competition in new and innovative applications.
In this chapter we will try to provide a self-consistent description of the key physical concepts and the technology of X-ray lithography, followed by a series of examples and applications. The chapter is organized as follows. The principles of XRL are reviewed and the relevant physical phenomena discussed, in particular the absorption and propagation of X-rays, the equipment used, the role of diffraction in image formation, the interaction of X-rays with matter, and the mechanism of exposure of a resist by X-rays. The fabrication of X-ray masks is described in some detail. Aware that the opportunities for X-ray lithography to be used in the mass production of integrated circuits by the electronics industry have almost vanished, the main focus of the final section is devoted to those applications for which X-ray lithography appears to offer important competitive advantages over all other lithographic technologies. Of particular interest is the technical potential that XRL offers in the fields of micromachining, microfluidics and 3D nanopatterning.
X-ray lithography (XRL) belongs to the class of parallel lithographic techniques, along with UV, deep UV and extreme UV lithography, nanoimprinting, micro contact printing lithography, casting, injection molding, and others. This means that the pattern cannot be originated, just replicated. All constituent points of the pattern are addressed at the same time, and the process is typically fast. However, the pattern has to be first encoded into an object (called a mask) and then transferred entirely in one single parallel step.
X-ray masks consist of absorbing patterns supported by a transparent mask-carrier, which has a weak absorption of X-rays in the range of photon energies required for exposure. They are typically made by electron beam lithography (EBL) and auxiliary patterning techniques. The process of pattern replication consists of exposing the resist (a polymeric material whose dissolution rate in a liquid solvent, called a developer, changes under high energy irradiation) through the mask containing the pattern. Where the beam is not stopped by the absorbing material, it is transmitted by the membrane and exposes the resist deposited on the target substrate. The resist is defined as positive if the exposed part dissolves in the developer, and as negative if it crosslinks upon exposure and the unexposed parts dissolve in the developer. In both cases, after development, the resist exhibits, to a first approximation, the same geometrical features as the original pattern on the mask (see Figure 1.1).
When feature sizes approach the 100 nm scale, highly spatially coherent X-ray radiation, subjected to phase and amplitude modulation by the high resolution features on the mask, produces a diffracted field that varies as it propagates across the gap between the mask and the target substrate. Lithographic structures of higher complexity, which are more difficult to explain quantitatively, can therefore be generated.