
About the Book


As a young man, Ed Catmull had a dream: to make the world’s first computer-animated movie. He nurtured that dream first as a Ph.D. student at the University of Utah, where many computer science pioneers got their start, and then forged an early partnership with George Lucas that led, indirectly, to his founding Pixar with Steve Jobs and John Lasseter in 1986. Nine years later and against all odds, Toy Story was released, changing animation forever.

Since then, Pixar has dominated the world of animation, producing such beloved films as Monsters, Inc., Finding Nemo, The Incredibles, Up, and WALL-E, which have gone on to set box-office records and garner twenty-seven Academy Awards. The joyousness of the storytelling, the inventive plots, the emotional authenticity – in some ways, Pixar movies are an object lesson in what creativity really is. Now, in this book, Catmull reveals the ideals and techniques, honed over years, that have made Pixar so widely admired – and so profitable.

Creativity, Inc. is a book for managers who want to lead their employees to new heights, a manual for anyone who strives for originality, and the first-ever all-access trip into the nerve center of Pixar Animation Studios – into the story meetings, the post mortems, and the ‘Braintrust’ sessions where art is born. It is, at heart, a book about how to build and sustain a creative culture – but it is also, as Pixar co-founder and president Ed Catmull writes, ‘an expression of the ideas that I believe make the best in us possible’.

For Steve

CONTENTS


Cover

About the Book

Title Page

Dedication

Introduction: Lost and Found

PART I: GETTING STARTED

Chapter 1: Animated

Chapter 2: Pixar Is Born

Chapter 3: A Defining Goal

Chapter 4: Establishing Pixar’s Identity

PART II: PROTECTING THE NEW

Chapter 5: Honesty and Candor

Chapter 6: Fear and Failure

Chapter 7: The Hungry Beast and the Ugly Baby

Chapter 8: Change and Randomness

Chapter 9: The Hidden

PART III: BUILDING AND SUSTAINING

Chapter 10: Broadening Our View

Chapter 11: The Unmade Future

PART IV: TESTING WHAT WE KNOW

Chapter 12: A New Challenge

Chapter 13: Notes Day

Afterword: The Steve We Knew

Starting Points: Thoughts for Managing a Creative Culture

Picture Section

Acknowledgments

Index

About the Authors

Copyright

INTRODUCTION: LOST AND FOUND


EVERY MORNING, AS I walk into Pixar Animation Studios—past the twenty-foot-high sculpture of Luxo Jr., our friendly desk lamp mascot, through the double doors and into a spectacular glass-ceilinged atrium where a man-sized Buzz Lightyear and Woody, made entirely of Lego bricks, stand at attention, up the stairs past sketches and paintings of the characters that have populated our fourteen films—I am struck by the unique culture that defines this place. Although I’ve made this walk thousands of times, it never gets old.

Built on the site of a former cannery, Pixar’s fifteen-acre campus, just over the Bay Bridge from San Francisco, was designed, inside and out, by Steve Jobs. (Its name, in fact, is The Steve Jobs Building.) It has well-thought-out patterns of entry and egress that encourage people to mingle, meet, and communicate. Outside, there is a soccer field, a volleyball court, a swimming pool, and a six-hundred-seat amphitheater. Sometimes visitors misunderstand the place, thinking it’s fancy for fancy’s sake. What they miss is that the unifying idea for this building isn’t luxury but community. Steve wanted the building to support our work by enhancing our ability to collaborate.

The animators who work here are free to—no, encouraged to—decorate their work spaces in whatever style they wish. They spend their days inside pink dollhouses whose ceilings are hung with miniature chandeliers, tiki huts made of real bamboo, and castles whose meticulously painted, fifteen-foot-high styrofoam turrets appear to be carved from stone. Annual company traditions include “Pixarpalooza,” where our in-house rock bands battle for dominance, shredding their hearts out on stages we erect on our front lawn.

The point is, we value self-expression here. This tends to make a big impression on visitors, who often tell me that the experience of walking into Pixar leaves them feeling a little wistful, like something is missing in their work lives—a palpable energy, a feeling of collaboration and unfettered creativity, a sense, not to be corny, of possibility. I respond by telling them that the feeling they are picking up on—call it exuberance or irreverence, even whimsy—is integral to our success.

But it’s not what makes Pixar special.

What makes Pixar special is that we acknowledge we will always have problems, many of them hidden from our view; that we work hard to uncover these problems, even if doing so means making ourselves uncomfortable; and that, when we come across a problem, we marshal all of our energies to solve it. This, more than any elaborate party or turreted workstation, is why I love coming to work in the morning. It is what motivates me and gives me a definite sense of mission.

There was a time, however, when my purpose here felt a lot less clear to me. And it might surprise you when I tell you when.

ON NOVEMBER 22, 1995, Toy Story debuted in America’s theaters and became the largest Thanksgiving opening in history. Critics heralded it as “inventive” (Time), “brilliant” and “exultantly witty” (The New York Times), and “visionary” (Chicago Sun-Times). To find a movie worthy of comparison, wrote The Washington Post, one had to go back to 1939, to The Wizard of Oz.

The making of Toy Story—the first feature film to be animated entirely on a computer—had required every ounce of our tenacity, artistry, technical wizardry, and endurance. The hundred or so men and women who produced it had weathered countless ups and downs as well as the ever-present, hair-raising knowledge that our survival depended on this 80-minute experiment. For five straight years, we’d fought to do Toy Story our way. We’d resisted the advice of Disney executives who believed that since they’d had such success with musicals, we too should fill our movie with songs. We’d rebooted the story completely, more than once, to make sure it rang true. We’d worked nights, weekends, and holidays—mostly without complaint. Despite being novice filmmakers at a fledgling studio in dire financial straits, we had put our faith in a simple idea: If we made something that we wanted to see, others would want to see it, too. For so long, it felt like we had been pushing that rock up the hill, trying to do the impossible. There were plenty of moments when the future of Pixar was in doubt. Now, we were suddenly being held up as an example of what could happen when artists trusted their guts.

Toy Story went on to become the top-grossing film of the year and would earn $358 million worldwide. But it wasn’t just the numbers that made us proud; money, after all, is just one measure of a thriving company and usually not the most meaningful one. No, what I found gratifying was what we’d created. Review after review focused on the film’s moving plotline and its rich, three-dimensional characters—only briefly mentioning, almost as an aside, that it had been made on a computer. While there was much innovation that enabled our work, we had not let the technology overwhelm our real purpose: making a great film.

On a personal level, Toy Story represented the fulfillment of a goal I had pursued for more than two decades and had dreamed about since I was a boy. Growing up in the 1950s, I had yearned to be a Disney animator but had no idea how to go about it. Instinctively, I realize now, I embraced computer graphics—then a new field—as a means of pursuing that dream. If I couldn’t animate by hand, there had to be another way. In graduate school, I’d quietly set a goal of making the first computer-animated feature film, and I’d worked tirelessly for twenty years to accomplish it.

Now, the goal that had been a driving force in my life had been reached, and there was an enormous sense of relief and exhilaration—at least at first. In the wake of Toy Story’s release, we took the company public, raising the kind of money that would ensure our future as an independent production house, and began work on two new feature-length projects, A Bug’s Life and Toy Story 2. Everything was going our way, and yet I felt adrift. In fulfilling a goal, I had lost some essential framework. Is this really what I want to do? I began asking myself. The doubts surprised and confused me, and I kept them to myself. I had served as Pixar’s president for most of the company’s existence. I loved the place and everything that it stood for. Still, I couldn’t deny that achieving the goal that had defined my professional life had left me without one. Is this all there is? I wondered. Is it time for a new challenge?

It wasn’t that I thought Pixar had “arrived” or that my work was done. I knew there were major obstacles in front of us. The company was growing quickly, with lots of shareholders to please, and we were racing to put two new films into production. There was, in short, plenty to occupy my working hours. But my internal sense of purpose—the thing that had led me to sleep on the floor of the computer lab in graduate school just to get more hours on the mainframe, that kept me awake at night, as a kid, solving puzzles in my head, that fueled my every workday—had gone missing. I’d spent two decades building a train and laying its track. Now, the thought of merely driving it struck me as a far less interesting task. Was making one film after another enough to engage me? I wondered. What would be my organizing principle now?

It would take a full year for the answer to emerge.

FROM THE START, my professional life seemed destined to have one foot in Silicon Valley and the other in Hollywood. I first got into the film business in 1979 when, flush from the success of Star Wars, George Lucas hired me to help him bring high technology into the film industry. But he wasn’t based in Los Angeles. Instead, he’d founded his company, Lucasfilm, at the north end of the San Francisco Bay. Our offices were located in San Rafael, about an hour’s drive from Palo Alto, the heart of Silicon Valley—a moniker that was just gaining traction then, as the semiconductor and computer industries took off. That proximity gave me a front-row seat from which to observe the many emerging hardware and software companies—not to mention the growing venture capital industry, which, in the course of a few years, would come to dominate Silicon Valley from its perch on Sand Hill Road.

I couldn’t have arrived at a more dynamic and volatile time. I watched as many startups burned bright with success—and then flamed out. My mandate at Lucasfilm—to merge moviemaking with technology—meant that I rubbed shoulders with the leaders of places like Sun Microsystems and Silicon Graphics and Cray Computer, several of whom I came to know well. I was first and foremost a scientist then, not a manager, so I watched these guys closely, hoping to learn from the trajectories their companies followed. Gradually, a pattern began to emerge: Someone had a creative idea, obtained funding, brought on a lot of smart people, and developed and sold a product that got a boatload of attention. That initial success begat more success, luring the best engineers and attracting customers who had interesting and high-profile problems to solve. As these companies grew, much was written about their paradigm-shifting approaches, and when their CEOs inevitably landed on the cover of Fortune magazine, they were heralded as “Titans of the New.” I especially remember the confidence. The leaders of these companies radiated supreme confidence. Surely, they could only have reached this apex by being very, very good.

But then those companies did something stupid—not just stupid-in-retrospect, but obvious-at-the-time stupid. I wanted to understand why. What was causing smart people to make decisions that sent their companies off the rails? I didn’t doubt that they believed they were doing the right thing, but something was blinding them—and keeping them from seeing the problems that threatened to upend them. As a result, their companies expanded like bubbles, then burst. What interested me was not that companies rose and fell or that the landscape continually shifted as technology changed but that the leaders of these companies seemed so focused on the competition that they never developed any deep introspection about other destructive forces that were at work.

Over the years, as Pixar struggled to find its way—first selling hardware, then software, then making animated short films and advertisements—I asked myself: If Pixar is ever successful, will we do something stupid, too? Can paying careful attention to the missteps of others help us be more alert to our own? Or is there something about becoming a leader that makes you blind to the things that threaten the well-being of your enterprise? Clearly, something was causing a dangerous disconnect at many smart, creative companies. What, exactly, was a mystery—and one I was determined to figure out.

In the difficult year after Toy Story’s debut, I came to realize that trying to solve this mystery would be my next challenge. My desire to protect Pixar from the forces that ruin so many businesses gave me renewed focus. I began to see my role as a leader more clearly. I would devote myself to learning how to build not just a successful company but a sustainable creative culture. As I turned my attention from solving technical problems to engaging with the philosophy of sound management, I was excited once again—and sure that our second act could be as exhilarating as our first.

IT HAS ALWAYS been my goal to create a culture at Pixar that will outlast its founding leaders—Steve, John Lasseter, and me. But it is also my goal to share our underlying philosophies with other leaders and, frankly, with anyone who wrestles with the competing—but necessarily complementary—forces of art and commerce. What you’re holding in your hands, then, is an attempt to put down on paper my best ideas about how we built the culture that is the bedrock of this place.

This book isn’t just for Pixar people, entertainment executives, or animators. It is for anyone who wants to work in an environment that fosters creativity and problem solving. My belief is that good leadership can help creative people stay on the path to excellence no matter what business they’re in. My aim at Pixar—and at Disney Animation, which my longtime partner John Lasseter and I have also led since the Walt Disney Company acquired Pixar in 2006—has been to enable our people to do their best work. We start from the presumption that our people are talented and want to contribute. We accept that, without meaning to, our company is stifling that talent in myriad unseen ways. Finally, we try to identify those impediments and fix them.

I’ve spent nearly forty years thinking about how to help smart, ambitious people work effectively with one another. The way I see it, my job as a manager is to create a fertile environment, keep it healthy, and watch for the things that undermine it. I believe, to my core, that everybody has the potential to be creative—whatever form that creativity takes—and that to encourage such development is a noble thing. More interesting to me, though, are the blocks that get in the way, often without us noticing, and hinder the creativity that resides within any thriving company.

The thesis of this book is that there are many blocks to creativity, but there are active steps we can take to protect the creative process. In the coming pages, I will discuss many of the steps we follow at Pixar, but the most compelling mechanisms to me are those that deal with uncertainty, instability, lack of candor, and the things we cannot see. I believe the best managers acknowledge and make room for what they do not know—not just because humility is a virtue but because until one adopts that mindset, the most striking breakthroughs cannot occur. I believe that managers must loosen the controls, not tighten them. They must accept risk; they must trust the people they work with and strive to clear the path for them; and always, they must pay attention to and engage with anything that creates fear. Moreover, successful leaders embrace the reality that their models may be wrong or incomplete. Only when we admit what we don’t know can we ever hope to learn it.

This book is organized into four sections—Getting Started, Protecting the New, Building and Sustaining, and Testing What We Know. It is no memoir, but in order to understand the mistakes we made, the lessons we learned, and the ways we learned from them, it necessarily delves at times into my own history and that of Pixar. I have much to say about enabling groups to create something meaningful together and then protecting them from the destructive forces that loom even in the strongest companies. My hope is that by relating my search for the sources of confusion and delusion within Pixar and Disney Animation, I can help others avoid the pitfalls that impede and sometimes ruin businesses of all kinds. The key for me—what has kept me motivated in the nineteen years since Toy Story debuted—has been the realization that identifying these destructive forces isn’t merely a philosophical exercise. It is a crucial, central mission. In the wake of our earliest success, Pixar needed its leaders to sit up and pay attention. And that need for vigilance never goes away. This book, then, is about the ongoing work of paying attention—of leading by being self-aware, as managers and as companies. It is an expression of the ideas that I believe make the best in us possible.

PART I


GETTING STARTED

CHAPTER 1


ANIMATED

FOR THIRTEEN YEARS we had a table in the large conference room at Pixar that we call West One. Though it was beautiful, I grew to hate this table. It was long and skinny, like one of those things you’d see in a comedy sketch about an old wealthy couple that sits down for dinner—one person at either end, a candelabra in the middle—and has to shout to make conversation. The table had been chosen by a designer Steve Jobs liked, and it was elegant, all right—but it impeded our work.

We’d hold regular meetings about our movies around that table—thirty of us facing off in two long lines, often with more people seated along the walls—and everyone was so spread out that it was difficult to communicate. For those unlucky enough to be seated at the far ends, ideas didn’t flow because it was nearly impossible to make eye contact without craning your neck. Moreover, because it was important that the director and producer of the film in question be able to hear what everyone was saying, they had to be placed at the center of the table. So did Pixar’s creative leaders: John Lasseter, Pixar’s chief creative officer, and me, and a handful of our most experienced directors, producers, and writers. To ensure that these people were always seated together, someone began making place cards. We might as well have been at a formal dinner party.

When it comes to creative inspiration, job titles and hierarchy are meaningless. That’s what I believe. But unwittingly, we were allowing this table—and the resulting place card ritual—to send a different message. The closer you were seated to the middle of the table, it implied, the more important—the more central—you must be. And the farther away, the less likely you were to speak up—your distance from the heart of the conversation made participating feel intrusive. If the table was crowded, as it often was, still more people would sit in chairs around the edges of the room, creating yet a third tier of participants (those at the center of the table, those at the ends, and those not at the table at all). Without intending to, we’d created an obstacle that discouraged people from jumping in.

Over the course of a decade, we held countless meetings around this table in this way—completely unaware of how doing so undermined our own core principles. Why were we blind to this? Because the seating arrangements and place cards were designed for the convenience of the leaders, including me. Sincerely believing that we were in an inclusive meeting, we saw nothing amiss because we didn’t feel excluded. Those not sitting at the center of the table, meanwhile, saw quite clearly how it established a pecking order but presumed that we—the leaders—had intended that outcome. Who were they, then, to complain?

It wasn’t until we happened to have a meeting in a smaller room with a square table that John and I realized what was wrong. Sitting around that table, the interplay was better, the exchange of ideas more free-flowing, the eye contact automatic. Every person there, no matter their job title, felt free to speak up. This was not only what we wanted, it was a fundamental Pixar belief: Unhindered communication was key, no matter what your position. At our long, skinny table, comfortable in our middle seats, we had utterly failed to recognize that we were behaving contrary to that basic tenet. Over time, we’d fallen into a trap. Even though we were conscious that a room’s dynamics are critical to any good discussion, even though we believed that we were constantly on the lookout for problems, our vantage point blinded us to what was right before our eyes.

Emboldened by this new insight, I went to our facilities department. “Please,” I said, “I don’t care how you do it, but get that table out of there.” I wanted something that could be arranged into a more intimate square, so people could address each other directly and not feel like they didn’t matter. A few days later, as a critical meeting on an upcoming movie approached, our new table was installed, solving the problem.

Still, interestingly, there were remnants of that problem that did not immediately vanish just because we’d solved it. For example, the next time I walked into West One, I saw the brand-new table, arranged—as requested—in a more intimate square that made it possible for more people to interact at once. But the table was adorned with the same old place cards! While we’d fixed the key problem that had made place cards seem necessary, the cards themselves had become a tradition that would continue until we specifically dismantled it. This wasn’t as troubling an issue as the table itself, but it was something we had to address because cards implied hierarchy, and that was precisely what we were trying to avoid. When Andrew Stanton, one of our directors, entered the meeting room that morning, he grabbed several place cards and began randomly moving them around, narrating as he went. “We don’t need these anymore!” he said in a way that everyone in the room grasped. Only then did we succeed in eliminating this ancillary problem.

This is the nature of management. Decisions are made, usually for good reasons, which in turn prompt other decisions. So when problems arise—and they always do—disentangling them is not as simple as correcting the original error. Often, finding a solution is a multi-step endeavor. There is the problem you know you are trying to solve—think of that as an oak tree—and then there are all the other problems—think of these as saplings—that sprouted from the acorns that fell around it. And these problems remain after you cut the oak tree down.

Even after all these years, I’m often surprised to find problems that have existed right in front of me, in plain sight. For me, the key to solving these problems is finding ways to see what’s working and what isn’t, which sounds a lot simpler than it is. Pixar today is managed according to this principle, but, in a way, I’ve been searching all my life for better ways of seeing. That search began decades before Pixar even existed.

WHEN I WAS a kid, I used to plunk myself down on the living room floor of my family’s modest Salt Lake City home a few minutes before 7 P.M. every Sunday and wait for Walt Disney. Specifically, I’d wait for him to appear on our black-and-white RCA with its tiny 12-inch screen. Even from a dozen feet away—the accepted wisdom at the time was that viewers should put one foot between them and the TV for every inch of screen—I was transfixed by what I saw.

Each week, Walt Disney himself opened the broadcast of The Wonderful World of Disney. Standing before me in suit and tie, like a kindly neighbor, he would demystify the Disney magic. He’d explain the use of synchronized sound in Steamboat Willie or talk about the importance of music in Fantasia. He always went out of his way to give credit to his forebears, the men—and, at this point, they were all men—who’d done the pioneering work upon which he was building his empire. He’d introduce the television audience to trailblazers such as Max Fleischer, of Koko the Clown and Betty Boop fame, and Winsor McCay, who made Gertie the Dinosaur—the first animated film to feature a character that expressed emotion—in 1914. He’d gather a group of his animators, colorists, and storyboard artists to explain how they made Mickey Mouse and Donald Duck come to life. Each week, Disney created a made-up world, used cutting-edge technology to enable it, and then told us how he’d done it.

Walt Disney was one of my two boyhood idols. The other was Albert Einstein. To me, even at a young age, they represented the two poles of creativity. Disney was all about inventing the new. He brought things into being—both artistically and technologically—that did not exist before. Einstein, by contrast, was a master of explaining that which already was. I read every Einstein biography I could get my hands on as well as a little book he wrote on his theory of relativity. I loved how the concepts he developed forced people to change their approach to physics and matter, to view the universe from a different perspective. Wild-haired and iconic, Einstein dared to bend the implications of what we thought we knew. He solved the biggest puzzles of all and, in doing so, changed our understanding of reality.

Both Einstein and Disney inspired me, but Disney affected me more because of his weekly visits to my family’s living room. “When you wish upon a star, makes no difference who you are,” his TV show’s theme song would announce as a baritone-voiced narrator promised: “Each week, as you enter this timeless land, one of these many worlds will open to you ….” Then the narrator would tick them off: Frontierland (“tall tales and true from the legendary past”), Tomorrowland (“the promise of things to come”), Adventureland (“the wonder world of nature’s own realm”), and Fantasyland (“the happiest kingdom of them all”). I loved the idea that animation could take me places I’d never been. But the land I most wanted to learn about was the one occupied by the innovators at Disney who made these animated films.

Between 1950 and 1955, Disney made three movies we consider classics today: Cinderella, Peter Pan, and Lady and the Tramp. More than half a century later, we all remember the glass slipper, the Island of Lost Boys, and that scene where the cocker spaniel and the mutt slurp spaghetti. But few grasp how technically sophisticated these movies were. Disney’s animators were at the forefront of applied technology; instead of merely using existing methods, they were inventing ones of their own. They had to develop the tools to perfect sound and color, to use blue screen matting and multiplane cameras and xerography. Every time some technological breakthrough occurred, Walt Disney incorporated it and then talked about it on his show in a way that highlighted the relationship between technology and art. I was too young to realize such a synergy was groundbreaking. To me, it just made sense that they belonged together.

Watching Disney one Sunday evening in April of 1956, I experienced something that would define my professional life. What exactly it was is difficult to describe except to say that I felt something fall into place inside my head. That night’s episode was called “Where Do the Stories Come From?” and Disney kicked it off by praising his animators’ knack for turning everyday occurrences into cartoons. That night, though, it wasn’t Disney’s explanation that pulled me in but what was happening on the screen as he spoke. An artist was drawing Donald Duck, giving him a jaunty costume and a bouquet of flowers and a box of candy with which to woo Daisy. Then, as the artist’s pencil moved around the page, Donald came to life, putting up his dukes to square off with the pencil lead, then raising his chin to allow the artist to give him a bow tie.

The definition of superb animation is that each character on the screen makes you believe it is a thinking being. Whether it’s a T-Rex or a slinky dog or a desk lamp, if viewers sense not just movement but intention—or, put another way, emotion—then the animator has done his or her job. It’s not just lines on paper anymore; it’s a living, feeling entity. This is what I experienced that night, for the first time, as I watched Donald leap off the page. The transformation from a static line drawing to a fully dimensional, animated image was sleight of hand, nothing more, but the mystery of how it was done—not just the technical process but the way the art was imbued with such emotion—was the most interesting problem I’d ever considered. I wanted to climb through the TV screen and be part of this world.

THE MID-1950S AND early 1960s were, of course, a time of great prosperity and industry in the United States. Growing up in Utah in a tight-knit Mormon community, my four younger brothers and sisters and I felt that anything was possible. Because the adults we knew had all lived through the Depression, World War II, and then the Korean War, this period felt to them like the calm after a thunderstorm.

I remember the optimistic energy—an eagerness to move forward that was enabled and supported by a wealth of emerging technologies. It was boom time in America, with manufacturing and home construction at an all-time high. Banks were offering loans and credit, which meant more and more people could own a new TV, house, or Cadillac. There were amazing new appliances like disposals that ate your garbage and machines that washed your dishes, although I certainly did my share of cleaning them by hand. The first organ transplants were performed in 1954; the first polio vaccine came a year later; in 1956, the term artificial intelligence entered the lexicon. The future, it seemed, was already here.

Then, when I was twelve, the Soviets launched the first artificial satellite—Sputnik 1—into earth’s orbit. This was huge news, not just in the scientific and political realms but in my sixth-grade classroom, where the morning routine was interrupted by a visit from the principal, whose grim expression told us that our lives had changed forever. Since we’d been taught that the Communists were the enemy and that nuclear war could be waged at the touch of a button, the fact that they’d beaten us into space seemed pretty scary—proof that they had the upper hand.

The United States government’s response to being bested was to create something called ARPA, or the Advanced Research Projects Agency. Though it was housed within the Defense Department, its mission was ostensibly peaceful: to support scientific researchers in America’s universities in the hopes of preventing what it termed “technological surprise.” By sponsoring our best minds, the architects of ARPA believed, we’d come up with better answers. Looking back, I still admire that enlightened reaction to a serious threat: We’ll just have to get smarter. ARPA would have a profound effect on America, leading directly to the computer revolution and the Internet, among countless other innovations. There was a sense that big things were happening in America, with much more to come. Life was full of possibility.

Still, while my family was middle-class, our outlook was shaped by my father’s upbringing. Not that he talked about it much. Earl Catmull, the son of an Idaho dirt farmer, was one of fourteen kids, five of whom had died as infants. His mother, raised by Mormon pioneers who made a meager living panning for gold in the Snake River in Idaho, didn’t attend school until she was eleven. My father was the first in his family ever to go to college, paying his own way by working several jobs. During my childhood, he taught math during the school year and built houses during the summers. He built our house from the ground up. While he never explicitly said that education was paramount, my siblings and I all knew we were expected to study hard and go to college.

I was a quiet, focused student in high school. An art teacher once told my parents I would often become so lost in my work that I wouldn’t hear the bell ring at the end of class; I’d be sitting there, at my desk, staring at an object—a vase, say, or a chair. Something about the act of committing that object to paper was completely engrossing—the way it necessitated seeing only what was there and shutting out the distraction of my ideas about chairs or vases and what they were supposed to look like. At home, I sent away for Jon Gnagy’s Learn to Draw art kits—which were advertised in the back of comic books—and the 1948 classic Animation, written and drawn by Preston Blair, the animator of the dancing hippos in Disney’s Fantasia. I bought a platen—the flat metal plate artists use to press paper against ink—and even built a plywood animation stand with a light under it. I made flipbooks—one was of a man whose legs turned into a unicycle—while nursing my first crush, Tinker Bell, who had won my heart in Peter Pan.

Nevertheless, it soon became clear to me that I would never be talented enough to join Disney Animation’s vaunted ranks. What’s more, I had no idea how one actually became an animator. There was no school for it that I knew of. As I finished high school, I realized I had a far better understanding of how one became a scientist. The route seemed easier to discern. Throughout my life, people have always smiled when I told them I switched from art to physics because it seems, to them, like such an incongruous leap. But my decision to pursue physics, and not art, would lead me, indirectly, to my true calling.

FOUR YEARS LATER, in 1969, I graduated from the University of Utah with two degrees, one in physics and the other in the emerging field of computer science. Applying to graduate school, my intention was to learn how to design computer languages. But soon after I matriculated, also at the U of U, I met a man who would encourage me to change course: one of the pioneers of interactive computer graphics, Ivan Sutherland.

The field of computer graphics—in essence, the making of digital pictures out of numbers, or data, that can be manipulated by a machine—was in its infancy then, but Professor Sutherland was already a legend. Early in his career, he had devised something called Sketchpad, an ingenious computer program that allowed figures to be drawn, copied, moved, rotated, or resized, all while retaining their basic properties. In 1968, he’d co-created what is widely believed to be the first virtual reality head-mounted display system. (The device was named The Sword of Damocles, after the Greek myth, because it was so heavy that in order to be worn by the person using it, it had to be suspended from a mechanical arm bolted to the ceiling.) Sutherland and Dave Evans, who was chair of the university’s computer science department, were magnets for bright students with diverse interests, and they led us with a light touch. Basically, they welcomed us to the program, gave us workspace and access to computers, and then let us pursue whatever turned us on. The result was a collaborative, supportive community so inspiring that I would later seek to replicate it at Pixar.

One of my classmates, Jim Clark, would go on to found Silicon Graphics and Netscape. Another, John Warnock, would co-found Adobe, known for Photoshop and the PDF file format, among other things. Still another, Alan Kay, would lead on a number of fronts, from object-oriented programming to “windowing” graphical user interfaces. In many respects, my fellow students were the most inspirational part of my university experience; this collegial, collaborative atmosphere was vital not just to my enjoyment of the program but also to the quality of the work that I did.

This tension between the individual’s personal creative contribution and the leverage of the group is a dynamic that exists in all creative environments, but this would be my first taste of it. On one end of the spectrum, I noticed, we had the genius who seemed to do amazing work on his or her own; on the other end, we had the group that excelled precisely because of its multiplicity of views. How, then, should we balance these two extremes? I wondered. I didn’t yet have a good mental model that would help me answer that, but I was developing a fierce desire to find one.

Much of the research being done at the U of U’s computer science department was funded by ARPA. As I’ve said, ARPA had been created in response to Sputnik, and one of its key organizing principles was that collaboration could lead to excellence. In fact, one of ARPA’s proudest achievements was linking universities with something they called “ARPANET,” which would eventually evolve into the Internet. The first four nodes on the ARPANET were at the Stanford Research Institute, UCLA, UC Santa Barbara, and the U of U, so I had a ringside seat from which to observe this grand experiment, and what I saw influenced me profoundly. ARPA’s mandate—to support smart people in a variety of areas—was carried out based on the unwavering presumption that researchers would try to do the right thing and, in ARPA’s view, overmanaging them was counterproductive. ARPA’s administrators did not hover over the shoulders of those of us working on the projects they funded, nor did they demand that our work have direct military applications. They simply trusted us to innovate.

This kind of trust gave me the freedom to tackle all sorts of complex problems, and I did so with gusto. Not only did I often sleep on the floor of the computer rooms to maximize time on the computer, but so did many of my fellow graduate students. We were young, driven by the sense that we were inventing the field from scratch—and that was exciting beyond words. For the first time, I saw a way to simultaneously create art and develop a technical understanding of how to create a new kind of imagery. Making pictures with a computer spoke to both sides of my brain. To be sure, the pictures that could be rendered on a computer were very crude in 1969, but the act of inventing new algorithms and seeing better pictures as a result was thrilling to me. In its own way, my childhood dream was reasserting itself.

At the age of twenty-six, I set a new goal: to develop a way to animate, not with a pencil but with a computer, and to make the images compelling and beautiful enough to use in the movies. Perhaps, I thought, I could become an animator after all.

IN THE SPRING of 1972, I spent ten weeks making my first short animated film—a digitized model of my left hand. My process combined old and new; again, like everyone in this fast-changing field, I was helping to invent the language. First I plunged my hand into a tub of plaster of Paris (forgetting, unfortunately, to coat it in Vaseline first, which meant I had to yank out every tiny hair on the back of my hand to get it free); then, once I had the mold, I filled it with more plaster to make a model of my hand; then, I took that model and covered it with 350 tiny interlocking triangles and polygons to create what looked like a net of black lines on its “skin.” You may not think that a curved surface could be built out of such flat, angular elements, but when you make them small enough, you can get pretty close.
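
That claim, that flat facets can approximate a curve as closely as you like, is easy to check numerically. Below is a minimal Python sketch (an illustration of the principle only, not the digitizing process described above): approximate a unit circle with n straight chords and measure the worst-case gap between the curve and its chords.

```python
import math

def max_gap(n):
    """Worst-case distance between a unit circle and an n-chord
    approximation. Each chord spans an angle of 2*pi/n, and the
    largest error occurs at the chord's midpoint: 1 - cos(pi/n)."""
    return 1 - math.cos(math.pi / n)

for n in (8, 32, 128, 512):
    print(f"{n:4d} chords -> max gap {max_gap(n):.6f}")
```

Doubling the number of chords cuts the gap roughly fourfold, which is why a few hundred small facets were enough to make a hand read as smoothly curved.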

I’d chosen this project because I was interested in rendering complex objects and curved surfaces—and I was looking for a challenge. At that time, computers weren’t great at showing flat objects, let alone curved ones. The mathematics of curved surfaces was not well developed, and computers had limited memory capability. At the U of U’s computer graphics department, where every one of us yearned to make computer-generated images look as if they were photographs of real objects, we had three driving goals: speed, realism, and the ability to depict curved surfaces. My film sought to address the latter two.

The human hand doesn’t have a single flat plane. And unlike a simpler curved surface—a ball, for example—it has many parts that act in opposition to one another, with a seemingly infinite number of resulting movements. The hand is an incredibly complex “object” to try to capture and translate into arrays of numbers. Given that most computer animation at the time consisted of rendering simple polygonal objects (cubes, pyramids), I had my work cut out for me.

Once I had drawn the triangles and polygons on my model, I measured the coordinates of each of their corners, then entered that data into a 3D animation program I’d written. That enabled me to display the many triangles and polygons that made up my virtual hand on a monitor. In its first incarnation, sharp edges could be seen at the seams where the polygons joined together. But later, thanks to “smooth shading”—a technique, developed by another graduate student, that diminished the appearance of those edges—the hand became more lifelike. The real challenge, though, was making it move.
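
The smooth-shading technique alluded to here is generally credited to Henri Gouraud, another Utah graduate student of that era. The core idea fits in a few lines: average the normals of the flat facets that meet at each vertex, then blend the resulting corner intensities across each triangle, so the eye no longer sees the seams. A minimal Python sketch under that reading (the function names are illustrative, and `vertices` is assumed to be an (n, 3) NumPy array):

```python
import numpy as np

def vertex_normals(vertices, faces):
    """Average the flat-facet normals around each shared vertex;
    shading computed from these blended normals hides polygon seams."""
    normals = np.zeros_like(vertices, dtype=float)
    for i0, i1, i2 in faces:
        face_n = np.cross(vertices[i1] - vertices[i0],
                          vertices[i2] - vertices[i0])
        for i in (i0, i1, i2):
            normals[i] += face_n
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths == 0.0, 1.0, lengths)

def shade_point(corner_normals, bary, light_dir):
    """Gouraud-style shading of one interior point: light each corner
    (a simple Lambert term), then mix with barycentric weights."""
    corner_intensity = corner_normals @ light_dir
    return float(np.clip(bary @ corner_intensity, 0.0, 1.0))
```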

Hand, which debuted at a computer science conference in 1973, caused a bit of a stir because no one had ever seen anything like it before. In it, my hand, which appears at first to be covered in a white net of polygons, begins to open and close, as if trying to make a fist. Then my hand’s surface becomes smoother, more like the real thing. There is a moment when my hand points directly at the viewer as if to say, “Yes, I’m talking to you.” Then, the camera goes inside the hand and takes a look around, aiming its lens inside the palm and up into each finger, a tricky bit of perspective that I liked because it could be depicted only via computer. Those four minutes of film had taken me more than sixty thousand minutes to complete.

Together with a digitized film that my friend Fred Parke made of his wife’s face around the same time, Hand represented the state of the art in computer animation for years after it was made. Snippets of both Fred’s and my films would be featured in the 1976 movie Futureworld, which—though mostly forgotten by moviegoers today—is still remembered by aficionados as the first full-length feature to use computer-generated animation.

PROFESSOR SUTHERLAND USED to say that he loved his graduate students at Utah because we didn’t know what was impossible. Neither, apparently, did he: He was among the first to believe that Hollywood movie execs would care a fig about what was happening in academia. To that end, he sought to create a formal exchange program with Disney, wherein the studio would send one of its animators to Utah to learn about new technologies in computer rendering, and the university would send a student to Disney Animation to learn more about how to tell stories.

In the spring of 1973, he sent me to Burbank to try to sell this idea to the Disney executives. It was a thrill for me to drive through the red brick gates and onto the Disney lot on my way to the original Animation Building, built in 1940 with a “Double H” floor plan personally supervised by Walt himself to ensure that as many rooms as possible had windows to let in natural light. While I’d studied this place—or what I could glimpse of it on our 12-inch RCA—walking into it was a little like stepping into the Parthenon for the first time. There, I met Frank Thomas and Ollie Johnston, two of Walt’s “Nine Old Men,” the group of legendary animators who had created so many of the characters in the Disney movies I loved, from Pinocchio to Peter Pan. At one point I was taken into the archives where all the original paper drawings from all the animated films were kept, with rack after rack after rack of the images that had fueled my imagination. I’d entered the Promised Land.

One thing was immediately clear. The people I met at Disney—one of whom, I swear, was named Donald Duckwall—had zero interest in Sutherland’s exchange program. The technically adventuresome Walt Disney was long gone. My enthusiastic descriptions were met with blank stares. To them, computers and animation simply didn’t mix. How did they know this? Because the one time they had turned to computers for help—to render images of millions of bubbles in their 1971 live-action movie Bedknobs and Broomsticks—the computers had apparently let them down. The state of the technology at the time was so poor, particularly for curved images, that bubbles were beyond the computers’ reach. Unfortunately, this didn’t help my cause. “Well,” more than one Disney executive told me that day, “until computer animation can do bubbles, then it will not have arrived.”

Instead, they tried to tempt me into taking a job with what is now called Disney Imagineering, the division that designs the theme parks. It may sound odd, given how large Walt Disney had always loomed in my life, but I turned the offer down without hesitation. The theme park job felt like a diversion that would lead me down a path I didn’t want to be on. I didn’t want to design rides for a living. I wanted to animate with a computer.

JUST AS WALT Disney and the pioneers of hand-drawn animation had done decades before, those of us who sought to make pictures with computers were trying to create something new. When one of my colleagues at the U of U invented something, the rest of us would immediately piggyback on it, pushing that new idea forward. There were setbacks, too, of course. But the overriding feeling was one of progress, of moving steadily toward a distant goal.

Long before I’d heard about Disney’s bubble problem, what kept me and many of my fellow graduate students up at night was the need to continue to hone our methods for creating smoothly curved surfaces with the computer—as well as to figure out how to add richness and complexity to the images we were creating. My dissertation, “A Subdivision Algorithm for Computer Display of Curved Surfaces,” offered a solution to that problem.

Much of what I spent every waking moment thinking about then was extremely technical and difficult to explain, but I’ll give it a try. The idea behind what I called “subdivision surfaces” was that instead of setting out to depict the whole surface of a shiny, red bottle, for example, we could divide that surface into many smaller pieces. It was easier to figure out how to color and display each tiny piece—which we could then put together to create our shiny, red bottle. (As I’ve noted, computer memory capacity was quite small in those days, so we put a lot of energy into developing tricks to overcome that limitation. This was one of those tricks.) But what if you wanted that shiny, red bottle to be zebra-striped? In my dissertation, I figured out a way that I could take a zebra-print or wood-grain pattern, say, and wrap it around any object.
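
The divide-and-conquer idea is easiest to see one dimension down. Here is a toy Python sketch of the same logic applied to a cubic curve rather than a surface patch (the stopping rule below, "flat enough to draw as a line," is one reasonable choice, not necessarily the dissertation's): keep splitting each piece in half until every piece is trivially displayable.

```python
import math

def flatness(p0, p1, p2, p3):
    """Max distance of the inner control points from the chord p0-p3."""
    dx, dy = p3[0] - p0[0], p3[1] - p0[1]
    chord = math.hypot(dx, dy) or 1.0
    def dist(p):
        return abs(dx * (p0[1] - p[1]) - dy * (p0[0] - p[0])) / chord
    return max(dist(p1), dist(p2))

def split(p0, p1, p2, p3):
    """de Casteljau construction: one cubic Bezier becomes two halves."""
    mid = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    a, b, c = mid(p0, p1), mid(p1, p2), mid(p2, p3)
    d, e = mid(a, b), mid(b, c)
    f = mid(d, e)
    return (p0, a, d, f), (f, e, c, p3)

def subdivide(p0, p1, p2, p3, tol, out):
    """Recurse until each piece is flat enough to draw as one segment."""
    if flatness(p0, p1, p2, p3) < tol:
        out.append((p0, p3))  # emit one tiny, trivially drawable piece
        return
    left, right = split(p0, p1, p2, p3)
    subdivide(*left, tol, out)
    subdivide(*right, tol, out)

segments = []
subdivide((0, 0), (40, 120), (160, 120), (200, 0), tol=0.5, out=segments)
print(len(segments), "flat pieces")
```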

“Texture mapping,” as I called it, was like having stretchable wrapping paper that you could apply to a curved surface so that it fit snugly. The first texture map I made involved projecting an image of Mickey Mouse onto an undulating surface.
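
In later terminology, that "stretchable wrapping paper" is a (u, v) parameterization: every point on the surface carries two coordinates between 0 and 1 that address a pixel in the source image, no matter how the surface bends in 3D. A minimal sketch, using nearest-neighbor lookup and an invented list-of-lists "image" standing in for real image data:

```python
def sample_texture(image, u, v):
    """Return the texel at parametric coordinates (u, v) in [0, 1]^2.
    `image` is a 2D list of colors; nearest-neighbor for simplicity."""
    h, w = len(image), len(image[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y][x]

# Any surface point that knows its (u, v) address gets its color from
# the image, which is how a flat picture wraps around a curved shape.
checker = [["black", "white"],
           ["white", "black"]]
print(sample_texture(checker, 0.1, 0.9))  # -> "white"
```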

I also used Winnie the Pooh and Tigger to illustrate my points. I may not have been ready to work at Disney, but their characters were still the touchstones I referenced.