The Dawn of Something Special
January 20, 2014

Dr. Ivan Sutherland looks back on the early days of computer graphics.

When discussing the history of computer graphics, a few names will always be at the forefront: Jim Blinn, Carl Machover, and Ed Catmull, among others. But only one has earned the distinction of being called the father of computer graphics: Dr. Ivan Sutherland.

Sutherland has been responsible for many pioneering advances and fundamental contributions to the CG technology used for information presentation, as well as the interactive interfaces that allow people to utilize computers without the need for programming.

Back in the early 1960s, while at MIT, Sutherland devised the Sketchpad interactive graphics software - a breakthrough application that allowed users to directly manipulate figures on a computer screen through a pointing device. Sketchpad was years ahead of its time and served as a conceptual progenitor of today's graphical user interface, used in everything from computer workstations to smartphones. Sketchpad was capable of automatically generating accurate drawings from rough sketches by depicting the component elements of objects and their interrelationships. It could draw horizontal and vertical lines and combine them into figures and shapes, which could be copied, moved, rotated, and resized while retaining their original properties. (Today's CAD systems are descendants of that program.)

 "I was driven by the idea that I could integrate a numerical representation of an object with a graphical representation, and that by manipulating the graphical representation, I could manipulate the underlying numerical representation," Sutherland had stated in the documentary The Story of Computer Graphics.

Sutherland earned his bachelor's in electrical engineering from Carnegie Institute of Technology, which later became Carnegie Mellon University, his master's from Caltech, and his PhD from MIT in electrical engineering and computer science (EECS). Afterward, he held a number of posts, from head of the Information Processing Techniques Office at the US Defense Department's Advanced Research Projects Agency, to associate professor at Harvard, to professor at the University of Utah. His work alongside students enabled the industry to take giant strides in CG technology with the development of a computer graphics line-clipping algorithm, a virtual reality and augmented reality head-mounted display system, Gouraud shading, anti-aliasing, and more. In fact, Sutherland designed many of these fundamental algorithms now used in CG.

By the late '60s, he and Dave Evans co-founded the Evans and Sutherland Computer Corporation with colleagues from the University of Utah, developing CG workstations such as the ESV series, and producing pioneering work in digital projection and simulation. His work has supported applications ranging from computer operating systems to video editing, animation, 3D modeling, and virtual reality. Presently, he is a visiting scientist at Portland State University.

Over the years, Sutherland has received many awards. Several months ago, he was presented with the Kyoto Prize - Japan's highest private award for global achievement - in Advanced Technology for his lifetime of pioneering work in developing graphical methods of interacting with computers.

At the Kyoto Symposium Organization's presentation gala, Dr. Sutherland spoke to Karen Moltenbrey, Computer Graphics World's chief editor, about his accomplishments and the industry in general.

What prompted you to pursue this then-new field so many years ago?

When I was in grade school and in early high school, our textbooks all had to be covered. My friends all had fancy covers from Yale, Princeton, or Cornell, where they aspired to go to college. My mother argued that we could not afford such fancy covers, but she had some blueprints from my father's civil engineering work, and they were large enough to cover the books. So my books were covered with blueprints. I got bored in class and started looking at the blueprints to figure out what they meant. As a result, I could read blueprints before I got very far in high school. I liked them - they said quite a lot in just a few lines. Then when I went to college, I had to take an engineering drawing class. The purpose was to teach us to read blueprints, which I could do already, and to make beautiful drawings, which I hated because I had neither the manual dexterity nor the patience to do that. When I would erase something, it would make a mess. I thought, wouldn't it be nice if we could do something better?

When I got to MIT, I stumbled into the MIT Lincoln Laboratory and was lucky enough to use the TX-2 computer. You have to realize that was the largest computer in the world at the time - it filled a room. [The transistor-based computer had 64K 36-bit words of core memory] - less computing power than you have in your cell phone. [Yet] it had twice as much memory as the next-largest computer. It had been built as an experimental machine to see how transistors could be used in large numbers to build computer equipment. It was used online, while all the other computing activities going on then required you to put your deck of punch cards into the computer and, two hours later or two days later, you would get a stack of printouts back.

I was allowed to use the TX-2 for hours at a time as my personal computer. So that was a stroke of luck, but I already had some notion of what engineering drawings looked like, and I thought perhaps we could do that on this computer - and that is what we did. You have to remember that the display system on the TX-2 was a point-plotting display. There was an instruction within the instruction set that allowed the computer to flash one dot at one of a million locations on the screen; that was the total capability of the display. Raster displays hadn't been invented yet, line-drawing displays hadn't been invented yet, character displays hadn't been invented yet.
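To give a sense of what a point-plotting display implies, here is a minimal sketch - not TX-2 code, and plot_point is a hypothetical stand-in for the single "flash one dot here" instruction described above - showing how even a straight line has to be produced dot by dot, in the spirit of a simple DDA:

```python
# Illustrative sketch only: reduce a line segment to the individual dots a
# point-plotting display would flash, one at a time.
# plot_point() is a hypothetical placeholder, not a TX-2 primitive.

def plot_point(x: int, y: int) -> None:
    print(f"flash dot at ({x}, {y})")   # in hardware, this would light one spot

def draw_line(x0: int, y0: int, x1: int, y1: int) -> None:
    """Plot a line as a sequence of dots, since the display knows only dots."""
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    dx = (x1 - x0) / steps
    dy = (y1 - y0) / steps
    for i in range(steps + 1):
        plot_point(round(x0 + i * dx), round(y0 + i * dy))

draw_line(0, 0, 10, 4)   # every "line" is really many flashed points
```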

Take us back to your work with Sketchpad.

It came about because I thought engineering drawings were interesting and I thought this computer could do that [digitally], so I figured out things to do to make that happen. And here they were, the pictures. Nobody had seen a computer produce pictures at that time, let alone ones that could move. So it opened people's eyes to what was possible.

Sketchpad itself was not very useful. It was not developed to a point where ordinary engineers could use it, nor was it affordable. [The TX-2] was the only machine in the world that could do this work. It was a very expensive machine, so computer time was quite precious, but it was a demonstration of potential, and that is why [Sketchpad] was valuable.

Was that the start, or at least the spark, that led to more developments in computer graphics?

You can say that, but I didn't think of it that way. I just thought it was a smart [way] to earn a PhD from MIT.

What was your vision at that point for the technology?

I didn't have much vision for the tech going forward. The next thing I did was go into the US Army. I had been in ROTC and had been putting off military service until I got my PhD, so I spent a few years in the army, and the army saw fit to send me to the National Security Agency - I was told they had computers there. They wouldn't tell me how many.

I then went to Harvard. I had seen some work at the Bell Helicopter Company, which demonstrated that a camera could be slaved to the position of a user's head: The user would think of himself as being where the camera was. A good demonstration of that was a camera watching a game of catch - two people throwing a ball back and forth. The observer was in a different room, watching the ball go back and forth by turning his head, and the camera would turn to watch the ball. Then they threw the ball at the camera, and the observer ducked. So it was quite clear that he had identified himself as being where the camera was rather than where he was. I thought, hey, we could put a computer there to compute [the view]. To do that, you have to make perspective drawings that change in real time, and the computers in those days were not capable of doing that, so we had to build them special to do that job. That special equipment ended up being the first products that the Evans and Sutherland Company ever made.

What was it like at the University of Utah back in the early '70s when there was so much innovation happening in CG?

There was an outpouring of computer graphics in those days. I thought about this quite a lot because many people have asked me what brought about that phenomenon at the University of Utah in the late 1960s/early 1970s. There were a lot of people trained there. John Warnock (co-founder of Adobe) was there, Ed Catmull (current president of Walt Disney Animation Studios and Pixar Animation Studios), Jim Clark (co-founder of Silicon Graphics)... a bunch of people were there, the top people in computer graphics. So the question is, why did that happen?

I think a good research program takes three things. First, there needs to be a good problem, one that could be solved now that could not be solved in the past. In the computing field, that often comes about because an increase in computing power makes a problem solvable today that wasn't solvable yesterday. So a good problem in this case was, how do we make pictures of objects that look solid instead of being wireframe drawings? The second thing you need is support, or money, to make the research possible. At that time, DARPA was fairly generous with supporting university research in a number of fields, and there was a DARPA contract at the University of Utah that provided the resources that were needed. And the third thing that is needed - and this is the hardest thing to identify and find - is leadership. There was a wonderful man there by the name of Dave Evans, and he was the founder of the Computer Science Department at the University of Utah. Why was he there? Because the president of the University of Utah at the time was Jim Fletcher, who twice served as administrator of NASA. Jim Fletcher provided the leadership and recruited Dave Evans from Berkeley, telling him he had to come start a computer science department at the University of Utah.

Dave Evans' leadership was superb. Everyone trusted him. He had a view of what direction we should take and posed the problem of making realistic-looking pictures [on the computer]. He recruited me and some other people who made interesting things happen. When you see interesting research happening, you look to see who the leader is because there always is one.

What were the hurdles to furthering the research at the time?

A fundamental problem in computing is often the algorithms. How do you find efficient algorithms to do some particular job, be they implemented in hardware or software? At that time, the algorithms to do computer graphics were not well understood, and many of the early ones evolved at Utah as various people put their minds to 'doing something that's better than what is done now.' At the end of that period, Bob Sproull, Bob Schumacker, and I published a paper called 'A Characterization of Ten Hidden-Surface Algorithms.' That paper pointed out that all 10 of the then-known algorithms used to make solid-looking pictures relied on sorting. The problem was finding out what was in front of what. To do that, you have to sort things geometrically so you know what is in front. You have various choices in doing that sorting: Do you sort in X, then in Y, then in Z? Or in Z, then in Y, then in X? The order in which you sort matters, and so does the kind of sorting that you use. It's widely known that depending on the statistics of the things you sort, the best sorting algorithm will be different. If things are almost sorted already, you use one kind of algorithm, and if they are random - there is no order to them at all - a different kind of algorithm will be better. That paper pointed out that this problem is really one of sorting.
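For readers who want the flavor of "hidden-surface removal as sorting," here is a minimal painter's-algorithm-style sketch. It is only an illustration of the general idea - one simplified depth sort, not any of the ten algorithms the paper characterized - and Polygon and draw() are hypothetical placeholders; a real algorithm must also resolve overlaps and ambiguous orderings that a single depth value cannot capture.

```python
# Illustrative sketch: sort polygons by depth and paint back to front,
# so nearer surfaces overwrite farther ones ("what is in front of what").

from dataclasses import dataclass

@dataclass
class Polygon:
    name: str
    depth: float   # one representative depth per polygon; a simplification

def draw(poly: Polygon) -> None:
    print(f"paint {poly.name} at depth {poly.depth}")

def painters_algorithm(polygons: list[Polygon]) -> None:
    # The geometric visibility question becomes a sorting problem.
    for poly in sorted(polygons, key=lambda p: p.depth, reverse=True):
        draw(poly)

painters_algorithm([Polygon("wall", 10.0), Polygon("table", 5.0), Polygon("cup", 2.0)])
```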

Having written that paper, I stopped doing computer graphics because I did not find sorting all that interesting.

What did you find more interesting?

I went to Caltech and worked with Carver Mead, and there was an integrated circuit revolution going on, which happened because of Lynn Conway's leadership in teaching people how to do it. That integrated circuit revolution gave engineers [the capability] to produce all the exciting results in integrated circuits for the next 30 years. I have been involved in integrated circuit design ever since [and did not return to computer graphics].

What were some of the more memorable advances in computer graphics, in your opinion?

How do I answer that question? I don't want to point a finger at any one thing. It's pretty thrilling, you have to admit. How do you identify the key things?

Do you have a vision of what CG will be like in the next decade or beyond?

I have a stock answer for this: Ask the young people who will make it so; don't ask the old guys, because we haven't been in the field for 25 years. They are the ones who will make it happen. It's going to be exciting. We know that. We just don't know what it's going to be.

Is there some new technology that's needed in order for us to take a big leap forward?

Government people are fond of talking about a paradigm shift. If you look up paradigm shift on the Web, you will find a little movie called 'A Paradigm Shift.' In the movie, they take a pair of dimes - 20 cents - put them on a table, and shift them sideways, and that is a 'pair of dimes shift.' (laughs) The paradigm of how you use computers shifted from batch processing to online, and graphics had something to do with that. The paradigm of who designs integrated circuits shifted from a few commercial individuals to the academic institutions that teach people to do it. Conway had something to do with that, and I like to think I made some small contribution. Today, we have a design paradigm for digital hardware that's called the Clock Design Paradigm. You buy a 3.5GHz processor, and 3.5GHz refers to the clock frequency. The clocked paradigm depends on all parts of the design acting together: on every clock tick, things are done simultaneously. And simultaneity over space is impossible - that's what Einstein teaches us. You may recall when Curiosity landed on Mars. Nobody thought that a pilot on Earth could control the landing [with a thirteen-and-a-half-minute delay]. Every circuit today is just big enough that you cannot do anything simultaneously over the entire circuit, the entire chip area, so you have to do something else.

People are struggling today to extend the synchronous paradigm over larger areas, and they can't. It's physically impossible. I think there is an alternative: to give up the Clock Design Paradigm in favor of self-timing. Each part of the machine can do its thing and deliver its answer to the next piece, instead of giving each piece of equipment exactly [a specific time] to do its job. Well, some jobs are easy and some are hard. So, why don't we take a little longer for the hard ones and a little shorter for the easier ones, and get the same performance with less energy consumption and less stress on the designer? That is a paradigm shift that I think is going to happen. It's inevitable, and physics says that it must happen.
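To make the contrast concrete, here is a toy software analogy - my own illustration, not Sutherland's design or any real hardware methodology, with made-up stage timings - of a clocked pipeline, where every stage must wait out the worst-case tick, versus a self-timed one, where each stage passes its result on as soon as it is ready:

```python
# Toy illustration of clocked vs. self-timed stages; all numbers are invented.

import time

STAGE_TIMES = [0.01, 0.05, 0.02]        # some stages are easy, some are hard

def run_clocked(data: int) -> int:
    clock_period = max(STAGE_TIMES)     # every tick must cover the slowest stage
    for _ in STAGE_TIMES:
        time.sleep(clock_period)        # wait out the full tick even if done early
        data += 1
    return data

def run_self_timed(data: int) -> int:
    for stage_time in STAGE_TIMES:
        time.sleep(stage_time)          # each stage takes only as long as it needs,
        data += 1                       # then "handshakes" its result to the next
    return data

start = time.perf_counter(); run_clocked(0)
print("clocked:   ", round(time.perf_counter() - start, 3), "s")
start = time.perf_counter(); run_self_timed(0)
print("self-timed:", round(time.perf_counter() - start, 3), "s")
```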

But today we have legions of designers who know how to design synchronous equipment. We have billions of instructions of computer-aided design code that all depend on the synchronous paradigm. We have only a half-dozen places in the US - universities - that are doing asynchronous, self-timed studies and that understand how to do self-timed design. I try to devote my efforts to enhancing that activity so that we can be prepared with designers, techniques, and understanding when it becomes clear to management that the synchronous design paradigm has run its course.

Other people are also pushing that, as am I. But the vast majority of the computer world is resisting because they do not wish to be dragged screaming into a new world. People hate doing things in new ways. Ultimately, the self-timed paradigm will prevail because physics says it must.

So much of the development of CG occurred in small pockets, at the university level, with only a few people having access to the technology. Now a lot of high-end CG technology has gone mainstream. What is your reaction to that?

Whoopee. I love computer-generated movies. If you go to some of Pixar's stuff, it is very entertaining. The fact that it is generated by computers is irrelevant. It's great stories. The great things that Pixar does are done by the artists and the creative people, and the computer graphics is just incidental to making it happen. It's like the wood of the stage, which is just incidental to what the choreographer has done with the dancer. Here, the movie projector is incidental to what Hollywood has produced in terms of great motion-picture literature. I think that computer graphics is a very nice technology, but one must remember that it plays the supporting role, not the leading role.