User Interfaces: Survey Information
9 May 1997
Often it is hard to learn a new tool. This is particularly the case
in the complicated world of the computer, where there are many different
technologies (software tools) and many different ways to access them (different
hardware and different screen layouts). Bridging the gap between
the technology and the user -- making the technology easy to learn and
easy to use -- is the concern and the task of the "user interface."
Users are the different people who might use a technology.
The interface is how they get in there and use it.
From this report, I hope the reader will gain a perspective on computer
user interfaces: some picture of the philosophy, the history, and the present
and potential future forms of the computer user interface.
User refers to the different people who might be using a certain tool.
In these times of wide software distribution, there are actually many different
users to consider when designing an interface. Interface seems to
be the most appropriate word for describing how a person gets access
to a technological capability. The user develops a conceptual
understanding of the way in which the tool, say a computer, works, and uses
that conceptual understanding to apply the features available to accomplish
tasks, putting information to work.
Any technology used by humans has a user interface.
Any technology that is used by humans, and even many that are not, has some
sort of user interface. Most user interfaces are simply the design
and arrangement of the visible or accessible features of a technology that
bridge the gap to the technology's capabilities. Sometimes
it is necessary to hide certain features of the technology to make it more
comprehensible and easier to use. In all complicated cases
at a minimum, and probably in all cases, the user interface should be chosen
and designed very carefully. The user is the ultimate concern in
the user interface, and the user interface goes a long way toward determining
the success of a product.
I will restrict myself to computer user interfaces and try to present
a survey perspective on the computer user interface.
To cover all of the details of user interfaces in a single report is an
impossible task. My plan is to do three things: limit myself to the
world of computers, in which I have the opportunity to provide some interesting
and structured survey information; restrict myself to a somewhat heuristic
approach, giving commentary and thoughts and only some formal theoretical
background; and choose some real technologies in the world of computers
to give some specific thoughts about. By technologies, I do not
mean actual physical computers or computer models. Rather, I mean
more general aspects of the user interface for computers, even if a certain
user interface technology was generally associated with a specific computer.
I also hope to give some historical as well as forward-looking information
about the computer user interface.
Not only are the computer and the interface to it becoming increasingly
important in everyday life, but the computer provides a fertile, malleable
ground for user interface design. In the last decade or so, the most
interesting user interface work has been in the human-computer interface
(HCI). Much work has gone into the development of user-friendly
and easy-to-learn HCIs, possibly because the computer user interface is
malleable, partly because there is no final best computer user interface,
and largely because computers are being used everywhere.
Computers are an interesting case study for user interfaces
for many reasons. The computer is a very complicated object.
Computer scientists, essentially specialists, are needed to write programs
for it, and such programs are sophisticated and complicated, requiring complex
tools and many hours of effort. Furthermore, the complication is not
just in the end-user programs but also in the hardware that makes up a
computer. In fact, the complication starts with the hardware!
For many computer companies, the user interface is icing on top of their
primary concerns: getting the actual program or hardware to work.
The goal is to get people using computers as effectively as possible
as easily as possible.
Many computer companies neglect the icing. User interface design
is an area which can add significant value to users of a particular technology.
Any tool that is easier to learn and use and extend and understand, will
provide more value to its users.
User interface is the icing for many companies. Many companies
neglect the icing.
People have been looking at the problem of improving user interfaces for
many years. Not all of that work is entirely heuristic or fuzzy.
The military has done significant theoretical work in Human Factors theory
to better codify and understand the principles that are important in user
interface design.
Human Factors Theory:
For a solid treatment of Human Factors theory, see http://www-inst.eecs.berkeley.edu/~eecsba1c.
Human Factors finds its roots in the following fields, of which the first
three have been the biggest influences: Computer Science, Cognitive Psychology,
Ergonomics and Human Factors, Engineering and Industrial Design, Anthropology,
Sociology, Philosophy, and Artificial Intelligence. Within Computer Science,
Human Factors usually falls into the category of User Interface Design.
This is an appropriate name, since the field of HF is ultimately concerned
with the user, enhancing the user's experience, be it in productivity,
understanding, or capability.
Human Factors (HF) concentrates on the user with an interdisciplinary approach.
The formal field of Human Factors began in aviation, where it became evident
that human safety was at issue when pilots became too challenged by the
design of their cockpits (see Human Factors Guide to Aviation, http://www.hfskyway.com/hfg/c01s01.htm.)
The first identifiable work in the area of equipment design and human performance
was done during World War II (Preece, Jenny. A Guide to Usability:
Human Factors in Computing, 14). This work was concerned primarily with
eliminating certain accidents related to cockpit design and aircrew performance.
In fact, much of the pioneering work related to equipment design, training,
human performance under stress, vigilance, and other topics was conducted
and published in the period following the war.
HF originated in aviation design in the military.
Ergonomics was the word used in Europe to describe this field of study.
The original distinction between ergonomics (the word's Greek roots mean
"the study of work") and human factors has gradually disappeared.
The distinction was officially removed recently when the Human Factors
Society changed its name to the Human Factors and Ergonomics Society (see
http://hfes.org/). The terms "human
factors" and "ergonomics" are used interchangeably.
At present, Human Factors is a growing field, with new applications of
Human Factors principles being developed often. For computer applications,
for instance, Human Factors is part of the software engineering design
cycle as well as part of the end-user experience. Human Factors also applies
to mechanical design, with many Industrial Design companies being started
to design instruments and the physical forms of products that appeal to
end users. Human Factors is also used in the design of medical products
and medical applications. At present there are human factors specialists
in each of these fields who can apply the principles and the interdisciplinary
theory of human factors in the analysis of a particular technology.
HF is being applied in many different fields.
HF is best seen as a human jumping from one side of a cliff to the other.
Here we see a person attempting to jump from one side of a cliff to
the other. The challenge is proportional to the size of the gap
between the two cliffs. The person is a user of a computer system. The
user begins with Task Inception. This is the point where the user has
decided to accomplish a task by using a computer system. The user
faces the challenge of the computer system in order to reach Task
Completion. In an ideal system there would be no challenge; the user
could accomplish the task without having any human factors agitated.
Human Factors are the user limitations; for example, users have limited
patience, limited memory, and need analogies. In an ideal computer
system there would be no challenge to these factors, and
thus the gap between Task Inception and Task Completion would go to zero.
5. Changes in mood
6. The need for motivation
12. Process information non-linearly
16. Can only perform a limited number of concurrent tasks
17. Short-term memory works differently than long-term memory
18. Users are all different
19. Think in terms of ideas composed of words, numbers, multimedia,
21. Must see and hear to understand
23. Need information presented in sets of threes
24. Need complex information presented hierarchically
25. Confined to one physical location at a time
26. Require practice to become good at doing things
27. Embarrassment can act as a limitation to accomplishing some tasks
28. Tend to do things the easy way
29. Resistance to change
30. Can be physically harmed by some tasks
31. Prefer to learn by doing than by explanation
32. Have difficulty converting ideas into modes of communication
33. Have difficulty converting modes of communication into ideas
35. Sometimes affected adversely by stimuli such as color and patterns
37. Miss details when tasks are memorized and performed cursorily
38. Can be affected by socio/political climate, which the designer has no control over
39. Prefer standard ways of doing things
40. Constrained by time
42. Work better in groups than individually (1+1=3)
43. Require tasks to be modularized in order to work in groups
44. Use intuitions to construe information that is sometimes wrong
45. Rely on tools to complete tasks (like spell checking) thus causing
46. Must delegate responsibility in order to free the mind of complexity
48. Associate unrelated things
49. Sometimes do not trust what is not understood
Identifying the Human Factors associated with a design is the most important
thing an HCI practitioner can do when designing a system.
The second most important thing that can be done is user testing: seeing
whether the user is able to effortlessly accomplish the intended task. These
two principles are the most compelling ones in the design cycle.
The keyboard is perhaps the most ubiquitous computer interface. Resembling
a typewriter, the keyboard was introduced as a good alternative to input
via the single buttons or flip switches used in early computers.
The keyboard works by translating each keypress into a binary code, which
in many senses simply expedites the input of bits that the computer can
process.
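As an illustrative sketch of that translation step (using character codes rather than an actual keyboard scancode table, which is an assumption for simplicity), each keypress becomes a small packet of bits:

```python
# Illustrative sketch: a keyboard translates each keypress into a binary
# code. Real keyboards send hardware scancodes; here we use character
# codes as a stand-in for simplicity.
def encode_keypress(key: str) -> str:
    """Translate a single key into the bits sent to the computer."""
    code = ord(key)              # numeric code for the key
    return format(code, "08b")   # eight bits per keypress

# Each keypress becomes a small packet of bits:
for key in "abc":
    print(key, "->", encode_keypress(key))  # a -> 01100001, etc.
```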
QWERTY Keyboard Layout
In the history of keyboards, it is interesting to note that the keyboard
design is not necessarily optimal in either efficiency or ease of use.
At present, the QWERTY keyboard layout shown above requires a significant
amount of training to learn proficiently. The QWERTY keyboard was
chosen originally for typewriters in order to slow typists down so as to avoid
jamming the typewriter mechanism (see http://www-leland.stanford.edu/~pwr/dvorak.)
Since the computer keyboard is not constrained by the requirements of the
typewriter, it is possible and indeed easy to have a different layout.
A more efficient layout that better suits users is the DVORAK
layout shown below. But this keyboard layout was not adopted, largely
due to resistance from the existing base of QWERTY users and typing trainers.
DVORAK Keyboard Layout
From a Human Factors perspective, the QWERTY keyboard is bad.
Repetitive stress injury and finger and wrist strain are common among heavy
computer users, largely because the layout of the computer keyboard is less
than optimal. Furthermore, QWERTY is difficult to use, and touch
typing requires concentration and skill. To some degree, keyboards
overload the human factors, especially among novice users. Nevertheless,
familiarity with the keyboard breeds comfort. There are many
extremely proficient keyboard users, and there are many computer users
who would prefer typing to handwriting for many applications.
One of the most significant steps in improving the user interface to the
computer after the keyboard was the mouse, a wired pointing device that
sits on its own platform usually near the keyboard. The mouse greatly
improved the ability of a computer user to point and select, simply with
a mouse movement and a click. Such movements are quite natural and
are analogous to pointing. Especially with the increasing popularity
of graphical user interfaces, the mouse has become essential in the use
of a computer.
Different mouse shapes have evolved, including mice that are sized
for different hands. Substitutes for the mouse include rollerballs
that can be used to specify displacements for cursor movement. Touch
pads have also been introduced as substitutes for the mouse; displacements
are specified by touching the pad and then dragging the finger.
Quite obviously, the idea of pointing that exists with the mouse or mouse-like
device is extremely powerful. It has been incorporated into almost
all computer systems. The mouse fits human factors well.
Not only is the mouse simple and easy to use, but it requires no real learning
or instruction. Just point and click and that's it.
The idea of pointing has been taken further in certain modern displays
by allowing dragging and ultimately handwriting recognition. Handwriting
recognition replaces the keyboard by allowing the user to input via gestures.
Handwriting seems to be a natural input for many users, though the technology
has not advanced to the point where it is possible to readily recognize
cursive handwriting. But handwriting looks like a promising
input to computers in the future, if not as a replacement for the keyboard
then likely as an occasional substitute for it.
Palm Pilot Personal Organizer
Handwriting recognition technology has become most popular in the Pilot
personal digital assistant shown above (see
http://www.usrobotics.com/). The Pilot uses a variation on traditional
printing known as Graffiti, which enables pen gestures to be recognized
as input letters on the Pilot. There is no keyboard on the Pilot,
though there is an on-screen keyboard which can be used to pick out letters.
It is quite likely that the Graffiti system used in the Pilot will be used
in other computer systems as well. Handwriting input is quite natural
for most users. Handwriting technology is also merging toward general
gesture recognition technology, where inputted pen movements can determine
the ways that the computer behaves, eventually giving the computer the
ability to understand specifications of not only letters but also images
and drawings. The only drawback right now is that the recognition
algorithms are not 100% accurate. Furthermore, handwriting into the
Pilot is not as quick as typing on a full-size keyboard for most users.
Speech Input and Output
Speech input is becoming increasingly popular, both as an alternative to
the keyboard and as a command-specification mechanism. Speech recognition
appears to be one of the holy grails of computer use as far as human factors
is concerned. For the last twenty years, many computer developers
have looked at speech and conversational input as a potentially great improvement
in the input to computers.
Voice Input to a Computer
Icon from IBM Voice Type Software
Speech recognition has proven extremely difficult. Present
methods require significant computing power to implement speech recognition
algorithms, and furthermore, such recognition is not always terribly accurate.
But the technology is improving with research from many different sides.
With increasing computing power, speech recognition looks like it will
become more prevalent in computer interfaces.
Audio output is somewhat more common. Not only is audio output a
somewhat more tractable problem, but it is ready now and can provide immediate
value to most programs.
What is most troublesome right now is that there are no guidelines available
for speech input or output use in computers. This is actually a hot
topic in current user interface research, with a wide variety of people
working under human factors methodologies to create the best user interfaces.
Virtual Reality Inputs
In the category of more exotic input technologies, the data glove is one
of the first technologies that comes to mind. The data glove looks
like a regular glove, but together with its interface to the computer,
it allows the user to specify three-dimensional movements, grasping, and
other operations much as a real hand would. For many users, the feel
is that of having a virtual hand.
The Spaceball is another technology that improves on the mouse. It is
essentially a mouse with six degrees of freedom, allowing a user three-dimensional
displacements in environments that support it.
The Spaceball, the data glove, and the head-mounted display, described later,
are often seen as the most likely future interfaces when Virtual Reality
technologies become more popular. Virtual Reality is the
name given to the three-dimensionally based environments that computers
can create through significant computation. These three-dimensional
environments, presented in a certain way, can give the user an immersive
experience, hence the name virtual reality.
Though it is often not viewed as a "user interface" per se, there
is no doubt that the display is one of the most important
aspects of computers and the computer-use experience. Computers thirty
years ago did not have displays; they had only punch cards and binary
input and output. ENIAC, an early computer, for instance, had a
series of light bulbs which displayed the output results. Inputs
were entered via physical vacuum-tube switches representing binary numbers.
Later computers were able to output results onto paper and
tickertape, which improved life for some users.
Early displays for mainframe computers were text screens that scrolled
sequentially. Significant advances were required to have a single
screen that could be updated without the appearance of scrolling.
One technology that enabled such displays was the bitmapped screen, where a single
light point on the screen, or pixel, could be turned on or off based on the
computer output. A single display frame involved scanning from top to bottom
through each pixel to give the final picture.
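A minimal sketch of that idea, assuming a tiny 1-bit framebuffer (the dimensions and text rendering here are illustrative, not a real display driver):

```python
# Illustrative 1-bit bitmapped display: each pixel is on or off, and a
# frame is produced by scanning every row from top to bottom.
WIDTH, HEIGHT = 8, 4

# The framebuffer holds one bit per pixel.
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]
framebuffer[1][2] = 1   # turn individual pixels on
framebuffer[2][5] = 1

def scan_out(fb):
    """Scan the framebuffer top to bottom, pixel by pixel, into a picture."""
    return "\n".join(
        "".join("#" if px else "." for px in row) for row in fb
    )

print(scan_out(framebuffer))
```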
Early Bitmap Display from an early Alto computer
at Xerox PARC (not a color display)
These bitmap screens eventually incorporated color. Multiple binary
bits represent different levels of contribution from primary colors,
for instance red, green, and blue. Different colors can give the impression
of three dimensions, shadows, and highlights on certain visual elements.
Colors also make pictures more realistic. The visual experience
possible with color is significantly more elaborate than the experience
possible with a single color.
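A common encoding along these lines packs eight bits each of red, green, and blue into one pixel value; this sketch is illustrative rather than tied to any particular display hardware:

```python
# Illustrative 24-bit color pixel: eight bits each for red, green, and
# blue, packed into a single integer.
def pack_rgb(r: int, g: int, b: int) -> int:
    """Combine three 0-255 color contributions into one pixel value."""
    return (r << 16) | (g << 8) | b

white = pack_rgb(255, 255, 255)
red = pack_rgb(255, 0, 0)
print(hex(white), hex(red))  # 0xffffff 0xff0000
```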
In the push toward making pictures more realistic, display resolution
has expanded quite dramatically. Only ten years ago, the maximum
display resolution was 640x480 pixels on a 13-inch screen. At present,
1280x1024-pixel displays on 21-inch screens are common. The added
pixels and resolution improve the representation of text and images.
Extra resolution has also increased the real estate available on the screen.
This means that more windows can be placed on a screen, and more accurate
or detailed images can be used to represent commands. In almost all ways,
the user sees more realistic and more pleasing pictures.
Toshiba Laptop Computer
With the move toward more portable computers, there has been a corresponding
evolution from black-and-white laptop displays to high-color LCD displays.
The extra resolution enhances a user's experience in much the same way as
a large, regular CRT-based display. But LCD display technology is
also enabling other sorts of displays, such as head-mounted displays.
Future Display Technologies
Head-mounted displays (HMDs) are often seen as the likely future of display
technology. A head-mounted display generally has two small LCD screens
placed in the lens area of a mechanism that looks like glasses. By
placing the LCDs in such a mechanism and then wearing the glasses, it
is possible to give the user the impression that he or she is immersed
in the display. The user's field of view includes little else other
than the LCD screens. Provided the images are high resolution, and
LCD technology is improving to enable this, the user will get a highly
immersive experience through the head-mounted display.
Virtual Reality Game
Displays are not only shrinking but also getting larger. Some
of the most sophisticated displays are extremely high-resolution (300 dpi)
LCD displays the size of large-screen TVs, 40 inches or so.
These displays can hang from walls as decorations or can be used more practically
as whiteboard-style displays.
Technology has advanced to the point where it is possible to make certain
displays responsive to touch, either of a finger or of a stylus on
the screen. Such touch responsiveness will enable interesting forms
of interaction, including whiteboard-style interaction with displays.
This concept has already been demonstrated in the Xerox LiveBoard, a computer
hooked up to a touch-responsive, large-screen TV display. The LiveBoard,
developed at Xerox PARC, can be used by various users to brainstorm or
simply to run computer programs in view of multiple users.
UC Berkeley Infopad
Another style of collaborative interaction, also developed at Xerox PARC
and at UC Berkeley in the Infopad project, involves using a LiveBoard-style
main screen but giving each user a pad-like LCD display that they
can use as their own copy of the whiteboard. The pad, called the
Infopad at UC Berkeley, can be used collaboratively to share notes in
real time with other users of the same computer. Essentially the paradigm
enables multiple users to share their displays and computer resources in
a convenient way.
Graphical User Interface (GUI)
Command Line Input
Though this section is concentrating on the graphical user interface,
no discussion of computer user interfaces would be complete without some
mention of the command line.
DOS Command Line
The command line is still quite popular among users of UNIX and users of
DOS, two extremely popular operating systems that had no real graphical
interfaces. Command-line interfaces required users to memorize keywords
to accomplish certain basic computer tasks, such as copying, editing, or
running a file, or listing available files. The GUI developed as a response to
the command-line interface, in an effort to make computers easier to use,
to remove the need to memorize commands, and to make finding and running
applications easier.
Xerox PARC Star Computer
Perhaps the most significant event in the history of the graphical user
interface was the development of the Star computer, the first computer
with a graphical user interface, at the Xerox Palo Alto Research Center
(PARC) in 1979.
The graphical user interface was remarkable in that it was an extreme departure
from the command-line, text-based interfaces of early computers. It
made using a computer much easier and more accessible and matched users'
human factors extremely well. The Star graphical user interface more closely
resembles the graphical user interfaces of today than the text-based interfaces
of the past.
Xerox Star User Interface
The Star was also particularly remarkable in that it was developed in a
human factors design cycle. The user's capabilities, preferences,
and learning styles were all incorporated into the initial design of the Star,
and iteration was used to determine the final design. The Star was
developed with the user in mind; the user was not left to fit into
the interface once it was developed.
Some of the principles that were developed with the Star computer include
direct manipulation, WYSIWYG (What You See Is What You Get), and consistency
of commands (Winograd, 33-35). Direct manipulation involved giving
the user metaphors for certain actions, such as file cabinet icons and
folders to represent where files were stored, and then allowing the user
direct control of those icons in the sense of being able to move them around
or open them. WYSIWYG was an important, innovative use of the bitmap
display; pictures and font styles were incorporated into the display rather
than shown only in text representations. Consistency in commands
across applications was also a key innovation in the Star computer.
With similar commands from application to application, such as File->New
and Edit->Copy, learning and using and even writing applications became
much easier.
Many of the most sophisticated early and current computer users have used
workstations, computers that generally use the highest-end processors and
peripherals of the day. Most workstations used some version of the
UNIX operating system, a powerful text-based operating system. Users
of workstations at MIT developed the X-Windows environment in the early
1980s, taking some of the ideas from the Star computer and giving the UNIX
operating system a windowing environment.
X-Windows System User Interface
X-Windows is a solid, highly extensible implementation of the ideas presented
in the Star user interface. At present there are at least a dozen
different window managers, the programs that determine the look and feel
of the interface, and X-Windows' internal model gives the user great control
over many of the look-and-feel features of the operating system.
With its origins in the text-based early UNIX systems, though, understanding
how to change colors and other options definitely requires computer sophistication.
In the early 1980s, Xerox decided that personal computers were far removed
from its core copying business and decided not to mass-market the Star
system. At that point, another Silicon Valley company based in nearby
Cupertino, Apple Computer, was doing extremely well selling the Apple I
and Apple II computers. In the early 1980s, Apple Computer founder
Steven Jobs visited Xerox PARC and received a demo of the Star user interface.
Jobs had a vision, took the ideas in the Star user interface, and developed
the follow-on to the Apple series of computers, the Macintosh.
Apple's initial version, the Lisa, incorporated ideas
from the Star graphical user interface, but it was very difficult to manufacture
and hence very expensive. With iteration, Apple's next version, the
Macintosh, was wildly successful when it was released with much fanfare
and an Orwellian-themed Super Bowl commercial in 1984.
Macintosh Super Bowl Commercial
The 1984 theme was a direct attack on the command-line-based PC domination
of IBM in the computer world of that time. The Macintosh's particular
appeal to users was its ease of use, its interface. The Macintosh
user interface was significantly easier to use than the text access to
IBM mainframes or the Microsoft DOS interface of the early IBM personal
computers. This does not mean that the Macintosh was the most successful
computer, however. Most companies chose IBM personal computers,
largely because of IBM's corporate reputation and the vast amount of software
available on the IBM PC platform.
Macintosh User Interface
Still, Microsoft and the other players in the PC industry realized that
they needed to develop a more usable, graphical computer interface.
Microsoft took many of the ideas available in the Macintosh User Interface
and created Windows.
Microsoft Windows came out in an initial form in the mid-1980s.
It was not terribly successful because the interface was clumsy and not
many software programs took full advantage of the features available in
Windows. Most users were still comfortable with the DOS-based interface
to their PCs. Users who wanted a graphical user interface chose
the Macintosh. In 1990, a revised version, Windows 3.0, changed the
entire PC interface arena and moved the PC closer to the Macintosh in
ease of use.
Program in Windows 3.0
Windows 3.0 was effective not because it was the best user interface; many
people still viewed the Macintosh as easier to use. But Windows 3.0
could be installed on top of DOS, allowing continued use of DOS programs
and making the transition to Windows versions of popular programs smooth.
Many software developers embraced Windows 3.0 for development of their own
programs. In a few years, many applications were available exclusively
on the Windows 3.0 platform, ranging from business applications
to games. A Windows PC became an essential tool in most businesses
and an extremely popular tool at home.
Microsoft upgraded Windows in 1995 with the release of Windows 95, a new
operating system with a further improved user interface as well as some
features intended to make owning and using a computer easier.
For instance, adding common hardware is supposed to be accomplished using
'plug-and-play' technology, by which Windows 95 itself can figure out what
particular hardware is installed and install the proper drivers to give
the computer access to that hardware.
It is interesting to note that the Macintosh was able to circumvent the
problems of different brands, types, and compatibilities of peripheral
hardware by holding tight control over its manufacturing. Apple did
not license its operating system to other computer manufacturers for many
years and recently has licensed it to only a few manufacturers. This
means that the add-on hardware available for Macintosh computers is usually
quite compatible and easy to install. It also means, however, that
the price of most Macintosh components is higher than that of most PC
components, since there is less competition among manufacturers and lower
economies of scale.
Windows95 User Interface
In any case, Windows 95 has largely received positive reviews from users
regarding its user interface. Some of the innovations in Windows
95 include a Start button, which works like a hierarchical menu,
giving the user the ability to see what programs are installed on the computer
and to run them directly from that menu. Windows 95 also includes
an integrated task bar, which sits at the bottom of the screen and holds
button representations of the programs that are presently running.
This gives the user an easy view of the running programs and quick access
to them.
Before Windows 95 was released, Microsoft introduced a software program,
MS-BOB, that was supposed to make computers accessible to technophobes
and novice computer users. It used a very elaborate desk-and-house
metaphor; pictures represented elements on the desk. The user would
click on a picture of a rolodex to access the address book, for instance.
The most important innovation in MS-BOB was the use of agents: iconic guides,
usually animals, who helped explain the available features as well as give
general help to the users of the program.
As a whole, MS-BOB was a financial failure. It was often seen as
lacking in function and was most popular only among school children.
Perhaps this can be understood by considering that MS-BOB needed to be
installed on an expensive PC, often as an add-on program to Windows 3.0.
Hence the users of the computer would have to know how to use Windows 3.0
before installing MS-BOB. Most of them did not appreciate needing
to dumb down their expensive computer, and most reacted negatively to
the program.
Microsoft Bob User Interface
WWW Browsing: Mosaic and Netscape
In 1994, with the introduction of the NCSA Mosaic WWW browser, there
was another revolution in the GUI. The WWW began as a way to share
documents among computers connected to the worldwide interconnected
network, the Internet. These documents are uniquely identified by a
Uniform Resource Locator (URL) and can contain hyperlinks, links to
other documents on the web.
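The anatomy of a URL can be made concrete by splitting one into its parts. As a minimal sketch, using Python's standard urllib.parse module and a made-up address (the host and path here are hypothetical, chosen only for illustration):

```python
from urllib.parse import urlparse

# A hypothetical document address in the classic WWW style.
url = "http://www.example.edu/docs/mosaic/index.html"

parts = urlparse(url)
print(parts.scheme)  # protocol used to fetch the document: "http"
print(parts.netloc)  # host that serves the document: "www.example.edu"
print(parts.path)    # location of the document on that host
```

Each component plays a distinct role in uniquely identifying a document: the scheme says how to talk to the server, the network location says which server, and the path says which document on that server.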
Netscape WWW User Interface
A few WWW browsers have become extremely popular, most notably
Microsoft's Internet Explorer and Netscape's Navigator. These programs
use the graphical user interface and the network infrastructure to give
users the ability to communicate with each other not only through WWW
documents, but also through electronic mail, voice collaboration, and
real-time virtual whiteboards. These technologies have been incorporated
into the latest version of Netscape's Navigator, Netscape Communicator.
These changes result from new underlying technological capabilities.
The WWW browser and communicator are good starts towards bringing these
capabilities to users in a user friendly way. However, more work
needs to be done to make accessing these new technologies intuitive and
Future: Microsoft Active Desktop and Netscape
Constellation and Macintosh Copland
What are the potential future forms of the GUI? There are indeed
many proposals, but the major proposals seem to have some common underlying
themes. One theme is that the GUI will be customizable: easily
changed to match the user's preferences. These customizations
might be stored locally or over the network.
Macintosh Copland Customizable User Interface
(now abandoned by Apple)
Another common theme is that the interface will allow the user to access
network features readily. The new push in computers is toward networked
capabilities, moving away from the single-user model and toward
interoperation among users. These new capabilities are important
and will likely be carefully incorporated into GUI's in the future.
Netscape Constellation User Interface
(under current development by Netscape)
Ultimately user interfaces are concerned
with how the user is able to access technology. Technological
innovations and sophistication might be underneath the interface,
but the user interface ultimately gives that technology
value. A technology is worthless unless users can use it!
User interfaces are often most effective when they are molded to the
learning styles of the users under consideration. They are
generally best designed with constant feedback from actual users.
Since iteration is so important in user interface design, this
brings up an interesting scenario: perhaps the best user interfaces
might be able to change themselves in response to user
tendencies. Many operating systems give this sort of option,
letting the user change the colors and the shapes of certain icons and
window appearance. As computers get closer to understanding
users, perhaps the computer can choose the interface automatically.
At the minimum, it should be possible to present each user who sits
down at a computer with a different user interface, based on
customization files that the user can activate when logging into a
certain location. This kind of system should become increasingly
common in networked computer environments where configuration files
can be stored on the network.
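The idea above, per-user interface settings activated at login, can be sketched in a few lines. The following is a minimal illustration, not any real system's mechanism; the directory layout, file format, and preference names are all assumptions made for the example:

```python
import json
import os

# System-wide interface defaults, used when a user has no stored preferences.
DEFAULTS = {"background": "gray", "icon_size": 32, "menu_font": "system"}

def load_preferences(username, config_dir):
    """Merge a user's stored customization file over the system defaults.

    config_dir stands in for a network-mounted directory of per-user
    configuration files (a hypothetical layout for illustration).
    """
    prefs = dict(DEFAULTS)
    path = os.path.join(config_dir, username + ".json")
    try:
        with open(path) as f:
            prefs.update(json.load(f))
    except FileNotFoundError:
        pass  # unknown user: fall back to the defaults unchanged
    return prefs
```

Because the file lives in a shared directory rather than on one machine, any computer that can reach that directory can present the same personalized interface, which is exactly what makes this approach natural in networked environments.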
Thus, in the computer environment it is important to remember that
it is readily possible to change user interfaces. For some other
interfaces, such as constructed buildings or structures or physical
hardware, it takes tremendous effort to change the interface.
User interfaces for computers should always be viewed as dynamic.
On the flip side, users are generally quite adaptable to particular
user interfaces. People find that they have to make some effort
to learn any technology, and are generally willing to make that
effort, especially if they have a need for that technology.
Different users find different ways to use certain technologies as
well. Certainly it is nicer to be able to learn more quickly, but it
is also true that many users will be fairly persistent and tolerate
sub-optimal technology. As technology becomes more familiar, by
nature it becomes easier to use, even if it is difficult to use in the
first place. A user who is trained in a technology will generally
find it easier to use than one who is not.
So this brings us to our final thought. Interfaces should be
designed so that they are easy to use, but they also should be
designed so that they do not limit users. A user is willing to
learn from a user interface, and user interface designers should take
advantage of that fact. A computer interface is changeable and
configurable and the designer should take advantage of that as well.
The balance of complexity and clarity that is required is certainly
difficult to achieve, and likely varies user by user. But this
is why user interfaces are difficult to design and so important to
study and design carefully.