CHAPTER OUTLINE
10.1 Introduction
10.2 Models of Response-Time Impacts
10.3 Expectations and Attitudes
10.4 User Productivity
10.5 Variability in Response Time
10.6 Frustrating Experiences
10.1 Introduction
In the 1960s, user perception of computer speed was determined by response time for mathematical computations, program compilations, or database searches. Then, as time-shared systems emerged, contention for the scarce computational resources led to more complex reasons for delays. With the emergence of the World Wide Web, user expectations for expanded services grew, along with still more complex explanations of delays. Users now have to understand the size differences between text and graphics pages to appreciate the huge variations in server loads and to tolerate network congestion. They also have to understand the multiple sources of problems, such as dropped connections, unavailable web sites, and network outages. This complex set of concerns is usually discussed under the umbrella term Quality of Service (QoS). This term was originally derived from the telecommunications industry, where Quality of Service can be measured in terms of telephone call quality, lost connections, customer satisfaction, connection time, cost, and other factors.
Concern over Quality of Service stems from a basic human value: Time is precious. When externally imposed delays impede progress on a task, many people become frustrated, annoyed, and eventually angry. Lengthy or unexpected system responses resulting in long times to display or refresh screens produce these reactions in computer users, leading to frequent errors and low satisfaction. Some users accept the situation with a shrug of their shoulders, but most users prefer to work more quickly than the computer allows.

Discussions of Quality of Service must also take into account a second basic human value: Harmful mistakes should be avoided. However, balancing rapid performance with low error rates sometimes means that the pace of work must slow. If users work too quickly, they may learn less, read with lower comprehension, commit more data-entry errors, and make more incorrect decisions. Stress
can build in these situations, especially if it is hard to recover from errors, or if the errors destroy data, damage equipment, or imperil human life, as they may in air-traffic-control or medical systems (Kohlisch and Kuhmann, 1997).
A third aspect of Quality of Service is reducing user frustration. With long delays, users may become frustrated enough to make mistakes or give up working. Delays are often a cause of frustration, but there are others, such as crashes that destroy data, software bugs that produce incorrect results, and poor designs that lead to user confusion. Networked environments generate further frustrations: unreliable service providers, dropped lines, e-mail spam, and malicious viruses.
Quality of Service discussions usually focus on the decisions to be made by network designers and operators. This is appropriate, because their decisions have a profound influence on many users. They also have the tools and knowledge to be helpful, and increasingly, they must adhere to legal and regulatory controls. Interface designers and builders must also make design decisions that dramatically influence the user experience. For example, they can optimize web pages to reduce byte counts and numbers of files or provide previews of materials available in digital libraries or archives to help reduce the number of queries and accesses to the network (Fig. 10.1 and Section 13.2). In addition, users may have the opportunity to choose from fast or slow services and from viewing low-resolution versus high-resolution images. Users need guidance to understand the implications of their choices and to help them accommodate varying levels of Quality of Service. For users, the main experience of Quality of Service is the computer system's response time, so we'll deal with those issues first, before addressing application crashes, unreliable network service, and malicious threats.

Section 10.2 begins by discussing a model of response-time impacts, then looks at response-time issues, reviews short-term human memory, and identifies the sources of human error. Section 10.3 focuses on the role of users' expectations and attitudes in shaping their subjective reactions to the Quality of Service. Section 10.4 deals with productivity as a function of response time, and Section 10.5 reviews the research on the influence of variable response times. Section 10.6 examines the severity of frustrating experiences, including spam and viruses.
10.2 Models of Response-Time Impacts
Response time is defined as the number of seconds it takes from the moment a user initiates an action, usually by pressing the Enter key or a mouse button, until the computer begins to present results (whether on the display, via a printer, through a speaker, or on a mobile device). When the response is completed, the user begins formulating the next action. The user think time is the number of seconds that elapse between the computer's response and the user's initiation of the next action. In this simple stages-of-action model, users (1) initiate, (2) wait for the computer to respond, (3) watch while the results appear, (4) think for a while, and then initiate again (Fig. 10.2).

FIGURE 10.1
The University of Maryland's Global Land Cover Facility's online search page (http://glcf.umiacs.umd.edu/) indicates where data is available in red on a zoomable map. Users looking for data in Africa can thus tell where to focus their searches, allowing them to find what they need with fewer queries and network accesses.
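In code, this simple model reduces to two subtractions over logged timestamps. The sketch below is illustrative only; the Cycle record and its field names are assumptions for this example, not part of any standard instrumentation API.

```python
from dataclasses import dataclass

@dataclass
class Cycle:
    initiate: float       # moment the user presses Enter or clicks (seconds)
    respond: float        # moment the computer begins to present results
    next_initiate: float  # moment the user initiates the next action

def response_time(c: Cycle) -> float:
    """Seconds from user initiation until the computer begins its response."""
    return c.respond - c.initiate

def think_time(c: Cycle) -> float:
    """Seconds from the computer's response until the user's next action."""
    return c.next_initiate - c.respond

# One hypothetical logged cycle: 1.2 s of response time, 3.5 s of think time.
c = Cycle(initiate=0.0, respond=1.2, next_initiate=4.7)
print(f"response: {response_time(c):.1f} s, think: {think_time(c):.1f} s")
```

In the more realistic model discussed next, planning overlaps these intervals, so such measurements bound rather than isolate the user's true think time.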
In a more realistic model (Fig. 10.3), users plan while interpreting results, while typing/clicking, and while the computer is generating results or retrieving information across the network. Most people will use whatever time they have to plan ahead; thus, precise measurements of user think time are difficult to obtain. The computer's response is usually more precisely defined and measurable, but there are problems here as well. Some interfaces respond with distracting messages, informative feedback, or a simple prompt immediately after an action is initiated, but actual results may not appear for a few seconds. For example, the user may drag a file to a network printer icon using direct manipulation, but it may take many seconds for confirmation that the printer has been activated or for a dialog box reporting that the printer is offline to appear. Delays of more than 160 milliseconds while dragging the icon are noticed and become annoying, but users have come to accept delays for responses from networked devices.

FIGURE 10.2
Simple stages-of-action model of system response time and user think time.
Designers who specify response times and network managers who seek to provide high Quality of Service have to consider the complex interaction of technical feasibility, costs, task complexity, user expectations, speed of task performance, error rates, and error-handling procedures. Decisions about these variables are further complicated by the influence of users' personality differences, fatigue, familiarity with computers, experience with the task, and motivation (King, 2008; Guastello, 2006; Wickens et al., 2004; Bouch et al., 2000).

Although some people are content with slower responses for some tasks, the overwhelming majority prefer rapid interactions. Overall productivity depends not only on the speed of the interface, but also on the rate of human error and the ease of recovery from those errors. Lengthy (longer than 15 seconds) response times are generally detrimental to productivity, increasing error rates and decreasing satisfaction. More rapid (less than 1 second) interactions are generally preferred and can increase productivity, but they may also increase error rates for complex tasks. The high cost of providing rapid response times and the loss from increased errors must be evaluated in the choice of an optimum pace.

Website display performance was studied by evaluating delay plus two website design variables (site breadth and content familiarity) to examine interaction effects on user performance, attitudes, stress, and behavioral intentions.
FIGURE 10.3
Model of system response time, user planning time, and user think time. This model is more realistic than the one in Fig. 10.2.
The three experimental factors (delay, familiarity, and breadth) were demonstrated to collectively impact the cognitive costs and penalties that users incur when making choices in their search for target information (Galletta, 2006; Galletta et al., 2006). Laboratory experiments were also conducted (though not all of the results have yet been published) to examine "acceptable" perceived delay, whether this perception holds across two cultures (U.S. and Mexico), and combinations of a number of variables, including acceptable perceived delay, information scent (in the right direction), site depth, feedback, stress, and time constraints. Preliminary conclusions are that user impatience is high, especially in the U.S. as compared with Mexico, and that the effects of delay and poor information scent explain significant variance in a number of outcomes, especially when considering other interacting antecedents.
Screen refresh rates for web-based applications and mobile communications (text messaging, accessing the Internet via web-enabled mobile devices, etc.) can lead to frustration when slow and can soothe the soul when working speedily. In demanding web applications on desktop machines, screen refresh rates are usually limited by network transmission speed or server performance. Portions of images or fragments of a page may appear with interspersed delays of several seconds.
A home user with only a dial-up modem that has a 56-kilobits-per-second (Kbps) data throughput rate may find that it takes 30 seconds or more to display a page of text or a small image. Home or business users with digital subscriber line (DSL) connections, Fiber Optic Service (FiOS™), or cable modems can perform tasks at much higher rates (thousands or even millions of bps); however, those rates vary by location, service provider, and subscribed options. Broadband service providers typically do not offer the same upload and download speeds: Since the majority of users download much more information to their computers (text, photos, audio, video, software, etc.) than they upload, most service providers have opted for much higher download speeds at the expense of fast upload capability. Those who require faster upload times (for example, webmasters, software developers working on collaborative projects, or users who regularly transfer large files) might find that their broadband service providers have left much to be desired in terms of upload times. In an era of user-generated content, for an increasing number of users, it is important for upload speeds to keep pace with download capability.
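The arithmetic behind such delays is easy to check. The sketch below estimates raw transfer time from payload size and link throughput; the 200-kilobyte page size is an assumed example, and real transfers add latency and protocol overhead on top of this lower bound.

```python
def transfer_seconds(payload_bytes: int, link_kbps: float) -> float:
    """Estimated seconds to move a payload over a link of the given
    throughput in kilobits per second (ignores latency and overhead)."""
    return payload_bytes * 8 / (link_kbps * 1000)

# A hypothetical 200-kilobyte page of text and small images:
page_bytes = 200 * 1024
print(round(transfer_seconds(page_bytes, 56), 1))    # 56-Kbps dial-up: ~29 s
print(round(transfer_seconds(page_bytes, 6000), 2))  # 6-Mbps broadband: ~0.27 s
```

The same formula, run with upload rates, shows why asymmetric service frustrates users who regularly send large files.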
There are web tools that can permit computer users to assess their download and upload speeds (to find them, run a search on "download upload speed test"). Running this test gives users a better idea of the Quality of Service and provides them with useful information to present to their broadband service providers when asking for better service or an upgrade to meet their network response-time needs.

Users working on a company or university intranet with firewall protection
will often notice changes in network performance based on traffic, infrastructure tools and services running on the network, and occasional virus or other attacks on the network infrastructure. Those individuals benefit from the best communications capability with advanced, direct network connections (such as Asynchronous Transfer Mode, or ATM); T1 lines and/or satellite connections can also reduce transfer delays and provide faster screen refresh rates. Wireless network devices are not always in service and often cannot match the speeds of their wired counterparts; however, that too is improving as entire communities embrace wireless networking (which can aid in universal accessibility). Cell-phone users express frustration if a contact cannot be dialed quickly, if "roaming" takes too long to find a signal in an out-of-the-way place, or if a call is dropped. Just as there are web-based tools that can measure network performance, there are also web forums that focus on testing cell-phone speed (both in-person testing by consumer groups and test results posted to user communities).
While improvements are being made in the technology of computer-to-computer communication, user task performance times will not automatically improve at the same rate. As this section will illustrate, improved throughput does not necessarily imply improved productivity. Computing systems still need to be user-centered, promoting universal usability (Shneiderman, 2003; Raskin, 2000).

Reading textual information from a screen is more difficult than reading from a book (Section 12.3). If the screen display appears to fill instantly (beyond the speed at which someone might feel compelled to keep up), users seem to relax, to pace themselves, and to work productively. Since users often scan a web page looking for highlights or links, rather than reading the full text, it is useful to display text first, leaving space for the graphical elements that are slower to display. Since a graphics file can easily be more than a megabyte in size, user control over image quality and size should be possible.

The relative merits of reading online versus printed copy have long been the subject of heated discussion, although admittedly much of the debate is based on personal preference and experience. With computer display technology improving, a renewed emphasis on a more paperless "green" environment, and the increased availability and depth of online books and newspapers, it does seem inevitable that we are moving in a direction of increased demand for rapid display of textual and graphical data. High-end performance needs such as photo, movie, simulation, and gaming applications add to the mix, increasing user expectations and demand.
Consumer demand is a key factor in promoting rapid performance. Many desktop and laptop computers still start up slowly, but cell phones, mobile devices, and games are designed to start in seconds. If market competition is insufficient to produce change, consumer pressure on software and hardware makers will be needed to force changes that result in more rapid computer start-ups. Web sites often distinguish themselves with rapid performance, an attribute that surfers expect from Google or Yahoo! and buyers demand at
Amazon.com or eBay (King, 2008; Morris and Turner, 2001), and manufacturers may soon start making similar claims about their products to attract customers.

A cognitive model of human performance that accounts for the experimental results in response time would be useful in making predictions, designing interfaces, and formulating management policies. A complete predictive model that accounts for all the variables may never be realized, but even fragments of such a model are useful to designers.
Robert B. Miller's review (1968) presented a lucid analysis of response-time issues and a list of 17 situations in which preferred response times might differ. Much has changed since his paper was written, but the principles of closure, short-term memory limitations, and chunking still apply. Any cognitive model must emerge from an understanding of these human problem-solving abilities and information-processing capabilities. A central issue is the limitation of short-term memory capacity, as outlined in George Miller's (1956) classic paper, "The magical number seven, plus or minus two." Miller identified the limited capacities people have for absorbing information: People can rapidly recognize approximately seven (this value was contested by later researchers, but it serves as a good estimate) "chunks" of information at a time and can hold those chunks in short-term memory for 15 to 30 seconds. The size of a chunk of information depends on the person's familiarity with the material.

For example, most people can look at seven binary digits for a few seconds and then recall the digits correctly from memory within 15 seconds. However, performing a distracting task during those 15 seconds, such as reciting a poem, erases the binary digits. Of course, if people concentrate on remembering the binary digits and succeed in transferring them to long-term memory, they can retain the binary digits for much longer periods. Most Americans can also probably remember seven decimal digits, seven alphabetic characters, seven English words, or even seven familiar advertising slogans. Although these items have increasing complexity, they are still treated as single chunks. However, Americans might not succeed in remembering seven Russian letters, Chinese pictograms, or Polish sayings. Knowledge and experience govern the size of a chunk and the ease of remembering for each individual.

People use short-term memory in conjunction with working memory for processing information and for problem solving. Short-term memory processes perceptual input, whereas working memory is used to generate and implement solutions. If many facts and decisions are necessary to solve a problem, short-term and working memory may become overloaded. People learn to cope with complex problems by developing higher-level concepts that bring together several lower-level concepts into a single chunk. Novices at any task tend to work with smaller chunks until they can cluster concepts into larger chunks. Experts rapidly decompose a complex task into a sequence of smaller tasks that they are confident about accomplishing.
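Interface text can support chunking directly, for example by grouping long digit strings the way printed phone numbers do. A minimal sketch (the function name and the 3-3-4 grouping are illustrative, echoing North American phone-number convention):

```python
def chunk(digits: str, sizes: tuple) -> str:
    """Split a digit string into consecutive chunks of the given sizes,
    so a reader holds a few chunks instead of many single digits."""
    parts, i = [], 0
    for n in sizes:
        parts.append(digits[i:i + n])
        i += n
    return " ".join(parts)

# Ten digits presented as three chunks rather than ten:
print(chunk("3015551234", (3, 3, 4)))  # prints "301 555 1234"
```

Three chunks sit comfortably within the seven-chunk estimate, where ten isolated digits would not.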
Short-term and working memory are highly volatile; disruptions cause loss of information, and delays can require that the memory be refreshed. Visual distractions or noisy environments also interfere with cognitive processing. Furthermore, anxiety apparently reduces the size of the available memory, since the person's attention is partially absorbed in concerns that are beyond the realm of the problem-solving task.

If people are able to construct a solution to a problem in spite of interference, they must still record or implement that solution. If they can implement the solution immediately, they can proceed quickly through their work. On the other hand, if they must record the solution in long-term memory, on paper, or on a complex device, the chances for error increase and the pace of work slows.
Multiplying two four-digit numbers in your head is difficult because the intermediate results cannot be maintained in working memory and must be transferred to long-term memory. Controlling nuclear reactors or air traffic is a challenge in part because these tasks often require integration of information (in short-term and working memory) from several sources, as well as maintenance of awareness of the complete situation. In attending to newly arriving information, operators may be distracted and may lose the contents of their short-term or working memory.
When using an interactive computer system, users may formulate plans and then have to wait while they execute each step in the plan. If a step produces an unexpected result or if the delays are long, the users may forget part of the plan or be forced to review the plan continually. This model leads to the conjecture that, for a given user and task, there is a preferred response time. Long response times lead to wasted effort and more errors, because the solution plan must be reviewed repeatedly. On the other hand, short response times may generate a faster pace in which solution plans are prepared hastily and incompletely. More data from a variety of situations and users would clarify these conjectures.

As response times grow longer, users may become more anxious because the penalty for an error increases. As the difficulty in handling an error increases, users' anxiety levels intensify, further slowing performance and increasing errors. However, as response times grow shorter and screens refresh more quickly, users tend to pick up the pace of the interface and may fail to fully comprehend the presented material, may generate incorrect solution plans, and may make more execution errors. The term "screen refresh" used here can apply both to updates of displayed data (e.g., animated weather maps) and to the initial presentation of data on the screen (e.g., when first loading a web page that contains several graphics or animations, potentially invoking plug-ins to fully display its contents on the screen).

The speed/accuracy tradeoff that is a harsh reality in so many domains is also apparent in interface usage. A related factor is performance in paced versus unpaced tasks. In paced tasks, the computer forces decisions within a fixed time period, thereby adding pressure. Such high-stress interfaces may be
appropriate with trained users in life-critical situations or in manufacturing, where high productivity is a requirement. However, errors, poor-quality work, and operator burnout are serious concerns. In unpaced tasks, users decide when to respond and can work at a more relaxed pace, taking their time to make careful decisions.

Car driving may offer a useful analogy. Although higher speed limits are attractive to many drivers because they lead to faster completion of trips, they also lead to higher accident rates. Since automobile accidents can have dreadful consequences, we accept speed limits. When incorrect use of computer systems can lead to damage to life, property, or data, should not speed limits be provided?
Distracted car driving offers another analogy to Quality of Service issues. For example, driving while speaking on a cell phone has been shown to result in higher accident rates. Similarly, computer users who pride themselves on multitasking can easily make mistakes. There are computer systems that can help drivers make fewer mistakes, such as GPS systems that aid drivers in getting from one destination to another. Will it be that far in the future when there are agents and wizards guiding novice computer users to successful conclusions?

Another lesson from driving is the importance of progress indicators. Drivers like to know how far it is to their destination and what progress they are making, and they get feedback by seeing the declining number of miles on road signs. Similarly, computer users may want to know how long it will take for a web page to load or a file directory scan to be completed (Fig. 10.4). Users given graphical dynamic progress indicators rather than static ("Please wait"), blinking, or numeric (number of seconds left) messages report higher satisfaction and shorter perceived elapsed times to completion (Meyer et al., 1996). It is important, however, that the progress indicators be truthful representations of the state of affairs.
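A truthful dynamic indicator derives its estimate from measured progress rather than a fixed guess. The sketch below shows one possible approach; the file-scan scenario and the function name are hypothetical.

```python
import time

def scan_with_progress(total_files: int, scan_one):
    """Scan files while printing a dynamic progress line whose time
    estimate is derived from the measured pace so far."""
    start = time.monotonic()
    for done in range(1, total_files + 1):
        scan_one(done)  # caller-supplied work for one file
        elapsed = time.monotonic() - start
        # Remaining time extrapolated from the average time per completed item.
        remaining = elapsed / done * (total_files - done)
        print(f"\rScanned {done}/{total_files} files, "
              f"about {remaining:.0f} s remaining", end="", flush=True)
    print()

# Hypothetical usage: 50 files at roughly 0.1 s each.
# scan_with_progress(50, lambda i: time.sleep(0.1))
```

Because the estimate is recomputed from actual elapsed time, it slows down honestly when the work does, rather than stalling at an optimistic value.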
How often have computer users been lulled into an increasingly frustrating state of anticipation, for example, watching a web-page download-indicator status bar show that the page is loading, only to find that the Internet connection has been lost or the server is down?

FIGURE 10.4
Dynamic progress indicators reassure users that the process is underway. Providing time estimates is best, but when that information is difficult to calculate, other progress indicators, such as the name of the file or the file count, can be updated at regular intervals.

Users may achieve rapid task performance, low error rates, and high satisfaction if the following criteria are met:

• Users have adequate knowledge of the objects and actions necessary for the problem-solving task.
• The solution plan can be carried out without delays.
• Distractions are eliminated.
• User anxiety is low.
• There is feedback about progress towards the solution.
• Errors can be avoided or, if they occur, can be handled easily.
These conditions for optimum problem solving, with acceptable cost and technical feasibility, are the basic constraints on design. However, other conjectures may play a role in choosing the optimum interaction speed:

• Novices may exhibit better performance with somewhat slower response times.
• Novices prefer to work at speeds slower than those chosen by knowledgeable, frequent users.
• When there is little penalty for an error, users prefer to work more quickly.
• When the task is familiar and easily comprehended, users prefer more rapid action.
• If users have experienced rapid performance previously, they will expect and demand it in future situations.
These informal conjectures need to be qualified and verified. Then, a more
rigorous cognitive model needs to be developed to accommodate the great
diversity
in human work styles and in computer-use situations. Practitioners
can conduct field tests to measure productivity, error rates, and satisfaction as a
function of response times in their application areas.
Researchers are extending models of productivity to accommodate the realities of work and home environments. These situated action models now include tempting distractions and unavoidable interruptions, such as arriving e-mail messages, pop-up instant messages, phone calls, and requests from fellow workers or family members. Enabling users to easily limit or block interruptions is becoming necessary. Another useful functionality is to provide users with feedback about the amount of time spent on various tasks and a log of how they handled interruptions. Personal, organizational, and cultural differences will have to be accommodated, as well as variations in the necessity to accept interruptions from managers or family members.

The research and experiments described in the following sections are tiles in the mosaic of human performance with computers, but many more tiles are necessary before the fragments can form a complete image. Some guidelines have emerged for designers and information-system managers, but local testing and continuous monitoring of performance and satisfaction are still necessary.
The remarkable adaptability of computer users means that researchers and practitioners will have to be alert to novel conditions that require revisions of these guidelines.
10.3 Expectations and Attitudes
How long will users wait for the computer to respond before they become annoyed? This simple question has provoked much discussion and several experiments. There is no simple answer, though, and more importantly, it may be the wrong question to ask. More refined questions focus on users' needs: Will users more happily wait for a valued document than an undesired advertisement? Related design issues may clarify the question of what an acceptable response time is. For example, how long should users have to wait before they hear a dial tone on a telephone or see a picture on a television? If the cost is not excessive, the frequently mentioned two-second limit (Miller, 1968) seems appropriate for many tasks. In some situations, however, users expect responses within 0.1 second (for example, when turning the wheel of a car; pressing a key on a keyboard, piano, or telephone; dragging an icon; or scrolling through a list on a cell phone). Two-second delays in these cases might be unsettling, because users have adapted a working style and expectations based on responses within a fraction of a second. In other situations, users are accustomed to longer response times, such as waiting 30 seconds for a red traffic light to turn green, two days for a letter to arrive, or a month for flowers to grow.

The first factor influencing acceptable
response time is that people have established expectations based on their past experiences of the time required to complete a given task. If a task is completed more quickly than expected, people will be pleased; but if the task is completed much more quickly than expected, they may become concerned that something is wrong. Similarly, if a task is completed much more slowly than expected, users are likely to become concerned or frustrated. Even though people can detect 8% changes in a 2- or 4-second response time (Miller, 1968), users apparently do not become concerned until the change is much greater.

Two installers of networked computer systems have reported a problem concerning user expectations with new systems. The first users are delighted, because the response time is short when the load is light. As the load builds, however, these first users become unhappy because the response time deteriorates. Users who join later, on the other hand, may be satisfied with what they perceive as normal response times. In response to this problem, both installers devised a response-time choke by which they could slow down the system when the load was light. This surprising policy makes the response time uniform over time and across users, thus reducing complaints.
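Such a choke amounts to padding fast replies up to a uniform target. The wrapper below is a sketch of the idea only, with an assumed target value; it is not a description of the installers' actual systems.

```python
import time

def uniform_response(handler, target_seconds: float = 2.0):
    """Wrap handler so its total response time is never shorter than
    target_seconds: replies that finish early are padded with a delay."""
    def padded(*args, **kwargs):
        start = time.monotonic()
        result = handler(*args, **kwargs)
        shortfall = target_seconds - (time.monotonic() - start)
        if shortfall > 0:
            time.sleep(shortfall)  # slow down the system when the load is light
        return result
    return padded

# Hypothetical usage: a lookup that runs in 0.05 s always appears to take ~0.5 s.
lookup = uniform_response(lambda key: key.upper(), target_seconds=0.5)
```

Under heavy load the handler already exceeds the target, so no padding is added and responses degrade gracefully from the same baseline users have come to expect.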
Network managers have similar problems with varying response times as new equipment is added or as large projects begin or complete their work. The variation in response time can be disruptive to users who have developed expectations and working styles based on a certain level of responsiveness. There are also periods within each day when the response time tends to be shorter, such as at lunchtime, or longer, such as midmorning or late afternoon. Both extremes can be problematic: Some users rush to complete tasks when response times are short, and as a result, they may make more errors; on the other hand, some workers refuse to work when the response time is slow relative to their expectations. One subject in a study of web shopping commented, "You get a bit spoiled . . . once you are used to the quickness, then you want it all the time" (Bouch et al., 2000).

An important design issue is that of rapid start-up. Users are annoyed if they
have to wait for a laptop or a digital camera to be ready for usage, and consequently, fast starts are a strong distinguishing feature in consumer electronics. A related issue is the tradeoff between rapid start-up and rapid usage. For example, it may take several minutes to download a Java or other web application, but then performance is rapid for most actions. An alternative design might speed the start-up, but the cost could be occasional delays during usage.
A second factor influencing response-time expectations is the individual's tolerance for delays. Novice computer users may be willing to wait much longer than experienced users. In short, there are large variations in what individuals consider acceptable waiting time. These variations are influenced by many factors, such as personality, cost, age, mood, cultural context, time of day, noise, and perceived pressure to complete work. The laid-back web surfer may enjoy chatting with friends while pages load, but the anxious deadline-fighting journalist may start banging on desks or keys in a vain attempt to push the computer along.

Other factors influencing response-time expectations are the task complexity and the users' familiarity with the task. For simple, repetitive tasks that require little problem solving, users want to perform rapidly and are annoyed by delays of more than a few tenths of a second. For complex problems, users will typically perform well even as response time grows, as they can use the delays to plan ahead. Users are highly adaptive and can change their working styles to accommodate different response times. This factor was found in early studies of batch-programming environments and in recent studies of interactive-system usage. If delays are long, users will seek alternate strategies that reduce the number of interactions whenever possible. They will fill in the long delays by performing other tasks, daydreaming, or planning ahead in their work. But even if diversions are available, dissatisfaction grows with excessively long response times.
An increasing number of tasks place high demands on rapid system
performance; examples are user-controlled three-dimensional animations,
flight simulations, graphic design, and dynamic queries for information visualization. In these applications, users are continuously adjusting the input controls, and they expect changes to appear with no perceived delay (that is, within less than 100 milliseconds). Similarly, some tasks (for example, videoconferencing, Voice over IP telephony, and streaming multimedia) require rapid performance to ensure high Quality of Service, because intermittent delays cause jumpy images and broken sound patterns that seriously disrupt users. Promoters of these services see the need for ever faster and higher-capacity networks.

The expanded audiences and novel tasks on the World Wide Web have brought new considerations into the sphere of Quality of Service. Since e-commerce shoppers are deeply concerned with trust, credibility, and privacy, researchers have begun to study how those attitudes interact with time delay. The range of response times is highly varied across web sites (Huberman, 2001; Fig. 10.5), and site managers are regularly compelled to decide what level of resource expenditure is appropriate to reduce response times for users. Studies have found that as response times increase, users find web-page content less interesting (Ramsay et al., 1998) and lower in quality (Jacko et al., 2000). Long response times may even have a negative influence on user perceptions of the companies that provide the web sites (Bouch et al., 2000). One web-shopping study participant who believed that successful companies have the resources to build high-performance web sites remarked, "This is the way the consumer sees the company . . . it should look good, it should be fast." Increased use of Ajax (Asynchronous JavaScript and XML) and dynamic techniques increases both responsiveness and user expectations.
In summary, three primary factors influence users' expectations and attitudes regarding response time:

1. Previous experiences
2. Individual personality differences
3. Task differences

Experimental results show interesting patterns of behavior for specific backgrounds, individuals, and tasks, but it is difficult to distill a simple set of conclusions. Several experiments attempted to identify acceptable waiting times by allowing participants to press a key if they thought that the waiting time was too long. Participants who could shorten the response time in future interactions took advantage of that feature as they became more experienced, forcing response times for frequent actions down to well below one second. It seems appealing to offer users a choice for the pace of the interaction. Videogame designers recognize the role of user-controlled pace setting and the increased challenge from fast pacing, which expert users crave. On the other hand, older adults and users with disabilities may appreciate being able to slow the pace of interaction. Differing desires also open opportunities to charge premiums for faster service; for example, many World Wide Web users are willing to pay extra for faster network performance.

FIGURE 10.5
Distribution of response times for 40,000 randomly selected web pages, showing a log-normal distribution. Half the pages were delivered in under half a second, but the long tail shows the variability (Huberman, 2001). [Histogram: number of pages against download time (0-3000 ms), with a heavy-tailed (log-normal) shape.]
In summary, three conjectures emerge:

1. Individual differences are large and users are adaptive. They will work faster as they gain experience and will change their working strategies as response times change. It may be useful to allow people to set their own pace of interaction.
2. For repetitive tasks, users prefer and will work more rapidly with short response times.
3. For complex tasks, users can adapt to working with slow response times with no loss of productivity, but their dissatisfaction increases as response times lengthen.
10.4 User Productivity
Shorter system response times usually lead to higher productivity, but in some situations, users who encounter long system response times can find clever shortcuts or ways to do concurrent processing to reduce the effort and time required to accomplish a task. Working too quickly, though, may lead to errors that reduce productivity.
In computing, just as in driving, there is no general rule about whether the high-speed highway or the slower, clever shortcut is better. The designer must survey each situation carefully to make the optimal choice. The choice is not critical for occasional usage, but it becomes worthy of investigation when the frequency is great. When computers are used in high-volume situations, more effort can be expended in discovering the proper response time for a given task and set of users. It should not be surprising that a new study must be conducted when the tasks and users change, just as a new route evaluation must be done for each trip.
An alternative solution is masking delay by displaying important, crucial information first while the background is filling in. Well-designed web sites often download critical information first; likewise, web designers may choose to download the intriguing information first, so the user is motivated and encouraged to wait during any download delay to see the end result. Some news web sites download the textual headlines first to motivate the news reader to remain patient while the remainder of the articles is downloaded. The user can then start reading an article while additional animations, advertisements, etc. download, until eventually the screen is fully painted with its intended information.

The nature of the task has a strong influence on whether changes in response time alter user productivity. A repetitive control task involves monitoring a display and issuing actions in response to changes in the display. Although the operator may be trying to understand the underlying process, the basic activities are to respond to a change in the display, to issue commands, and then to see whether the commands produce the desired effect. When there is a choice among actions, the problem becomes more interesting, and the operator tries to pick the optimal action in each situation. With shorter system response times, the operator picks up the pace of the system and works more quickly, but decisions on actions may be less than optimal. On the other hand, with short response times, the penalty for a poor choice may be small because it may be easy to try another action. In fact, operators may learn to use the interface more quickly with short system response times because they can explore alternatives more easily.
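The masking-delay pattern described above (critical headlines first, heavier assets afterwards) can be sketched as a tiny scheduler. The function and callback names here are invented for illustration; they are not an interface from the chapter.

```python
import threading

def render_progressively(fetch_headlines, fetch_assets, display):
    """Masking-delay sketch: show critical text at once, then fill in the
    heavier content in the background. All names are illustrative."""
    display(fetch_headlines())           # user can start reading immediately
    def fill_in():
        for asset in fetch_assets():     # images, ads, animations arrive later
            display(asset)
    worker = threading.Thread(target=fill_in)
    worker.start()
    return worker                        # caller may join() or keep painting
```

A real page loader would also prioritize by layout position and perceived importance; the point here is only the ordering: the text that motivates the user to wait arrives before the slow remainder.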
In a study of a data-entry task, users adopted one of three strategies, depending on the response time (Teal and Rudnicky, 1992). With response times under one second, users worked automatically without checking whether the system was ready for the next data value. This behavior resulted in numerous anticipation errors, in which the users typed data values before the system could accept those values. With response times above two seconds, users monitored the display carefully to make sure that the prompt appeared before they typed. In the middle ground of one to two seconds, users paced themselves and waited an appropriate amount of time before attempting to enter data values.

When complex problem solving is required and many approaches to the solution are possible, users will adapt their work styles to the response time. Productivity with statistical problem-solving tasks was also found to be constant despite response-time changes over the range of 0.1 to 5.0 seconds (Martin and Corl, 1986). The same study with regular users found linear productivity gains for simple data-entry tasks: The simpler and more habitual the task was, the greater the productivity benefit of a short response time.

Barber and Lucas (1983) studied professional circuit-layout clerks who assigned telephone equipment in response to service requests. For this complex task, the lowest error rate occurred with a 12-second response time (Fig. 10.6). With shorter response times, the workers made hasty decisions; with longer response times, the frustration of waiting burdened short-term memory. The number of productive transactions (total minus errors) increased almost linearly with reductions in response time, and subjective preference was consistently in favor of the shorter response time.
In summary, users pick up the pace of the interface, and they consistently prefer a faster pace. Error rates at shorter response times increase with the cognitive complexity of the tasks. Each task appears to have an optimal pace; response times that are shorter or longer than this pace lead to increased errors. If error damage can be large and recovery is difficult, users should slow themselves down and make careful decisions.
FIGURE 10.6
Error rates as a function of response time for a complex telephone-circuit-layout task by Barber and Lucas (1983). Although error rates were lowest with long response times (12 seconds), productivity increased with shorter times because the system could detect errors and users could rapidly correct them. [Plot: percent of transactions with errors (10-50%) against response time (4-24 seconds).]
10.5 Variability in Response Time

People are willing to pay substantial amounts of money to reduce the variability in their lives. The entire insurance industry is based on the reduction of present pleasures, through the payment of premiums, to reduce the severity of potential future losses. Most people appreciate predictable behavior that lessens the anxiety of contemplating unpleasant surprises.

When using computers, users cannot see into the machines to gain reassurance that their actions are being executed properly, but the response time can provide a clue. If users come to expect a response time of 3 seconds for a common action, they may become apprehensive if this action takes 0.5 or 15 seconds. Such extreme variation is unsettling and should be prevented or acknowledged by the interface, with some indicator for an unusually fast response or a progress report for an unusually slow response.

The more difficult issue is the effect of modest variations in response time. As discussed earlier, Miller (1968) raised this issue and reported that 75% of participants tested could perceive 8% variations in time for periods in the interval of 2 to 4 seconds. These results prompted some designers to suggest restrictive rules for variability of response time. Since it may not be technically feasible to provide a fixed short response time (such as 1 second) for all actions, several researchers have suggested that the time be fixed for classes of actions: Many actions could have a fixed response time of less than 1 second, other actions could take 4 seconds, and still other actions could take 12 seconds.
Experimental results suggest that modest variations in response time do not severely affect performance. Users are apparently capable of adapting to varying situations, although some of them may become frustrated when performing certain tasks. Goodman and Spence (1982) measured performance changes as a result of response-time variation in a problem-solving situation (a similar situation was used in their earlier experiment, described in Section 10.4). They found no significant performance changes as the variability was increased. The time to solution and the profile of command use were unchanged. As the variability increased, participants took advantage of fast responses by entering subsequent commands immediately, balancing the time lost in waiting for slower responses. Other researchers found similar results.

The physiological effect of response time is an important issue for stressful, long-duration tasks such as air-traffic control, but it is also a concern for office workers and sales personnel. While no dramatic statistical differences have been found between constant and variable response-time treatments, significantly higher error rates, higher systolic blood pressure, and more pronounced pain symptoms were found repeatedly with shorter response times (Kohlisch and Kuhmann, 1997). Although diastolic blood pressure and masseter (jaw-muscle) tension did increase when compared to resting baseline values, there were no significant differences in these physiological measures between constant and variable treatments.
In summary, modest variations in response time (plus or minus 50% of the mean) appear to be tolerable and to have little effect on performance. Frustration emerges only if delays are unusually long (at least twice the anticipated time). Similarly, anxiety about an erroneous command may emerge only if the response time is unusually short (say, less than one-quarter of the anticipated time). But even with extreme changes, users appear to be adaptable enough to complete their tasks.
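The tolerance bands in this summary can be stated as a small classifier. The thresholds mirror the figures above (plus or minus 50% tolerable, one-quarter and twice the anticipated time as the extremes), while the function name and return labels are invented for the sketch.

```python
def variability_advice(observed_s: float, anticipated_s: float) -> str:
    """Classify an observed response time against the anticipated mean,
    using the rough thresholds discussed above (illustrative only)."""
    ratio = observed_s / anticipated_s
    if ratio < 0.25:
        return "acknowledge-fast"   # unusually fast: a brief indicator may reassure users
    if ratio > 2.0:
        return "progress-report"    # unusually slow: show progress toward the goal
    if 0.5 <= ratio <= 1.5:
        return "ok"                 # within +/-50% of the mean: tolerable variation
    return "monitor"                # outside the comfortable band but not extreme
```

Such a check could drive the fast-response indicator or slow-response progress report mentioned earlier, though the exact cutoffs would need tuning per task and user community.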
It may be useful to slow down unexpectedly fast responses to avoid surprising users. This proposal is controversial, but it would affect only a small fraction of user interactions. Certainly, designers should also make a serious effort to avoid extremely slow responses or, if responses must be slow, should give users information to indicate progress towards the goal. One graphics interface displays a large clock ticking backwards; the output appears only when the clock has ticked down to zero. Likewise, many printing and downloading programs display the page numbers to indicate progress and to confirm that the computer is at work productively on the appropriate document.
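A minimal sketch of both ideas, holding back an unexpectedly fast response and announcing progress on a slow one, might look like the following. The names and thresholds are assumptions for illustration, not the chapter's.

```python
import threading
import time

def run_with_pacing(action, min_display_s=0.3, progress_after_s=2.0,
                    on_progress=lambda: print("Still working...")):
    """Run `action` while smoothing its perceived response time.
    Names and default thresholds are illustrative, not from the chapter."""
    result = {}
    worker = threading.Thread(target=lambda: result.setdefault("value", action()))
    start = time.monotonic()
    worker.start()
    worker.join(timeout=progress_after_s)
    if worker.is_alive():
        on_progress()               # unusually slow: tell the user work continues
        worker.join()
    elapsed = time.monotonic() - start
    if elapsed < min_display_s:     # unusually fast: hold briefly to keep the pace uniform
        time.sleep(min_display_s - elapsed)
    return result["value"]
```

In a real interface the progress notice would be a dialog or status line rather than a print, and repeated notices might fire on a timer; the sketch only shows the two-sided pacing.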
10.6 Frustrating Experiences
Quality of Service is usually defined in terms of network performance, but another perspective is to think about the quality of user experiences. Many technology promoters argue that the quality of user experiences with computers has been improving over the past four decades, pointing to the steadily increasing chip and network speeds and hard-drive capacities. However, critics believe frustration from interface complexity, network disruptions, and malicious interference has grown. Recent research has begun to document and help us understand the sources of user frustration with contemporary user interfaces.

When hard-to-use computers cause users to become frustrated, it can affect workplace productivity, users' moods, and interactions with other coworkers (Lazar et al., 2006). Analysis was accomplished by collecting modified time diaries from 50 workplace users, who spent an average of 5.1 hours on the computer. Users reported wasting, on average, 42 to 43% of their time on the computer due to frustrating experiences. The largest number of frustrating experiences occurred while using word processors, email, and web browsers. The causes, time lost, and effects on the mood of the users were analyzed in this research, along with implications for designers, managers, users, information-technology staff, and policymakers.

Another study of 107 student computer users and 50 workplace computer users showed high levels of frustration and loss of one-third to one-half of time spent (Lazar et al., 2006). This research reported the incident-specific and user-specific factors that caused frustration, how those factors affected the severity of the users' frustration, and how frustration affected the daily interactions of the users. For both student and workplace users, frustration levels were strongly correlated with the amount of time lost or time required to fix the problem and with the importance of the task.
Interruptions appear to be troubling to users regardless of whether they originate from the current task or an unrelated task, but surprisingly, people have been shown to complete interrupted tasks in less time than uninterrupted tasks and with no difference in quality (Mark et al., 2008). The authors of this study conjectured that people compensate for interruptions by working faster; however, this comes at the price of more stress and higher frustration, time pressure, and effort. An appropriate interface-design change would allow users to limit interruptions, reducing their negative effects.

One study used memory as an indication of where frustration occurs while using technologies such as operating systems, web browsers, text editors, email clients, mobile devices, digital video recorders (TiVo), and others (Mentis, 2007). The majority of users remembered frustrating incidents such as incorrect autoformatting, computer errors or bugs, slow or dropped Internet connections, and unwanted pop-ups. These incidents all seem to have two things in common: They are external to the user's cognitive processing, and they interrupt the user's task and take control away from the user. The users' memories of their experiences, usability incidents, and emotional reactions can lead designers to create a better overall user experience by avoiding interrupting the user's cognitive flow.
This principle also would apply to interfaces outside of desktop environments. Another study examining user frustration with mobile devices evaluated users' experiences with location-sensitive mobile services in an urban environment, collected through a diary study and user interviews (Hakkila and Isomursu, 2005). User-perceived problems and the resulting frustration and difficulties in use were mainly caused by slow or unreliable data connections and lack of content in the mobile services.

User surveys elicit strong responses that convey unsatisfactory experiences amongst the general population. A British study of 1,255 office workers by a major computer manufacturer found that nearly half of the respondents felt frustrated or stressed by the amount of time it takes to solve problems. In an American survey of 6,000 computer users, the average amount of wasted time was estimated at 5.1 hours per week.

Replacing these possibly exaggerated impressions with more reliable data is a serious challenge. Self-reports and observations from more than 100 users doing their own work for an average of 2.5 hours each produced disturbing results: 46 to 53% of the users' time was seen as being wasted (Ceaparu et al., 2004). Frequent complaints included dropped network connections, application crashes, long system response times, and confusing error messages, but no individual cause contributed more than 9%. The major sources of problems were the popular applications for web browsing, email, and word processing. Recommendations for reducing frustration include interface redesign, software-quality improvement, and network-reliability increases. Other recommendations focus on what users can do through increased learning, careful use of services, and self-control of their attitudes.

Infrastructure improvements to server capacity and network speed and reliability will improve user experiences, but the continuing growth of Internet usage means there will be problems for many years to come. Improved network performance and reliability promotes trust in users, easing their concerns and ultimately improving work performance and output. Consequently, poor Quality of Service is a still greater difficulty in emerging markets and developing nations, where infrastructure reliability remains a problem.

Since user training can have a profound influence on reducing frustrating experiences, efforts to improve education through schools and workplaces could improve user experiences. Improved educational programs and refined user interfaces are likely to have the largest effect on poorly educated users, whose difficulties in using Internet services undermine efforts to provide e-learning, e-commerce, and e-government services.

Networked services, especially email, are among the most valued benefits of information and communications technologies. There are numerous sources of information on "netiquette," proper usage, and productivity to guide users in the proper use of email. Many corporations publish email guidelines, not only to coach their employees on the proper use of email in the workplace, but also to address best practices for reducing email information overload, thus enhancing workplace productivity.

Email has become the source of frustrating "spam" (the pejorative term given to unwanted, unsolicited email, including advertisements, personal solicitations, and pornographic invitations). Some of these messages come from major corporations who make an effort to focus their email on current customers, but much spam comes from small companies and individuals who take advantage of the low cost of email to send blanket notices to huge lists of unfiltered email addresses. Anti-spam legislation is being passed in many nations, but the Internet's international reach and open policies limit the success of legal controls. Many network providers intercept email from known spam sources, which account for 80% of all emails, but users still complain of too much spam. User-controlled spam filters also help, but the complexity of installation and user controls undermines many users' willingness to use these tools. Furthermore, the increasingly clever spam senders rapidly change their messages to bypass existing filters. Similarly, distributors of pop-up advertisements refine their schemes to account for changing technology and to bypass user-protection strategies. A consumer uprising could pressure software developers, network providers, and government agencies to deal more directly with these annoying problems. Some spam senders and advertisers claim freedom of speech in their right to send spam or ads, but most users wish to see some limitation on the right to send bulk emails or unsolicited pop-up ads.
Another frustrating problem for users is the prevalence of malicious viruses that, once installed on a machine, can destroy data, disrupt usage, or produce a cancerous spread of the virus to everyone on the user's email contact list. Viruses are created by malevolent programmers who want to spread havoc, usually via email attachments. Unsuspecting recipients may get an infected email from a known correspondent, but the absence of a meaningful subject line or message is often a clue that the email contains a virus. Deceptive messages that mention previous emails or make appealing invitations complicate user decisions, but safety-conscious users will not open attachments unless they expect a document or photo and get an appropriate message from the sender.
In 2000, before anti-virus software became effective, the famed ILOVEYOU virus contaminated millions of personal computers worldwide by tricking users to open email messages by placing the words "ILoveYou" in the subject line; recovering from the damage cost an estimated $10.2 billion. Most network-service providers offer virus filters that stop known viruses, but professional programmers must make weekly or even daily revisions to antivirus software (suppliers include McAfee™ and Symantec™) to keep up with the increasingly sophisticated virus developers. Since email is the source of so many threats, developers of email software must take more initiatives to protect users.

Universal usability presents its own set of challenges in terms of user frustration. In one research project, 100 blind users, using time diaries, recorded their frustrations using the Web (Lazar et al., 2007). The top causes of frustration reported were: (1) page layout causing confusing screen-reader feedback; (2) conflict between the screen reader and the application; (3) poorly designed/unlabeled forms; (4) no alternative text for pictures; and (5) a three-way tie between misleading links, inaccessible PDFs, and screen-reader crashes. In this study, the blind users reported losing, on average, 30.4% of their time due to these frustrating situations. Web designers concerned with universal usability can improve matters by using more appropriate form and graphic labels and avoiding confusing page layouts.
Since frustration, distractions, and interruptions can impede smooth progress, design strategies should enable users to maintain concentration. Three initial strategies can reduce user frustration: reduce short-term and working-memory load, provide information-abundant interfaces, and increase automaticity (Shneiderman, 2005). Automaticity in this context is the processing of information (in response to stimuli) in a way that is automatic and involuntary, occurring without conscious control. An example is when a user performs a complex sequence of actions with only a light cognitive load, like a driver following a familiar route to work with little apparent effort.
Practitioner's Summary
Quality of Service is a growing concern for users and providers on networks, computers, and mobile devices. Rapid system response times with fast screen refreshes are necessary, because these factors are determinants of user productivity, error rates, working style, and satisfaction (Box 10.1). In most situations, shorter response times (less than one second) lead to higher productivity. For mouse actions, multimedia performances, and interactive animations, even faster performance is necessary (less than 0.1 second). Satisfaction generally increases as the response time decreases, but there may be a danger from stress induced by a rapid pace. As users pick up the pace of the system, they may make more errors. If these errors are detected and corrected easily, productivity will generally increase. However, if errors are hard to detect or are excessively costly, a moderate pace may be most beneficial.

Designers can determine the optimal response time for a specific application and user community by measuring productivity, errors, and the cost of providing
BOX 10.1 Response-time guidelines.

• Users prefer shorter response times.
• Longer response times (> 15 seconds) are disruptive.
• Users' usage profiles change as a function of response time.
• Shorter response time leads to shorter user think time.
• A faster pace may increase productivity, but it may also increase error rates.
• Error-recovery ease and time influence optimal response time.
• Response time should be appropriate to the task:
  • Typing, cursor motion, mouse selection: 50-150 milliseconds
  • Simple, frequent tasks: 1 second
  • Common tasks: 2-4 seconds
  • Complex tasks: 8-12 seconds
• Users should be advised of long delays.
• Strive to have rapid start-ups.
• Modest variability in response time is acceptable.
• Unexpected delays may be disruptive.
• Offer users a choice in the pace of interaction.
• Empirical tests can help to set suitable response times.
BOX 10.2 Reducing user frustration.

• Increase server capacity, network speed, and network reliability.
• Improve user training, online help, and online tutorials.
• Redesign instructions and error messages.
• Protect against spam, viruses, and pop-up advertisements.
• Organize consumer-protection groups.
• Increase research on user frustration.
• Catalyze public discussion to raise awareness.
short response times. Managers must be alert to changes in work style as the pace quickens; productivity is measured by correctly completed tasks rather than by interactions per hour. Novices may prefer a slower pace of interaction. Modest variations around the mean response time are acceptable, but large variations (less than one-quarter of the mean or more than twice the mean) should be accompanied by informative messages. An alternative approach for overly rapid responses is to slow them down and thus to avoid the need for explanatory messages.

A continuing concern is the frustration level of the increasingly diverse set of computer users (Box 10.2). In an era of user-generated content and social-media participation, a satisfying user experience is determined by a preferred or, at least, an acceptable level of Quality of Service. Malicious spreaders of spam and viruses are a serious threat to the expanding community of Internet users. Application crashes, confusing error messages, and network disruptions are problems that could be addressed by improved interface and software design.
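For instrumentation or automated testing, the task-class targets from Box 10.1 can be encoded as a small lookup. The class names and this representation are inventions of the sketch, not part of the guidelines themselves.

```python
# Box 10.1 task-class targets, in seconds (illustrative encoding).
RESPONSE_TIME_TARGETS_S = {
    "typing_cursor_mouse": (0.05, 0.15),  # typing, cursor motion, mouse selection
    "simple_frequent": (0.0, 1.0),        # simple, frequent tasks
    "common": (2.0, 4.0),                 # common tasks
    "complex": (8.0, 12.0),               # complex tasks
}

def within_target(task_class: str, observed_s: float) -> bool:
    """True when an observed response time does not exceed the upper bound
    recommended for its task class."""
    _low, high = RESPONSE_TIME_TARGETS_S[task_class]
    return observed_s <= high
```

A monitoring dashboard could apply such a check per interaction class; as the chapter notes, empirical tests with the actual user community should set the final numbers.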
Researcher's Agenda
The increased understanding of Quality of Service issues today is balanced by the richness of new technologies and applications. The taxonomy of issues provides a framework for research, but a finer taxonomy of tasks, of relevant cognitive-style differences, and of applications is needed. Next, a refined theory of problem solving and consumer behavior is necessary if we are to generate useful design hypotheses.

The interesting result of a U-shaped error curve for a complex task, with the lowest error rate at a 12-second response time (Barber and Lucas, 1983), invites further work. It would be productive to study error rates as a function of response time for a range of tasks and users. Another goal is to accommodate the real-world interruptions that disrupt planning, interfere with decision making, and reduce productivity.
It is understandable that error rates vary with response times, but how else are users' work styles or consumers' expectations affected? Is the modern era of employee multitasking between numerous applications, coupled with routine office distractions, spreading us too thin, adding stress, and drastically reducing productivity, ultimately affecting corporate profits? Can we train modern users to better manage their time among diverse applications and tasks, yet provide tools for improved communication, collaboration, user-generated content, and networking? Can users be encouraged to be more careful in their decisions by merely lengthening response times and degrading Quality of Service? Does the profile of actions shift to a smaller set of more familiar actions as the response time shortens?

Many other questions are also worthy of investigation. When technical feasibility prevents short responses, can users be satisfied by diversionary tasks, or are progress reports sufficient? Do warnings of long responses or apologies relieve anxiety or simply further frustrate users?

Methods for assessing user-frustration levels are controversial. Time diaries may be more reliable than retrospective surveys, but how could automated logging and observational techniques be made more effective? More importantly, how could software developers and network providers construct reliable monthly reports to gauge improvements in Quality of Service and reductions in user frustration?
WORLD WIDE WEB RESOURCES

http://www.aw.com/DTUI/

Response-time issues have a modest presence on the Internet, although the issue of long network delays gets discussed frequently. User frustration is a lively topic, and many web sites point out flawed interfaces and related frustrating experiences. The New Computing movement's web site (http://www.cs.umd.edu/hcil/newcomputing/) suggests ways to help bring about change.
References
Barber, R. E. and Lucas, H. C., System response time, operator productivity and job satisfaction, Communications of the ACM 26, 11 (November 1983), 972-986.
Bouch, Anna, Kuchinsky, Allen, and Bhatti, Nina, Quality is in the eye of the beholder: Meeting user requirements for Internet quality of service, Proc. CHI 2000 Conference: Human Factors in Computing Systems, ACM Press, New York (2000), 297-304.
Ceaparu, I., Lazar, J., Bessiere, K., Robinson, J., and Shneiderman, B., Determining causes and severity of end-user frustration, International Journal of Human-Computer Interaction 17, 3 (September 2004), 333-356.
Galletta, Dennis F., Understanding the direct and interaction effects of web delay and related factors, in Galletta, Dennis F. and Zhang, Ping (Editors), Human-Computer Interaction and Management Information Systems: Applications (Advances in Management Information Systems), M. E. Sharpe, Armonk, New York (2006), 29-69.
Galletta, Dennis F., Henry, Raymond, McCoy, Scott, and Polak, Peter, When the wait isn't so bad, Information Systems Research 17, 1 (March 2006), 20-37.
Goodman, Tom and Spence, Robert, The effects of potentiometer dimensionality, system response time, and time of day on interactive graphical problem solving, Human Factors 24, 4 (1982), 437-456.
Guastello, Stephen J., Human Factors Engineering and Ergonomics: A Systems Approach, Lawrence Erlbaum Associates, Mahwah, NJ (2006).
Hakkila, Jonna and Isomursu, Minna, User experiences on location-aware mobile services, Proc. OZCHI 2005, ACM Press, New York (2005), 14.
Huberman, Bernardo A., The Laws of the Web: Patterns in the Ecology of Information, MIT Press, Cambridge, MA (2001).
Jacko, J., Sears, A., and Borella, M., The effect of network delay and media on user perceptions of web resources, Behaviour & Information Technology 19, 6 (2000), 427-439.
King, Andrew B., Website Optimization: Speed, Search Engine & Conversion Rate Secrets, O'Reilly Media, Sebastopol, CA (2008).
Kohlisch, Olaf and Kuhmann, Werner, System response time and readiness for task execution: The optimum duration of inter-task delays, Ergonomics 40, 3 (1997), 265-280.
Lazar, J., Allen, A., Kleinman, J., and Malarkey, C., What frustrates screen reader users on the Web: A study of 100 blind users, International Journal of Human-Computer Interaction 22, 3 (May 2007), 247-269.
Lazar, J., Jones, A., Hackley, M., and Shneiderman, B., Severity and impact of computer user frustration: A comparison of student and workplace users, Interacting with Computers 18, 2 (2006), 187-207.
Lazar, J., Jones, A., and Shneiderman, B., Workplace user frustration with computers: An exploratory investigation of the causes and severity, Behaviour & Information Technology 25, 3 (May/June 2006), 239-251.
Mark, Gloria, Gudith, Daniela, and Klocke, Ulrich, The cost of interrupted work: More speed and stress, Proc. CHI 2008 Conference: Human Factors in Computing Systems, ACM Press, New York (2008), 107-110.
Martin, G. L. and Corl, K. G., System response time effects on user productivity, Behaviour & Information Technology 5, 1 (1986), 3-13.
Mentis, Helena, Memory of frustrating experiences, in Nahl, D. and Bilal, D. (Editors), Information and Emotion, Information Today, Medford, NJ (2007).
Meyer, Joachim, Shinar, David, Bitan, Yuval, and Leiser, David, Duration estimates and users' preferences in human-computer interaction, Ergonomics 39, 1 (1996), 46-60.
Miller, George A., The magical number seven, plus or minus two: Some limits on our capacity for processing information, Psychological Review 63 (1956), 81-97.
Miller, Robert B., Response time in man-computer conversational transactions, Proc. AFIPS Spring Joint Computer Conference 33, AFIPS Press, Montvale, NJ (1968), 267-277.
Morris, Michael G. and Turner, Jason M., Assessing users' subjective quality of experience with the World Wide Web: An exploratory examination of temporal changes in technology acceptance, International Journal of Human-Computer Studies 54 (2001), 877-901.
Ramsay, Judith, Barbesi, Alessandro, and Preece, Jenny, A psychological investigation of long retrieval times on the World Wide Web, Interacting with Computers 10 (1998), 77-86.
Raskin, Jef, The Humane Interface: New Directions for Designing Interactive Systems, Addison-Wesley, Reading, MA (2000).
Shneiderman, Ben, Leonardo's Laptop: Human Needs and the New Computing Technologies, MIT Press, Cambridge, MA (2003).
Shneiderman, Ben and Bederson, Ben, Maintaining concentration to achieve task completion, Proc. Conference on Designing for User Experiences 135, ACM Press, New York (November 2005), 2-7.
Teal, Steven L. and Rudnicky, Alexander I., A performance model of system delay and user strategy selection, Proc. CHI '92 Conference: Human Factors in Computing Systems, ACM Press, New York (1992), 295-305.
Wickens, Christopher D., Lee, John D., Liu, Yili, and Becker, Sallie E. Gordon, An Introduction to Human Factors Engineering, Second Edition, Pearson Prentice-Hall, Upper Saddle River, NJ (2004).