Author Albert, William
Title Measuring the User Experience : Collecting, Analyzing, and Presenting Usability Metrics
Publication San Francisco : Elsevier Science & Technology, 2013
©2013
ISBN 9780124157927 (electronic bk.)
9780124157811
Edition 2nd ed.
Description 1 online resource (320 pages)
text txt rdacontent
computer c rdamedia
online resource cr rdacarrier
Series Interactive Technologies Ser.
Notes Contents:
Front Cover -- Measuring the User Experience -- Copyright Page -- Dedication -- Contents -- Preface to the Second Edition -- Acknowledgments -- Biographies
1 Introduction -- 1.1 What is User Experience? -- 1.2 What are User Experience Metrics? -- 1.3 The Value of UX Metrics -- 1.4 Metrics for Everyone -- 1.5 New Technologies in UX Metrics -- 1.6 Ten Myths about UX Metrics -- Myth 1: Metrics Take Too Much Time to Collect -- Myth 2: UX Metrics Cost Too Much Money -- Myth 3: UX Metrics are not Useful When Focusing on Small Improvements -- Myth 4: UX Metrics Don't Help us Understand Causes -- Myth 5: UX Metrics are Too Noisy -- Myth 6: You Can Just Trust Your Gut -- Myth 7: Metrics Don't Apply to New Products -- Myth 8: No Metrics Exist for the Type of Issues We are Dealing with -- Myth 9: Metrics are not Understood or Appreciated by Management -- Myth 10: It's Difficult to Collect Reliable Data with a Small Sample Size
2 Background -- 2.1 Independent and Dependent Variables -- 2.2 Types of Data -- 2.2.1 Nominal Data -- 2.2.2 Ordinal Data -- 2.2.3 Interval Data -- 2.2.4 Ratio Data -- 2.3 Descriptive Statistics -- 2.3.1 Measures of Central Tendency -- 2.3.2 Measures of Variability -- 2.3.3 Confidence Intervals -- 2.3.4 Displaying Confidence Intervals as Error Bars -- 2.4 Comparing Means -- 2.4.1 Independent Samples -- 2.4.2 Paired Samples -- 2.4.3 Comparing More Than Two Samples -- 2.5 Relationships Between Variables -- 2.5.1 Correlations -- 2.6 Nonparametric Tests -- 2.6.1 The χ² Test -- 2.7 Presenting your Data Graphically -- 2.7.1 Column or Bar Graphs -- 2.7.2 Line Graphs -- 2.7.3 Scatterplots -- 2.7.4 Pie or Donut Charts -- 2.7.5 Stacked Bar or Column Graphs -- 2.8 Summary
3 Planning -- 3.1 Study Goals -- 3.1.1 Formative Usability -- 3.1.2 Summative Usability -- 3.2 User Goals -- 3.2.1 Performance -- 3.2.2 Satisfaction -- 3.3 Choosing the Right Metrics: Ten Types of Usability Studies -- 3.3.1 Completing a Transaction -- 3.3.2 Comparing Products -- 3.3.3 Evaluating Frequent Use of the Same Product -- 3.3.4 Evaluating Navigation and/or Information Architecture -- 3.3.5 Increasing Awareness -- 3.3.6 Problem Discovery -- 3.3.7 Maximizing Usability for a Critical Product -- 3.3.8 Creating an Overall Positive User Experience -- 3.3.9 Evaluating the Impact of Subtle Changes -- 3.3.10 Comparing Alternative Designs -- 3.4 Evaluation Methods -- 3.4.1 Traditional (Moderated) Usability Tests -- 3.4.2 Online (Unmoderated) Usability Tests -- 3.4.3 Online Surveys -- 3.5 Other Study Details -- 3.5.1 Budgets and Timelines -- 3.5.2 Participants -- 3.5.3 Data Collection -- 3.5.4 Data Cleanup -- 3.6 Summary
4 Performance Metrics -- 4.1 Task Success -- 4.1.1 Binary Success -- Calculating Confidence Intervals for Binary Success -- 4.1.2 Levels of Success -- How to Collect and Measure Levels of Success -- How to Analyze and Present Levels of Success -- 4.1.3 Issues in Measuring Success -- 4.2 Time on Task -- 4.2.1 Importance of Measuring Time on Task -- 4.2.2 How to Collect and Measure Time on Task -- Turning on and off the Clock -- Tabulating Time Data -- 4.2.3 Analyzing and Presenting Time-on-Task Data -- Ranges -- Thresholds -- Distributions and Outliers -- 4.2.4 Issues to Consider When Using Time Data -- Only Successful Tasks or All Tasks? -- Using a Concurrent Think-Aloud Protocol -- Should You Tell Participants about the Time Measurement? -- 4.3 Errors -- 4.3.1 When to Measure Errors -- 4.3.2 What Constitutes an Error? -- 4.3.3 Collecting and Measuring Errors -- 4.3.4 Analyzing and Presenting Errors -- 4.3.5 Issues to Consider When Using Error Metrics -- 4.4 Efficiency -- 4.4.1 Collecting and Measuring Efficiency -- 4.4.2 Analyzing and Presenting Efficiency Data -- Lostness -- 4.4.3 Efficiency as a Combination of Task Success and Time -- 4.5 Learnability -- 4.5.1 Collecting and Measuring Learnability Data -- 4.5.2 Analyzing and Presenting Learnability Data -- 4.5.3 Issues to Consider When Measuring Learnability -- What is a Trial? -- Number of Trials -- 4.6 Summary
5 Issue-Based Metrics -- 5.1 What is a Usability Issue? -- 5.1.1 Real Issues versus False Issues -- 5.2 How to Identify an Issue -- 5.2.1 In-Person Studies -- 5.2.2 Automated Studies -- 5.3 Severity Ratings -- 5.3.1 Severity Ratings Based on the User Experience -- 5.3.2 Severity Ratings Based on a Combination of Factors -- 5.3.3 Using a Severity Rating System -- 5.3.4 Some Caveats about Rating Systems -- 5.4 Analyzing and Reporting Metrics for Usability Issues -- 5.4.1 Frequency of Unique Issues -- 5.4.2 Frequency of Issues Per Participant -- 5.4.3 Frequency of Participants -- 5.4.4 Issues by Category -- 5.4.5 Issues by Task -- 5.5 Consistency in Identifying Usability Issues -- 5.6 Bias in Identifying Usability Issues -- 5.7 Number of Participants -- 5.7.1 Five Participants is Enough -- 5.7.2 Five Participants is Not Enough -- 5.7.3 Our Recommendation -- 5.8 Summary
6 Self-Reported Metrics -- 6.1 Importance of Self-Reported Data -- 6.2 Rating Scales -- 6.2.1 Likert Scales -- 6.2.2 Semantic Differential Scales -- 6.2.3 When to Collect Self-Reported Data -- 6.2.4 How to Collect Ratings -- 6.2.5 Biases in Collecting Self-Reported Data -- 6.2.6 General Guidelines for Rating Scales -- 6.2.7 Analyzing Rating-Scale Data -- 6.3 Post-Task Ratings -- 6.3.1 Ease of Use -- 6.3.2 After-Scenario Questionnaire (ASQ) -- 6.3.3 Expectation Measure -- 6.3.4 A Comparison of Post-task Self-Reported Metrics -- 6.4 Postsession Ratings -- 6.4.1 Aggregating Individual Task Ratings -- 6.4.2 System Usability Scale -- 6.4.3 Computer System Usability Questionnaire -- 6.4.4 Questionnaire for User Interface Satisfaction -- 6.4.5 Usefulness, Satisfaction, and Ease-of-Use Questionnaire -- 6.4.6 Product Reaction Cards -- 6.4.7 A Comparison of Postsession Self-Reported Metrics -- 6.4.8 Net Promoter Score -- 6.5 Using SUS to Compare Designs -- 6.6 Online Services -- 6.6.1 Website Analysis and Measurement Inventory -- 6.6.2 American Customer Satisfaction Index -- 6.6.3 OpinionLab -- 6.6.4 Issues with Live-Site Surveys -- 6.7 Other Types of Self-Reported Metrics -- 6.7.1 Assessing Specific Attributes -- 6.7.2 Assessing Specific Elements -- 6.7.3 Open-Ended Questions -- 6.7.4 Awareness and Comprehension -- 6.7.5 Awareness and Usefulness Gaps -- 6.8 Summary
7 Behavioral and Physiological Metrics -- 7.1 Observing and Coding Unprompted Verbal Expressions -- 7.2 Eye Tracking -- 7.2.1 How Eye Tracking Works -- 7.2.2 Visualizing Eye-Tracking Data -- 7.2.3 Areas of Interest -- 7.2.4 Common Eye-Tracking Metrics -- Dwell Time -- Number of Fixations -- Fixation Duration -- Sequence -- Time to First Fixation -- Revisits -- Hit Ratio -- 7.2.5 Eye-Tracking Analysis Tips -- 7.2.6 Pupillary Response -- 7.3 Measuring Emotion -- 7.3.1 Affectiva and the Q-Sensor -- 7.3.2 Blue Bubble Lab and Emovision -- 7.3.3 Seren and Emotiv -- 7.4 Stress and Other Physiological Measures -- 7.4.1 Heart Rate Variance -- 7.4.2 Heart Rate Variance and Skin Conductance Research -- 7.4.3 Other Measures -- 7.5 Summary
8 Combined and Comparative Metrics -- 8.1 Single Usability Scores -- 8.1.1 Combining Metrics Based on Target Goals -- 8.1.2 Combining Metrics Based on Percentages -- 8.1.3 Combining Metrics Based on Z Scores -- 8.1.4 Using Single Usability Metric -- 8.2 Usability Scorecards -- 8.3 Comparison to Goals and Expert Performance -- 8.3.1 Comparison to Goals -- 8.3.2 Comparison to Expert Performance -- 8.4 Summary
9 Special Topics -- 9.1 Live Website Data -- 9.1.1 Basic Web Analytics -- 9.1.2 Click-Through Rates -- 9.1.3 Drop-Off Rates -- 9.1.4 A/B Tests -- 9.2 Card-Sorting Data -- 9.2.1 Analyses of Open Card-Sort Data -- Hierarchical Cluster Analysis -- Multidimensional Scaling -- 9.2.2 Analyses of Closed Card-Sort Data -- 9.2.3 Tree Testing -- 9.3 Accessibility Data -- 9.4 Return-On-Investment Data -- 9.5 Summary
10 Case Studies -- 10.1 Net Promoter Scores and the Value of a Good User Experience -- 10.1.1 Methods -- 10.1.2 Results -- 10.1.3 Prioritizing Investments in Interface Design -- 10.1.4 Discussion -- 10.1.5 Conclusion -- References -- Biographies -- 10.2 Measuring the Effect of Feedback on Fingerprint Capture -- 10.2.1 Methodology -- 10.2.2 Discussion -- 10.2.3 Conclusion -- Acknowledgment -- References -- Biographies -- 10.3 Redesign of a Web Experience Management System -- 10.3.1 Test Iterations -- 10.3.2 Data Collection -- 10.3.3 Workflow -- 10.3.4 Results -- 10.3.5 Conclusions -- References -- Biographies -- 10.4 Using Metrics to Help Improve a University Prospectus -- 10.4.1 Example 1: Deciding on Actions after Usability Testing -- 10.4.2 Example 2: Site-Tracking Data -- 10.4.3 Example 3: Triangulation for Iteration of Personas -- 10.4.4 Summary -- Acknowledgments -- References -- Biographies -- 10.5 Measuring Usability Through Biometrics -- 10.5.1 Background -- 10.5.2 Methods -- 10.5.3 Biometric Findings -- 10.5.4 Qualitative Findings -- 10.5.5 Conclusions and Practitioner Take-Aways -- Acknowledgments -- References -- Biographies
11 Ten Keys to Success -- 11.1 Make Data Come Alive -- 11.2 Don't Wait to be Asked to Measure -- 11.3 Measurement is Less Expensive Than You Think -- 11.4 Plan Early -- 11.5 Benchmark Your Products -- 11.6 Explore Your Data -- 11.7 Speak the Language of Business -- 11.8 Show Your Confidence -- 11.9 Don't Misuse Metrics -- 11.10 Simplify Your Presentation
Measuring the User Experience was the first book focused on how to quantify the user experience. Now in its second edition, the authors include new material on how recent technologies have made it easier and more effective to collect a broader range of data about the user experience. As more UX and web professionals need to justify their design decisions with solid, reliable data, Measuring the User Experience provides the quantitative analysis training that these professionals need. The second edition presents new metrics such as emotional engagement, personas, keystroke analysis, and Net Promoter Score. It also examines how new technologies coming from neuro-marketing and online market research can refine user experience measurement, helping usability and user experience practitioners make business cases to stakeholders. The book also contains new research and updated examples, including tips on writing online survey questions, six new case studies, and examples using the most recent version of Excel.
- Learn which metrics to select for every case, including behavioral, physiological, emotional, aesthetic, gestural, verbal, and physical, as well as more specialized metrics such as eye-tracking and clickstream data
- Find a vendor-neutral examination of how to measure the user experience with websites, digital products, and virtually any other type of product or system
- Discover in-depth global case studies showing how organizations have successfully used metrics and the information they revealed
- Companion site, www.measuringux.com, includes articles, tools, spreadsheets, presentations, and other resources to help you effectively measure the user experience
Description based on publisher supplied metadata and other sources
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2020. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries
Link Print version: Albert, William. Measuring the User Experience : Collecting, Analyzing, and Presenting Usability Metrics. San Francisco : Elsevier Science & Technology, c2013. 9780124157811
Subject User interfaces (Computer systems) ; User interfaces (Computer systems) -- Measurement ; Measurement ; Technology assessment
Electronic books
Alt Author Tullis, Thomas
Albert, William