





Human-Automated Judgment Learning (HAJL)

Human-Automated Judgment Learning (HAJL) is a methodology for investigating a person's interaction with and learning about an automated judge. It examines the judgment processes of the human and automated judge, features of the task environment, and relationships between them. As with IPL, it includes three phases: training, interactive learning, and prediction. In training, the human is trained to make judgments about the environmental criterion without interaction with the automated judge. In the interactive learning phase, the human judge first provides a judgment before having access to the automated judge's judgment, and then provides a revised judgment after access. In the prediction phase, the human judge provides judgments with respect to the environmental criterion and predicts what the automated judge would judge. HAJL provides measures of conflict between the judges, of compromise by the human judge, of adaptation of the human judge to the automated one, and of how well the human judge understands the automated one.

HAJL was empirically tested using a simplified air traffic conflict prediction task. Two between-subjects manipulations were crossed to investigate HAJL's sensitivity to training and design interventions. Statistically significant differences were found: 1) males outperformed females in judgment performance before access to the automated judge's output, while access to the automated judge's subsequent output eliminated this difference; 2) participants tended to think that their judgments were closer to the automated judge's than they were. HAJL also identified a trend toward higher judgment achievement for participants better able to predict the automated judgment.
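The phase structure above yields paired judgment series from which HAJL's measures can be computed. The sketch below shows one plausible way such measures could be operationalized; the formulas and variable names are illustrative assumptions, not the methodology's published definitions.

```python
import numpy as np

# Illustrative operationalization of HAJL-style measures (assumed forms,
# not HAJL's published definitions). Synthetic data stand in for one
# participant's judgment series.
rng = np.random.default_rng(0)
criterion = rng.normal(size=50)                        # environmental criterion
auto = criterion + rng.normal(scale=0.5, size=50)      # automated judge's judgments
initial = criterion + rng.normal(scale=1.0, size=50)   # human judgment before access
revised = 0.5 * initial + 0.5 * auto                   # human judgment after access

# Conflict: disagreement between the human's initial judgment and the
# automated judge's judgment.
conflict = np.mean(np.abs(initial - auto))

# Compromise: fraction of the initial disagreement that the revised
# judgment moved toward the automated judge (0.5 here by construction).
compromise = np.mean((initial - revised) / (initial - auto))

# Achievement: correlation of each judge with the criterion.
achievement_human = np.corrcoef(revised, criterion)[0, 1]
achievement_auto = np.corrcoef(auto, criterion)[0, 1]

print(conflict, compromise, achievement_human, achievement_auto)
```

Understanding of the automated judge could be measured analogously, e.g., as the correlation between the human's predictions of the automated judgments (from the prediction phase) and the automated judgments themselves.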

Research on Visualization and Decision Making

Ann Bisantz University at Buffalo, The State University of New York bisantz@eng.buffalo.edu

Over the past year we have been involved in a number of studies involving human judgment and information visualization. Separate reports submitted by Pratik Jha, Gordon Gattie, and Younho Seong will present details of the our collaborative projects in areas of modeling pilot and controller judgments in air traffic management, applying real time cognitive feedback in a dental diagnosis task, and the use of Lens Model based framework to understand trust and calibration of use of automation systems.

In other studies, we are continuing work focused on methods for displaying probabilistic

information to decision makers, including research focused on developing and investigating the properties of graphical representations of uncertainty based on blurred or degraded icons, as well as visual, auditory, and tactile representations of spatially distributed uncertainty. The research has revealed, through a number of studies, that people are able to successfully map iconic representations to underlying concepts of uncertainty, and that performance on dynamic decision-making tasks using such representations is similar to that with numeric representations. Further studies explored participants' interpretation of the icons by empirically generating fuzzy membership functions which mapped their interpretation of the icons' meaning to probabilities. Results from this work support the experimental findings, indicating that people generated membership functions with maximal values closely correlated to the intended numeric values. Additionally, membership functions were reasonably similar across individuals. In combination, these are important findings because they suggest that such representations may be implementable, allowing display designers to use a single icon to encode the uncertainty about an object's identity (e.g., whether it is a hostile or friendly aircraft) along with other dimensions such as its location. These studies were supported by grants from the US Air Force Human Effectiveness Directorate and the National Science Foundation (#IIS9984079), and were performed in collaboration with students Stephanie Schinzing, Jessica Munch, and Richard Finger. Additionally, a set of icons based on military symbology was generated for implementation in a demonstration battlespace visualization system, as part of a research effort funded by the Sarnoff Corporation.

Also, in cooperation with co-investigator T. Kesevadas and M.S. student Santosh Basapur, research was performed to compare the utility of visual, auditory, and tactile modes for communicating a two-dimensional probability density function. Specifically, color, tone pitch, and vibration were used to encode levels of uncertainty associated with points in a grid, representing the probability of a hazard (i.e., an explosive or mine). Initial findings from a path-finding task indicated that the visual modality resulted in path lengths that were less risky, but took longer than other modalities.
We plan to pursue further work in this area to explore different schemes within each modality, as well as individuals' mappings of representations to levels of uncertainty.
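The path-finding trade-off described above, shorter paths versus less risky ones, can be made concrete with a small sketch. Here Dijkstra's algorithm runs over a grid of hazard probabilities, with a step cost that mixes distance and risk; the grid values and the `risk_weight` parameter are illustrative assumptions, not the study's actual task parameters.

```python
import heapq

def least_risk_path(hazard, start, goal, risk_weight=10.0):
    """Dijkstra over grid cells; entering a cell costs 1 + risk_weight * hazard.

    A large risk_weight favors longer but safer paths, mirroring the
    length-vs-risk trade-off observed in the path-finding task.
    """
    rows, cols = len(hazard), len(hazard[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + risk_weight * hazard[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the path.
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1]

# Hypothetical hazard field: a risky column blocks the direct route.
hazard = [
    [0.0, 0.9, 0.0],
    [0.0, 0.9, 0.0],
    [0.0, 0.0, 0.0],
]
print(least_risk_path(hazard, (0, 0), (0, 2)))
```

With this weighting the planner detours around the high-hazard cells, producing a longer but less risky route, the same behavior the visual modality elicited from participants.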

Finally, we are currently undertaking studies to understand how people adapt their judgment strategies in cue-criterion judgment tasks as the underlying probabilistic structure describing the cue-criterion relationships changes.
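A task of this kind can be sketched as follows: the criterion is a noisy linear function of the cues, and the cue weights (and hence the cues' ecological validities) shift at a change point. The weights, noise level, and change point below are illustrative assumptions, not the parameters of the studies described.

```python
import numpy as np

# Sketch of a cue-criterion task whose probabilistic structure changes
# mid-stream. All parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
n_trials, n_cues, change_point = 200, 3, 100

cues = rng.normal(size=(n_trials, n_cues))
weights_before = np.array([0.8, 0.4, 0.1])
weights_after = np.array([0.1, 0.4, 0.8])   # cue importance reversed

criterion = np.empty(n_trials)
criterion[:change_point] = cues[:change_point] @ weights_before
criterion[change_point:] = cues[change_point:] @ weights_after
criterion += rng.normal(scale=0.3, size=n_trials)  # criterion noise

# Ecological validity of cue 0 (cue-criterion correlation) before vs.
# after the change: high at first, low afterward.
r_before = np.corrcoef(cues[:change_point, 0], criterion[:change_point])[0, 1]
r_after = np.corrcoef(cues[change_point:, 0], criterion[change_point:])[0, 1]
print(r_before, r_after)
```

A participant whose judgment policy tracks the change would shift weight from the first cue to the third; comparing regression weights fitted to their judgments before and after the change point is one way such adaptation could be assessed.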

Newsletter 2002 page 6 of 28
