In addition to the technical challenges of designing and implementing this collaborative system, there are the more subtle issues of convincing senior professionals and their employers that the system is easy to use, is appropriately realistic and will deliver the training they require. Would you trust the captain of your passenger ferry if he had practised his command and control skills on something which looks suspiciously like an arcade game? Similarly, as personnel manager of a large shipping company, would you buy this product and expect your ships' captains to use it?
The DISCOVER trainers believed that, to bridge this credibility gap, the collaborative virtual environment should aim to model a ship as fully and as accurately as possible. However, we recognize that the DISCOVER presentation of reality must necessarily fall short, and that even small limitations are likely to shatter the illusion. The design challenge, then, is to identify and abstract from reality those elements which will give a sufficiently good impression of a ship. But how real is real? To date, research has focused on achieving a sense of presence, and evaluation instruments have been developed to measure just that; what has not been established is whether "presence" is a good measure of being real. During our requirements work at one of the partners' sites in Denmark, a trainer told us that mariners using their physical simulator would speak of it as "their ship" within 30 minutes of use. Physical simulators, in contrast to a collaborative virtual environment, combine real, physical controls, readouts, charts and manuals with a synthetic display; in a CVE, everything is synthetic.
Evaluating Reality?
From a functional perspective, the environment must (for example) be robust, be adequately fast, allow trainees to move and to interact with each other and with various objects, support trainer-trainee interaction and the modification of the environment by trainees, and provide the numerous other functions specified in the requirements. Evaluating such features is relatively simple: inspection against the requirements list combined with simple trials covering the actions necessary to support the training scenarios. Narrow usability evaluation is likewise fairly unproblematic. Initially, we have used expert cognitive walkthroughs based on the structure suggested by the COVEN project, extended to cover aspects of usability for pedagogic interaction. These results are supplemented by administering Kalawsky's VRUSE questionnaire instrument (Kalawsky, 1999) to representative users undertaking task-based trials, with additional material to elicit data about usability for collaboration. This leaves the questions "How does one evaluate reality?" and "Is presence an appropriate measure?". At the time of writing, we face a round of iterative evaluation and redesign to determine whether the system is sufficiently real.