Comparison of an Expand Interaction Technique with Raycasting for Immersive Virtual Environments
This project is already completed.

Introduction
This project covers an experimental comparison of two interaction techniques for 3D object selection beyond arm's reach in immersive virtual environments. Exemplary tasks such as selecting an object from a distant shelf or triggering a light switch without touching it directly are currently feasible by means of various techniques, such as the bubble cursor (Grossman & Balakrishnan, 2005), gaze-supported selection (Stellmach & Dachselt, 2013), or the Bendcast (Cashion et al., 2013). Each technique comes with individual strengths and weaknesses, depending for example on the density of objects to select from, or on the visible size of a target object, which may be limited by occlusion or distance. To handle the selection of objects with a small visible size, Cashion et al. (2012) developed an interaction technique called Expand. It is based on a Raycast (Mine, 1995; Bowman et al., 2004) and additionally uses an on-screen circular cursor which zooms in on the region of potential targets. Combined with a modified SQUAD technique (Kopper et al., 2011), it reduces the density of selectable objects. The approach is a two-step refinement technique: the user first selects a group of spatially close objects and then refines this selection with a Raycast. The original implementation showed promising results. However, Expand was only tested in a minimally immersive setup using a TV screen, leaving the technique's effectiveness unexplored for fully immersive head-mounted display (HMD) setups.
Problem Statement
Based on Expand by Cashion et al. (2012), Heidrich (2018) implemented a VR version of Expand for immersive virtual environment setups. It remains to be evaluated whether Expand is also suitable for immersive VR applications. To compare the suitability of different techniques for selecting an object, two main criteria may be used. The first is the saliency of an object, i.e., its visual conspicuousness: for example, whether the target object differs in color from its surroundings, or whether it is transparent, so that one can see through it and miss its contours in a crowded environment. The second is the visible size of an object, which determines the difficulty of a selection and depends on the object's physical size, its distance to the user, and occlusion. These criteria can be used to derive objective measurements for comparing the suitability of two techniques.
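For an unoccluded target, the visible size described above can be approximated by the visual angle the object subtends from the user's viewpoint. A minimal sketch (the function name and the flat, head-on target assumption are illustrative, not part of the study design):

```python
import math

def visual_angle_deg(target_width: float, distance: float) -> float:
    """Approximate visual angle (degrees) subtended by a flat target
    of the given width, seen head-on at the given distance (same units)."""
    return math.degrees(2 * math.atan(target_width / (2 * distance)))

# A 0.1 m wide object at 2 m distance subtends roughly 2.9 degrees.
angle = visual_angle_deg(0.1, 2.0)
```

Occlusion would further reduce the effective visible size, which is why the experiment measures it empirically via MAD rather than computing it geometrically.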
Approach
In the scope of this project, an experiment is going to be performed to compare Expand VR and Raycast in terms of their suitability for target objects with varying visible size. To evaluate which technique is more useful at a given level of the target object's visible size, objective and subjective measurements are going to be taken into account. The objective measurements are completion time, error rate, and, as a newly developed measurement, the maximum angle deviation (MAD) a participant can perform while still hitting the target. The angle values serve as a measurement for the visible size of an object. The MADs are going to be tracked for both the controller and the HMD. In combination with the error rate or task completion time, the MADs are expected to give evidence about the usefulness of an interaction technique with regard to the visible size of a target object. The angle values are also going to be an indicator for the objective difficulty of a selection task. Comparing the MAD value series of the two techniques, in combination with another measurement such as completion time or error rate, is expected to yield a decision boundary indicating which technique is better suited at which level of difficulty. Within the scope of the experiment, the new MAD method is going to be tested as a contribution towards an objective measurement for the visible size of an object, and thereby for the usability of a technique under certain circumstances. The technical implementation of MAD is done by a research assistant of the Chair for Human-Computer Interaction. To also take subjective aspects into consideration, the participants are going to be asked to rate the level of difficulty after each selection task. For that, a Single Ease Question (Sauro & Dumas, 2009) with the wording 'Wie schwierig oder einfach fanden Sie diese Aufgabe?' ('How difficult or easy did you find this task?') is going to be used.
The participant is going to answer that question on a scale from one to seven, where one means 'very easy' and seven means 'very difficult'. To assess the different levels of difficulty, the retrieved MAD values are going to be clustered to find accumulations of different difficulties. For that, a clustering algorithm is going to be implemented. These clustered values are going to be plotted against the task completion time as well as the error rate. The MAD is also going to be evaluated with respect to the subjective measurements obtained by the SEQ, to find correlations between MAD and the subjectively estimated difficulty.
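The proposal does not specify which clustering algorithm will be used. As one possible approach, a plain one-dimensional k-means over the recorded MAD values could group trials into difficulty levels; a minimal sketch (the value of k and the example data are illustrative assumptions):

```python
import random

def kmeans_1d(values, k, iterations=50, seed=0):
    """Cluster scalar MAD values (degrees) into k groups via plain k-means."""
    rng = random.Random(seed)
    centers = rng.sample(list(values), k)  # pick k distinct starting centers
    for _ in range(iterations):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Recompute each center as its cluster mean (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Illustrative MAD values: two difficulty accumulations should emerge.
mads = [0.5, 0.6, 0.7, 4.8, 5.1, 5.3]
centers, clusters = kmeans_1d(mads, k=2)
```

The resulting cluster centers could then serve as the difficulty levels against which completion time and error rate are plotted.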
Methodology & Concept
To evaluate Expand VR within an HMD setup, a comparative experiment between Expand VR and the widely known Raycast is going to be performed. The goal of the comparative study is to discover a usability decision boundary between the two techniques. This is going to be tested in scenarios with differently occluded target objects (i.e., different visible sizes) and thereby different selection difficulties. The study environment and experiment design have been set up within the scope of a previously completed scientific internship at the Chair for Human-Computer Interaction at the University of Würzburg (Heinrich, 2019). The study uses a within-subjects design.
Hypotheses
- H1 For target objects with a small visible size, Expand VR is going to outperform Raycast in terms of task completion time.
- H2 For target objects with a small visible size, Expand VR is going to have a lower error rate than Raycast.
- H3 For target objects with a small visible size, Expand VR is going to be preferred by the user over Raycast.
- H4 For target objects with a large visible size, Raycast is going to outperform Expand VR in terms of task completion time.
- H5 For target objects with a large visible size, Raycast and Expand VR are not going to differ greatly in error rates.
- H6 For target objects with a large visible size, Raycast is going to be preferred by the user over Expand VR.
These hypotheses are supported by the findings of Kopper et al. (2010) regarding distal pointing in non-VR setups. They found that target acquisition becomes exponentially slower with shrinking angular width, due to hand tremor and the Heisenberg effect (Bowman et al., 2002), i.e., the movement the cursor or its visible representation makes when a button on the controller is pressed. This effect may also have to be considered when evaluating the error rate for target objects with a very small MAD. Kopper et al. (2010) also conclude that interaction techniques involving distal pointing should consider increasing the angular width of target objects, which Expand does. They further recommend investigating the effect of visual angular width on performance, which is exactly what this experiment is going to address.
Measurements
One aim of the experiment is to rate the usability of the mentioned 3D interaction techniques in a way that yields insights about their usefulness at different levels of difficulty, using objective and subjective measurements.
Objective measurements:
- Completion time
- Error rate
- MAD
Subjective measurement:
- SEQ
With these measurements, the hypotheses are going to be evaluated: for instance, whether Expand VR outperforms Raycast in terms of task completion time, error rate, and user preference as MAD values shrink, and vice versa.
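The angular deviation underlying MAD can be expressed as the angle between the controller's (or HMD's) pointing direction and the direction from that device to the target; MAD would then be the largest such angle at which the selection still succeeds. A minimal sketch of the angle computation (the vector representation and names are assumptions, not the actual study implementation):

```python
import math

def angle_between_deg(ray_dir, to_target):
    """Angle in degrees between the pointing ray and the direction
    from the device to the target (both given as 3D vectors)."""
    dot = sum(a * b for a, b in zip(ray_dir, to_target))
    norm = (math.sqrt(sum(a * a for a in ray_dir))
            * math.sqrt(sum(b * b for b in to_target)))
    # Clamp to avoid math domain errors from floating-point noise.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Pointing straight ahead at a target offset slightly to the right:
# the deviation is roughly 5.7 degrees.
deviation = angle_between_deg((0.0, 0.0, 1.0), (0.1, 0.0, 1.0))
```

Logging this value per frame for both devices, and keeping the maximum over successful selections, would yield the per-trial MAD described above.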
Task

References
Bowman, D., Wingrave, C., Campbell, J., Ly, V., & Rhoton, C. (2002). Novel uses of Pinch Gloves for virtual environment interaction techniques. Virtual Reality, 6(3).
Bowman, D., Kruijff, E., LaViola, J., & Poupyrev, I. (2004). 3D User Interfaces: Theory and Practice. Addison-Wesley.
Cashion, J., Wingrave, C., & LaViola Jr., J. J. (2012). Dense and Dynamic 3D Selection for Game-Based Virtual Environments. IEEE Transactions on Visualization and Computer Graphics.
Cashion, J., Wingrave, C., & LaViola Jr., J. J. (2013). Optimal 3D selection technique assignment using real-time contextual analysis. In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI).
Grossman, T., & Balakrishnan, R. (2005). The Bubble Cursor: Enhancing Target Acquisition by Dynamic Resizing of the Cursor’s Activation Area. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Heidrich, D. (2018). Scientific internship. Chair for HCI at the University of Würzburg. Unpublished.
Heinrich, R. (2019). Scientific internship. Chair for HCI at the University of Würzburg. Unpublished.
Kopper, R., Bacim, F., & Bowman, D. (2011). Rapid and accurate 3D selection by progressive refinement. In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI).
Kopper, R., Bowman, D., Silva, M. G., & McMahan, R. P. (2010). A human motor behavior model for distal pointing tasks. International Journal of Human-Computer Studies, 68(10).
Mine, M. (1995). Virtual environments interaction techniques. Tech. Rep. TR95-018, Dept. of Computer Science, Univ. of North Carolina at Chapel Hill.
Sauro, J., & Dumas, J. (2009). Comparison of three one-question, post-task usability questionnaires. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Stellmach, S., & Dachselt, R. (2013). Still Looking: Investigating Seamless Gaze-supported Selection, Positioning, and Manipulation of Distant Targets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Contact Persons at the University Würzburg
Chris Zimmerer (Primary Contact Person)
Mensch-Computer-Interaktion, Universität Würzburg
chris.zimmerer@uni-wuerzburg.de
Dr. Martin Fischbach (Primary Contact Person)
Mensch-Computer-Interaktion, Universität Würzburg
martin.fischbach@uni-wuerzburg.de