Analysing Mouse and Pen Flick Gestures


Gesture-based interfaces promise to increase the efficiency of user input, particularly in mobile computing. This work is motivated by designers' need for sensible constraint values, such as the magnitude, angular, and timing accuracy of gestures, for a marking-menu implementation. This paper describes the low-level physical properties of linear flick gestures created by users with mouse and pen input devices. As bigger screens and multi-monitor configurations become more popular, users employ higher mouse acceleration to traverse the screen quickly. The results show that pen gestures are larger than mouse gestures, that vertical gestures are 'clumsy' with the mouse, that angular errors are greater in the left and right directions, and that downward gestures are approximately 11% slower than gestures in other directions.


According to Michael and Andy (2003), "Gesture based input mechanisms promise two major user interface benefits". First, gestures can reduce the time taken to issue simple commands. Gestures based on 'marking menus', for example, reduce the Fitts' law (McGuffin and Balakrishnan, 2005) time-to-target constraints of normal menus by letting users select menu items with a flick towards each item's location in a 'pie menu' centred on the cursor. Second, gesture-based input methods are readily implemented on mobile devices, such as touch screens, where mice and keyboards are impractical.

Gestural commands appear in desktop interfaces such as Microsoft's web browser and in Google tools that add gestures to many commercial desktop environments. Gestural systems on mobile devices include text entry systems such as Unistrokes and Graffiti (Isokoski, 2001), both viable forms of text input in pen-based interfaces. To distinguish different gestural commands from ordinary mouse-driven actions, gesture recognition software must set constraints on the timing, magnitude, and direction of gestures. The aim of this paper is to determine values that can be used when designing improved gesture recognition systems based on 'marking menu' concepts.
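To illustrate such constraints, a recogniser might accept a stroke as a flick only when its magnitude and duration fall inside configured ranges. The threshold values below are hypothetical placeholders, not the values derived in the paper:

```python
import math

def is_flick(start, end, duration_ms,
             min_dist_mm=3.0, max_dist_mm=30.0, max_time_ms=400):
    """Accept a stroke as a flick gesture only if its magnitude and
    duration fall inside the configured constraint ranges."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    dist = math.hypot(dx, dy)
    return min_dist_mm <= dist <= max_dist_mm and duration_ms <= max_time_ms

# A 7 mm stroke completed in 150 ms is accepted as a flick:
print(is_flick((0, 0), (7, 0), 150))   # True
```

A stroke that is too short or too slow would instead be treated as an ordinary pointer movement rather than a gestural command.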

Characteristics of gestures:

The motor movements from which gestures are generated help explain some of the preference differences and observed performance. When making left and right gestures with the pen, the fingers require some extension combined with rotational wrist movement. In contrast, left and right gestures with the mouse involve only movement of the hand, with no finger movement.

According to Yu-Te Shen, Zheng, and Chen (2004), vertical mouse gestures can be created in different ways. One method is to keep the hand and wrist still and move the mouse by contracting (down) and extending (up) the fingers; another is to move the whole arm with little movement of the fingers and wrist. With the pen, by contrast, gestures were made by contracting and extending the thumb and fingers.

Introduction to Marking Menus:

Menus are widely used in human-computer interfaces. They provide detailed information about which commands are available and how to invoke a particular command. Earlier interfaces sped up menu access with an "accelerator key" on the keyboard (Yun & Lee, 2007), but accelerator keys require both hands. Marking menus overcome this problem: they allow a user to perform a menu selection by popping up a pie menu under the cursor when the mouse button is pressed. A pie menu is a circular context menu in which selection depends on the direction of movement.

An expert user selects an item with a rapid flick in the correct direction. If the user hesitates, the pie menu appears after a delay of about half a second. Early systems generated gesture commands with the left mouse button, but the left button is ordinarily used for selecting text, so recent implementations instead use the rarely used right mouse button, reducing overloading.
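A minimal sketch of direction-based pie-menu selection, assuming an eight-item menu with slices centred on the eight compass directions (the item labels are illustrative, not from the paper):

```python
import math

# Slice labels in counter-clockwise order, starting from "right" (0 degrees).
ITEMS = ["right", "up-right", "up", "up-left",
         "left", "down-left", "down", "down-right"]

def pie_select(dx, dy):
    """Map a flick vector to one of eight pie-menu slices.
    Screen y grows downward, so dy is negated to obtain the
    conventional mathematical angle."""
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    slice_index = int((angle + 22.5) // 45) % 8
    return ITEMS[slice_index]

print(pie_select(10, 0))    # right
print(pie_select(0, -10))   # up
```

Each 45-degree slice is centred on its direction, so a flick within 22.5 degrees of an item's axis still selects that item.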

Non-Linear Gesture Input Schemes:

According to Michael and Andy (2003), "In selecting one item from a single marking menu, the recognition software need only compare the total distance traveled on the X and Y coordinates to determine the direction of the gesture". With cascading menus built on the marking-menu concept, the user can access large sets of menu items through gestures composed of a series of linear strokes (Cockburn & Gin, 2006).
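For a four-item marking menu, the comparison described in the quotation can be sketched as follows (a simplification covering four directions only):

```python
def gesture_direction(dx, dy):
    """Determine the direction of a four-item marking-menu gesture by
    comparing total X and Y travel. Screen y grows downward, so a
    positive dy means the cursor moved down."""
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(gesture_direction(15, -3))   # right
print(gesture_direction(2, -12))   # up
```

Whichever axis has the larger total travel wins, so moderate angular error does not change the selected item.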

Unistrokes gestural input, for instance, allows users to access all letters with gestures; T-CUBE and Graffiti (Isokoski, 2001) are further examples, and several other character sets have been implemented using gesture techniques. Beyond text input, non-linear gestural input has been used in a wide range of application areas, such as pen-gesture sketching for teaching (Yang, Ma, Teng, Dai & Wang, 2007). The GRANDMA toolkit supports rapidly adding gestures to direct-manipulation interfaces by having the system developer provide the associated interface actions.

Composite Mouse Gestures:

According to Shen, Zheng, and Chen (2004), Composite Mouse Gestures is an authoring tool with a graphical user interface for utilising mouse gestures. With composite mouse gestures, users can convey complicated meanings that a conventional GUI can barely express. Composite mouse gestures consist of three main components: a vocabulary, grammars, and implicated computational models.

A vocabulary item (gesture) is a mouse action that recognition maps to a pre-defined meaning, such as an arrow, an X sign, a single click, or a double click. The grammars are rules for combining vocabulary items: each gesture in a sequence can be viewed as a word in a sentence, and the sentence is valid if and only if it maps onto one of the specified grammars. The implicated computational models determine how the temporal order of the input series encodes the logic. According to Moyle & Cockburn (2003), in behaviour authoring, preceding gestures supply preconditions that fire succeeding ones.
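The sentence analogy can be sketched as simple sequence matching, where each recognised gesture is a token and a grammar is an allowed token sequence. The vocabulary tokens, grammars, and commands below are invented examples, not taken from the Composite Mouse Gestures system:

```python
# Hypothetical grammars: each maps a valid gesture "sentence" to a command.
# A real system would recognise these tokens from raw mouse input.
GRAMMARS = {
    ("arrow_right", "double_click"): "open next item",
    ("x_sign", "single_click"): "delete item",
}

def interpret(sequence):
    """Return the command for a gesture sequence, or None if the
    sequence does not map onto any specified grammar."""
    return GRAMMARS.get(tuple(sequence))

print(interpret(["x_sign", "single_click"]))   # delete item
print(interpret(["arrow_right"]))              # None
```

An unmatched sequence is simply invalid, just as an ungrammatical sentence carries no agreed meaning.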

Mouse Acceleration:

According to Yun & Lee (2007), mouse motion is generally controlled by two configurable parameters that determine the mapping between movement of the mouse and the corresponding movement of the cursor on the screen. The two parameters are acceleration and threshold.

The acceleration parameter is a multiplier applied to mouse cursor motion; if the parameter is set to 4, the cursor moves four times faster than the mouse.

The acceleration setting therefore determines the mapping between screen distances and mouse movement.

High acceleration is convenient for moving the cursor long distances across the screen, but it makes precise positioning awkward: the pointer moves so quickly that it becomes difficult to target a small screen area. To get over this problem, the threshold parameter can be set so that acceleration only takes effect once the cursor has moved more than a given number of pixels.
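A minimal sketch of this two-parameter mapping, assuming acceleration applies to per-update mouse deltas that exceed the threshold (the default values are illustrative):

```python
def cursor_delta(mouse_px, acceleration=2, threshold=4):
    """Map a per-update mouse movement (in device pixels) to a cursor
    movement. Small movements pass through unscaled, so fine
    positioning stays precise; movements beyond the threshold are
    multiplied by the acceleration factor."""
    if abs(mouse_px) <= threshold:
        return mouse_px
    return mouse_px * acceleration

print(cursor_delta(3))    # 3  (below threshold, unaccelerated)
print(cursor_delta(10))   # 20 (accelerated two-to-one)
```

This is why a slow, careful movement retains one-to-one precision while a quick sweep crosses the screen rapidly.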

Michael and Andy (2003) conducted a study with twenty-nine subjects, all right-handed postgraduate Computer Science students. Each subject was assigned to one of three gesture-input conditions.

1. Mouse input, no acceleration: all gestures were created with no mouse acceleration, providing a static linear relationship between cursor movement and the corresponding physical movement of the mouse, known as constant control-display gain.

2. Pen input: a pressure-sensitive pen computer was used to create gestures. There was a one-to-one mapping between the physical movement of the pen on the screen surface and the resulting size of the gesture.

3. Accelerated mouse input: gestures were created with a common default setting for mouse motion, an accelerated "two-to-one" mapping with a threshold.


The stored data were analysed in three different experimental designs. The three dependent variables measured were as follows.

Gesture magnitude: the distance between the physical location of the mouse or pen when the gesture begins and when it finishes. Values were recorded in pixel coordinates and converted into the corresponding millimetre movements at the physical device.

Gesture timing: the time, measured in milliseconds, from the start of each gesture to its finish. These values were not measured in the pen condition because of the low timing granularity supported by the pen computer.

Angular error: the per-gesture offset between the intended gesture direction and the actual direction. This variable allows the detection of stereotypical biases towards particular angular errors for each input device and gesture direction.
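Angular error can be computed as the signed difference between the intended and actual gesture angles, wrapped into [-180°, 180°). This is a straightforward sketch of that measure, not the authors' exact procedure:

```python
import math

def angular_error(intended_deg, dx, dy):
    """Signed offset in degrees between the intended gesture direction
    and the direction actually travelled, wrapped into [-180, 180).
    Screen y grows downward, so dy is negated for the conventional
    mathematical angle."""
    actual = math.degrees(math.atan2(-dy, dx))
    return (actual - intended_deg + 180) % 360 - 180

# A rightward gesture (intended 0 degrees) that drifts slightly upward:
print(angular_error(0, 10, -1))   # ~5.7 degrees
```

The sign of the error indicates which way the gesture drifted, which is what reveals directional biases such as the leftward and rightward 'sloppiness' discussed later.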

Fig 1.1: Size distribution of 1400 non-accelerated mouse gestures (taken from Michael and Andy, 2003).

Given that the subjects received only minimal training and instruction, there was remarkably little variation in gesture magnitude, timing, and angular error. Across the full set of 5200 gestures, the mean gesture size was 6.6 mm, the mean gesture time was 151 milliseconds, and the mean angular error was 4.2 degrees. The roughly normal shape of the graph above is typical of the data collected for each of the three dependent variables.

Figure 1.2: Magnitude and angular errors of gestures (left, up, right, down) using the pen and mouse (taken from Michael and Andy, 2003).

Pen gestures were substantially larger than mouse gestures, with mean gesture magnitudes of 18.9 mm for the pen and 7.0 mm for the mouse, a reliable difference between the two devices. Gesture magnitudes in the four directions were not reliably different from one another.


Marking menus and other forms of gesture input are used in commercial desktop systems and in mobile devices to speed frequent navigation actions, whether back and forward, up and down, or left and right. The marking-menu concept, combined with gestures, should also be helpful on multi-touch screens, giving easy control over menu items.

This paper reports an empirical analysis of the magnitude, timing, and accuracy of the linear 'flick' gestures used in marking menus (Streit, 1997). The aim is to guide the selection of appropriate values for parameters such as magnitude, timing, and angular error. Further work will investigate the leftwards and rightwards 'sloppiness' observed with both pen and mouse gestures, and how this effect impacts pen-based and mouse-based marking menus.
