Visualizing Fitts' Law. Hick's law in everyday situations, and when not to use it

MEMORY CAPACITY AND SPEED

Following the logic of D. Hartley, A.A. Ukhtomsky, N.G. Samoilova, M.N. Livanov, G. Walter, E.R. John, K. Pribram and other supporters of the idea that perceived and remembered information is encoded dynamically, one can assume that the neural ensembles responsible for subjective reflection are activated periodically, discharging in bursts of impulses.
Owing to the beating of the frequencies that make up the EEG, the updated memory images pulsate with a beating period whose maximum duration is given by T = 1/(F·R). Note that 1/R = F·T.
From the entire set C of long-term memory images, only a limited number M of different images are actualized at any current moment, and with probability 1/M one of them has maximum excitability at that moment. The reaction time in response to a stimulus matching that image is then minimal. Stimuli are delivered without regard to the periodic fluctuations of neuronal activity (the experimenter does not observe them), so the probability of a stimulus coinciding with any particular phase of the excitability oscillations is uniform over the whole period. Some stimuli coincide with the phase of increased excitability, and responses follow without any additional delay; in the remaining cases the delay is distributed evenly over the period of reduced excitability of the neuronal ensembles.
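A minimal simulation sketch of the mechanism just described, assuming (as stated above) that a stimulus either lands in the excitable phase with probability 1/M or waits out a delay spread evenly over the period; the trial count is an illustrative choice:

    import random

    F, R = 10.0, 0.1            # Berger and Livanov constants from the text
    T = 1.0 / (F * R)           # maximum beating period: 1 s

    def mean_delay(M, trials=100_000):
        """Average extra delay when M images take turns being maximally excitable."""
        total = 0.0
        for _ in range(trials):
            if random.random() < 1.0 / M:
                continue                      # stimulus hit the excitable phase: no delay
            total += random.uniform(0.0, T)   # otherwise the delay is uniform over the period
        return total / trials

    for M in (2, 4, 8):
        print(M, round(mean_delay(M), 3))     # grows toward T/2 = 0.5 s as M increases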
The foregoing is sufficient to calculate the average duration of the delay t as a function of the number M of equally likely expected stimuli and the number K of simultaneously presented stimuli:
(1)

where F = 10 Hz (the Berger constant) and R = 0.1 (the Livanov constant).
The equation quantifies the speed of human information processing. In particular, the average time required to recognize one stimulus out of M equally probable stimuli is given by the special case of equation (1) with K = 1.
In psychology there is a law of the speed of human information processing, established by W. Hick: processing time grows linearly with the logarithm of the number of alternatives in choice situations. The main shortcoming of this law is its narrow scope: it holds only while the number of alternatives is less than about ten, and it has been criticized and much debated.
Equation (1), which includes both physiological constants and is derived from ideas about the encoding of information by cycles of neural activity, is much more accurate: it works for an unlimited number of alternatives and predicts the results of psychological experiments with high accuracy [Bovin, 1985]. In the studies of I.Yu. Myshkina, A.V. Pasynkova, Yu.A. Shpatenko, T.S. Knyazeva, G.V. Kotkova, D.V. Lozovoy, O.Zh. Kondratieva, V.K. Oshe and other members of A.N. Lebedev's laboratory, it was found that equation (1) correctly predicts a wide variety of psychological data on the speed of perception and memory. Consequently, the idea that perceived and remembered information is encoded by waves of neural activity has a solid experimental basis.
Now let's move on from the temporal characteristics of perception to assessing the volume of information perceived and stored in memory. Psychologists have long identified several types of human memory: iconic, short-term and long-term. There are other classifications.
On the one hand, a person's memory seems boundless: this is long-term memory. On the other hand, it is surprisingly small: this is operational, or short-term (working, as it is sometimes called) memory, earlier known as the span of consciousness. Psychologists tried to solve the problem of how the volume of short-term memory depends on the alphabet of memorized stimuli and gave up; G. Miller's rule of "seven plus or minus two" appeared, asserting that the volume is independent of the alphabet of memorized stimuli. The stated spread is wide, but in reality it is even greater: from one or two units (for example, in the case of hieroglyphs) to 25-30 in the case of binary signals. The idea of cycles of neural activity as the material substrate of memory has justified itself here too, in full accordance with the original idea of D. Hartley.
The units of memory, its neural codes, are wave packets, i.e. synchronous impulse discharges of many neurons in one ensemble. There are a huge number of neural ensembles, each storing information about some memory object in the form of a stable wave pattern. An ensemble consists of several groups of neurons. A single group is capable of generating from 1 to 10 coherent volleys of impulses in succession during one period of the dominant oscillation, provided that the intervals between volleys are no less than the Livanov fraction R = 0.1 of the duration of the dominant period. The number of neurons in an ensemble varies: the more neurons are involved in the rhythms of a given ensemble, the higher the likelihood that the corresponding image becomes conscious. The minimum number of neurons that ensures the stability of an ensemble is about 100-300 [Zabrodin, Lebedev, 1977].
It is not synapses or even individual neurons such as detector neurons or command neurons that serve as storage units, but only groups, ensembles of cooperatively pulsating neurons. Of course, these are not atomic or molecular, but cellular, namely neural codes. They can also be called cyclic memory codes, because cyclicity, i.e. the regularity of the discharges of the mass of neurons, reflected in the regularity of the waves of the electroencephalogram, is a specific feature of such codes.
The alphabet of neural memory units is easy to calculate: it is inversely related to the Livanov constant. One of the volleys marks the beginning of the period, which is why the alphabet of such code units is one smaller, i.e. N = 1/R − 1 = 9. The number of neural groups successively involved in the active state within one period is equal to the same number, N = 1/R − 1. As we see, the length of the code chains, i.e. of the sequentially involved neural ensembles, is limited by the same frequency refractoriness and is just as easy to calculate.
From here the maximum possible number of different code sequences (about half a billion) is derived using the formula C = N^N = 9^9 = 387,420,489. This is the capacity of long-term memory in memory units.
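A quick check of this arithmetic:

    R = 0.1                  # Livanov constant
    N = round(1 / R) - 1     # alphabet of code units: 1/0.1 - 1 = 9
    C = N ** N               # number of distinct code sequences
    print(N, C)              # 9 387420489, i.e. about half a billion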
Each memory unit is one specific concept or command, i.e. an action pattern. For comparison: the active vocabulary in one's native language is about 10,000 words, and even Shakespeare and Pushkin, whose complete vocabularies have been counted, used fewer than 100,000. Consequently, a person is able to speak dozens of languages, which, of course, is not new. What is new is that memory capacity is a function of one single physiological constant (R = 0.1), the Livanov fraction, named by analogy with another famous constant, the Weber fraction (see the next paragraph).
The calculated capacity allows us to find out how the volume of short-term memory depends on the alphabet of memorized stimuli. In one equation, we linked three fundamental psychological indicators: the capacity of long-term memory (C), the capacity of operational or working memory (H), and the capacity of attention (M), i.e. the number of different long-term memory images actualized at once:
(2)

where R is Livanov's physiological constant (R = 0.1) and A is the size of the given alphabet of stimuli.
It should be clarified once again that not all memory units, i.e. not all ensembles are updated at the same time. Only a small number M of ensembles are updated at each current moment in time. This number serves as a measure of attention span.
If a person focuses attention at a certain moment on memorizing binary elements (zeros and ones), then the smallest span of attention equals the size of the familiar, objectively given binary alphabet, i.e. M = A = 2. The largest span of attention equals the product M = A × N (in this example M = 2 × N), where N is a proportionality coefficient equal to the volume of short-term, or working, memory for the memorized elements.
Short-term memory volume H is measured as the maximum number of elements, not necessarily all different, that are correctly reproduced, with regard to their identity and position in the series, after a single presentation. The duration of a single presentation does not exceed 2-10 s.
From equation (2) follows a simple rule for predicting the capacity of short-term memory for a combination of features, if the capacity for each of the features is measured separately:
(3)
where N is the required volume for the combination, and H1, H2, H3 are the volumes of short-term memory for the individual features.
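Formula (3) can be illustrated numerically under an assumption of my own, not the author's printed equation: the spans quoted above (25-30 binary elements, about seven digits, one or two hieroglyphs) are consistent with taking H = ln C / ln A, and under that assumption the alphabets of combined features multiply, so the reciprocals of the spans add; a sketch:

    import math

    C = 9 ** 9                  # long-term memory capacity from the text

    def span(A):
        """Short-term memory span for an alphabet of size A, assuming H = ln C / ln A."""
        return math.log(C) / math.log(A)

    print(round(span(2), 1))    # 28.5 binary signals (text: 25-30)
    print(round(span(10), 1))   # 8.6 digits (text: seven plus or minus two)

    # For a combination of features the alphabet sizes multiply,
    # so the reciprocals of the spans add: 1/N = 1/H1 + 1/H2.
    print(1 / span(10 * 26), 1 / span(10) + 1 / span(26))   # coincide up to rounding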
This formula, derived analytically from the previous one, predicted a phenomenon previously unknown in psychology, and with high accuracy: the prediction error in the various experiments of N.A. Skopintseva, L.P. Bychkova, M.N. Syrenov and other researchers testing formula (3) was often only 3-5%. Compare this with the 25-35% given by Miller's rule, which does not work satisfactorily in this situation; within Miller's framework such a problem is unsolvable.
In the works of I.Yu. Myshkin and V.V. Mayorov [Myshkin, Mayorov, 1993], who fruitfully developed the theory of dynamic memory, as well as in other studies [Markina et al., 1995], the required dependences of memory volume on electroencephalogram parameters were established. Thus, I.P. Pavlov's goal was realized: to explain known psychological phenomena quantitatively, and to predict new ones, using physiological concepts, including the fundamental phenomena that describe the volume of memory and its speed.
It is noteworthy that the equations for calculating human memory capacity and its speed include two EEG parameters: frequency refractoriness (R) and the dominant frequency (F). They are, to use P.K. Anokhin's term, system-forming parameters that should explain many psychological indicators.
Equations (1), (2), together with their derivation and experimental verification, are considered in detail in some works [Lebedev, 1982; Lebedev et al., 1985].
The discovered physiological formulas for memory and its speed provided a solution to two long-standing psychological problems. We are interested, first of all, in the problem of instantaneous choice: searching memory for the information needed at every step of goal-directed behavior.
In cognitive psychology, perhaps the largest body of literature concerns the paradigm of S. Sternberg, a student of D. Luce, on the speed of searching for information in memory. Sternberg devised a method for measuring this speed, and a clear dependence of the speed on the size of the memorized series of stimuli was revealed. P. Kavanagh processed the data of many researchers and discovered a constant of about 1/4 s that characterizes the time needed to scan the entire contents of short-term memory, regardless of the memorized material.
According to Sternberg's method, a person first memorizes a series of stimuli, for example digits, as a whole, as a single image, and retains this new image until a single stimulus appears that either is or is not part of the memorized set, responding by pressing the appropriate key. Under these experimental conditions, the parameter M of equation (1) equals the volume H of short-term memory, and the parameter K = 1.
Comparing one stimulus image with the presented one requires t/H time, and recognizing the presented stimulus, if its image is present in the memorized series, requires from 1 to H comparisons, on average (1 + H)/2 comparisons, i.e. 0.5(H + 1)·t/H time units, which equals about 0.25 s for the typical values F = 10 Hz and R = 0.1.
The value calculated from physiological data differs by less than 3% from the experimental value Kavanagh determined from a variety of psychological data. It is interesting to note that for H = 1 (with K = 1 by the measurement conditions), the comparison time given by formula (1) is minimal, about 5 ms, equal to the Geissler constant to within 0.3 ms.
To estimate the average increase in time per stimulus for H > 1, the value found above, 0.5(H + 1)·t/H, the time to scan the entire contents of short-term memory, should be divided by the number of increments (H − 1) of the stimulus series. Psychological data fully agree with the physiological calculations [Lebedev et al., 1985; Lebedev, 1990].
Another prediction, which also follows purely analytically from equation (1), concerns the speed of visual search. Formula (1) establishes the dependence of search speed not only on individual electrophysiological constants but also on the size of the alphabet of perceived visual signals [Lebedev et al., 1985].
Due to cyclical fluctuations in the excitability of neural ensembles, images of long-term memory, including images of perceived and spoken words, are not updated all at once, but in turn, some more often than others. Based on the frequency of updating, i.e., for example, on the frequency of occurrence of the same word in written speech, one can judge the patterns of cyclic neural processes and, conversely, predict the characteristics of speech based on the characteristics of neural cycles.
If the moments of actualization of different images coincide, then such memory units have a chance to unite. In this way a new concept is developed. This is how learning occurs and acts of creativity are realized.
Only those memory images survive, i.e. are not merged forever into one ensemble, whose cyclic activity does not correlate. The periods of such activity cycles relate as members of the natural series 1:2:3:4..., and the probabilities of actualization as members of the harmonic series (1/1):(1/2):(1/3):(1/4)... The sum of the probabilities equals one, and the first term equals the physiological Livanov constant. From this the following formula is derived, which predicts the frequency of occurrence of a word (p) in connected speech as a function of its rank:

p = R/i, (4)

where i is the rank of the word by frequency of occurrence in the text.
This formula, which includes a physiological constant, expresses Zipf's law, known since the 1930s. From formula (4) follow equations for calculating how the size of a vocabulary depends on the size of the text in which it is realized, and for calculating the intervals between repetitions of the same word in a text [Lebedev, 1985]. Speech, written or oral, and not only poetry, is musical: the Livanov constant enters equation (4) of the harmonic series of words ranked by frequency.
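Incidentally, with p = R/i the probabilities can sum to one only over a finite vocabulary; a quick sketch of where that cutoff falls (the comparison with the active-dictionary figure quoted earlier is my own illustration):

    from itertools import count

    R = 0.1
    total = 0.0
    for i in count(1):
        total += R / i            # formula (4): p = R / i
        if total >= 1.0:
            print(i)              # 12367: rank at which the probability mass is exhausted,
            break                 # comparable to the ~10,000-word active dictionary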
Using multiple linear regression equations to assess schoolchildren’s learning ability based on EEG characteristics, we found that the alpha rhythm parameters that determine memory capacity also influence the success of predicting intellectual development [Artemenko et al., 1995], which is not surprising. Thus, the theory of cyclic neural memory codes allows us to take a fresh look at already known psychological laws.

The more objects in front of us, the more time we need to choose among them. That, in simplified form, is Hick's law.

The dependence, first observed back in the late 19th century, was experimentally confirmed only in the early 1950s by the psychologists William Hick (Great Britain) and Ray Hyman (USA).

Hick-Hyman formula

Scientists have developed a formula that describes the logarithmic relationship between reaction time and the number of objects from which to choose.
T = a + b * log2(n + 1)

Where
T is the total reaction time,
a and b are constants that describe individual characteristics of perception, such as the latency before the task begins and the individual coefficient of decision speed,
n - the number of equivalent alternatives from which to choose.
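A small sketch of the formula in code; the a and b values here are illustrative assumptions, since the constants are individual:

    import math

    def hick_time(n, a=0.2, b=0.155):
        """Reaction time for choosing among n equivalent alternatives (a, b illustrative)."""
        return a + b * math.log2(n + 1)

    for n in (2, 5, 50, 100):
        print(n, round(hick_time(n), 3))
    # the 2 -> 5 step and the 50 -> 100 step each add roughly the same time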

What does it mean

For clarity, let's build a graph, leaving aside the constants a and b, which depend solely on the individual characteristics of a person. The vertical axis T is the reaction time; the horizontal axis n is the number of alternative objects of choice.

Here's how the reaction time changes as the number of objects increases:

We see that the reaction time increased by 1 conventional unit when the number of objects grew from 2 to 5. Now note that it also increased by the same 1 conventional unit when the number grew from 50 to 100. This is the logarithmic dependence.

The fewer the objects, the faster and easier it is to select the one you need; once the number of objects grows beyond a certain point, the reaction time changes only slightly.

General Application of Hick's Law



When designing interfaces, Hick's law helps determine the optimal number of objects in a homogeneous array - for example, in a menu. It is usually used in conjunction with Fitts' law, which helps determine the optimal size of an element in terms of reaction speed.

Hick's law is also closely related to other principles of perception and psychological characteristics of decision making. It can be equally effectively considered in the context of proximity theory and the 7 +/- 2 rule, as well as other models of user behavior on the site.

Outside the Internet environment, the principles of Hick's law are implemented in almost any interface that interacts with the user: from the microwave control panel to the location and number of buttons on the TV remote control.

Features of the comprehensive application of Hick's and Fitts' laws in UX



Please note that the two laws describe actions that usually follow one another:
  1. First, the user makes a choice (Hick's law)
  2. Then he moves to the desired element (Fitts' law)
Thus, the total time can be calculated as the sum of the values given by the two formulas, as the sketch below shows.
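A rough sketch of this summation; the Fitts term uses the common Shannon formulation log2(D/W + 1), and all constants and pixel values are illustrative assumptions:

    import math

    def hick(n, a=0.2, b=0.155):
        """Time to choose among n alternatives."""
        return a + b * math.log2(n + 1)

    def fitts(d, w, a=0.1, b=0.1):
        """Time to reach a target of width w at distance d (Shannon formulation)."""
        return a + b * math.log2(d / w + 1)

    # total interaction time = choice + movement
    print(round(hick(10) + fitts(400, 40), 3))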
In a UX context, this means the following:
  • One long menu (or the arrangement of homogeneous elements in one block) is more convenient for the user than two or several separate ones.
  • When designing an interface, you need to take both laws into account and try to optimize both the size and position of the blocks, as well as the number of elements in each block.
  • You can also focus on laws when creating and optimizing task profiles. Hick's law is especially indicative when analyzing the process of localizing and filling out form fields.
Hick's Law is less known than Fitts' Law. However, it complements it perfectly and helps design user interactions more consciously and effectively.

  • The more objects there are, the more time the user needs to select the one he needs.
  • The relationship between reaction time and the number of choice alternatives is described by a logarithmic function.
  • Hick's law allows you to calculate the optimal number of objects in a block.
  • The application of Hick's law in conjunction with Fitts' law allows us to more accurately predict the user's reaction time during interaction with the interface.
Ignoring Hick's and Fitts' laws is like shooting with your eyes closed: you hit the target only by chance. If you take interface design seriously, both principles and the formulas that describe them will help you create a truly effective solution.

HICK'S LAW

states that the reaction time when choosing from a certain number of alternative signals depends on their number. This pattern was first established in 1885 by the German psychologist I. Merkel, and in 1952 it was experimentally confirmed by W. E. Hick, in whose work it took the form of a logarithmic function:

VR = a · log2(n + 1),

where VR is the average reaction time over all alternative signals; n is the number of equally probable alternative signals; a is the proportionality coefficient. The unit is introduced into the formula to take into account one more alternative: missing the signal.

HICK'S LAW

(English: Hick's law) an experimentally established dependence of choice reaction time on the number of alternative signals. It was first obtained by the German psychologist I. Merkel (1885) and later confirmed and analyzed by the English psychologist W. E. Hick (Hick, 1952). The dependence is approximated by a function of the form VR = a · log2(n + 1), where VR is the reaction time averaged over all alternative signals and n is the number of equally probable alternative signals. The "+ 1" in brackets represents an additional alternative, the case of a missed signal.

An equivalent formulation of Hick's law: reaction time increases as a linear function of the amount of information (measured in bits). Syn. Hick-Hyman law.

Hick's law

Specificity. According to this law, the reaction time when choosing from a certain number of alternative signals depends on their number. The pattern was first obtained in 1885 by the German psychologist I. Merkel. Accurate experimental confirmation came in Hick's studies, in which it took the form of a logarithmic function: VR = a · log2(n + 1), where VR is the average reaction time over all alternative signals; n is the number of equally probable alternative signals; a is the proportionality coefficient. The unit in the formula represents one more alternative: the skipping of a signal.

HICK'S LAW

an experimentally established dependence of choice reaction time on the number of alternative signals (the amount of incoming information). The dependence has the form VR = b · log2(n + 1), where VR is the average reaction time, n is the number of equally probable alternative stimuli, and b is the proportionality coefficient. The "1" in parentheses takes into account the additional alternative of missing the signal. The methods of information theory have made it possible to extend this formula to unequally probable signals, regardless of how the uncertainty (entropy) of the incoming signals changes: by changing the length of their alphabet or by changing the probabilities of their occurrence. In the more general form, VR = a + b · H, where H = Σ Pi · log2(1/Pi) is the amount of incoming information (on average per signal), n is the length of the signal alphabet, Pi is the probability of receiving the i-th signal, and a and b are constants with the following meaning: a is the latent reaction time, b is the reciprocal of the operator's information-processing speed (the time to process one binary unit of information). The speed of human information processing V = 1/b varies widely and depends on a large number of factors. Hick's law is used in engineering psychology and ergonomics in the information analysis of an operator's activity, in calculating the time the operator needs to solve a problem, and in matching the rate of information flow to the operator's psychophysiological capacity for receiving and processing information (throughput). When using Hick's law, one must take into account the possibilities and limitations of applying information theory in engineering psychology.
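A sketch of that general form in code; the a and b values are illustrative assumptions, not measured constants:

    import math

    def entropy(probs):
        """Average information per signal, H = sum of P * log2(1/P)."""
        return sum(p * math.log2(1 / p) for p in probs if p > 0)

    def reaction_time(probs, a=0.2, b=0.155):
        return a + b * entropy(probs)                     # VR = a + b*H

    print(round(reaction_time([0.25] * 4), 3))            # equally probable: H = 2 bits
    print(round(reaction_time([0.7, 0.1, 0.1, 0.1]), 3))  # skewed: smaller H, faster response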


Introduction

In preparation for the redesign and overhaul of wufoo.com, I spent some time re-learning the fundamentals of human-computer interaction, hoping to incorporate something new from the decades of accumulated research into creating simple interfaces. The first thing that surprised me along the way was that the material on this topic is extremely condensed and clearly aimed at mathematicians, written in the language of the academic elite. One might think that if the authors wanted to impress anyone (especially designers), they would write documents that were easier to read.
Remembering school, I noted that mathematics only acquired meaning for me while studying physics: instead of abstract functions, I needed graphs. Thinking along those lines, I thought it would be a good idea to give a visual interpretation of Fitts' Law, the cornerstone of human-machine interface design, and to explain both its concept and why these ideas are a little more complex than many would like.

The mathematics of the obvious

Published in 1954, Fitts' Law is an effective method for modeling a specific yet very common situation encountered in interface design. This situation involves a human-controlled object (whether physical, like a finger, or virtual, like a mouse cursor) and a target located somewhere else. The first diagram illustrates exactly this situation:

Mathematically, Fitts' law can be written as follows:

T = a + b · log2(D/W + 1),

where T is the average time spent performing the action, a is the time to start/stop the device, b is a value depending on the typical speed of the device, D is the distance from the starting point to the center of the target, and W is the width of the target measured along the axis of movement.
This mainly means that the time it takes to reach a target is a function of the distance to the target and its size. At first glance this seems obvious: the farther away the target and the smaller it is, the longer the positioning takes. Tom Stafford expands on this idea:
“While the underlying message is obvious (big things are easier to point at), the precise math is impressive because it involves a logarithmic function, meaning that the relationship between size and reaction time is such that making small objects a little bigger makes them much easier to point at (whereas small changes in the size of large objects no longer matter). The same applies to the distance to the target.”

Moving to the real world, we can say that it is much easier to point at a coin than at a freckle, but pointing at a house or an apartment complex makes virtually no difference. So the next time you optimize your website according to Fitts' Law, remember: if a link is already quite large, enlarging it further will not speed up access to it, while even a small increase in the size of small links already makes a difference (see the sketch below).
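A quick numerical illustration of Stafford's point, using the index of difficulty log2(D/W + 1) from the formula above with illustrative pixel values:

    import math

    def difficulty(d, w):
        """Fitts index of difficulty, log2(D/W + 1)."""
        return math.log2(d / w + 1)

    d = 500                                        # illustrative distance in pixels
    print(difficulty(d, 10), difficulty(d, 20))    # 5.67 -> 4.70: doubling a tiny target helps a lot
    print(difficulty(d, 200), difficulty(d, 400))  # 1.81 -> 1.17: doubling a big target helps less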

Fitts' Law is all about lines!

Wanting to learn a practical lesson from Fitts' equation, interface designers have come up with several rules for the practical application of one of the few laws of human-computer interaction. One of the rules is called

Target Size Rule

It combines the ideas that lead to Fitts's Law and Hick's Law (which will be discussed another time) to state that the size of a button should be proportional to the frequency of its use. Bruce Tognazzini, Apple's interface guru, even developed a great quiz to explain how Fitts' Law can be used to develop rules that radically improve operating system interfaces.
Before you go and blindly apply these rules to your applications, I would like to remind you that Fitts' Law describes a very specific situation. It starts from the assumption that the movement from the starting point is deliberate and directed, implying a strictly defined, straight trajectory (starting at high initial speed, as if there were no other targets and you knew exactly where you wanted to go). I have also seen that many people think Fitts' Law describes the following situation:

However, the equation above contains no value corresponding to the height of the target; it includes only the width! Thus, speaking of the limitations of Fitts' law as applied to interfaces, we can say that it describes a one-dimensional situation. Fitts' original experiments examined human performance in making horizontal movements toward a target: both the range of motion and the width of the final region were measured along the same axis, which means the model describing the law more likely looks like this:

Thus, when drawing size-optimization conclusions from Fitts' law, we cannot assume that vertical and diagonal movements are described by the same equations. It turns out that the ease of pointing at a particular target actually depends on the relative position of the starting point and the target.


In the example above, the cursor on the right is technically in a more favorable situation for hitting the target, because the angle of approach gives the target a larger effective width than in the situation on the left. Note that Fitts' Law works well for round targets, since the distance to the center is the same from every angle of approach; however, the law becomes less accurate for rectangular and more complex objects. In the following example, we make two attempts to optimize the clickable area of a link by increasing the size of the rectangle.


In the first case we increased the width of the target rectangle, and in the second its height. As you can see, for this starting point not every size increase makes the target easier to hit, which can matter for web designers working with CSS and the box model.

Positioning physical and virtual

Since the publication of Fitts' work, hundreds of derivative experiments have been performed. One interesting study, carried out in 1996 by Evan Graham and Christine McKenzie, analyzed the difference between pointing at objects in the real and the virtual world. It shows that the movement from the starting point to the target area can be divided into two parts: an initial high-speed phase and a deceleration phase.


In this study, the authors concluded that the first phase is influenced predominantly by the distance to the target: neither the scale of the image nor the size of the object speeds up the approach to the target (larger links will not increase the speed of movement). The only phase that affects the selection time of small objects at equal distances is the deceleration phase. Now here is something interesting:
"The difference between the virtual and physical display only appears in the second phase of the movement, when visual control of deceleration to small targets is a virtual task that takes longer than the physical one"
Simply put, links and buttons on a screen are easier to hit with a finger than with a mouse, and the problem with the mouse comes not from its ability to reach the target but from our ability to decelerate accurately. Apple, your multi-touch monitors are our only hope.

Infinite Boundary Rule

It follows from the fact that computer monitors produce a very interesting side effect in Fitts' target-selection model: they have something called "edges." Jeff Atwood, author of the blog Coding Horror, explained this almost perfectly in his article last year, "Fitts' Law and Infinite Width."
Since the pointing device can keep moving as far as it likes while the cursor stays pinned at the edge, targets at the edges of the screen are effectively targets with infinite width, as shown below.


For the operating system and any full-screen application, these borders are usually considered the most valuable space, since they are technically the most accessible: not only do they have infinite width, but reaching them does not require the user to go through the deceleration phase. That is why it is so incredibly easy and intuitive to assign actions such as switching between windows to the corners of the screen (as is done in Compiz Fusion; translator's note).


Unfortunately, web applications do not benefit from the Infinite Boundary Rule. Constrained to run inside a bordered browser window, buttons and links positioned at the edges and corners are not particularly interesting from a Fitts' Law point of view, unless the browser runs in full-screen mode, which is perhaps common only in web kiosks.


This also explains why web-based operating system interfaces will never be as good as those that take advantage of the entire monitor area.

Fitts still rules!

The above-mentioned limitations of Fitts' Law are absolutely no reason to throw it out the window. I just wanted to show that discussions around it continue to this day, just as they did 50 years ago. And although it technically cannot describe most interface situations accurately (people do not always move confidently toward a target, we do not use straight trajectories, there are usually several targets, which can lead to confusion, and so on), it does not seem that significantly more accurate models taking many other factors into account would change the fundamental truths underlying Fitts' Law.
“Fitts' Law has been shown to apply to a variety of cases, including various limbs (arms, legs, gaze detectors), manipulators, physical environments (including underwater) and user groups (young, old, people with delayed reactions and even those under the influence of drugs).”

In conclusion, the main message I would like designers to take away is that the task of application design is so complex and rich, involving so many variables, that one should be wary of applying Fitts' Law in a blanket manner. With monitor sizes increasing, mouse-acceleration techniques gaining popularity, and swipe technologies spreading to larger screens, it will be interesting to see how software developers take advantage of this to make covering long distances faster.

Hick's Law in web design helps you create an effortless user experience for your project. UX and UI designers care about the emotional state of their users, and Hick's Law will help you create products that do not overwhelm your future users.

Luckily, designers have a set of principles to guide them in creating good user experiences. Whether it's Gestalt theory, Occam's razor, color theory, or basic prototyping, it all determines the quality of a product.

In this post, we'll cover what you need to know about Hick's Law and how you can apply it when prototyping.

What is Hick's law?

If you go to Ozon.ru and enter “phone” into the search, you will be shown 7,555 results.

But that's good, right? We have a choice. No matter what phone you prefer, you can find it among 7,555.

So, Hick's law states that the more options a person is given, the longer it will take him to make a decision. A number of results like that in the search listing will simply scare the user away.

To make a choice, you will have to filter the search results, look at the characteristics and compare the results; this is a rather complex and time-consuming process.

Barry Schwartz, in his book The Paradox of Choice, writes: “Rather than being a fetishist about free choice, we should ask ourselves whether it gives us something or takes something away from us?”

Essentially, Hick's law states that the time required to make a decision increases logarithmically with the number of options. The more options you have, the longer it takes us to make a choice, and if there are too many options, then the amount of information necessary for analysis will be too large. Schwartz also argues that all these choices effectively imprison us.


Hick's Law in Web Design and User Experience

Now let’s talk about what Hick’s law can give to UX design. Can we use Hick's Law to improve the interaction between a product and the user?

Let's say you're developing a new website for a local library.

Imagine that this library has specialized books and information, hence many different but useful categories for visitors to browse. Let's say there are 50 categories.

When it comes to designing a navigation menu, you are unlikely to display all 50, otherwise it will simply confuse your visitor, and he will leave the site in search of a more convenient option.

Hick's Law helps designers design in a way that reduces abandonment and increases engagement.

Hick's law formula

In 1952, two psychologists, William Hick and Ray Hyman, tried to understand the relationship between the number of stimuli and an individual's response to them. Based on the results of the study, the following formula was derived:

RT = a + b log2(n)

This formula is fairly easy to understand. RT is the reaction time; "a" is the total time not related to decision making; "b" is an empirically derived constant based on the cognitive processing time for each option, which for humans is about 0.155 s; and "n" is the number of alternative stimuli.

To illustrate with an example: say you are on a website and need to navigate to a specific page. There is a menu list, and reading and understanding it takes you 2 seconds before you decide which of the 5 possible navigation options to take. The response time, according to Hick's Law, is as follows:

RT = 2 s + 0.155 s × log2(5) ≈ 2.36 s.

What this all boils down to is this: the time required to make a decision increases as the number of alternatives increases.

If our navigation menu had more options, the decision time would be longer. Thus, by creating interfaces with a large number of items and options, you create a situation in which the user may simply refuse to interact with the product; the sketch below puts numbers on this.
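A small sketch reproducing the arithmetic above and extending it to a larger menu; the 2 s base time is taken from the example:

    import math

    def rt(n, a=2.0, b=0.155):
        """Hick's law with the 2 s base time from the example above."""
        return a + b * math.log2(n)

    print(round(rt(5), 2))     # 2.36 s, as computed above
    print(round(rt(30), 2))    # about 2.76 s for a 30-link menu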

Suppose you face a situation with many navigation points and options in your UI design: for example, 30 links in the navigation menu or 12 images in a carousel. Hick's Law suggests reducing the number of options, but how? Here are several ways to apply Hick's law in design:

Working with content filters

Even though Ozon returns 7,555 results for the query “phone,” the design applies Hick’s Law to navigation menus.


As you can see, the first line after the title offers a choice of category, so we can, even intuitively, pick a suitable category out of 5 options without spending much time searching. Having switched to the category we need, say "smartphones," we see a new set of categories and filters. Interactions with these filters are faster because they are presented in limited numbers (4-5), which overall saves time compared with searching through 7,555 options.

By classifying the selection, the user is not overwhelmed. As UX designers, we should group menu items into categories of different levels - this will make the user more confident in navigating the site's content.

Limiting the number of options

One way to improve the UX of your product is simply to remove everything unnecessary from your interface design. Take input fields and forms: if the user has started filling out a form, the need has already formed and he is ready to spend some time on the fields. But if the overall design concept is emotional, counting on impulsive actions from the user, then filling out fields takes time, and the user may simply give up, which lowers conversion.

The right solution here would be to save the user’s time as much as possible and implement everything in one click, for example, through authorization or subscription via social networks.

Division into stages

If you can’t simplify the procedures performed by users in your product, create the illusion of simplicity. Break the entire process into several simple steps so that the user does not have to work with a large amount of data in any of them.

By breaking the process down into smaller steps with their own screens, you'll create a more user-friendly experience and the user will be more inclined to complete the process than if they had to fill out all the information in one go. This solution has proven itself particularly well in e-commerce.

Hiding minor functions

If your mobile app or website has complex, non-essential options that could overwhelm your users, simply hide them. This will make it easier for your audience to interact with the core functionality of your product, while those who need more advanced features can still reach them.

Hick's Law in web design saves your users' time, which is good form regardless of the type of product. It also contributes to a more positive experience, easier navigation, and better usability of your product.
