You probably know about perception. Every day, our brains process a large amount of information that arrives as proximal stimuli, such as light, sound, and touch, received by sensory organs like the eyes. Learning about visual perception is the first step toward designs that attract your target users.
Perception is the process by which information from a proximal stimulus is encoded, judged, and given meaning.
First, human perception involves signals from the world around us that are received by sensory organs such as the eyes, ears, and skin. When light from objects enters our eyes, the human visual system senses it. The light passes through the cornea and the lens, which project an upside-down image onto the retina, located at the back of the eyeball.
The retina contains millions of rods and cones, which connect to ganglion cells; these in turn transmit information through the optic nerve to the brain. Rods and cones are the two key receptors, and how well they function influences how we receive visual information.
Most cones are concentrated in a small area near the center of the retina known as the fovea, where visual acuity is greatest.
The rods, by contrast, are located at the edges, or periphery, of the retina. Rods are sensitive to contrast and overall brightness, work best in low light, and have poor spatial acuity.
What makes a graphic symbol easy to find rapidly?
How can something be highlighted in our brains, so that all visual queries are served effectively and rapidly? As designers, we should know how to stimulate peripheral vision and what makes an object easy to find, so that we can focus user attention on it.
At the back of the brain there is a region called the primary visual cortex, known as V1, where different kinds of visual information are processed. Visual input from V1 activates neurons in visual areas V2, V3, and V4, where individual neurons each respond to a specific simple feature. V2's neurons respond to slightly more complex patterns, building on the processing already done in V1.
V1 responds to basic feature types such as orientation, size, and motion. The key to attracting an eye movement is making an element pop out from the page.
Areas V1 and V2 can each be thought of as a parallel computer, far more complex and powerful than anything humans have built to date. V1 and V2 provide the inputs to two distinct processing systems, called the what system and the where system.
How the what and where systems respond determines how easily we can direct a rapid eye movement to an object and focus our attention on it.
The what system identifies objects in the environment, helping us recognize what a pattern of light and color represents.
The where system concentrates on the location of information and guides actions in the world, such as moving from place to place and making eye movements.
The responses of the cells that are stimulated pass both up the what pathway, biasing what is seen, and up the where pathway, to regions that send the signals that make eye movements occur.
What Stands Out = What We Can Bias For
Visual search is not random. If we are looking for something smallish, we can only see it when we are looking directly at it. But how do the eyes get directed to the right locations if the information has not yet been processed?
If the target is distinct in some feature channel of the primary visual cortex (V1), we can program an eye movement toward it. To make an object pop out, however, it is usually not enough that low-level feature differences simply exist; we sometimes need to combine several feature channels.
Pop-out effects depend on the relationship of a visual search target to the other objects that surround it.
As a designer, what if you wish to make several things easily searchable at the same time? The solution is to use different channels, such as orientation, size, color, and motion, or to combine several channels to enhance visibility.
One key to efficient visual search is the use of pop-out properties: the elements of form (size, elongation, and orientation), color (hue and lightness), motion, and spatial layout. Large-scale graphic structure can also help with visual search when the searcher already knows where the important details are.
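To make the channel idea concrete, here is a small sketch of the classic distinction between efficient (feature) search and slow (conjunction) search: a target pops out only when some channel distinguishes it from every distractor. The feature names and items are hypothetical examples, not from this article.

```python
# Toy model of pop-out: a target "pops out" when it differs from EVERY
# distractor in at least one low-level feature channel (color, size, etc.).

def pops_out(target, distractors, channels=("color", "size", "orientation")):
    """Return the channels in which the target differs from all distractors."""
    return [c for c in channels
            if all(target.get(c) != d.get(c) for d in distractors)]

# A red button among gray ones differs from all of them in the color
# channel, so one glance finds it (efficient, parallel search).
target = {"color": "red", "size": "small", "orientation": 0}
distractors = [{"color": "gray", "size": "small", "orientation": 0},
               {"color": "gray", "size": "large", "orientation": 0}]
print(pops_out(target, distractors))  # -> ['color']

# If the target shares each feature with some distractor, no single channel
# distinguishes it, and search becomes slow and serial.
mixed = [{"color": "red", "size": "large", "orientation": 0},
         {"color": "gray", "size": "small", "orientation": 0}]
print(pops_out(target, mixed))  # -> []
```

The design takeaway is the first case: give the one element you want found a feature value nothing else on the screen shares.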
Our color vision is limited
Color vision is optimized to detect contrast, not absolute brightness. A well-known example is shown in Image 1: the squares marked A and B are the same gray, yet we see B as white because it lies in the cylinder's shadow.
Image 1. The squares marked A and B are the same gray; we see B as white because it lies in the cylinder's shadow. Source: Johnson, Jeff, Designing with the Mind in Mind.
Also, our ability to distinguish colors depends on how the colors are presented to users. The paler colors are, the harder they are to tell apart. The smaller or thinner objects are, the harder it is to distinguish their colors. And the more separated color patches are, the more difficult it is to distinguish their colors.
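One practical way to check whether two colors will be distinguishable is the WCAG 2.x contrast ratio. The sketch below uses the standard sRGB relative-luminance formula from that guideline; the specific colors are made-up examples.

```python
# WCAG 2.x contrast check: ratios near 1:1 mean two colors are hard to
# tell apart; black on white reaches the maximum of 21:1.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 ints (WCAG 2.x)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(c1, c2):
    """Contrast ratio between two colors, from 1.0 up to 21.0."""
    l1, l2 = sorted((relative_luminance(c1), relative_luminance(c2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # -> 21.0

# Two pale grays: the ratio is barely above 1, so users will struggle
# to tell them apart, just as the text above warns.
print(round(contrast_ratio((230, 230, 230), (240, 240, 240)), 2))
```

Running a check like this before shipping a palette catches pale-on-pale combinations that look fine on a designer's calibrated monitor but fail for users.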
Gestalt Principles Every Designer Needs to Know
Our vision is optimized to see structure. Structure can be conveyed using the gestalt principles: proximity, similarity, continuity, closure, symmetry, figure/ground, and common fate.
Proximity states that objects near each other appear grouped, while those farther apart do not. The relative distances between objects in a display affect our perception of whether and how they are organized into subgroups. For example, in Image 1–1 the graphic on the left is perceived as one group, while the graphic on the right is perceived as two groups.
Image 1–1. Source: Andy Rutledge
Similarity states that objects that look similar appear grouped, all other things being equal. Similar elements can be grouped by color, shape, or size. For example, in Image 1–2, shape causes us to interpret the elements as columns of circles and squares.
Image 1–2. Source: Jon Hensley.
Proximity and similarity are often used to group information, such as organizing patterns and objects.
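The proximity principle can be modeled very simply: elements whose gaps stay below some threshold read as one group, and a larger gap starts a new group. The sketch below does this with single-link grouping over 1-D positions; the positions and threshold are invented for illustration.

```python
# Rough model of the proximity principle: split a row of elements into
# perceptual groups wherever the gap between neighbors exceeds a threshold.

def group_by_proximity(xs, gap):
    """Split sorted 1-D positions into groups at gaps larger than `gap`."""
    xs = sorted(xs)
    groups = [[xs[0]]]
    for x in xs[1:]:
        if x - groups[-1][-1] <= gap:
            groups[-1].append(x)   # close enough: same perceptual group
        else:
            groups.append([x])     # large gap: a new group begins
    return groups

# Evenly spaced items read as a single group...
print(group_by_proximity([0, 10, 20, 30, 40, 50], gap=12))
# -> [[0, 10, 20, 30, 40, 50]]

# ...but widening one gap splits the row into two perceived groups,
# much like the left vs. right graphics of Image 1-1.
print(group_by_proximity([0, 10, 20, 50, 60, 70], gap=12))
# -> [[0, 10, 20], [50, 60, 70]]
```

The same intuition is why whitespace alone, with no boxes or rules, is usually enough to show which labels belong to which controls.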
Continuity states that our visual perception is biased to perceive continuous forms rather than disconnected segments, even filling in missing data if necessary. A well-known example of the continuity principle in graphic design is the IBM® logo. It consists of disconnected blue patches, and yet it is not at all ambiguous; it is easily seen as three bold letters, perhaps viewed through something like Venetian blinds.
Closure states that our visual system automatically tries to close open figures so that they are perceived as whole objects rather than separate pieces. For example, in Image 1–3, human vision is biased to see whole objects even when they are incomplete.
Closure is often used in logo design; well-known examples include the Apple, NBC, and Adidas logos.
Symmetry states that we tend to parse complex scenes in ways that reduce complexity: the human visual system tries to resolve complex scenes into combinations of simple, symmetrical shapes. For example, in Image 1–4 we see two overlapping diamonds, not two touching corner bricks or a pinch-waisted octahedron with a square in its center.
Image 1–4. Source: Johnson, Jeff, Designing with the Mind in Mind.
Figure/ground states that our mind separates the visual field into the figure (the foreground) and the ground (the background). The figure consists of the elements of a scene that are the object of our primary attention; the ground is everything else.
Common fate concerns moving objects: objects that move together are perceived as grouped or related. One implication of common fate is common motion, which is used in some animations to show relationships between entities. In Image 1–5, items appear grouped or related when they move together.
Image 1–5. Common fate example
In the real world, the gestalt principles usually work in concert, not in isolation. As designers, we should keep them all in mind when designing a display.