Moreover, the screen's transparency must not be compromised in either bright or dark environments, or the wearer's field of vision suffers. This means the transparent screen has to adjust its display intensity to match the surroundings.
Increasing the display intensity inevitably reduces the screen's transparency and obstructs the wearer's view, while reducing it degrades image quality and hurts the viewing experience.
This is an inherent contradiction, and solving it means adapting to the conditions at hand: in which usage scenarios should display intensity be raised, and in which should it be lowered? This cannot rely on manual control alone; the system must adjust itself intelligently and automatically based on the wearing environment.
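As a rough illustration of what that automatic adjustment could look like, here is a minimal sketch in Python. The sensor and display functions and the lux thresholds are hypothetical assumptions made for illustration, not a real headset API.

```python
# Minimal sketch: map ambient light (lux) to display intensity so the
# screen stays readable in sunlight but does not block the view in the dark.
# read_ambient_lux() and set_display_brightness() are hypothetical stand-ins
# for whatever sensor and display APIs a real headset would expose.

def brightness_for_lux(lux: float) -> float:
    """Return a display intensity in [0.1, 1.0] for a given ambient level.

    The thresholds below are illustrative, not calibrated values: dim rooms
    get a low intensity to preserve transparency, bright sunlight gets full
    intensity to keep the image visible.
    """
    DARK_LUX, BRIGHT_LUX = 50.0, 10_000.0  # assumed calibration points
    if lux <= DARK_LUX:
        return 0.1
    if lux >= BRIGHT_LUX:
        return 1.0
    # Interpolate linearly between the two calibration points.
    t = (lux - DARK_LUX) / (BRIGHT_LUX - DARK_LUX)
    return 0.1 + 0.9 * t


def adjust_display(read_ambient_lux, set_display_brightness) -> None:
    """One control-loop step: sample the sensor, update the display."""
    set_display_brightness(brightness_for_lux(read_ambient_lux()))
```

A real implementation would also smooth the sensor readings over time so the display does not flicker when lighting changes abruptly.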
Beyond the display problems, there is also the question of information and data processing capability, which again breaks down into hardware and software.
First, on the hardware side, AR glasses differ from VR glasses. Because they are used in different environments and scenarios, AR glasses need to be worn for long periods and must adapt to a wide variety of surroundings, so their size and weight must be kept as low as possible.
The ideal is a device no larger or heavier than an ordinary pair of glasses; anything bulkier or heavier hurts the wearing experience.
Equally contradictory is how to fit a large amount of hardware into something that light and small, which places extremely high demands on the integration of the whole hardware system.
The common approach today is to build these components into the frame and temples of the glasses, but even so the result is still bulky and inconvenient to wear.
Because of the size and weight limits, the hardware cannot be very powerful either, which greatly constrains the system's computing and processing capability. Improving information and data processing capability is therefore another problem the R&D team must solve.
Granted, with the promotion and popularization of 5G technology, transmitting data at high speed is no longer a problem. But receiving and processing these massive volumes of information in a timely manner remains very difficult.
A simple environment may be manageable, but what about a complex one?
Imagine a scene: you are walking through a busy intersection where all the surrounding buildings, billboards, and even some street facilities offer AR annotations. Your AR glasses would have to take in an enormous amount of AR data at once and display it on your screen simultaneously, which places great demands on the processor and the system.
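One plausible way to cope, sketched below in Python, is to rank incoming annotations and render only the most relevant ones. The `Annotation` structure, the ranking criteria, and the rendering budget are all assumptions made for illustration, not a description of any real AR pipeline.

```python
# Sketch: when many AR data sources broadcast at once, rank annotations
# by relevance (here: priority, then distance) and render only the top
# few, instead of trying to display everything simultaneously.
from dataclasses import dataclass

@dataclass
class Annotation:            # hypothetical incoming AR data item
    label: str
    distance_m: float        # distance from the wearer
    priority: int            # 0 = low, 2 = high (e.g. safety warnings)

def select_visible(annotations: list[Annotation], budget: int = 5) -> list[Annotation]:
    """Keep only `budget` annotations: higher-priority and nearer first."""
    ranked = sorted(annotations, key=lambda a: (-a.priority, a.distance_m))
    return ranked[:budget]

# Example: a crowded intersection produces far more annotations than the
# display (and processor) can reasonably handle at once.
feed = [Annotation("cafe ad", 40, 0), Annotation("crossing signal", 8, 2),
        Annotation("bus stop info", 15, 1), Annotation("billboard", 60, 0)]
for a in select_visible(feed, budget=2):
    print(a.label)   # -> crossing signal, bus stop info
```

The design choice here is simple triage: rather than overwhelming the processor and the display, the system spends a limited budget on the items that matter most to the wearer.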
The last problem lies in the interaction system. VR can be controlled with wearable glove sensors or a handheld controller.
The same does not work for AR: because AR must adapt to many different environments and scenarios, it needs a simpler, more direct method.
Three approaches come to mind for this. The first is eye-tracking control technology.
An eye-capture sensor tracks eye rotation, blinking, and the center of gaze in real time for interactive control. This technology has already been implemented and performs well on many devices.
In normal use, it is paired with head-movement sensors. For example, when you look up, the on-screen content scrolls up; when you lower your head and look down, the content scrolls down; when you look left or right, the content slides left or right accordingly.
Blinking performs operations such as confirming a selection: one blink to confirm, two blinks to cancel, and so on, much like the left and right mouse buttons.
The focus of the eyes corresponds to the mouse cursor: wherever you look, the focus follows, as flexibly as a sliding cursor.
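A minimal sketch of how such events might map onto interface actions follows; the event names and the `ui` object are hypothetical, not part of any actual eye-tracking SDK.

```python
# Sketch: dispatch head/eye events to UI actions, mirroring the mapping
# described above (head tilt scrolls, blinks act like mouse buttons,
# gaze position acts like the cursor). All event names are made up.

def handle_event(ui, event: str, gaze_xy=None) -> None:
    actions = {
        "head_up":      lambda: ui.scroll(dy=+1),
        "head_down":    lambda: ui.scroll(dy=-1),
        "head_left":    lambda: ui.scroll(dx=-1),
        "head_right":   lambda: ui.scroll(dx=+1),
        "blink_single": ui.confirm,    # like a left click
        "blink_double": ui.cancel,     # like a right click
    }
    if event == "gaze_move" and gaze_xy is not None:
        ui.move_cursor(*gaze_xy)       # gaze focus = cursor position
    elif event in actions:
        actions[event]()
```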
The second approach is gesture control: sensors capture the movements and changes of the user's hand gestures for interactive control.
For example, slide your hand up or down and the on-screen content scrolls up or down, and likewise for left and right. Pinching your fingers can move the view or zoom it in and out; tapping a finger confirms, waving cancels, and so on.
Gesture-recognition technology is developing rapidly, but recognizing fast-changing gestures remains difficult. The sensor must capture gestures accurately, and the processor must quickly and reliably convert those gestures into the corresponding operating instructions.
A further complication is that everyone's gestures differ, and even the same person's gestures vary from one time to the next; a single gesture changes somewhat across different times, environments, and scenarios.
This makes capture and recognition harder, and it therefore requires the system to have good fault tolerance.
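In practice, fault tolerance often means refusing to act unless the recognizer is sufficiently confident. The sketch below assumes a hypothetical classifier that returns per-gesture confidence scores; the gesture names, command mapping, and threshold are illustrative.

```python
# Sketch: tolerate per-user and per-session gesture variation by acting
# only when the recognizer's confidence clears a threshold; otherwise
# ignore the input rather than executing the wrong command.

COMMANDS = {"swipe_up": "scroll_up", "swipe_down": "scroll_down",
            "pinch": "zoom", "tap": "confirm", "wave": "cancel"}

def interpret(scores: dict[str, float], threshold: float = 0.8) -> str | None:
    """scores: gesture name -> classifier confidence in [0, 1].

    Returns the mapped command, or None when no gesture is confident
    enough -- the "fault tolerance" the text calls for.
    """
    gesture, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None                      # ambiguous: do nothing
    return COMMANDS.get(gesture)

print(interpret({"swipe_up": 0.91, "tap": 0.05}))   # -> scroll_up
print(interpret({"swipe_up": 0.55, "tap": 0.40}))   # -> None
```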
The third interaction method, which sounds more like science fiction, is the brain-computer control technology that has recently become popular. Put simply, it controls operations through thought and imagination.
When we imagine a thing, an image, or an object, the brain emits different brain waves. Brain-computer control technology uses these differing brain waves to control and interact with devices.
For example, when your brain imagines the idea of moving forward, it releases a corresponding brain wave. The brain-computer system recognizes this wave and converts it into the matching electrical-signal instruction that drives the device forward.
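In highly simplified form, that pipeline is: record a signal, extract features, match them against trained patterns, and emit a device command. The sketch below uses made-up feature vectors and a nearest-template classifier; real BCI systems rely on far more sophisticated signal processing.

```python
# Highly simplified sketch of a brain-computer control loop: compare a
# feature vector extracted from the current EEG window against stored
# templates for trained "thought" patterns, then emit the matching
# device command. The templates and features here are made-up numbers.
import math

TEMPLATES = {               # hypothetical per-user calibration data
    "move_forward": [0.9, 0.1, 0.3],
    "stop":         [0.1, 0.8, 0.2],
}
COMMANDS = {"move_forward": "FORWARD", "stop": "STOP"}

def classify(features: list[float]) -> str:
    """Return the command whose template is nearest to the features."""
    thought = min(TEMPLATES, key=lambda k: math.dist(TEMPLATES[k], features))
    return COMMANDS[thought]

# An EEG window whose features resemble the "move forward" pattern:
print(classify([0.85, 0.15, 0.25]))   # -> FORWARD
```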
This technology is already applied in some fields. One example is brain-controlled wheelchairs for severely paralyzed patients, who can use their thoughts to make the wheelchair move and stop.
Brain-computer control has also been used for text input; reportedly the input speed can reach 70 words per minute, which is remarkably fast.
Although the technology is developing rapidly and is a hot field that tech giants in many countries are racing to study, the controversy around it has not died down and has even intensified.
The core question being debated is whether the technology is safe. First, is it safe to use? Will wearing a brain-wave-capturing sensor for long periods damage the brain, or affect intelligence, the nervous system, or overall health?
Second, if brain-computer devices can read brain waves, it follows that they can also write them. Internet security problems are already growing more serious; if hackers mastered the relevant techniques, could they use brain-computer technology to invade the human brain and steal its data and secrets?
Or, more seriously still, what if a hacker used this channel to implant a virus into a person's brain? Would we have to restart the brain, format it outright, or install antivirus software in it?