Hard Knowledge: Opening the Channel Between the B-Side and the C-Side Depends on It! A Popular VR Glossary

Today, the offline promotion model is popular in China, and offline experiences clearly outperform purely online channels. Price has always been a primary consideration for consumers, and offline experience stores let the general public enjoy the immersion of VR for a minimal fee.

On March 31st, the 3Glasses new product launch in Shenzhen concluded successfully. At the conference, 3Glasses showed the Lanpo S1, the 3Glasses Wand, the 3Box, Luna, and VRSHOW. The Lanpo S1 leads the industry on several specifications: it is the first domestic head-mounted device to use a custom-built binocular VR-dedicated screen, with 2K resolution, a 110° field of view, a 120Hz refresh rate, and a PPI above 700. Paired with the 3Glasses Wand, it supports hand positioning within a 2m space. Behind these impressive specifications lies 3Glasses' determination to build on hardware, content, and services as its core, open the channel between the B-side and the C-side, and serve its users.

It was also emphasized that users prefer to buy VR products at offline experience stores and at lower prices. However, once the price or the experience exceeds what they consider reasonable, even a better-performing product struggles to attract buyers. Most important of all, a product, good or bad, needs easy-to-understand "hard knowledge" that spreads through the user community and forms a reputation. Apple's iPhone, for example, became ubiquitous not because of the technology showcased at Apple's launch events, but because users' first-hand experience and the spread of basic smartphone knowledge built a strong reputation. Advertising flyers work the same way: even if they end up discarded, their content can still be read first. For ordinary consumers, however, it is hard to extract a product's advantages from a pile of VR jargon. Before VR can be popularized, the related terminology needs to be understood.
1. Virtual reality headset/HMD: When you read or hear about virtual reality, this is probably the term you encounter most often, especially because in most cases the headset is the hardware that currently delivers the virtual reality experience. It looks a bit like oversized goggles or a kind of helmet that you strap over your eyes or onto your head. Put on a VR headset and you can watch VR content. Some headsets include head-tracking sensors, while others do not.
2. Head tracking: This term refers to sensors that track the user's head movements and then shift the projected image based on the recorded data so that it matches the head's position. In short, if you wear an Oculus Rift, head tracking means that when you look left, right, up, or down, you see the scene in that direction inside the headset.
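As a rough illustration (not any vendor's actual code), the core of head tracking is turning the sensed orientation of the head into a view direction for the virtual camera. A minimal sketch in Python, assuming a simple yaw/pitch convention rather than the full quaternion orientation a real headset would use:

```python
import math

def look_direction(yaw_deg, pitch_deg):
    """Convert sensed head yaw/pitch (in degrees) into a unit view vector.

    Convention (an assumption for this sketch): yaw 0 / pitch 0 looks down
    the +Z axis, positive yaw turns the head to the right, positive pitch
    tilts it upward.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),   # x: left/right
            math.sin(pitch),                   # y: up/down
            math.cos(pitch) * math.cos(yaw))   # z: forward

# Turning the head 90 degrees to the right points the camera along +X.
view = look_direction(90, 0)
```

Each frame, the renderer re-reads the sensors and re-aims the camera along this vector, which is why the scene appears to stay fixed while the head moves.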
3. Eye tracking: Eye tracking is somewhat similar to head tracking, except that the image presented matches the direction the user's eyes are looking. For example, a headset called FOVE (which you may have heard of) builds eye-tracking technology into its design. In its demo, the player aims a weapon in different directions with the eyes alone (it looks like a laser sight); another demo, similar to a ring-toss game, relies on the user's gaze to aim and determine the direction in which rings are launched.
4. Field of view: The field of view is the angular extent of the scene visible to the viewer. A wider field of view is particularly important because it deepens the user's immersion in the VR experience. The normal field of view of the human eye is approximately 200°, so the wider a headset's field of view, the stronger the sense of immersion.
5. Latency: If you have tried a VR experience, you may have noticed that when you turn your head, the on-screen content fails to keep up. That lag is latency. It feels uncomfortable precisely because it never occurs in the real world, and it is the complaint users raise most often about certain VR experiences. Because it depends on many factors there is no single standard for it, but it remains a key measure of VR technology.
6. Simulated motion sickness: Motion sickness is caused by the brain receiving inconsistent information from the body. People perceive the external environment through hearing, sight, and touch, and after long evolution the human senses have become highly coordinated. In a VR experience, your eyes tell your brain "We are moving!" while the body's other receptors insist "No, we are standing still. Something is wrong!" According to Science magazine, the brain can interpret this mismatch as a sign of poisoning, and the body tries to expel the "toxin" by vomiting. So wanting to fly or jump while experiencing virtual reality is, for many people, a bad idea. Everyone's constitution differs, though, and not everyone gets motion sickness. This is a great challenge for developers: they need to figure out how to move people through a scene without making them sick.
7. Judder: Judder is very noticeable shaking or jitter in the image. For VR, Oculus Chief Scientist Michael Abrash defines it as "the combination of smearing and strobing, which is particularly noticeable in VR/AR headsets."
8. Refresh rate: Whether you are watching TV or inside a virtual reality experience, you are seeing a rapid series of images, and the refresh rate is how quickly those images are updated. A high refresh rate reduces latency, which in turn reduces simulated motion sickness, and it gives the player a more responsive experience. For the best experience you want a refresh rate of at least 60 frames per second.
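The link between refresh rate and latency can be made concrete: at a given refresh rate, each frame has a fixed time budget, and any frame that takes longer to render shows up as lag. A small illustrative calculation (not tied to any particular headset):

```python
def frame_budget_ms(refresh_hz):
    """Milliseconds available to render each frame at a given refresh rate."""
    return 1000.0 / refresh_hz

# At 60 Hz a frame must be ready in about 16.7 ms;
# at 120 Hz (as on the Lanpo S1's screen) only about 8.3 ms remain.
budget_60 = frame_budget_ms(60)
budget_120 = frame_budget_ms(120)
```

This is why higher refresh rates demand much faster rendering: doubling the rate halves the time each frame is allowed to take.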
9. Haptic feedback: This term is not unique to virtual reality, but it has a lot to do with it. Haptic feedback means that in the VR world users feel as if they have touched something when in fact they have not. In June, Oculus announced its input device, the Oculus Touch controller, codenamed Half Moon; haptic feedback is one of its functions.
10. Presence: If virtual reality is about immersing users in a new environment, presence is what the developer is striving for. Simply put, it means the user feels they are really there, no matter where they physically are.
11. Metaverse: There is no strict definition of this term at this stage, which makes it a bit tricky. In a nutshell, it is something like the theoretical foundation of virtual reality; Forbes defines it as an "aggregate noun for virtual reality," but there is still much controversy over what fields it applies to and what exactly it is. One suggestion: read Neal Stephenson's Snow Crash, a science-fiction novel published in 1992 that anticipated virtual reality.
12. Microlens array display technology: Through in-depth study of the microlens array structure, the zooming principle of a microlens array acting on a micropattern has been revealed. On this basis, the relationships between the microlens array's structural parameters, the micropattern's structural parameters, and the micropattern array's moving speed, moving direction, and magnification ratio can be established, and the microlens array can then be used to achieve zoomable, dynamic, stereoscopic display of micrographics.
13. Near-eye light field display: NVIDIA developed a head-mounted display device named the Near-Eye Light Field Display. Internally it reuses some components of Sony's HMZ-T1 3D head-mounted OLED display, while its enclosure is manufactured with 3D printing. The near-eye light field display replaces the optical lenses used in similar products with a microlens array with a focal length of 3.3mm, a design that cuts the thickness of the display module from 40mm to 10mm and makes it easier to wear. Meanwhile, NVIDIA's latest GPUs perform real-time ray tracing to decompose the image into dozens of arrays at different viewing angles, which the microlens array then reconstructs and presents to the user's eyes, so that the viewer naturally observes stereoscopic images from different perspectives, just as in the real world. Since the near-eye light field display reconstructs the scene through the microlens array, vision-correction parameters can simply be added during the GPU computation to offset defects such as nearsightedness or farsightedness, which means glasses wearers can also use this product to enjoy true, clear 3D images with the naked eye.
14. Field of view angle: In an optical instrument, with the lens as the vertex, the angle subtended at the lens by the largest extent over which the object being measured can be imaged is called the field of view angle. It determines the instrument's field of view: the larger the angle, the larger the field of view and the smaller the magnification. In layman's terms, objects beyond this angle will not be captured by the lens. In a display system, the field of view angle is the angle subtended by the edges of the display at the viewing point (the eyes).
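The display-system definition above has a simple geometric form: the angle subtended by a screen of width w viewed face-on from distance d is 2·atan(w / 2d). A quick sketch (the variable names are mine, not from any standard):

```python
import math

def display_fov_deg(screen_width, viewing_distance):
    """Angle (degrees) subtended at the eye by a flat display of the
    given width, seen face-on from the given distance (same units)."""
    return math.degrees(2 * math.atan(screen_width / (2 * viewing_distance)))

# A screen 1 m wide viewed from 0.5 m away subtends 90 degrees.
fov = display_fov_deg(1.0, 0.5)
```

The same geometry explains why headsets place small screens very close to the eye behind magnifying optics: a short effective viewing distance yields a wide field of view.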
15. Naked-eye 3D: Naked-eye 3D exploits the parallax between the two human eyes so that, without any auxiliary equipment (such as 3D glasses or a helmet), the viewer perceives a realistic, stereoscopic image with space and depth. Technically, naked-eye 3D can be divided into three types: light barrier, lenticular lens, and directional light source. Its biggest advantage is shedding the shackles of glasses, but it still has many deficiencies in resolution, viewing angle, and viewing distance.
16. HMD: A head-mounted display (Head-Mounted Display) is a wearable virtual display, also known as a glasses-type monitor or a portable theater. These are popular names: the device looks like a pair of glasses, and since it is designed to play large-screen video images, it is also called video glasses. Video glasses were initially demanded by, and applied in, the military. Today's video glasses are at roughly the stage the early brick-sized mobile phones once occupied; with 3C convergence, they are expected to develop very rapidly.
17. HMZ: By April 24, 2015, when Sony announced it was discontinuing the HMZ series, the line had spanned three generations: the HMZ-T1 in 2011, the HMZ-T2 in 2012, and the HMZ-T3/T3W in 2013. The HMZ-T1's display resolution was only 720p and its headphones delivered virtual 5.1-channel sound, but it also dragged along a large hub box. It used two 0.7-inch 720p OLED screens which, once worn, produced the same effect as watching a 750-inch giant screen from 20 meters away. In October 2012, Sony released a slimmed-down version of the HMZ-T1, the HMZ-T2, which cut the weight by 30% and dropped the built-in headphones so users could plug in their own. Although the screens kept the same 0.7-inch 720p OLED parameters, the introduction of a 14-bit Real RGB 3x3 color-conversion matrix engine and a new optical filter did improve image quality. In 2013 the HMZ-T3/T3W brought a substantial upgrade: it achieved wireless signal transmission for the first time, letting wearers of the wireless HMZ-T3W move around within a limited range, no longer bound by cables.
18. Ray tracing algorithm: For generating visible images in 3D computer graphics, ray tracing is more realistic than ray casting or scanline rendering. It works by tracing, in reverse, the optical paths that intersect an imaginary camera lens. Because a large number of such rays traverse the scene, the visibility information and the lighting conditions seen from the camera angle can be reconstructed. Reflection, refraction, and absorption are computed wherever a ray intersects an object or medium in the scene. Ray-traced scenes are often described by programmers using mathematical tools; they can also be described by visual artists using intermediate tools, or built from images or model data captured by techniques such as digital photography.
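The core operation described above, finding where a ray from the camera first hits an object, reduces to solving a quadratic for simple shapes. A minimal ray-sphere intersection sketch (illustrative only; it assumes the ray direction is unit length):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance t along a unit-length ray to its nearest intersection
    with a sphere, or None if the ray misses entirely."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c          # discriminant of |o + t*d - c|^2 = r^2
    if disc < 0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None    # ignore hits behind the camera
```

A full tracer repeats a test like this for every pixel against every object, then spawns new reflection and refraction rays at each hit point, which is why the method is both realistic and expensive.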
19. Realistic rendering technology: In a virtual reality system, the requirements on realistic rendering differ from those of traditional photorealistic graphics. Traditional rendering only demands image quality and realism, but VR additionally demands that the display update no slower than the user's visual perception, or the picture will lag. Real-time 3D rendering in VR must therefore generate graphics on the fly, producing no fewer than 10 to 20 frames per second, while still reflecting the physical properties of the simulated objects. To make scenes both realistic and real-time, texture mapping, environment mapping, and anti-aliasing are usually employed.
20. Image-based real-time rendering technology: Image-based rendering (IBR) differs from the traditional geometric approach, which first builds a model and then renders it under fixed light sources. IBR generates views from unknown angles directly from a series of captured images, transforming, interpolating, and warping them to obtain scenes from different viewpoints.
21. Three-dimensional virtual sound technology: In everyday life, the stereo we hear comes from left and right channels, and its obvious effect is that the sound seems to come from the plane in front of us; it cannot make us feel that someone is calling us from behind. In the real world, sound radiates from its source and we can accurately judge its position, which plain stereo cannot reproduce. Three-dimensional virtual sound conveys position: in a virtual scene, what the user hears fully meets the expectations of the auditory system in a real environment. Such a sound system is called three-dimensional virtual sound.
22. Speech recognition technology: Automatic Speech Recognition (ASR) converts a speech signal into text that a computer can process, enabling the computer to recognize the speaker's spoken instructions and content. Fully recognizing speech is very difficult and requires several stages, such as parameter extraction, reference-model construction, and pattern matching. Through continued research, methods such as the Fourier transform and spectral parameters have been applied, and recognition accuracy keeps improving.
23. Speech synthesis technology: Speech synthesis, or Text-to-Speech (TTS), refers to artificially synthesizing speech so that the computer's voice output expresses meaning accurately, clearly, and naturally. There are two general methods: recording/replay and text-to-speech conversion. In a virtual reality system, speech synthesis improves immersion and compensates for missing visual information.
24. Natural human-machine interaction technology: In a virtual reality system, we strive to let users interact with the computer-generated virtual environment through sense organs and channels such as the eyes, gestures, ears, speech, nose, and skin. This interaction with the virtual environment is called natural human-computer interaction technology.
25. Eye tracking technology: Eye-movement-based interaction, also known as gaze tracking, complements the deficiencies of head tracking and is simple and direct.
26. Facial expression recognition technology: Current research in this field is still far from what people hope for, but the results already show its appeal. The technique generally proceeds in three steps. First comes facial expression tracking: the user's expressions are recorded with a video camera and then recognized through image analysis and recognition techniques. Second comes facial expression coding: researchers use the Facial Action Coding System (FACS) to dissect human expressions, classifying and encoding facial actions. Finally comes facial expression recognition itself, which uses FACS to form the system's overall recognition pipeline.
27. Gesture recognition technology: Data gloves or depth image sensors (such as the Leap Motion or Kinect) accurately measure the position and shape of the hand, allowing a virtual hand to manipulate virtual objects in the environment. A data glove determines the position and orientation of the hand and its joints through bend and twist sensors that report the curvature of the fingers and palm. Depth-sensor-based gesture recognition instead computes hand data, such as the bending angles of the fingers, from the depth image the sensor captures.
28. Real-time collision detection technology: In daily life, people have formed firm physical expectations: solid objects cannot pass through one another, a dropped object falls in free fall, a thrown object follows a projectile trajectory, and motion is further affected by gravity and air flow. To fully simulate the real world in a virtual reality system and prevent objects from penetrating one another, real-time collision detection must be introduced. Moore proposed two collision detection algorithms, one handling triangulated object surfaces and the other handling collision detection in polyhedral environments. Preventing penetration involves three main parts: first, the collision must be detected; second, the object's velocity must be adjusted in response; finally, if the collision does not separate the objects immediately, the contact force must be computed and applied until they separate.
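The cheapest and most common first stage of collision detection is an axis-aligned bounding box (AABB) test: two boxes can only collide if their extents overlap on every axis. A minimal sketch of this broad phase (run before precise surface tests of the kind Moore proposed):

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """True if two axis-aligned boxes, given by their min/max corners,
    overlap on all three axes (touching counts as overlap here)."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))

# The unit cube and a box starting at (0.5, 0.5, 0.5) overlap;
# a box shifted to x = 2 does not.
hit = aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (1.5, 1.5, 1.5))
```

Only the pairs that pass this cheap test need the expensive per-triangle check, which is how real-time systems keep collision detection within the frame budget.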
29. Three-dimensional panorama technology: The panorama is one of today's most popular visual technologies. It is a virtual reality technique that generates realistic images based on image rendering. A panorama is produced by first capturing a series of image samples by translating or rotating a camera, then using image stitching to generate a panoramic image with strong dynamic and perspective effects, and finally using image fusion to give the user a fresh sense of realism and interactivity. The technique can also extract depth information from the panorama to recover a 3D model of the scene. Because the method is simple, the design cycle short, and the cost greatly reduced while the effect remains convincing, it is very popular at present.
30. PPI: Pixels Per Inch, also called pixel density, is the number of pixels per inch of display. The higher the PPI, the more densely the screen can display an image, and the higher the fidelity.
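PPI follows directly from a display's pixel dimensions and its physical diagonal: divide the diagonal pixel count by the diagonal size in inches. A quick check (the example panel size is mine, chosen for illustration):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal resolution divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

# A 5-inch 1920x1080 panel works out to roughly 441 PPI.
density = ppi(1920, 1080, 5)
```

This is why headsets chase very high PPI: the screen sits centimeters from the eye behind magnifying lenses, so any gap between pixels becomes visible.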

