Author: WiMi Hologram Cloud Inc. / 2023-07-24 02:10 / Source: WiMi Hologram Cloud Inc.

WiMi Developed an HMD-based Control System for Humanoid Robots Controlled by BCI

BEIJING, July 21, 2023 -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced its development of a humanoid robot control system based on a head-mounted display (HMD) controlled by a brain-computer interface (BCI). The system operates via steady-state visual evoked potentials (SSVEP) and allows interaction with the environment and with humans. It provides real-time feedback through the robot's embedded camera, integrating the stimulus feedback into the HMD display.

WiMi's researchers tested the system's performance by controlling the robot through this new form of interaction in an experiment. Testers were asked to navigate the robot to a specific location to perform a task. The test relied on visual SLAM feedback, which derives navigation instructions from images of the environment captured by the robot's camera.

Steady-state visual evoked potentials (SSVEP) are used as control signals: the user's EEG signals are captured by an EEG acquisition device, and real-time feedback from the robot is shown on a head-mounted display. For navigation, the researchers fitted the robot with an embedded camera and combined its real-time feedback with the SSVEP stimuli on the head-mounted display to form the interaction loop. A visual SLAM (Simultaneous Localization and Mapping) algorithm is used to generate the navigation instructions.

The head-mounted display BCI control system consists of several components: a signal acquisition device, a head-mounted display, a humanoid robot, an embedded camera, and a control algorithm, among others. The steps and results of WiMi's BCI control system implementation are as follows:

EEG signal acquisition and processing:

The WiMi researchers first used an EEG signal interaction platform to capture and process the user's EEG signals. The platform consists of a multi-channel amplifier, electrode caps, and data acquisition software that captures and stores the user's EEG signals in real time. To process the EEG signals, an efficient method is used: extraction of the SSVEP by frequency analysis of the signals. Light flashes at specific frequencies are presented to the user, and the user's EEG signal resonates at the gazed-at frequency, enabling the extraction of control signals.
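To make the frequency-analysis step concrete, the following Python sketch shows one common way to pick out an SSVEP response: compare spectral power at each candidate flicker frequency and take the strongest. The sampling rate, window length, and candidate frequencies are illustrative assumptions, not figures disclosed by WiMi.

import numpy as np

def detect_ssvep_frequency(eeg, fs, stimulus_freqs, band=0.5):
    """Return the candidate flicker frequency with the strongest EEG response.
    eeg: 1-D array from one occipital channel; fs: sampling rate in Hz;
    stimulus_freqs: flicker frequencies shown on the HMD (assumed values);
    band: half-width in Hz of the window summed around each candidate."""
    power = np.abs(np.fft.rfft(eeg)) ** 2            # power spectrum of the window
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)    # frequency axis in Hz
    def band_power(f):
        mask = (freqs >= f - band) & (freqs <= f + band)
        return power[mask].sum()
    return max(stimulus_freqs, key=band_power)

# Synthetic check: a 10 Hz oscillation buried in noise is correctly identified.
fs = 250                                             # assumed sampling rate
t = np.arange(0, 4, 1.0 / fs)                        # 4-second analysis window
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(detect_ssvep_frequency(eeg, fs, [8, 10, 12, 15]))   # -> 10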

The researchers used an EEG signal acquisition device to capture the user's EEG signals, a head-mounted display to show real-time feedback from the robot, and an embedded camera to support navigation. The control algorithm relies on a visual SLAM algorithm, which models the environment from images captured by the robot's camera and derives the navigation instructions from that model.
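Taken together, these components form a closed loop: acquire EEG, classify the gazed-at SSVEP target, issue a motion command, and return the camera and SLAM feedback to the HMD. The sketch below only illustrates that data flow; every object and method name (acquisition, classifier, robot, hmd, slam) is a hypothetical placeholder rather than WiMi's actual API.

def control_loop(acquisition, classifier, robot, hmd, slam, window_s=4.0):
    """Hypothetical wiring of the BCI control loop described above."""
    while True:
        eeg_window = acquisition.read(seconds=window_s)   # multi-channel EEG segment
        target_freq = classifier.detect(eeg_window)       # SSVEP frequency analysis
        command = classifier.to_command(target_freq)      # e.g. "turn_left", "move_forward"
        robot.execute(command)                            # drive the humanoid robot
        frame = robot.camera.capture()                    # embedded camera image
        pose = slam.update(frame)                         # visual SLAM pose estimate
        hmd.render(frame, pose, command)                  # real-time feedback on the HMD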

Head-mounted display applications:

To provide feedback on the control signals, the researchers used a head-mounted display that shows real-time images and status through a built-in monitor. The head-mounted display uses high-resolution display technology to provide a more realistic experience of the virtual environment.

In addition, to provide better feedback on the control signals, the researchers combined the display with the robot's built-in camera. With the robot's camera capturing images of the environment and displaying them on the head-mounted display, users can see the robot's state and environmental information more intuitively. Specifically, the user gazes at a light stimulus flickering at a specific frequency on the head-mounted display, which causes the user's brain to produce a corresponding SSVEP signal. Once the signal is captured and processed, the control algorithm determines the user's intent based on the frequency and amplitude of the signal, such as commanding the robot to turn left or right. The embedded camera captures the robot's viewpoint in real time, providing an image of the environment as well as feedback on the robot's current position.
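A minimal sketch of how a detected flicker frequency could be translated into a robot command is shown below; the specific frequencies, the command names, and the tolerance are assumptions chosen for illustration, not values reported by WiMi.

# Assumed assignment of HMD flicker frequencies (Hz) to robot commands.
COMMAND_MAP = {8.0: "turn_left", 10.0: "move_forward", 12.0: "turn_right", 15.0: "stop"}

def frequency_to_command(detected_freq, tolerance=0.5):
    """Map a detected SSVEP frequency to the nearest command, or None if
    no stimulus frequency lies within the tolerance."""
    nearest = min(COMMAND_MAP, key=lambda f: abs(f - detected_freq))
    return COMMAND_MAP[nearest] if abs(nearest - detected_freq) <= tolerance else None

print(frequency_to_command(10.2))   # -> "move_forward"
print(frequency_to_command(20.0))   # -> None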

Navigation implementation:

Using the SLAM algorithm, images of the robot's surroundings captured by the built-in camera are converted into a model of the environment. The algorithm also estimates the robot's position and provides navigation instructions to the user. The user controls the direction of the robot's movement through SSVEP signals, while the head-mounted display shows the robot's status and environmental information to give the user more intuitive feedback.
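As a rough illustration of how a SLAM pose estimate can be turned into a navigation instruction for the user, the sketch below suggests the next command from the robot's estimated pose and a goal position; the pose format (x, y, heading in radians) and the thresholds are assumptions, not details from WiMi's system.

import math

def navigation_hint(pose, goal, angle_tol=0.2, dist_tol=0.3):
    """Suggest the next command from the SLAM pose estimate.
    pose: (x, y, heading) in the map frame; goal: (x, y) target location."""
    x, y, heading = pose
    dx, dy = goal[0] - x, goal[1] - y
    if math.hypot(dx, dy) < dist_tol:
        return "stop"                                         # goal reached
    error = math.atan2(dy, dx) - heading                      # heading error
    error = math.atan2(math.sin(error), math.cos(error))      # wrap to [-pi, pi]
    if error > angle_tol:
        return "turn_left"
    if error < -angle_tol:
        return "turn_right"
    return "move_forward"

print(navigation_hint((0.0, 0.0, 0.0), (2.0, 1.0)))   # -> "turn_left"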

In addition, the researchers conducted an experimental evaluation of the control system to assess its control accuracy and interaction experience. The results show that the system provides precise control signals and an immersive interactive experience, giving users a novel way to control robots.

Experimental evaluation:

To evaluate the performance of the system, the researchers conducted a series of experiments. These included users performing a robot navigation task with the system, as well as a control experiment in which the same task was performed with a traditional remote control. The results showed that users performed the navigation task more accurately and reported a better interaction experience when using the new system.

In a further experiment assessing control accuracy and interaction experience, participants were asked to steer the robot to a specific location in a simulated environment to complete a task. The results show that the control system provides precise control signals, with an average control accuracy of 98.1%. Users also rated the interaction experience highly, describing it as a very natural and intuitive way of interacting.

WiMi's head-mounted display (HMD) system for robot control via a brain-computer interface (BCI) demonstrates a new mode of interaction, offering a more natural and intuitive way of control. Experimental results show that the control system can provide precise control signals. This control method has the potential to be used in many scenarios that require precise control, such as the medical, educational, and entertainment fields. In the future, the control system can be combined with other sensor technologies, such as voice and gesture, to provide more diverse control. This navigation-assisted scheme offers users a novel interaction method that can improve the robot's operating efficiency and the interaction experience.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ: WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductors, holographic cloud software, holographic car navigation, and others. Its services and holographic AR technologies include holographic AR automotive applications, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic AR SDK payment, interactive holographic communication, and other holographic AR technologies.

Safe Harbor Statements

This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases and other written materials, and in oral statements made by its officers, directors, or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.
