Author: WiMi Hologram Cloud Inc. / 2023-09-26 22:00 / Source: WiMi Hologram Cloud Inc.

WiMi Developed An Efficient Deep Self-Supervised Remote Sensing Scene Classification Technology

BEIJING, Sept. 26, 2023 -- WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"), a leading global Hologram Augmented Reality ("AR") Technology provider, today announced that it has developed an efficient deep self-supervised remote sensing scene classification technology, which centers on self-supervised learning from unlabeled images to reduce the dependence on labeled data. It employs an innovative deep learning architecture that solves the remote sensing scene classification task efficiently through the cooperation of an online network and a target network.

The key to WiMi's deep self-supervised remote sensing scene classification lies in the cooperative learning of the online and target networks. Under the self-supervised learning paradigm, the deep learning model is first pre-trained on unlabeled images to learn discriminative features. During this process, different views are generated from each image, and cross-view comparison learning passes features between the online network and the target network. The two networks jointly optimize the model by minimizing the cross-view distance, enabling it to learn useful features from unlabeled images.
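The release does not disclose the exact architecture, but the description matches a BYOL-style setup. The following is a minimal sketch in PyTorch of an online network with a prediction head and a cross-view distance defined as negative cosine similarity; the class names, layer sizes, and loss form are illustrative assumptions, not WiMi's actual implementation.

```python
# Minimal BYOL-style sketch: the online network encodes one view of an unlabeled
# image and predicts the target network's projection of another view; minimizing
# the cross-view distance pulls the two representations together.
import torch.nn as nn
import torch.nn.functional as F


class OnlineNetwork(nn.Module):
    """Backbone + projector + predictor (names and sizes are assumptions)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, proj_dim: int = 256):
        super().__init__()
        self.backbone = backbone  # e.g. a ResNet trunk returning [N, feat_dim]
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        self.predictor = nn.Sequential(
            nn.Linear(proj_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))

    def forward(self, x):
        return self.predictor(self.projector(self.backbone(x)))


def cross_view_loss(online_pred, target_proj):
    # Cross-view distance as negative cosine similarity between L2-normalized
    # embeddings; the target side is detached so no gradient flows into it.
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()
```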

In addition to the collaborative learning of the online and target networks, the technology introduces an intelligent fused-resolution training strategy that further improves efficiency. During the discrimination task, the model is trained on low-resolution images, which allows much larger batch sizes and significantly improves performance. The strategy combines the advantages of high- and low-resolution images, so the model benefits from both large batches and full image detail while learning effectively from large amounts of unlabeled data.
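One way to picture the trade-off is as two augmentation pipelines: a down-sampled view for the large-batch discrimination task and an occasional full-resolution view that preserves scene detail. The resolutions and batch sizes below are assumed values for illustration only.

```python
# Sketch of the fused-resolution idea: low-resolution crops keep the discrimination
# task cheap enough for large batches, while a full-resolution pipeline retains the
# original image detail. All numbers here are illustrative assumptions.
from torchvision import transforms

LOW_RES, FULL_RES = 96, 384            # pixels per side (assumed)
LOW_RES_BATCH, FULL_RES_BATCH = 512, 32  # feasible batch sizes (assumed)

low_res_view = transforms.Compose([
    transforms.RandomResizedCrop(LOW_RES, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

full_res_view = transforms.Compose([
    transforms.RandomResizedCrop(FULL_RES, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```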

Implementation steps of WiMi's deep self-supervised remote sensing scene classification technology:

First, a self-supervised network is constructed by pre-training on unlabeled remote sensing images. In the self-supervised learning paradigm, the network must learn features from unlabeled images that distinguish different views. This step can be done with a contrastive learning approach that maximizes the similarity between different views of the same image and minimizes the similarity between different images, allowing the network to learn image features that are useful for the subsequent remote sensing scene classification task.
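A standard loss with exactly this behavior is NT-Xent (InfoNCE); the release does not name the loss it uses, so the sketch below is only one possible instantiation of "maximize same-image similarity, minimize cross-image similarity".

```python
# NT-Xent (InfoNCE) contrastive loss sketch: for each image, its other view in the
# batch is the positive and every other image is a negative.
import torch
import torch.nn.functional as F


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5):
    """z1, z2: [N, D] embeddings of two views of the same N images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # [2N, D]
    sim = z @ z.t() / temperature                   # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))           # ignore self-similarity
    # The positive for row i is its other view, at index (i + n) mod 2n.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```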

In the pre-training phase, the online network and the target network are constructed as two separate parts. The online network is responsible for processing unlabeled images and learning features, while the target network is responsible for processing labeled images for task-specific fine-tuning. The two networks collaborate on feature transfer through cross-view comparison learning: different views are generated from each image through geometric transformations and then passed to both the online and target networks, allowing them to obtain richer features from unlabeled images. Collaborative learning also involves minimizing the cross-view distance to optimize the overall model. The goal of this step is to make the feature representations of the online and target networks as similar as possible across different views; by minimizing the cross-view distance, the networks learn better representations of unlabeled images, which improves the subsequent remote sensing scene classification performance.
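A common way to realize this cooperation, which the release does not spell out, is to optimize only the online side and update the target network as an exponential moving average (EMA) of the online weights. The sketch below assumes that scheme; `online` and `target` stand for encoder-plus-projector modules, `predictor` for a small prediction head, and the optimizer covers only the online side.

```python
# Assumed BYOL-style pre-training step: a symmetric cross-view loss is minimized
# through the online network, and the target network slowly tracks it via EMA
# instead of receiving gradients.
import torch
import torch.nn.functional as F


def neg_cosine(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    """Cross-view distance: negative cosine similarity, target side detached."""
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()


@torch.no_grad()
def ema_update(target: torch.nn.Module, online: torch.nn.Module, tau: float = 0.996):
    """Target parameters slowly follow the online parameters."""
    for t, o in zip(target.parameters(), online.parameters()):
        t.data.mul_(tau).add_(o.data, alpha=1.0 - tau)


def pretrain_step(online, target, predictor, optimizer, view_a, view_b):
    # Symmetric loss: each view's online prediction must match the other view's
    # target projection, i.e. the cross-view distance is minimized both ways.
    loss = neg_cosine(predictor(online(view_a)), target(view_b)) + \
           neg_cosine(predictor(online(view_b)), target(view_a))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(target, online)
    return loss.item()
```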

To further improve efficiency, the technology introduces the intelligent fused-resolution training strategy. During the discrimination task, low-resolution images are used for training, which speeds up training and keeps performance stable when processing larger batches of images. By fusing information from images of different resolutions, the network gains a more comprehensive understanding of remote sensing scene features, which further improves classification performance.
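The release does not say how the resolutions are fused, so the module below is a purely hypothetical sketch: the same backbone (assumed to end in adaptive pooling, so it accepts either input size) encodes a down-sampled and a full-resolution version of the scene, and the two embeddings are averaged before classification.

```python
# Hypothetical fused-resolution classifier: low-resolution and full-resolution
# embeddings of the same scene are combined by simple averaging (an assumption)
# before the classification head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusedResolutionClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int,
                 low_res: int = 96):
        super().__init__()
        self.backbone = backbone       # must return [N, feat_dim] at any input size
        self.low_res = low_res
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-resolution branch: cheap, supports large training batches.
        x_low = F.interpolate(x, size=self.low_res, mode="bilinear",
                              align_corners=False)
        f_low = self.backbone(x_low)
        # Full-resolution branch: preserves fine scene detail.
        f_full = self.backbone(x)
        return self.head((f_low + f_full) / 2)
```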

After obtaining an online network that has undergone self-supervised learning, the technology can be applied to practical remote sensing scene classification tasks. To adapt it to a specific task, fine-tuning can be performed on a few labeled scenes or images. Through fine-tuning, the network adapts better to the specific remote sensing scene classification task and achieves higher classification accuracy.
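In practice this amounts to attaching a small classification head to the pre-trained backbone and training on the labeled scenes; the hyper-parameters below are illustrative assumptions, and the backbone could equally be frozen or trained at a lower learning rate.

```python
# Minimal fine-tuning sketch: reuse the self-supervised backbone, add a linear
# classification head, and train on the few labeled scenes.
import torch
import torch.nn as nn


def build_finetune_model(pretrained_backbone: nn.Module, feat_dim: int,
                         num_classes: int) -> nn.Module:
    return nn.Sequential(pretrained_backbone, nn.Linear(feat_dim, num_classes))


def finetune(model: nn.Module, labeled_loader, epochs: int = 20, lr: float = 1e-3):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in labeled_loader:
            loss = criterion(model(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```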

Compared with traditional deep learning methods that rely on large amounts of labeled data, WiMi's deep self-supervised remote sensing scene classification technology achieves higher classification accuracy with only a few labeled scenes. By fully exploiting the potential of unlabeled data and self-supervised learning, the technology opens up entirely new possibilities in the field of remote sensing image analysis. It not only achieves significant results on the scene classification task but also brings a new solution to the field: using self-supervised learning to obtain useful features from unlabeled data and thereby reducing the dependence on labeled data.

WiMi's deep self-supervised remote sensing scene classification technology extracts discriminative features from unlabeled images through self-supervised learning. During the collaborative learning between the online network and the target network, cross-view comparison learning transfers features across the networks, and minimizing the cross-view distance ensures feature consistency for more accurate scene classification. Meanwhile, the fused-resolution training strategy makes deliberate use of low-resolution images in the discrimination task, effectively improving training efficiency. The technology provides a new solution to remote sensing scene classification with few labeled scenes and achieves a significant improvement in classification performance by fully utilizing unlabeled data.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ: WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as "will," "expects," "anticipates," "future," "intends," "plans," "believes," "estimates," and similar statements. Statements that are not historical facts, including statements about the Company's beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company's strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission ("SEC") on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company's goals and strategies; the Company's future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company's expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company's annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

