Author: Squirrel AI Learning / 2023-07-23 14:46 / Source: Squirrel AI Learning

Squirrel AI Learning by Yixue Group Learning Won Best Paper & Best Student Paper Award at ACM KDD International Symposium on Deep Learning on Graph

ANCHORAGE, Alaska, Aug. 14, 2019 -- The first International Symposium on Deep Learning on Graphs: Methods and Applications (DLG 2019) was held in Anchorage, the United States, on August 5, 2019. Notably, the research projects in which Squirrel AI Learning, a Chinese AI unicorn, was deeply involved won both the Best Paper and Best Student Paper awards at the symposium.

Deep learning is at the core of AI research. However, the technology cannot be applied directly to graph-structured data, which has prompted the academic community to explore deep learning on graphs. In the past few years, neural networks for graph-structured data have achieved remarkable results in fields such as social networks, bioinformatics and medical informatics.

KDD, the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, is the top international conference in the field of data mining. It has been held annually since 1995, more than 20 times in a row, with an acceptance rate that has never exceeded 20% and was below 15% this year. Notably, this is also the first year that KDD adopted double-blind review, while still being divided into a research track and an applied track. According to public information, the KDD research track received 1,179 submissions, of which 111 were accepted as oral papers and 63 as poster papers, an acceptance rate of 14.8%.

The applied track received more than 700 submissions, of which 45 were accepted as oral papers and 100 as poster papers, an acceptance rate of 20.7%. By comparison, in 2018 the KDD research track accepted 181 papers at an acceptance rate of 18.4%, and the applied track accepted 112 papers at an acceptance rate of 22.5%.

As part of the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), DLG 2019 aims to bring together academics and practitioners from diverse backgrounds and with different perspectives to share cutting-edge technologies in the field of graph neural networks.

Below is information about the Best Paper & Best Student Paper awards won by Squirrel AI Learning by Yixue Group Learning:

The Best Paper: Solving Text Generation from RDF Data Using Graph Neural Networks.

The Best Paper award was granted to "Exploiting Graph Neural Networks with Context Information for RDF-to-Text Generation", jointly written by Central China Normal University, IBM Research and Squirrel AI. The paper studies text generation from RDF data; the task is to generate descriptive text for a given set of RDF triples.

Most previous methods either cast this task as a sequence-to-sequence (Seq2Seq) problem or model the RDF triples and decode the text sequence with a graph-based encoder. However, none of these methods can explicitly model the global and local structural information within and between triples. In addition, they fail to use the target text as additional context when modeling complex RDF triples.

To address these problems, the authors propose combining a graph encoder and a graph-based triple encoder to learn the global and local structural information of the RDF triples. In addition, the researchers use a Seq2Seq-based autoencoder to supervise the learning of the graph encoder, with the target text serving as context.

Experimental results on the WebNLG dataset show that the proposed model outperforms state-of-the-art baseline approaches.

Authors:

Gao Hanning, Wu Lingfei, Hu Bai and Xu Fangli (Wu Lingfei from IBM Research, Xu Fangli from Squirrel AI, and the rest from Central China Normal University)

Why is this research important?

The Resource Description Framework (RDF) is a common framework for expressing entities and their relationships in a structured knowledge base. Based on W3C standards, each piece of RDF data is a triple of the form (subject, predicate, object).
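
As an illustration of the data format (the entities below are a typical WebNLG-style example, not taken from the paper), a set of RDF triples and the kind of sentence the task should produce might look like this:

```python
from typing import List, NamedTuple

class RDFTriple(NamedTuple):
    subject: str
    predicate: str
    object: str

# Illustrative triples in the style of WebNLG (not taken from the paper).
triples: List[RDFTriple] = [
    RDFTriple("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
    RDFTriple("Alan_Bean", "occupation", "Test_pilot"),
]

# The RDF-to-text task: turn the set of triples into a fluent description, e.g.
# "Alan Bean, who was born in Wheeler, Texas, worked as a test pilot."
for t in triples:
    print(f"({t.subject}, {t.predicate}, {t.object})")
```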

In Natural Language Generation (NLG), text generation from RDF data is a challenging task that has attracted much attention from researchers because of its wide industrial applications, including knowledge-based question answering, entity summarization and data-driven news generation.

"For example,you have a knowledge graph,and then you need to do a Q&A system. You have SparQL (a query language developed by RDF),and then you query this knowledge graph and an RDF will be returned and it is very difficult for people to understand RDF. The paper aims to convert the returned RDF answer into natural language so that people can understand it easily. " One of the authors,Dr. Wu Lingfei from IBM Research Institute,explained.

What are the challenges?

With the rapid development of end-to-end deep learning, especially of various Seq2Seq models, substantial progress has been made in text generation from RDF data. However, if RDF triples are simply flattened into sequences, important higher-order structural information may be lost.

Because RDF triples can be expressed as knowledge graphs, researchers have recently proposed two graph-network-based methods, but both have shortcomings: the model based on recurrent neural networks cannot express the rich local structural information between entities and relations, while the graph encoder based on an improved Graph Convolutional Network (GCN) cannot express global information within and between triples.

Core contributions:

To solve the above problems, the authors propose a novel neural network architecture that uses graph-based neural networks and context information to improve the model's ability to generate text from RDF data.

The research team proposes a new graph-based encoder model that combines a GCN encoder and a GTR-LSTM triple encoder to model the multi-view input of RDF triples and learn their local and global structural information.

Both encoders generate a set of node representations. The nodes generated by the GCN better capture the local structural information in the RDF triples, while the nodes generated by the GTR-LSTM mainly focus on the global structural information. The research team obtains the graph embedding by combining the node representations from the GCN and the GTR-LSTM and applying mean pooling.
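
A minimal PyTorch-style sketch of the fusion step described above. The stand-in linear layers, dimensions, and the concatenation-plus-projection used here are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class DualGraphEncoder(nn.Module):
    """Sketch: fuse node representations from a local encoder (GCN-style)
    and a global encoder (GTR-LSTM-style), then mean-pool to a graph embedding."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Placeholders for the two encoders described in the paper; any modules
        # mapping (num_nodes, dim) -> (num_nodes, hidden_dim) would fit here.
        self.local_encoder = nn.Linear(hidden_dim, hidden_dim)   # stands in for the GCN encoder
        self.global_encoder = nn.Linear(hidden_dim, hidden_dim)  # stands in for the GTR-LSTM encoder
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, node_feats: torch.Tensor):
        local_nodes = torch.relu(self.local_encoder(node_feats))    # local structure
        global_nodes = torch.relu(self.global_encoder(node_feats))  # global structure
        nodes = self.fuse(torch.cat([local_nodes, global_nodes], dim=-1))
        graph_embedding = nodes.mean(dim=0)  # mean pooling over nodes
        return nodes, graph_embedding

# Usage: 5 nodes with 128-dimensional features.
enc = DualGraphEncoder(hidden_dim=128)
nodes, g = enc(torch.randn(5, 128))
print(nodes.shape, g.shape)  # torch.Size([5, 128]) torch.Size([128])
```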

Since the target reference text contains almost the same information as the triples, the research team then uses a Seq2Seq-based autoencoder to supervise the learning of the graph encoder, with the target text as an auxiliary context.

Experimental results:

The research team uses the WebNLG dataset, which consists of source-side RDF triples paired with target-side reference texts. Each RDF triple is expressed as (subject, relation, object).

The whole dataset contains 18,102 training pairs, 2,495 validation pairs and 2,269 test pairs. The experiments adopt the standard evaluation metrics of the WebNLG challenge, including BLEU and METEOR.
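
For reference, BLEU scores of this kind can be approximated with NLTK (the WebNLG challenge and METEOR have their own official scoring scripts); the reference and hypothesis sentences below are made up for illustration:

```python
from nltk.translate.bleu_score import corpus_bleu

# Each hypothesis is compared against one or more tokenized references.
references = [
    [["alan", "bean", "was", "born", "in", "wheeler", ",", "texas", "."]],
]
hypotheses = [
    ["alan", "bean", "was", "born", "in", "wheeler", "texas", "."],
]

print("corpus BLEU:", corpus_bleu(references, hypotheses))
```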

The experimental results show that the proposed model encodes the global and local graph structures of RDF triples better than the alternatives: it scores about 2.0 BLEU points higher than the other baseline models on the WebNLG dataset.

In addition, the research team manually evaluated the outputs of the different models and found that the models involving the GCN encoder expressed the correct relationships between entities better, while the target-text autoencoder and the GTR-LSTM encoder generated text better associated with the contextual information between RDF triples.

In further analysis, the research team found four key factors in the proposed model that affect the quality of the generated text: the target-text autoencoder, which helps integrate target-side context information; the Ldis term, which minimizes the distance between the graph representation and the text representation; and the GCN encoder and the GTR-LSTM encoder, which encode the local and global information of the triples, respectively.
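
A hedged sketch of how such a training objective could be assembled: a generation loss, a reconstruction loss for the Seq2Seq autoencoder, and an Ldis term pulling the graph and text representations together. The weighting scheme and the mean-squared distance used here are assumptions for illustration rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def total_loss(gen_logits, gen_targets,
               rec_logits, rec_targets,
               graph_repr, text_repr,
               alpha: float = 1.0, beta: float = 1.0):
    """Sketch of a combined objective for RDF-to-text training.

    gen_*  : decoder outputs/targets for text generated from the graph encoder
    rec_*  : outputs/targets of the Seq2Seq autoencoder reconstructing the target text
    *_repr : pooled graph embedding and pooled target-text embedding
    """
    l_gen = F.cross_entropy(gen_logits.view(-1, gen_logits.size(-1)), gen_targets.view(-1))
    l_rec = F.cross_entropy(rec_logits.view(-1, rec_logits.size(-1)), rec_targets.view(-1))
    # Ldis: encourage the graph representation to stay close to the text representation.
    l_dis = F.mse_loss(graph_repr, text_repr)
    return l_gen + alpha * l_rec + beta * l_dis
```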

The Best Student Paper: An Empirical Study of Semantic Parsing Based on Graph Neural Networks

The Best Student Paper award was granted to "An Empirical Study of Graph Neural Networks Based Semantic Parsing", a study of semantic parsing with graph neural networks by Nanjing University, IBM Research and Squirrel AI.

Existing neural semantic parsers either consider only the word sequence during encoding and decoding or ignore syntactic information that is useful for parsing. In this paper, the authors propose a new neural semantic parser based on Graph Neural Networks (GNNs), namely Graph2Tree, which consists of a graph encoder and a hierarchical tree decoder.

Authors:

Li Shucheng, Wu Lingfei, Feng Shiwei, Xu Fangli, Xu Fengyuan and Zhong Sheng (Wu Lingfei from IBM Research, Xu Fangli from Yixue Education-Squirrel AI, and the rest from Nanjing University)

Why is this research important?

As a classic task in Natural Language Processing (NLP), semantic parsing converts natural-language sentences into machine-readable semantic representations. Industry has a large number of mature applications built on semantic parsing, such as Q&A systems, voice assistants and code generation.

In the past two years, with the introduction of neural encoder-decoder methods, semantic parsing models have changed dramatically: researchers have begun to build neural semantic parsers with Seq2Seq models, and these parsers have achieved remarkable results.

What are the challenges?

Because semantic representations are usually structured objects (e.g. tree structures), researchers have put a great deal of effort into developing structure-aware decoders, including tree decoders, grammar-constrained decoders, action-sequence decoders for semantic-graph generation, and modular decoders based on abstract syntax trees.

Although these methods have achieved impressive results, they consider only word-sequence information and ignore other rich syntactic information available on the encoder side, such as dependency trees and constituency trees.

Recently, researchers have demonstrated the value of graph neural networks in various NLP tasks, including neural machine translation, information extraction and AMR-to-text generation. In semantic parsing, researchers have proposed the Graph2Seq model, which combines the dependency tree and the constituency tree with the word sequence to create a syntactic graph as the encoder input. However, this method regards the logical form only as a sequence and ignores the rich information in structured objects (e.g. trees) in the decoder architecture.

Core contributions:

The authors of this paper propose a new neural semantic parser based on graph neural networks, namely Graph2Tree, which consists of a graph encoder and a hierarchical tree decoder.

The graph encoder encodes a syntactic graph into vector representations; the syntactic graph is constructed from the word sequence and the corresponding dependency or constituency tree. Specifically, the research team first combines the syntactic relations of the original text with the input sequence to form a graph data structure, and then uses the graph encoder to learn high-quality vector representations from this graph.
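
A small sketch of this kind of syntactic-graph construction using spaCy's dependency parser: word-order edges between adjacent tokens plus labeled dependency edges. The exact edge types and directions are illustrative assumptions, not necessarily those used in Graph2Tree:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def build_syntactic_graph(sentence: str):
    """Return nodes (tokens) and edges combining word order and dependency relations."""
    doc = nlp(sentence)
    nodes = [tok.text for tok in doc]
    edges = []
    # Word-order edges between adjacent tokens.
    for i in range(len(doc) - 1):
        edges.append((i, i + 1, "next"))
    # Dependency edges from each token's head, labeled with the relation.
    for tok in doc:
        if tok.head.i != tok.i:  # skip the root's self-loop
            edges.append((tok.head.i, tok.i, tok.dep_))
    return nodes, edges

nodes, edges = build_syntactic_graph("show me flights from dallas to boston")
print(nodes)
print(edges)
```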

The tree decoder decodes the logical form from the learned graph-level vector representation and fully exploits the compositional nature of logical-form representations. The research team also proposes computing separate attention over the node representations corresponding to the original word tokens and to the parse-tree nodes, and combining them into the final context vector used to decode the structured output. Through joint training, the conditional log-probability of the correct logical form given the syntactic graph is maximized.
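
A PyTorch-style sketch of the separate-attention idea: one attention distribution over word-token nodes, another over parse-tree nodes, and the two context vectors concatenated for the decoder. The dot-product scoring and the dimensions are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def separate_attention(decoder_state, word_nodes, tree_nodes):
    """decoder_state: (d,), word_nodes: (n_w, d), tree_nodes: (n_t, d)."""
    # Attention over nodes that correspond to original word tokens.
    w_scores = word_nodes @ decoder_state            # (n_w,)
    w_context = F.softmax(w_scores, dim=0) @ word_nodes
    # Separate attention over nodes that correspond to parse-tree nodes.
    t_scores = tree_nodes @ decoder_state            # (n_t,)
    t_context = F.softmax(t_scores, dim=0) @ tree_nodes
    # Final context vector fed to the tree decoder.
    return torch.cat([w_context, t_context], dim=-1)  # (2d,)

ctx = separate_attention(torch.randn(64), torch.randn(7, 64), torch.randn(5, 64))
print(ctx.shape)  # torch.Size([128])
```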

A major feature of this work is that both the natural-language input and the logical-form output are treated as structured objects: the input sentence is converted into a syntactic graph before being fed to the encoder, and the logical form is decoded by the tree decoder, which makes the best use of the implicit structural information and the compositional characteristics of the output.

In addition, the research team studied the impact of different syntactic-graph constructions on the performance of GNN-based semantic parsing and found that, owing to imperfect dependency parsers or overly complex constituency trees, the noise and structural complexity introduced by the graph construction can significantly hurt the performance of a GNN-based semantic parser.

Experimental results:

Through the experiments, the research team sought answers to the following questions: (i) What kind of syntactic graph makes the graph-based approach perform well? (ii) Does Graph2Tree outperform the baseline approaches when given a properly constructed graph input?

The research team evaluated the Graph2Tree framework on three benchmark datasets: JOBS, GEO and ATIS. JOBS is a job-listings database, GEO is a U.S. geography database, and ATIS is a flight-reservation dataset.

In the comparisons on JOBS and GEO, the research team observed that the Graph2Tree model was superior to the Graph2Seq model in generating high-quality logical forms from graph input, regardless of the type of graph structure used.

In terms of graph construction, if the noise produced by the CoreNLP tool leads to parsing errors, the performance of both parsers drops and can even fall below that of parsers that use only word order.

Similarly, the number of constituency-tree layers included, i.e. the structural complexity, also has a large impact on performance: if the structural information is either overwhelming or minimal, parser performance degrades.

Conversely, when the noise introduced on the input side is controlled or reduced, the performance of word order plus dependency tree improves significantly. When the right graph layers are selected, the performance of word order plus constituency tree also improves; for example, with single-layer pruning, the logical-form precision of word order plus constituency tree is higher than that of word order alone.

Dr. Cui from Squirrel AI Learning: Graph Deep Learning and Knowledge Graphs in Adaptive Learning

Pei Jian, chairman of SIGKDD and vice president of JD.com, delivered opening remarks at the symposium that day, and scholars from Stanford University, Tsinghua University, UCLA, UIUC and other universities were invited to give speeches.

Dr. Cui from Squirrel AI was also invited to the conference to introduce current developments in graph deep learning and knowledge graphs for adaptive learning.


The Squirrel AI Intelligent Adaptive Online Learning System developed by Yixue Education Group can continuously monitor and evaluate students' individual abilities, discover their weaknesses in learning, and enable them to make progress at their own pace and finally improve their learning results. The system provides optimized learning solutions and synchronous counseling support to maximize learning efficiency and improve students' knowledge, skills and abilities.

For many years, the shortage of senior teacher resources and geographical problems in China's education have affected the popularization of quality education. Squirrel AI hopes to create super teachers through AI to provide tailored teaching to students. "Every child deserves a one-on-one super teacher," said Dr. Cui.

Since 2014, Squirrel AI has been independently developing an intelligent adaptive learning system for K12 students in China. Its main goal is to accurately determine how well students have mastered each knowledge point and then recommend personalized learning content and plan learning paths.

The first element is students' mastery of knowledge points. The figure Dr. Cui showed illustrated one Squirrel AI student's proficiency in physics: the blue part, accounting for 80%, is what the student has mastered, and the yellow part, accounting for 20%, is what the student has not yet learned well.

How does the system accurately know how well students have mastered the knowledge? Squirrel AI evaluates students' mastery based on test results, test duration, question difficulty and the knowledge points covered by each test, and even on which wrong options students choose and how they move the mouse.

Regarding the working principle of Squirrel AI, Dr. Cui said that the intelligent adaptive engine is divided into three layers: ontology layer, algorithm layer and interactive system.

The ontology layer is the content layer, which includes the ontology of learning objectives, the ontology of learning content and the ontology of error analysis. Squirrel AI independently developed technology to break knowledge points down to a super-nano level, making it possible to determine more accurately which knowledge points a student is supposed to master. Take junior high school mathematics as an example: Squirrel AI can break 300 knowledge points down into 30,000.

At the same time, Squirrel AI links related knowledge points in a Bayesian-network-like graph. Through this technology, the teaching sequences and relationships used by excellent teachers can be emulated, in line with students' cognitive patterns and the varying difficulty of knowledge points.
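
As an illustration of what such a prerequisite graph enables (the knowledge points and edges below are invented for this sketch and are not Squirrel AI's actual ontology), a topological ordering of the graph yields a teaching sequence in which prerequisites always come first:

```python
from collections import defaultdict, deque

# Hypothetical prerequisite edges among knowledge points (illustrative only).
prereq = {
    "integer addition": ["fraction addition"],
    "fraction addition": ["linear equations"],
    "order of operations": ["linear equations"],
}

def teaching_order(edges):
    """Topologically sort knowledge points so prerequisites come first."""
    indeg = defaultdict(int)
    graph = defaultdict(list)
    for src, dsts in edges.items():
        indeg.setdefault(src, 0)
        for dst in dsts:
            graph[src].append(dst)
            indeg[dst] += 1
    queue = deque(k for k, d in indeg.items() if d == 0)
    order = []
    while queue:
        k = queue.popleft()
        order.append(k)
        for nxt in graph[k]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return order

print(teaching_order(prereq))
# ['integer addition', 'order of operations', 'fraction addition', 'linear equations']
```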

The algorithm layer includes a content recommendation engine, a student profiling engine and a target management engine. Based on the user state evaluation engine and the knowledge recommendation engine, Squirrel AI builds a data model to detect each student's knowledge gaps accurately and efficiently and then recommends corresponding learning content according to these gaps.

The interactive system learns more about students by collecting interaction data through the management system, the detection and early-warning system and the real-time event collector.

Dr. Cui stressed that the intelligent adaptive learning system based on AI adopts a teaching process completely different from traditional education.

For example, in terms of knowledge state diagnosis, traditional diagnosis is based on high-frequency examinations, while Squirrel AI's system diagnoses knowledge states based on information theory and knowledge space theory, which can accurately locate knowledge deficiencies.
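
A highly simplified sketch of the information-theoretic idea: probe the knowledge point about which the current mastery estimate is most uncertain (highest entropy). The mastery values and knowledge points are invented for illustration and are not Squirrel AI's actual diagnostic procedure:

```python
import math

def entropy(p: float) -> float:
    """Binary entropy (in bits) of a mastery probability."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Hypothetical current mastery estimates for a few knowledge points.
mastery = {"fractions": 0.5, "ratios": 0.9, "percentages": 0.65}

# Pick the knowledge point the system is most uncertain about to probe next.
next_probe = max(mastery, key=lambda k: entropy(mastery[k]))
print(next_probe)  # fractions (highest uncertainty)
```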

Traditional assessment is based on exam scores or rankings, and traditional intelligent adaptive assessment is based on IRT, DINA, BKT and DKT models, whose shortcoming is that they cannot evaluate students in real time. Squirrel AI's system is based on Bayesian methods and carries out continuous, real-time evaluation using students' past records.
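
An illustrative sketch of this kind of continuous evaluation: a simple Bayesian posterior update of a mastery probability after each answer, using assumed slip and guess probabilities. The parameters are invented for illustration and this is not Squirrel AI's actual model:

```python
def update_mastery(p_mastery: float, correct: bool,
                   p_slip: float = 0.1, p_guess: float = 0.2) -> float:
    """Posterior probability of mastery after observing one answer."""
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    return num / den

# Running estimate after a sequence of answers on one knowledge point.
p = 0.5
for correct in [True, True, False, True]:
    p = update_mastery(p, correct)
    print(round(p, 3))
```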

In terms of content recommendation, traditional recommendation systems adopt collaborative filtering, which is not well suited to education because students in similar learning situations may have mastered knowledge points to different degrees. Collaborative filtering is therefore not accurate enough to guarantee the effectiveness of the recommended content.

Squirrel AI instead uses neural networks to deliver personalized recommendations based on students' learning performance, and further improves the accuracy of personalized learning and recommendation through deep learning algorithms.
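
A minimal sketch of what a neural recommender over a student's mastery vector could look like; the architecture, input features and item set are assumptions for illustration, not Squirrel AI's production model:

```python
import torch
import torch.nn as nn

class ContentRecommender(nn.Module):
    """Score candidate learning items from a student's knowledge-point mastery vector."""

    def __init__(self, num_knowledge_points: int, num_items: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_knowledge_points, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_items),
        )

    def forward(self, mastery: torch.Tensor) -> torch.Tensor:
        return self.net(mastery)  # higher score = more suitable next item

# Usage: 300 knowledge points, 1,000 candidate items, recommend the top 5.
model = ContentRecommender(num_knowledge_points=300, num_items=1000)
scores = model(torch.rand(300))
print(scores.topk(5).indices)
```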

The superiority of the algorithm is reflected in the results. Over the past two years, Squirrel AI has defeated outstanding teachers in four human-computer contests. Up to now, Squirrel AI has opened nearly 2,000 offline schools in more than 400 cities in China for nearly 2 million students.

Squirrel AI has raised nearly RMB 1 billion in cumulative financing. Last year, Squirrel AI donated 1 million accounts to children from low-income families to promote educational equity.

Squirrel AI will hold the 4th global AI Adaptive Education (AIAED) Summit at the Shanghai Center from November 12 to 13. The chairman of the summit's organizing committee will be Professor Tom Mitchell, Dean of the School of Computer Science at CMU and a godfather of machine learning. Dr. Cui hopes that practitioners in the field will gather at the summit to jointly promote the progress of AI education.

The website of the 4th AIAED summit: https://www.aiaed.net/



