In the development of artificial intelligence, the learning process is crucial. Machine learning (and deep learning in particular) is used to train algorithms and, therefore, to teach the software to think for itself. Facial recognition, for example, is based on this technology. Artificial neural networks form the foundation of many machine learning approaches: the software's algorithms are designed as a network made of nodes, just like the human nervous system. So-called graph neural networks are a fairly new approach. How does this technology work?

How do graph neural networks work?

Graph neural networks (GNNs) are a new subtype of artificial neural networks that is based on graphs. To understand GNNs, we first need to know what is meant by 'graph' in this context. In IT, the term stands for a certain type of data structure. A graph consists of several points (nodes or vertices) that are connected with one another (via edges), forming pairs. To give a simple example: Person A and Person B can be represented as points on a graph. Their relationship to each other is then the connection. Were the connections to disappear, we'd be left with a mere collection of people, or of data.
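The example above can be sketched in a few lines of plain Python (the names are purely illustrative, and no graph library is assumed): nodes are simply labels, and each edge is a pair of connected nodes.

```python
# A graph as a set of nodes plus a set of edges (pairs of nodes).
# Names like "Person A" are illustrative placeholders.
nodes = {"Person A", "Person B", "Person C"}
edges = {("Person A", "Person B"), ("Person B", "Person C")}

def neighbors(node, edges):
    """Return every node directly connected to the given node."""
    result = set()
    for a, b in edges:
        if a == node:
            result.add(b)
        elif b == node:
            result.add(a)
    return result

print(sorted(neighbors("Person B", edges)))  # ['Person A', 'Person C']
```

Without the edges, `nodes` really would be a mere collection of data points; only the connections make a question like "who is linked to Person B?" answerable at all.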

One popular subtype of graph is the tree. Here, the nodes are connected in such a way that there is always exactly one path (possibly across several nodes) between Point A and Point B. Edges can be either directed or undirected. In a graph, the connections are just as important as the data itself, and every edge and every node can be labeled with attributes.

A graph is, therefore, perfectly suited to representing real-world circumstances. And that is the challenge for deep learning: making natural conditions understandable to software. A graph neural network makes that possible: in a GNN, nodes collect information from their neighbors by regularly exchanging messages. This is how the graph neural network learns. Information is passed on and recorded in the properties of the respective nodes.
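One round of this message exchange can be sketched in plain Python. Note that the mean aggregation and the simple averaging update below are illustrative stand-ins: a real GNN would use learned weight matrices and a nonlinearity at these two steps.

```python
# Sketch of one message-passing round: each node averages its neighbors'
# feature vectors, then blends the result with its own current features.
def message_passing_step(features, adjacency):
    """features: {node: [floats]}, adjacency: {node: [neighbor nodes]}."""
    updated = {}
    for node, own in features.items():
        nbrs = adjacency[node]
        if nbrs:
            # Aggregate: mean of neighbor features, dimension by dimension.
            agg = [sum(features[n][i] for n in nbrs) / len(nbrs)
                   for i in range(len(own))]
        else:
            agg = [0.0] * len(own)
        # Update: average of own features and the aggregated message.
        # (A trained GNN uses learned weights and a nonlinearity here.)
        updated[node] = [(o + a) / 2 for o, a in zip(own, agg)]
    return updated

features = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(message_passing_step(features, adjacency))
# A -> [0.5, 0.5], B -> [0.5, 0.75], C -> [0.5, 1.0]
```

After one round, each node's features already reflect its neighborhood; repeating the step lets information from more distant nodes flow in, which is exactly how a GNN records structure in node properties.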

Tip

Want to find out more about graph neural networks and delve deeper into the subject? The Natural Language Processing Lab at Tsinghua University has published a comprehensive summary of scientific work on GNNs on GitHub.

Where are graph neural networks used?

Up to now, scientists have primarily explored the possibilities of graph neural networks, but the suggested areas of application are diverse. Wherever relationships play a major role in the situations or processes to be represented by neural networks, it makes sense to use GNNs.

  • Financial markets: Market forecasts can be made more reliable by understanding the transactions.
  • Search engines: The connections between websites are critical in evaluating the sites' importance.
  • Social networks: Better understanding relationships between people can help optimize social media.
  • Chemistry: The composition of molecules can be represented as graphs and thus transferred to GNNs.
  • Knowledge: Understanding the links between pieces of information is crucial to providing knowledge in the best way possible.

Graph neural networks are already being used in image and speech recognition. Unstructured, natural information can potentially be processed more effectively with a GNN than with traditional neural networks.

Advantages and disadvantages of graph neural networks

Graph neural networks help with challenges that traditional neural networks haven't yet been able to deal with adequately. Graph-based data couldn't be processed correctly because the connections between the data points weren't weighted sufficiently. With GNNs, though, the so-called edges are just as important as the nodes themselves.

However, other problems that accompany neural networks can't be solved with graph neural networks. The black box problem, in particular, remains unsolved: it's difficult to understand how a (graph) neural network reaches its final conclusion, because the complex algorithms' internal processes are difficult to retrace from the outside.
