Neural networks: what can artificial neural networks do?

A long-held dream of computer scientists could soon become a reality: computers whose capabilities are not only on par with those of humans but far surpass them. Artificial intelligence research has made great strides in recent years, and one key technology for teaching machines to think independently is the artificial neural network.

What is a neural network?

Neural networks are a branch of research in computer science and neuroinformatics. There are various types of artificial neural networks, each offering different options for processing information.

Definition: Neural network

A neural network is an information technology system that is modeled after the human brain and provides computers with characteristics of artificial intelligence.

Neural networks allow computers to solve problems and improve their performance independently. Whether they first need to be trained by humans depends on the learning method used.

How does a neural network function?

Neural networks are modeled after the human brain, which processes information through a network of neurons.

An artificial neural network can be described as a model consisting of at least two layers – an input layer and an output layer – generally with additional layers in between (hidden layers). The more complex the problem the network is meant to address, the more layers are required. Each layer contains a large number of specialized artificial neurons.
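The layer structure described above can be sketched in a few lines of plain Python. The layer sizes, weights and input values below are arbitrary illustrative assumptions, not taken from any real network:

```python
# Minimal sketch of a layered network: input layer -> hidden layer -> output layer.
# All weights and sizes are made-up illustrative values.

def dense_layer(inputs, weights, biases):
    """Compute one layer: each neuron sums its weighted inputs plus a bias."""
    return [
        sum(w * x for w, x in zip(neuron_weights, inputs)) + b
        for neuron_weights, b in zip(weights, biases)
    ]

# A network with 3 inputs, one hidden layer of 2 neurons, and 1 output neuron.
hidden_w = [[0.2, -0.5, 0.1], [0.4, 0.3, -0.2]]  # 2 neurons x 3 inputs
hidden_b = [0.0, 0.1]
output_w = [[0.7, -0.3]]                          # 1 neuron x 2 hidden values
output_b = [0.05]

x = [1.0, 0.5, -1.0]                  # one input pattern
hidden = dense_layer(x, hidden_w, hidden_b)
y = dense_layer(hidden, output_w, output_b)
print(hidden, y)
```

A deeper network simply chains more `dense_layer` calls between input and output.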

Processing information in the neural network

Information processing in the neural network always follows the same process: information in the form of patterns or signals is received and processed by the neurons of the input layer. Each connection between neurons carries a weight, so that incoming signals contribute with different levels of importance. The weights and the transfer function together determine a neuron's input, which is then forwarded.

In the next step, an activation function together with a threshold value determines the neuron's output value. Depending on this value and the connection weights, downstream neurons are activated to a greater or lesser extent.

This linking and weighting creates an algorithm that generates a result for each input. With each training pass, the weights, and thus the algorithm, are adjusted so that the network delivers increasingly accurate results.
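The steps above can be condensed into a sketch of a single artificial neuron. The step activation, weights and threshold value here are hand-picked for illustration and are not part of any particular network:

```python
def neuron_output(inputs, weights, threshold):
    """Weighted sum of the inputs, then a simple step activation:
    the neuron fires (1) only if the sum reaches the threshold."""
    net_input = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net_input >= threshold else 0

# Illustrative values: two input signals with equal weights.
fires = neuron_output([1, 1], [0.6, 0.6], 1.0)   # both signals active
silent = neuron_output([1, 0], [0.6, 0.6], 1.0)  # only one signal active
print(fires, silent)
```

With both inputs active the weighted sum (1.2) clears the threshold and the neuron fires; with one input it does not. Training would adjust the weights so that the firing pattern matches the desired behavior.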

The neural network: an example of its application

Neural networks can be used for image recognition. Unlike humans, a computer cannot tell at a glance whether an image shows a person, plant or object. It has to examine the image for individual characteristics. The computer determines which characteristics are relevant using the implemented algorithm or through data analysis.

In each network layer, the system checks the input signals (i.e. the images) for specific criteria such as colors, corners and shapes. With each check, the computer becomes better able to determine what is shown in the image.

Initially, the results will be relatively prone to error. When a neural network receives feedback from a human trainer and adapts its algorithm accordingly, this is called machine learning. Deep learning, by contrast, does not require feedback from humans: the system learns from its own experience and improves as more image material becomes available to it.

Ideally, the end result is an algorithm which can perfectly identify the content of images regardless of whether they are in black and white, the position of the subject or the perspective from which the subject is seen.

Types of neural networks

Different neural network structures are used depending on the learning method and the intended application.


Perceptron

This is the simplest form of neural network and originally referred to a network formed from a single neuron which is modified by weightings and a threshold value. Today, the term “perceptron” is also used for single-layer feedforward neural networks.

Feedforward neural networks

These artificial neural networks can only process information in one direction. These can be single-layer networks (i.e. consisting only of input and output layers) or multi-layer networks with multiple hidden layers.

Recurrent neural networks

In recurrent neural networks, information can also go through feedback loops and thus return to previous layers. This feedback allows the system to develop a memory. Recurrent neural networks are used for purposes such as voice recognition, translation and handwriting recognition.
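A minimal sketch of this feedback idea: the output of each step feeds back into the next step as the hidden state. The weights are made up, and the activation function is omitted for simplicity:

```python
def rnn_step(x, h_prev, w_in, w_rec):
    """One recurrent step: the new hidden state depends on the current
    input AND the previous hidden state - this is the network's memory."""
    return w_in * x + w_rec * h_prev

# A single impulse at the first time step, then silence.
h = 0.0
for x in [1.0, 0.0, 0.0]:
    h = rnn_step(x, h, w_in=1.0, w_rec=0.5)
    print(h)
```

Even though the input is zero at steps two and three, the hidden state (1.0, 0.5, 0.25) still carries a fading trace of the first impulse, which is exactly the memory effect that feedback loops provide.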

Convolutional neural networks

These networks are a subtype of multi-layer networks. They consist of a minimum of five layers. Pattern recognition is conducted on each layer, and the results from one layer are transferred to the next. This type of neural network is used for image recognition.
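The core operation of such a network is the convolution itself: a small kernel slides across the image, and each output value measures how strongly the local patch matches the kernel's pattern. The image and kernel values below are invented for illustration:

```python
def convolve2d(image, kernel):
    """Slide the kernel over every position of the image and compute
    the element-wise product sum (a valid convolution, no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A tiny black-and-white image with a vertical edge down the middle.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1], [-1, 1]]  # responds where brightness jumps left-to-right
result = convolve2d(image, kernel)
print(result)
```

The output is large exactly where the edge sits, which is how each layer of a convolutional network detects one kind of pattern and passes the result on to the next.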

Learning methods

To properly establish connections in artificial neural networks so that they can perform tasks, the neural networks must first be trained. There are two basic methods:

Supervised learning

During supervised learning, specific results are defined for each of the different input options. For example, if cat images are to be recognized by the system as cat images, people supervise the system’s categorization and give feedback as to which images were correctly recognized and which were not. As a result, the weightings in the network are modified and the algorithm is optimized.
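This feedback loop can be sketched with the classic perceptron learning rule: whenever the prediction disagrees with the predefined correct answer, the weights are nudged in the right direction. The task (learning logical AND) and the learning rate are illustrative choices:

```python
def train_perceptron(samples, epochs=10, lr=1):
    """Supervised learning: adjust weights only when a prediction is wrong."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for x, target in samples:
            predicted = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            error = target - predicted  # the feedback from the known labels
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Labeled examples for logical AND: the correct results are given in advance.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

predictions = [predict(x) for x, _ in data]
print(predictions)
```

After a few passes over the labeled data, the weights settle on values that classify all four inputs correctly; this is the same modify-the-weightings-and-optimize cycle described above, just on a toy scale.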

Unsupervised learning

During unsupervised learning, the result for the task is not predefined. Instead, the system learns exclusively from the input information. Hebb’s rule and adaptive resonance theory are used in this learning method.
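Hebb's rule is often summarized as "neurons that fire together, wire together": a connection is strengthened whenever its input and the neuron's output are active at the same time, with no predefined correct answer involved. A minimal sketch, with an assumed learning rate of 0.1:

```python
def hebb_update(w, x, y, lr=0.1):
    """Hebb's rule: delta_w = lr * input * output.
    Co-active input/output pairs strengthen their connection."""
    return [wi + lr * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
# Present the same pattern three times: the first input is active
# together with the output neuron, the second input stays silent.
for _ in range(3):
    w = hebb_update(w, x=[1, 0], y=1)
print(w)
```

Only the co-active connection grows; the silent one stays at zero. The system has learned a correlation purely from the input data, without any supervision.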

Applications for neural networks

Neural networks are especially effective when there is a large amount of data to be evaluated and only limited systematic problem-solving knowledge. Classic application uses include text, image and voice recognition (i.e. situations in which computers examine data for specific characteristics in order to categorize it).

Artificial neural networks can also be used for predictions and simulations (e.g. weather forecasts, medical diagnoses and stock markets).

When it comes to industrial applications, neural networks are sometimes used in control systems engineering: they monitor target values and automatically take corrective action if deviations occur, and they can also set target values independently based on data analyses.

Developments in the field of unsupervised learning for neural networks are currently expanding the application uses and improving the performance of networks dramatically. The most notable applications of self-learning neural networks include the voice assistants Alexa, Siri and Google Assistant.

History and future outlook

In the past ten years, neural networks have moved into the public consciousness due to the discussions around artificial intelligence. However, the basic technology has already been around for many decades.

The idea of artificial neural networks dates back to the early 1940s, when Warren McCulloch and Walter Pitts described a model, based on the structure of the human brain, that linked elementary units together. It was supposed to be able to calculate almost any arithmetic function. In 1949, Donald Hebb formulated Hebb's rule, which is still used in many neural networks today.

In 1960, a neural network was developed which had worldwide commercial use for echo filtering in analog telephones. After that, research in the field ground to a halt. One reason for this was that leading scientists had come to the conclusion that the neural network model could not solve important problems. Another was that effective machine learning requires large amounts of digital data which was not available at the time. It was not until the emergence of big data that this situation changed: interest in artificial intelligence and neural networks was revived.

Since then, this field has seen tremendous growth. While the results are promising, neural networks are not the only technology to implement artificial intelligence in computers. They are just one possibility, even if they are often presented as the only viable option in the public discourse.
