The neurosciences often make headlines with their research into the human brain. Progress in this area provokes a debate on which many hopes and concerns hang: Will it one day be possible to fully reproduce a brain using technology? Computer processing capacity continues to evolve and has long exceeded that of humans in some respects, but in many areas the human brain remains superior. Is this set to change sometime soon?
All of these questions relate to the area of artificial intelligence (AI). One idea behind AI research is to try and technically replicate the brain and its functions using computer science, neurology, psychology, and linguistics. In many ways, the approach of AI research reveals a lot about how we view ourselves as humans as well as about our understanding of the word ‘intelligence’.
At the moment, forms of artificial intelligence that have their own will and behave autonomously still only exist in works of fiction. But there is no doubt that there are many areas of life where AI technology already plays a central role, although it’s not always immediately noticeable. Many people do not actually know what artificial intelligence is or exactly how it works. AI helps make better diagnoses and treatment plans, it makes market forecasts much more accurate, and it also makes Google’s search algorithms more and more dynamic. AI technology is behind digital assistants like Cortana and Siri, helps develop self-driving cars, and even helps companies select new employees. In the US, AI is currently being used to write legal documents. Over the last few decades, AI research has achieved a lot in many sectors.
Such new developments are also very relevant for search engines and online marketing. Having a basic understanding of AI technology is therefore very beneficial from an SEO point of view: What is artificial intelligence and how does it work? What are the aims of the research, and in which sectors can it currently be applied? What are the prospects and risks associated with it? And finally, what impact do these developments have on the world of online marketing and SEO?
- Definition of artificial intelligence: vision and reality
- How does artificial intelligence work? The methodology and history of AI
- The prospects and risks of artificial intelligence
- Artificial intelligence in the digital world
Definition of artificial intelligence: vision and reality
What is ‘artificial intelligence’?
Artificial intelligence can be defined as a branch of computer science whose goal is to create a technological equivalent to human intelligence. It is with this goal in mind that many computer scientists work with experts from a variety of fields. But what ‘intelligence’ actually is, and how it can be reproduced using technology, is another matter – needless to say, there are many theories and methodical approaches dedicated to this line of research.
An exact definition of artificial intelligence is impossible given how hard it is to even define the word ‘intelligence’. Exactly what intelligence entails is something that humans themselves cannot agree on – so you can imagine that this debate becomes even more contentious when it comes to machines. Should the machine primarily be optimized towards rationality? Or are other human characteristics like intention, intuition, and learning ability also important? It could also be argued that social competencies, empathy, and a sense of responsibility all play a role. At the end of the day, the question is whether the technology should primarily engender rational characteristics or artificial humanity.
Should the machine be built in a way that is identical to a human brain? This simulation approach aims to achieve a complete replication of the brain’s functions. Or should the machine only have the appearance of a person, or only be similar to one when it comes to the final result? This phenomenological approach is centered on what humans actually gain from artificial intelligence – regardless of which technical process lies behind it.
Defining artificial intelligence has always been difficult. In 1950 the mathematician Alan Turing developed a test designed to make AI measurable: the ‘Turing Test’ aimed, through a series of questions, to determine whether a machine was still recognizable as such. If the answers given by a computer are impossible to differentiate from those given by a human, then the computer is said to be ‘artificially intelligent’. But this definition does little to help current AI technology, which these days is primarily being developed for very technical fields of activity. This means that AI is less focused on mastering the art of human communication and more on carrying out highly specialized tasks efficiently. For these technologies, a reduced or limited version of the Turing test is used: if a certain part of a technical system exhibits characteristics identical to a human’s – e.g., in the case of a medical diagnosis or a game of chess – then we speak of an artificially intelligent system. As a result, there are two definitions of artificial intelligence: a ‘strong’ and a ‘weak’ one.
The vision: strong artificial intelligence
The ‘strong’ definition of artificial intelligence refers to an intelligence which, with its diverse capabilities, is in a position to replace humans. This universal approach – that a machine can act as a human – has existed since the Enlightenment but currently remains fictional.
There are various dimensions to intelligence: cognitive, sensory motor, emotional, and social. The majority of the current applications of artificial intelligence are in the area of cognitive intelligence, i.e., logic, planning, problem-solving, self-sufficiency, and individual perspective formation.
The vision is that one day AI can develop an autonomous consciousness and its own free will. It is with this long-term goal that AI research touches on very traditional subject areas like philosophy and brings up countless ethical and legal questions. For this reason, legal philosophers insist that there needs to be binding legislation for artificially intelligent beings. The legal capacity of intelligent machines is still very unclear.
Reality: weak artificial intelligence
On the other hand, the ‘weak’ definition of artificial intelligence is one where the development and application of artificial intelligence take place in clearly defined, marked-out sectors. This is the position that artificial intelligence finds itself in at this moment in time. Nearly all of the current uses of artificial intelligence can be defined as ‘weak’ – but also undoubtedly specialized – AI. Good examples of this are the development of self-driving cars, medical diagnoses, and intelligent search algorithms.
Over the last few years, research has achieved groundbreaking successes in the area of weak AI. The development of intelligent systems in individual sectors has shown itself to be not just immensely practical but also, ethically speaking, less fraught than research into superintelligence. The sectors where artificial intelligence is applied are extremely varied. Areas experiencing particular success at the moment are medicine, finance, transport, marketing, and, of course, the online world. At this stage, it is almost inevitable that AI technologies of this kind will continue to expand their influence into all aspects of our lives.
How does artificial intelligence work? The methodology and history of AI
How does one even begin describing the operating principles of artificial intelligence? AI is only ever as good as the nature of its technical representation of knowledge. For this, there are two fundamental methodical approaches: the symbolic approach and neuronal approach.
- With symbolic AI the selected knowledge is represented by symbols; it operates with so-called symbol manipulation. Symbolic AI approaches the processing of information ‘from above’ and operates with symbols, abstract correlations, as well as logical keys.
- With neuronal AI the selected knowledge is depicted by artificial neurons and their connectors. Neuronal AI approaches the processing of information ‘from below’ and simulates individual artificial neurons, which organize themselves into larger groups and together form an artificial neuronal network.
The classic approach for artificial intelligence is symbolic AI. This rests on the idea that human thought can be reconstructed from a logically superior conceptual level, regardless of concrete experiences (top-down approach). Knowledge is represented by abstract symbols, including written and spoken language. Through symbol manipulation, machines learn to recognize, understand, and use these symbols on the basis of algorithms. The intelligent system retrieves its information from so-called expert systems, in which the symbols and information are sorted in a specific way – mostly in logical ‘if-then’ relationships. The intelligent system can access these knowledge databases and compare the information within them with its own input.
Classic uses of symbolic AI are word processing and speech recognition, but also other logical disciplines such as playing a game of chess. Symbolic AI works based on set rules and, with increasing computing power, can solve ever more complex problems. This is what allowed IBM’s Deep Blue, with the help of symbolic AI, to win a game of chess against the then world champion, Garry Kasparov.
An expert system is structured as a knowledge base equipped with processing rules. Here’s an example:
- All trees are made of wood.
- Wood is flammable.
- X is a tree.
- Therefore X is flammable.
Based on such logical links, an expert system can imitate the cognitive behavior of humans. Expert systems are nearly always limited to a specialist field, e.g., a specific branch of medicine.
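The logical links above can be sketched as a tiny forward-chaining rule engine – a toy illustration in Python, not how a production expert system would actually be implemented:

```python
# Toy forward-chaining 'expert system' for the tree/wood/flammable example.
# Each rule pairs a set of required facts with a conclusion it licenses.
rules = [
    ({"X is a tree"}, "X is made of wood"),      # all trees are made of wood
    ({"X is made of wood"}, "X is flammable"),   # wood is flammable
]

def forward_chain(facts, rules):
    """Apply the if-then rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"X is a tree"}, rules))
# the derived facts now include 'X is flammable'
```

The inflexibility discussed below follows directly from this design: the system can only ever conclude what its hand-written rules allow.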
The performance of symbolic AI stands and falls with the quality of the expert system. Originally, developers had hoped that the more the technology progressed, the more capable an expert system would become – and that the dream of strong artificial intelligence would thereby come within reach. However, the limitations of symbolic AI have become clearer and clearer. Regardless of how complex the expert system is, symbolic AI remains correspondingly inflexible: exceptions, variations, and unspecified facts are difficult for a strictly rule-based system to handle. On top of this, symbolic AI has only a very limited capacity to acquire knowledge independently.
Too rigid and not dynamic enough: the technology didn’t seem able to meet the very high expectations, which led to the so-called ‘AI winter’ in the mid-1970s. This lasted well into the 1980s, as nearly all financial support dramatically fell away. It was during this deep lull that the technology went in a revolutionary new direction: the development of self-learning systems. And so the work on artificial neuronal networks revived AI research.
It was Geoffrey Hinton and his two colleagues who revived neuronal AI research in 1986 and with it the research field of artificial intelligence. The further development of the backpropagation algorithm created the basics for Deep Learning, with which nearly all AI operates these days. It’s thanks to this learning algorithm that deep neuronal networks can continually learn and grow by themselves – and now overcome challenges where symbolic AI once failed.
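The learning step that backpropagation repeats layer by layer can be illustrated with a single artificial neuron. The sketch below is a deliberately minimal Python example, not the full multi-layer algorithm: it trains one sigmoid neuron on the logical AND function by following the gradient of its error.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, all starting at zero
lr = 0.5                   # learning rate

for epoch in range(10000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # chain rule: d(error)/d(weight) = (out - target) * out * (1 - out) * input
        grad = (out - target) * out * (1 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

print(round(sigmoid(w1 + w2 + b)))  # input (1, 1): rounds to 1
print(round(sigmoid(b)))            # input (0, 0): rounds to 0
```

In a deep network, this gradient step is propagated backwards through every layer – which is what the backpropagation algorithm automates.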
Neuronal artificial intelligence (a.k.a. connectionist or sub-symbolic AI) has moved away from the principle of symbolic representation of knowledge. Similar to the human brain, knowledge is instead split up into tiny functional units – artificial neurons – which then connect into ever-growing groups (bottom-up approach). This results in a diverse, branched network of artificial neurons.
Neuronal AI attempts to replicate the operating principle of the human brain as precisely as possible and to artificially simulate its neuronal networks. Unlike with symbolic AI, the neuronal network is ‘trained’ – in robotics, for example, this is done with sensory motor data. From these experiences, the AI generates an ever-growing knowledge base. This is exactly where the big breakthrough lies: while this training requires a lot of time, the system is then in a position to learn independently. This is what we now call ‘self-learning systems’ or ‘machine learning’. It has made neuronal artificial intelligence into very dynamic, adaptable, and versatile systems, some of which are no longer completely comprehensible to humans.
The structure of artificial neuronal networks nearly always follows the same principles:
- Countless artificial neurons are layered one over the other and connected to one another via simulated wires.
- At the moment, it is primarily deep neuronal networks that are being used. ‘Deep’ means that they work with more than two layers. The intermediary layers are stacked hierarchically one on top of the other – in some systems, information is passed up through millions of connectors. To give you an idea: AlphaGo (Google DeepMind) features over 13 intermediary layers; Inception (Google) has over 22 layers.
- The uppermost layer, or input layer, works like a sensor. It records the input, i.e., text, images, and sounds, as it enters the system. From here the input is brought through the network according to certain patterns and compared with any previous input. And so the network is fed and trained through the input layer.
- On the other hand, the deepest layer, or output layer, mostly has very few neurons – one for each classified category (picture of a dog, picture of a cat, etc.). The output layer shows the user the result of the neuronal network and, e.g., can also recognize a picture of a cat, which previously may have been unknown to it.
- There are three fundamental learning processes with which neuronal networks can be trained: supervised, unsupervised, and reinforcement learning. These processes regulate in different ways how a system input leads to the desired output.
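The layered structure described above can be sketched in a few lines of Python. The weights here are random and untrained, so the outputs are meaningless; the point is purely the flow of data from the input layer through stacked intermediary layers to a small output layer:

```python
import random

random.seed(0)  # reproducible random weights

def relu(z):
    """A common activation function: passes positive signals, blocks negative ones."""
    return max(0.0, z)

def layer(inputs, n_neurons):
    """One fully connected layer with random (untrained) weights."""
    outputs = []
    for _ in range(n_neurons):
        weights = [random.uniform(-1, 1) for _ in inputs]
        bias = random.uniform(-1, 1)
        outputs.append(relu(sum(w * x for w, x in zip(weights, inputs)) + bias))
    return outputs

signal = [0.2, 0.7, 0.1, 0.9]   # input layer, e.g. four pixel values
for size in [8, 8, 8]:          # three intermediary ('hidden') layers
    signal = layer(signal, size)
scores = layer(signal, 2)       # output layer: one neuron per classified category

print(len(scores))  # 2: one score per category
```

Training – via one of the three learning processes just mentioned – consists of adjusting those random weights until the output layer reliably produces the desired category.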
An overwhelming majority of the most recent AI successes can be attributed to neuronal networks. Under the heading of deep learning, we group the extraordinary achievements of self-learning systems in innovation research. This applies to everything from speech and writing recognition to self-driving cars. It was deep neuronal networks that allowed Google DeepMind’s AlphaGo to defeat the South Korean professional Go player Lee Sedol in 2016. Go is recognized globally as one of the most complex strategic board games.
Google’s Inception, on the other hand – originally a system for image recognition – creates striking dream images. These generated a lot of viral hype in 2015 under the hashtag #DeepDreams. This ‘side effect’ of the system was discovered by its developers by coincidence, while they were trying to find out exactly how the artificial intelligence they had created actually worked.
The prospects and risks of artificial intelligence
From blind optimism about progress to a simple refusal to acknowledge technology – intelligent technology provokes every range of emotion and reaction. This can be primarily attributed to there being both positive and negative future projections about how these technologies will change our lives. What are the prospects and risks associated with artificial intelligence? Here we have compiled the most important points of view from both AI enthusiasts and skeptics.
There is a whole range of advantages and prospects when it comes to artificial intelligence. The most important advantages are undoubtedly in the world of work, where it can be highly efficient and dramatically improve economic prospects.
“AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress.”
- American author, computer scientist, inventor and futurist Ray Kurzweil writing in TIME magazine in December 2014; Source: time.com/3641921/dont-fear-artificial-intelligence/
Proponents of the new technology refer primarily to the prospects offered by artificial intelligence:
- Workplaces and reduced workload: the new technology could bring about valuable new workplaces and in general lead to an economic upsurge. One thing that all experts agree on is that the technology will have a radical impact on the whole workplace. In 2015 a study panel at Stanford University decided to investigate the future prospects of artificial intelligence, and came to the conclusion that at the moment it is impossible to gauge whether the eventual effect on the labor market will be positive or negative. However, one thing that is certain is that in the future many people will not be able to support themselves through work alone. For this reason, many supporters of a universal basic income see artificial intelligence technology as a big opportunity because the traditional model of paid labor will soon be replaced. For Tesla founder Elon Musk, one of the big advantages of artificial intelligence is that it will mean more free time for people.
- Comfort: supporters of AI view each technical advancement as a prospect for greater ease and comfort in everyday life. One example of this is, of course, the self-driving car as well as intelligent translation software – in general, such developments make life a good bit easier for users/customers.
- Extraordinary performance: Even when it comes to tasks for the greater public good, artificial intelligence offers significant – if not the biggest – advantages. There is no denying that machines have a much lower error rate than humans, and their performance potential is enormous. In the health and law sectors in particular, the versatility of intelligent machines is seen as especially promising. While experts don’t go so far as to expect that judges will one day be replaced by machines, artificial intelligence can help to recognize the pattern of a case faster in order to reach an objective conclusion.
- Economic advantages: Naturally, this technology also promises large financial gains for the industries involved. The International Federation of Robotics (IFR) predicts that by 2019 there will have been 42 million service robots sold worldwide – resulting in a turnover of around 22 billion dollars. A study by Bank of America Merrill Lynch also estimated that by 2020, the turnover of the AI industry will exceed 150 billion dollars. All in all, this means that artificial intelligence could result in a significant economic upturn for the IT sector and its sister industries – and in turn also improve overall prosperity.
- Futuristic projects: Last but not least, artificial intelligence inspires natural human curiosity. It is already being used to explore oil sources and control Mars robots. It is safe to assume that the continued development of the technology will mean that the relevant fields of application continue to expand.
However, there are also some prominent experts, like physicist Stephen Hawking or the aforementioned Silicon Valley icon Elon Musk, who warn about the risks of artificial intelligence. These critical voices also happen to have the support of larger organizations and initiatives. For example: the Future of Life Institute (FLI) regularly mobilizes renowned critics to help call for a responsible approach to technology.
“The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast - it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.”
- Elon Musk, Tesla founder, and AI investor, during an interview in 2014 (Source: uk.businessinsider.com/elon-musk-killer-robots-will-be-here-within-five-years-2014-11)
Here are some of the risks of artificial intelligence:
- Inferiority of humans: One potential risk which many fear, and which has often been a favorite subject of science fiction writers, is the development of a so-called ‘superintelligence’. This term refers to a technology that optimizes itself and is then no longer reliant on humans. The relationship between humans and this superintelligent technology could become problematic and eventually lead to humans becoming slaves to the technology. Most researchers, though, view an intentionally malicious artificial intelligence as impossible. A risk that many do see as realistic, however, is that artificial intelligence becomes competent enough to carry out activities independently – activities that may then be harmful to humans. There is a lot of disagreement about if and when such a loss of control over AI technology could occur. On its website, the FLI has information on the myths and misunderstandings surrounding superintelligence.
- Dependency on technology: Other skeptics see the major risk of artificial intelligence not in the possible inferiority of humans but in people’s ever-growing dependency on technology. One example critics cite is healthcare, where the use of nursing robots has already been tested, meaning that people are increasingly becoming monitored subjects of technological systems. As a result, they argue, humans are in danger of giving up a large chunk of their personal privacy and autonomy. It is not just in healthcare that such concerns are being voiced, but also in relation to AI-supported video surveillance and intelligent algorithms online.
- Data protection and the distribution of power: Intelligent algorithms are now able to process the growing data sets more efficiently than ever. This is particularly good news for the online retail sector. According to critics, the processing of data through AI technology is becoming more and more difficult for users to understand and keep up with. Instead, it is companies and experts with the tech know-how that have all the control. Of course, these are not just risks that are exclusive to the area of artificial intelligence but also apply to the more general challenges of the current digital age. However as the potential of AI technology grows, so do the voices emphasizing caution.
- Filter bubbles and selective awareness: Online activist Eli Pariser has voiced concerns about what he sees as a further risk of artificial intelligence: so-called filter or information bubbles. If algorithms only show content to a user based on information gathered from their previous online behavior (personalized content), then it is very likely that their view of the world will get narrower and narrower – or at least this is the worry. Skeptical experts are of the opinion that AI technologies lead to selective cognition and reinforce a growing ‘ideological distance between individuals’. A 2016 study by Microsoft investigated the drifting apart of information resources as a result of filter bubbles. The results put the risks associated with artificial intelligence into perspective; the study also mentions that similar problems are emerging in classic journalism, and that up until now it has been hard to determine the extent of the influence of the new technologies.
- Influence on opinion formation: Additionally, according to critics, AI technologies have the ability to steer public opinion. The reason for this line of thinking is the existence of technologies that hold very detailed information on their users, as well as the presence of social bots that influence public discussion. The more intelligent such technologies become, the higher the risk of opinions being influenced – at least, that is what the skeptical voices are saying.
- Weapon technology: Another major risk associated with artificial intelligence is its use in war. In 2015, hundreds of AI researchers and scientists came together under the common banner of the FLI to warn against AI supported automatic weapons. Among the signatories were Stephen Hawking and Elon Musk, as well as Apple co-founder Steve Wozniak and DeepMind co-founder Demis Hassabis. In an open letter, they called for the banning of AI-supported weapons technology, seeing them as “offensive autonomous weapons beyond meaningful human control”. The sinister link between artificial intelligence and things like warfare, arms races, nuclear threats, etc., is something which is repeatedly pointed out by many different groups/parties, from a variety of backgrounds.
- Workplace: The risks of artificial intelligence attributed to the world of work are mostly associated with job automation. Many skeptics fear that AI technology will soon mean that humans are surplus to requirements, as can be seen with the likes of cleaning robots, robot doctors, or self-navigating transport systems. At the moment it is the use of robot doctors that is causing quite a stir in discussions on the ethics of healthcare. The fear is that robots taking care of people, and the resulting lack of human interaction, will have a negative effect on patients, particularly those in the later phases of life.
- Discriminatory algorithms: Unlike its human counterparts, artificial intelligence frequently delivers results that are neutral – one of the many advantages of the technology. However, it is demonstrating biases relating to individuals’ gender or background more and more often. Within short spaces of time, Microsoft’s chatbot Tay began to imitate racist language, security technologies identified ‘black neighborhoods’ as problem areas, and advertising platforms displayed higher-paying jobs to male internet users. Such problems are becoming more and more well known, leading the British Standards Institution to publish a revised version of its ethical guidelines for robots. But it can be difficult for such guidelines to be integrated into the technology: at the end of the day, AI learns by itself from environmental factors and processes over which individuals have only limited influence.
Artificial intelligence in the digital world
How does artificial intelligence work in the digital world? Firstly, it must be pointed out that for the ordinary person, artificial intelligence online is hardly visible. Many companies are quick to distance themselves from the term, even if their products actually need artificial intelligence to function. As big as the fascination with AI might be, the associated negative connotations are also quite prominent, and consumers are often skeptical of AI technology in everyday life. On top of this, it is not always easy to categorize a technological performance as ‘intelligent’; the fluid nature of how such technology is applied, as well as the differing definitions of artificial intelligence, can often lead to confusion surrounding such matters.
However, it is pretty much inevitable that we will get more and more used to the idea and application of AIs. There are so many areas in the online world where AI technologies play a decisive role. The list of active AI technologies and programs is long and getting longer. Google dominates this market with its innovations that are miles ahead of everyone else’s – apparently as many as 2-3 years ahead of other IT companies. But just how is artificial intelligence being integrated into well-known search algorithms and SEO? Below you’ll find some examples of the groundbreaking technology and software in this sector.
Techniques and areas of application
- Machine learning: Machine learning refers to the way an artificial system can collect knowledge from its experiences. This learning data enables the system to recognize patterns and regularities. Both symbolic and neuronal artificial intelligence are used for machine learning.
- Deep Learning: Deep learning is a subcategory of machine learning that works exclusively with neuronal AI; more specifically with artificial neuronal networks. Deep learning is the basis for most current AI applications.
- Visual classification: This is used for the development of object, face, symbol, and text recognition.
- Auditive classification: This serves the purpose of developing speech and sound recognition.
- Social Computing: This is where different online content (e.g., from social media, online games, blogs, wikis, etc.) is analyzed. From these results, it is possible to determine patterns and rules relating to social behavior. Social computing then allows you to develop artificial social agents.
- Opinion analysis: So-called ‘opinion mining’ (a.k.a. ‘sentiment analysis’) refers to methods with which you can search the web for users’ opinions and feelings about certain topics. The data gained is then used to find out how users feel about certain issues, events, and personalities. Such ‘opinion mining’ also makes it possible to process customer requests automatically and deal with them in a personalized way.
- Customer service (telephone, online) and digital assistants: AI developments play a big role in the service industry. Particularly when it comes to voice recognition software working with artificial intelligence.
- Search algorithms: Artificial intelligence is one of many components with which search algorithms are being optimized. Its significance when it comes to rankings is increasing all the time.
- Crawlers: Search engines, among others, use crawlers to comb through the web in search of information. This information is then used to build up an index. A crawler learns from examples and can deduce relevant conclusions from them.
- Computer vision systems: Machine vision, specifically face recognition, is often used in the security technology sector, e.g., in street traffic or the monitoring of public places. Websites like Facebook use this technology to recognize their users more effectively. These days, Facebook is in a position to find a certain face amongst millions of photos – even if that face is not looking into the camera.
- Virtual actors and bots: In the development of computer games, AI allows virtual players to act in a way that is more human. So-called bots are being developed that simulate human activity on the internet. Social bots operate using artificial intelligence.
- Crowd simulation: Artificial intelligence allows you to predict the complex behavior patterns of groups of people. These are used not just in the development of computer games but also for the likes of security technology, or in the analysis of viral dynamics.
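Two of the entries above – machine learning from labelled examples and opinion mining – can be combined in a toy Python sketch. The example reviews are invented, and real sentiment-analysis systems train statistical models on large corpora, but the bag-of-words principle is the same:

```python
from collections import Counter

# 'Training' data: invented mini-reviews labelled with an opinion
training = [
    ("great product fast delivery", "positive"),
    ("love it works great", "positive"),
    ("terrible quality broke fast", "negative"),
    ("awful service never again", "negative"),
]

# Learning step: count how often each word appears under each label
counts = {"positive": Counter(), "negative": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    """Pick the label whose training vocabulary overlaps most with the text."""
    scores = {
        label: sum(words[w] for w in text.split())
        for label, words in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("works great love the quality"))  # positive
print(classify("terrible awful broke"))          # negative
```

The same pattern-counting idea, scaled up to millions of texts and far richer models, is what lets opinion-mining systems gauge how users feel about products, events, and personalities.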
Artificial intelligence should not be confused with the Semantic Web. While the roots of the Semantic Web originated in AI research, today the two sectors have nearly nothing to do with one another.
Programs, algorithms, and research initiatives
- RankBrain: RankBrain is an artificially intelligent algorithm from Google that was originally developed to better understand search queries that had never been entered before. In 2015 Google announced that RankBrain, after links and content, was the third most important of over 200 ranking factors in Google search. This means that RankBrain has a big influence on SEO.
- DeepMind: Purchased by Google in 2014, DeepMind is a company that has created many different, innovative AI technologies. These developments, including RankBrain, were integrated by Google into various applications and algorithms. One of the AIs from DeepMind managed to teach itself the rules of old Atari games. Furthermore, the company developed AlphaGo, a computer program that mastered the board game ‘Go’. DeepMind distinguishes itself technically in that its developers rely on neural networks and additionally equip their AIs with short-term memory.
- Inception: Inception is an image recognition network from Google that took visual recognition to a whole new level.
- Siri, Alexa, Cortana, and co.: The artificial intelligence of the voice assistants from Apple, Amazon, and Microsoft respectively should already be familiar to most consumers from everyday experience. In particular, the speech recognition functions of these assistants rely on AI technology.
- Watson: Watson, the communication software developed by IBM, has been optimized for questions and answers in everyday language. In 2011 it appeared on the quiz show ‘Jeopardy’ and showed what it was capable of: it won against its human opponents with a lead of $2,500. Watson is also being used in the medical sector to ascertain patients’ insurance details as well as their medical history. Another gimmick was demonstrated in 2016, when it managed to create a trailer for the film ‘Morgan’ by itself on the basis of 100 other movie trailers. The trailer then went on to be used officially.
- Cleverbot: The web-based chat program Cleverbot learns by communicating with humans. It is an open source chat program which in 2011 was rated as ‘human’ after scoring 59.3% in a Turing test.
- TensorFlow: Since 2015, Google has made this machine learning software freely available with the intention of advancing AI research projects. At the moment TensorFlow is used in various Google products, including Google’s speech recognition, Gmail, and Google search.
- Facebook AI Research (FAIR)/Torch: It is a similar case with Facebook and its open source software ‘Torch’, which is likewise intended to promote deep learning methods.
- Microsoft Emotion Recognition: This tool from Microsoft recognizes emotions in images.
How does this affect SEO?
The innovations of self-learning systems are bringing about big changes in the online world. With the purchase of DeepMind in 2014, Google showed that its search algorithm will become further specialized in artificial intelligence. Google is constantly buying start-ups in the area of AI research – such as the British companies Vision Factory and Dark Blue Labs – and integrating them into its DeepMind team.
So far, the Google AI initiative with the biggest influence has been the intelligent algorithm RankBrain. Highly efficient at recognizing new search queries, in 2015 it was implemented globally in Google’s search algorithm. Since then it has become one of the three most important ranking factors, alongside links and content. RankBrain’s specialty is converting search queries into mathematical entities, which makes it easier to recognize the intention behind a query. However, exactly how the artificial intelligence behind this works is not publicly known.
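The idea of converting queries into ‘mathematical entities’ can be sketched with word vectors: each word maps to a vector, a query is the average of its word vectors, and queries with similar meaning end up close together. The tiny hand-made vectors below are hypothetical; real systems learn them from enormous text corpora.

```python
# Sketch: represent queries as vectors and compare them by cosine similarity.
# The three-dimensional word vectors are invented for illustration only.
import math

WORD_VECTORS = {
    "cheap":  [0.9, 0.1, 0.0],
    "budget": [0.8, 0.2, 0.0],
    "hotel":  [0.0, 0.9, 0.3],
    "stay":   [0.1, 0.8, 0.4],
    "recipe": [0.0, 0.1, 0.9],
}

def query_vector(query):
    """Average the vectors of the known words in a query."""
    vecs = [WORD_VECTORS[w] for w in query.lower().split() if w in WORD_VECTORS]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

q1 = query_vector("cheap hotel")
q2 = query_vector("budget stay")   # a query never seen before
q3 = query_vector("recipe")
print(cosine(q1, q2) > cosine(q1, q3))  # True: "budget stay" is the closer query
```

Even though ‘budget stay’ shares no words with ‘cheap hotel’, the geometry places the two queries near each other – which is, in simplified form, how an unseen query can still be matched to a known intention.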
The influence that AI has on Google search cannot be overstated. SEO expert Mark Traphagen quotes Google CEO Sundar Pichai: “We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. We want to build a personal Google for every user.” A personal Google for everyone means the complete individualization of online search using artificial intelligence. This is an immense challenge for SEO.
RankBrain’s artificial intelligence orders search queries by converting its known data into hypotheses and generalizations and applying these to the respective input. Because it is constantly being fed new data, its behavior is always changing. Google is thus no longer working with weekly updates performed by people; instead, it is working increasingly with real-time calculations from self-learning systems. Given that the algorithms were only partly understandable to begin with, this dynamic and personalized AI makes the work of SEOs even more difficult.
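The contrast between scheduled manual updates and a continuously learning system can be illustrated with a toy online-learning model: a perceptron that nudges its weights after every single example, so its behavior shifts as each new data point arrives. The features and labels below are invented for the example and have nothing to do with Google’s actual signals.

```python
# Toy self-learning system: an online perceptron whose weights change with
# every observed example, instead of waiting for a scheduled manual update.
def predict(weights, features):
    return 1 if sum(w * f for w, f in zip(weights, features)) > 0 else 0

def update(weights, features, label, lr=0.1):
    """One online-learning step: nudge weights toward the observed label."""
    error = label - predict(weights, features)
    return [w + lr * error * f for w, f in zip(weights, features)]

# Invented features: [clicked_result, long_dwell_time]; label 1 = satisfied.
weights = [0.0, 0.0]
stream = [([1, 1], 1), ([1, 0], 0), ([0, 1], 1), ([1, 1], 1)]
for features, label in stream:
    weights = update(weights, features, label)

print(predict(weights, [0, 1]))  # prints 1: the model has learned to favor dwell time
```

The point of the sketch is the loop: the model’s decision rule after the fourth example is different from the one after the first, with no human-curated update in between.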
The role of artificial intelligence in SEO is outlined below under the respective headings. There is one thing to bear in mind when it comes to SEO: every day, AI gains new knowledge about the quality of a website from user experiences and signals, and applies it to future rankings. Google knows where a user clicks, which links they use, how long they spend on a page, and how likely they are to react to an advertisement. The following points could be helpful when it comes to SEO:
1. User signals are highly relevant: It’s no longer all about clicks. What is now also crucial are things like time spent on a page and, according to some studies, social signals. Four things are truly decisive:
- Time on site: This is the average amount of time visitors spend on a website.
- Bounce rate: This includes both very brief visits to a site as well as visits that only involve one page being viewed.
- Click through rate (CTR): Refers to how often ad banners or sponsored links are clicked on.
- Social signals: Social signals are likes, shares, and comments on a website or a website’s content. They are an important indicator for the popularity of web content.
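A minimal sketch of how the first three signals above might be computed from raw visit logs is shown below. The log format and the bounce threshold are invented for illustration; social signals are omitted because they come from external platforms rather than a site’s own logs.

```python
# Hypothetical visit log: one dict per visit. Field names are invented.
visits = [
    {"seconds_on_site": 120, "pages_viewed": 4, "clicked_ad": True},
    {"seconds_on_site": 5,   "pages_viewed": 1, "clicked_ad": False},
    {"seconds_on_site": 300, "pages_viewed": 7, "clicked_ad": False},
    {"seconds_on_site": 8,   "pages_viewed": 1, "clicked_ad": True},
]

# Time on site: average duration across all visits.
time_on_site = sum(v["seconds_on_site"] for v in visits) / len(visits)

# Bounce rate: very brief visits, or visits that viewed only one page.
bounces = [v for v in visits if v["seconds_on_site"] < 10 or v["pages_viewed"] == 1]
bounce_rate = len(bounces) / len(visits)

# Click-through rate: share of visits in which an ad or sponsored link was clicked.
ctr = sum(v["clicked_ad"] for v in visits) / len(visits)

print(f"time on site: {time_on_site:.1f}s, bounce rate: {bounce_rate:.0%}, CTR: {ctr:.0%}")
```

Real analytics tools compute these figures in essentially this way, just over far larger logs and with more careful definitions of what counts as a bounce or a click.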
2. Semantics versus keywording: RankBrain was originally developed in order to better understand longer and previously unknown search queries. The result is that Google is getting better and better at interpreting searches in everyday language and the intentions behind them. Semantic phrases are thus becoming more and more important to Google, whereas classic keywording is losing significance. Hence, when it comes to achieving a good ranking, it is the quality of a website’s content and its relevance to the user that are becoming decisive.
3. Google recognizes user satisfaction: Google evaluates user signals in order to rate the quality of a website more precisely than search algorithms did prior to RankBrain. For this reason, it makes sense to focus on greater user-friendliness. The key to such improved ‘usability’ lies in comprehensible texts and intelligent link building. The loading speed of a page is also decisive for user satisfaction and the length of their stay on the page. The navigation menu is crucial as well – both users and any AI need to be able to find their way around easily. In short, it is not just flawless content but also technical perfection that is demanded.
4. Promote inter-divisional online marketing: The larger a company is, the more it will invest in its online presence and the bigger its team of online marketers, SEO experts, social media specialists, usability managers, and so on. If you want to be able to react effectively to innovations in the field of artificial intelligence, then all of these parties must be pulling in the same direction.
While the rankings are getting more flexible, the good news is that this hasn’t made too much of an impact on search engine optimization. It has been a long time since pure keywording has been the only standard for successful SEO. In the last few years, the industry has made user satisfaction one of the main focuses; at the end of the day, the target audience should want to visit the website frequently.
What is the role of artificial intelligence when it comes to rankings? The answer: it doesn’t differ all that much from that of classic algorithms. AI-supported algorithms do not necessarily work any differently; however, they are more efficient and precise because they register more of what is actually relevant for internet users. Existing SEO strategies should not be discarded; they should be enhanced and optimized with greater expertise.
One thing that’s left to say is that in 2016, Silicon Valley’s big five (Google, Amazon, Facebook, IBM, and Microsoft) decided to combine their artificial intelligence research. This news might immediately cause some alarm for data-privacy-conscious users, as these companies have access to the majority of global data. But above all, this cooperation is primarily committed to the development of common ethical guidelines for artificial intelligence. The necessity of such guidelines cannot be denied. Without a doubt, developing AI technology in a way that is universally beneficial should be a central focus for our society in the coming years and decades.