Lead Image © lassedesignen, Fotolia.com

The TensorFlow AI framework

Machine Schooling

Article from ADMIN 57/2020
The TensorFlow symbolic math library can help you introduce artificial intelligence, deep learning, and neural networks into your projects.

People working in IT, even those close to the AI field, often know little more about TensorFlow than that it has something to do with artificial intelligence (AI). In fact, TensorFlow is one of the most powerful AI frameworks available. In contrast to some abstract projects from university research labs, it can be used productively today. In this article, I introduce the topic and look at the features TensorFlow offers and the contexts in which it can be used sensibly.

Basics

Wikipedia describes TensorFlow as "a free and open-source software library for dataflow and differentiable programming across a range of tasks" [1]. If you don't happen to be a math genius or a computer scientist, this description makes it difficult to understand what TensorFlow is actually about.

To begin, it makes more sense to look at TensorFlow from a different perspective: What does the program do for developers, and which problems can it help solve? To answer this, you first need a few technical terms: What does machine learning actually mean in a technical sense? What is deep learning, and how does a neural network work? All of these concepts appear regularly in the TensorFlow context, and without them, TensorFlow cannot be understood. If you need an overview before proceeding, please see the "Neural Networks" box.

Neural Networks

Neural networks get their name because they imitate, in a highly simplified form, the neural connections found in brains. In the brain, information is processed by cells that connect with each other depending on experiences and sensations. Machine learning uses a similar, but extremely simplified, model: Artificial neurons are connected to other artificial neurons in a network and then rewire these connections autonomously on the basis of what the network has learned.

Where do these insights come from? If you keep in mind that the workings of the human brain have not yet been conclusively researched, it seems pretty audacious to try to reproduce its functions in software. Knowledge in computers, and therefore also the data from which neural networks learn, is always the result of mathematical calculations. Accordingly, the artificial neurons, as counterparts of their biological models, are ultimately mathematical functions.
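
To make the idea of a neuron as a mathematical object concrete, the following minimal Python sketch models a single artificial neuron as a weighted sum passed through an activation function. The weights, bias, and choice of a sigmoid activation are textbook assumptions for illustration, not details prescribed by TensorFlow.

import math

# A single artificial neuron, modeled as the classic weighted sum of
# its inputs plus a bias, passed through a sigmoid activation function.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example inputs and connection weights (made-up values)
print(neuron([0.5, 0.2], [0.8, -0.4], 0.1))

During training, it is precisely these weights and the bias that get adjusted; the function itself stays the same.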

If you imagine a neural network as a graph, the individual neurons form its nodes and the connections form the edges. In the sense of graph theory, a graph is a structure that represents different objects and their relations to each other. Complex problems, according to the theory, can always be broken down and represented as individual objects and the relationships between them. Ultimately, graph theory thus acts as a tool for modeling the relationships between many factors.

The individual elements within a graph like those used in neural networks are defined by several properties. Each node in a graph can have one or more sources from which it obtains information and one or more destinations (also known as sinks) to which it sends information. The nodes themselves act as information processing units in the neural network by performing calculations. However, calculations only make sense if the nodes have material to process. A neural network therefore always includes a data set that the responsible developer provides to the network as the basis for its work.
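
As a small illustration of how such working material can be handed to a network in practice, the following snippet loads the MNIST handwritten digit set that ships with TensorFlow's Keras API; the choice of data set is purely an example.

import tensorflow as tf

# Load a ready-made example data set: 60,000 training images of
# handwritten digits plus their labels, and 10,000 test images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0            # scale pixel values into the 0-1 range
print(x_train.shape, y_train.shape)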

Neural networks support basically two modes of operation. The simpler variant is limited only to applying the existing model to new input data (i.e., analyzing previously unknown data). In such a scenario, neural networks are more at home in the big data field, where they evaluate data.

The other mode is far more exciting from an IT point of view and constitutes genuine machine learning: the training phase that must precede the application. Here, the network compares its result for each data set with a target value that someone must have defined beforehand. On the basis of this target, the network evaluates the quality of its calculations and re-weights the connections between the individual nodes of the neural network accordingly.

If the task consists, for example, of identifying photos that show dogs from a collection of photos, the network should correctly recognize considerably more dogs in the tenth round than after the first, because it repeatedly becomes aware of misclassifications and corrects them. At the end of the rounds, the neural network is considered trained.

Of note here is that neural networks do not receive explicit instructions about which concrete measures lead to better results. Instead, they independently strengthen those weightings that lead to better results and weaken weightings that cause errors.
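
The following sketch illustrates this training idea in a few lines of TensorFlow: a trivial model produces outputs, compares them with predefined target values, and nudges its weights in the direction that reduces the error. The model, data, and learning rate are made up for illustration.

import tensorflow as tf

x = tf.constant([[0.0], [1.0], [2.0], [3.0]])     # input data
y = tf.constant([[1.0], [3.0], [5.0], [7.0]])     # targets defined beforehand

w = tf.Variable(0.0)                              # weights to be learned
b = tf.Variable(0.0)

for step in range(200):
    with tf.GradientTape() as tape:
        y_pred = w * x + b                        # the model's current guess
        loss = tf.reduce_mean((y_pred - y) ** 2)  # distance from the targets
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(0.1 * dw)                        # re-weight toward better results
    b.assign_sub(0.1 * db)

print(float(w), float(b))                         # approaches 2 and 1

The model is never told that the targets follow the rule y = 2x + 1; it merely strengthens the weighting that reduces its error until that rule emerges on its own.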

Essential Preparation

Building an environment that can construct neural networks and perform the necessary calculations for them reliably is obviously not a trivial task. It would also make little sense for every research institution to build its own completely individual AI implementation, because many AI approaches face similar problems and use comparable methods to reach their goals.

In IT, the same questions arise repeatedly, even in very different projects. The answers are provided by software libraries that implement the required functions once so they can then be integrated into other environments. Developers who use these libraries in their own programs save themselves a huge amount of overhead and ensure a higher degree of standardization, which in turn reduces the cost of developing and maintaining software.

The idea behind TensorFlow is easy to grasp against this background: TensorFlow sees itself as an AI library that researchers and developers can feed with data to develop AI and neural networks. This effectively saves the effort of composing an AI environment for individual use cases.

Data Streams as the Basis

In practical terms, TensorFlow applies a graph model to arrive at mathematical calculations and lets developers build graphs whose data originates from data streams. Each node in a TensorFlow network is a mathematical operation that changes the incoming data in a specific way before it migrates further through the network. The data traveling along the connections between the nodes takes the form of multidimensional arrays, which are referred to as "tensors."
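
A minimal sketch of this dataflow idea, with made-up values: each TensorFlow operation acts as a node that transforms the incoming tensors and passes the result on to the next node.

import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # a 2x2 tensor entering the graph
w = tf.constant([[0.5], [0.25]])            # connection weights as another tensor

h = tf.matmul(x, w)                         # node 1: matrix multiplication
out = tf.nn.relu(h - 1.0)                   # node 2: shift and non-linearity

print(out.numpy())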

The real power of the neural network lies in improving the signal strength with which the streamed data makes its way through the graph so that a specific, predetermined goal is achieved. To do so, it automatically adjusts the weighting of the tensors, which also makes it clear how TensorFlow got its name.

Ultimately, the great strength of TensorFlow is that it abstracts some of the complexity in the AI environment for the developer. If you have a concrete problem to solve, you do not have to develop and deal with the complexities of an algorithm for machine learning. Instead, you can access TensorFlow and combine its ready-made functions with your own input material, which TensorFlow then processes. In fact, TensorFlow is so versatile that you only have to describe a basic task.
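
How little remains for the developer to do can be seen in a short end-to-end sketch that combines ready-made Keras building blocks shipped with TensorFlow with the MNIST digit data as input material. The layer sizes and training settings are arbitrary illustration values, not recommendations from the article.

import tensorflow as tf

# Input material: the MNIST handwritten digits
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Ready-made building blocks: layers, loss function, and optimizer
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # turn images into vectors
    tf.keras.layers.Dense(128, activation='relu'),    # hidden layer of neurons
    tf.keras.layers.Dense(10, activation='softmax'),  # one output per digit class
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)   # the training rounds described above
model.evaluate(x_test, y_test)          # apply the trained model to new data

The network architecture, the loss function, and the optimizer are all picked from TensorFlow's catalog of ready-made components; only the data and a rough description of the task come from the user.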
