Murera Gisa (elgisamur@gmail.com | mgisa@aims.ac.rw)
In this blog post you will discover what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) really mean, how they differ, and where they are applied. Many people keep asking themselves: what is AI? How does it work? And why are these systems called black boxes? This post answers some of those AI-related questions and clarifies how these technologies support decisions and actions in daily human life.
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. It also refers to computing systems that accurately perform tasks normally considered within the realm of human decision making. For example, the images below show humanoids (human-like robots) reading and working with scientific formulas and graphics.
Humanoid Artificial Intelligence products
Furthermore, Wikipedia defines AI as the study of intelligent agents: any software system or device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. These software-driven applications and systems act as highly intelligent agents by incorporating advanced data analytics and Big Data techniques. AI systems leverage this huge knowledge repository to make decisions and take actions that approximate human cognitive functions, including learning and problem solving.
In this era of the fourth industrial revolution, with increasingly capable machines and an explosion of data, AI has become a critical and attractive field of scientific and technological research that is changing and simplifying our lives. For instance, modern AI systems can understand human speech, compete at the highest level in strategic games (such as chess and Go), operate cars autonomously, route content intelligently in delivery networks, run military simulations, and much more!
So what are the crucial requirements for implementing these applications? Here we go:
Designing expert systems equipped with a knowledge base that can acquire, represent, interpret, and justify knowledge to its users; and
Building machines that can identify solutions to complicated problems the way humans do, and implementing those solutions as computer algorithms.
In simple words, AI refers to technology that makes machines think like humans, so that intelligent robots can work the way humans do based on the available information signals (data!). In this regard, Artificial Intelligence, Machine Learning, and Data Science are all closely related to each other.
Similarly, do you want to know how intelligent AI really is, and whether robots will take over human jobs? Have a look at the five AI trends dominating 2020.
Judging by current AI applications worldwide, we can be confident that this technology will, in the near future, demonstrate the immensity of human capacity to build remarkable technologies that make our lives easier in ways our ancestors would have found unbelievable.
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence – the human biological machine intelligence of our civilization – a billion-fold.” (Ray Kurzweil)
As we have noticed above, AI is all about giving machines the capability to simulate human behavior: cognition, reasoning, and action. This behavior is supported by a large interconnected flow of information. Accordingly, a wide range of techniques fall under the AI umbrella, such as linguistics, vision, robotics, planning, and decision science. The following chart presents the main branches of AI in daily life.
In addition to the AI applications indicated on the chart above, let's dig deeper for detailed information on each major sub-field of AI:
Machine Learning (ML) is a field of Artificial Intelligence that enables machines to interpret, process, and investigate data in order to solve real-world problems. ML algorithms are built from mathematical techniques implemented in code to form a complete ML system. The use of ML across domains is expanding as the amount of available data increases. Machine learning offers a wealth of techniques for extracting insightful knowledge from data that can be turned into purposeful products. Additionally, ML enables systems to recognize, classify, and estimate values from a given data set. The image below shows various applications of Machine Learning.
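To make the idea concrete, here is a minimal sketch of that workflow in Python using scikit-learn; the bundled iris dataset and the random forest model are illustrative choices, not a prescription:

```python
# A minimal sketch of the ML workflow described above, using scikit-learn.
# The iris dataset and model choice here are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)          # features and labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)                # the machine "learns" from data

predictions = model.predict(X_test)        # classify unseen samples
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")
```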
Artificial Neural Networks (ANNs) are computing systems inspired by the biological neural networks that constitute animal brains. Neural networks are a series of algorithms that mimic the operations of a human brain to recognize relationships in vast amounts of data. In this sense, an ANN can be seen as an attempt to replicate the human brain by coding neuron-like units into a system or machine. The combination of neural networks and machine learning solves many complex tasks. Visit here for more on the famous relationship between neurology and Artificial Intelligence.
Robotics has emerged as an extremely hot and attractive field of artificial intelligence. It focuses mainly on the design, construction, operation, and use of robots. Robotics is an interdisciplinary field of science and engineering that draws on mechanical engineering, electrical engineering, computer science, and more. Its goal is to design intelligent machines that can help and assist humans in their day-to-day lives and keep everyone safe. Additionally, AI researchers are developing robots that use machine learning to interact at a social level.
Under the umbrella of AI technology, an expert system is a computer system that mimics the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if-then rules rather than as conventional procedural code; a toy sketch of this idea follows. A well-known example is Apple's Siri, a dialog system that attempts to replicate the decision-making capabilities of a human expert.
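The Python sketch below illustrates the if-then style of knowledge representation; the "diagnosis" rules and symptoms are invented purely for illustration and are nothing like a production expert system:

```python
# A toy expert system sketch: domain knowledge encoded as if-then rules
# rather than procedural code. The medical rules below are invented
# purely for illustration.
rules = [
    (lambda f: f["fever"] and f["cough"], "Possible flu: recommend rest and fluids"),
    (lambda f: f["fever"] and not f["cough"], "Possible infection: recommend a lab test"),
    (lambda f: not f["fever"], "No acute symptoms detected"),
]

def infer(facts):
    """Fire the first rule whose condition matches the known facts."""
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return "No rule applies"

print(infer({"fever": True, "cough": True}))   # -> Possible flu: ...
```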
The term fuzzy refers to things that are not clear, or are vague. In the real world we often encounter situations where we cannot determine whether a state is true or false; fuzzy logic provides very valuable flexibility for reasoning in such cases. It lets us account for the inaccuracies and uncertainties of a situation.
In a boolean system, 1.0 represents the absolute truth value and 0.0 represents the absolute false value. In a fuzzy system, by contrast, there is no absolute truth or absolute falsehood: fuzzy logic admits intermediate values that are partially true and partially false.
Additionally, fuzzy logic is a technique for representing and manipulating uncertain information by measuring the degree to which a hypothesis is correct, and it is widely used for reasoning about naturally uncertain concepts. See here for details on fuzzy logic.
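As a small illustration of degrees of truth, here is a hedged Python sketch of a fuzzy membership function; the temperature thresholds are assumptions chosen only for the example:

```python
# A sketch of fuzzy truth values: instead of a boolean "hot"/"not hot",
# a membership function returns a degree of truth between 0.0 and 1.0.
# The temperature thresholds are illustrative assumptions.
def hot_membership(temp_c):
    """Triangular-style membership: 0 below 20 C, 1 above 35 C, graded in between."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 20) / (35 - 20)   # partially true, partially false

for t in (15, 25, 30, 40):
    print(f"{t} C is 'hot' to degree {hot_membership(t):.2f}")
```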
Wikipedia explains Natural Language Processing (NLP) as the sub-field of linguistics, computer science, and Artificial Intelligence concerned with the interactions between computers and human language: specifically, how to program computers to process and analyze large amounts of natural language data through a deep understanding of document contents. This technology involves speech recognition, natural language understanding, natural language generation, text translation, and sentiment analysis.
Put simply, NLP develops technological methods that help us communicate with machines using human languages such as English and French. Examples include spam detection, which looks at the subject line or text of an email and checks whether it is junk. NLP is also used by social networks such as Facebook, Instagram, and Twitter; Twitter, for instance, uses NLP to filter terrorist language from tweets. In addition, e-commerce platforms like Amazon and Alibaba use NLP to interpret user reviews and enhance the user experience.
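Here is a minimal sketch of the spam-detection idea in Python, assuming scikit-learn is available; the four-message corpus is invented for illustration, and real systems train on far more data:

```python
# A hedged sketch of spam detection: classify short texts as spam or not
# using a bag-of-words model. The tiny corpus is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win a free prize now", "meeting at noon tomorrow",
         "free money click here", "project report attached"]
labels = [1, 0, 1, 0]                      # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # words -> count features

classifier = MultinomialNB().fit(X, labels)
print(classifier.predict(vectorizer.transform(["free prize meeting"])))
```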
Nowadays, AI plays an extremely important role in society, as its myriad applications show. Below are AI applications in different sectors:
AI in Health Sector: AI uses computer systems to support and perform clinical diagnoses faster, and sometimes better, than humans do. AI is used to predict numerous diseases, supporting health policy making and health interventions. AI enables robotic surgery and helps with work-flow and administrative tasks, while cognitive surgical robotics collects data from real surgical procedures to improve existing surgical approaches. AI can also be used to predict ICU transfers, improve clinical work-flows, and even pinpoint a patient's risk of hospital-acquired infections.
AI in Social Media: Social media platforms like Facebook, Snapchat, Instagram, and Twitter host billions of user profiles, and AI is what manages all of that user data. For example, Facebook uses advanced machine learning for everything from serving you content to recognizing your face in photos to targeting users with advertising. Instagram (owned by Facebook) uses AI to identify visuals. LinkedIn uses AI to offer job recommendations, suggest people you might like to connect with, and serve specific posts in your feed. Snapchat leverages computer vision, an AI technology, to track your features and overlay filters that move with your face in real time.
AI in Astronomy: In recent years, artificial intelligence has been used increasingly in astronomical research. AI-equipped machines can help the field with data analysis, such as detecting new stars, new extrasolar planets, and even dark matter, and can help us understand how these objects work and where they originate.
AI in Finance: Like other sectors, finance has not been left behind as an AI playground: AI plays a crucial role in risk management, fraud prevention, counterfeit money detection, credit scoring and lending decisions, and more.
AI in Travel: AI is essential in the transportation industry. Travel companies use AI-powered chatbots that can hold human-like interactions with customers and provide better answers. Artificial Intelligence is also used in autonomous buses and self-driving cars for reliable, fast transport.
AI in Agriculture: AI is digitizing agriculture and can be very beneficial for farmers. Precision agriculture uses AI to detect plant diseases, pests, and poor plant nutrition on farms. AI-enabled cameras on drones can capture images of an entire farm and analyze them in near-real time to identify problem areas and potential improvements.
AI in Education: AI chatbots can communicate with students as teaching assistants. In the future, AI will bring much more to education, such as personalized learning.
AI in E-Commerce: Today, AI is changing the way e-commerce stores operate and serve their customers. AI applications can analyze consumer data to predict future purchasing patterns and make product recommendations based on browsing behavior. Artificial Intelligence also helps buyers find related products in a recommended color, size, or brand. AI is becoming ever more in demand in the e-commerce industry.
AI in Gaming: AI is present in almost every field, including games. When you play chess against the computer, the opposing player is controlled by AI.
AI in Army and Security: The U.S. military is already integrating AI systems into combat via a spearhead initiative called Project Maven, which has used AI algorithms to identify insurgent targets in Iraq and Syria. Similar technology will soon transform current security and military capabilities.
Finally, to benefit from artificial intelligence technologies, we need the infrastructure and field experts who really understand AI, machine learning, and deep learning and their various applications. It is imperative to comprehend what they are and how they work toward creating a more technologically advanced society.
When digging deeper into machine learning, you will often come across the metaphor of a “black box”. At the same time, there is a lot of speculation and many confusing definitions around this concept, so it can be hard to comprehend what is really going on.
So let's break down what a black box in machine learning really means, which situations black box models work best for, and what issues come with the concept.
Currently, countless accurate decision-support systems and technologies are designed and built as black boxes: systems that hide their internal logic from the end user. Incredibly, humans are black box models too. In machine learning, a black box refers to an algorithmic function where a user knows the signature of the inputs and outputs but cannot know how it determines the outputs from the inputs. These black box systems exploit sophisticated machine-learning models to predict individual information that may also be sensitive.
Data scientists acknowledge that the inner workings of these self-learning machines, especially deep learning machines, add an additional layer of complexity and opaqueness to machine behavior. Once an ML algorithm is trained on a data set, it can be hard and painful to comprehend why it returns a particular output for a given set of inputs. This is what is described as a black box learning machine. The concept derives from the black box testing technique in software testing, which evaluates the functionality of a software application without looking into its internal structures or workings. This method of testing can be applied to virtually every level of software testing (unit, integration, system, and acceptance).
Furthermore, the fact that machine learning algorithms can act in ways unforeseen by their own designers adds to the complexity facing end users, which is what makes them black box models.
Additionally, as machine learning algorithms, especially deep learning, get smarter, they also become more incomprehensible. Now that we know a little about the concept of a black box, let's find out in which situations both deep learning models and human beings are called black box models.
The black box metaphor applies to human beings as well. We can ask people to explain their actions, which makes human behavior seem quite transparent, yet this is sometimes hard and challenging: given that people sometimes cannot explain their behavior even to themselves, it is fair to say they do not always know the real cause of their actions. This is why humans themselves are black boxes from the point of view of machine learning.
In closed-source software and neural network systems (deep learning), users have no full access to the functionality, because the mathematical back-end of a DL model is too complicated for any human to comprehend. This forces users to monitor only inputs and outputs. Deep learning models tend to be black boxes of this kind because they are highly recursive and their underlying functions are not easy to comprehend. Simply put, DL models are called pragmatic black boxes because the processes between input and output are not transparent at all: the only things a user can observe are what data is entered and what the final decisions are. As a neural network grows in nodes and neurons, the model itself becomes less and less transparent and more complex.
We have seen that when people have insufficient knowledge of how a neural network makes decisions and cannot view its internal workings, they lose confidence in a model they cannot fully control. This lack of trust, in turn, leads to many AI failures.
These issues with AI and ML models can be especially problematic when the algorithms are applied to critically important tasks, so solutions to the black box drawbacks are urgently needed. Here they are (a sketch of the third approach follows the list):
Carefully design the ML system to make it more transparent and let end users analyze why the system takes certain decisions.
Implement systematic best practices: surface the hidden hypotheses, manage ML algorithms precisely, check the compilation, and use open-source algorithms and training data where possible.
Make comprehensive use of external tools to monitor how the ML system works. One instance is ATMSeer, presented by MIT, which lets users see and monitor how an automated machine learning system works.
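As a sketch of that third approach, the snippet below uses scikit-learn's permutation_importance to probe a black-box classifier from the outside: it shuffles one input feature at a time and measures how much the model's accuracy drops. The dataset and model here are illustrative choices:

```python
# A sketch of inspecting a black-box model from the outside.
# permutation_importance estimates how much each input feature
# drives the model's predictions, without opening the model up.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```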
You might ask yourself: what is Deep Learning (DL)? What is a deep neural network, how does it really work, and why should it be used? Below is a simple explanation of the meaning and working principles of Deep Learning models.
Do you use Facebook? Let's say yes! Have you ever uploaded a photo with your friends? If so, you might have noticed how Facebook automatically highlights faces and prompts you to tag friends in the photo. But how does Facebook know which of your friends is in the photo? The answer is that Artificial Intelligence does the job: Facebook uses facial recognition powered by deep neural networks to suggest whom you should tag in the photo.
Deep learning is part of a broader family of machine learning methods based on artificial neural networks, which imitate the way the biological brain processes data and creates patterns for use in decision making. It is also known as deep neural learning or deep neural networks. Deep neural networks are biologically inspired simulations, performed on a computer, that undertake specific tasks such as clustering, classification, regression, and pattern recognition.
Since DL is inspired by the human brain, let's have a look at the similarities and differences between biological neurons (BNNs) and artificial neurons (ANNs):
ANNs: An artificial neuron, also known as a perceptron, is the basic unit of a neural network. It is simply a mathematical function modeled on a biological neuron; it can also be seen as a simple logic gate with binary outputs. An ANN is based on a collection of connected units or nodes. Each connection, like a synapse in a human brain, can transmit a signal to other neurons. The real-valued signal transmitted along a connection (edge) is processed, and the output of each neuron is computed by some non-linear function (the activation function) of the sum of its input signals.
BNNs: Biological neurons are the basic functional units of the nervous system. They generate sharp electrical signals across their cell membrane, roughly one millisecond in duration, called action potentials or spikes. Below are the differences between human and artificial neurons:
The basic idea behind the working principle of a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer system so that it can learn things, recognize patterns, and make decisions in a human-like way. Like a human brain, an artificial neural network is an interconnected group of nodes that transmit input signals to multiple other neurons, where they are processed into meaningful outputs. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the deep neural net accomplish the task, such as recognizing an object in an image.
To find the output of a neuron, one first takes the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron, and adds a bias term to this sum. This weighted sum is sometimes called the activation. It is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents; the final outputs accomplish the task, for example recognizing an object in an image.
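Here is that computation written out as a short Python/NumPy sketch; the input values, weights, and bias are made-up numbers for illustration:

```python
# A sketch of the single-neuron computation just described: weighted sum
# of inputs plus a bias, passed through a nonlinear activation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, 0.3, 0.2])     # e.g. feature values of one sample
weights = np.array([0.4, 0.7, -0.2])   # connection strengths (learned)
bias = 0.1

activation = np.dot(weights, inputs) + bias   # the weighted sum ("activation")
output = sigmoid(activation)                  # nonlinear activation function
print(f"neuron output: {output:.4f}")
```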
Organization of NNs: Neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer; the layer that produces the ultimate result is the output layer; in between them are zero or more hidden layers. Single-layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible: they can be fully connected, with every neuron in one layer connecting to every neuron in the next layer. For more details visit here.
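A brief sketch of this layered organization using Keras (assuming TensorFlow is installed); the layer sizes and activations are illustrative assumptions:

```python
# A sketch of a layered network: an input layer, two hidden layers,
# and an output layer, each fully connected to the next.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(10,)),               # input layer: 10 features
    layers.Dense(16, activation="relu"),    # hidden layer 1
    layers.Dense(8, activation="relu"),     # hidden layer 2
    layers.Dense(1, activation="sigmoid"),  # output layer: one prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```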
Soon, in the next blog post, we will apply a deep neural network model to efficiently analyze and predict customer churn behavior in the banking sector.
Currently, there are many types of neural networks in deep learning, used for different purposes. Here we explain the most used neural network topologies, briefly introduce how they work, and mention some of their applications to real-world challenges.
The perceptron is a single-layer neural network model that contains only input and output layers. How does it work? Since it has no hidden layers, it takes an input and calculates the weighted input for each node, then applies an activation function (most often a sigmoid) for classification purposes. It is applied to classification problems, and in multilayer form to tasks such as encoding databases and monitoring access data.
In a feed-forward (FF) neural network, the nodes never form a cycle. All of the perceptrons are arranged in layers, where the input layer takes in input and the output layer generates output. The hidden layers have no connection with the outer world, which is why they are called hidden. In an FF network, every perceptron in one layer is connected with each node in the next layer, so all the nodes are fully connected. FF networks are widely used in data compression, pattern recognition, computer vision, and more.
Radial Basis Networks (RBNs) are generally used for function approximation problems, offering a faster learning rate and universal approximation compared to other networks. RBNs use a radial basis function (typically a Gaussian) as the activation function; however, when we are dealing with continuous values, an RBN may not be the most suitable model. RBNs are widely applied in classification, time series prediction, system control, and function approximation.
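As a small illustration, here is a NumPy sketch of a Gaussian radial basis activation; the input, center, and width values are assumptions for the example:

```python
# A sketch of the radial basis activation: unlike a sigmoid, a Gaussian
# RBF responds most strongly when the input is close to a learned center.
import numpy as np

def gaussian_rbf(x, center, width=1.0):
    """Activation decays with distance from the neuron's center."""
    return np.exp(-np.sum((x - center) ** 2) / (2 * width ** 2))

x = np.array([1.0, 2.0])
center = np.array([1.5, 2.5])
print(f"RBF activation: {gaussian_rbf(x, center):.4f}")
```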
A deep feed-forward (DFF) network is a feed-forward network with more than one hidden layer. A single hidden layer can be prone to overfitting, so by adding more hidden layers we may (though not in all cases) reduce overfitting and improve generalization. DFF networks are used in data compression, computer vision, financial prediction, and ECG noise filtering.
Recurrent Neural Networks (RNNs) are a variation of feed-forward networks in which each neuron in the hidden layers receives an input with a specific delay in time. RNNs are used whenever current iterations need access to previous information: for instance, to predict the next label in a sequence of cancer diagnoses, we need to know the previous one first. RNNs are used in machine translation, robot control, time series prediction, pattern recognition, and more.
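To show the recurrence concretely, here is a NumPy sketch of a single RNN layer unrolled over a short sequence; the dimensions and random weights are illustrative assumptions:

```python
# A sketch of the recurrence that distinguishes RNNs from feed-forward
# nets: the hidden state at each time step depends on the current input
# and on the previous hidden state (the "delayed" information).
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 3))        # input-to-hidden weights (hidden=4, input=3)
W_h = rng.normal(size=(4, 4))        # hidden-to-hidden (recurrent) weights
b = np.zeros(4)

h = np.zeros(4)                      # initial hidden state
sequence = rng.normal(size=(5, 3))   # 5 time steps, 3 features each

for x_t in sequence:
    h = np.tanh(W_x @ x_t + W_h @ h + b)   # previous state feeds back in
print("final hidden state:", h)
```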
LSTM networks are a type of recurrent neural network capable of learning order dependence in sequence prediction problems. They are a complex area of deep learning whose behavior is required in problem domains like machine translation, speech recognition, and more. LSTM networks introduce a memory cell: an LSTM can process data with memory gaps, making it the deep neural network model to reach for when a plain RNN fails, since it can handle the time delays that trip up RNNs. It is crucially important when you have a large amount of relevant data and want to extract the relevant parts from it. Unlike plain RNNs, LSTMs can remember data from long ago.
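A hedged Keras sketch of an LSTM for sequence prediction (assuming TensorFlow is installed); the sequence length and layer sizes are illustrative assumptions:

```python
# A sketch of an LSTM for sequence prediction: the LSTM layer's memory
# cell lets it carry information across long gaps that a plain RNN
# would forget. Shapes here are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(30, 1)),        # 30 time steps, 1 feature per step
    layers.LSTM(32),                   # memory cell with gated state
    layers.Dense(1),                   # predict the next value in the series
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```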
Gated Recurrent Units (GRUs) are a gating mechanism in recurrent neural networks introduced by Kyunghyun Cho and colleagues in 2014. A GRU is similar to an LSTM with a forget gate but has fewer parameters, as it lacks an output gate. GRUs have three gates and do not maintain an internal cell state: the update gate determines how much past information to pass to the future, the reset gate determines how much past knowledge to forget, and the current memory gate is a sub-part of the reset gate. GRUs are extensively applied in polyphonic music modeling, speech signal modeling, and natural language processing.
The autoencoder (AE) is a type of deep neural network used to learn efficient data codings in an unsupervised manner. It learns how to efficiently compress and encode data, then learns how to reconstruct the data from the reduced encoded representation back into a representation as close to the original input as possible. In an autoencoder, the number of hidden cells is smaller than the number of input cells, while the number of input cells equals the number of output cells. It is typically used for dimensionality reduction, by training the network to ignore signal “noise”. An AE consists of three components: encoder, code, and decoder. The encoder compresses the input and produces the code; the decoder then reconstructs the input using only this code. AEs are applied in classification, clustering, and feature compression.
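Here is a minimal Keras sketch of that encoder-code-decoder structure; the 784-dimensional input (a flattened 28x28 image) and the 32-dimensional code are illustrative assumptions:

```python
# A sketch of an autoencoder: the network is trained to reproduce its
# own input through a narrower hidden "code" layer.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))                       # e.g. a flattened 28x28 image
code = layers.Dense(32, activation="relu")(inputs)       # encoder: compress to 32 dims
outputs = layers.Dense(784, activation="sigmoid")(code)  # decoder: reconstruct input

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# Training pairs each input with itself: autoencoder.fit(X, X, epochs=10)
```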
The Variational Autoencoder (VAE) was first introduced by Diederik Kingma and Max Welling in 2013. It uses a probabilistic approach for describing observations in latent space: rather than building an encoder that outputs a single value for each latent attribute, the encoder describes a probability distribution over each attribute. VAEs are widely used for interpolating between sentences and for automatic image generation.
A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. For example, the set of possible states could be financial stock performances, weather conditions, and much more. Markov chains are applied in statistics, speech recognition, and communication.
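As a concrete illustration, here is a NumPy sketch of a two-state weather Markov chain; the transition probabilities are invented for the example:

```python
# A sketch of a two-state weather Markov chain: the next state depends
# only on the current one. The transition probabilities are invented.
import numpy as np

states = ["sunny", "rainy"]
P = np.array([[0.8, 0.2],    # sunny -> sunny 0.8, sunny -> rainy 0.2
              [0.4, 0.6]])   # rainy -> sunny 0.4, rainy -> rainy 0.6

rng = np.random.default_rng(0)
state = 0                    # start sunny
walk = []
for _ in range(10):
    state = rng.choice(2, p=P[state])   # sample the next state
    walk.append(states[state])
print(" -> ".join(walk))
```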
A Boltzmann machine is a type of stochastic recurrent neural network. A Boltzmann machine learns a probability distribution from an original dataset and uses it to make inferences about unseen data.
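For a hands-on flavor, here is a minimal sketch using scikit-learn's BernoulliRBM, a restricted Boltzmann machine variant; the tiny binary dataset is invented for illustration:

```python
# A sketch of distribution learning with a restricted Boltzmann machine.
# The binary dataset below is invented purely for illustration.
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])            # binary component states

rbm = BernoulliRBM(n_components=2, learning_rate=0.1, n_iter=100, random_state=0)
rbm.fit(X)                 # learn a probability distribution over the inputs
print(rbm.transform(X))    # hidden-unit activations for each sample
```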
Here is an example scenario in which a Boltzmann machine could be the best model to use:
Suppose we work in a nuclear power plant, where safety must be the number one priority. Our job is to ensure that all the components in the power plant are safe to use. Each component has an associated state; using booleans for simplicity, 1 means usable and 0 means unusable. However, there will also be some components whose states are impossible for us to measure regularly.
Furthermore, we have no data telling us when the power plant will blow up if a hidden component stops functioning. So, in that case, we build a model that notices when a component changes its state; when it does, we are notified to check on that component and ensure the safety of the power plant.
Convolutional Neural Networks (CNNs, or ConvNets) are a class of deep neural networks widely applied to analyzing visual imagery: for example, classifying images by naming and identifying what they depict, clustering images by similarity (photo search), and performing object recognition within scenes. CNNs are not limited to image recognition; they are widely applied in video recognition, natural language processing, brain-computer interfaces, and financial and economic time series prediction and forecasting. Additionally, CNNs are regularized versions of multilayer perceptrons: whereas a multilayer perceptron is fully connected (each neuron in one layer is connected to all neurons in the next layer), a CNN exploits local connectivity and shared weights, which acts as regularization. CNNs are widely applied in identifying faces, street signs, and tumors, in image recognition, and in anomaly detection.
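A short Keras sketch of a small CNN for image classification (assuming TensorFlow is installed); the input shape, filter counts, and class count are illustrative assumptions:

```python
# A sketch of a small CNN: convolution layers share weights over local
# patches instead of connecting every pixel to every neuron.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                       # e.g. grayscale 28x28 images
    layers.Conv2D(16, kernel_size=3, activation="relu"),  # local feature detectors
    layers.MaxPooling2D(pool_size=2),                     # downsample feature maps
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),               # 10-class prediction
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```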
A Neural Turing Machine (NTM) is a recurrent neural network model first published by Alex Graves and colleagues in 2014. NTMs combine the fuzzy pattern-matching capabilities of neural networks with the algorithmic power of programmable computers. The NTM architecture contains two primary components: a neural network controller and a memory bank. How does it work? The controller interacts with the external world via input and output vectors, and performs selective read and write (R/W) operations by interacting with the memory matrix. A Turing machine is computationally equivalent to a modern computer, so NTMs extend the capabilities of standard neural networks by interacting with external memory. Visit here for details on Neural Turing Machines.
This type of deep neural network is applied above all in robot design, operation, and action, and in efforts to build an artificial human brain.
Hopefully, you enjoyed this summary of the main types of neural networks. If you are eager to know more about the other types, just visit the tutorial on the main types of neural networks and their applications to real-world challenges.
NOTE: The original referenced graph is attributed to Stefan Leijnen and Fjodor van Veen and can be found at ResearchGate/Neural Network Zoo.
Why should Deep Learning or neural networks be used? The simple answer is:
DL is the best approach for complex tasks where feature engineering is highly difficult, e.g. processing multimedia data (images, video, audio) and analyzing high-frequency time series data.
DL outperforms other models, including white box models and dynamical systems models, in applications like weather forecasting, genomics, and stock prediction and trading.
DL is worth using when the comparative cost of failure is relatively low while the payoff of success is high.
In this blog post, we have learned about the meaning of AI and its main branches, as well as the emerging technologies in Artificial Intelligence and Machine Learning that are transforming and simplifying human lives. We have also covered the meaning of the black box metaphor in machine learning, enumerated the issues related to black boxes, and discussed how to solve them in order to benefit from this paradigm. The black box concept turns out to be an interesting topic in the field of artificial intelligence in general, and in machine learning systems in particular.
Experts, analysts, and researchers in AI are continuously attempting to build software systems (i.e. intelligent agents) for distinct applications like self-driving cars, natural language processing, and speech recognition. However, today's intelligent agents have revealed limitations in areas like the military, space science, medicine, neural networks, and geology. It can be expected that in the near future, with extensive research and advancement in technology and AI, these systems will move beyond today's machinery and the hardships of the workplace by creating a human-machine ecosystem: for example, robots as doctors in hospitals, professors in classrooms, drivers on buses, farmers on farms, and so on.
Furthermore, consider the idea of transhumanism, advanced by the Swedish-born philosopher Nick Bostrom, who argues that through extensive research in AI, a day will come when humans and intelligent machines merge into cyborgs or cybernetic organisms that are more capable and powerful than either alone. In this vein, AI experts anticipate that intelligent agents will be able to do anything humans can, only better. That is a questionable hypothesis, but they will surely surpass and outshine humans in particular domains; an early example was a chess computer beating the world chess champion.
Watson, J. D., & Crick, F. H. C. (1953). Molecular structure of nucleic acids: A structure for deoxyribose nucleic acid. Nature, 171, 737-738.
Poole, D., Mackworth, A., & Goebel, R. (1998). Computational Intelligence: A Logical Approach. Oxford University Press. These authors use the term “computational intelligence” as a synonym for artificial intelligence, and provide the definition used in this article.
“ACM Computing Classification System: Artificial intelligence”. ACM. 1998. Archived from the original on 12 October 2007. Retrieved 30 August 2007.
Bostrom, Nick (2005). “A history of transhumanist thought”. Journal of Evolution and Technology, pp. 2-21.
Nagrath, I. D., & Gopal, M. (1994). Control Systems Engineering. New Age International Publications.
“Fuzzy Logic and Neural Networks - Practical Tools for Process Management”, PC, May/June 1994, p. 17.
Harrison, J., Izzetoglu, K., Ayaz, H., Willems, B., Hah, S., Ahlstrom, U., et al. (2014). Cognitive workload and learning assessment during the implementation of a next-generation air traffic control technology using functional near-infrared spectroscopy. IEEE Transactions on Human-Machine Systems, 44, 429-440.
Hinton, G. E., & Sejnowski, T. J. (1986). Learning and relearning in Boltzmann machines. Parallel Distributed Processing, vol. 1.
Hopefully, you have enjoyed reading this technological blog post and gained a lot in the field of AI. Next time we will apply Deep Neural Network (black-box) models and compare them with Machine Learning (white-box) models; all will be trained on the same data set and assessed on performance and running complexity. Keep enjoying!
About Murera Gisa
Murera Gisa is a Data Scientist and Economist. His fields of practice include data analytics and machine learning. He enjoys making various types of data speak for themselves and communicating them to diverse audiences.
Connect with Murera Gisa