Solving Intelligence  -A personal view on A.I

A few years ago, I thought A.I. was just a dream, an idea not yet realized. But after seeing Iron Man and The Avengers, I became interested in the character of J.A.R.V.I.S., an AI Tony Stark created to help with his inventions and to manage his personal life and his business. In light of this, I decided to try something new. I've been coding in various languages for a while now, and I've come to realize that there are many beautiful things a computer can be made to do, even creating and rewriting its own source code.

But the real question is: how close are we to achieving this feat of building programs or expert systems capable of such ingenuity? The truth is we are quite close, very close. Several institutions have been founded solely for the purpose of building AI, and there have been breakthroughs. To recount the progress of recent years:

In early 2010, DeepMind was founded in London by Demis Hassabis, Shane Legg and Mustafa Suleyman. The venture was backed by some of the most iconic tech entrepreneurs and investors of the past decade before being acquired by Google in early 2014, in Google's largest European acquisition to date. The company is now powered by Google (yes, that Google, the internet giant).
Algorithms created by the company are capable of learning for themselves directly from raw experience or data, and are general in that they can perform well across a wide variety of tasks straight out of the box. Their world-class team consists of many renowned experts in their respective fields, including but not limited to deep neural networks, reinforcement learning and systems-neuroscience-inspired models.

In early March this year, DeepMind's computer program AlphaGo played against the South Korean professional Go player Lee Se-dol, who holds the rank of 9 dan. (Just so you know, Go is a strategic board game that originated in China; it is often compared to chess, though the play is nothing like it.) AlphaGo won the series 4-1, proving the program's tenacity and efficiency.

Google has gone on to acquire more artificial intelligence startups, including DarkBlue Labs and Vision Factory.

So all of this kind of prompts the question:
how big is the field of artificial intelligence?

According to a variety of metrics, the amount of AI research being done appears to be about 10% of the amount of computer science (CS) research being done. The metrics used, however, mostly capture research quantity rather than research quality, and thus may be a weak proxy for measuring how many quality-adjusted research years (QARYs) have been invested. That said, the fact that roughly 10% of CS research prizes are awarded for AI work may indicate that research quality is similar in CS and AI. The various fields associated with artificial intelligence include:

  • Computer science
  • Artificial intelligence
  • Natural language and speech
  • Artificial Neural Networks
  • Machine learning and pattern recognition
  • Computer vision

Broken down, these terms each have specific meanings: some we know, others we might not.

Machine learning is a subfield of computer science (more particularly soft computing) that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". This is what makes artificial intelligence machines capable of learning over time, by watching and learning from routines and experiences, and possibly a little anticipation. Machine learning is closely related to (and often overlaps with) computational statistics, a discipline which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is unfeasible.
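Samuel's definition can be made concrete in a few lines. The sketch below (plain Python, no libraries; the data and learning rate are made up for illustration) is never told the rule y = 2x + 1; it recovers the slope and intercept from examples alone, by gradient descent on the squared error:

```python
# Examples of an unknown rule (here secretly y = 2x + 1).
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # model parameters: start with no knowledge of the rule
lr = 0.01         # learning rate

for _ in range(2000):
    # Gradients of the mean squared error over all examples.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to the hidden rule's 2 and 1
```

Nothing in the loop mentions the rule itself; the program was not explicitly programmed with it, which is exactly Samuel's point.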

Artificial neural networks (ANNs) are a family of models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) which are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. An artificial neural network is typically specified using three things:

  • Architecture specifies what variables are involved in the network and their topological relationships. For example, the variables involved in a neural network might be the weights of the connections between the neurons, along with the activities of the neurons.
  • Activity Rule Most neural network models have short time-scale dynamics: local rules define how the activities of the neurons change in response to each other. Typically the activity rule depends on the weights (the parameters) in the network.
  • Learning Rule The learning rule specifies the way in which the neural network's weights change with time. This learning is usually viewed as taking place on a longer time scale than the time scale of the dynamics under the activity rule. Usually the learning rule will depend on the activities of the neurons. It may also depend on the target values supplied by a teacher and on the current values of the weights.
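The three ingredients above can be seen in miniature in a single artificial neuron. The sketch below is an illustrative toy, not any particular production network: a two-input sigmoid neuron trained on the logical OR function, with the architecture, activity rule and learning rule each marked in comments:

```python
import math
import random

random.seed(0)

# Architecture: one neuron with two inputs -> two weights and a bias.
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

# Activity rule: how the neuron's output responds to its inputs,
# given the current weights (a sigmoid of the weighted sum).
def activity(x):
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# Target values supplied by a "teacher": the logical OR function.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Learning rule: on a slower time scale, nudge the weights by the gap
# between the neuron's activity and the teacher's target.
lr = 0.5
for _ in range(5000):
    for x, target in examples:
        out = activity(x)
        grad = (out - target) * out * (1 - out)  # sigmoid derivative
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

print([round(activity(x)) for x, _ in examples])  # [0, 1, 1, 1]
```

After training, the neuron's activity reproduces OR, even though only the learning rule, never the function itself, was written down.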

Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. A theme in the development of this field has been to duplicate the abilities of human vision by electronically perceiving and understanding an image. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for vision perception.
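As a toy illustration of "disentangling symbolic information from image data", the sketch below uses a made-up 3x6 grayscale image and a simple difference filter (both invented for this example) to turn raw pixel values into a single symbolic fact: the column where an edge occurs.

```python
# A tiny grayscale "image" with a vertical edge between columns 2 and 3.
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

# Horizontal gradient filter: difference between neighbouring pixels.
def horizontal_gradient(img):
    return [[row[c + 1] - row[c] for c in range(len(row) - 1)]
            for row in img]

grad = horizontal_gradient(image)

# The column with the strongest response marks the edge: numerical
# pixel data has become the symbolic fact "edge at column 2".
edge_column = max(range(len(grad[0])), key=lambda c: abs(grad[0][c]))
print(edge_column)  # 2
```

Real computer vision systems stack many such steps (and learn the filters rather than hand-writing them), but the pixels-to-symbols movement is the same.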

The development of artificial intelligence can be considered important, even necessary, to the development of technology and of life itself.
But how can the lowly coder looking to make his own contribution to the open-source community, or the average developer looking to build an AI of his own, achieve this feat of computer programming and computing neurology? (Yes, I coined that; it means the study of the supposed mental activity of an AI.) Well, for starters, one might consider speech-to-text translation and voice recognition. All over the Internet there are several STTs (speech-to-text translation APIs). Several of these projects are hosted on GitHub; they include:

Most notable is Google's speech recognition (not free).

pocketsphinx.js, a speech recognition library written entirely in JavaScript and running in the web browser.

Julius, a high-performance, small-footprint, large-vocabulary continuous speech recognition (LVCSR) decoder for speech-related researchers and developers.
These APIs transform speech input into text, which the developer's programs can take as input, analyse and use in operations specific to their nature.
The second part would be text-to-speech translation, which requires TTS APIs (you rightly guessed: text-to-speech translators). The entire process can be considered pretty basic, but it's a good start for any developer looking to get into A.I. development.
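The round trip just described (speech in, text processing, speech out) can be sketched as a pipeline. In the sketch below, transcribe() and synthesize() are hypothetical stubs standing in for a real STT engine (such as the ones listed above) and a real TTS engine; only handle_command(), the developer's own logic, does any work. The audio format and function names are invented for illustration.

```python
def transcribe(audio):
    # Stub: a real STT engine (e.g. PocketSphinx, Julius) would decode
    # the audio signal here. We pretend the audio carries its transcript.
    return audio["spoken_text"]

def handle_command(text):
    # The developer's own logic: analyse the text and act on it.
    if text.lower().startswith("what time"):
        return "It is noon."
    return "Sorry, I did not understand."

def synthesize(text):
    # Stub: a real TTS engine would produce an audio waveform here.
    return f"<audio: {text}>"

def assistant(audio):
    # STT -> developer's logic -> TTS, the basic loop of a J.A.R.V.I.S.
    return synthesize(handle_command(transcribe(audio)))

print(assistant({"spoken_text": "What time is it?"}))
# <audio: It is noon.>
```

Swapping the stubs for real STT and TTS calls leaves the control flow unchanged, which is why this skeleton is a reasonable first project.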
Until next time……
