I am sure the term Artificial Intelligence (AI) fascinates a lot of people. And if someone says he is a practitioner in this area, or that he has created algorithms/models that can make predictions with a certain level of accuracy, self-learn and auto-correct, or recognize people, that in turn fascinates those around him even more.

Artificial Intelligence focuses on the theory and development of computer systems and programs/algorithms that can perform tasks normally done by humans (tasks which earlier required human intelligence/inputs). A few examples are visual perception, speech recognition, decision-making and language translation.

Today, organizations and individuals (including me) are experimenting a lot with AI technologies in one way or the other – be it developing or using predictive models on datasets to forecast stock prices, weather patterns or crops, building neural networks, face recognition technologies and self-aware/self-driving cars, or improving the disease diagnosis process. It is happening all around us (and across almost all industry verticals) at a very fast pace. Many believe that with the onset of AI technologies a lot of jobs will be lost, unemployment rates will rise, artificially intelligent machines will take over and people will become redundant. Personally speaking, and in my view, this is probably a myth. In fact, I feel that AI and other new-age technologies will create newer jobs, giving individuals an opportunity to grow and to do their jobs better than they were used to. It could also mean breaking the monotony of old ways of working and doing a job in an altogether new way.

Someone rightly said – “Change and evolution are the only constant things in the world”.

In the unfortunate situation of a job loss due to AI/automation, one should in fact contemplate the following question –

“Is it really AI that has taken my job, or is it my attitude of not adapting and changing with time that is responsible for this?”

Long story short – the key here is to recognize that there is a change happening around us, and one should try to adapt to it (as much as possible) and go with the flow. We have all changed and evolved from apes into human beings, and it is only natural to evolve further into something else. We should remember (and it is good to be somewhat paranoid in this case) that a human being, and an employee too, might become extinct if he is not able to adapt with time to the changing environment and conditions around him. So, with all this change happening around us, and having highlighted the need for humans to change, comes the next important question (restricting myself to technology and the topic of AI):

Coming from a traditional technology background –

  • What should individuals do to adapt themselves to be better prepared for this change?
  • How can an individual who comes from a traditional technology/business background change himself, enhance his learning and maybe one day call himself an AI practitioner, even develop algorithms and neural network designs/models of his own, and not become paranoid about AI or losing his job to it?

Through this article, I aim to share my personal viewpoints on how I have adapted myself to this change. I can summarize my AI journey so far in 7 steps. I believe these 7 steps have helped me immensely in beginning my AI journey and have further motivated me to continue on it, so I have taken the initiative to pen them down to help other beginners chart their own self-learning paths. I originally did not have these steps in mind for this journey – they happened to me (over the years) as I embarked on my learning process.

1. The Instinct Of "Getting Overwhelmed"

Every individual appreciates a little bit of attention, one way or the other, during his lifetime. With the advent of social networking, sharing information and thus getting attention has become a lot easier. This is also one of the primary reasons social networking sites clock phenomenal double-digit growth in users and business. Using social networks, individuals highlight their achievements, their successes, their agonies and their knowledge. By highlighting all this, they aim to get that little bit of extra attention from their peers, friends, family and well-wishers.

Coming back to the topic of Artificial Intelligence – in the process highlighted above, and to showcase their knowledge, understanding or achievement on a subject (in the professional world or otherwise), an individual at times uses (maybe not intentionally) difficult terms and jargon to explain a topic. Now, there might be people who follow what he is saying to the dot. At the same time (and I know this for sure), there would be masses who do not follow anything at all. It is exactly at this point (and it is only human) that those masses, who are not following anything, become overwhelmed and shut themselves out when they see or hear people using those esoteric terms. In the context of Artificial Intelligence and Data Science – supervised/unsupervised learning, neural networks, backpropagation, decision trees, self-learning, reinforcement learning, deep learning, feed-forward mechanisms, etc. are some of the terms fitting this condition, terms which most of us would try to skirt and, under normal circumstances, not follow at all when we hear people conversing about them. This brings me to the first and foremost point to keep in mind – a lot of us (including myself) used to get overwhelmed just by the mention of the above terms. These terms initially sounded so complicated (and high up) that, just by hearing them, the natural human instinct used to take over and say –

"This is too tough for me" OR "I need to have done advanced studies; maybe a Doctorate or Masters to even understand this" OR "This requires extensive mathematics, statistics, etc (true maybe in some cases but generally speaking - "Its Manageable" - something which I will elaborate a bit more on going forward").

Lesson 1 – STOP being overwhelmed

AI has been around for decades (the field itself dates back to the 1950s) – it is just now that people have started conversing so much about it in the mainstream and in daily life. Remember the movie “Terminator” (1984), where Arnold Schwarzenegger played a cyborg – a perfect combination of human and machine; a lethal killing machine; one so powerful that it could recognize its subjects, learn on its own, plan and even take decisions to kill humans to avoid a probable conflict with them at a future date. We have actually experienced AI and studied the basic elements which constitute its foundations (at school and college) all along, one way or the other. So, for example – when we did linear algebra from Grades 7 to 9, studied matrices/determinants, statistics and areas under the curve from Grade 10 onwards, or studied vectors/scalars and differentiation/integration from Grade 11 onwards and then, in our advanced college degrees, probability, regression, transforms, state machines, etc. – we had already touched upon parts of Machine Learning, Data Science and AI. If you have done all this and passed with flying colors then...

Why get Overwhelmed NOW!!!

Take the First step by going through the exercise below to revise your key Probability Concepts

Exercise – Probability Overview (Key Concepts Review)
Probability theory provides a framework for reasoning about the likelihood of events.
Important Terms
Experiment: a procedure that yields one of a possible set of outcomes, e.g. repeatedly tossing a die or coin
Sample Space S: the set of possible outcomes of an experiment, e.g. if tossing a die, S = {1, 2, 3, 4, 5, 6}
Event E: a set of outcomes of an experiment, e.g. the event that a roll is 5, or the event that the sum of 2 rolls is 7
Probability of an Outcome s, or P(s): a number that satisfies 2 properties
o   for each outcome s, 0 ≤ P(s) ≤ 1
o   ∑_{s ∈ S} P(s) = 1
Probability of Event E: the sum of the probabilities of the outcomes making up the event:
o   P(E) = ∑_{s ∈ E} P(s)
Random Variable V: a numerical function on the outcomes of a probability space
Expected Value of Random Variable V: E(V) = ∑_{s ∈ S} P(s) · V(s)
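
To make these definitions concrete, here is a minimal sketch in Python (assuming only a standard Python installation) that computes P(E) and E(V) for a fair six-sided die exactly as defined above; the event chosen is just an illustrative example.

```python
# A minimal sketch of the probability definitions above, using a fair six-sided die.
# The sample space, probabilities and random variable here are illustrative choices.

from fractions import Fraction

# Sample space S for a single die roll
S = [1, 2, 3, 4, 5, 6]

# P(s): each outcome of a fair die is equally likely, and the probabilities sum to 1
P = {s: Fraction(1, 6) for s in S}
assert sum(P.values()) == 1

# Event E: "the roll is at least 5" -> P(E) = sum of P(s) for s in E
E = {s for s in S if s >= 5}
p_E = sum(P[s] for s in E)
print("P(roll >= 5) =", p_E)          # 1/3

# Random variable V(s): the face value itself
# Expected value E(V) = sum over s in S of P(s) * V(s)
expected_V = sum(P[s] * s for s in S)
print("E(V) =", expected_V)           # 7/2
```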

Liking it so far? Just remember – this is all doable.

[Figure: Algorithm output – a prediction strategy for stock price movement against market returns]

2. What Is AI & What Should I Keep In Mind As A Beginner In This Field?

In the simplest terms, AI is a branch of science, and its real beauty lies in the simplicity with which things get done within it. In fact, the branch was created to make a machine perform (by means of automation and self-learning) the repeatable tasks a human being was performing (and which were hence prone to human error). The machine performs the task to improve accuracy and minimize errors, and then re-learns from its own actions to perform even better and excel.

As an example – from our childhood till now, whatever we see and the things that we do or have done – be it learning or recognizing alphabets, digits, words, sentences and grammar, raising our hands, moving our legs, or thinking (the amazing thing being that we learnt all this naturally, without even realizing it, and we perform it all so effortlessly) – are in one way or another examples of what AI tries to replicate, and today's machines are capable of performing much of this, and maybe more.
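
To make this concrete, below is a small, hedged sketch of a machine "learning" to recognize handwritten digits. It assumes Python with scikit-learn installed and is purely illustrative – it is not a model referenced elsewhere in this article.

```python
# A small sketch of "a machine learning to recognize digits", assuming scikit-learn
# is installed. Purely illustrative; not tied to any specific project in the article.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                      # 8x8 images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)   # a simple, descriptive baseline model
model.fit(X_train, y_train)                 # "learning" from labelled examples

predictions = model.predict(X_test)
print("Digit recognition accuracy:", accuracy_score(y_test, predictions))
```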

Lesson 2 - Be Sensitive To Things Happening Around You – Ask How’s & Why’s

To do well as a beginner in the field of AI, be sensitive to things around you and always try to cultivate a mind that constantly asks questions:

  • Why is this happening?
  • How is this happening?
  • When you see a bird, an airplane, a car or a ship with your eyes – how many of you have asked yourself: how am I able to do this? What logic is the human body (the eyes and brain in this case) using to differentiate between them?
So, it is these constant "how's and why's" that will keep you excited and help you begin your journey.

3. Self-Learn Using The Internet & Get Enrolled In An Online Course

These days, with advancements in technology, data prices heading southwards and the internet becoming available to almost every individual on the planet, it is very easy to source information. There are so many blog posts and websites dedicated solely to Artificial Intelligence, Data Science, etc. One can go through all this information on the internet and self-learn a lot. A lot of renowned universities and institutions have also published their research papers across various subjects on the internet, and they have started offering online courses on the same topics that are taught in person at the universities.

The online mode is something that has completely transformed the way learning happens these days. Through it, acquiring knowledge on a topic has become extremely easy, convenient and cost-effective for an individual. Until a few years back, courses on Artificial Intelligence and Data Science (offered by universities as in-person courses on their campuses) were extremely expensive and probably beyond the reach of a lot of us. These days, through the online mode, the same courses and content are offered at discounted rates, making education and knowledge somewhat affordable for everyone. Add to it the convenience of learning something new at home and in familiar surroundings. The only thing needed at an individual level is the craving to learn a topic, identify the relevant course material for it, and then dedicate some time to learning from the internet or getting enrolled in a course to acquire the knowledge.

Lesson 3 – Self Learn, Get Enrolled

Sites like Coursera, Google, Amazon, etc. have some amazing information freely available on this subject, and they also offer online courses (at heavily discounted rates) from some of the top institutions worldwide. An individual just has to get enrolled in a particular course and learn.

Some good sites for crash courses:
1. Google's Machine Learning Crash Course: (developers.google.com/machine-learning/crash-course/)
2. Introduction to Statistical Learning: (www-bcf.usc.edu/~gareth/ISL/)
3. Coursera, etc.

4. AI - Use Case Driven Industry

Not everything (literally speaking) can be AI-(tized) (if that is at all a word). This is a very use-case-driven industry, and it does not make sense to AI-tize anything and everything around you. Remember – what comes so naturally to a human being and the human body (for example, understanding speech or differentiating a bird from a plane) takes a humongous amount of time, effort and, naturally, cost to replicate and build into an algorithm or a piece of code. So do your research and identify only those use cases which make sense to be AI-tized (and give real value), not everything under the sun.

For example – an algorithm or a neural network that is designed to perform a specific task/activity will only perform that task with precision. It becomes aware of its surroundings, learns from them and does the job day in and day out without getting tired, improving its accuracy and efficiency over time through the principles of self-learning. As designers of AI models, we humans have to be very clear and precise in first identifying the use case, defining our requirements or asks from the model, and knowing in advance the outputs expected from it. In the absence of this, it is entirely possible for a model to give erroneous, undesired results and go absolutely haywire – not to mention the escalating costs and expectation mismatches.

Lesson 4 – Choose Your Use Case Wisely

AI is a very use-case-driven industry. We need to be absolutely clear, right at the beginning, about the objective our models are required to achieve, and then plan backwards to create the algorithms/models/products.

Exercise – Modelling Taxonomy Review
There are many different types of models. It is important to understand the trade-offs and when to use a certain type of model.
Parametric vs. Non-Parametric
o   Parametric: models that first make an assumption about the functional form, or shape, of f (e.g. linear) and then fit the model. This reduces estimating f to just estimating a set of parameters, but if the assumption is wrong, it will lead to bad results.
o   Non-Parametric: models that do not make any assumptions about f, which allows them to fit a wider range of shapes but may lead to overfitting.
Supervised vs. Unsupervised
o   Supervised: models that fit input variables x_i = (x_1, x_2, ..., x_n) to known output variables y_i = (y_1, y_2, ..., y_n)
o   Unsupervised: models that take in input variables x_i = (x_1, x_2, ..., x_n) but have no associated output to supervise the training. The goal is to understand relationships between the variables or observations.
Blackbox vs. Descriptive
o   Blackbox: models that make decisions, but we do not know what happens "under the hood", e.g. deep learning, neural networks
o   Descriptive: models that provide insight into why they make their decisions, e.g. linear regression, decision trees
First-Principle vs. Data-Driven
o   First-Principle: models based on a prior belief of how the system under investigation works, incorporating domain knowledge (ad hoc)
o   Data-Driven: models based on observed correlations between input and output variables
Deterministic vs. Stochastic
o   Deterministic: models that produce a single "prediction", e.g. yes or no, true or false
o   Stochastic: models that produce probability distributions over possible events
Flat vs. Hierarchical
o   Flat: models that solve a problem on a single level, with no notion of sub-problems
o   Hierarchical: models that solve several different nested sub-problems
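
As a small, hedged illustration of the first trade-off above, the sketch below fits a parametric model (linear regression) and a non-parametric model (k-nearest neighbours) to the same synthetic, non-linear data; it assumes numpy and scikit-learn are available, and the data is made up purely for illustration.

```python
# A hedged illustration of the parametric vs. non-parametric trade-off described
# above, assuming scikit-learn and numpy are installed. The data here is synthetic.

import numpy as np
from sklearn.linear_model import LinearRegression      # parametric: assumes f is linear
from sklearn.neighbors import KNeighborsRegressor      # non-parametric: no fixed form for f

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.2, size=200)   # the true f is non-linear

split = 150
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

linear = LinearRegression().fit(X_train, y_train)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_train, y_train)

# The linear model's assumption about f is wrong here, so it scores poorly;
# the non-parametric model fits the wider range of shapes (but could overfit).
print("Linear regression R^2:", linear.score(X_test, y_test))
print("k-NN regression R^2:  ", knn.score(X_test, y_test))
```

The point is not that non-parametric models are better, but that the choice depends on how much you trust your assumption about the shape of f.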

5. What Do I Need To Do To Start My Journey In This Field?

Now comes the most important question – I come from a traditional technology background; what should I do to get into this field?

Academically speaking, and in my opinion, I strongly believe our education system needs a revamp. We are taught more theory than practice, and that needs to undergo a radical shift. To all students and their guardians – we need to cultivate a culture of "Why this?", "How do I do this?" and "How do I apply this?" instead of "Learn this", "His grades are better" and "No marks = no good jobs".

We need to move away from all this. 

Lesson 5 – Brush Up Forgotten Skills/Pre-Requisites To Kick-Start Your AI Journey

a. As a pre-requisite, brush up on your fundamentals and concepts of mathematics (especially vectors, scalars, matrices, determinants, linear algebra, etc.) and statistics (p-values, theorems concerning areas under the curve, etc.) before you start your journey.

b. Pick up a programming language (it could be anything – C++, Java, Python, etc.) and make a conscious attempt to start coding. When you code, you can see things in action: models producing results in front of you, from predicting values, differentiating between objects or recognizing individuals to more advanced behaviour like self-learning and self-correction. It is a completely different feeling when you see an algorithm produce an expected output, learn and differentiate (see the short sketch after this list).

c. Since AI is a very use-case-driven industry, it is important to identify an industry or a process and acquire all the knowledge you can about the domain you want to experiment with. An example could be the supply chain process. It is a very big process and comprises several sub-processes. So first try to identify one of the sub-processes, plot it from start to finish, and understand the nuances involved and the problems being faced by the industry which you wish to eliminate with your AI models.
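
As referenced in (b), here is a minimal sketch of the kind of "first model" one could code up while brushing up these skills: fitting a straight line with gradient descent using only numpy. The data, learning rate and step count are illustrative assumptions, not prescriptions.

```python
# A minimal "first model": fit y = w*x + b with gradient descent, using only numpy.
# The synthetic data and hyperparameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, 100)   # "true" relationship: y = 3x + 2, plus noise

w, b = 0.0, 0.0                                # model parameters to learn
lr = 0.01                                      # learning rate

for step in range(2000):
    y_pred = w * x + b
    error = y_pred - y
    # Gradients of the mean-squared-error loss with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"Learned w = {w:.2f}, b = {b:.2f} (expected roughly 3 and 2)")
print("Prediction for x = 10:", w * 10 + b)
```

Watching the parameters converge towards the values used to generate the data is exactly the kind of immediate feedback that makes coding these models worthwhile.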

6. Test what you have learnt – Apply the Feedback Loop to Improve

Once you have experimented with all of the above, it is very important to know where you stand vis-à-vis other people working in the same area. The moment you see things developed by you (or by others) working in front of you, learnings and ideas start flowing.

Lesson 6 – Test Your Knowledge

For this, I would recommend (and I still continue to do this very often, even after years in the industry) participating in various hackathons and coding contests around AI, IoT, ML, Analytics, NLP, etc. Most companies have the following objectives for their hackathons:

a. Scout for good talent
b. Test out various ideas that they see as potential trends or that their customers are demanding
c. Build on their objectives and prototypes through community learning and effort.

For a candidate, the learning could be:

a. Hackathon topics are an important source for knowing general industry trends and what business houses are demanding.
b. Get noticed by top companies (if you perform well in a hackathon)
c. Test your learnings against a large talent pool and build on them.
So, in short – participate even if you don't win, or even if you know you don't stand a chance to win…

7. Re-Learn & Don’t Let Your Creative Urge Die

The last thing I would like to emphasize is not to let that creative side of yours die. The urge to take an alternate view, be critical, suggest ways to improve and do things differently should not die because of the current nature of your work, or because you were taught to perform a job in a certain way. It is very important to keep alive within you the willingness to try, experiment and see things from a different perspective, to hold an alternate viewpoint, to be critical of oneself and to have that urge to do things differently.

So, in a nutshell, the motto should be to
    LEARN, TEST, APPLY, KEEP THE CREATIVE URGE INTACT & RE-LEARN.

Sources of Article

Disclaimer: The above article focuses on my learnings and views on this topic. Some of the references/text highlighted in this article, especially the exercises, have been picked up from the internet for knowledge-sharing/representational purposes only.
