There are two distinct sides to how the public looks at AI. On one hand, we read reports emphasizing the potential to unlock fifteen, sixteen, or even seventeen trillion dollars of economic value globally by 2030; on the other, there is ambiguity.

Many applications are passed off as AI (without actually being so) because that is what the market wants to hear. In the long run, this will dilute the narrative and lead to a deeply unsettling capability crisis. To democratize AI, we will have to demystify it first.

The government, for its part, designed a National Program for Government Schools: Responsible AI for Youth with Intel. The vision is to create 30 million AI professionals by 2030. Right now, Intel is working with 11 countries and will eventually expand this outreach to 30 or more. The programme has been introduced in schools and enables young people to solve real problems through AI. It helps build concepts and create aspirations too.

In one of the showcases, a 15-year-old girl used computer vision technology to identify instances of depression in her classmates. The application also sends alerts to teachers for follow-up action. The school has been extremely supportive, and she has been able to reach out to the government. The most encouraging part is that the girl (who hails from an extremely humble rural background) now dreams of becoming an AI ethics lawyer.

Deep Learning in Mental Health

Mental health challenges are many – depression is different from bipolar disorder, for instance, and data capture (images, etc.) will have to be very specific. 

Dear reader, please consider this: you are deeply sad inside but put up a brave front to the world, and yet those closest to you know the truth.

How many times has this happened? It’s fairly common.

But why does it happen?

Human intelligence can pick up subtle cues from changes in behaviour, both verbal and non-verbal. This is possible when we are with people we are close to; there is a strong likelihood that the world at large misses these cues completely, but we don't. With time, and depending on the quality of these interactions, the human brain learns to distinguish these subtle patterns.

What if a computer could be made to replicate the human brain in this respect, not just for a few people but for a wider population, and with a high degree of accuracy?

The facial muscles control cues such as eye gaze, eyebrow position, jaw contours and head gestures, alongside changes in the way a person speaks when tense. A breakthrough in computer vision has been 'facial landmark detection': in a widely used convention, 68 landmarks or key points on the face are tracked. On a lighter note, the next time you smile or snarl, give a thought to which of the 68 landmarks are activated and which laggards lie dormant.
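To make the idea concrete, here is a minimal sketch of how those 68 landmarks are typically represented and how a simple geometric feature can be derived from them. The index ranges follow the common 68-point convention popularized by libraries such as dlib; the landmark coordinates below are fabricated for illustration, not output from a real detector.

```python
import numpy as np

# In the common 68-point convention, indices 17-26 outline the eyebrows
# and 36-47 outline the eye contours. A detector would produce a
# (68, 2) array of (x, y) pixel coordinates per face; here we fabricate one.
NUM_LANDMARKS = 68

def eyebrow_eye_gap(landmarks: np.ndarray) -> float:
    """Mean vertical distance between the eyebrows and the eyes.

    `landmarks` is a (68, 2) array of (x, y) pixel coordinates,
    with y increasing downwards, as in image coordinates.
    """
    brows = landmarks[17:27]   # both eyebrows
    eyes = landmarks[36:48]    # both eye contours
    return float(eyes[:, 1].mean() - brows[:, 1].mean())

# Illustrative synthetic face: eyebrows 20 px above the eyes.
rng = np.random.default_rng(0)
face = rng.uniform(0, 200, size=(NUM_LANDMARKS, 2))
face[17:27, 1] = 80.0    # eyebrow row
face[36:48, 1] = 100.0   # eye row
print(round(eyebrow_eye_gap(face), 1))  # → 20.0
```

Features like this gap, tracked frame by frame, are the raw material that a deep model would learn patterns from.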

The point is, algorithms "deep" enough and exposed to an adequate number of cases can learn to distinguish the permutations and combinations of facial landmarks through which humans are likely to behave when impeded by mental health issues. The premise is that there is a pattern in the ways these 68 facial landmarks change or shift when a person suffers from depression, and that these patterns are different for people affected by PTSD. For every disorder, the pattern and the correlation are likely to change.

Well, it's not as simple and "finite" as I make it sound. One has to consider many different aspects that affect human behaviour: race, education, social conditioning, economic conditions, other underlying health issues, and so on. Algorithms that have delivered the desired results for Caucasians can't simply be "lifted and shifted" to Indian conditions.

But the important question here is: are non-verbal cues adequate for the machine to arrive at a decision?

An imperfect (but effective) analogy can be drawn from watching a movie: it helps if the volume is unmuted! At a simplistic level, it's a three-layered problem. The first two layers can generalize across populations. The first layer almost instantaneously quantifies expression and gaze from the image. The second layer integrates that information over time for recognition. The last layer detects behavioural markers that are specific to each disorder.

Behaving responsibly

Medical practitioners cannot make decisions purely based on what the algorithms indicate; at best, the algorithms augment human decision-making. In countries like India, where the doctor-to-patient ratio is not very healthy, technology that helps doctors prioritize is of great assistance. But practitioners have to be aware of the conditions in which the applications were tested, and whether those apply to the present environment as well.

Democratizing AI is not only about making the tools and technology available to the masses; it is equally about building the right mindset very early. If it is going to unlock value worth five times the Indian GDP, then it has to happen responsibly throughout the world. We are looking at global citizens of the next decade, many of whom are not yet part of the workforce. They don't necessarily have a clear idea of what AI can do and, more importantly, what it shouldn't do. The government-led initiative is a big step forward in building a strong AI foundation at the grassroots level. It's critical that young minds learn to solve real-life problems through advanced technology and lay the foundation for innovative thinking.

It's also a great opportunity for students to work with the government and industry to see whether these ideas can be scaled. A problem-solving approach is a must-have skill in the digital era, and it's best nurtured early, when the mind is uncluttered.
