Brijraj Singh is a research scientist who has worked in multiple industrial research labs, including IBM, Samsung Research, and Sony Research.
He has presented research at conferences such as NeurIPS (Montreal, Canada) and ICCV (Seoul, South Korea), and at institutions such as IPAM at UCLA (Los Angeles, USA), IIT Roorkee, and IIT Indore.
INDIAai interviewed Brijraj Singh to get his perspective on AI.
How did you get your start in AI?
I have always been naturally drawn to subjects related to data (such as DBMS), which I used to read across multiple books during my B.Tech. My first exposure to AI was an introductory course during my B.Tech, but my interest deepened when I was admitted to the "Intelligent Systems" specialization at IIIT-Allahabad for my master's. There I learned about the architecture of intelligent systems, how AI is embedded in circuits, and cognitive process modelling. Reading about the evolution of AI from its inception made me more curious about natural intelligence. Later, I got the opportunity to work in the SILP laboratory under the supervision of Prof. Uma Shankar Tiwari, where I studied natural intelligence from the perspective of brain science. My master's thesis on brain signals (EEG signals) was my research start in AI: I developed a brain-signal-based biometric system using minimal electrodes.
What is the focus of your PhD research?
The main focus of my PhD research was "Optimizing Machine Learning Models". The optimization was aimed at reducing:
a) Inference time,
b) Loading time,
c) Memory requirement,
d) Number of samples.
The broader goal was to make ML models easier to use by optimizing their resource (time/space) demands.
In one of my PhD works, I enabled the popular Radial Basis Function (RBF) for big-data scenarios by controlling its memory demand. This method can be used in any application that fits a curve/surface (regression, classification) to enormous data under limited memory. In another work, I had to speed up inference through MobileNet-V2, the fastest DNN model at that time. To do so, I proposed a novel Shunt Connection and applied it to demonstrate the inference speed-up. I also proposed a method for selecting the best samples from a dataset for a given ML task.
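The core idea behind a shunt connection can be illustrated with a toy sketch: a contiguous block of layers is replaced by a much shallower side branch that maps the block's input directly to its output space. The sketch below (plain NumPy; the layer sizes, random weights, and function names are illustrative assumptions, not the actual MobileNet-V2 architecture or the published method) only shows the structural swap, not how the shunt branch would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One toy dense 'layer': matrix multiply + ReLU."""
    return np.maximum(x @ w, 0.0)

dim = 16
x = rng.standard_normal((1, dim))

# Original path: a contiguous block of 6 layers.
block_weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(6)]

def original_block(x):
    for w in block_weights:
        x = layer(x, w)
    return x

# Shunt: a shallower 2-layer branch with the same input/output shape.
# In the real method it would be trained to mimic the block's mapping;
# here the weights are random, for shape illustration only.
shunt_weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(2)]

def shunt_block(x):
    for w in shunt_weights:
        x = layer(x, w)
    return x

# Both paths accept and produce tensors of the same shape, so the shunt
# can be swapped in for the block at inference time, using 3x fewer
# matrix multiplies.
assert original_block(x).shape == shunt_block(x).shape
```

The speed-up comes from the swapped-in branch doing strictly less work per forward pass while keeping the surrounding network unchanged.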
This way, I focused most of my problems on enabling ML in a resource-constrained environment.
Tell us about your challenges while conducting your PhD research and how you overcame them.
There were multiple challenges, as a PhD happens at a phase of life when we cannot ignore family-related responsibilities. However, I will list a few technical issues I faced.
We should note that every problem is different: one problem statement can be solved quickly, while another may take much longer. So comparing your progress with your peers' is like comparing apples with oranges. And if you are never stuck on your research problem, you need to rethink whether the problem you have selected does justice to your capabilities.
A research problem should be relevant, realistic, and related. I did multiple industrial internships during my PhD to learn about real-world problems. A good problem will force you to think beyond your limits, and that is how the horizon of knowledge expands.
At one point, it felt like I was reinventing the wheel, so I paused for some time but kept discussing with people, and eventually I found a way to apply the same idea to a current real-world problem.
So these were a few of the challenges I faced and overcame during my PhD.
Tell us about your role at Sony Research as a Research Scientist. What is your daily schedule?
At Sony Research, I work on multiple research problems. First, I am expected to improve existing systems by treating them as research problems and finding solutions. Beyond that, I have to explore the possibility of bringing novel solutions into Sony products.
A few problems are farsighted and futuristic, whereas others concern the current systems. I am expected to complete the full cycle from research to market.
Daily, I interact with my team, who implement the ideas. I keep an eye on their implementation issues and help resolve roadblocks with them, whether on the implementation front or the ideation front. I also work on implementations and experiments that I carry out myself. Along the way, I have to follow ongoing research in the community by reading research papers, and I prepare presentations for higher management on the ongoing work and its updates.
What similarities do you observe between working in R&D and being a research scholar?
It was a very smooth transition for me because, during my PhD, I interned at Samsung Research, where I surfaced multiple problems in the existing industrial landscape. I worked on a few of those problems as an intern and continued working on them when I joined Samsung Research as a full-time employee. I didn't notice any difference between the two set-ups except the computing environment.
Similarities:
However, there were a few differences:
How many patents do you currently have? Tell us about your patents.
I have a total of 7 patents, of which a few are still pending publication. Most of my patents are in ML optimization. For example, one optimizes a DNN model's loading latency; a few others optimize inference latency and systems.
I will describe one interesting and intuitive idea on loading latency. A vision-based DNN model is a collection of layers, often 100 or 150 of them. When a DNN model is loaded into working memory, all these layers must travel from auxiliary storage to working memory. This travel costs some latency, known as loading latency. On a desktop/laptop we don't care about this latency, but on hand-held devices like mobile phones, we want the camera display to pop up the instant we tap the camera icon. Since multiple DNN models back the camera display, they are all loaded at that moment, so their loading latency must be small to preserve the user experience. Here, I proposed modularizing a DNN model of, say, 100 layers into five sub-models of 20 layers each and utilizing multiple hardware threads to load them into working memory together in the ArmNN environment. This way, I could reduce loading latency from 228 ms to 90 ms, and it is one of the ideas on which I got a patent.
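The loading scheme above can be sketched in a few lines: since loading is I/O-bound, sub-models can travel to working memory concurrently, and the wall-clock latency approaches that of the slowest shard rather than the sum. This is a minimal simulation in plain Python (the `load_module` function, sub-model names, and sleep-based delays are stand-ins I invented for illustration; the actual work used the ArmNN runtime, which is not shown here).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_module(name, delay=0.05):
    """Stand-in for reading one sub-model from storage (I/O-bound)."""
    time.sleep(delay)              # simulate disk/flash read latency
    return f"{name} loaded"

modules = [f"submodel_{i}" for i in range(5)]  # 100 layers -> 5 x 20 layers

# Sequential loading: per-shard latencies add up.
t0 = time.perf_counter()
sequential = [load_module(m) for m in modules]
t_seq = time.perf_counter() - t0

# Parallel loading: shards travel to working memory concurrently.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    parallel = list(pool.map(load_module, modules))
t_par = time.perf_counter() - t0

assert sequential == parallel      # same result, lower wall-clock latency
print(f"sequential: {t_seq*1000:.0f} ms, parallel: {t_par*1000:.0f} ms")
```

Threads help here despite Python's GIL because the work is I/O waiting, not computation; the same reasoning is why multiple hardware threads help when shards stream from flash storage on a phone.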
What advice do you have for those who want to work in AI research? What are the most efficient methods of progress?
I always advise picking up good research problems, ones that will create an impact when solved.
To a newcomer: when you go for a job after a PhD or research stint, if your problem is not big enough, you won't be able to impress the interviewer with your solution, no matter how many papers you have published. So, in my opinion, even with limited publications, if you have solved good-quality problems you can easily impress interviewers and expect a better career.
Continuous experiments and brainstorming eventually spark the idea, so please do not hesitate to share your ideas with colleagues. When you explain an idea to someone, you gain clarity about it yourself. It is absolutely fine even if the listener is not from the same domain, because even a fundamental doubt may drag you to a point you never explored.
Could you provide a list of notable academic books and journals on artificial intelligence?
Books:
Journals:
Conferences: