AI is now pervasive in the tech world, and with the spread of AI/ML we have also gained a clearer understanding of how much cybersecurity matters. To make good use of AI/ML and associated technologies, it's critical that we focus on building more robust systems.
The growth of ML and its applications has also increased the need for security around them. ML algorithms are driven by data: they are trained on input data, learn patterns from it, and produce results based on what they have learned.
Here lies the critical point: this dependence on data makes ML algorithms vulnerable. Any flaw in the data fed to an algorithm during training can have severe repercussions. In this way, bias, errors, or defects can be injected into the system to disrupt the model and produce erroneous results.
This intentional corruption of training data to degrade the outcomes of predictive ML algorithms is called data poisoning. The algorithm is misled, and the ill effects are amplified downstream.
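To make the idea concrete, here is a minimal, self-contained sketch, not any real-world attack: a toy nearest-centroid classifier trained on synthetic 1-D data, where an attacker injects mislabeled outliers into the training set. Every name and number here is illustrative.

```python
import random

random.seed(0)

def make_data(n, spread=0.5):
    """Two 1-D clusters: class 0 around -1.0, class 1 around +1.0."""
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        center = -1.0 if label == 0 else 1.0
        data.append((center + random.gauss(0, spread), label))
    return data

def train_centroids(data):
    """'Train' a nearest-centroid model: just the mean of each class."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def predict(model, x):
    return min(model, key=lambda y: abs(x - model[y]))

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

train_data = make_data(200)
test_data = make_data(100)
clean_model = train_centroids(train_data)

# Poisoning: the attacker injects outliers at x = 8.0, mislabeled as
# class 0, dragging the class-0 centroid past the class-1 centroid.
poisoned_data = train_data + [(8.0, 0)] * 50
poisoned_model = train_centroids(poisoned_data)

print("clean accuracy:   ", accuracy(clean_model, test_data))
print("poisoned accuracy:", accuracy(poisoned_model, test_data))
```

The clean model separates the two clusters almost perfectly, while the poisoned model misclassifies essentially all of class 0: the attacker never touched the algorithm, only the data it was trained on.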
According to a Kaspersky Security Network (KSN) report, its products detected and blocked 52,820,874 local cyber threats in India between January and March 2020, placing India 27th globally in the number of web threats detected by the company in the first quarter of 2020.
Let's first look at the most prevalent forms of data poisoning.
The most dangerous poisoning attacks are those that corrupt training data sets with the intent of introducing a vulnerability that attackers can later exploit. This is highly critical to address because once an ML algorithm is poisoned, it is difficult to remediate: ML models are black boxes by nature, which makes such attacks hard to spot or identify. We may therefore consume the results unaware that they are compromised.
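Because a poisoned model is hard to inspect after the fact, one common class of defenses works on the data side: sanitizing the training set before training. The sketch below is a toy heuristic for illustration, not a production defense; it drops points that sit implausibly far from their class median, using the median absolute deviation (MAD) so that a cluster of injected outliers cannot skew the very statistics used to judge them.

```python
import statistics

def sanitize(data, cutoff=6.0):
    """Drop points whose robust z-score within their class exceeds `cutoff`.

    Uses median / median-absolute-deviation rather than mean / stdev, so
    that injected outliers cannot inflate the statistics used to detect them.
    """
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    kept = []
    for y, xs in by_label.items():
        med = statistics.median(xs)
        mad = statistics.median(abs(x - med) for x in xs) or 1e-9
        kept.extend((x, y) for x in xs if abs(x - med) / mad <= cutoff)
    return kept

# Legitimate points cluster near -1 (class 0) and +1 (class 1);
# the last two points are the attacker's injections, mislabeled as class 0.
training = [(-1.2, 0), (-0.9, 0), (-1.0, 0), (-1.1, 0),
            (1.0, 1), (0.8, 1), (1.1, 1),
            (8.0, 0), (7.5, 0)]
cleaned = sanitize(training)
print(cleaned)
```

Running this drops the two injected points while keeping all seven legitimate ones. Real defenses are far more involved, but the design choice illustrated here, preferring robust statistics that attackers cannot easily shift, carries over.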
Data poisoning events and attempts have increased in recent years. A Gartner report has suggested that machine learning data poisoning will become a prominent issue in cybersecurity. According to that report, as cited by Microsoft, 30% of all artificial intelligence cyberattacks by 2022 were expected to leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems.
The consequences could be dire, as this leaves everything vulnerable, including government, military, financial markets, healthcare, law enforcement, security, and education systems.
This again reminds us to bring cybersecurity to the forefront and refocus our efforts on more robust and secure systems. Stricter government policies, regulations, monitoring, and auditing are required to defend the systems in which we place such high hopes.