Machine learning trains computers to learn patterns from historical data and use them to make predictions about what might happen in the future. This section will examine notable topics in machine learning: the Federated Learning of Cohorts algorithm, deep reinforcement learning, and Algorithms of Oppression, a book on bias in search algorithms.
Federated Learning of Cohorts
Federated Learning of Cohorts, or FLoC, is a way of tracking people on the web. It places people into "cohorts" based on their browsing history so that they can be shown ads more likely to interest them. FLoC was developed as part of Google's Privacy Sandbox project, which includes several other advertising technologies with bird-related names. Despite its name, FLoC does not actually use federated learning.
The FLoC algorithm looks at what users do online in the browser. It derives a "cohort ID" for each user with the SimHash algorithm, grouping them with other users who have viewed similar content. Each cohort contains several thousand users, which makes it harder to single out a specific person, and cohorts are recalculated every week. Websites can then retrieve the cohort ID through an API and use it to decide which ads to show. Google does not label cohorts with interests beyond grouping users and assigning them an ID, so advertisers have to work out for themselves what kinds of users each cohort contains.
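To make the grouping step concrete, the following is a minimal sketch of how a SimHash-style fingerprint can place users with overlapping browsing histories near one another. The feature choice (visited domains), the hash width, and the final clustering step are illustrative assumptions, not Chrome's actual FLoC implementation.

```python
# Hypothetical sketch of SimHash-based cohort grouping. Visited domains as
# features, a 64-bit fingerprint, and Hamming distance as the similarity
# measure are all assumptions for illustration only.
import hashlib

HASH_BITS = 64  # width of the SimHash fingerprint (assumption)

def simhash(features):
    """Fold per-feature hashes into one fingerprint; similar feature sets
    tend to produce fingerprints with a small Hamming distance."""
    counts = [0] * HASH_BITS
    for feature in features:
        digest = hashlib.sha256(feature.encode("utf-8")).digest()
        value = int.from_bytes(digest[:8], "big")  # take 64 bits of the hash
        for bit in range(HASH_BITS):
            counts[bit] += 1 if (value >> bit) & 1 else -1  # per-bit vote
    return sum(1 << bit for bit in range(HASH_BITS) if counts[bit] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Users with overlapping browsing histories get nearby fingerprints, which a
# separate clustering step could then bucket into cohorts of several thousand
# users before exposing a cohort ID through a browser API.
user_a = simhash(["news.example", "cooking.example", "gardening.example"])
user_b = simhash(["news.example", "cooking.example", "travel.example"])
user_c = simhash(["cars.example", "finance.example", "sports.example"])
print(hamming(user_a, user_b), hamming(user_a, user_c))
```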
Deep reinforcement learning
Deep reinforcement learning (deep RL) is a branch of machine learning that combines reinforcement learning (RL) and deep learning. RL studies the problem of a computer learning to make decisions by trial and error, seeing what works and what doesn't. Deep RL incorporates deep learning so that agents can make decisions from unstructured input data without the state space having to be engineered by hand. Deep RL algorithms can take in very large inputs, such as every pixel rendered on the screen in a video game, and work out which actions to take to maximise an objective (e.g. the game score). Deep reinforcement learning has been applied in a wide range of areas, from playing video games to controlling robots.
In many real-world decision-making problems, the states of the Markov decision process (MDP) are high-dimensional and too complex for traditional RL methods. Deep reinforcement learning algorithms instead use deep learning to solve such MDPs, often representing the policy or other learned functions as a neural network and developing specialised algorithms that perform well in this setting.
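As a concrete illustration of representing a learned value function with a neural network, here is a minimal deep Q-learning sketch on a made-up five-state corridor environment, assuming PyTorch is available. The environment, network size, and hyperparameters are illustrative assumptions rather than any published agent.

```python
# A minimal sketch of deep Q-learning on a toy environment. Real deep RL
# agents that learn from raw pixels use much larger networks, replay
# buffers, and target networks; everything below is simplified.
import random
import torch
import torch.nn as nn

class CorridorEnv:
    """Five states in a row; action 0 moves left, action 1 moves right.
    Reaching the rightmost state ends the episode with reward 1."""
    def reset(self):
        self.state = 2
        return self.state
    def step(self, action):
        self.state = max(0, min(4, self.state + (1 if action == 1 else -1)))
        done = self.state == 4
        return self.state, (1.0 if done else 0.0), done

def one_hot(state):
    x = torch.zeros(5)
    x[state] = 1.0
    return x

# The Q-function is represented by a small neural network that maps a
# one-hot state encoding to an estimated value for each of the two actions.
q_net = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-2)
gamma = 0.9

env = CorridorEnv()
for episode in range(200):
    state, done = env.reset(), False
    while not done:
        # Q-learning is off-policy, so a purely random behaviour policy is
        # enough for this toy problem; larger problems use epsilon-greedy.
        action = random.randrange(2)
        next_state, reward, done = env.step(action)
        # One-step temporal-difference target: r + gamma * max_a' Q(s', a').
        with torch.no_grad():
            bootstrap = 0.0 if done else gamma * q_net(one_hot(next_state)).max().item()
        target = reward + bootstrap
        prediction = q_net(one_hot(state))[action]
        loss = (prediction - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state = next_state

# The greedy policy read off the learned Q-values should prefer moving right.
print([q_net(one_hot(s)).argmax().item() for s in range(5)])
```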
Algorithms of Oppression
Algorithms of Oppression is a book by Safiya Umoja Noble based on more than six years of academic research on Google's search algorithms, examining search results from 2009 to 2015. The book discusses how search engines can produce discriminatory biases. Noble argues that search algorithms are racist and worsen society's problems because they reflect the negative biases of society and of the people who build them. She breaks down the idea that search engines are neutral by showing how their algorithms favour whiteness, returning positive cues when the word "white" is searched rather than "Asian," "Hispanic," or "Black." Her primary example is the contrast between the search results for "Black girls" and "white girls" and the bias those results reveal. These algorithms can thus be biased against women of colour and other marginalised groups, and can also harm Internet users by leading to "racial and gender profiling, misrepresentation, and even economic redlining." The book argues that algorithms keep people in vulnerable positions and are unfair to People of Color, especially women of colour.
Noble's argument also addresses how racism is built into the Google algorithm, as it is into many other coding systems, such as facial recognition and programmes used in medical care. Noble argues against the idea that many new technological methods are progressive and fair, saying that many technologies, like Google's algorithm, "reflect and reproduce existing inequities."