Saturday, 25 November 2023

Cyberfeminism, AI, and Gender Biases


Hello everyone! In this blog post I will discuss cyberfeminism, AI, and gender biases.

Cyberfeminism

Cyberfeminism is a feminist approach which foregrounds the relationship between cyberspace, the Internet, and technology. It can be used to refer to a philosophy, methodology, or community. The term was coined in the early 1990s to describe the work of feminists interested in theorizing, critiquing, exploring and re-making the Internet, cyberspace and new-media technologies in general. The foundational catalysts for the formation of cyberfeminist thought are attributed to Donna Haraway's "A Cyborg Manifesto", third-wave feminism, post-structuralist feminism, riot grrrl culture and the feminist critique of the blatant erasure of women within discussions of technology.




Theoretical Background

Cyberfeminism arose partly as a reaction to "the pessimism of the 1980s feminist approaches that stressed the inherently masculine nature of techno-science", a counter-movement against the 'toys for boys' perception of new Internet technologies. According to Trevor Scott Milford, another contributor to the rise of cyberfeminism was the lack of female discourse and participation online concerning topics that were affecting women. As cyberfeminist artist Faith Wilding argued: "If feminism is to be adequate to its cyberpotential then it must mutate to keep up with the shifting complexities of social realities and life conditions as they are changed by the profound impact communications technologies and technoscience have on all our lives. It is up to cyberfeminists to use feminist theoretical insights and strategic tools and join them with cybertechniques to battle the very real sexism, racism, and militarism encoded in the software and hardware of the Net, thus politicizing this environment."




Critiques

Many critiques of cyberfeminism have focused on its lack of intersectional focus, its utopian vision of cyberspace (which overlooked problems such as cyberstalking and cyber-abuse), its whiteness, and its elite community building.

One of the major critiques of cyberfeminism, especially as it was in its heyday in the 1990s, was that it required economic privilege to get online: "By all means let [poor women] have access to the Internet, just as all of us have it—like chocolate cake or AIDS," writes activist Annapurna Mamidipudi. "Just let it not be pushed down their throats as 'empowering.' Otherwise this too will go the way of all imposed technology and achieve the exact opposite of what it purports to do." Cyberfeminist artist and thinker Faith Wilding also critiques its utopian vision for not doing the tough work of technical, theoretical and political education.

As the Australian art collective VNS Matrix put it: “We emerged from the cyberswamp…on a mission to hijack the toys from techno-cowboys and remap cyberculture with a feminist bent.”

They wrote their own Cyberfeminist Manifesto for the 21st Century (1991) in homage to Haraway, presented as an 18-foot-long billboard, which was exhibited at various galleries across Australia. The text bulges from a 3D sphere, surrounded by images of DNA material and dancing, photomontaged women who have been transformed into scaled hybrids. “We make art with our cunts,” the manifesto reads. “We are the virus of the new world disorder.”

“Cyberfeminism is not a fragrance,” it reads, “not boring... not a single woman... not a picnic… not an artificial intelligence... not lady-like... not mythical.”

Cyberfeminism resisted easy definition and, as the manifesto showed, there were multiple iterations and conflicting notions of what it was—and was not. By 1997, the movement was running into trouble. Haraway's and Judith Butler's texts had called for the dissolution of gender and racial hierarchies, but it was increasingly clear that cyberfeminism had failed to address race at all.

AI and Bias in Recruiting

These concerns about bias encoded in technology carry over directly into today's AI systems, and hiring is a clear example. Recruiting AI software can be tested for bias: have it rank and grade a pool of candidates, then assess the demographic breakdown of the candidates it rates highly.

The great thing is that if AI does expose a bias in your recruiting, it gives you an opportunity to act on it. Aided by AI, we can use our human judgment and expertise to decide how to address any biases and improve our recruiting processes.
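To make that concrete, here is a minimal sketch in Python of what such an audit could look like, assuming the recruiting model reduces each candidate to a numeric score. The candidate records, the 0.7 score cutoff, and the function names are all invented for illustration; the 0.8 comparison in the comment is the informal "four-fifths rule" used in employment-selection auditing.

```python
from collections import defaultdict

def selection_rates(candidates, scores, threshold):
    """Fraction of candidates per group whose score clears the threshold."""
    totals, selected = defaultdict(int), defaultdict(int)
    for cand, score in zip(candidates, scores):
        group = cand["gender"]        # demographic attribute, used only for auditing
        totals[group] += 1
        if score >= threshold:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest selection rate divided by the highest; values below 0.8
    are a common red flag (the informal 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit: score candidates with the model, then compare groups.
candidates = [
    {"name": "A", "gender": "female"},
    {"name": "B", "gender": "female"},
    {"name": "C", "gender": "male"},
    {"name": "D", "gender": "male"},
]
scores = [0.91, 0.55, 0.88, 0.72]     # stand-in for the model's candidate scores
rates = selection_rates(candidates, scores, threshold=0.7)
print(rates)                          # {'female': 0.5, 'male': 1.0}
print(adverse_impact_ratio(rates))    # 0.5, well below the 0.8 rule of thumb
```

A ratio this low in real data would not prove discrimination on its own, but it would be a clear signal to investigate the model and the data it was trained on.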




Tackling bias in artificial intelligence (and in humans)

AI has the potential to help humans make fairer decisions—but only if we carefully work toward fairness in AI systems as well.

Will AI’s decisions be less biased than human ones? Or will AI make these problems worse?

Two opportunities present themselves in the debate. The first is the opportunity to use AI to identify and reduce the effect of human biases. The second is the opportunity to improve AI systems themselves, from how they leverage data to how they are developed, deployed, and used, to prevent them from perpetuating human and societal biases or creating bias and related challenges of their own. Realizing these opportunities will require collaboration across disciplines to further develop and implement technical improvements, operational practices, and ethical standards.




Underlying data are often the source of bias

Underlying data rather than the algorithm itself are most often the main source of the issue. Models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities. For example, word embeddings (a set of natural language processing techniques) trained on news articles may exhibit the gender stereotypes found in society.
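You can probe this yourself in a few lines of Python. The sketch below assumes the gensim library and a one-time download of pretrained GloVe vectors (trained on Wikipedia and Gigaword news text); the occupation list is illustrative, and the exact values depend on the corpus and vector size.

```python
# Probe occupational gender associations in pretrained word vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads on first use

for occupation in ["nurse", "receptionist", "engineer", "programmer"]:
    # Positive lean: the occupation sits closer to "she" than to "he" in the
    # embedding space, mirroring gender stereotypes in the training text.
    lean = vectors.similarity(occupation, "she") - vectors.similarity(occupation, "he")
    print(f"{occupation:>14}: {lean:+.3f}")
```

Published audits of embeddings trained on news text (for example Bolukbasi et al., 2016, "Man is to Computer Programmer as Woman is to Homemaker") report exactly this pattern, with words like "nurse" leaning toward "she" and "programmer" toward "he".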




Human judgment is still needed to ensure AI-supported decision making is fair

While definitions and statistical measures of fairness are certainly helpful, they cannot capture the nuances of the social contexts into which an AI system is deployed, nor the potential issues surrounding how the data were collected. Thus it is important to consider where human judgment is needed, and in what form. Who decides when an AI system has sufficiently minimized bias so that it can be safely released for use? In which situations should fully automated decision making be permissible at all? No optimization algorithm can resolve such questions, and no machine can be left to determine the right answers; it takes human judgment and processes, drawing on disciplines including the social sciences, law, and ethics, to develop standards so that humans can deploy AI with bias and fairness in mind. This work is just beginning.
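For readers who have not met such measures, here is a minimal sketch of two common ones, computed directly from a model's predictions: demographic parity difference (the gap in positive-prediction rates between groups) and equal opportunity difference (the gap in true-positive rates). The toy arrays are invented, and, as the paragraph above stresses, small values on these metrics do not by themselves make a system fair.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true-positive rates (recall on genuinely positive cases) across groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Toy audit data: model predictions, ground truth, and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 1, 1, 1, 0, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_diff(y_pred, group))         # 0.50: group "a" selected far more often
print(equal_opportunity_diff(y_true, y_pred, group))  # ~0.67: qualified "b" members missed more
```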




Six potential ways forward for AI practitioners and business and policy leaders to consider

Minimizing bias in AI is an important prerequisite for enabling people to trust these systems. This will be critical if AI is to reach its potential, shown by the research of the McKinsey Global Institute (MGI) and others, to drive benefits for businesses, for the economy through productivity growth, and for society through contributions to tackling pressing societal issues. Those striving to maximize fairness and minimize bias from AI could consider several paths forward:




1. Be aware of the contexts in which AI can help correct for bias as well as where there is a high risk that AI could exacerbate bias.

When deploying AI, it is important to anticipate domains potentially prone to unfair bias, such as those with previous examples of biased systems or with skewed data. Organizations will need to stay up to date to see how and where AI can improve fairness—and where AI systems have struggled.




2. Establish processes and practices to test for and mitigate bias in AI systems.

Tackling unfair bias will require drawing on a portfolio of tools and procedures. Technical tools can highlight potential sources of bias and reveal which traits in the data most heavily influence the outputs. Operational strategies can include improving data collection through more cognizant sampling and using internal "red teams" or third parties to audit data and models. Finally, transparency about processes and metrics can help observers understand the steps taken to promote fairness and any associated trade-offs.
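As one illustration of the kind of technical tool meant here, the sketch below uses scikit-learn's permutation importance to estimate which input traits most heavily influence a model's outputs. The synthetic "screening" features are invented for the example, with zip_code standing in for a feature that could act as a proxy for a protected attribute.

```python
# Reveal which traits most influence a model's outputs via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
feature_names = ["years_experience", "test_score", "zip_code"]
X = np.column_stack([
    rng.normal(size=n),                # years_experience
    rng.normal(size=n),                # test_score
    rng.integers(0, 5, size=n),        # zip_code, integer-encoded
])
# Synthetic labels that secretly lean on zip_code, a potential demographic proxy.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name:>16}: {imp:.3f}")    # a large value for zip_code warrants scrutiny
```

If a trait with no plausible job relevance turns out to carry substantial importance, that is a cue for the red team or auditor to ask how it entered the data and what it correlates with.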




3. Engage in fact-based conversations about potential biases in human decisions.

As AI reveals more about human decision making, leaders can consider whether the proxies used in the past are adequate and how AI can help by surfacing long-standing biases that may have gone unnoticed. When models trained on recent human decisions or behavior show bias, organizations should consider how human-driven processes might be improved in the future.




4. Fully explore how humans and machines can work best together.

This includes considering situations and use cases in which automated decision making is acceptable (and indeed ready for the real world) versus those in which humans should always be involved. Some promising systems use a combination of machines and humans to reduce bias. Techniques in this vein include "human-in-the-loop" decision making, where algorithms provide recommendations or options, which humans double-check or choose from. In such systems, transparency about the algorithm's confidence in its recommendation can help humans understand how much weight to give it.
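Here is a minimal sketch of that routing logic, assuming the model outputs a probability for a binary decision; the 0.9 confidence threshold is an arbitrary illustrative choice that a real deployment would calibrate from validation data and its own risk tolerance.

```python
def route_decision(probability, threshold=0.9):
    """Automate confident cases; defer uncertain ones to a human reviewer,
    passing along the confidence so the reviewer knows how much weight to give it."""
    confidence = max(probability, 1 - probability)
    if confidence >= threshold:
        return {"decision": probability >= 0.5, "decided_by": "model",
                "confidence": confidence}
    return {"decision": None, "decided_by": "human_review",
            "confidence": confidence}

for p in [0.97, 0.62, 0.08]:
    print(route_decision(p))
# 0.97 -> automated "yes"; 0.62 -> sent to human review; 0.08 -> automated "no"
```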




5. Invest more in bias research, make more data available for research (while respecting privacy), and adopt a multidisciplinary approach.

While significant progress has been made in recent years in technical and multidisciplinary research, more investment in these efforts will be needed. Business leaders can also help support progress by making more data available to researchers and practitioners across organizations working on these issues, while being sensitive to privacy concerns and potential risks. More progress will require interdisciplinary engagement, bringing ethicists, social scientists, and the experts who best understand the nuances of each application area into the process. A key part of this multidisciplinary approach will be to continually consider and evaluate the role of AI decision making as the field progresses and practical experience in real applications grows.




6. Invest more in diversifying the AI field itself.

Many have pointed out that the AI field itself does not reflect society's diversity, including in gender, race, geography, class, and disability. A more diverse AI community will be better equipped to anticipate, spot, and review issues of unfair bias, and better able to engage communities likely to be affected by bias. This will require investment on multiple fronts, but especially in AI education and access to tools and opportunities.





