As digital platforms increasingly mediate global communication and social interaction, their role in shaping and upholding human rights has become both pivotal and contested. These platforms govern user interactions through their design choices, rule-making, and gatekeeping functions, which significantly influence online behavior and the dissemination of content. This governance is not limited to content moderation but extends to algorithmic curation and policy enforcement, thereby shaping the digital public sphere and impacting societal norms and individual freedoms (Gorwa, 2024). Understanding platform governance has thus become essential for comprehending the broader architecture of the internet, particularly as it pertains to digital rights and responsibilities in the online realm.
This paper focuses on the policies of fifteen digital platforms spanning categories such as social media, messaging apps, adult content platforms, and video-sharing platforms. The study examines how these platforms govern gender-based harmful content, an issue chosen for its urgent and pervasive nature and for the way it reflects the intersection of digital rights and gender justice. Gender-based harm encompasses a range of online abuses, including harassment, stalking, and the non-consensual sharing of intimate images, making it a critical lens for examining the efficacy and limitations of platform governance in balancing content moderation and internet freedoms. Employing purposive sampling criteria based on traffic, market dominance, and the capacity to host gender-based harmful content, the research examines the governance frameworks of these platforms. The analysis draws on documents collected in June 2024, aiming to uncover patterns, themes, and gaps in their approaches to content moderation. However, the study acknowledges limitations, including the inability to access non-public policies, the exclusion of smaller or peripheral platforms such as those on the darknet, and the absence of direct engagement with affected individuals or platform administrators. These constraints may have narrowed the scope and introduced potential bias in the findings.
The paper further explores the dual roles of platforms as enablers of free expression and gatekeepers of online safety, emphasizing how these roles shape responses to gender-based harms. Adopting a human rights-based framework, it investigates how governance structures, profit-driven moderation systems, and corporate policies influence the realization of digital rights. The findings highlight a tension between platforms' normative commitments to internet freedom and the operational challenges they face in combating cyber violence against women (CVAW), such as algorithmic biases, inconsistent enforcement, and gaps in transparency and accountability. By critically analyzing corporate discourses and practices, the study exposes the limitations of current governance models in addressing gendered harms and critiques the neoliberal values that underpin these frameworks.
Ultimately, the research advocates for a more inclusive multistakeholder approach to internet governance that balances corporate responsibilities with the protection of vulnerable groups. By addressing these gaps and fostering equity in governance practices, the study contributes to the ongoing debate about the private sector's role in defining norms of internet freedom and advancing a more just digital ecosystem.