Since the introduction of China’s Social Credit System (SCS) in 2014, discussions of data surveillance and the emergence of a data-state have dominated the discourse around China’s digital governance. The SCS literature is framed largely in terms of surveillance and control rather than intelligence, even though some back-end applications of the SCS in courts exhibit features of automated decision-making and compliance enforcement. The 2025 deployment of DeepSeek, however, signals a shift from purely back-end algorithms to the open integration of AI into frontline governance. As more provincial and local governments appoint AI as ‘public servants’ and apply AI in medical settings, China has issued new regulations, including standards for AI used in aged care, while some jurisdictions forbid AI-generated prescriptions. These divergent attitudes create space to explore both the extent of human–AI collaboration and the ethical implications involved.
Western scholarship warns that AI can perpetuate a “panoptic sorting,” classifying individuals through fragmented data. Through close reading and comparative study of media coverage and relevant policy documents on AI application and regulation, particularly in the governance and public wellbeing sectors, this research investigates the evolving boundaries that the state and local governments set when applying and collaborating with AI in (local) governance and everyday life. The clear trend toward integrating AI into daily public life in China underscores the government’s confidence in leveraging AI’s advantages, while also revealing how it manages the associated risks and envisions paths to mitigate potential challenges. The research examines the dynamic, multi-agent environments in which AI operates and discusses ways to develop responsible AI and establish measures for accountability. Ultimately, it contributes to broader AI governance scholarship by illuminating both the Chinese context and its wider global implications.