The development of trustworthy AI-based algorithms for public government services is a challenging subject. The growth of AI in recent years has been unprecedented, and novel applications that make use of AI now seem to be the norm. The EU recently adopted the EU AI Act, a significant step towards regulating the use of AI, but the pace of advances in AI is difficult to follow, let alone regulate. Complementary measures must therefore be employed, such as self-assessment tools, especially for public services that use AI. Self-assessment tools have the potential to steer the development of AI-based public services towards trustworthiness. An early example of such a tool is ALTAI, but several others exist. This paper identifies critical factors of self-assessment tools and describes a methodology for synthesizing self-assessment tools for public services that embed AI components.