Since the turn of the 21st century, artificial intelligence (AI) has advanced considerably in many domains, including government affairs. The emergence of deep learning has taken many AI fields, including natural language processing (NLP), to a new level. Language models (LMs) are a key research direction in NLP. Early LMs were statistical models used to calculate the probability of a sentence; in recent years, however, large language models (LLMs) have developed substantially. Notably, LLM products such as the generative pretrained transformer (GPT) series have driven rapid progress in large-model research. Chinese enterprises have also developed LLMs, for example, Huawei's Pangu and Baidu's enhanced representation through knowledge integration (ERNIE) bot. These models have been widely used in machine translation, text summarization, named-entity recognition, text classification, and relation extraction, among other tasks, and in domains such as government affairs, finance, and biomedicine.
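To make the notion of a statistical LM concrete, the following is a minimal sketch of how a bigram model assigns a probability to a sentence via the chain rule with add-one smoothing; the toy corpus, example sentence, and function name are purely illustrative and are not drawn from the cited literature.

```python
from collections import Counter

# Toy corpus of tokenized sentences (illustrative only).
corpus = [
    ["the", "citizen", "filed", "a", "request"],
    ["the", "agency", "processed", "the", "request"],
]

unigram_counts = Counter(w for sent in corpus for w in sent)
bigram_counts = Counter(pair for sent in corpus for pair in zip(sent, sent[1:]))
vocab_size = len(unigram_counts)

def sentence_probability(sentence):
    """P(w1..wn) approximated as the product of P(w_i | w_{i-1}), with add-one smoothing."""
    prob = 1.0
    for prev, curr in zip(sentence, sentence[1:]):
        prob *= (bigram_counts[(prev, curr)] + 1) / (unigram_counts[prev] + vocab_size)
    return prob

print(sentence_probability(["the", "citizen", "filed", "a", "request"]))
```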
In this study, we observe that improving the efficiency of governance has become one of the government's core tasks in the era of big data. As government data continue to accumulate, traditional statistical models, which rely on expert experience and local features, increasingly show limitations in practice. LLMs, by contrast, offer high flexibility, strong representation ability, and strong empirical performance, and can rapidly raise the level of intelligence in government services. First, we review the research progress on early LMs, such as statistical LMs and neural network LMs. Subsequently, we focus on the research progress on LLMs, namely the Transformer, GPT, and bidirectional encoder representations from transformers (BERT) series. Finally, we introduce the application of LLMs in government affairs, including government text classification, relation extraction, public opinion risk identification, named-entity recognition, and government question answering. Moreover, we argue that research on LLMs for government affairs must address multimodality, appropriately leverage the trend toward "model as a service," ensure a high level of data security, and clarify the boundaries of government responsibility. Additionally, we propose a technical path for studying LLMs for government affairs.
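As an illustration of one of the application tasks listed above, named-entity recognition, the following minimal sketch uses the Hugging Face transformers pipeline with a general-purpose pretrained checkpoint; the input text is invented, and a real government-affairs system would use a model fine-tuned on domain-specific annotated text.

```python
from transformers import pipeline

# Load a pretrained NER pipeline; the default checkpoint is a general
# English model and is used here only for illustration.
ner = pipeline("ner", aggregation_strategy="simple")

text = "The Municipal Water Bureau approved the drainage project in March 2023."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```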
Applications of LLMs in government affairs have so far focused mainly on small-scale models, and examples involving large-scale models remain scarce. Compared with smaller models, large models offer many advantages, including higher efficiency, broader application scenarios, and greater convenience. These advantages can be understood as follows. In terms of efficiency, large models are usually trained on large amounts of heterogeneous data and therefore deliver better performance. In terms of application scenarios, large models increasingly support multimodal data, which enables more diverse applications. In terms of convenience, the "pretraining + fine-tuning" paradigm and interface-based invocation make LLMs easier to adopt in research and practical applications. This study also analyzes the challenges faced by LLMs from technological and ethical perspectives, which have caused a degree of public concern. For example, ChatGPT has generated many controversies, including whether its generated content is original, whether using ChatGPT leads to plagiarism, and who holds the intellectual property rights to the generated content. Overall, LLMs are in a stage of vigorous development. As the country promotes research on AI and its application in government affairs, LLMs will play an increasingly crucial role in this field.
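As a concrete illustration of the "pretraining + fine-tuning" mode mentioned above, the following sketch loads a general-purpose pretrained BERT checkpoint and performs one fine-tuning step on a toy government text classification task; the checkpoint name, label set, and sample texts are assumptions made for illustration and are not part of any system described in this review. The interface-based invocation mode would instead call a hosted model through a vendor-provided API.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "Pretraining + fine-tuning": start from a general pretrained checkpoint
# and adapt it with a small amount of labeled government data.
checkpoint = "bert-base-uncased"  # illustrative; a Chinese checkpoint would be used for Chinese text
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

texts = [
    "Request to repair the broken streetlight on Elm Road.",
    "Complaint about delays in business licence approval.",
]
labels = torch.tensor([0, 1])  # e.g., 0 = infrastructure, 1 = administrative service

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss is computed internally
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```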