Abstract
As one of the most popular visual programming languages, Scratch has attracted a large body of work on project evaluation. Sound evaluation helps programmers better understand their own projects and also serves as a reference when they browse other projects in the online community. Most existing evaluations of Scratch projects are carried out from three perspectives: Computational Thinking (CT) ability, visual presentation aesthetics, and code quality. Among them, CT and code quality are assessed mainly from the program scripts, while visual aesthetics is analyzed from the image sequences generated during project execution. A single-view evaluation captures only one aspect of a project's performance and is therefore one-sided. In this paper, we propose a multi-view evaluation framework that integrates these evaluations under different integration policies. We quantitatively analyze the assessments from the different views in a data-driven manner and, by comparing them with overall evaluations that represent human opinions, examine their differences and connections. Through experiments, we determine the weights of the different integration policies; the resulting multi-view evaluation method generates results similar to human opinions.
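As a rough illustration of the kind of integration the framework performs, the sketch below combines per-view scores with a weighted policy. The view names, weight values, and the linear-combination form are assumptions made for illustration only, not the paper's actual integration policies.

```python
# Hypothetical sketch: weighted integration of per-view scores for a Scratch project.
# The view names, weights, and the linear-combination policy are illustrative
# assumptions, not the integration policies defined in the paper.

from typing import Dict


def integrate_views(view_scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Combine per-view scores (e.g., CT, aesthetics, code quality) into one score."""
    total_weight = sum(weights[view] for view in view_scores)
    return sum(weights[view] * score for view, score in view_scores.items()) / total_weight


# Example: three views scored on a 0-1 scale, with weights assumed to be
# fitted against overall human ratings.
scores = {"computational_thinking": 0.72, "visual_aesthetics": 0.65, "code_quality": 0.80}
weights = {"computational_thinking": 0.4, "visual_aesthetics": 0.3, "code_quality": 0.3}
print(integrate_views(scores, weights))  # overall evaluation for the project
```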