Pre-trained language models have achieved remarkable few-shot performance with the rise of "prompt learning", where the key problem is how to construct a suitable prompt for each example. The sample and the prompt are combined into a new input to the language model (LM). A series of prompt construction methods have been proposed recently; some target discrete prompt construction while others focus on continuous prompts, but both typically apply a single unified prompt to all examples. However, results show that it is hard to find one unified prompt that works for every example in a task: a given prompt helps the LM assign the correct class to some samples in a downstream classification task but yields wrong predictions for others. To this end, we propose a novel personalized continuous prompt tuning (PCP-tuning) method that learns personalized prompts tailored to each sample's semantics for few-shot learning. We further propose two calibration techniques that control the distribution of the generated prompts to improve their quality. Extensive experiments on ten benchmark tasks demonstrate the superior performance of our method.