Recently proposed neural architecture search (NAS) algorithms adopt neural predictors to accelerate architecture search. The ability of neural predictors to accurately predict the performance metrics of neural architectures is critical to NAS, but obtaining the training data for neural predictors is often time-consuming. How to obtain a neural predictor with high prediction accuracy from a small amount of training data is therefore a central problem for neural predictor-based NAS. Here, a new architecture encoding scheme is first devised to calculate the graph edit distance between neural architectures, which overcomes the drawbacks of existing vector-based architecture encoding schemes. To enhance the predictive performance of neural predictors, two self-supervised learning methods are proposed to pre-train the architecture embedding part of neural predictors so that it generates meaningful representations of neural architectures. The first method designs a graph neural network-based model with two independent branches and utilizes the graph edit distance between two different neural architectures as a supervisory signal, forcing the model to generate meaningful architecture representations. Inspired by contrastive learning, the second method presents a new contrastive learning algorithm that utilizes a central feature vector as a proxy to contrast positive pairs against negative pairs. Experimental results show that the pre-trained neural predictors achieve comparable or superior performance to their supervised counterparts while using only half of the training samples. The effectiveness of the proposed methods is further validated by integrating the pre-trained neural predictors into a neural predictor-guided evolutionary neural architecture search (NPENAS) algorithm, which achieves state-of-the-art performance on the NASBench-101, NASBench-201, and DARTS benchmarks.
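To make the first pre-training idea concrete, the following is a minimal PyTorch sketch of pairwise pre-training that uses graph edit distance (GED) as the supervisory signal. The names `ArchEncoder` and `ged_pretrain_step`, the MLP encoder standing in for the paper's GNN-based branches, and the flattened 49-dimensional architecture encoding are illustrative assumptions, not the authors' implementation; the sketch only shows how the distance between the embeddings of two architectures can be regressed onto their precomputed, normalized GED.

```python
# Minimal sketch (assumptions noted above): two independent encoder branches embed
# a pair of architectures, and the distance between the two embeddings is trained
# to match their precomputed, normalized graph edit distance (GED).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArchEncoder(nn.Module):
    """Placeholder MLP standing in for a GNN-based embedding branch."""
    def __init__(self, in_dim: int, emb_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def ged_pretrain_step(enc_a, enc_b, opt, arch_a, arch_b, ged):
    """One pre-training step: embedding distance is regressed onto normalized GED."""
    emb_a, emb_b = enc_a(arch_a), enc_b(arch_b)
    pred_dist = torch.norm(emb_a - emb_b, dim=-1)   # distance in embedding space
    loss = F.mse_loss(pred_dist, ged)               # GED acts as the supervision
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    enc_a, enc_b = ArchEncoder(49), ArchEncoder(49)  # two independent branches
    opt = torch.optim.Adam(
        list(enc_a.parameters()) + list(enc_b.parameters()), lr=1e-3
    )
    arch_a = torch.randn(8, 49)                      # batch of encoded architectures
    arch_b = torch.randn(8, 49)
    ged = torch.rand(8)                              # precomputed, normalized GEDs
    print(ged_pretrain_step(enc_a, enc_b, opt, arch_a, arch_b, ged))
```

Using two separate encoder instances mirrors the "two independent branches" described above; a weight-shared (Siamese) encoder would be an equally plausible variant of this sketch.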