There is an undeniable communication barrier between the Deaf community and the hearing majority, and innovations in automatic sign language recognition aim to tear it down. My contribution to this domain, Nepali Sign Language Recognition, is an automated system that recognizes Nepali Sign Language gestures using a 2D Convolutional Neural Network (CNN). To prepare each gesture for recognition, image processing techniques, namely grayscale conversion, thresholding, and edge and contour detection, were applied to produce segmented shape images of individual hand gestures. These segmented images were then fed into the trained CNN, which consists of four kinds of layers: convolutional layers, pooling/subsampling layers, nonlinear layers, and fully connected layers. The ReLU activation function was applied to the layer outputs to introduce nonlinearity into the model. The network was trained on 1,200 images for each of the 37 letters and 10 numerals of Nepali Sign Language, a total of 56,400 images. Furthermore, the image of every character was flipped along the vertical axis and added to the training set. Finally, a set of 1,200 blank images was also added to the training data, which increased the final accuracy from 82.4450% to 92.4568%.
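The pipeline described above can be sketched in plain NumPy. This is a minimal illustrative version only: the function names (`to_grayscale`, `binarize`, `conv2d`, `max_pool`) are hypothetical stand-ins, and the actual system presumably used library implementations (e.g. OpenCV for thresholding/contours and a deep learning framework for the CNN) rather than hand-written loops.

```python
import numpy as np

def to_grayscale(rgb):
    """Luminosity grayscale conversion (ITU-R BT.601 weights)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def binarize(gray, thresh=127.0):
    """Global thresholding: hand pixels -> 1.0, background -> 0.0."""
    return (gray > thresh).astype(np.float32)

def relu(x):
    """Nonlinear layer: element-wise max(x, 0)."""
    return np.maximum(x, 0.0)

def conv2d(image, kernel):
    """'Valid' 2D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Subsampling layer: non-overlapping max pooling."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Tiny synthetic "gesture": a bright square on a dark background.
img = np.zeros((8, 8, 3), dtype=np.float32)
img[2:6, 2:6, :] = 255.0

gray = to_grayscale(img)                   # (8, 8) grayscale image
binary = binarize(gray)                    # segmented hand shape
feat = relu(conv2d(binary, np.ones((3, 3), dtype=np.float32)))
pooled = max_pool(feat)                    # (3, 3) downsampled feature map
```

A fully connected layer (a matrix multiply over the flattened pooled features followed by softmax) would then map such feature maps to the 47 character classes plus the blank class.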