Sign language is one of the most important and natural forms of communication among deaf people. However, it is unfamiliar to most hearing people, who communicate in spoken languages such as English or Nepali, and interpreters are rarely available for every deaf person. A hand sign recognition system identifies hand gestures and their meanings from the frames of a video source. It can be implemented in several ways, but one of the most effective is to compare hand postures and gestures against a labelled database of hand images. This project extracted hand features such as the edges of the hand and fingers from a given frame using image processing techniques, applying grayscale conversion and thresholding within a 128 × 128 px Region of Interest (ROI) in which the hand must be placed. The class, i.e. the alphabet letter, of the hand sign was recognized using a machine learning model, a Convolutional Neural Network (CNN) composed of several layers. After the hand is segmented within the ROI by Gaussian thresholding, the CNN classifier processes it through its convolutional, pooling, fully connected and final output layers and proposes the most suitable class for the gesture within the project interface. The project achieved 70% accuracy with a two-layer CNN model. In addition, separate classifiers were built for hand signs that appear visually similar, which raised the overall accuracy to 80%.
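
The sketch below illustrates the kind of pipeline described above: cropping a 128 × 128 px ROI, converting it to grayscale, applying Gaussian blur and adaptive Gaussian thresholding, and feeding the result to a two-convolution-layer CNN. It is a minimal example assuming OpenCV and Keras (TensorFlow); the ROI position, layer sizes, and number of output classes are illustrative assumptions, not taken from the project source.

```python
# Minimal sketch of the described pipeline; parameter values are assumptions.
import cv2
import numpy as np
from tensorflow.keras import layers, models

ROI_SIZE = 128       # 128 x 128 px region of interest, as described above
NUM_CLASSES = 26     # assumed: one class per alphabet letter

def preprocess_frame(frame, x0=100, y0=100):
    """Crop the ROI where the hand is placed, convert to grayscale,
    blur, and apply adaptive Gaussian thresholding."""
    roi = frame[y0:y0 + ROI_SIZE, x0:x0 + ROI_SIZE]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 2)
    thresh = cv2.adaptiveThreshold(
        blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV, 11, 2)
    # Scale to [0, 1] and add a channel dimension for the CNN input.
    return thresh.astype(np.float32)[..., np.newaxis] / 255.0

def build_model():
    """Two convolution + pooling blocks, then a fully connected layer
    and a softmax output over the sign classes."""
    model = models.Sequential([
        layers.Input(shape=(ROI_SIZE, ROI_SIZE, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In use, each webcam frame would be passed through `preprocess_frame` and the resulting binary ROI given to the trained model's `predict` method; a second, smaller classifier of the same form could then be applied only to the classes that are easily confused, as the abstract describes.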