Medical diagnosis encompasses tests designed to detect disease or infection. Early diagnosis of disease is crucial to increase the patient's chances of recovery, while errors in diagnosis can prove fatal. Machine learning is a practical approach to medical diagnosis with minimal cost and high accuracy, as machine learning implementations can make near-perfect diagnoses of diseases, recommend the best medicines, and identify high-risk patients. For countries like Nepal, where 13.02 million people are at risk of malarial infection every summer [1], machine learning can help reduce the cost of treatment and provide early and accurate diagnosis of the disease, which can prove vital for saving many lives. In this project, I use image processing tools to convert pathological data in captured images to machine-readable form and apply machine learning algorithms to diagnose infected cells. Of all the algorithms evaluated, CNN achieved the highest test accuracy of 96.24%, while KNN achieved the lowest at 58%.
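As an illustration of the CNN-based classifier described above, the sketch below shows a minimal Keras model for binary classification of cell images; the layer sizes, input resolution, and directory layout are assumptions, not the exact architecture or data organization used in the project.

```python
# Minimal sketch of a CNN for infected/uninfected cell classification (assumed architecture).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cell_classifier(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # infected vs. uninfected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Assumed directory layout: cell_images/{infected,uninfected}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cell_images", image_size=(64, 64), batch_size=32)
model = build_cell_classifier()
model.fit(train_ds, epochs=10)
```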
This project deals with the development of a 2D game using the Unity 4 game engine on Windows OS. Unity is a game engine that acts as a framework for aiding game development: it provides an interface for controlling game assets through scripting, supplies a standard set of libraries for game development, and allows scripting in the C# language. The project aims to create a game in a 2D environment to learn game mechanics and understand deployment across multiple platforms. The genre of the game designed is a platformer. Platformer games are characterized by the player jumping from one platform to another while avoiding obstacles and enemies.
Machine learning has made it possible to do things we once thought only a human could do. Abilities like detecting and recognizing particular objects in a scene are now performed by computers through training. However, the range of objects a machine can recognize is limited by how much it has been trained. This ability is already applied in many fields, such as face detection, color classification, and fruit classification. My project leans towards recognizing monuments and buildings of historical and religious significance. Tourists visiting historical places might not be aware of what building or statue they are observing. By letting them take a picture of the thing they are curious about and upload it to our application, the application recognizes the object displayed in the picture and tells the user what they are actually looking at. The system is trained with images of such monuments so that when a similar image is shown to it by the user, it can identify the object in the image. In essence, there is an application through which the user can take pictures, and the application detects the object in the image (provided the object to be recognized has been taught to the system through supervised learning). Images are classified using classification algorithms such as SVM, CNN, and a few others. They are first cleansed and augmented through Python scripts, and then used to train a classifier built with TensorFlow Keras. Finally, an application is created, and the classifier is used by the application to recognize the monuments photographed from the application itself.
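A hedged sketch of the augmentation step mentioned above, using Keras' ImageDataGenerator; the specific transforms, image size, and directory names are illustrative assumptions rather than the project's exact preprocessing scripts.

```python
# Illustrative augmentation of monument photos before training (assumed parameters).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values
    rotation_range=20,        # small random rotations
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
)

# Assumed layout: monuments/<class_name>/*.jpg, one folder per monument.
train_generator = augmenter.flow_from_directory(
    "monuments", target_size=(224, 224), batch_size=32, class_mode="categorical")
```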
Data analysis is an important aspect of this technological boom. Tons of data are produced every day, and if we can use these data efficiently, we can surely benefit from them. In particular, if the activity of people in an online business is tracked, companies can focus more on hot leads and adopt a policy for converting those leads into customers. The purpose of this project is to analyze such data to generate a lead score and predict potential customers on the basis of that score. In the B2B realm alone, MarketingSherpa calculates that only around 21% of companies have established a lead scoring practice. Thus, we try to bridge that gap using lead score prediction, along with visualizations, using logistic regression to analyze the data. This project builds a logistic regression model to assign each lead a score between 0 and 100. A higher score means the lead is hot, i.e., most likely to convert, whereas a lower score means the lead is cold and will most likely not convert. We obtained insight into customer lead scores with more than 85% accuracy. The approach can be further applied to datasets with similar attributes.
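The lead-scoring idea can be sketched as below: a logistic regression's predicted conversion probability is rescaled to a 0-100 score. The dataset file, column names, and split are hypothetical stand-ins, not the project's actual feature set.

```python
# Sketch of lead scoring with logistic regression (file and column names are hypothetical).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

leads = pd.read_csv("leads.csv")                       # assumed dataset file
X = pd.get_dummies(leads.drop(columns=["Converted"]))  # one-hot encode categorical attributes
y = leads["Converted"]                                 # 1 = converted, 0 = not converted

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Lead score = predicted conversion probability scaled to 0-100.
lead_scores = (model.predict_proba(X_test)[:, 1] * 100).round().astype(int)
print("Test accuracy:", model.score(X_test, y_test))
```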
The Travelling Salesman Problem (TSP) is an intensively studied problem in the field of combinatorial optimization. A problem is NP-hard if every problem in NP can be reduced to it in polynomial time; since TSP is NP-hard, no efficient exact algorithm for it is known, and approximate and heuristic methods are widely studied. The main objective of TSP is to find the minimum-distance route that visits each of a given set of cities exactly once and then returns to the start city. This project aims to provide a method for solving TSP using both the Genetic and Nearest Neighbour algorithms, produce efficient results, and compare the results generated by the two algorithms. The comparison is made by running both algorithms on varying numbers of cities and evaluating the results in terms of tour distance and execution time. To determine which algorithm gives the best performance, several algorithms can be compared for solving TSP under the same conditions. This project therefore compares the two algorithms on various parameters that help in choosing the better algorithm as per one's needs. Finally, through a number of test cases, the outputs obtained from the Nearest Neighbour and Genetic algorithms are compared. Nearest Neighbour always provides a suboptimal path for the given number of cities, while the Genetic algorithm provides a better result for smaller numbers of cities. For larger numbers of cities, however, the Genetic algorithm's execution time increases drastically, making it hard to generate a path better than the one from Nearest Neighbour.
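To make the comparison concrete, here is a minimal sketch of the Nearest Neighbour heuristic on a distance matrix; the genetic algorithm side is omitted, and the toy distances are illustrative only.

```python
# Nearest Neighbour heuristic for TSP: always travel to the closest unvisited city.
def nearest_neighbour_tour(dist, start=0):
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    tour.append(start)  # return to the start city
    return tour, sum(dist[a][b] for a, b in zip(tour, tour[1:]))

# Toy 4-city distance matrix (symmetric, illustrative values only).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(nearest_neighbour_tour(dist))  # ([0, 1, 3, 2, 0], 23)
```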
Since their early days, survival games have retained their popularity among avid gamers and enthusiasts. The genre typically offers a taste of action and a sense of adventure and accomplishment to the player. Popularized by movies and television shows, the genre has expanded into several subcategories such as adventure, horror, and battle royale, among others. However, contemporary survival games have stopped being about survival; rather, they commonly rehash the same gameplay while putting more focus on in-app purchases. Ironically, these games have been criticized for their blatant attempts at cash-grabs over compelling storylines. Furthermore, gameplay whose pace and challenge never change soon loses its charm. Hence, a survival game should put more focus on creating a sense of challenge through player-focused gameplay and simple controls, and the gameplay should feel appealing with modern graphics. Staying true to these findings, the 3D survival game is realized with the help of Unreal Engine 4, which allows for a realistic 3D environment matched with photorealistic lighting. Moreover, it opens the door to advanced customization features for achieving the conceptualized game. To prevent the game from losing its charm, the overall difficulty increases with time as long as the player survives. Furthermore, the simple controls allow players to jump straight into the game, while mastering it requires practice. With this replayability, the video game allows users to play for hours at a time.
In recent years, ad hoc networks have opened a new dimension in wireless networking with a large number of mobile nodes. They allow wireless nodes to communicate with each other in the absence of centralized support. Ad hoc networks do not follow any fixed infrastructure because of node mobility and multi-path propagation. Due to the dynamic topology and routing overhead, selecting a routing protocol in a Mobile Ad-hoc Network (MANET) is a great challenge. A key design issue for an efficient and effective routing protocol is to achieve optimal values of the performance parameters under different network scenarios. There are various protocols available for MANETs. In the past, tremendous work has been done on comparing and evaluating routing algorithms for MANETs using NS2 (Network Simulator 2). This project involves a study of proactive and reactive routing protocols, namely OLSR (Optimized Link State Routing Protocol), DSDV (Destination-Sequenced Distance Vector), and AODV (Ad-Hoc On-Demand Distance Vector Routing), and a comparison between these routing protocols on the basis of performance metrics (throughput, packet delivery ratio, packets dropped, jitter, end-to-end delay, etc.) with the help of the NS3 simulator.
Q&A forums like Quora, Stack-overflow, Reddit, etc. are highly susceptible to question pair duplication. Two questions asking the same thing could be too different in terms of vocabulary and syntactic structure, which makes identifying their semantic equivalence challenging. In this report, we explore methods of determining semantic equivalence between pairs of questions using a dataset released by Quora of more than 400,000 questions pairs through Machine Learning with Natural Language Processing. The machine learning approach is based upon Levenshtein distance between two sentences and the sentence-vector encoding using Word2Vec models to experiment with a variety of distance metrics and predict their semantic equivalence. The experimental results show that the artificial neural network with word embeddings achieves high performance, achieved an F1-score of 0.6529 with 0.7236 accuracies on the test set.
The world has experienced growth in the demand and consumption of electrical energy in the past decade. Concurrently, different companies are working towards building smart houses by replacing traditional devices with electrical devices. Most of these electrical appliances consume a large amount of electricity in return for their service. Thus, various organizations have been taking extensive measures to increase electricity production. Although green energy power plants are replacing non-renewable resources, none of the alternatives is fully environment-friendly. While the advancement in eco-friendly technology continues, the world must conserve its electrical energy. This paper describes a system that combines IoT with deep learning to monitor electronic devices in smart homes and regulate electricity consumption. The system continually monitors all electronic devices connected to it and automatically switches off unused devices to save electricity. As a result, the system also reduces the risk of casualties associated with short circuits. Furthermore, it assists its users in cutting down their electricity bills.
In this modern age of digitization, where almost everything is available on digital platforms, news that used to be available only on television or in newspapers is now easily accessible via the internet. This has both pros and cons, as the spread of news can have positive and negative impacts on readers. The spread of news has two consequences: easy access to news at low cost, and the wide spread of fake news. The spread of fake news has a negative impact on society, and most people believe everything they read on the internet without considering that the news might be fake. So, this web application has been designed and developed for the detection of fake news. It detects fake news present in an online medium using supervised learning algorithms. For this, a dataset was used containing information such as the subject, body, writer, and context of each statement. NLP techniques and supervised learning algorithms were then used to classify news by truthfulness. The web application reports whether the entered news is true or false and also provides a truth probability score.
Blockchain technology is a revolutionary innovation for its potential to build solutions where strangers can transact with each other without depending on any middleman to supervise the transaction between the parties. In our conventional certificate validation system, employers have to trust mediators, i.e., the certificate holder, instructor, or college authorities, for the authenticity of a certificate. Because of this reliance on trust, the system is vulnerable to fraud by corrupt middlemen who deliver fake certificates. Fake certificates cause critical harm to society: it is possible to create them at very low cost, and the process of verifying them is very complex. This problem can be solved by generating digital certificates on the blockchain. Blockchain technology provides immutable, decentralized, and publicly verifiable transactions. These properties can be used to generate digital certificates that are counterfeit-proof and easy to verify, because a certificate generated through the blockchain cannot be edited or modified afterwards.
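One common way to realize this, sketched below under assumed field names, is to store only a hash of the certificate on-chain: verification recomputes the hash of the presented certificate and compares it with the on-chain record, so any tampering is immediately visible.

```python
# Sketch: anchor a certificate by its hash; verification recomputes and compares it.
import hashlib
import json

def certificate_hash(cert: dict) -> str:
    # Canonical JSON so the same certificate always yields the same hash.
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

cert = {"student": "Jane Doe", "degree": "BSc CS", "issued": "2021-06-15"}  # hypothetical fields
on_chain_hash = certificate_hash(cert)          # value written to the blockchain at issuance

# Later, an employer verifies the certificate presented to them.
presented = dict(cert)
print(certificate_hash(presented) == on_chain_hash)   # True: untampered
presented["degree"] = "MSc CS"
print(certificate_hash(presented) == on_chain_hash)   # False: any edit changes the hash
```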
With the advancement of automation technology, life is getting easier in many respects. The rapid growth in the number of internet users over the past decade has made the internet a part and parcel of life, and IoT is the latest emerging internet technology. The Internet of Things is a growing network of everyday objects, from industrial devices to consumer goods, that can share data and complete tasks while you are busy with other activities. A Wireless Home Automation System (WHAS) using IoT is a system that uses computers or mobile devices to control basic home functions automatically through the internet from anywhere in the world. A home automation system differs from other systems by allowing the user to operate devices from anywhere in the world over an internet connection. Here, the data are collected in real time. In this project, the user can turn appliances on or off. The Android application is linked to Firebase, which is connected to the NodeMCU to which a motor and an LED are connected. Based on the output desired by the user, the current and voltage sensors connected to the NodeMCU on a breadboard send their respective readings for the specific electrical appliance back to the application. The user can then decide whether to keep the appliance switched on or turn it off.
When an individual wants to read particular news, it should be classified into the proper category so that they can tell whether they are interested in it. A news writer writes an article and submits it to the publisher, whose job is to perform SEO operations and publish the news under a certain category of the website. The publisher has to read several paragraphs to know which category the news falls in. Many documents and text articles found on social networking sites and forums such as facebook.com and quora.com are unclassified, and one has to read part of an article to know which category of news it is. News classification makes this easier: it is the task of categorizing news content into predefined categories using a training news dataset. In this project, a system has been built for categorizing news content into different categories using a 16,719-document news dataset obtained from Kaggle.com. The project uses the Naïve Bayes algorithm for news classification because it is fast at predicting on the test dataset and performs well even with a small dataset. It classifies the news by analyzing its content. The system uses a self-created news corpus with 6 different categories and a total of 16,719 documents collected from Kaggle.com. Testing showed an accuracy of 80.3% for news classification using Naïve Bayes. In the system, the user can categorize news by content from any English text document by clicking the classify button.
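A minimal sketch of the Naïve Bayes classification step with scikit-learn; the vectorizer choice, toy corpus, and labels are illustrative assumptions, while the real system is trained on the 16,719-document corpus described above.

```python
# Sketch: bag-of-words features + Multinomial Naive Bayes for news category prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; the real system uses the 16,719-document Kaggle corpus.
texts = ["The striker scored twice in the final",
         "Parliament passed the new budget bill",
         "The startup released a faster smartphone chip"]
labels = ["sports", "politics", "technology"]

classifier = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
classifier.fit(texts, labels)
print(classifier.predict(["Company released a faster smartphone chip model"]))  # ['technology']
```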
Emotion recognition has become an important area that plays a significant role in human-computer interaction. Emotions can be expressed in many ways, such as facial expressions and gestures, speech, and written text. A sufficient amount of work has been done on speech and facial emotion recognition, but text-based emotion recognition still needs researchers' attention. With the increase in text information in social media posts, micro-blogs, news articles, etc., this content can be used to discover various aspects, including emotions. Emotion detection in text documents is essentially a content-based classification problem involving concepts from the domains of natural language processing and machine learning. In this paper, an overview of the emerging field of emotion detection from text is presented. The current generation of detection methods is usually divided into three main categories: keyword-based, learning-based, and hybrid approaches, which are discussed along with their limitations. By examining the limitations of these approaches, a possible solution technique is suggested to improve emotion detection capabilities in practical systems that emphasize human-computer interaction. To demonstrate the discussed process, a system is developed that accepts file and text input and extracts emotion from the text. This system is based on a neural network that classifies and extracts the text's emotion. This methodology can be beneficial in fields like emotion-based selling, social media analysis, and integration with a chatbot for better customer interactions.
Noise Removal is a system that deals with obstacles that appear in a video. While capturing a video, unwanted obstacles may appear and degrade it. Re-recording the video time and again is time-consuming and not always possible: people may not be able to record the same clip twice, nor can they get back the video without the obstacle. So, noisy content in a video can create big problems by damaging its quality. SiamMask is used in this project so that the mask of an object can be predicted directly. A deep CNN has been used in this project: it takes the video as input, the user selects the portion they want to remove from the video, the video is reconstructed, and the output is made noise-free. The system removes the obstacles and replaces them with a similar background; hence, it relies on the frames before and after the selected obstacle. This application can be used by anyone to obtain a noise-free video.
A Twitter bot is a type of bot software designed to control and automate a Twitter account. This is achievable with the help of the Twitter API and Tweepy. The overall scope of the bot, even in real-world scenarios, depends on an individual's needs; the bot can be designed to automate single or multiple accounts at any time for various purposes. In real-world examples, there have been documented uses of Twitter bots in sectors like education, research, and marketing. A bot can be programmed to perform multiple actions like tweeting, retweeting, replying, liking, following, unfollowing, direct messaging, and even interacting with other Twitter accounts. These interactions must adhere to a set of rules governed by Twitter. However, there have been several instances of misuse of the product, either to spread misinformation or for malicious intent. With the intention of highlighting bots in a positive light, two Twitter bots are programmed using the Twitter API, showcasing basic features such as retweeting, replying, liking, and following. The project showcases Twitter bots in two lights: creativity and automation. The first bot generates new tweets from a preexisting document, whereas the second bot automates certain Twitter actions with the help of the API. The first bot generates new tweets using the Markov chain principle, and these newly generated tweets are periodically posted through a designated Twitter account. The second bot, with its separate designated Twitter account, performs automated interactions with the help of the Twitter API, gaining new followers in the process. Together, they exhibit creative and automated uses of bots in social media.
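The Markov chain generation and posting step of the first bot could look roughly like the sketch below. The credentials, corpus file name, and chain order are placeholders, and the posting call uses Tweepy's v1.1-style API; the project's actual code may differ.

```python
# Sketch: order-1 Markov chain over a text corpus, then post the generated tweet.
import random
from collections import defaultdict
import tweepy   # assumed; credentials below are placeholders

def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate_tweet(chain, max_words=20):
    word = random.choice(list(chain))
    out = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)[:280]          # respect Twitter's character limit

corpus = open("source_document.txt", encoding="utf-8").read()   # preexisting document
tweet = generate_tweet(build_chain(corpus))

auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")              # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
tweepy.API(auth).update_status(tweet)                            # post the generated tweet
```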
Chess is a strategy game played on a checkered board. In recent years, computer games with automated systems have become a common form of entertainment. Moreover, the emergence of artificial intelligence systems has evolved traditional games in new and profound ways. Normal chess gameplay involves two players sitting next to each other and moving the pieces, but a different situation arises when there is no one available to play against. This project deals with an autonomous chess-playing system capable of recognizing all possible chess board states and generating chess piece moves on the physical board in real time. The proposed system lets players play chess against an AI on a physical board without feeling the absence of an opponent. In this system, a heuristic approach evaluates the chess position, the Minimax algorithm determines the optimal move, and Alpha-Beta pruning minimizes the number of positions searched. An Arduino and a Raspberry Pi power the system, and stepper motors drive the chess pieces in the X and Y directions. The implementation, methodology, and feasibility analysis are discussed in this report.
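The move-search idea can be illustrated with a generic minimax routine with alpha-beta pruning over a toy game tree; in the actual system, real chess move generation and the heuristic evaluation function take the place of the hard-coded leaf values used here.

```python
# Minimax with alpha-beta pruning over a toy game tree: nested lists are subtrees,
# numbers are leaf evaluations (a chess engine would evaluate positions instead).
import math

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):      # leaf: heuristic evaluation of the position
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:               # prune: the opponent will never allow this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

# Toy game tree: the maximizing player should pick the middle branch (value 5).
tree = [[3, 5], [6, 5, 9], [1, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))
```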
A quicker prototyping and design process enables better and faster development, with more time to explore a better user interface experience. This project aims to simplify the design workflow of businesses and enable them to quickly create webpages. The project has been built for the prediction and generation of computer code from a Graphical User Interface screenshot given as input. For the system to identify the different entities in the input, deep learning approaches, namely a Convolutional Neural Network and a Recurrent Neural Network, are used. To transform the identified entities, both deep learning techniques are then leveraged to generate the computer code needed to produce the webpages. On providing an input to the system, users obtain automatically generated computer code for the webpage they want to create. The system can also be an effective tool for developers, letting them focus more on additional engineered features and lessening the burden of manually programming the user interface.
A music genre is a classification of music that identifies and arranges music into different types. In earlier days, music was labeled with genres manually, which was time-consuming and inefficient. However, with advances in technology and various research efforts in Music Information Retrieval (MIR), some degree of automation has appeared in music genre categorization. Among the available techniques, a Convolutional Neural Network (CNN) has been used to classify music into ten genres: disco, reggae, rock, pop, blues, country, jazz, classical, metal, and hip-hop. Digital signal processing techniques using the Fast Fourier Transform (FFT), the Short-Time Fourier Transform (STFT), and Mel-Frequency Cepstral Coefficients (MFCC) have been used to generate feature values, which are then fed into the classifier developed using CNN. The training and testing of the system were performed successfully, obtaining an accuracy of 71.35%, which is significant in MIR. The GTZAN music dataset, a popular Western music dataset prepared for music analysis, has been used during training and testing; hence, this system works well only with Western music files.
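The MFCC feature extraction step could look like the sketch below using librosa; the file name, clip duration, number of coefficients, and the simple mean/std summary are assumptions rather than the project's exact feature pipeline.

```python
# Sketch: extract MFCC features from an audio clip for the genre classifier.
import librosa
import numpy as np

y, sr = librosa.load("track.wav", duration=30)          # 30-second clip (GTZAN-style length)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # 13 coefficients per frame

# A compact feature vector: per-coefficient mean and standard deviation over time.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(mfcc.shape, features.shape)   # e.g. (13, ~1292) and (26,)
```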
DeepCoder solves the simplest competitive-programming-style problems from input-output examples using deep learning. It uses the principle of Learning Inductive Program Synthesis to induce programs that are consistent with given input-output examples. It uses a neural network to predict the probability that particular program constructs appear in the program that generated the outputs from the inputs, and uses these predictions to augment an enumerative search, making the process faster. The results show an order-of-magnitude speedup over non-augmented approaches. This project can solve competitive-programming problems of the simplest level; however, since it is limited by its DSL, it cannot yet be applied to complex problems that require more sophisticated approaches.
Plant leaf diseases are a major challenge in the agricultural sector, but their rapid identification is difficult because of a lack of necessary infrastructure. Faster and more accurate prediction of leaf diseases could help with early treatment and reduce economic losses for farmers. Modern advances in deep learning have increased the performance and accuracy of object detection and recognition systems. This disease detection system locates the infected part of a leaf, predicts the disease present, and also displays a remedy that can be used to cure the plant and provide proper care. In this product, Convolutional Neural Network (CNN) models are used to perform plant disease detection and diagnosis from simple images of healthy and infected leaves through deep learning. A CNN is a deep neural network originally designed for image analysis. A CNN always contains two basic operations, namely convolution and pooling. The convolution operation, using multiple filters, extracts features (feature maps) from the data set while preserving their spatial information. The pooling operation, also called subsampling, is used to reduce the dimensionality of the feature maps produced by convolution. Training of the models was performed using an open dataset of more than 54,000 images of plants from 14 crop species. The trained model achieves an accuracy of 68.8% on a held-out test set. The proposed product can effectively identify different plant leaf diseases and can be used as an advisory or early-warning tool for better cultivation of plants.
Transcribing speech is expected to become a crucial capability for the upcoming IT era. Be it presentations, broadcast news, or even class lectures, the need for transcription is rising. Even though speech is the most natural form of communication, it is not easy to process. If recordings are simply left as mere audio signals, a deeper understanding of the recorded data will not be gained; this audio data can be turned into much more meaningful information through summarization. Today, different methods of automatic summarization are being researched and studied. These methods fall into two broad divisions: extractive and abstractive summarization. Abstractive summarization is still being studied and does not yet yield good results on complex datasets. For proper handling of the data and effective extractive summarization of the input, a Recorded Speech-to-Text Summarizer using NLP is proposed. This system utilizes the TextRank algorithm, an extension of the PageRank algorithm, to generate summaries of the processed input. The outputs generated by the system were compared against two categories of reference summaries: the first were summaries devised by hand-picking lines from the input, and the second were summaries generated by a basic NLP processor whose main criterion for grading sentences was the frequency distribution of keywords. For this, a group of participants was brought in and a set of files was fed to the system as input. The comparison suggests that the system indeed works as intended.
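The TextRank step can be sketched as below: sentences become nodes, pairwise sentence similarities become edge weights, and PageRank scores pick the top sentences. The similarity measure (TF-IDF cosine), summary length, and toy transcript are assumptions, not the project's exact configuration.

```python
# Sketch of extractive TextRank: PageRank over a sentence-similarity graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, top_n=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    similarity = cosine_similarity(tfidf)          # pairwise sentence similarity matrix
    graph = nx.from_numpy_array(similarity)        # sentences are nodes, similarities are edge weights
    scores = nx.pagerank(graph)                    # PageRank score per sentence
    ranked = sorted(range(len(sentences)), key=scores.get, reverse=True)[:top_n]
    return [sentences[i] for i in sorted(ranked)]  # keep original order in the summary

transcript = ["The meeting opened with a review of last quarter.",
              "Revenue grew by ten percent compared to the previous quarter.",
              "The team discussed lunch options at length.",
              "Growth was driven mainly by the new subscription product."]
print(textrank_summary(transcript))
```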
Object recognition is one of the emerging technologies in computer science. This technology uses different machine learning algorithms to classify images into their distinct categories. Garbage classification into different waste classes using machine learning and computer vision also falls under the domain of object recognition and classification. This project is a computer vision approach to classifying garbage into recycling categories, which could ultimately automate the sorting process and speed up the pace of recycling. The accumulation of solid waste in urban areas is becoming a great concern; it results in environmental pollution and may be hazardous to human health if not properly managed. It is important to have an advanced waste management system to manage a variety of waste materials. One of the most important steps in waste management is the separation of waste into its different components, a process normally done manually by hand. To simplify this process, the project takes images of a single piece of garbage and classifies it into one of six classes: glass, paper, metal, plastic, cardboard, and trash. These six classes account for over 99 percent of all recyclable material. Using advanced machine learning algorithms (Support Vector Machine and Convolutional Neural Network) and a freely available dataset, this project develops an intelligent model that automatically classifies waste material, with a best accuracy of 83% (tested on the dataset). The project aims to make the waste separation process faster, more intelligent, and more efficient, with minimal human involvement.
Survival video games are a subgenre of action video games set in hostile, intense, open-world environments. Despite gaining popularity since the early 80s, recent survival games have drifted away from challenging gameplay with approachable control mechanics and have instead kept making blatant attempts at cash-grabs over compelling storylines. This project is therefore carried out to develop a 3D game with an open-world level design that provides an immersive experience for players, along with gameplay that is simple and easy to get used to. The open world of the game is brought to life in the Unity game engine. Unity is a game engine that designers can use to develop video games, visualized constructions, and real-time 3D animations. Furthermore, it is a cross-platform engine that supports multiple operating systems like Windows, Linux, Mac, iOS, Android, etc. It provides an interface for controlling game assets through scripting, supplies a standard set of libraries for game development, and allows scripting in the C# language. The project aims to develop a 3D survival game from the first-person perspective that focuses on implementing rigid-body physics, movement mechanics, lighting, etc. The project demonstrates the basic features of the FPS genre of games and the process of 3D game development with the Unity game engine. In addition, all the assets and scripts involved in this project are flexible for future updates and development.
Sign language is one of the most important and natural forms of communication among deaf people. However, it is not a familiar language for hearing people, who mostly communicate in spoken languages such as English or Nepali, and interpreters are very difficult to find for every deaf person. A hand sign recognition system identifies hand gestures and their meanings from frames of a video source. It can be built through multiple methods, but one of the best ways is to compare hand postures and gestures against a database of hand images labeled with their classes. This project involved extracting hand features, such as the edges of the hand and fingers and thresholded regions, from a given frame using image processing techniques within a 128 x 128 px Region Of Interest (ROI) where the hand must be placed, along with grayscale conversion. Recognition of the class, i.e., the alphabet of the hand sign, was done using machine learning algorithms such as a Convolutional Neural Network composed of several layers. After the hand is filtered within the ROI by Gaussian thresholding, the CNN classifier processes it through its convolutional, pooling, fully connected, and final output layers and proposes the most suitable class for the hand gesture within the project's interface. The project achieved 70% accuracy with the two-layer CNN model. For hand signs that appear similar, separate classifiers were made for those alphabets, which helped the project achieve 80% accuracy overall.
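The preprocessing described above, cropping a 128 x 128 ROI, converting it to grayscale, and applying Gaussian thresholding before the frame reaches the CNN, might look like the following OpenCV sketch; the ROI position, blur kernel, and threshold parameters are assumptions.

```python
# Sketch: crop the hand ROI, convert to grayscale, and apply Gaussian thresholding.
import cv2

frame = cv2.imread("frame.jpg")                      # one frame from the video source (assumed file)
x, y = 50, 50                                        # assumed top-left corner of the ROI
roi = frame[y:y + 128, x:x + 128]                    # 128 x 128 px Region Of Interest

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 11, 2)

cv2.imwrite("hand_input.png", thresh)                # thresholded image fed to the CNN classifier
```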
White blood cells in our bloodstream provide a glimpse into the state of our immune system and any potential risk we might be facing. White blood cell analysis is generally performed when evaluating hematic pathologies such as acquired immune deficiency syndrome (AIDS), blood cancer (leukemia), and other related diseases [1]. In particular, a dramatic change in white blood cell count relative to baseline is generally a sign that the body is currently being affected by an antigen [2]. Generally, a variation in a specific type of white blood cell correlates with a specific type of antigen; for example, people with allergies generally see an increase in their eosinophil counts, as eosinophils are responsible for fighting allergens [3]. Therefore, counting and classifying white blood cells efficiently is very important. Unfortunately, the traditional manual method, in which people classify and count the white blood cells by hand, is time-consuming, inaccurate, and tedious [4]. Hence, a system is needed to classify and count the different types of white blood cells. In this software, blood cells are classified and counted using microscopic images of white blood cells. For classification and counting, image features are extracted using CNN and image processing algorithms; three different types of features were extracted, namely morphological, statistical, and textural. The overall results confirm that the software can produce accurate results in a short period of time, so it can be applied in hematological laboratories. The average accuracy was 85%.
We have often heard how our information and data get leaked due to faulty systems, and we face different sorts of problems because of such mistakes. A message, data, or any kind of information should maintain confidentiality between the sender and receiver. So I have come up with this idea for providing confidentiality between sender and receiver. My project works on maintaining and improving the secret sharing of messages and on providing copyright protection, so that no other person can gain access and the task can be performed smoothly. A message is embedded inside an image so that no other party suspects there is a message inside that image; for this reason, I have used an image to hide the message. Watermarking provides the copyright, so that no one other than the copyright holder can claim the content. This is a simple yet effective project, implemented in Java, which provides the features of hiding and extracting messages between sender and receiver, along with watermarking. My project, Steganography and Watermarking using the LSB algorithm, uses the LSB algorithm to hide the message. LSB is a technique in which the least significant bit of each image pixel is replaced with data bits.
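The LSB idea can be illustrated in a few lines, although the project itself is implemented in Java; the Python sketch below, using Pillow with an assumed cover image file, writes each message bit into the least significant bit of successive pixel channels.

```python
# Sketch of LSB embedding: write each message bit into the lowest bit of a pixel channel.
from PIL import Image

def embed(cover_path, message, out_path):
    img = Image.open(cover_path).convert("RGB")
    # Message bytes as a bit string, ended with a null byte so extraction knows where to stop.
    bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8")) + "00000000"
    flat = [channel for pixel in img.getdata() for channel in pixel]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)           # replace the least significant bit
    stego = Image.new("RGB", img.size)
    stego.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    stego.save(out_path)

embed("cover.png", "secret message", "stego.png")     # assumed cover image file
```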
The pet tracking system is an application that helps owners monitor and track their pet's location. Monitoring pets is a constant issue for pet owners who work away from home. Using the pet tracking application, however, owners can easily monitor and track their pet's location from their workplace or anywhere else. The main aim of this project is therefore to build a system that enables owners to monitor their pets when they are out of view. An investigation of the current systems was made to examine the issues that should be addressed to improve the user experience, including removing restrictions on geofence creation. This paper reflects on system design methods and concerns, with a detailed investigation of the strengths of GPS and GIS technologies in pet tracking systems, and proposes a GPS/GIS-based solution relevant to pet tracking market monitoring. The project also illustrates the development of a Mobile Geographic Information System Project (MGISP). Finally, different kinds of testing, such as UI (user interface) testing, user acceptance testing, and unit testing, were done to check whether the system satisfies users and whether its functionality works properly. The results obtained from testing show that the functionalities of the system work properly without error and that users are satisfied with it. This report contains a detailed discussion of the functional components and system architecture of the product. The pet tracking system is intended to provide the real-time location of a pet while it is within network connection range, enabling precise monitoring. As soon as the pet leaves the coverage area, a notification is sent to the user's phone, eliminating the chances of the pet going missing. The pet's past location history can also be accessed by date, which helps the user learn trends and activity patterns and train the pet better. The system is developed to meet user requirements and is designed to fill the void in current systems through effective tracking of pets.
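The geofence check behind the notification feature can be sketched with the haversine distance: if the pet's reported position lies farther from the geofence centre than the configured radius, an alert is triggered. The coordinates and radius below are illustrative values, not the application's actual configuration.

```python
# Sketch: haversine distance decides whether the pet has left a circular geofence.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in metres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def outside_geofence(pet_pos, fence_center, radius_m):
    return haversine_m(*pet_pos, *fence_center) > radius_m

home = (27.7172, 85.3240)            # illustrative geofence centre (Kathmandu)
pet = (27.7195, 85.3275)             # latest GPS fix from the pet's tracker
if outside_geofence(pet, home, radius_m=200):
    print("Alert: pet has left the geofence")   # would trigger a push notification
```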
In the past few years, the problem of automatically generating descriptive sentences for images has garnered rising interest in natural language processing and computer vision research. Image captioning is a fundamental task that requires semantic understanding of images and the ability to generate description sentences with proper and correct structure. In this study, we propose a hybrid system employing multilayer Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models to generate vocabulary describing the images, and a Long Short-Term Memory (LSTM) network to structure meaningful sentences from the generated keywords. We showcase the efficiency of our proposed model on the Flickr8K and Flickr30K datasets and show that it gives superior results. We discuss the foundations of the techniques in order to analyze their performance, strengths, and limitations. We also discuss the datasets and evaluation metrics popularly used in deep learning-based automatic image captioning.