Name:
Goda_P_Project_report.pdf
Size:
1.925 MB
Format:
PDF
Description:
Final Thesis Submission
Author
Goda, Piyush Jain
Keyword
image text detection and recognition
computer vision
Convolutional Neural Network (CNN)
Object Detection
YOLO (You Only Look Once)
Single Shot Detector (SSD)
MobileNet
Python
Date Published
2020-12
Abstract
Recently, a variety of real-world applications have triggered a huge demand for techniques that can extract textual information from images and videos. As a result, image text detection and recognition have become active research topics in computer vision. The current trend in object detection and localization is to learn predictions with high-capacity deep neural networks trained on very large amounts of annotated data, using substantial processing power. In this project, we build an approach to text detection based on an object detection technique: we treat text as objects and use the object detection method YOLO (You Only Look Once) to detect text in images. YOLO frames object detection as a regression problem from the full image to spatially separated bounding boxes and associated class probabilities. It is a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation; because the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. The pre-trained MobileNet deep learning architecture was used and modified in several ways to find the best-performing model, with the goal of achieving high accuracy in text spotting. Experiments on the standard ICDAR 2015 dataset demonstrate that the proposed algorithm significantly outperforms existing methods in terms of both accuracy and efficiency.
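To make the architecture described above concrete, here is a minimal sketch, assuming TensorFlow/Keras, of a YOLO-style single-class text detector built on a pre-trained MobileNet backbone. The grid size, number of boxes per cell, input resolution, and head layers are illustrative assumptions, not the thesis's exact configuration; the sigmoid on all outputs is a simplification of the original YOLO parameterization.

```python
# A minimal sketch (not the author's exact model) of a YOLO-style detection
# head on a MobileNet backbone for single-class ("text") detection.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNet

GRID = 7    # assumed S x S output grid
BOXES = 2   # assumed boxes predicted per grid cell
# Each box: (x, y, w, h, objectness). With a single "text" class, the
# class probability can be folded into objectness, so 5 values per box.
PREDS_PER_CELL = BOXES * 5

def build_text_detector(input_size=224):
    # Pre-trained MobileNet as the feature extractor (classifier top removed).
    backbone = MobileNet(include_top=False, weights="imagenet",
                         input_shape=(input_size, input_size, 3))
    x = backbone.output  # shape (7, 7, 1024) for a 224x224 input
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    # 1x1 conv maps features to per-cell box predictions; sigmoid keeps
    # coordinates and objectness in [0, 1] relative to the cell/image
    # (a simplification of YOLO's original parameterization).
    x = layers.Conv2D(PREDS_PER_CELL, 1, activation="sigmoid")(x)
    out = layers.Reshape((GRID, GRID, BOXES, 5))(x)
    return Model(backbone.input, out)

model = build_text_detector()
model.summary()
```

Because the whole pipeline is one network mapping a full image to a (GRID, GRID, BOXES, 5) tensor in a single forward pass, a regression-style detection loss can be applied directly to this output and the entire model trained end-to-end, which is the property the abstract highlights.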