Optical character recognition converts a printed document into ASCII characters. Its applications include document processing systems, banking, automation and devices for the blind. (The abstract and the remainder of the introduction are truncated in the source; the surviving fragments mention back-propagation, noisy data and the factors involved in optimal selection.)

Finally, an input vector that contains 64 (horizontal + vertical) unique features of the character is evaluated. A histogram is the distribution of the pixel intensity values of an image or a portion of an image; it indicates the overall brightness and contrast of the image. Histogram techniques are used for the automatic extraction of lines, words and characters in sequence.
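As a rough illustration of this step, the sketch below (not from the paper; the function name, the binary-image assumption and the scaling to [0, 1] are assumptions) computes the horizontal and vertical projection histograms of a 32 x 32 character image and concatenates them into the 64-element feature vector described above.

    import numpy as np

    def projection_features(char_img):
        """Horizontal and vertical projection histograms of a 32 x 32
        binary character image, concatenated into a 64-element vector."""
        img = np.asarray(char_img, dtype=float)
        assert img.shape == (32, 32)
        v_h = img.sum(axis=1)                  # row sums: horizontal projection (32 values)
        v_v = img.sum(axis=0)                  # column sums: vertical projection (32 values)
        features = np.concatenate([v_h, v_v])  # 64 input features
        return features / 32.0                 # scale to [0, 1] for the network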

The digitised characters are also smoothed. Moreover, the system must be able to handle touching characters, proportional spacing, variable line spacing and changes of font style in the scanned text, in addition to the problem of multiple fonts.

Figure 1. System Block Diagram: image acquisition, image pre-processing, input features & targets, training, results.

The erosion and dilation operations make an object smaller and larger, respectively. Erosion makes an object smaller by removing, or eroding away, the pixels at its edges. Dilation makes an object larger by adding pixels around its edges. The dilation technique is used for extracting a word from the original (gray-scale) image: the image is dilated so that the characters in a word become thicker until they join together. The erosion technique is then used for extracting each character from a word.
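A hedged sketch of this segmentation idea using SciPy's morphological operations is given below. The structuring elements, the iteration count and the assumption that the page has already been thresholded to a binary image are all illustrative choices, not the paper's exact procedure.

    from scipy import ndimage

    def word_boxes(binary_page, iterations=3):
        """Dilate until the characters in each word join into one blob,
        then return a bounding box (pair of slices) per blob."""
        dilated = ndimage.binary_dilation(binary_page, iterations=iterations)
        labelled, _ = ndimage.label(dilated)
        return ndimage.find_objects(labelled)

    def character_boxes(binary_word):
        """Erode so that touching characters separate, then return a
        bounding box per remaining blob (candidate character)."""
        eroded = ndimage.binary_erosion(binary_word)
        labelled, _ = ndimage.label(eroded)
        return ndimage.find_objects(labelled)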


2.2. Neural Network Architecture

The architecture of a neural network determines how it transforms its input into an output. This transformation can be viewed as a computation. We have implemented a multi-layer feed-forward neural network with one hidden layer, as shown in figure 3.
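To make this transformation concrete, here is a minimal sketch of the forward pass of a one-hidden-layer feed-forward network. The sigmoid activation and the bias terms are assumptions; the recovered text does not state which activation function the authors used.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, w_h, b_h, w_o, b_o):
        """Forward pass of a feed-forward network with one hidden layer:
        input -> hidden -> output."""
        h = sigmoid(x @ w_h + b_h)   # hidden-layer activations
        y = sigmoid(h @ w_o + b_o)   # output-layer activations
        return h, y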

2.1. Feature Extraction

Feature extraction is the process of getting information about an object or a group of objects in order to facilitate classification. This is an important part of our system. The character from the scanned image is normalised from 60 x 60 pixels to 32 x 32 pixels, as in figure 2.
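The recovered text does not say how the 60 x 60 image is reduced to 32 x 32; a simple nearest-neighbour resampling, shown below as an assumed stand-in, is one way to perform this normalisation.

    import numpy as np

    def normalise_character(char_img, size=32):
        """Rescale a cropped character image (e.g. 60 x 60) to size x size
        by nearest-neighbour sampling."""
        img = np.asarray(char_img)
        rows = (np.arange(size) * img.shape[0] / size).astype(int)
        cols = (np.arange(size) * img.shape[1] / size).astype(int)
        return img[np.ix_(rows, cols)]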

Figure 2. Character normalisation from 60 x 60 pixels to 32 x 32 pixels.

Figure 3. The Network Model: input nodes, hidden layer and output nodes joined by weight connections.

The topology of the network is 64 input nodes, 64 hidden nodes and 62 output nodes (64-64-62). Since the character image is normalised to 32 x 32 pixels, the horizontal and vertical projection vectors (Vh and Vv, respectively) are concatenated to give the 64-element input vector of the neural network.
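The 62 output nodes presumably correspond to the character classes [A..Z], [a..z] and [0..9] (26 + 26 + 10 = 62); the exact class set and ordering are not recoverable from the text, so the mapping below is an assumption. It sketches how targets could be encoded for training and how an output vector could be decoded back to a character.

    import string

    # Assumed ordering of the 62 output nodes: A..Z, a..z, 0..9.
    CLASSES = list(string.ascii_uppercase + string.ascii_lowercase + string.digits)
    assert len(CLASSES) == 62

    def one_hot_target(char):
        """Target vector for training: 1.0 at the node for `char`, 0.0 elsewhere."""
        target = [0.0] * len(CLASSES)
        target[CLASSES.index(char)] = 1.0
        return target

    def decode_output(output):
        """Classify by taking the output node with the highest activation."""
        best = max(range(len(CLASSES)), key=lambda i: output[i])
        return CLASSES[best]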


As a rule, the number of hidden-layer nodes should … The network is trained using the back-propagation algorithm; a description of back-propagation can be found in [5].
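As a minimal sketch of batch-mode back-propagation for such a network (consistent with the forward-pass sketch given earlier), the code below uses sigmoid units and a squared-error loss; the learning rate, epoch count and loss function are assumptions, since the recovered text reports only batch errors and training times.

    import numpy as np

    def train_batch(x, t, w_h, b_h, w_o, b_o, lr=0.1, epochs=1000):
        """Batch-mode back-propagation for a one-hidden-layer network.
        x: (N, 64) input features, t: (N, 62) one-hot targets."""
        for _ in range(epochs):
            # forward pass over the whole training set
            h = 1.0 / (1.0 + np.exp(-(x @ w_h + b_h)))
            y = 1.0 / (1.0 + np.exp(-(h @ w_o + b_o)))
            # error terms (deltas) for output and hidden layers
            d_out = (y - t) * y * (1.0 - y)
            d_hid = (d_out @ w_o.T) * h * (1.0 - h)
            # batch weight and bias updates
            w_o -= lr * h.T @ d_out
            b_o -= lr * d_out.sum(axis=0)
            w_h -= lr * x.T @ d_hid
            b_h -= lr * d_hid.sum(axis=0)
        return w_h, b_h, w_o, b_o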

The system was initially trained with the characters of the Times Roman font ([A..Z], [a..z], …). Each character was captured once and its features were stored in an array. These feature vectors were fed to the back-propagation neural network and training was performed. After the training, testing was carried out with the training set and with a testing set of characters. The table below shows the results of the system.

Experiment 1:
Training font: Times New Roman
Testing font: Times New Roman
Network configuration: 64-64-62
Total number of training characters: … ([A..Z], [a..z])
Total number of testing characters: …
Note: P=>F means P is misclassified as F.
Training mode: batch.

Results (recoverable entries from the garbled table): training time (hours): 3.88961; batch errors: 0.08526, 0.04008, 0.03282, 0.01281; misclassifications: h=>O, u=>n, U=>H; other entries: 13, 14, 14.

[3]. Hussain, B. and Kabuka, M. R., "A novel feature recognition neural network and its application to character recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 1, 1994, pp. 98-106.

[4]. Avi-Itzhak, H. I., Diep, T. A. and Garland, H., "High accuracy optical character recognition using neural networks with centroid dithering", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 2, 1995, pp. 218-224.

[5]. Rumelhart, D. E., Hinton, G. E. and Williams, R. J., "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing, Vol. 1, MIT Press, Cambridge, Chapter 8, 1986, pp. 318-362.

[6]. Jones, W. P. and Hoskins, J., "Back Propagation: A generalised learning rule", Byte, 12, 1987, pp. 155-158.

