Graphical Models Reference On-Line Manual

Copyright © 1982 - 1999 by ERDAS, Inc. All rights reserved. Printed in the United States of America. ERDAS Proprietary - Delivered under license agreement. Copying and disclosure prohibited without express written permission from ERDAS, Inc.

ERDAS, Inc.
2801 Buford Highway, N.E.
Atlanta, Georgia 30329-2137 USA
Phone: 404/248-9000
Fax: 404/248-9400
User Support: 404/248-9777

Warning

All information in this document, as well as the software to which it pertains, is proprietary material of ERDAS, Inc., and is subject to an ERDAS license and non-disclosure agreement. Neither the software nor the documentation may be reproduced in any manner without the prior written permission of ERDAS, Inc. Specifications are subject to change without notice.

Trademarks

ERDAS is a trade name of ERDAS, Inc. ERDAS and ERDAS IMAGINE are registered trademarks of ERDAS, Inc. Model Maker, CellArray, ERDAS Field Guide, and ERDAS Tour Guides are trademarks of ERDAS, Inc. Other brands and product names are trademarks of their respective owners.

Contents

Graphical Models Reference Guide - Introduction
Graphical Models Reference Guide - Bibliography
AUTO_IARReflectance
AUTO_LogResiduals
Aspect
Badlines
Clump
Create File
Crisp - Gray Scale
Crisp - Min/Max
Decorrelation Stretch
Dehaze High
Dehaze Low
Image Difference
Eliminate
Focal Analysis
Functions
Histogram Equalization
Histogram Match
IARR
IHS to RGB
Index
Inverse
Inverse Principal Components
LUT Stretch
Layer Stack
Level Slice
Log Residuals
Mask
Matrix
Mean Per Pixel
Natural Color
Neighborhood
Normalize
Operators
Overlay
Prewitt Filter
Principal Components
RGB to IHS
Recode
Rescale3D
Rescale Min-Max
Rescale - Standard Deviation
Resolution Merge - Brovey Transform
Resolution Merge - Multiplicative
Resolution Merge - Principal Components
Reverse
Search
Sieve
Signal To Noise
Slope - Degrees
Slope - Percent
Non-directional Edge
TM Dehaze
TM Destripe
Tasseled Cap - TM
Topographic Normalization
Vector To Raster
Vegetation Indexes - NDVI

Each model entry includes Access and Customization (Inputs and Outputs) sections, plus an Algorithm section where applicable; Aspect, Slope - Degrees, and Slope - Percent also include an Example.

Graphical Models Reference Guide - Introduction

About This Manual

The Graphical Models Reference Guide is a catalog of the graphical models used to build the Image Interpreter functions. Using this manual, you can see how these models were built and how they can be changed to suit other applications. Each graphical model is described in detail.

These models can be displayed in ERDAS IMAGINE through Model Maker (Spatial Modeler). Models can be edited, converted to script form, run, and saved in libraries. The models may also be accessed through the Image Interpreter menus.

Introduction

This document describes the standard models that are supplied with the ERDAS IMAGINE Spatial Modeler. These models can be accessed through one or both of the following:

♦ Spatial Modeler - each model is stored as a .gmd file in the <IMAGINE_HOME>/etc/models directory. (<IMAGINE_HOME> is the directory where IMAGINE resides.) This file can be edited with Model Maker, or you can use the Script Librarian Edit tool to edit the script file (.mdl) with the Spatial Modeler Language.

♦ Image Interpreter - most of these models appear as functions in the Image Interpreter. They can be applied at the touch of a button, or viewed and edited from the Image Interpreter dialog boxes.

☞ The models stored in <IMAGINE_HOME>/etc/models are permanent (.pmdl) and cannot be written over. If you make any changes to one of these files, use the File | Save As option and give the file a new name.

For each model, this document shows the following:

♦ suggested applications and modifications
♦ how the model is accessed
♦ step-by-step description of what the model does
♦ algorithms, where appropriate, from which the model was derived
♦ source material

➲ For information on the scripts used to write models, see the Spatial Modeler Language manual in On-Line Help. Also see the “Enhancement” chapter in the ERDAS Field Guide for more information.


Graphical Models Reference Guide - Bibliography

Colby, J. D. 1991. “Topographic Normalization in Rugged Terrain.” Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 5: 531-537.

Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California: Conrac Corp.

Crippen, Robert E. 1989. “A Simple Spatial Filtering Routine for the Cosmetic Removal of Scan-Line Noise from Landsat TM P-Tape Imagery.” Photogrammetric Engineering & Remote Sensing, Vol. 55, No. 3: 327-331.

Crist, E. P., et al. 1986. “Vegetation and Soils Information Contained in Transformed Thematic Mapper Data.” Proceedings of IGARSS ’86 Symposium, ESA Publications Division, ESA SP-254.

Daily, Mike. 1983. “Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar Imagery.” Photogrammetric Engineering & Remote Sensing, Vol. 49, No. 3: 349-355.

ERDAS. 1982-1994. ERDAS Field Guide. 3rd edition. Atlanta, Georgia: ERDAS, Inc.

ERDAS. 1991. ERDAS Ver. 7.5 Terrain Analysis Modules. Atlanta, Georgia: ERDAS, Inc.

ERDAS. 1991. ERDAS Ver. 7.5 Core Module. Atlanta, Georgia: ERDAS, Inc.

Faust, Nickolas L. 1989. “Image Enhancement.” Volume 20, Supplement 5 of Encyclopedia of Computer Science and Technology, edited by Allen Kent and James G. Williams. New York: Marcel Dekker, Inc.

Gillespie, Alan R., et al. 1986. “Color Enhancement of Highly Correlated Images. I. Decorrelation and HSI Contrast Stretches.” Remote Sensing of Environment, Vol. 29: 209-235.

Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing Company.

Hodgson, M., and B. Shelley. 1994. “Removing the Topographic Effect in Remotely Sensed Imagery.” The ERDAS Monitor, Vol. 6, No. 1: 4-6.

Jensen, John R. 1986. Introductory Digital Image Processing. Englewood Cliffs, New Jersey: Prentice-Hall.

Kruse, Fred A. 1988. “Use of Airborne Imaging Spectrometer Data to Map Minerals Associated with Hydrothermally Altered Rocks in the Northern Grapevine Mountains, Nevada.” Remote Sensing of the Environment, Vol. 24: 31-51.

Minnaert, J. L., and G. Szeicz. 1961. “The Reciprocity Principle in Lunar Photometry.” Astrophysics Journal, Vol. 93: 403-410.

Pratt, William K. 1991. Digital Image Processing. New York: John Wiley & Sons, Inc.

Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H. Freeman and Co.

Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.

Smith, J., T. Lin, and K. Ranson. 1980. “The Lambertian Assumption and Landsat Data.” Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 9: 1183-1189.

Wolberg, George. 1990. Digital Image Warping. IEEE Computer Society Press Monograph.

AUTO_IARReflectance

This model combines three commonly used functions into a single process. First, the raw data are normalized using the same algorithm that is accessible through the Normalize model. Next, the internal average relative reflectance is computed using the same routine used by the Internal Average Relative Reflectance (IARR) model. The final step is to rescale the data in three dimensions using the same routine used by the Rescale3D model.

☞ Before running this model, the Origin for Tables preference in the Spatial Modeler category must be set to 0 (zero).

➲ For more information see the Preference Editor.

Access

Spatial Modeler: This model is found in the file /etc/models/AUTO_IARReflectance.gmd.

Image Interpreter: Select HyperSpectral Tools... | Automatic Rel. Reflectance.... To view or edit the model, click the View... button in the Automatic Internal Average Relative Reflectance dialog.

Algorithm

See the three component functions: Normalize, IARR, and Rescale3D.

Customization

Inputs: n1_hyperspectral
Outputs: n32_autoIARR


AUTO_LogResiduals

This model combines three commonly used functions into a single process. First, the raw data are normalized using the same algorithm that is accessible through the Normalize model. Next, the logarithmic residuals of the spectra are computed using the same routine used by the Log Residuals model. The final step is to rescale the data in three dimensions using the same routine used by the Rescale3D model.

➲ You may wish to set the Edge Extension preference in the Spatial Modeler category to “Reflect about Edge” before running this model.

Access

Spatial Modeler: This model is found in the file /etc/models/AUTO_LogResiduals.gmd.

Image Interpreter: Select HyperSpectral Tools... | Automatic Log Residuals.... To view or edit the model, click the View... button in the Automatic Log Residuals dialog.

Algorithm

See the three component functions: Normalize, Log Residuals, and Rescale3D.

Customization

Inputs: n1_hyperspectral
Outputs: n32_autologres


Aspect

Aspect files are used in many of the same applications as slope files. In transportation planning, for example, north-facing slopes are often avoided. Especially in northern climates, these would be exposed to the most severe weather and would hold snow and ice the longest. Using the Aspect model, you can recode all pixels with north-facing aspects as undesirable for road building.

Access

Spatial Modeler: This model is found in the file /etc/models/Aspect.gmd.

Image Interpreter: Select Topographic Analysis... | Aspect.... To view or edit the model, click the View... button in the Surface Aspect dialog.

Algorithm

Source: ERDAS

As with slope calculations, aspect uses a 3 by 3 window centered on each pixel to calculate the prevailing direction of its neighbors. For pixel x, y, the average changes in elevation in both the x and y directions are calculated first. The average slope is then the average change in elevation in the x direction divided by the average change in elevation in the y direction. The aspect is the arc tangent of the average slope.

The pixels in the 3 by 3 window are labeled as follows, with x increasing to the right and y increasing downward:

a  b  c
d  e  f
g  h  i

∆x1 = c - a    ∆y1 = a - g
∆x2 = f - d    ∆y2 = b - h
∆x3 = i - g    ∆y3 = c - i


where:

a ... i = elevation values of pixels in a 3 by 3 window, as shown above.

The average change in elevation in each direction is calculated by:

∆x = (∆x1 + ∆x2 + ∆x3) / 3
∆y = (∆y1 + ∆y2 + ∆y3) / 3

The aspect is calculated by taking the arc tangent of the average slope:

aspect = tan⁻¹(∆x / ∆y)

Customization

Inputs: n1_Indem
Outputs: n2_aspect_U16

Example

The elevation of each pixel neighboring the pixel of interest (the center of the window) is given below. Note that the elevation of the pixel of interest itself is not part of the calculation.

10  20  25
22   -  25
20  24  18

The average changes in elevation in the x and y directions are calculated as follows:


∆x1 = 25 - 10 = 15
∆x2 = 25 - 22 = 3
∆x3 = 18 - 20 = -2
∆x = (15 + 3 - 2) / 3 = 5.33

∆y1 = 10 - 20 = -10
∆y2 = 20 - 24 = -4
∆y3 = 25 - 18 = 7
∆y = (-10 - 4 + 7) / 3 = -2.33

If ∆x = 0 and ∆y = 0, then the aspect is flat (coded to 361 degrees). Otherwise, the aspect is calculated as:

aspect = tan⁻¹(5.33 / -2.33) = -1.16 radians

To convert radians to degrees, multiply by 180/π; -1.16 radians = -66.4 degrees. Negative angles are converted to positive angles. In this example, -66.4 degrees = 293.6 degrees. The aspect of the area surrounding the pixel of interest is 293.6 degrees (approximately west-northwest).
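For illustration, the following minimal NumPy sketch (not ERDAS code; the variable names are ours) reproduces this worked example using the averaging and arc tangent formulas above.

import numpy as np

# Neighbor elevations from the example window:
#   a b c     10 20 25
#   d e f  =  22  - 25   (the center value e is not used)
#   g h i     20 24 18
a, b, c = 10.0, 20.0, 25.0
d, f = 22.0, 25.0
g, h, i = 20.0, 24.0, 18.0

dx = ((c - a) + (f - d) + (i - g)) / 3.0   # average change in x: 5.33
dy = ((a - g) + (b - h) + (c - i)) / 3.0   # average change in y: -2.33

if dx == 0 and dy == 0:
    aspect_deg = 361.0                     # flat areas are coded to 361 degrees
else:
    # (the dy == 0, dx != 0 case is not specified by the manual)
    aspect_deg = np.degrees(np.arctan(dx / dy))
    if aspect_deg < 0:                     # negative angles converted to positive
        aspect_deg += 360.0

print(round(aspect_deg, 1))                # 293.6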


Badlines

This model uses an algorithm that replaces image data lines, either rows or columns, with values determined from adjacent data lines.

Access

Spatial Modeler: This model is found in the file /etc/models/Badlines.gmd.

Image Interpreter: Select Utilities... | Replace Bad Lines.... To view or edit the model, click the View... button in the Replace Bad Lines dialog.

Algorithm

Source: ERDAS

1. Define the lines, rows or columns, to be replaced.
2. Define the replacement technique.
3. Process the image line by line, using the defined technique to replace lines defined as bad.
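As a minimal sketch of this idea (not the ERDAS implementation, which also supports columns and other replacement techniques; the function name is illustrative), the following NumPy code replaces each listed bad row with the average of the nearest good rows:

import numpy as np

def replace_bad_rows(img, bad_rows):
    """Replace each bad row with the mean of the nearest good rows."""
    out = img.astype(np.float64).copy()
    for r in sorted(bad_rows):
        above, below = r - 1, r + 1
        while above in bad_rows:            # walk past runs of bad lines
            above -= 1
        while below in bad_rows:
            below += 1
        if above < 0:                       # bad line at the top edge:
            out[r] = out[below]             # copy the single good neighbor
        elif below >= img.shape[0]:         # bad line at the bottom edge
            out[r] = out[above]
        else:
            out[r] = (out[above] + out[below]) / 2.0
    return out.astype(img.dtype)

img = np.arange(25, dtype=np.float64).reshape(5, 5)
img[2] = 0                                  # simulate a dropped scan line
print(replace_bad_rows(img, bad_rows={2}))  # row 2 rebuilt from rows 1 and 3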

Customization

Inputs: n1_badlines
Outputs: n5_badlines_fixed


Clump

Clump identifies clumps, which are contiguous groups of pixels in one GIS class. The clumped data are saved in a new .img file, which can be used as it is, or as input to the Sieve or Eliminate functions. Sieve eliminates clumps of specific sizes. In combination with Sieve, Clump can be used effectively for applications such as facilities siting. For example, Clump can identify areas of a specified soil type, and Sieve can screen out the areas that would be too small for the facility’s acreage requirements. When the clumped data are input to the Eliminate model, you can produce a classification utilizing a minimum mapping unit. You may specify which neighbors of a pixel will be considered contiguous. The two choices are 4 and 8:

4 neighbors: the pixels directly above, below, left, and right of the center pixel
8 neighbors: all eight surrounding pixels, including the diagonals

➲ See the Spatial Modeler Language manual in On-Line Help for more information.

Access

Spatial Modeler: This model is found in /etc/models/Clump.gmd.

Image Interpreter: Select GIS Analysis... | Clump.... To view or edit the model, click the View... button in the Clump dialog.

Algorithm

The Clump model is derived from this algorithm:

1. As the program processes the data, clumps of the non-zero class values are identified and numbered sequentially.
2. In the new .img file, this sequential clump number replaces the class value for each pixel.
3. All background (zero) pixels are assigned a value of 0.
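A minimal sketch of this algorithm follows (illustrative only, not the ERDAS implementation; it assumes SciPy's ndimage module in place of the Spatial Modeler CLUMP function, and the function name is ours):

import numpy as np
from scipy import ndimage

def clump(classes, neighbors=8):
    """Sequentially number contiguous groups of pixels within each class."""
    if neighbors == 4:
        structure = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
    else:
        structure = np.ones((3, 3), dtype=int)        # 8-neighbor contiguity
    out = np.zeros(classes.shape, dtype=int)
    next_id = 0
    for value in np.unique(classes):
        if value == 0:                                # background stays 0
            continue
        labels, n = ndimage.label(classes == value, structure=structure)
        out[labels > 0] = labels[labels > 0] + next_id
        next_id += n
    return out

landcover = np.array([[1, 0, 1, 1],
                      [1, 0, 0, 1],
                      [0, 2, 2, 0]])
# The two separate patches of class 1 become clumps 1 and 2; class 2 becomes clump 3.
print(clump(landcover, neighbors=4))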

Customization

Inputs: n1_Inlandc
Outputs: n3_Inclump


Create File

The Create File model allows you to create a single-valued file of specified dimensions.

Access

Spatial Modeler: This model is found in the file /etc/models/CreateFile.gmd.

Image Interpreter: Select Utilities... | Create File.... To view or edit the model, click the View... button in the Create File dialog.

Customization

Inputs: n2_Integer
Outputs: n1_newfile


Crisp - Gray Scale

This model attempts to sharpen an image by convolution with an inverted point spread function (PSF) kernel. The PSF is a measure of the blurring of the image due to characteristics of the sensor system itself. It may be defined as the image generated from a point source input. It is assumed that the PSF is rotationally and spatially invariant. With that assumption, a symmetrical kernel can be applied to the whole image (Wolberg 1990). The values in this kernel invert the PSF of the sensor, which has the effect of sharpening the image.

Access

Spatial Modeler: This model is found in the file /etc/models/Crispgreyscale.gmd.

Image Interpreter: Select Spatial Enhancement... | Crisp.... To view or edit the model, click the View... button in the Crisp dialog.

Algorithm

Source: ERDAS

The Crisp model is derived from this algorithm:

1. Select the input raster image.
2. Define the Spatial Modeler function, Convolve, using a 3 × 3 PSF kernel.
3. Rescale to an 8-bit image using:

DNout = (DNin - Min) / (Max - Min) × 255

where:

DNout = pixel value in the 8-bit data range
DNin = pixel value (floating point)
Min = minimum pixel value in the image
Max = maximum pixel value in the image

Note that in this model a 3 × 3 PSF kernel is used. This was determined both theoretically and empirically to be a very satisfactory choice. However, depending on the original input image resolution, detail sought, or image noisiness, two other kernels could be considered:

♦ 3 x 3 Summary kernel



♦ 5 x 5 Summary kernel

Since the purpose of this operation is to sharpen the image, it is doubtful that a kernel over 5 × 5 would be of any value. To try a different kernel, double-click in the kernel symbol and enter one of the other kernels suggested above.
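The convolve-and-rescale sequence can be sketched as follows (illustrative only, not the ERDAS model; the sharpening kernel shown is a generic stand-in, not the actual inverse PSF or Summary kernel, whose coefficients are defined inside the .gmd file):

import numpy as np
from scipy import ndimage

# Generic 3 x 3 sharpening kernel (stand-in; the coefficients sum to 1,
# so overall image brightness is preserved).
kernel = np.array([[-0.25, -0.50, -0.25],
                   [-0.50,  4.00, -0.50],
                   [-0.25, -0.50, -0.25]])

def crisp_grayscale(img):
    sharpened = ndimage.convolve(img.astype(np.float64), kernel, mode="nearest")
    lo, hi = sharpened.min(), sharpened.max()
    # Step 3: DNout = (DNin - Min) / (Max - Min) x 255
    return ((sharpened - lo) / (hi - lo) * 255.0).astype(np.uint8)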

Customization

Inputs: n1_panAtlanta
Outputs: n12_panCrisp


Crisp - Min/Max

The Crisp filter sharpens the overall scene luminance without distorting the thematic content of the image. This is a useful enhancement if the image is blurred due to atmospheric haze, rapid sensor motion, or a broad point spread function of the sensor.

This model is used to sharpen a multiband image. This is done by convolving the first principal component (PC-1) of the image with a Summary kernel. The convolution is done on PC-1 because this band correlates highly with overall scene intensity, while the other PCs contain the scene's inter-band variance. Thus, the thematic content of the image is minimally affected.

Access

Spatial Modeler: This model is found in the file /etc/models/Crisp_MinMax.gmd.

Image Interpreter: Select Spatial Enhancement... | Crisp.... Select a multiband image. To view or edit the model, click the View... button in the Crisp dialog.

Algorithm

The logic of the algorithm is that the first principal component (PC-1) of an image is assumed to contain the overall scene luminance. The other PCs represent intra-scene variance. Thus, you can sharpen only PC-1 and then reverse the principal components calculation to reconstruct the original image. Luminance is sharpened, but variance is retained. This algorithm requires a multiband image if the principal components calculation is to be meaningful.

As discussed under Crisp - Gray Scale, several kernels could be considered for the convolution of PC-1. For this model, a 3 × 3 Summary kernel was selected based on empirical use, mostly with Landsat TM images. Depending on your application, you may want to replace the convolution kernel (n13) with others as suggested under Crisp - Gray Scale.
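A minimal sketch of this sharpen-PC-1-and-invert logic follows (illustrative only, not the ERDAS model; the eigendecomposition details and function name are assumptions):

import numpy as np
from scipy import ndimage

def sharpen_pc1(img, kernel):
    """img: (rows, cols, bands) array; sharpen PC-1 only, then invert."""
    rows, cols, bands = img.shape
    flat = img.reshape(-1, bands).astype(np.float64)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]          # order so PC-1 comes first
    pcs = (flat - mean) @ vecs                      # forward PC transform
    pc_img = pcs.reshape(rows, cols, bands)
    pc_img[..., 0] = ndimage.convolve(pc_img[..., 0], kernel, mode="nearest")
    restored = pc_img.reshape(-1, bands) @ vecs.T + mean   # inverse transform
    return restored.reshape(rows, cols, bands)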

Customization

Inputs: n1_germtm
        n13_Summary (new kernels can be added at this point)
Outputs: n21_memory


Decorrelation Stretch

The purpose of a contrast stretch is to:

♦ alter the distribution of the image DN values within the 0 - 255 range of the display device
♦ utilize the full range of values in a linear fashion

The Decorrelation Stretch performs a stretch on the principal components of an image, not on the original image. A principal components transform converts a multiband image into a set of mutually orthogonal images portraying inter-band variance. Depending on the DN ranges and the variance of the individual input bands, these new images (PCs) will occupy only a portion of the 0 - 255 data range. Each PC is separately stretched to fully utilize the data range. The new stretched PC composite image is then retransformed to the original data space.

If desired, you may save either the original PCs (n7) or the stretched PCs (n17) as a permanent image file for viewing after the stretch. To do so, place the cursor into the Raster symbol and double-click. When the dialog box comes up, left-click on Temporary file and give the image a name.

NOTE: Storage of PCs as floating point (single precision) would be appropriate.

➲ See Principal Components for more information.

Access

From Spatial Modeler: This function is found in the file /etc/models/Decorrelation_Stretch.gmd.

From Image Interpreter: Select Spectral Enhancement... | Decorrelation Stretch.... To view or edit the model, click the View... button in the Decorrelation Stretch dialog.

Algorithm

Source: Sabins 1987 1. Calculate all principal components of input image. 2. Separately stretch each PC to fully utilize data range. 3. Convert back to original image axes.


Customization Inputs: n1_lanier Outputs: n21_memory

Dehaze High

When sunlight passes through an atmosphere containing haze (particulate matter), the resultant image is blurred by particle-induced scattering. The extent to which this happens to a particular image is called its point spread; mathematically, this is described by the point spread function. Point spreading can be modeled theoretically with a generic point spread function. This algorithm inverts the generic point spread function and implements the inverse as a convolution kernel, which is applied to the image via convolution. Low and High options are available, implemented as 3 × 3 and 5 × 5 kernels respectively.

Access From Spatial Modeler: This function is found in the file /etc/models/Dehaze_High.gmd. From Image Interpreter on the ERDAS IMAGINE main menu, select Radiometric Enhancement... | Haze Reduction.... Under Point Spread Type select High. To view or edit the model, click the View... button in the Haze Reduction dialog.

Algorithm Source: ERDAS Dehaze_High uses this 5 × 5 inverse point spread function kernel:

 0.257  –0.126  –0.213  –0.126   0.257
–0.126  –0.627   0.352  –0.627  –0.126
–0.213   0.352   2.928   0.352  –0.213
–0.126  –0.627   0.352  –0.627  –0.126
 0.257  –0.126  –0.213  –0.126   0.257
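For illustration only, here is a minimal NumPy/SciPy sketch of applying this kernel to a single band (the function name and the array-based workflow are assumptions of this sketch; the actual model applies the kernel with the Spatial Modeler convolution function):

```python
import numpy as np
from scipy.ndimage import convolve

# The 5 x 5 inverse point spread kernel shown above.
DEHAZE_HIGH_KERNEL = np.array([
    [ 0.257, -0.126, -0.213, -0.126,  0.257],
    [-0.126, -0.627,  0.352, -0.627, -0.126],
    [-0.213,  0.352,  2.928,  0.352, -0.213],
    [-0.126, -0.627,  0.352, -0.627, -0.126],
    [ 0.257, -0.126, -0.213, -0.126,  0.257],
])

def dehaze_high(band):
    """Convolve one image band with the inverse point spread kernel."""
    return convolve(band.astype(float), DEHAZE_HIGH_KERNEL, mode="reflect")
```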

Customization Inputs: n1_Klon_TM Outputs: n8_dehaze_high

Dehaze Low

When sunlight passes through an atmosphere containing haze (particulate matter), the resultant image is blurred by particle-induced scattering. The extent to which this happens to a particular image is called its point spread; mathematically, this is described by the point spread function. Point spreading can be modeled theoretically with a generic point spread function. This algorithm inverts the generic point spread function and implements the inverse as a convolution kernel, which is applied to the image via convolution. Low and High options are available, implemented as 3 × 3 and 5 × 5 kernels respectively.

Access From Spatial Modeler: This function is found in the file /etc/models/Dehaze_Low.gmd. From Image Interpreter on the ERDAS IMAGINE main menu, select Radiometric Enhancement... | Haze Reduction.... Under Point Spread Type select Low. To view or edit the model, click the View... button in the Haze Reduction dialog.

Algorithm Source: ERDAS Dehaze_Low uses this 3 × 3 inverse point spread function kernel:

–0.126  –0.213  –0.126
–0.627   0.352  –0.627
 0.352   2.928   0.352

Customization Inputs: n1_Klon_TM Outputs: n8_dehaze_low

Image Difference

Image Difference is used for change analysis with imagery that depicts the same area at different points in time. With Image Difference, you can highlight specific areas of change in whatever amount you choose. Two images are generated from this image-to-image comparison; one is a grayscale continuous image, and the other is a five-class thematic image. The first image generated from Image Difference is the Difference image. The Difference image is a grayscale image composed of single band continuous data. This image is the direct result of subtraction of the Before Image from the After Image. Since Image Difference calculates change in brightness values over time, the Difference image simply reflects that change using a grayscale image. Brighter areas have increased in reflectance. This may mean clearing of forested areas. Dark areas have decreased in reflectance. This may mean an area has become more vegetated, or the area was dry and is now wet. The Highlight Difference image divides the changes into five categories. The five categories are Decreased, Some Decrease, Unchanged, Some Increase, and Increased. The Decreased class represents areas of negative (darker) change greater than the threshold for change and is red in color. The Increased class shows areas of positive (brighter) change greater than the threshold and is green in color. Other areas of positive and negative change less than the thresholds and areas of no change are transparent. For your application, you may edit the colors to select any color desired for your study.

Access From Image Interpreter on the ERDAS IMAGINE main menu, select Utilities... | Change Detection.... To view or edit the model, click the View... button in the Change Detection dialog.

Algorithm Source: ERDAS Subtract two images on a pixel-by-pixel basis:
1. Subtract the Before Image from the After Image.
2. Convert the decrease percentage to a value.
3. Convert the increase percentage to a value.
4. If the difference is less than the decrease value, then assign the pixel to Class 1 (Decreased).
5. If the difference is greater than the increase value, then assign the pixel to Class 5 (Increased).
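A minimal Python sketch of this logic for one band follows. Taking the thresholds as percentages of the Before image's data range, and the middle class numbers (2 = Some Decrease, 3 = Unchanged, 4 = Some Increase), are assumptions made for illustration:

```python
import numpy as np

def image_difference(before, after, decrease_pct=10.0, increase_pct=10.0):
    """Sketch of the Image Difference logic for one band."""
    diff = after.astype(float) - before.astype(float)
    span = float(before.max()) - float(before.min())
    dec_val = -span * decrease_pct / 100.0      # step 2 (assumed basis)
    inc_val = span * increase_pct / 100.0       # step 3 (assumed basis)
    highlight = np.full(diff.shape, 3, dtype=np.uint8)  # assumed: 3 = Unchanged
    highlight[(diff < 0) & (diff >= dec_val)] = 2       # assumed: Some Decrease
    highlight[diff < dec_val] = 1                       # step 4: Decreased
    highlight[(diff > 0) & (diff <= inc_val)] = 4       # assumed: Some Increase
    highlight[diff > inc_val] = 5                       # step 5: Increased
    return diff, highlight
```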


Customization Inputs: n1_atl_spotp_87 (Before Image) n2_atl_spotp_92 (After Image) Optional Inputs: n11_Float (Decrease Percentage) n12_Float (Increase Percentage) n23_Custom_String (Class Names) n26_Custom_Color (Colors) n29_Custom_Float (Opacity) Outputs: n4_difference (Difference Image) n22_hilight (Highlight Image)

Eliminate

The Eliminate model enables you to specify a minimum clump or class size; clumps smaller than this specified minimum are eliminated. This function is normally used on thematic layers that have been clumped.

➲ For more information see the Eliminate function in the Image Interpreter manual.

Access From Spatial Modeler: This function is found in the file /etc/models/Eliminate.gmd. From Image Interpreter: Select GIS Analysis... | Eliminate.... To view or edit the model, click the View... button in the Eliminate dialog.

Algorithm The Eliminate model is derived from this algorithm: 1. Small clumps are filled in with a class number one larger than the number of classes. 2. Large clumps are changed back to their original class values. 3. The small clumps are filled in from their neighboring large clumps in an iterative fashion until they are completely filled.

☞ In the graphical model version of Eliminate, only the first iteration is performed. The Image Interpreter Eliminate function makes use of the looping capabilities of SML to repeat the iteration as described in step 3 above.

Customization Inputs: n1_Inclump Outputs: n9_Ineliminate

Focal Analysis

This model (Median Filter) is useful for reducing noise such as random spikes in data sets, dead sensor striping, and other impulse imperfections in any type of image. It is also useful for enhancing thematic layers. Focal Analysis evaluates the region surrounding the pixel of interest (center pixel). The operations which can be performed on the pixel of interest include:

♦ Standard Deviation
♦ Sum
♦ Mean
♦ Median
♦ Min
♦ Max

These functions allow you to select the size of the surrounding region to evaluate by selecting the window size. This Median Filter model operates with a 3 × 3 window size. To select a different window size, double-click on the Matrix icon n7 and enter the desired size.

NOTE: The neighborhood shape may be made irregular by changing to a Custom_Matrix and entering zero for any of the matrix cells.

➲ For information on applying filters to thematic layers, see the “Geographic Information Systems” chapter in the ERDAS Field Guide.

Access Spatial Modeler: This model is found in the file /etc/models/Focal_Analysis.gmd. Image Interpreter: Select Spatial Enhancement... | Focal Analysis.... To view or edit the model, click the View... button in the Focal Analysis dialog.

Algorithm Source: Pratt 1991 The Focal Analysis, Median Filter model is derived from this algorithm: 1. Put all pixel DNs with the selected moving window into numerical order.

2. Replace the pixel of interest with the DN value in the center of the ranking.
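For illustration, the two steps above amount to a standard median filter, sketched here with SciPy (the array-based workflow is an assumption; the graphical model itself uses the Spatial Modeler focal functions):

```python
from scipy.ndimage import median_filter

def focal_median(band, size=3):
    """Replace each pixel by the median of its size-by-size neighborhood."""
    return median_filter(band, size=size, mode="reflect")
```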

Customization Inputs: n1_Indem Outputs: n3_MedianImage

Functions

The Functions model includes 36 common mathematical functions needed for developing algorithms. The arctangent operator was selected for this model, since it is a data rescale operation that is routinely useful. Perhaps the most common application of this function is in rescaling output from a ratioing algorithm (such as under Image Interpreter, Indices). Generally, the data from such an operation will occupy only a small portion (0 - 10) of the display range (0 - 255). However, the small differences between the output values (for example, 0.1 and 0.4) are now important.

➲ For information on other functions, see the Spatial Modeler Language manual in the OnLine Help.

Access From Spatial Modeler: This model is found in the file /etc/models/Function.gmd. From Image Interpreter: Select Utilities... | Functions.... To view or edit the model, click the View... button in the Single Input Functions dialog.

Customization Inputs: n1_tmIRR Outputs: n3_tmIRR_atan

Histogram Equalization

The Histogram Equalization model is a nonlinear stretch that redistributes pixel values so that there are approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the “peaks” of the histogram, and lessened at the “tails.” Histogram equalization can also separate pixels into distinct groups, if there are few output values over a wide range. This can have the visual effect of a crude classification.

➲ For more information, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This function is found in the file /etc/models/Histo_Eq.gmd. From Image Interpreter: Select Radiometric Enhancement... | Histogram Equalization.... To view or edit the model, click the View... button in the Histogram Equalization dialog.

Algorithm Source: Modified from Gonzalez and Wintz 1977 Suppose there are 240 pixels represented by the histogram. To equalize the histogram to 10 bins, there would be:

240 pixels / 10 bins = 24 pixels per bin = A

A = T / N

where:

T = the total number of pixels in the image
N = the number of bins
A = the equalized number of pixels per bin

To assign pixels to bins, the following equation is used:

$$B_i = \mathrm{int}\left[\frac{\left(\sum_{k=1}^{i-1} H_k\right) + \dfrac{H_i}{2}}{A}\right]$$

where:

A = equalized number of pixels per bin (see above)
H_i = the number of values with the value i (histogram)
int = integer function (truncating real numbers to integer)
B_i = bin number for pixels with value i

Customization Inputs: n1_lanier Outputs: n10_HistoEq
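For reference, the bin-assignment equation above can be sketched in Python (a NumPy-based sketch; hist is assumed to be an array of counts indexed by pixel value):

```python
import numpy as np

def equalize_bins(hist, n_bins):
    """Assign each input value i to an output bin B_i.

    hist[i] is H_i, the count of pixels with value i; A = T / N is the
    equalized number of pixels per bin.
    """
    T = hist.sum()
    A = T / n_bins
    cum_below = np.concatenate(([0], np.cumsum(hist)[:-1]))  # sum of H_k for k < i
    return ((cum_below + hist / 2.0) / A).astype(int)        # B_i, truncated
```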

Histogram Match

Matching the data histograms of two images is a useful function for several diverse applications. For example, when mosaicking two scenes, this can help eliminate differences in the overall scene luminance (due, perhaps, to different solar illumination on different days) to create a seamless composite image. Some production oriented facilities like to produce a standard look to their output for comparison with past and future output. In this situation, an enhancement which produces the best results can become the standard. All subsequent images similarly enhanced can be matched to the standard image (using the Spatial Modeler function RASTERMATCH) as the final step toward a standard output.

➲ For information on RASTERMATCH, see the Spatial Modeler Language manual. For more information on histogram matching, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This function is found in the file /etc/models/Histo_Match.gmd. From Image Interpreter: Select Radiometric Enhancement... | Histogram Match.... To view or edit the model, click the View... button in the Histogram Matching dialog.

Algorithm To match the histograms, a lookup table is mathematically derived which serves as a function for converting one histogram to the other.

Customization Inputs: n1_mosaic_1 (to be matched) n10_mosaic_2 (to match to) Outputs: n9_Rastermatch

IARR

IARR (Internal Average Relative Reflectance) converts the spectra recorded by the sensor into a form that can be compared to known reference spectra. This technique calculates a relative reflectance by dividing each spectrum (pixel) by the scene average spectrum (Kruse 1988). The algorithm is based on the assumption that this scene average spectrum is largely composed of the atmospheric contribution and that the atmosphere is uniform across the scene. However, these assumptions are not always valid. In particular, the average spectrum could contain absorption features related to target materials of interest. The algorithm could then overcompensate for (i.e., remove) these absorbance features. The average spectrum should be visually inspected to check for this possibility. Properly applied, this technique can remove the majority of atmospheric effects.

Access Spatial Modeler: This model is found in the file /etc/models/IARR.gmd. Image Interpreter: Select HyperSpectral Tools... | IAR Reflectance.... To view or edit the model, click the View... button in the Internal Average Relative Reflectance dialog.

Algorithm Source: Kruse, 1988 1. Calculate an average spectrum for the entire input scene. 2. Divide each pixel spectrum by the scene spectrum.
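For illustration, a minimal NumPy sketch of the two steps above (the cube is assumed to be a (rows, cols, bands) array):

```python
import numpy as np

def iar_reflectance(cube):
    """Divide each pixel spectrum by the scene-average spectrum."""
    scene_avg = cube.mean(axis=(0, 1))   # 1. average spectrum of the scene
    scene_avg = np.where(scene_avg == 0, np.finfo(float).eps, scene_avg)
    return cube / scene_avg              # 2. per-band division, broadcast per pixel
```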

Customization Inputs: n1_hyperspectral Outputs: n5_iarr

IHS to RGB

This operation is the reverse of the RGB to IHS stretch. In this transformation, three components are defined as Intensity, Hue, and Saturation in the IHS color coordinate system (Pratt 1991). These three components could simply be the output from an RGB to IHS transformation, in which case the end result of the two transforms (RGB to IHS and IHS to RGB) would be the original image. For other applications, you could replace one of the outputs from the RGB to IHS transform (commonly the Intensity) with another component (say SPOT panchromatic data) before doing the IHS to RGB transform. Others have found that defining I and/or S as some image raster (for example, ERS-1), setting Hue to a fixed image, and converting to RGB space can produce a useful image (Daily 1983). In this model, the IHS to RGB transformation is used to stretch I and S to fully utilize the data range. Since Hue is a circular dimension (0 - 360) and defines the “color” of the image, it is generally not appropriate to alter its values.

➲ The IHS stretch is discussed in detail by Gillespie et al 1986. For more information on IHS to RGB transformations, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This model is found in the file /etc/models/IHStoRGB_Stretch.gmd. From Image Interpreter: Select Spectral Enhancement... | IHS to RGB.... To view or edit the model, left-click the View... button in the IHS to RGB dialog box.

Algorithm Source: Conrac 1980 Given:

0 ≤ H ≤ 360; 0 ≤ I ≤ 1.0; 0 ≤ S ≤ 1.0

If I ≤ 0.5, M = I (1 + S)
If I > 0.5, M = I + S - I (S)

m = 2 * I - M

31

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m) (H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m) ((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m) ((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m) ((360 - H) / 60)

The equations for calculating B in the range of 0 to 1.0 are:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m) ((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m) ((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
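These piecewise equations can be restated compactly: R, G, and B all use the same ramp, sampled at hue rotations of 0, –120, and +120 degrees. A minimal per-pixel Python sketch (function and variable names are illustrative):

```python
def ihs_to_rgb(i, h, s):
    """IHS to RGB per the equations above; h in [0, 360), i and s in [0, 1]."""
    M = i * (1 + s) if i <= 0.5 else i + s - i * s
    m = 2 * i - M

    def ramp(hue):
        # The shared piecewise ramp, applied at a rotated hue.
        hh = hue % 360
        if hh < 60:
            return m + (M - m) * (hh / 60)
        if hh < 180:
            return M
        if hh < 240:
            return m + (M - m) * ((240 - hh) / 60)
        return m

    return ramp(h), ramp(h - 120), ramp(h + 120)
```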

Customization Inputs: n6_rgbtoihs Outputs: n28_IHStoRGB_IS

Index

Index creates a composite .img file by adding together the class values of two “weighted” input raster files. You can assign a weighting value to each input file, which is used to multiply the class value in each cell. The corresponding cells from each file are then added, and the composite output file contains the resulting sums. A recoding option within Index also allows you to prescale the input data if desired, which is useful for “masking out” certain data files.

➲ Since these weighted sums may be quite large, you may want to normalize them by dividing by the sum of the weights. Use the Spatial Modeler for this. The raster input files may contain different types of information, but should cover the same geographic area. This function can be used for applications such as siting a new industry. The most likely sites will be where there is the best combination (highest cell value) of good soils, good slopes, and good access. If good slope is a more critical factor than either soils or access, you could assign it a weight of two.

Indexing Example

Soils (9 = good, 5 = fair, 1 = poor), weighting importance ×1:

9 9 5
9 9 1
1 9 5

plus Slope (9 = good, 5 = fair, 1 = poor), weighting importance ×2 (weighted values shown):

18 10 10
18 18  2
18 18  2

plus Access (9 = good, 5 = fair, 1 = poor), weighting importance ×1:

9 5 1
9 9 5
9 9 9

equals the output values calculated:

36 24 16
36 36  8
28 36 16

Access From Spatial Modeler: This function is found in the file /etc/models/Index.gmd. From Image Interpreter: Select GIS Analysis... | Index.... To view or edit the model, left-click the View... button in the Index dialog.

Algorithm Source: ERDAS The normalizing process is represented mathematically below.

Raw sum: R = (W1 × P1) + (W2 × P2)

Normalized sum:

R = ((W1 × P1) + (W2 × P2)) / (W1 + W2)

where:

W1 = weight to be applied to the first file
W2 = weight to be applied to the second file
P1 = value of a particular pixel in the first file
P2 = value of a particular pixel in the second file
R = resulting value of Index
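A minimal NumPy sketch of the weighted sum (array names are taken from the example above and are illustrative):

```python
import numpy as np

def weighted_index(layers, weights, normalize=False):
    """R = (W1*P1 + W2*P2 + ...), optionally divided by (W1 + W2 + ...)."""
    total = sum(w * layer.astype(float) for w, layer in zip(weights, layers))
    return total / sum(weights) if normalize else total

# Reproduces the Indexing Example above (raw sums):
soils = np.array([[9, 9, 5], [9, 9, 1], [1, 9, 5]])
slope = np.array([[9, 5, 5], [9, 9, 1], [9, 9, 1]])   # unweighted slope values
access = np.array([[9, 5, 1], [9, 9, 5], [9, 9, 9]])
out = weighted_index([soils, slope, access], [1, 2, 1])
# out == [[36, 24, 16], [36, 36, 8], [28, 36, 16]]
```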

Customization Inputs: n1_Inslope n2_Insoils Outputs: n6_Indeximage

Inverse

Occasionally, you may want to reverse the contrast of a black and white (gray scale) image. This could be necessary if, for example, the scanned hardcopy was a photographic negative and you wanted to use the corresponding positive. Subtle details located in the low DN range of an image histogram can often be rendered visible more quickly by reversing the image. Two models are included for these applications: Reverse is a linear operation (see Reverse); Inverse is a nonlinear approach.

NOTE: Every single band image is, in essence, a black and white photograph.

➲ For more information on Image Inverse, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This function is found in the file /etc/models/Inverse.gmd. From Image Interpreter: Select Radiometric Enhancement... | Brightness Inversion.... Under Output Options select Inverse. To view or edit this model, click the View... button in the Brightness Inversion dialog.

Algorithm Source: Pratt 1991 The Inverse model is derived from this algorithm:

DN out = 1.0 ; 0.0 ≤ DN in < 0.1
DN out = 0.1 / DN in ; 0.1 ≤ DN in ≤ 1.0

Customization Inputs: n1_panAtlanta Outputs: n12_inverse_8bit

Inverse Principal Components

This model enables you to perform an inverse principal components analysis on an input file that has been processed with the principal components analysis function.

Access From Spatial Modeler: This function is found in the file /etc/models/Inverse_PC.gmd. From Image Interpreter: Select Spectral Enhancement... | Inverse Principal Comp.... To view or edit this model, click the View... button in the Inverse Principal Components dialog.

Algorithm The Inverse Principal Components model is derived from this algorithm: 1. The eigenmatrix is transposed and inverted. 2. A linear function is used to combine the eigenmatrix with the input raster image.

Customization Inputs: n1_prince Outputs: n8_invprince

LUT Stretch

This function lets you create an .img file with the same data values as the displayed contrast stretched image. This way, once you have manipulated the histogram of an image and obtained the best results for your application, you can save the image and actually change the data file values to match the viewed image.

Access From Spatial Modeler: This function is found in the file /etc/models/LUT_Stretch.gmd. From Image Interpreter: Select Radiometric Enhancement... | LUT Stretch.... To view or edit this model, click the View... button in the LUT Stretch dialog.

Algorithm Source: ERDAS The LUT Stretch model is derived by replacing each pixel of each layer by its lookup table value.

Customization Inputs: n3_mobbay Outputs: n12_mb_lookup

Layer Stack

The layerstack shown in this model is used in the Crisp model (among others) where the number of input PC bands will depend on the number of input bands in the multiband input image, and hence will vary from application to application.

Access From Spatial Modeler: This function is found in the file /etc/models/Layerstack.gmd. From Image Interpreter: Select Utilities... | Layer Stack.... To view or edit this model, click the View... button in the Layer Selection and Stacking dialog.

Algorithm The Layer Stack model is derived from this algorithm: 1. Output band 1 is INPUT RASTER #1 band 1. 2. Output band 2 is INPUT RASTER #2 band 2. 3. Output band 3 is INPUT RASTER #2 band 3. 4. Output band X is INPUT RASTER #2 band X.

Customization Inputs: n7_spots (#1) n15_dmtm (#2) Outputs: n17_Layerstack

Level Slice

A level slice simply “slices” or divides the data values into a user-defined number of bins or divisions. The data are equally divided into bins which are “level,” each containing the same amount. This model is good for DEMs or in other applications where you want to slice a continuous image into a discrete number of levels. For example, you may want to do a Level Slice for Aspect to show the four cardinal directions. This model allows you to select six different levels.

Access From Spatial Modeler: This function is found in the file /etc/models/Level_Slice.gmd. From Image Interpreter: Select Topographic Analysis... | Level Slice.... To view or edit the model, click the View... button in the Topographic Level Slice dialog.

Algorithm Calculate the number of DNs per bin as:

x = (Max – Min) / number of bins

Bin using the formula:

DN out = (DN in – Min) / x

Customization Inputs: n1_Inaspect Outputs: n17_Level_Slice

Log Residuals

The Log Residuals technique was originally described by Green and Craig (1985), but has been variously modified by researchers. The version implemented here is similar to the approach of Lyon (1987). This algorithm corrects the image for atmospheric absorption, systematic instrumental variation, and illuminance differences between pixels.

➲ You may wish to set the Edge Extension preference in the Spatial Modeler category to Reflect about edge before running this model.

Access Spatial Modeler: This model is found in the file /etc/models/LogResiduals.gmd. Image Interpreter: Select HyperSpectral Tools... | Log Residuals.... To view or edit the model, click the View... button in the Log Residuals dialog.

Algorithm Source: Lyon, 1987 1. Convert image to log basis. 2. Calculate average of each band in step 1 (above). 3. Calculate average of each pixel. 4. Subtract the band average (step 2, above) and the pixel average (step 3, above) from the converted image (step 1, above). 5. Calculate the exponential of step 4 (above).
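A minimal NumPy sketch of the five steps above (the cube is assumed to be a (rows, cols, bands) array of strictly positive DNs):

```python
import numpy as np

def log_residuals(cube):
    """Follow the Log Residuals steps; assumes strictly positive DN values."""
    log_cube = np.log(cube.astype(float))                      # 1. log basis
    band_avg = log_cube.mean(axis=(0, 1))                      # 2. per-band average
    pixel_avg = log_cube.mean(axis=2)                          # 3. per-pixel average
    resid = log_cube - band_avg - pixel_avg[..., np.newaxis]   # 4. subtract both
    return np.exp(resid)                                       # 5. exponential
```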

Customization Inputs: n1_hyperspectral Outputs: n14_logres

Mask

Mask uses an .img file to select (mask) specific areas from a corresponding raster file and use those areas to create a new file. The areas to mask are selected by non-zero class value. You begin by identifying the input raster files. Then the thematic raster file is recoded to create the mask which tells the system which class values are to be output. Suppose you want to create an .img file containing only those areas that correspond with areas in the input raster file that have a value of 7 (City of Gainesville). To accomplish this, class value 7 is assigned a recode value of 1. All other values are assigned a recode value of 0. The output image will have only data within the city of Gainesville.

☞ Any areas assigned a recode value of 0 will not be included in the output. ➲ See the “Raster Layers” or “Classification” chapters in the ERDAS Field Guide for more information.

Access From Spatial Modeler: This function is found in the file /etc/models/Mask.gmd. From Image Interpreter: Select Utilities... | Mask.... To view or edit the model, click the View... button in the Mask dialog.

Customization Inputs: n1_lanier n4_Input (thematic raster) Outputs: n2_mask

Matrix

Matrix analyzes two input raster files and produces a new file. The new file contains class values that indicate how the class values from the original files overlap. Unlike Overlay or Index, the resulting class values can be unique for each coincidence of two input class values. This allows you to create any logical combination of classes from two .img files, such as union of classes, intersection of classes, complements of classes, or any combination of the above. Matrix organizes the class values of the two input files into a matrix. The first input file specifies the columns of the matrix, and the second input file specifies the rows. The first column and first row of the matrix show areas that have a class value of zero in at least one of the input files. So, the first column and row of the matrix contain zeros. All other positions in the matrix are numbered sequentially, starting with 1. These numbers become the class values of the output file. An example matrix is illustrated below, with input layer 1 data values as the columns and input layer 2 data values as the rows:

      0    1    2    3    4    5    6
 0    0    0    0    0    0    0    0
 1    0    1    2    3    4    5    6
 2    0    7    8    9   10   11   12
 3    0   13   14   15   16   17   18
 4    0   19   20   21   22   23   24

The output file will have 25 classes, numbered 0 to 24, which correspond to the elements of the matrix. Each of the classes 1 to 24 represents a unique combination of the classes of the input files.

Recoding You can recode any class values in the input files before the matrix is created.

Access From Spatial Modeler: This function is found in the file /etc/models/Matrix.gmd. From Image Interpreter: Select GIS Analysis... | Matrix.... To view or edit the model, click the View... button in the Matrix dialog.

Customization Inputs: n4_Insoils n5_Inlandc Outputs: n2_matrix

Mean Per Pixel

This algorithm outputs a single band, regardless of the number of input bands. By visually inspecting this output image, it is possible to see if particular pixels are "outside the norm". While this does not mean that these pixels are incorrect, they should be evaluated in this context. For example, a CCD detector could have several sites (pixels) that are dead or have an anomalous response; these would be revealed in the Mean per Pixel image. This can be used as a sensor evaluation tool.

Access Spatial Modeler: This model is found in the file /etc/models/MeanPerPixel.gmd. Image Interpreter: Select HyperSpectral Tools... | Mean per Pixel.... To view or edit the model, click the View... button in the Mean Per Pixel dialog.

Algorithm Source: ERDAS Calculate an average DN value for each pixel using all bands selected from the input image.

Customization Inputs: n1_hyperspectral Outputs: n5_meanperpixel

Natural Color

This algorithm converts SPOT XS imagery to an output which approximates a true color image.

Access Spatial Modeler: This model is found in the file /etc/models/Natcolor.gmd. Image Interpreter: Select Spectral Enhancement... | Natural Color.... To view or edit the model, click the View... button in the Natural Color dialog.

Algorithm Source: ERDAS
1. Layerstack SPOT XS band 2 as band 1.
2. Layerstack SPOT XS band 1 as band 3.
3. Layerstack (3 × XS1 + XS3) / 4 as band 2.

Customization Inputs: n1_spotxs Outputs: n7_natcolor

Neighborhood

This model is similar to the Focal Analysis model. The difference is that these functions are more applicable to thematic raster images. The Neighborhood functions evaluate the region surrounding the pixel of interest (center pixel). The available operations are:

♦ Majority
♦ Minority
♦ Sum
♦ Diversity
♦ Density
♦ Max
♦ Min
♦ Rank

This model uses a Focal Maximum operation with a 3 × 3 moving window. You may change the operation and the moving window size.

Access From Spatial Modeler: This function is found in the file /etc/models/Neighborhood.gmd. From Image Interpreter: Select GIS Analysis... | Neighborhood.... To view or edit the model, click the View... button in the Neighborhood Functions dialog.

Algorithm The Neighborhood, Maximum model is derived from this algorithm: 1. Put all pixel DNs within the moving window in numerical order. 2. Replace center pixel with maximum DN value.

Customization Inputs: n1_Insoils Outputs: n3_MaximumImage

Normalize

Pixel albedo is affected by sensor look angle and local topographic effects. For airborne sensors this look angle effect can be large across a scene; it is less pronounced for satellite sensors. Some scanners look to both sides of the aircraft; for these data sets, the difference in average scene luminance between the two half-scenes can be large. To help minimize these effects, an "equal area normalization" algorithm can be applied (Zamudio and Atkinson 1990). This calculation shifts each (pixel) spectrum to the same overall average brightness. This enhancement must be used with a consideration of whether this assumption is valid for your scene. For an image which contains two (or more) distinctly different regions (e.g., half ocean and half forest), this may not be a valid assumption. Correctly applied, this normalization algorithm helps remove albedo variations and topographic effects.

Access Spatial Modeler: This model is found in the file /etc/models/Normalize.gmd. Image Interpreter: Select HyperSpectral Tools... | Normalize.... To view or edit the model, click the View... button in the Normalize dialog.

Algorithm Source: Zamudio & Atkinson, 1990 1. Calculate an average DN value for each pixel. 2. Calculate an overall scene average DN value. 3. Ratio each pixel average (step 1, above) to the overall scene average (step 2, above). 4. Multiply each scene pixel by its ratio (step 3, above).
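A minimal NumPy sketch of the steps above, scaling each pixel spectrum so its average brightness matches the overall scene average (one reading of steps 3 and 4; the cube is assumed to be a (rows, cols, bands) array):

```python
import numpy as np

def equal_area_normalize(cube):
    """Shift each pixel spectrum to the same overall average brightness."""
    pixel_avg = cube.mean(axis=2)    # 1. average DN of each pixel
    scene_avg = cube.mean()          # 2. overall scene average DN
    pixel_avg = np.where(pixel_avg == 0, np.finfo(float).eps, pixel_avg)
    ratio = scene_avg / pixel_avg    # 3.-4. rescale each spectrum by its ratio
    return cube * ratio[..., np.newaxis]
```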

Customization Inputs: n1_hyperspectral Outputs: n9_normalize

Operators

There are six mathematical operations which can be accessed through the Image Interpreter Operators function: addition (+), subtraction (-), division (/), multiplication (*), exponentiation (POWER), and modulus (MOD). The operation shown in this graphical model is Division. This model demonstrates using the EITHER statement to avoid dividing by zero.

➲ For more information on these operations, see the Spatial Modeler Language manual in the On-Line Help.

Access From Spatial Modeler: This function is found in the file /etc/models/Operators.gmd. From Image Interpreter: Select Utilities... | Operators.... To view or edit this model, click the View... button in the Two Input Operators dialog.

Algorithm The Operators model is derived from this algorithm:

If denominator = 0, then output = 0
If denominator ≠ 0, then output = numerator / denominator

Customization Inputs: n1_lanier Outputs: n4_Ratio_out

Overlay

Overlay creates a composite output .img file by combining two input .img files based on the minimum or maximum values of the input files. You will determine whether the output file will contain either the highest or the lowest class values found in the individual input files for each cell. A recoding option within the Overlay program lets you pre-scale the data, if desired, to mask out certain data values. The illustration below shows the result of combining two files - an original slope file and a land use file. First, the original slope file is recoded to combine all steep slopes into one value. When overlaid with the land use file, the highest data values (the steep slopes) dominate in the output file.

Overlaying Example

ORIGINAL SLOPE (1-5 = flat slopes, 6-9 = steep slopes):

6 8 9
2 1 6
1 3 5

RECODE gives the RECODED SLOPE (0 = flat slopes, 9 = steep slopes):

9 9 9
0 0 9
0 0 0

The recoded slope file is then overlaid with the LAND USE file (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetland). Because the higher value is kept at each cell, the steep-slope value 9 dominates in the output and the underlying land use is “masked out” wherever the slopes are steep.
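A minimal NumPy sketch of the maximum-value overlay (array names are illustrative):

```python
import numpy as np

def overlay_max(a, b):
    """Keep the higher of the two class values at each cell."""
    return np.maximum(a, b)

# With the recoded slope grid above, the 9s dominate whatever land use
# classes (1-5) occupy the same cells:
recoded_slope = np.array([[9, 9, 9], [0, 0, 9], [0, 0, 0]])
```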

Access From Spatial Modeler: This function is found in the file /etc/models/Overlay.gmd. From Image Interpreter: Select GIS Analysis... | Overlay.... To view or edit the model, click the View... button in the Overlay dialog.

Customization Inputs: n5_Inlandc n4_Input Outputs: n2_overlay

Prewitt Filter

The Prewitt_Filter option is a non-directional edge enhancer that uses convolution kernels. The Prewitt_Filter option enhances edges using row-wise and column-wise sensitivity. The illustration below uses orthogonal Prewitt kernels to enhance edges in both of these directions. These two directional edge enhancement images are then combined to produce an output image with edges that are enhanced in both directions.

(1/3) ×
 1  0  –1
 1  0  –1
 1  0  –1

(1/3) ×
–1  –1  –1
 0   0   0
 1   1   1

Access From Spatial Modeler: This function is found in the file /etc/models/Prewitt_Filter.gmd. From Image Interpreter on the ERDAS IMAGINE main menu, select Spatial Enhancement... | Non-directional Edge.... For Filter Selection select Prewitt. To view or edit the model, click the View... button in the Non-directional Edge dialog.

Algorithm Source: ERDAS The Prewitt_Filter model is derived from this algorithm:

1. Calculates Prewitt row gradient images using convolution.
2. Calculates Prewitt column gradient images using convolution.
3. Combines these two images using: √( x² + y² )

Customization Inputs: n1_lanier Outputs: n11_Inprewitt

Principal Components

Principal components analysis (or PCA) is often used as a method of data compression. It allows redundant data to be compacted into fewer bands—that is, the dimensionality of the data is reduced. The bands of PCA data are non-correlated and independent, and are often more interpretable than the source data (Jensen 1986; Faust 1989). PCA can be performed on up to 256 bands with ERDAS IMAGINE. The PCA model is an integral part of several functions including Crisp and Resolution Merge.

➲ For more information on Principal Components, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This model is found in the file /etc/models/Principal_Components.gmd. From Image Interpreter: Select Spectral Enhancement... | Principal Comp. .... To view or edit the model, click the View... button in the Principal Components dialog.

Algorithm Source: Faust 1989 To perform the linear transformation, the eigenvectors and eigenvalues of the n principal components must be mathematically derived from the covariance matrix, as shown in the following equation:

$$E \,\mathrm{Cov}\, E^T = V, \qquad V = \begin{bmatrix} v_1 & 0 & \cdots & 0 \\ 0 & v_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & v_n \end{bmatrix}$$

where:

E = the matrix of eigenvectors
Cov = the covariance matrix
E^T = the transpose of the matrix of eigenvectors
V = a diagonal matrix of eigenvalues, in which all non-diagonal elements are zeros

V is computed so that its non-zero elements are ordered from greatest to least, so that v1 > v2 > v3 > ... > vn.
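A minimal NumPy sketch of the forward transform (pixels is assumed to be an (n_pixels, n_bands) array; names are illustrative):

```python
import numpy as np

def principal_components(pixels):
    """Forward PC transform; the eigenvectors satisfy E Cov E^T = V."""
    cov = np.cov(pixels, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # returned in ascending order
    order = np.argsort(eigvals)[::-1]        # reorder so v1 > v2 > ... > vn
    E = eigvecs[:, order].T                  # rows of E are eigenvectors
    scores = (pixels - pixels.mean(axis=0)) @ E.T
    return scores, E, eigvals[order]
```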

Customization Inputs: n1_lanier Outputs: n7_prince

RGB to IHS

It is possible to define an alternate color space which uses Intensity (I), Hue (H), and Saturation (S) as the three positional parameters (in lieu of R, G, and B). This system is advantageous in that it presents colors more nearly as perceived by the human eye.

♦ Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).

♦ Saturation represents the purity of color and also varies linearly from 0 to 1.
♦ Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension. Hence, hue must vary from 0 - 360 to define the entire sphere (Buchanan 1979).

➲ For more information, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This model is found in the file /etc/models/RGBtoIHS.gmd. From Image Interpreter: Select Spectral Enhancement... | RGB to IHS.... To view or edit the model, click the View... button in the RGB to IHS dialog.

Algorithm Source: Conrac 1980 The RGB to IHS model is derived from this algorithm. If R, G, and B are in the 0 - 255 range, divide by 255 first to convert to the 0 - 1.0 range.

R, G, B are each in the range of 0 to 1.0.

M = largest value of either R, G, or B
m = least value of either R, G, or B

The equation for calculating intensity in the range of 0 to 1.0 is:

I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

If M = m, S = 0 ; otherwise:
If I ≤ 0.5, S = (M - m) / (M + m)
If I > 0.5, S = (M - m) / (2 - M - m)

The equations for calculating hue in the range of 0 to 360 are:

If M = m, H = 0 ; otherwise:

r = (M - R) / (M - m)
g = (M - G) / (M - m)
b = (M - B) / (M - m)

If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)

NOTE: At least one of the r, g, or b values is 0, corresponding to the color with the largest value, and at least one of the r, g, or b values is 1, corresponding to the color with the least value.
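A minimal per-pixel Python sketch of the equations above (the function name is illustrative):

```python
def rgb_to_ihs(r, g, b):
    """RGB to IHS per the equations above; r, g, b each in [0, 1]."""
    M, m = max(r, g, b), min(r, g, b)
    i = (M + m) / 2
    if M == m:
        return i, 0.0, 0.0               # gray pixel: hue and saturation are 0
    s = (M - m) / (M + m) if i <= 0.5 else (M - m) / (2 - M - m)
    rr, gg, bb = ((M - c) / (M - m) for c in (r, g, b))
    if r == M:
        h = 60 * (2 + bb - gg)
    elif g == M:
        h = 60 * (4 + rr - bb)
    else:
        h = 60 * (6 + gg - rr)
    return i, h, s
```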

Customization Inputs: n1_dmtm Outputs: n3_rgbtoihs

Recode

Recode assigns a new class value number to any or all classes of an existing .img file, creating an output file using the new class numbers. This function can also be used to combine classes by recoding more than one class to the same new class number.

Access From Spatial Modeler: This function is found in the file /etc/models/Recode.gmd. From Image Interpreter: Select GIS Analysis... | Recode.... To view or edit the model, click the View... button in the Recode dialog.

Customization Inputs: n1_Inlandc Outputs: n3_level1

Rescale3D

Many hyperspectral scanners record the data in a format larger than 8-bit. In addition, many of the calculations used to correct the data will be performed with a floating point format to preserve precision. At some point, it will be advantageous to compress the data back into an 8-bit range for effective storage and/or display. However, when rescaling data to be used for imaging spectrometry analysis, it is necessary to consider all data values within the data cube, not just within the layer of interest. This algorithm is designed to maintain the 3-dimensional integrity of the data values. Any bit format can be input. The output image will always be 8-bit. When rescaling a data cube, a decision must be made as to which bands to include in the rescaling. Clearly, a “bad” band (i.e., a low S/N layer) should be excluded. Some sensors image in different regions of the electromagnetic (EM) spectrum (e.g., reflective and thermal infra-red or long- and short-wave reflective infra-red). When rescaling these data sets, it may be appropriate to rescale each EM region separately. These can be input using the Select Layer option in the IMAGINE Viewer.

Access Spatial Modeler: This model is found in the file /etc/models/Rescale3D.gmd. Image Interpreter: Select HyperSpectral Tools... | Rescale.... To view or edit the model, click the View... button in the 3 Dimensional Rescale dialog.

Algorithm Source: ERDAS 1. Calculate a minimum and a maximum DN value for each pixel in the image using all layers selected. 2. Calculate the minimum and maximum of step 1 (above) to obtain global min/max. 3. Rescale image using min/max rescaling to 8-bit dynamic range.

Customization Inputs: n10_hyperspectral Outputs: n14_rescale

Rescale Min-Max

ERDAS IMAGINE is designed to use image data in a wide variety of data types. In the past, most image processing systems dealt largely with 8-bit images since that was the standard format. However, this caused certain limitations. For example, a band ratio image may produce values only in the DN = 0 - 5 range, but the decimal components of the ratios would contain a lot of the precision. This precision would be lost by an 8-bit output. In addition, some new sensors are using 16-bit data to store the raw image data. Some radar images come in complex number formats. A modern image processing system must be able to input and output a variety of data types. While it may be desirable from a precision point of view to carry data in a high precision type, these images become increasingly large. The Rescale models exist to address these considerations. These algorithms will utilize any data type as input and output. Most commonly, they will be used to compress one of the high precision types back to 8-bit. There are two possible methods of bit compression:

♦ Min-Max Stretch
♦ Standard Deviation Stretch

Access From Spatial Modeler: This model is found in the file /etc/models/Rescale_Minmax.gmd. From Image Interpreter: Select Utilities... | Rescale.... Under Input Range Options select Minimum-Maximum. To view or edit the model, click the View... button in the Rescale dialog.

Algorithm The Rescale Min-Max Stretch model is derived from this algorithm:

DN out = Min out + ((DN in – Min in) × (Max out – Min out)) / (Max in – Min in)
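A direct Python restatement of the formula (names are illustrative):

```python
def rescale_minmax(dn, in_min, in_max, out_min=0.0, out_max=255.0):
    """Linearly map dn from [in_min, in_max] to [out_min, out_max]."""
    return out_min + (dn - in_min) * (out_max - out_min) / (in_max - in_min)
```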

Customization Inputs: n1_lanier Outputs: n2_rescale

Rescale - Standard Deviation

See Rescale Min-Max for a description of the Rescale model.

Access From Spatial Modeler: This model is found in the file /etc/models/Rescale_StdDev.gmd. From Image Interpreter: Select Utilities... | Rescale.... Under Input Range Options select Standard Deviation. To view or edit the model, click the View... button in the Rescale dialog.

Algorithm The Rescale - Standard Deviation model is derived from this algorithm:

DN out = Min out + ((DN in – (Mean(DN in) – NSTD × SD(DN in))) × (Max out – Min out)) / (2 × NSTD × SD(DN in))

where:

NSTD = number of standard deviations

Customization Inputs: n1_lanier Outputs: n2_rescale

Resolution Merge - Brovey Transform

Resolution Merge functions integrate imagery of different spatial resolutions (pixel size). These can be used either intra-sensor (i.e., SPOT panchromatic with SPOT XS) or inter-sensor (i.e., SPOT panchromatic with Landsat TM). A key element of these multi-sensor integration techniques is that they retain the thematic information of the multiband raster image. Thus, you could merge a Landsat TM (28.5 m pixel) scene with a SPOT panchromatic scene (10 m pixel) and still do a meaningful classification, band ratio image, etc. Of course, there are practical limits to the application of this algorithm. You cannot, for example, merge SPOT panchromatic (10 m pixels) with AVHRR imagery (1,100 m pixels) to produce an AVHRR image with 10 m resolution. The relative resolutions of the two images determine what resampling technique is appropriate. The nearest neighbor resampling technique looks at four surrounding pixels, bilinear looks at eight surrounding pixels, and cubic convolution looks at 16 surrounding pixels. In general, you should resample using N² pixels, where N = the resolution ratio. For example, the resolution ratio for SPOT panchromatic (10 m) and Landsat TM (28.5 m) would be 2.85. This squared equals 8.1, so a bilinear resampling technique, which looks at eight surrounding pixels, would be appropriate. The Resolution Merge function offers three techniques. The Brovey transform uses a ratio algorithm to merge the layers. Multiplicative is based on simple arithmetic integration of the two raster sets. The Principal Component merge (like the Crisp enhancement) operates on PC-1 rather than the input raster image.

Access From Spatial Modeler: This model is found in the file /etc/models/ResolnMergeBrovey.gmd. From Image Interpreter: Select Spatial Enhancement... | Resolution Merge.... Under Method select Brovey Transform. To view or edit the model, click the View... button in the Resolution Merge dialog.

Algorithm Source: ERDAS The Resolution Merge - Brovey Transform model is derived from this algorithm:

DN Rnew = [DN R / (DN R + DN G + DN B)] × DN hires
DN Gnew = [DN G / (DN R + DN G + DN B)] × DN hires
DN Bnew = [DN B / (DN R + DN G + DN B)] × DN hires

where R, G, B = red, green, and blue bands of the image.
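A minimal NumPy sketch of the Brovey ratio merge (it assumes the three multispectral bands have already been resampled to the high-resolution grid):

```python
import numpy as np

def brovey_merge(r, g, b, hires):
    """Weight each band by its share of the RGB sum, then scale by hires."""
    total = r.astype(float) + g + b
    total = np.where(total == 0, np.finfo(float).eps, total)  # avoid divide by zero
    return r / total * hires, g / total * hires, b / total * hires
```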

Customization Inputs: n1_spots (high-res) n2_dmtm (multi-layer) Outputs: n6_brovey

Resolution Merge - Multiplicative

This merge algorithm operates on the original image. The result is an increased presence of the intensity component. For many applications, this is desirable. Users involved in urban or suburban studies, city planning, utilities routing, etc., often want roads and cultural features (which tend toward high reflectance) to be pronounced in the image.

➲ See the previous section on Resolution Merge - Brovey Transform for general information about resolution merges.

Access From Spatial Modeler: This model is found in the file /etc/models/ResolnMergeMult.gmd. From Image Interpreter: Select Spatial Enhancement... | Resolution Merge.... To view or edit the model, click the View... button in the Resolution Merge dialog.

Algorithm The Resolution Merge - Multiplicative model is derived from this algorithm:

DN TM1 × DN SPOT = DN TM1new

Customization Inputs: n3_spots (grayscale) n17_dmtm (multi-layer) Outputs: n11_merge_mult

Resolution Merge - Principal Components

➲ See Resolution Merge - Brovey Transform for general information about resolution merge models.

Access From Spatial Modeler: This model is found in the file /etc/models/ResolnMergePC.gmd. From Image Interpreter: Select Spatial Enhancement... | Resolution Merge.... To view or edit the model, click the View... button in the Resolution Merge dialog.

Applications and Modifications Because a major goal of this merge is to retain the spectral information of the six TM bands (1 - 5, 7), this algorithm is mathematically rigorous. It is assumed that:

♦ PC-1 contains only overall scene luminance; all interband variation is contained in the other 5 PCs, and

♦ Scene luminance in the SWIR bands (from Landsat) is identical to Visible band scene luminance (from SPOT).

With the above assumptions, the forward transform into principal components is made. PC-1 is removed and its numerical range (min to max) is determined. The high spatial resolution image is then remapped so that its histogram is kept constant but it is in the same numerical range as PC-1. It is then substituted for PC-1 and the reverse transform applied. This remapping is done so that the mathematics of the reverse transform do not distort the thematic information.

Algorithm Source: ERDAS 1. Calculate principal components (see Principal Components). 2. Remap high resolution image into data range of PC-1 and substitute for PC-1. 3. Reverse principal components transformation.

Customization Inputs: n1_dmtm Outputs: n23_merge_PC

Reverse

Reverse is a linear function that simply reverses the DN values. Dark detail becomes light and light detail becomes dark. This can also be used to invert a negative image (that has been scanned) to produce a positive image. The Inverse model provides a non-linear approach.

➲ The model shown here is for 8-bit data only (0-255). The model used in Image Interpreter (Radiometric Enhancement/Image Inversion/Reverse) can handle any data type.

Access From Spatial Modeler: This model is found in the file /etc/models/Reverse.gmd. From Image Interpreter: Select Radiometric Enhancement... | Brightness Inversion.... Under Output Options select Reverse. To view or edit the model, click the View... button in the Brightness Inversion dialog.

Algorithm Source: Pratt 1991 The Reverse model is derived from this algorithm: DN out = 255 – DN in

Customization Inputs: n1_panAtlanta Outputs: n3_reverse

Search

Search performs a proximity analysis on an input thematic data file and creates an output file. The resulting output contains new class values that are assigned based on proximity to user-specified input class values. You select any class or set of classes in the input file from which to search. The program recodes the selected classes to 0 in the output file. Neighboring pixels are then assigned a value based on their Euclidean distance from these selected pixels. For instance, pixels which are 1 cell away from the selected pixels are assigned to class 1, pixels 2 cells away are assigned to class 2, and so forth. You can select the maximum distance to search. All pixels farther than this distance from the search class(es) are assigned the maximum output class value, which is one more than the distance to search. The output file is a single raster layer, where the data value at each pixel is the distance in pixels from the nearest pixel whose value belongs to the set of search classes.

Access From Spatial Modeler: This model is found in the file /etc/models/Search.gmd. From Image Interpreter: Select GIS Analysis... | Search.... To view or edit the model, click the View... button in the Search dialog.

Algorithm The Search model is derived from this algorithm: 1. The program recodes the selected classes to 0 in the output file. 2. Neighboring pixels are then assigned a value based on their Euclidean distance from these selected pixels.

Customization Inputs: n1_Inlandc Outputs: n3_Insearch

Sieve

After an .img file has been processed using the Clump model which identifies clumps of particular class values, Sieve is used to eliminate clumps smaller than a minimum size that you specify. Like Clump, Sieve outputs a raster file in which the clumps are sequentially numbered as the program processes the data. Clumps smaller than the minimum size are assigned a value of 0. Sieve differs from Eliminate in that Eliminate fills in the small clumps using neighboring values, while Sieve recodes the small clumps to 0. You may need to refer back to the original image file to determine the class value of each clump area. This information is contained in the “Original Value” raster attribute of the clumped file. To renumber the values from this attribute column, use the DELROWS function. (See the example under DELROWS in the on-line Spatial Modeler Language Manual.) An alternate method would be to use a zonal function such as ZONAL MAX or ZONAL MAJORITY, using the output of Sieve as the zone file and the original image as the class file. The zonal functions are accessible either directly from Spatial Modeler or within the Image Interpreter Summary function. Sieve uses a histogram of the input raster file to process the data.

☞ Before using the Sieve model, the original raster .img file must first be processed using Clump.

Access From Spatial Modeler: This model is found in the file /etc/models/Sieve.gmd. From Image Interpreter: Select GIS Analysis... | Sieve.... To view or edit the model, click the View... button in the Sieve dialog.

Algorithm The Sieve model is derived from this algorithm: 1. Any value whose histogram count is less than the selected threshold is set to 0. 2. Remaining values are sequentially re-numbered.

Customization Inputs: n1_Inclump Outputs: n3_Insieve

Signal To Noise

The signal-to-noise (S/N) ratio is commonly used to evaluate the usefulness or validity of a particular band. In this implementation, S/N is defined as Mean / Std. Dev. in a 3 × 3 moving window. After running this function on a data set, each layer in the output image should be visually inspected to evaluate its suitability for inclusion in the analysis. Layers deemed unacceptable can be excluded from the processing by using the Select Layers option of the various Graphical User Interfaces (GUIs). This can be used as a sensor evaluation tool.

Access Spatial Modeler: This model is found in the file /etc/models/SignalToNoise.gmd. Image Interpreter: Select HyperSpectral Tools... | Signal to Noise.... To view or edit the model, click the View... button in the Signal To Noise dialog.

Algorithm Source: ERDAS 1. For every pixel in the image, calculate the mean within a 3 x 3 moving window. 2. For every pixel in the image, calculate the standard deviation within a 3 x 3 moving window. 3. Divide the mean (step 1, above) by the standard deviation (step 2, above).
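A minimal NumPy/SciPy sketch of the three steps for one layer (computing the moving-window statistics with uniform filters is an implementation choice of this sketch; names are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def signal_to_noise(band):
    """Mean divided by standard deviation in a 3 x 3 moving window."""
    x = band.astype(float)
    mean = uniform_filter(x, size=3, mode="reflect")            # step 1
    mean_sq = uniform_filter(x * x, size=3, mode="reflect")
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))       # step 2
    return mean / np.where(std == 0, np.inf, std)               # step 3
```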

Customization Inputs: n1_hyperspectral Outputs: n6_signaltonoise

Slope - Degrees

The Slope function computes the slope of a topographic image file in percentage or degrees. The resulting output is an .img file containing slope. The input raster file must be georeferenced, and you must know whether the elevation data values are in units of feet, meters, or other units.

Access From Spatial Modeler: This model is found in the file /etc/models/Slope_degrees.gmd. From Image Interpreter: Select Topographic Analysis... | Slope.... Under Output units select Degree. To view or edit the model, click the View... button in the Surface Slope dialog.

Algorithm Source: ERDAS Slope uses a 3 by 3 window around each pixel to calculate the slope:

a b c
d e f
g h i

For pixel (x, y), the average changes in elevation in both x and y directions are calculated first. Note that the elevation (e) of the pixel of interest is not part of the calculation.

∆x1 = c – a    ∆y1 = a – g
∆x2 = f – d    ∆y2 = b – h
∆x3 = i – g    ∆y3 = c – i

where a ... i are the elevation values of the pixels in the 3 by 3 window, as shown above.

∆x = (∆x1 + ∆x2 + ∆x3) / (3 × sx)
∆y = (∆y1 + ∆y2 + ∆y3) / (3 × sy)

where:

sx = x pixel size
sy = y pixel size

Next, the resulting change in elevation is calculated:

∆z = √( (∆x)² + (∆y)² ) / 2

Finally, the slope angle is calculated:

slope (in degrees) = tan⁻¹(∆z) × 180 / π

Customization Inputs: n1_Indem Outputs: n2_slope_degree

Example The elevation of each pixel neighboring the pixel of interest (e) is given in the following example. Note that the elevation of the pixel of interest is not considered. Each pixel is 30 by 30 meters.

10 20 25
22  e  25
20 24 18

The average changes in elevation in the x and y directions are calculated as follows:

∆x = (15 + 3 – 2) / (3 × 30) = 0.178
∆y = (–10 – 4 + 7) / (3 × 30) = –0.078

Next, the resulting change in elevation is calculated:

∆z = √( (0.178)² + (–0.078)² ) / 2 = √0.0378 / 2 = 0.0972

Finally, the slope in degrees is calculated:

tan⁻¹(0.0972) × 180 / π = 5.55 degrees

☞ The trigonometric functions of the Modeler always return radians. When using a calculator, either set the trigonometric mode to radians and multiply by 180/π as above, or evaluate tan⁻¹(∆z) directly in degrees mode.
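A minimal Python sketch of the calculation for one 3 × 3 window (the function name and window layout are illustrative):

```python
import numpy as np

def slope_degrees(win, sx=30.0, sy=30.0):
    """Slope at the center of a 3x3 elevation window [[a,b,c],[d,e,f],[g,h,i]]."""
    (a, b, c), (d, e, f), (g, h, i) = win   # e itself is not used
    dx = ((c - a) + (f - d) + (i - g)) / (3 * sx)
    dy = ((a - g) + (b - h) + (c - i)) / (3 * sy)
    dz = np.sqrt(dx * dx + dy * dy) / 2
    return np.degrees(np.arctan(dz))

# Reproduces the worked example above (about 5.5 degrees):
slope_degrees([[10, 20, 25], [22, 0, 25], [20, 24, 18]])
```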

Slope - Percent

Computes the slope as a percentage based on a 3 by 3 neighborhood around each pixel. The input raster file is assumed to contain elevation values. In ERDAS IMAGINE, the relationship between percentage and degree expressions of slope is as follows:

♦ a 45° angle is considered a 100% slope
♦ a 90° angle is considered a 200% slope
♦ slopes less than 45° fall within the 1 - 100% range
♦ slopes between 45° and 90° are expressed as 100 - 200% slopes

The precise algorithm is given below.

Access From Spatial Modeler: This model is found in the file /etc/models/Slope_percent.gmd. From Image Interpreter: Select Topographic Analysis... | Slope.... Under Output units select Percent. To view or edit the model, click the View... button in the Surface Slope dialog.

Algorithm Source: ERDAS Slope uses a 3 by 3 window around each pixel to calculate the slope.

The pixels in the 3 x 3 window around pixel (x, y) are labeled as follows:

a  b  c
d  e  f
g  h  i

For pixel (x, y), the average changes in elevation in both x and y directions are calculated first.


Note that the elevation (e) of the pixel of interest is not part of the calculation.


∆x₁ = c − a    ∆x₂ = f − d    ∆x₃ = i − g
∆y₁ = a − g    ∆y₂ = b − h    ∆y₃ = c − i

where a ... i are the elevation values of the pixels in the 3 x 3 window as shown above. Then:

∆x = (∆x₁ + ∆x₂ + ∆x₃) / (3 × sx)
∆y = (∆y₁ + ∆y₂ + ∆y₃) / (3 × sy)

where:

sx = x pixel size
sy = y pixel size

Next the resulting change in elevation is calculated:

∆z = √((∆x)² + (∆y)²) / 2

Finally the slope is converted to percent:

if ∆z ≤ 1, percent slope = ∆z × 100
if ∆z > 1, percent slope = 200 − 100 / ∆z

Customization Inputs: n1_Indem Outputs: n2_slope_percent

Example The elevation of each pixel neighboring the pixel of interest (marked e below) is given in the following example. Each pixel is 30 by 30 meters.



10  20  25
22   e  25
20  24  18

The average changes in elevation in the x and y directions are calculated as follows:

∆x = (15 + 3 − 2) / (3 × 30) = 0.178

∆y = (−10 − 4 + 7) / (3 × 30) = −0.078

Next the combined change in elevation is calculated:

∆z = √((0.178)² + (−0.078)²) / 2 = √0.0378 / 2 = 0.0972

Finally the slope in percent is calculated. Since ∆z is less than one:

percent slope = 0.0972 × 100 = 9.72%
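The percent conversion itself can be sketched in a few lines. In this minimal Python fragment, dz is the combined change in elevation computed exactly as in the degrees model; the function name and the guard against division by zero are illustrative:

```python
import numpy as np

def percent_slope(dz):
    """Convert the combined elevation change dz to percent slope."""
    dz = np.asarray(dz, dtype=float)
    # dz <= 1 maps to 0-100%; dz > 1 maps to 100-200%
    return np.where(dz <= 1.0, dz * 100.0, 200.0 - 100.0 / np.maximum(dz, 1.0))

print(percent_slope(0.0972))   # -> 9.72, matching the example above
```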


Non-directional Edge

The Non-directional Edge filter (also known as the Sobel/Prewitt filter) is used for edge detection. It is based on the Sobel zero-sum convolution kernel. Most standard image processing filters are implemented as a single-pass moving window (kernel) convolution; examples include low-pass, edge-enhance, edge-detection, and summary filters. Two very common filters, Prewitt and Sobel, use orthogonal kernels convolved separately with the original image and then combined. Both of these filters are based on a calculation of the first derivative, or slope, in both the x and y directions. For this model, the Sobel filter has been selected. To convert this model to the Prewitt filter calculation, change the kernels according to the table below:

Sobel:

horizontal        vertical
-1  -2  -1         1   0  -1
 0   0   0         2   0  -2
 1   2   1         1   0  -1

Prewitt:

horizontal        vertical
-1  -1  -1         1   0  -1
 0   0   0         1   0  -1
 1   1   1         1   0  -1

➲ For more information on Edge Detection and the Sobel convolution kernel, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This function is found in the file /etc/models/Sobel_Filter.gmd. From Image Interpreter: Select Spatial Enhancement... | Non-directional Edge.... To view or edit the model, click the View... button in the Non-directional Edge dialog.

Algorithm Source: Pratt 1991


The Non-directional Edge filter model is derived from this algorithm:
1. Convolve the original image with the orthogonal first derivative kernels.
2. Combine the results as the square root of the sum of the squares.
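A minimal Python sketch of these two steps follows, assuming a 2-D image array; scipy.ndimage.convolve and its default border mode are illustrative choices. Substituting the Prewitt kernels from the table above yields the Prewitt variant:

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_H = np.array([[-1, -2, -1], [0, 0,  0], [1, 2,  1]], dtype=float)
SOBEL_V = np.array([[ 1,  0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)

def nondirectional_edge(image):
    gh = convolve(image.astype(float), SOBEL_H)   # step 1: orthogonal
    gv = convolve(image.astype(float), SOBEL_V)   # first-derivative kernels
    return np.sqrt(gh ** 2 + gv ** 2)             # step 2: root-sum-of-squares
```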

Customization Inputs: n1_lanier Outputs: n11_Insobel


TM Dehaze

When sunlight passes through an atmosphere containing haze (particulate matter), the resulting image is blurred by particle-induced scattering. The extent to which this happens to a particular image is called its point spread. Research has indicated that the fourth component of the Tasseled Cap (TC) transformation is highly correlated with the extent of haze in the atmosphere at the time the image was captured. This algorithm attempts to invert Tasseled Cap Component 4 (TC4) to remove the atmospherically induced blurring of the image.

Access From Spatial Modeler: This function is found in the file /etc/models/TMDehaze.gmd. From Image Interpreter on the ERDAS IMAGINE main menu, select Radiometric Enhancement... | Haze Reduction..., then enter the name of a Landsat TM data file such as lanier.img. Under Method choose Landsat 4 TM or Landsat 5 TM. To view or edit the model, click the View... button in the Haze Reduction dialog.

Algorithm Source: ERDAS
1. Calculate Tasseled Cap (TC) for the input image.
2. Create a plot of Tasseled Cap Component 4 (TC4) versus each TM layer.
3. Derive the slope (S) and intercept (I) of the plot in step 2 (above).
4. Correct each input pixel using the formula:

TM(corrected) = TM(input) − [(TC4 − I) × S]

Customization Inputs: n1_Klon_TM Outputs: n11_dehaze
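The correction in step 4 can be sketched as follows, assuming the slope S and intercept I have already been derived in step 3; the regression itself is omitted here because the manual does not specify which variable is plotted on which axis, and the names are illustrative:

```python
import numpy as np

def dehaze_layer(tm_layer, tc4, s, i):
    """Step 4: TM(corrected) = TM(input) - [(TC4 - I) x S], per layer."""
    return tm_layer.astype(float) - (tc4 - i) * s
```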


TM Destripe

This algorithm removes scan-line noise from Landsat TM imagery.

Access From Spatial Modeler: This model is found in the file /etc/models/TM_Destripe.gmd. From Image Interpreter: Select Radiometric Enhancement... | Destripe TM Data.... To view or edit the model, click the View... button in the Destripe TM Data dialog.

Algorithm Source: Crippen 1989
1. Apply a 1 x 101 low-pass filter to the image.
2. Apply a 33 x 1 high-pass filter to step 1 (above).
3. Apply a 1 x 31 low-pass filter to step 2 (above).
4. Subtract step 3 (above) from the original image.
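A minimal Python sketch of this cascade follows. The manual specifies only the kernel shapes, so treating the low-pass steps as box averages (uniform_filter) and the high-pass step as input-minus-average are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def destripe(image):
    img = image.astype(float)
    step1 = uniform_filter(img, size=(1, 101))            # 1 x 101 low-pass
    step2 = step1 - uniform_filter(step1, size=(33, 1))   # 33 x 1 high-pass
    step3 = uniform_filter(step2, size=(1, 31))           # 1 x 31 low-pass
    return img - step3                                    # subtract noise estimate
```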

Customization Inputs: n1_TM_striped Outputs: n5_Destriped


Tasseled Cap - TM

The Tasseled Cap transformation offers a way to optimize data viewing for vegetation studies. The different bands in a multispectral image can be visualized as defining an N-dimensional space, where N is the number of bands. Each pixel, positioned according to its DN value in each band, lies within the N-dimensional space. This pixel distribution is determined by the absorption/reflection spectra of the imaged material.

➲ For more information see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This model is found in the file /etc/models/TasseledCap_TM.gmd. From Image Interpreter: Select Spectral Enhancement... | Tasseled Cap.... To view or edit the model, click the View... button in the Tasseled Cap dialog.

Algorithm Source: Crist et al 1986
For Landsat 4 TM:

Brightness = 0.3037 × DN band 1 + 0.2793 × DN band 2 + ... + 0.1863 × DN band 7

Landsat 4 Matrix Band Coefficients

Feature       band 1   band 2   band 3   band 4   band 5   band 7
Brightness     .3037    .2793    .4743    .5585    .5082    .1863
Greenness     -.2848   -.2435   -.5436    .7243    .0840   -.1800
Wetness        .1509    .1973    .3279    .3406   -.7112   -.4572
Haze           .8832   -.0819   -.4580   -.0032   -.0563    .0130
Fifth          .0573   -.0260    .0335   -.1943    .4766   -.8545
Sixth          .1238   -.9038    .4041    .0573   -.0261    .0240
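Applied to all six components, the transformation is a matrix multiplication. The following minimal Python sketch assumes bands is a (6, rows, cols) array holding TM bands 1-5 and 7, in order; the names are illustrative:

```python
import numpy as np

# Landsat 4 coefficient matrix from the table above; rows are the six
# output features, columns are TM bands 1, 2, 3, 4, 5, 7.
TC_COEFFS = np.array([
    [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],  # Brightness
    [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],  # Greenness
    [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],  # Wetness
    [ 0.8832, -0.0819, -0.4580, -0.0032, -0.0563,  0.0130],  # Haze
    [ 0.0573, -0.0260,  0.0335, -0.1943,  0.4766, -0.8545],  # Fifth
    [ 0.1238, -0.9038,  0.4041,  0.0573, -0.0261,  0.0240],  # Sixth
])

def tasseled_cap(bands):
    """Each output component is a weighted sum of the six input bands."""
    return np.tensordot(TC_COEFFS, bands.astype(float), axes=1)
```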


Customization Inputs: n1_lanier Outputs: n4_Intassel


Topographic Normalization

Digital imagery from mountainous regions often contains a radiometric distortion known as the topographic effect. One way to reduce the topographic effect in digital imagery is by applying transformations such as the Lambertian or Non-Lambertian reflectance models. These models normalize the imagery, making it appear as if it were imaged from a flat surface rather than varied terrain. When using the Topographic Normalization model, you will need the following information:

♦ solar elevation and azimuth at the time of image acquisition
♦ DEM file
♦ original imagery file (after atmospheric corrections)

➲ For more information on Non-Lambertian models, see the “Terrain Analysis” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This model is found in the file /etc/models/Topo_Normalization.gmd. From Image Interpreter: Select Topographic Analysis... | Topographic Normalize.... To view or edit the model, click the View... button in the Lambertian Reflection Model dialog.

Algorithm Source: Hodgson et al. (1994), Colby (1991), Smith et al. (1980)
The Topographic Normalization model is derived from this algorithm:

BV normalλ = (BV observedλ × cos e) / (cosᵏ i × cosᵏ e)

where:

BV normalλ = normalized brightness values
BV observedλ = observed brightness values
cos i = cosine of the incidence angle
cos e = cosine of the exitance angle, or slope angle
k = the empirically derived Minnaert constant (if unknown, this may be set to 1.0 and the model becomes Lambertian)
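A minimal Python sketch of this formula follows, assuming the per-pixel cosine arrays (cos i from the DEM and sun position, cos e from the slope) have been computed elsewhere; the names are illustrative:

```python
import numpy as np

def topo_normalize(bv_observed, cos_i, cos_e, k=1.0):
    """Minnaert normalization; k = 1.0 reduces to the Lambertian model."""
    # BV_normal = BV_observed * cos e / (cos^k i * cos^k e)
    return bv_observed * cos_e / (cos_i ** k * cos_e ** k)
```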


Customization Inputs: n16_eldoatm Outputs: n18_eldonorm


Vector To Raster

This model converts vector data from a coverage into raster data. The Vector to Raster dialog allows selection of the resolution to be used for rasterization. An attribute of the coverage may be selected to define the value for rasterization.

Access From Spatial Modeler: This function is found in the file /etc/models/VectorToRaster.gmd. From Image Interpreter on the ERDAS IMAGINE main menu, select Utilities... | Vector To Raster.... To view or edit the model, click the View... button on the Vector to Raster dialog.

Customization Inputs: n1_zone88_ZONING Outputs: n3_zoning


Vegetation Indexes - NDVI

This group of algorithms contains a number of simple band combinations that are commonly used for either vegetation or mineral delineation. Indices are used extensively in mineral exploration and vegetation analyses to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences that cannot be observed in the display of the original color bands. The models included calculate:

♦ Clay Minerals = TM 5/7
♦ Ferrous Minerals = TM 5/4
♦ Ferric Minerals (iron oxide) = TM 3/1
♦ Mineral Composite = TM 5/7, 5/4, 3/1
♦ Hydrothermal Composite = TM 5/7, 3/1, 4/3

Source: Modified from Sabins 1987, Jensen 1986, Tucker 1979

The algorithm selected for this model is the most widely used vegetation index: the Normalized Difference Vegetation Index (NDVI), using Landsat TM imagery.

➲ For more information on indices, see the “Enhancement” chapter in the ERDAS Field Guide.

Access From Spatial Modeler: This model is found in the file /etc/models/Veg_NDVI.gmd. From Image Interpreter: Select Spectral Enhancement... | Indices.... Under Select Function choose NDVI. To view or edit the model, click the View... button in the Indices dialog.

Algorithm Source: Jensen 1986
The NDVI model is derived from this algorithm:

NDVI = (IR − R) / (IR + R)

Other algorithms available in this Image Interpreter function are:

Vegetation Index = TM4 − TM3
IR/R = TM4 / TM3
SQRT IR/R = √(TM4 / TM3)
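A minimal Python sketch of the NDVI formula follows, where for Landsat TM the infrared band is TM4 and the red band is TM3; the float cast and the guard against division by zero are illustrative additions:

```python
import numpy as np

def ndvi(tm4, tm3):
    """NDVI = (IR - R) / (IR + R), with TM4 as IR and TM3 as R."""
    ir, r = tm4.astype(float), tm3.astype(float)
    total = ir + r
    return np.divide(ir - r, total, out=np.zeros_like(total), where=total != 0)
```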

Customization Inputs: n1_lanier Outputs: n15_InNDVI

