Yet Another Haskell Tutorial

Hal Daumé III

Copyright (c) Hal Daume III, 2002-2006. The preprint version of this tutorial is intended to be free to the entire Haskell community. It may be distributed under the terms of the GNU Free Document License, as permission has been granted to incorporate it into the Wikibooks projects.

About This Report

The goal of Yet Another Haskell Tutorial is to provide a complete introduction to the Haskell programming language. It assumes no knowledge of the Haskell language or familiarity with functional programming in general. However, general familiarity with programming concepts (such as algorithms) will be helpful. This is not intended to be an introduction to programming in general; rather, it is an introduction to programming in Haskell. Sufficient familiarity with your operating system and a text editor is also necessary (this report only discusses installation and configuration on Windows and *nix systems; other operating systems may be supported – consult the documentation of your chosen compiler for more information on installing on other platforms).

What is Haskell?

Haskell is called a lazy, pure functional programming language. It is called lazy because expressions which are not needed to determine the answer to a problem are not evaluated. The opposite of lazy is strict, which is the evaluation strategy of most common programming languages (C, C++, Java, even ML). A strict language is one in which every expression is evaluated, whether the result of its computation is important or not. (This is not entirely true, as optimizing compilers for strict languages often perform "dead code elimination", which removes unused expressions from the program.) It is called pure because it does not allow side effects (a side effect is something that affects the "state" of the world; for instance, a function that prints something to the screen is said to be side-effecting, as is a function which affects the value of a global variable). Of course, a programming language without side effects would be horribly useless; Haskell uses a system of monads to isolate all impure computations from the rest of the program and perform them in a safe way (see Chapter 9 for a discussion of monads proper, or Chapter 5 for how to do input/output in a pure language). Haskell is called a functional language because the evaluation of a program is equivalent to evaluating a function in the pure mathematical sense. This also differs from standard languages (like C and Java), which evaluate a sequence of statements one after the other (such languages are termed imperative).
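As a tiny preview of how purity and effects coexist in practice, here is a sketch (the names double and greet are made up for illustration). The type of double says it can have no side effects, while the IO type on greet marks it as an effectful computation that the type system keeps separate:

-- A pure function: its result depends only on its argument.
double :: Int -> Int
double n = 2 * n

-- An effectful computation: the IO type isolates the side effect.
greet :: IO ()
greet = putStrLn ("Twice 21 is " ++ show (double 21))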


The History of Haskell

The history of Haskell is best described using the words of the authors. The following text is quoted from the published version of the Haskell 98 Report:

In September of 1987 a meeting was held at the conference on Functional Programming Languages and Computer Architecture (FPCA '87) in Portland, Oregon, to discuss an unfortunate situation in the functional programming community: there had come into being more than a dozen non-strict, purely functional programming languages, all similar in expressive power and semantic underpinnings. There was a strong consensus at this meeting that more widespread use of this class of functional languages was being hampered by the lack of a common language. It was decided that a committee should be formed to design such a language, providing faster communication of new ideas, a stable foundation for real applications development, and a vehicle through which others would be encouraged to use functional languages. This document describes the result of that committee's efforts: a purely functional programming language called Haskell, named after the logician Haskell B. Curry whose work provides the logical basis for much of ours.

The committee's primary goal was to design a language that satisfied these constraints:

1. It should be suitable for teaching, research, and applications, including building large systems.
2. It should be completely described via the publication of a formal syntax and semantics.
3. It should be freely available. Anyone should be permitted to implement the language and distribute it to whomever they please.
4. It should be based on ideas that enjoy a wide consensus.
5. It should reduce unnecessary diversity in functional programming languages.

The committee intended that Haskell would serve as a basis for future research in language design, and hoped that extensions or variants of the language would appear, incorporating experimental features. Haskell has indeed evolved continuously since its original publication. By the middle of 1997, there had been four iterations of the language design (the latest at that point being Haskell 1.4). At the 1997 Haskell Workshop in Amsterdam, it was decided that a stable variant of Haskell was needed; this stable language is the subject of this Report, and is called "Haskell 98". Haskell 98 was conceived as a relatively minor tidy-up of Haskell 1.4, making some simplifications, and removing some pitfalls for the unwary.

It is intended to be a "stable" language in the sense that the implementors are committed to supporting Haskell 98 exactly as specified, for the foreseeable future.

The original Haskell Report covered only the language, together with a standard library called the Prelude. By the time Haskell 98 was stabilised, it had become clear that many programs need access to a larger set of library functions (notably concerning input/output and simple interaction with the operating system). If these programs were to be portable, a set of libraries would have to be standardised too. A separate effort was therefore begun by a distinct (but overlapping) committee to fix the Haskell 98 Libraries.

Why Use Haskell?

Clearly you're interested in Haskell since you're reading this tutorial. There are many motivations for using Haskell. My personal reason for using Haskell is that I have found that I write more bug-free code in less time using Haskell than any other language. I also find it very readable and extensible. Perhaps most importantly, however, I have consistently found the Haskell community to be incredibly helpful. The language is constantly evolving (that's not to say it's unstable; rather, there are numerous extensions that have been added to some compilers which I find very useful) and user suggestions are often heeded when new extensions are to be implemented.

Why Not Use Haskell?

My two biggest complaints, and the complaints of most Haskellers I know, are: (1) the generated code tends to be slower than equivalent programs written in a language like C; and (2) it tends to be difficult to debug. The second problem tends not to be a very big issue: most of the code I've written is not buggy, as most of the common sources of bugs in other languages simply don't exist in Haskell. The first issue certainly has come up a few times in my experience; however, CPU time is almost always cheaper than programmer time, and I don't mind waiting a little longer for my results if I have saved a few days of programming and debugging. Of course, this isn't the case for all applications. Some people may find that the speed hit taken for using Haskell is unbearable. However, Haskell has a standardized foreign-function interface which allows you to link in code written in other languages, for when you need to get the most speed out of your code. If you don't find this sufficient, I would suggest taking a look at the language O'Caml, which often outperforms even C++, yet also has many of the benefits of Haskell.


Target Audience

There have been many books and tutorials written about Haskell; for a (nearly) complete list, visit the Haskell Bookshelf at the Haskell homepage: http://haskell.org/bookshelf. A brief survey of the tutorials available yields:

• A Gentle Introduction to Haskell is an introduction to Haskell, given that the reader is familiar with functional programming at large.
• Haskell Companion is a short reference of common concepts and definitions.
• Online Haskell Course is a short course (in German) for beginning with Haskell.
• Two Dozen Short Lessons in Haskell is the draft of an excellent textbook that emphasizes user involvement.
• Haskell Tutorial is based on a course given at the 3rd International Summer School on Advanced Functional Programming.
• Haskell for Miranda Programmers assumes knowledge of the language Miranda.

Though all of these tutorials are excellent, they are on their own incomplete: the "Gentle Introduction" is far too advanced for beginning Haskellers, and the others tend to end too early or not cover everything. Haskell is full of pitfalls for new programmers and experienced non-functional programmers alike, as can be witnessed by reading through the archives of the Haskell mailing list.

It became clear that there is a strong need for a tutorial which is introductory in the sense that it does not assume knowledge of functional programming, but which is advanced in the sense that it does assume some background in programming. Moreover, none of the known tutorials introduce input/output and interactivity soon enough (Paul Hudak's book is an exception in that it does introduce IO by page 35, though the focus and aim of that book and this tutorial are very different). This tutorial is not for beginning programmers; some experience and knowledge of programming and computers is assumed (though the appendix does contain some background information).

The Haskell language underwent a standardization process and the result is called Haskell 98. The majority of this book will cover the Haskell 98 standard. Any deviations from the standard will be noted (for instance, many compilers offer certain extensions to the standard which are useful; some of these may be discussed). The goals of this tutorial are:

• to be practical above all else
• to provide a comprehensive, free introduction to the Haskell language
• to point out common pitfalls and their solutions
• to provide a good sense of how Haskell can be used in the real world


Additional Online Sources of Information

A Short Introduction to Haskell: http://haskell.org/aboutHaskell.html
Haskell Wiki: http://haskell.org/hawiki/
Haskell-Tutorial: ftp://ftp.geoinfo.tuwien.ac.at/navratil/HaskellTutorial.pdf
Tour of the Haskell Prelude: http://www.cs.uu.nl/~afie/haskell/tourofprelude.html
Courses in Haskell: http://haskell.org/classes/

Acknowledgements

It would be inappropriate not to give credit also to the original designers of Haskell. Those are: Arvind, Lennart Augustsson, Dave Barton, Brian Boutel, Warren Burton, Jon Fairbairn, Joseph Fasel, Andy Gordon, Maria Guzman, Kevin Hammond, Ralf Hinze, Paul Hudak, John Hughes, Thomas Johnsson, Mark Jones, Dick Kieburtz, John Launchbury, Erik Meijer, Rishiyur Nikhil, John Peterson, Simon Peyton Jones, Mike Reeve, Alastair Reid, Colin Runciman, Philip Wadler, David Wise, Jonathan Young.

Finally, I would like to specifically thank Simon Peyton Jones, Simon Marlow, John Hughes, Alastair Reid, Koen Classen, Manuel Chakravarty, Sigbjorn Finne and Sven Panne, all of whom have made my life learning Haskell all the more enjoyable by always being supportive. There were doubtless others who helped and are not listed, but these are those who come to mind. Also thanks to the many people who have reported "bugs" in the first edition.

- Hal Daumé III


Contents

1 Introduction

2 Getting Started
  2.1 Hugs
    2.1.1 Where to get it
    2.1.2 Installation procedures
    2.1.3 How to run it
    2.1.4 Program options
    2.1.5 How to get help
  2.2 Glasgow Haskell Compiler
    2.2.1 Where to get it
    2.2.2 Installation procedures
    2.2.3 How to run the compiler
    2.2.4 How to run the interpreter
    2.2.5 Program options
    2.2.6 How to get help
  2.3 NHC
    2.3.1 Where to get it
    2.3.2 Installation procedures
    2.3.3 How to run it
    2.3.4 Program options
    2.3.5 How to get help
  2.4 Editors

3 Language Basics
  3.1 Arithmetic
  3.2 Pairs, Triples and More
  3.3 Lists
    3.3.1 Strings
    3.3.2 Simple List Functions
  3.4 Source Code Files
  3.5 Functions
    3.5.1 Let Bindings
    3.5.2 Infix
  3.6 Comments
  3.7 Recursion
  3.8 Interactivity

4 Type Basics
  4.1 Simple Types
  4.2 Polymorphic Types
  4.3 Type Classes
    4.3.1 Motivation
    4.3.2 Equality Testing
    4.3.3 The Num Class
    4.3.4 The Show Class
  4.4 Function Types
    4.4.1 Lambda Calculus
    4.4.2 Higher-Order Types
    4.4.3 That Pesky IO Type
    4.4.4 Explicit Type Declarations
    4.4.5 Functional Arguments
  4.5 Data Types
    4.5.1 Pairs
    4.5.2 Multiple Constructors
    4.5.3 Recursive Datatypes
    4.5.4 Binary Trees
    4.5.5 Enumerated Sets
    4.5.6 The Unit type
  4.6 Continuation Passing Style

5 Basic Input/Output
  5.1 The RealWorld Solution
  5.2 Actions
  5.3 The IO Library
  5.4 A File Reading Program

6 Modules
  6.1 Exports
  6.2 Imports
  6.3 Hierarchical Imports
  6.4 Literate Versus Non-Literate
    6.4.1 Bird-scripts
    6.4.2 LaTeX-scripts

7 Advanced Features
  7.1 Sections and Infix Operators
  7.2 Local Declarations
  7.3 Partial Application
  7.4 Pattern Matching
  7.5 Guards
  7.6 Instance Declarations
    7.6.1 The Eq Class
    7.6.2 The Show Class
    7.6.3 Other Important Classes
    7.6.4 Class Contexts
    7.6.5 Deriving Classes
  7.7 Datatypes Revisited
    7.7.1 Named Fields
  7.8 More Lists
    7.8.1 Standard List Functions
    7.8.2 List Comprehensions
  7.9 Arrays
  7.10 Finite Maps
  7.11 Layout
  7.12 The Final Word on Lists

8 Advanced Types
  8.1 Type Synonyms
  8.2 Newtypes
  8.3 Datatypes
    8.3.1 Strict Fields
  8.4 Classes
    8.4.1 Pong
    8.4.2 Computations
  8.5 Instances
  8.6 Kinds
  8.7 Class Hierarchies
  8.8 Default

9 Monads
  9.1 Do Notation
  9.2 Definition
  9.3 A Simple State Monad
  9.4 Common Monads
  9.5 Monadic Combinators
  9.6 MonadPlus
  9.7 Monad Transformers
  9.8 Parsing Monads
    9.8.1 A Simple Parsing Monad
    9.8.2 Parsec

10 Advanced Techniques
  10.1 Exceptions
  10.2 Mutable Arrays
  10.3 Mutable References
  10.4 The ST Monad
  10.5 Concurrency
  10.6 Regular Expressions
  10.7 Dynamic Types

A Brief Complexity Theory

B Recursion and Induction

C Solutions To Exercises

Chapter 1

Introduction

This tutorial contains a whole host of example code, all of which should have been included in its distribution. If not, please refer to the links off of the Haskell web site (haskell.org) to get it.

This book is formatted to make example code stand out from the rest of the text.

Code will look like this.

Occasionally, we will refer to interaction between you and the operating system and/or the interactive shell (more on this in Section 2).

Interaction will look like this.

Strewn throughout the tutorial, we will often make additional notes to something written. These are often for making comparisons to other programming languages or adding helpful information.

NOTE Notes will appear like this.

If we're covering a difficult or confusing topic and there is something you should watch out for, we will place a warning.

WARNING Warnings will appear like this.

Finally, we will sometimes make reference to built-in functions (so-called Prelude functions). This will look something like this:

map :: (a -> b) -> [a] -> [b]

Within the body text, Haskell keywords will appear like this: where, identifiers as map, types as String and classes as Eq.


Chapter 2

Getting Started

There are three well-known Haskell systems: Hugs, GHC and NHC. Hugs is exclusively an interpreter, meaning that you cannot compile stand-alone programs with it, but can test and debug programs in an interactive environment. GHC is both an interpreter (like Hugs) and a compiler which will produce stand-alone programs. NHC is exclusively a compiler. Which you use is entirely up to you. I've tried to make a list of some of the differences in the following list, but of course this is far from exhaustive:

Hugs - very fast to load files; slow to run them; implements almost all of Haskell 98 (the standard) and most extensions; built-in support for module browsing; cannot create stand-alones; written in C; works on almost every platform; built-in graphics library.

GHC - interactive environment is slower than Hugs to load, but allows function definitions in the environment (in Hugs you have to put them in a file); implements all of Haskell 98 and extensions; good support for interfacing with other languages; in a sense the "de facto" standard.

NHC - less used and no interactive environment, but produces smaller and often faster executables than does GHC; supports Haskell 98 and some extensions.

I, personally, have all of them installed and use them for different purposes. I tend to use GHC to compile (primarily because I'm most familiar with it) and the Hugs interactive environment, since it is much faster. As such, this is what I would suggest. However, that is a fair amount to download and install, so if you had to go with just one, I'd get GHC, since it contains both a compiler and an interactive environment.

Following is a description of how to download and install each of these as of the time this tutorial was written. It may have changed – see http://haskell.org (the Haskell website) for up-to-date information.


2.1 Hugs

Hugs supports almost all of the Haskell 98 standard (it lacks some of the libraries), as well as a number of advanced/experimental extensions, including: multi-parameter type classes, extensible records, rank-2 polymorphism, existentials, scoped type variables, and restricted type synonyms.

2.1.1 Where to get it

The official Hugs web page is at http://haskell.org/hugs. If you go there, there is a link titled "downloading" which will send you to the download page. From that page, you can download the appropriate version of Hugs for your computer.

2.1.2 Installation procedures

Once you've downloaded Hugs, installation differs depending on your platform; however, installation for Hugs is more or less identical to installation for any program on your platform.

For Windows, when you click on the "msi" file to download, simply choose "Run This Program" and the installation will begin automatically. From there, just follow the on-screen instructions.

For RPMs, use whatever RPM installation program you know best.

For source, first gunzip the file, then untar it. Presumably if you're using a system which isn't otherwise supported, you know enough about your system to be able to run configure scripts and make things by hand.

2.1.3 How to run it

On Unix machines, the Hugs interpreter is usually started with a command line of the form:

hugs [option | file] ...

On Windows, Hugs may be started by selecting it from the start menu or by double clicking on a file with the .hs or .lhs extension. (This manual assumes that Hugs has already been successfully installed on your system.)

Hugs uses options to set system parameters. These options are distinguished by a leading + or - and are used to customize the behaviour of the interpreter. When Hugs starts, the interpreter performs the following tasks:

• Options in the environment are processed. The variable HUGSFLAGS holds these options. On Windows 95/NT, the registry is also queried for Hugs option settings.

• Command line options are processed.


• Internal data structures are initialized. In particular, the heap is initialized, and its size is fixed at this point; if you want to run the interpreter with a heap size other than the default, then this must be specified using options on the command line, in the environment or in the registry.

• The prelude file is loaded. The interpreter will look for the prelude file on the path specified by the -P option. If the prelude, located in the file Prelude.hs, cannot be found in one of the path directories or in the current directory, then Hugs will terminate; Hugs will not run without the prelude file.

• Program files specified on the command line are loaded. The effect of a command hugs f1 ... fn is the same as starting up Hugs with the hugs command and then typing :load f1 ... fn. In particular, the interpreter will not terminate if a problem occurs while it is trying to load one of the specified files, but it will abort the attempted load command.

The environment variables and command line options used by Hugs are described in the following sections.

2.1.4 Program options

To list all of the options would take too much space. The most important option at this point is "+98" or "-98". When you start Hugs with "+98" it is in Haskell 98 mode, which turns off all extensions. When you start in "-98", you are in Hugs mode and all extensions are turned on. If you've downloaded someone else's code and you're having trouble loading it, first make sure you have the "98" flag set properly. Further information on the Hugs options is in the manual: http://cvs.haskell.org/Hugs/pages/hugsman/started.html

2.1.5 How to get help

To get Hugs specific help, go to the Hugs web page. To get general Haskell help, go to the Haskell web page.

2.2 Glasgow Haskell Compiler

The Glasgow Haskell Compiler (GHC) is a robust, fully-featured, optimising compiler and interactive environment for Haskell 98; GHC compiles Haskell to either native code or C. It implements numerous experimental language extensions to Haskell 98; for example: concurrency, a foreign language interface, multi-parameter type classes, scoped type variables, existential and universal quantification, unboxed types, exceptions, weak pointers, and so on. GHC comes with a generational garbage collector, and a space and time profiler.


2.2.1 Where to get it

Go to the official GHC web page http://haskell.org/ghc to download the latest release. The current version as of the writing of this tutorial is 5.04.2 and can be downloaded off of the GHC download page (follow the "Download" link). From that page, you can download the appropriate version of GHC for your computer.

2.2.2 Installation procedures

Once you've downloaded GHC, installation differs depending on your platform; however, installation for GHC is more or less identical to installation for any program on your platform.

For Windows, when you click on the "msi" file to download, simply choose "Run This Program" and the installation will begin automatically. From there, just follow the on-screen instructions.

For RPMs, use whatever RPM installation program you know best.

For source, first gunzip the file, then untar it. Presumably if you're using a system which isn't otherwise supported, you know enough about your system to be able to run configure scripts and make things by hand.

For a more detailed description of the installation procedure, look at the GHC users manual under "Installing GHC".

2.2.3 How to run the compiler

Running the compiler is fairly easy. Assuming that you have a program written with a main function in a file called Main.hs, you can compile it simply by writing:

% ghc --make Main.hs -o main

The "--make" option tells GHC that this is a program and not just a library, and that you want to build it and all modules it depends on. "Main.hs" stipulates the name of the file to compile; and the "-o main" means that you want to put the output in a file called "main".

NOTE In Windows, you should say "-o main.exe" to tell Windows that this is an executable file.

You can then run the program by simply typing "main" at the prompt.
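For concreteness, the Main.hs being compiled might look like the following minimal sketch (the greeting text is arbitrary and chosen only for illustration):

module Main where

-- main is the entry point of the compiled program.
main :: IO ()
main = putStrLn "Hello from GHC"

After "ghc --make Main.hs -o main", running "./main" (or "main" on Windows) should print the greeting.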

2.2.4 How to run the interpreter

GHCi is invoked with the command "ghci" or "ghc --interactive". One or more modules or filenames can also be specified on the command line; this instructs GHCi to load the specified modules or filenames (and all the modules they depend on), just as if you had said :load modules at the GHCi prompt.


2.2.5 Program options

To list all of the options would take too much space. The most important option at this point is "-fglasgow-exts". When you start GHCi without "-fglasgow-exts" it is in Haskell 98 mode, which turns off all extensions. When you start with "-fglasgow-exts", all extensions are turned on. If you've downloaded someone else's code and you're having trouble loading it, first make sure you have this flag set properly. Further information on the GHC and GHCi options is in the manual off of the GHC web page.

2.2.6 How to get help

To get GHC(i) specific help, go to the GHC web page. To get general Haskell help, go to the Haskell web page.

2.3 NHC

About NHC. . .

2.3.1 Where to get it

2.3.2 Installation procedures

2.3.3 How to run it

2.3.4 Program options

2.3.5 How to get help

2.4 Editors

With a good text editor, programming is fun. Of course, you can get along with a simplistic editor capable of just cut-n-paste, but a good editor is capable of doing most of the chores for you, letting you concentrate on what you are writing. With respect to programming in Haskell, a good text editor should have as many as possible of the following features:

• Syntax highlighting for source files
• Indentation of source files
• Interaction with a Haskell interpreter (be it Hugs or GHCi)
• Computer-aided code navigation
• Code completion


At the time of writing, several options were available: Emacs/XEmacs support Haskell via haskell-mode and accompanying Elisp code (available from http://www.haskell.org/haskel), and . . . What else is available? . . . (X)Emacs seems to do the best job, having all the features listed above. Indentation is aware of Haskell's 2-dimensional layout rules (see Section 7.11), is very smart, and has to be seen in action to be believed. You can quickly jump to the definition of a chosen function with the help of the "Definitions" menu, and the name of the currently edited function is always displayed in the modeline.

Chapter 3

Language Basics

In this chapter we present the basic concepts of Haskell. In addition to familiarizing you with the interactive environments and showing you how to compile a basic program, we introduce the basic syntax of Haskell, which will probably be quite alien if you are used to languages like C and Java.

However, before we talk about specifics of the language, we need to establish some general properties of Haskell. Most importantly, Haskell is a lazy language, which means that no computation takes place unless it is forced to take place when the result of that computation is used. This means, for instance, that you can define infinitely large data structures, provided that you never use the entire structure. For instance, using imperative-esque pseudo-code, we could create an infinite list containing the number 1 in each position by doing something like:


List makeList() {
  List current = new List();
  current.value = 1;
  current.next = makeList();
  return current;
}

By looking at this code, we can see what it's trying to do: it creates a new list, sets its value to 1 and then recursively calls itself to make the rest of the list. Of course, if you actually wrote this code and called it, the program would never terminate, because makeList would keep calling itself ad infinitum.

This is because we assume this imperative-esque language is strict, the opposite of lazy. Strict languages are often referred to as "call by value," while lazy languages are referred to as "call by name." In the above pseudo-code, when we "run" makeList on the fifth line, we attempt to get a value out of it. This leads to an infinite loop.

The equivalent code in Haskell is:



makeList = 1 : makeList


This program reads: we're defining something called makeList (this is what goes on the left-hand side of the equals sign). On the right-hand side, we give the definition of makeList. In Haskell, the colon operator is used to create lists (we'll talk more about this soon). This right-hand side says that the value of makeList is the element 1 stuck on to the beginning of the value of makeList. However, since Haskell is lazy (or "call by name"), we do not actually attempt to evaluate what makeList is at this point: we simply remember that if ever in the future we need the second element of makeList, we need to just look at makeList.

Now, if you attempt to write makeList to a file, print it to the screen, or calculate the sum of its elements, the operation won't terminate because it would have to evaluate an infinitely long list. However, if you simply use a finite portion of the list (say the first 10 elements), the fact that the list is infinitely long doesn't matter. If you only use the first 10 elements, only the first 10 elements are ever calculated. This is laziness.

Second, Haskell is case-sensitive. Many languages are, but Haskell actually uses case to give meaning. Haskell distinguishes between values (for instance, numbers: 1, 2, 3, ...; strings: "abc", "hello", ...; characters: 'a', 'b', ' ', ...; even functions: for instance, the function that squares a value, or the square-root function) and types (the categories to which values belong). By itself, this is not unusual. Most languages have some system of types. What is unusual is that Haskell requires that the names given to functions and values begin with a lower-case letter and that the names given to types begin with an upper-case letter. The moral is: if your otherwise correct program won't compile, be sure you haven't named your function Foo, or something else beginning with a capital letter.

Being a functional language, Haskell eschews side effects. A side effect is essentially something that happens in the course of executing a function that is not related to the output produced by that function. For instance, in a language like C or Java, you are able to modify "global" variables from within a function. This is a side effect because the modification of this global variable is not related to the output produced by the function. Furthermore, modifying the state of the real world is considered a side effect: printing something to the screen, reading a file, etc., are all side-effecting operations. Functions that do not have side effects are called pure. An easy test for whether or not a function is pure is to ask yourself a simple question: "Given the same arguments, will this function always produce the same result?"

All of this means that if you're used to writing code in an imperative language (like C or Java), you're going to have to start thinking differently. Most importantly, if you have a value x, you must not think of x as a register, a memory location or anything else of that nature. x is simply a name, just as "Hal" is my name. You cannot arbitrarily decide to store a different person in my name any more than you can arbitrarily decide to store a different value in x. This means that code that might look like the following C code is invalid (and has no counterpart) in Haskell:

int x = 5;
x = x + 1;

A call like x = x + 1 is called destructive update because we are destroying whatever was in x before and replacing it with a new value. Destructive update does not exist in Haskell.

By not allowing destructive updates (or any other such side-effecting operations), Haskell code is very easy to comprehend. That is, when we define a function f, and call that function with a particular argument a in the beginning of a program, and then, at the end of the program, again call f with the same argument a, we know we will get out the same result. This is because we know that a cannot have changed and because we know that f only depends on a (for instance, it didn't increment a global counter). This property is called referential transparency and basically states that if two functions f and g produce the same values for the same arguments, then we may replace f with g (and vice-versa).


NOTE There is no agreed-upon exact definition of referential transparency. The definition given above is the one I like best. They all carry the same interpretation; the differences lie in how they are formalized.
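To make laziness and referential transparency concrete, here is a small sketch (with made-up names, meant for a file you load into Hugs or GHCi alongside the makeList definition above):

-- Laziness: only the first five elements of the infinite list are computed.
firstFive = take 5 makeList          -- [1,1,1,1,1]

-- Referential transparency: double depends only on its argument,
-- so the two calls below are guaranteed to give the same answer.
double x = x + x
sameAnswer = double 7 == double 7    -- True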

3.1 Arithmetic

Let's begin our foray into Haskell with simple arithmetic. Start up your favorite interactive shell (Hugs or GHCi; see Chapter 2 for installation instructions). The shell will output to the screen a few lines talking about itself and what it's doing and then should finish with the cursor on a line reading:

Prelude>

From here, you can begin to evaluate expressions. An expression is basically something that has a value. For instance, the number 5 is an expression (its value is 5). Values can be built up from other values; for instance, 5 + 6 is an expression (its value is 11). In fact, most simple arithmetic operations are supported by Haskell, including plus (+), minus (-), times (*), divided-by (/), exponentiation (^) and square-root (sqrt). You can experiment with these by asking the interactive shell to evaluate expressions and to give you their value. In this way, a Haskell shell can be used as a powerful calculator. Try some of the following:

Prelude> 5*4+3
23
Prelude> 5^5-2
3123
Prelude> sqrt 2
1.4142135623730951


Prelude> 5*(4+3)
35


We can see that, in addition to the standard arithmetic operations, Haskell also allows grouping by parentheses, hence the difference between the values of 5*4+3 and 5*(4+3). The reason for this is that the "understood" grouping of the first expression is (5*4)+3, due to operator precedence.

Also note that parentheses aren't required around function arguments. For instance, we simply wrote sqrt 2, not sqrt(2), as would be required in most other languages. You could write it with the parentheses, but in Haskell, since function application is so common, parentheses aren't required.

WARNING Even though parentheses are not always needed, sometimes it is better to leave them in anyway; other people will probably have to read your code, and if extra parentheses make the intent of the code clearer, use them.

Now try entering 2^5000. Does it work?

NOTE If you're familiar with programming in other languages, you may find it odd that sqrt 2 comes back with a decimal point (i.e., is a floating point number) even though the argument to the function seems to be an integer. This interchangeability of numeric types is due to Haskell's system of type classes and will be discussed in detail in Section 4.3.

Exercises

Exercise 3.1 We've seen that multiplication binds more tightly than addition. Can you think of a way to determine whether function application binds more or less tightly than multiplication?

3.2 Pairs, Triples and More

In addition to single values, we should also address multiple values. For instance, we may want to refer to a position by its x/y coordinate, which would be a pair of integers. To make a pair of integers is simple: you enclose the pair in parentheses and separate them with a comma. Try the following:

Prelude> (5,3)
(5,3)



Here, we have a pair of integers, 5 and 3. In Haskell, the first element of a pair need not have the same type as the second element: that is, pairs are allowed to be heterogeneous. For instance, you can have a pair of an integer with a string. This contrasts with lists, which must be made up of elements of all the same type (we will discuss lists further in Section 3.3).

There are two predefined functions that allow you to extract the first and second elements of a pair. They are, respectively, fst and snd. You can see how they work below:

Prelude> fst (5, "hello")
5
Prelude> snd (5, "hello")
"hello"

In addition to pairs, you can define triples, quadruples etc. To define a triple and a quadruple, respectively, we write:

Prelude> (1,2,3)
(1,2,3)
Prelude> (1,2,3,4)
(1,2,3,4)

And so on. In general, pairs, triples, and so on are called tuples and can store fixed amounts of heterogeneous data.

NOTE The functions fst and snd won't work on anything longer than a pair; if you try to use them on a larger tuple, you will get a message stating that there was a type error. The meaning of this error message will be explained in Chapter 4.

Exercises

Exercise 3.2 Use a combination of fst and snd to extract the character out of the tuple ((1,'a'),"foo").

3.3 Lists

The primary limitation of tuples is that they hold only a fixed number of elements: pairs hold two, triples hold three, and so on. A data structure that can hold an arbitrary number of elements is a list. Lists are assembled in a very similar fashion to tuples, except that they use square brackets instead of parentheses. We can define a list like:



Prelude> [1,2]
[1,2]
Prelude> [1,2,3]
[1,2,3]


Lists don't need to have any elements. The empty list is simply []. Unlike tuples, we can very easily add an element on to the beginning of the list using the colon operator. The colon is called the "cons" operator; the process of adding an element is called "consing." The etymology of this is that we are constructing a new list from an element and an old list. We can see the cons operator in action in the following examples:

Prelude> 0:[1,2]
[0,1,2]
Prelude> 5:[1,2,3,4]
[5,1,2,3,4]

We can actually build any list by using the cons operator (the colon) and the empty list:

Prelude> 5:1:2:3:4:[]
[5,1,2,3,4]


In fact, the [5,1,2,3,4] syntax is “syntactic sugar” for the expression using the explicit cons operators and empty list. If we write something using the [5,1,2,3,4] notation, the compiler simply translates it to the expression using (:) and [].

NOTE In general, “syntactic sugar” is a strictly unnecessary language feature, which is added to make the syntax nicer.


One further difference between lists and tuples is that, while tuples are heterogeneous, lists must be homogenous. This means that you cannot have a list that holds both integers and strings. If you try to, a type error will be reported.

Of course, lists don't have to just contain integers or strings; they can also contain tuples or even other lists. Tuples, similarly, can contain lists and other tuples. Try some of the following:

Prelude> [(1,1),(2,4),(3,9),(4,16)]
[(1,1),(2,4),(3,9),(4,16)]
Prelude> ([1,2,3,4],[5,6,7])
([1,2,3,4],[5,6,7])


There are two basic list functions: head and tail. The head function returns the first element of a (non-empty) list, and the tail function returns all but the first element of a (non-empty) list. To get the length of a list, you use the length function:

Prelude> length [1,2,3,4,10]
5
Prelude> head [1,2,3,4,10]
1
Prelude> length (tail [1,2,3,4,10])
4

3.3.1 Strings

In Haskell, a String is simply a list of Chars. So, we can create the string "Hello" as:

Prelude> 'H':'e':'l':'l':'o':[]
"Hello"

Lists (and, of course, strings) can be concatenated using the ++ operator:

Prelude> "Hello " ++ "World"
"Hello World"

Additionally, non-string values can be converted to strings using the show function, and strings can be converted to non-string values using the read function. Of course, if you try to read a value that's malformed, an error will be reported (note that this is a run-time error, not a compile-time error):

Prelude> "Five squared is " ++ show (5*5)
"Five squared is 25"
Prelude> read "5" + 3
8
Prelude> read "Hello" + 3
Program error: Prelude.read: no parse

Above, the exact error message is implementation dependent. However, the interpreter has inferred that you're trying to add three to something. This means that when we execute read "Hello", we expect to be returned a number. However, "Hello" cannot be parsed as a number, so an error is reported.
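Note that read only worked above because adding 3 told the interpreter to expect a number. When nothing in the surrounding expression fixes the type, you can supply one yourself with a type annotation (annotations are covered properly in Chapter 4); a small sketch:

Prelude> read "5" :: Int
5
Prelude> read "[1,2,3]" :: [Int]
[1,2,3]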


3.3.2 Simple List Functions

Much of the computation in Haskell programs is done by processing lists. There are three primary list-processing functions: map, filter and foldr (also foldl).

The map function takes as arguments a list of values and a function that should be applied to each of the values. For instance, there is a built-in function Char.toUpper that takes as input a Char and produces a Char that is the upper-case version of the original argument. So, to convert an entire string (which is simply a list of characters) to upper case, we can map the toUpper function across the entire list:

Prelude> map Char.toUpper "Hello World"
"HELLO WORLD"

WARNING Hugs users: Hugs doesn't like qualified names like Char.toUpper. In Hugs, simply use toUpper.

When you map across a list, the length of the list never changes – only the individual values in the list change. To remove elements from the list, you can use the filter function. This function allows you to remove certain elements from a list depending on their value, but not on their context. For instance, the function Char.isLower tells you whether a given character is lower case. We can filter out all non-lowercase characters using this:

Prelude> filter Char.isLower "Hello World"
"elloorld"

The function foldr takes a little more getting used to. foldr takes three arguments: a function, an initial value and a list. The best way to think about foldr is that it replaces occurrences of the list cons operator (:) with the function parameter and replaces the empty list constructor ([]) with the initial value. Thus, if we have a list:

3 : 8 : 12 : 5 : []

and we apply foldr (+) 0 to it, we get:

3 + 8 + 12 + 5 + 0

which sums the list. We can test this:

Prelude> foldr (+) 0 [3,8,12,5]
28

We can perform the same sort of operation to calculate the product of all the elements on a list:

Prelude> foldr (*) 1 [4,8,5]
160
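If it helps to see this spelled out, foldr itself can be written as an ordinary recursive function. The following is a sketch of a possible definition (named myFoldr here to avoid clashing with the Prelude's version); it is not the Prelude's actual source, but it behaves the same way on the examples above:

-- Replace (:) with f and [] with z, working from the right.
myFoldr :: (a -> b -> b) -> b -> [a] -> b
myFoldr f z []     = z
myFoldr f z (x:xs) = f x (myFoldr f z xs)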


We said earlier that folding is like replacing (:) with a particular function and ([]) with an initial element. This raises a question as to what happens when the function isn't associative (a function (·) is associative if a · (b · c) = (a · b) · c). When we write 4 · 8 · 5 · 1, we need to specify where to put the parentheses. Namely, do we mean ((4 · 8) · 5) · 1 or 4 · (8 · (5 · 1))? foldr assumes the function is right-associative (i.e., the correct bracketing is the latter). Thus, when we use it on a non-associative function (like minus), we can see the effect:

Prelude> foldr (-) 1 [4,8,5]
0

The exact derivation of this looks something like:

    foldr (-) 1 [4,8,5]
==> 4 - (foldr (-) 1 [8,5])
==> 4 - (8 - foldr (-) 1 [5])
==> 4 - (8 - (5 - foldr (-) 1 []))
==> 4 - (8 - (5 - 1))
==> 4 - (8 - 4)
==> 4 - 4
==> 0

The foldl function goes the other way and effectively produces the opposite bracketing. foldl looks the same when applied, so we could have done summing just as well with foldl:

Prelude> foldl (+) 0 [3,8,12,5]
28

However, we get different results when using the non-associative function minus:

Prelude> foldl (-) 1 [4,8,5]
-16

This is because foldl uses the opposite bracketing. The way it accomplishes this is essentially by going down the list from the front, combining the initial value with the first element via the provided function, then combining that new value with the next element, and so on until there is no more list left. The derivation here proceeds in the opposite fashion:

    foldl (-) 1 [4,8,5]
==> foldl (-) (1 - 4) [8,5]
==> foldl (-) ((1 - 4) - 8) [5]
==> foldl (-) (((1 - 4) - 8) - 5) []
==> ((1 - 4) - 8) - 5
==> ((-3) - 8) - 5
==> (-11) - 5
==> -16

Note that once the foldl goes away, the parenthesization is exactly the opposite of the foldr.

NOTE foldl is often more efficient than foldr for reasons that we will discuss in Section 7.8. However, foldr can work on infinite lists, while foldl cannot. This is because before foldl does anything, it has to go to the end of the list. On the other hand, foldr starts producing output immediately. For instance, foldr (:) [] [1,2,3,4,5] simply returns the same list. Even if the list were infinite, it would produce output. A similar function using foldl would fail to produce any output.

If this discussion of the folding functions is still somewhat unclear, that's okay. We'll discuss them further in Section 7.8.
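For comparison with the foldr sketch above, here is an analogous sketch of foldl (again under a made-up name, myFoldl); the accumulator is combined with each element as the function walks down the list from the front:

-- The accumulator z picks up elements left-to-right.
myFoldl :: (b -> a -> b) -> b -> [a] -> b
myFoldl f z []     = z
myFoldl f z (x:xs) = myFoldl f (f z x) xs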

Exercises

Exercise 3.3 Use map to convert a string into a list of booleans, each element in the new list representing whether or not the original element was a lower-case character. That is, it should take the string "aBCde" and return [True,False,False,True,True].

Exercise 3.4 Use the functions mentioned in this section (you will need two of them) to compute the number of lower-case letters in a string. For instance, on "aBCde" it should return 3.

Exercise 3.5 We've seen how to calculate sums and products using folding functions. Given that the function max returns the maximum of two numbers, write a function using a fold that will return the maximum value in a list (and zero if the list is empty). So, when applied to [5,10,2,8,1] it will return 10. Assume that the values in the list are always ≥ 0. Explain to yourself why it works.

Exercise 3.6 Write a function that takes a list of pairs of length at least 2 and returns the first component of the second element in the list. So, when provided with [(5,'b'),(1,'c'),(6,'a')], it will return 1.

3.4 Source Code Files

As programmers, we don't want to simply evaluate small expressions like these – we want to sit down, write code in our editor of choice, save it and then use it. We already saw in Sections 2.2 and 2.3 how to write a Hello World program and how to compile it. Here, we show how to use functions defined in a source-code file in the interactive environment. To do this, create a file called Test.hs and enter the following code:


module Test where

x = 5
y = (6, "Hello")
z = x * fst y

This is a very simple "program" written in Haskell. It defines a module called "Test" (in general module names should match file names; see Section 6 for more on this). In this module, there are three definitions: x, y and z. Once you've written and saved this file, in the directory in which you saved it, load this in your favorite interpreter, by executing either of the following:

% hugs Test.hs
% ghci Test.hs

This will start Hugs or GHCi, respectively, and load the file. Alternatively, if you already have one of them loaded, you can use the ":load" command (or just ":l") to load a module, as:

Prelude> :l Test.hs
...
Test>

Between the first and last line, the interpreter will print various data to explain what it is doing. If any errors appear, you probably mistyped something in the file; double check and then try again.

You'll notice that where it used to say "Prelude" it now says "Test." That means that Test is the current module. You're probably thinking that "Prelude" must also be a module. Exactly correct. The Prelude module (usually simply referred to as "the Prelude") is always loaded and contains the standard definitions (for instance, the (:) operator for lists, or (+) or (*), fst, snd and so on).

Now that we've loaded Test, we can use things that were defined in it. For example:

Test> x
5
Test> y
(6,"Hello")
Test> z
30

Perfect, just as we expected!

One final issue regarding how to compile programs to stand-alone executables remains. In order for a program to be an executable, it must have the module name



"Main" and must contain a function called main. So, if you go in to Test.hs and rename it to "Main" (change the line that reads module Test to module Main), we simply need to add a main function. Try this:

main = putStrLn "Hello World"

Now, save the file, and compile it (refer back to Section 2 for information on how to do this for your compiler). For example, in GHC, you would say:

% ghc --make Test.hs -o test

NOTE For Windows, it would be "-o test.exe"

This will create a file called "test" (or on Windows, "test.exe") that you can then run.

% ./test
Hello World

NOTE Or, on Windows:

C:\> test.exe
Hello World
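Putting those two changes together, the whole file would look something like the following sketch (the file can keep the name Test.hs as long as the module itself is renamed Main):

module Main where

x = 5
y = (6, "Hello")
z = x * fst y

-- The entry point required for a stand-alone executable.
main = putStrLn "Hello World"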

3.5 Functions

Now that we've seen how to write code in a file, we can start writing functions. As you might have expected, functions are central to Haskell, as it is a functional language. This means that the evaluation of a program is simply the evaluation of a function. We can write a simple function to square a number and enter it into our Test.hs file. We might define this as follows:

square x = x * x

In this function definition, we say that we're defining a function square that takes one argument (aka parameter), which we call x. We then say that the value of square x is equal to x * x.

Haskell also supports standard conditional expressions. For instance, we could define a function that returns −1 if its argument is less than 0; 0 if its argument is 0; and 1 if its argument is greater than 0 (this is called the signum function):


signum x =
  if x < 0
    then -1
    else if x > 0
      then 1
      else 0

You can experiment with this as:

Test> signum 5
1
Test> signum 0
0
Test> signum (5-10)
-1
Test> signum (-1)
-1

Note that the parentheses around "-1" in the last example are required; if missing, the system will think you are trying to subtract the value "1" from the value "signum," which is ill-typed.

The if/then/else construct in Haskell is very similar to that of most other programming languages; however, you must have both a then and an else clause. It evaluates the condition (in this case x < 0) and, if this evaluates to True, it evaluates the then condition; if the condition evaluated to False, it evaluates the else condition.

You can test this program by editing the file and loading it back into your interpreter. If Test is already the current module, instead of typing :l Test.hs again, you can simply type :reload or just :r to reload the current file. This is usually much faster.

Haskell, like many other languages, also supports case constructions. These are used when there are multiple values that you want to check against (case expressions are actually quite a bit more powerful than this – see Section 7.4 for all of the details). Suppose we wanted to define a function that had a value of 1 if its argument were 0; a value of 5 if its argument were 1; a value of 2 if its argument were 2; and a value of −1 in all other instances. Writing this function using if statements would be long and very unreadable; so we write it using a case statement as follows (we call this function f):

f x = case x of
  0 -> 1
  1 -> 5
  2 -> 2
  _ -> -1


In this program, we’re defining f to take an argument x and then inspect the value of x. If it matches 0, the value of f is 1. If it matches 1, the value of f is 5. If it maches 2, then the value of f is 2; and if it hasn’t matched anything by that point, the value of f is −1 (the underscore can be thought of as a “wildcard” – it will match anything) . The indentation here is important. Haskell uses a system called “layout” to structure its code (the programming language Python uses a similar system). The layout system allows you to write code without the explicit semicolons and braces that other languages like C and Java require. WARNING Because whitespace matters in Haskell, you need to be careful about whether you are using tabs or spaces. If you can configure your editor to never use tabs, that’s probably better. If not, make sure your tabs are always 8 spaces long, or you’re likely to run in to problems.

The general rule for layout is that an open-brace is inserted after the keywords where, let, do and of, and the column position at which the next command appears is remembered. From then on, a semicolon is inserted before every new line that is indented the same amount. If a following line is indented less, a close-brace is inserted. This may sound complicated, but if you follow the general rule of indenting after each of those keywords, you’ll never have to remember it (see Section 7.11 for a more complete discussion of layout).
Some people prefer not to use layout and write the braces and semicolons explicitly. This is perfectly acceptable. In this style, the above function might look like:

f x = case x of { 0 -> 1 ; 1 -> 5 ; 2 -> 2 ; _ -> -1 }

Of course, if you write the braces and semicolons explicitly, you’re free to structure the code as you wish. The following is also equally valid:

f x =
  case x of { 0 -> 1 ;
      1 -> 5 ; 2 -> 2
   ; _ -> -1 }

However, structuring your code like this only serves to make it unreadable (in this case).
Functions can also be defined piece-wise, meaning that you can write one version of your function for certain parameters and then another version for other parameters. For instance, the above function f could also be written as:

f 0 = 1
f 1 = 5
f 2 = 2
f _ = -1


Here, the order is important. If we had put the last line first, it would have matched every argument, and f would return -1, regardless of its argument (most compilers will warn you about this, though, saying something about overlapping patterns). If we had not included this last line, f would produce an error if anything other than 0, 1 or 2 were applied to it (most compilers will warn you about this, too, saying something about incomplete patterns). This style of piece-wise definition is very popular and will be used quite frequently throughout this tutorial. These two definitions of f are actually equivalent – this piece-wise version is translated into the case expression.
More complicated functions can be built from simpler functions using function composition. Function composition is simply taking the result of the application of one function and using that as an argument for another. We’ve already seen this way back in arithmetic (Section 3.1), when we wrote 5*4+3. In this, we were evaluating 5 ∗ 4 and then applying +3 to the result. We can do the same thing with our square and f functions:

Test> square (f 1)
25
Test> square (f 2)
4
Test> f (square 1)
5
Test> f (square 2)
-1

The result of each of these function applications is fairly straightforward. The parentheses around the inner function are necessary; otherwise, in the first line, the interpreter would think that you were trying to get the value of “square f,” which has no meaning. Function application like this is fairly standard in most programming languages. There is another, more mathematically oriented, way to express function composition, using the (.) (just a single period) function. This (.) function is supposed to look like the (◦) operator in mathematics. NOTE In mathematics we write f ◦ g to mean “f following g,” in Haskell we write f . g also to mean “f following g.” The meaning of f ◦ g is simply that (f ◦ g)(x) = f (g(x)). That is, applying the value x to the function f ◦ g is the same as applying it to g, taking the result, and then applying that to f . The (.) function (called the function composition function), takes two functions and makes them in to one. For instance, if we write (square . f), this means that it creates a new function that takes an argument, applies f to that argument and then applies square to the result. Conversely, (f . square) means that it creates a new function that takes an argument, applies square to that argument and then applies f to the result. We can see this by testing it as before: Test> (square . f) 1 25


Test> (square . f) 2
4
Test> (f . square) 1
5
Test> (f . square) 2
-1

Here, we must enclose the function composition in parentheses; otherwise, the Haskell compiler will think we’re trying to compose square with the value f 1 in the first line, which makes no sense since f 1 isn’t even a function.
It would probably be wise to take a little time-out to look at some of the functions that are defined in the Prelude. Undoubtedly, at some point, you will accidentally rewrite some already-existing function (I’ve done it more times than I can count), but if we can keep this to a minimum, that would save a lot of time. Here are some simple functions, some of which we’ve already seen:

sqrt  the square root function
id    the identity function: id x = x
fst   extracts the first element from a pair
snd   extracts the second element from a pair
null  tells you whether or not a list is empty
head  returns the first element of a non-empty list
tail  returns everything but the first element of a non-empty list
++    concatenates two lists
==    checks to see if two elements are equal
/=    checks to see if two elements are unequal

Here, we show example usages of each of these functions:

Prelude> sqrt 2
1.41421
Prelude> id "hello"
"hello"
Prelude> id 5
5
Prelude> fst (5,2)
5
Prelude> snd (5,2)
2
Prelude> null []
True
Prelude> null [1,2,3,4]
False
Prelude> head [1,2,3,4]
1
Prelude> tail [1,2,3,4]
[2,3,4]
Prelude> [1,2,3] ++ [4,5,6]
[1,2,3,4,5,6]
Prelude> [1,2,3] == [1,2,3]
True
Prelude> 'a' /= 'b'
True
Prelude> head []
Program error: {head []}

We can see that applying head to an empty list gives an error (the exact error message depends on whether you’re using GHCi or Hugs – the shown error message is from Hugs).

3.5.1 Let Bindings

Often we wish to provide local declarations for use in our functions. For instance, if you remember back to your grade school mathematics courses, the following equation is used to find the roots (zeros) of a polynomial of the form ax² + bx + c = 0: x = (−b ± √(b² − 4ac)) / (2a). We could write the following function to compute the two values of x:

roots a b c =
    ((-b + sqrt(b*b - 4*a*c)) / (2*a),
     (-b - sqrt(b*b - 4*a*c)) / (2*a))

Notice that the expression sqrt(b*b - 4*a*c) is written (and computed) twice. To remedy this problem, Haskell allows for local bindings. That is, we can create values inside of a function that only that function can see. For instance, we could create a local binding for sqrt(b*b-4*a*c) and call it, say, det and then use that in both places where sqrt(b*b - 4*a*c) occurred. We can do this using a let/in declaration:

roots a b c =
    let det = sqrt (b*b - 4*a*c)
    in  ((-b + det) / (2*a),
         (-b - det) / (2*a))

In fact, you can provide multiple declarations inside a let. Just make sure they’re indented the same amount, or you will have layout problems:

roots a b c =
    let det     = sqrt (b*b - 4*a*c)
        twice_a = 2*a
    in  ((-b + det) / twice_a,
         (-b - det) / twice_a)
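As a quick check (this session is illustrative and not from the original text; it assumes the let-based roots has been added to Test.hs and the file reloaded), we can try the function on polynomials whose roots we already know:

Test> roots 1 (-3) 2
(2.0,1.0)
Test> roots 1 0 (-4)
(2.0,-2.0)

The first call finds the roots of x² − 3x + 2, the second the roots of x² − 4.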


3.5.2 Infix Infix functions are ones that are composed of symbols, rather than letters. For instance, (+), (*), (++) are all infix functions. You can use them in non-infix mode by enclosing them in parentheses. Hence, the two following expressions are the same: Prelude> 5 + 10 15 Prelude> (+) 5 10 15 Similarly, non-infix functions (like map) can be made infix by enclosing them in backquotes (the ticks on the tilde key on American keyboards): Prelude> map Char.toUpper "Hello World" "HELLO WORLD" Prelude> Char.toUpper ‘map‘ "Hello World" "HELLO WORLD"

WARNING Hugs users: Hugs doesn’t like qualified names like Char.toUpper. In Hugs, simply use toUpper.
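To get a feel for backquoting, here is an illustrative session (not from the original text) using two standard Prelude functions, div and elem, first in ordinary prefix form and then in backquoted infix form:

Prelude> div 10 3
3
Prelude> 10 `div` 3
3
Prelude> elem 3 [1,2,4]
False
Prelude> 3 `elem` [1,2,3]
True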

3.6 Comments

There are two types of comments in Haskell: line comments and block comments. Line comments begin with the token -- and extend until the end of the line. Block comments begin with {- and extend to a corresponding -}. Block comments can be nested.

NOTE The -- in Haskell corresponds to // in C++ or Java, and {- and -} correspond to /* and */.

Comments are used to explain your program in English and are completely ignored by compilers and interpreters. For example:

module Test2 where

main = putStrLn "Hello World" -- write a string
                              -- to the screen

{- f is a function which takes an integer and
   produces an integer.  {- this is an embedded
   comment -} the original comment extends to the
   matching end-comment token: -}
f x =
  case x of
    0 -> 1  -- 0 maps to 1
    1 -> 5  -- 1 maps to 5
    2 -> 2  -- 2 maps to 2
    _ -> -1 -- everything else maps to -1

This example program shows the use of both line comments and (embedded) block comments.

3.7 Recursion

In imperative languages like C and Java, the most basic control structure is a loop (like a for loop). However, for loops don’t make much sense in Haskell because they require destructive update (the index variable is constantly being updated). Instead, Haskell uses recursion. A function is recursive if it calls itself (see Appendix B for more). Recursive functions exist also in C and Java but are used less than they are in functional languages. The prototypical recursive function is the factorial function. In an imperative language, you might write this as something like:

int factorial(int n) {
  int fact = 1;
  for (int i=2; i <= n; i++)
    fact = fact * i;
  return fact;
}

While this code fragment will successfully compute factorials for positive integers, it somehow ignores the basic definition of factorial, usually given as:

    n! = 1                if n = 1
    n! = n ∗ (n − 1)!     otherwise

This definition itself is exactly a recursive definition: namely the value of n! depends on the value of (n − 1)!. If you think of ! as a function, then it is calling itself. We can translate this definition almost verbatim into Haskell code:

factorial 1 = 1
factorial n = n * factorial (n-1)

This is likely the simplest recursive function you’ll ever see, but it is correct.
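If you add factorial to Test.hs and reload, you can try it out (an illustrative session, not part of the original text):

Test> factorial 5
120
Test> factorial 10
3628800

Note that this definition has no equation for 0, so factorial 0 would never reach the base case (it would keep calling itself on -1, -2, and so on); choosing base cases carefully matters, as discussed below.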


NOTE Of course, an imperative recursive version could be written:

int factorial(int n) {
  if (n == 1)
    return 1;
  else
    return n * factorial(n-1);
}

but this is likely to be much slower than the loop version in C.

Recursion can be a difficult thing to master. It is completely analogous to the concept of induction in mathematics (see Chapter B for a more formal treatment of this). However, usually a problem can be thought of as having one or more base cases and one or more recursive cases. In the case of factorial, there is one base case (when n = 1) and one recursive case (when n > 1). For designing your own recursive algorithms, it is often useful to try to differentiate these two cases.
Turning now to the task of exponentiation, suppose that we have two positive integers a and b, and that we want to calculate a^b. This problem has a single base case: namely when b is 1. The recursive case is when b > 1. We can write a general form as:

    a^b = a                  if b = 1
    a^b = a ∗ a^(b−1)        otherwise

Again, this translates directly into Haskell code:

exponent a 1 = a
exponent a b = a * exponent a (b-1)

Just as we can define recursive functions on integers, so can we define recursive functions on lists. In this case, usually the base case is the empty list [], and the recursive case is a cons list (i.e., a value consed on to another list). Consider the task of calculating the length of a list. We can again break this down into two cases: either we have an empty list or we have a non-empty list. Clearly the length of an empty list is zero. Furthermore, if we have a cons list, then the length of this list is just the length of its tail plus one. Thus, we can define a length function as:

my_length []     = 0
my_length (x:xs) = 1 + my_length xs

NOTE Whenever we provide alternative definitions for standard Haskell functions, we prefix them with my so the compiler doesn’t become confused.
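As an illustrative check (not from the original text), my_length behaves just like the standard length, including on strings, since a string is just a list of characters:

Test> my_length []
0
Test> my_length [1,2,3,4,5]
5
Test> my_length "hello"
5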


Similarly, we can consider the filter function. Again, the base case is the empty list, and the recursive case is a cons list. However, this time, we’re choosing whether to keep an element, depending on whether or not a particular predicate holds. We can define the filter function as:

my_filter p [] = []
my_filter p (x:xs) =
  if p x
    then x : my_filter p xs
    else my_filter p xs

In this code, when presented with an empty list, we simply return an empty list. This is because filter cannot add elements; it can only remove them. When presented with a list of the form (x:xs), we need to decide whether or not to keep the value x. To do this, we use an if statement and the predicate p. If p x is true, then we return a list that begins with x followed by the result of filtering the tail of the list. If p x is false, then we exclude x and return the result of filtering the tail of the list.
We can also define map and both fold functions using explicit recursion. See the exercises for the definition of map and Chapter 7 for the folds.
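Here is an illustrative session (not from the original text) applying the my_filter function defined above with the standard Prelude predicates even and odd:

Test> my_filter even [1,2,3,4,5,6]
[2,4,6]
Test> my_filter odd [1,2,3,4,5,6]
[1,3,5]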

Exercises

Exercise 3.7 The Fibonacci sequence is defined by:

    Fn = 1              if n = 1 or n = 2
    Fn = Fn−2 + Fn−1    otherwise

Write a recursive function fib that takes a positive integer n as a parameter and calculates Fn.

Exercise 3.8 Define a recursive function mult that takes two positive integers a and b and returns a*b, but only uses addition (i.e., no fair just using multiplication). Begin by making a mathematical definition in the style of the previous exercise and the rest of this section.

Exercise 3.9 Define a recursive function my_map that behaves identically to the standard function map.

3.8 Interactivity If you are familiar with books on other (imperative) languages, you might be wondering why you haven’t seen many of the standard programs written in tutorials of other languages (like ones that ask the user for his name and then says “Hi” to him by name). The reason for this is simple: Being a pure functional language, it is not entirely clear how one should handle operations like user input.


After all, suppose you have a function that reads a string from the keyboard. If you call this function twice, and the user types something the first time and something else the second time, then you no longer have a function, since it would return two different values. The solution to this was found in the depths of category theory, a branch of formal mathematics: monads. We’re not yet ready to talk about monads formally, but for now, think of them simply as a convenient way to express operations like input/output. We’ll discuss them in this context much more in Chapter 5 and then discuss monads for monads’ sake in Chapter 9. Suppose we want to write a function that’s interactive. The way to do this is to use the do keyword. This allows us to specify the order of operations (remember that normally, since Haskell is a lazy language, the order in which operations are evaluated in it is unspecified). So, to write a simple program that asks a user for his name and then address him directly, enter the following code into “Name.hs”: module Main where import IO main = do hSetBuffering stdin LineBuffering putStrLn "Please enter your name: " name <- getLine putStrLn ("Hello, " ++ name ++ ", how are you?")

NOTE The parentheses are required on the second instance of putStrLn but not the first. This is because function application binds more tightly than ++, so without the parentheses, the second would be interpreted as (putStrLn "Hello, ") ++ name ++ .... You can then either load this code in your interpreter and execute main by simply typing “main,” or you can compile it and run it from the command line. I’ll show the results of the interactive approach: Main> main Please enter your name: Hal Hello, Hal, how are you? Main> And there’s interactivity. Let’s go back and look at the code a little, though. We name the module “Main,” so that we can compile it. We name the primary function “main,” so that the compile knows that this is the function to run when the program is run. On the fourth line, we import the IO library, so that we can access the IO


functions. On the seventh line, we start with do, telling Haskell that we’re executing a sequence of commands. The first command is hSetBuffering stdin LineBuffering, which you should probably ignore for now (incidentally, this is only required by GHC – in Hugs you can get by without it). The necessity for this is because, when GHC reads input, it expects to read it in rather large blocks. A typical person’s name is nowhere near large enough to fill this block. Thus, when we try to read from stdin, it waits until it’s gotten a whole block. We want to get rid of this, so we tell it to use LineBuffering instead of block buffering. The next command is putStrLn, which prints a string to the screen. On the ninth line, we say “name <- getLine.” This would normally be written “name = getLine,” but using the arrow instead of the equal sign shows that getLine isn’t a real function and can return different values. This command means “run the action getLine, and store the results in name.” The last line constructs a string using what we read in the previous line and then prints it to the screen. Another example of a function that isn’t really a function would be one that returns a random value. In this context, a function that does this is called randomRIO. Using this, we can write a “guess the number” program. Enter the following code into “Guess.hs”: module Main where import IO import Random main = do hSetBuffering stdin LineBuffering num <- randomRIO (1::Int, 100) putStrLn "I’m thinking of a number between 1 and 100" doGuessing num doGuessing num = do putStrLn "Enter your guess:" guess <- getLine let guessNum = read guess if guessNum < num then do putStrLn "Too low!" doGuessing num else if read guess > num then do putStrLn "Too high!" doGuessing num else do putStrLn "You Win!" Let’s examine this code. On the fifth line we write “import Random” to tell the


compiler that we’re going to be using some random functions (these aren’t built into the Prelude). In the first line of main, we ask for a random number in the range (1, 100). We need to write ::Int to tell the compiler that we’re using integers here – not floating point numbers or other numbers. We’ll talk more about this in Section 4. On the next line, we tell the user what’s going on, and then, on the last line of main, we tell the compiler to execute the command doGuessing. The doGuessing function takes the number the user is trying to guess as an argument. First, it asks the user to guess and then accepts their guess (which is a String) from the keyboard. The if statement checks first to see if their guess is too low. However, since guess is a string, and num is an integer, we first need to convert guess to an integer by reading it. Since “read guess” is a plain, pure function (and not an IO action), we don’t need to use the <- notation (in fact, we cannot); we simply bind the value to guessNum. Note that while we’re in do notation, we don’t need ins for lets. If they guessed too low, we inform them and then start doGuessing over again. If they didn’t guess too low, we check to see if they guessed too high. If they did, we tell them and start doGuessing again. Otherwise, they didn’t guess too low and they didn’t guess too high, so they must have gotten it correct. We tell them that they won and exit. The fact that we exit is implicit in the fact that there are no commands following this. We don’t need an explicit return () statement. You can either compile this code or load it into your interpreter, and you will get something like: Main> main I’m thinking of a number between 1 and 100 Enter your guess: 50 Too low! Enter your guess: 75 Too low! Enter your guess: 85 Too high! Enter your guess: 80 Too high! Enter your guess: 78 Too low! Enter your guess: 79 You Win! The recursive action that we just saw doesn’t actually return a value that we use in any way. In the case when it does, the “obvious” way to write the command is actually


incorrect. Here, we will give the incorrect version, explain why it is wrong, then give the correct version. Let’s say we’re writing a simple program that repeatedly asks the user to type in a few words. If at any point the user enters the empty word (i.e., he just hits enter without typing anything), the program prints out everything he’s typed up until that point and then exits. The primary function (actually, an action) in this program is one that asks the user for a word, checks to see if it’s empty, and then either continues or ends. The incorrect formulation of this might look something like: askForWords = do putStrLn "Please enter a word:" word <- getLine if word == "" then return [] else return (word : askForWords) Before reading ahead, see if you can figure out what is wrong with the above code. The error is on the last line, specifically with the term word : askForWords. Remember that when using (:), we are making a list out of an element (in this case word) and another list (in this case, askForWords). However, askForWords is not a list; it is an action that, when run, will produce a list. That means that before we can attach anything to the front, we need to run the action and take the result. In this case, we want to do something like: askForWords = do putStrLn "Please enter a word:" word <- getLine if word == "" then return [] else do rest <- askForWords return (word : rest) Here, we first run askForWords, take the result and store it in the variable rest. Then, we return the list created from word and rest. By now, you should have a good understanding of how to write simple functions, compile them, test functions and programs in the interactive environment, and manipulate lists.
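One way to see the corrected askForWords in action is to wrap it in a main action that prints whatever was collected. This is a sketch, not part of the original text; the surrounding main below is hypothetical, and mapM_ is a standard Prelude action that runs putStrLn on each word in order:

main = do ws <- askForWords
          putStrLn ("You entered " ++ show (length ws) ++ " words:")
          mapM_ putStrLn ws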

Exercises Exercise 3.10 Write a program that will repeatedly ask the user for numbers until she types in zero, at which point it will tell her the sum of all the numbers, the product of all the numbers, and, for each number, its factorial. For instance, a session might look like:

Give me a number (or 0 to stop): 5
Give me a number (or 0 to stop): 8
Give me a number (or 0 to stop): 2
Give me a number (or 0 to stop): 0
The sum is 15
The product is 80
5 factorial is 120
8 factorial is 40320
2 factorial is 2

Hint: write an IO action that reads a number and, if it’s zero, returns the empty list. If it’s not zero, it recurses itself and then makes a list out of the number it just read and the result of the recursive call.

Chapter 4

Type Basics Haskell uses a system of static type checking. This means that every expression in Haskell is assigned a type. For instance ’a’ would have type Char, for “character.” Then, if you have a function which expects an argument of a certain type and you give it the wrong type, a compile-time error will be generated (that is, you will not be able to compile the program). This vastly reduces the number of bugs that can creep into your program. Furthermore, Haskell uses a system of type inference. This means that you don’t even need to specify the type of expressions. For comparison, in C, when you define a variable, you need to specify its type (for instance, int, char, etc.). In Haskell, you needn’t do this – the type will be inferred from context. NOTE If you want, you certainly are allowed to explicitely specify the type of an expression; this often helps debugging. In fact, it is sometimes considered good style to explicitly specify the types of outermost functions. Both Hugs and GHCi allow you to apply type inference to an expression to find its type. This is done by using the :t command. For instance, start up your favorite shell and try the following: Prelude> :t ’c’ ’c’ :: Char This tells us that the expression ’c’ has type Char (the double colon :: is used throughout Haskell to specify types).

4.1 Simple Types

There are a slew of built-in types, including Int (for integers, both positive and negative), Double (for floating point numbers), Char (for single characters),


String (for strings), and others. We have already seen an expression of type Char; let’s examine one of type String:

Prelude> :t "Hello"
"Hello" :: String

You can also enter more complicated expressions, for instance, a test of equality:

Prelude> :t 'a' == 'b'
'a' == 'b' :: Bool

You should note that even though this expression is false, it still has a type, namely the type Bool.

NOTE Bool is short for Boolean (usually pronounced “boo-lee-uhn”, though I’ve heard “boo-leen” once or twice) and has two possible values: True and False.

You can observe the process of type checking and type inference by trying to get the shell to give you the type of an ill-typed expression. For instance, the equality operator requires that the types of both of its arguments are the same. We can see that Char and String are of different types by trying to compare a character to a string:

Prelude> :t 'a' == "a"
ERROR - Type error in application
*** Expression     : 'a' == "a"
*** Term           : 'a'
*** Type           : Char
*** Does not match : [Char]

The first line of the error (the line containing “Expression”) tells us the expression in which the type error occurred. The second line tells us which part of this expression is ill-typed. The third line tells us the inferred type of this term and the fourth line tells us what it needs to have matched. In this case, it says that the type Char doesn’t match the type [Char] (a list of characters – a string in Haskell is represented as a list of characters).
As mentioned before, you can explicitly specify the type of an expression using the :: operator. For instance, instead of ”a” in the previous example, we could have written (”a”::String). In this case, this has no effect since there’s only one possible interpretation of ”a”. However, consider the case of numbers. You can try:

Prelude> :t 5 :: Int
5 :: Int
Prelude> :t 5 :: Double
5 :: Double


Here, we can see that the number 5 can be instantiated as either an Int or a Double. What if we don’t specify the type?

Prelude> :t 5
5 :: Num a => a

Not quite what you expected? What this means, briefly, is that if some type a is an instance of the Num class, then the type of the expression 5 can be a. If that made no sense, that’s okay for now. In Section 4.3 we talk extensively about type classes (which is what this is). The way to read this, though, is to say “a being an instance of Num implies a.”

Exercises Exercise 4.1 Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error: 1. ’h’:’e’:’l’:’l’:’o’:[] 2. [5,’a’] 3. (5,’a’) 4. (5::Int) + 10 5. (5::Int) + (10::Double)

4.2 Polymorphic Types Haskell employs a polymorphic type system. This essentially means that you can have type variables, which we have alluded to before. For instance, note that a function like tail doesn’t care what the elements in the list are: Prelude> tail [5,6,7,8,9] [6,7,8,9] Prelude> tail "hello" "ello" Prelude> tail ["the","man","is","happy"] ["man","is","happy"] This is possible because tail has a polymorphic type: [α] → [α]. That means it can take as an argument any list and return a value which is a list of the same type. The same analysis can explain the type of fst: Prelude> :t fst forall a b . (a,b) -> a


Here, GHCi has made explicit the universal quantification of the type values. That is, it is saying that for all types a and b, fst is a function from (a, b) to a.

Exercises

Exercise 4.2 Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error:
1. snd
2. head
3. null
4. head . tail
5. head . head

4.3 Type Classes We saw last section some strange typing having to do with the number five. Before we delve too deeply into the subject of type classes, let’s take a step back and see some of the motivation.

4.3.1 Motivation In many languages (C++, Java, etc.), there exists a system of overloading. That is, a function can be written that takes parameters of differing types. For instance, the canonical example is the equality function. If we want to compare two integers, we should use an integer comparison; if we want to compare two floating point numbers, we should use a floating point comparison; if we want to compare two characters, we should use a character comparison. In general, if we want to compare two things which have type α, we want to use an α-compare. We call α a type variable since it is a variable whose value is a type. NOTE In general, type variables will be written using the first part of the Greek alphabet: α, β, γ, δ, . . . . Unfortunately, this presents some problems for static type checking, since the type checker doesn’t know which types a certain operation (for instance, equality testing) will be defined for. There are as many solutions to this problem as there are statically typed languages (perhaps a slight exageration, but not so much so). The one chosen in Haskell is the system of type classes. Whether this is the “correct” solution or the “best” solution of course depends on your application domain. It is, however, the one we have, so you should learn to love it.


4.3.2 Equality Testing Returning to the issue of equality testing, what we want to be able to do is define a function == (the equality operator) which takes two parameters, each of the same type (call it α), and returns a boolean. But this function may not be defined for every type; just for some. Thus, we associate this function == with a type class, which we call Eq. If a specific type α belongs to a certain type class (that is, all functions associated with that class are implemented for α), we say that α is an instance of that class. For instance, Int is an instance of Eq since equality is defined over integers.
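To give a rough picture of what a class looks like, here is approximately how Eq is declared in the Prelude. This is a simplified sketch (the real declaration also provides default method definitions), and the class and instance declaration syntax itself is not covered until Section 8.4:

class Eq a where
  (==) :: a -> a -> Bool
  (/=) :: a -> a -> Bool

A type α is an instance of Eq exactly when these two functions are implemented for it.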

4.3.3 The Num Class In addition to overloading operators like ==, Haskell has overloaded numeric constants (i.e., 1, 2, 3, etc.). This was done so that when you type in a number like 5, the compiler is free to say 5 is an integer or floating point number as it sees fit. It defines the Num class to contain all of these numbers and certain minimal operations over them (addition, for instance). The basic numeric types (Int, Double) are defined to be instances of Num. We have only skimmed the surface of the power (and complexity) of type classes here. There will be much more discussion of them in Section 8.4, but we need some more background before we can get there. Before we do that, we need to talk a little more about functions.

4.3.4 The Show Class

Another of the standard classes in Haskell is the Show class. Types which are members of the Show class have functions which convert values of that type to a string. This function is called show. For instance, show applied to the integer 5 is the string “5”; show applied to the character ’a’ is the three-character string “’a’” (the first and last characters are apostrophes). show applied to a string simply puts quotes around it. You can test this in the interpreter:

Prelude> show 5
"5"
Prelude> show 'a'
"'a'"
Prelude> show "Hello World"
"\"Hello World\""

NOTE The reason the backslashes appear in the last line is because the interior quotes are “escaped”, meaning that they are part of the string, not part of the interpreter printing the value. The actual string doesn’t contain the backslashes.

Some types are not instances of Show; functions for example. If you try to show a function (like sqrt), the compiler or interpreter will give you some cryptic error message, complaining about a missing instance declaration or an illegal class constraint.

4.4 Function Types In Haskell, functions are first class values, meaning that just as 1 or ’c’ are values which have a type, so are functions like square or ++. Before we talk too much about functions, we need to make a short diversion into very theoretical computer science (don’t worry, it won’t be too painful) and talk about the lambda calculus.

4.4.1 Lambda Calculus The name “Lambda Calculus”, while perhaps daunting, describes a fairly simple system for representing functions. The way we would write a squaring function in lambda calculus is: λx.x∗x, which means that we take a value, which we will call x (that’s what “λx. means) and then multiply it by itself. The λ is called “lambda abstraction.” In general, lambdas can only have one parameter. If we want to write a function that takes two numbers, doubles the first and adds it to the second, we would write: λxλy.2∗x+y. When we apply a value to a lambda expression, we remove the outermost λ and replace every occurrence of the lambda variable with the value. For instance, if we evaluate (λx.x ∗ x)5, we remove the lambda and replace every occurrence of x with 5, yielding (5 ∗ 5) which is 25. In fact, Haskell is largely based on an extension of the lambda calculus, and these two expressions can be written directly in Haskell (we simply replace the λ with a backslash and the . with an arrow; also we don’t need to repeat the lambdas; and, of course, in Haskell we have to give them names if we’re defining functions): square = \x -> x*x f = \x y -> 2*x + y You can also evaluate lambda expressions in your interactive shell: Prelude> (\x -> x*x) 5 25 Prelude> (\x y -> 2*x + y) 5 4 14 We can see in the second example that we need to give the lambda abstraction two arguments, one corresponding to x and the other corresponding to y.

4.4.2 Higher-Order Types “Higher-Order Types” is the name given to functions. The type given to functions mimicks the lambda calculus representation of the functions. For instance, the definition of square gives λx.x ∗ x. To get the type of this, we first ask ourselves what the type of x


is. Say we decide x is an Int. Then, we notice that the function square takes an Int and produces a value x*x. We know that when we multiply two Ints together, we get another Int, so the type of the results of square is also an Int. Thus, we say the type of square is Int → Int. We can apply a similar analysis to the function f above. The value of this function (remember, functions are values) is something which takes a value x and given that value, produces a new value, which takes a value y and produces 2*x+y. For instance, if we take f and apply only one number to it, we get (λxλy.2x + y)5 which becomes our new value λy.2(5) + y, where all occurances of x have been replaced with the applied value, 5. So we know that f takes an Int and produces a value of some type, of which we’re not sure. But we know the type of this value is the type of λy.2(5) + y. We apply the above analysis and find out that this expression has type Int → Int. Thus, f takes an Int and produces something which has type Int → Int. So the type of f is Int → (Int → Int). NOTE The parentheses are not necessary; in function types, if you have α → β → γ it is assume that β → γ is grouped. If you want the other way, with α → β grouped, you need to put parentheses around them. This isn’t entirely accurate. As we saw before, numbers like 5 aren’t really of type Int, they are of type Num a ⇒ a. We can easily find the type of Prelude functions using “:t” as before: Prelude> :t head head :: [a] -> a Prelude> :t tail tail :: [a] -> [a] Prelude> :t null null :: [a] -> Bool Prelude> :t fst fst :: (a,b) -> a Prelude> :t snd snd :: (a,b) -> b We read this as: “head” is a function that takes a list containing values of type “a” and gives back a value of type “a”; “tail” takes a list of “a”s and gives back another list of “a”s; “null” takes a list of “a”s and gives back a boolean; “fst” takes a pair of type “(a,b)” and gives back something of type “a”, and so on. NOTE Saying that the type of fst is (a, b) → a does not necessarily mean that it simply gives back the first element; it only means that it gives back something with the same type as the first element.
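As an illustrative check (not from the original text), applying f to both arguments at once and applying it to one argument and then applying the resulting function to the other give the same answer, which is exactly what the type Int → (Int → Int) suggests:

Prelude> (\x y -> 2*x + y) 5 4
14
Prelude> ((\x y -> 2*x + y) 5) 4
14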


We can also get the type of operators like + and * and ++ and :; however, in order to do this we need to put them in parentheses. In general, any function which is used infix (meaning in the middle of two arguments rather than before them) must be put in parentheses when getting its type. Prelude> :t (+) (+) :: Num a => a -> a -> a Prelude> :t (*) (*) :: Num a => a -> a -> a Prelude> :t (++) (++) :: [a] -> [a] -> [a] Prelude> :t (:) (:) :: a -> [a] -> [a] The types of + and * are the same, and mean that + is a function which, for some type a which is an instance of Num, takes a value of type a and produces another function which takes a value of type a and produces a value of type a. In short hand, we might say that + takes two values of type a and produces a value of type a, but this is less precise. The type of ++ means, in shorthand, that, for a given type a, ++ takes two lists of as and produces a new list of as. Similarly, : takes a value of type a and another value of type [a] (list of as) and produces another value of type [a].

4.4.3 That Pesky IO Type

You might be tempted to try getting the type of a function like putStrLn:

Prelude> :t putStrLn
putStrLn :: String -> IO ()
Prelude> :t readFile
readFile :: FilePath -> IO String

What in the world is that IO thing? It’s basically Haskell’s way of representing that these functions aren’t really functions. They’re called “IO Actions” (hence the IO). The immediate question which arises is: okay, so how do I get rid of the IO. In brief, you can’t directly remove it. That is, you cannot write a function with type IO String → String. The only way to use things with an IO type is to combine them with other functions using (for example), the do notation. For example, if you’re reading a file using readFile, presumably you want to do something with the string it returns (otherwise, why would you read the file in the first place). Suppose you have a function f which takes a String and produces an Int. You can’t directly apply f to the result of readFile since the input to f is String and the output of readFile is IOString and these don’t match. However, you can combine these as:


main = do
  s <- readFile "somefile"
  let i = f s
  putStrLn (show i)

Here, we use the arrow convention to “get the string out of the IO action” and then apply f to the string (called s). We then, for example, print i to the screen. Note that the let here doesn’t have a corresponding in. This is because we are in a do block. Also note that we don’t write i <- f s because f is just a normal function, not an IO action.

4.4.4 Explicit Type Declarations It is sometimes desirable to explicitly specify the types of some elements or functions, for one (or more) of the following reasons: • Clarity • Speed • Debugging Some people consider it good software engineering to specify the types of all toplevel functions. If nothing else, if you’re trying to compile a program and you get type errors that you cannot understand, if you declare the types of some of your functions explicitly, it may be easier to figure out where the error is. Type declarations are written separatly from the function definition. For instance, we could explicitly type the function square as in the following code (an explicitly declared type is called a type signature): square :: Num a => a -> a square x = x*x These two lines do not even have to be next to eachother. However, the type that you specify must match the inferred type of the function definition (or be more specific). In this definition, you could apply square to anything which is an instance of Num: Int, Double, etc. However, if you knew apriori that square were only going to be applied to value of type Int, you could refine its type as: square :: Int -> Int square x = x*x Now, you could only apply square to values of type Int. Moreover, with this definition, the compiler doesn’t have to generate the general code specified in the original


function definition since it knows you will only apply square to Ints, so it may be able to generate faster code. If you have extensions turned on (“-98” in Hugs or “-fglasgow-exts” in GHC(i)), you can also add a type signature to expressions and not just functions. For instance, you could write: square (x :: Int) = x*x which tells the compiler that x is an Int; however, it leaves the compiler alone to infer the type of the rest of the expression. What is the type of square in this example? Make your guess then you can check it either by entering this code into a file and loading it into your interpreter or by asking for the type of the expression: Prelude> :t (\(x :: Int) -> x*x) since this lambda abstraction is equivalent to the above function declaration.

4.4.5 Functional Arguments In Section 3.3 we saw examples of functions taking other functions as arguments. For instance, map took a function to apply to each element in a list, filter took a function that told it which elements of a list to keep, and foldl took a function which told it how to combine list elements together. As with every other function in Haskell, these are well-typed. Let’s first think about the map function. It’s job is to take a list of elements and produce another list of elements. These two lists don’t necessarily have to have the same types of elements. So map will take a value of type [a] and produce a value of type [b]. How does it do this? It uses the user-supplied function to convert. In order to convert an a to a b, this function must have type a → b. Thus, the type of map is (a → b) → [a] → [b], which you can verify in your interpreter with “:t”. We can apply the same sort of analysis to filter and discern that it has type (a → Bool) → [a] → [a]. As we presented the foldl function, you might be tempted to give it type (a → a → a) → a → [a] → a, meaning that you take a function which combines two as into another one, an initial value of type a, a list of as to produce a final value of type a. In fact, foldl has a more general type: (a → b → a) → a → [b] → a. So it takes a function which turn an a and a b into an a, an initial value of type a and a list of bs. It produces an a. To see this, we can write a function count which counts how many members of a list satisfy a given constraint. You can of course you filter and length to do this, but we will also do it using foldr: module Count where import Char


count1 p l = length (filter p l)
count2 p l = foldr (\x c -> if p x then c+1 else c) 0 l

The functioning of count1 is simple. It filters the list l according to the predicate p, then takes the length of the resulting list. On the other hand, count2 uses the initial value (which is an integer) to hold the current count. For each element in the list l, it applies the lambda expression shown. This takes two arguments, c which holds the current count and x which is the current element in the list that we’re looking at. It checks to see if p holds about x. If it does, it returns the new value c+1, increasing the count of elements for which the predicate holds. If it doesn’t, it just returns c, the old count.
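An illustrative session (not from the original text), assuming the Count module above has been loaded, shows that the two definitions agree:

Count> count1 even [1,2,3,4,5,6]
3
Count> count2 even [1,2,3,4,5,6]
3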

Exercises Exercise 4.3 Figure out for yourself, and then verify the types of the following expressions, if they have a type. Also note if the expression is a type error: 1. \x -> [x] 2. \x y z -> (x,y:z:[]) 3. \x -> x + 5 4. \x -> "hello, world" 5. \x -> x ’a’ 6. \x -> x x 7. \x -> x + x

4.5 Data Types Tuples and lists are nice, common ways to define structured values. However, it is often desirable to be able to define our own data structures and functions over them. So-called “datatypes” are defined using the data keyword.

4.5.1 Pairs

For instance, a definition of a pair of elements (much like the standard, built-in pair type) could be:

data Pair a b = Pair a b


Let’s walk through this code one word at a time. First we say “data” meaning that we’re defining a datatype. We then give the name of the datatype, in this case, “Pair.” The “a” and “b” that follow “Pair” are type parameters, just like the “a” is the type of the function map. So up until this point, we’ve said that we’re going to define a data structure called “Pair” which is parameterized over two types, a and b. After the equals sign, we specify the constructors of this data type. In this case, there is a single constructor, “Pair” (this doesn’t necessarily have to have the same name as the type, but in this case it seems to make more sense). After this pair, we again write “a b”, which means that in order to construct a Pair we need two values, one of type a and one of type b. This definition introduces a function, Pair :: a -> b -> Pair a b that you can use to construct Pairs. If you enter this code into a file and load it, you can see how these are constructed: Datatypes> :t Pair Pair :: a -> b -> Pair a b Datatypes> :t Pair ’a’ Pair ’a’ :: a -> Pair Char a Datatypes> :t Pair ’a’ "Hello" :t Pair ’a’ "Hello" Pair ’a’ "Hello" :: Pair Char [Char] So, by giving Pair two values, we have completely constructed a value of type Pair. We can write functions involving pairs as: pairFst (Pair x y) = x pairSnd (Pair x y) = y In this, we’ve used the pattern matching capabilities of Haskell to look at a pair an extract values from it. In the definition of pairFst we take an entire Pair and extract the first element; similarly for pairSnd. We’ll discuss pattern matching in much more detail in Section 7.4.
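Though Haskell will infer them, we could also write down type signatures for these two accessors (a sketch, not part of the original text):

pairFst :: Pair a b -> a
pairSnd :: Pair a b -> b

Trying them out in the interpreter (an illustrative session, assuming the definitions are in the same file as the earlier Datatypes examples):

Datatypes> pairFst (Pair 5 "Hello")
5
Datatypes> pairSnd (Pair 5 "Hello")
"Hello"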

Exercises Exercise 4.4 Write a data type declaration for Triple, a type which contains three elements, all of different types. Write functions tripleFst, tripleSnd and tripleThr to extract respectively the first, second and third elements. Exercise 4.5 Write a datatype Quadruple which holds four elements. However, the first two elements must be the same type and the last two elements must be the same type. Write a function firstTwo which returns a list containing the first two elements and a function lastTwo which returns a list containing the last two elements. Write type signatures for these functions


4.5.2 Multiple Constructors We have seen an example of the data type with one constructor: Pair. It is also possible (and extremely useful) to have multiple constructors. Let us consider a simple function which searches through a list for an element satisfying a given predicate and then returns the first element satisfying that predicate. What should we do if none of the elements in the list satisfy the predicate? A few options are listed below: • Raise an error • Loop indefinitely • Write a check function • Return the first element • ... Raising an error is certainly an option (see Section 10.1 to see how to do this). The problem is that it is difficult/impossible to recover from such errors. Looping indefinitely is possible, but not terribly useful. We could write a sister function which checks to see if the list contains an element satisfying a predicate and leave it up to the user to always use this function first. We could return the first element, but this is very ad-hoc and difficult to remember. The fact that there is no basic option to solve this problem simply means we have to think about it a little more. What are we trying to do? We’re trying to write a function which might succeed and might not. Furthermore, if it does succeed, it returns some sort of value. Let’s write a datatype: data Maybe a = Nothing | Just a This is one of the most common datatypes in Haskell and is defined in the Prelude. Here, we’re saying that there are two possible ways to create something of type Maybe a. The first is to use the nullary constructor Nothing, which takes no arguments (this is what “nullary” means). The second is to use the constructor Just, together with a value of type a. The Maybe type is useful in all sorts of circumstances. For instance, suppose we want to write a function (like head) which returns the first element of a given list. However, we don’t want the program to die if the given list is empty. We can accomplish this with a function like: firstElement :: [a] -> Maybe a firstElement [] = Nothing firstElement (x:xs) = Just x


The type signature here says that firstElement takes a list of as and produces something with type Maybe a. In the first line of code, we match against the empty list []. If this match succeeds (i.e., the list is, in fact, empty), we return Nothing. If the first match fails, then we try to match against x:xs which must succeed. In this case, we return Just x. For our findElement function, we represent failure by the value Nothing and success with value a by Just a. Our function might look something like this: findElement :: (a -> Bool) -> [a] -> Maybe a findElement p [] = Nothing findElement p (x:xs) = if p x then Just x else findElement p xs

The first line here gives the type of the function. In this case, our first argument is the predicate (and takes an element of type a and returns True if and only if the element satisfies the predicate); the second argument is a list of as. Our return value is maybe an a. That is, if the function succeeds, we will return Just a and if not, Nothing. Another useful datatype is the Either type, defined as: data Either a b = Left a | Right b

This is a way of expressing alternation. That is, something of type Either a b is either a value of type a (using the Left constructor) or a value of type b (using the Right constructor).
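As an illustrative example (not from the original text), here is findElement at work, assuming it has been added to the same file as the earlier Datatypes examples:

Datatypes> findElement even [1,3,5,6,7]
Just 6
Datatypes> findElement even [1,3,5,7]
Nothing

And here is a small sketch of Either in use; the function safeDivide and its error message are made up for this illustration:

safeDivide :: Int -> Int -> Either String Int
safeDivide _ 0 = Left "division by zero"
safeDivide x y = Right (x `div` y)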

Exercises Exercise 4.6 Write a datatype Tuple which can hold one, two, three or four elements, depending on the constructor (that is, there should be four constructors, one for each number of arguments). Also provide functions tuple1 through tuple4 which take a tuple and return Just the value in that position, or Nothing if the number is invalid (i.e., you ask for the tuple4 on a tuple holding only two elements). Exercise 4.7 Based on our definition of Tuple from the previous exercise, write a function which takes a Tuple and returns either the value (if it’s a one-tuple), a Haskell-pair (i.e., (’a’,5)) if it’s a two-tuple, a Haskell-triple if it’s a three-tuple or a Haskell-quadruple if it’s a four-tuple. You will need to use the Either type to represent this.


4.5.3 Recursive Datatypes We can also define recursive datatypes. These are datatypes whose definitions are based on themselves. For instance, we could define a list datatype as: data List a = Nil | Cons a (List a) In this definition, we have defined what it means to be of type List a. We say that a list is either empty (Nil) or it’s the Cons of a value of type a and another value of type List a. This is almost identical to the actual definition of the list datatype in Haskell, except that uses special syntax where [] corresponds to Nil and : corresponds to Cons. We can write our own length function for our lists as: listLength Nil = 0 listLength (Cons x xs) = 1 + listLength xs This function is slightly more complicated and uses recursion to calculate the length of a List. The first line says that the length of an empty list (a Nil) is 0. This much is obvious. The second line tells us how to calculate the length of a nonempty list. A non-empty list must be of the form Cons x xs for some values of x and xs. We know that xs is another list and we know that whatever the length of the current list is, it’s the length of its tail (the value of xs) plus one (to account for x). Thus, we apply the listLength function to xs and add one to the result. This gives us the length of the entire list.
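To see the correspondence with the built-in list type (an illustrative session, not from the original text), the list [1,2,3] would be written with our constructors as Cons 1 (Cons 2 (Cons 3 Nil)), and listLength computes its length just as length would:

Datatypes> listLength Nil
0
Datatypes> listLength (Cons 1 (Cons 2 (Cons 3 Nil)))
3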

Exercises Exercise 4.8 Write functions listHead, listTail, listFoldl and listFoldr which are equivalent to their Prelude twins, but function on our List datatype. Don’t worry about exceptional conditions on the first two.

4.5.4 Binary Trees We can define datatypes that are more complicated than lists. Suppose we want to define a structure that looks like a binary tree. A binary tree is a structure that has a single root node; each node in the tree is either a “leaf” or a “branch.” If it’s a leaf, it holds a value; if it’s a branch, it holds a value and a left child and a right child. Each of these children is another node. We can define such a data type as: data BinaryTree a = Leaf a | Branch (BinaryTree a) a (BinaryTree a) In this datatype declaration we say that a BinaryTree of as is either a Leaf which holds an a, or it’s a branch with a left child (which is a BinaryTree of as), a


node value (which is an a), and a right child (which is also a BinaryTree of as). It is simple to modify the listLength function so that instead of calculating the length of lists, it calculates the number of nodes in a BinaryTree. Can you figure out how? We can call this function treeSize. The solution is given below: treeSize (Leaf x) = 1 treeSize (Branch left x right) = 1 + treeSize left + treeSize right Here, we say that the size of a leaf is 1 and the size of a branch is the size of its left child, plus the size of its right child, plus one.
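For instance (an illustrative session, not from the original text), a single leaf has size one, and a tree with a branch at the root, a leaf holding 1 on the left, the value 2 at the root, and a leaf holding 3 on the right has three nodes:

Datatypes> treeSize (Leaf 5)
1
Datatypes> treeSize (Branch (Leaf 1) 2 (Leaf 3))
3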

Exercises Exercise 4.9 Write a function elements which returns the elements in a BinaryTree in a bottom-up, left-to-right manner (i.e., the first element returned in the left-most leaf, followed by its parent’s value, followed by the other child’s value, and so on). The result type should be a normal Haskell list. Exercise 4.10 Write a fold function for BinaryTrees and rewrite elements in terms of it (call the new one elements2).

4.5.5 Enumerated Sets

You can also use datatypes to define things like enumerated sets, for instance, a type which can only have a constrained number of values. We could define a color type:

data Color
  = Red
  | Orange
  | Yellow
  | Green
  | Blue
  | Purple
  | White
  | Black

This would be sufficient to deal with simple colors. Suppose we were using this to write a drawing program, we could then write a function to convert between a Color and an RGB triple. We can write a colorToRGB function, as:

colorToRGB Red    = (255,0,0)
colorToRGB Orange = (255,128,0)
colorToRGB Yellow = (255,255,0)
colorToRGB Green  = (0,255,0)
colorToRGB Blue   = (0,0,255)
colorToRGB Purple = (255,0,255)
colorToRGB White  = (255,255,255)
colorToRGB Black  = (0,0,0)

If we wanted also to allow the user to define his own custom colors, we could change the Color datatype to something like:

data Color
  = Red
  | Orange
  | Yellow
  | Green
  | Blue
  | Purple
  | White
  | Black
  | Custom Int Int Int  -- R G B components

And add a final definition for colorToRGB:

colorToRGB (Custom r g b) = (r,g,b)

4.5.6 The Unit type

A final useful datatype defined in Haskell (from the Prelude) is the unit type. Its definition is:

data () = ()

The only true value of this type is (). This is essentially the same as a void type in a language like C or Java and will be useful when we talk about IO in Chapter 5.
We’ll dwell much more on data types in Sections 7.4 and 8.3.

4.6 Continuation Passing Style There is a style of functional programming called “Continuation Passing Style” (also simply “CPS”). The idea behind CPS is to pass around as a function argument what to do next. I will handwave through an example which is too complex to write out at this point and then give a real example, though one with less motivation. Consider the problem of parsing. The idea here is that we have a sequence of tokens (words, letters, whatever) and we want to ascribe structure to them. The task of converting a string of Java tokens to a Java abstract syntax tree is an example of a


parsing problem. So is the task of taking an English sentence and creating a parse tree (though the latter is quite a bit harder). Suppose we’re parsing something like C or Java where functions take arguments in parentheses. But for simplicity, assume they are not separated by commas. That is, a function call looks like myFunction(x y z). We want to convert this into something like a pair containing first the string “myFunction” and then a list with three string elements: “x”, “y” and “z”. The general approach to solving this would be to write a function which parses function calls like this one. First it would look for an identifier (“myFunction”), then for an open parenthesis, then for zero or more identifiers, then for a close parenthesis. One way to do this would be to have two functions: parseFunction :: [Token] -> Maybe ((String, [String]), [Token]) parseIdentifier :: [Token] -> Maybe (String, [Token]) The idea would be that if we call parseFunction, if it doesn’t return Nothing, then it returns the pair described earlier, together with whatever is left after parsing the function. Similarly, parseIdentifier will parse one of the arguments. If it returns Nothing, then it’s not an argument; if it returns Just something, then that something is the argument paired with the rest of the tokens. What the parseFunction function would do is to parse an identifier. If this fails, it fails itself. Otherwise, it continues and tries to parse a open parenthesis. If that succeeds, it repeatedly calls parseIdentifier until that fails. It then tries to parse a close parenthesis. If that succeeds, then it’s done. Otherwise, it fails. There is, however, another way to think about this problem. The advantage to this solution is that functions no longer need to return the remaining tokens (which tends to get ugly). Instead of the above, we write functions: parseFunction :: [Token] -> ((String, [String]) -> [Token] -> a) -> ([Token] -> a) -> a parseIdentifier :: [Token] -> (String -> [Token] -> a) -> ([Token] -> a) -> a Let’s consider parseIdentifier. This takes three arguments: a list of tokens and two continuations. The first continuation is what to do when you succeed. The second continuation is what to do if you fail. What parseIdentifier does, then, is try to read an identifier. If this succeeds, it calls the first continuation with that identifier and the remaining tokens as arguments. If reading the identifier fails, it calls the second continuation with all the tokens.


Now consider parseFunction. Recall that it wants to read an identifier, an open parenthesis, zero or more identifiers and a close parenthesis. Thus, the first thing it does is call parseIdentifier. The first argument it gives is the list of tokens. The first continuation (which is what parseIdentifier should do if it succeeds) is in turn a function which will look for an open parenthesis, zero or more arguments and a close parenthesis. The second argument (the failure argument) is just going to be the failure function given to parseFunction. Now, we simply need to define this function which looks for an open parenthesis, zero or more arguments and a close parenthesis. This is easy. We write a function which looks for the open parenthesis and then calls parseIdentifier with a success continuation that looks for more identifiers, and a “failure” continuation which looks for the close parenthesis (note that this failure doesn’t really mean failure – it just means there are no more arguments left). I realize this discussion has been quite abstract. I would willingly give code for all this parsing, but it is perhaps too complex at the moment. Instead, consider the problem of folding across a list. We can write a CPS fold as: cfold’ f z [] = z cfold’ f z (x:xs) = f x z (\y -> cfold’ f y xs) In this code, cfold’ takes a function f which takes three arguments, slightly different from the standard folds. The first is the current list element, x, the second is the accumulated element, z, and the third is the continuation: basically, what to do next. We can write a wrapper function for cfold’ that will make it behave more like a normal fold: cfold f z l = cfold’ (\x t g -> f x (g t)) z l We can test that this function behaves as we desire: CPS> cfold (+) 0 [1,2,3,4] 10 CPS> cfold (:) [] [1,2,3] [1,2,3] One thing that’s nice about formulating cfold in terms of the helper function cfold’ is that we can use the helper function directly. This enables us to change, for instance, the evaluation order of the fold very easily: CPS> cfold’ (\x t g -> (x : g t)) [] [1..10] [1,2,3,4,5,6,7,8,9,10] CPS> cfold’ (\x t g -> g (x : t)) [] [1..10] [10,9,8,7,6,5,4,3,2,1]


The only difference between these calls to cfold’ is whether we call the continuation before or after constructing the list. As it turns out, this slight difference changes the behavior from being like foldr to being like foldl. We can evaluate both of these calls as follows (let f be the folding function):

    cfold’ (\x t g -> (x : g t)) [] [1,2,3]
==> cfold’ f [] [1,2,3]
==> f 1 [] (\y -> cfold’ f y [2,3])
==> 1 : ((\y -> cfold’ f y [2,3]) [])
==> 1 : (cfold’ f [] [2,3])
==> 1 : (f 2 [] (\y -> cfold’ f y [3]))
==> 1 : (2 : ((\y -> cfold’ f y [3]) []))
==> 1 : (2 : (cfold’ f [] [3]))
==> 1 : (2 : (f 3 [] (\y -> cfold’ f y [])))
==> 1 : (2 : (3 : (cfold’ f [] [])))
==> 1 : (2 : (3 : []))
==> [1,2,3]

    cfold’ (\x t g -> g (x:t)) [] [1,2,3]
==> cfold’ f [] [1,2,3]
==> (\x t g -> g (x:t)) 1 [] (\y -> cfold’ f y [2,3])
==> (\g -> g [1]) (\y -> cfold’ f y [2,3])
==> (\y -> cfold’ f y [2,3]) [1]
==> cfold’ f [1] [2,3]
==> (\x t g -> g (x:t)) 2 [1] (\y -> cfold’ f y [3])
==> cfold’ f (2:[1]) [3]
==> cfold’ f [2,1] [3]
==> (\x t g -> g (x:t)) 3 [2,1] (\y -> cfold’ f y [])
==> cfold’ f (3:[2,1]) []
==> [3,2,1]

In general, continuation passing style is a very powerful abstraction, though it can be difficult to master. We will revisit the topic more thoroughly later in the book.

Exercises Exercise 4.11 Test whether the CPS-style fold mimics either of foldr and foldl. If not, where is the difference? Exercise 4.12 Write map and filter using continuation passing style.

Chapter 5

Basic Input/Output As we mentioned earlier, it is difficult to think of a good, clean way to integrate operations like input/output into a pure functional language. Before we give the solution, let’s take a step back and think about the difficulties inherent in such a task. Any IO library should provide a host of functions, containing (at a minimum) operations like: • print a string to the screen • read a string from a keyboard • write data to a file • read data from a file There are two issues here. Let’s first consider the initial two examples and think about what their types should be. Certainly the first operation (I hesitate to call it a “function”) should take a String argument and produce something, but what should it produce? It could produce a unit (), since there is essentially no return value from printing a string. The second operation, similarly, should return a String, but it doesn’t seem to require an argument. We want both of these operations to be functions, but they are by definition not functions. The item that reads a string from the keyboard cannot be a function, as it will not return the same String every time. And if the first operation simply returned () every time, referential transparency would tell us that it could be replaced by any other function that always returns (). But clearly this does not have the desired effect.

5.1 The RealWorld Solution In a sense, the reason that these items are not functions is that they interact with the “real world.” Their values depend directly on the real world. Supposing we had a type RealWorld, we might write these functions as having type:


printAString :: RealWorld -> String -> RealWorld readAString :: RealWorld -> (RealWorld, String) That is, printAString takes a current state of the world and a string to print; it then modifies the state of the world in such a way that the string is now printed and returns this new value. Similarly, readAString takes a current state of the world and returns a new state of the world, paired with the String that was typed. This would be a possible way to do IO, though it is more than somewhat unwieldy. In this style (assuming an initial RealWorld state were an argument to main), our “Name.hs” program from Section 3.8 would look something like:

main rW =
  let rW’ = printAString rW "Please enter your name: "
      (rW’’,name) = readAString rW’
   in printAString rW’’ ("Hello, " ++ name ++ ", how are you?")

This is not only hard to read, but prone to error, if you accidentally use the wrong version of the RealWorld. It also doesn’t model the fact that the program below makes no sense:

main rW =
  let rW’ = printAString rW "Please enter your name: "
      (rW’’,name) = readAString rW’
   in printAString rW’ -- OOPS!
        ("Hello, " ++ name ++ ", how are you?")

In this program, the reference to rW’’ on the last line has been changed to a reference to rW’. It is completely unclear what this program should do. Clearly, it must read a string in order to have a value for name to be printed. But that means that the RealWorld has been updated. However, then we try to ignore this update by using an “old version” of the RealWorld. There is clearly something wrong happening here. Suffice it to say that doing IO operations in a pure lazy functional language is not trivial.

5.2 Actions The breakthrough for solving this problem came when Phil Wadler realized that monads would be a good way to think about IO computations. In fact, monads are able to express much more than just the simple operations described above; we can use them to express a variety of constructions like concurrency, exceptions, IO, non-determinism and much more. Moreover, there is nothing special about them; they can be defined within Haskell with no special handling from the compiler (though compilers often choose to optimize monadic operations).


As pointed out before, we cannot think of things like “print a string to the screen” or “read data from a file” as functions, since they are not (in the pure mathematical sense). Therefore, we give them another name: actions. Not only do we give them a special name, we give them a special type. One particularly useful action is putStrLn, which prints a string to the screen. This action has type: putStrLn :: String -> IO () As expected, putStrLn takes a string argument. What it returns is of type IO (). This means that this function is actually an action (that is what the IO means). Furthermore, when this action is evaluated (or “run”) , the result will have type (). NOTE Actually, this type means that putStrLn is an action within the IO monad, but we will gloss over this for now. You can probably already guess the type of getLine: getLine :: IO String This means that getLine is an IO action that, when run, will have type String. The question immediately arises: “how do you ‘run’ an action?”. This is something that is left up to the compiler. You cannot actually run an action yourself; instead, a program is, itself, a single action that is run when the compiled program is executed. Thus, the compiler requires that the main function have type IO (), which means that it is an IO action that returns nothing. The compiled code then executes this action. However, while you are not allowed to run actions yourself, you are allowed to combine actions. In fact, we have already seen one way to do this using the do notation (how to really do this will be revealed in Chapter 9). Let’s consider the original name program: main = do hSetBuffering stdin LineBuffering putStrLn "Please enter your name: " name <- getLine putStrLn ("Hello, " ++ name ++ ", how are you?") We can consider the do notation as a way to combine a sequence of actions. Moreover, the <- notation is a way to get the value out of an action. So, in this program, we’re sequencing four actions: setting buffering, a putStrLn, a getLine and another putStrLn. The putStrLn action has type String → IO (), so we provide it a String, so the fully applied action has type IO (). This is something that we are allowed to execute. The getLine action has type IO String, so it is okay to execute it directly. However, in order to get the value out of the action, we write name <- getLine, which basically means “run getLine, and put the results in the variable called name.”


Normal Haskell constructions like if/then/else and case/of can be used within the do notation, but you need to be somewhat careful. For instance, in our “guess the number” program, we have: do ... if (read guess) < num then do putStrLn "Too low!" doGuessing num else if read guess > num then do putStrLn "Too high!" doGuessing num else do putStrLn "You Win!" If we think about how the if/then/else construction works, it essentially takes three arguments: the condition, the “then” branch, and the “else” branch. The condition needs to have type Bool, and the two branches can have any type, provided that they have the same type. The type of the entire if/then/else construction is then the type of the two branches. In the outermost comparison, we have (read guess) < num as the condition. This clearly has the correct type. Let’s just consider the “then” branch. The code here is: do putStrLn "Too low!" doGuessing num Here, we are sequencing two actions: putStrLn and doGuessing. The first has type IO (), which is fine. The second also has type IO (), which is fine. The type result of the entire computation is precisely the type of the final computation. Thus, the type of the “then” branch is also IO (). A similar argument shows that the type of the “else” branch is also IO (). This means the type of the entire if/then/else construction is IO (), which is just what we want. NOTE In this code, the last line is “else do putStrLn "You Win!"”. This is somewhat overly verbose. In fact, “else putStrLn "You Win!"” would have been sufficient, since do is only necessary to sequence actions. Since we have only one action here, it is superfluous.

It is incorrect to think to yourself “Well, I already started a do block; I don’t need another one,” and hence write something like: do if (read guess) < num then putStrLn "Too low!" doGuessing num else ...


Here, since we didn’t repeat the do, the compiler doesn’t know that the putStrLn and doGuessing calls are supposed to be sequenced, and the compiler will think you’re trying to call putStrLn with three arguments: the string, the function doGuessing and the integer num. It will certainly complain (though the error may be somewhat difficult to comprehend at this point). We can write the same doGuessing function using a case statement. To do this, we first introduce the Prelude function compare, which takes two values of the same type (in the Ord class) and returns one of GT, LT, EQ, depending on whether the first is greater than, less than or equal to the second.

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  case compare (read guess) num of
    LT -> do putStrLn "Too low!"
             doGuessing num
    GT -> do putStrLn "Too high!"
             doGuessing num
    EQ -> putStrLn "You Win!"

Here, again, the dos after the ->s are necessary on the first two options, because we are sequencing actions. If you’re used to programming in an imperative language like C or Java, you might think that return will exit you from the current function. This is not so in Haskell. In Haskell, return simply takes a normal value (for instance, one of type Int) and makes it into an action that returns the given value (for instance, an action of type IO Int). In particular, in an imperative language, you might write this function as:

void doGuessing(int num) {
  print "Enter your guess:";
  int guess = atoi(readLine());
  if (guess == num) {
    print "You win!";
    return ();
  }

  // we won’t get here if guess == num
  if (guess < num) {
    print "Too low!";
    doGuessing(num);
  } else {
    print "Too high!";
    doGuessing(num);
  }
}


Here, because we have the return () in the first if match, we expect the code to exit there (and in most imperative languages, it does). However, the equivalent code in Haskell, which might look something like:

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  case compare (read guess) num of
    EQ -> do putStrLn "You win!"
             return ()
  -- we don’t expect to get here unless guess == num
  if (read guess < num)
    then do print "Too low!"; doGuessing num
    else do print "Too high!"; doGuessing num

will not behave as you expect. First of all, if you guess correctly, it will first print “You win!,” but it won’t exit, and it will check whether guess is less than num. Of course it is not, so the else branch is taken, and it will print “Too high!” and then ask you to guess again. On the other hand, if you guess incorrectly, it will try to evaluate the case statement and get either LT or GT as the result of the compare. In either case, it won’t have a pattern that matches, and the program will fail immediately with an exception.
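The moral is that return is just an ordinary action that wraps up a value; it has no effect on control flow. As a tiny illustration (a made-up snippet, not part of the guessing game), the actions after a return still run:

main = do
  x <- return 5
  putStrLn "still here"  -- this line still executes; return did not exit
  print x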

Exercises Exercise 5.1 Write a program that asks the user for his or her name. If the name is one of Simon, John or Phil, tell the user that you think Haskell is a great programming language. If the name is Koen, tell them that you think debugging Haskell is fun (Koen Claessen is one of the people who works on Haskell debugging); otherwise, tell the user that you don’t know who he or she is. Write two different versions of this program, one using if statements, the other using a case statement.

5.3 The IO Library The IO Library (available by importing the IO module) contains many definitions, the most common of which are listed below:

data IOMode = ReadMode | WriteMode
            | AppendMode | ReadWriteMode

openFile     :: FilePath -> IOMode -> IO Handle
hClose       :: Handle -> IO ()

hIsEOF       :: Handle -> IO Bool

hGetChar     :: Handle -> IO Char
hGetLine     :: Handle -> IO String
hGetContents :: Handle -> IO String

getChar      :: IO Char
getLine      :: IO String
getContents  :: IO String

hPutChar     :: Handle -> Char -> IO ()
hPutStr      :: Handle -> String -> IO ()
hPutStrLn    :: Handle -> String -> IO ()

putChar      :: Char -> IO ()
putStr       :: String -> IO ()
putStrLn     :: String -> IO ()

readFile     :: FilePath -> IO String
writeFile    :: FilePath -> String -> IO ()

bracket      :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c

NOTE The type FilePath is a type synonym for String. That is, there is no difference between FilePath and String. So, for instance, the readFile function takes a String (the file to read) and returns an action that, when run, produces the contents of that file. See Section 8.1 for more about type synonyms. Most of these functions are self-explanatory. The openFile and hClose functions open and close a file, respectively, using the IOMode argument as the mode for opening the file. hIsEOF tests for end-of-file. hGetChar and hGetLine read a character or line (respectively) from a file. hGetContents reads the entire file. The getChar, getLine and getContents variants read from standard input. hPutChar prints a character to a file; hPutStr prints a string; and hPutStrLn prints a string with a newline character at the end. The variants without the h prefix work on standard output. The readFile and writeFile functions read and write an entire file without having to open it first. The bracket function is used to perform actions safely. Consider a function that opens a file, writes a character to it, and then closes the file. When writing such a function, one needs to be careful to ensure that, if there were an error at some point, the file is still successfully closed. The bracket function makes this easy. It takes


three arguments: The first is the action to perform at the beginning. The second is the action to perform at the end, regardless of whether there’s an error or not. The third is the action to perform in the middle, which might result in an error. For instance, our character-writing function might look like:

writeChar :: FilePath -> Char -> IO ()
writeChar fp c =
    bracket
      (openFile fp WriteMode)
      hClose
      (\h -> hPutChar h c)

This will open the file, write the character and then close the file. However, if writing the character fails, hClose will still be executed, and the exception will be reraised afterwards. That way, you don’t need to worry too much about catching the exceptions and about closing all of your handles.
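For simple whole-file operations, readFile and writeFile let you skip handles entirely. As a small sketch (the function name and file paths here are made up for illustration), a function that copies one file to another might be written:

copyFile :: FilePath -> FilePath -> IO ()
copyFile src dst = do
  contents <- readFile src   -- read the whole source file
  writeFile dst contents     -- write it out to the destination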

5.4 A File Reading Program We can write a simple program that allows a user to read and write files. The interface is admittedly poor, and it does not catch all errors (try reading a non-existent file). Nevertheless, it should give a fairly complete example of how to use IO. Enter the following code into “FileRead.hs,” and compile/run:

module Main
    where

import IO

main = do
  hSetBuffering stdin LineBuffering
  doLoop

doLoop = do
  putStrLn "Enter a command rFN wFN or q to quit:"
  command <- getLine
  case command of
    ’q’:_ -> return ()
    ’r’:filename -> do putStrLn ("Reading " ++ filename)
                       doRead filename
                       doLoop
    ’w’:filename -> do putStrLn ("Writing " ++ filename)
                       doWrite filename
                       doLoop
    _ -> doLoop


doRead filename =
    bracket (openFile filename ReadMode) hClose
            (\h -> do contents <- hGetContents h
                      putStrLn "The first 100 chars:"
                      putStrLn (take 100 contents))

doWrite filename = do
  putStrLn "Enter text to go into the file:"
  contents <- getLine
  bracket (openFile filename WriteMode) hClose
          (\h -> hPutStrLn h contents)

What does this program do? First, it issues a short string of instructions and reads a command. It then performs a case switch on the command and checks first to see if the first character is a ‘q.’ If it is, it returns a value of unit type. NOTE The return function is a function that takes a value of type a and returns an action of type IO a. Thus, the type of return () is IO (). If the first character of the command wasn’t a ‘q,’ the program checks to see if it was an ’r’ followed by some string that is bound to the variable filename. It then tells you that it’s reading the file, does the read and runs doLoop again. The check for ‘w’ is nearly identical. Otherwise, it matches _, the wildcard character, and loops to doLoop. The doRead function uses the bracket function to make sure there are no problems reading the file. It opens a file in ReadMode, reads its contents and prints the first 100 characters (the take function takes an integer n and a list and returns the first n elements of the list). The doWrite function asks for some text, reads it from the keyboard, and then writes it to the file specified. NOTE Both doRead and doWrite could have been made simpler by using readFile and writeFile, but they were written in the extended fashion to show how the more complex functions are used. The only major problem with this program is that it will die if you try to read a file that doesn’t already exist or if you specify some bad filename like *\^# @. You may think that the calls to bracket in doRead and doWrite should take care of this, but they don’t. They only catch exceptions within the main body, not within the startup or shutdown functions (openFile and hClose, in these cases). We would need to catch exceptions raised by openFile, in order to make this complete. We will do this when we talk about exceptions in more detail in Section 10.1.


Exercises Exercise 5.2 Write a program that first asks whether the user wants to read from a file, write to a file or quit. If the user responds quit, the program should exit. If he responds read, the program should ask him for a file name and print that file to the screen (if the file doesn’t exist, the program may crash). If he responds write, it should ask him for a file name and then ask him for text to write to the file, with “.” signaling completion. All but the “.” should be written to the file. For example, running this program might produce:

Do you want to [read] a file, [write] a file or [quit]?
read
Enter a file name to read: foo
...contents of foo...
Do you want to [read] a file, [write] a file or [quit]?
write
Enter a file name to write: foo
Enter text (dot on a line by itself to end):
this is some text for
foo
.
Do you want to [read] a file, [write] a file or [quit]?
read
Enter a file name to read: foo
this is some text for
foo
Do you want to [read] a file, [write] a file or [quit]?
read
Enter a file name to read: foof
Sorry, that file does not exist.
Do you want to [read] a file, [write] a file or [quit]?
blech
I don’t understand the command blech.
Do you want to [read] a file, [write] a file or [quit]?
quit
Goodbye!

Chapter 6

Modules In Haskell, program subcomponents are divided into modules. Each module sits in its own file and the name of the module should match the name of the file (without the “.hs” extension, of course), if you wish to ever use that module in a larger program. For instance, suppose I am writing a game of poker. I may wish to have a separate module called “Cards” to handle the generation of cards, the shuffling and the dealing functions, and then use this “Cards” module in my “Poker” modules. That way, if I ever go back and want to write a blackjack program, I don’t have to rewrite all the code for the cards; I can simply import the old “Cards” module.

6.1 Exports Suppose as suggested we are writing a cards module. I have left out the implementation details, but suppose the skeleton of our module looks something like this: module Cards where data Card = ... data Deck = ... newDeck :: ... -> Deck newDeck = ... shuffle :: ... -> Deck -> Deck shuffle = ... -- ’deal deck n’ deals ’n’ cards from ’deck’ deal :: Deck -> Int -> [Card] deal deck n = dealHelper deck n []


dealHelper = ...

In this code, the function deal calls a helper function dealHelper. The implementation of this helper function is very dependent on the exact data structures you used for Card and Deck so we don’t want other people to be able to call this function. In order to do this, we create an export list, which we insert just after the module name declaration:

module Cards ( Card(),
               Deck(),
               newDeck,
               shuffle,
               deal )
    where
...

Here, we have specified exactly what functions the module exports, so people who use this module won’t be able to access our dealHelper function. The () after Card and Deck specify that we are exporting the type but none of the constructors. For instance if our definition of Card were:

data Card = Card Suit Face
data Suit = Hearts
          | Spades
          | Diamonds
          | Clubs
data Face = Jack
          | Queen
          | King
          | Ace
          | Number Int

Then users of our module would be able to use things of type Card, but wouldn’t be able to construct their own Cards and wouldn’t be able to extract any of the suit/face information stored in them. If we wanted users of our module to be able to access all of this information, we would have to specify it in the export list: module Cards ( Card(Card), Suit(Hearts,Spades,Diamonds,Clubs), Face(Jack,Queen,King,Ace,Number), ... )


where ... This can get frustrating if you’re exporting datatypes with many constructors, so if you want to export them all, you can simply write (..), as in: module Cards ( Card(..), Suit(..), Face(..), ... ) where ... And this will automatically export all the constructors.

6.2 Imports There are a few idiosyncrasies in the module import system, but as long as you stay away from the corner cases, you should be fine. Suppose, as before, you wrote a module called “Cards” which you saved in the file “Cards.hs”. You are now writing your poker module and you want to import all the definitions from the “Cards” module. To do this, all you need to do is write: module Poker where import Cards This will enable you to use any of the functions, types and constructors exported by the module “Cards”. You may refer to them simply by their name in the “Cards” module (as, for instance, newDeck), or you may refer to them explicitly as imported from “Cards” (as, for instance, Cards.newDeck). It may be the case that two modules export functions or types of the same name. In these cases, you can import one of the modules qualified which means that you would no longer be able to simply use the newDeck format but must use the longer Cards.newDeck format, to remove ambiguity. If you wanted to import “Cards” in this qualified form, you would write: import qualified Cards Another way to avoid problems with overlapping function definitions is to import only certain functions from modules. Suppose we knew the only function from “Cards” that we wanted was newDeck, we could import only this function by writing:


import Cards (newDeck) On the other hand, suppose we knew that the deal function overlapped with another module, but that we didn’t need the “Cards” version of that function. We could hide the definition of deal and import everything else by writing: import Cards hiding (deal) Finally, suppose we want to import “Cards” as a qualified module, but don’t want to have to type Cards. out all the time and would rather just type, for instance, C. – we could do this using the as keyword: import qualified Cards as C These options can be mixed and matched – you can give explicit import lists on qualified/as imports, for instance.
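For example, one plausible combination (a sketch; the names come from the “Cards” module above) is a qualified import restricted to a couple of functions:

import qualified Cards as C (newDeck, shuffle)
...
... C.newDeck ...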

6.3 Hierarchical Imports Though technically not part of the Haskell 98 standard, most Haskell compilers support hierarchical imports. This was designed to get rid of clutter in the directories in which modules are stored. Hierarchical imports allow you to specify (to a certain degree) where in the directory structure a module exists. For instance, if you have a “haskell” directory on your computer and this directory is in your compiler’s path (see your compiler notes for how to set this; in GHC it’s “-i”, in Hugs it’s “-P”), then you can specify module locations in subdirectories to that directory. Suppose instead of saving the “Cards” module in your general haskell directory, you created a directory specifically for it called “Cards”. The full path of the Cards.hs file is then haskell/Cards/Cards.hs (or, for Windows haskell\Cards\Cards.hs). If you then change the name of the Cards module to “Cards.Cards”, as in: module Cards.Cards(...) where ... You could then import it in any module, regardless of this module’s directory, as: import Cards.Cards If you start importing these module qualified, I highly recommend using the as keyword to shorten the names, so you can write:


import qualified Cards.Cards as Cards ... Cards.newDeck ... instead of: import qualified Cards.Cards ... Cards.Cards.newDeck ... which tends to get ugly.

6.4 Literate Versus Non-Literate The idea of literate programming is a relatively simple one, but took quite a while to become popularized. When we think about programming, we think about the code being the default mode of entry and comments being secondary. That is, we write code without any special annotation, but comments are annotated with either -- or {- ... -}. Literate programming swaps these preconceptions. There are two types of literate programs in Haskell; the first uses so-called Bird-scripts and the second uses LATEX-style markup. Each will be discussed individually. No matter which you use, literate scripts must have the extension lhs instead of hs to tell the compiler that the program is written in a literate style.

6.4.1 Bird-scripts In a Bird-style literate program, comments are default and code is introduced with a leading greater-than sign (“>”). Everything else remains the same. For example, our Hello World program would be written in Bird-style as: This is a simple (literate!) Hello World program. > module Main > where All our main function does is print a string: > main = putStrLn "Hello World" Note that the spaces between the lines of code and the “comments” are necessary (your compiler will probably complain if you are missing them). When compiled or loaded in an interpreter, this program will have the exact same properties as the nonliterate version from Section 3.4.


6.4.2 LaTeX-scripts LATEX is a text-markup language very popular in the academic community for publishing. If you are unfamiliar with LATEX, you may not find this section terribly useful. Again, a literate Hello World program written in LATEX-style would look like: This is another simple (literate!) Hello World program. \begin{code} module Main where \end{code} All our main function does is print a string: \begin{code} main = putStrLn "Hello World" \end{code} In LATEX-style scripts, the blank lines are not necessary.

Chapter 7

Advanced Features Discussion

7.1 Sections and Infix Operators We’ve already seen how to double the values of elements in a list using map: Prelude> map (\x -> x*2) [1,2,3,4] [2,4,6,8] However, there is a more concise way to write this: Prelude> map (*2) [1,2,3,4] [2,4,6,8] This type of thing can be done for any infix function: Prelude> map (+5) [1,2,3,4] [6,7,8,9] Prelude> map (/2) [1,2,3,4] [0.5,1.0,1.5,2.0] Prelude> map (2/) [1,2,3,4] [2.0,1.0,0.666667,0.5] You might be tempted to try to subtract values from elements in a list by mapping -2 across a list. This won’t work, though, because while the + in +2 is parsed as the standard plus operator (as there is no ambiguity), the - in -2 is interpreted as the unary minus, not the binary minus. Thus -2 here is the number −2, not the function λx.x − 2. In general, these are called sections. For binary infix operators (like +), we can cause the function to become prefix by enclosing it in parentheses. For example:


Prelude> (+) 5 3 8 Prelude> (-) 5 3 2 Additionally, we can provide either of its arguments to make a section. For example: Prelude> (+5) 3 8 Prelude> (/3) 6 2.0 Prelude> (3/) 6 0.5 Non-infix functions can be made infix by enclosing them in backquotes (`). For example: Prelude> (+2) `map` [1..10] [3,4,5,6,7,8,9,10,11,12]
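As for the subtraction pitfall mentioned above, the usual workaround is the Prelude function subtract (or an explicit lambda); for instance:

Prelude> map (subtract 2) [1,2,3,4]
[-1,0,1,2]
Prelude> map (\x -> x - 2) [1,2,3,4]
[-1,0,1,2]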

7.2 Local Declarations Recall from Section 3.5 that there are many computations which require using the result of the same computation in multiple places in a function. There, we considered the function for computing the roots of a quadratic polynomial: roots a b c = ((-b + sqrt(b*b - 4*a*c)) / (2*a), (-b - sqrt(b*b - 4*a*c)) / (2*a)) In addition to the let bindings introduced there, we can do this using a where clause. where clauses come immediately after function definitions and introduce a new level of layout (see Section 7.11). We write this as: roots a b c = ((-b + det) / (2*a), (-b - det) / (2*a)) where det = sqrt(b*b-4*a*c) Any values defined in a where clause shadow any other values with the same name. For instance, if we had the following code block:


det = "Hello World" roots a b c = ((-b + det) / (2*a), (-b - det) / (2*a)) where det = sqrt(b*b-4*a*c) f _ = det The value of roots doesn’t notice the top-level declaration of det, since it is shadowed by the local definition (the fact that the types don’t match doesn’t matter either). Furthermore, since f cannot “see inside” of roots, the only thing it knows about det is what is available at the top level, which is the string “Hello World.” Thus, f is a function which takes any argument to that string. Where clauses can contain any number of subexpressions, but they must be aligned for layout. For instance, we could also pull out the 2*a computation and get the following code: roots a b c = ((-b + det) / (a2), (-b - det) / (a2)) where det = sqrt(b*b-4*a*c) a2 = 2*a Sub-expressions in where clauses must come after function definitions. Sometimes it is more convenient to put the local definitions before the actual expression of the function. This can be done by using let/in clauses. We have already seen let clauses; where clauses are virtually identical to their let clause cousins except for their placement. The same roots function can be written using let as: roots a b c = let det = sqrt (b*b - 4*a*c) a2 = 2*a in ((-b + det) / a2, (-b - det) / a2) Using a where clause, it looks like: roots a b c = ((-b + det) / a2, (-b - det) / a2) where det = sqrt (b*b - 4*a*c) a2 = 2*a These two types of clauses can be mixed (i.e., you can write a function which has both a let cause and a where clause). This is strongly advised against, as it tends to make code difficult to read. However, if you choose to do it, values in the let clause shadow those in the where clause. So if you define the function:


f x = let y = x+1 in y where y = x+2 The value of f 5 is 6, not 7. Of course, I plead with you to never ever write code that looks like this. No one should have to remember this rule, and shadowing where-defined values in a let clause only makes your code difficult to understand. In general, whether you should use let clauses or where clauses is largely a matter of personal preference. Usually, the names you give to the subexpressions should be sufficiently expressive that without reading their definitions any reader of your code should be able to figure out what they do. In this case, where clauses are probably more desirable because they allow the reader to see immediately what a function does. However, in real life, values are often given cryptic names. In which case let clauses may be better. Either is probably okay, though I think where clauses are more common.

7.3 Partial Application


Partial application is when you take a function which takes n arguments and you supply it with < n of them. When discussing sections in Section 7.1, we saw a form of “partial application” in which functions like + were partially applied. For instance, in the expression map (+1) [1,2,3], the section (+1) is a partial application of +. This is because + really takes two arguments, but we’ve only given it one. Partial application is very common in function definitions and sometimes goes by the name “eta reduction”. For instance, suppose we are writing a function lcaseString which converts a whole string into lower case. We could write this as: lcaseString s = map toLower s Here, there is no partial application (though you could argue that applying no arguments to toLower could be considered partial application). However, we notice that the application of s occurs at the end of both lcaseString and of map toLower. In fact, we can remove it by performing eta reduction, to get: lcaseString = map toLower Now, we have a partial application of map: it expects a function and a list, but we’ve only given it the function. This all is related to the type of map, which is (a → b) → ([a] → [b]), when parentheses are all included. In our case, toLower is of type Char → Char. Thus, if we supply this function to map, we get a function of type [Char] → [Char], as desired. Now, consider the task of converting a string to lowercase and removing all non-letter characters. We might write this as:


lcaseLetters s = map toLower (filter isAlpha s) But note that we can actually write this in terms of function composition: lcaseLetters s = (map toLower . filter isAlpha) s And again, we’re left with an eta reducible function: lcaseLetters = map toLower . filter isAlpha Writing functions in this style is very common among advanced Haskell users. In fact it has a name: point-free programming (not to be confused with pointless programming). It is called point-free because in the original definition of lcaseLetters, we can think of the value s as a point on which the function is operating. By removing the point from the function definition, we have a point-free function. A function similar to (.) is ($). Whereas (.) is function composition, ($) is function application. The definition of ($) from the Prelude is very simple: f $ x = f x However, this function is given very low fixity, which means that it can be used to replace parentheses. For instance, we might write a function: foo x y = bar y (baz (fluff (ork x))) However, using the function application function, we can rewrite this as: foo x y = bar y $ baz $ fluff $ ork x This moderately resembles the function composition syntax. The ($) function is also useful when combined with other infix functions. For instance, we cannot write: Prelude> putStrLn "5+3=" ++ show (5+3) because this is interpreted as (putStrLn "5+3=") ++ (show (5+3)), which makes no sense. However, we can fix this by writing instead: Prelude> putStrLn $ "5+3=" ++ show (5+3) Which works fine. Consider now the task of extracting from a list of tuples all the ones whose first component is greater than zero. One way to write this would be:



fstGt0 l = filter (\ (a,b) -> a>0) l We can first apply eta reduction to the whole function, yielding: fstGt0 = filter (\ (a,b) -> a>0) Now, we can rewrite the lambda function to use the fst function instead of the pattern matching: fstGt0 = filter (\x -> fst x > 0) Now, we can use function composition between fst and > to get: fstGt0 = filter (\x -> ((>0) . fst) x) And finally we can eta reduce: fstGt0 = filter ((>0).fst) This definition is simultaneously shorter and easier to understand than the original. We can clearly see exactly what it is doing: we’re filtering a list by checking whether something is greater than zero. What are we checking? The fst element. While converting to point free style often results in clearer code, this is of course not always the case. For instance, converting the following map to point free style yields something nearly uninterpretable: foo = map (\x -> sqrt (3+4*(x^2))) foo = map (sqrt . (3+) . (4*) . (^2)) There are a handful of combinators defined in the Prelude which are useful for point free programming: • uncurry takes a function of type a → b → c and converts it into a function of type (a, b) → c. This is useful, for example, when mapping across a list of pairs: Prelude> map (uncurry (*)) [(1,2),(3,4),(5,6)] [2,12,30] • curry is the opposite of uncurry and takes a function of type (a, b) → c and produces a function of type a → b → c. • flip reverses the order of arguments to a function. That is, it takes a function of type a → b → c and produces a function of type b → a → c. For instance, we can sort a list in reverse order by using flip compare:


Prelude> List.sortBy compare [5,1,8,3] [1,3,5,8] Prelude> List.sortBy (flip compare) [5,1,8,3] [8,5,3,1] This is the same as saying: Prelude> List.sortBy (\a b -> compare b a) [5,1,8,3] [8,5,3,1] only shorter. Of course, not all functions can be written in point free style. For instance: square x = x*x Cannot be written in point free style, without some other combinators. For instance, if we can define other functions, we can write: pair x = (x,x) square = uncurry (*) . pair But in this case, this is not terribly useful.

Exercises Exercise 7.1 Convert the following functions into point-free style, if possible. func1 x l = map (\y -> y*x) l func2 f g l = filter f (map g l) func3 f l = l ++ map f l func4 l = map (\y -> y+2) (filter (\z -> z ‘elem‘ [1..10]) (5:l)) func5 f l = foldr (\x y -> f (y,x)) 0 l


7.4 Pattern Matching Pattern matching is one of the most powerful features of Haskell (and most functional programming languages). It is most commonly used in conjunction with case expressions, which we have already seen in Section 3.5. Let’s return to our Color example from Section 4.5. I’ll repeat the definition we already had for the datatype:

data Color = Red
           | Orange
           | Yellow
           | Green
           | Blue
           | Purple
           | White
           | Black
           | Custom Int Int Int  -- R G B components
  deriving (Show,Eq)

We then want to write a function that will convert between something of type Color and a triple of Ints, which correspond to the RGB values, respectively. Specifically, if we see a Color which is Red, we want to return (255,0,0), since this is the RGB value for red. So we write that (remember that piecewise function definitions are just case statements): colorToRGB Red = (255,0,0) If we see a Color which is Orange, we want to return (255,128,0); and if we see Yellow, we want to return (255,255,0), and so on. Finally, if we see a custom color, which is comprised of three components, we want to make a triple out of these, so we write:

colorToRGB Orange = (255,128,0)
colorToRGB Yellow = (255,255,0)
colorToRGB Green  = (0,255,0)
colorToRGB Blue   = (0,0,255)
colorToRGB Purple = (255,0,255)
colorToRGB White  = (255,255,255)
colorToRGB Black  = (0,0,0)
colorToRGB (Custom r g b) = (r,g,b)

Then, in our interpreter, if we type: Color> colorToRGB Yellow (255,255,0)


What is happening is this: we create a value, call it x, which has value Yellow. We then apply this to colorToRGB. We check to see if we can “match” x against Red. This match fails because according to the definition of Eq Color, Red is not equal to Yellow. We continue down the definitions of colorToRGB and try to match Yellow against Orange. This fails, too. We then try to match Yellow against Yellow, which succeeds, so we use this function definition, which simply returns the value (255,255,0), as expected. Suppose instead, we used a custom color: Color> colorToRGB (Custom 50 200 100) (50,200,100) We apply the same matching process, failing on all values from Red to Black. We then get to try to match Custom 50 200 100 against Custom r g b. We can see that the Custom part matches, so then we go see if the subelements match. In the matching, the variables r, g and b are essentially wild cards, so there is no trouble matching r with 50, g with 200 and b with 100. As a “side-effect” of this matching, r gets the value 50, g gets the value 200 and b gets the value 100. So the entire match succeeded and we look at the definition of this part of the function and bundle up the triple using the matched values of r, g and b. We can also write a function to check to see if a Color is a custom color or not: isCustomColor (Custom _ _ _) = True isCustomColor _ = False When we apply a value to isCustomColor it tries to match that value against Custom _ _ _. This match will succeed if the value is Custom x y z for any x, y and z. The _ (underscore) character is a “wildcard” and will match anything, but will not do the binding that would happen if you put a variable name there. If this match succeeds, the function returns True; however, if this match fails, it goes on to the next line, which will match anything and then return False. For some reason we might want to define a function which tells us whether a given color is “bright” or not, where my definition of “bright” is that one of its RGB components is equal to 255 (admittedly an arbitrary definition, but it’s simply an example). We could define this function as: isBright = isBright’ . colorToRGB where isBright’ (255,_,_) = True isBright’ (_,255,_) = True isBright’ (_,_,255) = True isBright’ _ = False Let’s dwell on this definition for a second. The isBright function is the composition of our previously defined function colorToRGB and a helper function isBright’, which tells us if a given RGB value is bright or not. We could replace the first line here


with isBright c = isBright’ (colorToRGB c) but there is no need to explicitly write the parameter here, so we don’t. Again, this function composition style of programming takes some getting used to, so I will try to use it frequently in this tutorial. The isBright’ helper function takes the RGB triple produced by colorToRGB. It first tries to match it against (255,_,_) which succeeds if the value has 255 in its first position. If this match succeeds, isBright’ returns True and so does isBright. The second and third line of definition check for 255 in the second and third position in the triple, respectively. The fourth line, the fallthrough, matches everything else and reports it as not bright. We might want to also write a function to convert between RGB triples and Colors. We could simply stick everything in a Custom constructor, but this would defeat the purpose; we want to use the Custom slot only for values which don’t match the predefined colors. However, we don’t want to allow the user to construct custom colors like (600,-40,99) since these are invalid RGB values. We could throw an error if such a value is given, but this can be difficult to deal with. Instead, we use the Maybe datatype. This is defined (in the Prelude) as: data Maybe a = Nothing | Just a The way we use this is as follows: our rgbToColor function returns a value of type Maybe Color. If the RGB value passed to our function is invalid, we return Nothing, which corresponds to a failure. If, on the other hand, the RGB value is valid, we create the appropriate Color value and return Just that. The code to do this is:

rgbToColor 255 0   0   = Just Red
rgbToColor 255 128 0   = Just Orange
rgbToColor 255 255 0   = Just Yellow
rgbToColor 0   255 0   = Just Green
rgbToColor 0   0   255 = Just Blue
rgbToColor 255 0   255 = Just Purple
rgbToColor 255 255 255 = Just White
rgbToColor 0   0   0   = Just Black
rgbToColor r g b =
    if 0 <= r && r <= 255 &&
       0 <= g && g <= 255 &&
       0 <= b && b <= 255
      then Just (Custom r g b)
      else Nothing  -- invalid RGB value

The first eight lines match the RGB arguments against the predefined values and, if they match, rgbToColor returns Just the appropriate color. If none of these matches, the last definition of rgbToColor matches the first argument against r, the


second against g and the third against b (which causes the side-effect of binding these values). It then checks to see if these values are valid (each is greater than or equal to zero and less than or equal to 255). If so, it returns Just (Custom r g b); if not, it returns Nothing corresponding to an invalid color. Using this, we can write a function that checks to see if a given RGB value is valid: rgbIsValid r g b = rgbIsValid’ (rgbToColor r g b) where rgbIsValid’ (Just _) = True rgbIsValid’ _ = False Here, we compose the helper function rgbIsValid’ with our function rgbToColor. The helper function checks to see if the value returned by rgbToColor is Just anything (the wildcard). If so, it returns True. If not, it matches anything and returns False. Pattern matching isn’t magic, though. You can only match against datatypes; you cannot match against functions. For instance, the following is invalid: f x = x + 1 g (f x) = x Even though the intended meaning of g is clear (i.e., g x = x - 1), the compiler doesn’t know in general that f has an inverse function, so it can’t perform matches like this.

7.5 Guards Guards can be thought of as an extension to the pattern matching facility. They enable you to allow piecewise function definitions to be taken according to arbitrary boolean expressions. Guards appear after all arguments to a function but before the equals sign, and are begun with a vertical bar. We could use guards to write a simple function which returns a string telling you the result of comparing two elements: comparison x y | x < y = "The first is less" | x > y = "The second is less" | otherwise = "They are equal" You can read the vertical bar as “such that.” So we say that the value of comparison x y “such that” x is less than y is “The first is less.” The value such that x is greater than y is “The second is less” and the value otherwise is “They are equal”. The keyword otherwise is simply defined to be equal to True and thus matches anything that falls through that far. So, we can see that this works:


Guards> comparison 5 10 "The first is less" Guards> comparison 10 5 "The second is less" Guards> comparison 7 7 "They are equal" Guards are applied in conjunction with pattern matching. When a pattern matches, all of its guards are tried, consecutively, until one matches. If none match, then pattern matching continues with the next pattern. One nicety about guards is that where clauses are common to all guards. So another possible definition for our isBright function from the previous section would be: isBright2 c | r == 255 = True | g == 255 = True | b == 255 = True | otherwise = False where (r,g,b) = colorToRGB c The function is equivalent to the previous version, but performs its calculation slightly differently. It takes a color, c, and applies colorToRGB to it, yielding an RGB triple which is matched (using pattern matching!) against (r,g,b). This match succeeds and the values r, g and b are bound to their respective values. The first guard checks to see if r is 255 and, if so, returns true. The second and third guard check g and b against 255, respectively and return true if they match. The last guard fires as a last resort and returns False.

7.6 Instance Declarations In order to declare a type to be an instance of a class, you need to provide an instance declaration for it. Most classes provide what’s called a “minimal complete definition.” This means the functions which must be implemented for this class in order for its definition to be satisfied. Once you’ve written these functions for your type, you can declare it an instance of the class.

7.6.1 The Eq Class The Eq class has two members (i.e., two functions): (==) :: Eq a => a -> a -> Bool (/=) :: Eq a => a -> a -> Bool


The first of these type signatures reads that the function == is a function which takes two as which are members of Eq and produces a Bool. The type signature of /= (not equal) is identical. A minimal complete definition for the Eq class requires that either one of these functions be defined (if you define ==, then /= is defined automatically by negating the result of ==, and vice versa). These declarations must be provided inside the instance declaration. This is best demonstrated by example. Suppose we have our color example, repeated here for convenience:

data Color = Red
           | Orange
           | Yellow
           | Green
           | Blue
           | Purple
           | White
           | Black
           | Custom Int Int Int  -- R G B components

We can define Color to be an instance of Eq by the following declaration: instance Eq Color where Red == Red = True Orange == Orange = True Yellow == Yellow = True Green == Green = True Blue == Blue = True Purple == Purple = True White == White = True Black == Black = True (Custom r g b) == (Custom r’ g’ b’) = r == r’ && g == g’ && b == b’ _ == _ = False The first line here begins with the keyword instance telling the compiler that we’re making an instance declaration. It then specifies the class, Eq, and the type, Color which is going to be an instance of this class. Following that, there’s the where keyword. Finally there’s the method declaration. The first eight lines of the method declaration are basically identical. The first one, for instance, says that the value of the expression Red == Red is equal to True. Lines two through eight are identical. The declaration for custom colors is a bit different. We pattern match Custom on both sides of ==. On the left hand side, we bind r, g and b to the components, respectively. On the right hand side, we bind r’, g’ and b’ to the components. We then say that these two custom colors are equal precisely


when r == r’, g == g’ and b == b’ are all equal. The fallthrough says that any pair we haven’t previously declared as equal are unequal.
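Given this declaration, we would expect, for instance:

Color> Red == Red
True
Color> Custom 1 2 3 == Custom 1 2 3
True
Color> Red == Yellow
False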

7.6.2 The Show Class The Show class is used to display arbitrary values as strings. This class has three methods:

show      :: Show a => a -> String
showsPrec :: Show a => Int -> a -> String -> String
showList  :: Show a => [a] -> String -> String

A minimal complete definition is either show or showsPrec (we will talk about showsPrec later – it’s in there for efficiency reasons). We can define our Color datatype to be an instance of Show with the following instance declaration:

instance Show Color where
  show Red    = "Red"
  show Orange = "Orange"
  show Yellow = "Yellow"
  show Green  = "Green"
  show Blue   = "Blue"
  show Purple = "Purple"
  show White  = "White"
  show Black  = "Black"
  show (Custom r g b) =
    "Custom " ++ show r ++ " " ++ show g ++ " " ++ show b

This declaration specifies exactly how to convert values of type Color to Strings. Again, the first eight lines are identical and simply take a Color and produce a string. The last line for handling custom colors matches out the RGB components and creates a string by concatenating the result of showing the components individually (with spaces in between and “Custom” at the beginning).
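With this instance in scope we would expect, for instance:

Color> show (Custom 50 200 100)
"Custom 50 200 100"
Color> show Red
"Red"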

7.6.3 Other Important Classes There are a few other important classes which I will mention briefly because either they are commonly used or because we will be using them shortly. I won’t provide example instance declarations; how you can do this should be clear by now. The Ord Class This is the class of ordered types; its functions are:


compare :: Ord a => a -> a -> Ordering
(<=)    :: Ord a => a -> a -> Bool
(>)     :: Ord a => a -> a -> Bool
(>=)    :: Ord a => a -> a -> Bool
(<)     :: Ord a => a -> a -> Bool
min     :: Ord a => a -> a -> a
max     :: Ord a => a -> a -> a

Almost any one of these functions alone is a minimal complete definition; it is recommended that you implement compare if you implement only one, though. This function returns a value of type Ordering which is defined as: data Ordering = LT | EQ | GT So, for instance, we get: Prelude> compare 5 7 LT Prelude> compare 6 6 EQ Prelude> compare 7 5 GT In order to declare a type to be an instance of Ord you must already have declared it an instance of Eq (in other words, Ord is a subclass of Eq – more about this in Section 8.4). The Enum Class The Enum class is for enumerated types; that is, for types where each element has a successor and a predecessor. Its methods are:

pred           :: Enum a => a -> a
succ           :: Enum a => a -> a
toEnum         :: Enum a => Int -> a
fromEnum       :: Enum a => a -> Int
enumFrom       :: Enum a => a -> [a]
enumFromThen   :: Enum a => a -> a -> [a]
enumFromTo     :: Enum a => a -> a -> [a]
enumFromThenTo :: Enum a => a -> a -> a -> [a]

The minimal complete definition contains both toEnum and fromEnum, which convert from and to Ints. The pred and succ functions give the predecessor and successor, respectively. The enum functions enumerate lists of elements. For instance,


enumFrom x lists all elements after x; enumFromThen x step lists all elements starting at x in steps of size step. The To functions end the enumeration at the given element. The Num Class The Num class provides the standard arithmetic operations:

(-)         :: Num a => a -> a -> a
(*)         :: Num a => a -> a -> a
(+)         :: Num a => a -> a -> a
negate      :: Num a => a -> a
signum      :: Num a => a -> a
abs         :: Num a => a -> a
fromInteger :: Num a => Integer -> a

All of these are obvious except for perhaps negate which is the unary minus. That is, negate x means −x. The Read Class The Read class is the opposite of the Show class. It is a way to take a string and read in from it a value of arbitrary type. The methods for Read are: readsPrec :: Read a => Int -> String -> [(a, String)] readList :: String -> [([a], String)] The minimal complete definition is readsPrec. The most important function related to this is read, which uses readsPrec as: read s = fst (head (readsPrec 0 s)) This will fail if parsing the string fails. You could define a maybeRead function as: maybeRead s = case readsPrec 0 s of [(a,_)] -> Just a _ -> Nothing How to write and use readsPrec directly will be discussed further in the examples.
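As a small illustration of how these get used (the type annotations are needed so the interpreter knows which Read instance to pick, and the second and third calls assume the maybeRead function defined above has been loaded):

Prelude> read "5" :: Int
5
Prelude> maybeRead "5" :: Maybe Int
Just 5
Prelude> maybeRead "five" :: Maybe Int
Nothing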



7.6.4 Class Contexts Suppose we are defining the Maybe datatype from scratch. The definition would be something like:

data Maybe a = Nothing | Just a

Now, when we go to write the instance declarations, for, say, Eq, we need to know that a is an instance of Eq; otherwise we can't write a declaration. We express this as:

instance Eq a => Eq (Maybe a) where
  Nothing  == Nothing   = True
  (Just x) == (Just x') = x == x'
  _        == _         = False

The first line can be read "That a is an instance of Eq implies (=>) that Maybe a is an instance of Eq."
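The same kind of context shows up for any other class we might want. For example, a Show instance for this home-grown Maybe (a sketch only – the Prelude's real Maybe already has one) follows exactly the same pattern:

instance Show a => Show (Maybe a) where
  show Nothing  = "Nothing"
  show (Just x) = "Just " ++ show x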

7.6.5 Deriving Classes Writing obvious Eq, Ord, Read and Show instances like these is tedious and should be automated. Luckily for us, it is. If you write a datatype that's "simple enough" (almost any datatype you'll write unless you start writing fixed point types), the compiler can automatically derive some of the most basic classes. To do this, you simply add a deriving clause after the datatype declaration, as in:

data Color
  = Red
  | ...
  | Custom Int Int Int -- R G B components
  deriving (Eq, Ord, Show, Read)

This will automatically create instances of the named classes for the Color datatype. Similarly, the declaration:

data Maybe a = Nothing | Just a
  deriving (Eq, Ord, Show, Read)

derives these classes just when a is appropriate. All in all, you are allowed to derive instances of Eq, Ord, Enum, Bounded, Show and Read. There is considerable work in the area of "polytypic programming" or "generic programming" which, among other things, would allow instance declarations for any class to be derived. This is much beyond the scope of this tutorial; instead, I refer you to the literature.
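With the deriving clause in place, the generated instances behave as you would expect. For instance, assuming the full Color declaration above, a hypothetical session might look like:

Colors> Red == Red
True
Colors> Red < Yellow
True
Colors> show (Custom 1 2 3)
"Custom 1 2 3"
Colors> (read "Blue" :: Color)
Blue

(Derived Ord orders constructors in declaration order, which is why Red < Yellow.)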



7.7 Datatypes Revisited I know by this point you're probably terribly tired of hearing about datatypes. They are, however, incredibly important; otherwise I wouldn't devote so much time to them. Datatypes offer a sort of notational convenience if you have, for instance, a datatype that holds many, many values. This convenience is called named fields.

7.7.1 Named Fields Consider a datatype whose purpose is to hold configuration settings. Usually when you extract members from this type, you really only care about one or possibly two of the many settings. Moreover, if many of the settings have the same type, you might often find yourself wondering "wait, was this the fourth or fifth element?" One thing you could do would be to write accessor functions. Consider the following made-up configuration type for a terminal program:

data Configuration =
  Configuration String   -- user name
                String   -- local host
                String   -- remote host
                Bool     -- is guest?
                Bool     -- is super user?
                String   -- current directory
                String   -- home directory
                Integer  -- time connected
  deriving (Eq, Show)

You could then write accessor functions, like (I've only listed a few):

getUserName (Configuration un _ _ _ _ _ _ _) = un
getLocalHost (Configuration _ lh _ _ _ _ _ _) = lh
getRemoteHost (Configuration _ _ rh _ _ _ _ _) = rh
getIsGuest (Configuration _ _ _ ig _ _ _ _) = ig
...

You could also write update functions to update a single element. Of course, now if you add an element to the configuration, or remove one, all of these functions now have to take a different number of arguments. This is highly annoying and is an easy place for bugs to slip in. However, there's a solution. We simply give names to the fields in the datatype declaration, as follows:

data Configuration =
  Configuration { username      :: String,
                  localhost     :: String,
                  remotehost    :: String,
                  isguest       :: Bool,
                  issuperuser   :: Bool,
                  currentdir    :: String,
                  homedir       :: String,
                  timeconnected :: Integer

} This will automatically generate the following accessor functions for us:

username :: Configuration -> String
localhost :: Configuration -> String
...

Moreover, it gives us very convenient update methods. Here is a short example for "post working directory" and "change directory" like functions that work on Configurations:

changeDir :: Configuration -> String -> Configuration
changeDir cfg newDir =
  -- make sure the directory exists
  if directoryExists newDir
    then -- change our current directory
         cfg{currentdir = newDir}
    else error "directory does not exist"

postWorkingDir :: Configuration -> String
-- retrieve our current directory
postWorkingDir cfg = currentdir cfg

So, in general, to update the field x in a datatype y to z, you write y{x=z}. You can change more than one; each should be separated by commas, for instance, y{x=z, a=b, c=d}. You can of course continue to pattern match against Configurations as you did before. The named fields are simply syntactic sugar; you can still write something like:

getUserName (Configuration un _ _ _ _ _ _ _) = un

But there is little reason to. Finally, you can pattern match against named fields as in:

getHostData (Configuration {localhost=lh,remotehost=rh}) = (lh,rh)



This matches the variable lh against the localhost field on the Configuration and the variable rh against the remotehost field on the Configuration. These matches of course succeed. You could also constrain the matches by putting values instead of variable names in these positions, as you would for standard datatypes. You can create values of Configuration in the old way, as shown in the first definition below, or with the named-field syntax, as shown in the second definition below:

initCFG =
  Configuration "nobody" "nowhere" "nowhere" False False "/" "/" 0

initCFG' =
  Configuration { username="nobody",
                  localhost="nowhere",
                  remotehost="nowhere",
                  isguest=False,
                  issuperuser=False,
                  currentdir="/",
                  homedir="/",
                  timeconnected=0 }

The second is probably much more understandable, unless you litter your code with comments.
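As a quick illustration of the record-update syntax in use (a hypothetical session, assuming the named-field declaration and initCFG' above):

Configs> let cfg = initCFG' { username = "hal", issuperuser = True }
Configs> username cfg
"hal"
Configs> timeconnected cfg
0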

7.8 More Lists todo: put something here

7.8.1 Standard List Functions Recall that the definition of the built-in Haskell list datatype is equivalent to:

data List a = Nil | Cons a (List a)

with the exception that Nil is called [] and Cons x xs is called x:xs. This is simply to make pattern matching easier and code smaller. Let's investigate how some of the standard list functions may be written. Consider map. A definition is given below:

map _ [] = []
map f (x:xs) = f x : map f xs



Here, the first line says that when you map across an empty list, no matter what the function is, you get an empty list back. The second line says that when you map across a list with x as the head and xs as the tail, the result is f applied to x consed onto the result of mapping f on xs. The filter function can be defined similarly:

filter _ [] = []
filter p (x:xs)
  | p x       = x : filter p xs
  | otherwise = filter p xs

How this works should be clear. For an empty list, we return an empty list. For a non-empty list, we return the filter of the tail, perhaps with the head on the front, depending on whether it satisfies the predicate p or not. We can define foldr as:

foldr _ z [] = z
foldr f z (x:xs) = f x (foldr f z xs)

Here, the best interpretation is that we are replacing the empty list ([]) with a particular value and the list constructor (:) with some function. On the first line, we can see the replacement of [] for z. Using backquotes to make f infix, we can write the second line as:

foldr f z (x:xs) = x ‘f‘ (foldr f z xs)

From this, we can directly see how : is being replaced by f. Finally, foldl:

foldl _ z [] = z
foldl f z (x:xs) = foldl f (f z x) xs

This is slightly more complicated. Remember, z can be thought of as the current state. So if we're folding across a list which is empty, we simply return the current state. On the other hand, if the list is not empty, it's of the form x:xs. In this case, we get a new state by applying f to the current state z and the current list element x, and then recursively call foldl on xs with this new state. There is another class of functions: the zip and unzip functions, which respectively take multiple lists and make one, or take one list and split it apart. For instance, zip does the following:

Prelude> zip "hello" [1,2,3,4,5]
[('h',1),('e',2),('l',3),('l',4),('o',5)]



Basically, it pairs the first elements of both lists and makes that the first element of the new list. It then pairs the second elements of both lists and makes that the second element, etc. What if the lists have unequal length? It simply stops when the shorter one stops. A reasonable definition for zip is: zip [] _ = [] zip _ [] = [] zip (x:xs) (y:ys) = (x,y) : zip xs ys The unzip function does the opposite. It takes a zipped list and returns the two “original” lists: Prelude> unzip [(’f’,1),(’o’,2),(’o’,3)] ("foo",[1,2,3]) There are a whole slew of zip and unzip functions, named zip3, unzip3, zip4, unzip4 and so on; the ...3 functions use triples instead of pairs; the ...4 functions use 4-tuples, etc. Finally, the function take takes an integer n and a list and returns the first n elements off the list. Correspondingly, drop takes an integer n and a list and returns the result of throwing away the first n elements off the list. Neither of these functions produces an error; if n is too large, they both will just return shorter lists.
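A few interpreter examples tie the functions from this section together:

Prelude> foldr (+) 0 [1,2,3,4]
10
Prelude> foldl (flip (:)) [] [1,2,3]
[3,2,1]
Prelude> zip [1,2,3] (drop 2 "hello")
[(1,'l'),(2,'l'),(3,'o')]
Prelude> take 2 (zip "ab" [1..])
[('a',1),('b',2)]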

7.8.2 List Comprehensions There is some syntactic sugar for dealing with lists whose elements are members of the Enum class (see Section 7.6), such as Int or Char. If we want to create a list of all the elements from 1 to 10, we can simply write: Prelude> [1..10] [1,2,3,4,5,6,7,8,9,10] We can also introduce an amount to step by: Prelude> [1,3..10] [1,3,5,7,9] Prelude> [1,4..10] [1,4,7,10] These expressions are short hand for enumFromTo and enumFromThenTo, respectively. Of course, you don’t need to specify an upper bound. Try the following (but be ready to hit Control+C to stop the computation!): Prelude> [1..] [1,2,3,4,5,6,7,8,9,10,11,12{Interrupted!}



Probably yours printed a few thousand more elements than this. As we said before, Haskell is lazy. That means that a list of all numbers from 1 on is perfectly well formed and that’s exactly what this list is. Of course, if you attempt to print the list (which we’re implicitly doing by typing it in the interpreter), it won’t halt. But if we only evaluate an initial segment of this list, we’re fine: Prelude> take 3 [1..] [1,2,3] Prelude> take 3 (drop 5 [1..]) [6,7,8] This comes in useful if, say, we want to assign an ID to each element in a list. Without laziness we’d have to write something like this: assignID :: [a] -> [(a,Int)] assignID l = zip l [1..length l] Which means that the list will be traversed twice. However, because of laziness, we can simply write: assignID l = zip l [1..] And we’ll get exactly what we want. We can see that this works: Prelude> assignID "hello" [(’h’,1),(’e’,2),(’l’,3),(’l’,4),(’o’,5)] Finally, there is some useful syntactic sugar for map and filter, based on standard set-notation in mathematics. In math, we would write something like {f (x)|x ∈ s ∧ p(x)} to mean the set of all values of f when applied to elements of s which satisfy p. This is equivalent to the Haskell statement map f (filter p s). However, we can also use more math-like notation and write [f x | x <- s, p x]. While in math the ordering of the statements on the side after the pipe is free, it is not so in Haskell. We could not have put p x before x <- s otherwise the compiler wouldn’t know yet what x was. We can use this to do simple string processing. Suppose we want to take a string, remove all the lower-case letters and convert the rest of the letters to upper case. We could do this in either of the following two equivalent ways: Prelude> map toLower (filter isUpper "Hello World") "hw" Prelude> [toLower x | x <- "Hello World", isUpper x] "hw"



These two are equivalent, and, depending on the exact functions you’re using, one might be more readable than the other. There’s more you can do here, though. Suppose you want to create a list of pairs, one for each point between (0,0) and (5,7) below the diagonal. Doing this manually with lists and maps would be cumbersome and possibly difficult to read. It couldn’t be easier than with list comprehensions: Prelude> [(x,y) | x <- [1..5], y <- [x..7]] [(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),(1,7),(2,2),(2,3), (2,4),(2,5),(2,6),(2,7),(3,3),(3,4),(3,5),(3,6),(3,7), (4,4),(4,5),(4,6),(4,7),(5,5),(5,6),(5,7)] If you reverse the order of the x <- and y <- clauses, the order in which the space is traversed will be reversed (of course, in that case, y could no longer depend on x and you would need to make x depend on y but this is trivial).
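Comprehensions can also mix several generators with a guard. For example, the right triangles with sides of at most 10 can be listed directly:

Prelude> [(a,b,c) | c <- [1..10], b <- [1..c], a <- [1..b], a*a + b*b == c*c]
[(3,4,5),(6,8,10)]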

7.9 Arrays Lists are nice for many things. It is easy to add elements to the beginning of them and to manipulate them in various ways that change the length of the list. However, they are bad for random access, having average complexity O(n) to access an arbitrary element (if you don’t know what O(. . . ) means, you can either ignore it or take a quick detour and read Appendix A, a two-page introduction to complexity theory). So, if you’re willing to give up fast insertion and deletion because you need random access, you should use arrays instead of lists. In order to use arrays you must import the Array module. There are a few methods for creating arrays, the array function, the listArray function, and the accumArray function. The array function takes a pair which is the bounds of the array, and an association list which specifies the initial values of the array. The listArray function takes bounds and then simply a list of values. Finally, the accumArray function takes an accumulation function, an initial value and an association list and accumulates pairs from the list into the array. Here are some examples of arrays being created: Arrays> array (1,5) [(i,2*i) | i <- [1..5]] array (1,5) [(1,2),(2,4),(3,6),(4,8),(5,10)] Arrays> listArray (1,5) [3,7,5,1,10] array (1,5) [(1,3),(2,7),(3,5),(4,1),(5,10)] Arrays> accumArray (+) 2 (1,5) [(i,i) | i <- [1..5]] array (1,5) [(1,3),(2,4),(3,5),(4,6),(5,7)] When arrays are printed out (via the show function), they are printed with an association list. For instance, in the first example, the association list says that the value of the array at 1 is 2, the value of the array at 2 is 4, and so on. You can extract an element of an array using the ! function, which takes an array and an index, as in:



Arrays> (listArray (1,5) [3,7,5,1,10]) ! 3
5

Moreover, you can update elements in the array using the // function. This takes an array and an association list and updates the positions specified in the list:

Arrays> (listArray (1,5) [3,7,5,1,10]) // [(2,99),(3,-99)]
array (1,5) [(1,3),(2,99),(3,-99),(4,1),(5,10)]

There are a few other functions which are of interest:

bounds returns the bounds of an array
indices returns a list of all indices of the array
elems returns a list of all the values in the array in order
assocs returns an association list for the array

If we define arr to be listArray (1,5) [3,7,5,1,10], the results of these functions applied to arr are:

Arrays> bounds arr
(1,5)
Arrays> indices arr
[1,2,3,4,5]
Arrays> elems arr
[3,7,5,1,10]
Arrays> assocs arr
[(1,3),(2,7),(3,5),(4,1),(5,10)]

Note that while arrays are O(1) access, they are not O(1) update. They are in fact O(n) update, since in order to maintain purity, the array must be copied in order to make an update. Thus, functional arrays are pretty much only useful when you're filling them up once and then only reading. If you need fast access and update, you should probably use FiniteMaps, which are discussed in Section 7.10 and have O(log n) access and update.
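Putting these operations together, a short session might look like the following (the Arrays> prompt, as before, assumes the Array module has been loaded):

Arrays> let arr = listArray (1,5) [3,7,5,1,10]
Arrays> (arr // [(1,0),(5,0)]) ! 5
0
Arrays> elems (arr // [(3,99)])
[3,7,99,1,10]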

7.10 Finite Maps The FiniteMap datatype (which is available in the FiniteMap module, or Data.FiniteMap module in the hierarchical libraries) is a purely functional implementation of balanced trees. Finite maps can be compared to lists and arrays in terms of the time it takes to perform various operations on those datatypes of a fixed size, n. A brief comparison is:



         List   Array   FiniteMap
insert   O(1)   O(n)    O(log n)
update   O(n)   O(n)    O(log n)
delete   O(n)   O(n)    O(log n)
find     O(n)   O(1)    O(log n)
map      O(n)   O(n)    O(n log n)

As we can see, lists provide fast insertion (but slow everything else), arrays provide fast lookup (but slow everything else) and finite maps provide moderately fast everything (except mapping, which is a bit slower than lists or arrays). The type of a finite map is of the form FiniteMap key elt where key is the type of the keys and elt is the type of the elements. That is, finite maps are lookup tables from type key to type elt. The basic finite map functions are:

emptyFM   :: FiniteMap key elt
addToFM   :: FiniteMap key elt -> key -> elt -> FiniteMap key elt
delFromFM :: FiniteMap key elt -> key -> FiniteMap key elt
elemFM    :: key -> FiniteMap key elt -> Bool
lookupFM  :: FiniteMap key elt -> key -> Maybe elt

In all these cases, the type key must be an instance of Ord (and hence also an instance of Eq). There are also functions listToFM and fmToList to convert lists to and from finite maps. Try the following:

Prelude> :m FiniteMap
FiniteMap> let fm = listToFM [('a',5),('b',10),('c',1),('d',2)]
FiniteMap> let myFM = addToFM fm 'e' 6
FiniteMap> fmToList fm
[('a',5),('b',10),('c',1),('d',2)]
FiniteMap> fmToList myFM
[('a',5),('b',10),('c',1),('d',2),('e',6)]
FiniteMap> lookupFM myFM 'e'
Just 6
FiniteMap> lookupFM fm 'e'
Nothing

You can also experiment with the other commands. Note that you cannot show a finite map, as they are not instances of Show:

FiniteMap> show myFM



<interactive>:1:
  No instance for (Show (FiniteMap Char Integer))
  arising from use of ‘show’ at <interactive>:1
  In the definition of ‘it’: show myFM

In order to inspect the elements, you first need to use fmToList.
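Continuing the session above (and assuming the same fm and myFM bindings), the remaining basic operations work the same way:

FiniteMap> fmToList (delFromFM myFM 'b')
[('a',5),('c',1),('d',2),('e',6)]
FiniteMap> elemFM 'c' fm
True
FiniteMap> lookupFM (addToFM fm 'z' 100) 'z'
Just 100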

7.11 Layout

7.12 The Final Word on Lists You are likely tired of hearing about lists at this point, but they are so fundamental to Haskell (and really all of functional programming) that it would be terrible not to talk about them some more. It turns out that foldr is actually quite a powerful function: it can compute any primitive recursive function. A primitive recursive function is essentially one which can be calculated using only "for" loops, but not "while" loops. In fact, we can fairly easily define map in terms of foldr:

map2 f = foldr (\a b -> f a : b) []

Here, b is the accumulator (i.e., the result list) and a is the element currently being considered. In fact, we can simplify this definition through a sequence of steps:

       foldr (\a b -> f a : b) []
==>    foldr (\a b -> (:) (f a) b) []
==>    foldr (\a -> (:) (f a)) []
==>    foldr (\a -> ((:) . f) a) []
==>    foldr ((:) . f) []

This is directly related to the fact that foldr (:) [] is the identity function on lists. This is because, as mentioned before, foldr f z can be thought of as replacing the [] in lists by z and the : by f. In this case, we’re keeping both the same, so it is the identity function. In fact, you can convert any function of the following style into a foldr: myfunc [] = z myfunc (x:xs) = f x (myfunc xs) By writing the last line with f in infix form, this should be obvious: myfunc [] = z myfunc (x:xs) = x ‘f‘ (myfunc xs)




Clearly, we are just replacing [] with z and : with f. Consider the filter function:

filter p [] = []
filter p (x:xs) =
  if p x
    then x : filter p xs
    else filter p xs

This function also follows the form above. Based on the first line, we can figure out that z is supposed to be [], just like in the map case. Now, suppose that we call the result of calling filter p xs simply b; then we can rewrite this as:

filter p [] = []
filter p (x:xs) = if p x then x : b else b

Given this, we can transform filter into a fold:

filter p = foldr (\a b -> if p a then a:b else b) []

Let's consider a slightly more complicated function: ++. The definition for ++ is:

(++) [] ys = ys
(++) (x:xs) ys = x : (xs ++ ys)

Now, the question is whether we can write this in fold notation. First, we can apply eta reduction to the first line to give:

(++) [] = id

Through a sequence of steps, we can also eta-reduce the second line:

       (++) (x:xs) ys = x : ((++) xs ys)
==>    (++) (x:xs) ys = (x:) ((++) xs ys)
==>    (++) (x:xs) ys = ((x:) . (++) xs) ys
==>    (++) (x:xs)    = (x:) . (++) xs

Thus, we get that an eta-reduced definition of ++ is:

(++) [] = id
(++) (x:xs) = (x:) . (++) xs

Now, we can try to put this into fold notation. First, we notice that the base case converts [] into id. Now, if we assume (++) xs is called b and x is called a, we can get the following definition in terms of foldr:



(++) = foldr (\a b -> (a:) . b) id

This actually makes sense intuitively. If we only think about applying ++ to one argument, we can think of it as a function which takes a list and creates a function which, when applied, will prepend this list to another list. In the lambda function, we assume we have a function b which will do this for the rest of the list and we need to create a function which will do this for b as well as the single element a. In order to do this, we first apply b and then further add a to the front. We can further reduce this expression to a point-free style through the following sequence:

       (++) = foldr (\a b -> (a:) . b) id
==>    (++) = foldr (\a b -> (.) (a:) b) id
==>    (++) = foldr (\a -> (.) (a:)) id
==>    (++) = foldr (\a -> (.) ((:) a)) id
==>    (++) = foldr (\a -> ((.) . (:)) a) id
==>    (++) = foldr ((.) . (:)) id

This final version is point free, though not necessarily understandable. Presumably the original version is clearer. As a final example, consider concat. We can write this as:

concat [] = []
concat (x:xs) = x ++ concat xs

It should be immediately clear that the z element for the fold is [] and that the recursive function is ++, yielding:

concat = foldr (++) []
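The same recipe applies to many other list functions. For instance, length also has the myfunc shape – z is 0 and f simply ignores the element and adds one – so we could write (a small extra example in the same spirit):

length' :: [a] -> Int
length' = foldr (\_ b -> 1 + b) 0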

Exercises Exercise 7.2 The function and takes a list of booleans and returns True if and only if all of them are True. It also returns True on the empty list. Write this function in terms of foldr. Exercise 7.3 The function concatMap behaves such that concatMap f is the same as concat . map f. Write this function in terms of foldr.



Chapter 8

Advanced Types As you’ve probably ascertained by this point, the type system is integral to Haskell. While this chapter is called “Advanced Types”, you will probably find it to be more general than that and it must not be skipped simply because you’re not interested in the type system.

8.1 Type Synonyms

Type synonyms exist in Haskell simply for convenience: their removal would not make Haskell any less powerful. Consider the case when you are constantly dealing with lists of three-dimensional points. For instance, you might have a function with type [(Double, Double, Double)] → Double → [(Double, Double, Double)]. Since you are a good software engineer, you want to place type signatures on all your top-level functions. However, typing [(Double, Double, Double)] all the time gets very tedious. To get around this, you can define a type synonym:

type List3D = [(Double,Double,Double)]

Now, the type signature for your functions may be written List3D → Double → List3D. We should note that type synonyms cannot be self-referential. That is, you cannot have:

type BadType = Int -> BadType

This is because this is an "infinite type." Since Haskell removes type synonyms very early on, any instance of BadType will be replaced by Int → BadType, which will result in an infinite loop. Type synonyms can also be parameterized. For instance, you might want to be able to change the types of the points in the list of 3D points. For this, you could define:

type List3D a = [(a,a,a)]



Then your references to [(Double, Double, Double)] would become List3D Double.
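For example, a function that shifts every point in such a list could be written against the synonym (a small sketch; the name shiftPoints is just for illustration):

shiftPoints :: Double -> List3D Double -> List3D Double
shiftPoints d = map (\(x,y,z) -> (x+d, y+d, z+d))

Since List3D Double simply abbreviates [(Double,Double,Double)], this is exactly the same function you would have written before; only the signature got shorter.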

8.2 Newtypes Consider the problem in which you need to have a type which is very much like Int, but its ordering is defined differently. Perhaps you wish to order Ints first by even numbers then by odd numbers (that is, all odd numbers are greater than any even number and within the odd/even subsets, ordering is standard). Unfortunately, you cannot define a new instance of Ord for Int because then Haskell won't know which one to use. What you want is to define a type which is isomorphic to Int. NOTE "Isomorphic" is a common term in mathematics which basically means "structurally identical." For instance, in graph theory, if you have two graphs which are identical except they have different labels on the nodes, they are isomorphic. In our context, two types are isomorphic if they have the same underlying structure. One way to do this would be to define a new datatype:

data MyInt = MyInt Int

We could then write appropriate code for this datatype. The problem (and this is very subtle) is that this type is not truly isomorphic to Int: it has one more value. When we think of the type Int, we usually think that it takes all values of integers, but it really has one more value: ⊥ (pronounced "bottom"), which is used to represent erroneous or undefined computations. Thus, MyInt has not only values MyInt 0, MyInt 1 and so on, but also MyInt ⊥. However, since datatypes can themselves be undefined, it has an additional value: ⊥, which differs from MyInt ⊥ and this makes the types non-isomorphic. (See Section ?? for more information on bottom.) Disregarding that subtlety, there may be efficiency issues with this representation: now, instead of simply storing an integer, we have to store a pointer to an integer and have to follow that pointer whenever we need the value of a MyInt. To get around these problems, Haskell has a newtype construction. A newtype is a cross between a datatype and a type synonym: it has a constructor like a datatype, but it can have only one constructor and this constructor can have only one argument. For instance, we can define:

newtype MyInt = MyInt Int

But we cannot define any of:

newtype Bad1 = Bad1a Int | Bad1b Double
newtype Bad2 = Bad2 Int Double



Of course, the fact that we cannot define Bad2 as above is not a big issue: we can simply define the following by pairing the types:

newtype Good2 = Good2 (Int,Double)

Now, suppose we've defined MyInt as a newtype. This enables us to write our desired instance of Ord as:

instance Ord MyInt where
  MyInt i < MyInt j
    | odd i && odd j   = i < j
    | even i && even j = i < j
    | even i           = True
    | otherwise        = False
    where odd x = (x ‘mod‘ 2) /= 0
          even  = not . odd

Like datatypes, we can still derive classes like Show and Eq over newtypes (in fact, I'm implicitly assuming we have derived Eq over MyInt – where is my assumption in the above code?). Moreover, in recent versions of GHC (see Section 2.2), on newtypes, you are allowed to derive any class of which the base type (in this case, Int) is an instance. For example, we could derive Num on MyInt to provide arithmetic functions over it. Pattern matching over newtypes is exactly as in datatypes. We can write constructor and destructor functions for MyInt as follows:

mkMyInt i = MyInt i
unMyInt (MyInt i) = i
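With these definitions in place (and Eq derived for MyInt, as assumed above), the new ordering behaves as intended. A hypothetical session:

MyIntTest> MyInt 4 < MyInt 7
True
MyIntTest> MyInt 7 < MyInt 4
False
MyIntTest> unMyInt (mkMyInt 3)
3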

8.3 Datatypes We’ve already seen datatypes used in a variety of contexts. This section concludes some of the discussion and introduces some of the common datatypes in Haskell. It also provides a more theoretical underpinning to what datatypes actually are.

8.3.1 Strict Fields One of the great things about Haskell is that computation is performed lazily. However, sometimes this leads to inefficiencies. One way around this problem is to use datatypes with strict fields. Before we talk about the solution, let's spend some time to get a bit more comfortable with how bottom fits into the picture (for more theory, see Section ??). Suppose we've defined the unit datatype (this is one of the simplest datatypes you can define):



data Unit = Unit

This datatype has exactly one constructor, Unit, which takes no arguments. In a strict language like ML, there would be exactly one value of type Unit: namely, Unit. This is not quite so in Haskell. In fact, there are two values of type Unit. One of them is Unit. The other is bottom (written ⊥). You can think of bottom as representing a computation which won't halt. For instance, suppose we define the value:

foo = foo

This is perfectly valid Haskell code and simply says that when you want to evaluate foo, all you need to do is evaluate foo. Clearly this is an "infinite loop." What is the type of foo? Simply a. We cannot say anything more about it than that. The fact that foo has type a in fact tells us that it must be an infinite loop (or some other such strange value). However, since foo has type a and thus can have any type, it can also have type Unit. We could write, for instance:

foo :: Unit
foo = foo

Thus, we have found a second value with type Unit. In fact, we have found all values of type Unit. Any other non-terminating function or error-producing function will have exactly the same effect as foo (though Haskell provides some more utility with the function error). This means, for instance, that there are actually four values with type Maybe Unit. They are: ⊥, Nothing, Just ⊥ and Just Unit. However, it could be the fact that you, as a programmer, know that you will never come across the third of these. Namely, you want the argument to Just to be strict. This means that if the argument to Just is bottom, then the entire structure becomes bottom. You use an exclamation point to specify a constructor as strict. We can define a strict version of Maybe as:

data SMaybe a = SNothing | SJust !a

There are now only three values of SMaybe. We can see the difference by writing the following program:

module Main where

import System

data SMaybe a = SNothing | SJust !a deriving Show

main = do
  [cmd] <- getArgs
  case cmd of
    "a" -> printJust undefined
    "b" -> printJust Nothing
    "c" -> printJust (Just undefined)
    "d" -> printJust (Just ())
    "e" -> printSJust undefined
    "f" -> printSJust SNothing
    "g" -> printSJust (SJust undefined)
    "h" -> printSJust (SJust ())

printJust :: Maybe () -> IO ()
printJust Nothing = putStrLn "Nothing"
printJust (Just x) = do putStr "Just "; print x

printSJust :: SMaybe () -> IO ()
printSJust SNothing = putStrLn "Nothing"
printSJust (SJust x) = do putStr "Just "; print x

Here, depending on what command line argument is passed, we will do something different. The outputs for the various options are:

% ./strict a
Fail: Prelude.undefined

% ./strict b
Nothing

% ./strict c
Just Fail: Prelude.undefined

% ./strict d
Just ()

% ./strict e
Fail: Prelude.undefined

% ./strict f
Nothing

% ./strict g
Fail: Prelude.undefined



% ./strict h
Just ()

The thing worth noting here is the difference between cases "c" and "g". In the "c" case, the Just is printed, because this is printed before the undefined value is evaluated. However, in the "g" case, since the constructor is strict, as soon as you match the SJust, you also match the value. In this case, the value is undefined, so the whole thing fails before it gets a chance to do anything.

8.4 Classes We have already encountered type classes a few times, but only in the context of previously existing type classes. This section is about how to define your own. We will begin the discussion by talking about Pong and then move on to a useful generalization of computations.

8.4.1 Pong The discussion here will be motivated by the construction of the game Pong (see Appendix ?? for the full code). In Pong, there are three things drawn on the screen: the two paddles and the ball. While the paddles and the ball are different in a few respects, they share many commonalities, such as position, velocity, acceleration, color, shape, and so on. We can express these commonalities by defining a class for Pong entities, which we call Entity. We make such a definition as follows:

class Entity a where
  getPosition :: a -> (Int,Int)
  getVelocity :: a -> (Int,Int)
  getAcceleration :: a -> (Int,Int)
  getColor :: a -> Color
  getShape :: a -> Shape

This code defines a typeclass Entity. This class has five methods: getPosition, getVelocity, getAcceleration, getColor and getShape, with the corresponding types. The first line here uses the keyword class to introduce a new typeclass. We can read this typeclass definition as "There is a typeclass 'Entity'; a type 'a' is an instance of Entity if it provides the following five functions: . . . ". To see how we can write an instance of this class, let us define a player (paddle) datatype:

data Paddle =
  Paddle { paddlePosX, paddlePosY,
           paddleVelX, paddleVelY,
           paddleAccX, paddleAccY :: Int,
           paddleColor :: Color,
           paddleHeight :: Int,
           playerNumber :: Int }

Given this data declaration, we can define Paddle to be an instance of Entity:

instance Entity Paddle where
  getPosition p = (paddlePosX p, paddlePosY p)
  getVelocity p = (paddleVelX p, paddleVelY p)
  getAcceleration p = (paddleAccX p, paddleAccY p)
  getColor = paddleColor
  getShape = Rectangle 5 . paddleHeight

The actual Haskell types of the class functions all include the context Entity a =>. For example, getPosition has type Entity a ⇒ a → (Int, Int). However, it will turn out that many of our routines will need entities to also be instances of Eq. We can therefore choose to make Entity a subclass of Eq: namely, you can only be an instance of Entity if you are already an instance of Eq. To do this, we change the first line of the class declaration to:

class Eq a => Entity a where

Now, in order to define Paddles to be instances of Entity we will first need them to be instances of Eq – we can do this by deriving the class.

8.4.2 Computations Let’s think back to our original motivation for defining the Maybe datatype from Section ??. We wanted to be able to express that functions (i.e., computations) can fail. Let us consider the case of performing search on a graph. Allow us to take a small aside to set up a small graph library: data Graph v e = Graph [(Int,v)] [(Int,Int,e)] The Graph datatype takes two type arguments which correspond to vertex and edge labels. The first argument to the Graph constructor is a list (set) of vertices; the second is the list (set) of edges. We will assume these lists are always sorted and that each vertex has a unique id and that there is at most one edge between any two vertices. Suppose we want to search for a path between two vertices. Perhaps there is no path between those vertices. To represent this, we will use the Maybe datatype. If it succeeds, it will return the list of vertices traversed. Our search function could be written (naively) as follows: search :: Graph v e -> Int -> Int -> Maybe [Int] search g@(Graph vl el) src dst | src == dst = Just [src]


  | otherwise = search' el
  where search' [] = Nothing
        search' ((u,v,_):es)
          | src == u  = case search g v dst of
                          Just p  -> Just (u:p)
                          Nothing -> search' es
          | otherwise = search' es

This algorithm works as follows (try to read along): to search in a graph g from src to dst, first we check to see if these are equal. If they are, we have found our way and just return the trivial solution. Otherwise, we want to traverse the edge-list. If we're traversing the edge-list and it is empty, we've failed, so we return Nothing. Otherwise, we're looking at an edge from u to v. If u is our source, then we consider this step and recursively search the graph from v to dst. If this fails, we try the rest of the edges; if this succeeds, we put our current position before the path found and return. If u is not our source, this edge is useless and we continue traversing the edge-list. This algorithm is terrible: namely, if the graph contains cycles, it can loop indefinitely. Nevertheless, it is sufficient for now. Be sure you understand it well: things only get more complicated. Now, there are cases where the Maybe datatype is not sufficient: perhaps we wish to include an error message together with the failure. We could define a datatype to express this as:

data Failable a = Success a | Fail String

Now, failures come with a failure string to express what went wrong. We can rewrite our search function to use this datatype:

search2 :: Graph v e -> Int -> Int -> Failable [Int]
search2 g@(Graph vl el) src dst
  | src == dst = Success [src]
  | otherwise  = search' el
  where search' [] = Fail "No path"
        search' ((u,v,_):es)
          | src == u  = case search2 g v dst of
                          Success p -> Success (u:p)
                          _         -> search' es
          | otherwise = search' es

This code is a straightforward translation of the above. There is another option for this computation: perhaps we want not just one path, but all possible paths. We can express this as a function which returns a list of lists of vertices. The basic idea is the same:



search3 :: Graph v e -> Int -> Int -> [[Int]]
search3 g@(Graph vl el) src dst
  | src == dst = [[src]]
  | otherwise  = search' el
  where search' [] = []
        search' ((u,v,_):es)
          | src == u  = map (u:) (search3 g v dst) ++ search' es
          | otherwise = search' es

The code here has gotten a little shorter, thanks to the standard prelude map function, though it is essentially the same. We may ask ourselves what all of these have in common and try to gobble up those commonalities in a class. In essence, we need some way of representing success and some way of representing failure. Furthermore, we need a way to combine two successes (in the first two cases, the first success is chosen; in the third, they are strung together). Finally, we need to be able to augment a previous success (if there was one) with some new value. We can fit this all into a class as follows:

class Computation c where
  success :: a -> c a
  failure :: String -> c a
  augment :: c a -> (a -> c b) -> c b
  combine :: c a -> c a -> c a

In this class declaration, we're saying that c is an instance of the class Computation if it provides four functions: success, failure, augment and combine. The success function takes a value of type a and returns it wrapped up in c, representing a successful computation. The failure function takes a String and returns a computation representing a failure. The combine function takes two previous computations and produces a new one which is the combination of both. The augment function is a bit more complex. The augment function takes some previously given computation (namely, c a) and a function which takes the value of that computation (the a) and returns a b and produces a b inside of that computation. Note that in our current situation, giving augment the type c a → (a → a) → c a would have been sufficient, since a is always [Int], but we make it more general this time just for generality. How augment works is probably best shown by example. We can define Maybe, Failable and [] to be instances of Computation as:

instance Computation Maybe where
  success = Just
  failure = const Nothing


  augment (Just x) f = f x
  augment Nothing _ = Nothing
  combine Nothing y = y
  combine x _ = x

Here, success is represented with Just and failure ignores its argument and returns Nothing. The combine function takes the first success we found and ignores the rest. The function augment checks to see if we succeeded before (and thus had a Just something) and, if we did, applies f to it. If we failed before (and thus had a Nothing), we ignore the function and return Nothing.

instance Computation Failable where
  success = Success
  failure = Fail
  augment (Success x) f = f x
  augment (Fail s) _ = Fail s
  combine (Fail _) y = y
  combine x _ = x

These definitions are obvious. Finally:

instance Computation [] where
  success a = [a]
  failure = const []
  augment l f = concat (map f l)
  combine = (++)

Here, the value of a successful computation is a singleton list containing that value. Failure is represented with the empty list, and to combine previous successes we simply concatenate them. Finally, augmenting a computation amounts to mapping the function across the list of previous computations and concatenating the results: we apply the function to each element in the list and then concatenate what comes back. Using these computations, we can express all of the above versions of search as:

searchAll g@(Graph vl el) src dst
  | src == dst = success [src]
  | otherwise  = search' el
  where search' [] = failure "no path"
        search' ((u,v,_):es)
          | src == u  = (searchAll g v dst ‘augment‘ (success . (u:)))
                        ‘combine‘ search' es
          | otherwise = search' es



In this, we see the uses of all the functions from the class Computation. If you’ve understood this discussion of computations, you are in a very good position as you have understood the concept of monads, probably the most difficult concept in Haskell. In fact, the Computation class is almost exactly the Monad class, except that success is called return, failure is called fail and augment is called >>= (read “bind”). The combine function isn’t actually required by monads, but is found in the MonadPlus class for reasons which will become obvious later. If you didn’t understand everything here, read through it again and then wait for the proper discussion of monads in Chapter 9.
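To see that the single searchAll definition really does serve all three purposes, we can simply pick the result type and get the corresponding behavior. The following is a sketch (it assumes the Graph, Computation and searchAll definitions above are loaded, that Show has been derived for Failable, and uses a small made-up graph):

TestGraph> let g = Graph [(1,'a'),(2,'b'),(3,'c')] [(1,2,'p'),(2,3,'q'),(1,3,'r')]
TestGraph> searchAll g 1 3 :: Maybe [Int]
Just [1,2,3]
TestGraph> searchAll g 1 3 :: Failable [Int]
Success [1,2,3]
TestGraph> searchAll g 1 3 :: [[Int]]
[[1,2,3],[1,3]]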

8.5 Instances We have already seen how to declare instances of some simple classes; allow us to consider some more advanced classes here. There is a Functor class defined in the Functor module. NOTE The name “functor”, like “monad” comes from category theory. There, a functor is like a function, but instead of mapping elements to elements, it maps structures to structures. The definition of the functor class is: class Functor f where fmap :: (a -> b) -> f a -> f b The type definition for fmap (not to mention its name) is very similar to the function map over lists. In fact, fmap is essentially a generalization of map to arbitrary structures (and, of course, lists are already instances of Functor). However, we can also define other structures to be instances of functors. Consider the following datatype for binary trees: data BinTree a = Leaf a | Branch (BinTree a) (BinTree a) We can immediately identify that the BinTree type essentially “raises” a type a into trees of that type. There is a naturally associated functor which goes along with this raising. We can write the instance: instance Functor BinTree where fmap f (Leaf a) = Leaf (f a) fmap f (Branch left right) = Branch (fmap f left) (fmap f right)
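To check that this instance does what we expect, we can apply fmap to a small tree (this assumes Show has also been derived for BinTree so the result can be printed):

Trees> fmap (*2) (Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)))
Branch (Leaf 2) (Branch (Leaf 4) (Leaf 6))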



Now, we've seen how to make something like BinTree an instance of Eq by using the deriving keyword, but here we will do it by hand. We want to make BinTree an instance of Eq but obviously we cannot do this unless a is itself an instance of Eq. We can specify this dependence in the instance declaration:

instance Eq a => Eq (BinTree a) where
  Leaf a == Leaf b = a == b
  Branch l r == Branch l' r' = l == l' && r == r'
  _ == _ = False

The first line of this can be read "if a is an instance of Eq, then BinTree a is also an instance of Eq". We then provide the definitions. If we did not include the "Eq a =>" part, the compiler would complain because we're trying to use the == function on values of type a in the second line. The "Eq a =>" part of the definition is called the "context." We should note that there are some restrictions on what can appear in the context and what can appear in the declaration. For instance, we're not allowed to have instance declarations that don't contain type constructors on the right hand side. To see why, consider the following declarations:

class MyEq a where
  myeq :: a -> a -> Bool

instance Eq a => MyEq a where
  myeq = (==)

As it stands, there doesn't seem to be anything wrong with this definition. However, if elsewhere in a program we had the definition:



instance OnlyInts Int where
  foo = (==)

bar :: OnlyInts a => a -> Bool
bar = foo 5

We've again made the closed-world assumption: we've assumed that the only instance of OnlyInts is Int, but there's no reason another instance couldn't be defined elsewhere, ruining our definition of bar.

8.6 Kinds Let us take a moment and think about what types are available in Haskell. We have simple types, like Int, Char, Double and so on. We then have type constructors like Maybe which take a type (like Char) and produce a new type, Maybe Char. Similarly, the type constructor [] (lists) takes a type (like Int) and produces [Int]. We have more complex things like → (function arrow) which takes two types (say Int and Bool) and produces a new type Int → Bool. In a sense, these types themselves have types. Types like Int have some sort of basic type. Types like Maybe have a type which takes something of basic type and returns something of basic type. And so forth. Talking about the types of types becomes unwieldy and highly ambiguous, so we call the types of types "kinds." What we have been calling "basic types" have kind "*". Something of kind * is something which can have an actual value. There is also a single kind constructor, → with which we can build more complex kinds. Consider Maybe. This takes something of kind * and produces something of kind *. Thus, the kind of Maybe is * -> *. Recall the definition of Pair from Section 4.5.1:

data Pair a b = Pair a b

Here, Pair is a type constructor which takes two arguments, each of kind * and produces a type of kind *. Thus, the kind of Pair is * -> (* -> *). However, we again assume associativity so we just write * -> * -> *. Let us make a slightly strange datatype definition:

data Strange c a b = MkStrange (c a) (c b)

Before we analyze the kind of Strange, let's think about what it does. It is essentially a pairing constructor, though it doesn't pair actual elements, but elements within another constructor. For instance, think of c as Maybe. Then MkStrange pairs Maybes of the two types a and b. However, c need not be Maybe but could instead be [], or many other things.



What do we know about c, though? We know that it must have kind * -> *. This is because we have c a on the right hand side. The type variables a and b each have kind * as before. Thus, the kind of Strange is (* -> *) -> * -> * -> *. That is, it takes a constructor (c) of kind * -> * together with two types of kind * and produces something of kind *. A question may arise regarding how we know a has kind * and not some other kind k. In fact, the inferred kind for Strange is (k -> *) -> k -> k -> *. However, this requires polymorphism on the kind level, which is too complex, so we make a default assumption that k = *. NOTE There are extensions to GHC which allow you to specify the kind of constructors directly. For instance, if you wanted a different kind, you could write this explicitly: data Strange (c :: (* -> *) -> *) a b = MkStrange (c a) (c b) to give a different kind to Strange. The notation of kinds suggests that we can perform partial application, as we can for functions. And, in fact, we can. For instance, we could have: type MaybePair = Strange Maybe The kind of MaybePair is, not surprisingly, * -> * -> *. We should note here that all of the following definitions are acceptable: type MaybePair1 = Strange Maybe type MaybePair2 a = Strange Maybe a type MaybePair3 a b = Strange Maybe a b These all appear to be the same, but they are in fact not identical as far as Haskell’s type system is concerned. The following are all valid type definitions using the above: type MaybePair1a = MaybePair1 type MaybePair1b = MaybePair1 Int type MaybePair1c = MaybePair1 Int Double type MaybePair2b = MaybePair2 Int type MaybePair2c = MaybePair2 Int Double type MaybePair3c = MaybePair3 Int Double But the following are not valid:



type MaybePair2a = MaybePair2 type MaybePair3a = MaybePair3 type MaybePair3b = MaybePair3 Int This is because while it is possible to partially apply type constructors on datatypes, it is not possible on type synonyms. For instance, the reason MaybePair2a is invalid is because MaybePair2 is defined as a type synonym with one argument and we have given it none. The same applies for the invalid MaybePair3 definitions.
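GHCi can report kinds directly with its :kind command; with the definitions from this section loaded, a session might show (older GHCs print kinds using *):

*Main> :kind Maybe
Maybe :: * -> *
*Main> :kind Strange
Strange :: (* -> *) -> * -> * -> *
*Main> :kind MaybePair
MaybePair :: * -> * -> *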

8.7 Class Hierarchies

8.8 Default what is it?



Chapter 9

Monads The most difficult concept to master, while learning Haskell, is that of understanding and using monads. We can distinguish two subcomponents here: (1) learning how to use existing monads and (2) learning how to write new ones. If you want to use Haskell, you must learn to use existing monads. On the other hand, you will only need to learn to write your own monads if you want to become a "super Haskell guru." Still, if you can grasp writing your own monads, programming in Haskell will be much more pleasant. So far we've seen two uses of monads. The first use was IO actions: We've seen that, by using monads, we can abstract away from the problems plaguing the RealWorld solution to IO presented in Chapter 5. The second use was representing different types of computations in Section 8.4.2. In both cases, we needed a way to sequence operations and saw that a sufficient definition (at least for computations) was:

class Computation c where
  success :: a -> c a
  failure :: String -> c a
  augment :: c a -> (a -> c b) -> c b
  combine :: c a -> c a -> c a

Let's see if this definition will enable us to also perform IO. Essentially, we need a way to represent taking a value out of an action and performing some new operation on it (as in the example from Section 4.4.3, rephrased slightly):

main = do
  s <- readFile "somefile"
  putStrLn (show (f s))

But this is exactly what augment does. Using augment, we can write the above code as:




main = -- note the lack of a "do"
  readFile "somefile" ‘augment‘ \s ->
    putStrLn (show (f s))

This certainly seems to be sufficient. And, in fact, it turns out to be more than sufficient. The definition of a monad is a slightly trimmed-down version of our Computation class. The Monad class has four methods (but the fourth method can be defined in terms of the third):

class Monad m where
  return :: a -> m a
  fail   :: String -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
  (>>)   :: m a -> m b -> m b

In this definition, return is equivalent to our success; fail is equivalent to our failure; and >>= (read: “bind” ) is equivalent to our augment. The >> (read: “then” ) method is simply a version of >>= that ignores the a. This will turn out to be useful; although, as mentioned before, it can be defined in terms of >>=: a >> x = a >>= \_ -> x

9.1 Do Notation


We have hinted that there is a connection between monads and the do notation. Here, we make that relationship concrete. There is actually nothing magic about the do notation – it is simply “syntactic sugar” for monadic operations. As we mentioned earlier, using our Computation class, we could define our above program as: main = readFile "somefile" ‘augment‘ \s -> putStrLn (show (f s)) But we now know that augment is called >>= in the monadic world. Thus, this program really reads: main = readFile "somefile" >>= \s -> putStrLn (show (f s))



And this is completely valid Haskell at this point: if you defined a function f :: Show a => String -> a, you could compile and run this program. This suggests that we can translate:

x <- f
g x

into f >>= \x -> g x. This is exactly what the compiler does. Talking about do becomes easier if we do not use implicit layout (see Section ?? for how to do this). There are four translation rules:

1. do {e} → e
2. do {e; es} → e >> do {es}
3. do {let decls; es} → let decls in do {es}
4. do {p <- e; es} → let ok p = do {es} ; ok _ = fail "..." in e >>= ok

Again, we will elaborate on these one at a time:

Translation Rule 1 The first translation rule, do {e} → e, states (as we have stated before) that when performing a single action, having a do or not is irrelevant. This is essentially the base case for an inductive definition of do. The base case has one action (namely e here); the other three translation rules handle the cases where there is more than one action.

Translation Rule 2 This states that do {e; es} → e >> do {es}. This tells us what to do if we have an action (e) followed by a list of actions (es). Here, we make use of the >> function, defined earlier. This rule simply states that to do {e; es}, we first perform the action e, throw away the result, and then do es. For instance, if e is putStrLn s for some string s, then the translation of do {e; es} is to perform e (i.e., print the string) and then do es. This is clearly what we want.

Translation Rule 3 This states that do {let decls; es} → let decls in do {es}. This rule tells us how to deal with lets inside of a do statement. We lift the declarations within the let out and do whatever comes after the declarations.




Translation Rule 4 This states that do {p <- e; es} → let ok p = do {es} ; ok _ = fail "..." in e >>= ok. Again, it is not exactly obvious what is going on here. However, an alternate formulation of this rule, which is roughly equivalent, is: do {p <- e; es} → e >>= \p -> es. Here, it is clear what is happening. We run the action e, and then send the results into es, but first give the result the name p. The reason for the complex definition is that p doesn't need to simply be a variable; it could be some complex pattern. For instance, the following is valid code:

foo = do
  ('a':'b':'c':x:xs) <- getLine
  putStrLn (x:xs)

In this, we're assuming that the results of the action getLine will begin with the string "abc" and will have at least one more character. The question becomes what should happen if this pattern match fails. The compiler could simply throw an error, like usual, for failed pattern matches. However, since we're within a monad, we have access to a special fail function, and we'd prefer to fail using that function, rather than the "catch all" error function. Thus, the translation, as defined, allows the compiler to fill in the ... with an appropriate error message about the pattern matching having failed. Apart from this, the two definitions are equivalent.

9.2 Definition

There are three rules that all monads must obey called the “Monad Laws” (and it is up to you to ensure that your monads obey these rules) : 1. return a >>= f ≡ f a 2. f >>= return ≡ f 3. f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h Let’s look at each of these individually:

Law 1 This states that return a >>= f ≡ f a. Suppose we think about monads as computations. This means that if we create a trivial computation that simply returns the value a regardless of anything else (this is the return a part); and then bind it together with some other computation f, then this is equivalent to simply performing the computation f on a directly. For example, suppose f is the function putStrLn and a is the string “Hello World.” This rule states that binding a computation whose result is “Hello World” to putStrLn is the same as simply printing it to the screen. This seems to make sense. In do notation, this law states that the following two programs are equivalent:



law1a = do
  x <- return a
  f x

law1b = do
  f a

Law 2 The second monad law states that f >>= return ≡ f for some computation f. In other words, the law states that if we perform the computation f and then pass the result on to the trivial return function, then all we have done is to perform the computation. That this law must hold should be obvious. To see this, think of f as getLine (reads a string from the keyboard). This law states that reading a string and then returning the value read is exactly the same as just reading the string. In do notation, the law states that the following two programs are equivalent:

law2a = do
  x <- f
  return x

law2b = do
  f

Law 3 This states that f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h. At first glance, this law is not as easy to grasp as the other two. It is essentially an associativity law for monads. NOTE Outside the world of monads, a function · is associative if (f · g) · h = f · (g · h). For instance, + and * are associative, since bracketing on these functions doesn't make a difference. On the other hand, - and / are not associative since, for example, 5 − (3 − 1) ≠ (5 − 3) − 1. If we throw away the messiness with the lambdas, we see that this law states: f >>= (g >>= h) ≡ (f >>= g) >>= h. The intuition behind this law is that when we string together actions, it doesn't matter how we group them. For a concrete example, take f to be getLine. Take g to be an action which takes a value as input, prints it to the screen, reads another string via getLine, and then returns that newly read string. Take h to be putStrLn. Let's consider what (\x -> g x >>= h) does. It takes a value called x, and runs g on it, feeding the results into h. In this instance, this means that it's going to


take a value, print it, read another value and then print that. Thus, the entire left hand side of the law first reads a string and then does what we’ve just described. On the other hand, consider (f >>= g). This action reads a string from the keyboard, prints it, and then reads another string, returning that newly read string as a result. When we bind this with h as on the right hand side of the law, we get an action that does the action described by (f >>= g), and then prints the results. Clearly, these two actions are the same. While this explanation is quite complicated, and the text of the law is also quite complicated, the actual meaning is simple: if we have three actions, and we compose them in the same order, it doesn’t matter where we put the parentheses. The rest is just notation. In do notation, the law says that the following two programs are equivalent: law3a = do x <- f do y <- g x h y law3b = do y <- do x <- f g x h y

9.3 A Simple State Monad One of the simplest monads that we can craft is a state-passing monad. In Haskell, all state information usually must be passed to functions explicitly as arguments. Using monads, we can effectively hide some state information. Suppose we have a function f of type a → b, and we need to add state to this function. In general, if state is of type state, we can encode it by changing the type of f to a → state → (state, b). That is, the new version of f takes the original parameter of type a and a new state parameter. And, in addition to returning the value of type b, it also returns an updated state, encoded in a tuple. For instance, suppose we have a binary tree defined as: data Tree a = Leaf a | Branch (Tree a) (Tree a) Now, we can write a simple map function to apply some function to each value in the leaves: mapTree :: (a -> b) -> Tree a -> Tree b mapTree f (Leaf a) = Leaf (f a)


mapTree f (Branch lhs rhs) = Branch (mapTree f lhs) (mapTree f rhs) This works fine until we need to write a function that numbers the leaves left to right. In a sense, we need to add state, which keeps track of how many leaves we’ve numbered so far, to the mapTree function. We can augment the function to something like: mapTreeState :: (a -> state -> (state, b)) -> Tree a -> state -> (state, Tree b) mapTreeState f (Leaf a) state = let (state’, b) = f a state in (state’, Leaf b) mapTreeState f (Branch lhs rhs) state = let (state’ , lhs’) = mapTreeState f lhs state (state’’, rhs’) = mapTreeState f rhs state’ in (state’’, Branch lhs’ rhs’) This is beginning to get a bit unweildy, and the type signature is getting harder and harder to understand. What we want to do is abstract away the state passing part. That is, the differences between mapTree and mapTreeState are: (1) the augmented f type, (2) we replaced the type -> Tree b with -> state -> (state, Tree b). Notice that both types changed in exactly the same way. We can abstract this away with a type synonym declaration: type State st a = st -> (st, a) To go along with this type, we write two functions: returnState :: a -> State st a returnState a = \st -> (st, a) bindState :: State st a -> (a -> State st b) -> State st b bindState m k = \st -> let (st’, a) = m st m’ = k a in m’ st’ Let’s examine each of these in turn. The first function, returnState, takes a value of type a and creates something of type State st a. If we think of the st as the state, and the value of type a as the value, then this is a function that doesn’t change the state and returns the value a. The bindState function looks distinctly like the interior let declarations in mapTreeState. It takes two arguments. The first argument is an action that returns something of type


a with state st. The second is a function that takes this a and produces something of type b also with the same state. The result of bindState is essentially the result of transforming the a into a b. The definition of bindState takes an initial state, st. It first applies this to the State st a argument called m. This gives back a new state st’ and a value a. It then lets the function k act on a, producing something of type State st b, called m’. We finally run m’ with the new state st’. We write a new function, mapTreeStateM and give it the type: mapTreeStateM :: (a -> State st b) -> Tree a -> State st (Tree b) Using these “plumbing” functions (returnState and bindState) we can write this function without ever having to explicitly talk about the state: mapTreeStateM f (Leaf a) = f a ‘bindState‘ \b -> returnState (Leaf b) mapTreeStateM f (Branch lhs rhs) = mapTreeStateM f lhs ‘bindState‘ \lhs’ -> mapTreeStateM f rhs ‘bindState‘ \rhs’ -> returnState (Branch lhs’ rhs’) In the Leaf case, we apply f to a and then bind the result to a function that takes the result and returns a Leaf with the new value. In the Branch case, we recurse on the left-hand-side, binding the result to a function that recurses on the right-hand-side, binding that to a simple function that returns the newly created Branch. As you have probably guessed by this point, State st is a monad, returnState is analogous to the overloaded return method, and bindState is analogous to the overloaded >>= method. In fact, we can verify that State st a obeys the monad laws: Law 1 states: return a >>= f ≡ f a. Let’s calculate on the left hand side, substituting our names: returnState a ‘bindState‘ f ==> \st -> let (st’, a) = (returnState a) st m’ = f a in m’ st’ ==> \st -> let (st’, a) = (\st -> (st, a)) st in (f a) st’ ==> \st -> let (st’, a) = (st, a) in (f a) st’


==> \st -> (f a) st ==> f a In the first step, we simply substitute the definition of bindState. In the second step, we simplify the last two lines and substitute the definition of returnState. In the third step, we apply st to the lambda function. In the fourth step, we rename st’ to st and remove the let. In the last step, we eta reduce. Moving on to Law 2, we need to show that f >>= return ≡ f. This is shown as follows: f ‘bindState‘ returnState ==> \st -> let (st’, a) = f st in (returnState a) st’ ==> \st -> let (st’, a) = f st in (\st -> (st, a)) st’ ==> \st -> let (st’, a) = f st in (st’, a) ==> \st -> f st ==> f Finally, we need to show that State obeys the third law: f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h. This is much more involved to show, so we will only sketch the proof here. Notice that we can write the left-hand-side as: \st -> let (st’, a) = f st in (\x -> g x ‘bindState‘ h) a st’ ==> \st -> let (st’, a) = f st in (g a ‘bindState‘ h) st’ ==> \st -> let (st’, a) = f st in (\st’ -> let (st’’, b) = g a in h b st’’) st’ ==> \st -> let (st’ , a) = f st (st’’, b) = g a st’ (st’’’,c) = h b st’’ in (st’’’,c)


The interesting thing to note here is that we have both action applications on the same let level. Since let is associative, this means that we can put whichever bracketing we prefer and the results will not change. Of course, this is an informal, “hand waving” argument and it would take us a few more derivations to actually prove, but this gives the general idea. Now that we know that State st is actually a monad, we’d like to make it an instance of the Monad class. Unfortunately, the straightforward way of doing this doesn’t work. We can’t write: instance Monad (State st) where { ... } This is because you cannot make instances out of non-fully-applied type synonyms. Instead, what we need to do instead is convert the type synonym into a newtype, as: newtype State st a = State (st -> (st, a)) Unfortunately, this means that we need to do some packing and unpacking of the State constructor in the Monad instance declaration, but it’s not terribly difficult: instance Monad (State state) where return a = State (\state -> (state, a)) State run >>= action = State run’ where run’ st = let (st’, a) = run st State run’’ = action a in run’’ st’ mapTreeM

Now, we can write our mapTreeM function as: mapTreeM :: (a -> State state b) -> Tree a -> State state (Tree b) mapTreeM f (Leaf a) = do b <- f a return (Leaf b) mapTreeM f (Branch lhs rhs) = do lhs’ <- mapTreeM f lhs rhs’ <- mapTreeM f rhs return (Branch lhs’ rhs’) which is significantly cleaner than before. In fact, if we remove the type signature, we get the more general type: mapTreeM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)


That is, mapTreeM can be run in any monad, not just our State monad. Now, the nice thing about encapsulating the stateful aspect of the computation like this is that we can provide functions to get and change the current state. These look like: getState :: State state state getState = State (\state -> (state, state)) putState :: state -> State state () putState new = State (\_ -> (new, ())) Here, getState is a monadic operation that takes the current state, passes it through unchanged, and then returns it as the value. The putState function takes a new state and produces an action that ignores the current state and inserts the new one. Now, we can write our numberTree function as: numberTree :: Tree a -> State Int (Tree (a, Int)) numberTree tree = mapTreeM number tree where number v = do cur <- getState putState (cur+1) return (v,cur) Finally, we need to be able to run the action by providing an initial state: runStateM :: State state a -> state -> a runStateM (State f) st = snd (f st) Now, we can provide an example Tree: testTree = Branch (Branch (Leaf ’a’) (Branch (Leaf ’b’) (Leaf ’c’))) (Branch (Leaf ’d’) (Leaf ’e’)) and number it: State> runStateM (numberTree testTree) 1 Branch (Branch (Leaf (’a’,1)) (Branch (Leaf (’b’,2)) (Leaf (’c’,3)))) (Branch (Leaf (’d’,4)) (Leaf (’e’,5)))


This may seem like a large amount of work to do something simple. However, note the new power of mapTreeM. We can also print out the leaves of the tree in a left-to-right fashion as: State> mapTreeM print testTree ’a’ ’b’ ’c’ ’d’ ’e’ This crucially relies on the fact that mapTreeM has the more general type involving arbitrary monads – not just the state monad. Furthermore, we can write an action that will make each leaf value equal to its old value as well as all the values preceeding: fluffLeaves tree = mapTreeM fluff tree where fluff v = do cur <- getState putState (v:cur) return (v:cur) and can see it in action: State> runStateM (fluffLeaves testTree) [] Branch (Branch (Leaf "a") (Branch (Leaf "ba") (Leaf "cba"))) (Branch (Leaf "dcba") (Leaf "edcba")) In fact, you don’t even need to write your own monad instance and datatype. All this is built in to the Control.Monad.State module. There, our runStateM is called evalState; our getState is called get; and our putState is called put. This module also contains a state transformer monad, which we will discuss in Section 9.7.
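To make the correspondence concrete, here is a sketch (not from the original text) of numberTree written against the standard library names, assuming the general, Monad-polymorphic mapTreeM from above is in scope:

import Control.Monad.State

-- numberTree using the library's get, put and evalState instead of our
-- hand-rolled getState, putState and runStateM (a sketch; the names simply
-- mirror the correspondence described above)
numberTree2 :: Tree a -> Tree (a, Int)
numberTree2 tree = evalState (mapTreeM number tree) 1
  where number v = do cur <- get
                      put (cur+1)
                      return (v, cur)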

9.4 Common Monads

It turns out that many of our favorite datatypes are actually monads themselves. Consider, for instance, lists. They have a monad definition that looks something like:

instance Monad [] where
    return x = [x]
    l >>= f  = concatMap f l
    fail _   = []


This enables us to use lists in do notation. For instance, given the definition: cross l1 l2 = do x <- l1 y <- l2 return (x,y) we get a cross-product function: Monads> cross "ab" "def" [(’a’,’d’),(’a’,’e’),(’a’,’f’),(’b’,’d’),(’b’,’e’), (’b’,’f’)] It is not a coincidence that this looks very much like the list comprehension form:


Prelude> [(x,y) | x <- "ab", y <- "def"] [(’a’,’d’),(’a’,’e’),(’a’,’f’),(’b’,’d’),(’b’,’e’), (’b’,’f’)] List comprehension form is simply an abbreviated form of a monadic statement using lists. In fact, in older versions of Haskell, the list comprehension form could be used for any monad – not just lists. However, in the current version of Haskell, this is no longer allowed. The Maybe type is also a monad, with failure being represented as Nothing and with success as Just. We get the following instance declaration: instance Monad Maybe where return a = Just a Nothing >>= f = Nothing Just x >>= f = f x fail _ = Nothing We can use the same cross product function that we did for lists on Maybes. This is because the do notation works for any monad, and there’s nothing specific to lists about the cross function. Monads> cross (Just ’a’) (Just ’b’) Just (’a’,’b’) Monads> cross (Nothing :: Maybe Char) (Just ’b’) Nothing Monads> cross (Just ’a’) (Nothing :: Maybe Char) Nothing Monads> cross (Nothing :: Maybe Char) (Nothing :: Maybe Char) Nothing


What this means is that if we write a function (like searchAll from Section 8.4) only in terms of monadic operators, we can use it with any monad, depending on what we mean. Using real monadic functions (not do notation), the searchAll function looks something like: searchAll g@(Graph vl el) src dst | src == dst = return [src] | otherwise = search’ el where search’ [] = fail "no path" search’ ((u,v,_):es) | src == u = searchAll g v dst >>= \path -> return (u:path) | otherwise = search’ es The type of this function is Monad m => Graph v e -> Int -> Int -> m [Int]. This means that no matter what monad we’re using at the moment, this function will perform the calculation. Suppose we have the following graph: gr = Graph [(0, ’a’), (1, ’b’), (2, ’c’), (3, ’d’)] [(0,1,’l’), (0,2,’m’), (1,3,’n’), (2,3,’m’)] This represents a graph with four nodes, labelled a,b,c and d. There is an edge from a to both b and c. There is also an edge from both b and c to d. Using the Maybe monad, we can compute the path from a to d: Monads> searchAll gr 0 3 :: Maybe [Int] Just [0,1,3] We provide the type signature, so that the interpreter knows what monad we’re using. If we try to search in the opposite direction, there is no path. The inability to find a path is represented as Nothing in the Maybe monad: Monads> searchAll gr 3 0 :: Maybe [Int] Nothing Note that the string “no path” has disappeared since there’s no way for the Maybe monad to record this. If we perform the same impossible search in the list monad, we get the empty list, indicating no path: Monads> searchAll gr 3 0 :: [[Int]] []


If we perform the possible search, we get back a list containing the first path: Monads> searchAll gr 0 3 :: [[Int]] [[0,1,3]] You may have expected this function call to return all paths, but, as coded, it does not. See Section 9.6 for more about using lists to represent nondeterminism. If we use the IO monad, we can actually get at the error message, since IO knows how to keep track of error messages: Monads> searchAll gr 0 3 :: IO [Int] Monads> it [0,1,3] Monads> searchAll gr 3 0 :: IO [Int] *** Exception: user error Reason: no path In the first case, we needed to type it to get GHCi to actually evaluate the search. There is one problem with this implementation of searchAll: if it finds an edge that does not lead to a solution, it won’t be able to backtrack. This has to do with the recursive call to searchAll inside of search’. Consider, for instance, what happens if searchAll g v dst doesn’t find a path. There’s no way for this implementation to recover. For instance, if we remove the edge from node b to node d, we should still be able to find a path from a to d, but this algorithm can’t find it. We define: gr2 = Graph [(0, ’a’), (1, ’b’), (2, ’c’), (3, ’d’)] [(0,1,’l’), (0,2,’m’), (2,3,’m’)] and then try to search: Monads> searchAll gr2 0 3 *** Exception: user error Reason: no path To fix this, we need a function like combine from our Computation class. We will see how to do this in Section 9.6.

Exercises Exercise 9.1 Verify that Maybe obeys the three monad laws. Exercise 9.2 The type Either String is a monad that can keep track of errors. Write an instance for it, and then try doing the search from this chapter using this monad. Hint: Your instance declaration should begin: instance Monad (Either String) where.


9.5 Monadic Combinators

The Monad/Control.Monad library contains a few very useful monadic combinators, which haven’t yet been thoroughly discussed. The ones we will discuss in this section, together with their types, are:

• (=<<)     :: (a -> m b) -> m a -> m b
• mapM      :: (a -> m b) -> [a] -> m [b]
• mapM_     :: (a -> m b) -> [a] -> m ()
• filterM   :: (a -> m Bool) -> [a] -> m [a]
• foldM     :: (a -> b -> m a) -> a -> [b] -> m a
• sequence  :: [m a] -> m [a]
• sequence_ :: [m a] -> m ()
• liftM     :: (a -> b) -> m a -> m b
• when      :: Bool -> m () -> m ()
• join      :: m (m a) -> m a

In the above, m is always assumed to be an instance of Monad. In general, functions with an underscore at the end are equivalent to the ones without, except that they do not return any value.
The =<< function is exactly the same as >>=, except it takes its arguments in the opposite order. For instance, in the IO monad, we can write either of the following:

Monads> writeFile "foo" "hello world!" >>
        (readFile "foo" >>= putStrLn)
hello world!
Monads> writeFile "foo" "hello world!" >>
        (putStrLn =<< readFile "foo")
hello world!

The mapM, filterM and foldM are our old friends map, filter and foldr wrapped up inside of monads. These functions are incredibly useful (particularly foldM) when working with monads. We can use mapM , for instance, to print a list of things to the screen: Monads> mapM_ print [1,2,3,4,5] 1 2 3 4 5
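The filterM function is not demonstrated above; one standard (and slightly surprising) example is running it in the list monad, where a predicate that returns both True and False for every element enumerates every sublist of the input, i.e. its powerset. A quick check (a sketch of a GHCi session, assuming the standard definition of filterM):

Monads> filterM (\x -> [True, False]) [1,2,3]
[[1,2,3],[1,2],[1,3],[1],[2,3],[2],[3],[]]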


We can use foldM to sum a list and print the intermediate sum at each step: Monads> foldM (\a b -> putStrLn (show a ++ "+" ++ show b ++ "=" ++ show (a+b)) >> return (a+b)) 0 [1..5] 0+1=1 1+2=3 3+3=6 6+4=10 10+5=15 Monads> it 15 The sequence and sequence functions simply “execute” a list of actions. For instance:


Monads> sequence [print 1, print 2, print ’a’] 1 2 ’a’ *Monads> it [(),(),()] *Monads> sequence_ [print 1, print 2, print ’a’] 1 2 ’a’ *Monads> it () We can see that the underscored version doesn’t return each value, while the nonunderscored version returns the list of the return values. The liftM function “lifts” a non-monadic function to a monadic function. (Do not confuse this with the lift function used for monad transformers in Section 9.7.) This is useful for shortening code (among other things). For instance, we might want to write a function that prepends each line in a file with its line number. We can do this with: numberFile :: FilePath -> IO () numberFile fp = do text <- readFile fp let l = lines text let n = zipWith (\n t -> show n ++ ’ ’ : t) [1..] l mapM_ putStrLn n However, we can shorten this using liftM:


numberFile :: FilePath -> IO ()
numberFile fp = do l <- lines ‘liftM‘ readFile fp
                   let n = zipWith (\n t -> show n ++ ’ ’ : t) [1..] l
                   mapM_ putStrLn n

In fact, you can apply any sort of (pure) processing to a file using liftM. For instance, perhaps we also want to split lines into words; we can do this with:

... w <- (map words . lines) ‘liftM‘ readFile fp ...

Note that the parentheses are required, since the (.) function has the same fixity as ‘liftM‘. Lifting pure functions into monads is also useful in other monads. For instance, liftM can be used to apply a function inside of Just:

Monads> liftM (+1) (Just 5)
Just 6
*Monads> liftM (+1) Nothing
Nothing

The when function executes a monadic action only if a condition is met. So, if we only want to print non-empty lines: Monads> mapM_ (\l -> when (not $ null l) (putStrLn l)) ["","abc","def","","","ghi"] abc def ghi


Of course, the same could be accomplished with filter, but sometimes when is more convenient. Finally, the join function is the monadic equivalent of concat on lists. In fact, when m is the list monad, join is exactly concat. In other monads, it accomplishes a similar task: Monads> join (Just (Just ’a’)) Just ’a’ Monads> join (Just (Nothing :: Maybe Char)) Nothing Monads> join (Nothing :: Maybe (Maybe Char)) Nothing


Monads> join (return (putStrLn "hello")) hello Monads> return (putStrLn "hello") Monads> join [[1,2,3],[4,5]] [1,2,3,4,5] These functions will turn out to be even more useful as we move on to more advanced topics in Chapter 10.
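As an aside, join can be written generically in terms of >>= (a sketch; the library defines it essentially this way):

-- join flattens one layer of monadic structure by binding with the identity
join :: Monad m => m (m a) -> m a
join m = m >>= id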

9.6 MonadPlus Given only the >>= and return functions, it is impossible to write a function like combine with type c a → c a → c a. However, such a function is so generally useful that it exists in another class called MonadPlus. In addition to having a combine function, instances of MonadPlus also have a “zero” element that is the identity under the “plus” (i.e., combine) action. The definition is:


class Monad m => MonadPlus m where mzero :: m a mplus :: m a -> m a -> m a In order to gain access to MonadPlus, you need to import the Monad module (or Control.Monad in the hierarchical libraries). In Section 9.4, we showed that Maybe and list are both monads. In fact, they are also both instances of MonadPlus. In the case of Maybe, the zero element is Nothing; in the case of lists, it is the empty list. The mplus operation on Maybe is Nothing, if both elements are Nothing; otherwise, it is the first Just value. For lists, mplus is the same as ++. That is, the instance declarations look like: instance MonadPlus Maybe where mzero = Nothing mplus Nothing y = y mplus x _ = x instance MonadPlus [] where mzero = [] mplus x y = x ++ y We can use this class to reimplement the search function we’ve been exploring, such that it will explore all possible paths. The new function looks like: searchAll2 g@(Graph vl el) src dst | src == dst = return [src]


  | otherwise = search’ el
  where search’ [] = fail "no path"
        search’ ((u,v,_):es)
          | src == u  = (searchAll2 g v dst >>= \path ->
                           return (u:path)) ‘mplus‘ search’ es
          | otherwise = search’ es

Now, when we’re going through the edge list in search’, and we come across a matching edge, not only do we explore this path, but we also continue to explore the out-edges of the current node in the recursive call to search’. The IO monad is not an instance of MonadPlus, so we’re not able to execute the search with this monad. We can see that when using lists as the monad, we (a) get all possible paths in gr and (b) get a path in gr2.

MPlus> searchAll2 gr 0 3 :: [[Int]]
[[0,1,3],[0,2,3]]
MPlus> searchAll2 gr2 0 3 :: [[Int]]
[[0,2,3]]

You might be tempted to implement this as:

searchAll2 g@(Graph vl el) src dst
  | src == dst = return [src]
  | otherwise = search’ el
  where search’ [] = fail "no path"
        search’ ((u,v,_):es)
          | src == u = do path <- searchAll2 g v dst
                          rest <- search’ es
                          return ((u:path) ‘mplus‘ rest)
          | otherwise = search’ es

But note that this doesn’t do what we want. Here, if the recursive call to searchAll2 fails, we don’t try to continue and execute search’ es. The call to mplus must be at the top level in order for it to work.
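Before moving on, it is worth seeing mplus on its own, checked directly against the instance declarations above (a quick interactive session):

MPlus> Just ’a’ ‘mplus‘ Just ’b’
Just ’a’
MPlus> Nothing ‘mplus‘ Just ’b’
Just ’b’
MPlus> "ab" ‘mplus‘ "cd"
"abcd"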

Exercises Exercise 9.3 Suppose that we changed the order of arguments to mplus. I.e., the matching case of search’ looked like: search’ es ‘mplus‘ (searchAll2 g v dst >>= \path -> return (u:path))


How would you expect this to change the results when using the list monad on gr? Why?

9.7 Monad Transformers

Often we want to “piggyback” monads on top of each other. For instance, there might be a case where you need access to both IO operations through the IO monad and state functions through some state monad. In order to accomplish this, we introduce a MonadTrans class, which essentially “lifts” the operations of one monad into another. You can think of this as stacking monads on top of each other. This class has a simple method: lift. The class declaration for MonadTrans is:


class MonadTrans t where lift :: Monad m => m a -> t m a The idea here is that t is the outer monad and that m lives inside of it. In order to execute a command of type Monad m => m a, we first lift it into the transformer. The simplest example of a transformer (and arguably the most useful) is the state transformer monad, which is a state monad wrapped around an arbitrary monad. Before, we defined a state monad as:


newtype State state a = State (state -> (state, a)) Now, instead of using a function of type state -> (state, a) as the monad, we assume there’s some other monad m and make the internal action into something of type state -> m (state, a). This gives rise to the following definition for a state transformer: newtype StateT state m a = StateT (state -> m (state, a)) For instance, we can think of m as IO. In this case, our state transformer monad is able to execute actions in the IO monad. First, we make this an instance of MonadTrans: instance MonadTrans (StateT state) where lift m = StateT (\s -> do a <- m return (s,a)) Here, lifting a function from the realm of m to the realm of StateT state simply involves keeping the state (the s value) constant and executing the action. Of course, we also need to make StateT a monad, itself. This is relatively straightforward, provided that m is already a monad:


instance Monad m => Monad (StateT state m) where return a = StateT (\s -> return (s,a)) StateT m >>= k = StateT (\s -> do (s’, a) <- m s let StateT m’ = k a m’ s’) fail s = StateT (\_ -> fail s)


The idea behind the definition of return is that we keep the state constant and simply return the state/a pair in the enclosed monad. Note that the use of return in the definition of return refers to the enclosed monad, not the state transformer. In the definition of bind, we create a new StateT that takes a state s as an argument. First, it applies this state to the first action (StateT m) and gets the new state and answer as a result. It then runs the k action on this new state and gets a new transformer. It finally applies the new state to this transformer. This definition is nearly identical to the definition of bind for the standard (non-transformer) State monad described in Section 9.3. The fail function passes on the call to fail in the enclosed monad, since state transformers don’t natively know how to deal with failure. Of course, in order to actually use this monad, we need to provide function getT , putT and evalStateT . These are analogous to getState, putState and runStateM from Section 9.3: getT :: Monad m => StateT s m s getT = StateT (\s -> return (s, s)) putT :: Monad m => s -> StateT s m () putT s = StateT (\_ -> return (s, ())) evalStateT :: Monad m => StateT s m a -> s -> m a evalStateT (StateT m) state = do (s’, a) <- m state return a These functions should be straightforward. Note, however, that the result of evalStateT is actually a monadic action in the enclosed monad. This is typical of monad transformers: they don’t know how to actually run things in their enclosed monad (they only know how to lift actions). Thus, what you get out is a monadic action in the inside monad (in our case, IO), which you then need to run yourself. We can use state transformers to reimplement a version of our mapTreeM function from Section 9.3. The only change here is that when we get to a leaf, we print out the value of the leaf; when we get to a branch, we just print out “Branch.” mapTreeM action (Leaf a) = do lift (putStrLn ("Leaf " ++ show a))


b <- action a return (Leaf b) mapTreeM action (Branch lhs rhs) = do lift (putStrLn "Branch") lhs’ <- mapTreeM action lhs rhs’ <- mapTreeM action rhs return (Branch lhs’ rhs’) The only difference between this function and the one from Section 9.3 is the calls to lift (putStrLn ...) as the first line. The lift tells us that we’re going to be executing a command in an enclosed monad. In this case, the enclosed monad is IO, since the command lifted is putStrLn. The type of this function is relatively complex: mapTreeM :: (MonadTrans t, Monad (t IO), Show a) => (a -> t IO a1) -> Tree a -> t IO (Tree a1) Ignoring, for a second, the class constraints, this says that mapTreeM takes an action and a tree and returns a tree. This just as before. In this, we require that t is a monad transformer (since we apply lift in it); we require that t IO is a monad, since we use putStrLn we know that the enclosed monad is IO; finally, we require that a is an instance of show – this is simply because we use show to show the value of leaves. Now, we simply change numberTree to use this version of mapTreeM, and the new versions of get and put, and we end up with: numberTree tree = mapTreeM number tree where number v = do cur <- getT putT (cur+1) return (v,cur) Using this, we can run our monad: MTrans> evalStateT (numberTree testTree) 0 Branch Branch Leaf ’a’ Branch Leaf ’b’ Leaf ’c’ Branch Leaf ’d’ Leaf ’e’ *MTrans> it


Branch (Branch (Leaf (’a’,0)) (Branch (Leaf (’b’,1)) (Leaf (’c’,2)))) (Branch (Leaf (’d’,3)) (Leaf (’e’,4)))
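As a tiny standalone illustration of lift (a sketch, not from the text, using the StateT, getT and putT defined above): an action that reads a line in the enclosed IO monad and stores it as the state.

-- lift runs the IO action getLine inside StateT String IO;
-- putT then replaces the state with the line that was read
readIntoState :: StateT String IO ()
readIntoState = do s <- lift getLine
                   putT s

Running evalStateT readIntoState "" in GHCi would read a line and return ().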


One problem not specified in our discussion of MonadPlus is that our search algorithm will fail to terminate on graphs with cycles. Consider: gr3 = Graph [(0, ’a’), (1, ’b’), (2, ’c’), (3, ’d’)] [(0,1,’l’), (1,0,’m’), (0,2,’n’), (1,3,’o’), (2,3,’p’)] In this graph, there is a back edge from node b back to node a. If we attempt to run searchAll2, regardless of what monad we use, it will fail to terminate. Moreover, if we move this erroneous edge to the end of the list (and call this gr4), the result of searchAll2 gr4 0 3 will contain an infinite number of paths: presumably we only want paths that don’t contain cycles. In order to get around this problem, we need to introduce state. Namely, we need to keep track of which nodes we have visited, so that we don’t visit them again. We can do this as follows: searchAll5 g@(Graph vl el) src dst | src == dst = do visited <- getT putT (src:visited) return [src] | otherwise = do visited <- getT putT (src:visited) if src ‘elem‘ visited then mzero else search’ el where search’ [] = mzero search’ ((u,v,_):es) | src == u = (do path <- searchAll5 g v dst return (u:path)) ‘mplus‘ search’ es | otherwise = search’ es Here, we implicitly use a state transformer (see the calls to getT and putT) to keep track of visited states. We only continue to recurse, when we encounter a state we haven’t yet visited. Futhermore, when we recurse, we add the current state to our set of visited states. Now, we can run the state transformer and get out only the correct paths, even on the cyclic graphs:


MTrans> evalStateT (searchAll5 gr3 0 3) [] :: [[Int]] [[0,1,3],[0,2,3]] MTrans> evalStateT (searchAll5 gr4 0 3) [] :: [[Int]] [[0,1,3],[0,2,3]] Here, the empty list provided as an argument to evalStateT is the initial state (i.e., the initial visited list). In our case, it is empty. We can also provide an execStateT method that, instead of returning a result, returns the final state. This function looks like: execStateT :: Monad m => StateT s m a -> s -> m s execStateT (StateT m) state = do (s’, a) <- m state return s’ This is not so useful in our case, as it will return exactly the reverse of evalStateT (try it and find out!), but can be useful in general (if, for instance, we need to know how many numbers are used in numberTree).
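The graph gr4 used in the run above is described in the text but never written out; under that description (gr3 with the back edge moved to the end of the edge list), it would presumably be:

-- hypothetical definition of gr4: gr3 with the back edge (1,0,'m')
-- moved to the end of the edge list
gr4 = Graph [(0, ’a’), (1, ’b’), (2, ’c’), (3, ’d’)]
            [(0,1,’l’), (0,2,’n’), (1,3,’o’), (2,3,’p’), (1,0,’m’)]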

Exercises Exercise 9.4 Write a function searchAll6, based on the code for searchAll2, that, at every entry to the main function (not the recursion over the edge list), prints the search being conducted. For instance, the output generated for searchAll6 gr 0 3 should look like: Exploring 0 -> 3 Exploring 1 -> 3 Exploring 3 -> 3 Exploring 2 -> 3 Exploring 3 -> 3 MTrans> it [[0,1,3],[0,2,3]] In order to do this, you will have to define your own list monad transformer and make appropriate instances of it. Exercise 9.5 Combine the searchAll5 function (from this section) with the searchAll6 function (from the previous exercise) into a single function called searchAll7. This function should perform IO as in searchAll6 but should also keep track of state using a state transformer.


9.8 Parsing Monads It turns out that a certain class of parsers are all monads. This makes the construction of parsing libraries in Haskell very clean. In this chapter, we begin by building our own (small) parsing library in Section 9.8.1 and then introduce the Parsec parsing library in Section 9.8.2.

9.8.1 A Simple Parsing Monad Consider the task of parsing. A simple parsing monad is much like a state monad, where the state is the unparsed string. We can represent this exactly as: newtype Parser a = Parser { runParser :: String -> Either String (String, a) } We again use Left err to be an error condition. This yields standard instances of Monad and MonadPlus: instance Monad Parser where return a = Parser (\xl -> Right (xl,a)) fail s = Parser (\xl -> Left s) Parser m >>= k = Parser $ \xl -> case m xl of Left s -> Left s Right (xl’, a) -> let Parser n = k a in n xl’ instance MonadPlus Parser where mzero = Parser (\xl -> Left "mzero") Parser p ‘mplus‘ Parser q = Parser $ \xl -> case p xl of Right a -> Right a Left err -> case q xl of Right a -> Right a Left _ -> Left err primitives

Now, we want to build up a library of paring “primitives.” The most basic primitive is a parser that will read a specific character. This function looks like: char :: Char -> Parser Char char c = Parser char’ where char’ [] = Left ("expecting " ++ show c ++ " got EOF") char’ (x:xs) | x == c = Right (xs, c)

                | otherwise = Left ("expecting " ++ show c ++ " got " ++ show x)

Here, the parser succeeds only if the first character of the input is the expected character. We can use this parser to build up a parser for the string “Hello”: helloParser :: Parser String helloParser = do char ’H’ char ’e’ char ’l’ char ’l’ char ’o’ return "Hello" This shows how easy it is to combine these parsers. We don’t need to worry about the underlying string – the monad takes care of that for us. All we need to do is combine these parser primatives. We can test this parser by using runParser and by supplying input: Parsing> runParser helloParser "Hello" Right ("","Hello") Parsing> runParser helloParser "Hello World!" Right (" World!","Hello") Parsing> runParser helloParser "hello World!" Left "expecting ’H’ got ’h’" We can have a slightly more general function, which will match any character fitting a description: matchChar :: (Char -> Bool) -> Parser Char matchChar c = Parser matchChar’ where matchChar’ [] = Left ("expecting char, got EOF") matchChar’ (x:xs) | c x = Right (xs, x) | otherwise = Left ("expecting char, got " ++ show x) Using this, we can write a case-insensitive “Hello” parser:


ciHelloParser = do c1 <- matchChar (‘elem‘ "Hh")
                   c2 <- matchChar (‘elem‘ "Ee")
                   c3 <- matchChar (‘elem‘ "Ll")
                   c4 <- matchChar (‘elem‘ "Ll")
                   c5 <- matchChar (‘elem‘ "Oo")
                   return [c1,c2,c3,c4,c5]

Of course, we could have used something like matchChar ((==’h’) . toLower), but the above implementation works just as well. We can test this function:

Parsing> runParser ciHelloParser "hELlO world!"
Right (" world!","hELlO")

Finally, we can have a function, which will match any character:

anyChar :: Parser Char
anyChar = Parser anyChar’
  where anyChar’ []     = Left ("expecting character, got EOF")
        anyChar’ (x:xs) = Right (xs, x)

On top of these primitives, we usually build some combinators. The many combinator, for instance, will take a parser that parses entities of type a and will make it into a parser that parses entities of type [a] (this is a Kleene-star operator): many :: Parser a -> Parser [a] many (Parser p) = Parser many’ where many’ xl = case p xl of Left err -> Right (xl, []) Right (xl’,a) -> let Right (xl’’, rest) = many’ xl’ in Right (xl’’, a:rest) The idea here is that first we try to apply the given parser, p. If this fails, we succeed but return the empty list. If p succeeds, we recurse and keep trying to apply p until it fails. We then return the list of successes we’ve accumulated. In general, there would be many more functions of this sort, and they would be hidden away in a library, so that users couldn’t actually look inside the Parser type. However, using them, you could build up, for instance, a parser that parses (nonnegative) integers:


int :: Parser Int int = do t1 <- matchChar isDigit tr <- many (matchChar isDigit) return (read (t1:tr)) In this function, we first match a digit (the isDigit function comes from the module Char/Data.Char) and then match as many more digits as we can. We then read the result and return it. We can test this parser as before: Parsing> runParser int "54" Right ("",54) *Parsing> runParser int "54abc" Right ("abc",54) *Parsing> runParser int "a54abc" Left "expecting char, got ’a’" Now, suppose we want to parse a Haskell-style list of Ints. This becomes somewhat difficult because, at some point, we’re either going to parse a comma or a close brace, but we don’t know when this will happen. This is where the fact that Parser is an instance of MonadPlus comes in handy: first we try one, then we try the other. Consider the following code: intList :: Parser [Int] intList = do char ’[’ intList’ ‘mplus‘ (char ’]’ >> return []) where intList’ = do i <- int r <- (char ’,’ >> intList’) ‘mplus‘ (char ’]’ >> return []) return (i:r) The first thing this code does is parse and open brace. Then, using mplus, it tries one of two things: parsing using intList’, or parsing a close brace and returning an empty list. The intList’ function assumes that we’re not yet at the end of the list, and so it first parses an int. It then parses the rest of the list. However, it doesn’t know whether we’re at the end yet, so it again uses mplus. On the one hand, it tries to parse a comma and then recurse; on the other, it parses a close brace and returns the empty list. Either way, it simply prepends the int it parsed itself to the beginning. One thing that you should be careful of is the order in which you supply arguments to mplus. Consider the following parser:


tricky = mplus (string "Hal") (string "Hall") You might expect this parser to parse both the words “Hal” and “Hall;” however, it only parses the former. You can see this with: Parsing> runParser tricky "Hal" Right ("","Hal") Parsing> runParser tricky "Hall" Right ("l","Hal") This is because it tries to parse “Hal,” which succeeds, and then it doesn’t bother trying to parse “Hall.” You can attempt to fix this by providing a parser primitive, which detects end-of-file (really, end-of-string) as: eof :: Parser () eof = Parser eof’ where eof’ [] = Right ([], ()) eof’ xl = Left ("Expecting EOF, got " ++ show (take 10 xl)) You might then rewrite tricky using eof as: tricky2 = do s <- mplus (string "Hal") (string "Hall") eof return s But this also doesn’t work, as we can easily see: Parsing> runParser tricky2 "Hal" Right ("",()) Parsing> runParser tricky2 "Hall" Left "Expecting EOF, got \"l\"" This is because, again, the mplus doesn’t know that it needs to parse the whole input. So, when you provide it with “Hall,” it parses just “Hal” and leaves the last “l” lying around to be parsed later. This causes eof to produce an error message. The correct way to implement this is: tricky3 = mplus (do s <- string "Hal"


eof return s) (do s <- string "Hall" eof return s) We can see that this works: Parsing> runParser tricky3 "Hal" Right ("","Hal") Parsing> runParser tricky3 "Hall" Right ("","Hall") This works precisely because each side of the mplus knows that it must read the end. In this case, fixing the parser to accept both “Hal” and “Hall” was fairly simple, due to the fact that we assumed we would be reading an end-of-file immediately afterwards. Unfortunately, if we cannot disambiguate immediately, life becomes significantly more complicated. This is a general problem in parsing, and has little to do with monadic parsing. The solution most parser libraries (e.g., Parsec, see Section 9.8.2) have adopted is to only recognize “LL(1)” grammars: that means that you must be able to disambiguate the input with a one token look-ahead.
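The tricky parsers above use a string combinator that we never actually defined for our small library; a minimal sketch (an assumption about its definition, not something given in the text), built from char and the Monad instance, is:

-- succeed only if the input begins with exactly the given characters,
-- returning them; mapM works here because Parser is a monad
string :: String -> Parser String
string s = mapM char s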

Exercises Exercise 9.6 Write a parser intListSpace that will parse int lists but will allow arbitrary white space (spaces, tabs or newlines) between the commas and brackets. Given this monadic parser, it is fairly easy to add information regarding source position. For instance, if we’re parsing a large file, it might be helpful to report the line number on which an error occurred. We could do this simply by extending the Parser type and by modifying the instances and the primitives: newtype Parser a = Parser { runParser :: Int -> String -> Either String (Int, String, a) } instance Monad Parser where return a = Parser (\n xl -> Right (n,xl,a)) fail s = Parser (\n xl -> Left (show n ++ ": " ++ s)) Parser m >>= k = Parser $ \n xl -> case m n xl of Left s -> Left s Right (n’, xl’, a) -> let Parser m2 = k a

          in m2 n’ xl’

instance MonadPlus Parser where mzero = Parser (\n xl -> Left "mzero") Parser p ‘mplus‘ Parser q = Parser $ \n xl -> case p n xl of Right a -> Right a Left err -> case q n xl of Right a -> Right a Left _ -> Left err matchChar :: (Char -> Bool) -> Parser Char matchChar c = Parser matchChar’ where matchChar’ n [] = Left ("expecting char, got EOF") matchChar’ n (x:xs) | c x = Right (n+if x==’\n’ then 1 else 0 , xs, x) | otherwise = Left ("expecting char, got " ++ show x) The definitions for char and anyChar are not given, since they can be written in terms of matchChar. The many function needs to be modified only to include the new state. Now, when we run a parser and there is an error, it will tell us which line number contains the error: Parsing2> runParser helloParser 1 "Hello" Right (1,"","Hello") Parsing2> runParser int 1 "a54" Left "1: expecting char, got ’a’" Parsing2> runParser intList 1 "[1,2,3,a]" Left "1: expecting ’]’ got ’1’" We can use the intListSpace parser from the prior exercise to see that this does in fact work: Parsing2> runParser intListSpace 1 "[1 ,2 , 4 \n\n ,a\n]" Left "3: expecting char, got ’a’" Parsing2> runParser intListSpace 1 "[1 ,2 , 4 \n\n\n ,a\n]" Left "4: expecting char, got ’a’" Parsing2> runParser intListSpace 1


"[1 ,\n2 , 4 \n\n\n ,a\n]" Left "5: expecting char, got ’a’" We can see that the line number, on which the error occurs, increases as we add additional newlines before the erroneous “a”.

9.8.2 Parsec As you continue developing your parser, you might want to add more and more features. Luckily, Graham Hutton and Daan Leijen have already done this for us in the Parsec library. This section is intended to be an introduction to the Parsec library; it by no means covers the whole library, but it should be enough to get you started. Like our libarary, Parsec provides a few basic functions to build parsers from characters. These are: char, which is the same as our char; anyChar, which is the same as our anyChar; satisfy, which is the same as our matchChar; oneOf, which takes a list of Chars and matches any of them; and noneOf, which is the opposite of oneOf. The primary function Parsec uses to run a parser is parse. However, in addition to a parser, this function takes a string that represents the name of the file you’re parsing. This is so it can give better error messages. We can try parsing with the above functions:


ParsecI> parse (char ’a’) "stdin" "a" Right ’a’ ParsecI> parse (char ’a’) "stdin" "ab" Right ’a’ ParsecI> parse (char ’a’) "stdin" "b" Left "stdin" (line 1, column 1): unexpected "b" expecting "a" ParsecI> parse (char ’H’ >> char ’a’ >> char ’l’) "stdin" "Hal" Right ’l’ ParsecI> parse (char ’H’ >> char ’a’ >> char ’l’) "stdin" "Hap" Left "stdin" (line 1, column 3): unexpected "p" expecting "l" Here, we can see a few differences between our parser and Parsec: first, the rest of the string isn’t returned when we run parse. Second, the error messages produced are much better. In addition to the basic character parsing functions, Parsec provides primitives for: spaces, which is the same as ours; space which parses a single space; letter, which parses a letter; digit, which parses a digit; string, which is the same as ours; and a few others. We can write our int and intList functions in Parsec as:


int :: CharParser st Int int = do i1 <- digit ir <- many digit return (read (i1:ir)) intList :: CharParser st [Int] intList = do char ’[’ intList’ ‘mplus‘ (char ’]’ >> return []) where intList’ = do i <- int r <- (char ’,’ >> intList’) ‘mplus‘ (char ’]’ >> return []) return (i:r) First, note the type signatures. The st type variable is simply a state variable that we are not using. In the int function, we use the many function (built in to Parsec) together with the digit function (also built in to Parsec). The intList function is actually identical to the one we wrote before. Note, however, that using mplus explicitly is not the preferred method of combining parsers: Parsec provides a <|> function that is a synonym of mplus, but that looks nicer: intList :: CharParser st [Int] intList = do char ’[’ intList’ <|> (char ’]’ >> return []) where intList’ = do i <- int r <- (char ’,’ >> intList’) <|> (char ’]’ >> return []) return (i:r) We can test this: ParsecI> parse intList "stdin" "[3,5,2,10]" Right [3,5,2,10] ParsecI> parse intList "stdin" "[3,5,a,10]" Left "stdin" (line 1, column 6): unexpected "a" expecting digit In addition to these basic combinators, Parsec provides a few other useful ones:


• choice takes a list of parsers and performs an or operation (<|>) between all of them. • option takes a default value of type a and a parser that returns something of type a. It then tries to parse with the parser, but it uses the default value as the return, if the parsing fails. • optional takes a parser that returns () and optionally runs it. • between takes three parsers: an open parser, a close parser and a between parser. It runs them in order and returns the value of the between parser. This can be used, for instance, to take care of the brackets on our intList parser. • notFollowedBy takes a parser and returns one that succeeds only if the given parser would have failed. Suppose we want to parse a simple calculator language that includes only plus and times. Furthermore, for simplicity, assume each embedded expression must be enclosed in parentheses. We can give a datatype for this language as: data Expr = Value Int | Expr :+: Expr | Expr :*: Expr deriving (Eq, Ord, Show) And then write a parser for this language as: parseExpr :: Parser Expr parseExpr = choice [ do i <- int; return (Value i) , between (char ’(’) (char ’)’) $ do e1 <- parseExpr op <- oneOf "+*" e2 <- parseExpr case op of ’+’ -> return (e1 :+: e2) ’*’ -> return (e1 :*: e2) ] Here, the parser alternates between two options (we could have used <|>, but I wanted to show the choice combinator in action). The first simply parses an int and then wraps it up in the Value constructor. The second option uses between to parse text between parentheses. What it parses is first an expression, then one of plus or times, then another expression. Depending on what the operator is, it returns either e1 :+: e2 or e1 :*: e2. We can modify this parser, so that instead of computing an Expr, it simply computes the value:


parseValue :: Parser Int parseValue = choice [int ,between (char ’(’) (char ’)’) $ do e1 <- parseValue op <- oneOf "+*" e2 <- parseValue case op of ’+’ -> return (e1 + e2) ’*’ -> return (e1 * e2) ] We can use this as: ParsecI> parse parseValue "stdin" "(3*(4+3))" Right 21 bindings getState setState updateState

Now, suppose we want to introduce bindings into our language. That is, we want to also be able to say “let x = 5 in” inside of our expressions and then use the variables we’ve defined. In order to do this, we need to use the getState and setState (or updateState) functions built in to Parsec. parseValueLet :: CharParser (FiniteMap Char Int) Int parseValueLet = choice [ int , do string "let " c <- letter char ’=’ e <- parseValueLet string " in " updateState (\fm -> addToFM fm c e) parseValueLet , do c <- letter fm <- getState case lookupFM fm c of Nothing -> unexpected ("variable " ++ show c ++ " unbound") Just i -> return i , between (char ’(’) (char ’)’) $ do e1 <- parseValueLet op <- oneOf "+*" e2 <- parseValueLet case op of ’+’ -> return (e1 + e2)


’*’ -> return (e1 * e2) ] The int and recursive cases remain the same. We add two more cases, one to deal with let-bindings, the other to deal with usages. In the let-bindings case, we first parse a “let” string, followed by the character we’re binding (the letter function is a Parsec primitive that parses alphabetic characters), followed by it’s value (a parseValueLet). Then, we parse the “ in ” and update the state to include this binding. Finally, we continue and parse the rest. In the usage case, we simply parse the character and then look it up in the state. However, if it doesn’t exist, we use the Parsec primitive unexpected to report an error. We can see this parser in action using the runParser command, which enables us to provide an initial state: ParsecI> runParser parseValueLet emptyFM "stdin" "let c=5 in ((5+4)*c)" Right 45 *ParsecI> runParser parseValueLet emptyFM "stdin" "let c=5 in ((5+4)*let x=2 in (c+x))" Right 63 *ParsecI> runParser parseValueLet emptyFM "stdin" "((let x=2 in 3+4)*x)" Right 14 Note that the bracketing does not affect the definitions of the variables. For instance, in the last example, the use of “x” is, in some sense, outside the scope of the definition. However, our parser doesn’t notice this, since it operates in a strictly leftto-right fashion. In order to fix this omission, bindings would have to be removed (see the exercises).

Exercises Exercise 9.7 Modify the parseValueLet parser, so that it obeys bracketing. In order to do this, you will need to change the state to something like FiniteMap Char [Int], where the [Int] is a stack of definitions.


Chapter 10

Advanced Techniques

10.1 Exceptions
10.2 Mutable Arrays
10.3 Mutable References
10.4 The ST Monad
10.5 Concurrency
10.6 Regular Expressions
10.7 Dynamic Types


Appendix A

Brief Complexity Theory Complexity Theory is the study of how long a program will take to run, depending on the size of its input. There are many good introductory books to complexity theory and the basics are explained in any good algorithms book. I’ll keep the discussion here to a minimum. The idea is to say how well a program scales with more data. If you have a program that runs quickly on very small amounts of data but chokes on huge amounts of data, it’s not very useful (unless you know you’ll only be working with small amounts of data, of course). Consider the following Haskell function to return the sum of the elements in a list: sum [] = 0 sum (x:xs) = x + sum xs How long does it take this function to complete? That’s a very difficult question; it would depend on all sorts of things: your processor speed, your amount of memory, the exact way in which the addition is carried out, the length of the list, how many other programs are running on your computer, and so on. This is far too much to deal with, so we need to invent a simpler model. The model we use is sort of an arbitrary “machine step.” So the question is “how many machine steps will it take for this program to complete?” In this case, it only depends on the length of the input list. If the input list is of length 0, the function will take either 0 or 1 or 2 or some very small number of machine steps, depending exactly on how you count them (perhaps 1 step to do the pattern matching and 1 more to return the value 0). What if the list is of length 1. Well, it would take however much time the list of length 0 would take, plus a few more steps for doing the first (and only element). If the input list is of length n, it will take however many steps an empty list would take (call this value y) and then, for each element it would take a certain number of steps to do the addition and the recursive call (call this number x). Then, the total time this function will take is nx + y since it needs to do those additions n many times. These x and y values are called constant values, since they are independent of n, and actually dependent only on exactly how we define a machine step, so we really don’t 159


want to consider them all that important. Therefore, we say that the complexity of this sum function is O(n) (read “order n”). Basically saying something is O(n) means that for some constant factors x and y, the function takes nx+ y machine steps to complete. Consider the following sorting algorithm for lists (commonly called “insertion sort”): sort [] = [] sort [x] = [x] sort (x:xs) = insert (sort xs) where insert [] = [x] insert (y:ys) | x <= y = x : y : ys | otherwise = y : insert ys The way this algorithm works is as follow: if we want to sort an empty list or a list of just one element, we return them as they are, as they are already sorted. Otherwise, we have a list of the form x:xs. In this case, we sort xs and then want to insert x in the appropriate location. That’s what the insert function does. It traverses the now-sorted tail and inserts x wherever it naturally fits. Let’s analyze how long this function takes to complete. Suppose it takes f (n) stepts to sort a list of length n. Then, in order to sort a list of n-many elements, we first have to sort the tail of the list first, which takes f (n − 1) time. Then, we have to insert x into this new list. If x has to go at the end, this will take O(n − 1) = O(n) steps. Putting all of this together, we see that we have to do O(n) amount of work O(n) many times, which means that the entire complexity of this sorting algorithm is O(n2 ). Here, the squared is not a constant value, so we cannot throw it out. What does this mean? Simply that for really long lists, the sum function won’t take very long, but that the sort function will take quite some time. Of course there are algorithms that run much more slowly that simply O(n2 ) and there are ones that run more quickly than O(n). Consider the random access functions for lists and arrays. In the worst case, accessing an arbitrary element in a list of length n will take O(n) time (think about accessing the last element). However with arrays, you can access any element immediately, which is said to be in constant time, or O(1), which is basically as fast an any algorithm can go. There’s much more in complexity theory than this, but this should be enough to allow you to understand all the discussions in this tutorial. Just keep in mind that O(1) is faster than O(n) is faster than O(n2 ), etc.

Appendix B

Recursion and Induction

Informally, a function is recursive if its definition depends on itself. The prototypical example is factorial, whose definition is:

    fact(n) = 1                  if n = 0
    fact(n) = n * fact(n - 1)    if n > 0

Here, we can see that in order to calculate fact(5), we need to calculate fact(4), but in order to calculate fact(4), we need to calculate fact(3), and so on. Recursive function definitions always contain a number of non-recursive base cases and a number of recursive cases. In the case of factorial, we have one of each. The base case is when n = 0 and the recursive case is when n > 0.
One can actually think of the natural numbers themselves as recursive (in fact, if you ask set theorists about this, they’ll say this is how it is). That is, there is a zero element and then for every element, it has a successor. That is 1 = succ(0), 2 = succ(1), . . . , 573 = succ(572), . . . and so on forever. We can actually implement this system of natural numbers in Haskell:

data Nat = Zero | Succ Nat

This is a recursive type definition. Here, we represent one as Succ Zero and three as Succ (Succ (Succ Zero)). One thing we might want to do is be able to convert back and forth between Nats and Ints. Clearly, we can write a base case as:

natToInt Zero = 0

In order to write the recursive case, we realize that we’re going to have something of the form Succ n. We can make the assumption that we’ll be able to take n and produce an Int. Assuming we can do this, all we need to do is add one to this result. This gives rise to our recursive case:

natToInt (Succ n) = natToInt n + 1
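The conversion in the other direction is not shown in the text; a sketch of it (assuming a non-negative argument) follows the same base-case/recursive-case pattern:

-- convert a non-negative Int back into a Nat
intToNat :: Int -> Nat
intToNat 0 = Zero
intToNat n = Succ (intToNat (n-1))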


There is a close connection between recursion and mathematical induction. Induction is a proof technique which typically breaks problems down into base cases and “inductive” cases, very analogous to our analysis of recursion. Let’s say we want to prove the statement n! ≥ n for all n ≥ 0. First we formulate a base case: namely, we wish to prove the statement when n = 0. When n = 0, n! = 1 by definition. Since n! = 1 > 0 = n, we get that 0! ≥ 0 as desired. Now, suppose that n > 0. Then n = k + 1 for some value k. We now invoke the inductive hypothesis and claim that the statement holds for n = k. That is, we assume that k! ≥ k. Now, we use k to formate the statement for our value of n. That is, n! ≥ n if and only iff (k + 1)! ≥ (k + 1). We now apply the definition of factorial and get (k + 1)! = (k + 1) ∗ k!. Now, we know k! ≥ k, so (k + 1) ∗ k! ≥ k + 1 if and only if k + 1 ≥ 1. But we know that k ≥ 0, which means k + 1 ≥ 1. Thus it is proven. It may seem a bit counter-intuitive that we are assuming that the claim is true for k in our proof that it is true for n. You can think of it like this: we’ve proved the statement for the case when n = 0. Now, we know it’s true for n = 0 so using this we use our inductive argument to show that it’s true for n = 1. Now, we know that it is true for n = 1 so we reuse our inductive argument to show that it’s true for n = 2. We can continue this argument as long as we want and then see that it’s true for all n. It’s much like pushing down dominoes. You know that when you push down the first domino, it’s going to knock over the second one. This, in turn will knock over the third, and so on. The base case is like pushing down the first domino, and the inductive case is like showing that pushing down domino k will cause the k + 1st domino to fall. In fact, we can use induction to prove that our natToInt function does the right thing. First we prove the base case: does natToInt Zero evaluate to 0? Yes, obviously it does. Now, we can assume that natToInt n evaluates to the correct value (this is the inductive hypothesis) and ask whether natToInt (Succ n) produces the correct value. Again, it is obvious that it does, by simply looking at the definition. Let’s consider a more complex example: addition of Nats. We can write this concisely as: addNat Zero m = m addNat (Succ n) m = addNat n (Succ m) Now, let’s prove that this does the correct thing. First, as the base case, suppose the first argument is Zero. We know that 0 + m = m regardless of what m is; thus in the base case the algorithm does the correct thing. Now, suppose that addNat n m does the correct thing for all m and we want to show that addNat (Succ n) m does the correct thing. We know that (n + 1) + m = n + (m + 1) and thus since addNat n (Succ m) does the correct thing (by the inductive hypothesis), our program is correct.

Appendix C

Solutions To Exercises

Solution 3.1 It binds more tightly; actually, function application binds more tightly than anything else. To see this, we can do something like:

Prelude> sqrt 3 * 3
5.19615

If multiplication bound more tightly, the result would have been 3.

Solution 3.2 Solution: snd (fst ((1,'a'),"foo")). This is because first we want to take the first half of the tuple: (1,'a'), and then out of this we want to take the second half, yielding just 'a'. If you tried fst (snd ((1,'a'),"foo")) you would have gotten a type error. This is because the application of snd will leave you with fst "foo". However, the string "foo" isn't a tuple, so you cannot apply fst to it.

Solution 3.3 Solution: map Char.isLower "aBCde"

Solution 3.4 Solution: length (filter Char.isLower "aBCde")

Solution 3.5 foldr max 0 [5,10,2,8,1]. You could also use foldl. The foldr case is easier to explain: we replace each cons with an application of max and the empty list with 0. Thus, the inner-most application will take the maximum of 0 and the last element of the list (if it exists). Then, the next-most inner application will return the maximum of whatever was the maximum before and the second-to-last element. This will continue on, carrying the current maximum all the way back to the beginning of the list.


In the foldl case, we can think of this as looking at each element in the list in order. We start off our "state" with 0. We pull off the first element and check to see if it's bigger than our current state. If it is, we replace our current state with that number and then continue. This happens for each element and thus eventually returns the maximal element.
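For comparison, here is a hedged sketch of that foldl version (the helper name maxOfList is hypothetical, and starting from 0 assumes the elements are non-negative, as in the exercise's list):

maxOfList :: [Int] -> Int
maxOfList = foldl max 0   -- the "state" starts at 0 and is replaced whenever a larger element appears

Prelude> foldl max 0 [5,10,2,8,1]
10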

Solution 3.6 fst (head (tail [(5,'b'),(1,'c'),(6,'a')]))

Solution 3.7 We can define a Fibonacci function as:

fib 1 = 1
fib 2 = 1
fib n = fib (n-1) + fib (n-2)

We could also write it using explicit if statements, like:

fib n = if n == 1 || n == 2
          then 1
          else fib (n-1) + fib (n-2)

Either is acceptable, but the first is perhaps more natural in Haskell.

Solution 3.8 We can define:

    a * b = a                  if b = 1
    a * b = a + a * (b - 1)    otherwise

And then type out code:

mult a 1 = a
mult a b = a + mult a (b-1)

Note that it doesn't matter which of a and b we do the recursion on. We could just as well have defined it as:

mult 1 b = b
mult a b = b + mult (a-1) b
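As a quick sanity check, here is a hypothetical GHCi session (it assumes the mult just defined has been loaded from a source file):

*Main> mult 3 4
12
*Main> mult 4 3
12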

Solution 3.9 We can define my_map as:

my_map f [] = []
my_map f (x:xs) = f x : my_map f xs

Recall that the my_map function is supposed to apply a function f to every element in the list. In the case that the list is empty, there are no elements to apply the function to, so we just return the empty list. In the case that the list is non-empty, it is an element x followed by a list xs. Assuming we've already properly applied my_map to xs, then all we're left to do is apply f to x and then stick the results together. This is exactly what the second line does.
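For instance, a hypothetical GHCi session (assuming the definition above has been loaded from a file):

*Main> my_map (+1) [1,2,3]
[2,3,4]
*Main> my_map (*2) [5,10]
[10,20]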

Solution 3.10 The code below appears in Numbers.hs. The only tricky parts are the recursive calls in getNums and showFactorials.

module Main where

import IO

main = do
  nums <- getNums
  putStrLn ("The sum is " ++ show (sum nums))
  putStrLn ("The product is " ++ show (product nums))
  showFactorials nums

getNums = do
  putStrLn "Give me a number (or 0 to stop):"
  num <- getLine
  if read num == 0
    then return []
    else do rest <- getNums
            return ((read num :: Int):rest)

showFactorials [] = return ()
showFactorials (x:xs) = do
  putStrLn (show x ++ " factorial is " ++ show (factorial x))
  showFactorials xs

factorial 1 = 1
factorial n = n * factorial (n-1)

The idea for getNums is just as spelled out in the hint. For showFactorials, we consider first the recursive call. Suppose we have a list of numbers, the first of


which is x. First we print out the string showing the factorial. Then we print out the rest, hence the recursive call. But what should we do in the case of the empty list? Clearly we are done, so we don’t need to do anything at all, so we simply return (). Note that this must be return () instead of just () because if we simply wrote showFactorials [] = () then this wouldn’t be an IO action, as it needs to be. For more clarification on this, you should probably just keep reading the tutorial.

Solution 4.1
1. String or [Char]
2. type error: lists are homogeneous
3. Num a => (a, Char)
4. Int
5. type error: cannot add values of different types

Solution 4.2 The types:
1. (a, b) -> b
2. [a] -> a
3. [a] -> Bool
4. [a] -> a
5. [[a]] -> a

Solution 4.3 The types:
1. a -> [a]. This function takes an element and returns the list containing only that element.
2. a -> b -> b -> (a, [b]). The second and third argument must be of the same type, since they go into the same list. The first element can be of any type.
3. Num a => a -> a. Since we apply (+) to a, it must be an instance of Num.
4. a -> String. This ignores the first argument, so it can be any type.
5. (Char -> a) -> a. In this expression, x must be a function which takes a Char as an argument. We don't know anything about what it produces, though, so we call it a.
6. Type error. Here, we assume x has type a. But x is applied to itself, so it must have type b -> c. But then it must have type (b -> c) -> c, but then it must have type ((b -> c) -> c) -> c, and so on, leading to an infinite type.
7. Num a => a -> a. Again, since we apply (+), this must be an instance of Num.

Solution 4.4 The definitions will be something like:

data Triple a b c = Triple a b c

tripleFst (Triple x y z) = x
tripleSnd (Triple x y z) = y
tripleThr (Triple x y z) = z

Solution 4.5 The code, with type signatures, is:

data Quadruple a b = Quadruple a a b b

firstTwo :: Quadruple a b -> [a]
firstTwo (Quadruple x y z t) = [x,y]

lastTwo :: Quadruple a b -> [b]
lastTwo (Quadruple x y z t) = [z,t]

We note here that there are only two type variables, a and b, associated with Quadruple.

Solution 4.6 The code:

data Tuple a b c d = One a
                   | Two a b
                   | Three a b c
                   | Four a b c d

tuple1 (One   a      ) = Just a
tuple1 (Two   a b    ) = Just a
tuple1 (Three a b c  ) = Just a
tuple1 (Four  a b c d) = Just a

tuple2 (One   a      ) = Nothing
tuple2 (Two   a b    ) = Just b
tuple2 (Three a b c  ) = Just b
tuple2 (Four  a b c d) = Just b

tuple3 (One   a      ) = Nothing
tuple3 (Two   a b    ) = Nothing
tuple3 (Three a b c  ) = Just c
tuple3 (Four  a b c d) = Just c

tuple4 (One   a      ) = Nothing
tuple4 (Two   a b    ) = Nothing
tuple4 (Three a b c  ) = Nothing
tuple4 (Four  a b c d) = Just d

Solution 4.7 The code:

fromTuple :: Tuple a b c d -> Either (Either a (a,b)) (Either (a,b,c) (a,b,c,d))
fromTuple (One   a      ) = Left  (Left  a        )
fromTuple (Two   a b    ) = Left  (Right (a,b)    )
fromTuple (Three a b c  ) = Right (Left  (a,b,c)  )
fromTuple (Four  a b c d) = Right (Right (a,b,c,d))

Here, we use embedded Eithers to represent the fact that there are four (instead of two) options.

Solution 4.8 The code:

listHead (Cons x xs) = x
listTail (Cons x xs) = xs

listFoldl f y Nil = y
listFoldl f y (Cons x xs) = listFoldl f (f y x) xs

listFoldr f y Nil = y
listFoldr f y (Cons x xs) = f x (listFoldr f y xs)
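These definitions assume the recursive list type from the corresponding exercise. A minimal sketch of that declaration (the constructor names Nil and Cons are taken from the patterns above):

data List a = Nil
            | Cons a (List a)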

Solution 4.9 The code:

elements (Leaf x) = [x]
elements (Branch lhs x rhs) =
    elements lhs ++ [x] ++ elements rhs

Solution 4.10 The code:

foldTree :: (a -> b -> b) -> b -> BinaryTree a -> b
foldTree f z (Leaf x) = f x z
foldTree f z (Branch lhs x rhs) =
    foldTree f (f x (foldTree f z rhs)) lhs

elements2 = foldTree (:) []

or:

elements2 tree = foldTree (\a b -> a:b) [] tree

The first elements2 is simply a more compact version of the second.
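As a quick, hypothetical check (it assumes the BinaryTree type from the exercise, with the Leaf and Branch constructors used in the patterns above):

*Main> elements2 (Branch (Leaf 1) 2 (Leaf 3))
[1,2,3]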

Solution 4.11 It mimics neither exactly. Its behavior most closely resembles foldr, but differs slightly in its treatment of the initial value. We can observe the difference in an interpreter:

CPS> foldr (-) 0 [1,2,3]
2
CPS> foldl (-) 0 [1,2,3]
-6
CPS> fold (-) 0 [1,2,3]
-2

Clearly it behaves differently. By writing down the derivations of fold and foldr we can see exactly where they diverge:

    foldr (-) 0 [1,2,3]
==> 1 - foldr (-) 0 [2,3]
==> ...
==> 1 - (2 - (3 - foldr (-) 0 []))
==> 1 - (2 - (3 - 0))
==> 2

    fold (-) 0 [1,2,3]
==> fold' (-) (\y -> 0 - y) [1,2,3]
==> 0 - fold' (-) (\y -> 1 - y) [2,3]
==> 0 - (1 - fold' (-) (\y -> 2 - y) [3])
==> 0 - (1 - (2 - 3))
==> -2

Essentially, the primary difference is that in the foldr case, the “initial value” is used at the end (replacing []), whereas in the CPS case, the initial value is used at the beginning.


Solution 4.12

Solution 5.1 Using if, we get something like:

main = do
  putStrLn "Please enter your name:"
  name <- getLine
  if name == "Simon" || name == "John" || name == "Phil"
    then putStrLn "Haskell is great!"
    else if name == "Koen"
           then putStrLn "Debugging Haskell is fun!"
           else putStrLn "I don't know who you are."

Note that we don't need to repeat the dos inside the ifs, since these are only one-action commands. We could also be a bit smarter and use the elem command which is built in to the Prelude:

main = do
  putStrLn "Please enter your name:"
  name <- getLine
  if name `elem` ["Simon", "John", "Phil"]
    then putStrLn "Haskell is great!"
    else if name == "Koen"
           then putStrLn "Debugging Haskell is fun!"
           else putStrLn "I don't know who you are."

Of course, we needn't put all the putStrLns inside the if statements. We could instead write:

main = do
  putStrLn "Please enter your name:"
  name <- getLine
  putStrLn (if name `elem` ["Simon", "John", "Phil"]
              then "Haskell is great!"
              else if name == "Koen"
                     then "Debugging Haskell is fun!"
                     else "I don't know who you are.")

Using case, we get something like:


main = do
  putStrLn "Please enter your name:"
  name <- getLine
  case name of
    "Simon" -> putStrLn "Haskell is great!"
    "John"  -> putStrLn "Haskell is great!"
    "Phil"  -> putStrLn "Haskell is great!"
    "Koen"  -> putStrLn "Debugging Haskell is fun!"
    _       -> putStrLn "I don't know who you are."

Which, in this case, is actually not much cleaner.

Solution 5.2 The code might look something like:

module DoFile where

import IO

main = do
  putStrLn "Do you want to [read] a file, ...?"
  cmd <- getLine
  case cmd of
    "quit"  -> return ()
    "read"  -> do doRead; main
    "write" -> do doWrite; main
    _ -> do putStrLn ("I don't understand the command "
                      ++ cmd ++ ".")
            main

doRead = do
  putStrLn "Enter a file name to read:"
  fn <- getLine
  bracket (openFile fn ReadMode) hClose
          (\h -> do txt <- hGetContents h
                    putStrLn txt)

doWrite = do
  putStrLn "Enter a file name to write:"
  fn <- getLine
  bracket (openFile fn WriteMode) hClose
          (\h -> do putStrLn "Enter text (...):"


                    writeLoop h)

writeLoop h = do
  l <- getLine
  if l == "."
    then return ()
    else do hPutStrLn h l
            writeLoop h

The only interesting things here are the calls to bracket, which ensure that the program lives on, regardless of whether there's a failure or not; and the writeLoop function. Note that we need to pass the handle returned by openFile (through bracket) to this function, so it knows where to write the input to.

Solution 7.1 Function func3 cannot be converted into point-free style. The others look something like:

func1 x = map (*x)
func2 f g = filter f . map g
func4 = map (+2) . filter (`elem` [1..10]) . (5:)
func5 = foldr (flip $ curry f) 0

You might have been tempted to try to write func2 as filter f . map, trying to eta-reduce off the g. In this case, this isn't possible. This is because the function composition operator (.) has type (b -> c) -> (a -> b) -> (a -> c). In this case, we're trying to use map as the second argument. But map takes two arguments, while (.) expects a function which takes only one.

Solution 7.2 We can start out with a recursive definition:

and [] = True
and (x:xs) = x && and xs

From here, we can clearly rewrite this as:

and = foldr (&&) True

Solution 7.3 We can write this recursively as:


concatMap f [] = []
concatMap f (x:xs) = f x ++ concatMap f xs

This hints that we can write this as:

concatMap f = foldr (\a b -> f a ++ b) []

Now, we can do point elimination to get:

    foldr (\a b -> f a ++ b) []
==> foldr (\a b -> (++) (f a) b) []
==> foldr (\a -> (++) (f a)) []
==> foldr (\a -> ((++) . f) a) []
==> foldr ((++) . f) []

Solution 9.1 The first law is: return a >>= f ≡ f a. In the case of Maybe, we get:

    return a >>= f
==> Just a >>= \x -> f x
==> (\x -> f x) a
==> f a

The second law is: f >>= return ≡ f. Here, we get:

    f >>= return
==> f >>= \x -> return x
==> f >>= \x -> Just x

At this point, there are two cases depending on whether f is Nothing or not. In the first case, we get:

==> Nothing >>= \x -> Just x
==> Nothing
==> f

In the second case, f is Just a. Then, we get:

==> Just a >>= \x -> Just x
==> (\x -> Just x) a
==> Just a
==> f


And the second law is shown. The third law states: f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h. If f is Nothing, then the left-hand side clearly reduces to Nothing. The right-hand side reduces to Nothing >>= h, which in turn reduces to Nothing, so they are the same. Suppose f is Just a. Then the LHS reduces to g a >>= h and the RHS reduces to (Just a >>= \x -> g x) >>= h, which in turn reduces to g a >>= h, so these two are the same.

Solution 9.2 The idea is that we wish to use the Left constructor to represent errors and the Right constructor to represent successes. This leads to an instance declaration like:

instance Monad (Either String) where
  return x = Right x
  Left s >>= _ = Left s
  Right x >>= f = f x
  fail s = Left s

If we try to use this monad to do search, we get:

Monads> searchAll gr 0 3 :: Either String [Int]
Right [0,1,3]
Monads> searchAll gr 3 0 :: Either String [Int]
Left "no path"

which is exactly what we want.

Solution 9.3 The order of the arguments to mplus essentially determines the search order. When the recursive call to searchAll2 comes first, we are doing depth-first search. When the recursive call to search' comes first, we are doing breadth-first search. Thus, using the list monad, we expect the solutions to come in the other order:

MPlus> searchAll3 gr 0 3 :: [[Int]]
[[0,2,3],[0,1,3]]

Just as we expected.

Solution 9.4 This is a very difficult problem; if you found that you were stuck immediately, please just read as much of this solution as you need to try it yourself. First, we need to define a list transformer monad. This looks like:


newtype ListT m e = ListT { unListT :: m [e] }

The ListT constructor simply wraps a monadic action (in monad m) which returns a list. We now need to make this a monad:

instance Monad m => Monad (ListT m) where
  return x = ListT (return [x])
  fail s = ListT (return [])
  ListT m >>= k = ListT $ do
    l <- m
    l' <- mapM (unListT . k) l
    return (concat l')

Here, success is designated by a monadic action which returns a singleton list. Failure (like in the standard list monad) is represented by an empty list: of course, it's actually an empty list returned from the enclosed monad. Binding happens essentially by running the action, which will result in a list l. This has type [e]. We now need to apply k to each of these elements (which will result in something of type ListT m [e2]). We need to get rid of the ListTs around this (by using unListT) and then concatenate them to make a single list.

Now, we need to make it an instance of MonadPlus:

instance Monad m => MonadPlus (ListT m) where
  mzero = ListT (return [])
  ListT m1 `mplus` ListT m2 = ListT $ do
    l1 <- m1
    l2 <- m2
    return (l1 ++ l2)

Here, the zero element is a monadic action which returns an empty list. Addition is done by executing both actions and then concatenating the results. Finally, we need to make it an instance of MonadTrans:

instance MonadTrans ListT where
  lift x = ListT (do a <- x; return [a])

Lifting an action into ListT simply involves running it and getting the value (in this case, a) out and then returning the singleton list. Once we have all this together, writing searchAll6 is fairly straightforward:

searchAll6 g@(Graph vl el) src dst
  | src == dst = do
      lift $ putStrLn $


"Exploring " ++ show src ++ " -> " ++ show dst return [src] | otherwise = do lift $ putStrLn $ "Exploring " ++ show src ++ " -> " ++ show dst search’ el where search’ [] = mzero search’ ((u,v,_):es) | src == u = (do path <- searchAll6 g v dst return (u:path)) ‘mplus‘ search’ es | otherwise = search’ es The only change (besides changing the recursive call to call searchAll6 instead of searchAll2) here is that we call putStrLn with appropriate arguments, lifted into the monad. If we look at the type of searchAll6, we see that the result (i.e., after applying a graph and two ints) has type MonadTrans t, MonadPlus (t IO) => t IO [Int]). In theory, we could use this with any appropriate monad transformer; in our case, we want to use ListT. Thus, we can run this by: MTrans> unListT (searchAll6 gr 0 3) Exploring 0 -> 3 Exploring 1 -> 3 Exploring 3 -> 3 Exploring 2 -> 3 Exploring 3 -> 3 MTrans> it [[0,1,3],[0,2,3]] This is precisely what we were looking for.

Solution 9.5 This exercise is actually simpler than the previous one. All we need to do is incorporate the calls to putT and getT into searchAll6 and add an extra lift to the IO calls. This extra lift is required because now we're stacking two transformers on top of IO instead of just one.

searchAll7 g@(Graph vl el) src dst
  | src == dst = do
      lift $ lift $ putStrLn $
        "Exploring " ++ show src ++ " -> " ++ show dst
      visited <- getT
      putT (src:visited)
      return [src]
  | otherwise = do
      lift $ lift $ putStrLn $
        "Exploring " ++ show src ++ " -> " ++ show dst
      visited <- getT
      putT (src:visited)
      if src `elem` visited
        then mzero
        else search' el
  where
    search' [] = mzero
    search' ((u,v,_):es)
      | src == u =
          (do path <- searchAll7 g v dst
              return (u:path)) `mplus` search' es
      | otherwise = search' es

The type of this has grown significantly. After applying the graph and two ints, this has type (Monad (t IO), MonadTrans t, MonadPlus (StateT [Int] (t IO))) => StateT [Int] (t IO) [Int]. Essentially this means that we've got something that's a state transformer wrapped on top of some other arbitrary transformer (t) which itself sits on top of IO. In our case, t is going to be ListT. Thus, we run this beast by saying:

MTrans> unListT (evalStateT (searchAll7 gr4 0 3) [])
Exploring 0 -> 3
Exploring 1 -> 3
Exploring 3 -> 3
Exploring 0 -> 3
Exploring 2 -> 3
Exploring 3 -> 3
MTrans> it
[[0,1,3],[0,2,3]]

And it works, even on gr4.

Solution 9.6 First we write a function spaces which will parse out whitespace:

spaces :: Parser ()
spaces = many (matchChar isSpace) >> return ()

Now, using this, we simply sprinkle calls to spaces through intList to get intListSpace:


intListSpace :: Parser [Int]
intListSpace = do
  char '['
  spaces
  intList' `mplus` (char ']' >> return [])
  where intList' = do
          i <- int
          spaces
          r <- (char ',' >> spaces >> intList')
                 `mplus`
               (char ']' >> return [])
          return (i:r)

We can test that this works:

Parsing> runParser intListSpace "[1 ,2 , 4 \n\n ,5\n]"
Right ("",[1,2,4,5])
Parsing> runParser intListSpace "[1 ,2 , 4 \n\n ,a\n]"
Left "expecting char, got 'a'"

Solution 9.7 We do this by replacing the state functions with push and pop functions as follows:

parseValueLet2 :: CharParser (FiniteMap Char [Int]) Int
parseValueLet2 = choice
  [ int
  , do string "let "
       c <- letter
       char '='
       e <- parseValueLet2
       string " in "
       pushBinding c e
       v <- parseValueLet2
       popBinding c
       return v
  , do c <- letter
       fm <- getState
       case lookupFM fm c of
         Nothing -> unexpected ("variable " ++ show c ++ " unbound")
         Just (i:_) -> return i
  , between (char '(') (char ')') $ do
      e1 <- parseValueLet2
      op <- oneOf "+*"
      e2 <- parseValueLet2
      case op of
        '+' -> return (e1 + e2)
        '*' -> return (e1 * e2)
  ]
  where
    pushBinding c v = do
      fm <- getState
      case lookupFM fm c of
        Nothing -> setState (addToFM fm c [v])
        Just l -> setState (addToFM fm c (v:l))
    popBinding c = do
      fm <- getState
      case lookupFM fm c of
        Just [_] -> setState (delFromFM fm c)
        Just (_:l) -> setState (addToFM fm c l)

The primary difference here is that instead of calling updateState, we use two local functions, pushBinding and popBinding. The pushBinding function takes a variable name and a value and adds the value onto the head of the list pointed to in the state FiniteMap. The popBinding function looks at the value and, if there is only one element on the stack, it completely removes the stack from the FiniteMap; otherwise it just removes the first element. This means that if something is in the FiniteMap, the stack is never empty. This enables us to modify only slightly the usage case; this time, we simply take the top element off the stack when we need to inspect the value of a variable. We can test that this works:

ParsecI> runParser parseValueLet2 emptyFM "stdin" "((let x=2 in 3+4)*x)"
Left "stdin" (line 1, column 20):
unexpected variable 'x' unbound

Index (), 53 ∗, 13 +, 13 ++, 17 −, 13 −−, 28 ., 25 .., 94 /, 13 //, 97 :, 16 ::, 37 ==, 38 [], 16 $, 77 ˆ, 13 ΓE30F , 42 λ, 42 {−−}, 28 , 81 as, 70 derive, 89 do, 32 hiding, 70 import, 69 let, 27 qualified, 69 accumArray, 96 actions, 58–62 arithmetic, 13–14 array, 96 arrays, 96–97 mutable, 157 assocs, 97 Bird scripts, 71

boolean, 38 bounds, 97 bracket, 62 brackets, 15 buffering, 33 comments, 28–29 common sub-expression, 74 Compilers, see GHC,NHC,Interpreters concatenate, 17 concurrent, 157 cons, 16 constructors, 48 continuation passing style, 53–56 CPS, see continuation passing style destructive update, 13 do notation, 120–122 do notation, 32 drop, 94 dynamic, 157 Editors, 9–10 Emacs, 10 elems, 97 Enum, 87–88 enumerated types, 52 enumFromThenTo, 94 enumFromTo, 94 Eq, 84–86 equality, 38 equals, 41 eta reduction, 76 evaluation order, 13 exceptions, 157 exports, 67–69 expressions, 13 180

INDEX extensions enabling in GHC, 9 in Hugs, 7 fallthrough, 82 false, see boolean files, 20–22 filter, 18, 93 FiniteMap, 97–99 foldl, 18, 93 foldr, 18, 93 folds, 99–101 fst, 15 functional, i functions, 22–28 anonymous, see lambda as arguments, 46–47 associative, 19 composition, 25 type, 42–47 getChar, 62 getContents, 62 getLine, 32, 62 GHC, 5, 7–9 guards, 83–84 Haskell 98, iv Haskell Bookshelf, iv hClose, 62 head, 17, 43 hGetChar, 62 hGetContents, 62 hGetLin, 62 hIsEOF, 62 hPutChar, 62 hPutStr, 62 hPutStrLn, 62 Hugs, 5–7 immutable, 12 imports, 69–70 indices, 97 induction, 30 infix, 44, 73–74

181 input, see IO instance, 39 interactive programs, 31–36 Interpreters, see GHC,Hugs IO, 57–66 library, 62–64 isLower, 18 lambda, 42 lambda calculus, 42 LaTeX scripts, 72 layout, 99 lazy, i, 11 length, 17, 46 let and do, 121 list comprehensions, 96 listArray, 96 lists, 15–20, 92–96 comprehensions, 94 cons, 16 empty, 16 literate style, 71–72 local bindings, 27, 74 local declarations, 74 loop, 29 map, 18, 92 maps, 97–99 modules, 67–72 hierarchical, 70–71 monads, 58 and do, 120–122 combinators, 133 definition of, 122–124 laws, 122 plus, 137–139 st, 157 state, 124–130 transformer, 139–143 monads-combinators, 137 mutable, see immutable named fields, 90–92 NHC, 5, 9


182 null, 43 Num, 88 numeric types, 41 openFile, 62 operator precedence, 14 Ord, 86–87 output, see IO pairs, see tuples parentheses, 14 parsing, 143–155 partial application, 76–79 pattern matching, 48, 79–83 point-free programming, 77 primitive recursive, 99 pure, i, 12 putChar, 62 putStr, 62 putStrLn, 22, 62 random numbers, 33 randomRIO, 33 Read, 88 read, 17 readFile, 62 recursion, 29–31 references, 157 referential tranparency, 13 regular expressions, 157 sections, 73–74 shadowing, 74 Show, 86 show, 17, 41 snd, 15 sqrt, 13 standard, iv state, i strict, i, 11, 105–108 strings, 17 converting from/to, 17 tail, 17, 43 take, 94 toUpper, 18 true, see boolean

tuples, 14–15 type, 37, 117 checking, 37 classes, 40–42, 108–113 instances, 84–89, 113–115 datatypes, 47–53, 89–92, 105–108 constructors, 48–50 recursive, 50–51 strict, 105–108 default, 117 errors, 38 explicit declarations, 45–46 hierarchy, 117 higher-order, 42–44 inference, 37 IO, 44–45 kinds, 115–117 newtype, 104–105 polymorphic, 39–40 signatures, 45 synonyms, 63, 103–104 Unit, 53 unzip, 93 user input, see interactive programs wildcard, 81 writeFile, 62 zip, 93


Haskell/Print version
From Wikibooks, the open-content textbooks collection

Table Of Contents

Haskell Basics
  Getting set up
  Variables and functions
  Lists and tuples
  Next steps
  Type basics
  Simple input and output
  Type declarations

Elementary Haskell
  Recursion
  List processing
  Pattern matching
  More about lists
  Control structures
  More on functions
  Higher order functions

Intermediate Haskell
  Modules
  Indentation
  More on datatypes
  Class declarations
  Classes and types
  Keeping track of State

Monads
  Understanding monads
  Advanced monads
  Additive monads (MonadPlus)
  Monad transformers
  Practical monads

Advanced Haskell
  Arrows
  Understanding arrows
  Continuation passing style (CPS)


  Mutable objects
  Zippers
  Applicative Functors
  Concurrency

Fun with Types
  Existentially quantified types
  Polymorphism
  Advanced type classes
  Phantom types
  Generalised algebraic data-types (GADT)
  Datatype algebra

Wider Theory
  Denotational semantics
  Equational reasoning
  Program derivation
  Category theory
  The Curry-Howard isomorphism

Haskell Performance
  Graph reduction
  Laziness
  Strictness
  Algorithm complexity
  Parallelism
  Choosing data structures

Libraries Reference
  The Hierarchical Libraries
  Lists:Arrays:Maybe:Maps
  IO:Random Numbers

General Practices
  Building a standalone application
  Debugging
  Testing
  Packaging your software (Cabal)
  Using the Foreign Function Interface (FFI)

Specialised Tasks
  Graphical user interfaces (GUI)
  Databases
  Web programming
  Working with XML
  Using Regular Expressions


Haskell Basics

Getting set up

This chapter will explore how to install the programs you'll need to start coding in Haskell.

Installing Haskell

First of all, you need a Haskell compiler. A compiler is a program that takes your code and spits out an executable which you can run on your machine. There are several Haskell compilers available freely, the most popular and fully featured of them all being the Glasgow Haskell Compiler, or GHC for short. GHC was originally written at the University of Glasgow. GHC is available for most platforms:

For MS Windows, see the GHC download page (http://haskell.org/ghc/download.html) for details.
For MacOS X, Linux or other platforms, you are most likely better off using one of the pre-packaged versions (http://haskell.org/ghc/distribution_packages.html) for your distribution or operating system.

Note: A quick note to those people who prefer to compile from source: this might be a bad idea with GHC, especially if it's the first time you install it. GHC is itself mostly written in Haskell, so trying to bootstrap it by hand from source is very tricky. Besides, the build takes a very long time and consumes a lot of disk space. If you are sure that you want to build GHC from the source, see Building and Porting GHC at the GHC homepage (http://hackage.haskell.org/trac/ghc/wiki/Building).

Getting interactive

If you've just installed GHC, then you'll have also installed a sideline program called GHCi. The 'i' stands for 'interactive', and you can see this if you start it up. Open a shell (or click Start, then Run, then type 'cmd' and hit Enter if you're on Windows) and type ghci, then press Enter. You should get output that looks something like the following:

   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.6, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Prelude>


The first bit is GHCi's logo. It then informs you it's loading the base package, so you'll have access to most of the built-in functions and modules that come with GHC. Finally, the Prelude> bit is known as the prompt. This is where you enter commands, and GHCi will respond with what they evaluate to. Let's try some basic arithmetic:

Prelude> 2 + 2
4
Prelude> 5 * 4 + 3
23
Prelude> 2 ^ 5
32

The operators are similar to what they are in other languages: + is addition, * is multiplication, and ^ is exponentiation (raising to the power of). GHCi is a very powerful development environment. As we progress through the course, we'll learn how we can load source files into GHCi, and evaluate different bits of them. The next chapter will introduce some of the basic concepts of Haskell. Let's dive into that and have a look at our first Haskell functions.

Variables and functions

(All the examples in this chapter can be typed into a Haskell source file and evaluated by loading that file into GHC or Hugs.)

Variables

Previously, we saw how to do simple arithmetic operations like addition and subtraction. Pop quiz: what is the area of a circle whose radius is 5 cm? No, don't worry, you haven't stumbled through the Geometry wikibook by mistake. The area of our circle is π r^2, where r is our radius (5 cm) and π, for the sake of simplicity, is 3.14. So let's try this out in GHCi:

   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.4.1, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base-1.0 ... linking ... done.
Prelude>

So let's see, we want to multiply pi (3.14) times our radius squared, so that would be

Prelude> 3.14 * 5^2
78.5


Great! Well, now since we have these wonderful, powerful computers to help us calculate things, there really isn't any need to round pi down to 2 decimal places. Let's do the same thing again, but with a slightly longer value for pi:

Prelude> 3.14159265358979323846264338327950 * (5 ^ 2)
78.53981633974483

Much better. So now how about giving me the circumference of that circle (hint: 2π r)?

Prelude> 2 * 3.14159265358979323846264338327950 * 5
31.41592653589793

Or how about the area of a different circle with radius 25 (hint: π r^2)?

Prelude> 3.14159265358979323846264338327950 * (25 ^ 2)
1963.4954084936207

What we're hoping here is that sooner or later, you are starting to get sick of typing (or copy-and-pasting) all this text into your interpreter (some of you might even have noticed the up-arrow and Emacs-style key bindings to zip around the command line). Well, the whole point of programming, we would argue, is to avoid doing stupid, boring, repetitious work like typing the first 20 digits of pi in a million times. What we really need is a means of remembering the value of pi:

Prelude> let pi = 3.14159265358979323846264338327950

Note If this command does not work, you are probably using hugs instead of GHCi, which expects a slightly different syntax.

Here you are literally telling Haskell to: "let pi be equal to 3.14159...". This introduces the new variable pi, which is now defined as being the number 3.14159265358979323846264338327950. This will be very handy because it means that we can call that value back up by just typing pi again:

Prelude> pi
3.141592653589793

Don't worry about all those missing digits; they're just skipped when displaying the value. All the digits will be used in any future calculations. Having variables takes some of the tedium out of things. What is the area of a circle having a radius of 5 cm? How about a radius of 25 cm?

Prelude> pi * 5^2
78.53981633974483


Prelude> pi * 25^2
1963.4954084936207

Note What we call "variables" in this book are often referred to as "symbols" in other introductions to functional programming. This is because other languages, namely the more popular imperative languages have a very different use for variables: keeping track of state. Variables in Haskell do no such thing; they store a value and an immutable one at that.

Types

Following the previous example, you might be tempted to try storing a value for that radius. Let's see what happens:

Prelude> let r = 25
Prelude> 2 * pi * r

:1:9:
    Couldn't match `Double' against `Integer'
      Expected type: Double
      Inferred type: Integer
    In the second argument of `(*)', namely `r'
    In the definition of `it': it = (2 * pi) * r

Whoops! You've just run into a programming concept known as types. Types are a feature of many programming languages which are designed to catch some of your programming errors early on so that you find out about them before it's too late. We'll discuss types in more detail later on in the Type basics chapter, but for now it's useful to think in terms of plugs and connectors. For example, many of the plugs on the back of your computer are designed to have different shapes and sizes for a purpose. This is partly so that you don't inadvertently plug the wrong bits of your computer in together and blow something up. Types serve a similar purpose, but in this particular example, well, types aren't so helpful. The main problem is that Haskell doesn't let you multiply Integers with real numbers. We'll explain why later, but for now, you can get around the issue by using a Double for r so that the pieces fit together: Prelude> let r = 25.0 Prelude> 2 * pi * r 157.07963267948966

Variables within variables

Variables can contain much more than just simple values such as 3.14. Indeed, they can contain any Haskell expression whatsoever. So, if we wanted to keep around, say, the area of a circle with radius of 5, we could write something like this:

Prelude> let area = pi * 5^2


What's interesting about this is that we've stored a complicated chunk of Haskell (an arithmetic expression containing a variable) into yet another variable. We can use variables to store any arbitrary Haskell code, so let's use this to get our acts together.

Prelude> let r = 25.0
Prelude> let area2 = pi * r ^ 2
Prelude> area2
1963.4954084936207

So far so good.

Prelude> let r = 2.0
Prelude> area2
1963.4954084936207

Wait a second, why didn't this work? That is, why is it that we get the same value for area as we did back when r was 25? The reason this is the case is that variables in Haskell do not change (variables do not vary). What actually happens when you defined r the second time is that you are talking about a different r.

This is something that happens in real life as well. How many people do you know that have the name John? What's interesting about people named John is that most of the time, you can talk about "John" to your friends, and depending on the context, your friends will know which John you are referring to. Programming has something similar to context, called scope. We won't explain scope (at least not now), but Haskell's lexical scope is the magic that lets us define two different r and always get the right one back. Scope, however, does not solve the current problem. What we want to do is define a generic area that always gives you the area of a circle. What we could do is just define it a second time:

Prelude> let area3 = pi * r ^ 2
Prelude> area3
12.566370614359172

But we are programmers, and programmers loathe repetition. Is there a better way?

Functions

What we are really trying to accomplish with our generic area is to define a function. Defining functions in Haskell is dead-simple. It is exactly like defining a variable, except with a little extra stuff on the left hand side. For instance, below is our definition of pi, followed by our definition of area:

Prelude> let pi = 3.14159265358979323846264338327950
Prelude> let area r = pi * r ^ 2

To calculate the area of our two circles, we simply pass it a different value:

Prelude> area 5
78.53981633974483


Prelude> area 25
1963.4954084936207

Functions allow us to make a great leap forward in the reusability of our code. But let's slow down for a moment, or rather, back up to dissect things. See the r in our definition area r = ...? This is what we call a parameter. A parameter is what we use to provide input to the function. When Haskell is interpreting the function, the value of its parameter must come from the outside. In the case of area, the value of r is 5 when you say area 5, but it is 25 if you say area 25.

Exercises
Say I type something in like this (don't type it in yet):

Prelude> let r = 0
Prelude> let area r = pi * r ^ 2
Prelude> area 5

1. What do you think should happen? Are we in for an unpleasant surprise?
2. What actually happens? Why? (Hint: remember what was said before about "scope")

Scope and parameters

Warning: this section contains spoilers to the previous exercise.

We hope you have completed the very short exercise (I would say thought experiment) above. Fortunately, the following fragment of code does not contain any unpleasant surprises:

Prelude> let r = 0
Prelude> let area r = pi * r ^ 2
Prelude> area 5
78.53981633974483

An unpleasant surprise here would have been getting the value 0. This is just a consequence of what we wrote above, namely the value of a parameter is strictly what you pass in when you call the function. And that is directly a consequence of our old friend scope. Informally, the r in let r = 0 is true when you are in the top level of the interpreter, but it is not the same r as the one inside our defined function area - the r inside area overrides the other r; you can think of it as Haskell picking the most specific version of r there is. If you have many friends all named John, you go with the one which just makes more sense and is specific to the context; similarly, what value of r we get depends on the scope.

Multiple parameters

Another thing you might want to know about functions is that they can accept more than one parameter. Say for instance, you want to calculate the area of a rectangle. This is quite simple to express:


Prelude> let areaRect l w = l * w
Prelude> areaRect 5 10
50

Or say you want to calculate the area of a triangle:

Prelude> let areaTriangle b h = (b * h) / 2
Prelude> areaTriangle 3 9
13.5

Passing parameters in is pretty straightforward: you just give them in the same order that they are defined. So, whereas areaTriangle 3 9 gives us the area of a triangle with base 3 and height 9, areaTriangle 9 3 gives us the area with the base 9 and height 3.

Exercises
Write a function to calculate the volume of a box. A box has width, height and depth. You have to multiply them all to get the volume.

Functions within functions

To further cut down the amount of repetition it is possible to call functions from within other functions. A simple example showing how this can be used is to create a function to compute the area of a square. We can think of a square as a special case of a rectangle (the area is still the width multiplied by the length); however, we also know that the width and length are the same, so why should we need to type it in twice?

Prelude> let areaRect l w = l * w
Prelude> let areaSquare s = areaRect s s
Prelude> areaSquare 5
25

Exercises
Write a function to calculate the volume of a cylinder. The volume of a cylinder is the area of the base, which is a circle (you already programmed this function in this chapter, so reuse it), multiplied by the height.

Summary

1. Variables store values. In fact, they store any arbitrary Haskell expression.
2. Variables do not change.
3. Functions help you write reusable code.
4. Functions can accept more than one parameter.

Notes

1. ^ For readers with prior programming experience: Variables don't change? I only get constants? Shock! Horror! No... trust us, as we hope to show you in the rest of this book, you can go a very long


way without changing a single variable! In fact, this non-changing of variables makes life easier because it makes programs so much more predictable.

Lists and tuples

Lists and tuples are two ways of crushing several values down into a single value.

Lists

The functional programmer's next best friend

In the last section we introduced the concept of variables and functions in Haskell. Functions are one of the two major building blocks of any Haskell program. The other is the versatile list. So, without further ado, let's switch over to the interpreter and build some lists:

Example: Building Lists in the Interpreter

Prelude> let numbers = [1,2,3,4]
Prelude> let truths = [True, False, False]
Prelude> let strings = ["here", "are", "some", "strings"]

The square brackets denote the beginning and the end of the list. List elements are separated by the comma "," operator. Further, list elements must be all of the same type. Therefore, [42, "life, universe and everything else"] is not a legal list because it contains two elements of different types, namely, integer and string respectively. However, [12, 80] or ["beer", "sandwiches"] are valid lists because they are both type-homogeneous. Here is what happens if you try to define a list with mixed-type elements:

Prelude> let mixed = [True, "bonjour"]

:1:19:
    Couldn't match `Bool' against `[Char]'
      Expected type: Bool
      Inferred type: [Char]
    In the list element: "bonjour"
    In the definition of `mixed': mixed = [True, "bonjour"]

If you're confused about this business of lists and types, don't worry about it. We haven't talked very much about types yet and we are confident that this will clear up as the book progresses.

Building lists

Square brackets and commas aren't the only way to build up a list. Another thing you can do with them is to build them up piece by piece, by consing things on to them, via the (:) operator.

Example: Consing something on to a list


Prelude> let numbers = [1,2,3,4]
Prelude> numbers
[1,2,3,4]
Prelude> 0:numbers
[0,1,2,3,4]

When you cons something on to a list (something:someList), what you get back is another list. So, unsurprisingly, you could keep on consing your way up.

Example: Consing lots of things to a list

Prelude> 1:0:numbers
[1,0,1,2,3,4]
Prelude> 2:1:0:numbers
[2,1,0,1,2,3,4]
Prelude> 5:4:3:2:1:0:numbers
[5,4,3,2,1,0,1,2,3,4]

In fact, this is just about how all lists are built, by consing them up from the empty list ([]). The commas and brackets notation is actually a pleasant form of syntactic sugar. In other words, a list like [1,2,3,4,5] is exactly equivalent to 1:2:3:4:5:[] You will, however, want to watch out for a potential pitfall in list construction. Whereas 1:2:[] is perfectly good Haskell, 1:2 is not. In fact, if you try it out in the interpreter, you get a nasty error message.

Example: Whoops!

Prelude> 1:2

:1:2:
    No instance for (Num [a])
      arising from the literal `2' at :1:2
    Probable fix: add an instance declaration for (Num [a])
    In the second argument of `(:)', namely `2'
    In the definition of `it': it = 1 : 2

Well, to be fair, the error message is nastier than usual because numbers are slightly funny beasts in Haskell. Let's try this again with something simpler, but still wrong, True:False

Example: Simpler but still wrong

Prelude> True:False

:1:5:
    Couldn't match `[Bool]' against `Bool'
      Expected type: [Bool]


      Inferred type: Bool
    In the second argument of `(:)', namely `False'
    In the definition of `it': it = True : False

The basic intuition for this is that the cons operator, (:), works with this pattern: something:someList; however, what we gave it is more something:somethingElse. Cons only knows how to stick things onto lists. We're starting to run into a bit of reasoning about types. Let's summarize so far:

The elements of the list have to have the same type.
You can only cons (:) something onto a list.

Well, sheesh, aren't types annoying? They are indeed, but as we will see in Type basics, they can also be a life saver. In either case, when you are programming in Haskell and something blows up, you'll probably want to get used to thinking "probably a type error".

Exercises
1. Would the following piece of Haskell work: 3:[True,False]? Why or why not?
2. Write a function cons8 that takes a list and conses 8 on to it. Test it out on the following lists by doing:
   1. cons8 []
   2. cons8 [1,2,3]
   3. cons8 [True,False]
   4. let foo = cons8 [1,2,3]
   5. cons8 foo
3. Write a function that takes two arguments, a list and a thing, and conses the thing onto the list. You should start out with let myCons list thing =

Lists within lists

Lists can contain anything, just as long as they are all of the same type. Well, then, chew on this: lists are things too, therefore, lists can contain... yes indeed, other lists! Try the following in the interpreter:

Example: Lists can contain lists

Prelude> let listOfLists = [[1,2],[3,4],[5,6]]
Prelude> listOfLists
[[1,2],[3,4],[5,6]]

Lists of lists can be pretty tricky sometimes, because a list of things does not have the same type as a thing all by itself. Let's sort through these implications with a few exercises:

Exercises


1. Which of these are valid Haskell and which are not? Rewrite in cons notation.
   1. [1,2,3,[]]
   2. [1,[2,3],4]
   3. [[1,2,3],[]]
2. Which of these are valid Haskell, and which are not? Rewrite in comma and bracket notation.
   1. []:[[1,2,3],[4,5,6]]
   2. []:[]
   3. []:[]:[]
   4. [1]:[]:[]
3. Can Haskell have lists of lists of lists? Why or why not?
4. Why is the following list invalid in Haskell? Don't worry too much if you don't get this one yet.
   1. [[1,2],3,[4,5]]

Lists of lists are extremely useful, because they allow you to express some very complicated, structured data (two-dimensional matrices, for example). They are also one of the places where the Haskell type system truly shines. Human programmers, or at least this wikibook author, get confused all the time when working with lists of lists, and having restrictions of types often helps in wading through the potential mess.

Tuples

A different notion of many

Tuples are another way of storing multiple values in a single value, but they are subtly different in a number of ways. They are useful when you know, in advance, how many values you want to store, and they lift the restriction that all the values have to be of the same type. For example, we might want a type for storing pairs of co-ordinates. We know how many elements there are going to be (two: an x and y co-ordinate), so tuples are applicable. Or, if we were writing a phonebook application, we might want to crunch three values into one: the name, phone number and address of someone. Again, we know how many elements there are going to be. Also, those three values aren't likely to have the same type, but that doesn't matter here, because we're using tuples. Let's look at some sample tuples.

Example: Some tuples

(True, 1)
("Hello world", False)
(4, 5, "Six", True, 'b')

The first example is a tuple containing two elements. The first one is True and the second is 1. The next example again has two elements, the first is "Hello world" and the second, False. The third example is a bit more complex. It's a tuple consisting of five elements, the first is the number 4, the second the number 5, the third "Six", the fourth True, and the last one the character 'b'. So the syntax for tuples is: separate the different elements with a comma, and surround the whole thing in parentheses.


A quick note on nomenclature: in general you write n-tuple for a tuple of size n. 2-tuples (that is, tuples with 2 elements) are normally called 'pairs' and 3-tuples triples. Tuples of greater sizes aren't actually all that common, although, if you were to logically extend the naming system, you'd have 'quadruples', 'quintuples' and so on, hence the general term 'tuple'. So tuples are a bit like lists, in that they can store multiple values. However, there is a very key difference: pairs don't have the same type as triples, and triples don't have the same type as quadruples, and in general, two tuples of different sizes have different types. You might be getting a little disconcerted because we keep mentioning this word 'type', but for now, it's just important to grasp how lists and tuples differ in their approach to sizes. You can have, say, a list of numbers, and add a new number on the front, and it remains a list of numbers. If you have a pair and wish to add a new element, it becomes a triple, and this is a fundamentally different object

[1].

Exercises

1. Write down the 3-tuple whose first element is 4, second element is "hello" and third element is True.
2. Which of the following are valid tuples?
   1. (4, 4)
   2. (4, "hello")
   3. (True, "Blah", "foo")
3. Lists can be built by consing new elements on to them: you cons a number onto a list of numbers, and get back a list of numbers. It turns out that there is no such way to build up tuples.
   1. Why do you think that is?
   2. Say, for the sake of argument, that there was such a function. What would you get if you "consed" something on a tuple?

What are tuples for? Tuples are handy when you want to return more than one value from a function. In most languages trying to return two or more things at once means wrapping them up in a special data structure, maybe one that only gets used in that function. In Haskell, just return them as a tuple. You can also use tuples as a primitive kind of data structure. But that needs an understanding of types, which we haven't covered yet. Getting data out of tuples In this section, we concentrate solely on pairs. This is mostly for simplicity's sake, but pairs are by far and away the most commonly used size of tuple. Okay, so we've seen how we can put values in to tuples, simply by using the (x, y, z) syntax. How can we get them out again? For example, a typical use of tuples is to store the (x, y) co-ordinate pair of a point: imagine you have a chess board, and want to specify a specific square. You could do this by labeling all the rows from 1 to 8, and similarly with the columns, then letting, say, (2, 5) represent the square in row 2 and column 5. Say we want to define a function for finding all the pieces in a given row. One way of doing this would be to find the co-ordinates of all the pieces, then look at the row part and see if it's equal to whatever row we're being asked to examine. This function would need, once it had the co-ordinate pair (x, y) of a piece, to extract the x (the row part). To do this there are two functions, fst and snd, which project the first


and second elements out of a pair, respectively (in math-speak a function that gets some data out of a structure is called a "Projection"). Let's see some examples:

Example: Using fst and snd

Prelude> fst (2, 5)
2
Prelude> fst (True, "boo")
True
Prelude> snd (5, "Hello")
"Hello"

It should be fairly obvious what these functions do. Note that you can only use these functions on pairs. Why? It all harks back to the fact that tuples of different sizes are different beasts entirely. fst and snd are specialized to pairs, and so you can't use them on anything else

[2].

Exercises
1. Use a combination of fst and snd to extract the 4 from the tuple (("Hello", 4), True).
2. Normal chess notation is somewhat different to ours: it numbers the rows from 1-8 but then labels the columns A-H. Could we label a specific point with a number and a character, like (4, 'a')? What important difference with lists does this illustrate?

Tuples within tuples (and other combinations)

We can apply the same reasoning to tuples about storing lists within lists. Tuples are things too, so you can store tuples within tuples (within tuples up to any arbitrary level of complexity). Likewise, you could also have lists of tuples, tuples of lists, all sorts of combinations along the same lines.

Example: Nesting tuples and lists

((2,3), True)
((2,3), [2,3])
[(1,2), (3,4), (5,6)]

Some discussion about this - what you get out of this, maybe, what's the big idea behind grouping things together There is one bit of trickiness to watch out for, however. The type of a tuple is defined not only by its size, but by the types of objects it contains. For example, the tuples like ("Hello",32) and (47,"World") are fundamentally different. One is of type (String,Int) tuples, whereas the other is (Int,String). This has implications for building up lists of tuples. We could very well have lists like [("a",1),


("b",9),("c",9)], but having a list like [("a",1),(2,"b"),(9,"c")] is right out. Can you spot the difference? Exercises 1. Which of these are valid Haskell, and why? fst [1,2] 1:(2,3) (2,4):(2,3) (2,4):[] [(2,4),(5,5),('a','b')] ([2,4],[2,2]) 2. FIXME: to be added

Summary

We have introduced two new notions in this chapter, lists and tuples. To sum up:

1. Lists are defined by square brackets and commas: [1,2,3].
   They can contain anything as long as all the elements of the list are of the same type.
   They can also be built by the cons operator, (:), but you can only cons things onto lists.
2. Tuples are defined by parentheses and commas: ("Bob",32)
   They can contain anything, even things of different types.
   They have a fixed length, or at least their length is encoded in their type. That is, two tuples with different lengths will have different types.
3. Lists and tuples can be combined in any number of ways: lists within lists, tuples with lists, etc.

We hope that at this point, you're somewhat comfortable enough manipulating them as part of the fundamental Haskell building blocks (variables, functions and lists), because we're now going to move to some potentially heady topics, types and recursion. Types, we have alluded to thrice in this chapter without really saying what they are, so these shall be the next major topic that we cover. But before we get to that, we're going to make a short detour to help you make better use of the GHC interpreter.

Notes

1. ↑ At least as far as types are concerned, but we're trying to avoid that word :)
2. ↑ More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.

Next steps

Haskell files


Up to now, we've made heavy use of the GHC interpreter. The interpreter is indeed a useful tool for trying things out quickly and for debugging your code. But we're getting to the point where typing everything directly into the interpreter isn't very practical. So now, we'll be writing our first Haskell source files. Open up a file varfun.hs in your favourite text editor (the hs stands for Haskell) and paste the following definition in. Remember, Haskell uses indentations and spaces to decide where functions (and other things) begin and end, so make sure there are no leading spaces and that indentations are correct, otherwise GHC will report parse errors.

area r = pi * r^2

(In case you're wondering, pi is actually predefined in Haskell, so there is no need to define it here.) Now change into the directory where you saved your file, open up ghci, and use :load (or :l for short):

Prelude> :load varfun.hs
Compiling Main             ( varfun.hs, interpreted )
Ok, modules loaded: Main.
*Main>

Now you can execute the bindings found in that file:

*Main> area 5
78.53981633974483

If you make changes to the file, just use :reload (:r for short) to reload the file. Note GHC can also be used as a compiler. That is, you could use GHC to convert your Haskell files into a program that can then be run without running the interpreter. See the documentation for details.
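Back in GHCi, a quick edit-and-reload session might look like this (a sketch; the compilation messages and the exact decimal digits can vary slightly between GHC versions):

*Main> :reload
Compiling Main             ( varfun.hs, interpreted )
Ok, modules loaded: Main.
*Main> area 10
314.1592653589793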

You'll note that there are a couple of differences between how we do things when we type them directly into ghci and how we do them when we load them from files. The differences may seem awfully arbitrary for now, but they're actually quite sensible consequences of scope, which, rest assured, we will explain later.

No let

For starters, you no longer say something like

let x = 3
let y = 2
let area r = pi * r ^ 2

Instead, you say things like

x = 3
y = 2
area r = pi * r ^ 2

The keyword let is actually something you use a lot in Haskell, but not exactly in this context. We'll see further on in this chapter when we discuss the use of let bindings.

You can't define the same thing twice

Previously, the interpreter cheerfully allowed us to write something like this:

Prelude> let r = 5
Prelude> r
5
Prelude> let r = 2
Prelude> r
2

On the other hand, writing something like this in a source file does not work:

-- this does not work
r = 5
r = 2

As we mentioned above, variables do not change, and this is even more the case when you're working in a source file. This has one very nice implication. It means that: Order does not matter The order in which you declare things does not matter. For example, the following fragments of code do exactly the same thing:

y = x * 2
x = 3

x = 3
y = x * 2

This is a distinctive feature of Haskell and other functional programming languages. The fact that variables never change means that we can opt to write things in any order that we want (but this is also why you can't declare something more than once; it would be ambiguous otherwise).

Exercises
Save the functions you had written in the previous module's exercises into a Haskell file. Load the file in GHCi and test the functions on a few parameters.

More about functions

Working with actual source code files instead of typing things into the interpreter makes it convenient to define much more substantial functions than those we've seen up to now. Let's flex some Haskell muscle here and examine the kinds of things we can do with our functions.

Conditional expressions

if / then / else

Haskell supports standard conditional expressions. For instance, we could define a function that returns -1 if its argument is less than 0; 0 if its argument is 0; and 1 if its argument is greater than 0 (this is called the signum function). Actually, such a function already exists, but let's define one of our own, which we'll call mySignum.

mySignum x =
  if x < 0
    then -1
    else if x > 0
      then 1
      else 0

You can experiment with this as:

Example:

Test> mySignum 5
1
Test> mySignum 0
0
Test> mySignum (5-10)
-1
Test> mySignum (-1)
-1

Note that the parentheses around "-1" in the last example are required; if they are missing, the system will think you are trying to subtract the value "1" from the value "mySignum", which is ill-typed. The if/then/else construct in Haskell is very similar to that of most other programming languages; however, you must have both a then and an else clause. It evaluates the condition (in this case x < 0) and, if this evaluates to True, it evaluates the then branch; if the condition evaluates to False, it evaluates the else branch. You can test this program by editing the file and loading it back into your interpreter. If Test is already the current module, instead of typing :l Test.hs again, you can simply type :reload or just :r to reload the current file. This is usually much faster.

case

Haskell, like many other languages, also supports case constructions. These are used when there are multiple values that you want to check against (case expressions are actually quite a bit more powerful than this -- see the Pattern matching chapter for all of the details).

Suppose we wanted to define a function that had a value of 1 if its argument were 0; a value of 5 if its argument were 1; a value of 2 if its argument were 2; and a value of -1 in all other instances. Writing this function using if statements would be long and very unreadable; so we write it using a case statement as follows (we call this function f):

f x = case x of
  0 -> 1
  1 -> 5
  2 -> 2
  _ -> -1

In this program, we're defining f to take an argument x and then inspect the value of x. If it matches 0, the value of f is 1. If it matches 1, the value of f is 5. If it matches 2, then the value of f is 2; and if it hasn't matched anything by that point, the value of f is -1 (the underscore can be thought of as a "wildcard" -- it will match anything). The indentation here is important. Haskell uses a system called "layout" to structure its code (the programming language Python uses a similar system). The layout system allows you to write code without the explicit semicolons and braces that other languages like C and Java require. Because whitespace matters in Haskell, you need to be careful about whether you are using tabs or spaces. If you can configure your editor to never use tabs, that's probably better. If not, make sure your tabs are always 8 spaces long, or you're likely to run into problems.

Indentation

The general rule for layout is that an open-brace is inserted after the keywords where, let, do and of, and the column position at which the next command appears is remembered. From then on, a semicolon is inserted before every new line that is indented the same amount. If a following line is indented less, a close-brace is inserted. This may sound complicated, but if you follow the general rule of indenting after each of those keywords, you'll never have to remember it (see the Indentation chapter for a more complete discussion of layout). Some people prefer not to use layout and write the braces and semicolons explicitly. This is perfectly acceptable. In this style, the above function might look like:

f x = case x of { 0 -> 1 ; 1 -> 5 ; 2 -> 2 ; _ -> -1 }

Of course, if you write the braces and semicolons explicitly, you're free to structure the code as you wish. The following is equally valid:

f x = case x of { 0 -> 1 ;
  1 -> 5 ; 2 -> 2
    ; _ -> -1 }

However, structuring your code like this only serves to make it unreadable (in this case).
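If you do stick with layout, here is one more small sketch (describe is a made-up function, not from the text) showing the same definition written once with layout and once with explicit punctuation; both are accepted by GHC:

-- Layout version: both alternatives start in the same column,
-- so GHC inserts the braces and semicolons for us.
describe n = case n of
               0 -> "zero"
               _ -> "not zero"

-- The same function with the punctuation written out explicitly.
describe' n = case n of { 0 -> "zero" ; _ -> "not zero" }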

Defining one function for different parameters

Functions can also be defined piece-wise, meaning that you can write one version of your function for certain parameters and then another version for other parameters. For instance, the above function f could also be written as:

f 0 = 1
f 1 = 5
f 2 = 2
f _ = -1

Here, the order is important. If we had put the last line first, it would have matched every argument, and f would return -1, regardless of its argument (most compilers will warn you about this, though, saying something about overlapping patterns). If we had not included this last line, f would produce an error if anything other than 0, 1 or 2 were applied to it (most compilers will warn you about this, too, saying something about incomplete patterns). This style of piece-wise definition is very popular and will be used quite frequently throughout this tutorial. These two definitions of f are actually equivalent -- this piece-wise version is translated into the case expression. Function composition More complicated functions can be built from simpler functions using function composition. Function composition is simply taking the result of the application of one function and using that as an argument for another. We've already seen this back in the Getting set up chapter, when we wrote 5*4+3. In this, we were evaluating 5 * 4 and then applying + 3 to the result. We can do the same thing with our square and f functions: square x = x^2

Example:

Test> square (f 1)
25
Test> square (f 2)
4
Test> f (square 1)
5
Test> f (square 2)
-1

The result of each of these function applications is fairly straightforward. The parentheses around the inner function are necessary; otherwise, in the first line, the interpreter would think that you were trying to get the value of square f, which has no meaning. Function application like this is fairly standard in most programming languages. There is another, more mathematically oriented, way to express function composition, using the (.) (just a single period) function. This (.) function is supposed to look like the ∘ operator in mathematics.

Note
In mathematics we write f ∘ g to mean "f following g"; in Haskell we write f . g also to mean "f following g". The meaning of f . g is simply that (f . g) x = f (g x). That is, applying the value x to f . g is the same as applying it to g, taking the result, and then applying that to f.

The (.) function (called the function composition function) takes two functions and makes them into one. For instance, if we write (square . f), this means that it creates a new function that takes an argument, applies f to that argument and then applies square to the result. Conversely, (f . square) creates a new function that takes an argument, applies square to that argument and then applies f to the result. We can see this by testing it as before:

Example:

Test> (square . f) 1
25
Test> (square . f) 2
4
Test> (f . square) 1
5
Test> (f . square) 2
-1

Here, we must enclose the function composition in parentheses; otherwise, the Haskell compiler will think we're trying to compose square with the value f 1 in the first line, which makes no sense since f 1 isn't even a function. It would probably be wise to take a little time-out to look at some of the functions that are defined in the Prelude. Undoubtedly, at some point, you will accidentally rewrite some already-existing function (I've done it more times than I can count), but if we can keep this to a minimum, that would save a lot of time.

Let Bindings

Often we wish to provide local declarations for use in our functions. For instance, if you remember back to your grade school mathematics courses, the following equation is used to find the roots (zeros) of a polynomial of the form ax^2 + bx + c = 0:

x = (-b ± sqrt(b^2 - 4ac)) / (2a)

We could write the following function to compute the two values of x:

roots a b c = ((-b + sqrt(b*b - 4*a*c)) / (2*a), (-b - sqrt(b*b - 4*a*c)) / (2*a))

Notice that our definition here has a bit of redundancy. It is not quite as nice as the mathematical definition because we have needlessly repeated the code for sqrt(b*b - 4*a*c). To remedy this problem, Haskell allows for local bindings. That is, we can create values inside of a function that only that function can see.

For instance, we could create a local binding for sqrt(b*b - 4*a*c) and call it, say, disc, and then use that in both places where sqrt(b*b - 4*a*c) occurred. We can do this using a let/in declaration:

roots a b c =
    let disc = sqrt (b*b - 4*a*c)
    in  ((-b + disc) / (2*a),
         (-b - disc) / (2*a))

In fact, you can provide multiple declarations inside a let. Just make sure they're indented the same amount, or you will have layout problems:

roots a b c =
    let disc    = sqrt (b*b - 4*a*c)
        twice_a = 2*a
    in  ((-b + disc) / twice_a,
         (-b - disc) / twice_a)
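As one more sketch of a let with several local bindings (heronArea is a made-up example, not from the text), here is Heron's formula for the area of a triangle with sides a, b and c; the bindings s and inner are visible only inside heronArea:

heronArea a b c =
    let s     = (a + b + c) / 2
        inner = s * (s - a) * (s - b) * (s - c)
    in  sqrt inner

For a 3-4-5 right triangle, heronArea 3 4 5 evaluates to 6.0.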

Type basics Types in programming are a way of grouping similar values. In Haskell, the type system is a powerful way of ensuring there are fewer mistakes in your code.

Introduction Programming deals with different sorts of entities. For example, consider adding two numbers together: 2+3 What are 2 and 3? They are numbers, clearly. But how about the plus sign in the middle? That's certainly not a number. So what is it? Similarly, consider a program that asks you for your name, then says "Hello". Neither your name nor the word Hello is a number. What are they then? We might refer to all words and sentences and so forth as Text. In fact, it's more normal in programming to use a slightly more esoteric word, that is, String.

In Haskell, the rule is that all type names have to begin with a capital letter. We shall adhere to this convention henceforth.

If you've ever set up a database before, you'll likely have come across types. For example, say we had a table in a database to store details about a person's contacts; a kind of personal telephone book. The contents might look like this:

First Name   Last Name   Telephone number   Address
Sherlock     Holmes      743756             221B Baker Street London
Bob          Jones       655523             99 Long Road Street Villestown

The fields contain values. Sherlock is a value as is 99 Long Road Street Villestown as well as 655523. As we've said, types are a way of grouping different sorts of data. What do we have in the above table? Two of the columns, First name and Last name contain text, so we say that the values are of type String. The type of the third column is a dead giveaway by its name, Telephone number. Values in that column have the type of Number! At first glance one may be tempted to class address as a string. However, the semantics behind an innocent address are quite complex. There's a whole lot of human conventions that dictate. For example, if the first line contains a number, then that's the number of the house, if not, then it's probably the name of the house, except if the line begins with PO Box then it's just a postal box address and doesn't indicate where the person lives at all... Clearly, there's more going on here than just Text. We could say that addresses are Text; there'd be nothing wrong with that. However, claiming they're of some different type, say, Address, is more powerful. If we know some piece of data has the type of Text, that's not very helpful. However, if we know it has the type of Address, we instantly know much more about the piece of data. We might also want to apply this line of reasoning to our telephone number column. Indeed, it would be a good idea to come up with a TelephoneNumber type. Then if we were to come across some arbitrary sequence of digits, knowing that sequence of digits was of type TelephoneNumber, we would have access to a lot more information than if it were just a Number. Why types are useful So far, what we've done just seems like categorizing things -- hardly a feature which would cause every modern programming language designer to incorporate into their language! In the next section we explore how Haskell uses types to the programmer's benefit.

Using the interactive :type command Characters and strings The best way to explore how types work in Haskell is to fire up GHCi. Let's do it! Once we're up and running, let us get to know the :type command.

Example: Using the :t command in GHCi on a literal character

Prelude> :type 'H'
'H' :: Char

(The :type can be also shortened to :t, which we shall use from now on.) And there we have it. You give GHCi an expression and it returns its type. In this case we gave it the literal value 'H' - the letter H enclosed in single quotation marks (a.k.a. apostrophe, ANSI 39) and GHCi printed it followed by the "::" symbol which reads "is of type" followed by Char. The whole thing reads: 'H' is of type Char. If we try to give it a string of characters, we need to enclose them in quotation marks:

Example: Using the :t command in GHCi on a literal string

Prelude> :t "Hello World"
"Hello World" :: [Char]

In this case we gave it some text enclosed in double quotation marks and GHCi printed "Hello World" :: [Char]. [Char] means a list of characters. Notice the difference between Char and [Char]: the square brackets are used to construct literal lists, and they are also used to describe the list type.

Exercises
1. Try using the :type command on the literal value "H" (notice the double quotes). What happens? Why?
2. Try using the :type command on the literal value 'Hello World' (notice the single quotes). What happens? Why?

This is essentially what strings are in Haskell - lists of characters. A string in Haskell can be initialized in several ways: It may be entered as a sequence of characters enclosed in double quotation marks (ANSI 34); it may be constructed similar to any other list as individual elements of type Char joined together with the ":" function and terminated by an empty list or, built with individual Char values enclosed in brackets and separated by commas. So, for the final time, what precisely is this concept of text that we're throwing around? One way of interpreting it is to say it's basically a sequence of characters. Think about it: the word "Hey" is just the character 'H' followed by the character 'e' followed by the character 'y'. Haskell uses a list to hold this sequence of characters. Square brackets indicate a list of things, for example here [Char] means 'a list of Chars'. Haskell has a concept of type synonyms. Just as in the English language, two words that mean the same thing, for example 'fast' and 'quick', are called synonyms, in Haskell two types which are exactly the same are called 'type synonyms'. Everywhere you can use [Char], you can use String. So to say: "Hello World" :: String

Is also perfectly valid. From here on we'll mostly refer to text as String, rather than [Char]. Boolean values One of the other useful types in most languages is called a Boolean or Bool for short. This has two values: true or false. This turns out to be very useful. For example consider a program that would ask the user for a name then look that name up in a spreadsheet. It might be useful to have a function, nameExists, which indicates whether or not the name of the user exists in the spreadsheet. If it does exist, you could say that it is true that the name exists, and if not, you could say that it is false that the name exists. So we've come across Bools. The two values of bools are, as we've mentioned, true and false. In Haskell boolean values are capitalized (for reasons that will later become clear):

Example: Exploring the types of True and False in GHCi

Prelude> :t True
True :: Bool
Prelude> :t False
False :: Bool

This shouldn't need too much explaining at this point. The values True and False are categorized as Booleans, that is to say, they have type Bool.

Numeric types

If you've been playing around with typing :t on all the familiar values you've come across, perhaps you've run into the following complication:

Prelude> :t 5
5 :: Num a => a

We'll defer the explanation of this until later. The short version of the story is that there are many different types of numbers (fractions, whole numbers, etc) and 5 can be any one of them. This weird-looking type relates to a Haskell feature called type classes, which we will be playing with later in this book.
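You can try the same thing on a fractional literal; this short sketch shows what GHCi reports (the constraint name will make more sense once we meet type classes):

Prelude> :t 5.0
5.0 :: Fractional a => a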

Functional types

So far, we've covered what we call values, and explained how types help to categorize them, but also describe them. The next thing we'll look at is what makes the type system truly powerful: we can assign types not only to values, but to functions as well[3]. Let's look at some examples.

Example: not

Example: Negating booleans

not True = False
not False = True

not is a standard Prelude function that simply negates Bools, in the sense that truth turns into falsity and vice versa. For example, given the above example we gave using Bools, nameExists, we could define a similar function that would test whether a name doesn't exist in the spreadsheet. It would likely look something like this:

Example: nameDoesntExist: using not

nameDoesntExist name = not (nameExists name)

To assign a type to not we look at two things: the type of values it takes as its input, and the type of values it returns. In our example, things are easy. not takes a Bool (the Bool to be negated), and returns a Bool (the negated Bool). Therefore, we write that:

Example: Type signature for not

not :: Bool -> Bool

You can read this as 'not is a function from things of type Bool to things of type Bool'. Example: unlines and unwords A common programming task is to take a list of Strings, then join them all up into a single string, but insert a newline character between each one, so they all end up on different lines. For example, say you had the list ["Bacon", "Sausages", "Egg"], and wanted to convert it to something resembling a shopping list, the natural thing to do would be to join the list together into a single string, placing each item from the list onto a new line. This is precisely what unlines does. unwords is similar, but it uses a space instead of a newline as a separator. (mnemonic: un = unite)

Example: unlines and unwords

Prelude> unlines ["Bacon", "Sausages", "Egg"]
"Bacon\nSausages\nEgg"
Prelude> unwords ["Bacon", "Sausages", "Egg"]
"Bacon Sausages Egg"

Notice the weird output from unlines. This isn't particularly related to types, but it's worth noting anyway, so we're going to digress a little and explore why this is. Basically, any output from GHCi is first run through the show function, which converts it into a String. This makes sense, because GHCi shows you the result of your commands as text, so it has to be a String. However, what does show do if you give it something which is already a String? Although the obvious answer would be 'do nothing', the behaviour is actually slightly different: any 'special characters', like tabs, newlines and so on in the String are converted to their 'escaped forms', which means that rather than a newline actually making the stuff following it appear on the next line, it is shown as "\n". To avoid this, we can use the putStrLn function, which GHCi sees and doesn't run your output through show.

Example: Using putStrLn in GHCi

Prelude> putStrLn (unlines ["Bacon", "Sausages", "Egg"])
Bacon
Sausages
Egg
Prelude> putStrLn (unwords ["Bacon", "Sausages", "Egg"])
Bacon Sausages Egg

The second result may look identical, but notice the lack of quotes. putStrLn outputs exactly what you give it. Also, note that you can only pass it a String. Calls like putStrLn 5 will fail. You'd need to convert the number to a String first, that is, use show: putStrLn (show 5) (or use the equivalent function print: print 5). Getting back to the types. What would the types of unlines and unwords be? Well, again, let's look at both what they take as an argument, and what they return. As we've just seen, we've been feeding these functions a list, and each of the items in the list has been a String. Therefore, the type of the argument is [String]. They join all these Strings together into one long String, so the return type has to be String. Therefore, both of the functions have type [String] -> String. Note that we didn't mention the fact that the two functions use different separators. This is totally inconsequential when it comes to types — all that matters is that they return a String. The type of a String with some newlines is precisely the same as the type of a String with some spaces.

Example: chr and ord

Text presents a problem to computers. Once everything is reduced down to its lowest level, all a computer knows how to deal with is 1's and 0's: computers speak in binary. As talking in binary isn't very convenient, humans have come up with ways of making computers store text. Every character is first converted to a number, then that number is converted to binary and stored. Hence, a piece of text, which is just a sequence of characters, can be encoded into binary. Normally, we're only interested in how to encode characters into their numerical representations, because the number to binary bit is very easy. The easiest way of converting characters to numbers is simply to write all the possible characters down, then number them. For example, we might decide that 'a' corresponds to 1, then 'b' to 2, and so on. This is exactly what a thing called the ASCII standard is: 128 of the most commonly-used characters, numbered. Of course, it would be a bore to sit down and look up a character in a big lookup table every time we wanted to encode it, so we've got two functions that can do it for us, chr (pronounced 'char') and ord[4]:

Example: Type signatures for chr and ord

chr :: Int -> Char
ord :: Char -> Int

Remember earlier when we stated Haskell has many numeric types? The simplest is Int, which represents whole numbers, or integers, to give them their proper name[5]. So what do the above type signatures say? Recall how the process worked for not above. We look at the type of the function's argument, then at the type of the function's result. In the case of chr (find the character corresponding to a specific numeric encoding), the type signature tells us that it takes arguments of type Int and has a result of type Char. The converse is the case with ord (find the specific numeric encoding for a given character): it takes things of type Char and returns things of type Int.

To make things more concrete, here are a few examples of function calls to chr and ord, so you can see how the types work out. Notice that the two functions aren't in the standard prelude, but instead in the Data.Char module, so you have to load that module with the :m (or :module) command.

Example: Function calls to chr and ord

Prelude> :m Data.Char
Prelude Data.Char> chr 97
'a'
Prelude Data.Char> chr 98
'b'
Prelude Data.Char> ord 'c'
99
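As a tiny illustration of combining the two (nextChar is a made-up helper, not something from the Prelude or from this text), here is a function that shifts a character one place forward in its numeric encoding:

import Data.Char (chr, ord)

-- nextChar 'a' gives 'b', nextChar 'A' gives 'B', and so on
nextChar :: Char -> Char
nextChar c = chr (ord c + 1)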

Functions in more than one argument So far, all we've seen is functions that take a single argument. This isn't very interesting! For example, the following is a perfectly valid Haskell function, but what would its type be?

Example: A function in more than one argument f x y = x + 5 + 2 * y

As we've said a few times, there's more than one type for numbers, but we're going to cheat here and pretend that x and y have to be Ints. The general technique for forming the type of a function in more than one argument, then, is to just write down all the types of the arguments in a row, in order (so in this case x first then y), then write -> in between all of them. Finally, add the type of the result to the end of the row and stick a final -> in just before it. So in this case, we have: FIXME: use images here.

There are very deep reasons for this, which we'll cover in the chapter on Currying.

1. Write down the types of the arguments. We've already said that x and y have to be Ints, so it becomes:

Int              Int
^^ x is an Int   ^^ y is an Int as well

2. Fill in the gaps with ->: Int -> Int

3. Add in the result type and a final ->. In our case, we're just doing some basic arithmetic so the result remains an Int.

Int -> Int -> Int
              ^^ We're returning an Int
           ^^ There's the extra -> that got added in
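Putting the pieces together, here is a sketch of the finished function with its signature (the name f, and the decision to pin the numbers down to Int, are just the assumptions made above):

f :: Int -> Int -> Int
f x y = x + 5 + 2 * y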

Real-World Example: openWindow

As you'll learn in the Practical Haskell section of the course, one popular group of Haskell libraries are the GUI ones. These provide functions for dealing with all the parts of Windows or Linux you're familiar with: opening and closing application windows, moving the mouse around, etc. One of the functions from one of these libraries is called openWindow, and you can use it to open a new window in your application. For example, say you're writing a word processor like Microsoft Word, and the user has clicked on the 'Options' button. You need to open a new window which contains all the options that they can change. Let's look at the type signature for this function[6]:

Note
A library is a collection of common code used by many programs.

Example: openWindow openWindow :: WindowTitle -> WindowSize -> Window

Don't panic! Here are a few more types you haven't come across yet. But don't worry, they're quite simple. All three of the types there, WindowTitle, WindowSize and Window are defined by the GUI library that provides openWindow. As we saw when constructing the types above, because there are two arrows, the first two types are the types of the parameters, and the last is the type of the result. WindowTitle holds the title of the window (what appears in the blue bar - you didn't change the color, did you? - at the top), WindowSize how big the window should be. The function then returns a value of type Window which you can use to get information on and manipulate the window.

Exercises
Finding types for functions is a basic Haskell skill that you should become very familiar with. What are the types of the following functions?
1. The negate function, which takes an Int and returns that Int with its sign swapped. For example, negate 4 = -4, and negate (-2) = 2.
2. The && function, pronounced 'and', that takes two Bools and returns a third Bool which is True if both the arguments were, and False otherwise.
3. The || function, pronounced 'or', that takes two Bools and returns a third Bool which is True if either of the arguments were, and False otherwise.
For any functions hereafter involving numbers, you can just assume the numbers are Ints.
1. f x y = not x && y
2. g x = (2*x - 1)^2
3. h x y z = chr (x - 2)

Polymorphic types So far all we've looked at are functions and values with a single type. However, if you start playing around with :t in GHCi you'll quickly run into things that don't have types beginning with the familiar capital letter. For example, there's a function that finds the length of a list, called (rather predictably) length. Remember that [Foo] is a list of things of type Foo. However, we'd like length to work on lists of any type. I.e. we'd rather not have a lengthInts :: [Int] -> Int, as well as a lengthBools :: [Bool] -> Int, as well as a lengthStrings :: [String] -> Int, as well as a... That's too complicated. We want one single function that will find the length of any type of list. The way Haskell does this is using type variables. For example, the actual type of length is as follows:

Example: Our first polymorphic type length :: [a] -> Int

Type variables begin with a lowercase letter. Indeed, this is why types have to begin with an uppercase letter — so they can be distinguished from type variables. When Haskell sees a type variable, it allows any type to take its place. This is exactly what we want. In type theory (a branch of mathematics), this is called polymorphism: functions or values with only a single type (like all the ones we've looked at so far except length) are called monomorphic, and things that use type variables to admit more than one type are therefore polymorphic.

We'll look at the theory behind polymorphism in much more detail later in the course.
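You can see this flexibility directly in GHCi; the session below is a sketch (a GHCi of roughly this document's vintage prints exactly this type for length, though later versions generalise it further):

Prelude> :t length
length :: [a] -> Int
Prelude> length [1,2,3]
3
Prelude> length "Hello World"
11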

Example: fst and snd

As we saw, you can use the fst and snd functions to extract parts of pairs. By this time you should be in the habit of thinking "What type is that function?" about every function you come across. Let's examine fst and snd. First, a few sample calls to the functions:

Example: Example calls to fst and snd

Prelude> fst (1, 2)
1
Prelude> fst ("Hello", False)
"Hello"
Prelude> snd (("Hello", False), 4)
4

To begin with, let's point out the obvious: these two functions take a pair as their parameter and return one part of this pair. The important thing about pairs, and indeed tuples in general, is that they don't have to be homogeneous with respect to types; their different parts can be different types. Indeed, that is the case in the second and third examples above. If we were to say: fst :: (a, a) -> a

That would force the first and second part of input pair to be the same type. That illustrates an important aspect to type variables: although they can be replaced with any type, they have to be replaced with the same type everywhere. So what's the correct type? Simply:

Example: The types of fst and snd fst :: (a, b) -> a snd :: (a, b) -> b

Note that if you were just given the type signatures, you might guess that they return the first and second parts of a pair, respectively. In fact this is not necessarily true, they just have to return something with the same type of the first and second parts of the pair.
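You can check the actual signatures from GHCi yourself:

Prelude> :t fst
fst :: (a, b) -> a
Prelude> :t snd
snd :: (a, b) -> b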

Type signatures in code Now we've explored the basic theory behind types and types in Haskell, let's look at how they appear in code. Most Haskell programmers will annotate every function they write with its associated type. That is, you might be writing a module that looks something like this:

Example: Module without type signatures

module StringManip where

import Data.Char

uppercase = map toUpper
lowercase = map toLower
capitalise x =
  let capWord []     = []
      capWord (x:xs) = toUpper x : xs
  in unwords (map capWord (words x))

This is a small library that provides some frequently used string manipulation functions. uppercase converts a string to uppercase, lowercase converts it to lowercase, and capitalise capitalises the first letter of every word. Providing a type for these functions makes it more obvious what they do. For example, most Haskellers would write the above module something like the following:

Example: Module with type signatures

module StringManip where

import Data.Char

uppercase, lowercase :: String -> String
uppercase = map toUpper
lowercase = map toLower

capitalise :: String -> String
capitalise x =
  let capWord []     = []
      capWord (x:xs) = toUpper x : xs
  in unwords (map capWord (words x))

Note that you can group type signatures together into a single type signature (like ours for uppercase and lowercase above) if the two functions share the same type.
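If you save the module above as StringManip.hs (a file name we are assuming here), you can load it into GHCi and try the functions out; the loading messages are omitted from this sketch:

Prelude> :load StringManip.hs
*StringManip> capitalise "sherlock holmes"
"Sherlock Holmes"
*StringManip> uppercase "quiet"
"QUIET"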

Type inference

So far, we've explored types by using the :t command in GHCi. However, before you came across this chapter, you were still managing to write perfectly good Haskell code, and it was accepted by the compiler. In other words, it's not necessary to add type signatures. However, if you don't add type signatures, that doesn't mean Haskell simply forgets about typing altogether! Indeed, when you didn't tell Haskell the types of your functions and variables, it worked them out. This is a process called type inference, whereby the compiler starts with the types of things it knows, then works out the types of the rest of the things. Type inference for Haskell is decidable, which means that the compiler can always work out the types, even if you never write them in[7]. Let's look at some examples to see how the compiler works out types.

Example: Simple type inference

-- We're deliberately not providing a type signature for this function
isL c = c == 'l'

This function takes a character and sees if it is an 'l' character. The compiler derives the type for isL something like the following:

Example: A typing derivation

(==) :: a -> a -> Bool
'l'  :: Char

Replacing the second ''a'' in the signature for (==) with the type of 'l':

(==) :: Char -> Char -> Bool

isL  :: Char -> Bool

[8] The first line indicates that the type of the function (==), which tests for equality, is a -> a -> Bool . (We include the function name in parentheses because it's an operator: its name consists of all nonalphanumeric characters. More on this later.) The compiler also knows that something in 'single quotes' has type Char, so clearly the literal 'l' has type Char. Next, the compiler starts replacing the type variables in the signature for (==) with the types it knows. Note that in one step, we went from a -> a -> Bool to Char -> Char -> Bool, because the type variable a was used in both the first and second argument, so they need to be the same. And so we arrive at a function that takes a single argument (whose type we don't know yet, but hold on!) and applies it as the first argument to (==). We have a particular instance of the polymorphic type of (==), that is, here, we're talking about (==) :: Char -> Char -> Bool because we know that we're comparing Chars. Therefore, as (==) :: Char -> Char -> Bool and we're feeding the parameter into the first argument to (==), we know that the parameter has the type of Char. Phew!

But wait, we're not even finished yet! What's the return type of the function? Thankfully, this bit is a bit easier. We've fed two Chars into a function which (in this case) has type Char -> Char -> Bool, so we must have a Bool. Note that the return value from the call to (==) becomes the return value of our isL function. So, let's put it all together. isL is a function which takes a single argument. We discovered that this argument must be of type Char. Finally, we derived that we return a Bool. So, we can confidently say that isL has the type:

Example: isL with a type

isL :: Char -> Bool
isL c = c == 'l'

And, indeed, if you miss out the type signature, the Haskell compiler will discover this on its own, using exactly the same method we've just run through. Reasons to use type signatures So if type signatures are optional, why bother with them at all? Here are a few reasons: Documentation: the most prominent reason is that it makes your code easier to read. With most functions, the name of the function along with the type of the function are sufficient to guess at what the function does. (Of course, you should always comment your code anyway.) Debugging: if you annotate a function with a type, then make a typo in the body of the function, the compiler will tell you at compile-time that your function is wrong. Missing off the type signature could have the effect of allowing your function to compile, and the compiler would assign it an erroneous type. You wouldn't know until you ran your program that it was wrong. In fact, this is so important, let's explore it some more. Types prevent errors Imagine you have a few functions set up like the following:

Example: Type inference at work

fiveOrSix :: Bool -> Int
fiveOrSix True  = 5
fiveOrSix False = 6

pairToInt :: (Bool, String) -> Int
pairToInt x = fiveOrSix (fst x)

Our function fiveOrSix takes a Bool. When pairToInt receives its arguments, it knows, because of the type signature we've annotated it with, that the first element of the pair is a Bool. So, we could extract this using fst and pass that into fiveOrSix, and this would work, because the type of the first element of the pair and the type of the argument to fiveOrSix are the same. This is really central to typed languages. When passing expressions around you have to make sure the types match up like they did here. If they don't, you'll get type errors when you try to compile; your program won't typecheck. This is really how types help you to keep your programs bug-free. To take a very trivial example:

Example: A non-typechecking program "hello" + " world"

Having that line as part of your program will make it fail to compile, because you can't add two strings together! More likely, you wanted to use the string concatenation operator, which joins two strings together into a single one:

Example: Our erroneous program, fixed "hello" ++ " world"

An easy typo to make, but because you use Haskell, it was caught when you tried to compile. You didn't have to wait until you ran the program for the bug to become apparent. This was only a simple example. However, the idea of types being a system to catch mistakes works on a much larger scale too. In general, when you make a change to your program, you'll change the type of one of the elements. If this change isn't something that you intended, then it will show up immediately. A lot of Haskell programmers remark that once they have fixed all the type errors in their programs, and their programs compile, that they tend to 'just work': function flawlessly first time, with only minor problems. Run-time errors, where your program goes wrong when you run it rather than when you compile it, are much rarer in Haskell than in other languages. This is a huge advantage of a strong type system like Haskell's. Exercises To come.

Notes

1. At least as far as types are concerned, but we're trying to avoid that word :)
2. More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.
3. In fact, these are one and the same concept in Haskell.
4. This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.
5. To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
6. This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
7. Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
8. This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.

Simple input and output So far this tutorial has discussed functions that return values, which is well and good. But how do we write "Hello world"? To give you a rough taste of it, here is a small variant of the "Hello world" program:

Example: Hello! What is your name?

module Main where

import System.IO

main = do
  putStrLn "Please enter your name: "
  name <- getLine
  putStrLn ("Hello, " ++ name ++ ", how are you?")

At the very least, what should be clear is that dealing with input and output (IO) in Haskell is not a lost cause! Functional languages have always had a problem with input and output because they require side effects. Functions always have to return the same results for the same arguments. But how can a function "getLine" return the same value every time it is called? Before we give the solution, let's take a step back and think about the difficulties inherent in such a task. Any IO library should provide a host of functions, containing (at a minimum) operations like: print a string to the screen read a string from a keyboard write data to a file read data from a file There are two issues here. Let's first consider the initial two examples and think about what their types should be. Certainly the first operation (I hesitate to call it a "function") should take a String argument and produce something, but what should it produce? It could produce a unit (), since there is essentially no return value from printing a string. The second operation, similarly, should return a String, but it doesn't seem to require an argument. We want both of these operations to be functions, but they are by definition not functions. The item that reads a string from the keyboard cannot be a function, as it will not return the same String every time. And if the first function simply returns () every time, then referential transparency tells us we should have no problem with replacing it with a function f _ = (). But clearly this does not have the desired effect.

Actions The breakthrough for solving this problem came when Phil Wadler realized that monads would be a good way to think about IO computations. In fact, monads are able to express much more than just the simple operations described above; we can use them to express a variety of constructions like concurrence, exceptions, IO, non-determinism and much more. Moreover, there is nothing special about them; they can be defined within Haskell with no special handling from the compiler (though compilers often choose to optimize monadic operations). Monads also have a somewhat undeserved reputation of being difficult to understand. So we're going to leave things at that -- knowing simply that IO somehow makes use of monads without neccesarily understanding the gory details behind them (they really aren't so gory). So for now, we can forget that monads even exist. As pointed out before, we cannot think of things like "print a string to the screen" or "read data from a file" as functions, since they are not (in the pure mathematical sense). Therefore, we give them another name:

actions. Not only do we give them a special name, we give them a special type. One particularly useful action is putStrLn, which prints a string to the screen. This action has type: putStrLn :: String -> IO ()

As expected, putStrLn takes a string argument. What it returns is of type IO (). This means that this function is actually an action (that is what the IO means). Furthermore, when this action is evaluated (or "run") , the result will have type (). Note Actually, this type means that putStrLn is an action "within the IO monad", but we will gloss over this for now.

You can probably already guess the type of getLine: getLine :: IO String

This means that getLine is an IO action that, when run, will have type String. The question immediately arises: "how do you 'run' an action?". This is something that is left up to the compiler. You cannot actually run an action yourself; instead, a program is, itself, a single action that is run when the compiled program is executed. Thus, the compiler requires that the main function have type IO (), which means that it is an IO action that returns nothing. The compiled code then executes this action. However, while you are not allowed to run actions yourself, you are allowed to combine actions. There are two ways to go about this. The one we will focus on in this chapter is the do notation, which provides a convenient means of putting actions together, and allows us to get useful things done in Haskell without having to understand what really happens. Lurking behind the do notation is the more explicit approach using the (>>=) operator, but we will not be ready to cover this until the chapter Understanding monads. Note Do notation is just syntactic sugar for (>>=). If you have experience with higher order functions, it might be worth starting with the latter approach and coming back here to see how do notation gets used.

Let's consider the following name program:

Example: What is your name?

main = do
  putStrLn "Please enter your name: "
  name <- getLine
  putStrLn ("Hello, " ++ name ++ ", how are you?")

We can consider the do notation as a way to combine a sequence of actions. Moreover, the <- notation is a way to get the value out of an action. So, in this program, we're sequencing three actions: a putStrLn, a getLine and another putStrLn. The putStrLn action has type String -> IO (), so we provide it a String, so the fully applied action has type IO (). This is something that we are allowed to run as a program. Exercises Write a program which asks the user for the base and height of a triangle, calculates its area and prints it to the screen. The interaction should look something like: The base? 3.3 The height? 5.4 The area of that triangle is 8.91

Hint: you can use the function read to convert user strings like "3.3" into numbers like 3.3 and function show to convert a number into string. Left arrow clarifications The <- is optional While we are allowed to get a value out of certain actions like getLine, we certainly are not obliged to do so. For example, we could very well have written something like this:

Example: executing getLine directly

main = do
  putStrLn "Please enter your name: "
  getLine
  putStrLn ("Hello, how are you?")

Clearly, that isn't very useful: the whole point of prompting the user for his or her name was so that we could do something with the result. That being said, it is conceivable that one might wish to read a line and completely ignore the result. Omitting the <- will allow for that; the action will happen, but the data won't be stored anywhere. In order to get the value out of the action, we write name <- getLine, which basically means "run getLine, and put the results in the variable called name." The <- can be used with any action (except the last)

On the flip side, there are also very few restrictions on which actions can have values extracted from them. Consider the following example, where we put the result of each action into a variable (except the last... more on that later):

Example: putting all results into a variable

main = do
  x <- putStrLn "Please enter your name: "
  name <- getLine
  putStrLn ("Hello, " ++ name ++ ", how are you?")

The variable x gets the value out of its action, but that isn't very interesting because the action returns the unit value (). So while we could technically get the value out of any action, it isn't always worth it. But wait, what about that last action? Why can't we get a value out of that? Let's see what happens when we try:

Example: getting the value out of the last action

main = do
  putStrLn "Please enter your name: "
  name <- getLine
  x <- putStrLn ("Hello, " ++ name ++ ", how are you?")

Whoops! YourName.hs:5:2: The last statement in a 'do' construct must be an expression

This is a much more interesting example, but it requires a somewhat deeper understanding of Haskell than we currently have. Suffice it to say, whenever you use <- to get the value of an action, Haskell is always expecting another action to follow it. So the very last action better not have any <-s.

Controlling actions

Normal Haskell constructions like if/then/else and case/of can be used within the do notation, but you need to be somewhat careful. For instance, in a simple "guess the number" program, we have:

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  if (read guess) < num
    then do putStrLn "Too low!"
            doGuessing num
    else if (read guess) > num
      then do putStrLn "Too high!"
              doGuessing num
      else do putStrLn "You Win!"

If we think about how the if/then/else construction works, it essentially takes three arguments: the condition, the "then" branch, and the "else" branch. The condition needs to have type Bool, and the two branches can have any type, provided that they have the same type. The type of the entire if/then/else construction is then the type of the two branches. In the outermost comparison, we have (read guess) < num as the condition. This clearly has the correct type. Let's just consider the "then" branch. The code here is: do putStrLn "Too low!" doGuessing num

Here, we are sequencing two actions: putStrLn and doGuessing. The first has type IO (), which is fine. The second also has type IO (), which is fine. The type result of the entire computation is precisely the type of the final computation. Thus, the type of the "then" branch is also IO (). A similar argument shows that the type of the "else" branch is also IO (). This means the type of the entire if/then/else construction is IO (), which is just what we want. Note In this code, the last line is else do putStrLn "You Win!". This is somewhat overly verbose. In fact, else putStrLn "You Win!" would have been sufficient, since do is only necessary to sequence actions. Since we have only one action here, it is superfluous.

It is incorrect to think to yourself "Well, I already started a do block; I don't need another one," and hence write something like: do if (read guess) < num then putStrLn "Too low!" doGuessing num else ...

Here, since we didn't repeat the do, the compiler doesn't know that the putStrLn and doGuessing calls are supposed to be sequenced, and the compiler will think you're trying to call putStrLn with three arguments: the string, the function doGuessing and the integer num. It will certainly complain (though the error may be somewhat difficult to comprehend at this point). We can write the same doGuessing function using a case statement. To do this, we first introduce the Prelude function compare, which takes two values of the same type (in the Ord class) and returns one of GT, LT, EQ, depending on whether the first is greater than, less than or equal to the second.

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  case compare (read guess) num of
    LT -> do putStrLn "Too low!"
             doGuessing num
    GT -> do putStrLn "Too high!"
             doGuessing num
    EQ -> putStrLn "You Win!"

Here, again, the dos after the ->s are necessary on the first two options, because we are sequencing actions. If you're used to programming in an imperative language like C or Java, you might think that return will exit you from the current function. This is not so in Haskell. In Haskell, return simply takes a normal value (for instance, one of type Int) and makes it into an action that returns the given value (for the same example, the action would be of type IO Int). In particular, in an imperative language, you might write this function as:

void doGuessing(int num) {
  print "Enter your guess:";
  int guess = atoi(readLine());
  if (guess == num) {
    print "You win!";
    return ();
  }

  // we won't get here if guess == num
  if (guess < num) {
    print "Too low!";
    doGuessing(num);
  } else {
    print "Too high!";
    doGuessing(num);
  }
}

Here, because we have the return () in the first if match, we expect the code to exit there (and in most imperative languages, it does). However, the equivalent code in Haskell, which might look something like this, does not behave that way:

doGuessing num = do
  putStrLn "Enter your guess:"
  guess <- getLine
  case compare (read guess) num of
    EQ -> do putStrLn "You win!"
             return ()
  -- we don't expect to get here unless guess == num
  if (read guess < num)
    then do print "Too low!"
            doGuessing num
    else do print "Too high!"
            doGuessing num

First of all, if you guess correctly, it will first print "You win!," but it won't exit, and it will check whether guess is less than num. Of course it is not, so the else branch is taken, and it will print "Too high!" and then ask you to guess again. On the other hand, if you guess incorrectly, it will try to evaluate the case statement and get either LT or GT as the result of the compare. In either case, it won't have a pattern that matches, and the program will fail immediately with an exception.


Exercises

What does the following program print out?

main = do x <- getX
          putStrLn x

getX = do return "hello"
          return "aren't"
          return "these"
          return "returns"
          return "rather"
          return "pointless?"

Why?

Exercises

Write a program that asks the user for his or her name. If the name is one of Simon, John or Phil, tell the user that you think Haskell is a great programming language. If the name is Koen, tell them that you think debugging Haskell is fun (Koen Claessen is one of the people who works on Haskell debugging); otherwise, tell the user that you don't know who he or she is. Write two different versions of this program, one using if statements, the other using a case statement.

Actions under the microscope Actions may look easy up to now, but they are actually a common stumbling block for new Haskellers. If you have run into trouble working with actions, you might consider looking to see if one of your problems or questions matches the cases below. It might be worth skimming this section now, and coming back to it when you actually experience trouble. Mind your action types One temptation might be to simplify our program for getting a name and printing it back out. Here is one unsuccessful attempt:

Example: Why doesn't this work?

main = do putStrLn "What is your name? "
          putStrLn ("Hello " ++ getLine)

Ouch!


YourName.hs:3:26: Couldn't match expected type `[Char]' against inferred type `IO String'

Let us boil the example above to its simplest form. Would you expect this program to compile?

Example: This still does not work

main = do putStrLn getLine

For the most part, this is the same (attempted) program, except that we've stripped off the superfluous "What is your name" prompt as well as the polite "Hello". One trick to understanding this is to reason about it in terms of types. Let us compare:

putStrLn :: String -> IO ()
getLine  :: IO String

We can use the same mental machinery we learned in Type basics to figure how everything went wrong. Simply put, putStrLn is expecting a String as input. We do not have a String, but something tantalisingly close, an IO String. This represents an action that will give us a String when it's run. To obtain the String that putStrLn wants, we need to run the action, and we do that with the ever-handy left arrow, <-.

Example: This time it works

main = do name <- getLine
          putStrLn name

Working our way back up to the fancy example:

main = do putStrLn "What is your name? "
          name <- getLine
          putStrLn ("Hello " ++ name)

Now the name is the String we are looking for and everything is rolling again. Mind your expression types too


Fine, so we've made a big deal out of the idea that you can't use actions in situations that don't call for them. The converse of this is that you can't use non-actions in situations that DO expect them. Say we want to greet the user, but this time we're so excited to meet them, we just have to SHOUT their name out:

Example: Exciting but incorrect. Why?

import Data.Char (toUpper)

main = do name <- getLine
          loudName <- makeLoud name
          putStrLn ("Hello " ++ loudName ++ "!")
          putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

-- Don't worry too much about this function; it just capitalises a String
makeLoud :: String -> String
makeLoud s = map toUpper s

This goes wrong...

Couldn't match expected type `IO' against inferred type `[]'
  Expected type: IO t
  Inferred type: String
In a 'do' expression: loudName <- makeLoud name

This is quite similar to the problem we ran into above: we've got a mismatch between something that is expecting an IO type, and something which is not. This time, the cause is our use of the left arrow <-; we're trying to left arrow a value of makeLoud name, which really isn't left arrow material. It's basically the same mismatch we saw in the previous section, except now we're trying to use a regular old String (the loud name) as an IO String, which are clearly not the same thing. The latter is an action, something to be run, whereas the former is just an expression minding its own business. So how do we extricate ourselves from this mess? We have a number of options:

  We could find a way to turn makeLoud into an action, to make it return IO String. But this is not desirable, because the whole point of functional programming is to cleanly separate our side-effecting stuff (actions) from the pure and simple stuff. For example, what if we wanted to use makeLoud from some other, non-IO, function? An IO makeLoud is certainly possible (how?), but missing the point entirely.
  We could use return to promote the loud name into an action, writing something like loudName <- return (makeLoud name). This is slightly better, in that we are at least leaving the makeLoud function itself nice and IO-free, whilst using it in an IO-compatible fashion. But it's still moderately clunky, because by virtue of the left arrow, we're implying that there's action to be had -- how exciting! -- only to let our reader down with a somewhat anticlimactic return.
  Or we could use a let binding...

It turns out that Haskell has a special extra-convenient syntax for let bindings in actions. It looks a little like this:


Example: let bindings in do blocks.

main = do name <- getLine
          let loudName = makeLoud name
          putStrLn ("Hello " ++ loudName ++ "!")
          putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

If you're paying attention, you might notice that the let binding above is missing an in. This is because let bindings in do blocks do not require the in keyword. You could very well use it, but then you'd have to make a mess of your do blocks. For what it's worth, the following two blocks of code are equivalent.

sweet:

do name <- getLine
   let loudName = makeLoud name
   putStrLn ("Hello " ++ loudName ++ "!")
   putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

unsweet:

do name <- getLine
   let loudName = makeLoud name
    in do putStrLn ("Hello " ++ loudName ++ "!")
          putStrLn ("Oh boy! Am I excited to meet you, " ++ loudName)

Exercises 1. Why does the unsweet version of the let binding require an extra do keyword? 2. Do you always need the extra do? 3. (extra credit) Curiously, let without in is exactly how we wrote things when we were playing with the interpreter in the beginning of this book. Why can you omit the in keyword in the interpreter, when you'd have to put it in when typing up a source file?

The big secret We've been insisting rather vehemently on the distinction between actions and expressions and hope that you have a more or less solid grasp on the difference (if not, it will become clearer as you write more real-life code). But it turns out that there is a deeper, more beautiful and perhaps frightening truth behind this. Ready?

Actions are expressions too

To be precise, the world of expressions can be handily divided into those that are actions, and those that are


not. We've been making a distinction between actions and expressions the whole time, but what we should really have been making the distinction between is expressions-that-are-actions and expressions-that-aren't. This isn't quite a hair-splitting terminological difference either. It will not be entirely useful for now, but for me at least it is vaguely reassuring to know that underneath this heavy do-block-left-arrow machinery is a very elegant monadic core, that actions are first class citizens in the same way that functions and other expressions are. Sure, there is some extra sugar that makes actions more agreeable to use, but like a graphical user interface, it is never really strictly obligatory. Moreover, it is sometimes very powerful to manipulate actions in the same way we do the non-actions, for example to map or foldr over them. Unimpressed? It's ok, it's not strictly necessary to know this, but it might make Haskell just a little tastier for you to know it. Just think of this as foreshadowing to Understanding monads.
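To make the remark about manipulating actions a little more concrete, here is a small sketch of our own (the helper name greetings is made up, not part of the book's running example): we build a list of actions with plain old map, and only later run them in order with the Prelude function sequence_.

-- Actions are ordinary values: we can store them in a list...
greetings :: [IO ()]
greetings = map putStrLn ["one", "two", "three"]

-- ...and decide separately when to run them, in order.
main :: IO ()
main = sequence_ greetings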

Learn more

At this point, you should have the skills you need to do some fancier input/output. Here are some IO-related options to consider.

  You could continue the sequential track, by learning more about types and eventually monads.
  Alternately, you could start learning about building graphical user interfaces in the GUI chapter.
  For more IO-related functionality, you could also consider learning more about the System.IO library.

Type declarations

Haskell has three basic ways to declare a new type:

  The data declaration for structures and enumerations.
  The type declaration for type synonyms.
  The newtype declaration, which is a cross between the other two.

In this chapter, we will focus on the most essential way, data, and to make life easier, type. You'll find out about newtype later on, but don't worry too much about it; it's there mainly for optimisation.

data for making your own types

Here is a data structure for a simple list of anniversaries:

data Anniversary = Birthday String Int Int Int       -- Name, month, day, year
                 | Wedding String String Int Int Int -- First partner's name, second partner's name, month, day, year

This declares a new data type Anniversary with two constructor functions called Birthday and Wedding. As usual with Haskell the case of the first letter is important: type names and constructor functions must always start with capital letters. Note also the vertical bar: this marks the point where one alternative ends and the next begins; you can think of it almost as an or - which you'll remember was || except used in types.


The declaration says that an Anniversary can be one of two things; a Birthday or a Wedding. A Birthday contains one string and three integers, and a Wedding contains two strings and three integers. The comments (after the "--") explain what the fields actually mean. Now we can create new anniversaries by calling the constructor functions. For example, suppose we have John Smith born on 3rd July 1968:

johnSmith :: Anniversary
johnSmith = Birthday "John Smith" 7 3 1968

He married Jane Smith on 4th March 1997:

smithWedding :: Anniversary
smithWedding = Wedding "John Smith" "Jane Smith" 3 4 1997

These two objects can now be put in a list:

anniversaries :: [Anniversary]
anniversaries = [johnSmith, smithWedding]

(Obviously a real application would not hard-code its entries: this is just to show how constructor functions work.) Constructor functions can do all of the things ordinary functions can do. Anywhere you could use an ordinary function you can use a constructor function. Anniversaries will need to be converted into strings for printing. This needs another function:

showAnniversary :: Anniversary -> String

showAnniversary (Birthday name month day year) =
  name ++ " born " ++ showDate month day year

showAnniversary (Wedding name1 name2 month day year) = name1 ++ " married " ++ name2 ++ " " ++ showDate month day year

This shows the one way that constructor functions are special: they can also be used to deconstruct objects. showAnniversary takes an argument of type Anniversary. If the argument is a Birthday then the first version gets used, and the variables name, month, day and year are bound to its contents. If the argument is a Wedding then the second version is used and the arguments are bound in the same way. The brackets indicate that the whole thing is one argument split into five or six parts, rather than five or six separate arguments. Notice the relationship between the type and the constructors. All versions of showAnniversary convert an anniversary to a string. One of them handles the Birthday case and the other handles the Wedding case. It also needs an additional showDate routine:


showDate m d y = show m ++ "/" ++ show d ++ "/" ++ show y -- Non-US readers may wish to rearrange this

Of course, it's a bit clumsy having the date passed around as three separate integers. What we really need is a new datatype:

data Date = Date Int Int Int   -- Month, Day, Year

Constructor functions are allowed to have the same name as the type, and if there is only one constructor then it is good practice to make it so.
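To see both uses of the name side by side, here is an illustrative sketch (yearOf and unixEpoch are made-up names, not part of the book's example), assuming the Date type just defined:

-- Date as a type on the left of ::, Date as a constructor on the right.
unixEpoch :: Date
unixEpoch = Date 1 1 1970

-- The same constructor taken apart again by pattern matching.
yearOf :: Date -> Int
yearOf (Date _ _ y) = y    -- yearOf unixEpoch is 1970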

type for making type synonyms

It would also be nice to make it clear that the strings in the Anniversary type are names, but still be able to manipulate them like ordinary strings. The type declaration does this:

type Name = String

This says that a Name is a synonym for a String. Any function that takes a String will now take a Name as well, and vice versa. The right hand side of a type declaration can be a more complex type as well. For example, String itself is defined in the standard libraries as

type String = [Char]

So now we can rewrite the Anniversary type like this:

data Anniversary = Birthday Name Date
                 | Wedding Name Name Date

which is a lot easier to read. We can also have a type for the list:

type AnniversaryBook = [Anniversary]

The rest of the code needs to be changed to match:

johnSmith :: Anniversary
johnSmith = Birthday "John Smith" (Date 7 3 1968)

smithWedding :: Anniversary
smithWedding = Wedding "John Smith" "Jane Smith" (Date 3 4 1997)

anniversaries :: AnniversaryBook
anniversaries = [johnSmith, smithWedding]

showAnniversary :: Anniversary -> String


showAnniversary (Birthday name date) = name ++ " born " ++ showDate date
showAnniversary (Wedding name1 name2 date) = name1 ++ " married " ++ name2 ++ " " ++ showDate date

showDate :: Date -> String
showDate (Date m d y) = show m ++ "/" ++ show d ++ "/" ++ show y
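To round the example off, here is one possible way to print the whole book, using the Prelude functions map, unlines and putStr. This is our own sketch, not part of the original code, and it assumes the definitions above:

main :: IO ()
main = putStr (unlines (map showAnniversary anniversaries))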

Elementary Haskell

Recursion

Recursion is a clever idea that says that a given function can use itself as part of its definition.

Numeric recursion

The factorial function

In mathematics, especially combinatorics, there is a function used fairly frequently called the factorial function [9]. This takes a single argument, a number, finds all the numbers between one and this number, and multiplies them all together. For example, the factorial of 6 is 1 × 2 × 3 × 4 × 5 × 6 = 720. This is an interesting function for us, because it is a candidate to be written in the recursive style. The idea is to look at factorials of two adjacent numbers:

Example: Factorials of adjacent numbers

Factorial of 6 = 6 × 5 × 4 × 3 × 2 × 1
Factorial of 5 =     5 × 4 × 3 × 2 × 1

Notice how we've lined things up. What you can see here is that the factorial of 6 involves the factorial of 5. In fact, the factorial of 6 is just 6 × (factorial of 5). Let's look at some more examples:

Example: Factorials of adjacent numbers

Factorial of 3 = 3 × 2 × 1
Factorial of 2 =     2 × 1


Factorial of 8 = 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1
Factorial of 7 =     7 × 6 × 5 × 4 × 3 × 2 × 1

Indeed, we can see that the factorial of any number is just that number multiplied by the factorial of the number one less than it. There's one exception to this: if we ask for the factorial of 0, we don't want to multiply 0 by the factorial of -1! In fact, we just say the factorial of 0 is 1 (we define it to be so. It just is, okay?). So, we can sum up the definition of the factorial function:

  The factorial of 0 is 1.
  The factorial of any other number is that number multiplied by the factorial of the number one less than it.

We can translate this directly into Haskell:

Example: Factorial function

factorial 0 = 1
factorial n = n * factorial (n-1)

This defines a new function called factorial. The first line says that the factorial of 0 is 1, and the second one says that the factorial of any other number n is equal to n times the factorial of n-1. Note the parentheses around the n-1: without them this would have been parsed as (factorial n) - 1; function application (applying a function to a value) happens before anything else does (we say that function application binds more tightly than anything else). This all seems a little voodoo so far, though. How does it work? Well, let's look at what happens when you execute factorial 3:

  3 isn't 0, so we recur: work out the factorial of 2.
    2 isn't 0, so we recur.
      1 isn't 0, so we recur.
        0 is 0, so we return 1.
      We multiply the current number, 1, by the result of the recursion, 1, obtaining 1 (1 × 1).
    We multiply the current number, 2, by the result of the recursion, 1, obtaining 2 (2 × 1 × 1).
  We multiply the current number, 3, by the result of the recursion, obtaining 6 (3 × 2 × 1 × 1).

(Note that we end up with the one appearing twice, but that's okay, because the 'base case' is 0 rather than 1. This is just mathematical convention (it's useful to have the factorial of 0 defined); we could have stopped at 1 if we had wanted to.) We can see how the multiplication 'builds up' through the recursion.

Exercises

Type the factorial function into a Haskell source file and load it into your favourite Haskell environment. What is factorial 5?


What about factorial 1000? If you have a scientific calculator (that isn't your computer), try it there first. Does Haskell give you what you expected? What about factorial (-1)?

A quick aside This section is aimed at people used to more imperative-style languages like C and Java. This example shows how you do loops in Haskell. The idiomatic way of doing this in an imperative language would be to use a for loop, like the following (in C):

Example: The factorial function in an imperative language

int factorial(int n) {
  int res = 1;
  for (int i = 1; i <= n; i++)
    res *= i;
  return res;
}

This isn't possible in Haskell because you're changing the value of the variable res (a destructive update), but you can use recursion. To do it through recursion, you take your current result and modify it before the recursive call. An example: sometimes you'll want to read input from the user that includes linebreaks/ newlines. A looping solution would be to read a line of input, append it to a string variable containing all previous lines, check it for whatever marks the end of the input (ending the loop if true, or looping again if false). Here's a recursive solution which will accumulate input until the '.' is input:

Example: Recursively accumulating a number of lines from standard input

getLinesUntilDot :: IO [String]
getLinesUntilDot = do x <- getLine
                      if x == "."
                        then return []
                        else do xs <- getLinesUntilDot
                                return (x:xs)
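One possible way to use it (a small sketch of our own): read lines until a lone ".", then echo everything back with mapM_, a Prelude function that runs an action for each element of a list.

main :: IO ()
main = do ls <- getLinesUntilDot
          mapM_ putStrLn ls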

The other way would be to use lists: you could throw all the numbers between 1 and n in a list, then use the product function to multiply them all together: product [1..10]
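The same idea gives a one-line, non-recursive factorial. This is just a sketch under a different (primed) name, not the definition used earlier in the chapter:

factorial' :: Integer -> Integer
factorial' n = product [1..n]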

Another thing to note is that you shouldn't be worried about poor performance through recursion with Haskell. In general, functional programming compilers include a lot of optimization for recursion, including one important one called tail-call optimisation; remember too that Haskell is lazy - if a calculation isn't needed, it won't be done. We'll learn about these in later chapters.


A note on equation order

Unlike the other example, the order of the two recursive declarations is important. Haskell matches function calls starting at the top and picking the first one that matches. In this case, if we had the equation starting factorial n before the 'special case' starting factorial 0, then the general n would match anything passed into it, including, importantly, 0. So a call factorial 0 would match the general n case, the compiler would conclude that factorial 0 equals 0 * factorial (-1), and so on to negative infinity. Not what we want.

Other recursive functions

It turns out a lot of functions are recursive! For example, let's think about multiplication. When you were first introduced to multiplication (remember that moment? :)), it may have been through a process of 'repeated addition'. That is, 5 × 4 is just 5 added to itself 4 times. So, it turns out we can define multiplication recursively:

Example: Multiplication defined recursively

n * 1 = n
n * m = n + n * (m - 1)
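Since (*) is already defined for us, a runnable version of this idea needs a fresh name; here is a sketch using mult, a name we have made up:

mult :: Integer -> Integer -> Integer
mult n 1 = n
mult n m = n + mult n (m - 1)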

Recursion, then, generally looks at two cases: what to do if the argument is the base case (normally either 1 or 0), and what to do otherwise. The actual recursion normally happens in the latter case, passing the number minus one back into the function, so that we proceed down the number line. When the number hits 1 or 0, our base case is invoked, and we stop. Exercises 1. Expand out the multiplication 5 × 4 similarly to the expansion we used above for factorial 3. 2. Define a recursive function power such that power x y raises x to the y power. 3. You are given a function plusOne x = x + 1. Without using any other (+)s, define a recursive function addition such that addition x y adds x and y together.

List-based recursion In fact, a lot of functions in Haskell will turn out to be recursive, especially those concerning lists. Consider the length function that finds the lengths of lists.

Example: The recursive definition of length


length :: [a] -> Int
length []     = 0
length (x:xs) = 1 + length xs

Note the syntax. The (x:xs) represents a list where x is the first element and xs is the rest of the list. See the section on Pattern matching. How about the concatenation function (++), which joins two lists together? (Some examples of usage are also given, as we haven't come across this function so far.)

Example: The recursive (++)

Prelude> [1,2,3] ++ [4,5,6]
[1,2,3,4,5,6]
Prelude> "Hello " ++ "world"    -- Strings are lists of Chars
"Hello world"

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : xs ++ ys

We seem to have a recurring pattern. With list-based functions, at least, we tend to think in two cases: what to do if the list is empty (the base case), and what to do otherwise. The actual recursion normally happens in the second step, where we pass the tail of the list to our function again, so that the list becomes progressively smaller. When it hits the empty list, our base case is invoked. Exercises Give recursive definitions for the following list-based functions. In each case, think what the base case would be, then think what the general case would look like, in terms of everything smaller than it. 1. replicate :: Int -> a -> [a], which takes an element and a count and returns the list which is that element repeated that many times. E.g. replicate 3 'a' = "aaa". (Hint: think about what replicate of anything with a count of 0 should be; a count of 0 is your 'base case'.) 2. (!!) :: [a] -> Int -> a, which returns the element at the given 'index'. The first element is at index 0, the second at index 1, and so on. Note that with this function, you're recurring both numerically and down a list. 3. (A bit harder.) zip :: [a] -> [b] -> [(a, b)], which takes two lists and 'zips' them together, so that the first pair in the resulting list is the first two elements of the two lists, and so on. E.g. zip [1,2,3] "abc" = [(1, 'a'), (2, 'b'), (3, 'c')]. If either of the lists is shorter than the other, you can stop once either list runs out. E.g. zip [1,2] "abc" = [(1, 'a'), (2, 'b')].


Recursion is used to define nearly all functions to do with lists and numbers. The next time you need a list-based algorithm, start with a case for the empty list and a case for the non-empty list and see if your algorithm is recursive.

Summary

Recursion is the practice of using a function you're defining in the body of the function itself. It nearly always comes in two parts: a base case and a recursive case. Recursion is especially useful for dealing with list- and number-based functions.

Notes

1. At least as far as types are concerned, but we're trying to avoid that word :)
2. More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.
3. In fact, these are one and the same concept in Haskell.
4. This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.
5. To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
6. This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
7. Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
8. This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all, types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.
9. In mathematics, n! normally means the factorial of n, but that syntax is impossible in Haskell, so we don't use it here.

Pattern matching Pattern matching is a convenient way to bind variables to different parts of a given value.

What is pattern matching?

You've actually met pattern matching before, in the lists chapter. Recall functions like map:

map _ []     = []
map f (x:xs) = f x : map f xs


Here there are four different patterns going on: two per equation. Let's explore each one in turn (although not in the order they appeared in that example):

  [] is a pattern that matches the empty list. It doesn't bind any variables.
  (x:xs) is a pattern that matches something (which gets bound to x), which is cons'd, using the function (:), onto something else (which gets bound to the variable xs).
  f is a pattern which matches anything at all, and binds f to that something.
  _ is the pattern which matches anything at all, but doesn't do any binding.

So pattern matching is a way of assigning names to things (or binding those names to those things), and possibly breaking down expressions into subexpressions at the same time (as we did with the list in the definition of map). However, you can't pattern match with anything. For example, you might want to define a function like the following to chop off the first three elements of a list:

dropThree ([x,y,z] ++ xs) = xs

However, that won't work, and will give you an error. The problem is that the function (++) isn't allowed in patterns. So what is allowed? The one-word answer is constructors. Recall algebraic datatypes, which look something like: data Foo = Bar | Baz Int

Here Bar and Baz are constructors for the type Foo. And so you can pattern match with them:

f :: Foo -> Int
f Bar     = 1
f (Baz x) = x - 1
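A quick check in the interpreter (assuming the definitions above) shows the two equations being selected by the patterns:

*Main> f Bar
1
*Main> f (Baz 5)
4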

Remember that lists are defined thusly (note that the following isn't actually valid syntax: lists are in reality deeply ingrained into Haskell):

data [a] = [] | a : [a]

So the empty list, [], and the (:) function, are in reality constructors of the list datatype, so you can pattern match with them. Note, however, that as [x, y, z] is just syntactic sugar for x:y:z:[], you can use either form in patterns; our dropThree function can be written as:

dropThree (_:_:_:xs) = xs

If the only relevant information is the type of the constructor (regardless of the number of its elements) the {} pattern can be used:


g :: Foo -> Bool g Bar {} = True g Baz {} = False

The function g does not have to be changed when the number of elements of the constructors Bar or Baz changes. Note: Foo does not have to be a record for this to work. For constructors with many elements, it can help to use records: data Foo2 = Bar2 | Baz2 {barNumber::Int, barName::String}

which then allows: h :: Foo2 -> Int h Baz2 {barName=name} = length name h Bar2 {} = 0

The one exception There is one exception to the rule that you can only pattern match with constructors. It's known as n+k patterns. It is indeed valid Haskell 98 to write something like: pred :: Int -> Int pred (n+1) = n

However, this is generally accepted as bad form and not many Haskell programmers like this exception, and so try to avoid it.

Where you can use it The short answer is that wherever you can bind variables, you can pattern match. Let's have a look at that more precisely. Equations The first place is in the left-hand side of function equations. For example, our above code for map: map _ [] = [] map f (x:xs) = f x : map f xs

Here we're binding, and doing pattern-matching, on the left hand side of both of these equations. Let expressions / Where clauses You can obviously bind variables with a let expression or where clause. As such, you can also do pattern matching here. A trivial example:


let Just x = lookup "bar" [("foo", 1), ("bar", 2), ("baz", 3)]

Case expressions

One of the most obvious places you can use pattern binding is on the left hand side of case branches:

case someRandomList of
  []     -> "The list was empty"
  (x:xs) -> "The list wasn't empty: the first element was " ++ x ++ ", and " ++
            "there were " ++ show (length xs) ++ " more elements in the list."

Lambdas As lambdas can be easily converted into functions, you can pattern match on the left-hand side of lambda expressions too: head = (\(x:xs) -> x)

Note that here, along with on the left-hand side of equations as described above, you have to use parentheses around your patterns (unless they're just _ or are just a binding, not a pattern, like x).

List comprehensions

After the | in list comprehensions, you can pattern match. This is actually extremely useful. For example, the function catMaybes from Data.Maybe takes a list of Maybes, filters all the Just xs, and gets rid of all the Just wrappers. It's easy to write it using list comprehensions:

catMaybes :: [Maybe a] -> [a]
catMaybes ms = [ x | Just x <- ms ]
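As a quick sanity check (assuming the definition above), in the interpreter:

*Main> catMaybes [Just 1, Nothing, Just 3, Nothing]
[1,3]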

If the pattern match fails, it just moves on to the next element in ms. (More formally, as list comprehensions are just the list monad, a failed pattern match invokes fail, which is the empty list in this case, and so gets ignored.) A few other places That's mostly it, but there are one or two other places you'll find as you progress through the book. Here's a list in case you're very eager already: In p <- x in do-blocks, p can be a pattern. Similarly, with let bindings in do-blocks, you can pattern match analogously to 'real' let bindings.

More about lists By now we have seen the basic tools for working with lists. We can build lists up from the cons operator (:) and the empty list [] (see Lists and tuples if you are unsure about this); and we can take them apart by using


a combination of Recursion and Pattern matching. In this chapter, we will delve a little deeper into the inner workings and the use of Haskell lists. We'll discover a little bit of new notation and some characteristically Haskell-ish features like infinite lists and list comprehensions. But before going into this, let us step back for a moment and combine the things we have already learned about lists.

Constructing Lists We'll start by making a function to double every element of a list of integers. First, we must specify the type declaration for our function. For our purposes here, the function maps a list of integers to another list of integers: doubleList :: [Integer] -> [Integer]

Then, we must specify the function definition itself. We'll be using a recursive definition, which consists of 1. the general case which iteratively generates a successive and simpler general case and 2. the base case, where iteration stops. doubleList (n:ns) = (n * 2) : doubleList ns doubleList [] = []

Since by definition, there are no more elements beyond the end of a list, intuition tells us iteration must stop at the end of the list. The easiest way to accomplish this is to return the null list: As a constant, it halts our iteration. As the empty list, it doesn't change the value of any list we append it to. The general case requires some explanation. Remember that ":" is one of a special class of functions known as "constructors". The important thing about constructors is that they can be used to break things down as part of "pattern matching" on the left hand side of function definitions. In this case the argument passed to doubleList is broken down into the first element of the list (known as the "head") and the rest of the list (known as the "tail"). On the right hand side doubleList builds up a new list by using ":". It says that the first element of the result is twice the head of the argument, and the rest of the result is obtained by applying "doubleList" to the tail. Note the naming convention implicit in (n:ns). By appending an "s" to the element "n" we are forming its plural. The idea is that the head contains one item while the tail contains many, and so should be pluralised. So what actually happens when we evaluate the following? doubleList [1,2,3,4]

We can work this out longhand by substituting the argument into the function definition, just like schoolbook algebra:

doubleList (1:[2,3,4])
= (1*2) : doubleList (2 : [3,4])
= (1*2) : (2*2) : doubleList (3 : [4])
= (1*2) : (2*2) : (3*2) : doubleList (4 : [])
= (1*2) : (2*2) : (3*2) : (4*2) : doubleList []
= (1*2) : (2*2) : (3*2) : (4*2) : []
= 2 : 4 : 6 : 8 : []
= [2, 4, 6, 8]

Notice how the definition for empty lists terminates the recursion. Without it, the Haskell compiler would have had no way to know what to do when it reached the end of the list. Also notice that it would make no difference when we did the multiplications (unless one of them is an error or nontermination: we'll get to that later). If I had done them immediately it would have made absolutely no difference. This is an important property of Haskell: it is a "pure" functional programming language. Because evaluation order can never change the result, it is mostly left to the compiler to decide when to actually evaluate things. Haskell is a "lazy" evaluation language, so evaluation is usually deferred until the value is really needed, but the compiler is free to evaluate things sooner if this will improve efficiency. From the programmer's point of view evaluation order rarely matters (except in the case of infinite lists, of which more will be said shortly). Of course a function to double a list has limited generality. An obvious generalization would be to allow multiplication by any number. That is, we could write a function "multiplyList" that takes a multiplicand as well as a list of integers. It would be declared like this: multiplyList :: Integer -> [Integer] -> [Integer] multiplyList _ [] = [] multiplyList m (n:ns) = (m*n) : multiplyList m ns

This example introduces the "_", which is used for a "don't care" argument; it will match anything, like * does in shells or .* in regular expressions. The multiplicand is not used for the null case, so instead of being bound to an unused argument name it is explicitly thrown away, by "setting" _ to it. ("_" can be thought of as a write-only "variable".) The type declaration needs some explanation. Hiding behind the rather odd syntax is a deep and clever idea. The "->" arrow is actually an operator for types, and is right associative. So if you add in the implied brackets the type definition is actually multiplyList :: Integer -> ( [Integer] -> [Integer] )

Think about what this is saying. It means that "multiplyList" doesn't take two arguments. Instead it takes one (an Integer), and then returns a new function. This new function itself takes one argument (a list of Integers) and returns a new list of Integers. This process of functions taking one argument is called "currying", and is very important. The new function can be used in a straightforward way: evens = multiplyList 2 [1,2,3,4]

or it can do something which, in any other language, would be an error; this is partial function application and because we're using Haskell, we can write the following neat & elegant bits of code:

doubleList = multiplyList 2
evens = doubleList [1,2,3,4]


It may help you to understand if you put the implied brackets in the first definition of "evens": evens = (multiplyList 2) [1,2,3,4]

In other words "multiplyList 2" returns a new function that is then applied to [1,2,3,4].

Dot Dot Notation

Haskell has a convenient shorthand for specifying a list containing a sequence of integers. Some examples are enough to give the flavor:

Code        Result
----        ------
[1..10]     [1,2,3,4,5,6,7,8,9,10]
[2,4..10]   [2,4,6,8,10]
[5,4..1]    [5,4,3,2,1]
[1,3..10]   [1,3,5,7,9]

The same notation can be used for floating point numbers and characters as well. However, be careful with floating point numbers: rounding errors can cause unexpected things to happen. Try this: [0,0.1 .. 1]

Similarly, there are limits to what kind of sequence can be written through dot-dot notation. You can't put in [0,1,1,2,3,5,8..100]

and expect to get back the rest of the Fibonacci series, or put in the beginning of a geometric sequence like [1,3,9,27..100]

Infinite Lists One of the most mind-bending things about Haskell lists is that they are allowed to be infinite. For example, the following generates the infinite list of integers starting with 1: [1..]

(If you try this in GHCi, remember you can stop an evaluation with C-c). Or you could define the same list in a more primitive way by using a recursive function:

intsFrom n = n : intsFrom (n+1)
positiveInts = intsFrom 1
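We can safely ask for just a prefix of this infinite list (why this is safe is explained just below); firstTen is our own illustrative name, not part of the original example:

firstTen :: [Integer]
firstTen = take 10 positiveInts    -- [1,2,3,4,5,6,7,8,9,10]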

This works because Haskell uses lazy evaluation: it never actually evaluates more than it needs at any given moment. In most cases an infinite list can be treated just like an ordinary one. The program will only go into


an infinite loop when evaluation would actually require all the values in the list. Examples of this include sorting or printing the entire list. However: evens = doubleList [1..]

will define "evens" to be the infinite list [2,4,6,8....]. And you can pass "evens" into other functions, and it will all just work. See the exercise 4 below for an example of how to process an infinite list and then take the first few elements of the result. Infinite lists are quite useful in Haskell. Often it's more convenient to define an infinite list and then take the first few items than to create a finite list. Functions that process two lists in parallel generally stop with the shortest, so making the second one infinite avoids having to find the length of the first. An infinite list is often a handy alternative to the traditional endless loop at the top level of an interactive program. Exercises Write the following functions and test them out. Don't forget the type declarations. 1. takeInt returns the first n items in a list. So takeInt 4 [11,21,31,41,51,61] returns [11,21,31,41] 2. dropInt drops the first n items in a list and returns the rest. so dropInt 3 [11,21,31,41,51] returns [41,51]. 3. sumInt returns the sum of the items in a list. 4. scanSum adds the items in a list and returns a list of the running totals. So scanSum [2,3,4,5] returns [2,5,9,14]. Is there any difference between "scanSum (takeInt 10 [1..])" and "takeInt 10 (scanSum [1..])"? 5. diffs returns a list of the differences between adjacent items. So diffs [3,5,6,8] returns [2,1,2]. (Hint: write a second function that takes two lists and finds the difference between corresponding items).

Deconstructing lists So now we know how to generate lists by appending to the empty list, or using infinite lists and their notation. Very useful. But what happens if our function is not generating a list and handing it off to some other function, but is rather receiving a list? It needs to be analyzed and broken down in some way. For this purpose, Haskell includes the same basic functionality as other programming languages, except with better names than "cdr" or "car": the "head" and "tail" functions. head :: [a] -> a tail :: [a] -> [a]

From these two functions we can build pretty much all the functionality we want. If we want the first item in the list, a simple head will do:


Code             Result
----             ------
head [1,2,3]     1
head [5..100]    5

If we want the second item in a list, we have to be a bit clever: head gives the first item in a list, and tail effectively removes the first item in a list. They can be combined, though:

Code                              Result
----                              ------
head (tail [1,2,3,4,5])           2
head (tail (tail [1,2,3,4,5]))    3

Enough applications of tail can reach arbitrary elements; usually this is generalized into a function which is passed a list and a number, which gives the position in the list to return.

Exercises

Write a function which takes a list and a number and returns the given element; use head or tail, and not !!.

List comprehensions

This is one further way to deconstruct lists; it is called a list comprehension. List comprehensions are useful and concise expressions, although they are fairly rare. List comprehensions are basically syntactic sugar for a common pattern dealing with lists: when one wants to take a list and generate a new list composed only of elements of the first list that meet a certain condition. One could write this out manually. For example, suppose one wants to take the list [1..10] and retain only the even numbers? One could handcraft a recursive function called retainEven, based on a test for evenness which we've already written called isEven:

isEven :: Integer -> Bool
isEven n
  | n < 0            = error "isEven needs a positive integer"
  | ((mod n 2) == 0) = True   -- Even numbers have no remainder when divided by 2
  | otherwise        = False  -- If it has a remainder of anything but 0, it is not even

retainEven :: [Integer] -> [Integer]
retainEven [] = []
retainEven (e:es)
  | isEven e  = e : retainEven es  -- If something is even, let's hang onto it
  | otherwise = retainEven es      -- If something isn't even, discard it and move on

Exercises Write a function which will take a list and return only odd numbers greater than 1. Hint: isOdd can be defined as the negation of isEven.

This is fairly verbose, though, and we had to go through a fair bit of effort and define an entirely new function just to accomplish the relatively simple task of filtering a list. Couldn't it be generalized? What we


want to do is construct a new list with only the elements of an old list for which some boolean condition is true. Well, we could generalize the kind of function we wrote above, using the higher-order functions map and filter. For example, the above can also be written as

retainEven es = filter isEven es

We can do this through the list comprehension form, which looks like this: retainEven es = [ n | n <- es , isEven n ]

We can read the first half as an arbitrary expression modifying n, which will then be prepended to a new list. In this case, n isn't being modified, so we can think of this as repeatedly prepending the variable, like n:n:n:n:[] - but where n is different each time. n is drawn (the "<-") from the list es (a subtle point is that es can be the name of a list, or it can itself be a list). Thus if es is equal to [1,2,3,4], then we would get back the list [2,4]. Suppose we wanted to subtract one from every even number?

evensMinusOne es = [ n - 1 | n <- es, isEven n ]

We can do more than that, and list comprehensions are easily modifiable. Perhaps we wish to generalize filtering a list by divisibility, instead of just by evenness (that is, divisibility by 2). Well, given that ((mod n x) == 0) returns true for numbers n which are divisible by x, it's obvious how to use it, no? Write a function using a list comprehension which will take an integer, and a list of integers, and return a list of integers which are divisible by the first argument. In other words, the type signature is thus:

returnFact :: Int -> [Int] -> [Int]
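One possible answer, shown here only as a sketch so you can check your own attempt against it:

returnFact :: Int -> [Int] -> [Int]
returnFact x ys = [ n | n <- ys, mod n x == 0 ]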

We can load the function, and test it with: returnFact 10 [10..1000]

which should give us this: *Main> returnFact 10 [10..1000] [10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,....etc.]

Which is as it should be. But what if we want to write the opposite? What if we want to write a function which returns those integers which are not divisible? The modification is very simple, and the type signature is the same. What decides whether an integer will be added to the list or not is the mod test, which currently returns true for those to be added. A simple 'not' suffices to reverse when it returns true, and so reverses which integers make it into the result:


rmFact :: Int -> [Int] -> [Int] rmFact x ys = [n | n<-ys , (not ((mod n x) == 0))]

We can load it and give the equivalent test: *Main> rmFact 10 [10..1000] [11,12,13,14,15,16,17,18,19,21,22,23,24,25,26,27,28,29,......etc.]

Of course this function is not perfect. We can still do silly things like *Main> rmFact 0 [1..1000] *** Exception: divide by zero

We can stack on more tests besides the one: maybe all our even numbers should be larger than 2: evensLargerThanTwo = [ n | n <- [1..10] , isEven n, n > 2 ]

Fortunately, our Boolean tests are commutative, so it doesn't matter whether (n > 2) or (isEven n) is evaluated first.

Pattern matching in list comprehensions

It's useful to note that the left arrow in list comprehensions can be used with pattern matching. For example, suppose we had a list of tuples [(Integer, Integer)]. What we would like to do is return the first element of every tuple whose second element is even. We could write it with a filter and a map, or we could write it as follows:

firstOfEvens xys = [ x | (x,y) <- xys, isEven y ]

And if we wanted to double those first elements: doubleFirstOfEvens xys = [ 2 * x | (x,y) <- xys, isEven y ]

Control structures Haskell offers several ways of expressing a choice between different values. This section will describe them all and explain what they are for:

if Expressions


You have already seen these. The full syntax is:

if <condition> then <true-value> else <false-value>

If the <condition> is True then the <true-value> is returned, otherwise the <false-value> is returned. Note that in Haskell if is an expression (returning a value) rather than a statement (to be executed). Because of this the usual indentation is different from imperative languages. If you need to break an if expression across multiple lines then you should indent it like one of these:

if <condition>
  then <true-value>
  else <false-value>

if <condition>
  then
    <true-value>
  else
    <false-value>

Note: the else is required!

Here is a simple example:

message42 :: Integer -> String
message42 n = if n == 42
                then "The Answer is forty two."
                else "The Answer is not forty two."

Unlike many other languages, in Haskell the else is required. Since if is an expression, it must return a result, and the else ensures this.

case Expressions

case expressions are a generalization of if expressions. As an example, let's clone if as a case:

case <condition> of
  True  -> <true-value>
  False -> <false-value>
  _     -> error "Neither True nor False? How can that be?"

First, this checks <condition> for a pattern match against True. If they match, the whole expression will evaluate to <true-value>, otherwise it will continue down the list. You can use _ as the pattern wildcard. In fact, the left hand side of any case branch is just a pattern, so it can also be used for binding:

case str of
  (x:xs) -> "The first character is " ++ [x] ++ "; the rest of the string is " ++ xs
  ""     -> "This is the empty string."


This expression tells you whether str is the empty string or something else. Of course, you could just do this with an if-statement (with a condition of null str), but using a case binds variables to the head and tail of our list, which is convenient in this instance.

Equations and Case Expressions

You can use multiple equations as an alternative to case expressions. The case expression above could be named describeString and written like this:

describeString :: String -> String
describeString (x:xs) = "The first character is " ++ [x] ++ "; the rest of the string is " ++ xs
describeString ""     = "This is the empty string."

Named functions and case expressions at the top level are completely interchangeable. In fact the function definition form shown here is just syntactic sugar for a case expression. The handy thing about case expressions is that they can go inside other expressions, or be used in an anonymous function. (TODO: this isn't really limited to case.) For example, this case expression returns a string which is then concatenated with two other strings to create the result:

data Colour = Black | White | RGB Int Int Int

describeColour c = "This colour is " ++
                   (case c of
                      Black -> "black"
                      White -> "white"
                      RGB _ _ _ -> "freaky, man, sort of in between")
                   ++ ", yeah?"

You can also put where clauses in a case expression, just as you can in functions:

describeColour c = "This colour is " ++
                   (case c of
                      Black -> "black"
                      White -> "white"
                      RGB red green blue -> "freaky, man, sort of " ++ show av
                        where av = (red + green + blue) `div` 3)
                   ++ ", yeah?"

Guards

As shown, if we have a top-level case expression, we can just give multiple equations for the function instead, which is normally neater. Is there an analogue for if expressions? It turns out there is. We use some additional syntax known as "guards". A guard is a boolean condition, like this:

describeLetter :: Char -> String
describeLetter c
  | c >= 'a' && c <= 'z' = "Lower case"


  | c >= 'A' && c <= 'Z' = "Upper case"
  | otherwise            = "Not a letter"

Note the lack of an = before the first |. Guards are evaluated in the order they appear. That is, if you have a set up similar to the following:

f (pattern1) | predicate1 = w
             | predicate2 = x
f (pattern2) | predicate3 = y
             | predicate4 = z

Then the input to f will be pattern-matched against pattern1. If it succeeds, then predicate1 will be evaluated. If this is true, then w is returned. If not, then predicate2 is evaluated. If this is true, then x is returned. Again, if not, then we jump out of this 'branch' of f and try to pattern match against pattern2, repeating the guards procedure with predicate3 and predicate4. If no guards match, an error will be produced at runtime, so it's always a good idea to leave an 'otherwise' guard in there to handle the "But this can't happen!" case. The otherwise you saw above is actually just a normal value defined in the Standard Prelude as:

otherwise :: Bool
otherwise = True

This works because of the sequential evaluation described a couple of paragraphs back: if none of the guards previous to your 'otherwise' one are true, then your otherwise will definitely be true and so whatever is on the right-hand side gets returned. It's just nice for readability's sake.

'where' and guards

One nicety about guards is that where clauses are common to all guards.

doStuff x
  | x < 3     = report "less than three"
  | otherwise = report "normal"
  where report y = "the input is " ++ y
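For example (assuming the definition above), in the interpreter:

*Main> doStuff 2
"the input is less than three"
*Main> doStuff 7
"the input is normal"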

The difference between if and case

It's worth noting that there is a fundamental difference between if-expressions and case-expressions. if-expressions, and guards, only check to see if a boolean expression evaluated to True. case-expressions, and multiple equations for the same function, pattern match against the input. Make sure you understand this important distinction.


List processing

Because lists are such a fundamental data type, Haskell has a wide collection of functions for processing them. These are mostly to be found in a library module called the "Standard Prelude", which is automatically imported into all Haskell programs.

Map

This module will explain one particularly important function, called "map", and then describe some of the other list processing functions that work in similar ways. Recall the "multiplyList" function from a couple of chapters ago.

multiplyList :: Integer -> [Integer] -> [Integer]
multiplyList _ []     = []
multiplyList m (n:ns) = (m*n) : multiplyList m ns

This works on a list of integers, multiplying each item by a constant. But Haskell allows us to pass functions around just as easily as we can pass integers. So instead of passing a multiplier "m" we could pass a function "f", like this: mapList1 :: (Integer -> Integer) -> [Integer] -> [Integer] mapList1 _ [] = [] mapList1 f (n:ns) = (f n) : mapList1 f ns

Take a minute to compare the two functions. The difference is in the first parameter. Instead of being just an integer it is now a function. This function parameter has the type "(Integer -> Integer)", meaning that it is a function from one integer to another. The second line says that if this is applied to an empty list then the result is itself an empty list, and the third line says that for a non-empty list the result is "f" applied to the first item in the list, followed by a recursive call to "mapList1" for the rest of the list. Remember that "*" has type "Integer -> Integer -> Integer". So if I write "(2*)" then this returns a new function that doubles its argument and has type "Integer -> Integer". But that is exactly what I can pass to "mapList1". So now I can write "doubleList" like this: doubleList = mapList1 (2*)

Or if I put in all the arguments I could also write doubleList ns = mapList1 (2*) ns

The two are equivalent because if I just pass one argument to mapList1 I get back a new function. The second version is more natural for newcomers to Haskell, but experts often favour the first, known as "point free" style. Obviously this idea is not limited to just integers. I could just as easily write


mapListString :: (String -> String) -> [String] -> [String]
mapListString _ [] = []
mapListString f (n:ns) = (f n) : mapListString f ns

and have a function that does this for strings. But this is horribly wasteful: the code is exactly the same for both strings and integers. What is needed is a way to say that "mapList" works for Integers, Strings, and any other type I might want to put in a list. In fact there is no reason why the input list should be the same type as the output list: I might very well want to convert a list of integers into a list of their string representations, or vice versa. And indeed Haskell provides a way to do this. The Standard Prelude contains the following definition of "map":

map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = (f x) : map f xs

Instead of constant types like String or Integer this definition uses type variables. These start with lower case letters (as opposed to type constants that start with upper case) and otherwise follow the same lexical rules as normal variables. However the convention is to start with "a" and go up the alphabet. Even the most complicated functions rarely get beyond "d". So what this says is that "map" takes two parameters:

A function from a thing of type "a" to a thing of type "b".
A list of things of type "a".

Then it returns a new list containing things of type "b", constructed by applying the function to all of the things of type "a".

Exercises
Use map to build functions that, given a list l of Ints, return:

A list that is the element-wise negation of l.
A list of lists of Ints ll that, for each element of l, contains the factors of that element. It will help to know that factors p = [ f | f <- [1..p], p `mod` f == 0 ]

The element-wise negation of ll.

Folds A fold applies a function to a list in a similar way to map, but it accumulates a single result instead of a list. Take for example, a function like sum, which might be implemented as follows:

Example: sum


sum :: [Integer] -> Integer sum [] = 0 sum (x:xs) = x + sum xs

or product:

Example: product product [] = 1 product (x:xs) = x * product xs

or, concat, which takes a list of lists and joins (concatenates) them into one:

Example: concat concat [] = [] concat (x:xs) = x ++ concat xs

There is a certain pattern of recursion common to all of these. It is known as a fold, possibly from the idea that a list is being "folded up" into a single value, or that a function is being "folded between" the elements of the list. The Standard Prelude has four fold functions: "foldr", "foldl", "foldr1" and "foldl1". The most natural and commonly used of these in a lazy language like Haskell is the right-associative foldr: foldr :: (a -> b -> b) -> b -> [a] -> b foldr f z [] = z foldr f z (x:xs) = f x (foldr f z xs)

The first argument is a function with two arguments, the second is a "zero" value for the accumulator, and the third is the list to be folded. For example, in sum, f is (+), and z is 0, and in concat, f is (++) and z is []. In many cases, like all of our examples so far, the function passed to a fold will have both its arguments be of the same type, but this is not necessarily the case in general. What foldr f z xs does is to replace each cons (:) in the list xs with the function f, and the empty list at the end with z. a : b : c : []


becomes f a (f b (f c z))

This is perhaps most elegantly seen by picturing the list data structure as a tree:

  :                            f
 / \                          / \
a   :       foldr f z        a   f
   / \     ----------->         / \
  b   :                        b   f
     / \                          / \
    c   []                       c   z

It is fairly easy to see with this picture that foldr (:) [] is just the identity function on lists. The left-associative foldl processes the list in the opposite direction:

foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z [] = z
foldl f z (x:xs) = foldl f (f z x) xs

So brackets in the resulting expression accumulate on the left. Our list above, after being transformed by foldl f z becomes: f (f (f z a) b) c

The corresponding trees look like:

  :                            f
 / \                          / \
a   :       foldl f z        f   c
   / \     ----------->     / \
  b   :                    f   b
     / \                  / \
    c   []               z   a
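Because subtraction is not associative, a quick check makes the difference in bracketing very concrete (these two evaluations are just an illustration):

foldr (-) 0 [1,2,3]   -- 1 - (2 - (3 - 0))  gives  2
foldl (-) 0 [1,2,3]   -- ((0 - 1) - 2) - 3  gives  -6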

Technical Note: The left associative fold is tail-recursive, that is, it recurses immediately, calling itself. For this reason the compiler will optimise it to a simple loop, and it will then be much more efficient than foldr. However, Haskell is a lazy language, and so the calls to f will by default be left unevaluated, building up an expression in memory whose size is linear in the length of the list, exactly what we hoped to avoid in the first place. To get back this efficiency, there is a version of foldl which is strict, that is, it forces the evaluation of f immediately, called foldl'. Note the single quote character: this is pronounced "fold-ell-tick". A tick is a valid character in Haskell identifiers. foldl' can be found in the library Data.List.

As a rule you should use foldr on lists that might be infinite or where the fold is building up a data structure, and foldl' if the list is known to be finite and comes down to a single value.

As previously noted, the type declaration for foldr makes it quite possible for the list elements and result to be of different types. For example, "read" is a function that takes a string and converts it into some type (the type system is smart enough to figure out which one). In this case we convert it into a float.


Example: The list elements and results can have different types addStr :: String -> Float -> Float addStr str x = read str + x sumStr :: [String] -> Float sumStr = foldr addStr 0.0

If you substitute the types Float and String for the type variables "a" and "b" in the type of foldr you will see that this is type correct. There is also a variant called foldr1 (that is "fold - arr - one") which dispenses with an explicit zero by taking the last element of the list instead:

foldr1 :: (a -> a -> a) -> [a] -> a
foldr1 f [x]    = x
foldr1 f (x:xs) = f x (foldr1 f xs)
foldr1 _ []     = error "Prelude.foldr1: empty list"

And foldl1 as well:

foldl1 :: (a -> a -> a) -> [a] -> a
foldl1 f (x:xs) = foldl f x xs
foldl1 _ []     = error "Prelude.foldl1: empty list"

Note: There is additionally a strict version of foldl1 called foldl1' in the Data.List library. Notice that in this case all the types have to be the same, and that an empty list is an error. These variants are occasionally useful, especially when there is no obvious candidate for z, but you need to be sure that the list is not going to be empty. If in doubt, use foldr or foldl'. One good reason that right-associative folds are more natural to use in Haskell than left-associative ones is that right folds can operate on infinite lists, which are not so uncommon in Haskell programming. If the input function f only needs its first parameter to produce the first part of the output, then everything works just fine. However, a left fold will continue recursing, never producing anything in terms of output until it reaches the end of the input list. Needless to say, this never happens if the input list is infinite, and the program will spin endlessly in an infinite loop. As a toy example of how this can work, consider a function "echoes" taking a list of integers, and producing a list where if the number n occurs in the input list, then n replicated n times will occur in the output list. We will make use of the prelude function "replicate": replicate n x is a list of length n with x the value of every element. We can write echoes as a foldr quite handily: echoes = foldr (\x xs -> (replicate x x) ++ xs) []


or as a foldl: echoes = foldl (\xs x -> xs ++ (replicate x x)) []

but only the first definition works on an infinite list like [1..]. Try it! Note the syntax in the above example: the \xs x -> means that xs is set to the first argument outside the parentheses (in this case, []), and x is set to the second (will end up being the argument of echoes when it is called). As a final example, another thing that you might notice is that "map" itself is patterned as a fold: map f = foldr (\x xs -> f x : xs) []
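If you load the fold-based definitions above into an interpreter, a quick check of the infinite-list point might look like this (the results shown assume the foldr version of echoes):

echoes [1,2,3]           -- gives [1,2,2,3,3,3]
take 10 (echoes [1..])   -- gives [1,2,2,3,3,3,4,4,4,4], even though the input list is infinite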

Folding takes a little time to get used to, but it is a fundamental pattern in functional programming, and eventually becomes very natural. Any time you want to traverse a list and build up a result from its members you want a fold. Exercises Define the following functions recursively (like the definitions for sum, product and concat above), then turn them into a fold: and :: [Bool] -> Bool, which returns True if a list of Bools are all True, and False otherwise. or :: [Bool] -> Bool, which returns True if any of a list of Bools are True, and False otherwise. Define the following functions using foldl1 or foldr1: maximum :: Ord a => [a] -> a, which returns the maximum element of a list (hint: max :: Ord a => a -> a -> a returns the maximum of two values). minimum :: Ord a => [a] -> a, which returns the minimum element of a list (hint: min :: Ord a => a -> a -> a returns the minimum of two values).

Scans A "scan" is much like a cross between a map and a fold. Folding a list accumulates a single return value, whereas mapping puts each item through a function with no accumulation. A scan does both: it accumulates a value like a fold, but instead of returning a final value it returns a list of all the intermediate values. The Standard Prelude contains four scan functions: scanl

:: (a -> b -> a) -> a -> [b] -> [a]


This accumulates the list from the left, and the second argument becomes the first item in the resulting list. So scanl (+) 0 [1,2,3] = [0,1,3,6]

scanl1 :: (a -> a -> a) -> [a] -> [a]

This is the same as scanl, but uses the first item of the list as a zero parameter. It is what you would typically use if the input and output items are the same type. Notice the difference in the type signatures. scanl1 (+) [1,2,3] = [1,3,6].

scanr  :: (a -> b -> b) -> b -> [a] -> [b]
scanr1 :: (a -> a -> a) -> [a] -> [a]

These two functions are the exact counterparts of scanl and scanl1. They accumulate the totals from the right. So: scanr (+) 0 [1,2,3] = [6,5,3,0] scanr1 (+) [1,2,3] = [6,5,3]
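As one more small illustration (runningMax is a made-up name, not a Prelude function), a scan keeps all the intermediate results that the corresponding fold would throw away:

runningMax :: Ord a => [a] -> [a]
runningMax = scanl1 max

runningMax [3,1,4,1,5]   -- gives [3,3,4,4,5]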

Exercises
Define the following functions:
factList :: Int -> [Int], which returns a list of factorials from 1 up to its argument. (Won't using Int here lead to overflow issues?)

More on functions As functions are absolutely essential to functional programming, there are some nice features you can use to make using functions easier.

Private Functions
Remember the sumStr function from the chapter on list processing. It used another function called addStr (the version here takes its arguments in the opposite order, to suit the left fold used below):

addStr :: Float -> String -> Float
addStr x str = x + read str


sumStr :: [String] -> Float sumStr = foldl addStr 0.0

So you could find that addStr 4.3 "23.7"

gives 28.0, and sumStr ["1.2", "4.3", "6.0"]

gives 11.5. But maybe you don't want addStr cluttering up the top level of your program. Haskell lets you nest declarations in two subtly different ways: sumStr = foldl addStr 0.0 where addStr x str = x + read str

sumStr = let addStr x str = x + read str in foldl addStr 0.0

The difference between let and where lies in the fact that let foo = 5 in foo + foo is an expression, but foo + foo where foo = 5 is not. (Try it: an interpreter will reject the latter expression.) Where clauses are part of the function declaration as a whole, which makes a difference when using guards.

Anonymous Functions An alternative to creating a named function like addStr is to create an anonymous function, also known as a lambda function. For example, sumStr could have been defined like this: sumStr = foldl (\x str -> x + read str) 0.0

The bit in the parentheses is a lambda function. The backslash is used as the nearest ASCII equivalent to the Greek letter lambda (λ). This example is a lambda function with two arguments, x and str, and the result is "x + read str". So, the sumStr presented just above is precisely the same as the one that used addStr in a let binding. Lambda functions are handy for one-off function parameters, especially where the function in question is simple. The example above is about as complicated as you want to get.

Infix versus Prefix As we noted in the previous chapter, you can take an operator and turn it into a function by surrounding it in brackets:


2 + 4 (+) 2 4

This is called making the operator prefix: you're using it before its arguments, so it's known as a prefix function. We can now formalise the term 'operator': it's a function which is entirely non-alphanumeric characters, and is used infix (normally). You can define your own operators just the same as functions, just don't use any alphanumeric characters. For example, here's the set-difference definition from Data.List: (\\) :: Eq a => [a] -> [a] -> [a] xs \\ ys = foldl (\x y -> delete y x) xs ys

Note that aside from just using operators infix, you can define them infix as well. This is a point that most newcomers to Haskell miss. I.e., although one could have written: (\\) xs ys = foldl (\x y -> delete y x) xs ys

It's more common to define operators infix. However, do note that in type declarations, you have to surround the operators by parentheses. You can use a variant on this parentheses style for 'sections': (2+) 4 (+4) 2

These sections are functions in their own right. (2+) has the type Int -> Int, for example, and you can pass sections to other functions, e.g. map (+2) [1..4]. If you have a (prefix) function, and want to use it as an operator, simply surround it by backticks: 1 `elem` [1..4]

This is called making the function infix: you're using it in between its arguments. It's normally done for readability purposes: 1 `elem` [1..4] reads better than elem 1 [1..4]. You can also define functions infix: elem :: Eq a => a -> [a] -> Bool x `elem` xs = any (==x) xs

But once again notice that in the type signature you have to use the prefix style. Sections even work with infix functions: (1 `elem`) [1..4] (`elem` [1..4]) 1

You can only make binary functions (those that take two arguments) infix. Think about the functions you use, and see which ones would read better if you used them infix.


Exercises Lambdas are a nice way to avoid defining unnecessary separate functions. Convert the following let- or where-bindings to lambdas: map f xs where f x = x * 2 + 3 let f x y = read x + y in foldr f 1 xs Sections are just syntactic sugar for lambda operations. I.e. (+2) is equivalent to \x -> x + 2. What would the following sections 'desugar' to? What would be their types? (4+) (1 `elem`) (`notElem` "abc")

Higher-order functions and Currying
Higher-order functions are functions that take other functions as arguments. We have already met some of them, such as map, so there isn't anything really frightening or unfamiliar about them. They offer a form of abstraction that is unique to the functional programming style. In functional programming languages like Haskell, functions are just like any other value, so it doesn't get any harder to deal with higher-order functions.

Higher order functions have a separate chapter in this book, not because they are particularly difficult (we've already worked with them, after all), but because they are powerful enough to draw special attention to them. We will see in this chapter how much we can do if we can pass around functions as values. Generally speaking, it is a good idea to abstract over a functionality whenever we can. Besides, Haskell without higher order functions wouldn't be quite as much fun.

The Quickest Sorting Algorithm In Town
Don't get too excited, but quickSort is certainly one of the quickest. Have you heard of it? If you have, you can skip the following subsection and go straight to the next one:

The Idea Behind quickSort
The idea is very simple. For a big list, we pick an element, and divide the whole list into three parts. The first part has all elements that should go before that element, the second part consists of all of the elements that are equal to the picked element, the third has the elements that ought to go after that element. And then, of course, we concatenate these three parts. What we get is somewhat better than what we started with, right? The trick is to note that only the first and the third are yet to be sorted, and for the second, sorting doesn't really make sense (they are all equal!). How to go about sorting the yet-to-be-sorted sub-lists? Why... apply the same algorithm on them again! By the time the whole process is finished, you get a completely sorted list.


So Let's Get Down To It! -- if the list is empty, we do nothing -- note that this is the base case for the recursion quickSort [] = [] -- if there's only one element, no need to sort it -- actually, the third case takes care of this one pretty well -- I just wanted you to take it step by step quickSort [x] = [x] -- this is the gist of the process -- we pick the first element as our "pivot", the rest is to be sorted -- don't forget to include the pivot in the middle part! quickSort (x : xs) = (quickSort less) ++ (x : equal) ++ (quickSort more) where less = filter (< x) xs equal = filter (== x) xs more = filter (> x) xs

And we are done! I suppose if you have met quickSort before, you thought recursion was a neat trick but hard to implement, since so many things need to be kept track of.

Now, How Do We Use It?
With quickSort at our disposal, sorting any list is a piece of cake. Suppose we have a list of Strings, maybe from a dictionary, and we want to sort them; we just apply quickSort to the list. For the rest of this chapter, we will use a pseudo-dictionary of words (but a 25,000 word dictionary should do the trick as well):

dictionary = ["I", "have", "a", "thing", "for", "Linux"]

We get, for quickSort dictionary, ["I", "Linux", "a", "for", "have", "thing"]

But, what if we wanted to sort them in the descending order? Easy, just reverse the list: reverse sortedDictionary gives us what we want. But wait! We didn't really sort in the descending order, we sorted (in the ascending order) and reversed it. They may have the same effect, but they are not the same thing! Besides, you might object that the list you got isn't what you wanted. "a" should certainly be placed before "I". "Linux" should be placed between "have" and "thing". What's the problem here? The problem is, the way Strings are represented in a typical programming setting is by a list of ASCII characters. ASCII (and almost all other encodings of characters) specifies that the character codes for capital letters are less than those for small letters. Bummer. So "Z" is less than "a". We should do something about it. Looks like we need a case insensitive quickSort as well. It might come in handy some day. But, there's no way you can blend that into quickSort as it stands. We have work to do.

Tweaking What We Already Have


What we need to do is to factor out the comparisons quickSort makes. We need to provide quickSort with a function that compares two elements, and gives an Ordering, and as you can imagine, an Ordering is any of LT, EQ, GT. To sort in the descending order, we supply quickSort with a function that returns the opposite of the usual Ordering. For the case-insensitive sort, we may need to define the function ourselves. By all means, we want to make quickSort applicable to all such functions so that we don't end up writing it over and over again, each time with only minor changes.

quickSort, Take Two
So, forget the version of quickSort we have now, and let's think again. Our quickSort will take two things this time: first, the comparison function, and second, the list to sort. A comparison function will be a function that takes two things, say, x and y, and compares them. If x is less than y (according to the criteria we want to implement by this function), then the value will be LT. If they are equal (well, equal with respect to the comparison, we want "Linux" and "linux" to be equal when we are dealing with the insensitive case), we will have EQ. The remaining case gives us GT (pronounced: greater than, for obvious reasons).

-- no matter how we compare two things
-- the first two equations should not change
-- they need to accept the comparison function though
quickSort comparison [] = []
quickSort comparison [x] = [x]

-- we are in a more general setting now
-- but the changes are worth it!
quickSort comparison (x : xs) = (quickSort comparison less) ++ (x : equal) ++ (quickSort comparison more)
    where less  = filter (\y -> comparison y x == LT) xs
          equal = filter (\y -> comparison y x == EQ) xs
          more  = filter (\y -> comparison y x == GT) xs

Cool! Note Almost all the basic data types in Haskell are members of the Ord class. This class defines an ordering, the "natural" one. The functions (or, operators, in this case) (<), (<=) or (>) provide shortcuts to the compare function each type defines. When we want to use the natural ordering as defined by the types themselves, the above code can be written using those operators, as we did last time. In fact, that makes for much clearer style; however, we wrote it the long way just to make the relationship between sorting and comparing more evident.

But What Did We Gain? Reuse. We can reuse quickSort to serve different purposes.


-- the usual ordering -- uses the compare function from the Ord class usual = compare -- the descending ordering, note we flip the order of the arguments to compare descending x y = compare y x -- the case-insensitive version is left as an exercise! insensitive = ... -- can you think of anything without making a very big list of all possible cases?

And we are done! quickSort usual dictionary

should, then, give ["I", "Linux", "a", "for", "have", "thing"]

The comparison is just compare from the Ord class. This was our quickSort, before the tweaking.

quickSort descending dictionary

now gives ["thing", "have", "for", "a", "Linux", "I"]

And finally, quickSort insensitive dictionary

gives ["a", "for", "have", "I", "Linux", "thing"]

Exactly what we wanted! Exercises Write insensitive, such that quickSort insensitive dictionary gives ["a", "for", "have", "I", "Linux", "thing"]

Higher-Order Functions and Types

Our quickSort has type (a -> a -> Ordering) -> [a] -> [a]. Most of the time, the type of a higher-order function provides a good guideline about how to use it. A straightforward way of reading the type signature would be, "quickSort takes a function that gives an ordering of as, and a list of as, to give a list of as". It is then natural to guess that the function sorts the list respecting the given ordering function. Note that the parentheses surrounding a -> a -> Ordering are mandatory. They say that a -> a -> Ordering altogether forms a single argument, an argument that happens to be a function. What happens if we omit the parentheses? We would get a function of type a -> a -> Ordering -> [a] -> [a], which accepts four arguments instead of the desired two (a -> a -> Ordering and [a]). Furthermore, none of the four arguments, whether a, Ordering or [a], is a function, so omitting the parentheses would give us something that isn't a higher order function.

Furthermore, it's worth noting that the -> operator is right-associative, which means that a -> a -> Ordering -> [a] -> [a] means the same thing as a -> (a -> (Ordering -> ([a] -> [a]))). We really must insist that the a -> a -> Ordering be clumped together by writing those parentheses... but wait... if -> is right-associative, wouldn't that mean that the correct signature (a -> a -> Ordering) -> [a] -> [a] actually means... (a -> a -> Ordering) -> ([a] -> [a]) ? Is that really what we want? If you think about it, we're trying to build a function that takes two arguments, a function and a list, returning a list. Instead, what this type signature is telling us is that our function takes ONE argument (a function) and returns another function. That is profoundly odd... but if you're lucky, it might also strike you as being profoundly beautiful. Functions of multiple arguments are fundamentally the same thing as functions that take one argument and give another function back. It's OK if you're not entirely convinced. We'll go into a little bit more detail below and then show how something like this can be turned to our advantage.
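As a minimal sketch of that reading (sortDescending is a made-up name), supplying quickSort with just its comparison function already hands back a new function that is still waiting for its list:

sortDescending :: Ord a => [a] -> [a]
sortDescending = quickSort descending

sortDescending dictionary   -- gives ["thing", "have", "for", "a", "Linux", "I"]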

Currying

Intermediate Haskell Modules Modules Haskell modules are a useful way to group a set of related functionalities into a single package and manage a set of different functions that have the same name. The module definition is the first thing that goes in your Haskell file.


Here is what a basic module definition looks like: module YourModule where

Note that 1. Each file contains only one module 2. The name of the module begins with a capital letter

Importing
One thing your module can do is import functions from other modules. That is, in between the module declaration and the rest of your code, you may include some import declarations such as

-- import only the functions toLower and toUpper from Data.Char
import Data.Char (toLower, toUpper)

-- import everything exported from Data.List
import Data.List

-- import everything exported from MyModule
import MyModule

Imported datatypes are specified by their name, followed by a list of imported constructors in parentheses. For example:

-- import only the Tree data type, and its Node constructor from Data.Tree
import Data.Tree (Tree(Node))

Now what to do if you import some modules, but some of them have overlapping definitions? Or if you import a module, but want to overwrite a function yourself? There are three ways to handle these cases: Qualified imports, hiding definitions and renaming imports. Qualified imports Say MyModule and MyOtherModule both have a definition for remove_e, which removes all instances of e from a string. However, MyModule only removes lower-case e's, and MyOtherModule removes both upper and lower case. In this case the following code is ambiguous: -- import everything exported from MyModule import MyModule -- import everything exported from MyOtherModule import MyOtherModule -- someFunction puts a c in front of the text, and removes all e's from the rest someFunction :: String -> String someFunction text = 'c' : remove_e text

In this case, it isn't clear which remove_e is meant. To avoid this, use the qualified keyword:


import qualified MyModule import qualified MyOtherModule someFunction text = 'c' : MyModule.remove_e text -- Will work, removes lower case e's someOtherFunction text = 'c' : MyOtherModule.remove_e text -- Will work, removes all e's someIllegalFunction text = 'c' : remove_e text -- Won't work, remove_e isn't defined.

See the difference. In this case the function remove_e isn't even defined. We call the functions from the imported modules by adding the module's name. Note that MyModule.remove_e also works if the qualified flag isn't included. The difference lies in the fact that remove_e is ambiguously defined in the first case, and undefined in the second case. If we have a remove_e defined in the current module, then using remove_e without any prefix will call this function. Note There is an ambiguity between a qualified name like MyModule.remove_e and function composition (.). Writing reverse.MyModule.remove_e is bound to confuse your Haskell compiler. One solution is stylistic: to always use spaces for function composition, for example, reverse . remove_e or Just . remove_e or even Just . MyModule.remove_e

Hiding definitions Now suppose we want to import both MyModule and MyOtherModule, but we know for sure we want to remove all e's, not just the lower cased ones. It will become really tedious (and disorderly) to add MyOtherModule before every call to remove_e. Can't we just not import remove_e from MyModule? The answer is: yes we can. -- Note that I didn't use qualified this time. import MyModule hiding (remove_e) import MyOtherModule someFunction text = 'c' : remove_e text

This works. Why? Because of the word hiding on the import line. It is followed by a list of functions that shouldn't be imported. Hiding more than one function works like this:

import MyModule hiding (remove_e, remove_f)

Note that algebraic datatypes and type synonyms cannot be hidden. These are always imported. If you have a datatype defined in more than one module, you must use qualified.

Renaming imports
This is not really a technique to allow for overwriting, but it is often used along with the qualified flag. Imagine:


import qualified MyModuleWithAVeryLongModuleName someFunction text = 'c' : MyModuleWithAVeryLongModuleName.remove_e $ text

Especially when using qualified, this gets irritating. What we can do about it, is using the as keyword: import qualified MyModuleWithAVeryLongModuleName as Shorty someFunction text = 'c' : Shorty.remove_e $ text

This allows us to use Shorty instead of MyModuleWithAVeryLongModuleName as prefix for the imported functions. As long as there are no ambiguous definitions, the following is also possible: import MyModule as My import MyCompletelyDifferentModule as My

In this case, both the functions in MyModule and the functions in MyCompletelyDifferentModule can be prefixed with My.

Exporting In the examples at the start of this article, the words "import everything exported from MyModule" were used. This raises a question. How can we decide which functions are exported and which stay "internal"? Here's how: module MyModule (remove_e, add_two) where add_one blah = blah + 1 remove_e text = filter (/= 'e') text add_two blah = add_one . add_one $ blah

In this case, only remove_e and add_two are exported. While add_two is allowed to make use of add_one, functions in modules that import MyModule aren't allowed to try to use add_one, as it isn't exported. Datatype export specifications are written quite similarly to import. You name the type, and follow with the list of constructors in parentheses:

module MyModule2 (Tree(Branch, Leaf)) where

data Tree a = Branch {left, right :: Tree a} | Leaf a

In this case, the module declaration could be rewritten "MyModule2 (Tree(..))", declaring that all constructors are exported.
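For reference, that rewritten module header would look like this:

module MyModule2 (Tree(..)) where

data Tree a = Branch {left, right :: Tree a} | Leaf a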


Note: maintaining an export list is good practise not only because it reduces namespace pollution, but also because it enables certain compile-time optimizations (http://www.haskell.org/haskellwiki/Performance/ GHC#Inlining) which are unavailable otherwise.

Notes
In Haskell98, the last standardised version of Haskell, the module system is fairly conservative. But recent common practice consists of using a hierarchical module system, using periods to section off namespaces. A module may export functions that it imports. See the Haskell report for more details on the module system: http://www.haskell.org/onlinereport/modules.html
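For instance, a hierarchical module name in practice, qualified and renamed as described earlier (Data.Map is one of the standard hierarchical library modules; the short name M is our own choice):

import qualified Data.Map as M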

Indentation Haskell relies on indentation to reduce the verbosity of your code, but working with the indentation rules can be a bit confusing. The rules may seem many and arbitrary, but the reality of things is that there are only one or two layout rules, and all the seeming complexity and arbitrariness comes from how these rules interact with your code. So to take the frustration out of indentation and layout, the simplest solution is to get a grip on these rules.

The golden rule of indentation Whilst the rest of this chapter will discuss in detail Haskell's indentation system, you will do fairly well if you just remember a single rule:

Code which is part of some expression should be indented further in than the line containing the beginning of that expression

What does that mean? The easiest example is a let binding group. The equations binding the variables are part of the let expression, and so should be indented further in than the beginning of the binding group: the let keyword. So,

let
 x = a
 y = b

Although you actually only need to indent by one extra space, it's more normal to place the first line alongside the 'let' and indent the rest to line up:

let x = a
    y = b


Here are some more examples:

do foo
   bar
   baz

where x = a
      y = b

case x of
  p  -> foo
  p' -> baz

Note that with 'case' it's less common to place the next expression on the same line as the beginning of the expression, as with 'do' and 'where'. Also note we lined up the arrows here: this is purely aesthetic and isn't counted as different layout; only indentation, whitespace beginning on the far-left edge, makes a difference to layout.

Things get more complicated when the beginning of the expression isn't right at the left-hand edge. In this case, it's safe to just indent further than the beginning of the line containing the beginning of the expression. So,

myFunction firstArgument secondArgument = do -- the 'do' isn't right at the left-hand edge
  foo -- so indent these commands more than the beginning of the line containing the 'do'.
  bar
  baz

Here are some alternative layouts to the above which would have also worked: myFunction firstArgument secondArgument = do foo bar baz myFunction firstArgument secondArgument = do foo bar baz

A mechanical translation Did you know that layout (whitespace) is optional? It is entirely possible to treat Haskell as a one-dimensional language like C, using semicolons to separate things, and curly braces to group them back. To understand layout, you need to understand two things: where we need semicolons/braces, and how to get there from layout. The entire layout process can be summed up in three translation rules:

It is sometimes useful to avoid layout or to mix it with semicolons and braces.

1. If you see one of the layout keywords, (let, where, of, do), insert an open curly brace (right before the stuff that follows it) 2. If you see something indented to the SAME level, insert a semicolon 3. If you see something indented LESS, insert a closing curly brace


Exercises
In one word, what happens if you see something indented MORE?

Exercises
Translate the following layout into curly braces and semicolons. Note: to underscore the mechanical nature of this process, we deliberately chose something which is probably not valid Haskell: of a b c d where a b c do you like the way i let myself abuse these layout rules

Layout in action

Wrong

do first thing
second thing
third thing

Right

do first thing
   second thing
   third thing

do within if What happens if we put a do expression with an if? Well, as we stated above, the keywords if then else, and everything besides the 4 layout keywords do not affect layout. So things remain exactly the same: Wrong if foo then do first thing second thing third thing else do something else

Right if foo then do first thing second thing third thing else do something else


Indent to the first
Remember from the First Rule of Layout Translation (above) that although the keyword do tells Haskell to insert a curly brace, where the curly brace goes depends not on the do, but the thing that immediately follows it. For example, this weird block of code is totally acceptable:

          do
first thing
second thing
third thing

As a result, you could also write combined if/do combination like this: Wrong

if foo then do first thing second thing third thing else do something else

Right if foo then do first thing second thing third thing else do something else

This is also the reason why you can write things like this

main = do
  first thing
  second thing

instead of

main = do first thing
          second thing

Both are acceptable

if within do
This is a combination which trips up many Haskell programmers. Why does the following block of code not work?

-- why is this bad?
do first thing
   if condition
   then foo
   else bar
   third thing

Just to reiterate, the if then else block is not at fault for this problem. Instead, the issue is that the do block notices that the then part is indented to the same column as the if part, so it is not very happy,


because from its point of view, it just found a new statement of the block. It is as if you had written the unsugared version shown under "unsweet" below:

sweet (layout)

-- why is this bad?
do first thing
   if condition
   then foo
   else bar
   third thing

unsweet

-- still bad, just explicitly so
do { first thing
   ; if condition
   ; then foo
   ; else bar
   ; third thing }

Naturally enough, your Haskell compiler is unimpressed, because it thinks that you never finished writing your if expression, before charging off to write some other new statement, oh ye of little attention span. Your compiler sees that you have written something like if condition;, which is clearly bad, because it is unfinished. So, in order to fix this, we need to indent the bottom parts of this if block a little bit inwards

sweet (layout)

-- whew, fixed it!
do first thing
   if condition
     then foo
     else bar
   third thing

unsweet

-- the fixed version without sugar
do { first thing
   ; if condition
       then foo
       else bar
   ; third thing }

This little bit of indentation prevents the do block from misinterpreting your then as a brand new expression. Exercises The if-within-do problem has tripped up so many Haskellers, that one programmer has posted a proposal (http://hackage.haskell.org/trac/haskell-prime/ticket/23) to the Haskell prime initiative to add optional semicolons between if then else. How would that fix the problem?

References The Haskell Report (lexemes) (http://www.haskell.org/onlinereport/lexemes.html#sect2.7) - see 2.7 on layout

More on datatypes Enumerations One special case of the data declaration is the enumeration. This is simply a data type where none of the constructor functions have any arguments:


data Month = January | February | March | April | May | June | July | August | September | October | November | December

You can mix constructors that do and do not have arguments, but it's only an enumeration if none of the constructors have arguments. The section below on "Deriving" explains why the distinction is important. For instance, data Colour = Black | Red | Green | Blue | Cyan | Yellow | Magenta | White | RGB Int Int Int

The last constructor takes three arguments, so Colour is not an enumeration. Incidentally, the definition of the Bool datatype is: data Bool = False | True deriving (Eq, Ord, Enum, Read, Show, Bounded)
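As a sketch of why the distinction matters (this anticipates the Deriving section below; the deriving clause is our own addition to the Month type above), a pure enumeration can derive Enum and Bounded, which gives you, for example, the list of all its values for free:

data Month = January | February | March | April | May | June | July
           | August | September | October | November | December
           deriving (Show, Eq, Ord, Enum, Bounded)

months :: [Month]
months = [minBound .. maxBound]   -- all twelve months, in order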

Named Fields (Record Syntax)
Consider a datatype whose purpose is to hold configuration settings. Usually when you extract members from this type, you really only care about one or possibly two of the many settings. Moreover, if many of the settings have the same type, you might often find yourself wondering "wait, was this the fourth or fifth element?" One thing you could do would be to write accessor functions. Consider the following made-up configuration type for a terminal program:

data Configuration =
    Configuration String   -- user name
                  String   -- local host
                  String   -- remote host
                  Bool     -- is guest?
                  Bool     -- is super user?
                  String   -- current directory
                  String   -- home directory
                  Integer  -- time connected
    deriving (Eq, Show)

You could then write accessor functions, like (I've only listed a few): getUserName (Configuration un _ _ _ _ _ _ _) = un getLocalHost (Configuration _ lh _ _ _ _ _ _) = lh getRemoteHost (Configuration _ _ rh _ _ _ _ _) = rh getIsGuest (Configuration _ _ _ ig _ _ _ _) = ig ...

You could also write update functions to update a single element. Of course, now if you add an element to the configuration, or remove one, all of these functions now have to take a different number of arguments. This is highly annoying and is an easy place for bugs to slip in. However, there's a solution. We simply give names to the fields in the datatype declaration, as follows:


data Configuration = Configuration
    { username      :: String,
      localhost     :: String,
      remotehost    :: String,
      isguest       :: Bool,
      issuperuser   :: Bool,
      currentdir    :: String,
      homedir       :: String,
      timeconnected :: Integer
    }

This will automatically generate the following accessor functions for us: username :: Configuration -> String localhost :: Configuration -> String ...

Moreover, it gives us very convenient update methods. Here is a short example for a "post working directory" and "change directory" like functions that work on Configurations: changeDir :: Configuration -> String -> Configuration changeDir cfg newDir = -- make sure the directory exists if directoryExists newDir then -- change our current directory cfg{currentdir = newDir} else error "directory does not exist" postWorkingDir :: Configuration -> String -- retrieve our current directory postWorkingDir cfg = currentdir cfg

So, in general, to update the field x in a datatype y to z, you write y{x=z}. You can change more than one; each should be separated by commas, for instance, y{x=z, a=b, c=d}. It's only sugar You can of course continue to pattern match against Configurations as you did before. The named fields are simply syntactic sugar; you can still write something like: getUserName (Configuration un _ _ _ _ _ _ _) = un

But there is little reason to. Finally, you can pattern match against named fields as in: getHostData (Configuration {localhost=lh,remotehost=rh}) = (lh,rh)

This matches the variable lh against the localhost field on the Configuration and the variable rh against the remotehost field on the Configuration. These matches of course succeed. You could also constrain the matches by putting values instead of variable names in these positions, as you would for standard datatypes.


You can create values of Configuration in the old way as shown in the first definition below, or using the named-field syntax, as shown in the second definition below:

initCFG = Configuration "nobody" "nowhere" "nowhere" False False "/" "/" 0

initCFG' = Configuration
    { username      = "nobody",
      localhost     = "nowhere",
      remotehost    = "nowhere",
      isguest       = False,
      issuperuser   = False,
      currentdir    = "/",
      homedir       = "/",
      timeconnected = 0 }

Though the second is probably much more understandable unless you litter your code with comments.

Parameterised Types Parameterised types are similar to "generic" or "template" types in other languages. A parameterised type takes one or more type parameters. For example the Standard Prelude type Maybe is defined as follows: data Maybe a = Nothing | Just a

This says that the type Maybe takes a type parameter a. You can use this to declare, for example: lookupBirthday :: [Anniversary] -> String -> Maybe Anniversary

The lookupBirthday function takes a list of birthday records and a string and returns a Maybe Anniversary. Typically, our interpretation is that if it finds the name then it will return Just the corresponding record, and otherwise, it will return Nothing. You can parameterise type and newtype declarations in exactly the same way. Furthermore you can combine parameterised types in arbitrary ways to construct new types. More than one type parameter We can also have more than one type parameter. An example of this is the Either type: data Either a b = Left a | Right b

For example:

eitherExample :: Int -> Either Int String
eitherExample a
  | even a         = Left (a `div` 2)
  | a `mod` 3 == 0 = Right "three"
  | otherwise      = Right "neither two nor three"

otherFunction :: Int -> String
otherFunction a = case eitherExample a of


Left c = "Even: " ++ show a ++ " = 2*" ++ show c ++ "." Right s = show a ++ " is divisible by " ++ s ++ "."

In this example, when you call otherFunction, it'll return a String. If you give it an even number as argument, it'll say so, and give half of it. If you give it anything else, eitherExample will determine whether it's divisible by three, and otherFunction will report the result.
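Loading these definitions into an interpreter, you would see, for example:

otherFunction 4   -- gives "Even: 4 = 2*2."
otherFunction 9   -- gives "9 is divisible by three."
otherFunction 7   -- gives "7 is divisible by neither two nor three."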

Kind Errors The flexibility of Haskell parameterised types can lead to errors in type declarations that are somewhat like type errors, except that they occur in the type declarations rather than in the program proper. Errors in these "types of types" are known as "kind" errors. You don't program with kinds: the compiler infers them for itself. But if you get parameterised types wrong then the compiler will report a kind error.

Trees Now let's look at one of the most important datastructures: Trees. A tree is an example of a recursive datatype. Typically, its definition will look like this: data Tree a = Leaf a | Branch a (Tree a) (Tree a)

As you can see, it's parameterised, so we can have trees of Ints, trees of Strings, trees of Maybe Ints, even trees of (Int, String) tuples, if you really want. What makes it special is that Tree appears in the definition of itself. We will see how this works by using an already known example: the list. Lists as Trees Think about it. As we have seen in the List Processing chapter, we break lists down into two cases: An empty list (denoted by []), and an element of the specified type, with another list (denoted by (x:xs)). This gives us valuable insight about the definition of lists: data [a] = [] | (a:[a]) -- Pseudo-Haskell, will not work properly.

Which is sometimes written as (for Lisp-inclined people): data List a = Nil | Cons a (List a)

As you can see this is also recursive, like the tree we had. Here, the constructor functions are [] and (:). They represent what we have called Leaf and Branch. We can use these in pattern matching, just as we did with the empty list and the (x:xs): Maps and Folds We already know about maps and folds for lists. With our realisation that a list is some sort of tree, we can try to write map and fold functions for our own type Tree. To recap:


data Tree a = Leaf a | Branch a (Tree a) (Tree a) data [a] = [] | (:) a [a] -- (:) a [a] would be the same as (a:[a]) with prefix instead of infix notation.

I will handle map first, then folds. Map Let's take a look at the definition of map for lists: map :: (a -> b) -> [a] -> [b] map _ [] = [] map f (x:xs) = f x : map f xs

First, if we were to write treeMap, what would its type be? Defining the function is easier if you have an idea of what its type should be. We want it to work on a Tree of some type, and it should return another Tree of some type. What treeMap does is applying a function on each element of the tree, so we also need a function. In short: treeMap :: (a -> b) -> Tree a -> Tree b

See how this is similar to the list example? Next, we should start with the easiest case. When talking about a Tree, this is obviously the case of a Leaf. A Leaf only contains a single value, so all we have to do is apply the function to that value and then return a Leaf with the altered value: treeMap :: (a -> b) -> Tree a -> Tree b treeMap f (Leaf x) = Leaf (f x)

Also, this looks a lot like the empty list case with map. Now what happens if we have a Branch. This will include one value of type a, and two other trees. The function we take as argument can transform this value of type a into a value of type b, but what about the two subtrees? When looking at the list-map, you can see it uses a call to itself on the tail of the list. We also shall do that with the two subtrees. The complete definition of treeMap is as follows: treeMap :: (a -> b) -> Tree a -> Tree b treeMap f (Leaf x) = Leaf (f x) treeMap f (Branch x firstSub secondSub) = Branch (f x) (treeMap f firstSub) (treeMap f secondSub)

If you don't understand it just now, re-read it. Especially the use of pattern matching may seem weird at first, but it is essential to the use of datatypes. The most important thing to remember is that pattern matching happens on constructor functions. If you understand it, read on for folds. Fold


Now we've had the treeMap, let's try to write a treeFold. Again let's take a look at the definition of foldr for lists, as those are easier to understand. foldr :: (a -> b -> b) -> b -> [a] -> b foldr f z [] = z foldr f z (x:xs) = f x (foldr f z xs)

I'll use the same strategy to find a definition for treeFold as I did for treeMap. First, the type. What do we want it to do? We need a tree of some type to transform into a value of some other type. This Tree a fits nicely into the place of [a]. In case of a Leaf, we will want some replacement, and in case of a Branch we'll need a function that combines a value of type a and two already folded trees into a value of type b. This gives us the following idea for a type definition: treeFold :: (a -> b -> b -> b) -> b -> Tree a -> b

The (a -> b -> b -> b) might look frightening, but remember: the 'a' is the single value in a Branch, the first and second 'b' are the two subtrees, the third 'b' is the return type. Now, let's figure out what to do in case of a Leaf. We had a separate 'b' in our type definition especially for that purpose, so let's use it here: treeFold :: (a -> b -> b -> b) -> b -> Tree a -> b treeFold f z (Leaf x) = f x z z

This looks similar to foldr on lists except that we are applying f to the Leaf value x, and using z as fillers for the two remaining parameters to f (remember that f takes 3 parameters altogether). Now for the Branch. First look at foldr. What does it do? It applies the function to the two 'parts' of the list: the front element and the folded version of the rest of the list. We have a function that works on three parameters. These are our single value, and the folded versions of the two subtrees. Our full definition becomes: treeFold :: (a -> b -> b -> b) -> b -> Tree a -> b treeFold f z (Leaf x) = f x z z treeFold f z (Branch x firstSub secondSub) = f x (treeFold f z firstSub) (treeFold f z secondSub)

And that is what we wanted. For examples of how these work, copy the Tree data definition and the treeMap and treeFold functions to a Haskell file, along with the following:

-- helper functions for treeFold. Here firstSub and secondSub are the already folded subtrees.
-- a and b as in the treeFold definition
addTree :: Int -> Int -> Int -> Int   -- a = Int and b = Int
addTree x firstSub secondSub = x + firstSub + secondSub

treeConcat :: c -> [c] -> [c] -> [c]  -- a = c and b = [c]
treeConcat x firstSub secondSub = x : (firstSub ++ secondSub)

tree1 :: Tree Int
tree1 = Branch 1
          (Branch 3
            (Branch 5 (Leaf 7) (Branch 6 (Leaf 4) (Leaf 1)))
            (Branch 2 (Leaf 3) (Branch 7 (Leaf 9) (Leaf 2))))
          (Branch 1 (Branch 8 (Leaf 2) (Leaf 1)) (Leaf 5))

add1Tree :: Tree Int -> Tree Int
add1Tree = treeMap (+1)

addTreeElements = treeFold addTree 0
treeToList = treeFold treeConcat []

Then load it into your favourite Haskell interpreter, and evaluate: add1Tree tree1 addTreeElements tree1 treeToList tree1

Other datatypes
Now, folds and maps aren't limited to trees and lists, despite what the previous chapters might suggest. They are very useful for any kind of data type. Let's look at the following, somewhat weird, type:

data Weird a b = First a | Second b | Third [(a,b)] | Fourth (Weird a b)

You are unlikely to ever use a type like this in a program you write yourself, but it demonstrates how folds and maps are really constructed.

General Map

Again, we start with weirdMap. Now, unlike before, this Weird type has two type parameters. This means that we can't just use one function (as was the case for lists and Tree); we need more. For every type parameter, we need one function. The type of weirdMap will be:

    weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d

Read it again, and it makes sense. Maps don't throw away the structure of a datatype, so if we start with a Weird thing, the output is also a Weird thing. Now we have to split it up into patterns. Remember that these patterns are the constructor functions. To avoid having to type the names of the functions again and again, I use a where clause:

    weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
    weirdMap fa fb = weirdMap'
      where
        weirdMap' (First a)          = --More to follow
        weirdMap' (Second b)         = --More to follow
        weirdMap' (Third ((a,b):xs)) = --More to follow
        weirdMap' (Fourth w)         = --More to follow

It isn't very hard to find the definition for the First and Second constructors. The list of (a,b) tuples is harder, and the Fourth is even recursive! Remember that a map preserves structure; this is important. That means a list of tuples stays a list of tuples; only the types are changed in some way or another. You might have already guessed what we should do with the list of tuples: we need to make another list whose elements are tuples. This might sound silly to repeat, but it makes clear that we first have to change the individual elements into other tuples, and then add them to a list. Together with the First and Second constructors, we get the following (note that this version doesn't quite typecheck -- weirdMap' returns a Weird value rather than a list -- but it captures the idea, and the next version fixes it):

    weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
    weirdMap fa fb = weirdMap'
      where
        weirdMap' (First a)          = First (fa a)
        weirdMap' (Second b)         = Second (fb b)
        weirdMap' (Third ((a,b):xs)) = Third ((fa a, fb b) : weirdMap' (Third xs))
        weirdMap' (Fourth w)         = --More to follow

First we change (a,b) into (fa a, fb b). Next we need the mapped version of the rest of the list to add to it. Since we don't have a function for a list of (a,b), we must change it back to a Weird value by adding Third. This isn't really stylish, though, as we first "unwrap" the Weird package and then pack it back in. This can be changed into a more elegant solution, in which we don't even have to break the list into elements ourselves! Remember we already had a function to change a list of some type into another list, of a different type? Yup, it's our good old map function for lists. Now what if the first type was, say, (a,b), and the second type (c,d)? That seems usable. Now we must think about the function we're mapping over the list. We have already found it in the above definition: it's the function that sends (a,b) to (fa a, fb b). Written in lambda notation: \(a, b) -> (fa a, fb b).

    weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
    weirdMap fa fb = weirdMap'
      where
        weirdMap' (First a)    = First (fa a)
        weirdMap' (Second b)   = Second (fb b)
        weirdMap' (Third list) = Third (map (\(a, b) -> (fa a, fb b)) list)
        weirdMap' (Fourth w)   = --More to follow

That's it! We only have to match the list once, and call the list map function on it. Now for the Fourth constructor. This is actually really easy: just weirdMap it again (remembering to pass fa and fb along)!

    weirdMap :: (a -> c) -> (b -> d) -> Weird a b -> Weird c d
    weirdMap fa fb = weirdMap'
      where
        weirdMap' (First a)    = First (fa a)
        weirdMap' (Second b)   = Second (fb b)
        weirdMap' (Third list) = Third (map (\(a, b) -> (fa a, fb b)) list)
        weirdMap' (Fourth w)   = Fourth (weirdMap fa fb w)
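As a quick sanity check, here is a small, hypothetical usage sketch (sampleWeird is an invented value, purely for illustration):

    sampleWeird :: Weird Int String
    sampleWeird = Fourth (Third [(1, "a"), (2, "b")])

    -- weirdMap (+1) (++ "!") sampleWeird
    --   evaluates to Fourth (Third [(2,"a!"),(3,"b!")])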

General Fold


For the map we were able to manage with one function per type parameter, but that isn't enough for a fold. For a fold, we'll need a function for every constructor function. This is also the case with lists! Remember, the constructors of a list are [] and (:). The 'z' argument in the foldr function corresponds to the [] constructor; the 'f' argument corresponds to the (:) constructor. The Weird datatype has four constructors, so we need four functions. Next, we have a parameter of the Weird a b type, and we want to end up with some other type of value. More specifically: the return type of each individual function we pass to weirdFold will be the return type of weirdFold itself.

    weirdFold :: (something1 -> c) -> (something2 -> c) -> (something3 -> c) -> (something4 -> c)
              -> Weird a b -> c

This in itself won't work. We still need the types of something1, something2, something3 and something4. But since we know the constructors, this won't be much of a problem. Let's first write down a sketch for our definition. Again, I use a where clause, so I don't have to write out the four functions all the time.

    weirdFold :: (something1 -> c) -> (something2 -> c) -> (something3 -> c) -> (something4 -> c)
              -> Weird a b -> c
    weirdFold f1 f2 f3 f4 = weirdFold'
      where
        weirdFold' (First a)    = --Something of type c here
        weirdFold' (Second b)   = --Something of type c here
        weirdFold' (Third list) = --Something of type c here
        weirdFold' (Fourth w)   = --Something of type c here

Again, the types and definitions of the first two functions are easy to find. The third one isn't very difficult either, as it's just some other combination of 'a' and 'b'. The fourth one, however, is recursive, and we have to watch out. As in the case of weirdMap, we also need to recursively use the weirdFold function here. This brings us to the following, final, definition:

    weirdFold :: (a -> c) -> (b -> c) -> ([(a,b)] -> c) -> (c -> c) -> Weird a b -> c
    weirdFold f1 f2 f3 f4 = weirdFold'
      where
        weirdFold' (First a)    = f1 a
        weirdFold' (Second b)   = f2 b
        weirdFold' (Third list) = f3 list
        weirdFold' (Fourth w)   = f4 (weirdFold f1 f2 f3 f4 w)
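To see the fold in action, here is a hedged sketch of a rough "size" measure (weirdSize is an invented name): one for a First or Second, the number of pairs for a Third, and one extra for each Fourth wrapper.

    weirdSize :: Weird a b -> Int
    weirdSize = weirdFold (const 1) (const 1) length (+1)

    -- weirdSize (First 'x')                          evaluates to 1
    -- weirdSize (Fourth (Third [(1,"a"),(2,"b")]))   evaluates to 3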

The hardest part, actually supplying f1, f2, f3 and f4, is left to you.

Folds on recursive datatypes

Since the Weird a b datatype didn't have enough recursion in it, here is some help for even weirder things. Weird was a fairly nice datatype: just one recursive constructor, which isn't even nested inside other structures. What would happen if we added a fifth constructor?

    Fifth [Weird a b] a (Weird a a, Maybe (Weird a b))

A valid, and difficult, question. In general, the following rules apply:


1. A function to be supplied to a fold has the same number of arguments as the corresponding constructor.
2. The type of such a function is the same as the type of the constructor, except that every occurrence of the type the constructor belongs to is replaced by the result type of the fold.
3. If a constructor is recursive, the complete fold function should be applied to the recursive part.
4. If a recursive instance appears inside another structure, the appropriate map function should be used.

So f5 would have the type:

    f5 :: [c] -> a -> (Weird a a, Maybe c)

as the type of Fifth is:

    Fifth :: [Weird a b] -> a -> (Weird a a, Maybe (Weird a b))

The definition of weirdFold' for the Fifth constructor will be:

    weirdFold' (Fifth list a (waa, maybe)) =
        f5 (map (weirdFold f1 f2 f3 f4 f5) list)
           a
           (waa, maybeMap (weirdFold f1 f2 f3 f4 f5) maybe)
      where
        maybeMap f Nothing  = Nothing
        maybeMap f (Just w) = Just (f w)

Now note that nothing strange happens with the Weird a a part. No weirdFold gets called. What's up? This is recursion, right? Well... not really. Weird a a is a different type from Weird a b, so it isn't real recursion. It isn't guaranteed that, for example, f2 will work with something of type 'a' where it expects a type 'b'. It can be true in some cases, but not for everything. Also look at the definition of maybeMap. Verify that it is indeed a map function: it preserves structure; only the types are changed.

Class declarations

Type classes are a way of ensuring that certain operations are defined on your inputs. For example, if you know that a certain type is an instance of the class Fractional, then you can take its reciprocal.


Note
For programmers coming from C++, Java and other object-oriented languages: the concept of "class" in Haskell is not the same as in OO languages. There are just enough similarities to cause confusion, but not enough to let you reason by analogy with what you already know. When you work through this section, try to forget everything you already know about classes and subtyping. It might help to mentally substitute the word "group" (or "interface") for "class" when reading this section. Java programmers in particular may find it useful to think of Haskell classes as being akin to Java interfaces.

Introduction

Haskell has several numeric types, including Int, Integer and Float. You can add any two numbers of the same type together, but not numbers of different types. You can also compare two numbers of the same type for equality. You can also compare two values of type Bool for equality, but you cannot add them together. The Haskell type system expresses these rules using classes. A class is a template for types: it specifies the operations that the types must support. A type is said to be an "instance" of a class if it supports these operations. For instance, here is the definition of the "Eq" class from the Standard Prelude. It defines the == and /= functions.

    class Eq a where
      (==), (/=) :: a -> a -> Bool
      -- Minimal complete definition:
      --     (==) or (/=)
      x /= y = not (x == y)
      x == y = not (x /= y)

This says that a type a is an instance of Eq if it supports these two functions. It also gives default definitions of the functions in terms of each other. This means that if an instance of Eq defines one of these functions then the other one will be defined automatically. Here is how we declare that a type is an instance of Eq:

    data Foo = Foo {x :: Integer, str :: String}

    instance Eq Foo where
      (Foo x1 str1) == (Foo x2 str2) = (x1 == x2) && (str1 == str2)
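With that instance in scope, equality on Foo values works as you would expect; in GHCi you should see something like this (note that the second call uses the default definition of /= in terms of ==):

    *Main> Foo 3 "orange" == Foo 3 "orange"
    True
    *Main> Foo 3 "orange" /= Foo 3 "apple"
    True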

There are several things to notice about this:

1. The class Eq is defined in the standard prelude. This code sample defines the type Foo and then declares it to be an instance of Eq. The three definitions (class, data type and instance) are completely separate and there is no rule about how they are grouped. You could just as easily create a new class Bar and then declare the type Integer to be an instance of it.
2. Types and classes are not the same thing. A class is a "template" for types. Again this is unlike most OO languages, where a class is also itself a type.
3. The definition of == depends on the fact that Integer and String are also members of Eq. In fact almost all types in Haskell (the most notable exception being functions) are members of Eq.
4. You can only declare types to be instances of a class if they were defined with data or newtype. Type synonyms are not allowed.

Deriving

Obviously most of the data types you create in any real program should be members of Eq, and for that matter a lot of them will also be members of other Standard Prelude classes such as Ord and Show. This would require large amounts of boilerplate for every new type, so Haskell has a convenient way to declare the "obvious" instance definitions using the keyword deriving. Using it, Foo would be written as:

    data Foo = Foo {x :: Integer, str :: String} deriving (Eq, Ord, Show)

This makes Foo an instance of Eq with exactly the same definition of == as before, and also makes it an instance of Ord and Show for good measure. If you are only deriving from one class then you can omit the parentheses around its name, e.g.:

    data Foo = Foo {x :: Integer, str :: String} deriving Eq

You can only use deriving with a limited set of built-in classes. They are:

Eq
    Equality operators == and /=.
Ord
    Comparison operators < <= > >=. Also min and max.
Enum
    For enumerations only. Allows the use of list syntax such as [Blue .. Green].
Bounded
    Also for enumerations, but can also be used on types that have only one constructor. Provides minBound and maxBound, the lowest and highest values that the type can take.
Show
    Defines the function show (note the letter case of the class and function names) which converts the type to a string. Also defines some other functions that will be described later.
Read
    Defines the function read which parses a string into a value of the type. As with Show it also defines some other functions as well.


The precise rules for deriving the relevant functions are given in the language report. However they can generally be relied upon to be the "right thing" for most cases. The types of elements inside the data type must also be instances of the class you are deriving. This provision of special magic for a limited set of predefined classes goes against the general Haskell philosophy that "built in things are not special". However it does save a lot of typing. Experimental work with Template Haskell is looking at how this magic, or something like it, can be extended to all classes.

Class Inheritance

Classes can inherit from other classes. For example, here is the definition of the class Ord from the Standard Prelude, for types that have comparison operators:

    class (Eq a) => Ord a where
      compare              :: a -> a -> Ordering
      (<), (<=), (>=), (>) :: a -> a -> Bool
      max, min             :: a -> a -> a

The actual definition is rather longer and includes default implementations for most of the functions. The point here is that Ord inherits from Eq. This is indicated by the => symbol in the first line. It says that any type that is an instance of Ord is also an instance of Eq, and hence must also implement the == and /= operations. A class can inherit from several other classes: just put all the ancestor classes in the parentheses before the =>. Strictly speaking those parentheses can be omitted for a single ancestor, but including them acts as a visual prompt that this is not the class being defined and hence makes for easier reading.
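For instance, a hypothetical class with two ancestors might look like this (Describable is an invented name, purely for illustration):

    class (Eq a, Show a) => Describable a where
      describe :: a -> String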

Standard Classes

This diagram, copied from the Haskell Report, shows the relationships between the classes and types in the Standard Prelude. The names in bold are the classes. The non-bold text are the types that are instances of each class. The (->) refers to functions and the [] refers to lists.


[Diagram: Classes and Types]

Simple Type Constraints

So far we have seen how to declare classes, how to declare types, and how to declare that types are instances of classes. But there is something missing. How do we declare the type of a simple arithmetic function?

    plus x y = x + y


Obviously x and y must be of the same type, because you can't add numbers of different types together. So how about:

    plus :: a -> a -> a

which says that plus takes two values and returns a new value, and all three values are of the same type. But there is a problem: the arguments to plus need to be of a type that supports addition. Instances of the class Num support addition, so we need to limit the type signature to just that class. The syntax for this is:

    plus :: Num a => a -> a -> a

This says that the type of the arguments to plus must be an instance of Num, which is what we want. You can put several limits into a type signature like this:

    foo :: (Num a, Show a, Show b) => a -> a -> b -> String
    foo x y t = show x ++ " plus " ++ show y ++ " is " ++ show (x+y) ++ ". " ++ show t

This says that the arguments x and y must be of the same type, and that type must be an instance of both Num and Show. Furthermore the final argument t must be of some (possibly different) type that is also an instance of Show. You can omit the parentheses for a single constraint, but they are required for multiple constraints. Actually it is common practice to put even single constraints in parentheses, because it makes things easier to read.

More Type Constraints

You can put a type constraint in almost any type declaration. The only exception is a type synonym declaration. The following is not legal:

    type (Num a) => Foo a = a -> a -> a

But you can say:

    data (Num a) => Foo a = F1 a | F2 Integer

This declares a type Foo with two constructors. F1 takes any numeric type, while F2 takes an integer. You can also use type parameters in newtype and instance declarations. Class inheritance (see the previous section) also uses the same syntax.

Monads

Understanding monads


Introduction

Haskell is a pure functional language. That means that nothing is allowed to have a side effect. However, this is a bit of a problem if we want to do something involving side effects. Most languages are "imperative": they have no problem with side effects. Part of their programming model is a "flow of control". A statement which has a side effect can change the result of something that happens further along in the flow of control. But because Haskell has no side effects, it has no concept of flow of control either. What we need is some way to capture the pattern "do X and then do Y, where Y may be affected by X". Monads are the way we do this.

It may seem odd to have to do all this work just to do what imperative languages do automatically, but there is an important difference. An imperative language provides only one method of flow control and only one way for side effects to propagate; in fact, almost all imperative languages do this in exactly the same way. Haskell provides this model of side effect propagation as a special case, called the IO monad. But others are possible:

- The Prolog language provides a different approach to side effect propagation. Prolog tries to find a combination of values under which a predicate evaluates to True. When it meets an expression that evaluates to False, it backs up and tries a different value. This backtracking makes Prolog great for logic problems but lousy for anything else.
- Parser generators such as YACC execute code when a grammar clause is recognised. The outputs from sub-clauses are passed to outer clauses automatically.

The great thing about Haskell is that you can create your own monads. That means you can create your own rules for how side effects propagate from one statement to the next, and then mix and match those rules to suit the particular bit of the problem you are working on. If you are writing a parser then use the Parser monad. If you are solving a logic problem then use the List monad. And if you are talking to something outside the program then use the IO monad. In a sense, each monad is its own little mini-language specially suited to its particular task.

dollar

Imagine the following bit of code:

    i (h (g (f x)))

Pretty ugly, isn't it? Fortunately, the Haskell library has a very handy operator called dollar, or actually ($), which allows us to rewrite the same code in a more readable manner:

    i $ h $ g $ f x

One could almost think of this as piping x through the functions f, g, h and i. Implementing ($) (http:// www.haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v%3A%24) is just a simple matter of function application. Here's the implementation in one line of Haskell (two if you count the type signature):


    ($) :: (a -> b) -> a -> b
    f $ x = f x

Note
This definition is not quite complete. We also need (infixr 0 $) to specify that it is right-associative with very low precedence.

Also, if you're not convinced that dollar is a useful thing to have, compare:

    i ((h z) ((g y) (f x)))

vs.

    i $ h z $ g y $ f x

euro

The dollar operator allows us to remove a certain number of parentheses from our code, often adding clarity. One thing which might make it even more intuitive is if it worked backwards. Say we wrote an operator called euro that does exactly the same thing as dollar, but with the arguments flipped around:

    (€) :: a -> (a -> b) -> b
    x € f = f x

N.B.: the euro symbol isn't valid Haskell... if you want to try this, use (|>) as an operator instead
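If you want to experiment with this idea in real code, a minimal sketch of such an operator under the suggested name (|>) would be:

    infixl 1 |>

    (|>) :: a -> (a -> b) -> b
    x |> f = f x

    -- f x |> g |> h |> i   behaves like   i $ h $ g $ f x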

Now, what on earth would something like this be good for? Let's revisit the dollar example from above:

    i $ h $ g $ f x

This is what the same example would look like using euro:

    f x € g € h € i

This example should look vaguely familiar to programmers with experience in imperative languages like Java. To drive the point home, we could even write the example above over multiple lines:

    f x €
    g €
    h €
    i

One could almost think of these euros as being the semicolons from C or Java. It's not entirely the same, because here we have the concept of doing f and then doing g, but we don't have any way for f to affect what g does apart from the data it explicitly passes. It's actually closer to Unix pipes than anything else. Nevertheless, this notion of "sequencing" is basically 1/3 of the story behind monads.

The nuclear waste metaphor


We've got sequencing down and would like to go further. In this section, we'll see a bit more of what we mean by "going further", that is, what we're really trying to accomplish and how it relates to the euro operator. This is also where things take a turn for the different, as the rest of this chapter will be infused with a rather heavy dose of metaphor.

Imagine that we are working in a giant factory which is in the business of treating large amounts of nuclear waste. Scattered throughout our factory are a bunch of waste processors, machines for treating the waste at various stages of "production". A waste processor is just a metaphor for a function: it takes nuclear waste in, and spits nuclear waste out.

Keep in mind that there is a huge variety of waste to deal with in our factory. There is also a huge variety of waste processing machines to treat it, but each machine is highly specialised, or typed. Each machine is custom-built to accept one type of waste input and produce exactly one type of output. Up to here, we have not done anything unusual; we have simply provided a new metaphor for functions in Haskell. But let's take a short breath. Here are the things we are manipulating so far:

1. nuclear waste (inputs, values)
2. waste processors (functions)

One thing we would really like to do is somehow connect our machines together to form a single assembly line: you insert some waste into one machine, and whatever comes out, you feed directly into the next. The problem is that this is nuclear waste that we're dealing with, and just running around with large quantities of waste would cause our workers to get radiation poisoning and die. So we need a solution that isolates the workers from the materials they are working with.

Use a container

The first thing we're going to do is simplify matters by only connecting together machines from our deluxe ultra-modern line. What makes these ultra-modern machines so special is that they pack the outgoing, treated nuclear waste into a special container, thus making the waste much safer to handle:

Of course, ideally, we'd be able to make use of all the machines in our shop, but let us concentrate on the newest machines first and eventually expand to the rest of the factory. You'll note that the new machines all have a very similar type signature, something like a -> m b. What this signature means is that they take something of type a and return something of type b, but since we're dealing with nasty radioactive waste, we pack the stuff into a container m. As we shall discover later in this tutorial, the containers have many interesting uses in the real world -- far beyond our artificial concerns of workers and radiation poisoning. It is going to take us a while to get there though, so let us continue slowly working our way up.

bind (>>=)

As we saw above, our job is to connect processing machines together, that is, to send nuclear waste from one processing machine to another. We've accomplished half of this job by -- for now -- concentrating only on processing machines which output the waste in a container. The only problem is that processing machines do not accept containers as inputs, they accept nuclear waste! We could decide to restrict ourselves to machines which accept containers instead of raw nuclear waste, but as it turns out, there is a more elegant solution. What we're going to do is create a kind of robot that takes a container and a waste processor, removes the waste from its container and feeds the waste into the processor. This robot shall be called bind informally, but will be written in Haskell as >>=. This is roughly what the bind robot would do:

container >>= fn = let a = extractWaste container in fn a

Its type signature is then:

    (>>=) :: m a -> (a -> m b) -> m b

Let's read this one piece at a time:

1. It takes in waste in a container (m a).
2. It takes a processing machine (a -> m b).
3. After unpacking the container and feeding the waste in, it sends out whatever the waste processor produces. Because the type is (m b), this must also be in a waste container: bind must not be used with machines that output waste without a container. We can get around this by having another machine that straps on to any processing machine and puts the output in a container. Call this putInContainer.


To be precise, bind sometimes does do something to the container that the processor sends out, but this detail does not matter right now.

Bind and the euro

Remember the euro operator from early on in this tutorial? Well, bind pretty much serves the same purpose, the difference being that it handles all this business of removing nuclear waste from containers. But the idea remains the same. Using the bind robot, we can chain together various processing machines much in the same way that you would use euro in a non-monadic context. To illustrate this idea, here is an example of three waste processors connected together by bind. They all take nuclear waste and return containers, and the bind operator simply feeds the output of one processing machine into another:

    wasteInAContainer >>=
      (\a1 -> putInContainer (decompose a1)) >>=
        (\a2 -> putInContainer (decay a2)) >>=
          (\a3 -> putInContainer (melt a3))

Remember that bind is written >>=

So now we have the idea that the bind robot is used to connect the output of one processing machine (waste in a container) into the input of another processing machine. Notice that because of the type of bind the resulting chain of machines must always output waste in a container, so we can wrap up the chain of machines in a single big box and treat it as a single machine. This is a very important property because it means that we can construct arbitrarily complex machines.

Different factories (monads)

So we have a notion of composing machines together via the bind robot, but let's take a small step back and look at the bigger picture. Did you ever consider that there can be different kinds of factories for treating the same waste? What's interesting about this is that the way the bind robot works -- the way it is implemented -- depends on each factory.


In the upcoming section, I will discuss two simple factories, Maybe and List, and show what the corresponding bind robot looks like. But first, let us quickly review the list of metaphors we are manipulating so far, just to make sure we are all on the same page:

1. processing machines (functions)
2. nuclear waste (inputs)
3. containers (monadic values)
4. the bind robot (>>=)
5. factories (monads)

The Maybe monad

Note: to understand this section, it really helps to have used the Maybe datatype in Haskell.

The Maybe monad is one of the simplest monads you can show that does something interesting. In a Maybe factory, the bind robot looks something like this:

    container >>= fn = case container of
                         Nothing -> Nothing
                         Just a  -> fn a

So what's the story here? There are two kinds of containers to be used in Maybe-land: those that contain nothing and those that contain a piece of waste. If bind receives a container with nothing in it, well, there isn't much to do; we just return an empty container as well. If, however, there was a piece of waste, then we use pattern matching on Just (i.e. Just a) to extract the waste from the container, and then we feed it to the processing machine fn. The result must be boxed to be safe! fn must be of type a -> Maybe b then. Now remember, the processing machines we're interested in all output nuclear waste in containers, so as far as types are concerned, everything fits together: either we return Nothing (which is a container) or we return whatever fn a returns, which also is a container. Just to be explicit: the type signature of bind in general is as follows:

    (>>=) :: m a -> (a -> m b) -> m b

In a Maybe factory, this reduces down to something a little more specific:

    (>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
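Since Maybe really is a monad in the standard Prelude, you can try this behaviour directly in GHCi; the results below are exactly what the definition above gives:

    Prelude> Just 3 >>= (\x -> Just (x + 1))
    Just 4
    Prelude> Nothing >>= (\x -> Just (x + 1))
    Nothing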

The [] monad (List)

Another simple monad to work with is [] (List). Here's what the bind robot looks like in a [] (List) factory:

    container >>= fn = case container of
                         [] -> []
                         xs -> concat (map fn xs)

This largely resembles the Maybe monad, except this time we can either have an empty container or some kind of multi-component container that holds several pieces of nuclear waste (all of the same type, of course). If we get an empty container, we just return an empty container. If we have several pieces of nuclear waste in the container, then we have to individually feed each one of these pieces into the processing machine. This gives us a bunch of containers, which we then have to merge (concat) into one single container so that all the types fit together and everything continues working smoothly. Here is the type of the bind robot in a [] factory:

    (>>=) :: [a] -> (a -> [b]) -> [b]
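Again, because [] is a real monad in the Prelude, you can check this in GHCi; for example:

    Prelude> [1,2,3] >>= (\x -> [x, x * 10])
    [1,10,2,20,3,30]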

Beyond Maybe and List

If everything up to here has been easy, despite the tortured metaphors, we are now in an excellent position, because we have understood the essence of how monads work. The next thing we will need to concentrate on is figuring out how to do something truly useful with them, something more substantial than manipulating Maybe and List.

Return

But before going further, it's time to revisit some of our simplifying assumptions. Way back in the beginning, we decided to focus only on the fancy next-generation machines which output their waste in a container. But what about all the old machines, perfectly good machines that we can't afford to retrofit for container-output capability? These processing machines can be helped with a little robot called return, whose only job is to take raw nuclear waste and put it into containers. Having the return robot lets us bring all these old-school processing machines in line with the rest of our factory. We just have to call return on them. The type signature of return is something like this:

    return :: a -> m a

Here is what return looks like in the Maybe factory:

    return a = Just a

Nothing to it. No pun intended. And in the List factory?

    return a = [a]


In our earlier examples, we used an imaginary function called putInContainer. In fact, this is exactly the return robot we've just shown you. What happens when you are working with monads is that with some processing machines, you have to use the return function to wrap the output waste in a container.

Why bother?

return puts nuclear waste in containers, and bind takes them back out. It might seem somewhat ludicrous that we're going through all of this machinery only to cancel ourselves out. Why bother? One reason is that some processing machines have their monad compatibility built in. They don't need some special function like putInContainer or some robot like return, because returning containers is part of their raison d'être. You can recognise these processing machines by their type, because they always return a monadic value. The putStr function, for example, returns IO (), which is simply an IO container with a waste of type () inside. So one justification for all this monadic stuff is that it lets us handle these fancy new processing machines in an elegant manner. If connecting the older container-less processing machines together was the only issue, we could have just used something simpler, like dollar or euro. There are also many other reasons, for example, keeping our factories nice and tidy.

The State monad

Being able to construct a daisy chain of waste processing machines is all very well and good. But how do we deal with side effects? Let's have a look at the State monad and see. The State monad is where things start to get really useful, but it is also where they start to get a little crazy. No matter what happens here, it is useful to keep in mind that we're still always doing the same thing: building a bind robot which takes a container, takes a processing machine, extracts the waste from the container, feeds it into the processing machine, and sends out whatever the processing machine produces. A State monad is useful for passing information around at the same time we run our functions. The tricky thing here is that in a State factory, the container is itself a function!

    return a = \st -> (a, st)

This looks a little exotic, but we can reassure ourselves that it's really more of the same thing by comparing the implementations of the other return functions:

    Maybe:   return a = Just a
    List:    return a = [a]
    State:   return a = \st -> (a, st)

See, nothing special. With Maybe, we return a maybe, with List, we return a list, and with State, well, we return a function. To continue abusing the nuclear waste metaphor, we can say containers in the State factory are very sophisticated: they all have a ticket reader, and when you feed a ticket (st) into the container, it opens up to reveal a piece of waste and a new ticket (a, st).


This new ticket can be seen as a receipt. Now in the case of return, the container trivially reveals the same ticket that it was fed, but other containers might not do the same. In fact, that is the whole point! Tickets are used to represent some kind of state (say, some kind of routing information), and this mechanism of taking tickets in and spitting receipt tickets out is how we pass state information from one processing machine into another.

State and the bind robot

Under the State monad, the bind robot implements what one might call a bureaucratic mess. But remember, it's doing exactly the same thing as all the other bind robots, just under the conditions of the State factory.

    container >>= fn = \st -> let (a, st2)   = container st
                                  container2 = fn a
                              in container2 st2

Experienced Haskellers and other observant readers might notice that we're slightly fudging it with the types! Please bear with us, we'll fess up with the details later!

To start things off, don't worry about the \st. Just imagine that somehow, magically, we have a ticket st. This is very fortunate, because the only way we're going to get our sophisticated State containers to open is by feeding them a ticket. In the line (a, st2) = container st, we do exactly that; we feed our ticket st into the container, and it opens up to reveal both the waste and a receipt (a, st2). Next, in the line container2 = fn a, we feed the waste into the processing machine fn, which, by the way, outputs a container, as is the practice in our factories. Here is the hard part: what does the line in container2 st2 mean? Well, here it's useful to ignore the whole let..in construct and think of the whole expression. Ultimately, the implementation of bind is \st -> container2 st2. And all this does is encapsulate an interesting chain reaction into a container. The idea is that when you feed a ticket (st) into the container:

1. st gets fed to the first container. This results in waste and a new receipt (a, st2).
2. a gets fed into the processing machine fn. This results in a new container (container2).
3. The outer container now feeds the new ticket (st2) into the new container (container2), and what comes out is yet another piece of waste and a new receipt, which represents the result of the whole Rube Goldberg contraption.


This is the hardest part to understand. Once we have gotten the hang of this part, we essentially have the monads story cinched. It's all easy from here on out.

Useful functions

Here are a couple of functions that increase the usefulness of the State monad. Note that these are processing machines, like all the others; they accept nuclear waste and produce containers. One thing that makes them special is that they are the kind of function that has monad-compatibility built right in. They only work in a State factory, though.

get and put

The functions get and put are incredibly simple, and also incredibly useful. get simply returns the current state, and put sets it to something else:

    get   = \st -> (st, st)
    put x = \_  -> ((), x)

The idea behind these functions is that they can be inserted into your chain of processing machines with a simple bind operator. One thing which is odd is that the waste that is sent out by get is a ticket! It is a state! Why? Well, remember how bind works with State: it pulls the value out of the (value, state) pair and then feeds that into the function to the right of the >>=. That means if we had a function f which did something with the current state, we could do this:

    get >>= f

get copies the current state as the value, so when we bind the result of get, we access the current state. The put function is similarly exotic. Whereas the nuclear waste returned by get is a ticket, the waste returned by put is simply (), which is akin to unit or void from other languages and isn't very interesting in itself. But that's ok, because putting things in states isn't about the nuclear waste, it's about the tickets. The thing you have to be careful of is that the tickets always have to be the same type. If you are using a State thumbprint monad, then you can only put thumbprints. If you are using a State Int monad, then you can only put Ints.

anonymous bind (>>)

We have seen that the functions get and put are weird because they return tickets and () as nuclear waste, and we have also seen that they are useful because they allow you to manipulate tickets as if they were waste. If you are paying close attention, however, you should notice that something is terribly amiss. Suppose that we want to observe the state of a container foo. That would be written like this:

    foo >>= get

What happens to the nuclear waste from foo? The bind operator is supposed to unpack that waste and feed it into get; however, get isn't expecting any nuclear waste at all. We've just broken our entire chain of waste processing machines! Recall the type of the bind operator:

    m a -> (a -> m b) -> m b

The problem here is that we're trying to plug get into the a -> m b side of things, when in fact the type of the get operator is merely m b. But no worries, because fixing this little technical detail is very easy. We have to introduce an anonymous bind, >>. Yes, it's yet another operator to learn about, but relax, because its job is astoundingly simple:

    (>>) :: m a -> m b -> m b
    container >> f = container >>= (\_ -> f)

The anonymous bind operator's only job is to provide a wrapper around the traditional >>=. It takes an input-free waste-outputting machine (i.e. one which does not process nuclear waste) and transforms it into an input-accepting machine that completely ignores the incoming waste and continues about its business of outputting nuclear waste in a container. Thus, you cannot make calls like

    foo >>= get -- no!

But what you can do is make calls like

    foo >> get

which is really just a more succinct way of saying foo >>= (\_ -> get). Likewise, you probably don't want to have any sequences like this:

    put st >>= bar -- probably not


You could, but what would be the point? The nuclear waste returned by put is merely (), remember? That's not very useful, so you most likely just want to ignore it altogether:

    put st >> bar
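To see get, put and (>>) working together in real code, here is a minimal sketch using the Control.Monad.State library from the mtl package (an assumption on our part; this chapter builds its own State from scratch, but the library version behaves the same way). runState, used in the comment below, is introduced a little further on.

    import Control.Monad.State

    -- A machine that returns the current ticket and replaces it with the next one.
    tick :: State Int Int
    tick = get >>= \n -> put (n + 1) >> return n

    -- runState (tick >> tick >> tick) 0  evaluates to (2, 3):
    -- 2 is the value of the last tick, 3 is the final state.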

An important Haskell detail

The code above for the State monad is not proper Haskell! Just as with the fictitious € operator, we have taken a few minor liberties with the syntax in the interest of clarity. Now that things are (hopefully) clear, let us make them correct. The problem lies in the idea of using a function as a container:

    return a = \st -> (a, st)

This doesn't entirely make sense. It means that the type of return would be something like return :: a -> (st -> (a, st)), when what we really need to be returning is something of type State st a. But that's just a minor detail. All we have to do is wrap up the function with a constructor. We can define the State monad as follows:

    data State s a = State (s -> (a,s))

The return function is not very much different from our initial white lie; it just packs everything up with a constructor.

    return a = State (\st -> (a, st))

The bind operator would likewise have to be modified, but that's just extra bureaucracy. You have to take the function out of State, call it (as before), and return a new function in State. We'll leave this as an exercise.

Exercises
Correct the definition for (>>=) to take the State constructor into consideration. You'll need to use pattern matching to remove the State constructor.

runState

Note also that the real definition of State has a slightly different style:

    newtype State s a = State { runState :: s -> (a, s) }

That's an odd-looking beast, but a quick dissection reveals that there is nothing out of the ordinary. To begin with, we can mentally substitute newtype with the more familiar data, which takes some of the exoticness out of things. Next, we observe that the { runState :: s -> (a, s) } is really just a record with one element. The name of the element might be confusing, because it lends the false impression that State monads are somehow objects that contain a function runState, the way an Address type might contain a street name. But it turns out the choice of runState is just a sleight-of-Haskell. Recall from the presentation of named fields that the record syntax automatically gives us a projection function to access parts of the record. If I have an address type like

    data Address = Address { street :: String, number :: Int }

street would be a function of type Address -> String. That being said, what is the type of runState? Is it s -> (a, s), as we might be tempted to think from its type signature? Certainly not! Just as street is of type Address -> String, runState is of type State s a -> s -> (a, s). The runState function merely gives us a way to access this container without pattern-matching on State. That's not all. Up to now, we've only considered the issue of putting things into the State monad (return) and sequencing them together (>>=). What's been crucially missing up to now is a way to get them back out. That's exactly what runState is for.

Exercises
1. Slightly rewrite your definition of (>>=) to use runState instead of pattern-matching on State.
2. TODO: an exercise which uses runState in a more realistic setting.

The (I)O monad

Understanding the State monad is essentially all there is to understanding the IO monad we make so much use of. The first useful idea is to simplify matters by concentrating only on output. Let's call this the O monad. The O monad is simply a State monad where the state (the ticket) is a list of things to be written to the screen. Putting something on the screen simply consists of appending something to the list.

putStr

Perhaps a good way to illustrate the point is to show one way that the putStr function could work:

    putStr str = \out -> ((), out ++ str)

That's all there is to it. We append the string to the output. If this isn't completely clear, try noticing how much this putStr in our hypothetical O monad looks like the put function in the State monad. Now, in real life, it is very rare that people write things like

    foo >>= putStr

What usually happens is that programmers already know what String they want to put... but that's ok, because they can just use the anonymous bind operator:

    foo >> putStr "hello"

What about input?

So what about all the complicated stuff like stdin and stderr? Same old thing. The IO monad is still just a State monad, but instead of the state being a list, it is now a tuple of lists, one for each file handle. Or to be more realistic, the state in an IO monad is some horribly complicated data structure which represents the state of the computer at some point in the computation. That's what you're passing around when you manipulate IO: the entire environment as nothing more than a state.


do notation

Now that we know what the underlying mechanisms behind monads are, it's time to reconsider the "secret sauce" behind Haskell monads: do notation. We've gotten by so far with seemingly magical syntax, but do notation is really nothing more than the >>= and return machinery we've seen in this chapter. Consider this fragment of monadic code:

    wasteInAContainer >>=
      \a1 -> foo a1 >>=
      \a2 -> bar a2 >>=
      \a3 -> baz a3

One might reasonably argue that code like this is cumbersome and impractical to write. This sounds like a job for syntactic sugar. We begin by slightly adjusting the whitespace so that all of these lambdas move up, leaving the newlines in an admittedly funky place:

    wasteInAContainer >>= \a1 ->
      foo a1 >>= \a2 ->
      bar a2 >>= \a3 ->
      baz a3

All the do notation does is move the binds and lambdas from the right to the left:

    do a1 <- wasteInAContainer
       a2 <- foo a1
       a3 <- bar a2
       baz a3

See? Same code, but sugarfied. There's a bit more to the do notation, especially the use of let, and of the anonymous bind (>>) for lines without a left arrow (<-) [except for that pesky last line]. You can learn more about this by looking at the Haskell report or in Yet Another Haskell Tutorial.

Conclusion

There is still a good bit of ground to cover on monads. We'll see much more in the rest of this book. In the meantime, it is also worth looking at other tutorials or even the Haskell API on Control.Monad (http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.html) to get a more complete picture. Browsing the Control.Monad implementation (http://darcs.haskell.org/packages/base/Control/Monad.hs), for example, is definitely a worthwhile experience.

Exercises
Write a tutorial explaining how monads work. You might find inspiration in the tutorials listed on the Haskell meta-tutorial (http://www.haskell.org/haskellwiki/Meta-tutorial). Try to find a new audience for your tutorial, or a new way of explaining things.

Acknowledgments


Without a combination of Hal Daume's Yet Another Haskell Tutorial and Jeff Newbern's excellent All about Monads (http://www.haskell.org/all_about_monads/html/), I wouldn't have had the slightest clue what a Monad was. Hopefully this tutorial will provide another useful angle from which to understand the whole idea behind monads. Brian Slesinsky pointed out a pretty big goof in version 0.7 of this tutorial: I had been incorrectly writing the bind operator as <<=. Thanks much!

Advanced monads

This chapter follows on from Understanding monads, and explains a few of the more advanced concepts.

Monads as computations

The concept

A metaphor we explored in the last chapter was that of monads as containers. That is, we looked at what monads are in terms of their structure. What was touched on but not fully explored is why we use monads. After all, monads structurally can be very simple, so why bother at all? The secret is in the view that each monad represents a different type of computation. Here, and in the rest of this chapter, a 'computation' is simply a function call: we're computing the result of this function. In a minute, we'll give some examples to explain what we mean by this, but first, let's re-interpret our basic monadic operators:

>>=
    The >>= operator is used to sequence two monadic computations. That means it runs the first computation, then feeds the output of the first computation into the second and runs that too.
return
    return x, in computation-speak, is simply the computation that has result x, and 'does nothing'. The meaning of the latter phrase will become clear when we look at State below.

So how does the computations analogy work in practice? Let's look at some examples.

The Maybe monad

Computations in the Maybe monad (that is, function calls which result in a type wrapped up in a Maybe) represent computations that might fail. The easiest example is with lookup tables. A lookup table is a table which relates keys to values. You look up a value by knowing its key and using the lookup table. For example, you might have a lookup table of contact names as keys to their phone numbers as the values in a phonebook application. One way of implementing lookup tables in Haskell is to use a list of pairs: [(a, b)]. Here a is the type of the keys, and b the type of the values. Here's how the phonebook lookup table might look:

    phonebook :: [(String, String)]
    phonebook = [ ("Bob",   "01788 665242"),
                  ("Fred",  "01624 556442"),
                  ("Alice", "01889 985333"),
                  ("Jane",  "01732 187565") ]

The most common thing you might do with a lookup table is look up values! However, this computation might fail. Everything's fine if we try to look up one of "Bob", "Fred", "Alice" or "Jane" in our phonebook, but what if we were to look up "Zoe"? Zoe isn't in our phonebook, so the lookup has failed. Hence, the Haskell function to look up a value from the table is a Maybe computation:

    lookup :: Eq a => a   -- a key
           -> [(a, b)]    -- the lookup table to use
           -> Maybe b     -- the result of the lookup

Let's explore some of the results from lookup:

    Prelude> lookup "Bob" phonebook
    Just "01788 665242"
    Prelude> lookup "Jane" phonebook
    Just "01732 187565"
    Prelude> lookup "Zoe" phonebook
    Nothing

Now let's expand this into using the full power of the monadic interface. Say we're now working for the government, and once we have a phone number from our contact, we want to look up this phone number in a big, government-sized lookup table to find out the registration number of their car. This, of course, will be another Maybe-computation. But if they're not in our phonebook, we certainly won't be able to look up their registration number in the governmental database! So what we need is a function that will take the result from the first computation and put it into the second lookup, but only if we didn't get Nothing the first time around. If we did indeed get Nothing from the first computation, or if we get Nothing from the second computation, our final result should be Nothing.

    comb :: Maybe a -> (a -> Maybe b) -> Maybe b
    comb Nothing  _ = Nothing
    comb (Just x) f = f x

Observant readers may have guessed where we're going with this one. That's right, comb is just >>=, but restricted to Maybe-computations. So we can chain our computations together:

    getRegistrationNumber :: String       -- their name
                          -> Maybe String -- their registration number
    getRegistrationNumber name =
      lookup name phonebook >>=
        (\number -> lookup number governmentalDatabase)

If we then wanted to use the result from the governmental database lookup in a third lookup (say we want to look up their registration number to see if they owe any car tax), then we could extend our getRegistrationNumber function:

    getTaxOwed :: String       -- their name
               -> Maybe Double -- the amount of tax they owe
    getTaxOwed name =
      lookup name phonebook >>=
        (\number -> lookup number governmentalDatabase) >>=
          (\registration -> lookup registration taxDatabase)

Or, using the do-block style:

    getTaxOwed name = do
      number       <- lookup name phonebook
      registration <- lookup number governmentalDatabase
      lookup registration taxDatabase

Let's just pause here and think about what would happen if we got a Nothing anywhere. Trying to use >>= to combine a Nothing from one computation with another function will result in the Nothing being carried on and the second function ignored (refer to our definition of comb above if you're not sure). That is, a Nothing at any stage in the large computation will result in a Nothing overall, regardless of the other functions! Thus we say that the structure of the Maybe monad propagates failures.

An important thing to note is that we're not by any means restricted to lookups! There are many, many functions whose results could fail and therefore use Maybe. You've probably written one or two yourself. Any computations in Maybe can be combined in this way.

Summary

The important features of the Maybe monad are that:
1. It represents computations that could fail.
2. It propagates failure.

The List monad

Computations that are in the list monad (that is, they end in a type [a]) represent computations with zero or more valid answers. For example, say we are modelling the game of noughts and crosses (known as tic-tac-toe in some parts of the world). An interesting (if somewhat contrived) problem might be to find all the possible ways the game could progress: find the possible states of the board 3 turns later, given a certain board configuration (i.e. a game in progress). Here is the instance declaration for the list monad:

    instance Monad [] where
      return a = [a]
      xs >>= f = concat (map f xs)

As monads are only really useful when we're chaining computations together, let's go into more detail on our example. The problem can be boiled down to the following steps:
1. Find the list of possible board configurations for the next turn.
2. Repeat the computation for each of these configurations: replace each configuration, call it C, with the list of possible configurations of the turn after C.
3. We will now have a list of lists (each sublist representing the turns after a previous configuration), so in order to be able to repeat this process, we need to collapse this list of lists into a single list.

This structure should look similar to the monadic instance declaration above. Here's how it might look, without using the list monad:


getNextConfigs :: Board -> [Board]
getNextConfigs = undefined -- details not important

tick :: [Board] -> [Board]
tick bds = concatMap getNextConfigs bds

find3rdConfig :: Board -> [Board]
find3rdConfig bd = tick $ tick $ tick [bd]

(concatMap is a handy function for when you need to concat the results of a map: concatMap f xs = concat (map f xs).) Alternatively, we could define this with the list monad:

find3rdConfig :: Board -> [Board]
find3rdConfig bd0 = do
  bd1 <- getNextConfigs bd0
  bd2 <- getNextConfigs bd1
  bd3 <- getNextConfigs bd2
  return bd3
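
Since Board and getNextConfigs are left undefined above, here is a minimal, self-contained sketch of the same chaining pattern using plain numbers instead of boards; the names are ours, invented purely for illustration.

-- Each "turn", a number can either stay the same or grow by one.
nextSteps :: Int -> [Int]
nextSteps n = [n, n + 1]

-- All values reachable after three turns, chained in the list monad.
after3 :: Int -> [Int]
after3 n0 = do
  n1 <- nextSteps n0
  n2 <- nextSteps n1
  n3 <- nextSteps n2
  return n3

-- after3 0  evaluates to  [0,1,1,2,1,2,2,3]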

List comprehensions
An interesting thing to note is how similar list comprehensions and the list monad are. For example, the classic function to find Pythagorean triples:

pythags = [ (x, y, z) | z <- [1..], x <- [1..z], y <- [x..z], x^2 + y^2 == z^2 ]

This can be directly translated to the list monad:

import Control.Monad (guard)

pythags = do
  z <- [1..]
  x <- [1..z]
  y <- [x..z]
  guard (x^2 + y^2 == z^2)
  return (x, y, z)
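
Both definitions produce the same lazily generated stream of triples. As a quick check (not from the original text):

*Main> take 4 pythags
[(3,4,5),(6,8,10),(5,12,13),(9,12,15)]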

The only non-trivial element here is guard. This is explained in the next module, Additive monads.

The State monad
The State monad actually makes a lot more sense when viewed as a computation, rather than a container. Computations in State represent computations that depend on and modify some internal state. For example, say you were writing a program to model the three body problem (http://en.wikipedia.org/wiki/Three_body_problem#Three_body_problem). The internal state would be the positions, masses and velocities of all three bodies. Then a function to, say, get the acceleration of a specific body would need to reference this state as part of its calculations.


The other important aspect of computations in State is that they can modify the internal state. Again, in the three-body problem, you could write a function that, given an acceleration for a specific body, updates its position.

The State monad is quite different from the Maybe and the list monads, in that it doesn't represent the result of a computation, but rather a certain property of the computation itself. What we do is model computations that depend on some internal state as functions which take a state parameter. For example, if you had a function f :: String -> Int -> Bool, and we want to modify it to make it depend on some internal state of type s, then the function becomes f :: String -> Int -> s -> Bool. To allow the function to change the internal state, the function returns a pair of (new state, return value). So our function becomes f :: String -> Int -> s -> (s, Bool).

It should be clear that this method is a bit cumbersome. However, the types aren't the worst of it: what would happen if we wanted to run two stateful computations, call them f and g, one after another, passing the result of f into g? The second would need to be passed the new state from running the first computation, so we end up 'threading the state':

fThenG :: (s -> (s, a)) -> (a -> s -> (s, b)) -> s -> (s, b)
fThenG f g s = let (s',  v ) = f s    -- run f with our initial state s.
                   (s'', v') = g v s' -- run g with the new state s' and the result of f, v.
               in (s'', v')           -- return the latest state and the result of g

All this 'plumbing' can be nicely hidden by using the State monad. The type constructor State takes two type parameters: the type of its environment (internal state), and the type of its output. So State s a indicates a stateful computation which depends on, and can modify, some internal state of type s, and has a result of type a. How is it defined? Well, simply as a function that takes some state and returns a pair of (new state, value):

newtype State s a = State (s -> (s, a))
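
Concretely, bind just wraps the fThenG plumbing in the State constructor. The following is our own minimal sketch, not a quotation from any library (and modern GHC would also require Functor and Applicative instances); it keeps this chapter's (state, value) ordering of the result pair.

instance Monad (State s) where
  return a = State $ \s -> (s, a)      -- leave the state untouched
  State f >>= g = State $ \s ->
    let (s', v)  = f s                 -- run the first computation
        State f' = g v                 -- feed its result to the next one
    in f' s'                           -- run that with the updated state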

The above example of fThenG is, in fact, the definition of >>= for the State monad, which you probably remember from the first monads chapter.

The meaning of return
We mentioned right at the start that return x was the computation that 'did nothing' and just returned x. This idea only really starts to take on any meaning in monads with side-effects, like State. That is, computations in State have the opportunity to change the outcome of later computations by modifying the internal state. It's a similar situation with IO (because, of course, IO is just a special case of State). return x doesn't do this. A computation produced by return generally won't have any side-effects. The monad law return x >>= f == f x basically guarantees this, for most uses of the term 'side-effect'.

Further reading
A tour of the Haskell Monad functions (http://members.chello.nl/hjgtuyl/tourdemonad.html) by Henk-Jan van Tuyl.
All about monads (http://www.haskell.org/all_about_monads/html/index.html) by Jeff Newbern explains well the concept of monads as computations, using good examples. It also has a section outlining all the major monads, explains each one in terms of this computational view, and gives a full example.

MonadPlus
MonadPlus is a typeclass whose instances are monads which represent a number of computations.

Introduction
You may have noticed, whilst studying monads, that the Maybe and list monads are quite similar, in that they both represent the number of results a computation can have. That is, you use Maybe when you want to indicate that a computation can fail somehow (i.e. it can have 0 or 1 result), and you use the list monad when you want to indicate a computation could have many valid answers (i.e. it could have 0 results -- a failure -- or many results).

Given two computations in one of these monads, it might be interesting to amalgamate these: find all the valid solutions. I.e. given two lists of valid solutions, to find all of the valid solutions, you simply concatenate the lists together. It's also useful, especially when working with folds, to require a 'zero results' value (i.e. failure). For lists, the empty list represents zero results. We combine these two features into a typeclass:

class Monad m => MonadPlus m where
  mzero :: m a
  mplus :: m a -> m a -> m a

Here are the two instance declarations for Maybe and the list monad:

instance MonadPlus [] where
  mzero = []
  mplus = (++)

instance MonadPlus Maybe where
  mzero                   = Nothing
  Nothing `mplus` Nothing = Nothing -- 0 solutions + 0 solutions = 0 solutions
  Just x  `mplus` Nothing = Just x  -- 1 solution  + 0 solutions = 1 solution
  Nothing `mplus` Just x  = Just x  -- 0 solutions + 1 solution  = 1 solution
  Just x  `mplus` Just y  = Just x  -- 1 solution  + 1 solution  = 2 solutions,
                                    -- but as Maybe can only have up to one
                                    -- solution, we disregard the second one.

Also, if you import Control.Monad.Error, then (Either e) becomes an instance:

instance (Error e) => MonadPlus (Either e) where
  mzero = Left noMsg
  Left _  `mplus` n = n
  Right x `mplus` _ = Right x
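
For a quick feel for this instance, here is how mplus behaves for Either; the error strings are invented for illustration and assume the Control.Monad.Error import mentioned above:

  Left "no parse" `mplus` Right 5       ==  Right 5
  Right 3         `mplus` Right 5       ==  Right 3
  Left "no parse" `mplus` Left "worse"  ==  Left "worse"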


Remember that (Either e) is similar to Maybe in that it represents computations that can fail, but it allows the failing computations to include an error message. Typically, Left s means a failed computation with error message s, and Right x means a successful computation with result x.

Example
A traditional way of parsing an input is to write functions which consume it, one character at a time. That is, they take an input string, then chop off ('consume') some characters from the front if they satisfy certain criteria (for example, you could write a function which consumes one uppercase character). However, if the characters on the front of the string don't satisfy these criteria, the parsers have failed, and therefore they make a valid candidate for a Maybe.

Here we use mplus to run two parsers in parallel. That is, we use the result of the first one if it succeeds, but if not, we use the result of the second. If that too fails, then our whole parser returns Nothing.

-- | Consume a digit in the input, and return the digit that was parsed. We use
--   a do-block so that if the pattern match fails at any point, fail of the
--   Maybe monad (i.e. Nothing) is returned.
digit :: Int -> String -> Maybe Int
digit i s
  | i > 9 || i < 0 = Nothing
  | otherwise      = do
      let (c:_) = s
      if read [c] == i then Just i else Nothing

-- | Consume a binary character in the input (i.e. either a 0 or a 1)
binChar :: String -> Maybe Int
binChar s = digit 0 s `mplus` digit 1 s
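
To see the mplus fallback at work, here is what binChar gives back on a few inputs (invented examples, not from the original text):

  binChar "0101"  ==  Just 0   -- digit 0 succeeds, so its result is kept
  binChar "1010"  ==  Just 1   -- digit 0 fails, digit 1 succeeds
  binChar "2345"  ==  Nothing  -- both parsers fail, so the combination fails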

The MonadPlus laws
Instances of MonadPlus are required to fulfill several rules, just as instances of Monad are required to fulfill the three monad laws. Unfortunately, these laws aren't set in stone anywhere and aren't fully agreed on. For example, the Haddock documentation (http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.html#t%3AMonadPlus) for Control.Monad quotes them as:

mzero >>= f  =  mzero
v >> mzero   =  mzero

All About Monads (http://www.haskell.org/all_about_monads/html/laws.html#zero) quotes the above two, but adds:

mzero `mplus` m  =  m
m `mplus` mzero  =  m

There are even more sets of laws available, and therefore you'll sometimes see monads like IO being used as a MonadPlus. The Haskell Wiki page (http://www.haskell.org/haskellwiki/MonadPlus) for MonadPlus has more information on this. TODO: should that information be copied here?

Useful functions


Beyond the basic mplus and mzero themselves, there are a few functions you should know about:

msum
A very common task when working with instances of MonadPlus is to take a list of the monad, e.g. [Maybe a] or [[a]], and fold down the list with mplus. msum fulfills this role:

msum :: MonadPlus m => [m a] -> m a
msum = foldr mplus mzero
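
A couple of invented examples (not from the original text):

  msum [Nothing, Just 3, Nothing, Just 5]  ==  Just 3
  msum [[1], [], [2,3]]                    ==  [1,2,3]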

A nice way of thinking about this is that it generalises the list-specific concat operation. Indeed, for lists, the two are equivalent. For Maybe it finds the first Just x in the list, or returns Nothing if there aren't any.

guard
This is a very nice function which you have almost certainly used before, without knowing about it. It's used in list comprehensions, as we saw in the previous chapter. List comprehensions can be decomposed into the list monad, as we saw:

pythags = [ (x, y, z) | x <- [1..], y <- [x..], z <- [y..], x^2 + y^2 == z^2 ]

The previous can be considered syntactic sugar for:

pythags = do
  x <- [1..]
  y <- [x..]
  z <- [y..]
  guard (x^2 + y^2 == z^2)
  return (x, y, z)

guard looks like this:

guard :: MonadPlus m => Bool -> m ()
guard True  = return ()
guard False = mzero
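
As a small illustration (invented for this rewrite, not from the original text), here is guard in the Maybe monad, where mzero is Nothing; it assumes guard from Control.Monad is in scope:

-- Keep a number only if it is positive.
keepPositive :: Int -> Maybe Int
keepPositive x = do
  guard (x > 0)   -- Nothing if the predicate fails, Just () otherwise
  return x

-- keepPositive 3     evaluates to  Just 3
-- keepPositive (-1)  evaluates to  Nothing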

Concretely, guard will reduce a do-block to mzero if its predicate is False. By the very first law stated in the 'MonadPlus laws' section above, an mzero on the left-hand side of an >>= operation will produce mzero again. As do-blocks are decomposed to lots of expressions joined up by >>=, an mzero at any point will cause the entire do-block to become mzero. To further illustrate that, we will examine guard in the special case of the list monad, extending on the pythags function above. First, here is guard defined for the list monad:

guard :: Bool -> [()]
guard True  = [()]
guard False = []


guard blocks off a route. For example, in pythags, we want to block off all the routes (or combinations of x, y and z) where x^2 + y^2 == z^2 is False. Let's look at the expansion of the above do-block to see how it works:

pythags =
  [1..] >>= \x ->
  [x..] >>= \y ->
  [y..] >>= \z ->
  guard (x^2 + y^2 == z^2) >>= \_ ->
  return (x, y, z)

Replacing >>= and return with their definitions for the list monad (and using some let-bindings to make things prettier), we obtain:

pythags =
  let ret x y z = [(x, y, z)]
      gd  x y z = concatMap (\_ -> ret x y z) (guard $ x^2 + y^2 == z^2)
      doZ x y   = concatMap (gd  x y) [y..]
      doY x     = concatMap (doZ x  ) [x..]
      doX       = concatMap (doY    ) [1..]
  in doX

Remember that guard returns the empty list in the case of its argument being False. Mapping across the empty list produces the empty list, no matter what function you pass in. So the empty list produced by the call to guard in the binding of gd will cause gd to be the empty list, and therefore ret to be the empty list. To understand why this matters, think about list-computations as a tree. With our Pythagorean triple algorithm, we need a branch starting from the top for every choice of x, then a branch from each of these branches for every value of y, then from each of these, a branch for every value of z. So the tree looks like this:

      start
        |________________________________________________ ...
        |                  |                  |
x       1                  2                  3
        |______________ ...|______________ ...|______________ ...
        |     |     |      |     |     |      |     |     |
y       1     2     3      1     2     3      1     2     3
        |__...|__...|__... |__...|__...|__... |__...|__...|__...
        | | | | | | | | |  | | | | | | | | |  | | | | | | | | |
z       1 2 3 1 2 3 1 2 3  1 2 3 1 2 3 1 2 3  1 2 3 1 2 3 1 2 3

Any combination of x, y and z represents a route through the tree. Once all the functions have been applied, each branch is concatenated together, starting from the bottom. Any route where our predicate doesn't hold evaluates to an empty list, and so has no impact on this concat operation.

Exercises
1. Prove the MonadPlus laws for Maybe and the list monad.
2. We could augment our above parser to involve a parser for any character:

   -- | Consume a given character in the input, and return the character we
   --   just consumed, paired with the rest of the string. We use a do-block so
   --   that if the pattern match fails at any point, fail of the Maybe monad
   --   (i.e. Nothing) is returned.
   char :: Char -> String -> Maybe (Char, String)
   char c s = do
     let (c':s') = s
     if c == c' then Just (c, s') else Nothing

   It would then be possible to write a hexChar function which parses any valid hexadecimal character (0-9 or a-f). Try writing this function (hint: map digit [0..9] :: [Maybe Int]).
3. More to come...

Relationship with Monoids
TODO: is this at all useful?
(If you don't know anything about the Monoid data structure, then don't worry about this section. It's just a bit of a muse.)

Monoids are a data structure with two operations defined: an identity (or 'zero') and a binary operation (or 'plus'), which satisfy some axioms.

class Monoid m where
  mempty  :: m
  mappend :: m -> m -> m

For example, lists form a trivial monoid:

instance Monoid [a] where
  mempty  = []
  mappend = (++)

Note the usage of [a], not [], in the instance declaration. Monoids are not necessarily 'containers' of anything. For example, the integers (or indeed even the naturals) form two possible monoids:

newtype AdditiveInt       = AI Int
newtype MultiplicativeInt = MI Int

instance Monoid AdditiveInt where
  mempty              = AI 0
  AI x `mappend` AI y = AI (x + y)

instance Monoid MultiplicativeInt where
  mempty              = MI 1
  MI x `mappend` MI y = MI (x * y)

(A nice use of the latter is to keep track of probabilities.)

Monoids, then, look very similar to MonadPlus instances. Both feature concepts of a zero and plus, and indeed MonadPlus is a subclass of Monoid:

instance MonadPlus m => Monoid (m a) where
  mempty  = mzero
  mappend = mplus


However, they work at different levels. As noted, there is no requirement for monoids to be any kind of container. More formally, monoids have kind *, but instances of MonadPlus, as they're Monads, have kind * -> *.

Monad transformers

Introduction
Monad transformers are special variants of standard monads that facilitate the combining of monads. For example, ReaderT Env IO a is a computation which can read from some environment of type Env, can do some IO and returns a type a. (Monad transformers are monads too!) Their type constructors are parameterized over a monad type constructor, and they produce combined monadic types.

In this tutorial, we will assume that you understand the internal mechanics of the monad abstraction, what makes monads "tick". If, for instance, you are not comfortable with the bind operator (>>=), we would recommend that you first read Understanding monads.

Transformers are cousins
A useful way to look at transformers is as cousins of some base monad. For example, the monad ListT is a cousin of its base monad List. Monad transformers are typically implemented almost exactly the same way that their cousins are, only more complicated because they are trying to thread some inner monad through. The standard monads of the monad template library all have transformer versions which are defined consistently with their non-transformer versions.

However, it is not the case that all monad transformers apply the same transformation. We have seen that the ContT transformer turns continuations of the form (a->r)->r into continuations of the form (a->m r)->m r. The StateT transformer is different. It turns state transformer functions of the form s->(a,s) into state transformer functions of the form s->m (a,s). In general, there is no magic formula to create a transformer version of a monad — the form of each transformer depends on what makes sense in the context of its non-transformer type.

Standard Monad   Transformer Version   Original Type       Combined Type
------------------------------------------------------------------------
Error            ErrorT                Either e a          m (Either e a)
State            StateT                s -> (a,s)          s -> m (a,s)
Reader           ReaderT               r -> a              r -> m a
Writer           WriterT               (a,w)               m (a,w)
Cont             ContT                 (a -> r) -> r       (a -> m r) -> m r

Implementing transformers
The key to understanding how monad transformers work is understanding how they implement the bind (>>=) operator. You'll notice that this implementation very closely resembles that of their standard, non-transformer cousins.

Transformer type constructors


Type constructors play a fundamental role in Haskell's monad support. Recall that Reader r a is the type of values of type a within a Reader monad with environment of type r. The type constructor Reader r is an instance of the Monad class, and the runReader :: Reader r a -> r -> a function performs a computation in the Reader monad and returns the result of type a.

A transformer version of the Reader monad, called ReaderT, exists which adds a monad type constructor as an additional parameter. ReaderT r m a is the type of values of the combined monad in which Reader is the base monad and m is the inner monad. ReaderT r m is an instance of the monad class, and the runReaderT :: ReaderT r m a -> r -> m a function performs a computation in the combined monad and returns a result of type m a.

The Maybe transformer
We begin by defining the data type for the Maybe transformer. Our MaybeT constructor takes a single argument. Since transformers have the same data as their non-transformer cousins, we will use the newtype keyword. We could very well have chosen to use data, but that introduces needless overhead.

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

This might seem a little off-putting at first, but it's actually simpler than it looks. The constructor for MaybeT takes a single argument, of type m (Maybe a). That is all. We use some syntactic sugar (records are just syntactic sugar) so that you can see MaybeT as a record, and access the value of this single argument by calling runMaybeT.

One trick to understanding this is to see monad transformers as sandwiches: the bottom slice of the sandwich is the base monad (in this case, Maybe). The filling is the inner monad, m. And the top slice is the monad transformer MaybeT. The purpose of the runMaybeT function is simply to remove this top slice from the sandwich. What is the type of runMaybeT? It is (MaybeT m a) -> m (Maybe a).

As we mentioned in the beginning of this tutorial, monad transformers are monads too. Here is a partial implementation of the MaybeT monad. To understand this implementation, it really helps to know how its simpler cousin Maybe works. For comparison's sake, we put the two monad implementations side by side.

Note: in what follows, 't', 'm' and 'b' mean 'top', 'middle', 'bottom' respectively.

Maybe:

instance Monad Maybe where
  b_v >>= f = case b_v of
                Nothing -> Nothing
                Just v  -> f v

MaybeT:

instance (Monad m) => Monad (MaybeT m) where
  tmb_v >>= f = MaybeT $ runMaybeT tmb_v
                  >>= \b_v -> case b_v of
                                Nothing -> return Nothing
                                Just v  -> runMaybeT $ f v

You'll notice that the MaybeT implementation looks a lot like the Maybe implementation of bind, with the exception that MaybeT is doing a lot of extra work. This extra work consists of unpacking the two extra layers of monadic sandwich (note the convention topMidBot to reflect the sandwich layers) and packing them up. If you really want to cut into the meat of this, read on. If you think you've understood up to here, why not try the following exercises:

Exercises
1. Implement the return function for the MaybeT monad.
2. Rewrite the implementation of the bind operator >>= to be more concise.

Dissecting the bind operator
So what's going on here? You can think of this as working in three phases: first we remove the sandwich layer by layer, then we apply a function to the data, and finally we pack the new value into a new sandwich.

Unpacking the sandwich: Let us ignore the MaybeT constructor for now, but note that everything that's going on after the $ is happening within the m monad and not the MaybeT monad!
1. The first step is to remove the top slice of the sandwich by calling runMaybeT topMidBotV.
2. We use the bind operator (>>=) to remove the second layer of the sandwich -- remember that we are working in the confines of the m monad.
3. Finally, we use case and pattern matching to strip off the bottom layer of the sandwich, leaving behind the actual data with which we are working.

Packing the sandwich back up:
If the bottom layer was Nothing, we simply return Nothing (which gives us a 2-layer sandwich). This value then goes to the MaybeT constructor at the very beginning of this function, which adds the top layer and gives us back a full sandwich.
If the bottom layer was Just v (note how we have pattern-matched that bottom slice of monad off): we apply the function f to it. But now we have a problem: applying f to v gives a full three-layer sandwich, which would be absolutely perfect except for the fact that we're now going to apply the MaybeT constructor to it and get a type clash! So how do we avoid this? By first running runMaybeT to peel the top slice off so that the MaybeT constructor is happy when you try to add it back on.

The List transformer
Just as with the Maybe transformer, we create a datatype with a constructor that takes one argument:

newtype ListT m a = ListT { runListT :: m [a] }

The implementation of the ListT monad is also strikingly similar to its cousin, the List monad. We do exactly the same things for List, but with a little extra support to operate within the inner monad m, and to pack and unpack the monadic sandwich ListT - m - List.

List:

instance Monad [] where
  b_v >>= f = let x = map f b_v
              in concat x

ListT:

instance (Monad m) => Monad (ListT m) where
  tmb_v >>= f = ListT $ runListT tmb_v
                  >>= \b_v -> mapM (runListT . f) b_v
                  >>= \x -> return (concat x)

Exercises
1. Dissect the bind operator for the (ListT m) monad. For example, why do we now have mapM and return?
2. Now that you have seen two simple monad transformers, write a monad transformer IdentityT, which would be the transforming cousin of the Identity monad.
3. Would IdentityT SomeMonad be equivalent to SomeMonadT Identity for a given monad and its transformer cousin?

Lifting
FIXME: insert introduction

liftM
We begin with a notion which, strictly speaking, isn't about monad transformers. One small and surprisingly useful function in the standard library is liftM, which, as the API states, is meant for lifting non-monadic functions into monadic ones. Let's take a look at that type:

liftM :: Monad m => (a1 -> r) -> m a1 -> m r

So let's see here, it takes a function (a1 -> r), takes a monad with an a1 in it, applies that function to the a1, and returns the result. In my opinion, the best way to understand this function is to see how it is used. The following pieces of code all mean the same thing.

do notation:

  do foo <- someMonadicThing
     return (myFn foo)

liftM:

  liftM myFn someMonadicThing

liftM as an operator:

  myFn `liftM` someMonadicThing

What made the light bulb go off for me is this third example, where we use liftM as an operator. liftM is just a monadic version of ($)!

non-monadic:   myFn $ aNonMonadicThing
monadic:       myFn `liftM` someMonadicThing
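
A couple of concrete uses (invented for illustration, not from the original text):

  liftM (+ 1) (Just 5)    ==  Just 6
  liftM (+ 1) [1, 2, 3]   ==  [2, 3, 4]
  liftM reverse getLine       -- reads a line and returns it reversed, in IO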


Exercises
1. How would you write liftM? You can inspire yourself from the first example.

lift
When using combined monads created by the monad transformers, we avoid having to explicitly manage the inner monad types, resulting in clearer, simpler code. Instead of creating additional do-blocks within the computation to manipulate values in the inner monad type, we can use lifting operations to bring functions from the inner monad into the combined monad.

Recall the liftM family of functions which are used to lift non-monadic functions into a monad. Each monad transformer provides a lift function that is used to lift a monadic computation into a combined monad. The MonadTrans class is defined in Control.Monad.Trans (http://www.haskell.org/ghc/docs/latest/html/base/Control.Monad.Trans.html) and provides the single function lift. The lift function lifts a monadic computation in the inner monad into the combined monad.

class MonadTrans t where
  lift :: (Monad m) => m a -> t m a

Monads which provide optimized support for lifting IO operations are defined as members of the MonadIO class, which defines the liftIO function.

class (Monad m) => MonadIO m where
  liftIO :: IO a -> m a

Using lift

Implementing lift
Implementing lift is usually pretty straightforward. Consider the transformer MaybeT:

instance MonadTrans MaybeT where
  lift mon = MaybeT (mon >>= return . Just)

We begin with a monadic value (of the inner monad), the middle layer, if you prefer the monadic sandwich analogy. Using the bind operator and a type constructor for the base monad, we slip the bottom slice (the base monad) under the middle layer. Finally we place the top slice of our sandwich by using the constructor MaybeT. So using the lift function, we have transformed a lowly piece of sandwich filling into a bona-fide three-layer monadic sandwich. As with our implementation of the Monad class, the bind operator is working within the confines of the inner monad.
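
To see why lift is handy in practice, here is a small sketch of our own (not from the original text) of MaybeT wrapped around IO. It assumes the Monad instance for MaybeT from the earlier exercise (in particular, a working return):

-- Read a line; succeed with it if it is non-empty, fail (Nothing) otherwise.
askLine :: MaybeT IO String
askLine = do
  line <- lift getLine               -- an IO action, lifted into MaybeT IO
  if null line
    then MaybeT (return Nothing)     -- fail in the Maybe layer
    else return line                 -- succeed with the line

-- runMaybeT askLine :: IO (Maybe String)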

Exercises


1. Why is it that the lift function has to be defined separately for each monad, whereas liftM can be defined in a universal way?
2. Implement the lift function for the ListT transformer.
3. How would you lift a regular function into a monad transformer? Hint: very easily.

The State monad transformer
Previously, we have pored over the implementation of two very simple monad transformers, MaybeT and ListT. We then took a short detour to talk about lifting a monad into its transformer variant. Here, we will bring the two ideas together by taking a detailed look at the implementation of one of the more interesting transformers in the standard library, StateT. Studying this transformer will build insight into the transformer mechanism that you can call upon when using monad transformers in your code. You might want to review the section on the State monad before continuing.

Just as the State monad was built upon the definition

newtype State s a = State { runState :: (s -> (a,s)) }

the StateT transformer is built upon the definition

newtype StateT s m a = StateT { runStateT :: (s -> m (a,s)) }

State s is an instance of both the Monad class and the MonadState s class, so StateT s m should also be a member of the Monad and MonadState s classes. Furthermore, if m is an instance of MonadPlus, StateT s m should also be a member of MonadPlus.

To define StateT s m as a Monad instance, compare it with State:

newtype State s a = State { runState :: (s -> (a,s)) }

instance Monad (State s) where
  return a        = State $ \s -> (a,s)
  (State x) >>= f = State $ \s ->
    let (v,s') = x s
    in runState (f v) s'

newtype StateT s m a = StateT { runStateT :: (s -> m (a,s)) }

instance (Monad m) => Monad (StateT s m) where
  return a         = StateT $ \s -> return (a,s)
  (StateT x) >>= f = StateT $ \s -> do
    (v,s')      <- x s          -- get new value, state
    (StateT x') <- return $ f v -- apply bound function to get new state transformation fn
    x' s'                       -- apply the state transformation fn to the new state


Our definition of return makes use of the return function of the inner monad, and the binding operator uses a do-block to perform a computation in the inner monad. We also want to declare all combined monads that use the StateT transformer to be instances of the MonadState class, so we will have to give definitions for get and put:

instance (Monad m) => MonadState s (StateT s m) where
  get   = StateT $ \s -> return (s,s)
  put s = StateT $ \_ -> return ((),s)

Finally, we want to declare all combined monads in which StateT is used with an instance of MonadPlus to be instances of MonadPlus:

instance (MonadPlus m) => MonadPlus (StateT s m) where
  mzero = StateT $ \s -> mzero
  (StateT x1) `mplus` (StateT x2) = StateT $ \s -> (x1 s) `mplus` (x2 s)

The final step to make our monad transformer fully integrated with Haskell's monad classes is to make StateT s an instance of the MonadTrans class by providing a lift function:

instance MonadTrans (StateT s) where
  lift c = StateT $ \s -> c >>= (\x -> return (x,s))

The lift function creates a StateT state transformation function that binds the computation in the inner monad to a function that packages the result with the input state. The result is that a computation that returns a list (i.e., a computation in the List monad) can be lifted into StateT s [], where it becomes a state transformation function of type s -> [(a,s)]. That is, the lifted computation produces multiple (value,state) pairs from its input state. The effect of this is to "fork" the computation in StateT, creating a different branch of the computation for each value in the list returned by the lifted function. Of course, applying StateT to a different monad will produce different semantics for the lift function.
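
Putting get, put and lift together, here is a small sketch of a combined StateT Int IO computation (our own illustration, using only the instances defined above; the name tick is invented):

-- Print the current counter, then increment it, returning the old value.
tick :: StateT Int IO Int
tick = do
  n <- get                                   -- read the state
  lift (putStrLn ("count is " ++ show n))    -- an IO action, lifted in
  put (n + 1)                                -- write the new state
  return n

-- runStateT tick 0 :: IO (Int, Int)   -- performs the IO and yields (0, 1)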

Acknowledgements
This module uses a large amount of text from All About Monads with permission from its author Jeff Newbern.

Practical monads

Parsing monads
In the beginner's track of this book, we saw how monads were used for IO. We've also started working more extensively with some of the more rudimentary monads like Maybe, List or State. Now let's try using monads for something quintessentially "practical". Let's try writing a very simple parser. We'll be using the Parsec (http://www.cs.uu.nl/~daan/download/parsec/parsec.html) library, which comes with GHC but may need to be downloaded separately if you're using another compiler. Start by adding these lines to the import section:

import System
import Text.ParserCombinators.Parsec hiding (spaces)

This makes the Parsec library functions and getArgs available to us, except the "spaces" function, whose name conflicts with a function that we'll be defining later. Now, we'll define a parser that recognizes one of the symbols allowed in Scheme identifiers:

symbol :: Parser Char
symbol = oneOf "!$%&|*+-/:<=>?@^_~"

This is another example of a monad: in this case, the "extra information" that is being hidden is all the info about position in the input stream, backtracking record, first and follow sets, etc. Parsec takes care of all of that for us. We need only use the Parsec library function oneOf (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#oneOf), and it'll recognize a single one of any of the characters in the string passed to it. Parsec provides a number of pre-built parsers: for example, letter (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#letter) and digit (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#digit) are library functions. And as you're about to see, you can compose primitive parsers into more sophisticated productions.

Let's define a function to call our parser and handle any possible errors:

readExpr :: String -> String
readExpr input = case parse symbol "lisp" input of
  Left err  -> "No match: " ++ show err
  Right val -> "Found value"

As you can see from the type signature, readExpr is a function (->) from a String to a String. We name the parameter input, and pass it, along with the symbol action we defined above and the name of the parser ("lisp"), to the Parsec function parse (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#parse) . Parse can return either the parsed value or an error, so we need to handle the error case. Following typical Haskell convention, Parsec returns an Either (http://www.haskell.org/onlinereport/standard-prelude.html#$ tEither) data type, using the Left constructor to indicate an error and the Right one for a normal value. We use a case...of construction to match the result of parse against these alternatives. If we get a Left value (error), then we bind the error itself to err and return "No match" with the string representation of the error. If we get a Right value, we bind it to val, ignore it, and return the string "Found value". The case...of construction is an example of pattern matching, which we will see in much greater detail [evaluator1.html#primitiveval later on].


Finally, we need to change our main function to call readExpr and print out the result:

main :: IO ()
main = do
  args <- getArgs
  putStrLn (readExpr (args !! 0))

To compile and run this, you need to specify "-package parsec" on the command line, or else there will be link errors. For example:

debian:/home/jdtang/haskell_tutorial/code# ghc -package parsec -o simple_parser [../code/listing3.1.hs listing3.1.hs]
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser $
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser a
No match: "lisp" (line 1, column 1):
unexpected "a"

Whitespace
Next, we'll add a series of improvements to our parser that'll let it recognize progressively more complicated expressions. The current parser chokes if there's whitespace preceding our symbol:

debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "   %"
No match: "lisp" (line 1, column 1):
unexpected " "

Let's fix that, so that we ignore whitespace. First, let's define a parser that recognizes any number of whitespace characters. Incidentally, this is why we included the "hiding (spaces)" clause when we imported Parsec: there's already a function "spaces" (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#spaces) in that library, but it doesn't quite do what we want it to. (For that matter, there's also a parser called lexeme (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#lexeme) that does exactly what we want, but we'll ignore that for pedagogical purposes.)

spaces :: Parser ()
spaces = skipMany1 space

Just as functions can be passed to functions, so can actions. Here we pass the Parser action space (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#space) to the Parser action skipMany1 (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#skipMany1), to get a Parser that will recognize one or more spaces. Now, let's edit our readExpr function so that it uses this new parser:


readExpr input = case parse (spaces >> symbol) "lisp" input of
  Left err  -> "No match: " ++ show err
  Right val -> "Found value"

We touched briefly on the >> ("bind") operator in lesson 2, where we mentioned that it was used behind the scenes to combine the lines of a do-block. Here, we use it explicitly to combine our whitespace and symbol parsers. However, bind has completely different semantics in the Parser and IO monads. In the Parser monad, bind means "Attempt to match the first parser, then attempt to match the second with the remaining input, and fail if either fails." In general, bind will have wildly different effects in different monads; it's intended as a general way to structure computations, and so needs to be general enough to accommodate all the different types of computations. Read the documentation for the monad to figure out precisely what it does.

Compile and run this code. Note that since we defined spaces in terms of skipMany1, it will no longer recognize a plain old single character. Instead you have to precede a symbol with some whitespace. We'll see how this is useful shortly:

debian:/home/jdtang/haskell_tutorial/code# ghc -package parsec -o simple_parser [../code/listing3.2.hs listing3.2.hs]
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "   %"
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser %
No match: "lisp" (line 1, column 1):
unexpected "%"
expecting space
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "   abc"
No match: "lisp" (line 1, column 4):
unexpected "a"
expecting space

Return Values
Right now, the parser doesn't do much of anything - it just tells us whether a given string can be recognized or not. Generally, we want something more out of our parsers: we want them to convert the input into a data structure that we can traverse easily. In this section, we learn how to define a data type, and how to modify our parser so that it returns this data type. First, we need to define a data type that can hold any Lisp value:

data LispVal = Atom String
             | List [LispVal]
             | DottedList [LispVal] LispVal
             | Number Integer
             | String String
             | Bool Bool


This is an example of an algebraic data type: it defines a set of possible values that a variable of type LispVal can hold. Each alternative (called a constructor and separated by |) contains a tag for the constructor along with the type of data that that constructor can hold. In this example, a LispVal can be:
1. An Atom, which stores a String naming the atom
2. A List, which stores a list of other LispVals (Haskell lists are denoted by brackets)
3. A DottedList, representing the Scheme form (a b . c). This stores a list of all elements but the last, and then stores the last element as another field
4. A Number, containing a Haskell Integer
5. A String, containing a Haskell String
6. A Bool, containing a Haskell boolean value

Constructors and types have different namespaces, so you can have both a constructor named String and a type named String. Both types and constructor tags always begin with capital letters.

Next, let's add a few more parsing functions to create values of these types. A string is a double quote mark, followed by any number of non-quote characters, followed by a closing quote mark:

parseString :: Parser LispVal
parseString = do
  char '"'
  x <- many (noneOf "\"")
  char '"'
  return $ String x

We're back to using the do-notation instead of the >> operator. This is because we'll be retrieving the value of our parse (returned by many (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#many) (noneOf (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#noneOf) "\"")) and manipulating it, interleaving some other parse operations in the meantime. In general, use >> if the actions don't return a value, >>= if you'll be immediately passing that value into the next action, and do-notation otherwise.

Once we've finished the parse and have the Haskell String returned from many, we apply the String constructor (from our LispVal data type) to turn it into a LispVal. Every constructor in an algebraic data type also acts like a function that turns its arguments into a value of its type. It also serves as a pattern that can be used in the left-hand side of a pattern-matching expression; we saw an example of this in [#symbols Lesson 3.1] when we matched our parser result against the two constructors in the Either data type.

We then apply the built-in function return (http://www.haskell.org/onlinereport/standard-prelude.html#$tMonad) to lift our LispVal into the Parser monad. Remember, each line of a do-block must have the same type, but the result of our String constructor is just a plain old LispVal. Return lets us wrap that up in a Parser action that consumes no input but returns it as the inner value. Thus, the whole parseString action will have type Parser LispVal.

The $ operator is infix function application: it's the same as if we'd written return (String x), but $ is right-associative, letting us eliminate some parentheses. Since $ is an operator, you can do anything with it that you'd normally do to a function: pass it around, partially apply it, etc. In this respect, it functions like the Lisp function apply (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.4).

Now let's move on to Scheme variables. An atom (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-5.html#%_sec_2.1) is a letter or symbol, followed by any number of letters, digits, or symbols:


parseAtom :: Parser LispVal
parseAtom = do
  first <- letter <|> symbol
  rest  <- many (letter <|> digit <|> symbol)
  let atom = [first] ++ rest
  return $ case atom of
    "#t"      -> Bool True
    "#f"      -> Bool False
    otherwise -> Atom atom

Here, we introduce another Parsec combinator, the choice operator <|> (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#or). This tries the first parser, then if it fails, tries the second. If either succeeds, then it returns the value returned by that parser. The first parser must fail before it consumes any input: we'll see later how to implement backtracking.

Once we've read the first character and the rest of the atom, we need to put them together. The "let" statement defines a new variable "atom". We use the list concatenation operator ++ for this. Recall that first is just a single character, so we convert it into a singleton list by putting brackets around it. If we'd wanted to create a list containing many elements, we need only separate them by commas. Then we use a case statement to determine which LispVal to create and return, matching against the literal strings for true and false. The otherwise alternative is a readability trick: it binds a variable named otherwise, whose value we ignore, and then always returns the value of atom.

Finally, we create one more parser, for numbers. This shows one more way of dealing with monadic values:

parseNumber :: Parser LispVal
parseNumber = liftM (Number . read) $ many1 digit

It's easiest to read this backwards, since both function application ($) and function composition (.) associate to the right. The parsec combinator many1 (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#many1) matches one or more of its argument, so here we're matching one or more digits. We'd like to construct a number LispVal from the resulting string, but we have a few type mismatches. First, we use the built-in function read (http://www.haskell.org/onlinereport/standard-prelude.html#$vread) to convert that string into a number. Then we pass the result to Number to get a LispVal. The function composition operator "." creates a function that applies its right argument and then passes the result to the left argument, so we use that to combine the two function applications. Unfortunately, the result of many1 digit is actually a Parser String, so our combined Number . read still can't operate on it. We need a way to tell it to just operate on the value inside the monad, giving us back a Parser LispVal. The standard function liftM does exactly that, so we apply liftM to our Number . read function, and then apply the result of that to our Parser. We also have to import the Monad module up at the top of our program to get access to liftM:


import Monad

This style of programming - relying heavily on function composition, function application, and passing functions to functions - is very common in Haskell code. It often lets you express very complicated algorithms in a single line, breaking down intermediate steps into other functions that can be combined in various ways. Unfortunately, it means that you often have to read Haskell code from right-to-left and keep careful track of the types. We'll be seeing many more examples throughout the rest of the tutorial, so hopefully you'll get pretty comfortable with it. Let's create a parser that accepts either a string, a number, or an atom:

parseExpr :: Parser LispVal
parseExpr = parseAtom
        <|> parseString
        <|> parseNumber

And edit readExpr so it calls our new parser:

readExpr :: String -> String
readExpr input = case parse parseExpr "lisp" input of
  Left err -> "No match: " ++ show err
  Right _  -> "Found value"

Compile and run this code, and you'll notice that it accepts any number, string, or symbol, but not other strings:

debian:/home/jdtang/haskell_tutorial/code# ghc -package parsec -o simple_parser [../code/listing3.3.hs listing3.3.hs]
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "\"this is a string\""
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser 25
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser symbol
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser (symbol)
bash: syntax error near unexpected token `symbol'
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(symbol)"
No match: "lisp" (line 1, column 1):
unexpected "("
expecting letter, "\"" or digit

Exercises
1. Rewrite parseNumber using:
   1. do-notation
   2. explicit sequencing with the >>= (http://www.haskell.org/onlinereport/standard-prelude.html#tMonad) operator


2. Our strings aren't quite R5RS compliant (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.3.5), because they don't support escaping of internal quotes within the string. Change parseString so that \" gives a literal quote character instead of terminating the string. You may want to replace noneOf "\"" with a new parser action that accepts either a non-quote character or a backslash followed by a quote mark.
3. Modify the previous exercise to support \n, \r, \t, \\, and any other desired escape characters.
4. Change parseNumber to support the Scheme standard for different bases (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.2.4). You may find the readOct and readHex (http://www.haskell.org/onlinereport/numeric.html#sect14) functions useful.
5. Add a Character constructor to LispVal, and create a parser for character literals (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.3.4) as described in R5RS.
6. Add a Float constructor to LispVal, and support R5RS syntax for decimals (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.2.4). The Haskell function readFloat (http://www.haskell.org/onlinereport/numeric.html#sect14) may be useful.
7. Add data types and parsers to support the full numeric tower (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.2.1) of Scheme numeric types. Haskell has built-in types to represent many of these; check the Prelude (http://www.haskell.org/onlinereport/standard-prelude.html#$tNum). For the others, you can define compound types that represent e.g. a Rational as a numerator and denominator, or a Complex as a real and imaginary part (each itself a Real number).

Recursive Parsers: Adding lists, dotted lists, and quoted datums
Next, we add a few more parser actions to our interpreter. Start with the parenthesized lists that make Lisp famous:

parseList :: Parser LispVal
parseList = liftM List $ sepBy parseExpr spaces

This works analogously to parseNumber, first parsing a series of expressions separated by whitespace (sepBy parseExpr spaces) and then applying the List constructor to it within the Parser monad. Note too that we can pass parseExpr to sepBy (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#sepBy), even though it's an action we wrote ourselves.

The dotted-list parser is somewhat more complex, but still uses only concepts that we're already familiar with:

parseDottedList :: Parser LispVal
parseDottedList = do
  head <- endBy parseExpr spaces
  tail <- char '.' >> spaces >> parseExpr
  return $ DottedList head tail

Note how we can sequence together a series of Parser actions with >> and then use the whole sequence on the right hand side of a do-statement. The expression char '.' >> spaces returns a Parser (), then combining that with parseExpr gives a Parser LispVal, exactly the type we need for the do-block. Next, let's add support for the single-quote syntactic sugar of Scheme:

parseQuoted :: Parser LispVal
parseQuoted = do
  char '\''
  x <- parseExpr
  return $ List [Atom "quote", x]

Most of this is fairly familiar stuff: it reads a single quote character, reads an expression and binds it to x, and then returns (quote x), to use Scheme notation. The Atom constructor works like an ordinary function: you pass it the String you're encapsulating, and it gives you back a LispVal. You can do anything with this LispVal that you normally could, like put it in a list. Finally, edit our definition of parseExpr to include our new parsers:

parseExpr :: Parser LispVal
parseExpr = parseAtom
        <|> parseString
        <|> parseNumber
        <|> parseQuoted
        <|> do char '('
               x <- (try parseList) <|> parseDottedList
               char ')'
               return x

This illustrates one last feature of Parsec: backtracking. parseList and parseDottedList recognize identical strings up to the dot; this breaks the requirement that a choice alternative may not consume any input before failing. The try (http://www.cs.uu.nl/~daan/download/parsec/parsec.html#try) combinator attempts to run the specified parser, but if it fails, it backs up to the previous state. This lets you use it in a choice alternative without interfering with the other alternative. Compile and run this code:

debian:/home/jdtang/haskell_tutorial/code# ghc -package parsec -o simple_parser [../code/listing3.4.hs listing3.4.hs]
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a test)"
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a (nested) test)"
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a (dotted . list) test)"
Found value
debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a '(quoted (dotted . list)) test)"
Found value


debian:/home/jdtang/haskell_tutorial/code# ./simple_parser "(a '(imbalanced parens)"
No match: "lisp" (line 1, column 24):
unexpected end of input
expecting space or ")"

Note that by referring to parseExpr within our parsers, we can nest them arbitrarily deep. Thus, we get a full Lisp reader with only a few definitions. That's the power of recursion.

Exercises
1. Add support for the backquote (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-7.html#%_sec_4.2.6) syntactic sugar: the Scheme standard details what it should expand into (quasiquote/unquote).
2. Add support for vectors (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_sec_6.3.6). The Haskell representation is up to you: GHC does have an Array (http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Array.html) data type, but it can be difficult to use. Strictly speaking, a vector should have constant-time indexing and updating, but destructive update in a purely functional language is difficult. You may have a better idea how to do this after the section on set!, later in this tutorial.
3. Instead of using the try combinator, left-factor the grammar so that the common subsequence is its own parser. You should end up with a parser that matches a string of expressions, and one that matches either nothing or a dot and a single expression. Combining the return values of these into either a List or a DottedList is left as a (somewhat tricky) exercise for the reader: you may want to break it out into another helper function.

Generic monads
Write me: The idea is that this section can show some of the benefits of not tying yourself to one single monad, but writing your code for any arbitrary monad m. Maybe run with the idea of having some elementary monad, and then deciding it's not good enough, so replacing it with a fancier one... and then deciding you need to go even further and just plug in a monad transformer.

For instance, using the Identity Monad:

module Identity(Id(Id)) where

newtype Id a = Id a

instance Monad Id where
  (>>=) (Id x) f = f x
  return = Id

instance (Show a) => Show (Id a) where
  show (Id x) = show x

In another file:

import Identity
type M = Id


my_fib :: Integer -> M Integer
my_fib = my_fib_acc 0 1

my_fib_acc :: Integer -> Integer -> Integer -> M Integer
my_fib_acc _   fn1 1     = return fn1
my_fib_acc fn2 _   0     = return fn2
my_fib_acc fn2 fn1 n_rem = do
  val <- my_fib_acc fn1 (fn2+fn1) (n_rem - 1)
  return val

Doesn't seem to accomplish much, but it allows you to add debugging facilities to a part of your program on the fly. As long as you've used return instead of explicit Id constructors, you can drop in the following monad:

module PMD (Pmd(Pmd)) where   -- PMD = Poor Man's Debugging, now available for Haskell
import IO

newtype Pmd a = Pmd (a, IO ())

instance Monad Pmd where
  (>>=) (Pmd (x, prt)) f = let (Pmd (v, prt')) = f x
                           in Pmd (v, prt >> prt')
  return x = Pmd (x, return ())

instance (Show a) => Show (Pmd a) where
  show (Pmd (x, _)) = show x
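Before wiring it into the Fibonacci code, here is a tiny hand-rolled sanity check of how Pmd accumulates its IO action as values are bound (a sketch; the demo function and its values are invented for illustration):

-- assuming the PMD module above is in scope
demo :: Pmd Int
demo = do
  x <- Pmd (1, putStrLn "got 1")   -- each step carries a value and a deferred print
  y <- Pmd (2, putStrLn "got 2")
  return (x + y)                   -- return attaches no IO of its own

main :: IO ()
main = let Pmd (result, prt) = demo
       in prt >> print result      -- prints "got 1", "got 2", then 3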

If we wanted to debug our Fibonacci program above, we could modify it as follows:

import Identity
import PMD
import IO
type M = Pmd
...
my_fib_acc :: Integer -> Integer -> Integer -> M Integer
my_fib_acc _   fn1 1     = return fn1
my_fib_acc fn2 _   0     = return fn2
my_fib_acc fn2 fn1 n_rem = do
  val <- my_fib_acc fn1 (fn2+fn1) (n_rem - 1)
  Pmd (val, putStrLn (show fn1))

All we had to change are the lines where we wanted to print something for debugging, plus some code wherever we extracted the value from the Id monad, in order to execute the resulting IO () we've returned. Something like

main :: IO ()
main = do
  let (Id f25) = my_fib 25
  putStrLn ("f25 is: " ++ show f25)

for the Id monad, versus

main :: IO ()
main = do
  let (Pmd (f25, prt)) = my_fib 25
  prt
  putStrLn ("f25 is: " ++ show f25)

For the Pmd Monad. Notice that we didn't have to touch any of the functions that we weren't debugging.

Advanced Haskell

Arrows

Introduction

Arrows are a generalization of monads. They can do everything monads can do, and more. They serve much the same purpose as monads -- providing a common structure for libraries -- but are more general. In particular they allow notions of computation that may be partially static (independent of the input) or may take multiple inputs. If your application works fine with monads, you might as well stick with them. But if you're using a structure that's very like a monad, but isn't one, maybe it's an arrow.

proc and the arrow tail

Let's begin by getting to grips with the arrows notation. We'll work with the simplest possible arrow there is (the function) and build some toy programs strictly with the aim of getting acquainted with the syntax. Fire up your text editor and create a Haskell file, say toyArrows.hs:

import Control.Arrow (returnA)

idA :: a -> a
idA = proc a -> returnA -< a

plusOne :: Int -> Int
plusOne = proc a -> returnA -< (a+1)

These are our first two arrows. The first is the identity function in arrow form, and the second, slightly more exciting, is an arrow that adds one to its input. Load this up in GHCi, using the -farrows extension, and see what happens.

% ghci -farrows toyArrows.hs
GHC Interactive, version 6.4.1, for Haskell 98.
http://www.haskell.org/ghc/  Type :? for help.

Loading package base-1.0 ... linking ... done.
Compiling Main             ( toyArrows.hs, interpreted )
Ok, modules loaded: Main.
*Main> idA 3
3
*Main> idA "foo"
"foo"
*Main> plusOne 3
4
*Main> plusOne 100
101

Thrilling indeed. Up to now, we have seen three new constructs in the arrow notation:

the keyword proc
-<
the imported function returnA

Now that we know how to add one to a value, let's try something twice as difficult: adding TWO:

plusOne = proc a -> returnA -< (a+1)
plusTwo = proc a -> plusOne -< (a+1)

One simple approach is to feed (a+1) as input into the plusOne arrow. Note the similarity between plusOne and plusTwo. You should notice that there is a basic pattern here which goes a little something like this: proc FOO -> SOME_ARROW -< (SOMETHING_WITH_FOO)

Exercises
1. plusOne is an arrow, so by the pattern above returnA must be an arrow too. What do you think returnA does?

do notation

Our current implementation of plusTwo is rather disappointing actually... shouldn't it just be plusOne twice? We can do better, but to do so, we need to introduce the do notation:

plusTwoBis = proc a -> do
  b <- plusOne -< a
  plusOne -< b

Now try this out in GHCi:

Prelude> :r
Compiling Main             ( toyArrows.hs, interpreted )
Ok, modules loaded: Main.


*Main> plusTwoBis 5
7

You can use this do notation to build up sequences as long as you would like:

plusFive = proc a -> do
  b <- plusOne -< a
  c <- plusOne -< b
  d <- plusOne -< c
  e <- plusOne -< d
  plusOne -< e

Monads and arrows

FIXME: I'm no longer sure, but I believe the intention here was to show what the difference is between having this proc notation and just a regular chain of dos.

Understanding arrows

We have permission to import material from the Haskell arrows page (http://www.haskell.org/arrows). See the talk page for details.

The factory and conveyor belt metaphor

In this tutorial, we shall present arrows from the perspective of stream processors, using the factory metaphor from the monads module as a support. Let's get our hands dirty right away. You are a factory owner, and as before you own a set of processing machines. Processing machines are just a metaphor for functions; they accept some input and produce some output. Your goal is to combine these processing machines so that they can perform richer, and more complicated tasks. Monads allow you to combine these machines in a pipeline. Arrows allow you to combine them in more interesting ways. The result of this is that you can perform certain tasks in a less complicated and more efficient manner. In a monadic factory, we took the approach of wrapping the outputs of our machines in containers. The arrow factory takes a completely different route: rather than wrapping the outputs in containers, we wrap the machines themselves. More specifically, in an arrow factory, we attach a pair of conveyor belts to each machine, one for the input and one for the output. So given a function of type b -> c, we can construct an equivalent a arrow by attaching a b and a c conveyor belt to the machine. The equivalent arrow is of type a b c, which we can pronounce as an arrow a from b to c.


Plethora of robots

We mentioned earlier that arrows give you more ways to combine machines together than monads did. Indeed, the Arrow type class provides six distinct robots (compared to the two you get with monads).

arr

The simplest robot is arr with the type signature arr :: (b -> c) -> a b c. In other words, the arr robot takes a processing machine of type b -> c, and adds conveyor belts to form an a arrow from b to c.

(>>>)

The next, and probably the most important, robot is (>>>). This is basically the arrow equivalent to the monadic bind robot (>>=). The arrow version of bind (>>>) puts two arrows into a sequence. That is, it connects the output conveyor belt of the first arrow to the input conveyor belt of the second one.

What we get out of this is a new arrow. One consideration to make, though, is what input and output types our arrows may take. Since we're connecting the output and the input conveyor belts of the first and second arrows, the second arrow must accept the same kind of input as what the first arrow outputs. If the first arrow is of type a b c, the second arrow must be of type a c d. Here is the same diagram as above, but with things on the conveyor belts to help you see the issue with types.
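As a quick sanity check, here is a minimal sketch using the function arrow instance from Control.Arrow (the names addOne and double are invented for the example):

import Control.Arrow (arr, (>>>))

addOne :: Int -> Int
addOne = arr (+1)        -- with the (->) instance, arr just returns the function itself

double :: Int -> Int
double = arr (*2)

main :: IO ()
main = print ((addOne >>> double) 3)   -- prints 8: first add one, then double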

Exercises
What is the type of the combined arrow?

first


Up to now, our arrows can only do the same things that monads can. Here is where things get interesting! The arrows type class provides functions which allow arrows to work with pairs of input. As we will see later on, this leads us to be able to express parallel computation in a very succinct manner. The first of these functions, naturally enough, is first. If you are skimming this tutorial, it is probably a good idea to slow down at least in this section, because the first robot is one of the things that makes arrows truly useful.

Given an arrow f, the first robot attaches some conveyor belts and extra machinery to form a new, more complicated arrow. The machines that bookend the input arrow split the input pairs into their component parts, and put them back together. The idea behind this is that the first part of every pair is fed into f, whilst the second part is passed through on an empty conveyor belt. When everything is put back together, we have the same pairs that we fed in, except that the first part of every pair has been replaced by an equivalent output from f.
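For a concrete feel, here is a minimal sketch with the function arrow, using first from Control.Arrow:

import Control.Arrow (first)

main :: IO ()
main = do
  print (first (+1) (3, "untouched"))    -- (4,"untouched"): f runs on the first component only
  print (first reverse ("drow", True))   -- ("word",True)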

Now the question we need to ask ourselves is that of types. Say that the input tuples are of type (b,d) and the input arrow is of type a b c (that is, it is an arrow from b to c). What is the type of the output? Well, the arrow converts all bs into cs, so when everything is put back together, the type of the output must be (c,d).

Exercises
What is the type of the first robot?

second

If you understand the first robot, the second robot is a piece of cake. It does the exact same thing, except that it feeds the second part of every input pair into the given arrow f instead of the first part.


What makes the second robot interesting is that it can be derived from the previous robots! Strictly speaking, the only robots you need for arrows are arr, (>>>) and first. The rest can be had "for free".

Exercises
1. Write a function to swap two components of a tuple.
2. Combine this helper function with the robots arr, (>>>) and first to implement the second robot.

***

One of the selling points of arrows is that you can use them to express parallel computation. The (***) robot is just the right tool for the job. Given two arrows, f and g, the (***) robot combines them into a new arrow using the same bookend-machines we saw in the previous two robots.

Conceptually, this isn't very much different from the robots first and second. As before, our new arrow accepts pairs of inputs. It splits them up, sends them on to separate conveyor belts, and puts them back together. The only difference here is that, rather than having one arrow and one empty conveyor belt, we have two distinct arrows. But why not?


Exercises 1. What is the type of the (***) robot? 2. Given the (>>>), first and second robots, implement the (***) robot.

&&&

The final robot in the Arrow class is very similar to the (***) robot, except that the resulting arrow accepts a single input and not a pair. Yet, the rest of the machine is exactly the same. How can we work with two arrows, when we only have one input to give them?

The answer is simple: we clone the input and feed a copy into each machine!

Exercises
1. Write a simple function to clone an input into a pair.
2. Using your cloning function, as well as the robots arr, (>>>) and ***, implement the &&& robot.
3. Similarly, rewrite the following function without using &&&:
   addA f g = f &&& g >>> arr (\ (y, z) -> y + z)

Functions are arrows


Now that we have presented the 6 arrow robots, we would like to make sure that you have a more solid grasp of them by walking through a simple implementation of the Arrow class. As in the monadic world, there are many different types of arrows. What is the simplest one you can think of? Functions. Put concretely, the type constructor for functions (->) is an instance of Arrow:

instance Arrow (->) where
  arr f    = f
  f >>> g  = g . f
  first f  = \(x,y) -> (f x, y)

Now let's examine this in detail:

arr - Converting a function into an arrow is trivial. In fact, the function already is an arrow.
(>>>) - we want to feed the output of the first function into the input of the second function. This is nothing more than function composition.
first - this is a little more elaborate. Given a function f, we return a function which accepts a pair of inputs (x,y), and runs f on x, leaving y untouched.

And that, strictly speaking, is all we need to have a complete arrow, but the arrow typeclass also allows you to make up your own definition of the other three robots, so let's have a go at that:

first  f = \(x,y) -> (f x, y)    -- for comparison's sake
second f = \(x,y) -> (x, f y)    -- like first
f *** g  = \(x,y) -> (f x, g y)  -- takes two arrows, and not just one
f &&& g  = \x     -> (f x, g x)  -- feed the same input into both functions

And that's it! Nothing could be simpler. Note that this is not the official instance of functions as arrows. You should take a look at the haskell library (http://darcs.haskell.org/ packages/base/Control/Arrow.hs) if you want the real deal.
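For a quick feel for how these behave on the function arrow, here is a small sketch using the combinators exported by Control.Arrow (which agree with the definitions above when specialized to functions):

import Control.Arrow ((***), (&&&))

main :: IO ()
main = do
  print (((*2) *** (+1)) (3, 4))   -- (6,5): (*2) on the first component, (+1) on the second
  print (((+1) &&& (*10)) 5)       -- (6,50): the single input 5 is cloned and fed to both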

The arrow notation

In the introductory Arrows chapter, we introduced the proc and -< notation. How does this tie in with all the arrow robots we just presented? Sadly, it's a little bit less straightforward than do-notation, but let's have a look.

Maybe functor

It turns out that any monad can be made into an arrow. We'll go into that later on, but for now, FIXME: transition

Using arrows


At this point in the tutorial, you should have a strong enough grasp of the arrow machinery that we can start to meaningfully tackle the question of what arrows are good for.

Stream processing

Avoiding leaks

Arrows were originally motivated by an efficient parser design found by Swierstra & Duponcheel. To describe the benefits of their design, let's examine exactly how monadic parsers work. If you want to parse a single word, you end up with several monadic parsers stacked end to end. Taking Parsec as an example, the parser string "word" can also be viewed as

word = do char 'w' >> char 'o' >> char 'r' >> char 'd'
          return "word"

Each character is tried in order; if "worg" is the input, then the first three parsers will succeed, and the last one will fail, making the entire string "word" parser fail. If you want to parse one of two options, you create a new parser for each and they are tried in order. The first one must fail and then the next will be tried with the same input.

ab = do char 'a' <|> char 'b' <|> char 'c'

To parse "c" successfully, both 'a' and 'b' must have been tried. one = do char 'o' >> char 'n' >> char 'e' return "one" two = do char 't' >> char 'w' >> char 'o' return "two" three = do char 't' >> char 'h' >> char 'r' >> char 'e' >> char 'e' return "three" nums = do one <|> two <|> three

With these three parsers, you can't know that the string "four" will fail the parser nums until the last parser has failed. If one of the options can consume much of the input but will fail, you still must descend down the chain of parsers until the final parser fails. All of the input that can possibly be consumed by later parsers must be retained in memory in case one of them does consume it. That can lead to much more space usage than you would naively expect; this is often called a space leak. The general pattern of monadic parsers is that each option must fail or one option must succeed. So what's better? Swierstra & Duponcheel (1996) noticed that a smarter parser could immediately fail upon seeing the very first character. For example, in the nums parser above, the choice of first letter parsers was limited to either the letter 'o' for "one" or the letter 't' for both "two" and "three". This smarter parser would also be able to


garbage collect input sooner because it could look ahead to see if any other parsers might be able to consume the input, and drop input that could not be consumed. This new parser is a lot like the monadic parsers with the major difference that it exports static information. It's like a monad, but it also tells you what it can parse. There's one major problem. This doesn't fit into the monadic interface. Monads are (a -> m b), they're based around functions only. There's no way to attach static information. You have only one choice, throw in some input, and see if it passes or fails. The monadic interface has been touted as a general purpose tool in the functional programming community, so finding that there was some particularly useful code that just couldn't fit into that interface was something of a setback. This is where Arrows come in. John Hughes's Generalising monads to arrows proposed the arrows abstraction as new, more flexible tool. Static and dynamic parsers Let us examine Swierstra & Duponcheel's parser in greater detail, from the perspective of arrows. The parser has two components: a fast, static parser which tells us if the input is worth trying to parse; and a slow, dynamic parser which does the actual parsing work. data Parser s a b = P (StaticParser s) (DynamicParser s a b) data StaticParser s = SP Bool [s] newtype DynamicParser s a b = DP ((a,[s]) -> (b,[s]))

The static parser consists of a flag, which tells us if the parser can accept the empty input, and a list of possible starting characters. For example, the static parser for a single character would be as follows:

spCharA :: Char -> StaticParser Char
spCharA c = SP False [c]

It does not accept the empty string (False) and the list of possible starting characters consists only of c.

(The rest of this section needs to be verified.)

The dynamic parser needs a little more dissecting: what we see is a function that goes from (a,[s]) to (b,[s]). It is useful to think in terms of sequencing two parsers: each parser consumes the result of the previous parser (a), along with the remaining bits of input stream ([s]); it does something with a to produce its own result b, consumes a bit of string and returns that. Ooof. So, as an example of this in action, consider a dynamic parser (Int,String) -> (Int,String), where the Int represents a count of the characters parsed so far. The table below shows what would happen if we sequence a few of them together and set them loose on the string "cake":

                      result   remaining
before                   0       cake
after first parser       1       ake
after second parser      2       ke
after third parser       3       e


So the point here is that a dynamic parser has two jobs: it does something to the output of the previous parser (informally, a -> b), and it consumes a bit of the input string (informally, [s] -> [s]), hence the type DP ((a,[s]) -> (b,[s])). Now, in the case of a dynamic parser for a single character, the first job is trivial. We ignore the output of the previous parser. We return the character we have parsed. And we consume one character off the stream:

dpCharA :: Char -> DynamicParser Char Char Char
dpCharA c = DP (\(_,x:xs) -> (c,xs))

This might lead you to ask a few questions. For instance, what's the point of accepting the output of the previous parser if we're just going to ignore it? The best answer we can give right now is "wait and see". If you're comfortable with monads, consider the bind operator (>>=). While bind is immensely useful by itself, sometimes, when sequencing two monadic computations together, we like to ignore the output of the first computation by using the anonymous bind (>>). This is the same situation here. We've got an interesting little bit of power on our hands, but we're not going to use it quite yet. The next question, then: shouldn't the dynamic parser be making sure that the current character off the stream matches the character to be parsed? Shouldn't x == c be checked for? No. And in fact, this is part of the point; the work is not necessary because the check would already have been performed by the static parser. Anyway, let us put this together. Here is our S+D style parser for a single character:

charA :: Char -> Parser Char Char Char
charA c = P (SP False [c]) (DP (\(_,x:xs) -> (c,xs)))

Arrow combinators (robots)

Up to this point, we have explored two somewhat independent trains of thought. On the one hand, we've taken a look at some arrow machinery, the combinators/robots from above, although we don't exactly know what it's for. On the other hand, we have introduced a type of parser using the Arrow class. We know that the goal is to avoid space leaks and that it somehow involves separating a fast static parser from its slow dynamic part, but we don't really understand how that ties in to all this arrow machinery. In this section, we will attempt to address both of these gaps in our knowledge and merge our twin trains of thought into one. We're going to implement the Arrow class for Parser s, and by doing so, give you a glimpse of what makes arrows useful. So let's get started:

instance Arrow (Parser s) where

One of the simplest things we can do is to convert an arbitrary function into a parsing arrow. We're going to use "parse" in the loose sense of the term: our resulting arrow accepts the empty string, and only the empty string (its set of first characters is []). Its sole job is to take the output of the previous parsing arrow and do something with it. Otherwise, it does not consume any input.

  arr f = P (SP True []) (DP (\(b,s) -> (f b,s)))

Likewise, the first combinator is relatively straightforward. Recall the conveyor belts from above. Given a parser, we want to produce a new parser that accepts a pair of


inputs (b,d). The first part of the input, b, is what we actually want to parse. The second part is passed through completely untouched:

  first (P sp (DP p)) = P sp (DP (\((b,d),s) -> let (c, s') = p (b,s) in ((c,d),s')))

On the other hand, the implementation of (>>>) requires a little more thought. We want to take two parsers and return a combined parser incorporating the static and dynamic parsers of both arguments:

  (P (SP empty1 start1) (DP p1)) >>> (P (SP empty2 start2) (DP p2)) =
    P (SP (empty1 && empty2)
          (if not empty1 then start1 else start1 `union` start2))
      (DP (p2.p1))

Combining the dynamic parsers is easy enough; we just do function composition. Putting the static parsers together requires a little bit of thought. First of all, the combined parser can only accept the empty string if both parsers do. Fair enough; now what about the starting symbols? Well, the parsers are supposed to be in a sequence, so the starting symbols of the second parser shouldn't really matter. If life were simple, the starting symbols of the combined parser would only be start1. Alas, life is NOT simple, because parsers could very well accept the empty input. If the first parser accepts the empty input, then we have to account for this possibility by accepting the starting symbols from both the first and the second parsers.

Exercises
1. Consider the charA parser from above. What would charA 'o' >>> charA 'n' >>> charA 'e' result in?
2. Write a simplified version of that combined parser. That is: does it accept the empty string? What are its starting symbols? What is the dynamic parser for this?

So what do arrows buy us in all this?

Monads can be arrows too

"The real flexibility with arrows comes with the ones that aren't monads, otherwise it's just a clunkier syntax" -- Philippa Cowderoy

It turns out that all monads can be made into arrows. Here's a central quote from the original arrows papers:

Just as we think of a monadic type m a as representing a 'computation delivering an a'; so we think of an arrow type a b c, (that is, the application of the parameterised type a to the two parameters b and c) as representing 'a computation with input of type b delivering a c'; arrows make the dependence on input explicit.

One way to look at arrows is the way the English language allows you to noun a verb, for example, "I had a chat." Arrows are much like that: they turn a function from a to b into a value. This value is a first class transformation from a to b.
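A minimal sketch of the "monads can be arrows" claim, using the Kleisli wrapper from Control.Arrow, which turns any monadic function a -> m b into an arrow (the names askName and greet are invented for the example):

import Control.Arrow (Kleisli(..), (>>>))

askName :: Kleisli IO () String
askName = Kleisli (\() -> putStr "Name: " >> getLine)

greet :: Kleisli IO String ()
greet = Kleisli (\name -> putStrLn ("Hello, " ++ name))

main :: IO ()
main = runKleisli (askName >>> greet) ()   -- composes the two IO actions as arrows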


Arrows in practice

Arrows are a relatively new abstraction, but they have already found a number of uses in the Haskell world:

Hughes' arrow-style parsers were first described in his 2000 paper, but a usable implementation wasn't available until May 2005. Einar Karttunen wrote an implementation called PArrows that approaches the features of the standard Haskell parser combinator library, Parsec.
The Fudgets library for building graphical interfaces. FIXME: complete this paragraph
Yampa - FIXME: talk briefly about Yampa
The Haskell XML Toolbox (HXT (http://www.fh-wedel.de/~si/HXmlToolbox/index.html)) uses arrows for processing XML. There is a Wiki page in the Haskell Wiki with a somewhat Gentle Introduction to HXT (http://www.haskell.org/haskellwiki/HXT).

Arrows Aren't The Answer To Every Question

Arrows do have some problems. Several people on the #haskell IRC channel have done nifty arrows experiments, and some of those experiments ran into problems. Some notable obstacles were typified by experiments done by Jeremy Shaw, Einar Karttunen, and Peter Simons. If you would like to learn more about the limitations behind arrows, follow the references at the end of this article.

See also

Generalising Monads to Arrows - John Hughes
http://www.haskell.org/arrows/biblio.html
Arrow uses
Arrow limitations - Jeremy Shaw, Einar Kartunnen, Peter Simons
Current research

Acknowledgements This module uses text from An Introduction to Arrows by Shae Erisson, originally written for The Monad.Reader 4

Continuation passing style


Continuation passing style, or CPS, is a style of programming where functions never return values, but instead take an extra parameter which they give their result to — this extra parameter represents what to do next, and is called a continuation.

Starting simple

To begin with, we're going to explore two simple examples which illustrate what CPS and continuations are.

square

Let's start with a very simple module which squares a number, then outputs it:

Example: A simple module, no continuations

square :: Int -> Int
square x = x ^ 2

main = do
  let x = square 4
  print x

We're clearly doing two things here. First, we square four, then we print the result. If we were to make the square function take a continuation, we'd end up with something like the following:

Example: A simple module, using continuations

square :: Int -> (Int -> a) -> a
square x k = k (x ^ 2)

main = square 4 print

That is, square takes an extra parameter which is the function that represents what to do next — the continuation of the computation.

quadRoots

Consider the quadratic equation. Recall that for the equation ax^2 + bx + c = 0, the quadratic formula states that the roots are given by

  x = (-b ± sqrt(b^2 - 4ac)) / (2a)

When considering only real numbers, we may have zero, one, or two roots. The quantity under the radical, d = b^2 - 4ac, is known as the discriminant. When d < 0, there are no (real) roots; when d = 0 we have


one real root; and finally, with d > 0, we have two real roots. We then write a Haskell function to compute the roots of a quadratic as follows:

Example: quadRoots, no continuations

data Roots = None | One Double | Two Double Double

quadRoots :: Double -> Double -> Double -> Roots
quadRoots a b c
  | d < 0     = None
  | d == 0    = One $ -b/2/a
  | d > 0     = Two ((-b + sqrt d)/2/a) ((-b - sqrt d)/2/a)
  where d = b*b - 4*a*c

To use this function, we need to pattern match on the result, for instance:

Example: Using the result of quadRoots, still no continuations

printRoots :: Double -> Double -> Double -> IO ()
printRoots a b c = case quadRoots a b c of
  None     -> putStrLn "There were no roots."
  One x    -> putStrLn $ showString "There was one root: " $ show x
  Two x x' -> putStrLn $ showString "There were two roots found: " $
                shows x $ showString " and " $ show x'

To write this in continuation passing style, we will begin by modifying the quadRoots function. It will now take three additional parameters: functions that will be called with the resulting number of roots.

Example: quadRoots' using continuations

quadRoots' :: Double -> Double -> Double    -- The three coefficients
           -> a                             -- What to do with no roots
           -> (Double -> a)                 -- What to do with one root
           -> (Double -> Double -> a)       -- What to do with two roots
           -> a                             -- The final result
quadRoots' a b c f0 f1 f2
  | d < 0     = f0
  | d == 0    = f1 $ -b/2/a
  | d > 0     = f2 ((-b + sqrt d)/2/a) ((-b - sqrt d)/2/a)
  where d = b*b - 4*a*c

One may notice that the body of quadRoots' is identical to quadRoots, except that we've substituted arbitrary functions for the constructors of Roots. Indeed, quadRoots may be rewritten to use quadRoots', by passing the constructors for Roots. Now we no longer need to pattern match on the result, we just pass in the expressions from the case matches.

This is how data is often expressed in lambda calculi: note that quadRoots' doesn't use Roots at all.

Example: Using the result of quadRoots, with continuations

printRoots :: Double -> Double -> Double -> IO ()
printRoots a b c = quadRoots' a b c f0 f1 f2
  where
    f0      = putStrLn "There were no roots."
    f1 x    = putStrLn $ "There was one root: " ++ show x
    f2 x x' = putStrLn $ "There were two roots found: " ++ show x ++ " and " ++ show x'

Exercises FIXME: write some exercises

Using the Cont monad

By now, you should be used to the pattern that whenever we find a pattern we like (here the pattern is using continuations) but it makes our code a little ugly, we use a monad to encapsulate the 'plumbing'. Indeed, there is a monad for modelling computations which use CPS.

Example: The Cont monad

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

Removing the newtype and record cruft, we obtain that Cont r a expands to (a -> r) -> r. So how does this fit with our idea of continuations we presented above? Well, remember that a function in CPS basically took an extra parameter which represented 'what to do next'. So, here, the type of Cont r a expands to be an extra function (the continuation), which is a function from things of type a (what the result of the function would have been, if we were returning it normally instead of throwing it into the continuation) , to things of type r, which becomes the final result type of our function. All of that was a little vague and abstract so let's crack out an example.

Example: The square module, using the Cont monad

square :: Int -> Cont r Int
square x = return (x ^ 2)

main = runCont (square 4) print
{- Result: 16 -}

If we expand the type of square, we obtain that square :: Int -> (Int -> r) -> r, which is precisely what we had before we introduced Cont into the picture. So we can see that type Cont r a


expands into a type which fits our notion of a continuation, as defined above. Every function that returns a Cont-value actually takes an extra parameter, which is the continuation. Using return simply throws its argument into the continuation. How does the Cont implementation of (>>=) work, then? It's easiest to see it at work:

Example: The (>>=) function for the Cont monad

square :: Int -> Cont r Int
square x = return (x ^ 2)

addThree :: Int -> Cont r Int
addThree x = return (x + 3)

main = runCont (square 4 >>= addThree) print
{- Result: 19 -}

The return and (>>=) functions for Cont (with the newtype wrapping and unwrapping omitted for clarity) are given below:

return n = \k -> k n
m >>= f  = \k -> m (\a -> f a k)

So return n is Cont-value that throws n straight away into whatever continuation it is applied to. m >>= f is a Cont-value that runs m with the continuation \a -> f a k, which receives the result of m, then applies it to f to get another Cont-value. This is then called with the continuation we got at the top level; in essence m >>= f is a Cont-value that takes the result from m, applies it to f, then throws that into the continuation. Exercises To come.
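To make this concrete, here is a small worked sketch (again ignoring the newtype wrapper, as above) expanding the composite computation from the previous example by hand:

square 4 >>= addThree
  = \k -> (\k' -> k' 16) (\a -> addThree a k)   -- substitute both definitions
  = \k -> (\a -> addThree a k) 16               -- square 4 hands 16 to the continuation it is given
  = \k -> addThree 16 k
  = \k -> k 19                                  -- addThree throws 19 into the final continuation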

callCC

By now you should be fairly confident using the basic notions of continuations and Cont, so we're going to skip ahead to the next big concept in continuation-land. This is a function called callCC, which is short for 'call with current continuation'. We'll start with an easy example.

Example: square using callCC

-- Without callCC
square :: Int -> Cont r Int
square n = return (n ^ 2)

-- With callCC
square :: Int -> Cont r Int
square n = callCC $ \k -> k (n ^ 2)

We pass a function to callCC that accepts one parameter that is in turn a function. This function (k in our example) is our tangible continuation: we can see here we're throwing a value (in this case, n ^ 2) into our continuation. We can see that the callCC version is equivalent to the return version stated above because we stated that return n is just a Cont-value that throws n into whatever continuation that it is given. Here, we use callCC to bring the continuation 'into scope', and immediately throw a value into it, just like using return. However, these versions look remarkably similar, so why should we bother using callCC at all? The power lies in that we now have precise control of exactly when we call our continuation, and with what values. Let's explore some of the surprising power that gives us. Deciding when to use k We mentioned above that the point of using callCC in the first place was that it gave us extra power over what we threw into our continuation, and when. The following example shows how we might want to use this extra flexibility.

Example: Our first proper callCC function

foo :: Int -> Cont r String
foo n = callCC $ \k -> do
  let n' = n ^ 2 + 3
  when (n' > 20) $ k "over twenty"
  return (show $ n' - 4)

foo is a slightly pathological function that computes the square of its input and adds three; if the result of this computation is greater than 20, then we return from the function immediately, throwing the String value "over twenty" into the continuation that is passed to foo. If not, then we subtract four from our previous computation, show it, and throw it into the continuation. If you're used to imperative languages, you can think of k like the 'return' statement that immediately exits the function. Of course, the advantages of an expressive language like Haskell are that k is just an ordinary first-class function, so you can pass it to other functions like when, or store it in a Reader, etc. Naturally, you can embed calls to callCC within do-blocks:

Example: More developed callCC example involving a do-block

bar :: Char -> String -> Cont r Int
bar c s = do
  msg <- callCC $ \k -> do
    let s' = c : s
    when (s' == "hello") $ k "They say hello."
    let s'' = show s'
    return ("They appear to be saying " ++ s'')
  return (length msg)

When you call k with a value, the entire callCC call takes that value. In other words, k is a bit like a 'goto' statement in other languages: when we call k in our example, it pops the execution out to where you first called callCC, the msg <- callCC $ ... line. No more of the argument to callCC (the inner do-block) is executed. Hence the following example contains a useless line:

Example: Popping out a function, introducing a useless line

bar :: Cont r Int
bar = callCC $ \k -> do
  let n = 5
  k n
  return 25

bar will always return 5, and never 25, because we pop out of bar before getting to the return 25 line.

A note on typing

Why do we exit using return rather than k the second time within the foo example? It's to do with types. Firstly, we need to think about the type of k. We mentioned that we can throw something into k, and nothing after that call will get run (unless k is run conditionally, like when wrapped in a when). So the return type of k doesn't matter; we can never do anything with the result of running k. We say, therefore, that the type of k is:

k :: a -> Cont r b

We universally quantify the return type of k. This is possible for the aforementioned reasons, and the reason it's advantageous is that we can do whatever we want with the result of k. In our above code, we use it as part of a when construct:

when :: Monad m => Bool -> m () -> m ()

As soon as the compiler sees k being used in this when, it infers that we want a () result type for k[10]. So the final expression in that inner do-block has type Cont r () too. This is the crux of our problem. There are two possible execution routes: either the condition for the when succeeds, in which case the do-block returns something of type Cont r String. (The call to k makes the entire do-block have a type of Cont r t, where t is the type of the argument given to k. Note that this is different from the return type of k itself, which is just the return type of the expression involving the call to k, not the entire do-block.) If the condition fails, execution passes on and the do-block returns something of type Cont r (). This is a type mismatch.

If you didn't follow any of that, just make sure you use return at the end of a do-block inside a call to callCC, not k.


The type of callCC

We've deliberately broken a trend here: normally when we've introduced a function, we've given its type straight away, but in this case we haven't. The reason is simple: the type is rather horrendously complex, and it doesn't immediately give insight into what the function does, or how it works. Nevertheless, you should be familiar with it, so now that you've hopefully understood the function itself, here's its type:

callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a

This seems like a really weird type to begin with, so let's use a contrived example. callCC $ \k -> k 5

You pass a function to callCC. This in turn takes a parameter, k, which is another function. k, as we remarked above, has the type: k :: a -> Cont r b

The entire argument to callCC, then, is a function that takes something of the above type and returns Cont r t, where t is whatever the type of the argument to k was. So, callCC's argument has type: (a -> Cont r b) -> Cont r a

Finally, callCC is therefore a function which takes that argument and returns its result. So the type of callCC is: callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a

Example: a complicated control structure

This example was originally taken from the 'The Continuation monad' section of the All about monads tutorial (http://www.haskell.org/all_about_monads/html/index.html), used with permission.

Example: Using Cont for a complicated control structure

{- We use the continuation monad to perform "escapes" from code blocks.
   This function implements a complicated control structure to process numbers:

   Input (n)        Output                      List Shown
   =========        ======                      ==========
   0-9              n                           none
   10-199           number of digits in (n/2)   digits of (n/2)
   200-19999        n                           digits of (n/2)
   20000-1999999    (n/2) backwards             none
   >= 2000000       sum of digits of (n/2)      digits of (n/2)
-}


fun :: Int -> String
fun n = (`runCont` id) $ do
  str <- callCC $ \exit1 -> do                      -- define "exit1"
    when (n < 10) (exit1 $ show n)
    let ns = map digitToInt (show $ n `div` 2)
    n' <- callCC $ \exit2 -> do                     -- define "exit2"
      when (length ns < 3) (exit2 $ length ns)
      when (length ns < 5) (exit2 n)
      when (length ns < 7) $ do
        let ns' = map intToDigit (reverse ns)
        exit1 (dropWhile (=='0') ns')               -- escape 2 levels
      return $ sum ns
    return $ "(ns = " ++ show ns ++ ") " ++ show n'
  return $ "Answer: " ++ str

Because it isn't initially clear what's going on, especially regarding the usage of callCC, we will explore this somewhat.

Analysis of the example

Firstly, we can see that fun is a function that takes an integer n. We basically implement a control structure using Cont and callCC that does different things based on the range that n falls in, as explained with the comment at the top of the function. Let's dive into the analysis of how it works.

1. Firstly, the (`runCont` id) at the top just means that we run the Cont block that follows with a final continuation of id. This is necessary as the result type of fun doesn't mention Cont.
2. We bind str to the result of the following callCC do-block:
   1. If n is less than 10, we exit straight away, just showing n.
   2. If not, we proceed. We construct a list, ns, of the digits of n `div` 2.
   3. n' (an Int) gets bound to the result of the following inner callCC do-block.
      1. If length ns < 3, i.e., if n `div` 2 has less than 3 digits, we pop out of this inner do-block with the number of digits as the result.
      2. If n `div` 2 has less than 5 digits, we pop out of the inner do-block returning the original n.
      3. If n `div` 2 has less than 7 digits, we pop out of both the inner and outer do-blocks, with the result of the digits of n `div` 2 in reverse order (a String).
      4. Otherwise, we end the inner do-block, returning the sum of the digits of n `div` 2.
   4. We end this do-block, returning the String "(ns = X) Y", where X is ns, the digits of n `div` 2, and Y is the result from the inner do-block, n'.
3. Finally, we return out of the entire function, with our result being the string "Answer: Z", where Z is the string we got from the callCC do-block.
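As a rough sanity check against the table in the comment (a sketch; this assumes the imports Data.Char (digitToInt, intToDigit) and the Cont machinery from above, and the exact strings follow from the code), one would expect something like this at the GHCi prompt:

*Main> fun 5
"Answer: 5"
*Main> fun 50
"Answer: (ns = [2,5]) 2"
*Main> fun 24680
"Answer: 4321"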

Example: exceptions

One use of continuations is to model exceptions. To do this, we hold on to two continuations: one that takes us out to the handler in case of an exception, and one that takes us to the post-handler code in case of a success. Here's a simple function that takes two numbers and does integer division on them, failing when the denominator is zero.


Example: An exception-throwing div

divExcpt :: Int -> Int -> (String -> Cont r Int) -> Cont r Int
divExcpt x y handler = callCC $ \ok -> do
  err <- callCC $ \notOk -> do
    when (y == 0) $ notOk "Denominator 0"
    ok $ x `div` y
  handler err

{- For example,
     runCont (divExcpt 10 2 error) id   -->  5
     runCont (divExcpt 10 0 error) id   -->  *** Exception: Denominator 0
-}

How does it work? We use two nested calls to callCC. The first labels a continuation that will be used when there's no problem. The second labels a continuation that will be used when we wish to throw an exception. If the denominator isn't 0, x `div` y is thrown into the ok continuation, so the execution pops right back out to the top level of divExcpt. If, however, we were passed a zero denominator, we throw an error message into the notOk continuation, which pops us out to the inner do-block, and that string gets assigned to err and given to handler. A more general approach to handling exceptions can be seen with the following function. Pass a computation as the first parameter (which should be a function taking a continuation to the error handler) and an error handler as the second parameter. This example takes advantage of the generic MonadCont class which covers both Cont and ContT by default, plus any other continuation classes the user has defined.

Example: General try using continuations.

tryCont :: MonadCont m => ((err -> m a) -> m a) -> (err -> m a) -> m a
tryCont c h = callCC $ \ok -> do
  err <- callCC $ \notOk -> do
    x <- c notOk
    ok x
  h err

For an example using try, see the following program.

Example: Using try

data SqrtException = LessThanZero deriving (Show, Eq)

sqrtIO :: (SqrtException -> ContT r IO ()) -> ContT r IO ()
sqrtIO throw = do
  ln <- lift (putStr "Enter a number to sqrt: " >> readLn)
  when (ln < 0) (throw LessThanZero)
  lift $ print (sqrt ln)


main = runContT (tryCont sqrtIO (lift . print)) return
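To actually compile the two examples above, one would also need imports along these lines (a sketch; this assumes the mtl-style continuation modules, which are one common way to obtain ContT, callCC and lift):

import Control.Monad       (when)
import Control.Monad.Cont  (ContT, runContT, callCC, MonadCont)
import Control.Monad.Trans (lift)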

Example: coroutines

Notes

 1. ↑ At least as far as types are concerned, but we're trying to avoid that word :)
 2. ↑ More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.
 3. ↑ In fact, these are one and the same concept in Haskell.
 4. ↑ This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.
 5. ↑ To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
 6. ↑ This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
 7. ↑ Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
 8. ↑ This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.
 9. ↑ In mathematics, n! normally means the factorial of n, but that syntax is impossible in Haskell, so we don't use it here.
10. ↑ It infers a monomorphic type because k is bound by a lambda expression, and things bound by lambdas always have monomorphic types. See Polymorphism.

Mutable objects

Although Haskell normally deals with immutable variables, that is, ones that never change, several advanced techniques have been developed to simulate being able to change variables.

The ST monad
State references: STRef and IORef
Mutable arrays
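These topics are only listed here; as a small taste of the simplest of them (see the sketch below), here is a minimal example using IORef from Data.IORef:

import Data.IORef

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)   -- create a mutable reference holding 0
  modifyIORef ref (+1)         -- update it in place
  readIORef ref >>= print      -- prints 1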


Examples

Zippers

Theseus and the Zipper

The Labyrinth

"Theseus, we have to do something," said Homer, chief marketing officer of Ancient Geeks Inc. Theseus put the Minotaur action figure™ back onto the shelf and nodded. "Today's children are no longer interested in the ancient myths, they prefer modern heroes like Spiderman or Sponge Bob." Heroes. Theseus knew well how much he had been a hero in the labyrinth back then on Crete[11]. But those "modern heroes" did not even try to appear realistic. What made them so successful? Anyway, if the pending sales problems could not be resolved, the shareholders would certainly arrange a passage over the Styx for Ancient Geeks Inc.

"Heureka! Theseus, I have an idea: we implement your story with the Minotaur as a computer game! What do you say?" Homer was right. There had been several books, epic (and chart breaking) songs, a mandatory movie trilogy and uncountable Theseus & the Minotaur™ gimmicks, but a computer game was missing. "Perfect, then. Now, Theseus, your task is to implement the game."

A true hero, Theseus chose Haskell as the language to implement the company's redeeming product in. Of course, exploring the labyrinth of the Minotaur was to become one of the game's highlights. He pondered: "We have a two-dimensional labyrinth whose corridors can point in many directions. Of course, we can abstract from the detailed lengths and angles: for the purpose of finding the way out, we only need to know how the path forks. To keep things easy, we model the labyrinth as a tree. This way, the two branches of a fork cannot join again when walking deeper and the player cannot go round in circles. But I think there is enough opportunity to get lost; and this way, if the player is patient enough, he can explore the entire labyrinth with the left-hand rule."

data Node a = DeadEnd a
            | Passage a (Node a)
            | Fork a (Node a) (Node a)

An example labyrinth and its representation as tree.
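To make the data type concrete, here is a small example value (made up for illustration; it is not the labyrinth from the picture):

exampleLab :: Node (Int, Int)
exampleLab = Fork (0,0)
               (DeadEnd (-1,1))
               (Passage (1,1) (Fork (1,2) (DeadEnd (0,3)) (DeadEnd (2,3))))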


Theseus made the nodes of the labyrinth carry an extra parameter of type a. Later on, it may hold game relevant information like the coordinates of the spot a node designates, the ambience around it, a list of game items that lie on the floor, or a list of monsters wandering in that section of the labyrinth. We assume that two helper functions

get :: Node a -> a
put :: a -> Node a -> Node a

retrieve and change the value of type a stored in the first argument of every constructor of Node a.

Exercises
1. Implement get and put. One case for get is get (Passage x _) = x.
2. To get a concrete example, write down the labyrinth shown in the picture as a value of type Node (Int,Int). The extra parameter (Int,Int) holds the cartesian coordinates of a node.

"Mh, how to represent the player's current position in the labyrinth? The player can explore deeper by choosing left or right branches, like in" turnRight :: Node a -> Maybe (Node a) turnRight (Fork _ l r) = Just r turnRight _ = Nothing

"But replacing the current top of the labyrinth with the corresponding sub-labyrinth this way is not an option, because he cannot go back then." He pondered. "Ah, we can apply Ariadne's trick with the thread for going back. We simply represent the player's position by the list of branches his thread takes, the labyrinth always remains the same." data Branch = | | type Thread =

KeepStraightOn TurnLeft TurnRight [Branch]

Representation of the player's position by Ariadne's thread.


"For example, a thread [TurnRight,KeepStraightOn] means that the player took the right branch at the entrance and then went straight down a Passage to reach its current position. With the thread, the player can now explore the labyrinth by extending or shortening it. For instance, the function turnRight extends the thread by appending the TurnRight to it." turnRight :: Thread -> Thread turnRight t = t ++ [TurnRight]

"To access the extra data, i.e. the game relevant items and such, we simply follow the thread into the labyrinth." retrieve retrieve retrieve retrieve retrieve

:: Thread -> Node a [] (KeepStraightOn:bs) (TurnLeft :bs) (TurnRight :bs)

-> a n (Passage _ n) (Fork _ l r) (Fork _ l r)

= = = =

get n retrieve bs n retrieve bs l retrieve bs r

Exercises
Write a function update that applies a function of type a -> a to the extra data at the player's position.

Theseus' satisfaction over this solution did not last long. "Unfortunately, if we want to extend the path or go back a step, we have to change the last element of the list. We could store the list in reverse, but even then, we have to follow the thread again and again to access the data in the labyrinth at the player's position. Both actions take time proportional to the length of the thread and for large labyrinths, this will be too long. Isn't there another way?"

Ariadne's Zipper

While Theseus was a skillful warrior, he did not train much in the art of programming and could not find a satisfying solution. After intense but fruitless cogitation, he decided to call his former love Ariadne to ask her for advice. After all, it was her who had the idea with the thread. "Ariadne Consulting. What can I do for you?" Our hero immediately recognized the voice. "Hello Ariadne, it's Theseus." An uneasy silence paused the conversation. Theseus remembered well that he had abandoned her on the island of Naxos and knew that she would not appreciate his call. But Ancient Geeks Inc. was on the road to Hades and he had no choice. "Uhm, darling, ... how are you?" Ariadne retorted an icy response, "Mr. Theseus, the times of darling are long over. What do you want?" "Well, I uhm ... I need some help with a programming problem. I'm programming a new Theseus & the Minotaur™ computer game." She jeered, "Yet another artifact to glorify your 'heroic being'? And you want me of all people to help you?" "Ariadne, please, I beg of you, Ancient Geeks Inc. is on the brink of insolvency. The game is our last resort!" After a pause, she came to a decision. "Fine, I will help you. But only if you transfer a substantial part of Ancient Geeks Inc. to me. Let's say thirty percent." Theseus turned pale. But what could he do? The situation was desperate enough, so he agreed but only after negotiating Ariadne's share to a tenth.


After Theseus told Ariadne of the labyrinth representation he had in mind, she could immediately give advice, "You need a zipper." "Huh? What does the problem have to do with my fly?" "Nothing, it's a data structure first published by Gérard Huet[12]." "Ah." "More precisely, it's a purely functional way to augment tree-like data structures like lists or binary trees with a single focus or finger that points to a subtree inside the data structure and allows constant time updates and lookups at the spot it points to[13]. In our case, we want a focus on the player's position." "I know for myself that I want fast updates, but how do I code it?" "Don't get impatient, you cannot solve problems by coding, you can only solve them by thinking. The only place where we can get constant time updates in a purely functional data structure is the topmost node[14][15]. So, the focus necessarily has to be at the top. Currently, the topmost node in your labyrinth is always the entrance, but your previous idea of replacing the labyrinth by one of its sub-labyrinths ensures that the player's position is at the topmost node." "But then, the problem is how to go back, because all those sub-labyrinths get lost that the player did not choose to branch into." "Well, you can use my thread in order not to lose the sub-labyrinths." Ariadne savored Theseus' puzzlement but quickly continued before he could complain that he already used Ariadne's thread, "The key is to glue the lost sub-labyrinths to the thread so that they actually don't get lost at all. The intention is that the thread and the current sub-labyrinth complement one another to the whole labyrinth. With 'current' sub-labyrinth, I mean the one that the player stands on top of. The zipper simply consists of the thread and the current sub-labyrinth."

type Zipper a = (Thread a, Node a)

The zipper is a pair of Ariadne's thread and the current sub-labyrinth that the player stands on top of. The main thread is colored red and has sub-labyrinths attached to it, such that the whole labyrinth can be reconstructed from the pair.

Theseus didn't say anything. "You can also view the thread as a context in which the current sub-labyrinth resides. Now, let's find out how to define Thread a. By the way, Thread has to take the extra parameter a because it now stores sub-labyrinths. The thread is still a simple list of branches, but the branches are different from before."

   data Branch a = KeepStraightOn a
                 | TurnLeft  a (Node a)
                 | TurnRight a (Node a)
   type Thread a = [Branch a]

"Most importantly, TurnLeft and TurnRight have a sub-labyrinth glued to them. When the player chooses say to turn right, we extend the thread with a TurnRight and now attach the untaken left branch to it, so that it doesn't get lost." Theseus interrupts, "Wait, how would I implement this behavior as a function turnRight? And what about the first argument of type a for TurnRight? Ah, I see. We not only need to glue the branch that would get lost, but also the extra data of the Fork because it would otherwise get lost as well. So, we can generate a new branch by a preliminary" branchRight (Fork x l r) = TurnRight x l

"Now, we have to somehow extend the existing thread with it." "Indeed. The second point about the thread is that it is stored backwards. To extend it, you put a new branch in front of the list. To go back, you delete the topmost element." "Aha, this makes extending and going back take only constant time, not time proportional to the length length as in my previous version. So the final version of turnRight is" turnRight :: Zipper a -> Maybe (Zipper a) turnRight (t, Fork x l r) = Just (TurnRight x l : t, r) turnRight _ = Nothing

Taking the right subtree from the entrance. Of course, the thread is initially empty. Note that the thread runs backwards, i.e. the topmost segment is the most recent.

"That was not too difficult. So let's continue with keepStraightOn for going down a passage. This is even easier than choosing a branch as we only need to keep the extra data:" keepStraightOn :: Zipper a -> Maybe (Zipper a) keepStraightOn (t, Passage x n) = Just (KeepStraightOn x : t, n) keepStraightOn _ = Nothing


Now going down a passage.
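For a quick, informal check of the two functions so far — the tiny example labyrinth below is ours, not part of the original story — one can navigate a few steps and look at the resulting zipper, assuming the Node, Zipper, turnRight and keepStraightOn definitions above:

   -- A tiny labyrinth: a passage leading to a fork.
   tiny :: Node Int
   tiny = Passage 0 (Fork 1 (DeadEnd 2) (Passage 3 (DeadEnd 4)))

   -- Start at the entrance with an empty thread, go straight on, then turn right.
   walk :: Maybe (Zipper Int)
   walk = keepStraightOn ([], tiny) >>= turnRight
   -- walk == Just ( [TurnRight 1 (DeadEnd 2), KeepStraightOn 0]
   --              , Passage 3 (DeadEnd 4) )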

Exercises

Write the function turnLeft.

Pleased, he continued, "But the interesting part is to go back, of course. Let's see..."

   back :: Zipper a -> Maybe (Zipper a)
   back ([]                  , _) = Nothing
   back (KeepStraightOn x : t, n) = Just (t, Passage x n)
   back (TurnLeft  x r    : t, l) = Just (t, Fork x l r)
   back (TurnRight x l    : t, r) = Just (t, Fork x l r)

"If the thread is empty, we're already at the entrance of the labyrinth and cannot go back. In all other cases, we have to wind up the thread. And thanks to the attachments to the thread, we can actually reconstruct the sub-labyrinth we came from." Ariadne remarked, "Note that a partial test for correctness is to check that each bound variable like x, l and r on the left hand side appears exactly once at the right hands side as well. So, when walking up and down a zipper, we only redistribute data between the thread and the current sub-labyrinth." Exercises 1. Now that we can navigate the zipper, code the functions get, put and update that operate on the extra data at the player's position. 2. Zippers are by no means limited to the concrete example Node a, they can be constructed for all tree-like data types. Go on and construct a zipper for binary trees data Tree a = Leaf a | Bin (Tree a) (Tree a)

   Start by thinking about the possible branches Branch a that a thread can take. What do you have to glue to the thread when exploring the tree?
3. Simple lists have a zipper as well.

   data List a = Empty | Cons a (List a)

What does it look like?


4. Write a complete game based on Theseus' labyrinth.

Heureka! That was the solution Theseus sought and Ancient Geeks Inc. should prevail, even if partially sold to Ariadne Consulting. But one question remained: "Why is it called zipper?" "Well, I would have called it 'Ariadne's pearl necklace'. But most likely, it's called zipper because the thread is in analogy to the open part and the sub-labyrinth is like the closed part of a zipper. Moving around in the data structure is analogous to zipping or unzipping the zipper." "'Ariadne's pearl necklace'," he articulated disdainfully. "As if your thread was any help back then on Crete." "As if the idea with the thread was yours," she replied. "Bah, I need no thread," he defied the fact that he actually did need the thread to program the game. Much to his surprise, she agreed, "Well, indeed you don't need a thread. Another view is to literally grab the tree at the focus with your finger and lift it up in the air. The focus will be at the top and all other branches of the tree hang down. You only have to assign the resulting tree a suitable algebraic data type, most likely that of the zipper."

Grab the focus with your finger, lift it in the air and the hanging branches will form a new tree with your finger at the top, ready to be structured by an algebraic data type.

"Ah." He didn't need Ariadne's thread but he needed Ariadne to tell him? That was too much. "Thank you, Ariadne, good bye." She did not hide her smirk as he could not see it anyway through the phone. Exercises Take a list, fix one element in the middle with your finger and lift the list into the air. What type can you give to the resulting tree?

Half a year later, Theseus stopped in front of a shop window, defying the cold rain that tried to creep under his buttoned up anorak. Blinking letters announced "Spider-Man: lost in the Web" - find your way through the labyrinth of threads - the great computer game by Ancient Geeks Inc.


He cursed the day when he called Ariadne and sold her a part of the company. Was it she who contrived the unfriendly takeover by WineOS Corp., led by Ariadne's husband Dionysus? Theseus watched the raindrops finding their way down the glass window. After the production line was changed, nobody would produce Theseus and the Minotaur™ merchandise anymore. He sighed. His time, the time of heroes, was over. Now came the super-heroes.

Differentiation of data types

The previous section has presented the zipper, a way to augment a tree-like data structure Node a with a finger that can focus on the different subtrees. While we constructed a zipper for a particular data structure Node a, the construction can be easily adapted to different tree data structures by hand.

Exercises

Start with a ternary tree

   data Node a = Leaf a | Node (Node a) (Node a) (Node a)

and derive the corresponding Thread a and Zipper a.

But there is also an entirely mechanical way to derive the zipper of any (suitably regular) data type. Surprisingly, 'derive' is to be taken literally, for the zipper can be obtained by the derivative of the data type, a discovery first described by Conor McBride[16]. The subsequent section is going to explicate this truly wonderful mathematical gem.

For a systematic construction, we need to calculate with types. The basics of structural calculations with types are outlined in a separate chapter Generic Programming and we will heavily rely on this material.

Let's look at some examples to see what their zippers have in common and how they hint at differentiation. The type of binary trees is the fixed point of the recursive equation

   Tree2 = 1 + Tree2 × Tree2.

When walking down the tree, we iteratively choose to enter the left or the right subtree and then glue the not-entered subtree to Ariadne's thread. Thus, the branches of our thread have the type

   Branch2 = Tree2 + Tree2 ≅ 2 × Tree2.

Similarly, the thread for a ternary tree

   Tree3 = 1 + Tree3 × Tree3 × Tree3

has branches of type

   Branch3 = 3 × Tree3 × Tree3

because at every step, we can choose between three subtrees and have to store the two subtrees we don't enter. Isn't this strikingly similar to the derivatives d/dx x² = 2·x and d/dx x³ = 3·x²?

The key to the mystery is the notion of the one-hole context of a data structure. Imagine a data structure parameterised over a type X, like the type of trees Tree X. If we were to remove one of the items of type X from the structure and somehow mark the now empty position, we obtain a structure with a marked hole. The result is called a "one-hole context" and inserting an item of type X into the hole gives back a completely filled Tree X. The hole acts as a distinguished position, a focus. The figures illustrate this.

Removing a value of type X from a Tree X leaves a hole at that position.

A more abstract illustration of plugging X into a one-hole context.

Of course, we are interested in the type to give to a one-hole context, i.e. how to represent it in Haskell. The problem is how to efficiently mark the focus. But as we will see, finding a representation for one-hole contexts by induction on the structure of the type we want to take the one-hole context of automatically leads to an efficient data type[17]. So, given a data structure F X with a functor F and an argument type X, we want to calculate the type ∂F X of one-hole contexts from the structure of F. As our choice of notation ∂ already reveals, the rules for constructing one-hole contexts of sums, products and compositions are exactly Leibniz' rules for differentiation.

   Const A:  ∂(Const A) = 0.  There is no X in Const A = A, so the type of its one-hole contexts must be empty.
   X:        ∂X = 1.  There is only one position for items X in X. Removing one X leaves no X in the result. And as there is only one position we can remove it from, there is exactly one one-hole context for X. Thus, the type of one-hole contexts is the singleton type.
   F + G:    ∂(F + G) = ∂F + ∂G.  As an element of type F + G is either of type F or of type G, a one-hole context is also either ∂F or ∂G.
   F × G:    ∂(F × G) = F × ∂G + ∂F × G.  The hole in a one-hole context of a pair is either in the first or in the second component.
   F ∘ G:    ∂(F ∘ G) = (∂F ∘ G) × ∂G.  Chain rule. The hole in a composition arises by making a hole in the enclosing structure and fitting the enclosed structure in.

Of course, the function plug that fills a hole has the type ∂F X × X → F X.

So far, the syntax ∂F denotes the differentiation of functors, i.e. of a kind of type functions with one argument. But there is also a handy expression-oriented notation ∂_X F X, slightly more suitable for calculation. The subscript indicates the variable with respect to which we want to differentiate. In general, we have

   ∂_X (F X) = (∂F) X.

An example is

   ∂_X (X × X) = 1 × X + X × 1 ≅ 2 × X.

Of course, ∂_X is just point-wise whereas ∂ is point-free style.

Exercises

1. Rewrite some rules in point-wise style. For example, the left hand side of the product rule becomes ∂_X (F X × G X).
2. To get familiar with one-hole contexts, differentiate the product of exactly n factors formally and convince yourself that the result is indeed the corresponding one-hole context.
3. Of course, one-hole contexts are useless if we cannot plug values of type X back into them. Write the plug functions corresponding to the five rules.
4. Formulate the chain rule for two variables and prove that it yields one-hole contexts. You can do this by viewing a bifunctor F X Y as a normal functor in the pair (X,Y). Of course, you may need a handy notation for partial derivatives of bifunctors in point-free style.

The above rules enable us to construct zippers for recursive data types µF := µX. F X where F is a polynomial functor. A zipper is a focus on a particular subtree, i.e. substructure of type µF inside a large tree of the same type. As in the previous chapter, it can be represented by the subtree we want to focus at and the thread, that is the context in which the subtree resides:

   Zipper F = µF × Context F.


Now, the context is a series of steps each of which chooses a particular subtree µF among those in F (µF). Thus, the unchosen subtrees are collected together by the one-hole context ∂F (µF). The hole of this context comes from removing the subtree we've chosen to enter. Putting things together, we have

   Context F = List (∂F (µF))

or equivalently

   Zipper F = µF × List (∂F (µF)).

To illustrate how a concrete calculation proceeds, let's systematically construct the zipper for our labyrinth data type

   data Node a = DeadEnd a
               | Passage a (Node a)
               | Fork a (Node a) (Node a)

This recursive type is the fixed point

   Node A = µX. A + A × X + A × X × X

of the functor

   F X = A + A × X + A × X × X.

In other words, we have Node A ≅ F (Node A). The derivative reads

   ∂F X ≅ A + 2 × A × X

and we get

   ∂F (Node A) ≅ A + 2 × A × Node A.

Thus, the context reads

   Context F ≅ List (∂F (Node A)) ≅ List (A + 2 × A × Node A).

Comparing with

   data Branch a = KeepStraightOn a
                 | TurnLeft  a (Node a)
                 | TurnRight a (Node a)
   type Thread a = [Branch a]

we see that both are exactly the same as expected!


Exercises

1. Redo the zipper for a ternary tree, but with differentiation this time.
2. Construct the zipper for a list.
3. Rhetorical question concerning the previous exercise: what's the difference between a list and a stack?

There is more to data types than sums and products, we also have a fixed point operator with no direct correspondence in calculus. Consequently, the table is missing a rule of differentiation, namely how to differentiate fixed points µF X := µY. F X Y:

   ∂_X (µY. F X Y) = ?

As its formulation involves the chain rule in two variables, we delegate it to the exercises. Instead, we will calculate it for our concrete example type Node A:

   ∂_A (Node A) = ∂_A (A + A × Node A + A × Node A × Node A)
                = 1 + Node A + Node A × Node A + (A + 2 × A × Node A) × ∂_A (Node A).

Of course, expanding ∂_A (Node A) further is of no use, but we can see this as a fixed point equation and arrive at

   ∂_A (Node A) = µX. T A + S A × X

with the abbreviations

   T A = 1 + Node A + Node A × Node A

and

   S A = A + 2 × A × Node A.

The recursive type is like a list with element types S A, only that the empty list is replaced by a base case of T A. But given that the list is finite, we can replace the base case with 1 and pull T A out of the list:

   ∂_A (Node A) ≅ T A × List (S A).

Comparing with the zipper we derived in the last paragraph, we see that the list type is our context

   List (S A) ≅ Context F

and that

   A × T A ≅ Node A.

In the end, we have

   Zipper F ≅ ∂_A (Node A) × A.

Thus, differentiating our concrete example Node A with respect to A yields the zipper up to an A!


Exercises

1. Use the chain rule in two variables to formulate a rule for the differentiation of a fixed point.
2. Maybe you know that there are inductive (µ) and coinductive (ν) fixed points. What's the rule for coinductive fixed points?

In general however, zippers and one-hole contexts denote different things. The zipper is a focus on arbitrary subtrees whereas a one-hole context can only focus on the argument of a type constructor. Take for example the data type

   data Tree a = Leaf a | Bin (Tree a) (Tree a)

which is the fixed point Tree A = µX. A + X × X. The zipper can focus on subtrees whose top is Bin or Leaf but the hole of a one-hole context of Tree A may only focus on a Leaf, because this is where the items of type A reside. The derivative of Node A only turned out to be the zipper because every top of a subtree is always decorated with an A.

Exercises

1. Surprisingly, ∂_A (Tree A) and the zipper for Tree A again turn out to be the same type. Doing the calculation is not difficult but can you give a reason why this has to be the case?
2. Prove that the zipper construction for F can be obtained by introducing an auxiliary variable Y, differentiating with respect to it and re-substituting Y = 1. Why does this work?
3. Find a type whose zipper is different from the one-hole context.

We close this section by asking how it may happen that rules from calculus appear in a discrete setting. Currently, nobody knows. But at least, there is a discrete notion of linear, namely in the sense of "exactly once". The key feature of the function that plugs an item of type X into the hole of a one-hole context is the fact that the item is used exactly once, i.e. linearly. We may think of the plugging map as having type

   ∂_X F X → (X ⊸ F X)

where ⊸ denotes a linear function, one that does not duplicate or ignore its argument, as in linear logic. In a sense, the one-hole context is a representation of the function space X ⊸ F X, which can be thought of as being a linear approximation to X → F X.

Notes

1. At least as far as types are concerned, but we're trying to avoid that word :)
2. More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.




3. In fact, these are one and the same concept in Haskell.
4. This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.
5. To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
6. This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
7. Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
8. This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.
9. In mathematics, n! normally means the factorial of n, but that syntax is impossible in Haskell, so we don't use it here.
10. It infers a monomorphic type because k is bound by a lambda expression, and things bound by lambdas always have monomorphic types. See Polymorphism.
11. Ian Stewart. The true story of how Theseus found his way out of the labyrinth. Scientific American, February 1991, page 137.
12. Gérard Huet. The Zipper. Journal of Functional Programming, 7 (5), Sept 1997, pp. 549-554. PDF (http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/huet-zipper.pdf)
13. Note that the notion of zipper as coined by Gérard Huet also allows to replace whole subtrees even if there is no extra data associated with them. In the case of our labyrinth, this is irrelevant. We will come back to this in the section Differentiation of data types.
14. Of course, the second topmost node or any other node at most a constant number of links away from the top will do as well.
15. Note that changing the whole data structure as opposed to updating the data at a node can be achieved in amortized constant time even if more nodes than just the top node are affected. An example is incrementing a number in binary representation. While incrementing say 111..11 must touch all digits to yield 1000..00, the increment function nevertheless runs in constant amortized time (but not in constant worst case time).
16. Conor McBride. The Derivative of a Regular Type is its Type of One-Hole Contexts. Available online. PDF (http://www.cs.nott.ac.uk/~ctm/diff.pdf)
17. This phenomenon already shows up with generic tries.

See Also

TheZipper (http://www.haskell.org/hawiki/TheZipper) on haskell.org and Zipper (http://www.haskell.org/haskellwiki/Zipper) on the new wiki of the same community
Generic Zipper and its applications (http://okmij.org/ftp/Computation/Continuations.html#zipper)
Zipper-based file server/OS (http://okmij.org/ftp/Computation/Continuations.html#zipper-fs)

Fun with Types

Existentially quantified types


Existential types, or 'existentials' for short, are a way of 'squashing' a group of types into one, single type. Firstly, a note to those of you following along at home: existentials are part of GHC's type system extensions. They aren't part of Haskell98, and as such you'll have to either compile any code that contains them with an extra command-line parameter of -fglasgow-exts, or put {-# LANGUAGE ExistentialQuantification #-} at the top of your sources that use existentials.

The forall keyword

The forall keyword is used to explicitly bring type variables into scope. For example, consider something you've innocuously seen written a hundred times so far:

Example: A polymorphic function map :: (a -> b) -> [a] -> [b]

But what are these a and b? Well, they're type variables, you answer. The compiler sees that they begin with a lowercase letter and as such allows any type to fill that role. Another way of putting this is that those variables are 'universally quantified'. If you've studied formal logic, you will have undoubtedly come across the quantifiers: 'for all' (or ∀) and 'exists' (or ∃). They 'quantify' whatever comes after them: for example, ∃x. P(x) (where P is any assertion, for example P could be x > 5) means that there is at least one x such that P. ∀x. P(x) means that for every x you could imagine, P. The forall keyword quantifies types in a similar way. We would rewrite map's type as follows:

Example: Explicitly quantifying the type variables map :: forall a b. (a -> b) -> [a] -> [b]

The forall can be seen to be 'bringing the type variables a and b into scope'. In Haskell, any use of a lowercase type implicitly begins with a forall keyword, so the two type declarations for map are equivalent, as are the declarations below:

Example: Two equivalent type statements

   id :: a -> a
   id :: forall a . a -> a

What makes life really interesting is that you can override this default behaviour by explicitly telling Haskell where the forall keyword goes. One use of this is for building existentially quantified types, also known as existential types, or simply existentials.


But wait... isn't forall the universal quantifier? How do you get an existential type out of that? We look at this in a later section. However, first, let's see an example of the power of existential types in action.

Example: heterogeneous lists

Haskell's typeclass system is powerful because it allows extensible groupings of types. So if you know a type instantiates some class C, you know certain things about that type. For example, Int instantiates Eq, so we know that Ints can be compared for equality. Suppose we have a group of values. We don't know if they are all the same type, but we do know they all instantiate some class, i.e. we know we can do a certain thing with all the values (like compare them for equality, if the class were Eq). It might be useful to throw all these values into a list. We can't do this normally because lists are homogeneous with respect to types: they can only contain a single type. However, existential types allow us to loosen this requirement by defining a 'type hider' or 'type box':

Example: Constructing a heterogeneous list

   data ShowBox = forall s. Show s => SB s

   heteroList :: [ShowBox]
   heteroList = [SB (), SB 5, SB True]

Now we know something about all the elements of this list: they can be converted to a string via show. In fact, that's pretty much the only thing we know about them.

Example: Using our heterogeneous list

   instance Show ShowBox where
     show (SB s) = show s

   main :: IO ()
   main = mapM_ print heteroList

How does this work? In the definition of show for ShowBox, we don't know the type of s: when we originally wrapped the value, it didn't matter what its type was (as long as it was an instance of Show), so its type has been forgotten. We do know that the type is an instance of Show due to the constraint on the SB constructor. Therefore, it's legal to use the function show on s, as seen in the right-hand side of the function definition. As for main, recall the type of print:

Example: Types of the functions involved


   print :: Show s => s -> IO ()   -- print x = putStrLn (show x)
   mapM_ :: Monad m => (a -> m b) -> [a] -> m ()
   mapM_ print :: Show s => [s] -> IO ()

As we just declared ShowBox an instance of Show, we can print the values in the list.
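For illustration — this transcript is ours, not from the original text — loading the above into GHCi and running main prints each element via its Show instance:

   *Main> main
   ()
   5
   True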

True existential types

Let's get back to the question we asked ourselves a couple of sections back. Why are we calling these existential types if forall is the universal quantifier?

Since you can get existential types with forall, Haskell forgoes the use of an exists keyword, which would just be redundant.

Firstly, forall really does mean 'for all'. One way of thinking about types is as sets of values with that type, for example, Bool is the set {True, False, _|_} (remember that bottom (often written _|_) is a member of every type!), Integer is the set of integers (and bottom), String the set of all possible strings (and bottom), and so on. forall serves as an intersection over those sets. For example, forall a. a is the intersection over all types, {_|_}, that is, the type (i.e. set) whose only value (i.e. element) is bottom. Why? Think about it: how many of the elements of Bool appear in String? Bottom is the only value common to all types.

A few more examples:

1. [forall a. a] is the type of a list whose elements all have the type forall a. a, i.e. a list of bottoms.
2. [forall a. Show a => a] is the type of a list whose elements all have the type forall a. Show a => a. The Show class constraint limits the sets you intersect over (here we only intersect over instances of Show), but _|_ is still the only value common to all these types, so this too is a list of bottoms.
3. [forall a. Num a => a]. Again, the list where each element is a member of all types that instantiate Num. This could involve numeric literals, which have the type forall a. Num a => a, as well as bottom.
4. forall a. [a] is the type of the list whose elements have some (the same) type a, which can be assumed to be any type at all by a callee (and therefore this too is a list of bottoms).

In the last section, we developed a heterogeneous list using a 'type hider'. Conceptually, the type of a heterogeneous list is [exists a. a], i.e. the list where all elements have type exists a. a. This 'exists' keyword (which isn't present in Haskell) is, as you may guess, a union of types. Therefore the aforementioned type is that of a list where all elements could take any type at all (and the types of different elements needn't be the same). We can't get the same behaviour using foralls except by using the approach we showed above: datatypes. Let's declare one.

Example: An existential datatype


data T = forall a. MkT a

This means that:

Example: The type of our existential constructor MkT :: forall a. a -> T

So we can pass any type we want to MkT and it'll convert it into a T. So what happens when we deconstruct a MkT value?

Example: Pattern matching on our existential constructor foo (MkT x) = ... -- what is the type of x?

As we've just stated, x could be of any type. That means it's a member of some arbitrary type, so has the type x :: exists a. a. In other words, our declaration for T is isomorphic to the following one:

Example: An equivalent version of our existential datatype (pseudo-Haskell)

   data T = MkT (exists a. a)

And suddenly we have existential types. Now we can make a heterogeneous list:

Example: Constructing the heterogeneous list

   heteroList = [MkT 5, MkT (), MkT True, MkT map]

Of course, when we pattern match on heteroList we can't do anything with its elements[18], as all we know is that they have some arbitrary type. However, if we are to introduce class constraints:

Example: A new existential datatype, with a class constraint


data T' = forall a. Show a => MkT' a

Which is isomorphic to:

Example: The new datatype, translated into 'true' existential types data T' = MkT' (exists a. Show a => a)

Again the class constraint serves to limit the types we're unioning over, so that now we know the values inside a MkT' are elements of some arbitrary type which instantiates Show. The implication of this is that we can apply show to a value of type exists a. Show a => a. It doesn't matter exactly which type it turns out to be.

Example: Using our new heterogeneous setup

   heteroList' = [MkT' 5, MkT' (), MkT' True]

   main = mapM_ (\(MkT' x) -> print x) heteroList'

   {- prints:
   5
   ()
   True
   -}

To summarise, the interaction of the universal quantifier with datatypes produces existential types. As most interesting applications of forall-involving types use this interaction, we label such types 'existential'.

Example: runST One monad that you haven't come across so far is the ST monad. This is essentially the State monad on steroids: it has a much more complicated structure and involves some more advanced topics. It was originally written to provide Haskell with IO. As we mentioned in the Understanding monads chapter, IO is basically just a State monad with an environment of all the information about the real world. In fact, inside GHC at least, ST is used, and the environment is a type called RealWorld. To get out of the State monad, you can use runState. The analogous function for ST is called runST, and it has a rather particular type:

Example: The runST function


runST :: forall a. (forall s. ST s a) -> a

This is actually an example of a more complicated language feature called rank-2 polymorphism, which we don't go into in detail here. It's important to notice that there is no parameter for the initial state. Indeed, ST uses a different notion of state to State; while State allows you to get and put the current state, ST provides an interface to references. You create references, which have type STRef, with newSTRef :: a -> ST s (STRef s a), providing an initial value, then you can use readSTRef :: STRef s a -> ST s a and writeSTRef :: STRef s a -> a -> ST s () to manipulate them. As such, the internal environment of an ST computation is not one specific value, but a mapping from references to values. Therefore, you don't need to provide an initial state to runST, as the initial state is just the empty mapping containing no references.

However, things aren't quite as simple as this. What stops you creating a reference in one ST computation, then using it in another? We don't want to allow this because (for reasons of thread-safety) no ST computation should be allowed to assume that the initial internal environment contains any specific references. More concretely, we want the following code to be invalid:

Example: Bad ST code let v = runST (newSTRef True) in runST (readSTRef v)

What would prevent this? The effect of the rank-2 polymorphism in runST's type is to constrain the scope of the type variable s to be within the first parameter. In other words, if the type variable s appears in the first parameter it cannot also appear in the second. Let's take a look at how exactly this is done. Say we have some code like the following:

Example: Briefer bad ST code ... runST (newSTRef True) ...

The compiler tries to fit the types together:

Example: The compiler's typechecking stage

   newSTRef True :: forall s. ST s (STRef s Bool)
   runST :: forall a. (forall s. ST s a) -> a
   together, forall a. (forall s. ST s (STRef s Bool)) -> STRef s Bool


The importance of the forall in the first bracket is that we can change the name of the s. That is, we could write:

Example: A type mismatch! together, forall a. (forall s'. ST s' (STRef s' Bool)) -> STRef s Bool

This makes sense: in mathematics, saying ∀x. P(x) is precisely the same as saying ∀y. P(y); you're just giving the variable a different label. However, we have a problem with our above code. Notice that as the forall does not scope over the return type of runST, we don't rename the s there as well. But suddenly, we've got a type mismatch! The result type of the ST computation in the first parameter must match the result type of runST, but now it doesn't! The key feature of the existential is that it allows the compiler to generalise the type of the state in the first parameter, and so the result type cannot depend on it. This neatly sidesteps our dependence problems, and 'compartmentalises' each call to runST into its own little heap, with references not being able to be shared between different calls.
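For completeness, here is a small, legitimate use of runST with the STRef operations mentioned above — a sketch of ours, not an example from the original text. The mutation stays inside the computation, so the resulting function is pure:

   import Control.Monad.ST
   import Data.STRef

   -- Sum a list using a locally mutable accumulator.
   sumST :: Num a => [a] -> a
   sumST xs = runST (do
       acc <- newSTRef 0
       mapM_ (\x -> modifySTRef acc (+ x)) xs
       readSTRef acc)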

Further reading

GHC's user guide contains useful information (http://haskell.org/ghc/docs/latest/html/users_guide/type-extensions.html#existential-quantification) on existentials, including the various limitations placed on them (which you should know about).
Lazy Functional State Threads (http://citeseer.ist.psu.edu/launchbury94lazy.html), by Simon Peyton Jones and John Launchbury, is a paper which explains more fully the ideas behind ST.
The old Haskell wiki contains some details on rank-2 polymorphism (http://haskell.org/hawiki/RankTwoPolymorphism).

Polymorphism

Terms depending on types
Ad-hoc and parametric polymorphism
Polymorphism in Haskell
Higher-rank polymorphism


Advanced type classes

Although seemingly innocuous, much more research into type classes has been done, resulting in several advancements and generalisations which make them a very powerful tool.

Multi-parameter type classes

Multi-parameter type classes are a generalisation of the single parameter type classes, and are supported by some Haskell implementations. Suppose we wanted to create a 'Collection' type class that could be used with a variety of concrete data types, and supports two operations -- 'insert' for adding elements, and 'member' for testing membership. A first attempt might look like this:

   class Collection c where
     insert :: c -> e -> c
     member :: c -> e -> Bool

   -- Make lists an instance of Collection:
   instance Collection [a] where
     insert xs x = x:xs
     member = flip elem

This won't compile, however. The problem is that the 'e' type variable in the Collection operations comes from nowhere -- there is nothing in the type of an instance of Collection that will tell us what the 'e' actually is, so we can never define implementations of these methods. Multi-parameter type classes solve this by allowing us to put 'e' into the type of the class. Here is an example that compiles and can be used:

   class Eq e => Collection c e where
     insert :: c -> e -> c
     member :: c -> e -> Bool

   instance Eq a => Collection [a] a where
     insert = flip (:)
     member = flip elem

Functional dependencies

A problem with the above example is that, in this case, we have extra information that the compiler doesn't know, which can lead to false ambiguities and over-generalised function signatures. In this case, we can see intuitively that the type of the collection will always determine the type of the element it contains - so if 'c' is '[a]', then 'e' will be 'a'. If 'c' is 'Hashmap a', then 'e' will be 'a'. (The reverse is not true: many different collection types can hold the same element type, so knowing the element type was e.g. Int, would not tell you the collection type). In order to tell the compiler this information, we add a functional dependency, changing the class declaration to

   class Eq e => Collection c e | c -> e where ...


The extra '| c -> e' should be read 'c uniquely identifies e', meaning for a given 'c', there will only be one 'e'. You can have more than one functional dependency in a class -- for example you could have 'c -> e, e -> c' in the above case. And you can have more than two parameters in multi-parameter classes. (TODO - example of ambiguities and how they are solved)
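To make the TODO above a little more concrete, here is a small illustration of our own (not from the original text) of the kind of ambiguity the dependency removes; it assumes the extensions for multi-parameter classes, functional dependencies and flexible instances are enabled (e.g. via -fglasgow-exts). Without the '| c -> e', the element type chosen for the literal 1 below never appears in the result type, so the constraint 'Collection [Int] e' is ambiguous and the definition is rejected; with the dependency, '[Int]' determines 'e = Int' and everything compiles:

   class Eq e => Collection c e | c -> e where
     insert :: c -> e -> c
     member :: c -> e -> Bool

   instance Eq a => Collection [a] a where
     insert = flip (:)
     member = flip elem

   -- Ambiguous without the functional dependency, fine with it:
   example :: [Int] -> Bool
   example xs = member (insert xs 1) 2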

Examples

Phantom types Phantom types are a way to embed a language with a stronger type system than Haskell's. FIXME: that's about all I know, and it's probably wrong. :) I'm yet to be convinced of PT's usefulness, I'm not sure they should have such a prominent position. DavidHouse 17:42, 1 July 2006 (UTC)

Phantom types

An ordinary type

   data T = TI Int | TS String

   plus :: T -> T -> T
   concat :: T -> T -> T

its phantom type version

   data T a = TI Int | TS String

Nothing's changed - just a new argument a that we don't touch. But magic!

   plus :: T Int -> T Int -> T Int
   concat :: T String -> T String -> T String
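Note that with the plain declaration above, the constructors themselves do not fix the phantom parameter (TI 3 could just as well be given the type T String), so in practice one hides the constructors and exports smart constructors and operations that pin the tag down. A minimal sketch of our own, not from the original text:

   tInt :: Int -> T Int
   tInt = TI

   tString :: String -> T String
   tString = TS

   plus :: T Int -> T Int -> T Int
   plus (TI x) (TI y) = TI (x + y)
   plus _      _      = error "unreachable if T Int values are only built with tInt"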

Now we can enforce a little bit more! This is useful if you want to increase the type-safety of your code, but not impose additional runtime overhead:

   -- Peano numbers at the type level.
   data Zero = Zero
   data Succ a = Succ a

   -- Example: 3 can be modeled as the type
   -- Succ (Succ (Succ Zero))

   data Vector n a = Vector [a] deriving (Eq, Show)

   vector2d :: Vector (Succ (Succ Zero)) Int
   vector2d = Vector [1,2]


   vector3d :: Vector (Succ (Succ (Succ Zero))) Int
   vector3d = Vector [1,2,3]

   -- vector2d == vector3d raises a type error
   -- at compile-time, while vector2d == Vector [2,3] works.
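One might extend this sketch (again ours, not part of the original text) with constructors that keep the length index in step with the list, so that well-typed code can only build vectors whose index matches their actual length:

   nil :: Vector Zero a
   nil = Vector []

   cons :: a -> Vector n a -> Vector (Succ n) a
   cons x (Vector xs) = Vector (x:xs)

   -- vector2d could then be written as: cons 1 (cons 2 nil)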

GADT

Introduction

Explain what a GADT (Generalised Algebraic Datatype) is, and what it's for

GADT-style syntax

Before getting into GADT-proper, let's start out by getting used to the new syntax. Here is a representation for the familiar List type in both normal Haskell style and the new GADT one:

normal style

   data List x = Nil | Cons x (List x)

GADT style

   data List x where
     Nil  :: List x
     Cons :: x -> List x -> List x

Up to this point, we have not introduced any new capabilities, just a little new syntax. Strictly speaking, we are not working with GADTs yet, but GADT syntax. The new syntax should be very familiar to you in that it closely resembles typeclass declarations. It should also be easy to remember if you like to think of constructors as just being functions. Each constructor is just defined like a type signature for any old function.

What GADTs give us Given a data type Foo a, a constructor for Foo is merely a function that takes some number of arguments and gives you back a Foo a. So what do GADTs add for us? The ability to control exactly what kind of Foo you return. With GADTs, a constructor for Foo a is not obliged to return Foo a; it can return any Foo ??? that you can think of. In the code sample below, for instance, the GadtedFoo constructor returns a GadtedFoo Int even though it is for the type GadtedFoo x.

Example: GADT gives you more control

   data BoringFoo x where
     MkBoringFoo :: x -> BoringFoo x


   data GadtedFoo x where
     MkGadtedFoo :: x -> GadtedFoo Int

But note that you can only push the idea so far... if your datatype is a Foo, you must return some kind of Foo or another. Returning anything else simply wouldn't work.

Example: Try this out. It doesn't work

   data Bar where
     MkBar :: Bar -- This is ok

   data Foo where
     MkFoo :: Bar -- This is bad

Safe Lists

Prerequisite: We assume in this section that you know how a List tends to be represented in functional languages

We've now gotten a glimpse of the extra control given to us by the GADT syntax. The only thing new is that you can control exactly what kind of data structure you return. Now, what can we use it for? Consider the humble Haskell list. What happens when you invoke head []? Haskell blows up. Have you ever wished you could have a magical version of head that only accepts lists with at least one element, lists on which it will never blow up?

To begin with, let's define a new type, SafeList x y. The idea is to have something similar to normal Haskell lists [x], but with a little extra information in the type. This extra information (the type variable y) tells us whether or not the list is empty. Empty lists are represented as SafeList x Empty, whereas non-empty lists are represented as SafeList x NonEmpty.

   -- we have to define these types
   data Empty
   data NonEmpty

   -- the idea is that you can have either
   --    SafeList x Empty
   -- or SafeList x NonEmpty
   data SafeList x y where
     -- to be implemented

Since we have this extra information, we can now define a function safeHead on only the non-empty lists! Calling safeHead on an empty list would simply refuse to type-check.

   safeHead :: SafeList x NonEmpty -> x

So now that we know what we want, safeHead, how do we actually go about getting it? The answer is GADT. The key is that we take advantage of the GADT feature to return two different kinds of lists,


SafeList x Empty for the Nil constructor, and SafeList x NonEmpty for the Cons constructors respectively:

   data SafeList x y where
     Nil  :: SafeList x Empty
     Cons :: x -> SafeList x y -> SafeList x NonEmpty

This wouldn't have been possible without GADT, because all of our constructors would have been required to return the same type of list; whereas with GADT we can now return different types of lists with different constructors. Anyway, let's put this all together, along with the actual definition of SafeList:

Example: safe lists via GADT

   data Empty
   data NonEmpty

   data SafeList x y where
     Nil  :: SafeList x Empty
     Cons :: x -> SafeList x y -> SafeList x NonEmpty

   safeHead :: SafeList x NonEmpty -> x
   safeHead (Cons x _) = x

We now urge you to copy this listing into a file and load it in ghci -fglasgow-exts. You should notice the following difference, calling safeHead on a non-empty and an empty list respectively:

Example: safeHead is... safe

   Prelude Main> safeHead (Cons "hi" Nil)
   "hi"
   Prelude Main> safeHead Nil

   :1:9:
       Couldn't match `NonEmpty' against `Empty'
         Expected type: SafeList a NonEmpty
         Inferred type: SafeList a Empty
       In the first argument of `safeHead', namely `Nil'
       In the definition of `it': it = safeHead Nil

This complaint is a good thing: it means that we can now check at compile time whether we're calling safeHead on an appropriate list. However, there is a potential pitfall that you'll want to look out for. Consider the following function. What do you think its type is?

Example: Trouble with GADTs


   silly 0 = Nil
   silly 1 = Cons 1 Nil

Now try loading the example up in GHCi. You'll notice the following complaint:

Example: Trouble with GADTs - the complaint

   Couldn't match `Empty' against `NonEmpty'
     Expected type: SafeList a Empty
     Inferred type: SafeList a NonEmpty
   In the application `Cons 1 Nil'
   In the definition of `silly': silly 1 = Cons 1 Nil

FIXME: insert discussion (in brief: the two equations would have to return results of two different types, SafeList a Empty and SafeList a NonEmpty, but an ordinary Haskell function must have a single result type, so no type can be given to silly as written).

Exercises

1. Could you implement a safeTail function?

A simple expression evaluator Insert the example used in Wobbly Types paper... I thought that was quite pedagogical
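As a stop-gap for the note above, here is a small sketch of the kind of evaluator usually meant here; it is our own simplified version, not necessarily the exact example from the Wobbly Types paper, and needs the GADT extension enabled. The GADT ties the constructor used to the type of value the evaluator returns:

   data Expr a where
     IntLit  :: Int  -> Expr Int
     BoolLit :: Bool -> Expr Bool
     Add     :: Expr Int  -> Expr Int -> Expr Int
     If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

   eval :: Expr a -> a
   eval (IntLit n)  = n
   eval (BoolLit b) = b
   eval (Add x y)   = eval x + eval y
   eval (If c t e)  = if eval c then eval t else eval e

   -- eval (If (BoolLit True) (IntLit 1) (IntLit 2)) == 1
   -- Ill-typed expressions such as Add (BoolLit True) (IntLit 1)
   -- are rejected at compile time.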

Discussion

More examples, thoughts

From FOSDEM 2006, I vaguely recall that there is some relationship between GADT and the below... what?

Phantom types

Existential types

If you like Existentially quantified types, you'd probably want to notice that they are now subsumed by GADTs. As the GHC manual says, the following two type declarations give you the same thing.

   data TE a = forall b. MkTE b (b->a)
   data TG a where { MkTG :: b -> (b->a) -> TG a }

Witness types

References


At least part of this page was imported from the Haskell wiki article Generalised algebraic datatype (http://www.haskell.org/haskellwiki/ Generalised_algebraic_datatype) , in accordance to its Simple Permissive License. If you wish to modify this page and if your changes will also be useful on that wiki, you might consider modifying that source page instead of this one, as changes from that page may propagate here, but not the other way around. Alternately, you can explicitly dual license your contributions under the Simple Permissive License.

Wider Theory

Denotational semantics

New readers: Please report stumbling blocks! While the material on this page is intended to explain clearly, there are always mental traps that innocent readers new to the subject fall in but that the authors are not aware of. Please report any tricky passages to the Talk page or the #haskell IRC channel so that the style of exposition can be improved.

Introduction

This chapter explains how to formalize the meaning of Haskell programs, the denotational semantics. It may seem to be nit-picking to formally specify that the program square x = x*x means the same as the mathematical square function that maps each number to its square, but what about the meaning of a program like f x = f (x+1) that loops forever? In the following, we will exemplify the approach first taken by Scott and Strachey to this question and obtain a foundation to reason about the correctness of functional programs in general and recursive definitions in particular. Of course, we will concentrate on those topics needed to understand Haskell programs[19].

Another aim of this chapter is to illustrate the notions strict and lazy that capture the idea that a function needs or need not evaluate its argument. This is a basic ingredient to predict the course of evaluation of Haskell programs and hence of primary interest to the programmer. Interestingly, these notions can be formulated concisely with denotational semantics alone, no reference to an execution model is necessary. They will be put to good use in Graph Reduction, but it is this chapter that will familiarize the reader with the denotational definition and involved notions such as ⊥ ("Bottom"). The reader only interested in strictness may wish to poke around in section Bottom and Partial Functions and quickly head over to Strict and Non-Strict Semantics.


What are Denotational Semantics and what are they for?

What does a Haskell program mean? This question is answered by the denotational semantics of Haskell. In general, the denotational semantics of a programming language map each of its programs to a mathematical object, the meaning of the program in question. As an example, the mathematical object for the Haskell programs 10, 9+1, 2*5 and sum [1..4] is likely to be the integer 10. We say that all those programs denote the integer 10. The collection of mathematical objects is called the semantic domain. The mapping from program codes to a semantic domain is commonly written down with double square brackets (Wikibooks doesn't seem to support \llbrackets in math formulas...) as

   [[2*5]] = 10.

It is compositional, i.e. the meaning of a program like 1+9 only depends on the meaning of its constituents:

   [[a+b]] = [[a]] + [[b]].

The same notation is used for types, i.e.

   [[Integer]] = Z.

For simplicity however, we will silently identify expressions with their semantic objects in subsequent chapters and use this notation only when clarification is needed.

It is one of the key properties of purely functional languages like Haskell that a direct mathematical interpretation like "1+9 denotes 10" carries over to functions, too: in essence, the denotation of a program of type Integer -> Integer is a mathematical function between integers. While we will see that this needs refinement to include non-termination, the situation for imperative languages is clearly worse: a procedure with that type denotes something that changes the state of a machine in possibly unintended ways. Imperative languages are tied tightly to an operational semantics which describes how they are executed on a machine. It is possible to define a denotational semantics for imperative programs and to use it to reason about such programs, but the semantics often has an operational nature and sometimes must extend on the denotational semantics for functional languages[20]. In contrast, the meaning of purely functional languages is by default completely independent from their execution. The Haskell98 standard even goes as far as to only specify Haskell's non-strict denotational semantics and to leave open how to implement them.

In the end, denotational semantics enables us to develop formal proofs that programs indeed do what we want them to do mathematically. Ironically, for proving program properties in day-to-day Haskell, one can use Equational reasoning which transforms programs into equivalent ones without seeing much of the underlying mathematical objects we are concentrating on in this chapter. But the denotational semantics actually show up whenever we have to reason about non-terminating programs, for instance in Infinite Lists.

Of course, because they only state what a program is, denotational semantics cannot answer questions about how long a program takes or how much memory it eats. This is governed by the evaluation strategy which dictates how the computer calculates the normal form of an expression. But on the other hand, the implementation has to respect the semantics and to a certain extent, they determine how Haskell programs must be evaluated on a machine. We will elaborate on this in Strict and Non-Strict Semantics.

What to choose as Semantic Domain?

We are now looking for suitable mathematical objects that we can attribute to every Haskell program. In case of the example 10, 2*5 and sum [1..4], it is clear that all expressions should denote the integer 10.


Generalizing, every value x of type Integer is likely to be an element of the set of integers Z = {..., -2, -1, 0, 1, 2, ...}. The same can be done with values of type Bool. For functions like f :: Integer -> Integer, we can appeal to the mathematical definition of "function" as a set of (argument, value)-pairs, its graph.

But interpreting functions as their graph was too quick, because it does not work well with recursive definitions. Consider the definition

   shaves :: Integer -> Integer -> Bool
   1 `shaves` 1 = True
   2 `shaves` 2 = False
   0 `shaves` x = not (x `shaves` x)
   _ `shaves` _ = False

We can think of 0, 1 and 2 as being male persons with long beards and the question is who shaves whom. Person 1 shaves himself, but 2 gets shaved by the barber 0 because evaluating the third equation yields 0 `shaves` 2 == True. In general, the third line says that the barber 0 shaves all persons that do not shave themselves. What about the barber himself, is 0 `shaves` 0 true or not? If it is, then the third equation says that it is not. If it is not, then the third equation says that it is. Puzzled, we see that we just cannot attribute True or False to 0 `shaves` 0; the graph we use as interpretation for the function shaves must have an empty spot. We realize that our semantic objects must be able to incorporate partial functions, functions that are undefined for some arguments.

It is well known that this famous example gave rise to serious foundational problems in set theory. It's an example of an impredicative definition, a definition which uses itself, a logical circle. Unfortunately for recursive definitions, the circle is not the problem but the feature.

Bottom and Partial Functions

⊥ Bottom

To handle partial functions, we introduce ⊥, named bottom and commonly written _|_ in typewriter font. We say that ⊥ is the completely "undefined" value or function. Every data type like Integer, () or Integer -> Integer contains one ⊥ besides their usual elements. So the possible values of type Integer are

   ⊥, 0, ±1, ±2, ±3, ...

Adding ⊥ to the set of values is also called lifting. This is often depicted by a subscript like in Z_⊥. While this is the correct notation for the mathematical set "lifted integers", we prefer to talk about "values of type Integer". We do this because Z_⊥ suggests that there are "real" integers Z, but inside Haskell, the "integers" are Integer. As another example, the type () with only one element actually has two inhabitants: ⊥ and ().

For now, we will stick to programming with Integers. Arbitrary algebraic data types will be treated in section Algebraic Data Types as strict and non-strict languages diverge on how these include ⊥.


In Haskell, the expression undefined denotes ⊥. With its help, one can indeed verify some semantic properties of actual Haskell programs. undefined has the polymorphic type forall a . a which of course can be specialized to undefined :: Integer, undefined :: (), undefined :: Integer -> Integer and so on. In the Haskell Prelude, it is defined as

   undefined = error "Prelude: undefined"

As a side note, it follows from the Curry-Howard isomorphism that any function of the polymorphic type forall a . a must denote ⊥.
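As a tiny illustration of checking definedness from inside Haskell (our example, not from the original text; the exact error message depends on the implementation):

   Prelude> const 1 undefined
   1
   Prelude> undefined
   *** Exception: Prelude: undefined

const ignores its second argument, so the ⊥ passed to it is never demanded.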

Partial Functions and the Semantic Approximation Order

Now, ⊥ gives us the possibility to denote partial functions:

   f(n) = 1   if n is 0
          2   if n is 1
          ⊥   otherwise

Here, f(n) yields well defined values for n = 0 and n = 1 but gives ⊥ for all other n. Note that the notation ⊥ is overloaded: the function ⊥ :: Integer -> Integer is given by ⊥(n) = ⊥ for all n, where the ⊥ on the right hand side denotes a value of type Integer.

To formalize, partial functions, say of type Integer -> Integer, are at least mathematical mappings from the lifted integers to the lifted integers. But this is not enough, as it does not acknowledge the special role of ⊥. For example, the definition

   g(n) = 1   if n is ⊥
          ⊥   otherwise

intuitively does not make sense. Why does g(⊥) yield a defined value whereas g(1) is undefined? The intuition is that every partial function g should yield more defined answers for more defined arguments. To formalize, we can say that every concrete number is more defined than ⊥:

   ⊥ ⊏ 1, ⊥ ⊏ 2, ...

Here, a ⊏ b denotes that b is more defined than a. Likewise, a ⊑ b will denote that either b is more defined than a or both are equal (and so have the same definedness). ⊑ is also called the semantic approximation order because we can approximate defined values by less defined ones thus interpreting "more defined" as "approximating better". Of course, ⊥ is designed to be the least element of a data type; we always have ⊥ ⊑ x for all other x. As no number is more defined than another, the mathematical relation ⊏ does not relate different numbers: neither 1 ⊑ 2 nor 2 ⊑ 1 hold.


This is contrasted to the ordinary order between integers which can compare any two numbers. That's also why we use the different symbol ⊑. A quick way to remember this is the sentence: "1 and 2 are different in information content but the same in information quantity". One says that ⊑ specifies a partial order and that the values of type Integer form a partially ordered set (poset for short). A partial order is characterized by the following three laws: Reflexivity, everything is just as defined as itself: x ⊑ x for all x. Transitivity: if x ⊑ y and y ⊑ z, then x ⊑ z. Antisymmetry: if both x ⊑ y and y ⊑ x hold, then x and y must be equal: x = y.

Exercises
Do the integers form a poset with respect to the order ≤?

We can depict the order ⊑ on the values of type Integer by the following graph

where every link between two nodes specifies that the one above is more defined than the one below. Because there is only one level (excluding ⊥), one says that Integer is a flat domain. The picture also explains the name of ⊥: it's called bottom because it always sits at the bottom.

Monotonicity

Our intuition about partial functions can now be formulated as follows: every partial function f is a monotone mapping between partially ordered sets. More defined arguments will yield more defined values:

x ⊑ y implies f(x) ⊑ f(y)

In particular, a function h with h(⊥) = 1 must be constant: h(n) = 1 for all n. Note that here it is crucial that 1 ⊑ 2 etc. don't hold.

Translated to Haskell, monotonicity means that we cannot pattern match on ⊥ or its equivalent undefined. Otherwise, the example g from above could be expressed as a Haskell program. As we shall see later, ⊥ also denotes non-terminating programs, so the inability to observe ⊥ inside Haskell is related to the halting problem. Of course, the notion of more defined than can be extended to partial functions by saying that a function is more defined than another if it is so at every possible argument:

f ⊑ g  if and only if  f(x) ⊑ g(x) for all x

Thus, the partial functions also form a poset, with the undefined function (the one mapping every argument to ⊥) being the least element.

Recursive Definitions as Fixed Point Iterations

Approximations of the Factorial Function

Now that we have a means to describe partial functions, we can give an interpretation to recursive definitions. Let's take the prominent example of the factorial function f(n) = n! whose recursive definition is

f(n) = if n == 0 then 1 else n * f(n - 1)

Although we saw that interpreting this directly as a set description leads to problems, we intuitively know how to calculate f(n) for every given n by iterating the right hand side. This iteration can be formalized as follows: we calculate a sequence of functions f_k with the property that each one arises from the right hand side applied to the previous one, that is

f_(k+1)(n) = if n == 0 then 1 else n * f_k(n - 1)

Starting with the undefined function f_0(n) = ⊥, the resulting sequence of partial functions reads

f_1(n) = 1, if n is 0; ⊥, otherwise
f_2(n) = 1, if n is 0; 1, if n is 1; ⊥, otherwise
f_3(n) = 1, if n is 0; 1, if n is 1; 2, if n is 2; ⊥, otherwise

and so on. Clearly,

f_0 ⊑ f_1 ⊑ f_2 ⊑ f_3 ⊑ ...

and we expect that the sequence converges to the factorial function. The iteration follows the well known scheme of a fixed point iteration

x_0, g(x_0), g(g(x_0)), g(g(g(x_0))), ...

In our case, x_0 is a function and g is a functional, a mapping between functions. We have

x_0 = ⊥   and   g(x)(n) = if n == 0 then 1 else n * x(n - 1)

Now, since x_0 = ⊥ is the least element and g is monotone, the iteration sequence is monotone:

x_0 ⊑ g(x_0) ⊑ g(g(x_0)) ⊑ ...


(The proof is roughly as follows: since ⊥ is less defined than anything, x_0 ⊑ x_1. Since g is monotone, we can successively apply g to both sides of this relation, yielding x_1 ⊑ x_2, x_2 ⊑ x_3, and so on.)

So each successive application of g, starting with x_0, transforms a less defined function to a more defined one. It is very illustrative to formulate this iteration scheme in Haskell. As functionals are just ordinary higher order functions, we have

g :: (Integer -> Integer) -> (Integer -> Integer)
g x = \n -> if n == 0 then 1 else n * x (n-1)

x0 :: Integer -> Integer
x0 = undefined

(f0:f1:f2:f3:f4:fs) = iterate g x0

We can now evaluate the functions f0, f1, ... at sample arguments and see whether they yield undefined or not:

> f3 0
1
> f3 1
1
> f3 2
2
> f3 5
*** Exception: Prelude.undefined
> map f3 [0..]
[1,1,2,*** Exception: Prelude.undefined
> map f4 [0..]
[1,1,2,6,*** Exception: Prelude.undefined
> map f1 [0..]
[1,*** Exception: Prelude.undefined

Of course, we cannot use this to check whether f4 is really undefined for all remaining arguments.

Convergence

To the mathematician, the question whether this sequence of approximations converges is still to be answered. For that, we say that a poset is a directed complete partial order (dcpo) iff every monotone sequence x_0 ⊑ x_1 ⊑ ... (also called chain) has a least upper bound (supremum). If that's the case for the semantic approximation order, we clearly can be sure that the monotone sequence of functions approximating the factorial function indeed has a limit. For our denotational semantics, we will only meet dcpos which have a least element ⊥, which are called complete partial orders (cpo). The Integers clearly form a (d)cpo, because the monotone sequences consisting of more than one element must be of the form

⊥ ⊑ ... ⊑ ⊥ ⊑ n ⊑ n ⊑ ... ⊑ n


where n is an ordinary number. Thus, n is already the least upper bound. For functions Integer -> Integer, this argument fails because monotone sequences may be of infinite length. But because Integer is a (d)cpo, we know that for every point n, the sequence f_0(n) ⊑ f_1(n) ⊑ ... has a least upper bound. As the semantic approximation order is defined point-wise, the function f given by these point-wise suprema is the supremum we looked for. These have been the last touches for our aim to transform the impredicative definition of the factorial function into a well defined construction. Of course, it remains to be shown that f(n) actually yields a defined value for every n, but this is not hard, and the definition is far more reasonable than a completely ill-formed one.

Bottom includes Non-Termination

It is instructive to try our newly gained insight into recursive definitions on an example that does not terminate:

f(n) = f(n + 1)

The approximating sequence reads

⊥, ⊥, ⊥, ...

and consists only of ⊥. Clearly, the resulting limit is ⊥ again. From an operational point of view, a machine executing this program will loop indefinitely. We thus see that ⊥ may also denote a non-terminating function or value. Hence, given the halting problem, pattern matching on ⊥ in Haskell is impossible.

Interpretation as Least Fixed Point

Earlier, we called the approximating sequence an example of the well known "fixed point iteration" scheme. And of course, the definition of the factorial function f can also be thought of as the specification of a fixed point of the functional g:

f = g(f)

However, there might be multiple fixed points. For instance, there are several f which fulfill the specification f(n) = if n == 0 then 1 else f(n + 1). Of course, when executing such a program, the machine will loop forever on f(1) or f(2) and thus not produce any valuable information about the value of f(1). This corresponds to choosing the least defined fixed point as semantic object f, and this is indeed a canonical choice. Thus, we say that f = g(f) defines the least fixed point f of g. Clearly, least is with respect to our semantic approximation order ⊑.


The existence of a least fixed point is guaranteed by our iterative construction if we add the condition that g must be continuous (sometimes also called "chain continuous"). That simply means that g respects suprema of monotone sequences:

sup { g(x_0) ⊑ g(x_1) ⊑ ... } = g ( sup { x_0 ⊑ x_1 ⊑ ... } )

We can then argue that, with f being the supremum of the iteration sequence x_0 ⊑ g(x_0) ⊑ g(g(x_0)) ⊑ ..., we have

g(f) = g ( sup { x_0 ⊑ x_1 ⊑ ... } ) = sup { g(x_0) ⊑ g(x_1) ⊑ ... } = sup { x_1 ⊑ x_2 ⊑ ... } = f

and the iteration limit is indeed a fixed point of g. You may also want to convince yourself that the fixed point iteration yields the least fixed point possible.

Exercises
Prove that the fixed point obtained by fixed point iteration starting with x_0 = ⊥ is also the least one, i.e. that it is smaller than any other fixed point. (Hint: ⊥ is the least element of our cpo and g is monotone.)

By the way, how do we know that each Haskell function we write down indeed is continuous? Just as with monotonicity, this has to be enforced by the programming language. Admittedly, these properties can somewhat be enforced or broken at will, so the question feels a bit void. But intuitively, monotonicity is guaranteed by not allowing pattern matches on ⊥. For continuity, we note that for an arbitrary type a, every simple function a -> Integer is automatically continuous because the monotone sequences of Integers are of finite length. Any infinite chain of values of type a gets mapped to a finite chain of Integers and respect for suprema becomes a consequence of monotonicity. Thus, all functions of the special case Integer -> Integer must be continuous. For functionals like g :: (Integer -> Integer) -> (Integer -> Integer), the continuity then materializes due to currying, as the type is isomorphic to ((Integer -> Integer), Integer) -> Integer and we can take a = ((Integer -> Integer), Integer). In Haskell, the fixed point interpretation of the factorial function can be coded as

factorial = fix g

with the help of the fixed point combinator fix :: (a -> a) -> a. We can define it by


fix f = let x = f x in x

which leaves us somewhat puzzled, because when expanding factorial, the result is not anything different from how we would have defined the factorial function in Haskell in the first place. But of course, the construction this whole section was about is not at all present when running a real Haskell program. It's just a means to put the mathematical interpretation of Haskell programs on a firm ground. Yet it is very nice that we can explore these semantics in Haskell itself with the help of undefined.
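For instance, a small self-contained sketch of this interpretation, reusing the functional g from above (here we use the fix exported by Data.Function; the test value is arbitrary):

import Data.Function (fix)

-- the functional whose least fixed point is the factorial function
g :: (Integer -> Integer) -> (Integer -> Integer)
g x = \n -> if n == 0 then 1 else n * x (n - 1)

factorial :: Integer -> Integer
factorial = fix g

-- factorial 5 evaluates to 120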

Strict and Non-Strict Semantics

After having elaborated on the denotational semantics of Haskell programs, we will drop the mathematical function notation f(n) for semantic objects in favor of their now equivalent Haskell notation f n.

Strict Functions

A function f with one argument is called strict, if and only if

f ⊥ = ⊥.

Here are some examples of strict functions

id x     = x
succ x   = x + 1
power2 0 = 1
power2 n = 2 * power2 (n-1)

and there is nothing unexpected about them. But why are they strict? It is instructive to prove that these functions are indeed strict. For id, this follows from the definition. For succ, we have to ponder whether ⊥ + 1 is ⊥ or not. If it was not, then we should for example have ⊥ + 1 = 2, or more generally ⊥ + 1 = k for some concrete number k. We remember that every function is monotone, so we should have for example 2 = ⊥ + 1 ⊑ 4 + 1 = 5 as ⊥ ⊑ 4. But 2 and 5 cannot be compared, a contradiction. This can be generalized: if ⊥ + 1 = k, then monotonicity and ⊥ ⊑ k give k = ⊥ + 1 ⊑ k + 1, again a contradiction, since the different numbers k and k + 1 cannot be compared. We see that the only possible choice is

succ ⊥ = ⊥ + 1 = ⊥

and succ is strict.

Exercises
Prove that power2 is strict. While one can base the proof on the "obvious" fact that power2 n is 2^n, the latter is preferably proven using fixed point iteration.

Non-Strict and Strict Languages

Searching for non-strict functions, it happens that there is only one prototype of a non-strict function of type Integer -> Integer:


one x = 1

Its variants are constk x = k for every concrete number k. Why are these the only ones possible? Remember that one n has to be more defined than one ⊥. As Integer is a flat domain, both must be equal. Why is one non-strict? To see that it is, we use a Haskell interpreter and try

> one (undefined :: Integer)
1

which is not ⊥. This is reasonable as one completely ignores its argument. When interpreting ⊥ in an operational sense as "non-termination", one may say that the non-strictness of one means that it does not force its argument to be evaluated and therefore avoids the infinite loop when evaluating the argument ⊥. But one might as well say that every function must evaluate its arguments before computing the result, which means that one ⊥ should be ⊥, too. That is, if the program computing the argument does not halt, one should not halt as well.[21] It turns out that one can choose freely this or the other design for a functional programming language. One says that the language is strict or non-strict depending on whether functions are strict or non-strict by default. The choice for Haskell is non-strict. In contrast, the functional languages ML and LISP choose strict semantics.

Functions with several Arguments

The notion of strictness extends to functions with several variables. For example, a function f of two arguments is strict in the second argument if and only if

f x ⊥ = ⊥

for every x. But for multiple arguments, mixed forms, where the strictness depends on the given value of the other arguments, are much more common. An example is the conditional

cond b x y = if b then x else y

We see that it is strict in y depending on whether the test b is True or False:

cond True  ⊥ y = ⊥
cond False ⊥ y = y

and likewise for x. Apparently, cond is certainly ⊥ if both x and y are, but not necessarily when at least one of them is defined. This behavior is called joint strictness. Clearly, cond behaves like the if-then-else statement where it is crucial not to evaluate both the then and the else branches:

if null xs then 'a' else head xs
if n == 0  then 1   else 5 / n


Here, the else part is ⊥ when the condition is met. Thus, in a non-strict language, we have the possibility to wrap primitive control statements such as if-then-else into functions like cond. This way, we can define our own control operators. In a strict language, this is not possible as both branches will be evaluated when calling cond, which makes it rather useless. This is a glimpse of the general observation that non-strictness offers more flexibility for code reuse than strictness.[22] See the chapter Laziness for more on this subject.
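As a small illustration of such a user-defined control operator (a sketch; the names safeHead and fallback are ours, and cond is the function defined above):

-- returns the first element of a list, or a fallback value for the empty list;
-- this only works because cond never evaluates the branch it does not select
safeHead :: [a] -> a -> a
safeHead xs fallback = cond (null xs) fallback (head xs)

-- safeHead []      0  evaluates to 0, without ever touching head []
-- safeHead [1,2,3] 0  evaluates to 1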

Not all Functions in Strict Languages are Strict

It is important to note that even in a strict language, not all functions are strict. The choice whether to have strictness and non-strictness by default only applies to certain argument data types. Argument types that solely contain data like Integer, (Bool,Integer) or Either String [Integer] impose strictness, but functions are not necessarily strict in function types like Integer -> Bool. Thus, in a hypothetical strict language with Haskell-like syntax, we would have the interpreter session

!> let const1 _ = 1
!> const1 (undefined :: Integer)
!!! Exception: Prelude.undefined
!> const1 (undefined :: Integer -> Bool)
1

Why are strict languages not strict in arguments of function type? If they were, fixed point iteration would crumble to dust! Remember the fixed point iteration

⊥, g ⊥, g (g ⊥), ...

for a functional g :: (Integer -> Integer) -> (Integer -> Integer). If g were strict, the sequence would read

⊥, ⊥, ⊥, ...

which obviously converges to a useless ⊥. It is crucial that g makes the argument function more defined. This means that g must not be strict in its argument to yield a useful fixed point. As a side note, the fact that things must be non-strict in function types can be used to recover some non-strict behavior in strict languages. One simply replaces a data type like Integer with () -> Integer where () denotes the well known singleton type. It is clear that every such function has the only possible argument () (besides ⊥) and therefore corresponds to a single integer. But operations may be non-strict in arguments of type () -> Integer.

Exercises
It's tedious to lift every Integer to a () -> Integer for using non-strict behavior in strict languages. Can you write a function lift :: Integer -> (() -> Integer) that does this for us? Where is the trap?


Algebraic Data Types

After treating the motivating case of partial functions between Integers, we now want to extend the scope of denotational semantics to arbitrary algebraic data types in Haskell. A word about nomenclature: the collection of semantic objects for a particular type is usually called a domain. This term is more a generic name than a particular definition, and we decide that our domains are cpos (complete partial orders), that is sets of values together with a relation "more defined than" that obeys some conditions to allow fixed point iteration. Usually, one adds additional conditions to the cpos that ensure that the values of our domains can be represented in some finite way on a computer, thereby avoiding the need to ponder the twisted ways of uncountably infinite sets. But as we are not going to prove general domain theoretic theorems, the conditions will just happen to hold by construction.

Constructors

Let's take the example types

data Bool = True | False
data Maybe a = Just a | Nothing

Here, True, False and Nothing are nullary constructors whereas Just is a unary constructor. The inhabitants of Bool form the following domain:

Remember that ⊥ is added as least element to the set of values True and False; we say that the type is lifted.[23] A domain whose poset diagram consists of only one level is called a flat domain. We already know that Integer is a flat domain as well, it's just that the level above ⊥ has an infinite number of elements. What are the possible inhabitants of Maybe Bool? They are

⊥, Nothing, Just ⊥, Just True, Just False

So the general rule is to insert all possible values into the unary (binary, ternary, ...) constructors as usual but without forgetting ⊥. Concerning the partial order, we remember the condition that the constructors should be monotone just as any other functions. Hence, the partial order looks as follows


But there is something to ponder: why isn't Just ⊥ = ⊥? I mean "Just undefined" is as undefined as "undefined"! The answer is that this depends on whether the language is strict or non-strict. In a strict language, all constructors are strict by default, i.e. Just ⊥ = ⊥ and the diagram would reduce to

As a consequence, all domains of a strict language are flat. But in a non-strict language like Haskell, constructors are non-strict by default and Just ⊥ is a new element different from ⊥, because we can write a function that reacts differently to them:

f (Just _) = 4
f Nothing  = 7

As f ignores the contents of the Just constructor, f (Just ⊥) is 4 but f ⊥ is ⊥ (intuitively, if f is passed ⊥, it will not be possible to tell whether to take the Just branch or the Nothing branch, and so ⊥ will be returned). This gives rise to non-flat domains as depicted in the former graph. What should these be of use for? In the context of Graph Reduction, we may also think of ⊥ as an unevaluated expression. Thus, a value x = Just ⊥ may tell us that a computation (say a lookup) succeeded and is not Nothing, but that the true value has not been evaluated yet. If we are only interested in whether x succeeded or not, this actually saves us the unnecessary work of calculating whether x is Just True or Just False, as would be the case in a flat domain. The full impact of non-flat domains will be explored in the chapter Laziness, but one prominent example are infinite lists, treated in section Recursive Data Types and Infinite Lists.

Pattern Matching

In the section Strict Functions, we proved that some functions are strict by inspecting their results on different inputs and insisting on monotonicity. However, in the light of algebraic data types, there can only


be one source of strictness in real life Haskell: pattern matching, i.e. case expressions. The general rule is that pattern matching on a constructor of a data-type will force the function to be strict, i.e. matching ⊥ against a constructor always gives ⊥. For illustration, consider

const1 _ = 1

const1' True  = 1
const1' False = 1

The first function const1 is non-strict whereas const1' is strict, because it decides whether the argument is True or False although its result doesn't depend on that. Pattern matching in function arguments is equivalent to case-expressions

const1' x = case x of
  True  -> 1
  False -> 1

which similarly impose strictness on x: if the argument to the case expression denotes ⊥, the whole case will denote ⊥, too. However, the argument for case expressions may be more involved, as in

foo k map = case lookup ("Foo." ++ k) map of
  Nothing -> ...
  Just x  -> ...

and it can be difficult to track what this means for the strictness of foo. An example for multiple pattern matches in the equational style is the logical or:

or True _ = True
or _ True = True
or _ _    = False

Note that equations are matched from top to bottom. The first equation for or matches the first argument against True, so or is strict in its first argument. The same equation also tells us that or True x is non-strict in x. If the first argument is False, then the second will be matched against True and or False x is strict in x. Note that while wildcards are a general sign of non-strictness, this depends on their position with respect to the pattern matches against constructors.

Exercises
1. Give an equivalent discussion for the logical and.
2. Can the logical "excluded or" (xor) be non-strict in one of its arguments if we know the other?
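Going back to or, its strictness behavior can be checked in an interpreter (a sketch; it assumes the equational definition of or given above, shadowing the Prelude function of the same name):

> or True undefined
True
> or undefined True
*** Exception: Prelude.undefined
> or False undefined
*** Exception: Prelude.undefined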

There is another form of pattern matching, namely irrefutable patterns marked with a tilde ~. Their use is demonstrated by


f ~(Just x) = 1
f Nothing   = 2

An irrefutable pattern always succeeds (hence the name), resulting in f ⊥ = 1. But when changing the definition of f to

f ~(Just x) = x + 1
f Nothing   = 2   -- this line may as well be left away

we have

f ⊥        = ⊥ + 1 = ⊥
f (Just 1) = 1 + 1 = 2

If the argument matches the pattern, x will be bound to the corresponding value. Otherwise, any variable like x will be bound to ⊥. By default, let and where bindings are non-strict, too:

foo key map = let Just x = lookup key map in ...

is equivalent to

foo key map = case (lookup key map) of ~(Just x) -> ...
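The non-strictness of let bindings can be observed directly in an interpreter (a sketch; the exact wording of the error message depends on the compiler):

> let Just x = Nothing :: Maybe Integer in 1
1
> let Just x = Nothing :: Maybe Integer in x
*** Exception: Irrefutable pattern failed for pattern Just x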


Exercises
1. The Haskell language definition (http://www.haskell.org/onlinereport/) gives the detailed semantics of pattern matching (http://www.haskell.org/onlinereport/exps.html#case-semantics) and you should now be able to understand it. So go on and have a look!
2. Consider a function and of two Boolean arguments with the following properties:

and ⊥ ⊥     = ⊥
and True ⊥  = True
and ⊥ True  = True
and False y = y
and x False = x

This function is another example of joint strictness, but a much sharper one: the result is only ⊥ if both arguments are ⊥ (at least when we restrict the arguments to True and ⊥). Can such a function be implemented in Haskell?

Recursive Data Types and Infinite Lists

The case of recursive data structures is not very different from the base case. Consider a list of unit values

data List = [] | () : List

Though this seems like a simple type, there is a surprisingly large number of ways you can fit ⊥ in here and there, and therefore the corresponding graph is complicated. The bottom of this graph is shown below. An ellipsis indicates that the graph continues along this direction. A red ellipse behind an element indicates that this is the end of a chain; the element is in normal form.

and so on. But now, there are also chains of infinite length like




⊥ ⊑ Cons () ⊥ ⊑ Cons () (Cons () ⊥) ⊑ ...

This causes us some trouble as we noted in section Convergence that every monotone sequence must have a least upper bound. This is only possible if we allow for infinite lists. Infinite lists (sometimes also called streams) turn out to be very useful and their manifold use cases are treated in full detail in chapter Laziness. Here, we will show what their denotational semantics should be and how to reason about them. Note that while the following discussion is restricted to lists only, it easily generalizes to arbitrary recursive data structures like trees. The things said about constructors such as Empty and Cons also hold for the empty list constructor [] and infix constructors like (:). In the following, we will switch back to the standard list type

data [a] = [] | a : [a]

to close the syntactic gap to practical programming with infinite lists in Haskell.

Exercises
1. Draw the non-flat domain corresponding to [Bool].
2. How is the graphic to be changed for [Integer]?

Calculating with infinite lists is best shown by example. For that, we need an infinite list

ones :: [Integer]
ones = 1 : ones

When applying the fixed point iteration to this recursive definition, we see that ones ought to be the supremum of



⊥, 1:⊥, 1:1:⊥, 1:1:1:⊥, ...,

that is an infinite list of 1's. Let's try to understand what take 2 ones should be. With the definition of take

take 0 _      = []
take n (x:xs) = x : take (n-1) xs
take n []     = []

we can apply take to elements of the approximating sequence of ones:

take 2 ⊥       ==>  ⊥
take 2 (1:⊥)   ==>  1 : take 1 ⊥      ==>  1 : ⊥
take 2 (1:1:⊥) ==>  1 : take 1 (1:⊥)  ==>  1 : 1 : take 0 ⊥
               ==>  1 : 1 : []


We see that take 2 (1:1:1:⊥) and so on must be the same as take 2 (1:1:⊥) = 1:1:[] because 1:1:[] is fully defined. Taking the supremum on both the sequence of input lists and the resulting sequence of output lists, we can conclude

take 2 ones = 1:1:[]
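This matches what a Haskell interpreter reports (a quick check, assuming ones as defined above):

> take 2 ones
[1,1]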

Thus, taking the first two elements of ones behaves exactly as expected. Generalizing from the example, we see that reasoning about infinite lists involves considering the approximating sequence and passing to the supremum, the truly infinite list. Still, we did not give it a firm ground. The solution is to identify the infinite list with the whole chain itself and to formally add it as a new element to our domain: the infinite list is the sequence of its approximations. Of course, any infinite list like ones can be compactly depicted as

ones = 1 : 1 : 1 : 1 : ...

which simply means that

ones = ( ⊥ ⊑ 1:⊥ ⊑ 1:1:⊥ ⊑ ... )

Exercises
1. Of course, there are more interesting infinite lists than ones. Can you write recursive definitions in Haskell for
   1. the natural numbers nats = 1:2:3:4:...
   2. a cycle like cycle123 = 1:2:3:1:2:3:...
2. Look at the Prelude functions repeat and iterate and try to solve the previous exercise with their help.
3. Use the example from the text to find the value the expression drop 3 nats denotes.
4. Assume that we work in a strict setting, i.e. that the domain of [Integer] is flat. What does the domain look like? What about infinite lists? What value does ones denote?

What about the puzzle of how a computer can calculate with infinite lists? It takes an infinite amount of time, after all. Well, this is true, but the trick is that the computer may well finish in a finite amount of time if it only considers a finite part of the infinite list. So, infinite lists should be thought of as potentially infinite lists. In general, intermediate results take the form of infinite lists whereas the final value is finite. It is one of the benefits of denotational semantics that one can treat the intermediate infinite data structures as truly infinite when reasoning about program correctness.

Exercises
1. To demonstrate the use of infinite lists as intermediate results, show that

take 2 (map (+1) nats) = take 3 nats


by first calculating the infinite sequence corresponding to map (+1) nats.
2. Of course, we should give an example where the final result indeed takes an infinite time. So, what does

filter (< 5) nats

denote?
3. Sometimes, one can replace filter with takeWhile in the previous exercise. Why only sometimes and what happens if one does?

As a last note, the construction of a recursive domain can be done by a fixed point iteration similar to the recursive definition for functions. Yet, the problem of infinite chains has to be tackled explicitly. See the literature in External Links for a formal construction.

Haskell specialities: Strictness Annotations and Newtypes

Haskell offers a way to change the default non-strict behavior of data type constructors by strictness annotations. In a data declaration like

data Maybe' a = Just' !a | Nothing'

an exclamation point ! before an argument of the constructor specifies that the constructor should be strict in this argument. Hence we have Just' ⊥ = ⊥ in our example. Further information may be found in chapter Strictness. In some cases, one wants to rename a data type, like in

data Couldbe a = Couldbe (Maybe a)

However, Couldbe a contains both the elements ⊥ and Couldbe ⊥. With the help of a newtype definition

newtype Couldbe a = Couldbe (Maybe a)

we can arrange that Couldbe a is semantically equal to Maybe a, but different during type checking. In particular, the constructor Couldbe is strict. Yet, this definition is subtly different from

data Couldbe' a = Couldbe' !(Maybe a)

To explain how, consider the functions


f  (Couldbe m)  = 42
f' (Couldbe' m) = 42

Here, f' ⊥ will cause the pattern match on the constructor Couldbe' to fail, with the effect that f' ⊥ = ⊥. But for the newtype, the match on Couldbe will never fail, and we get f ⊥ = 42. In a sense, the difference can be stated as: for the strict case, Couldbe' ⊥ is a synonym for ⊥; for the newtype, ⊥ is a synonym for Couldbe ⊥, with the agreement that a pattern match on ⊥ fails and that a match on Constructor ⊥ does not.
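Both behaviors can be observed in an interpreter (a sketch, assuming the declarations of Couldbe, Couldbe', f and f' from above):

> f undefined
42
> f' undefined
*** Exception: Prelude.undefined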

Newtypes may also be used to define recursive types. An example is the alternate definition of the list type [a]

newtype List a = In (Maybe (a, List a))

Again, the point is that the constructor In does not introduce an additional lifting with ⊥.

Other Selected Topics

Abstract Interpretation and Strictness Analysis

As lazy evaluation means a constant computational overhead, a Haskell compiler may want to discover where inherent non-strictness is not needed at all, which allows it to drop the overhead at these particular places. To that end, the compiler performs strictness analysis, just like we proved some functions to be strict in section Strict Functions. Of course, details of strictness depending on the exact values of arguments, like in our example cond, are out of scope (this is in general undecidable). But the compiler may try to find approximate strictness information, and this works in many common cases like power2. Now, abstract interpretation is a formidable idea to reason about strictness: ...

For more about strictness analysis, see the research papers about strictness analysis on the Haskell wiki (http://haskell.org/haskellwiki/Research_papers/Compilation#Strictness).

Interpretation as Powersets

So far, we have introduced ⊥ and the semantic approximation order abstractly by specifying their properties. However, both, as well as any inhabitants of a data type like Just ⊥, can be interpreted as ordinary sets. This is called the powerset construction. NOTE: I'm not sure whether this is really true. Someone who knows, please correct this. The idea is to think of ⊥ as the set of all possible values and that a computation retrieves more information by choosing a subset. In a sense, the denotation of a value starts its life as the set of all values, which will be reduced by computations until there remains a set with a single element only. As an example, consider Bool where the domain looks like


 {True}   {False}
      \     /
       \   /
  ⊥ = {True, False}

The values True and False are encoded as the singleton sets {True} and {False} and ⊥ is the set of all possible values.

Another example is Maybe Bool:

 {Just True}   {Just False}
          \     /
           \   /
 {Nothing}  {Just True, Just False}
        \      /
         \    /
  ⊥ = {Nothing, Just True, Just False}

We see that the semantic approximation order is equivalent to set inclusion, but with arguments switched:

x ⊑ y  if and only if  x ⊇ y

This approach can be used to give a semantics to exceptions in Haskell.[24]

Naïve Sets are unsuited for Recursive Data Types

In section Naïve Sets are unsuited for Recursive Definitions, we argued that taking simple sets as denotation for types doesn't work well with partial functions. In the light of recursive data types, things become even worse, as John C. Reynolds showed in his paper Polymorphism is not set-theoretic.[25]

Reynolds actually considers the recursive type

newtype U = In ((U -> Bool) -> Bool)

Interpreting Bool as the set {True,False} and the function type A -> B as the set of functions from A to B, the type U cannot denote a set. This is because (A -> Bool) is the set of subsets (powerset) of A which, due to a diagonal argument analogous to Cantor's argument that there are "more" real numbers than natural ones, always has a bigger cardinality than A. Thus, (U -> Bool) -> Bool has an even bigger cardinality than U and there is no way for it to be isomorphic to U. Hence, the set U cannot exist, a contradiction. In our world of partial functions, this argument fails. Here, an element of U is given by a sequence of approximations taken from the sequence of domains

⊥, (⊥ -> Bool) -> Bool, (((⊥ -> Bool) -> Bool) -> Bool) -> Bool and so on, where ⊥ denotes the domain with the single inhabitant ⊥. While the author of this text admittedly has no clue what such a thing should mean, the constructor In gives a perfectly well defined object for U. We see that the type (U -> Bool) -> Bool merely consists of shifted approximating sequences, which means that it is isomorphic to U.


As a last note, Reynolds actually constructs an equivalent of U in the second order polymorphic lambda calculus. There, it happens that all terms have a normal form, i.e. there are only total functions, when we do not include a primitive recursion operator fix :: (a -> a) -> a. Thus, there is no true need for partial functions and ⊥, yet a naïve set theoretic semantics fails. We can only speculate that this has to do with the fact that not every mathematical function is computable. In particular, the set of computable functions A -> Bool should not have a bigger cardinality than A.

Footnotes

1. At least as far as types are concerned, but we're trying to avoid that word :)
2. More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.
3. In fact, these are one and the same concept in Haskell.
4. This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.
5. To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
6. This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
7. Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
8. This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.
9. In mathematics, n! normally means the factorial of n, but that syntax is impossible in Haskell, so we don't use it here.
10. It infers a monomorphic type because k is bound by a lambda expression, and things bound by lambdas always have monomorphic types. See Polymorphism.
11. Ian Stewart. The true story of how Theseus found his way out of the labyrinth. Scientific American, February 1991, page 137.
12. Gérard Huet. The Zipper. Journal of Functional Programming, 7 (5), Sept 1997, pp. 549--554. PDF (http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/huet-zipper.pdf)
13. Note the notion of zipper as coined by Gérard Huet also allows one to replace whole subtrees even if there is no extra data associated with them. In the case of our labyrinth, this is irrelevant. We will come back to this in the section Differentiation of data types.
14. Of course, the second topmost node or any other node at most a constant number of links away from the top will do as well.
15. Note that changing the whole data structure as opposed to updating the data at a node can be achieved in amortized constant time even if more nodes than just the top node are affected. An example is incrementing a number in binary representation. While incrementing say 111..11 must touch all digits to yield 1000..00, the increment function nevertheless runs in constant amortized time (but not in constant worst case time).
16. Conor McBride. The Derivative of a Regular Type is its Type of One-Hole Contexts. Available online. PDF (http://www.cs.nott.ac.uk/~ctm/diff.pdf)
17. This phenomenon already shows up with generic tries.
18. Actually, we can apply them to functions whose type is forall a. a -> R, for some arbitrary R, as these accept values of any type as a parameter. Examples of such functions: id, const k for any k. So technically, we can't do anything _useful_ with its elements.

19. In fact, there are no written down and complete denotational semantics of Haskell. This would be a tedious task void of additional insight and we happily embrace the folklore and common sense semantics.
20. Monads are one of the most successful ways to give denotational semantics to imperative programs. See also Haskell/Advanced monads.
21. Strictness as premature evaluation of function arguments is elaborated in the chapter Graph Reduction.
22. The term Laziness comes from the fact that the prevalent implementation technique for non-strict languages is called lazy evaluation.
23. The term lifted is somewhat overloaded, see also Unboxed Types.
24. S. Peyton Jones, A. Reid, T. Hoare, S. Marlow, and F. Henderson. A semantics for imprecise exceptions. (http://research.microsoft.com/~simonpj/Papers/imprecise-exn.htm) In Programming Languages Design and Implementation. ACM press, May 1999.
25. John C. Reynolds. Polymorphism is not set-theoretic. INRIA Rapports de Recherche No. 296. May 1984.

External Links

Online books about Denotational Semantics:
Schmidt, David A. (1986). Denotational Semantics. A Methodology for Language Development (http://www.cis.ksu.edu/~schmidt/text/densem.html). Allyn and Bacon.

Equational reasoning Haskell/Equational reasoning

Program derivation Haskell/Program derivation

Category theory

This article attempts to give an overview of category theory, insofar as it applies to Haskell. To this end, Haskell code will be given alongside the mathematical definitions. Absolute rigour is not followed; in its place, we seek to give the reader an intuitive feel for what the concepts of category theory are and how they relate to Haskell.

Introduction to categories

A category is, in essence, a simple collection. It has three components: a collection of objects; a collection of morphisms, each of which ties two objects (a source object and a target object) together (these are sometimes called arrows, but we avoid that term here as it has other denotations in Haskell); and a notion of composition of these morphisms. If f is


a morphism with source object A and target object B, we write f : A → B. If h is the composition of morphisms f and g, we write h = f ∘ g.

A simple category, with three objects A, B and C, three identity morphisms id A, id B and id C, and two other morphisms f and g. The third element (the specification of how to compose the morphisms) is not shown.

Lots of things form categories. For example, Set is the category of all sets with morphisms as standard functions and composition being standard function composition. (Category names are often typeset in bold face.) Grp is the category of all groups with morphisms as functions that preserve group operations (the group homomorphisms), i.e. for any two groups G with operation * and H with operation ·, a function f : G → H is a morphism in Grp iff:

f(u * v) = f(u) · f(v)

It may seem that morphisms are always functions, but this needn't be the case. For example, any partial order (P, ≤) defines a category where the objects are the elements of P, and there is a morphism between any two objects A and B iff A ≤ B. Moreover, there are allowed to be multiple morphisms with the same source and target objects; using the Set example, sin and cos are both functions with the real numbers as source object and [−1, 1] as target object, but they're most certainly not the same morphism!

Category laws

There are three laws that categories need to follow. Firstly, and most simply, the composition of morphisms needs to be associative. Symbolically,

f ∘ (g ∘ h) = (f ∘ g) ∘ h

Secondly, the category needs to be closed under the composition operation. So if f : B → C and g : A → B, then there must be some morphism h : A → C in the category such that h = f ∘ g. We can see how this works using the following category:

f and g are both morphisms so we must be able to compose them and get another morphism in the category. So which is the morphism g ∘ f? The only option is id A. Similarly, we see that f ∘ g = id B. Lastly, given a category C there needs to be for every object A an identity morphism, id A : A → A, that is an identity of composition with other morphisms. Put precisely, for every morphism g : A → B:

g ∘ id A = id B ∘ g = g

Hask, the Haskell category


The main category we'll be concerning ourselves with in this article is Hask, the category of Haskell types and Haskell functions as morphisms, using (.) for composition: a function f :: A -> B for types A and B is a morphism in Hask. We can check the first and last laws easily: we know (.) is an associative function, and clearly, for any f and g, f . g is another function. In Hask, the identity morphism is id, and we have trivially:

id . f = f . id = f [26]

This isn't an exact translation of the law above, though; we're missing subscripts. The function id in Haskell is polymorphic - it can take many different types for its domain and range, or, in category-speak, can have many different source and target objects. But morphisms in category theory are by definition monomorphic - each morphism has one specific source object and one specific target object. A polymorphic Haskell function can be made monomorphic by specifying its type (instantiating with a monomorphic type), so it would be more precise if we said that the identity morphism from Hask on a type A is (id :: A -> A). With this in mind, the above law would be rewritten as:

(id :: B -> B) . f = f . (id :: A -> A) = f

However, for simplicity, we will ignore this distinction when the meaning is clear.

Exercises
As was mentioned, any partial order (P, ≤) is a category with objects as the elements of P and a morphism between elements a and b iff a ≤ b. Which of the above laws guarantees the transitivity of ≤?
(Harder.) If we add another morphism to the above example, it fails to be a category. Why? Hint: think about associativity of the composition operation.

Functors

So we have some categories which have objects and morphisms that relate our objects together. The next Big Topic in category theory is the functor, which relates categories together. A functor is essentially a transformation between categories, so given categories C and D, a functor F : C → D:


Maps any object A in C to F(A), in D. Maps morphisms f : A → B in C to F(f) : F(A) → F(B) in D.

A functor between two categories, C and D. Of note is that the objects A and B both get mapped to the same object in D, and that therefore g becomes a morphism with the same source and target object (but isn't necessarily an identity), and id A and id B become the same morphism. The arrows showing the mapping of objects are shown in a dotted, pale olive. The arrows showing the mapping of morphisms are shown in a dotted, pale blue.

One of the canonical examples of a functor is the forgetful functor which maps groups to their underlying sets and group morphisms to the functions which behave the same but are defined on sets instead of groups. Another example is the power set functor P which maps sets to their power sets and maps functions f : A → B to functions P(A) → P(B) which take inputs U ⊆ A and return f(U), the image of U under f, defined by f(U) = { f(u) : u ∈ U }. For any category C, we can define a functor known as the identity functor on C, or 1_C, that just maps objects to themselves and morphisms to themselves. This will turn out to be useful in the monad laws section later on. Once again there are a few axioms that functors have to obey. Firstly, given an identity morphism id A on an object A, F(id A) must be the identity morphism on F(A), i.e.:

F(id A) = id F(A)

Secondly functors must distribute over morphism composition, i.e.

F(f ∘ g) = F(f) ∘ F(g)

Exercises
For the diagram given on the right, check these functor laws.

Functors on Hask

The Functor typeclass you will probably have seen in Haskell does in fact tie in with the categorical notion of a functor. Remember that a functor has two parts: it maps objects in one category to objects in another and morphisms in the first category to morphisms in the second. Functors in Haskell are from Hask to func, where func is the subcategory of Hask defined on just that functor's types. E.g. the list functor goes from


Hask to Lst, where Lst is the category containing only list types, that is, [T] for any type T. The morphisms in Lst are functions defined on list types, that is, functions [T] -> [U] for types T, U. How does this tie into the Haskell typeclass Functor? Recall its definition:

class Functor (f :: * -> *) where
  fmap :: (a -> b) -> (f a -> f b)

Let's have a sample instance, too:

instance Functor Maybe where
  fmap f (Just x) = Just (f x)
  fmap _ Nothing  = Nothing

Here's the key part: the type constructor Maybe takes any type T to a new type, Maybe T. Also, fmap restricted to Maybe types takes a function a -> b to a function Maybe a -> Maybe b. But that's it! We've defined two parts, something that takes objects in Hask to objects in another category (that of Maybe types and functions defined on Maybe types), and something that takes morphisms in Hask to morphisms in this category. So Maybe is a functor. A useful intuition regarding Haskell functors is that they represent types that can be mapped over. This could be a list or a Maybe, but also more complicated structures like trees. A function that does some mapping could be written using fmap, then any functor structure could be passed into this function. E.g. you could write a generic function that covers all of Data.List.map, Data.Map.map, Data.Array.IArray.amap, and so on; a small sketch of this idea follows.
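A minimal sketch of such a functor-generic function (the name incrementAll is ours, chosen purely for illustration):

-- one definition, usable for lists, Maybe, trees, and any other Functor
incrementAll :: (Functor f, Num a) => f a -> f a
incrementAll = fmap (+ 1)

-- incrementAll [1, 2, 3]  == [2, 3, 4]
-- incrementAll (Just 5)   == Just 6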

What about the functor axioms? The polymorphic function id takes the place of id A for any A, so the first law states:

fmap id = id

With our above intuition in mind, this states that mapping over a structure doing nothing to each element is equivalent to doing nothing overall. Secondly, morphism composition is just (.), so

fmap (f . g) = fmap f . fmap g
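For lists, this law can be checked on a concrete example (a sketch; negate and (+ 1) are arbitrary functions chosen for illustration):

-- two passes over the list:
map negate (map (+ 1) [1, 2, 3])   -- [-2, -3, -4]
-- a single pass, same result, as the second functor law promises:
map (negate . (+ 1)) [1, 2, 3]     -- [-2, -3, -4]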

This second law is very useful in practice. Picturing the functor as a list or similar container, the right-hand side is a two-pass algorithm: we map over the structure, performing g, then map over it again, performing f. The functor axioms guarantee we can transform this into a single-pass algorithm that performs f . g. This is a process known as fusion.

Exercises
Check the laws for the Maybe and list functors.

Translating categorical concepts into Haskell

Functors provide a good example of how category theory gets translated into Haskell. The key points to remember are that: We work in the category Hask and its subcategories.


Objects are types. Morphisms are functions. Things that take a type and return another type are type constructors. Things that take a function and return another function are higher-order functions. Typeclasses, along with the polymorphism they provide, make a nice way of capturing the fact that in category theory things are often defined over a number of objects at once.

Monads

Monads are obviously an extremely important concept in Haskell, and in fact they originally came from category theory. A monad is a special type of functor, one that supports some additional structure. Additionally, every monad is a functor from a category to that same category. So, down to definitions. A monad is a functor M : C → C, along with two morphisms[27] for every object X in C:

unit^M_X : X → M(X)
join^M_X : M(M(X)) → M(X)

unit and join, the two morphisms that must exist for every object for a given monad.

When the monad under discussion is obvious, we'll miss out the M superscript for these functions and just talk about unit X and join X for some X. Let's see how this translates to the Haskell typeclass Monad, then.

class Functor m => Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b

The class constraint of Functor m ensures that we already have the functor structure: a mapping of objects and of morphisms. return is the (polymorphic) analogue to unit X for any X. But we have a problem. Although return's type looks quite similar to that of unit, (>>=) bears no resemblance to join. The monad function join :: Monad m => m (m a) -> m a does however look quite similar. Indeed, we can recover join and (>>=) from each other:

join :: Monad m => m (m a) -> m a
join x = x >>= id

(>>=) :: Monad m => m a -> (a -> m b) -> m b
x >>= f = join (fmap f x)
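For the list monad, where join is concat and fmap is map, the two formulations can be compared directly (a quick sketch; the example values are arbitrary):

-- join expressed via (>>=):
[[1,2],[3]] >>= id                      -- [1,2,3], the same as concat [[1,2],[3]]
-- (>>=) expressed via join and fmap:
concat (map (\x -> [x, x]) [1,2,3])     -- [1,1,2,2,3,3], the same as [1,2,3] >>= \x -> [x, x]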

So specifying a monad's return and join is equivalent to specifying its return and (>>=). It just turns out that the normal way of defining a monad in category theory is to give unit and join, whereas Haskell programmers like to give return and (>>=).[28] Often, the categorical way makes more sense. Any time you have some kind of structure M and a natural way of taking any object X into M(X), as well as a way of taking M(M(X)) into M(X), you probably have a monad. We can see this in the following example section.


Example: the powerset functor is also a monad

The power set functor P described above forms a monad. For any set S you have a unit S(x) = {x}, mapping elements to their singleton set. Note that each of these singleton sets is trivially a subset of S, so unit S returns elements of the powerset of S, as is required. Also, you can define a function join S as follows: we receive an input in P(P(S)). This is:

A member of the powerset of the powerset of S.
So a member of the set of all subsets of the set of all subsets of S.
So a set of subsets of S.

We then return the union of these subsets, giving another subset of S. Symbolically, join S maps a set of subsets of S to their union. Hence P is a monad.[29]

In fact, P is almost equivalent to the list monad; with the exception that we're talking lists instead of sets, they're almost the same. Compare:

Power set functor on Set
  Given a set S and a morphism f : A → B:
  Function type:  P(f) : P(A) → P(B)
  Definition:     P(f)(U) = { f(u) : u ∈ U }
  Function type:  unit S : S → P(S)
  Definition:     unit S(x) = {x}
  Function type:  join S : P(P(S)) → P(S)
  Definition:     join S(L) = the union of all sets in L

List monad from Haskell
  Given a type T and a function f :: A -> B:
  Function type:  fmap f :: [A] -> [B]
  Definition:     fmap f xs = [ f b | b <- xs ]
  Function type:  return :: T -> [T]
  Definition:     return x = [x]
  Function type:  join :: [[T]] -> [T]
  Definition:     join xs = concat xs

The monad laws and their importance

Just as functors had to obey certain axioms in order to be called functors, monads have a few of their own. We'll first list them, then translate to Haskell, then see why they're important. Given a monad M : C → C and a morphism f : X → Y for objects X, Y of C:

1. join ∘ M(join) = join ∘ join
2. join ∘ M(unit) = join ∘ unit = id
3. unit ∘ f = M(f) ∘ unit
4. join ∘ M(M(f)) = M(f) ∘ join

By now, the Haskell translations should be hopefully self-explanatory:


1. join . fmap join     = join . join
2. join . fmap return   = join . return = id
3. return . f           = fmap f . return
4. join . fmap (fmap f) = fmap f . join

(Remember that fmap is the part of a functor that acts on morphisms.) These laws seem a bit impenetrable at first, though. What on earth do these laws mean, and why should they be true for monads? Let's explore the laws.

The first law

In order to understand this law, we'll first use the example of lists. The first law mentions two functions, join . fmap join (the left-hand side) and join . join (the right-hand side). What will the types of these functions be? Remembering that join's type is [[a]] -> [a] (we're talking just about lists for now), the types are both [[[a]]] -> [a] (the fact that they're the same is handy; after all, we're trying to show they're completely the same function!). So we have a list of list of lists. The left-hand side, then, performs fmap join on this 3-layered list, then uses join on the result. fmap is just the familiar map for lists, so we first map across each of the list of lists inside the top-level list, concatenating them down into a list each. So afterward, we have a list of lists, which we then run through join. In summary, we 'enter' the top level, collapse the second and third levels down, then collapse this new level with the top level.

A demonstration of the first law for lists. Remember that join is concat and fmap is map in the list monad.

What about the right-hand side? We first run join on our list of list of lists. Although this is three layers, and you normally apply a two-layered list to join, this will still work, because a [[[a]]] is just [[b]], where b = [a], so in a sense, a three-layered list is just a two layered list, but rather than the last layer being 'flat', it is composed of another list. So if we apply our list of lists (of lists) to join, it will flatten those outer two layers into one. As the second layer wasn't flat but instead contained a third layer, we will still end up with a list of lists, which the other join flattens. Summing up, the left-hand side will flatten the inner two layers into a new layer, then flatten this with the outermost layer. The right-hand side will flatten the outer two layers, then flatten this with the innermost layer. These two operations should be equivalent. It's sort of like a law of associativity for join. We can see this at work more if we recall the definition of join for Maybe:


join :: Maybe (Maybe a) -> Maybe a
join Nothing         = Nothing
join (Just Nothing)  = Nothing
join (Just (Just x)) = Just x

So if we had a three-layered Maybe (i.e., it could be Nothing, Just Nothing, Just (Just Nothing) or Just (Just (Just x))), the first law says that collapsing the inner two layers first, then that with the outer layer, is exactly the same as collapsing the outer layers first, then that with the innermost layer.

Exercises
Verify that the list and Maybe monads do in fact obey this law with some examples to see precisely how the layer flattening works.

The second law

What about the second law, then? Again, we'll start with the example of lists. Both functions mentioned in the second law are functions [a] -> [a]. The left-hand side expresses a function that maps over the list, turning each element x into its singleton list [x], so that at the end we're left with a list of singleton lists. This two-layered list is flattened down into a single-layer list again using join. The right-hand side, however, takes the entire list [x, y, z, ...], turns it into the singleton list of lists [[x, y, z, ...]], then flattens the two layers down into one again. This law is less obvious to state quickly, but it essentially says that applying return to a monadic value, then joining the result, should have the same effect whether you perform the return from inside the top layer or from outside it.

Exercises
Prove this second law for the Maybe monad.

The third and fourth laws

The last two laws express more self-evident facts about how we expect monads to behave. The easiest way to see how they are true is to expand them to use the expanded form:

1. \x -> return (f x) = \x -> fmap f (return x)
2. \x -> join (fmap (fmap f) x) = \x -> fmap f (join x)

Exercises
Convince yourself that these laws should hold true for any monad by exploring what they mean, in a similar style to how we explained the first and second laws.

Application to do-blocks

Well, we have intuitive statements about the laws that a monad must support, but why is that important? The answer becomes obvious when we consider do-blocks. Recall that a do-block is just syntactic sugar for a combination of statements involving (>>=), as witnessed by the usual translation:

do { x }                 -->  x
do { let { y = v }; x }  -->  let y = v in x


do { v <- y; x }         -->  y >>= \v -> x
do { y; x }              -->  y >>= \_ -> x
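For instance (an illustrative example of ours, not from the original text), a small do-block desugars step by step with exactly these rules:

-- A small do-block:
example :: IO ()
example = do
  line <- getLine
  let n = length line
  print n

-- The same block after applying the translation rules above:
example' :: IO ()
example' = getLine >>= \line -> let n = length line in print n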

Also notice that we can prove what are normally quoted as the monad laws using return and (>>=) from our above laws (the proofs are a little heavy in some cases, feel free to skip them if you want to): 1. return x >>= f = f x. Proof:

  return x >>= f
= join (fmap f (return x))   -- By the definition of (>>=)
= join (return (f x))        -- By law 3
= (join . return) (f x)
= id (f x)                   -- By law 2
= f x

2. m >>= return = m. Proof:

  m >>= return
= join (fmap return m)       -- By the definition of (>>=)
= (join . fmap return) m
= id m                       -- By law 2
= m

3. (m >>= f) >>= g = m >>= (\x -> f x >>= g). Proof (recall that fmap f . fmap g = fmap (f . g)):

  (m >>= f) >>= g
= (join (fmap f m)) >>= g                            -- By the definition of (>>=)
= join (fmap g (join (fmap f m)))                    -- By the definition of (>>=)
= (join . fmap g) (join (fmap f m))
= (join . fmap g . join) (fmap f m)
= (join . join . fmap (fmap g)) (fmap f m)           -- By law 4
= (join . join . fmap (fmap g) . fmap f) m
= (join . join . fmap (fmap g . f)) m                -- By the distributive law of functors
= (join . join . fmap (\x -> fmap g (f x))) m
= (join . fmap join . fmap (\x -> fmap g (f x))) m   -- By law 1
= (join . fmap (join . (\x -> fmap g (f x)))) m      -- By the distributive law of functors
= (join . fmap (\x -> join (fmap g (f x)))) m
= (join . fmap (\x -> f x >>= g)) m                  -- By the definition of (>>=)
= join (fmap (\x -> f x >>= g) m)
= m >>= (\x -> f x >>= g)                            -- By the definition of (>>=)

These new monad laws, using return and (>>=), can be translated into do-block notation.

Points-free style                               Do-block style
return x >>= f = f x                            do { v <- return x; f v } = do { f x }
m >>= return = m                                do { v <- m; return v } = do { m }
(m >>= f) >>= g = m >>= (\x -> f x >>= g)       do { y <- do { x <- m; f x }; g y }
                                                  = do { x <- m; y <- f x; g y }

The monad laws are now common-sense statements about how do-blocks should function. If one of these laws were invalidated, users would become confused, as they wouldn't be able to manipulate things within do-blocks as they would expect. The monad laws are, in essence, usability guidelines.

Exercises
In fact, the two versions of the laws we gave:

-- Categorical:
join . fmap join = join . join
join . fmap return = join . return = id
return . f = fmap f . return
join . fmap (fmap f) = fmap f . join

-- Functional:
m >>= return = m
return m >>= f = f m
(m >>= f) >>= g = m >>= (\x -> f x >>= g)

are entirely equivalent. We showed that we can recover the functional laws from the categorical ones. Go the other way: show that starting from the functional laws, the categorical laws hold. It may be useful to remember the following definitions:

join m = m >>= id
fmap f m = m >>= return . f

Thanks to Yitzchak Gale for suggesting this exercise.

Summary We've come a long way in this chapter. We've looked at what categories are and how they apply to Haskell. We've introduced the basic concepts of category theory including functors, as well as some more advanced topics like monads, and seen how they're crucial to idiomatic Haskell. We haven't covered some of the basic category theory that wasn't needed for our aims, like natural transformations, but have instead provided an intuitive feel for the categorical grounding behind Haskell's structures.

Notes

1. At least as far as types are concerned, but we're trying to avoid that word :)
2. More technically, fst and snd have types which limit them to pairs. It would be impossible to define projection functions on tuples in general, because they'd have to be able to accept tuples of different sizes, so the type of the function would vary.
3. In fact, these are one and the same concept in Haskell.
4. This isn't quite what chr and ord do, but that description fits our purposes well, and it's close enough.


5. To make things even more confusing, there's actually even more than one type for integers! Don't worry, we'll come on to this in due course.
6. This has been somewhat simplified to fit our purposes. Don't worry, the essence of the function is there.
7. Some of the newer type system extensions to GHC do break this, however, so you're better off just always putting down types anyway.
8. This is a slight lie. That type signature would mean that you can compare two values of any type whatsoever, but this clearly isn't true: how can you see if two functions are equal? Haskell includes a kind of 'restricted polymorphism' that allows type variables to range over some, but not all types. Haskell implements this using type classes, which we'll learn about later. In this case, the correct type of (==) is Eq a => a -> a -> Bool.
9. In mathematics, n! normally means the factorial of n, but that syntax is impossible in Haskell, so we don't use it here.
10. It infers a monomorphic type because k is bound by a lambda expression, and things bound by lambdas always have monomorphic types. See Polymorphism.
11. Ian Stewart. The true story of how Theseus found his way out of the labyrinth. Scientific American, February 1991, page 137.
12. Gérard Huet. The Zipper. Journal of Functional Programming, 7 (5), Sept 1997, pp. 549--554. PDF (http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/huet-zipper.pdf)
13. Note that the notion of zipper as coined by Gérard Huet also allows replacing whole subtrees even if there is no extra data associated with them. In the case of our labyrinth, this is irrelevant. We will come back to this in the section Differentiation of data types.
14. Of course, the second topmost node or any other node at most a constant number of links away from the top will do as well.
15. Note that changing the whole data structure, as opposed to updating the data at a node, can be achieved in amortized constant time even if more nodes than just the top node are affected. An example is incrementing a number in binary representation. While incrementing say 111..11 must touch all digits to yield 1000..00, the increment function nevertheless runs in constant amortized time (but not in constant worst case time).
16. Conor McBride. The Derivative of a Regular Type is its Type of One-Hole Contexts. Available online. PDF (http://www.cs.nott.ac.uk/~ctm/diff.pdf)
17. This phenomenon already shows up with generic tries.
18. Actually, we can apply them to functions whose type is forall a. a -> R, for some arbitrary R, as these accept values of any type as a parameter. Examples of such functions: id, const k for any k. So technically, we can't do anything _useful_ with its elements.
19. In fact, there are no written down and complete denotational semantics of Haskell. This would be a tedious task void of additional insight and we happily embrace the folklore and common sense semantics.
20. Monads are one of the most successful ways to give denotational semantics to imperative programs. See also Haskell/Advanced monads.
21. Strictness as premature evaluation of function arguments is elaborated in the chapter Graph Reduction.
22. The term Laziness comes from the fact that the prevalent implementation technique for non-strict languages is called lazy evaluation.
23. The term lifted is somewhat overloaded, see also Unboxed Types.
24. S. Peyton Jones, A. Reid, T. Hoare, S. Marlow, and F. Henderson. A semantics for imprecise exceptions. (http://research.microsoft.com/~simonpj/Papers/imprecise-exn.htm) In Programming Languages Design and Implementation. ACM press, May 1999.
25. John C. Reynolds. Polymorphism is not set-theoretic. INRIA Rapports de Recherche No. 296. May 1984.
26. Actually, there is a subtlety here: because (.) is a lazy function, if f is undefined, we have that id . f = \_ -> _|_. Now, while this may seem equivalent to _|_ for all intents and purposes, you can actually tell them apart using the strictifying function seq, meaning that the last category law is broken. We can define a new strict composition function, f .! g = ((.) $! f) $! g, that makes Hask a category. We proceed by using the normal (.), though, and attribute any discrepancies to the fact that seq breaks an awful lot of the nice language properties anyway.
27. Experienced category theorists will notice that we're simplifying things a bit here; instead of presenting unit and join as natural transformations, we treat them explicitly as morphisms, and require naturality as extra axioms alongside the standard monad laws (laws 3 and 4). The reasoning is simplicity; we are not trying to teach category theory as a whole, simply to give a categorical background to some of the structures in Haskell. You may also notice that we are giving these morphisms names suggestive of their Haskell analogues, because the names η and μ don't provide much intuition.
28. This is perhaps due to the fact that Haskell programmers like to think of monads as a way of sequencing computations with a common feature, whereas in category theory the container aspect of the various structures is emphasised. join pertains naturally to containers (squashing two layers of a container down into one), but (>>=) is the natural sequencing operation (do something, feeding its results into something else).
29. If you can prove that certain laws hold, which we'll explore in the next section.

Haskell Performance

Graph reduction

Notes and TODOs

TODO: Pour lazy evaluation explanation from Laziness into this mold.
TODO: better section names.
TODO: ponder the graphical representation of graphs.
 - No graphical representation, do it with let .. in. Pro: reductions are easiest to perform that way anyway. Con: no graphic.
 - ASCII art / line art similar to the one in Bird&Wadler? Pro: displays only the relevant parts truly as a graph, easy to perform on paper. Con: ugly, no large graphs with that.
 - Full-blown graphs with @-nodes? Pro: looks graphy. Con: nobody needs to know @-nodes in order to understand graph reduction; they can be explained in the implementation section.
 - Graphs without @-nodes. Pro: easy to understand. Con: what about currying?
! Keep this chapter short. The sooner the reader knows how to evaluate Haskell programs by hand, the better.
First sections closely follow Bird&Wadler.

Introduction

Programming is not only about writing correct programs (denotational semantics) but also about writing fast ones that require little memory (operational semantics). For that, we need to know how they're executed on a machine. This chapter explains how Haskell programs are commonly executed on a real computer and thus serves as a foundation for analyzing time and space usage. Note that the Haskell standard deliberately does not give an operational semantics; implementations are free to invent their own. But so far, every implementation of Haskell has lazy evaluation in mind.

Evaluation of Expressions


Reductions

Executing a functional program, i.e. evaluating an expression, means to repeatedly apply function definitions until all function applications have been expanded. Take for example the expression pythagoras 3 4 together with the definitions

square x = x * x
pythagoras x y = square x + square y

One possible sequence of such reductions is

pythagoras 3 4
 ⇒ square 3 + square 4    (pythagoras)
 ⇒ (3*3) + square 4       (square)
 ⇒ 9 + square 4           (*)
 ⇒ 9 + (4*4)              (square)
 ⇒ 9 + 16                 (*)
 ⇒ 25

Every reduction replaces a subexpression, called a reducible expression or redex for short, with an equivalent one, either by appealing to a function definition, as for square, or by using a built-in function like (+). An expression without redexes is said to be in normal form. Of course, execution stops once a normal form is reached, which is then the result of the computation. Clearly, the fewer reductions that have to be performed, the faster the program runs. We cannot expect each reduction step to take the same amount of time because its implementation on real hardware looks very different, but in terms of asymptotic complexity, this number of reductions is an accurate measure.

Reduction strategies

There are many possible reduction sequences, and the number of reductions may depend on the order in which reductions are performed. Take for example the expression fst (square 3, square 4). One systematic possibility is to evaluate all function arguments before applying the function definition:

fst (square 3, square 4)
 ⇒ fst (3*3, square 4)     (square)
 ⇒ fst ( 9 , square 4)     (*)
 ⇒ fst ( 9 , 4*4)          (square)
 ⇒ fst ( 9 , 16 )          (*)
 ⇒ 9                       (fst)

This is called an innermost reduction strategy, and an innermost redex is a redex that has no other redex as a subexpression inside it. Another systematic possibility is to apply all function definitions first and only then evaluate arguments:

fst (square 3, square 4)
 ⇒ square 3    (fst)
 ⇒ 3*3         (square)
 ⇒ 9           (*)


which is named outermost reduction and always reduces outermost redexes that are not inside another redex. Here, the outermost reduction uses fewer reduction steps than the innermost reduction. Why? Because the function fst doesn't need the second component of the pair, and the reduction of square 4 was superfluous.

Termination

For some expressions, like

loop = 1 + loop

no reduction sequence terminates, and program execution enters a never-ending loop; such expressions do not have a normal form. But there are also expressions where some reduction sequences terminate and some do not, an example being

fst (42, loop)
 ⇒ 42                          (fst)

fst (42, loop)
 ⇒ fst (42, 1+loop)            (loop)
 ⇒ fst (42, 1+(1+loop))        (loop)
 ⇒ ...

The first reduction sequence is outermost reduction and the second is innermost reduction, which tries in vain to evaluate the loop even though it is ignored by fst anyway. The ability to evaluate function arguments only when needed is what makes outermost reduction optimal when it comes to termination:

Theorem (Church-Rosser II)
If there is one terminating reduction, then outermost reduction will terminate, too.

Graph reduction

Despite the ability to discard arguments, outermost reduction doesn't always take fewer reduction steps than innermost reduction:

square (1+2)
 ⇒ (1+2)*(1+2)    (square)
 ⇒ (1+2)*3        (+)
 ⇒ 3*3            (+)
 ⇒ 9              (*)

Here, the argument (1+2) is duplicated and subsequently reduced twice. But because it is one and the same argument, the solution is to share the reduction (1+2) ⇒ 3 with all other incarnations of this argument. This can be achieved by representing expressions as graphs. For example,





 __________
|          |
◊    *    ◊        (both ◊ point to the single shared node (1+2))

represents the expression (1+2)*(1+2). Now, reduction of square (1+2) proceeds as follows


square (1+2)
 ⇒  ◊ * ◊          (square)    -- both ◊ point to the shared node (1+2)
 ⇒  ◊ * ◊          (+)         -- the shared node is now 3
 ⇒  9              (*)

and the work has been shared. In other words, outermost graph reduction now reduces every argument at most once. For this reason, it never takes more reduction steps than innermost reduction, a fact we will prove in the section Reasoning about Time. Sharing of expressions is also introduced with let and where constructs. For instance, consider Heron's formula for the area of a triangle with sides a, b and c:

area a b c = let s = (a+b+c)/2
             in  sqrt (s*(s-a)*(s-b)*(s-c))

Instantiating this to an equilateral triangle will reduce as

area 1 1 1
 ⇒  sqrt ( ◊ *( ◊ -a)*( ◊ -b)*( ◊ -c))    (area)          -- every ◊ points to the shared node ((1+1+1)/2)
 ⇒  sqrt ( ◊ *( ◊ -a)*( ◊ -b)*( ◊ -c))    (+),(+),(/)     -- the shared node is now 1.5
 ⇒  ...
 ⇒  0.433012702

which is √3/4 [30]. Put differently, let-bindings simply give names to nodes in the graph. In fact, one can dispense entirely with a graphical notation and rely solely on let to mark sharing and express a graph structure.

Exercises
1. Reduce square (square 3) to normal form with innermost, outermost and outermost graph reduction.
2. Consider the fast exponentiation algorithm

   power x 0 = 1
   power x n = x' * x' * (if n `mod` 2 == 0 then 1 else x)
     where x' = power x (n `div` 2)

that takes x to the power of n. Reduce power 2 5 with innermost and outermost graph reduction. Pattern matching will be treated in the next section, for now simply assume that reducing power x n even with outermost reduction amounts to first reducing n to normal form and then


deciding which defining equation to choose. What happens to the algorithm if we use "graphless" outermost reduction?

Case expressions, weak head normal form

Pattern matching and case expressions are the workhorse of Haskell programs. Scrutinees have to be reduced, but only to weak head normal form = no top-level redex (i.e. either: a variable, possibly with arguments; a built-in with too few arguments; a constructor; or a lambda-abstraction. Note that function types cannot be pattern matched anyway, but the devious seq can evaluate them to WHNF nonetheless.), in the spirit of lazy evaluation. "weak" = no reduction under lambdas. "head" = first the function application, then the arguments. Complex pattern matches (multiple parameters, nested) are translated to case expressions and lambda abstractions, but it suffices to know the following rules when tracing normal order graph reduction: from top to bottom; arguments are only evaluated when their WHNF is indeed needed for the pattern.

NOTE: Should we mention lambda-abstractions? I'm not sure whether regarding the LHS of function definitions as a redex with multiple arguments is good, since this is not adequate in the case of f x = let z = .. in \y -> ... Better mention lambda-abstractions.

At this point, the reader is (should be) able to trace every Haskell program he encounters. So, now come the encounters and exercises. Evaluate the following (with the function definitions from the Prelude, of course):

square (square 2)
map (1*) (map (2*) [1,2,3])
head ([1,2,3] ++ loop)
zip (map (1+) [1..3]) (iterate (+1) 0)
take (square 15 - sum (map (^3) [1..5])) (map square [2718..3146])
f . g     -- evaluating a function to WHNF. Needed for Diff-Lists!

Controlling Space

NOTE: The chapter Haskell/Strictness is intended to elaborate on the stuff here.
NOTE: The notion of strict function is to be introduced before this section.

Now's the time for the space-eating fold example:

foldl (+) 0 [1..10]

Introduce seq and $! that can force an expression to WHNF. => foldl'.
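A minimal sketch of that idea (our own illustration; foldl'' is a hypothetical name so it doesn't clash with Data.List.foldl'): the ordinary foldl builds a chain of unevaluated (+) thunks, while a version that applies seq to the accumulator keeps it in WHNF at every step.

-- foldl (+) 0 [1..10] first builds the thunk ((((0+1)+2)+...)+10).

-- A strict left fold in the style of Data.List.foldl':
foldl'' :: (a -> b -> a) -> a -> [b] -> a
foldl'' _ acc []     = acc
foldl'' f acc (x:xs) = let acc' = f acc x
                       in  acc' `seq` foldl'' f acc' xs

-- foldl'' (+) 0 [1..10] forces the accumulator at each step,
-- so no chain of thunks is built up.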

Tricky space leak example:


(\xs -> head xs + last xs) [1..n]
(\xs -> last xs + head xs) [1..n]

The first version runs in O(1) space, the second in O(n).

Sharing and CSE

NOTE: overlaps with section about time. Hm, make an extra memoization section?

How to share:

foo x y = s + y
  where s = expensive x    -- s is not shared between applications of (foo x)

foo x = \y -> s + y
  where s = expensive x    -- s is shared among all applications of (foo x)

"Lambda-lifting", "Full laziness". The compiler should not do full laziness. A classic and important example for the trade between space and time: sublists [] = [[]] sublists (x:xs) = sublists xs ++ map (x:) sublists xs sublists' (x:xs) = let ys = sublists' xs in ys ++ map (x:) ys

That's why the compiler should not do common subexpression elimination as an optimization. (Does GHC?)

Tail recursion

NOTE: Does this belong to the space section? I think so, it's about stack space. Tail recursion in Haskell looks different.
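To illustrate that last note (a sketch of ours, not part of the original draft): a tail-recursive sum only runs in constant space if the accumulator is forced; otherwise each tail call just grows a thunk on the heap.

-- Tail recursive, but the accumulator is a growing thunk 0+1+2+...
sumAcc :: Integer -> [Integer] -> Integer
sumAcc acc []     = acc
sumAcc acc (x:xs) = sumAcc (acc + x) xs

-- Forcing the accumulator with seq keeps it evaluated at every step.
sumAcc' :: Integer -> [Integer] -> Integer
sumAcc' acc []     = acc
sumAcc' acc (x:xs) = let acc' = acc + x in acc' `seq` sumAcc' acc' xs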

Reasoning about Time

Note: introducing strictness before the upper time bound saves some hassle with explanation?

Lazy eval < Eager eval

When reasoning about execution time, naively performing graph reduction by hand to get a clue on what's going on is most often infeasible. In fact, the order of evaluation taken by lazy evaluation is difficult to predict by humans; it is much easier to trace the path of eager evaluation, where arguments are reduced to normal form before being supplied to a function. But knowing that lazy evaluation always performs fewer reduction steps than eager evaluation (present the proof!), we can easily get an upper bound for the number of reductions by pretending that our function is evaluated eagerly. Example:


or        = foldr (||) False
isPrime n = not $ or $ map (\k -> n `mod` k == 0) [2..n-1]

=> Eager evaluation always takes n steps; lazy evaluation won't take more than that, but it will actually take fewer.

Throwing away arguments

The time bound is exact for functions that examine their argument to normal form anyway. The property that a function needs its argument can concisely be captured by denotational semantics:

f ⊥ = ⊥

Argument in WHNF only, though. Operationally: non-termination -> non-termination. (This is an approximation only, though, because f anything = ⊥ doesn't "need" its argument.) Non-strict functions don't need their argument, and the eager time bound is not sharp. But the information whether a function is strict or not can already be used to great benefit in the analysis.

isPrime n = not $ or $ (n `mod` 2 == 0) : (n `mod` 3 == 0) : ...

It's enough to know or (True : ⊥) = True.

Other examples:



foldr (:) [] vs. foldl (flip (:)) [] with ⊥ as argument. Can head . mergesort be analyzed with ⊥ only? In any case, this example is too involved and belongs to Haskell/Laziness.

Persistence & Amortisation

NOTE: this section is better left to a data structures chapter, because the subsections above cover most of the cases a programmer not focusing on data structures / amortization will encounter.

Persistence = no updates in place, older versions are still there. Amortisation = distribute unequal running times across a sequence of operations. Both don't go well together in a strict setting. Lazy evaluation can reconcile them. Debit invariants. Example: incrementing numbers in binary representation.

Implementation of Graph reduction

Small talk about G-machines and such. Main definition:

closure = thunk = code/data pair on the heap

What do they do? Consider (λx. λy. x + y) 2. This is a function that returns a function, namely λy. 2 + y in this case. But when you want to compile code, it's prohibitive to actually perform the substitution in memory and replace all occurrences of x by 2. So, you return a closure that consists of the function code λy. x + y and an environment {x = 2} that assigns values to the free variables appearing in there.


GHC (and most Haskell implementations?) avoids free variables completely and uses supercombinators: free variables are supplied as extra parameters, together with the observation that lambda-expressions with too few parameters don't need to be reduced, since their WHNF is not very different. Note that these terms are technical terms for implementation stuff; lazy evaluation happily lives without them. Don't use them in any of the sections above.

References

Bird, Richard (1998). Introduction to Functional Programming using Haskell. Prentice Hall. ISBN 0-13-484346-0.
Peyton Jones, Simon (1987). The Implementation of Functional Programming Languages (http://research.microsoft.com/~simonpj/papers/slpj-book-1987/). Prentice Hall.

Laziness

Introduction

By now you are aware that Haskell uses lazy evaluation in the sense that nothing is evaluated until necessary. The problem is, what exactly does "until necessary" mean? In this chapter, we will see how lazy evaluation works (how little black magic there is), what exactly it means for functional programming, and how to make the best use of it. But first, let's consider the reasons for having lazy evaluation in the first place. At first glance, it is tempting to think that lazy evaluation is meant to make programs more efficient. After all, what can be more efficient than not doing anything? This is only true in a superficial sense. Besides, in practice, laziness often introduces an overhead that leads programmers to hunt for places where they can make their code stricter. The real benefit of laziness is not merely that it makes things efficient, but that it makes the right things efficient enough. Lazy evaluation allows us to write simple, elegant code which would simply not be practical in a strict environment.

Nonstrictness versus Laziness There is a slight difference between laziness and nonstrictness. Nonstrict semantics refers to a given property of Haskell programs that you can rely on: nothing will be evaluated until it is needed. Lazy evaluation is how you implement nonstrictness, using a device called thunks which we explain in the next section. However, these two concepts are so closely linked that it is beneficial to explain them both together: a knowledge of thunks is useful for understanding nonstrictness, and the semantics of nonstrictness explains why you would be using lazy evaluation in the first place. As such, we introduce the concepts simultaneously and make no particular effort to keep them from intertwining, with the exception of getting the terminology right.

Thunks and Weak head normal form There are two principles you need to understand to get how programs execute in Haskell. Firstly, we have the property of nonstrictness: we evaluate as little as possible for as long as possible. Secondly, Haskell values


are highly layered; 'evaluating' a Haskell value could mean evaluating down to any one of these layers. To see what this means, let's walk through a few examples using a pair. let (x, y) = (length [1..5], reverse "olleh") in ...

(We'll assume that in the 'in' part, we use x and y somewhere. Otherwise, we're not forced to evaluate the let-binding at all; the right-hand side could have been undefined and it would still work if the 'in' part doesn't mention x or y. This assumption will remain for all the examples in this section.) What do we know about x? Looking at it we can see it's pretty obvious x is 5 and y "hello", but remember the first principle: we don't want to evaluate the calls to length and reverse until we're forced to. So okay, we can say that x and y are both thunks: that is, they are unevaluated values with a recipe that explains how to evaluate them. For example, for x this recipe says 'Evaluate length [1..5]'. However, we are actually doing some pattern matching on the left hand side. What would happen if we removed that? let z = (length [1..5], reverse "olleh") in ...

Although it's still pretty obvious to us that z is a pair, the compiler sees that we're not trying to deconstruct the value on the right-hand side of the '=' sign at all, so it doesn't really care what's there. It lets z be a thunk on its own. Later on, when we try to use z, we'll probably need one or both of the components, so we'll have to evaluate z, but for now, it can be a thunk. Above, we said Haskell values were layered. We can see that at work if we pattern match on z: let z = (length [1..5], reverse "olleh") (n, s) = z in ...

After the first line has been executed, z is simply a thunk. We know nothing about the sort of value it is because we haven't been asked to find out yet. In the second line, however, we pattern match on z using a pair pattern. The compiler thinks 'I better make sure that pattern does indeed match z, and in order to do that, I need to make sure z is a pair.' Be careful, though. We're not as of yet doing anything with the component parts (the calls to length and reverse), so they can remain unevaluated. In other words, z, which was just a thunk, gets evaluated to something like (*thunk*, *thunk*), and n and s become thunks which, when evaluated, will be the component parts of the original z. Let's try a slightly more complicated pattern match: let z = (length [1..5], reverse "olleh") (n, s) = z 'h':ss = s in ...

The pattern match on the second component of z causes some evaluation. The compiler wishes to check that the 'h':ss pattern matches the second component of the pair. So, it:


Evaluates the top level of s to ensure it's a cons cell: s = *thunk* : *thunk*. (If s had been an empty list we would encounter a pattern-match failure error at this point.) Evaluates the first thunk it just revealed to make sure it's 'h': s = 'h' : *thunk*. The rest of the list stays unevaluated, and ss becomes a thunk which, when evaluated, will be the rest of this list.

So it seems that we can 'partially evaluate' (most) Haskell values. Also, there is some sense of the minimum amount of evaluation we can do. For example, if we have a pair thunk, then the minimum amount of evaluation takes us to the pair constructor with two unevaluated components: (*thunk*, *thunk*). If we have a list, the minimum amount of evaluation takes us either to a cons cell *thunk* : *thunk* or an empty list []. Note that in the second case, no more evaluation can be performed on the value; it is said to be in normal form. If we are at any of the intermediate steps so that we've performed at least some evaluation on a value, it is in weak head normal form (WHNF). (There is also a 'head normal form', but it's not used in Haskell.) Fully evaluating something in WHNF reduces it to something in normal form; if at some point we needed to, say, print z out to the user, we'd need to fully evaluate it, including those calls to length and reverse, to (5, "hello"), where it is in normal form. Performing any degree of evaluation on a value is sometimes called forcing that value.

[Figure: Evaluating the value (4, [1, 2]) step by step. The first stage is completely unevaluated; all subsequent forms are in WHNF, and the last one is also in normal form.]

Note that for some values there is only one. For example, you can't partially evaluate an integer. It's either a thunk or it's in normal form. Furthermore, if we have a constructor with strict components (annotated with an exclamation mark, as with data MaybeS a = NothingS | JustS !a), these components become evaluated as soon as we evaluate the level above. I.e. we can never have JustS *thunk*; as soon as we get to this level, the strictness annotation on the component of JustS forces us to evaluate the component part.

So in this section we've explored the basics of laziness. We've seen that nothing gets evaluated until it is needed (in fact the only place that Haskell values get evaluated is in pattern matches, and inside certain primitive IO functions), and that this principle even applies to evaluating values: we do the minimum amount of work on a value that we need to compute our result.

Lazy and strict functions

Functions can be lazy or strict 'in an argument'. Most functions need to do something with their arguments, and this will involve evaluating these arguments to different levels. For example, length needs to evaluate only the cons cells in the argument you give it, not the contents of those cons cells: length *thunk* might evaluate to something like length (*thunk* : *thunk* : *thunk* : []), which in turn evaluates to 3. Others need to evaluate their arguments fully, like show. If you had show *thunk*, there's no way you can do anything other than evaluate that thunk to normal form.


So some functions evaluate their arguments more fully than others. Given two functions of one parameter, f and g, we say f is stricter than g if f x evaluates x to a deeper level than g x. Often we only care about WHNF, so a function that evaluates its argument to at least WHNF is called strict and one that performs no evaluation is lazy. What about functions of more than one parameter? Well, we can talk about functions being strict in one parameter, but lazy in another. For example, given a function like the following:

f x y = show x

Clearly we need to perform no evaluation on y, but we need to evaluate x fully to normal form, so f is strict in its first parameter but lazy in its second.

Exercises
1. Why must we fully evaluate x to normal form in f x y = show x?
2. Which is the stricter function?

   f x = length [head x]
   g x = length (tail x)

TODO: explain that it's also about how much of the input we need to consume before we can start producing output. E.g. foldr (:) [] and foldl (flip (:)) [] both evaluate their arguments to the same level of strictness, but foldr can start producing values straight away, whereas foldl needs to evaluate cons cells all the way to the end before it starts anything.

Black-box strictness analysis

Imagine we're given some function f which takes a single parameter. We're not allowed to look at its source code, but we want to know whether f is strict or not. How might we do this? Probably the easiest way is to use the standard Prelude value undefined. Forcing undefined to any level of evaluation will halt our program and print an error, so all of these print errors:

let (x, y) = undefined in x
length undefined
head undefined
JustS undefined -- Using MaybeS as defined in the last section

So if a function is strict, passing it undefined will result in an error. Were the function lazy, passing it undefined would print no error and we could carry on as normal. For example, none of the following produce errors:

If f returns an error when passed undefined, it must be strict. Otherwise, it's lazy.


let (x, y) = (4, undefined) in x
length [undefined, undefined, undefined]
head (4 : undefined)
Just undefined

So we can say that f is a strict function if, and only if, f undefined results in an error being printed and the halting of our program.

In the context of nonstrict semantics

What we've presented so far makes sense until you start to think about functions like id. Is id strict? Our gut reaction is probably to say "No! It doesn't evaluate its argument, therefore it's lazy". However, let's apply our black-box strictness analysis from the last section to id. Clearly, id undefined is going to print an error and halt our program, so shouldn't we say that id is strict? The reason for this mixup is that Haskell's nonstrict semantics makes the whole issue a bit murkier. Nothing gets evaluated if it doesn't need to be, according to nonstrictness. In the following code, will length undefined be evaluated?

[4, 10, length undefined, 12]

If you type this into GHCi, it seems so, because you'll get an error printed. However, our question was something of a trick one; it doesn't make sense to say whether a value gets evaluated, unless we're doing something to this value. Think about it: if we type head [1, 2, 3] into GHCi, the only reason we have to do any evaluation whatsoever is because GHCi has to print us out the result. Typing [4, 10, length undefined, 12] again requires GHCi to print that list back to us, so it must evaluate it to normal form. You can think of anything you type into GHCi as being passed to show. In your average Haskell program, nothing at all will be evaluated until we come to perform the IO in main. So it makes no sense to say whether something is evaluated or not unless we know what it's being passed to, one level up. So when we say "Does f x force x?" what we really mean is "Given that we're forcing f x, does x get forced as a result?". Now we can turn our attention back to id. If we force id x to normal form, then x will be forced to normal form, so we conclude that id is strict. id itself doesn't evaluate its argument, it just hands it on to the caller who will. One way to see this is in the following code:

-- We evaluate the right-hand side of the let-binding to WHNF by pattern-matching
-- against it.
let (x, y) = undefined in x       -- Error, because we force undefined.
let (x, y) = id undefined in x    -- Error, because we force undefined.

id doesn't "stop" the forcing, so it is strict. Contrast this to a clearly lazy function, const (3, 4):

let (x, y) = undefined in x                -- Error, because we force undefined.
let (x, y) = const (3, 4) undefined in x   -- No error, because const (3, 4) is lazy.

The denotational view on things If you're familiar with denotational semantics (perhaps you've read the wikibook chapter on it?), then the strictness of a function can be summed up very succinctly:


f is strict if and only if f ⊥ = ⊥. This assumes that you say that everything with type forall a. a, including undefined, error "any string", throw and so on, has denotation ⊥.

Lazy pattern matching

You might have seen pattern matches like the following in Haskell sources.

Example: A lazy pattern match

-- From Control.Arrow
(***) f g ~(x, y) = (f x, g y)

The question is: what does the tilde sign (~) mean in the above pattern match? ~ makes a lazy pattern or irrefutable pattern. Normally, if you pattern match using a constructor as part of the pattern, you have to evaluate any argument passed into that function to make sure it matches the pattern. For example, if you had a function like the above, the third argument would be evaluated when you call the function to make sure the value matches the pattern. (Note that the first and second arguments won't be evaluated, because the patterns f and g match anything. Also it's worth noting that the components of the tuple won't be evaluated: just the 'top level'. Try let f (Just x) = 1 in f (Just undefined) to see the this.) However, prepending a pattern with a tilde sign delays the evaluation of the value until the component parts are actually used. But you run the risk that the value might not match the pattern -- you're telling the compiler 'Trust me, I know it'll work out'. (If it turns out it doesn't match the pattern, you get a runtime error.) To illustrate the difference:

Example: How ~ makes a difference

Prelude> let f (x,y) = 1 in f undefined
*** Exception: Prelude.undefined
Prelude> let f ~(x,y) = 1 in f undefined
1

In the first example, the value is evaluated because it has to match the tuple pattern. You evaluate undefined and get undefined, which stops the proceedings. In the latter example, you don't bother evaluating the parameter until it's needed, which turns out to be never, so it doesn't matter that you passed it undefined. To bring the discussion around in a circle back to (***):

Example: How ~ makes a difference with (***)


Prelude> (const 1 *** const 2) undefined
(1,2)

If the pattern weren't irrefutable, the example would have failed. When does it make sense to use lazy patterns? Essentially, when you only have the single constructor for the type, e.g. tuples. Multiple equations won't work nicely with irrefutable patterns. To see this, let's examine what would happen were we to make head have an irrefutable pattern:

Example: Lazier head

head' :: [a] -> a
head' ~[]     = undefined
head' ~(x:xs) = x

The fact we're using one of these patterns tells us not to evaluate even the top level of the argument until absolutely necessary, so we don't know whether it's an empty list or a cons cell. As we're using an irrefutable pattern for the first equation, this will match, and the function will always return undefined. Exercises Why won't changing the order of the equations to head' help here? More to come...

Techniques with Lazy Evaluation This section needs a better title and is intended to be the workhorse of this chapter.

Separation of concerns without time penalty

Examples:

or = foldr (||) False
isSubstringOf x y = any (isPrefixOf x) (tails y)
take n . quicksort
take n . mergesort
prune . generate
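A sketch of the take n . quicksort idea (quicksort here is the usual naive list version, our own illustration): thanks to laziness, asking for only the first few elements never forces the recursive sorts of the parts of the list that are not reached.

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (x:xs) = quicksort [y | y <- xs, y < x]
                   ++ [x]
                   ++ quicksort [y | y <- xs, y >= x]

-- take 3 (quicksort xs) returns the 3 smallest elements without
-- fully sorting xs: the later recursive calls stay largely unevaluated.
smallest3 :: Ord a => [a] -> [a]
smallest3 = take 3 . quicksort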


The more examples, the better! What about the case of (large data -> small data) where lazy evaluation is space-hungry but doesn't take fewer reductions than eager evaluation? Mention it here? Elaborate it in Haskell/Strictness?

xs ++ ys

xs ++ ys is O(min(length xs, k)) where k is the length of the part of the result which you observe. This follows directly from the definition of (++) and laziness.

[]     ++ ys = ys                -- case 1
(x:xs) ++ ys = x : (xs ++ ys)    -- case 2

Let's try it in a specific case, completely expanding the definition:

[1,2,3] ++ ys
 = 1 : ([2,3] ++ ys)             -- by case 2
 = 1 : (2 : ([3] ++ ys))         -- by case 2
 = 1 : (2 : (3 : ([] ++ ys)))    -- by case 2
 = 1 : (2 : (3 : ys))            -- by case 1

Here, the length of the left list was 3, and it took 4 steps to completely reduce the definition of (++). As you can see, the length and content of ys actually doesn't matter at all, as it just ends up being a tail of the resulting list. You can see fairly easily that it will take length xs + 1 steps to completely expand the definition of (++) in xs ++ ys in general. However, this won't actually happen until you go about actually using those elements of the list. If only the first k elements of the list are demanded, where k <= length xs, then they will be available after only k steps, so indeed, head (xs ++ ys)

(or getting any constant number of elements from the head) will evaluate in constant time.

isSubstringOf TODO:rewrite introduction to this section / now redundant with main intro Often code reuse is far better. Here's a simple example:

Example: Laziness helps code reuse

-- From the Prelude
or = foldr (||) False
any p = or . map p

-- From Data.List
isPrefixOf []     _      = True


isPrefixOf _      []     = False
isPrefixOf (x:xs) (y:ys) = x == y && isPrefixOf xs ys

tails []           = [[]]
tails xss@(_:xs)   = xss : tails xs

-- Our function
isSubstringOf x y = any (isPrefixOf x) (tails y)

Where any, isPrefixOf and tails are the functions taken from the Data.List library. This function determines if its first parameter, x, occurs as a substring of its second, y. Read in a strict way, it forms the list of all the tails of y, then checks them all to see if any of them have x as a prefix. In a strict language, writing this function this way (relying on the already-written programs any, isPrefixOf, and tails) would be silly, because it would be far slower than it needed to be. You'd end up doing direct recursion again, or in an imperative language, a couple of nested loops. You might be able to get some use out of isPrefixOf, but you certainly wouldn't use tails. You might be able to write a usable shortcutting any, but it would be more work, since you wouldn't want to use foldr to do it.

Now, in a lazy language, all the shortcutting is done for you. You don't end up rewriting foldr to shortcut when you find a solution, or rewriting the recursion done in tails so that it will stop early again. You can reuse standard library code better. Laziness isn't just a constant-factor speed thing, it makes a qualitative impact on the code which it's reasonable to write. In fact, it's commonplace to define infinite structures, and then only use as much as is needed, rather than having to mix up the logic of constructing the data structure with code that determines whether any part is needed. Code modularity is increased, as laziness gives you more ways to chop up your code into small pieces, each of which does a simple task of generating, filtering, or otherwise manipulating data.

Why Functional Programming Matters (http://www.md.chalmers.se/~rjmh/Papers/whyfp.html) -- largely focuses on examples where laziness is crucial, and provides a strong argument for lazy evaluation being the default.

Infinite Data Structures

Examples:

fibs = 1:1:zipWith (+) fibs (tail fibs)
"rock-scissors-paper" example from Bird&Wadler
prune . generate

Infinite data structures usually tie a knot, too, but the Sci-Fi-Explanation of that is better left to the next section. One could move the next section before this one, but I think that infinite data structures are simpler than tying general knots.

Tying the Knot

More practical examples? repMin


Sci-Fi-Explanation: "You can borrow things from the future as long as you don't try to change them". Advanced: the "Blueprint"-technique. Examples: the one from the haskellwiki, the one from the mailing list.

At first a pure functional language seems to have a problem with circular data structures. Suppose I have a data type like this:

data Foo a = Foo { value :: a, next :: Foo a }

If I want to create two objects "x" and "y" where "x" contains a reference to "y" and "y" contains a reference to "x", then in a conventional language this is straightforward: create the objects and then set the relevant fields to point to each other:

-- Not Haskell code
x := new Foo;
y := new Foo;
x.value := 1;
x.next  := y;
y.value := 2;
y.next  := x;

In Haskell this kind of modification is not allowed. So instead we depend on lazy evaluation:

circularFoo :: Foo Int
circularFoo = x
  where
    x = Foo 1 y
    y = Foo 2 x
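A small usage sketch (our own illustration, assuming the corrected Foo record type above): walking the structure just alternates between the two shared nodes.

-- Following the next fields cycles between the two nodes: [1,2,1,2,1]
walk :: [Int]
walk = map value (take 5 (iterate next circularFoo))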

This depends on the fact that the "Foo" constructor is a function, and like most functions it gets evaluated lazily. Only when one of the fields is required does it get evaluated. It may help to understand what happens behind the scenes here. When a lazy value is created, for example by a call to "Foo", the compiler generates an internal data structure called a "thunk" containing the function call and arguments. When the value of the function is demanded the function is called, as you would expect. But then the thunk data structure is replaced with the return value. Thus anything else that refers to that value gets it straight away without the need to call the function. (Note that the Haskell language standard makes no mention of thunks: they are an implementation mechanism. From the mathematical point of view this is a straightforward example of mutual recursion) So when I call "circularFoo" the result "x" is actually a thunk. One of the arguments is a reference to a second thunk representing "y". This in turn has a reference back to the thunk representing "x". If I then use the value "next x" this forces the "x" thunk to be evaluated and returns me a reference to the "y" thunk. If I use the value "next $ next x" then I force the evaluation of both thunks. So now both thunks have been replaced with the actual "Foo" structures, refering to each other. Which is what we wanted. This is most often applied with constructor functions, but it isn't limited just to constructors. You can just as readily write:


x = f y
y = g x

The same logic applies.

Memoization, Sharing and Dynamic Programming

Dynamic programming with immutable arrays. DP with other finite maps. Hinze's paper "Trouble shared is Trouble halved".

Let-floating:

\x -> let z = foo x in \y -> ...
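A minimal sketch of dynamic programming with an immutable array (using Data.Array; the function name and the Fibonacci example are ours): the array's elements are thunks that refer back to the array itself, so each subproblem is computed at most once and then shared.

import Data.Array

-- Memoised Fibonacci: fibs ! i is computed once and then shared.
fibMemo :: Int -> Integer
fibMemo n = fibs ! n
  where
    fibs :: Array Int Integer
    fibs = listArray (0, n) [ f i | i <- [0..n] ]
    f 0 = 0
    f 1 = 1
    f i = fibs ! (i-1) + fibs ! (i-2)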

Conclusions about laziness

Move conclusions to the introduction?

Can make qualitative improvements to performance!
Can hurt performance in some other cases.
Makes code simpler.
Makes hard problems conceivable.
Allows for separation of concerns with regard to generating and processing data.

References

Laziness on the Haskell wiki (http://www.haskell.org/haskellwiki/Performance/Laziness)
Lazy evaluation tutorial on the Haskell wiki (http://www.haskell.org/haskellwiki/Haskell/Lazy_Evaluation)

Strictness

Difference between strict and lazy evaluation

Why laziness can be problematic

Lazy evaluation often involves objects called thunks. A thunk is a placeholder object, specifying not the data itself, but rather how to compute that data. An entity can be replaced with a thunk to compute that entity. When an entity is copied, whether or not it is a thunk doesn't matter - it's copied as is (on most implementations, a pointer to the data is created). When an entity is evaluated, it is first checked whether it is a thunk; if it's a thunk, then it is executed, otherwise the actual data is returned. It is by the magic of thunks that laziness can be implemented. Generally, in the implementation the thunk is really just a pointer to a piece of (usually static) code, plus another pointer to the data the code should work on. If the entity computed by the thunk is larger than the pointer to the code and the associated data, then a thunk wins out in memory usage. But if the entity computed by the thunk is smaller, the thunk ends up using more memory.


As an example, consider an infinite length list generated using the expression iterate (+ 1) 0. The size of the list is infinite, but the code is just an add instruction, and the two pieces of data, 1 and 0, are just two Integers. In this case, the thunk representing that list wins over the actual list. However, as another example consider the number generated using the expression 4 * 13 + 2. The value of that number is 54, but in thunk form it is a multiply, an add, and three numbers. In such a case, the thunk loses in terms of memory. Often, the second case above will consume so much memory that it will consume the entire heap and force the garbage collector. This can slow down the execution of the program significantly. And that, in fact, is the reason why laziness can be problematic

Strictness annotations

seq

DeepSeq
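A few small sketches of these mechanisms (standard Haskell, but the local names are ours, and deepseq comes from the separate deepseq package, an assumption on our part):

import Control.DeepSeq (deepseq)   -- from the deepseq package

-- A strict constructor field: the first Int is forced when a Pair is built.
data Pair = Pair !Int Int

-- seq and ($!) force a value to weak head normal form before continuing.
forceFirst :: Int -> Int -> Int
forceFirst x y = x `seq` (x + y)

applyStrict :: (Int -> b) -> Int -> b
applyStrict f x = f $! x

-- deepseq forces a value all the way to normal form.
forceList :: [Int] -> ()
forceList xs = xs `deepseq` ()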

References

Strictness on the Haskell wiki (http://www.haskell.org/haskellwiki/Performance/Strictness)

Algorithm complexity

Complexity Theory is the study of how long a program will take to run, depending on the size of its input. There are many good introductory books to complexity theory and the basics are explained in any good algorithms book. I'll keep the discussion here to a minimum. The idea is to say how well a program scales with more data. If you have a program that runs quickly on very small amounts of data but chokes on huge amounts of data, it's not very useful (unless you know you'll only be working with small amounts of data, of course). Consider the following Haskell function to return the sum of the elements in a list:

sum []     = 0
sum (x:xs) = x + sum xs

How long does it take this function to complete? That's a very difficult question; it would depend on all sorts of things: your processor speed, your amount of memory, the exact way in which the addition is carried out, the length of the list, how many other programs are running on your computer, and so on. This is far too much to deal with, so we need to invent a simpler model. The model we use is sort of an arbitrary "machine step." So the question is "how many machine steps will it take for this program to complete?" In this case, it only depends on the length of the input list.


If the input list is of length 0, the function will take either 0 or 1 or 2 or some very small number of machine steps, depending exactly on how you count them (perhaps 1 step to do the pattern matching and 1 more to return the value 0). What if the list is of length 1? Well, it would take however much time the list of length 0 would take, plus a few more steps for doing the first (and only) element. If the input list is of length n, it will take however many steps an empty list would take (call this value y) and then, for each element, it would take a certain number of steps to do the addition and the recursive call (call this number x). Then, the total time this function will take is nx + y since it needs to do those additions n many times. These x and y values are called constant values, since they are independent of n, and actually dependent only on exactly how we define a machine step, so we really don't want to consider them all that important. Therefore, we say that the complexity of this sum function is O(n) (read "order n"). Basically, saying something is O(n) means that for some constant factors x and y, the function takes nx + y machine steps to complete.

Consider the following sorting algorithm for lists (commonly called "insertion sort"):

sort []     = []
sort [x]    = [x]
sort (x:xs) = insert (sort xs)
  where insert []     = [x]
        insert (y:ys) | x <= y    = x : y : ys
                      | otherwise = y : insert ys

The way this algorithm works is as follows: if we want to sort an empty list or a list of just one element, we return them as they are, as they are already sorted. Otherwise, we have a list of the form x:xs. In this case, we sort xs and then want to insert x in the appropriate location. That's what the insert function does. It traverses the now-sorted tail and inserts x wherever it naturally fits. Let's analyze how long this function takes to complete. Suppose it takes f(n) steps to sort a list of length n. Then, in order to sort a list of n-many elements, we first have to sort the tail of the list, which takes f(n - 1) time. Then, we have to insert x into this new list. If x has to go at the end, this will take O(n) steps. Putting all of this together, we see that we have to do O(n) work n many times (f(n) = f(n - 1) + O(n), and n + (n - 1) + ... + 1 is roughly n^2 / 2), which means that the entire complexity of this sorting algorithm is O(n^2). Here, the n squared is not a constant value, so we cannot throw it out.

What does this mean? Simply that for really long lists, the sum function won't take very long, but that the sort function will take quite some time. Of course there are algorithms that run much more slowly than simply O(n^2) and there are ones that run more quickly than O(n). (Also note that an O(n^2) algorithm may actually be much faster than an O(n) algorithm in practice, if it takes much less time to perform a single step of the O(n^2) algorithm.)

Consider the random access functions for lists and arrays. In the worst case, accessing an arbitrary element in a list of length n will take O(n) time (think about accessing the last element). However, with arrays you can access any element immediately, which is said to be in constant time, or O(1), which is basically as fast as any algorithm can go. There's much more in complexity theory than this, but this should be enough to allow you to understand all the discussions in this tutorial. Just keep in mind that O(1) is faster than O(n), which is faster than O(n^2), etc.
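To make the contrast concrete, here is a small sketch (not from the original text; the function names are made up for illustration) comparing list indexing with Data.Array indexing:

import Data.Array

-- (!!) walks the list from the front, so this is O(n) in the index.
nthOfList :: [a] -> Int -> a
nthOfList xs n = xs !! n

-- An immutable array supports O(1) indexing with (!).
nthOfArray :: Array Int a -> Int -> a
nthOfArray arr n = arr ! n

example :: Int
example = nthOfArray (listArray (0, 4) [10, 20, 30, 40, 50]) 2   -- yields 30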


Optimising

Profiling

Concurrency

Concurrency
If you need concurrency in Haskell, you should be able to simply consult the docs for Control.Concurrent.* and Control.Monad.STM.

Example
Example: Downloading files in parallel

import Control.Concurrent (forkIO)

type URL = String   -- placeholder so the sketch stands alone

downloadFile :: URL -> IO ()
downloadFile = undefined

downloadFiles :: [URL] -> IO ()
downloadFiles = mapM_ (forkIO . downloadFile)
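The snippet above only uses forkIO; as an additional hedged sketch (not from the original text), Control.Concurrent.STM from the stm package provides transactional variables for safe communication between threads:

import Control.Concurrent
import Control.Concurrent.STM

-- Ten forked threads each bump a shared counter inside an atomic transaction.
main :: IO ()
main = do
    counter <- atomically (newTVar (0 :: Int))
    let bump = atomically $ do n <- readTVar counter
                               writeTVar counter (n + 1)
    mapM_ (const (forkIO bump)) [1 .. 10 :: Int]
    threadDelay 100000            -- crude: give the forked threads time to run
    total <- atomically (readTVar counter)
    print total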

Choosing data structures Haskell/Choosing data structures

Libraries Reference

Hierarchical libraries
Haskell has a rich and growing set of function libraries. They fall into several groups: The Standard Prelude (often referred to as just "the Prelude") is defined in the Haskell 98 standard and imported automatically into every module you write. This defines standard types such as strings, lists and numbers and the basic functions on them, such as arithmetic, map and foldr.


The Standard Libraries are also defined in the Haskell 98 standard, but you have to import them when you need them. The reference manuals for these libraries are at http://www.haskell.org/onlinereport/. Since 1998 the Standard Libraries have been gradually extended, and the resulting de-facto standard is known as the Base libraries. The same set is available for both HUGS and GHC. Other libraries may be included with your compiler, or can be installed using the Cabal mechanism.

When Haskell 98 was standardised, modules were given a flat namespace. This has proved inadequate and a hierarchical namespace has been added by allowing dots in module names. For backward compatibility the standard libraries can still be accessed by their non-hierarchical names, so the modules List and Data.List both refer to the standard list library. For details of how to import libraries into your program, see Modules and libraries. For an explanation of the Cabal system for packaging Haskell software see Distributing your software with the Cabal.
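As a tiny illustration (not from the original text), both of these imports bring in the standard list functions; the hierarchical name is the preferred one, the flat one is kept only for backward compatibility:

-- Hierarchical name (preferred):
import Data.List (nub, sortBy)

-- The same module is still reachable under its old flat name:
-- import List (nub, sortBy)

uniqueSorted :: [Int] -> [Int]
uniqueSorted = nub . sortBy compare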

Haddock Documentation
Library reference documentation is generally produced using the Haddock tool. The libraries shipped with GHC are documented using this mechanism. You can view the documentation at http://www.haskell.org/ghc/docs/latest/html/libraries/index.html, and if you have installed GHC then there should also be a local copy. Haddock produces hyperlinked documentation, so every time you see a function, type or class name you can click on it to get to the definition. The sheer wealth of libraries available can be intimidating, so this tutorial will point out the highlights.

One thing worth noting with Haddock is that types and classes are cross-referenced by instance. So for example in the Data.Maybe (http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Maybe.html) library the Maybe data type is listed as an instance of Ord:

Ord a => Ord (Maybe a)

This means that if you declare a type Foo is an instance of Ord then the type Maybe Foo will automatically be an instance of Ord as well. If you click on the word Ord in the document then you will be taken to the definition of the Ord class and its (very long) list of instances. The instance for Maybe will be down there as well.
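As a small sketch of what that cross-referencing means in practice (Foo here is a made-up example type, not from the text):

-- Because Foo derives Ord, the "Ord a => Ord (Maybe a)" instance gives us
-- an ordering on Maybe Foo for free.
data Foo = Foo Int deriving (Eq, Ord, Show)

example :: Bool
example = Just (Foo 1) < Just (Foo 2)   -- True (and Nothing < Just x, too)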

Hierarchical libraries/Lists
The List datatype is the fundamental data structure in Haskell — this is the basic building-block of data storage and manipulation. In computer science terms it is a singly-linked list. In the hierarchical library system the List module is stored in Data.List; but this module only contains utility functions. The definition of list itself is integral to the Haskell language.

Theory
A singly-linked list is a sequence of values in a defined order. The list can only be traversed in one direction (i.e., you cannot move back and forth through the list like tape in a cassette machine). The list of the first 5 positive integers is written as


[ 1, 2, 3, 4, 5 ]

We can move through this list, examining and changing values, from left to right, but not in the other direction. This means that the list

[ 5, 4, 3, 2, 1 ]

is not just a trivial change in perspective from the previous list, but the result of significant computation (O(n) in the length of the list).

Definition
The polymorphic list datatype can be defined with the following recursive definition:

data [a] = [] | a : [a]

The "base case" for this definition is [], the empty list. In order to put something into this list, we use the (:) constructor emptyList = [] oneElem = 1:[]

The (:) (pronounced cons) is right-associative, so that creating multi-element lists can be done like this:

manyElems = 1:2:3:4:5:[]

or even just

manyElems' = [1,2,3,4,5]

Basic list usage

Prepending
It's easy to hard-code lists without cons, but run-time list creation will use cons. For example, to push an argument onto a simulated stack, we would use:

push :: Arg -> [Arg] -> [Arg]
push arg stack = arg:stack

Pattern-matching
If we want to examine the top of the stack, we would typically use a peek function. We can try pattern-matching for this.


peek :: [Arg] -> Maybe Arg
peek [] = Nothing
peek (a:as) = Just a

The a before the cons in the pattern matches the head of the list. The as matches the tail of the list. Since we don't actually want the tail (and it's not referenced anywhere else in the code), we can tell the compiler this explicitly, by using a wild-card match, in the form of an underscore:

peek (a:_) = Just a
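In the same spirit, a companion pop function (a sketch, not from the original; it reuses the hypothetical Arg type from the push example) can return both the head and the remaining stack:

-- pop returns the top of the stack and the rest of it, if the stack is non-empty.
pop :: [Arg] -> Maybe (Arg, [Arg])
pop []       = Nothing
pop (a:rest) = Just (a, rest)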

List utilities
FIXME: is this not covered in the chapter on list manipulation?
Maps
Folds, unfolds and scans
Length, head, tail etc.

Hierarchical libraries/Randoms

Random examples
Here are a handful of uses of random numbers by example.

Example: Ten random numbers

import System.Random

main = do
  gen <- getStdGen
  let ns = randoms gen :: [Int]   -- annotate the element type so it isn't ambiguous
  print $ take 10 ns

As you can see, creating a random number generator requires the IO monad, but getting a random number out of a generator is pure and functional. You can write the large majority of your code without IO and then feed a generator into it from the outside.

Example: Unsorting a list (imperfectly)

import Data.List ( sortBy )
import Data.Ord ( comparing )
import System.Random ( Random, RandomGen, randoms, getStdGen )


main :: IO ()
main = do
  gen <- getStdGen
  interact $ unlines . unsort gen . lines

unsort :: RandomGen g => g -> [x] -> [x]
unsort g es = map snd $ sortBy (comparing fst) $ zip rs es
  where rs = randoms g :: [Integer]

There's more to random number generation than randoms. You can, for example, use random (sans 's') to generate a single random value, and randomR to generate a value within a given low-to-high range. See below for more ideas.

The Standard Random Number Generator
The Haskell standard random number functions and types are defined in the Random module (or System.Random if you use hierarchical modules). The definition is at http://www.haskell.org/onlinereport/random.html, but it's a bit tricky to follow because it uses classes to make itself more general. From the standard:

---------------- The RandomGen class ------------------------
class RandomGen g where
  genRange :: g -> (Int, Int)
  next     :: g -> (Int, g)
  split    :: g -> (g, g)

---------------- A standard instance of RandomGen -----------
data StdGen = ...   -- Abstract

OK. This basically introduces StdGen, the standard random number generator "object". It's an instance of the RandomGen class just in case anyone wants to implement a different random number generator. If you have r :: StdGen then you can say:

(x, r2) = next r

This gives you a random Int x and a new StdGen r2. The 'next' function is defined in the RandomGen class, and you can apply it to something of type StdGen because StdGen is an instance of the RandomGen class, as below. From the Standard:

instance RandomGen StdGen where ...
instance Read StdGen where ...
instance Show StdGen where ...


This also says that you can convert a StdGen to and from a string, which is there as a handy way to save the state of the generator. (The dots are not Haskell syntax. They simply say that the Standard does not define an implementation of these instances.) From the Standard:

mkStdGen :: Int -> StdGen

This is the factory function for StdGen objects. Put in a seed, get out a generator. The reason that the 'next' function also returns a new random number generator is that Haskell is a functional language, so no side effects are allowed. In most languages the random number generator routine has the hidden side effect of updating the state of the generator ready for the next call. Haskell can't do that. So if you want to generate three random numbers you need to say something like:

let (x1, r2) = next r
    (x2, r3) = next r2
    (x3, r4) = next r3

The other thing is that the random values (x1, x2, x3) are random integers. To get something in the range, say, (0,999) you would have to take the modulus yourself, which is silly. There ought to be a library routine built on this, and indeed there is. From the Standard:

---------------- The Random class ---------------------------
class Random a where
  randomR   :: RandomGen g => (a, a) -> g -> (a, g)
  random    :: RandomGen g => g -> (a, g)
  randomRs  :: RandomGen g => (a, a) -> g -> [a]
  randoms   :: RandomGen g => g -> [a]
  randomRIO :: (a, a) -> IO a
  randomIO  :: IO a

Remember that StdGen is the only standard instance of RandomGen (unless you roll your own random number generator). So you can substitute StdGen for 'g' in the types above and get this:

randomR  :: (a, a) -> StdGen -> (a, StdGen)
random   :: StdGen -> (a, StdGen)
randomRs :: (a, a) -> StdGen -> [a]
randoms  :: StdGen -> [a]

But remember that this is all inside *another* class declaration, "Random". So what this says is that any instance of Random can use these functions. The instances of Random in the Standard are:

instance Random Integer where ...
instance Random Float   where ...
instance Random Double  where ...
instance Random Bool    where ...
instance Random Char    where ...

So for any of these types you can get a random range. You can get a random integer with:

(x1, r2) = randomR (0,999) r

And you can get a random upper case character with:

(c2, r3) = randomR ('A', 'Z') r2

You can even get a random bit with:

(b3, r4) = randomR (False, True) r3

So far so good, but threading the random number state through your entire program like this is painful, error prone, and generally destroys the nice clean functional properties of your program. One partial solution is the "split" function in the RandomGen class. It takes one generator and gives you two generators back. This lets you say something like this:

(r1, r2) = split r
x = foo r1

In this case we are passing r1 down into function foo, which does something random with it and returns a result "x", and we can then take "r2" as the random number generator for whatever comes next. Without "split" we would have to write

(x, r2) = foo r1

But even this is often too clumsy, so you can do it the quick and dirty way by putting the whole thing in the IO monad. This gives you a standard global random number generator just like any other language. But because it's just like any other language it has to do it in the IO monad. From the Standard:

---------------- The global random generator ----------------
newStdGen    :: IO StdGen
setStdGen    :: StdGen -> IO ()
getStdGen    :: IO StdGen
getStdRandom :: (StdGen -> (a, StdGen)) -> IO a

So you could write:


foo :: IO Int
foo = do
  r1 <- getStdGen
  let (x, r2) = randomR (0,999) r1
  setStdGen r2
  return x

This gets the global generator, uses it, and then updates it (otherwise every random number will be the same). But having to get and update the global generator every time you use it is a pain, so it's more common to use getStdRandom. The argument to this is a function. Compare the type of that function to that of 'random' and 'randomR'. They both fit in rather well. To get a random integer in the IO monad you can say:

x <- getStdRandom $ randomR (1,999)

The 'randomR (1,999)' has type "StdGen -> (Int, StdGen)", so it fits straight into the argument required by getStdRandom.

Using QuickCheck to Generate Random Data
Only being able to do random numbers in a nice straightforward way inside the IO monad is a bit of a pain. You find that some function deep inside your code needs a random number, and suddenly you have to rewrite half your program as IO actions instead of nice pure functions, or else have an StdGen parameter tramp its way down there through all the higher level functions. Something a bit purer is needed. If you have read anything about Monads then you might have recognized the pattern I gave above:

let (x1, r2) = next r
    (x2, r3) = next r2
    (x3, r4) = next r3

The job of a monad is to abstract out this pattern, leaving the programmer to write something like:

do -- Not real Haskell
   x1 <- random
   x2 <- random
   x3 <- random

Of course you can do this in the IO monad, but it would be better if random numbers had their own little monad that specialised in random computations. And it just so happens that such a monad exists. It lives in the Debug.QuickCheck library (imported as Test.QuickCheck in current versions, as used below), and it's called "Gen". And it does lots of very useful things with random numbers. The reason that "Gen" lives in Debug.QuickCheck is historical: that is where it was invented. The purpose of QuickCheck is to generate random unit tests to verify properties of your code. (Incidentally it's very good at this, and most Haskell developers use it for testing.) See the QuickCheck (http://www.cs.chalmers.se/~rjmh/QuickCheck) homepage for more details. This tutorial will concentrate on using the "Gen" monad for generating random data.


Most Haskell compilers (including GHC) bundle QuickCheck in with their standard libraries, so you probably won't need to install it separately. Just say

import Test.QuickCheck

in your source file. The "Gen" monad can be thought of as a monad of random computations. As well as generating random numbers it provides a library of functions that build up complicated values out of simple ones. So let's start with a routine to return three random integers between 0 and 999:

randomTriple :: Gen (Integer, Integer, Integer)
randomTriple = do
  x1 <- choose (0,999)
  x2 <- choose (0,999)
  x3 <- choose (0,999)
  return (x1, x2, x3)

"choose" is one of the functions from QuickCheck. Its the equivalent to randomR. The type of "choose" is choose :: Random a => (a, a) -> Gen a

In other words, for any type "a" which is an instance of "Random" (see above), "choose" will map a range into a generator. Once you have a "Gen" action you have to execute it. The "generate" function executes an action and returns the random result. The type is:

generate :: Int -> StdGen -> Gen a -> a

The three arguments are:

1. The "size" of the result. This isn't used in the example above, but if you were generating a data structure with a variable number of elements (like a list) then this parameter lets you pass some notion of the expected size into the generator. We'll see an example later.
2. A random number generator.
3. The generator action.

So for example:

let triple = generate 1 (mkStdGen 1) randomTriple

will generate three arbitrary numbers. But note that because the same seed value is used the numbers will always be the same (which is why I said "arbitrary", not "random"). If you want different numbers then you have to use a different StdGen argument.
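As a hedged sketch built on the code above (it assumes the generate function with the signature quoted earlier, plus newStdGen from System.Random), you can seed from the global generator so the triple varies from run to run:

import System.Random (newStdGen)

randomTripleIO :: IO (Integer, Integer, Integer)
randomTripleIO = do
    gen <- newStdGen                         -- a fresh StdGen each call
    return (generate 1 gen randomTriple)     -- the size argument is unused here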


A common pattern in most programming languages is to use a random number generator to choose between two courses of action:

-- Not Haskell code
r := random (0,1)
if r == 1 then foo else bar

QuickCheck provides a more declarative way of doing the same thing. If "foo" and "bar" are both generators returning the same type then you can say:

oneof [foo, bar]

This has an equal chance of returning either "foo" or "bar". If you wanted different odds, say that there was a 30% chance of "foo" and a 70% chance of "bar", then you could say

frequency [ (30, foo), (70, bar) ]

"oneof" takes a simple list of Gen actions and selects one of them at random. "frequency" does something similar, but the probability of each item is given by the associated weighting. oneof :: [Gen a] -> Gen a frequency :: [(Int, Gen a)] -> Gen a

General Practices

Applications
So you want to build a simple application -- a piece of standalone software -- with Haskell.

The Main module
The basic requirement behind this is to have a module Main with a main function main:

-- thingamie.hs
module Main where

main = do putStrLn "Bonjour, world!"

Using GHC, you may compile and run this file as follows:


$ ghc --make -o bonjourWorld thingamie.hs
$ ./bonjourWorld
Bonjour, world!

Voilà! You now have a standalone application built in Haskell.

Other modules?
Invariably your program will grow to be complicated enough that you want to split it across different files. Here is an example of an application which uses two modules.

-- hello.hs
module Hello where

hello = "Bonjour, world!"

-- thingamie.hs
module Main where

import Hello

main = do putStrLn hello

We can compile this fancy new program in the same way. Note that the --make flag to ghc is rather handy because it tells ghc to automatically detect dependencies in the files you are compiling. That is, since thingamie.hs imports a module 'Hello', ghc will search the haskell files in the current directory for files that implement Hello and also compile that. If Hello depends on yet other modules, ghc will automatically detect those dependencies as well.

$ ghc --make -o bonjourWorld thingamie.hs
$ ./bonjourWorld
Bonjour, world!

If you want to search in other places for source files, including a nested structure of files and directories, you can add the starting point for the dependency search with the -i flag. This flag takes multiple, colon-separated directory names as its argument. As a contrived example, the following program has three files all stored in a src/ directory. The directory structure looks like:

HaskellProgram/
  src/
    Main.hs
    GUI/
      Interface.hs
    Functions/
      Mathematics.hs


The Main module imports its dependencies by searching a path analogous to the module name — so that import GUI.Interface would search for GUI/Interface (with the appropriate file extension). To compile this program from within the HaskellProgram directory, invoke ghc with:

$ ghc --make -isrc -o sillyprog Main.hs

Debugging
Haskell/Debugging/

Testing

Quickcheck
Consider the following function:

getList = find 5 where
     find 0 = return []
     find n = do
       ch <- getChar
       if ch `elem` ['a'..'e'] then do
             tl <- find (n-1)
             return (ch : tl) else find n

How would we effectively test this function in Haskell? The solution we turn to is refactoring and QuickCheck.

Keeping things pure
The reason your getList is hard to test is that the side effecting monadic code is mixed in with the pure computation, making it difficult to test without moving entirely into a "black box" IO-based testing model. Such a mixture is not good for reasoning about code. Let's untangle that, and then test the referentially transparent parts simply with QuickCheck. We can take advantage of lazy IO firstly, to avoid all the unpleasant low-level IO handling. So the first step is to factor out the IO part of the function into a thin "skin" layer:

-- A thin monadic skin layer
getList :: IO [Char]
getList = fmap take5 getContents

-- The actual worker
take5 :: [Char] -> [Char]
take5 = take 5 . filter (`elem` ['a'..'e'])

Testing with QuickCheck


Now we can test the 'guts' of the algorithm, the take5 function, in isolation. Let's use QuickCheck. First we need an Arbitrary instance for the Char type -- this takes care of generating random Chars for us to test with. I'll restrict it to a range of nice chars just for simplicity:

import Data.Char
import Test.QuickCheck

instance Arbitrary Char where
    arbitrary     = choose ('\32', '\128')
    coarbitrary c = variant (ord c `rem` 4)

Let's fire up GHCi (or Hugs) and try some generic properties (it's nice that we can use the QuickCheck testing framework directly from the Haskell REPL). An easy one first, a [Char] is equal to itself:

*A> quickCheck ((\s -> s == s) :: [Char] -> Bool)
OK, passed 100 tests.

What just happened? QuickCheck generated 100 random [Char] values, and applied our property, checking the result was True for all cases. QuickCheck generated the test sets for us! A more interesting property now: reversing twice is the identity:

*A> quickCheck ((\s -> (reverse.reverse) s == s) :: [Char] -> Bool)
OK, passed 100 tests.

Great!

Testing take5
The first step to testing with QuickCheck is to work out some properties that are true of the function, for all inputs. That is, we need to find invariants. A simple invariant might be: for every input s, length (take5 s) == 5. So let's write that as a QuickCheck property:

\s -> length (take5 s) == 5

Which we can then run in QuickCheck as:

*A> quickCheck (\s -> length (take5 s) == 5)
Falsifiable, after 0 tests:
""

Ah! QuickCheck caught us out. If the input string contains less than 5 filterable characters, the resulting string will be less than 5 characters long. So let's weaken the property a bit: length (take5 s) <= 5. That is, take5 returns a string of at most 5 characters. Let's test this:


*A> quickCheck (\s -> length (take5 s) <= 5)
OK, passed 100 tests.

Good!

Another property
Another thing to check would be that the correct characters are returned. That is, for all returned characters, those characters are members of the set ['a','b','c','d','e']. We can specify that as: all (`elem` ['a'..'e']) (take5 s). And in QuickCheck:

*A> quickCheck (\s -> all (`elem` ['a'..'e']) (take5 s))
OK, passed 100 tests.

Excellent. So we can have some confidence that the function neither returns strings that are too long, nor includes invalid characters.

Coverage
One issue with the default QuickCheck configuration, when testing [Char], is that the standard 100 tests isn't enough for our situation. In fact, QuickCheck never generates a String greater than 5 characters long, when using the supplied Arbitrary instance for Char! We can confirm this:

*A> quickCheck (\s -> length (take5 s) < 5)
OK, passed 100 tests.

QuickCheck wastes its time generating different Chars, when what we really need is longer strings. One solution to this is to modify QuickCheck's default configuration to test deeper:

deepCheck p = check (defaultConfig { configMaxTest = 10000}) p

This instructs the system to find at least 10000 test cases before concluding that all is well. Let's check that it is generating longer strings:

*A> deepCheck (\s -> length (take5 s) < 5)
Falsifiable, after 125 tests:
";:iD^*NNi~Y\\RegMob\DEL@krsx/=dcf7kub|EQi\DELD*"

We can check the test data QuickCheck is generating using the 'verboseCheck' hook. Here, testing on integer lists:

*A> verboseCheck (\s -> length s < 5)
0: []
1: [0]
2: []
3: []
4: []
5: [1,2,1,1]
6: [2]
7: [-2,4,-4,0,0]
Falsifiable, after 7 tests:
[-2,4,-4,0,0]

More information on QuickCheck
http://haskell.org/haskellwiki/Introduction_to_QuickCheck
http://haskell.org/haskellwiki/QuickCheck_as_a_test_set_generator

HUnit
Sometimes it is easier to give an example for a test than to define one from a general rule. HUnit provides a unit testing framework which helps you to do just this. You could also abuse QuickCheck by providing a general rule which just so happens to fit your example; but it's probably less work in that case to just use HUnit. TODO: give an example of HUnit test, and a small tour of it. More details for working with HUnit can be found in its user's guide (http://hunit.sourceforge.net/HUnit-1.0/Guide.html) .
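Until that example is written, here is a hedged sketch of what an HUnit test might look like (it assumes HUnit's Test.HUnit module and reuses the take5 function from the QuickCheck section above):

import Test.HUnit

-- The worker from the QuickCheck example, repeated so the sketch stands alone.
take5 :: [Char] -> [Char]
take5 = take 5 . filter (`elem` ['a'..'e'])

-- One concrete example case rather than a general property.
testTake5 :: Test
testTake5 = TestCase (assertEqual "take5 keeps only the first five a..e chars"
                                  "abcea"
                                  (take5 "abxcexyafg"))

main :: IO ()
main = do
    _ <- runTestTT (TestList [testTake5])
    return ()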

At least part of this page was imported from the Haskell wiki article Introduction to QuickCheck (http://www.haskell.org/haskellwiki/Introduction_to_QuickCheck) , in accordance to its Simple Permissive License. If you wish to modify this page and if your changes will also be useful on that wiki, you might consider modifying that source page instead of this one, as changes from that page may propagate here, but not the other way around. Alternately, you can explicitly dual license your contributions under the Simple Permissive License.

Packaging
A guide to the best practice for creating a new Haskell project or program.

Recommended tools
Almost all new Haskell projects use the following tools. Each is intrinsically useful, but using a set of common tools also benefits everyone by increasing productivity, and you're more likely to get patches.


Revision control
Use darcs (http://darcs.net) unless you have a specific reason not to. It's much more powerful than most competing systems, it's written in Haskell, and it's the standard for Haskell developers. See the wikibook Understanding darcs to get started.

Build system
Use Cabal (http://haskell.org/cabal) . You should read at least the start of section 2 of the Cabal User's Guide (http://www.haskell.org/ghc/docs/latest/html/Cabal/index.html) .

Documentation
For libraries, use Haddock (http://haskell.org/haddock) . We recommend using recent versions of haddock (0.8 or above).

Testing
Pure code can be tested using QuickCheck (http://www.md.chalmers.se/~rjmh/QuickCheck/) or SmallCheck (http://www.mail-archive.com/[email protected]/msg19215.html) , impure code with HUnit (http://hunit.sourceforge.net/) . To get started, try Haskell/Testing. For a slightly more advanced introduction, Simple Unit Testing in Haskell (http://blog.codersbase.com/2006/09/01/simple-unit-testing-in-haskell/) is a blog article about creating a testing framework for QuickCheck using some Template Haskell.

Structure of a simple project
The basic structure of a new Haskell project can be adopted from HNop (http://semantic.org/hnop/) , the minimal Haskell project. It consists of the following files, for the mythical project "haq":

Haq.hs    -- the main haskell source file
haq.cabal -- the cabal build description
Setup.hs  -- build script itself
_darcs    -- revision control
README    -- info
LICENSE   -- license

You can of course elaborate on this, with subdirectories and multiple modules. Here is a transcript on how you'd create a minimal darcs-using and cabalised Haskell project, for the cool new Haskell program "haq", build it, install it and release. The new tool 'mkcabal' automates all this for you, but it's important that you understand all the parts first. We will now walk through the creation of the infrastructure for a simple Haskell executable. Advice for libraries follows after.

Create a directory
Create somewhere for the source:


$ mkdir haq
$ cd haq

Write some Haskell source
Write your program:

$ cat > Haq.hs
--
-- Copyright (c) 2006 Don Stewart - http://www.cse.unsw.edu.au/~dons
-- GPL version 2 or later (see http://www.gnu.org/copyleft/gpl.html)
--
import System.Environment

-- | 'main' runs the main program
main :: IO ()
main = getArgs >>= print . haqify . head

haqify s = "Haq! " ++ s

Stick it in darcs
Place the source under revision control:

$ darcs init
$ darcs add Haq.hs
$ darcs record
addfile ./Haq.hs
Shall I record this change? (1/?) [ynWsfqadjkc], or ? for help: y
hunk ./Haq.hs 1
+--
+-- Copyright (c) 2006 Don Stewart - http://www.cse.unsw.edu.au/~dons
+-- GPL version 2 or later (see http://www.gnu.org/copyleft/gpl.html)
+--
+import System.Environment
+
+-- | 'main' runs the main program
+main :: IO ()
+main = getArgs >>= print . haqify . head
+
+haqify s = "Haq! " ++ s
Shall I record this change? (2/?) [ynWsfqadjkc], or ? for help: y
What is the patch name? Import haq source
Do you want to add a long comment? [yn]n
Finished recording patch 'Import haq source'

And we can see that darcs is now running the show:

$ ls
Haq.hs  _darcs

Add a build system
Create a .cabal file describing how to build your project:


$ cat > haq.cabal
Name:           haq
Version:        0.0
Description:    Super cool mega lambdas
License:        GPL
License-file:   LICENSE
Author:         Don Stewart
Maintainer:     [email protected]
Build-Depends:  base

Executable:     haq
Main-is:        Haq.hs
ghc-options:    -O

(If your package uses other packages, e.g. haskell98, you'll need to add them to the Build-Depends: field.) Add a Setup.lhs that will actually do the building:

$ cat > Setup.lhs
#! /usr/bin/env runhaskell
> import Distribution.Simple
> main = defaultMain

Cabal allows either Setup.hs or Setup.lhs, but we recommend writing the setup file this way so that it can be executed directly by Unix shells. Record your changes:

$ darcs add haq.cabal Setup.lhs
$ darcs record --all
What is the patch name? Add a build system
Do you want to add a long comment? [yn]n
Finished recording patch 'Add a build system'

Build your project
Now build it!

$ runhaskell Setup.lhs configure --prefix=$HOME
$ runhaskell Setup.lhs build
$ runhaskell Setup.lhs install

Run it
And now you can run your cool project:

$ haq me
"Haq! me"

You can also run it in-place, avoiding the install phase:


$ dist/build/haq/haq you
"Haq! you"

Build some haddock documentation
Generate some API documentation into dist/doc/*

$ runhaskell Setup.lhs haddock

which generates files in dist/doc/ including:

$ w3m -dump dist/doc/html/haq/Main.html
haq Contents Index
Main
Synopsis
main :: IO ()
Documentation
main :: IO ()
main runs the main program
Produced by Haddock version 0.7

No output? Make sure you have actually installed haddock. It is a separate program, not something that comes with the Haskell compiler, like Cabal.

Add some automated testing: QuickCheck
We'll use QuickCheck to specify a simple property of our Haq.hs code. Create a tests module, Tests.hs, with some QuickCheck boilerplate:

$ cat > Tests.hs
import Char
import List
import Test.QuickCheck
import Text.Printf

main = mapM_ (\(s,a) -> printf "%-25s: " s >> a) tests

instance Arbitrary Char where
    arbitrary     = choose ('\0', '\128')
    coarbitrary c = variant (ord c `rem` 4)

Now let's write a simple property:

$ cat >> Tests.hs
-- reversing twice a finite list, is the same as identity
prop_reversereverse s = (reverse . reverse) s == id s
    where _ = s :: [Int]


-- and add this to the tests list
tests = [("reverse.reverse/id", test prop_reversereverse)]

We can now run this test, and have QuickCheck generate the test data:

$ runhaskell Tests.hs
reverse.reverse/id       : OK, passed 100 tests.

Let's add a test for the 'haqify' function:

-- Dropping the "Haq! " string is the same as identity
prop_haq s = drop (length "Haq! ") (haqify s) == id s
    where haqify s = "Haq! " ++ s

tests = [("reverse.reverse/id", test prop_reversereverse)
        ,("drop.haq/id",        test prop_haq)]

and let's test that:

$ runhaskell Tests.hs
reverse.reverse/id       : OK, passed 100 tests.
drop.haq/id              : OK, passed 100 tests.

Great!

Running the test suite from darcs
We can arrange for darcs to run the test suite on every commit:

$ darcs setpref test "runhaskell Tests.hs"
Changing value of test from '' to 'runhaskell Tests.hs'

will run the full set of QuickChecks. (If your test requires it you may need to ensure other things are built too, eg: darcs setpref test "alex Tokens.x;happy Grammar.y;runhaskell Tests.hs"). Let's commit a new patch:

$ darcs add Tests.hs
$ darcs record --all
What is the patch name? Add testsuite
Do you want to add a long comment? [yn]n
Running test...
reverse.reverse/id       : OK, passed 100 tests.
drop.haq/id              : OK, passed 100 tests.
Test ran successfully.
Looks like a good patch.
Finished recording patch 'Add testsuite'

Excellent, now patches must pass the test suite before they can be committed.

Tag the stable version, create a tarball, and sell it!


Tag the stable version:

$ darcs tag
What is the version name? 0.0
Finished tagging patch 'TAG 0.0'

Tarballs via Cabal
Since the code is cabalised, we can create a tarball with Cabal directly:

$ runhaskell Setup.lhs sdist
Building source dist for haq-0.0...
Source tarball created: dist/haq-0.0.tar.gz

This has the advantage that Cabal will do a bit more checking, and ensure that the tarball has the structure expected by HackageDB. It packages up the files needed to build the project; to include other files (such as Tests.hs in the above example), we need to add:

extra-source-files: Tests.hs

to the .cabal file to have everything included.

Tarballs via darcs
Alternatively, you can use darcs:

$ darcs dist -d haq-0.0
Created dist as haq-0.0.tar.gz

And you're all set up!

Summary
The following files were created:

$ ls
Haq.hs  Setup.lhs  Tests.hs  _darcs  dist  haq-0.0.tar.gz  haq.cabal

Libraries
The process for creating a Haskell library is almost identical. The differences are as follows, for the hypothetical "ltree" library:

Hierarchical source
The source should live under a directory path that fits into the existing module layout guide (http://www.haskell.org/~simonmar/lib-hierarchy.html) . So we would create the following directory structure, for the module Data.LTree:


$ mkdir Data
$ cat > Data/LTree.hs
module Data.LTree where

So our Data.LTree module lives in Data/LTree.hs.

The Cabal file
Cabal files for libraries list the publicly visible modules, and have no executable section:

$ cat ltree.cabal
Name:            ltree
Version:         0.1
Description:     Lambda tree implementation
License:         BSD3
License-file:    LICENSE
Author:          Don Stewart
Maintainer:      [email protected]
Build-Depends:   base
Exposed-modules: Data.LTree
ghc-options:     -Wall -O

We can thus build our library:

$ runhaskell Setup.lhs configure --prefix=$HOME
$ runhaskell Setup.lhs build
Preprocessing library ltree-0.1...
Building ltree-0.1...
[1 of 1] Compiling Data.LTree ( Data/LTree.hs, dist/build/Data/LTree.o )
/usr/bin/ar: creating dist/build/libHSltree-0.1.a

and our library has been created as an object archive. On *nix systems, you should probably add the --user flag to the configure step (this means you want to update your local package database during installation). Now install it:

$ runhaskell Setup.lhs install
Installing: /home/dons/lib/ltree-0.1/ghc-6.6 & /home/dons/bin ltree-0.1...
Registering ltree-0.1...
Reading package info from ".installed-pkg-config" ... done.
Saving old package config file... done.
Writing new package config file... done.

And we're done! You can use your new library from, for example, ghci:

$ ghci -package ltree
Prelude> :m + Data.LTree
Prelude Data.LTree>

The new library is in scope, and ready to go.

More complex build systems


For larger projects it is useful to have source trees stored in subdirectories. This can be done simply by creating a directory, for example, "src", into which you will put your src tree. To have Cabal find this code, you add the following line to your Cabal file:

hs-source-dirs: src

Cabal can be set up to also run configure scripts, along with a range of other features. For more information consult the Cabal documentation (http://www.haskell.org/ghc/docs/latest/html/Cabal/index.html) .

Automation
A tool to automatically populate a new cabal project is available (beta!):

darcs get http://www.cse.unsw.edu.au/~dons/code/mkcabal

Usage is:

$ mkcabal
Project name: haq
What license ["GPL","LGPL","BSD3","BSD4","PublicDomain","AllRightsReserved"] ["BSD3"]:
What kind of project [Executable,Library] [Executable]:
Is this your name? - "Don Stewart " [Y/n]:
Is this your email address? - "<[email protected]>" [Y/n]:
Created Setup.lhs and haq.cabal
$ ls
Haq.hs  LICENSE  Setup.lhs  _darcs  dist  haq.cabal

which will fill out some stub Cabal files for the project 'haq'. To create an entirely new project tree:

$ mkcabal --init-project
Project name: haq
What license ["GPL","LGPL","BSD3","BSD4","PublicDomain","AllRightsReserved"] ["BSD3"]:
What kind of project [Executable,Library] [Executable]:
Is this your name? - "Don Stewart " [Y/n]:
Is this your email address? - "<[email protected]>" [Y/n]:
Created new project directory: haq
$ cd haq
$ ls
Haq.hs  LICENSE  README  Setup.lhs  haq.cabal

Licenses
Code for the common base library package must be BSD licensed or freer. Otherwise, it is entirely up to you as the author. Choose a licence (inspired by this (http://www.dina.dk/~abraham/rants/license.html) ). Check the licences of things you use, both other Haskell packages and C libraries, since these may impose conditions you must follow.


Use the same licence as related projects, where possible. The Haskell community is split, roughly, into camps that release everything under BSD, GPL, or LGPL. Some Haskellers recommend specifically avoiding the LGPL, due to cross module optimisation issues. Like many licensing questions, this advice is controversial. Several Haskell projects (wxHaskell, HaXml, etc) use the LGPL with an extra permissive clause to avoid the cross-module optimisation problem.

Releases
It's important to release your code as stable, tagged tarballs. Don't just rely on darcs for distribution (http://awayrepl.blogspot.com/2006/11/we-dont-do-releases.html) .

darcs dist generates tarballs directly from a darcs repository. For example:

$ cd fps
$ ls
Data  LICENSE  README  Setup.hs  TODO  _darcs  cbits  dist  fps.cabal  tests
$ darcs dist -d fps-0.8
Created dist as fps-0.8.tar.gz

You can now just post your fps-0.8.tar.gz

You can also have darcs do the equivalent of 'daily snapshots' for you by using a post-hook. Put the following in _darcs/prefs/defaults:

apply posthook darcs dist
apply run-posthook

Advice: Tag each release using darcs tag. For example:

$ darcs tag 0.8
Finished tagging patch 'TAG 0.8'

Then people can darcs pull --partial -t 0.8, to get just the tagged version (and not the entire history).

Hosting
A Darcs repository can be published simply by making it available from a web page. If you don't have an account online, or prefer not to do this yourself, source can be hosted on darcs.haskell.org (you will need to email Simon Marlow (http://research.microsoft.com/~simonmar/) to do this). haskell.org itself has some user accounts available. There are also many free hosting places for open source, such as Google Project Hosting (http://code.google.com/hosting/) and SourceForge (http://sourceforge.net/) .

Example
A complete example (http://www.cse.unsw.edu.au/~dons/blog/2006/12/11#release-a-library-today) of writing, packaging and releasing a new Haskell library under this process has been documented.

At least part of this page was imported from the Haskell wiki article How to write a Haskell program (http://www.haskell.org/haskellwiki/How_to_write_a_Haskell_program) , in accordance to its Simple Permissive License. If you wish to modify this page and if your changes will also be useful on that wiki, you might consider modifying that source page instead of this one, as changes from that page may propagate here, but not the other way around. Alternately, you can explicitly dual license your contributions under the Simple Permissive License. Note also that the original tutorial contains extra information about announcing your software and joining the Haskell community, which may be of interest to you.

Specialised Tasks

GUI
Haskell has at least three toolkits for programming a graphical interface:

wxHaskell - provides a Haskell interface to the wxWidgets toolkit
Gtk2Hs (http://haskell.org/gtk2hs/) - provides a Haskell interface to the GTK+ library
hoc (http://hoc.sourceforge.net/) - provides a Haskell to Objective-C binding which allows users to access the Cocoa library on MacOS X

In this tutorial, we will focus on the wxHaskell toolkit, as it allows you to produce a native graphical interface on all platforms that wxWidgets is available on, including Windows, Linux and MacOS X.

Getting and running wxHaskell
To install wxHaskell, you'll need GHC (http://haskell.org/ghc/) . Then, find your wxHaskell package on the wxHaskell download page (http://wxhaskell.sourceforge.net/download.html) .


The latest version of GHC is 6.6.1, but wxHaskell hasn't been updated for versions higher than 6.4. You can either downgrade GHC to 6.4, or build wxHaskell yourself. Instructions on how to do this can be found on the building page (http://wxhaskell.sourceforge.net/building.html) . Follow the installation instruction provided on the wxHaskell download page. Don't forget to register wxHaskell with GHC, or else it won't run. To compile source.hs (which happens to use wxHaskell code), open a command line and type:

ghc -package wx source.hs -o bin

Code for GHCi is similar:

ghci -package wx

You can then load the files from within the GHCi interface. To test if everything works, go to $wxHaskellDir/samples/wx ($wxHaskellDir is the directory you installed it in) and load (or compile) HelloWorld.hs. It should show a window with title "Hello World!", a menu bar with File and About, and a status bar at the bottom, that says "Welcome to wxHaskell". If it doesn't work, you might try to copy the contents of the $wxHaskellDir/lib directory to the ghc install directory.

Hello World
Here's the basic Haskell "Hello World" program:

module Main where

main :: IO ()
main = putStr "Hello World!"

It will compile just fine, but it isn't really fancy. We want a nice GUI! So how to do this? First, you must import Graphics.UI.WX. This is the wxHaskell library. Graphics.UI.WXCore has some more stuff, but we won't be needing that now. To start a GUI, use (guess what) start gui. In this case, gui is the name of a function which we'll use to build the interface. It must have an IO type. Let's see what we have:

module Main where

import Graphics.UI.WX

main :: IO ()
main = start gui

gui :: IO ()
gui = do
  --GUI stuff

To make a frame, we use frame. Check the type of frame. It's [Prop (Frame ())] -> IO (Frame ()). It takes a list of "frame properties" and returns the corresponding frame. We'll look deeper into properties later, but a property is typically a combination of an attribute and a value. What we're interested in now is the title. This is in the text attribute and has type (Textual w) => Attr w String. The most important thing here is that it's a String attribute. Here's how we code it:

gui :: IO ()
gui = do
  frame [text := "Hello World!"]

The operator (:=) takes an attribute and a value, and combines both into a property. Note that frame returns an IO (Frame ()). You can change the type of gui to IO (Frame ()), but it might be better just to add return (). Now we have our own GUI consisting of a frame with title "Hello World!". Its source:

module Main where

import Graphics.UI.WX

main :: IO ()
main = start gui

gui :: IO ()
gui = do
  frame [text := "Hello World!"]
  return ()

Hello World! (winXP)

The result should look like the screenshot. (It might look slightly different on Linux or MacOS X, on which wxHaskell also runs.)

Controls
From here on, it's good practice to keep a browser window or tab open with the wxHaskell documentation (http://wxhaskell.sourceforge.net/doc/) . It's also available in $wxHaskellDir/doc/index.html.

A text label
A frame on its own doesn't do much. In this chapter, we're going to add some more elements. Let's start with something simple: a label. wxHaskell has a label, but that's a layout thing. We won't be doing layout until next chapter. What we're looking for is a staticText. It's in Graphics.UI.WX.Controls. As you can see, the staticText function takes a Window as argument, and a list of properties. Do we have a window? Yup! Look at Graphics.UI.WX.Frame. There we see that a Frame is merely a type-synonym of a special sort of window. We'll change the code in gui so it looks like this:

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  staticText f [text := "Hello StaticText!"]
  return ()


Again, text is an attribute of a staticText object, so this works. Try it!

Hello StaticText! (winXP)

A button
Now for a little more interaction. A button. We're not going to add functionality to it until the chapter about events, but at least something visible will happen when you click on it.

A button is a control, just like staticText. Look it up in Graphics.UI.WX.Controls. Again, we need a window and a list of properties. We'll use the frame again. text is also an attribute of a button:

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  staticText f [text := "Hello StaticText!"]
  button f [text := "Hello Button!"]
  return ()

Load it into GHCi (or compile it with GHC) and... hey!? What's that? The button's been covered up by the label! We're going to fix that next, in the layout chapter.

Overlapping button and StaticText (winXP)

Layout
The reason that the label and the button overlap is that we haven't set a layout for our frame yet. Layouts are created using the functions found in the documentation of Graphics.UI.WXCore.Layout. Note that you don't have to import Graphics.UI.WXCore to use layouts. The documentation says we can turn a member of the widget class into a layout by using the widget function. Also, windows are a member of the widget class. But, wait a minute... we only have one window, and that's the frame! Nope... we have more, look at Graphics.UI.WX.Controls and click on any occurrence of the word Control. You'll be taken to Graphics.UI.WXCore.WxcClassTypes and it is here we see that a Control is also a type synonym of a special type of window. We'll need to change the code a bit, but here it is:

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  st <- staticText f [text := "Hello StaticText!"]
  b <- button f [text := "Hello Button!"]
  return ()

Now we can use widget st and widget b to create a layout of the staticText and the button. layout is an attribute of the frame, so we'll set it here:

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  st <- staticText f [text := "Hello StaticText!"]
  b <- button f [text := "Hello Button!"]
  set f [layout := widget st]
  return ()

The set function will be covered in the chapter about attributes. Try the code. What's wrong? This only displays the staticText, not the button. We need a way to combine the two. We will use layout combinators for this. row and column look nice. They take an integer and a list of layouts. We can easily make a list of layouts of the button and the staticText. The integer is the spacing between the elements of the list. Let's try something:

gui :: IO ()
gui = do
  f <- frame [text := "Hello World!"]
  st <- staticText f [text := "Hello StaticText!"]
  b <- button f [text := "Hello Button!"]
  set f [layout := row 0 [widget st, widget b]]
  return ()

StaticText with layout (winXP)

A row layout (winXP)

Play around with the integer and see what happens; also change row into column. Try to change the order of the elements in the list to get a feeling of how it works. For fun, try to add widget b several more times in the list. What happens? Here are a few exercises to spark your imagination. Remember to use the documentation!

Column layout with a spacing of 25 (winXP)

Exercises
1. Add a checkbox control. It doesn't have to do anything yet, just make sure it appears next to the staticText and the button when using row-layout, or below them when using column layout. text is also an attribute of the checkbox.
2. Notice that row and column take a list of layouts, and also generate a layout itself. Use this fact to make your checkbox appear on the left of the staticText and the button, with the staticText and the button in a column.
3. Can you figure out how the radiobox control works? Take the layout of the previous exercise and add a radiobox with two (or more) options below the checkbox, staticText and button. Use the documentation!
4. Use the boxed combinator to create a nice looking border around the four controls, and another one around the staticText and the button. (Note: the boxed combinator might not be working on MacOS X - you might get widgets that can't be interacted with. This is likely just a bug in wxhaskell.)

After having completed the exercises, the end result should look like this:


Your result might differ slightly: you could have used different spacing for row and column, or the options of the radiobox might be displayed horizontally.

Attributes
After all this, you might be wondering things like: "Where did that set function suddenly come from?", or "How would I know if text is an attribute of something?". Both answers lie in the attribute system of wxHaskell.

Answer to exercises

Setting and modifying attributes
In a wxHaskell program, you can set the properties of the widgets in two ways:

1. during creation: f <- frame [ text := "Hello World!" ]
2. using the set function: set f [ layout := widget st ]

The set function takes two arguments: one of any type w, and the other is a list of properties of w. In wxHaskell, these will be the widgets and the properties of these widgets. Some properties can only be set during creation, like the alignment of a textEntry, but you can set most others in any IO-function in your program, as long as you have a reference to it (the f in set f [--stuff). Apart from setting properties, you can also get them. This is done with the get function. Here's a silly example:

gui :: IO ()
gui = do
  f <- frame [ text := "Hello World!" ]
  st <- staticText f []
  ftext <- get f text
  set st [ text := ftext ]
  set f [ text := ftext ++ " And hello again!" ]

Look at the type signature of get. It's w -> Attr w a -> IO a. text is a String attribute, so we have an IO String which we can bind to ftext. The last line edits the text of the frame. Yep, destructive updates are possible in wxHaskell. We can overwrite the properties using (:=) anytime with set. This inspires us to write a modify function:

modify :: w -> Attr w a -> (a -> a) -> IO ()
modify w attr f = do
  val <- get w attr
  set w [ attr := f val ]

First it gets the value, then it sets it again after applying the function. Surely we're not the first one to think of that... And nope, we aren't. Look at this operator: (:~). You can use it in set, because it takes an attribute and a function. The result is an property, in which the original value is modified by the function. This means we can write: gui :: IO () gui = do


gui :: IO ()
gui = do
  f <- frame [ text := "Hello World!" ]
  st <- staticText f []
  ftext <- get f text
  set st [ text := ftext ]
  set f [ text :~ (++ " And hello again!") ]
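Equivalently, the modify helper we just wrote could perform the same update (a small sketch, reusing the frame f from the example):

-- append to the frame title using our own modify helper
modify f text (++ " And hello again!")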

This is a great place to use anonymous functions with the lambda-notation.

There are two more operators we can use to set or modify properties: (::=) and (::~). They do the same as (:=) and (:~), except a function of type w -> orig is expected, where w is the widget type, and orig is the original "value" type (a in the case of (:=), and a -> a in the case of (:~)). We won't be using them now, though, as we've only encountered attributes of non-IO types, and the widget needed in the function is generally only useful in IO-blocks.

How to find attributes

Now the second question. Where did I read that text is an attribute of all those things? The easy answer is: in the documentation. Now where in the documentation to look for it? Let's see what attributes a button has, so go to Graphics.UI.WX.Controls, and click the link that says "Button". You'll see that a Button is a type synonym for a special kind of Control, and a list of functions that can be used to create a button. After each function is a list of "Instances". For the normal button function, this is Commanding, Textual, Literate, Dimensions, Colored, Visible, Child, Able, Tipped, Identity, Styled, Reactive, Paint. This is the list of classes of which a button is an instance. Read through the Class Declarations chapter. It means that there are some class-specific functions available for the button. Textual, for example, adds the text and appendText functions. If a widget is an instance of the Textual class, it means that it has a text attribute!

Note that while StaticText hasn't got a list of instances, it's still a Control, which is a synonym for some kind of Window, and when looking at the Textual class, it says that Window is an instance of it. This is an error on the side of the documentation.

Let's take a look at the attributes of a frame. They can be found in Graphics.UI.WX.Frame. Another error in the documentation here: it says Frame instantiates HasImage. This was true in an older version of wxHaskell. It should say Pictured. Apart from that, we have Form, Textual, Dimensions, Colored, Able and a few more. We've already seen Textual and Form. Anything that is an instance of Form has a layout attribute. Dimensions adds (among others) the clientSize attribute. It's an attribute of the Size type, which can be made with sz. Please note that the layout attribute can also change the size. If you want to use clientSize you should set it after the layout. Colored adds the color and bgcolor attributes. Able adds the Boolean enabled attribute. This can be used to enable or disable certain form elements, which is often displayed as a greyed-out option. There are lots of other attributes; read through the documentation for each class.
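To illustrate a couple of the attributes just mentioned, here is a small sketch (reusing f, st and b from the earlier examples; the size 300 x 200 is an arbitrary choice of ours):

-- Able: disable the button, which is typically shown greyed out
set b [ enabled := False ]
-- Dimensions: set the client size after the layout, since layout can change the size
set f [ layout := widget st ]
set f [ clientSize := sz 300 200 ]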

Events

There are a few classes that deserve special attention. They are the Reactive class and the Commanding class. As you can see in the documentation of these classes, they don't add attributes (of the form Attr w a), but events. The Commanding class adds the command event. We'll use a button to demonstrate event handling. Here's a simple GUI with a button and a staticText:

gui :: IO ()
gui = do
  f <- frame [ text := "Event Handling" ]
  st <- staticText f [ text := "You haven\'t clicked the button yet." ]
  b <- button f [ text := "Click me!" ]
  set f [ layout := column 25 [ widget st, widget b ] ]

Before (winXP)

We want to change the staticText when you press the button. We'll need the on function:

b <- button f [ text := "Click me!"
              , on command := --stuff ]

The type of on is Event w a -> Attr w a. command is of type Event w (IO ()), so we need an IO-function. This function is called the event handler. Here's what we get:

gui :: IO ()
gui = do
  f <- frame [ text := "Event Handling" ]
  st <- staticText f [ text := "You haven\'t clicked the button yet." ]
  b <- button f [ text := "Click me!"
                , on command := set st [ text := "You have clicked the button!" ] ]
  set f [ layout := column 25 [ widget st, widget b ] ]

After (winXP)
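Since command is of type Event w (IO ()), the handler is just an ordinary IO action and doesn't have to be written inline. Here is a sketch of the same GUI with the handler pulled out into a separate definition (onClick is our own name, not part of wxHaskell):

gui :: IO ()
gui = do
  f  <- frame [ text := "Event Handling" ]
  st <- staticText f [ text := "You haven\'t clicked the button yet." ]
  b  <- button f [ text := "Click me!", on command := onClick st ]
  set f [ layout := column 25 [ widget st, widget b ] ]
  where
    -- the handler receives the staticText it should update
    onClick st = set st [ text := "You have clicked the button!" ]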

Insert text about event filters here

Database

Haskell/Database

Web programming

An example web application, using the HAppS framework, is hpaste (http://hpaste.org), the Haskell pastebin. It is built around the core Haskell web framework, HAppS, with HaXml for page generation, and binary/zlib for state serialisation.


XML

There are several Haskell libraries for XML work, and additional ones for HTML. For more web-specific work, you may want to refer to the Haskell/Web programming chapter.

Libraries for parsing XML

The Haskell XML Toolbox (hxt) (http://www.fh-wedel.de/~si/HXmlToolbox/) is a collection of tools for parsing XML, aiming at a more general approach than the other tools.
HaXml (http://www.cs.york.ac.uk/fp/HaXml/) is a collection of utilities for parsing, filtering, transforming, and generating XML documents using Haskell.
HXML (http://www.flightlab.com/~joe/hxml/) is a non-validating, lazy, space-efficient parser that can work as a drop-in replacement for HaXml.

Libraries for generating XML

HSXML represents XML documents as statically typesafe s-expressions.

Other options

tagsoup (http://www.cs.york.ac.uk/fp/darcs/tagsoup/tagsoup.htm) is a library for parsing unstructured HTML, i.e. it does not assume validity or even well-formedness of the data.

Getting acquainted with HXT

In the following, we are going to use the Haskell XML Toolbox for our examples. You should have a working installation of GHC, including GHCi, and you should have downloaded and installed HXT according to the instructions (http://www.fh-wedel.de/~si/HXmlToolbox/#install). With those in place, we are ready to start playing with HXT. Let's bring the XML parser into scope, and parse a simple XML-formatted string:

Prelude> :m + Text.XML.HXT.Parser
Prelude Text.XML.HXT.Parser> xread "<foo>abc<bar/>def</foo>"
[NTree (XTag (QN {namePrefix = "", localPart = "foo", namespaceUri = ""}) []) [NTree (XText "abc") [],NTree (XTag (QN {namePrefix = "", localPart = "bar", namespaceUri = ""}) []) [],NTree (XText "def") []]]

We see that HXT represents an XML document as a list of trees, where the nodes can be constructed as an XTag containing a list of subtrees, or an XText containing a string. With GHCi, we can explore this in more detail:

Prelude Text.XML.HXT.Parser Text.XML.HXT.DOM> :i NTree
data NTree a = NTree a (NTrees a)
        -- Defined in Data.Tree.NTree.TypeDefs
Prelude Text.XML.HXT.Parser Text.XML.HXT.DOM> :i NTrees
type NTrees a = [NTree a]
        -- Defined in Data.Tree.NTree.TypeDefs


As we can see, an NTree is a general tree structure where a node stores its children in a list, and some more browsing around will tell us that XML documents are trees over an XNode type, defined as:

data XNode = XText String
           | XCharRef Int
           | XEntityRef String
           | XCmt String
           | XCdata String
           | XPi QName XmlTrees
           | XTag QName XmlTrees
           | XDTD DTDElem Attributes
           | XAttr QName
           | XError Int String
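As a small illustration of working with this representation, here is a sketch of a function that collects all the text content of such a tree, written purely against the NTree and XNode definitions shown above (the real HXT library may well provide similar functionality already):

-- Gather every XText fragment in document order.
collectText :: NTree XNode -> String
collectText (NTree (XText s) _)       = s
collectText (NTree _         children) = concatMap collectText children

For the parsed example above, collectText (head (xread "<foo>abc<bar/>def</foo>")) would yield "abcdef".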

Returning to our example, we notice that while HXT successfully parsed our input, one might desire a more lucid presentation for human consumption. Lucky for us, the DOM module supplies this. Notice that xread returns a list of trees, while the formatting function works on a single tree.

Prelude Text.XML.HXT.Parser> :m + Text.XML.HXT.DOM
Prelude Text.XML.HXT.Parser Text.XML.HXT.DOM> putStrLn $ formatXmlTree $ head $ xread "<foo>abc<bar/>def</foo>"
---XTag "foo"
   |
   +---XText "abc"
   |
   +---XTag "bar"
   |
   +---XText "def"

This representation makes the structure obvious, and it is easy to see the relationship to our input string. Let's proceed to extend our XML document with some attributes (taking care to escape the quotes, of course):

Prelude Text.XML.HXT.Parser> xread "<foo a1=\"my\" b2=\"oh\">abc<bar/>def</foo>"
[NTree (XTag (QN {namePrefix = "", localPart = "foo", namespaceUri = ""}) [NTree (XAttr (QN {namePrefix = "", localPart = "a1", namespaceUri = ""})) [NTree (XText "my") []],NTree (XAttr (QN {namePrefix = "", localPart = "b2", namespaceUri = ""})) [NTree (XText "oh") []]]) [NTree (XText "abc") [],NTree (XTag (QN {namePrefix = "", localPart = "bar", namespaceUri = ""}) []) [],NTree (XText "def") []]]

Notice that attributes are stored as regular NTree nodes with the XAttr content type, and (of course) no children. Feel free to pretty-print this expression, as we did above.
