Compiler Optimisations © 2009 [email protected] stratusdesign.blogspot.com

Overview

• Introduction
• Legacy optimisation
• Vector SIMD optimisation
• DSP optimisation
• RISC/Superscalar optimisation
• SSA optimisation
• Multicore optimisation

Introduction

• This is intended as an overview of the optimisation process only, as optimisations can be done in different ways, often with subtle machine-specific variations. Broadly speaking, there are four main classes of optimisation available to the implementor:
• Classic legacy optimisations – these are well understood and the majority are technically straightforward to implement. They offer a gain of around 10-25% in performance.
• Classic Vector optimisations – once the preserve of leviathan mainframe CPUs with brand new shiny Vector Units attached, but now very commonly found in DSP-related technologies. Technically these optimisations are more difficult than the former but still not complicated. For the right class of narrow numerical applications, fully and properly optimised, they can yield gains of 500%-2400% in performance.
• RISC-based optimisations. Despite their potential speed, scheduling fast code close to the theoretical maximum on a RISC has been, and continues to be, problematic. For example, the Alpha's brand new GEM compilers, when profiled on the machine, only achieved speeds approaching what the Alpha was capable of about 30% of the time. That meant the raw compute power of the Alpha was wasted 70% of the time; in other words, all those extra MHz were just used to heat up your datacentre/office. Performance enhancements are of the order of at least 150%.
• Parallel or Hybrid optimisations. Optimisation in these cases is dominated by the underlying memory architecture, eg. UMA, NUMA, MIMD or MIMD/SIMD hybrid, so as with RISC, memory bandwidth is an issue. The other factors are interprocessor utilisation, communication, security and management, and identifying coroutines to schedule on the parallel system. Another issue is that most commercial computer languages to date have not been very good at letting the programmer express parallelism, so the compiler has to infer parallelism from what is essentially a missing attribute, which is very difficult to do with any degree of success. Currently most languages rely on rather unsophisticated library or system routines.

Classic legacy optimisation
• Copy propagation

Before:
x = y;
z = 1 + x;

After:
x = y;
z = 1 + y;

Before the optimisation a data dependency is created because z has to wait for the value of x to be written.

• Constant propagation

Before:
x = 42;
z = 1 + x;

After:
x = 42;
z = 1 + 42;

Classic legacy optimisation
• Constant folding

Before:
x = 512 * 4;

After:
x = 2048;

Can be applied to constant arguments, statics and locals.

• Dead code removal
  o Temporary code created by the compiler, eg. when doing constant propagation
  o Dead variable removal
  o Elimination of unreachable code, eg. in C switch statements (see the sketch below)
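A minimal C sketch (illustrative only, not taken from the slides) of the kinds of dead and unreachable code these passes remove:

    int classify(int x)
    {
        int unused = x * 2;   /* dead store: the value is never read, so the assignment can be deleted */

        switch (x) {
        case 0:
            return -1;
            x = 99;           /* unreachable: follows an unconditional return */
        default:
            return 1;
        }
    }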

Classic legacy optimisation
• Algebraic simplification

Before:
x = 10 * ( x + 5 ) / 10;

After:
x += 5;

• Strength reduction

Before:
(1) x = y ** 2;
(2) x = y * 2;

After:
(1) x = y * y;
(2) x = y + y;

Classic legacy optimisation
• Variable renaming

Before:
x = y * z;
x = u * v;

After:
x  = y * z;
x0 = u * v;

• Common subexpression elimination

Before:
x = u * ( y + z );
w = ( y + z ) / 2;

After:
x0 = y + z;
x  = u * x0;
w  = x0 / 2;

Classic legacy optimisation
• Loop invariant code motion

Before:
for (i=0; i<10; i++)
    x[i] += v[i] + a + b;

After:
x0 = a + b;
for (i=0; i<10; i++)
    x[i] += v[i] + x0;

• Loop induction variable simplification

Before:
for (i=0; i<10; i++)
    x = i * 2 + v;

After (the multiply is replaced by a running addition):
x = v;
for (i=0; i<10; i++)
    x += 2;

Classic legacy optimisation
• Loop unrolling

Before:
for (i=0; i<10; i++)
    x[i] += v[i];

After (unroll by factor of 2):
for (i=0; i<10; i+=2) {
    x[i]   += v[i];
    x[i+1] += v[i+1];
}

• Tail recursion elimination

recurs( x, y )
{
    if( x > 0 )
        return recurs( x - y, y );
    return x;
}

All computation is done by the time the recursive call is made, so by simply jumping back to the top of the function, excessive stack frame creation is avoided. This may not be possible in some languages; for example C++ usually arranges to call destructors at function exit.
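As a sketch (my own illustration of the generated shape, using a hypothetical recurs_loop name), the transformation effectively rewrites the tail call as a jump back to the top of the function:

    int recurs_loop( int x, int y )
    {
    top:
        if( x > 0 ) {
            x = x - y;      /* update the argument in place             */
            goto top;       /* jump instead of call: no new stack frame */
        }
        return x;
    }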

GVN
• Global value numbering
  o Similar to CSE but can target cases that are not considered by CSE (see below)
  o The idea is an extension of Local Value Numbering (within a Basic Block)

Local value numbering:
a = b + c
d = b
e = d + c

b = V1, c = V2, so a = V1 + V2; d = V1; e = V1 + V2. Therefore a and e are equivalent.

Global value numbering has to consider the effects of control flow across BBs:
x1 = a1
x2 = b1
x3 = phi( x1, x2 )

a1 = V1, b1 = V2; x1 = V3, which equals V1 (x1 is a copy of a1); x2 = V4, which equals V2; x3 = V5 = phi( V1, V2 ), which receives a fresh value number V6.
Nb. later rhs evaluations ripple through previous nodes.
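A minimal local value numbering sketch in C (an illustrative toy, not GCC's implementation): each expression is keyed by its operator and the value numbers of its operands, so the copy d = b makes d + c map to the same value number as b + c:

    #include <stdio.h>
    #include <string.h>

    struct entry { char key[32]; int vn; };
    static struct entry table[64];
    static int nentries, next_vn;

    /* Return the value number for a key, allocating a fresh one if unseen. */
    static int value_number(const char *key)
    {
        for (int i = 0; i < nentries; i++)
            if (strcmp(table[i].key, key) == 0)
                return table[i].vn;
        strcpy(table[nentries].key, key);
        table[nentries].vn = next_vn++;
        return table[nentries++].vn;
    }

    int main(void)
    {
        /* a = b + c; d = b; e = d + c; */
        int b = value_number("b");
        int c = value_number("c");
        char buf[32];

        sprintf(buf, "+ %d %d", b, c);
        int a = value_number(buf);   /* value number of b + c          */

        int d = b;                   /* copy: d gets b's value number  */

        sprintf(buf, "+ %d %d", d, c);
        int e = value_number(buf);   /* same key, so same value number */

        printf("a and e %s\n", a == e ? "are equivalent" : "differ");
        return 0;
    }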

PRE
• Partial redundancy elimination includes analysis for:
  o Loop invariant code motion – see previous
  o Full redundancy elimination – see previous, for CSE
  o Partial redundancy elimination – see below – evaluation of x+y is predicated on some condition, creating a partial redundancy
• Some PRE variants are applied to SSA values, not just the expressions, effectively combining PRE and GVN

[Fig: Elimination of a partial redundancy. In the CFG, a = x+y is evaluated on only one conditional path, while b = x+y is evaluated again afterwards on every path. PRE inserts T = x+y on the path where the expression was missing, so both uses become a = T and b = T and the later evaluation is removed.]

[Fig: Elimination of a full redundancy (ref CSE). T = x+y is evaluated once; the subsequent uses become a = T and b = T.]
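A small C illustration of the partial redundancy case above and the insertion PRE performs (hypothetical function names, not from the slides):

    /* Before: x+y is evaluated twice on the path where cond is true. */
    int pre_before(int cond, int x, int y, int *a)
    {
        if (cond)
            *a = x + y;      /* partially redundant evaluation          */
        return x + y;        /* evaluated again on every path           */
    }

    /* After: the evaluation is inserted on the path where it was missing,
       so the later use becomes fully redundant and simply reuses t.     */
    int pre_after(int cond, int x, int y, int *a)
    {
        int t;
        if (cond) {
            t = x + y;
            *a = t;
        } else {
            t = x + y;       /* inserted: x+y now available on all paths */
        }
        return t;            /* second evaluation eliminated             */
    }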

Classic legacy optimisation
• Leaf procedure optimisation
A routine which does not call any other routines or require any local storage can be invoked with a simple JSR/RET pair, with no stack frame set-up.

• Procedure inlining
This technique avoids the overhead of a call/ret by duplicating the code wherever it is needed. It is best used for small, frequently called routines (see the sketch below).
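A small C sketch of procedure inlining (illustrative names):

    static int sq(int v) { return v * v; }

    int sum_sq(int a, int b)
    {
        return sq(a) + sq(b);        /* before: two call/ret pairs                      */
    }

    int sum_sq_inlined(int a, int b)
    {
        return (a * a) + (b * b);    /* after: the body is duplicated at each call site */
    }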

Vector SIMD
• These optimisations increase performance by using deep vector-unit pipelines, and the data locality and data isolation found when manipulating arrays, to parallelise the computation. They also reduce conditional branching over potentially large datasets.
• Nowadays SIMD instructions appear most frequently in DSPs for computing FIR/IIR filters or doing FFTs.
• Most modern microprocessors also have vector support in their SIMD extensions, eg. SSE and Altivec, which have traditionally offered cut-down functionality in their vector units, but the trend is towards fuller implementations.
• Some studies have shown that when code can be vectorised it can improve performance in some cases by around 500+%.

Vector SIMD

Before (source):
for( i=0; i<64; i++ )
    a[i] = b[i] + 50;

Before (CISC case):
    movl   #1, r0
    moval  a, r1
    moval  b, r2
L$1:
    addl   #50, (b)+
    movl   (b), (a)+
    aobleq #64, r0, L$1

After (classic VP, long vector):
    mtvlr  #64
    vldl   a, v0
    vldl   b, v1
    vvaddl v1, #50
    vstl   v0, a
Nb. also optimised away another branch

After (Altivec et al., limited to 4x32b parallelism):
    vspltisw v0, #50
    lw   r1, 0(a)
    lw   r2, 0(b)
    lvx  v2, 0, r2
    vaddsws v1, v2, v0
    stvx v1, 0, r1
    ; have 4 words added in parallel
    lw   r1, 128(a)
    lw   r2, 128(b)
    lvx  v2, 0, r2
    vaddsws v1, v2, v0
    stvx v1, 0, r1
    ; have 8 words added in parallel
    ; keep going...
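For comparison, a sketch of the same 64-element loop written with x86 SSE2 intrinsics (my illustration; the add50_sse2 name and the choice of unaligned loads are assumptions, not from the slides):

    #include <emmintrin.h>   /* SSE2 intrinsics */

    void add50_sse2(int *a, const int *b)
    {
        const __m128i fifty = _mm_set1_epi32(50);
        for (int i = 0; i < 64; i += 4) {             /* four 32-bit adds per iteration */
            __m128i v = _mm_loadu_si128((const __m128i *)&b[i]);
            v = _mm_add_epi32(v, fifty);
            _mm_storeu_si128((__m128i *)&a[i], v);
        }
    }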

Scalar/Superscalar RISC
• Load delay slot
  o The result of a load cannot be used by the immediately following instruction without stalling the pipeline until the load completes. Instead of letting the machine stall in this way, some useful instruction is found that can be placed between the load of r2 and the add that consumes it. If no useful instruction can be found, a nop is inserted instead.

Source:
u = v + w;
z = x + y;

before:
ld  r1, v
ld  r2, w
add r3, r1, r2
sw  u, r3
ld  r4, x
etc..

after:
ld  r1, v
ld  r2, w
ld  r4, x        ; independent load fills the delay slot
add r3, r1, r2
sw  u, r3
etc..

DSP optimisation
• DSPs have some unique hardware design features which require additional compiler support
  o tbd

Scalar/Superscalar RISC
• Branch delay slot
  o The outcome of a branch cannot be resolved without stalling the pipeline. Instead of letting the machine stall, some useful instruction is found that can be placed immediately after the branch. Several strategies can be used: find a useful candidate instruction from before the branch, take one from the branch target and advance the branch target address by one instruction, or take a candidate from after the branch. If a candidate cannot be found, a nop is inserted instead.

Source:
z = x + y;
if( x == 0 ) goto L1;

before:
ld  r1, x
ld  r2, y
add r3, r1, r2
cmp r1, 0
beq L1
…
L1: sll r3, 4

after:
ld  r1, x
ld  r2, y
cmp r1, 0
beq L1
add r3, r1, r2   ; fills the branch delay slot
...
L1: sll r3, 4

Scalar/Superscalar RISC
• Branch reduction
  o Loop unrolling is one way to reduce branching; other methods exist

Ex. bitfield setting and rotation
if( x == 0 ) y++;
...

before:
L1: ...
    lw   r2, x
    cmpi r1, r2, 10
    bne  r1, L2
    addi r3, r0, 1
L2: …

after (branch eliminated):
    lw     r2, x
    cmpdi  r2, 10
    cntlzw r2, r2
    addic  r2, r2, -32
L2: rlwinm r3, r2, 1, 31, 31
    ...
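The same idea at the C level (an illustrative rewrite, not from the slides): the comparison result is used directly as an arithmetic value, so no branch is needed:

    static inline int bump_if_zero(int x, int y)
    {
        return y + (x == 0);   /* (x == 0) evaluates to 0 or 1, added directly */
    }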

Scalar/Superscalar RISC
• Conditional Move
  o Another branch reduction technique

Ex. conditional assignment
if( x == 0 ) y = 1; else z = 20;

before:
ldq r1, x
cmp r1, 0
beq r1, L1
mov r3, 1
...
L1: mov r3, 20

after:
ldq r1, x
ldq r2, 1
ldq r3, 20
cmp r1, 0
cmovez r3, r2, r1
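At the source level, the pattern a compiler typically lowers to a conditional move is a simple select (an illustrative sketch, not from the slides):

    static inline int select_value(int x)
    {
        return (x == 0) ? 1 : 20;   /* both values are cheap to compute; the select becomes a cmov */
    }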

Superscalar Scheduling
• This is usually achieved by creating another IR, or extending an existing IR, to associate machine instructions with the RISC functional units; in this way a determination can be made of current FU utilisation and of how best to reorder code for superscalar multi-issue.
• These IRs are highly guarded and highly proprietary technologies.
• This is one reason, for example, that the IBM POWER compiler outperforms current GCC implementations.
• A simple but innovative example at the time was tracking register pressure in the WHIRL IR originally used by MIPS and SGI.

GCC
• GCC is a fairly standard compiler technology. Historically it had one tree form (the Parse Tree) generated from the front end and a retargetable machine format (RTL) across which the standard optimisations were done. Since 2005 this has been expanded, and the tree forms now include the Parse Tree, the GENERIC (language independent) and GIMPLE (supporting SSA form) trees (C and C++ omit a GENERIC tree). The standard optimisations now occur after an SSA form has been generated (scalar ops only). SSA starts out in GCC by versioning all variables and finishes by merging them back down with PHI functions.
  o This solved the problem that the various front-end parse trees did not share a common IR which could be used as the basis for thorough optimisation, and that the RTL IR was also unsuitable because it was at too low a level.
• Compiler passes over the IR are handled via an extendable Pass Manager, which as of 4.1.1 includes preparation for optimisation and optimisation proper. The passes are separated across interprocedural, intra-procedural and machine forms (SSA c. 100 passes, GIMPLE c. 100 passes, RTL c. 60 passes [Novillo06]). The majority of these passes centre on the intra-procedural and machine forms.
• One criticism I would make of GCC is that in some cases it flagrantly ignores manufacturer-architected conventions. This leads to a lack of interoperability with the rest of the manufacturer's system software, for example the manufacturer's cross-functional software support or the manufacturer's system threading package and libraries.
• Another problem for GCC is to stem the flow of machine-dependent RTL optimisations by handling these in a smarter way.
• Corporate involvement is accelerating functional releases (2008-2009: 4 releases in the last year – current 4.4.1).



GCC Gimple
• Gimple
  o Influenced by the McCAT Simple IR (GNU Simple)
  o Need for a generic, language-independent IR
  o Need for an IR that renders complex, deep parse trees into a form that is easier to analyse for optimisation
  o A small grammar covers bitwise, logical, assignment, statement etc.
  o Unlike the parse tree, a GIMPLE statement never references more than 3 variables, meaning at most 2 variable reads
  o High Gimple and Low Gimple – lowering removes binding scope information and converts conditional clauses to gotos
  o Gimple nodes are iterated at tree level (tsi) and on a doubly linked list at basic-block level (bsi)

GCC Gimple
• 3-address format, ex.

Generic form:
if ( a > b + c )
    c = b / a + ( b * a );

Gimple form:
T1 = b + c;
if ( a > T1 )
{
    T2 = b / a;
    T3 = b * a;
    c = T2 + T3;
}

GCC SSA
• SSA is another IR form, originally developed to help with dataflow analysis for interpreted systems
  o SSA evolved from Def-Use chains (Reif & Lewis); when annotated with identity assignments these became the basis for SSA
  o GCC does scalar SSA using Kildall analysis (not Wegman et al.)
  o SSA for ~ simplification of existing optimisations; for example constant propagation was originally complex to implement but with SSA it is greatly simplified
  o SSA for ~ classic dataflow analysis – Reaching Definition Analysis, or more intuitively Reaching Assignment Analysis, since it attempts to pair the current variable reference with the most recent update or write to that variable
  o SSA for ~ significantly faster optimisation during compilation, O(n) versus O(n²) when optimising using traditional data-flow equations

Generic form:
c = 5;
if ( a > b + c )
    c = b / c + ( b * a );

Gimple form:
c = 5;
T1 = b + c;
if ( a > T1 )
{
    T2 = b / c;
    T3 = b * a;
    c = T2 + T3;
}

SSA form:
c_1 = 5;
T1_1 = b_1 + c_1;
if ( a_1 > T1_1 )
{
    T2_1 = b_1 / c_1;
    T3_1 = b_1 * a_1;
    c_2 = T2_1 + T3_1;
}
c_3 = phi( c_1, c_2 );

SSA makes Reaching Definition Analysis easy to perform; here it is being used to simplify constant propagation (a worked continuation follows below).

Fig.1 Basic blocks contain scalar expressions
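A worked continuation in the same notation (my illustration, not from the original slides): because each SSA name has exactly one definition, the constant from c_1 = 5 can be substituted directly into every use of c_1:

    c_1 = 5;
    T1_1 = b_1 + 5;          /* c_1 replaced by 5                   */
    if ( a_1 > T1_1 )
    {
        T2_1 = b_1 / 5;      /* c_1 replaced by 5                   */
        T3_1 = b_1 * a_1;
        c_2 = T2_1 + T3_1;
    }
    c_3 = phi( 5, c_2 );     /* not foldable: c_2 is not a constant */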

Dominators & Φ Fn in SSA
• Dominators ::=> d dominates n if every path from the entry node to n must go through d
  o Every node dominates itself
  o Nodes also evidently have the property of an Immediate Dominator

[Fig.1 CFG: A splits to B and C; C splits to D (c1 = x;) and E (c2 = a / b;), which merge at F (c3 = phi(c1, c2)); B and F feed the straight-line node G.]

Fig.1: Clearly the path to G is either from B or F, but the paths to B and F both stem from A, so every path to G goes through A; therefore G is dominated by A.
Fig.1: Likewise the path to F is either from D or E, but the paths to these stem from C, so every path to F goes through C; therefore F is dominated by C.

Using this we can build a Dominator Tree (Fig.2) and derive the Dominator Sets and Dominance Frontiers (Fig.3). The DF of a definition of a given variable in a BB is used by the compiler to introduce Phi functions; this produces a maximal Phi insertion, which can be reduced by various methods, eg. variable liveness.

Dominance Frontier of a BB ::=>
DF(d) = { n | ∃ p ∈ pred(n), d dom p and d !sdom n }
i.e. the set of all CFG nodes n for which d dominates a predecessor p of n but does not strictly dominate n itself. (Intuitively, the earliest points where a definition of a variable is no longer guaranteed to be unique.) This gives maximal insertion of phi nodes and can be optimised several ways, for example by doing liveness analysis.

[Fig.2 Dom Tree: A is the root with children B, C and G; C has children D, E and F.]

Fig.3 Dom Set & Dom Frontier
Block   Dom Set     Immed Dom   Dom Frontier
A       A           -----       -----
B       A, B        A           G
C       A, C        A           G
D       A, C, D     C           F
E       A, C, E     C           F
F       A, C, F     C           G
G       A, G        A           -----
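A small self-contained C sketch (my illustration; the node names follow Fig.1) of the classic iterative dominator computation, Dom(n) = {n} united with the intersection of Dom(p) over all predecessors p, which reproduces the Dom Sets in Fig.3:

    #include <stdio.h>

    enum { A, B, C, D, E, F, G, N };

    int main(void)
    {
        /* Predecessors of each node, encoded as bitmasks (Fig.1 CFG). */
        unsigned pred[N] = { 0 };
        pred[B] = 1u << A;                    /* A -> B    */
        pred[C] = 1u << A;                    /* A -> C    */
        pred[D] = 1u << C;                    /* C -> D    */
        pred[E] = 1u << C;                    /* C -> E    */
        pred[F] = (1u << D) | (1u << E);      /* D, E -> F */
        pred[G] = (1u << B) | (1u << F);      /* B, F -> G */

        unsigned all = (1u << N) - 1;
        unsigned dom[N];
        dom[A] = 1u << A;                     /* entry dominates only itself */
        for (int n = B; n < N; n++)
            dom[n] = all;                     /* start from the full set     */

        int changed = 1;
        while (changed) {
            changed = 0;
            for (int n = B; n < N; n++) {
                unsigned meet = all;
                for (int p = A; p < N; p++)
                    if (pred[n] & (1u << p))
                        meet &= dom[p];       /* intersect preds' dominators */
                unsigned next = meet | (1u << n);
                if (next != dom[n]) { dom[n] = next; changed = 1; }
            }
        }

        for (int n = A; n < N; n++) {
            printf("Dom(%c) = {", 'A' + n);
            for (int d = A; d < N; d++)
                if (dom[n] & (1u << d))
                    printf(" %c", 'A' + d);
            printf(" }\n");
        }
        return 0;
    }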

Multicore optimisation
• Polyhedral representation
  o First proposed by Feautrier in 1991 and appearing in research compilers of the time.
  o Complex Loop Nest Optimisation and array analysis is difficult to do with the corresponding AST representation ~ especially with respect to strict observance of loop bounds across the nest, which often defeats standard LNO.
  o The loop nest is reformulated as a set of equations and linear inequalities (properly, affine constraints), and due to this higher level of abstraction a deeper level of optimisation (transformation) can be accomplished by solving the LP system.
  o Each loop integer is a point in an XY space whose loop bounds form a polyhedron. Ex. the first nest is a point with 2 rays, the second modifies this to a 4-sided 2D polyhedron, the third forms a 3D polyhedron.
  o Problem – how to efficiently implement solving for a large number of points.
  o The literature reports 20-90% improvement using polyhedral LNO. Such an improvement makes it practical and desirable to distribute loop nests and the associated array computation across a set of multicores. AMD are doing this with a lightweight intercore IPC they call streams.
  o Polyhedral LNO is available in GCC 4.5 as Graphite and in IBM's Cell compiler.

Multicore optimisation
• The polyhedral model. Ref [Bastoul06]

❶ Typical loop nest:
for(i=2; i<=n; i++)
    z[i] = 0;                   // S1
for(i=1; i<=n; i++)
    for(j=1; j<=n; j++)
        z[i+j] += x[i] * y[j];  // S2

Steps:
1. Define the domain Dn (ref. the bounds of the enclosing loops)
2. List the access functions. Ex. S1 = z[i] = 0
3. Transform (optimise) with some affine schedule, eg. S1(i) = (i)
4. Generate code using projection and separation of polyhedra

❷ Reformulated as affine constraints. Ex. outer loop
❸ Transformation scheduling (optimisation)
❹ Regenerate the AST for code generation. Ex. this will be DS1 - DS2 ∧ DS2 - DS1 ∧ DS1 ∩ DS2, giving a worst case of 3np (n = stmts; p = nest depth):

t=2; i=2;                              // DS1 - DS2
z[i] = 0;
for(t=3; t<=2*n; t++) {                // DS1 ∩ DS2
    for(i=max(1,t-n-1); i<=min(t-2,n); i++) {
        j = t-i-1;
        z[i+j] += x[i] * y[j];
    }
    i = t;
    z[i] = 0;
}
t=2*n+1; i=n; j=n;                     // DS2 - DS1
z[i+j] += x[i] * y[j];
