E-BOOK OF

POWER SYSTEM PLANNING AND RELIABILITY

SEMESTER-VIII

2018-2019 DEPARTMENT OF ELECTRICAL ENGINEERING


Course objective: To understand different power system planning and forecasting techniques, and reliability evaluation in terms of basic reliability indices.

Course outcomes:
• Should be able to build a generation system model for the power system in terms of frequency and duration of failure.
• Should be able to calculate reliability indices of the power system based on the system model and the load curve.
• Should be able to plan a small generation and transmission system, predict its behavior, and make the required changes in order to achieve reliability.


MODULE-1

Load Forecasting

CONTENTS: 1.1 Introduction, 1.2 Classification of Load, 1.3 Load Growth Characteristics, 1.4 Peak Load Forecasting, 1.5 Extrapolation and Correlation Methods of Load Forecasting, 1.6 Reactive Load Forecasting, 1.7 Impact of Weather on Load Forecasting.


1.1 INTRODUCTION:

Definition of power system planning: a process in which the aim is to decide on new as well as upgrading existing system elements, to adequately satisfy the loads for a foreseen future.

Elements can be:
• Generation facilities
• Substations
• Transmission lines and/or cables
• Capacitors/Reactors
• Etc.

The decisions should cover:
• Where to allocate the element (for instance, the sending and receiving end of a line),
• When to install the element (for instance, 2020),
• What to select, in terms of the element specifications (for instance, number of bundles and conductor type).

In all cases, the loads should be adequately satisfied.

Load Forecasting:

• The first crucial step for any planning study.
• The term forecast stands for predictions of future events and conditions; the process of making such predictions is called forecasting.
• Load forecasting refers to the prediction of future load behavior. It is the prediction of the electrical power required to meet the short-term, medium-term or long-term demand. Forecasting helps the utility companies in the operation and management of the supply to their customers.
• Electrical load forecasting is an important process that can increase the efficiency and revenues of the electrical generating and distribution companies. It helps them to plan their capacity and operations in order to reliably supply all consumers with the required energy.


Classification of Load Forecasting:

Electricity load forecasting is usually divided into three or four time-frame categories.

1) Long-term load forecasting: covers a period of one year up to 20 years. It is used for system expansion planning, long-term financial planning, and tariff studies.
2) Medium-term load forecasting: covers a period of one to 12 months. The purpose of this forecast is to properly plan maintenance schedules, major tests and commissioning events, and outage times of plants and major equipment.
3) Short-term load forecasting: covers a period of one day up to several days. It is used for operation planning, unit commitment of generating plants, and load flow studies for economic dispatch.
4) Very short-term load forecasting: covers one to a few hours ahead and is used for power exchange and purchase contracts, and tie-line operation.

In many power companies the last two forecasts are combined into one under the title of short-term forecasting.

Advantages of load forecasting:

1) Enables the utility company to plan well, since it has an understanding of the future consumption or load demand.
2) Minimizes risk for the utility company. Understanding the future long-term load helps the company to plan and make economically viable decisions regarding future generation and transmission investments.
3) Helps to determine the required resources, such as the fuels needed to operate the generating plants, as well as other resources needed to ensure uninterrupted and yet economical generation and distribution of power to the consumers. This is important for short-, medium- and long-term planning.
4) Helps in planning the future generating plant in terms of size, location and type. By determining areas or regions with high or growing demand, the utility will most likely generate the power near the load. This minimizes the transmission and distribution infrastructure as well as the associated losses.
5) Helps in deciding and planning maintenance of the power system. By understanding the demand, the utility can decide when to carry out maintenance and ensure that it has the minimum impact on the consumers. For example, maintenance in residential areas may be scheduled during the day, when most people are at work and demand is low.
6) Maximum utilization of power generating plants. Forecasting avoids under-generation or over-generation.

Challenges in load forecasting:

1) Forecasting is based on expected conditions such as weather. Unfortunately, weather is sometimes unpredictable, and the forecast may be off when the actual weather differs from what was expected.
2) Different regions may experience different weather conditions, which affects the electricity demand. This may have a negative impact on revenues, especially if the utility generates more to meet an expected high demand and the consumption then turns out to be much less than what was generated, possibly using expensive resources such as fossil-fuel generators.
3) Most experienced utility forecasters use manual methods that rely on a thorough understanding of a wide range of contributing factors based on upcoming events or a particular dataset. Relying on manual forecasting is not sustainable due to the increasing number and complexity of forecasts. Utilities must therefore look for technologies that can give accurate results and eliminate the problems that may occur if experienced forecasters retire or leave employment.
4) Usage behavior varies between consumers using different types of meters, especially between smart and traditional meters, as well as between different tariffs. The utility must understand this and develop a separate forecast model for each metering system and then add them up for the final forecast value; otherwise the forecast will be inaccurate.
5) Difficulties in getting accurate data on consumption behavior due to changes in factors such as pricing and the corresponding demand response to a price change.
6) Load forecasting is difficult due to the complex nature of loads, which may vary depending on the season; the total consumption for two similar seasons may also differ.
7) It is sometimes difficult to accurately fit the numerous complex factors that affect demand for electricity into the forecasting models. In addition, it may not be easy to obtain an accurate demand forecast based on parameters such as changes in temperature, humidity, and other factors that influence consumption.
8) The utility may suffer losses if it does not understand and decide on an acceptable margin of error in short-term load forecasting.

Fig. 1.1: Factors affecting electrical power demand

1.2 Classification of Load:

Loads fall into five broad categories:

I. Domestic
Domestic load consists of lights, fans, home electric appliances (including TV, AC, refrigerators, heaters etc.), small motors for pumping water etc. Most of the domestic loads are connected for only some hours during a day. For example, lighting load is connected for a few hours during night time.
• Demand factor: 70-100%
• Diversity factor: 1.2-1.3
• Load factor: 10-15%

II. Commercial
Commercial load consists of electrical loads that are meant to be used commercially, such as in restaurants, shops, malls etc. This type of load occurs for more hours during the day as compared to the domestic load.
• Demand factor: 90-100%
• Diversity factor: 1.1-1.2
• Load factor: 25-30%

III. Industrial
Industrial load consists of the load demanded by various industries. It includes all electrical loads used in industries along with the employed machinery. Industrial loads may be connected during the whole day.
• Small-scale: 0-20 kW
• Medium-scale: 20-100 kW
• Large-scale: 100 kW and above
  – Demand factor: 70-80%
  – Load factor: 60-65%

IV. Agricultural
This type of load is mainly water pumping load, where water is supplied to the crops.
• Demand factor: 90-100%
• Diversity factor: 1-1.5
• Load factor: 15-25%

V. Other loads
This type of load consists of street lighting, water supply and drainage systems etc. Street lighting is practically constant during the night hours. Water may be pumped to overhead storage tanks during the off-peak hours to improve the load factor of the system.
• Street lights, bulk supplies, traction etc.

Commercial and agricultural loads are characterized by seasonal variations. Industrial loads are base loads and are only slightly weather dependent.

1.3 Electrical Load Growth:

Reasons for the growth of peak demand and energy usage within an electric utility system:

• New customer additions
  – Load will increase if more customers are buying the utility's product.
  – New construction and a net population in-migration to the area will add new customers and increase peak load.
  – Load growth caused by new customers locating in previously vacant areas leads to new construction and hence draws the planner's attention.

• New uses of electricity
  – Existing customers may add new appliances (for example, replacing gas heaters with electric ones) or replace existing equipment with improved devices that require more power.
  – With every customer buying more electricity, the peak load and annual energy sales will most likely increase.

• Changes in usage among existing customers.

Fig. 1.2: Load Growth Characteristics

1.4 Peak load forecasting:

• Annual peak load forecasts are important for planning and in particular for securing adequate generation, transmission, and distribution capacities.
• More accurate peak forecasts improve decision making in capital expenditure and improve the reliability of the system.
• Future peak load is not deterministic; it depends on several uncertain factors, including weather conditions.
• The basic approach for a weekly peak demand forecast is (a small illustrative decomposition is sketched after this list):
  1. Determine the seasonal weather load model.
  2. Separate the historical weather-sensitive and non-weather-sensitive components of the weekly peak demand using the weather load model.
  3. Forecast the mean and variance of the non-weather-sensitive component of demand.
  4. Extrapolate the weather load model and forecast the mean and variance of the weather-sensitive component.
  5. Determine the mean, variance and density function of the total weekly forecast.
  6. Calculate the density function of the monthly/annual forecast.
• It is assumed that the seasonal variations of the peak demand are primarily due to weather. Otherwise, before step 3 can be undertaken, any additional seasonal variation remaining after the weather-sensitive variations must be removed.
• To use this forecasting method, a data base of at least 12 years is recommended.
• To develop weather load models, daily peaks and coincident weather variable values are needed.
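As a toy illustration of steps 1-3, the Python sketch below fits a simple linear weather-load model to weekly peaks and splits each peak into weather-sensitive and non-weather-sensitive parts. The linear model, the synthetic data and all numbers are assumptions made only for the example; a real study would use the utility's historical records and its own weather-load model.

# Minimal sketch of steps 1-3 of the weekly peak forecast procedure.
# Assumption: a purely linear weather-load model, peak = a + b*temp + residual,
# applied to synthetic data.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52 * 12                                    # at least 12 years of weekly peaks
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 2, weeks)
peak = 900 + 8.0 * temp + rng.normal(0, 25, weeks)   # MW, synthetic "history"

# Step 1: seasonal weather-load model (least-squares fit of peak vs. temperature)
b, a = np.polyfit(temp, peak, 1)

# Step 2: separate weather-sensitive and non-weather-sensitive components
weather_sensitive = b * temp
non_weather = peak - weather_sensitive

# Step 3: mean and variance of the non-weather-sensitive component
print(f"model: peak = {a:.1f} + {b:.2f}*T")
print(f"non-weather component: mean = {non_weather.mean():.1f} MW, "
      f"variance = {non_weather.var(ddof=1):.1f} MW^2")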

1.5 Forecasting techniques:

Three broad categories, based on:

• Extrapolation
  – Time series method.
  – Uses historical data as the basis for estimating future outcomes.
• Correlation
  – Econometric forecasting method.
  – Identifies the underlying factors that might influence the variable being forecast.
• Combination of both

1.5.1 Extrapolation:

• Based on fitting a curve to the previously available data.
• With the trend curve obtained from curve fitting, the load can be forecast at any future point.
• A simple method, and reliable in some cases.
• Deterministic extrapolation:
  – Errors in the available data and errors in curve fitting are not accounted for.
• Probabilistic extrapolation:
  – The accuracy of the forecast is tested using statistical measures such as the mean and variance.
• Standard analytical functions (such as straight-line, parabolic, exponential and S-shaped trend curves) are used in trend curve fitting.
• The best trend curve is obtained using regression analysis.
• The best estimate may then be obtained from the equation of the best trend curve, as sketched in the example after this list.
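A small sketch of trend-curve extrapolation, assuming a straight-line trend and a short synthetic series of annual peak loads (both are assumptions made only for the example); the residual spread about the fitted trend is used as a rough probabilistic error band.

# Minimal trend-curve extrapolation sketch (straight-line fit to assumed history).
import numpy as np

years = np.arange(2010, 2019)
peak_mw = np.array([640, 662, 685, 701, 726, 748, 770, 795, 818])   # assumed data

coeffs = np.polyfit(years, peak_mw, 1)          # least-squares straight-line trend
trend = np.poly1d(coeffs)

residuals = peak_mw - trend(years)
sigma = residuals.std(ddof=2)                   # spread about the trend

for y in (2020, 2025):
    print(f"{y}: forecast {trend(y):.0f} MW  (about +/- {2 * sigma:.0f} MW at 2 sigma)")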

1.5.2 Correlation:

• Relates system loads to various demographic and economic factors.
• Requires knowledge of the interrelationship between the nature of load growth and other measurable factors.
• Forecasting the demographic and economic factors is itself a difficult task.
• No forecasting method is effective in all situations; the designer must have good judgment and experience to make a forecasting method effective.

1.6 Reactive Load Forecasting:

• Reactive power planning is one of the most difficult optimization problems in power systems. It requires effective control of reactive power generation by all the reactive power sources present in the system.
• The sources of reactive power are generators, tap-changing transformers, static capacitors, etc. The reactive power optimization problem mainly deals with the minimization of active power loss.
• It is also observed that optimum use of the above-mentioned reactive power sources reduces the active power loss to a certain extent. If FACTS devices like SVC and TCSC are used together with the existing reactive power sources in the system, not only is the transmission loss reduced significantly, but a satisfactory improvement of the voltage profile is also observed throughout the power network.
• Hence, the problem to be solved in reactive power optimization is to determine the reactive power generation of all sources so as to optimize a chosen objective.

1.7 Impact of weather on load forecasting:

• Weather causes variations in domestic load, public lighting, commercial loads etc.
• The main weather variables that affect power consumption are:
  – Temperature
  – Cloud cover
  – Visibility
  – Precipitation
• The first two factors affect the heating/cooling loads; the others affect lighting loads.
• Cloud cover is measured in terms of:
  – Height of cloud cover
  – Thickness
  – Cloud amount
  – Time of occurrence and duration before crossing over a populated area.
• Visibility measurements are made in metres/kilometres, with fog indication.
• To determine the impact of weather variables on load demand, it is essential to analyze data on the different weather variables across the cross-section of the area served by the utility and calculate weighted averages for incorporation in the model.

MODULE-2

System Planning

CONTENTS: 2.1 Introduction, 2.2 Short, Medium and Long Term Strategic Planning, 2.3 Reactive Power Planning, 2.4 Introduction to Generation and Network Planning, 2.5 D.C Load Flow Equation, 2.6 Introduction to Successive Expansion and Successive Backward Methods

2.1 Introduction:

The word planning stems from the transitive verb to plan, meaning to arrange a method or scheme beforehand for any work, enterprise, or proceeding. The aim here is to discuss the meanings of "method or scheme", "beforehand" and "work, enterprise or proceeding" for a physical power system. In other words, we are going to discuss the power system planning problem in terms of the issues involved from various viewpoints: the methods to be used, the elements to be affected, the time horizon to be observed, etc.

Power system planning is a process in which the aim is to decide on new as well as upgrading existing system elements, to adequately satisfy the loads for a foreseen future.

2.1.1 Power System Elements:

The main elements of a power system are:
• Generation facilities
• Transmission facilities
  – Substations
  – Network (lines, cables)
• Loads

2.1.2 Power System Issues:

A. Static Versus Dynamic Planning:

Let us assume that our task is to decide on the subjects given above for 2015-2020. If the peak loading conditions are to be investigated, the studies involve six loading conditions. One way is to study each year separately, irrespective of the other years. This type of study is referred to as static planning, which focuses on planning for a single stage. The other way is to focus on all six stages simultaneously, so that the solution is found for all six stages at the same time. This type of study is named dynamic planning.

Obviously, although static planning for a specific year provides some useful information for that year, the process as given above leads to impractical results for the period, as the solution for a year cannot be independent of the solutions for the preceding years. One way to solve the problem is to include the results of each year in the studies for the following year. This may be referred to as semi-static, semi-dynamic, quasi-static or quasi-dynamic planning. It is apparent that the dynamic planning solution can be closer to the optimum than the semi-static planning solution.

B. Transmission Versus Distribution Planning:

Looking at the transmission and sub-transmission levels, these are generally interconnected. Normally both may be treated similarly in terms of the studies required and involved. From here on, "transmission" means both the transmission and/or sub-transmission levels, except where otherwise specified.

Both transmission and distribution networks comprise lines/cables, substations and generation. However, due to the specific characteristics of a distribution system (such as its radial structure, Fig. 2.1), its planning is normally separated from that of a transmission system, although many of the ideas are similar.

Fig. 2.1: A typical radial distribution network

C. Long-term Versus Short-term Planning:

There is no golden rule for specifying short-term or long-term planning issues. Normally, up to 1 year falls into operational planning and operational issues, in which the aim is typically to manage and operate available resources in an efficient manner.

More than that falls into the planning stages. If installing new equipment and predicting system behavior are possible in a shorter time (for instance, for distribution systems, 1-3 years), the term short-term planning may be used. More than that (3-10 years and even higher) is called long-term planning (typically transmission planning), in which predicting the system behavior is possible for these longer periods.

2.2 Basic Issues in Transmission Planning:

2.2.1 Load Forecasting:

The forecast may be of three types:
1. Long-term forecast
2. Medium-term forecast
3. Short-term forecast

(1) Long-term forecast
• The long-term forecast time period varies from 15 to 20 years and is used for studying energy problems.
• It takes four to six years for the construction, installation and maintenance of the equipment in power stations.
• The long-term forecast indicates the sales and purchase of equipment.
• The long-term forecast indicates the energy policies.

(2) Medium-term forecast
• The medium-term forecast time period varies from 5 to 6 years and is used for planning and sizing of power stations.
• The medium-term forecast indicates the transmission and distribution losses.
• The medium-term forecast indicates the sales and purchase of energy.
• The medium-term forecast indicates energy conservation.

(3) Short-term forecast
• Short-term forecasts of 1 to 2 years are mainly of value in deciding operating procedures and preparing budget estimates.
• The short-term forecast indicates the sales and purchase of power.
• The short-term forecast indicates the development of distribution networks.

2.3 Generation Expansion Planning:

After predicting the load, the next step is to determine the generation requirements to satisfy it. An obvious simple solution is to assume a generation increase equal to the load increase. If, for instance, in year 2015 the peak load would be 40,000 MW and, at that time, the available generation is 35,000 MW, an extra generation of 5,000 MW would be required. Unfortunately, the solution is not so simple at all. Some obvious questions are:

• What types of power plants do we have to install (thermal, gas turbine, nuclear, etc.)?
• Where do we have to install the power plants (distributed among 5 specific buses, 10 specific buses, etc.)?
• What capacities do we have to install (5 × 1000 MW, or 2 × 1000 MW and 6 × 500 MW, or ...)?

2.2.3 Substation Expansion Planning:

Once the load is predicted and the generation requirements are known, the next step is to determine the substation requirements, both in terms of
• Expanding the existing ones,
• Installing some new ones.

This is referred to as Substation Expansion Planning (SEP). SEP is a difficult task, as many factors are involved, such as
• Constraints due to the upward grid feeding the substations,
• Constraints due to the downward grid, through which the substation supplies the loads,
• Constraints due to factors to be observed for the substation itself.

2.4 Network Expansion Planning:

• Network Expansion Planning (NEP) is a process in which the network (transmission lines, cables, etc.) specifications are determined.
• The network is a medium for transmitting power efficiently and in a reliable manner from the generation resources to the load centers. We will see what "efficiently" and "reliable manner" mean in practical terms, and how these factors influence our decisions, so that we have to choose from an enormous number of alternatives.

2.2.5 Reactive Power Planning:

• In running NEP, the voltages are assumed to be flat (i.e. 1 p.u.) and reactive power flows are ignored. The main reason is the fact that constructing a line is not considered a main tool for voltage improvement. Moreover, the running time of NEP can be exceptionally high, or a solution may not even be possible, if AC Load Flow (ACLF) is employed.
• That is why, in practice, NEP is normally based on Direct Current Load Flow (DCLF). Upon running GEP, SEP and NEP, the network topology is determined. However, it may perform unsatisfactorily if a detailed ACLF is performed based on existing algorithms. To solve such a difficulty, static reactive power compensators, such as capacitors and reactors, may be used.


2.5 D.C load flow equation
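As a brief reminder of the standard DC load flow formulation used in NEP studies (this is the usual textbook form, stated here under its standard assumptions; the notation below is assumed, not taken from the worked example of this section):

% Standard DC load flow approximation (assumed notation):
% voltage magnitudes fixed at 1 p.u., resistances and reactive flows neglected,
% angle differences small so that sin(theta_i - theta_j) ~ theta_i - theta_j.
\[
  P_{ij} \;=\; \frac{\theta_i - \theta_j}{x_{ij}}
  \qquad \text{(active power flow on line } i\text{--}j \text{ with reactance } x_{ij}\text{)}
\]
\[
  P_i \;=\; \sum_{j \in N(i)} \frac{\theta_i - \theta_j}{x_{ij}}
  \quad \Longrightarrow \quad
  \mathbf{P} \;=\; \mathbf{B}'\,\boldsymbol{\theta},
\]
where \(\mathbf{B}'\) is the susceptance matrix built from the \(1/x_{ij}\) terms and one bus is taken as the reference with \(\theta_{\mathrm{ref}} = 0\). Solving the linear system \(\mathbf{P} = \mathbf{B}'\boldsymbol{\theta}\) for the angles and substituting back gives all line flows.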


2.6 Introduction to Successive Expansion and Successive Backward Methods:

2.6.1 Successive Expansion:

• The purpose of the successive expansion (forward walk) is to calculate the voltage at each node, starting from the feeder source node.
• The feeder substation voltage is set at its actual value. During the successive expansion, the effective power in each branch is held constant at the value obtained in the backward walk.

2.6.2 Successive Backward Method:

• The updated effective power flow in each branch is obtained in the backward propagation computation by using the node voltages of the previous iteration.
• That is, the voltage values obtained in the forward path are held constant during the backward propagation, and the updated power flows in each branch are transmitted backward along the feeder using the backward path.
• This means that the backward propagation starts at the extreme end node and proceeds towards the source node. A small sketch of one backward/forward iteration is given below.
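The Python sketch below runs repeated backward/forward sweeps on a tiny radial feeder until the voltages converge. The 3-bus feeder data, the flat start and the simple power-summation loss model are assumptions made for illustration only, not the exact procedure of any particular reference.

# Backward/forward sweep sketch for a 3-bus radial feeder (per unit, single phase).
# Assumed data: source bus 0 held at 1.0 pu, branches 0-1 and 1-2, constant-power loads.
import numpy as np

z = [0.02 + 0.04j, 0.03 + 0.05j]         # branch impedances: 0-1, 1-2 (pu)
s_load = [0.0, 0.8 + 0.3j, 0.6 + 0.2j]   # complex loads at buses 0, 1, 2 (pu)

v = np.ones(3, dtype=complex)            # flat start
for _ in range(50):
    # Backward walk: accumulate branch powers from the far end towards the source,
    # using the voltages of the previous iteration for the branch losses.
    s_branch = [0j, 0j]
    s_branch[1] = s_load[2]
    s_branch[1] += z[1] * abs(s_branch[1] / v[2]) ** 2       # loss in branch 1-2
    s_branch[0] = s_load[1] + s_branch[1]
    s_branch[0] += z[0] * abs(s_branch[0] / v[1]) ** 2       # loss in branch 0-1

    # Forward walk (successive expansion): update voltages from the source outward,
    # holding the branch powers fixed at their backward-walk values.
    v_new = v.copy()
    v_new[1] = v_new[0] - z[0] * np.conj(s_branch[0] / v_new[0])
    v_new[2] = v_new[1] - z[1] * np.conj(s_branch[1] / v_new[1])

    if np.max(np.abs(v_new - v)) < 1e-8:  # converged
        v = v_new
        break
    v = v_new

print("bus voltage magnitudes (pu):", np.round(np.abs(v), 4))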


MODULE-3

Reliability of Systems

CONTENTS: 3.1 Concepts, Terms and Definitions, 3.2 Reliability Models, 3.3 Markov Process, 3.4 Reliability Function, 3.5 Hazard Rate Function, 3.6 Bathtub Curve, 3.7 Serial Configuration, 3.8 Parallel Configuration, 3.9 Mixed Configuration of Systems, 3.10 Minimal Cuts and Minimal Paths, 3.11 Methods to Find Minimal Cut Sets, 3.12 System Reliability Using the Conditional Probability Method, 3.13 Cut Set Method and Tie Set Method.


3.1 Concepts, Terms and Definitions:

• Reliability is an engineering discipline for applying scientific know-how to a component, assembly, plant, or process so that it will perform its intended function, without failure, for the required time duration when installed and operated correctly in a specified environment.
• Reliability is "quality changing over time", or a motion picture instead of a snapshot.
• Reliability is a measure of the quality of the product over the long run. Reliability terminates with a failure, i.e., unreliability occurs. Business enterprises observe the high cost of unreliability, and this high cost motivates an engineering solution to control and reduce costs.

MIL-STD-721C, Definitions of Terms for Reliability and Maintainability, gives the following definition of reliability: Reliability is the probability that an item can perform its intended function without failure for a specified interval under stated conditions. This definition provides the following four aspects of reliability:
1. Reliability is a probability-based concept; the numerical value of the reliability is between 0 and 1.
2. The functional performance of the product has to meet certain stipulations, and a functional definition of failure is needed. For example, a failure means different things to the user and to the repair person.
3. It implies successful operation over a certain period of time.
4. The operating or environmental conditions under which the product is used are specified.

Cost of Unreliability

• Cost improvement efforts are more productive when motivated from the top down rather than the bottom up, because it is a top-management-driven effort to improve costs. Finding the cost of unreliability (COUR) starts with a big-picture view and helps direct cost improvement programs by identifying:
  1. Where the cost problem is (which sections of the plant),
  2. What the magnitude of the problem is (all business loss costs are included in the calculation), and
  3. What major types of problems occur.
• Cost of unreliability programs study plants as links in a chain of a reliability system, and the costs incurred when the plant, or a series of plants, fails to produce the desired result.
• The cost of unreliability begins with the big picture of failures to produce the desired business results, driven by failures of the process or its equipment. Elements of the process are considered as a series reliability model comprising links in a chain of events that deliver success or failure. Logical block diagrams of major steps or systems are identified. Failure costs are calculated by category, expecting that history tends to repeat in a string of chance events unless the problems have been permanently removed and success has been demonstrated by objective measures.

DESIGNING FOR RELIABILITY

Reliability does not just happen. It requires that the following three key elements be in place:
1. A commitment from top management to ensuring reliability
2. A reliability policy (that goes hand-in-hand with a quality policy)
3. A philosophy that designs reliability in at an early stage

WAYS TO IMPROVE RELIABILITY
• Use proven designs
• Use the simplest possible designs
• Use proven components that have undergone reliability component testing
• Use redundant parts in high-risk areas; placing two components in parallel reduces the overall probability of failure
• Always design fail-safe
• Specify and use proven manufacturing methods

MEASURES OF RELIABILITY

Reliability is the probability that a system will still be functioning at time t, R(t). This can also be expressed through the "cumulative distribution of failure", F(t) = 1 − R(t).

These two measures are mirror images of each other (refer to the figure below). The reliability starts at 1 and decays towards 0 over time; the cumulative distribution of failure starts at 0 (no failures) and approaches 1 as all the items fail over time. The slope of the reliability curve at any time t is related to the failure rate at that point in time. These measures give the overall reliability or failure at time t.

Probability density function: we wish to have an idea of the probability of an item failing in a given unit time period. This is termed the "probability density function" and is given by f(t) = dF(t)/dt.

The failure or hazard rate gives the failure density over a period of time, as with the probability density function, but is based on the currently surviving population. This gives a much better indication of the changing reliability of a system over time.

3.2 Reliability Models:

Reliability models are used to describe how a component's working ability evolves over time. Reliability models are divided into two types:

3.2.1 Constant Failure Rate Reliability Models:
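As a compact reminder of the constant failure rate (exponential) model named by this subsection, the standard results are summarized below in generic notation (a sketch of the usual textbook relations, with failure rate λ assumed constant):

% Constant failure rate (exponential) model with failure rate \lambda:
\[
  R(t) = e^{-\lambda t}, \qquad
  F(t) = 1 - e^{-\lambda t}, \qquad
  f(t) = \lambda e^{-\lambda t}, \qquad
  h(t) = \frac{f(t)}{R(t)} = \lambda ,
\]
\[
  \text{MTTF} = \int_0^{\infty} R(t)\,dt = \frac{1}{\lambda}.
\]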


3.2.2 Time-Dependent Failure Rates: Based on the general characteristics of the bathtub shape of the failure rate curve, the infant mortality and aging effects are associated with time-dependent failure rates. The former is associated with a rate that decreases with time, and the latter with a rate that increases with time. These are commonly referred to as the wear-in and wear-out phenomena, respectively.

3.3 Markov process:

When Markov chains are used in reliability analysis, the process usually represents the various stages (states) that a system can be in at any given time. The states are connected via transitions that represent the probability, or rate, that the system will move from one state to another during a step, or a given time.



When using probabilities and steps the Markov chain is referred to as a discrete Markov chain, while a Markov chain that uses rate and the time domain is referred to as a continuous Markov chain.



In a discrete Markov chain, we have to define each possible state that the system can be in at any given time, and also the transition probabilities per step that link the states together. The steps can represent time, but they do not have to. Lastly, we must also define the initial state probabilities that give us the starting point(s) of the system.



Mathematically, we can represent the initial state probabilities as a vector X(0), such that Xi represents the initial probability of being in state i.

The transitions between the states can be represented by a matrix P, where, for example, the term P12 is the transition probability from state 1 to state 2. If we want to know the probability of being in a particular state after n steps, we can use the Chapman-Kolmogorov equation to arrive at the following equation:

X(n) = X(0) Pⁿ

Where X(n) is the vector that represents the probability of being in a state after n steps. Using this methodology, we can find the point probability of being in a state at each step and from there also calculate the mean probability of being in a state over a certain number of steps.
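A minimal numerical illustration of these ideas for a hypothetical two-state (up/down) component; the transition probabilities and step counts below are assumptions made only for the example.

# Discrete Markov chain sketch: two-state component (state 0 = up, state 1 = down).
# Assumed per-step transition probabilities: fail with 0.05, get repaired with 0.60.
import numpy as np

P = np.array([[0.95, 0.05],     # from up:   stay up, fail
              [0.60, 0.40]])    # from down: repaired, stay down
x0 = np.array([1.0, 0.0])       # start in the up state

# Chapman-Kolmogorov: X(n) = X(0) P^n
for n in (1, 5, 50):
    xn = x0 @ np.linalg.matrix_power(P, n)
    print(f"after {n:2d} steps: P(up) = {xn[0]:.4f}, P(down) = {xn[1]:.4f}")

# Long-run (limiting) probabilities: left eigenvector of P for eigenvalue 1
w, vec = np.linalg.eig(P.T)
pi = np.real(vec[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print("long-run probabilities:", np.round(pi, 4))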

3.4 Reliability function:

• The most frequently used function in life data analysis and reliability engineering is the reliability function. This function gives the probability of an item operating for a certain amount of time without failure. As such, the reliability function is a function of time, in that every reliability value has an associated time value. In other words, one must specify a time value with the desired reliability value, e.g. 95% reliability at 100 hours. This degree of flexibility makes the reliability function a much better reliability specification than the MTTF, which represents only one point along the entire reliability function.

• The reliability function can be derived using the definition of the cumulative distribution function. Note that the probability of an event happening by time t (based on a continuous distribution given by f(x), or f(t), since our random variable of interest in life data analysis is time t) is given by:

  F(t) = ∫ f(s) ds,  integrated from 0 (or γ) to t

• One could also equate this event to the probability of a unit failing by time t, since the event of interest in life data analysis is the failure of an item.

• From this fact, the most commonly used function in reliability engineering can be obtained: the reliability function, which gives the probability of success of a unit in undertaking a mission of a prescribed duration.

• To show this mathematically, we first define the unreliability function, Q(t), which is the probability of failure, i.e., the probability that the time-to-failure lies between 0 (or γ) and t. So, from the previous equation, we have:

  F(t) = Q(t) = ∫ f(s) ds,  integrated from 0 (or γ) to t

In this situation there are only two possible outcomes, success or failure, and these two states are mutually exclusive. Since reliability and unreliability are the probabilities of these two mutually exclusive states, the sum of these probabilities is always equal to unity. So then:

  Q(t) + R(t) = 1
  R(t) = 1 − ∫ f(s) ds  (integrated from 0, or γ, to t)
  R(t) = ∫ f(s) ds      (integrated from t to ∞)

where R(t) is the reliability function. Conversely, the pdf can be defined in terms of the reliability function as:

  f(t) = − dR(t)/dt

The following figure illustrates the relationship between the reliability function and the cdf, or the unreliability function.


3.5 Hazard rate function:

The hazard rate at any time can be determined using the following equation:

  hT(t) = f(t) / R(t)

where f(t) is the probability density function, i.e., the probability that the value (failure or death) will fall in a specified interval (for example, a specific year), and R(t) is the survival function, i.e., the probability that the item will survive past time t. The hazard rate cannot be negative, and it is necessary to have a defined "lifetime" on which to base the equation.

• The hazard rate function hT(t), also known as the force of mortality or the failure rate, is defined as the ratio of the density function and the survival function, hT(t) = f(t)/R(t), where R(t) is the survival model of the life or system being studied. In this definition, the time to failure T is usually taken as a continuous random variable with nonnegative real values as support.

• The hazard rate can also be defined at points that are probability masses. This definition then covers discrete survival models as well as mixed survival models (i.e. models that are continuous in some interval and also have point masses).

3.6 Bathtub Curve:

Most products go through three distinct phases from product inception to wear-out. The figure below shows a typical life-cycle curve in which the failure rate is plotted as a function of time.




Infancy / Green / Debugging / Burn-in-period: Many components fail very soon after they are put into service. Failures within this period are caused by defects and poor design that cause an item to be legitimately bad. These are called infant mortality failures and the failure rate in this period is relatively high. Good system vendors will perform an operation called "burn in" where they put together and test a system for several days to try to weed out these types of problems so the customer doesn't see them.



Chance failure / Normal Operating Life: If a component does not fail within its infancy, it will generally tend to remain trouble-free over its operating lifetime. The failure rate during this period is typically quite low. This phase, in which the failure rate is constant, typically represents the useful life of the product.



Wear out / Ageing: After a component reaches a certain age, it enters the period where it begins to wear out and failures start to increase. This period is called the wear-out phase of the component's life.

SYSTEM RELIABILITY

3.7 Series System:

When components are connected in series and each component i has a reliability Ri, the failure of any one component causes the system to fail.

• The overall reliability of the series system shown above is:
  RAB = R1 × R2 × R3
• If R1 = R2 = R3 = 0.95, then
  RAB = 0.95 × 0.95 × 0.95 ≈ 0.86
• Rtotal is always less than R1, R2 or R3.


3.8 Parallel System:

When components are connected in parallel and each component i has a reliability Ri, the failure of one component does not cause the system to fail.

• RAB = 1 − probability(1 and 2 both fail)
• The probability of component 1 failing is (1 − R1)
• The probability of component 2 failing is (1 − R2)
• Overall reliability: RAB = 1 − (1 − R1)(1 − R2)
• If R1 = 0.9 and R2 = 0.8, then RAB = 1 − (1 − 0.9)(1 − 0.8) = 0.98
• Rtotal is always greater than R1 or R2.
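The two rules above, and a mixed configuration built from them, can be checked with a few lines of Python. The component reliabilities are the example values given above plus one assumed extra unit (0.99) for the mixed case.

# Series and parallel reliability combinations (independent components assumed).
from functools import reduce

def series(*r):
    """Series system: all components must work."""
    return reduce(lambda a, b: a * b, r, 1.0)

def parallel(*r):
    """Parallel system: the system fails only if every component fails."""
    q_all_fail = reduce(lambda a, b: a * (1.0 - b), r, 1.0)
    return 1.0 - q_all_fail

print(series(0.95, 0.95, 0.95))               # three units in series  -> ~0.857
print(parallel(0.9, 0.8))                     # two units in parallel  -> 0.98
# Mixed configuration: (0.9 parallel 0.8) in series with an assumed 0.99 unit
print(series(parallel(0.9, 0.8), 0.99))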

3.10 Minimal Cuts and Minimal Paths:

• The reduction approach presented in the previous section can be used if the number of activities in an IDEF3 model is relatively small. When a process model is large, more efficient methods based on the notions of a path set and a cut set are used.
• A minimum path set of an IDEF3 model is a minimal set of activities whose functioning ensures the functioning of the model.
• Consider the IDEF3 process model in Figure 5. Four minimum path sets, {1, 2, 3, 4, 5, 7, 9}, {1, 2, 3, 4, 5, 8, 9}, {1, 2, 3, 4, 6, 7, 9} and {1, 2, 3, 4, 6, 8, 9}, exist in this system. One can see from Figure 5 that the system will function if all the activities of at least one minimum path set are functioning. The structure function of the system in Figure 5 is therefore defined in terms of these path sets, as in Eq. (4).


The structure function φ(x) for path sets of the model in Figure 5 is represented in Figure 6.
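A structure function of this kind is easy to evaluate directly from the path sets. The sketch below uses the four path sets listed above; the example component states are assumptions made only for illustration.

# Structure function from minimum path sets: the system works (returns 1) if every
# activity of at least one path set is working.
PATH_SETS = [
    {1, 2, 3, 4, 5, 7, 9},
    {1, 2, 3, 4, 5, 8, 9},
    {1, 2, 3, 4, 6, 7, 9},
    {1, 2, 3, 4, 6, 8, 9},
]

def phi(state: dict[int, int]) -> int:
    """state maps activity number -> 1 (working) or 0 (failed)."""
    return int(any(all(state[i] for i in ps) for ps in PATH_SETS))

# Assumed example: activities 7 and 8 failed, everything else working.
x = {i: 1 for i in range(1, 10)}
x[7] = 0
x[8] = 0
print(phi(x))   # 0: every path set needs either activity 7 or activity 8
x[8] = 1
print(phi(x))   # 1: the path sets containing activity 8 are restored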


3.11 Methods to Find Minimal Cut Sets: The algorithm presented in this section determines the minimum cut sets provided that the minimum path sets have already been determined. In the literature, the minimum cut sets are obtained by combining the nodes that break the minimum path sets.

3.12 System reliability using the conditional probability method:

• This approach is used to evaluate the power availability in a composite power system and is based on conditional probability. A system/component is said to be connected if there exists a path between the source and the sink.
• The availability of power at the receiving end of a branch depends not only on the failure and repair rates of the components in that branch, but also on the state of the associated components of the other branches.
• These branches can form a power flow path for the particular branch. In the literature, most of the methods are based on the tracing of power flow paths.
• For example, if a load bus is supplied by three paths a, b and c, with power availability at the sending end of each path P1, P2 and P3, and the probabilities of availability of paths a, b and c being Pa, Pb and Pc, then the probabilities of power being unavailable at the ends of paths a, b and c are (1 − P1Pa), (1 − P2Pb) and (1 − P3Pc). The net probability of power being available (PL) at the receiving-end load bus is then given by

  PL = 1 − (1 − P1Pa)(1 − P2Pb)(1 − P3Pc)
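A quick numerical check of this expression; the sending-end and path availabilities used below are assumed example values.

# Net probability of power being available at a load bus fed by three paths.
P = [0.98, 0.95, 0.90]    # P1, P2, P3: power available at the sending end of each path
A = [0.97, 0.96, 0.99]    # Pa, Pb, Pc: availability of paths a, b, c

PL = 1.0
for p, a in zip(P, A):
    PL *= (1.0 - p * a)   # probability that this path fails to deliver power
PL = 1.0 - PL             # power is available if at least one path delivers
print(round(PL, 6))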


MODULE-4

Generating Capacity: Basic Probability Methods

CONTENTS: 4.1 Introduction, 4.2 Generation System Model, 4.3 Capacity Outage Probability Table, 4.4 Recursive Algorithm for Rated and De-rated States, 4.5 Evaluation of Loss of Load Indices, 4.6 Loss of Load Expectation, 4.7 Loss of Energy. Frequency and Duration Method: 4.8 Basic Concepts, 4.9 Numericals Based on the Frequency and Duration Method.

Reference book: Reliability Evaluation of Power Systems, Roy Billinton and Ronald N. Allan.


4.1 Introduction: 

Probabilistic evaluation of generation capacity is traditionally performed for one of two classes of decision problems. The first is the generation capacity planning problem, where one wants to determine the long-range generation needs of the system. The second is the short-term operational problem, where one wants to determine the unit commitment over the next few days or weeks.



We may think of the problem of generation capacity evaluation in terms of Fig.4.1.

Fig. 4.1: Evaluation of generation capacity

In Fig. 4.1 we see that there are a number of generation units and a single lumped load. Significantly, we also observe that all generation units are modeled as if they were connected directly to the load, i.e., transmission is not modeled. The implication of this is that, in generation adequacy evaluation, transmission is assumed to be perfectly reliable.

4.2 Generation system model 

In the basic capacity planning study, each individual generation unit is represented using the two-state model illustrated in Fig.4.2.

Fig. 4.2: Two-State Model



Important relations for this model, in terms of long-run availability A and long-run unavailability U, are provided here again, for convenience.



one can approximate the effects of derating and, for peaking plants, of reserve shutdown, by using equivalent forced outage rate, EFOR, according to:

4.3 Capacity outage probability table:

This table is also known as the generation model. It can be constructed using the binomial distribution. Although the binomial distribution has limited application when the unit sizes and FORs differ, the table can still be built in such cases by starting with the smallest unit and continuing to add one unit at a time until all units have been processed.

4.3.1 Capacity outage table for identical units:

A capacity table is simply a probabilistic description of the possible capacity states of the system being evaluated. The simplest case is that of the 1-unit system, where there are two possible capacity states: 0 and C, where C is the maximum capacity of the unit. Table 4.1 shows the capacities and corresponding probabilities.

Table 4.1: Capacity Table for 1 Unit System

  Capacity      Probability
  C             A
  0             U

We may also describe this system in terms of capacity outage states. Such a description is generally given via a capacity outage table, as in Table 4.2.

Table 4.2: Capacity Outage Table for 1 Unit System

  Capacity Outage    Probability
  0                  A
  C                  U

Figure 4.3 provides the probability mass function (pmf) for this 1 unit system.


Fig. 4.3: pmf for Capacity Outage of 1 Unit Example

Now consider a two-unit system, with both units of capacity C. We can obtain the capacity outage table by basic reasoning, resulting in Table 4.3.

Table 4.3: Capacity Outage Table for 2 Identical Units

  Capacity Outage    Probability
  0                  A²
  C                  AU
  C                  UA
  2C                 U²

We may also obtain Table 4.3 more formally by considering the fact that it provides the pmf of the sum of two random variables. Define X1 as the capacity outage random variable (RV) for unit 1 and X2 as the capacity outage RV for unit 2, with pmfs fX1(x) and fX2(x), each of which appears as in Fig. 4.3. We desire fY(y), the pmf of Y, where Y = X1 + X2. Recall that we can obtain fY(y) by convolving fX1(x) with fX2(x), i.e.,



  fY(y) = ∫ fX1(t) fX2(y − t) dt,  integrated over all t          (4)

But inspection of fX1(x) and fX2(x), as given in Fig. 4.3, indicates that, since X1 and X2 are discrete random variables, their pmfs are comprised of impulses. Convolution of any function with an impulse function simply shifts and scales that function: the shift moves the origin of the original function to the location of the impulse, and the scaling is by the value of the impulse. Fig. 4.4 illustrates this idea for the case at hand.

Fig. 4.4: Convolution of Generator Outage Capacity pmfs

Figure 4.5 shows the resultant pmf for the capacity outage for 2 identical units each of capacity C.


Fig. 4.5: pmf for Capacity Outage of 2 Unit Example

We note that Fig. 4.5 indicates there are only 3 states, but in Table 4.3 there are 4. One may reason from Table 4.3 that there are two possible ways of seeing a capacity outage of C: either unit 1 goes down or unit 2 goes down. Since these two states are the same, we may combine their probabilities, resulting in Table 4.4, which conforms to Fig. 4.5.

Table 4.4: Capacity Outage Table for 2 Identical Units

  Capacity Outage    Probability
  0                  A²
  C                  2AU
  2C                 U²

The probabilities can be handled using a binomial distribution, since each of the n identical units may be considered as a "trial" with only two possible outcomes (up or down). We may then write the probability of having r units out of service as:


PX r  Pr[ X  r , n,U ] 

n! U r ( A)( nr ) r!(n  r )!

(U19.5)
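The same tables can be built programmatically for non-identical units. The sketch below uses the usual recursive unit-addition idea (each unit's outage states are convolved into the table one at a time), which also accommodates de-rated states as referred to in Section 4.4; the unit data are assumed example values.

# Capacity outage probability table by recursive unit addition (convolution).
# Each unit is described by a dict {capacity_outage_MW: probability}; a de-rated
# unit simply has more than two states. Unit data below are assumed examples.
from collections import defaultdict

def add_unit(table, unit_states):
    """Convolve one unit's outage states into the existing capacity outage table."""
    new = defaultdict(float)
    for out, p in table.items():
        for u_out, u_p in unit_states.items():
            new[out + u_out] += p * u_p
    return dict(new)

units = [
    {0: 0.97, 25: 0.03},               # 25 MW unit, FOR = 0.03
    {0: 0.97, 25: 0.03},               # identical 25 MW unit
    {0: 0.95, 20: 0.03, 50: 0.02},     # 50 MW unit that can be de-rated to 30 MW
]

table = {0: 1.0}                        # start with nothing on outage
for u in units:
    table = add_unit(table, u)

print("Outage (MW)   Individual prob.   Cumulative prob. P(X >= outage)")
cum = 1.0
for out in sorted(table):
    print(f"{out:10d}   {table[out]:.6f}           {cum:.6f}")
    cum -= table[out]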

4.4 Recursive algorithm for rated and de-rated states:

The capacity outage probability table for a mix of fully rated and de-rated (partial-capacity) states is built by adding one unit at a time to the table, convolving each unit's outage states into the existing table, as illustrated in the sketch above.

4.5 Evaluation of loss of load indices:

Loss of load probability (LOLP):

This is the oldest and most basic probabilistic index. It is defined as the probability that the load will exceed the available generation. Its weakness is that it defines the likelihood of encountering trouble (loss of load) but not the severity; for the same value of LOLP, the degree of trouble may be less than 1 MW or more than 1000 MW. Therefore it cannot recognize the degree of capacity or energy shortage.



This index has been superseded by one of the following expected values in most planning applications because LOLP has less physical significance and is difficult to interpret.

4.6 loss of load expectation (LOLE): 

This is now the most widely used probabilistic index in deciding future generation capacity. It is generally defined as the average number of days (or hours) on which the daily peak load is expected to exceed the available capacity. It therefore indicates the expected number of days (or hours) for which a load loss or deficiency may occur. This concept implies a physical significance not forthcoming from the LOLP, although the two values are directly related.



It has the same weaknesses that exist in the LOLP.

4.7 Loss of energy: This index is defined as the expected energy not supplied (EENS) due to those occasions when the load exceeds the available generation. It is presently less used than LOLE but is a more appealing index, since it encompasses the severity of the deficiencies as well as their likelihood. It therefore reflects risk more truly and is likely to gain popularity as power systems become more energy-limited due to reduced prime energy and increased environmental controls.
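To connect Sections 4.5-4.7, the sketch below combines a capacity outage table (built as in the previous sketch) with a daily peak load curve to estimate LOLE and a crude EENS. The installed capacity, the week of daily peaks and the 24-hours-at-peak energy approximation are assumptions made only for the example.

# LOLE / EENS sketch: capacity outage table convolved with a daily peak load curve.
from collections import defaultdict

def add_unit(table, unit_states):
    new = defaultdict(float)
    for out, p in table.items():
        for u_out, u_p in unit_states.items():
            new[out + u_out] += p * u_p
    return dict(new)

units = [{0: 0.97, 25: 0.03}, {0: 0.97, 25: 0.03}, {0: 0.95, 20: 0.03, 50: 0.02}]
installed = 100.0                              # MW (25 + 25 + 50), assumed system

table = {0: 1.0}
for u in units:
    table = add_unit(table, u)

daily_peaks = [82, 78, 90, 85, 70, 65, 88]     # MW, assumed one week of daily peaks

lole = 0.0   # expected number of days with loss of load over the period
eens = 0.0   # expected energy not supplied (MWh), rough 24-h-at-peak approximation
for load in daily_peaks:
    for out, p in table.items():
        shortfall = load - (installed - out)
        if shortfall > 0:
            lole += p
            eens += p * shortfall * 24.0
print(f"LOLE = {lole:.4f} days/period, EENS = {eens:.2f} MWh/period")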


4.8 Frequency and duration approach: 

The F&D criterion is an extension of LOLE and identifies expected frequencies of encountering deficiencies and their expected durations.



It therefore contains additional physical characteristics but, although widely documented, is not used in practice. This is due mainly to the need for additional data and greatly increased complexity of the analysis without having any significant effect on the planning decisions.



The methods presented in this module so far provide the ability to compute LOLP, LOLE, EDNS, and EENS, but they do not provide the ability to compute:
  o the frequency of occurrence of an insufficient capacity condition,
  o the duration for which an insufficient capacity condition is likely to exist.

A competing method which provides these latter quantities goes, quite naturally, under the name of the frequency and duration (F&D) approach. The F&D approach is based on state space diagrams and Markov models: a system (for example, a 2-generator system) is represented via a Markov model, from which state probabilities, frequencies, and durations can be computed for each of the states. The underlying steps of the F&D approach, outlined in chapter 10 of [11], are:

1. Develop the Markov model and corresponding state transition matrix A for the system.

2. Use the state transition matrix to solve for the long-run probabilities from 0 = pA and Σ pj = 1 (note that the long-run subscript is dropped for brevity, but it should be understood that all probabilities in this section are long-run probabilities).

3. Evaluate the frequency of encountering the individual states from:

   fj = Σ(k≠j) λjk pj = pj Σ(k≠j) λjk          (U19.32)

   which can be expressed as: fj = pj × [total rate of departure from state j].

4. Evaluate the mean duration of each state, i.e., the mean time of residing in each state, from:

   Tj = pj / fj = 1 / Σ(k≠j) λjk          (U19.33)

   (Note that [11] uses mj to denote the duration of state j and uses Tj to denote the cycle time of state j, which is the reciprocal of the state-j frequency fj. One should carefully distinguish between the cycle time and the mean duration:
   • The cycle time is the mean time between entering a given state and next entering that same state.
   • The duration is the mean time of remaining in a given state.)

5. Identify the states corresponding to failure, lumped into a cumulative state denoted as J.

6. Compute the cumulative probability of the failure states, pJ, as the sum of the individual state probabilities:

   pJ = Σ(j∈J) pj          (U19.34)

7. Compute the cumulative frequency fJ of the failure states as the total of the frequencies of leaving a failure state j for a non-failure state k:

   fJ = Σ(j∈J) Σ(k∉J) fjk          (U19.35)

   Because fjk = λjk pj, (U19.35) can be expressed as

   fJ = Σ(j∈J) Σ(k∉J) λjk pj = Σ(j∈J) pj Σ(k∉J) λjk          (U19.36)

8. Compute the cumulative duration of the failure states as:

   TJ = pJ / fJ          (U19.37)

The above approach is quite convenient for a system of just a very few states, and it is important for our purposes because it lays out the underlying principles on which the F&D is based.
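A compact numerical sketch of steps 1-8 for a hypothetical system of two identical units (four states), where the load is assumed to be lost only when both units are out; the failure and repair rates are assumed example values.

# Frequency & Duration sketch for two identical units.
# States: 0=(up,up), 1=(up,dn), 2=(dn,up), 3=(dn,dn); failure state set J = {3}.
import numpy as np

lam, mu = 2.0, 98.0          # failures/yr and repairs/yr, assumed (U = lam/(lam+mu) = 0.02)

# Step 1: transition rate matrix A (A[j, k] = rate from state j to state k, j != k)
A = np.array([
    [0.0, lam, lam, 0.0],
    [mu,  0.0, 0.0, lam],
    [mu,  0.0, 0.0, lam],
    [0.0, mu,  mu,  0.0],
])
np.fill_diagonal(A, -A.sum(axis=1))

# Step 2: long-run probabilities from p A = 0 together with sum(p) = 1
M = np.vstack([A.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(M, b, rcond=None)

# Steps 3-4: frequency and mean duration of each individual state
departure = -np.diag(A)                 # total rate of departure from each state
f = p * departure
T = 1.0 / departure
print("state probabilities:", np.round(p, 6))
print("state frequencies (/yr):", np.round(f, 4), " durations (yr):", np.round(T, 5))

# Steps 5-8: cumulative indices for the failure state set J = {3}
J = [3]
pJ = p[J].sum()
fJ = sum(p[j] * sum(A[j, k] for k in range(4) if k not in J) for j in J)
TJ = pJ / fJ
print(f"pJ = {pJ:.6f}, fJ = {fJ:.4f} /yr, TJ = {TJ * 8760:.2f} hours")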


MODULE-5

Operating Reserve

CONTENTS: 5.1 General Concept, 5.2 PJM Method, 5.3 Modified PJM Method


5.1 General concept: 

Assuming there is sufficient installed capacity in the system, the allocation of operating reserves consists in the decision concerning the capacity and units to commit to replace failed generating units.



The risk of load interruption upon the failure of a generating unit can be minimized keeping part of the reserve ‘spinning’; that is, as units connected to the grid, synchronized and ready to take load, or keeping available a group of units with quick-start capability.



These units can be rapidly brought on-line to pick up load. Both the spinning and non-spinning reserve form the operating reserve of the system. Non-spinning reserve can only be provided by hydraulic or gas turbine units, which have start-up times in the order of minutes, whereas spinning reserve can be provided by a broader range of units.



Actually, the division between spinning and non-spinning reserve can largely be one of definition. Fast-start units can be considered spinning reserve; interruptible loads and assistance from interconnected systems can be included in both categories. Accordingly, some systems may or may not include non-spinning reserve when assessing generation reliability.



Besides the immediate-response group, conventionally able to be brought on-line in less than 10 minutes, a slower contingent of reserves, or "hot reserve", can be kept available. Hot reserve is capacity generally provided by thermal generation where the turbo-alternator is shut down but the boiler is left in a hot state. Some regions, like New York and New England, require additional reserve that must be fully available within 30 minutes; the California ISO requires a replacement reserve to be fully available within 60 minutes.



This additional reserve (replacement or secondary reserve) is used to redispatch after contingencies and to restore the operating reserve requirements.

Requirements: NERC's Operating Manual recommends keeping half of the operating reserve spinning. NERC also specifies that, following a loss of resources, a Control Area shall take appropriate steps to reduce its Area Control Error (ACE) to meet the Disturbance Control Standard.

The ACE is a measure of the instantaneous unbalance between actual and scheduled interchange. However, there is no agreement on minimum operating reserve standards. In the East Central Area Reliability Council (ECAR), the requirements for spinning and non-spinning reserve are both 3% of the daily peak load. In the mid-Atlantic region, spinning reserve must be the greater of 700 MW or the largest unit on line. In Florida, spinning reserve must equal 25% of the largest unit on-line.

The Western Systems Coordinating Council requires reserves equal to 5% of load supplied by hydroelectric resources plus 7% of the load supplied by thermal generation, with spinning reserves not less than one half of the total operating reserve. In effect, beyond NERC’s operating standards and each region’s operating experience, there are no other bases for the adopted reserve requirements.

Spinning reserve: reserve capacity that is synchronized and ready to take load.

Operating reserve: spinning reserve plus the quick/rapid-start capacity.

Methods for determining operating reserve requirements:
I. Deterministic methods: such as a reserve equal to the largest unit, or a percentage of the load.
II. Probabilistic methods: the PJM method, the security function method and the frequency & duration method.

5.2 Basic PJM Method:
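A sketch of the basic PJM idea as it is usually described: within a short lead time (the time needed to bring in additional generation), repairs are neglected, each committed unit is assigned an outage replacement rate of roughly λT, and the unit commitment risk is the probability that the resulting capacity outage exceeds the committed reserve. The unit data, lead time and load below are assumed example values, not figures from the text.

# Basic PJM risk sketch (outage replacement rates over a short lead time).
from collections import defaultdict

lead_time_h = 2.0                       # h: assumed time to bring in extra generation
units = [                               # (capacity MW, failure rate per hour), assumed
    (60.0, 0.0005),
    (60.0, 0.0005),
    (40.0, 0.001),
]
load = 130.0                            # MW currently being supplied, assumed

# Build the capacity outage table using ORRs (lam*T) instead of long-run FORs
table = {0.0: 1.0}
for cap, lam in units:
    orr = lam * lead_time_h             # probability the unit fails within the lead time
    new = defaultdict(float)
    for out, p in table.items():
        new[out] += p * (1.0 - orr)
        new[out + cap] += p * orr
    table = dict(new)

committed = sum(cap for cap, _ in units)
risk = sum(p for out, p in table.items() if committed - out < load)
print(f"unit commitment risk over a {lead_time_h} h lead time = {risk:.6e}")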


5.3 Modified PJM Method:

1. Using the scheduled units at t = 0, Risk(t1) = probability of insufficient generation at t1. This can be computed using the basic PJM method.
2. Assuming that the hot reserve units have not failed at t1, their failure probabilities at t2 are determined.
3. The generation model at t2 consists of the units in operation at t = 0 and the rapid-start units becoming available at t1.
4. The risk at t2 is given by Risk(t2) = probability of insufficient capacity at t2.
5. Next, the probabilities of the hot reserve units at t3 are computed. The generation model at t3 consists of the units in operation at t = 0 having operated for t3, the rapid-start units having operated for (t3 − t1), and the hot reserve units having operated for (t3 − t2).
6. This generation model is then used to compute the risk at t3.
7. The following has been suggested as an index of composite risk:
8. Risk = Risk(t1) + [Risk(t2) − Risk(t4)] + [Risk(t3) − Risk(t4)]

A 4-state model of a rapid start unit:


A 5-state model of hot reserve unit:


MODULE-6

Composite Generation and Transmission System

CONTENTS: 6.1 Data Requirement, 6.2 Outages, System and Load Point Indices, 6.3 Application to a Simple System


6.1 Data requirement:

• Composite generation and transmission system evaluation is concerned with the total problem of assessing the ability of the generation and transmission system to supply adequate and suitable electrical energy to the major system load points (Hierarchical Level II, HL II).
• The problem of calculating reliability indices is equivalent to assessing the expected value of a test function F(x) over the system states x, i.e., E[F] = Σx F(x) P(x), where P(x) is the probability of state x.
• All basic reliability indices can be represented by this expression, by using suitable definitions of the test function.

Applications in power system planning:
• Expansion: selection of new generation, transmission and sub-transmission configurations;
• Operation: selection of operating scenarios;
• Maintenance: scheduling of generation and transmission equipment.

System indices (sometimes appearing under different names):
• LOLP = Loss of load probability
• LOLE = Loss of load expectation (h/year)
• EPNS = Expected power not supplied (MW)
• EENS = Expected energy not supplied (MWh/year)
• LOLF = Loss of load frequency (occ./year)
• LOLD = Loss of load duration (h)
• LOLC = Loss of load cost (US$/year)
• etc.


6.2 Outages, system and load point indices: Use the property of reliability index is an important aspects for reliability analysis. Reliability indices are classified as system based indices and load point indices.


6.3 Application to a simple system

• The definition of a simple system involves the concepts of series and parallel configurations. So, to understand it better, one should first learn the definitions of series and parallel configurations.
• A system is said to be simple if its components are connected in parallel, in series, or in a combination of both. In other words, a system is said to be simple if its reliability block diagram can be reduced into subsystems having independent components connected either in parallel or in series.
• In reliability theory, a system is an assemblage of, say, n identifiable components that perform some function. The n individual components are known as elementary components. The reliability of a system depends on the reliability of the elementary components.
• To evaluate the reliability of a system, we apply the rules of probability theory, such as the addition rule, multiplication rule, conditional probability, independence of events, or a combination of these. You are familiar with these rules, as all of them have been discussed in detail in Units 1 to 3 of MST-003. The choice of rules for a given situation depends on the logical connectivity of the elementary components, or of the subsystems made up of elementary components, in the system. In general, we follow the steps given below to evaluate the reliability of a system:

Step 1: Identify the elementary components or subsystems that constitute the given system, such that either their individual reliabilities are given or can be estimated. Let us call them the "units" comprising the system.
Step 2: Evaluate the reliabilities of those units whose reliabilities are not directly given.
Step 3: Draw a reliability block diagram of the system to represent the logical connectivity of the units (i.e., elementary components and subsystems).
Step 4: Determine the constraints that should be fulfilled for the successful operation of the system. For example, do we need successful operation of all units for the successful operation of the system? Is the successful operation of only one of them enough, or does some other combination of components need to operate successfully?
Step 5: Apply the rules of probability theory, such as the addition rule, multiplication rule, conditional probability, and independence of events, or a combination of these, to evaluate the reliability of the system.


APPLICATION:

• Power grids across the globe are subject to constant change and expansion. Demand and generation capacity are continually growing, and new load and generation technologies are being introduced. This represents a major challenge in delivering a secure, reliable electricity supply, and system operation must be assured. Power system planning and reliability studies can help ensure reliable operation of existing and future power grids.
• Assessments are made using advanced power system studies, including:
  – Steady state analysis
  – Transfer capability analysis
  – Short circuit analysis
  – Transient stability analysis
  – Sub-transient switching studies
  – SSR and SSCI studies

