QUANTITATIVE RESEARCH DESIGNS

CONCEPTS IMPORTANT TO DESIGN

A. Causality
- The first assumption one must make in examining causality is that things have causes and that causes lead to effects.
- Hume (a positivist philosopher) proposed that the following three conditions must be met to establish causality:
  1. there must be a strong correlation between the proposed cause and the effect
  2. the proposed cause must precede the effect in time
  3. the cause must be present whenever the effect occurs
- A philosophical group known as the essentialists proposed that two concepts must be considered in determining causality: necessary and sufficient.

- John Stuart Mill, another philosopher, suggested that in addition to the preceding criteria of causation, there must be no alternative explanations for why a change in one variable seems to lead to a change in a second variable (Cook & Campbell, 1979).
- A theoretical understanding of causation is important because it improves the ability to predict and, in some cases, to control events in the real world.
- Multicausality is a more recent idea related to causality; it is the recognition that a number of interrelating variables can be involved in causing a particular effect.
• Cook and Campbell (1979) suggested three levels of causal assertions that one must consider in establishing causality:
  a. Molar causal laws – relate to large and complex objects
  b. Intermediate mediation – considers causal factors operating between the molar and micro levels
  c. Micromediation – examines causal connections at the level of small particles such as atoms

Example (Cook and Campbell, 1979): Turning a light switch causes the light to come on (molar). An electrician would tend to explain the cause of the light coming on in terms of wires and electrical current (intermediate mediation). A physicist would explain the cause of the light coming on in terms of ions, atoms, and subparticles (micromediation).

• The essentialists' ideas of necessary and sufficient do not hold up well when one views a phenomenon from the perspective of multiple causation.

Example: The light switch may not be necessary to turn on the light if the insulation has worn off the electrical wires. Conversely, the light will not come on even though the switch is turned on if the light bulb is burned out.

• Very few phenomena in nursing can be clearly reduced to a single cause and a single effect. The greater the proportion of causal factors that can be identified and explored, the clearer the understanding of the phenomenon.

B. Probability
- Strict causality/causation may apply to basic sciences such as chemistry or physics but is unlikely to apply in the health sciences or social sciences.
- Because of the complexity of the nursing field, nurses deal in probabilities.

- Probability addresses relative, rather than absolute, causality. From the perspective of probability, a cause will not produce a specific effect each time that particular cause occurs.
- Reasoning changes when one thinks in terms of probabilities. Rather than seeking to prove that A causes B, a researcher would state that if A occurs, there is, for example, a 50% probability that B will occur.

C. Bias
- The term bias means to slant away from the true or expected.
- Bias is of great concern in research because of its potential effect on the meaning of the study findings.
- Many factors related to research can be biased: the researcher, the measurement tools, the individual subjects, the sample, the data, and the statistics. Thus, an important concern in designing a study is to identify possible sources of bias and to eliminate or avoid them.

> Manipulation
- In nursing, manipulation tends to have a negative connotation and is associated with one person underhandedly causing another person to behave in a desired way.
- The word manipulate simply means to move around or to control the movement of something, such as manipulating a syringe.
- In research, manipulation is used in experimental or quasi-experimental research and is sometimes called the treatment.

D. Control
- Control means having the power to direct or manipulate factors to achieve a desired outcome.
- Control is very important in research, particularly in experimental and quasi-experimental studies. The greater the control the researcher has over the study situation, the more credible the study findings will be.

STUDY VALIDITY
> A measure of the truth or accuracy of a claim, which is an important concern throughout the research process.
> Questions of validity refer back to the propositions from which the study was developed. Is the theoretical proposition an accurate reflection of reality? Was the study designed well enough to provide a valid test of the proposition?
> Validity is a complex idea that is important to the researcher and to those who read the study report and consider using the findings in their practice.
> Threats to validity should be critically analyzed, and judgments should be made about how seriously these threats affect the integrity of the findings.

> Types:
  1. Statistical Conclusion Validity
  2. Internal Validity
  3. Construct Validity
  4. External Validity
> To make decisions about validity, the researcher must address a variety of questions, such as the following:
  1. Is there a relationship between the two variables? (Statistical Conclusion Validity)
  2. Given that there is a relationship, is it plausibly causal from one operational variable to the other, or would the same relationship have been obtained in the absence of any treatment of any kind? (Internal Validity)
  3. Given that the relationship is plausibly causal and is reasonably known to be from one variable to another, what are the particular cause-and-effect constructs involved in the relationship? (Construct Validity)
  4. Given that there is probably a causal relationship from Construct A to Construct B, how generalizable is this relationship across persons, settings, and times? (External Validity)
(Cook and Campbell, 1979, p. 39)
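To make the first question (statistical conclusion validity) concrete, a researcher might compute a correlation coefficient and its p value for two measured variables. The sketch below is a minimal, hypothetical Python example; the variable names and the numbers are invented for illustration only and are not from any actual study.

    # Minimal sketch: is there a relationship between two variables?
    # (Statistical conclusion validity, question 1 above.)
    # Hypothetical data: hours of prenatal education attended and a
    # nutrition-practices score for ten mothers (invented numbers).
    from scipy import stats

    education_hours = [0, 1, 2, 2, 3, 4, 5, 6, 7, 8]
    nutrition_score = [52, 55, 53, 60, 61, 64, 63, 70, 72, 75]

    r, p_value = stats.pearsonr(education_hours, nutrition_score)
    print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
    # A small p value suggests the observed relationship is unlikely to be
    # due to chance alone, but it says nothing yet about causality, which
    # is the concern of internal validity (question 2).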

Internal Validity
- The extent to which it is possible to make an inference that the independent variable is truly influencing the dependent variable.
- True experiments possess a high degree of internal validity because procedures such as control groups and randomization enable the researcher to control extraneous variables, thereby ruling out alternative or competing explanations (rival hypotheses) for the results.
- Threats to Internal Validity
• History – the occurrence of external events that take place concurrently with the independent variable and that can affect the dependent variable.
  Ex. In a study of the effectiveness of a county-wide nurse outreach program in relation to improved health-related practices before delivery (e.g., better nutritional practices, cessation of smoking, earlier prenatal care), history would be a threat if, for example, a media campaign on healthy pregnancy ran during the same period, because the campaign rather than the program could explain any improvement.

• Selection – encompasses biases resulting from preexisting differences between groups; it occurs when individuals are not assigned randomly to groups, so there is a possibility that the groups are nonequivalent.
• Maturation – refers to processes occurring within subjects during the course of the study as a result of the passage of time rather than as a result of the treatment or independent variable (e.g., physical growth, emotional maturity, fatigue, and the like).
  Ex. In a study of the effects of a special sensorimotor development program for developmentally delayed children, improvement might reflect the children's natural maturation rather than the program itself.
• Testing – refers to the effects of taking a pretest on subjects' performance on a posttest. It has been documented in several studies, particularly those dealing with opinions and attitudes, that the mere act of collecting data from people changes them.

• Instrumentation – reflects changes in measuring instruments or methods of measurement between two points of data collection.
  Ex.:
  a. Using one measure of stress at baseline and a revised measure at follow-up: any differences might reflect changes in the measuring tool rather than the effect of the independent variable.
  b. Using the same tool, but the people collecting the data are more experienced the second time.
• Mortality – arises from differential attrition in the groups being compared.
  - The loss of subjects during the course of a study may differ from one group to another because of initial differences in interest, motivation, health, and so on.
  - The risk of attrition is especially great when the length of time between points of data collection is long.

TABLE 9.2 Research Designs and Threats to Internal Validity

THREAT            DESIGNS MOST LIKELY TO BE AFFECTED
History           One-group pretest-posttest; Time series; Prospective cohort; Crossover/repeated measures
Selection         Nonequivalent control group (especially posttest-only); Case-control; "Natural" experiments
Maturation        One-group pretest-posttest
Testing           All pretest-posttest designs
Instrumentation   All pretest-posttest designs
Mortality         Prospective cohort; Longitudinal experiments and quasi-experiments

External Validity
- The generalizability of the research findings to other settings or samples.
- The aim of research typically is to reveal enduring relationships, the understanding of which can be used to improve human health and well-being.
- A study is externally valid to the extent that the sample is representative of the broader population and the study setting and experimental arrangements are representative of other environments.
- Threats to External Validity
1. Expectancy effects – subjects may behave in a particular manner largely because they are aware of their participation in a study (Hawthorne effect).

2. Novelty effects – when a treatment is new, subjects and research agents alike might alter their behavior in various ways. Results may reflect reactions to the novelty rather than to the intrinsic nature of the intervention; once the treatment is more familiar, results might be different.
3. Interaction of history and treatment effects – the results may reflect the impact of the treatment together with some other event external to the study.
4. Experimenter effects – subjects' behavior may be affected by characteristics of the researchers.
5. Measurement effects – researchers collect a considerable amount of data in most studies, such as pretest information, background data, and so forth. The results may not apply to another group of people who are not also exposed to the same data collection (and attention-giving) procedures.

OVERVIEW OF RESEARCH DESIGN TYPES
 Quantitative research designs vary along a number of dimensions, as shown in the following table.
 Quantitative designs tend to be fairly structured. Quantitative researchers specify the nature of any intervention, the comparisons to be made, the methods to be used to control extraneous variables, the timing of data collection, the study site and setting, and the information to be given to participants – all before a single piece of data is gathered.
 Qualitative researchers, on the other hand, make deliberate modifications that are sensitive to what is being learned as data are gathered.
 Quantitative research often involves making comparisons. These can be made either between separate groups of people (a between-subjects design) or within the same people under two conditions or at two points in time (a within-subjects design).

TABLE 8.1 Dimensions of Research Design (dimension: design – major features)

Degree of structure
  • Structured – design is specified before data are collected
  • Flexible – design evolves during data collection
Type of group comparisons
  • Between-subjects – subjects in the groups being compared are different people
  • Within-subjects – subjects in the groups being compared are the same people at different times or in different conditions
Time frame
  • Cross-sectional – data are collected at one point in time
  • Longitudinal – data are collected at two or more points in time over an extended period
Control over the independent variable
  • Experimental – manipulation of the independent variable, control group, randomization
  • Quasi-experimental – manipulation of the independent variable, but no randomization or no control group
  • Preexperimental – manipulation of the independent variable, no randomization or control group, limited control over extraneous variables
  • Nonexperimental – no manipulation of the independent variable
Measurement of independent and dependent variables
  • Retrospective – study begins with the dependent variable and looks backward for the cause or antecedent
  • Prospective – study begins with the independent variable and looks forward for the effect

TYPES OF QUANTITATIVE RESEARCH DESIGN

1. Experimental
- In an experiment, researchers are active agents, not passive observers.
- Characteristics of true experiments:
  a. Manipulation
     - The experimenter does something (the experimental treatment or intervention) and observes the effect on the dependent variable.
  b. Control
     - Control is achieved in an experimental study by manipulating, randomizing, carefully preparing the experimental protocols, and using a control group.
     - Control group – the group of subjects whose performance on a dependent variable is used to evaluate the performance of the experimental (treatment) group.
     - Experimental group – the group that receives the intervention; its performance is measured on the same dependent variable.

  c. Randomization
     - Involves placing subjects in groups at random (also called random assignment).
     - Random means that every subject has an equal chance of being assigned to any group, which avoids systematic bias.
     - Can be done through:
       a. Flipping a coin – heads for one group, tails for the other
       b. Pulling names from a hat – subjects' names are written on slips of paper, placed in a hat, and drawn
       c. Table of random numbers – a table displaying hundreds of digits arranged in random order
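The same logic can be carried out with a computer's random number generator instead of a coin or a printed table. The sketch below is a minimal, hypothetical Python example; the subject identifiers are invented, and a real trial would document its seed and allocation procedure in the study protocol.

    # Minimal sketch of simple random assignment to two groups.
    # Hypothetical subject list; in practice this would come from
    # the study's enrollment records.
    import random

    subjects = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]

    random.seed(2020)          # fixed seed so the allocation is reproducible
    shuffled = subjects[:]     # copy so the original list is untouched
    random.shuffle(shuffled)   # computerized equivalent of drawing names from a hat

    half = len(shuffled) // 2
    experimental_group = shuffled[:half]
    control_group = shuffled[half:]

    print("Experimental:", experimental_group)
    print("Control:     ", control_group)

Cluster randomization, described next, would shuffle whole clusters (e.g., hospital units) rather than individual subjects.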

- Cluster randomization involves randomly assigning clusters of individuals (e.g., groups of patients who enter a hospital unit at the same time) to different treatment groups.
- Refer to Table 8.5, Experimental Designs.
- Experimental conditions. The following are among the questions researchers need to address:
  • What is the intervention, and how does it differ from usual methods of care?
  • If there are two alternative interventions, how exactly do they differ?
  • What are the specific procedures to be used with those receiving the intervention?
  • What is the dosage or intensity of the intervention?

  • Over how long a period will the intervention be administered, how frequently will it be administered, and when will the treatment begin (e.g., 2 hours after surgery)?
  • Who will administer the intervention? What are their credentials, and what type of special training will they receive?
  • Under what conditions will the intervention be withdrawn or altered?
- The control condition
  - Referred to as the counterfactual, which is used as the basis of comparison in a study.
  - The following are among the possibilities for the counterfactual:
    a. an alternative intervention
    b. a placebo or pseudointervention presumed to have no therapeutic value (a placebo effect can occur because of subjects' expectations)

    c. standard methods of care
    d. different doses or intensities of treatment
- Experimental strengths:
  a. The most powerful method available for testing hypotheses about cause-and-effect relationships between variables.
  b. The confidence with which causal relationships can be inferred.
     Lazarsfeld identified three criteria for causality:
     • a cause must precede an effect in time
     • an empirical relationship between the presumed cause and the presumed effect must be present
     • the relationship cannot be explained as being caused by a third variable
- Experimental limitations:
  a. There are often constraints that make an experimental approach impractical or impossible.

  b. Experiments are sometimes criticized for their artificiality:
     • randomization and equal treatment within groups
     • focus on only a handful of variables while holding all else constant
  c. Experiments conducted without a theoretical framework are questioned because the observed outcomes cannot be causally explained.
  d. Experiments conducted in clinical settings are often administered by clinical staff rather than researchers, so it may be questionable whether the experimental group fully received the treatment and whether the control group truly did not.
  e. Clinical studies are usually conducted in environments over which researchers have little control.
  f. A problem sometimes emerges if subjects themselves have discretion about participation in the treatment.
  g. Hawthorne effect.

2. Quasi-Experimental Designs
- Quasi-experiments involve manipulation of the independent variable, but randomization to treatment groups is lacking (see Figure 8.3).

Nonequivalent Control Group Designs
• The comparison group is not a match with the treatment group but is only similar to it.
  Ex. The effect of introducing primary nursing on staff morale
• Two types:
  - Nonequivalent control group pretest-posttest design
  - Nonequivalent control group posttest-only design – also called preexperimental by Campbell and Stanley

Time Series Designs
• Types:
  - One-group pretest-posttest design

Figure 8.3 Characteristics of different quantitative research designs (decision flow):
  Is there an intervention (control over the independent variable)?
    NO  → Nonexperimental research
    YES → Is there random assignment to treatment groups?
      YES → Experimental research
      NO  → Are there efforts to compensate for the lack of random assignment?
        YES → Quasi-experimental research
        NO  → Preexperimental research
• Types (continued):
  - Interrupted time series design – information is collected over an extended period, and an intervention is introduced during that period
  - Time series nonequivalent control group design
- Strength of quasi-experimental designs: practical.
- Weakness: there may be several rival hypotheses competing with the experimental manipulation as explanations for the results.
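To make the interrupted time series idea concrete, the minimal sketch below compares repeated observations before and after an intervention point. The data and the intervention month are invented for illustration; a real analysis would typically use segmented regression rather than a simple comparison of means.

    # Minimal sketch of an interrupted time series comparison.
    # Hypothetical monthly fall counts on a unit; a new safety
    # protocol (the intervention) is introduced after month 6.
    monthly_falls = [12, 11, 13, 12, 14, 13,   # pre-intervention months 1-6
                     9, 8, 9, 7, 8, 7]         # post-intervention months 7-12

    intervention_month = 6
    pre = monthly_falls[:intervention_month]
    post = monthly_falls[intervention_month:]

    pre_mean = sum(pre) / len(pre)
    post_mean = sum(post) / len(post)

    print(f"Mean falls before intervention: {pre_mean:.1f}")
    print(f"Mean falls after intervention:  {post_mean:.1f}")
    # A drop after the intervention is consistent with an effect, but rival
    # hypotheses (history, maturation, seasonal trends) must still be ruled
    # out, as noted in the weakness above.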

ADDITIONAL TYPES OF QUANTITATIVE RESEARCH

Survey Research
- Designed to obtain information from populations regarding the prevalence, distribution, and interrelations of variables within those populations.
- May be:
  • cross-sectional
  • longitudinal
- Data can be collected through:
  • personal interviews
  • telephone interviews
  • questionnaires

Evaluation Research
- An applied form of research that involves finding out how well a program, practice, procedure, or policy is working.

Needs Assessment
- Represents an effort to provide a decision maker with information for action.

Secondary Analysis
- Involves the use of data gathered in a previous study to test new hypotheses or explore new relationships.

Meta-Analysis
- An application of statistical procedures to findings from research reports (see the sketch after this list).

Delphi Surveys
- Developed as a tool for short-term forecasting.
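As an illustration of the kind of statistical procedure a meta-analysis applies, the sketch below pools effect estimates from several studies using standard fixed-effect, inverse-variance weighting. The study values are hypothetical and serve only to show the arithmetic, not to represent any actual body of findings.

    # Minimal sketch of fixed-effect (inverse-variance) pooling.
    # Hypothetical effect sizes and standard errors from three reports.
    studies = [
        {"effect": 0.40, "se": 0.20},
        {"effect": 0.25, "se": 0.10},
        {"effect": 0.55, "se": 0.30},
    ]

    # Each study is weighted by 1 / SE^2, so more precise studies count more.
    weights = [1 / s["se"] ** 2 for s in studies]
    pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(f"Pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")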

Methodological Research
- Refers to controlled investigation of the ways of obtaining, organizing, and analyzing data.

Content Analysis Studies
- Involve the quantification of narrative qualitative material.
