Three Types of Research

1. Causal Research

When most people think of scientific experimentation, research on cause and effect is most often brought to mind. Experiments on causal relationships investigate the effect of one or more variables on one or more outcome variables. This type of research also determines whether one variable causes another variable to occur or change. An example of this type of research would be altering the amount of a treatment and measuring the effect on study participants.

2. Descriptive Research

Descriptive research seeks to depict what already exists in a group or population. An example of this type of research would be an opinion poll to determine which Presidential candidate people plan to vote for in the next election. Descriptive studies do not seek to measure the effect of a variable; they seek only to describe.

3. Relational Research

A study that investigates the connection between two or more variables is considered relational research. The variables that are compared are generally already present in the group or population. For example, a study that looked at the proportion of males and females that would purchase either a classical CD or a jazz CD would be studying the relationship between gender and music preference.

Theory and Hypothesis

A theory is a well-established principle that has been developed to explain some aspect of the natural world. A theory arises from repeated observation and testing and incorporates facts, laws, predictions, and tested hypotheses that are widely accepted.

A hypothesis is a specific, testable prediction about what you expect to happen in your study. For example, a study designed to look at the relationship between study habits and test anxiety might have a hypothesis that states, "This study is designed to assess the hypothesis that students with better study habits will suffer less test anxiety." Unless your study is exploratory in nature, your hypothesis should always explain what you expect to happen during the course of your experiment or research.

While the terms are sometimes used interchangeably in general practice, the difference between a theory and a hypothesis is important when studying experimental design. Some important distinctions to note include:


• A theory predicts events in general terms, while a hypothesis makes a specific prediction about a specified set of circumstances.
• A theory has been extensively tested and is generally accepted, while a hypothesis is a speculative guess that has yet to be tested.

Effect of Time in Psychology Research

There are two types of time dimensions that can be used in designing a research study.

1. Cross-sectional research takes place at a single point in time.
   o All tests, measures, or variables are administered to participants on one occasion.
   o This type of research seeks to gather data on present conditions instead of looking at the effects of a variable over a period of time.

2. Longitudinal research is a study that takes place over a period of time.
   o Data is first collected at the outset of the study, and may then be gathered repeatedly throughout the length of the study.
   o Some longitudinal studies may occur over a short period of time, such as a few days, while others may take place over a period of decades.
   o The effects of aging are often investigated using longitudinal research.

Causal Relationships Between Variables

What do we mean when we talk about a "relationship" between variables? In psychological research, we are referring to a connection between two or more factors that we can measure or systematically vary. One of the most important distinctions to make when discussing the relationship between variables is the meaning of causation.

• A causal relationship exists when one variable causes a change in another variable. These types of relationships are investigated by experimental research in order to determine whether changes in one variable truly cause changes in another variable.

Correlational Relationships Between Variables

A correlation is the measurement of the relationship between two variables. These variables already occur in the group or population and are not controlled by the experimenter.

• A positive correlation is a direct relationship in which, as the amount of one variable increases, the amount of a second variable also increases.


• In a negative correlation, as the amount of one variable goes up, the levels of another variable go down.
• In both types of correlation, there is no evidence or proof that changes in one variable cause changes in the other variable. A correlation simply indicates that there is a relationship between the two variables.

The most important concept to take from this is that correlation does not equal causation. Many popular media sources make the mistake of assuming that simply because two variables are related, a causal relationship exists.

Q. What is Validity?

A. Validity is the extent to which a test measures what it claims to measure. It is vital for a test to be valid in order for the results to be accurately applied and interpreted. Validity isn't determined by a single statistic, but by a body of research that demonstrates the relationship between the test and the behavior it is intended to measure. There are three types of validity:

Content Validity: When a test has content validity, the items on the test represent the entire range of possible items the test should cover. Individual test questions may be drawn from a large pool of items that cover a broad range of topics. In some instances where a test measures a trait that is difficult to define, an expert judge may rate each item's relevance. Because each judge bases their rating on opinion, two independent judges rate the test separately. Items that are rated as strongly relevant by both judges will be included in the final test.

Criterion-related Validity: A test is said to have criterion-related validity when it is demonstrated to be effective in predicting criteria or indicators of a construct. There are two different types of criterion validity:

• Concurrent Validity occurs when the criterion measures are obtained at the same time as the test scores. This indicates the extent to which the test scores accurately estimate an individual's current state with regard to the criterion. For example, a test that measures levels of depression would be said to have concurrent validity if it measured the current levels of depression experienced by the test taker.

• Predictive Validity occurs when the criterion measures are obtained at a time after the test. Examples of tests with predictive validity are career or aptitude tests, which are helpful in determining who is likely to succeed or fail in certain subjects or occupations.


Construct Validity: A test has construct validity if it demonstrates an association between the test scores and the prediction of a theoretical trait. Intelligence tests are one example of measurement instruments that should have construct validity.

Q. What Is Reliability?

A. Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly. For example, if a test is designed to measure a trait (such as introversion), then each time the test is administered to a subject, the results should be approximately the same. Unfortunately, it is impossible to calculate reliability exactly, but there are several different ways to estimate it.

Test-Retest Reliability

To gauge test-retest reliability, the test is administered twice at two different points in time. This kind of reliability is used to assess the consistency of a test across time. It assumes that there will be no change in the quality or construct being measured. Test-retest reliability is best used for things that are stable over time, such as intelligence. Generally, reliability will be higher when little time has passed between tests.

Inter-rater Reliability

This type of reliability is assessed by having two or more independent judges score the test. The scores are then compared to determine the consistency of the raters' estimates. One way to test inter-rater reliability is to have each rater assign each test item a score. For example, each rater might score items on a scale from 1 to 10. Next, you would calculate the correlation between the two ratings to determine the level of inter-rater reliability. Another means of testing inter-rater reliability is to have raters determine which category each observation falls into and then calculate the percentage of agreement between the raters. So, if the raters agree 8 out of 10 times, the test has an 80% inter-rater reliability rate.

Parallel-Forms Reliability

Parallel-forms reliability is gauged by comparing two different tests that were created using the same content. This is accomplished by creating a large pool of test items that measure the same quality and then randomly dividing the items into two separate tests. The two tests should then be administered to the same subjects at the same time.

Internal Consistency Reliability

This form of reliability is used to judge the consistency of results across items on the same test. Essentially, you are comparing test items that measure the same construct to determine the test's internal consistency. When you see a question that seems very similar to another test question, it may indicate that the two questions are being used to gauge reliability. Because the two questions are similar and designed to measure the same thing, the test taker should answer both questions the same way, which would indicate that the test has internal consistency.
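
To make the two inter-rater approaches above concrete, here is a minimal Python sketch that computes both a correlation between two raters' scores and their percentage of exact agreement. The rating data are hypothetical and chosen only so that the raters agree on 8 of 10 items, echoing the 80% example above.

    # Inter-rater reliability: correlation and percent agreement (hypothetical data).
    import numpy as np

    # Scores from two independent raters for the same ten test items (1-10 scale).
    rater_a = np.array([7, 4, 9, 6, 8, 3, 5, 7, 6, 9])
    rater_b = np.array([7, 4, 9, 6, 8, 3, 6, 7, 6, 8])

    # Approach 1: correlation between the two sets of ratings.
    correlation = np.corrcoef(rater_a, rater_b)[0, 1]

    # Approach 2: percentage of items on which the raters agree exactly.
    percent_agreement = np.mean(rater_a == rater_b) * 100

    print(f"Inter-rater correlation: {correlation:.2f}")
    print(f"Percent agreement: {percent_agreement:.0f}%")  # 80% for this data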

The Simple Experiment: Finding Cause-and-Effect Relationships

What is a Simple Experiment?

A simple experiment is used to establish cause and effect, so this type of study is often used to determine the effect of a treatment. In a simple experiment, study participants are randomly assigned to one of two groups. Generally, one group is the control group and receives no treatment, while the other group is the experimental group and receives the treatment.

Parts of a Simple Experiment

The experimental hypothesis: a statement that predicts that the treatment will cause an effect. The experimental hypothesis will always be phrased as a cause-and-effect statement.

The null hypothesis: a hypothesis that the experimental treatment will have no effect on the participants or dependent variables. It is important to note that failing to find an effect of the treatment does not mean that there is no effect. The treatment might impact another variable that the researchers are not measuring in the current experiment.

The independent variable: the treatment variable that is manipulated by the experimenter.

The dependent variable: the response that the experimenter is measuring.

The control group: made up of individuals who are randomly assigned to a group but do not receive the treatment. The measures taken from the control group are then compared to those from the experimental group to determine if the treatment had an effect.

The experimental group: made up of individuals who are randomly assigned to the group and then receive the treatment. The scores of these participants are compared to those in the control group to determine if the treatment had an effect.

Determining the Results of a Simple Experiment

Once the data from the simple experiment have been gathered, researchers compare the results of the experimental group to those of the control group to determine if the treatment had an effect. But how do researchers determine this effect? Because of the ever-present possibility of error, we can never be 100% sure of the relationship between two variables. However, there are ways to determine if a meaningful relationship most likely exists. Experimenters use inferential statistics to determine if the results of an experiment are meaningful. Inferential statistics is the branch of statistics that deals with drawing inferences about a population based upon measures taken from a representative sample of that population.
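
The following is a minimal Python sketch of the comparison just described: hypothetical participants are randomly assigned to a control and an experimental group, and an independent-samples t-test from scipy compares the two groups' scores. All of the numbers are made up for illustration, and the .05 threshold it checks is explained in the next paragraph.

    # Simple-experiment sketch: random assignment plus an independent-samples t-test.
    # Participant scores are hypothetical; only the procedure mirrors the text.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Randomly assign 20 hypothetical participants (IDs 0-19) to two groups of 10.
    shuffled = rng.permutation(np.arange(20))
    control_ids, experimental_ids = shuffled[:10], shuffled[10:]

    # Dependent-variable scores measured for each group (made-up numbers).
    control_scores = np.array([52, 48, 50, 47, 53, 49, 51, 46, 50, 48])
    experimental_scores = np.array([58, 55, 60, 54, 57, 59, 56, 61, 55, 58])

    # Compare the two groups with an independent-samples t-test.
    t_stat, p_value = stats.ttest_ind(experimental_scores, control_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # A p-value below .05 is conventionally treated as statistically significant.
    if p_value < 0.05:
        print("The treatment most likely had an effect.")
    else:
        print("No statistically significant effect was detected.")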

The key to determining if a treatment had an effect is to measure statistical significance. Statistical significance shows that the relationship between the variables is probably not due to mere chance and that a real relationship most likely exists between the two variables.

Statistical significance is often represented like this: p < .05

A p-value of less than .05 indicates that the possibility that the results are due merely to chance is less than 5%. Occasionally, smaller p-values are seen, such as p < .01.

There are a number of different means of measuring statistical significance. The type of statistical test used depends largely upon the type of research design that was used.

Correlational Studies: Psychology Research with Correlational Studies

The Purpose of Correlational Studies

Correlational studies are used to look for relationships between variables. There are three possible results of a correlational study: a positive correlation, a negative correlation, and no correlation. The correlation coefficient is a measure of correlation strength and can range from -1.00 to +1.00.
• Positive Correlations: Both variables increase or decrease at the same time. A correlation coefficient close to +1.00 indicates a strong positive correlation.
• Negative Correlations: Indicates that as the amount of one variable increases, the other decreases (and vice versa). A correlation coefficient close to -1.00 indicates a strong negative correlation.
• No Correlation: Indicates no relationship between the two variables. A correlation coefficient of 0 indicates no correlation.
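
As a minimal illustration of how the coefficient is read, the Python sketch below computes a correlation for two made-up variables with numpy; the variable names and values are hypothetical.

    # Correlation-coefficient sketch with made-up data.
    import numpy as np

    # Two hypothetical variables measured for the same eight people.
    hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    test_scores = np.array([55, 58, 62, 66, 70, 74, 79, 83])

    # Pearson correlation coefficient, which ranges from -1.00 to +1.00.
    r = np.corrcoef(hours_studied, test_scores)[0, 1]
    print(f"r = {r:.2f}")  # close to +1.00 here, a strong positive correlation

    # A value near -1.00 would indicate a strong negative correlation,
    # and a value near 0 would indicate no correlation.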

Limitations of Correlational Studies

While correlational studies can suggest that there is a relationship between two variables, they cannot prove that one variable causes a change in another variable. In other words, correlation does not equal causation. For example, a correlational study might suggest that there is a relationship between academic success and self-esteem, but it cannot show if academic success increases or decreases self-esteem. Other variables might play a role, including social relationships, cognitive abilities, personality, socio-economic status, and a myriad of other factors.

Types of Correlational Studies

1. Naturalistic Observation


Naturalistic observation involves observing and recording the variables of interest in the natural environment without interference or manipulation by the experimenter.

Advantages of Naturalistic Observation:

• Gives the experimenter the opportunity to view the variable of interest in a natural setting.
• Can offer ideas for further research.
• May be the only option if lab experimentation is not possible.

Disadvantages of Naturalistic Observation:

• Can be time consuming and expensive.
• Does not allow for scientific control of variables.
• Experimenters cannot control extraneous variables.
• Subjects may be aware of the observer and may act differently as a result.

2. The Survey Method

Surveys and questionnaires are among the most common methods used in psychological research. In this method, a random sample of participants completes a survey, test, or questionnaire that relates to the variables of interest. Random sampling is a vital part of ensuring the generalizability of the survey results.

Advantages of the Survey Method:

• It's fast, cheap, and easy. Researchers can collect large amounts of data in a relatively short amount of time.
• More flexible than some other methods.

Disadvantages of the Survey Method:

• Can be affected by an unrepresentative sample or poor survey questions.
• Participants can affect the outcome. Some participants try to please the researcher, lie to make themselves look better, or have mistaken memories.

3. Archival Research

Archival research is performed by analyzing studies conducted by other researchers or by looking at historical patient records. For example, researchers recently analyzed the records of soldiers who served in the Civil War to learn more about PTSD ("The Irritable Heart").

Advantages of Archival Research:

• The experimenter cannot introduce changes in participant behavior.
• Enormous amounts of data provide a better view of trends, relationships, and outcomes.
• Often less expensive than other study methods. Researchers can often access data through free archives or records databases.

Disadvantages of Archival Research:

• The researchers have no control over how the data were collected.
• Important data may be missing from the records.
• Previous research may be unreliable.

