LESSON 5
Dr. John M. Ritz
CHAPTER III: METHODS AND PROCEDURES
* Introduction (research type and sub-sections to be described)
* Population
* Research Variables (experimental studies)
* Instrument Design or Use
* Field, Classroom or Lab Procedures (experimental studies)
* Methods of Data Collection
* Statistical Analysis
* Summary
Sampling Process
* Select a sample that represents a group.
* Gather data from the sample.
* Use the data to make inferences about a larger group, the population or universe.
* Make decisions based upon what the sample told us.
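The four steps above can be sketched in a few lines of Python. The population of student scores below is invented for illustration; the point is only that a modest random sample lets us infer the population mean without measuring everyone.

```python
import random
import statistics

# Hypothetical population: 10,000 student scores (values invented for illustration).
random.seed(42)
population = [random.gauss(75, 10) for _ in range(10_000)]

# 1. Select a sample that represents the group (simple random sample).
sample = random.sample(population, 100)

# 2. Gather data from the sample (here, the scores themselves).
# 3. Use the data to make inferences about the population.
sample_mean = statistics.mean(sample)
population_mean = statistics.mean(population)

# 4. Make decisions based upon what the sample told us.
print(f"sample mean: {sample_mean:.1f}, population mean: {population_mean:.1f}")
```

With 100 cases the sample mean typically lands within a point or two of the true population mean, which is why sampling is worth the small loss of precision.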
Sampling Techniques
* Population (all of a select group)
* Sample (select group or finite set of people, objects, or things taken from the whole)
* Random Sample (every member of the population has an equal chance of selection; representative of the whole)
* Stratified Random Sample (sub-populations or strata included to give a truer picture of the whole population, guarantees sub-group representation)
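A minimal sketch of the difference between the last two techniques, using an invented population with two unequal strata. A simple random sample may over- or under-represent the small stratum by chance; the stratified draw guarantees sub-group representation in proportion to stratum size.

```python
import random

# Hypothetical population: 900 undergraduates and 100 graduates
# (labels and sizes invented for illustration).
random.seed(1)
population = [("undergrad", i) for i in range(900)] + [("grad", i) for i in range(100)]

# Simple random sample: equal chance for everyone, but the small
# graduate stratum may be missed or over-drawn by chance.
simple = random.sample(population, 50)

def stratified_sample(pop, n):
    """Draw from each stratum in proportion to its share of the population."""
    strata = {}
    for label, member in pop:
        strata.setdefault(label, []).append((label, member))
    out = []
    for label, members in strata.items():
        k = round(n * len(members) / len(pop))  # proportional allocation
        out.extend(random.sample(members, k))
    return out

strat = stratified_sample(population, 50)
grads = sum(1 for label, _ in strat if label == "grad")
print(grads)  # always exactly 5 graduates: 10% of the sample, as in the population
```

The stratified draw fixes the graduate count at 5 of 50 on every run, giving a truer picture of the whole population.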
Small vs Large Samples
* Small - convenient and economical; however, there is a greater chance of sampling error.
* Large - more accurate; however, more costly and time-consuming.
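The trade-off can be demonstrated by repeated sampling from an invented population: the typical error of the sample mean shrinks as the sample grows (roughly in proportion to one over the square root of the sample size).

```python
import random
import statistics

# Invented population of 20,000 scores, mean 50, standard deviation 15.
random.seed(7)
population = [random.gauss(50, 15) for _ in range(20_000)]
true_mean = statistics.mean(population)

def typical_error(n, trials=200):
    """Average absolute error of the sample mean over many draws of size n."""
    errors = [abs(statistics.mean(random.sample(population, n)) - true_mean)
              for _ in range(trials)]
    return statistics.mean(errors)

small = typical_error(25)    # convenient and economical, but error is larger
large = typical_error(400)   # more costly to collect, but more accurate
print(f"typical error, n=25: {small:.2f};  n=400: {large:.2f}")
```

The small sample's typical error comes out several times larger than the large sample's, which is exactly the accuracy-versus-cost trade-off described above.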
Validity
* Relates to instrument design. It deals with whether the instrument is appropriate to the particular problem being investigated.
* The instrument must be relevant to the particular questions being asked, not merely related questions.
* It is concerned with the kind of information a score, rating or evaluation yields about an individual's behavior in a particular setting.
* A measurement instrument is said to be valid if it measures what it is supposed to measure.
* Does it do what it is supposed to do?
Types of Validity
* Construct Validity - related to characteristics that are believed to account for some aspect of behavior - theory. It is based on a theory of what you think the concept is (what really is intelligence, leadership, etc.).
* Content Validity - the adequacy with which an instrument samples a given situation. After construct validity is established, identify the content that encompasses the construct (problem solving, IQ, mathematics ability, etc.).
* Criterion-Related Validity - relating the results of one instrument to the results of another test (IQ tests or math or reading tests).
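Criterion-related validity is commonly reported as the correlation between scores on the instrument under study and scores on an established criterion measure. A minimal sketch, with score pairs invented for illustration:

```python
import statistics

# Hypothetical scores for 10 examinees on a new test and on an
# established criterion test (all values invented for illustration).
new_test  = [52, 60, 71, 48, 85, 66, 73, 58, 90, 44]
criterion = [55, 62, 70, 50, 88, 64, 75, 61, 92, 47]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

r = pearson_r(new_test, criterion)
print(f"validity coefficient r = {r:.2f}")  # near 1: the two instruments agree
```

A coefficient near 1 indicates the new instrument ranks examinees much as the criterion does; a coefficient near 0 would indicate little criterion-related validity.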
Internal Validity
* Focuses on the entire experimental design. Internal validity in an experimental design asks the question: "Did, in fact, the experimental treatments make a difference in this specific instance?"
* Did the independent variable X really produce a change in the dependent variable Y?
* Must control for extraneous variables such as:
o History
o Maturation
o Testing
o Instrumentation
o Statistical Regression
o Selection
o Experimental Mortality
o Selection-Maturation Interaction
External Validity
* Focuses on the entire experimental design. External validity asks the question: "To what populations, settings, treatment variables and measurement variables can this effect be generalized?"
* What relevance do the findings concerning the effect of X have beyond the confines of the experiment?
* Must control for extraneous variables found in:
o Selection biases interacting with the experimental variable.
o Effects of pre-testing.
o Experimental procedures.
o Multi-treatment interferences.
Reliability
* Focuses on instrument design. Does the instrument produce the same results consistently (consistency)?
* Methods for testing for reliability:
o Test-retest method
o Parallel forms method (equivalent forms method)
o Internal consistency form (split-halves method)
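The split-halves method can be sketched directly: split the items into odd and even halves, correlate the two half-scores across respondents, then apply the Spearman-Brown correction for full test length. The item responses below are invented for illustration.

```python
import statistics

# Hypothetical responses: 5 respondents x 8 items, rated 1-5
# (all values invented for illustration).
responses = [
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 1, 2, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5, 5, 4, 5],
    [1, 2, 1, 1, 2, 1, 2, 1],
]

odd_half  = [sum(r[0::2]) for r in responses]   # items 1, 3, 5, 7
even_half = [sum(r[1::2]) for r in responses]   # items 2, 4, 6, 8

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

r_half = pearson_r(odd_half, even_half)
reliability = 2 * r_half / (1 + r_half)   # Spearman-Brown correction
print(f"split-half r = {r_half:.2f}, corrected reliability = {reliability:.2f}")
```

The Spearman-Brown step is needed because correlating two half-tests understates the reliability of the full-length instrument.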