Survey Design & Analysis to support Valid Interpretation and Use Michael C. Rodriguez Campbell Leadership Chair in Education & Human Development March 15, 2019 Educational Psychology This presentation can be found at: http://www.edmeasurement.net/MAG Available in PowerPoint .pptx format
Dillman, D.A., Smyth, J.D., & Christian, L.M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: John Wiley & Sons, Inc. The Tailored Design Method Reducing Survey Error
Sampling Error Coverage Error Measurement Error Nonresponse Error
Sampling Error: The result of including only some of the population in the survey, simply because we only have a sample. Example: The City Council decided to survey residents in each neighborhood about their satisfaction with the work of their neighborhood council; they surveyed 20% of residents in each neighborhood.
Coverage Error: The result of not giving every member of the population a chance to be included in the sample. Example: The City Council decided to survey residents in each neighborhood by sending postcards to all residents with the URL for the online survey; some residents don't have internet access.
Measurement Error: The result of asking poorly worded questions, so that responses are inaccurate or uninterpretable. Example: The City Council asked residents "Has the social interaction level changed in the past year?" using these choices: (1) increased a lot (2) increased somewhat (3) increased a little (4) decreased.
Nonresponse Error: The result of people who do not respond to the survey being different from those who do respond, in a way that produces different response patterns. Example: Residents with children were less likely to respond; their responses are different from those without children.
Two Critical Points
for quality survey design Responding to a survey requires cognition and motivation Multiple attempts are required to obtain satisfactory response rates regardless of administration method How can we motivate participants? View survey participation as social exchange
Social Exchange A theory of human behavior that helps us understand how human interaction develops and is maintained Individual behavior is motivated by the expected returns from others Three factors are central to predicting behavior in this context: rewards, costs, and trust Ways of providing REWARDS Say thank you
We appreciate very much your help Many thanks in advance Show positive regard You have been selected in our national sample Please take a moment Give a reason for the survey Provide a phone number Personally address
correspondence Ways of providing REWARDS Give tangible rewards $1 or $2 improves response rates much more than promise of later larger rewards Pens or other materials Rewards invite a
sense of obligation in the respondent Ask for advice here you suggest that the respondent is the important one Important decisions need to be made and we need your advice on which route to follow
Ways of providing REWARDS Give social validation You are an important member of the neighborhood Inform respondents that opportunities to respond are scarce Make the survey interesting Highly salient topics
Attend to layout & design Some people just love surveys Support group values Importance of neighborhood vitality Importance of dues paying members Ways of reducing SOCIAL COSTS Avoid embarrassment
Avoid subordinate language. Ask people questions to which they should reasonably be expected to know the answers. Don't put folks in a position where they are being judged because of not knowing or doing things on the survey. "In order to address the pressing needs of the neighborhood councils, your input is necessary. Would you please consider providing us with your advice/opinions/experiences?" Ways of reducing SOCIAL COSTS Minimize requests for personal information If needed, place at the end
of the survey If required (the focus of the survey), use clear but respectful language Certify anonymity Avoid inconvenience Include an envelope with a real stamp The URL link should include the password
The survey should appear short and easy Respondent friendly Easy-to-answer format Balance with more questions on a page
Ways of establishing TRUST Legitimate authority as a sponsor: government-sponsored surveys have higher response rates than surveys from marketing research firms. Provide an advance token of appreciation: although $1 or $2 is of little value to many, it creates significant trust. Use of a real stamp on the return envelope. Access to tangible resources. Ways of establishing TRUST Invoke other social exchange relationships
As an alum of the University As a resident of the neighborhood As a participant in the program Make the task appear important Personalized cover letters Actual letterhead Insignia or logo of
respected sponsor Questionnaire design that makes sense to ordinary folks Linking Social Exchange Elements Costs and rewards may contribute to nonresponse What rewards are present for not responding? Repetition of appeals diminishes effectiveness Extreme designs create the opposite effect Very short surveys may be seen as unimportant
People differ in what constitutes rewards/costs Strategies work as a whole not in isolation Planning for the Survey Project Preparing for Survey Design handout
content coverage construct maps knowing your target population including local wisdom and community leaders Survey Design Item Writing Item writing evidence versus niceties dos and don'ts the online survey tool might give you interesting
options - doesn't mean you should use them Assess Every Survey Question 1. Do you need to ask the question? 2. Does the question require an answer? 3. Do survey respondents have an accurate, ready-made answer? 4. Can folks accurately recall past behavior? 5. Will the individual be willing to reveal information? 6. Will the individual be motivated to answer each question? General Guidelines
Professional-looking, clean format Brief and concise; clear instructions Use check boxes or circles, rather than blanks, to obtain responses Personal questions at the end Leave room for comments (and tell folks upfront that there will be room). Proofing Guidelines
Read questions aloud Scan items for - and - or - but - with - except Consider sensitivity review Pilot items look for - Little variation - Too many "don't know" - Too many "other" - Misinterpretation Additional Item-Writing Advice Order scales from low to high or from negative to positive strongly disagree / disagree / agree / strongly agree This common scale may be redundant Better to allow room for uncertainty disagree / tend to disagree / tend to agree / agree More Item-Writing Advice Avoid middle or neutral options Neutral responses may indicate - Lack of knowledge - Uncooperativeness - Reading difficulty - Inapplicability - Reluctance to answer Provide a clearly interpretable alternative - not applicable - no basis for judgment - don't know - prefer not to answer Cases when the middle option works less than before / about the same / more than before too little / about right / too much less than acceptable / acceptable / more than acceptable not yet proficient / proficient / exceeds proficient
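The pilot checks above (little variation, too many "don't know") can be automated with a simple tally. The responses and flag thresholds below are hypothetical, for illustration only:

```python
from collections import Counter

# Hypothetical pilot responses to one item.
responses = ["agree", "agree", "agree", "don't know", "agree",
             "agree", "don't know", "agree", "agree", "don't know"]

counts = Counter(responses)
n = len(responses)

flags = []
if max(counts.values()) / n > 0.6:          # one option dominates: little variation
    flags.append("little variation")
if counts.get("don't know", 0) / n > 0.2:   # too many non-substantive answers
    flags.append("too many don't know")

print(flags)
# ['little variation', "too many don't know"]
```

Running the same tally over every piloted item gives a quick screen for which items need revision before the full administration.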
Even More Item-Writing Advice Avoid "other" options Avoid ranking responses Avoid loaded questions: "9 out of 10 dentists recommend Pepsi over Coke. Which do you prefer?" Avoid double-barreled items: "How satisfied are you with the amount and quality of police patrol and road repair?"
Avoid open-ended questions. Formats and Guidelines Review Formats from handout Review Survey Item-Writing Guidelines from handout Review Social Exchange Evaluation Tool Race/Ethnicity and Sex/Gender Identity The Williams Institute is dedicated to conducting rigorous, independent research on sexual orientation and gender identity law and public policy. A think tank at UCLA Law, the
Williams Institute produces high-quality research with real-world relevance and disseminates it to judges, legislators, policymakers, media and the public. https://williamsinstitute.law.ucla.edu/ Best Practices for Asking Questions about Sexual Orientation on Surveys Gender-Related Measures Overview Best Practices for Asking Questions to Identify Transgender and Gender Minority Respondents on Population-Based Surveys
Transgender-inclusive measures of sex/gender for population surveys: Mixed-methods evaluation and recommendations https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0178043 Constructing the Survey Criteria Objectives Reduce nonresponse Reduce measurement error
Focus on Social Exchange Elements Improve rewards Reduce costs Develop trust
Ordering Questions 1. Most salient to least salient 2. Does what the cover letter says 3. Personal or objectionable questions are near the end 4. Logical order or progression of questions 5. Group items with the same scale, with similar items within topic area
Choosing the First Question 1. Applies to everyone 2. Simple and easy to answer 3. Interesting 4. Connects the respondent and the purpose of the survey (as understood by the individual) 5. Clearly indicates the main topic of the survey
Pretesting Have knowledgeable colleagues or other experts review the instrument
Conduct a few interviews to understand the cognitive and motivational requirements Pilot the instrument Final proofing Computer Based Administration Computer Based Issues
Coverage Device variation Effects of computer hardware and software Computer literacy Computer logic versus survey logic Positives
Ease of administration Fast response time Lower cost Direct data entry Increased attractiveness may improve willingness of respondents to complete survey
Negatives Coverage limited to those with internet access Time and expense in providing a help desk Security - without password protection anyone may respond When to Go Online 1
You can use a convenience sample Your population is self-contained You have e-mail addresses Sample size is relatively large You want to use multimedia elements (optional) Basic Design Principles
2,3 Web Survey Design Principles 1. Provide a password to limit access to those in the sample 2. Introduce the web survey with a motivational welcome screen Emphasize ease of responding Inform respondents how to proceed
Web Survey Design Principles 3. The first question should fit entirely on the screen at the top of the questionnaire. 4. Present questions like they would appear on paper surveys (conventional appearance) 5. Consider variations that may appear due to differences in software, browsers, smaller screens Web Survey Design Principles 6. Provide very specific directions how to take each computer-related action at the point it is needed
7. Limit the line length of each question so that questions don't extend the full width of the screen 8. Restrain the use of color to maintain readability and measurement properties Web Survey Design Principles 9. Use drop-down boxes sparingly & limit matrix-type questions 10. Do not require individuals to respond to a given question before they can answer the questions that follow 11. Provide skip directions while encouraging individuals to answer each question
Web Survey Design Principles 12. Construct the web survey so that respondents can scroll from question to question 13. Provide some indicator of survey completion progress 14. Keep graphics to a minimum they increase loading time 15. Automate skips & inform respondents; may consider validating input Defending Survey
Quality Validity The degree to which theory and evidence support the intended interpretations and uses of scores We validate score interpretations and uses. Interpretation and Use Argument Validity Argument Building a Validity Argument There are many forms of validity evidence, not different forms of validity. Validity is not an either/or; it is a continuum. Here we explore possible interpretations and uses of survey data. Once an interpretation/use is articulated, we can identify appropriate evidence to collect to support and defend that interpretation/use. This is validation (the collection of evidence to support the intended interpretations and uses). Are the responses plausible? Internal agreement checks (internal consistency in responses) Do responses across similar questions agree? Identifying faking, extreme responses, lying
Pattern responding Minimum number of items with responses Measuring person attributes or characteristics What is the attribute we are measuring? [accuracy] Content evidence required Theory, literature, prior measures Individuals from the target population agree with our definition and operationalization of this attribute Is the score consistent or reliable? [precision]
Score reliability Item discrimination Are scores associated with other relevant variables? Criterion-related evidence (correlations with other variables) Accurately summarizing true intent of the population Did respondents interpret questions as you intended? Think alouds Piloting results Key-informant reviews of the survey Followed evidence-based item-writing guidelines
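Score reliability mentioned above is often summarized with coefficient (Cronbach's) alpha. A minimal pure-Python sketch follows; the function name and the small response matrix are hypothetical, for illustration only:

```python
# Coefficient (Cronbach's) alpha for a small set of survey items.
# The response matrix below is made up, for illustration only.
def cronbach_alpha(items):
    """items: one list of responses per item, same respondents in each list."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Five respondents answering three 4-point items.
responses = [
    [4, 3, 2, 4, 1],
    [4, 4, 2, 3, 1],
    [3, 4, 1, 4, 2],
]
print(round(cronbach_alpha(responses), 2))
# 0.9
```

Alpha rises when items covary strongly relative to their individual variances, which is why it serves as an internal-consistency index for the [precision] question above.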
Are respondents consistent in their answers? Internal agreement across items Score consistency or reliability Help inform decision making (policy) to promote community responsiveness and preparedness Representativeness and coverage of the population. Sampling design (stratification) special attention to the sampling frame Set the sample size to achieve a certain level of precision
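Setting the sample size for a target level of precision can be sketched with the standard formula for a proportion, n0 = z^2 p(1-p) / e^2, plus a finite population correction. The population size and margin of error below are made-up values for illustration:

```python
import math

def sample_size(margin, population, p=0.5, z=1.96):
    """Simple-random-sample size for estimating a proportion at the given
    margin of error (95% confidence by default), with finite population
    correction. p = 0.5 is the most conservative assumption."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2   # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# A +/-5% margin for a hypothetical neighborhood of 5,000 residents:
print(sample_size(0.05, 5000))
# 357
```

Note the diminishing returns: doubling the population far beyond a few thousand barely changes the required sample, because the uncorrected size tops out near 385 for a 5% margin.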
Minimized multiple sources of survey error Nonresponse error report missingness (non-respondent study) Measurement error Help inform decision making (cont.) Generalizability of the topic extent to which responses are context specific Theory about the context-specific variables (validity generalization) Expert review or endorsement Local review and endorsement Look for availability of base-rates in new communities
Examine regional variation within your own sample To inform program design or curriculum design (program staff) Responses are accurate and plausible Minimized measurement error Etc. (as above)
Content-related evidence, domain coverage Program theory of action Program main goals and objectives are included in the survey Participants are the right participants (sample demographics) Response Rates 4 Response Rates based on review of published survey results Previous research suggests average response rates of 55-65% for paper-pencil surveys.
With careful attention to design, 70% can be achieved consistently. Others suggest that returns less than 40-50% are common on mail surveys Web Return Rates 4 A meta-analysis of web survey reports (N=68) Mean response rate was 40% (SD=20%) Survey response rate was improved by
# of contacts personalized contact prenotification letter, message issue salience No relationship found between survey length or password requirements and response rates Item Review & Practice Analysis of Ordinal & Categorical Data
Survey Data Quantitative Data (not typical for surveys) Age, income, height, weight Counts of things (children, books, credits earned) Qualitative Data Nominal Categories (gender, race, political affiliation, state of residence) Ordered Categories (level of satisfaction or agreement, frequency of behaviors) Descriptive Comments (open-ended responses)
Quantitative Analysis of Qualitative Data There is a tendency to use numeric labels as category labels or as option labels. These are labels, not numeric values. "On a scale of 1 to 4, please indicate your level of satisfaction with these services, where 1 is dissatisfied and 4 is satisfied." Dissatisfied (1) / Somewhat Dissatisfied (2) / Somewhat Satisfied (3) / Satisfied (4)
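Because these codes label ordered categories rather than measure quantities, a mean of the codes can hide the shape of the distribution. A small pure-Python illustration with made-up responses:

```python
from collections import Counter
from statistics import mean, median

# Hypothetical satisfaction codes: 1 = Dissatisfied ... 4 = Satisfied.
codes = [1, 1, 1, 1, 4, 4, 4, 3, 3, 2]

print(mean(codes))      # 2.4 -- reads as mild dissatisfaction "on average"
print(median(codes))    # 2.5
print(Counter(codes))   # the full tally reveals a polarized split of 1s and 4s
```

The mean of 2.4 suggests a middling group, while the tally shows most respondents sit at the extremes, which is why reporting the full response distribution is usually safer for ordinal items.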
Measuring Attributes Continuity is a property of an attribute, not of the measurements of that attribute. e.g., Length is a continuous variable, but measured in inches it is discrete. Ordinal variables can have two meanings: Attribute is continuous, but measurement is discrete and ordinal, denoting order along the continuum (e.g., rating scales). Discrete variable that denotes rank. Hypothesis Tests for Nominal/Ordinal Variables
Chi-square test Measures of strength of association Phi coefficient (2x2 tables) Cramér's phi (larger than 2x2 tables)
Chi-square test Actually two tests: Goodness-of-fit test (one variable) Equal or predetermined expected frequencies. Answers the question: Do our observed frequencies fit the expected frequencies? Or previous frequencies? Test of independence (two variables) Evaluates the similarity of frequencies across groups. Answers the question: Are these two variables independent (i.e., are they related)?
Associations among Items Assumption 1: Observations are independent. This is generally met when each person in the table is only in the table once; they are not counted twice or more. Assumption 2: The test statistic is approximately distributed Chi-Square for relatively large samples. This is generally met when expected frequencies in each cell of the contingency table are greater than or equal to 5 (there has to be the potential to observe 5 cases in each cell).
Effect Sizes Phi, φ, is a special case of the Pearson product-moment correlation coefficient for dichotomous items (0/1), or can be thought of as a correlation in a 2x2 table. There is an adjustment made to φ for contingency tables larger than 2x3 or 3x2, called Cramér's phi (SPSS calls this Cramér's V).

Crosstab: Sex x Support building a football stadium
         No           Yes           Total
Male     22 (27.5%)   58 (72.5%)    80 (100.0%)
Female   71 (54.6%)   59 (45.4%)    130 (100.0%)
Total    93 (44.3%)   117 (55.7%)   210 (100.0%)

Results: Pearson Chi-Square Value = 14.758, df = 1, Asymp. Sig. (2-sided) = .000; N of Valid Cases = 210. Phi = -.265 (Approx. Sig. = .000); Cramér's V = .265 (Approx. Sig. = .000).

Summary Statement: Based on the results, 72.5% of males and 45.4% of females support building a stadium. There is a statistically significant difference in level of support between males and females, where χ²(1, n=210) = 14.758, p < .001. This is a small, but statistically significant, association, where Phi = .265.

TIMSS 1994 SPSS analysis of 6 TIMSS middle school questions. 1=Strongly Disagree to 4=Strongly Agree
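The stadium-support analysis above can be reproduced by hand. Below is a minimal pure-Python sketch of the Pearson chi-square and phi computations, using the counts from the crosstab (for routine work, a library routine such as scipy.stats.chi2_contingency performs the same calculation):

```python
import math

# Observed counts from the stadium-support crosstab.
#             Yes  No
observed = [[58, 22],    # Male   (n = 80)
            [59, 71]]    # Female (n = 130)

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-square: sum of (O - E)^2 / E over all cells,
# where E = (row total * column total) / grand total.
chi2 = 0.0
for i in range(len(observed)):
    for j in range(len(observed[0])):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed[i][j] - expected) ** 2 / expected

phi = math.sqrt(chi2 / n)  # effect size for a 2x2 table; Cramer's V generalizes this

print(f"chi2(1, n={n}) = {chi2:.3f}, phi = {phi:.3f}")
# chi2(1, n=210) = 14.758, phi = 0.265
```

The computed statistic and effect size match the SPSS output reported in the text.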
[Response distributions for two items: Mean = 2.45 and Mean = 2.46] [Table of Response %]
Crosstab of Boring/Job: χ²(9, n=6759) = 1794, p < .001. Cramér's phi magnitude of association = .29.
Item Response Theory for Survey Data Analysis Wilson, M. (2005). Constructing measures: An item response modeling approach. Mahwah, NJ: Lawrence Erlbaum. Reeve, B.B., & Mâsse, L.C. (2004). Item response theory modeling for questionnaire evaluation.
One Approach Construct Map Construct Definition (measuring a trait) A simple form: More or less, high to low Item Development Realizations of the construct Outcome Space Aspect of response we value how to score Measurement Model
How we relate scores to constructs [Diagram - From Construct to Item Responses: causality runs from the construct to item responses; inferences run from item responses through the outcome space and measurement model back to the construct. Source: Mark Wilson, 2005] Background to IRT IRT is a way of thinking about measurement: a probabilistic model. We give an item or task to a person and obtain an
item-person interaction. This results in a score with a probability, given a person's ability. Rasch Philosophy Rasch models provide a basis and justification for obtaining person locations on a continuum from total scores on assessments. Although it is not uncommon to treat total scores directly as measurements, they are actually counts of discrete observations rather than measurements.
Each observation represents the observable outcome of a comparison between a person and item. Such outcomes are directly analogous to the observation of the rotation of a balance scale in one direction or another. This observation would indicate that one or other object has a greater mass, but counts of such observations cannot be treated directly as measurements. Item Characteristic Curve
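The item characteristic curve for a dichotomous Rasch model follows directly from the model's formula, P = exp(theta - b) / (1 + exp(theta - b)). A minimal sketch (the item location b = 0 is an arbitrary choice for illustration):

```python
import math

def rasch_p(theta, b):
    """Dichotomous Rasch model: probability of a correct/endorsed response
    for a person at location theta on an item at location b (logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Tracing the item characteristic curve for an item located at b = 0:
for theta in (-2, -1, 0, 1, 2):
    print(f"theta = {theta:+d}: P = {rasch_p(theta, 0.0):.2f}")
# P rises from .12 at theta = -2 to .50 at theta = b and .88 at theta = +2
```

The probability depends only on the difference theta - b, which is what places persons and items on the same scale.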
Test Characteristic Curve [Figure: the test characteristic curve maps raw scores to the Rasch scale nonlinearly - in one region of the curve, 4 points on the raw score scale correspond to 0.5 points on the Rasch scale; in another region, 4 raw-score points correspond to 1.2 points on the Rasch scale]
From Numbers to Meaning Numbers themselves do not mean much. Is 10 meters a short distance? A long distance? We need context to bring meaning to the measure: 10 meters. However, 10 meters should always be 10 meters, no matter who takes the measure or how it is taken. Sample Dependent Statistics Is an item with a p-value of .90 easy or difficult?
90% passed the item Is a person with a score of 5 out of 50 items low in ability? correctly answered 10% of the items IRT Scaling Person-free item difficulty Locates the items on the ability continuum Item-free person ability Locates the person on the ability continuum
Places items and persons on the same scale the ITEM MAP Item Map Construct MAP 1. Explains the construct; interpretation guide 2. Enables design of items that will lead individuals to give responses that inform important levels of the construct map; identify relevant item features 3. Provides criterion to analyze responses regarding degree of consistency with intended construct
4. Item selection or retention should be based on informed professional judgment Construct Map Describing Task Characteristics IRT Assumptions Local independence Responses to differing items on the test are independent of one another, conditional on the trait they have in common (i.e., conditional on the latent trait items are uncorrelated).
Unidimensionality Only one dominant trait is being measured. Multidimensional models now exist. Evaluate the functioning of scale properties We can ask questions about the response scale: Does the 5-point scale work as interpreted? Do we need 5 points?
Minnesota Student Survey Consider: Mental Distress Mental Distress Item During the last 30 days, have you felt you were under any stress or pressure? o Yes, almost more than I could take o Yes, quite a bit of pressure o Yes, more than usual o Yes, a little o No
Mental Distress Item During the last 30 days, have you felt sad? o All the time o Most of the time o Some of the time o A little of the time o None of the time
Other Possible Analysis Differential Item Functioning Group difference, conditioned on trait level Form of measurement invariance; item bias Equating over time Constant score scale location over time Keep item parameters and fix item locations Examine parameter drift over time
Returning to TIMSS Liking Math [Figure: two response distributions - Mean = 2.45, Rasch = 0.40 vs. Mean = 2.46, Rasch = 0.60]
Ordering of Items by Item Mean: Math is important / Enjoy learning math / Like math / Math is easy / Like a job involving math / Math is boring. By Rasch location: Math is important / Enjoy learning math / Like math / Math is boring / Math is easy / Like a job involving math.
[Item map: person distribution plotted alongside category thresholds for the Easy, Boring, and Job items (thresholds at .15, .25, and .35)]
Survey Data Analysis examples from the Minnesota Student Survey
[Response percentages: 3.5% (3.2), 4.9% (5.0), 4.7% (4.0), 4.7% (5.2), 15.4% (14.2), 52.9% (55.4), 13.9% (13.1)] Minnesota Student Survey
Population survey of students in grades 5, 8, 9, and 11 160,000+ students Optional for schools Administered every 3 years (currently being administered) 336 questions on the longer forms
Administered online Minnesota Student Survey - Intro You can help your community and school learn more about the lives and feelings of young people like you. The questions on this survey cover many areas. Some questions might make you feel uncomfortable. You do not have to answer any question you don't want to. You can choose not to complete the survey. Do NOT write your name on this survey. No one will know how you answered these questions. Your answers will be kept private. Thank you for filling out this survey honestly
and carefully. MSS - Topics Background School Out of School Activities
Health Behavior MSS Scaling Developmental Skills 1. Commitment to Learning 2. Positive Identity and Outlook 3. Social Competence
Developmental Supports 1. Empowerment 2. Supported 3. Teacher/School Support
Developmental Challenges 1. Bullying 2. Bullied 3. Mental Distress 4. Family Violence
Measurement Model - CFA Model fit is indicated by: RMSEA < .05 is Good Fit; RMSEA < .08 is Adequate Fit CFI > .95 is Good Fit; CFI > .90 is Adequate Fit
TLI > .95 is Good Fit; TLI > .90 is Adequate Fit
MSS Commitment to Learning CFA MODEL FIT INFORMATION: RMSEA (Root Mean Square Error of Approximation) Estimate = 0.081, 90 Percent C.I. 0.080-0.082, Probability RMSEA <= .05 = 0.000; CFI = 0.943; TLI = 0.921

MSS Commitment to Learning CFA STANDARDIZED MODEL RESULTS (CTL BY):
Item    Estimate  S.E.   Est./S.E.  Two-Tailed P-Value
Y18     0.793     0.002  459.874    0.000
Y19     0.746     0.002  412.521    0.000
Y20     0.393     0.003  144.733    0.000
Y21A    0.349     0.003  117.904    0.000
Y21B    0.645     0.002  314.142    0.000
Y21C    0.664     0.002  341.068    0.000
Y31A    0.405     0.003  159.060    0.000
Y62BR   0.505     0.004  140.043    0.000

[Chart - Disparities: Commitment to Learning. Effect size (difference from White) for Hmong, Somali, Latino, Multiple, Black, Asian, and American Indian students; x-axis from -0.4 to 0.4]
[Chart - 8th Grade MCA & CtL Disparities: Reading 8, Math 8, and CTL effect sizes for American Indian, Asian, Hispanic, and Black students; x-axis from -1 to 0.4]
[Chart - Disparities relative to Heterosexual Orientation: Teacher/School Support, Mental Distress, Bullied (Victim), Social Competence, Positive Identity, and Commitment to Learning for Bisexual, Gay/Lesbian, and Questioning students; x-axis from -0.8 to 1.2]
[Chart - Disparities in OST Participation vs. Non-Participation: Teacher/School Support, Social Competence, Supported, Positive Identity, Empowerment, Commitment to Learning, Grades, Mental Distress, Family Violence, School Violence, Bullied, Bullying; x-axis from -0.60 to 0.60]
% Equipped: Social Competence
[Chart - % Equipped: Social Competence, by group (American Indian, Asian, Black, White, Somali, Hmong, Multiple, Latino); values range from about 40% to 68%]

Equipped & After-School Participation: % in at least one activity, by grade level
# of skills equipped   Grade 5  Grade 8  Grade 9  Grade 11
0                      70%      67%      66%      61%
1                      77%      77%      77%      74%
2                      81%      84%      84%      82%
3                      87%      89%      90%      88%

Equipped and Grades: GPA by grade level
# of skills equipped   Grade 5  Grade 8  Grade 9  Grade 11
0                      2.7      2.4      2.4      2.5
1                      3.1      3.0      3.0      3.0
2                      3.3      3.3      3.2      3.2
3                      3.5      3.5      3.5      3.5

Does working affect after-school participation? In one anonymous school district: not by much. Of the students in grades 8-11 (the work question was not given to 5th graders), about 31% report working at least 1 hour a week (of these, 22% work more than 10 hours per week). Of those working, 77% participate in after-school activities. Of those not working, 70% participate in after-school activities.
Chi-Square Test of Independence Effect Size Magnitude of Effect https://sites.google.com/view/mnydrg Minnesota Youth Development Research Group Reporting & Ethical
Standards Limitations Know the limitations of your methodology. Be sensitive to the limitations when interpreting results and making conclusions. We can never prove anything, only rule out alternative explanations (i.e., hypotheses). Generalizing Findings Cannot make conclusions about populations (or subgroups) that were not sampled appropriately or were not sampled at all.
Describe your sampling method clearly. Describe your sample completely. The Ethics of Reporting Ethics is not just human-subjects protection. Data analysis and interpretation require ethical decision-making. Reporting findings with major repercussions? Reporting some findings but not others? Results that run counter to prevailing beliefs? Results that run counter to the interests or mission of your funding source?
Interpretations and Conclusions Interpretations should be informed, intelligent, creative, data-based, and ethical. Interpretations should not be speculative, selective, biased, or dishonest. Conclusions should follow from research questions, methodology, and analysis. Conclusions demonstrate a familiarity with subject material, theories, and prior research.
AERA Reporting Standards AERA, 2006. Standards for reporting on empirical social science research in AERA publications. Educational Researcher, 35, 33-40. Covers quantitative and qualitative methods. Does not cover research reviews; theoretical, conceptual, or methodological essays; critiques of traditions and practices; or scholarship more grounded in the humanities. AERA Reporting Standards Intended to provide a framework of expectations about
what should be included in an empirical report. Not intended to define the conduct of empirical work. Not intended to define the format of an empirical report. Not a checklist. Two Principles Sufficiency of warrants Empirical reports should be warranted (reasonable, defensible, acceptable). Adequate evidence should be provided to justify results and conclusions. Transparency of report
Reporting should make explicit the logic of inquiry and activities that led from the development of a research question, through definition, collection, and analysis of data to presenting findings. 1: Problem Formulation 1.1 1.2 1.3 1.4 Provide a clear statement of purpose and scope of study.
Make clear how the study is a contribution to knowledge. Include a review of the relevant scholarship. Rationale for conceptual, methodological, or theoretical orientations of study should be described. 1.5 Rationale for problem formulation as it relates to the groups studied should be described. 2: Design and Logic Design and logic flow directly from problem formulation. 2.1 Reporting should follow a clear logic of inquiry that allows readers to trace the path from problem formulation to interpretations and conclusions.
2.2 An unambiguous and specific description of the design should be reported. 3: Sources of Evidence 3.1 The units of study (sites, groups, participants, events, etc.) and means of selection should be adequately described. 3.2 The data and empirical materials should be clearly described, including how and when they were gathered, by whom, and for what purpose. 4: Measurement & Classification
4.1 The development of measurements and classifications should be clearly described and show how important characteristics of the phenomenon of study are preserved. 4.2 Any classification scheme should be described and illustrated with concrete examples that represent the range of the phenomenon. 4.3 Measurement reporting should describe data elements and organization in a specific and unambiguous way. 4: Measurement & Classification 4.4 When transcriptions are included all coding conventions
should be clearly described. 4.5 A rationale for the relevance of a measurement or classification should be described. 5: Analysis and Interpretation 5.1 Procedures used for analysis should be clearly described. 5.2 Analytic technique should be clearly described. 5.3 Analysis and presentation of findings should make clear how they support conclusions. 5.4 Include information on any intended or unintended circumstances that may affect interpretations.
5: Analysis and Interpretation 5.5 When presenting conclusions describe: (a) how interpretations address the research question, (b) how conclusions relate to literature, and (c) emphasize the implications of study. 5.6 Clearly state statistical analyses and appropriateness of statistical tests. 5.7 Provide descriptive and inferential statistics that are relevant to each analysis. 5: Analysis and Interpretation 5.8 Any issues in data collection that might compromise the
validity of statistical analyses should be reported (e.g., missing data). 5.9 Any issues in data analysis that might compromise the validity of statistical analyses should be reported (e.g., assumption violations). 5.10 For each statistical result include: (a) an index of the quantitative relation between variables; (b) the standard error for the given index; (c) the hypothesis test, test statistic, and significance level; (d) qualitative interpretation (i.e., practical significance). 6: Generalization
6.1 Whether generalization is intentional or not, make clear the specifics of the participants, contexts, activities, data collections, and manipulations. 6.2 When generalization is intentional make intended scope of generalization clear. 6.3 When generalization is intentional make logic by which the findings of the study should apply within the intended scope of generalization clear. 7: Ethics in Reporting 7.1 Ethical considerations involved in data collection, analysis, and reporting should be explicitly addressed.
7.2 Reporting should be presented in a way that honors consent and other agreements. 7.3 Describe conflicts of interest or biases. 7.4 Care should be taken to ensure accuracy of reporting. 7.5 Data and relevant materials should be stored appropriately to allow for potential replication of results. 7.6 Funding support should be acknowledged. 8: Title, Abstract, and Headings 8.1 Title should clearly convey what article is about. 8.2 The abstract should provide a summary of the article that is self-contained, concise, and accurate.
8.3 Headings and subheadings should make clear the logic of inquiry underlying the report. References 1. 2. 3. 4. Schonlau, M., Fricker, Jr., R.D., & Elliott, M. N. (2002). Conducting research surveys via e-mail and the web. Santa Monica, CA: Rand.
Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York: John Wiley & Sons, Inc. Dillman, D. A., Tortora, R. D., & Bowker, D. (1998). Principles for constructing web surveys. SESRC Technical Report 98-50, Washington State University. Available online at: http://survey.sesrc.wsu.edu/dillman/papers/websurveyppr.pdf Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement, 60(6), 821-836.