In an upcoming issue of Cyberpsychology, Behavior, and Social Networking, Grohol, Slimowicz, and Granda examined the accuracy and trustworthiness of mental health information found on the Internet. This is critical because 8 of every 10 Internet users have searched for health information online, a group amounting to 59% of the US population. They concluded that information found in early Google and Bing search results is generally accurate but low in readability. The following websites were identified as popular, generally reliable sources of mental health information: HelpGuide.org, Mayo Clinic, National Institute of Mental Health, Psych Central, Wikipedia, eMedicineHealth, MedicineNet, and WebMD.
Because 77% of Internet users look only at the first page of search results, the authors examined websites appearing on the first two pages of results (20 websites) on Google and Bing for each of 11 major mental health conditions (e.g., anxiety, ADHD, depression), resulting in 440 total rated websites. This struck me as peculiar, though, since there should be substantial overlap between Google and Bing – so most likely, some websites appear in the dataset multiple times.
After the sites were collected, two graduate student coders (the last two authors, presumably) rated all 440 websites for quality of information presented, readability, commercial status, and the use of the HONCode badge. HONCode is a sort of seal of approval for health websites, issued by the Health On the Net Foundation, an independent certification body.
They found, among other things:
- HONCode websites generally contained higher quality information than websites without HONCode certification.
- Commercial websites generally contained lower quality information than non-commercial websites.
- HONCode websites were generally harder to read (higher grade level) than websites without HONCode certification.
- Commercial websites were generally harder to read than non-commercial websites.
- 67.5% of websites were judged to be of good or better quality based upon a generally agreed-upon standard (above a 40 on the DISCERN scale).
Statistically, this study had two peculiar features. First, there is the likely non-independence problem described above (some websites were probably in the dataset multiple times). Second, most of their analyses were simple correlations, which have a very simple formula for degrees of freedom: n – 2. Thus, ignoring the independence problem, the degrees of freedom for all of the correlations they calculated should be 438, with the exception of the “Aims Achieved” variable, which had a lower n for a technical reason and should therefore have 396. However, in the article, these degrees of freedom are reported as either 338 or 395. So something odd is going on here that is not explained. Perhaps duplicates were eliminated for these analyses, but this is not stated.
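To make the arithmetic concrete, here is a quick sanity check (a minimal sketch; the n of 398 for Aims Achieved is inferred from the 396 degrees of freedom it should have had):

```python
def correlation_df(n: int) -> int:
    """Degrees of freedom for a simple Pearson correlation: n - 2."""
    return n - 2

print(correlation_df(440))  # 438 expected for the full set of rated websites
print(correlation_df(398))  # 396 expected for "Aims Achieved" (n inferred from the text above)
# The article instead reports df of 338 or 395, hence the puzzle.
```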
One additional caveat: the owner of Psych Central (identified as one of the reliable sources of mental health information) was the first author of this study. I doubt he would fake information just to get his website mentioned in a journal article, so this probably isn’t much of a concern.
Overall, I am pleased to see that the most common resources available to Internet users on mental health are generally of reasonable quality, and I feel fairly confident in this finding. Like the authors, I am concerned that readability of the most popular websites is generally low. The mean grade level was 12, indicating that a high school education is required to understand the typical popular health website. That seems a bit high, especially considering younger people are most likely to turn to the web first as a source of information about their mental health. Hopefully this article will serve as a call to website operators to broaden their audience.
By now, if you read this blog, I assume you have at least a passing familiarity with massive open online courses (MOOCs), the “disruptive” technology set to flip higher education on its head (or not). Such courses typically involve tens of thousands of learners, and they are associated with companies like Coursera, Udacity, and edX. To sustain a student-teacher ratio of 50,000:1, they rely on an instructional model requiring minimal instructor involvement, potentially to the detriment of learners. As the primary product of these companies, MOOCs demand monetization – a fancy term for “who exactly will make money off of this thing?”. None of these companies has a clear plan for becoming self-sufficient, let alone profitable, and even non-profits need sufficient revenue to stay solvent.
Despite a couple of years of discussion, the question of monetization remains largely unresolved. MOOCs are about as popular as they were, they still drain resources from the companies hosting them, and they still don’t provide much to those hosts in return. The only real change in the year following “the year of the MOOC” is that these companies have begun striking deals with private organizations to funnel in high-performing students. To me, this seems like a terrifically clever way to circumvent labor laws: instead of paying new employees during an onboarding and training period, businesses can now require them to take a “free course” before paying them a dime.
Given these issues, you might wonder why faculty would be willing to develop (note that I don’t use the word “teach”) a MOOC. I’ve identified two major motivations.
Some faculty frame their involvement in MOOCs as a public service – in other words, how else are you going to reach tens of thousands of interested students? The certifications granted by MOOCs are essentially worthless in the general job marketplace anyway (aside from those exploitative MOOC-industry partnerships described above). So why not reach out to an audience ready and eager to learn simply because they are intrinsically motivated to develop their skills? This is what has motivated me to look into producing an I/O Psychology MOOC. But I am a bit uncomfortable with the idea of a for-profit company earning revenue via my students (probably, eventually, maybe), which is why I’ve been reluctant to push too fast in that direction. Non-profit MOOCs are probably a better option for faculty like me, but even that route has unseen costs.
The other major reason faculty might develop a MOOC is to gain access to a creative source of research data. With 10,000 students, you also have access to a research sample of 10,000 people, which is much larger than typical samples in the social sciences (although likely to be biased and range restricted). But this is where I think MOOCs can become exploitative in a different way than we usually talk about. I concluded this from my participation as a student in a Coursera MOOC.
I’m not going to criticize this MOOC’s pedagogy, because frankly, that’s too easy. It suffers from all the same shortcomings of other MOOCs, the standard trade-offs made when scaling education to the thousands-of-students level, and there’s no reason to go into that here. What I’m instead concerned about is how the faculty in charge of the MOOC gathered research data. In this course, research data was collected as a direct consequence of completing assignments that appeared to have relatively little to do with actual course content. For example, in Week 4, the assignment was to complete this research study, which was not linked with any learning objectives in that week (at least in any way indicated to students). If you didn’t complete the research study, you earned a zero for the assignment. There was no apparent way around it.
Based on my experience serving on one of the human subjects review boards at my university, I can tell you emphatically that this would not be considered an ethical course design choice in a real college classroom. Research participation must be voluntary. If an instructor does require research participation (common in Psychology to build a subject pool), there must always be an alternative, non-research assignment available for the same credit. Anyone who doesn’t want to be the subject of research must always have a way to do exactly that – skip the research and still earn course credit.
Skipping research is not possible in this MOOC. If you want the certificate at the end of the course, you must complete the research. The only exception is if you are under 18 years old, in which case you still must complete the research, but the researchers promise that they won’t use your data. We’ll ignore for now that the only way to indicate you are under 18 is to opt-in – i.e. it is assumed you’re over 18 until you say otherwise (which would also be an ethics breach, at least at our IRB).
Students never see a consent form informing them of their rights, and it seems to me that students in fact have no such rights according to the designers of this MOOC. The message is clear: give us research data, or you can’t pass the course.
One potential difference between a MOOC and a college classroom is that there are no “real” credentials on the line, no earned degree that must be sacrificed if you drop. The reasoning goes: if you don’t want to participate in research, stop taking the MOOC. But this ignores the psychological contract between MOOC designer and student. Once students are invested in a course, they are less likely to leave it. They want to continue. They want to succeed. And leveraging that desire to learn in order to extract data from unwilling participants is not ethical.
The students see right through this apparent scam. I imagine some continue on and others do not. I will copy below a few anonymous quotes from the forums of students who apparently did persist. One student picked up on this theme quite early in a thread titled, “Am I here to learn or to serve as reseach [sic] fodder?”:
I wasn’t sure what to think about this thread or how to feel about the topic in general, but now that I completed the week 4 assignment I feel pretty confident saying that the purpose of this course is exclusively to collect research data… I don’t know that it’s a bad thing, but it means I feel a bit deceived as to the purpose of this course. Not that the research motives weren’t stated, but it seems to me that the course content, utility, and pedagogy is totally subordinate to the research agenda.
The next concerns the Week 6 assignment, which required students to add entries to an online research project database:
So this time I really have the impression I’m doing the researchers’ work for them. Indeed, why not using the thousands of people in this MOOC to fill a database for free, with absolutely no effort? What do we, students, learn from this? We have to find a game which helps learning, which is a good assignment, but then it would be sufficient to enter the name of the game in the forum and that’s it. I am very disappointed by the assignments for the lasts weeks. In weeks 1 and 2 we had to reflect and really think about games for learning. Then there was the ‘research fodder’ assignment. And this week we have to do the researchers’ job.
In response to such criticisms, one of the course instructors posted this:
Wow! Thanks for bringing this up! In short, these assignments are designed to be learning experiences. They mirror assignments that we make in the offline courses each semester. I’m really sorry if you, or anyone else felt any other way. It was meant to try and make the assignments relevant, and to create “course content on the fly” so that the results of the studies would be content themselves.
That strikes me as a strange statement. Apparently these assignments have been used in past in-person courses as important “learning experiences” but are also experimental and new. It is even stranger in light of the final week of the class, which asks students to build a worldwide database of learning games for zero compensation without any followup assignments (to be technical: adding to the database is mandatory whereas discussing the experience on the forums – the only part that might conceivably be related to learning – is optional). So I am not sure what to believe. Perhaps the instructors truly believe these assignments are legitimate pedagogy, but we’ll never really know.
As for me, although the lecture videos have been pretty good, I have hit my breaking point. In the first five weeks of the course, I completed the assignments. I saw what they were doing for research purposes before now but decided I didn’t care. Week 6 was too much – when asked to contribute data to a public archive, I decided that getting to experience this material was not worth the price. Perhaps I could watch the videos this week anyway, but I don’t feel right doing it, since this is the price demanded by the instructors. Even right at the finish line, I will not be completing this MOOC, and I can only wonder how many others dropped because they, too, felt exploited by their instructors.
In an upcoming issue of Social Science Computer Review, Villar, Callegaro, and Yang conducted a meta-analysis on the impact of progress bars on survey completion. In doing so, they identified 32 randomized experiments from 10 sources in which a control group (no progress bar) was compared to an experimental group (progress bar). Among the experiments, they identified three types of progress bars (sketched in code after the list below):
- Constant. The progress bar increased in a linear, predictable fashion (e.g. with each page of the survey, or based upon how many questions were remaining).
- Fast-to-slow. The progress bar increased a lot at the beginning of the survey, but slowly at the end, in comparison to the “constant” rate.
- Slow-to-fast. The progress bar increased slowly at the beginning of the survey, but quickly at the end, in comparison to the “constant” rate.
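One simple way to picture these three pacing schemes is as a power function of true progress – a minimal sketch of my own, not how any particular primary study implemented its bar:

```python
def displayed_progress(page: int, total_pages: int, pacing: float = 1.0) -> float:
    """Fraction of the progress bar filled after completing `page` of `total_pages`.

    pacing == 1.0 -> constant (linear) progress
    pacing < 1.0  -> fast-to-slow (bar races ahead early, crawls at the end)
    pacing > 1.0  -> slow-to-fast (bar crawls early, races at the end)
    """
    return (page / total_pages) ** pacing

for label, pacing in [("constant", 1.0), ("fast-to-slow", 0.5), ("slow-to-fast", 2.0)]:
    halfway = displayed_progress(page=10, total_pages=20, pacing=pacing)
    print(f"{label}: bar shows {halfway:.0%} complete at the survey's halfway point")
```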
From their meta-analysis, the authors concluded that constant progress bars did not substantially affect completion rates, fast-to-slow bars reduced drop-offs, and slow-to-fast bars increased drop-offs.
However, a few aspects of this study don’t make sense to me, leading me to question this interpretation. For each study, the researchers report calculating the “drop-off rate” as the ratio of people leaving the survey early to people starting the survey (they present this as a formula, so it is very clear which quantity is divided by which). Yet the mean drop-off rates reported in the tables are always above 14. Taken literally, that would mean 14 people dropped out for every 1 person who started the survey – which obviously doesn’t make any sense. My next thought was that perhaps the authors converted proportions to percentages – e.g., 14 is really 14% – or flipped the ratio – e.g., 14 is really 1:14 instead of 14:1.
However, the minimum and maximum drop-off rates in the Constant condition are 0.60 and 78.26, respectively, which rules out any simple, uniform conversion error: taken at face value, these values imply that in some studies more people dropped out than began the survey, while in others the opposite was true. So something is miscoded somewhere, and it appears to have been done inconsistently. It is unclear whether this miscoding is confined to the tabular reporting or carried through to the analyses.
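For concreteness, here is the drop-off rate as the researchers describe it (a minimal sketch; the variable names are mine):

```python
def drop_off_rate(n_dropped: int, n_started: int) -> float:
    """Drop-off rate as the researchers describe it: people who left the survey
    early divided by people who started it. By construction this is a
    proportion bounded between 0 and 1."""
    return n_dropped / n_started

# Even a very leaky survey stays below 1, so a mean of 14 (let alone 78.26)
# cannot be this proportion as defined:
print(drop_off_rate(n_dropped=140, n_started=200))  # 0.7
```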
A forest plot is provided to give a visual indication of the pattern of results; however, the type of miscoding described above would reverse some of the observed effects, so this does not seem reliable. Traditional moderator analysis (as we would normally see produced from meta-analysis) was not presented in a tabular format for some reason. Instead, various subgroup analyses were embedded in the text – expected versus actual duration, incentives, and survey duration. However, with the coding problem described earlier, these are impossible to interpret as well.
Overall, I was saddened by this meta-analysis: the researchers have an interesting dataset to work with, yet apparent coding errors cause me to question all of their conclusions. Hopefully an addendum or correction will be released addressing these issues.
In an upcoming article in Business Communication Quarterly, Washington, Okoro and Cardon investigated how appropriate people found various mobile-phone-related behaviors during formal business meetings. Highlights from the respondents included:
- 51% of 20-somethings believe it appropriate to read texts during formal business meetings, whereas only 16% of workers 40+ believe the same thing
- 43% of 20-somethings believe it appropriate to write texts during formal business meetings, whereas only 6% of workers 40+ believe the same thing
- 34% of 20-somethings believe it appropriate to answer phone calls during formal business meetings, whereas only 6% of workers 40+ believe the same thing
- People with higher incomes are more judgmental about mobile phone use than people with lower incomes
- At least 54% of all respondents believe it is inappropriate to use mobile phones at all during formal meetings
- 86% believe it is inappropriate to answer phone calls during formal meetings
- 84% believe it is inappropriate to write texts or emails during formal meetings
- 75% believe it is inappropriate to read texts or emails during formal meetings
- At least 22% believe it is inappropriate to use mobile phones during any meetings
- 66% believe it is inappropriate to write texts or emails during any meetings
To collect these tidbits, they conducted two studies. In the first, they conducted an exploratory study asking 204 employees at an eastern US beverage distributor about what types of inappropriate cell phone usage they observed. From this, they identified 8 mobile phone actions deemed potentially objectionable: making or answering calls, writing and sending texts or emails, checking texts or emails, browsing the Internet, checking the time, checking received calls, bringing a phone, and interrupting a meeting to leave it and answer a call.
In the second study, the researchers administered a survey built around those 8 mobile phone actions, rated on a 4-point scale ranging from usually appropriate to never appropriate. It was stated that this was given to a “random sample…of full-time working professionals,” but the precise source is not revealed. The percentage of respondents judging each behavior inappropriate varied, from 54.6% at the low end (leaving a meeting to answer a call) up to 87% (answering a call in the middle of a meeting). Which leaves me wondering about the 13% who apparently think it’s fine to take phone calls in the middle of meetings!
Writing and reading texts and emails were deemed inappropriate by 84% and 74% of respondents, respectively; however, there were striking differences on this dimension by age, as described below:
Although only 16% of people over age 40 viewed checking texts during formal meetings as acceptable, more than half (51%) of 20-somethings saw it as acceptable. It is unclear, at this point, whether this pattern is the result of younger workers’ early exposure to texting or older workers’ greater experience with interpersonal interaction at work. Regardless, it will probably be a point of contention between younger and older workers for quite some time.
So if you’re a younger worker, consider leaving your phone alone in meetings to avoid annoying your coworkers. And if you’re an older worker annoyed at what you believe to be rude behavior, just remember, it’s not you – it’s them!
Recently, Old Dominion University embarked on an initiative to improve the teaching of disciplinary writing across courses university-wide. This is part of ODU’s Quality Enhancement Plan, an effort to improve undergraduate instruction in general. It’s an extensive program, involving extra instructional training and internal grant competitions, among other initiatives.
Writing quality is one of the best indicators of deep understanding of subject matter, and feedback on that writing is among the most valuable content an instructor can provide. Unfortunately, large class sizes have shifted responsibility for grading writing from the faculty teaching courses to the graduate teaching assistants (GTAs) supporting them. Or more plainly: when you have 150 students in a single class, there is simply no way to provide detailed writing feedback yourself several times a semester, on top of an instructor’s other duties, without working substantial overtime.
With that as background, I was pleasantly surprised to discover a new paper by Doe, Gingerich, and Richards in the most recent issue of Teaching of Psychology on training graduate teaching assistants to evaluate writing. In their study, they compared 12 GTA graders, who completed a 5-week course on the “theory and best practices” of grading, with 8 professional graders at two time points, about 3 months apart. For this study, the professional graders were treated as the “gold standard” for grading quality.
Overall, the researchers found that GTAs were more lenient than professional graders at both time points. Both groups assigned higher grades at Time 2 than at Time 1. This is most likely due to student learning over the course of the semester, although it might also reflect differences in assignment difficulty – the researchers did not describe any attempt to account for this. Because the professional graders assessed papers blind to time point, grader drift alone cannot explain the increase.
The researchers also found a significant interaction between GTA grader identity and time, indicating that graders changed differently over time; GTAs were not homogeneous, which suggests important individual-difference moderators (perhaps GTAs become harsher or more lenient at different rates?). The researchers also assessed the quality of the comments graders provided, finding increases in comment quality over time among the GTA raters, but with differences across dimensions of comment quality (e.g., discussion of strengths and weaknesses, rhetorical concerns, positive feedback). The relative magnitudes of these increases were not examined, although they could be computed from the provided table.
One major issue with this study is that the reliabilities of the comment quality outcomes are quite low. Correlations between raters ranged from .69 to .92, but both raters assessed only 10% of the papers. Using these correlations as estimates of inter-rater reliability assumes that every paper was rated by both raters and that the mean rating was used. Since most papers were assessed by only one rater, these reliabilities are overestimates and should have been corrected downward using the Spearman-Brown prophecy formula. Having said that, the effect of low reliability is that observed relationships are attenuated – that is, they are smaller in the dataset than they should be. So had this been done correctly, the increased accuracy would have made the researchers’ results stronger. The effect of time (and of grader) may be much larger than indicated here.
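For reference, here is the Spearman-Brown prophecy formula in both directions (a minimal sketch; the example values are simply the endpoints of the reported .69–.92 range):

```python
def spearman_brown_up(single_rater_reliability: float, k: int = 2) -> float:
    """Reliability of the mean of k raters, given single-rater reliability."""
    r = single_rater_reliability
    return (k * r) / (1 + (k - 1) * r)

def spearman_brown_down(composite_reliability: float, k: int = 2) -> float:
    """Single-rater reliability implied by the reliability of a k-rater mean."""
    r = composite_reliability
    return r / (k - (k - 1) * r)

# Stepping the endpoints of the reported range down from a two-rater mean
# to a single rater:
print(round(spearman_brown_down(0.69), 2))  # ~0.53
print(round(spearman_brown_down(0.92), 2))  # ~0.85
```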
Overall, the researchers conclude that the assumption of GTA quality when grading writing assignments is misplaced. Even with training and practice, GTA performance did not reach that of professional graders. On the bright side, training did help. The researchers conclude that continuous training is necessary to ensure high-quality grading – a one-time training, or I suspect more commonly no training at all, is insufficient.
One thing the researchers did not assess was how GTA and professional grader comment quality compare with professor comment quality. But perhaps that would hit a bit too close to home!
Over the last two days, I attended the OpenVA conference and its Minding the Future preconference. The purpose of these events was to bring together faculty in Virginia to talk about open and digital learning resources. Speakers in the culminating session today described a vision of the future where content creators (faculty, instructional designers, etc.) create educational resources and release them on the web for all the world to use.
That idea feels great to me. Emotionally, I really enjoy the idea of an open and free-wheeling exchange of academic resources. Have a great way to teach a particular concept? Put it online! Share it with faculty of the world! If it’s good, it might be adopted, updated, remixed to make it even better. I think most folks would agree that greater inter-faculty communication and sharing is a good thing.
And yet, a proliferation of open and digital learning resources has not occurred. Thousands of lecture videos from enthusiastic faculty haven’t been posted online. Huge collections of free, innovative interactive Flash and HTML5 apps have not appeared. Massive repositories of activities for each topic in many courses don’t exist. Even the massive open online courses available today are open only in terms of enrollment, not in provided resources. In my field, the number of resources in our biggest teaching repository, which has been around for a few years now, is maybe around 40, many of which are just links to pre-existing YouTube videos. Folks are thrilled to use others’ open resources but reluctant to share their own. So what’s happening? Perhaps it is more useful to think about why people aren’t producing such resources rather than why they are.
Let’s start simple. How many faculty have posted all of their lecture materials to YouTube? All of their slides to Slideshare? I’m guessing that as a proportion of the number of faculty creating such resources, that number is extremely low. I use this example specifically because this would be the easiest content to share – content that has already been created for a course and could be shared with just a few clicks. Although I believe in the value of open educational resources, I don’t post my own either.
Why not? Loss of ownership. This is not in the sense that I want to make money off of my teaching techniques. Instead, I worry that if I were to post such resources, how would they be used? What would stop a University of Phoenix from charging students to take a class using content that I have provided for free? What would stop a University of Walmart from taking my content on unfair treatment of employees in the workplace, replacing a few slides, using it to train its employees on why Walmart is the most fair and equitable employer around, and claiming the ideas were originally mine?
It is this unknown that terrifies me about fully open educational resources. Although I feel comfortable and enthusiastic about sharing resources that I have carefully prepared, vetted, double-checked, and delivered unto the world, I feel much less comfortable sharing everything because I have zero control over what happens to that content once it leaves my computer.
And before you shout “luddite,” I’ll point out that I’m a digital native and a Millennial, at the front of a wave of incoming Millennial faculty. I wrote my first computer program when I was five years old. This is not necessarily an issue of age or tech-savvy. It is deeper than that, and it will not be solved by faculty retirements alone.
I imagine (and hope) there’s a solution to such concerns, but these are the sorts of issues that must be addressed before open resources will be the powerful movement in education that so many desire it to be. If you have a solution, I invite you to share it!
There are two major approaches to data collection with respect to time. Typically, we collect cross-sectional data. This type of data is collected at a single point in time. For example, we might ask someone to complete a single survey. Atypically, we collect longitudinal data. This type of data is collected at multiple points in time, and changes over time are examined. For example, we might collect behavioral data about students today, next week, and the week after that.
The relative advantages and disadvantages of these two approaches can be illustrated by the following similes. Cross-sectional data is like a photograph, whereas longitudinal data is like a video. You might be able to get all the information you need out of a photograph. It does, after all, provide a snapshot of whatever was going on at a particular point in time. But when holding a photograph, you don’t really know what happened before or after it was taken. You must generally assume that whatever you’re interested in stayed the same over time, and that is not always a safe assumption. Although video solves the timeline problem, it introduces new problems – it’s more complicated to collect (hold that camera still!) and you’ll need to watch the whole thing to figure out what’s going on.
The same is true of cross-sectional versus longitudinal data. Longitudinal data is exceptionally difficult to collect because we usually rely on the kindness of volunteers to complete our studies. When you ask someone to come back to your lab six times, they’re substantially less likely to show up at Time 6 than at Time 1. So that has led researchers to investigate alternate techniques for collecting this type of data. One such approach is to provide mobile phones with pre-installed data collection apps to research participants for long-term use, but little research is available describing how well such an approach should work.
An article in Social Science Computer Review by van Heerden and colleagues sheds some light on this issue. The path from phone purchase to actual data collection in a South African sample is fascinating:
- 1000 phones were purchased.
- 996 phones were functional and distributed to research participants over the course of a year.
- One month after distribution ended, 734 of the 996 distributed phones were still accessible. Of the 262 that were not:
- 25 had their SIM card changed (making the phone impossible to track and changing its phone number)
- 32 had deleted the researcher’s data collection software
- 205 were reported as lost, stolen, broken, or given away
- Of the 734 phones, 435 phones were selected at random to be sent two surveys, two weeks apart, via text message.
- Of the 435 phones text messaged, 288 were successfully delivered for Survey 1 and 271 for Survey 2. Of those missing:
- 147 were undeliverable for either Survey
- 21 were successfully received for Survey 1 but not for Survey 2
- 4 were successfully received for Survey 2 but not for Survey 1
- Of the 288 successful text message deliveries for Survey 1, 105 completed Survey 1
- Of the 271 successful text message deliveries for Survey 2, 84 completed Survey 2
Thus, of the original 996 distributed phones, 8.4% resulted in complete data. Even considering only the 288 successful messages, the completion rate was 30.9% for Survey 2. That’s pretty depressing.
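Tracing the arithmetic from the counts above (a rough sketch; the article’s exact denominators aren’t fully spelled out, so small rounding differences are expected):

```python
distributed = 996
delivered_survey_1 = 288
delivered_survey_2 = 271
completed_survey_1 = 105
completed_survey_2 = 84

# Completions as a share of all distributed phones
print(f"{completed_survey_2 / distributed:.1%}")          # ~8.4%
# Completions as a share of successfully delivered invitations
print(f"{completed_survey_1 / delivered_survey_1:.1%}")   # ~36.5% for Survey 1
print(f"{completed_survey_2 / delivered_survey_2:.1%}")   # ~31.0% for Survey 2
```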
The researchers actually present this as positive evidence for the use of mobile devices in this manner, considering that meta-analytic evidence suggests a mean expected response rate for web surveys of 34.0% (higher for mail). But considering the extreme expense involved in this approach, I am not convinced. The authors suggest that the next step would be to probe the use of research participants’ own mobile phones, which is certainly a good idea, but I wonder why they didn’t do this in the first place – perhaps they did not expect their population to already own mobile phones.
I’m starting back to the blog a little later than usual this academic year. You see, this is the dreaded final tenure push, so most of my time has been redirected toward pushing studies out the door (for publication) that I’ve been sitting on. You might wonder why I’ve been sitting on data. The reason is that as an academic, once you have analyzed a dataset, most of the fun is gone. You know the answer. The next step is to prepare the answer that you’ve learned from that analysis for publication, that others might learn from your work, and that part is just honestly substantially less interesting. If only there was a way to communicate your results without writing them all down and waiting anywhere between three months and two years to get through peer review, I think I’d be a lot more motivated!
In any case, regular reviews of academic research will resume next week (lots of interesting research came out this summer), but for now, I wanted to share a guest post I wrote for MediaCommons, an online scholarly community. Each month, a big, vague, but interesting question is posed, and guest writers contribute their answers for discussion. It’s an ambitious project. September was the month of gamification, asking the question: How does gamification affect learning?
The first bit of my response is below. For the rest, check out my guest post at MediaCommons, and while you’re there, read the rest of the very interesting set of responses found there, which includes some neutral-to-positive considerations of gamification and some quite negative.
One of the problems that educators face when considering gamification for their courses is that gamification is an imprecise umbrella term that can refer to several different specific “applications” of games to education. This creates apparent disagreement where none necessarily exists. When someone says “I would like to gamify my course,” they might be thinking:
- I’d like to add some games to my course.
- I’d like to teach my students using games.
- I’d like to make assignments more fun.
- I’d like to motivate my students to do more work.
These all represent somewhat different models of “gamification,” and they have quite different implications for success. So when asked to comment on how gamification affects learning, my first response was that it depends on what you mean by gamification.
Despite common usage, the first two items in my list above are not what I would refer to as “gamification.” Instead, this is the application of “educational/instructional games” or “edugaming”. This is not at all a new concept. Games with learning as their purpose have been around for millennia, back at least to the invention of competitive sports (be faster and stronger than your competitors to be judged as superior by your peers!). Even digital games for learning have been around for decades – the first blockbuster of this genre was Oregon Trail, which was created in the early 1970s. Such games are intended to teach through play within a narrative framework. In the very best edugames, we learn by being pulled into a story that we find compelling, where it is necessary to understand complex relationships within that narrative in order to effect change as an agent operating within it. One cannot win at Oregon Trail by running at a breakneck pace across the American West with inadequate supplies. One only wins by understanding resource rationing, caution, and risk management. And the ability to shoot wildlife doesn’t hurt.
(continued at MediaCommons)
Ah, the end of the academic year. A time for reflection and furious attempts to get some research done. We are in the midst of Spring semester finals here at ODU, and getting ready to head into the summer. Like last year, I’ll be using a relaxed posting schedule for the semester – so no more weekly mid-week posts until August/September-ish.
And actually, unlike last year, I won’t be writing my textbook anymore because it’s been published! If you would like a student-centered, practical, one-semester introduction to statistics for business, I’d (obviously) strongly recommend my Step-by-Step Introduction to Statistics for Business. Each chapter starts with a case study describing a small business owner facing a problem that can be solved with statistics. Lots of diagrams and illustrations too! If my post on intra-class correlations is any indicator, apparently I’ve got a knack for describing statistics pretty clearly.
So instead of the textbook, I’ll instead be working on journal article submissions, a couple of software design projects, and taking a special trip to Europe for my 5-year wedding anniversary. A busy summer!
In an upcoming issue of Cyberpsychology, Behavior, and Social Networking, Pápay and colleagues provide psychometric evidence for the short-form Problematic Online Gaming Questionnaire (developed earlier and published in PLOS ONE) using a national sample of 5,045 high school students. The short-form version is especially interesting because it covers six dimensions with just twelve items – only two items per dimension. Even so, the authors found evidence of adequate model fit for the six-factor solution.
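As a rough illustration of how parsimonious that model is, here is the standard degrees-of-freedom counting for a six-factor CFA with two indicators per factor (a sketch of the generic calculation, assuming correlated factors with variances fixed to 1 – not the authors’ reported analysis):

```python
items = 12
factors = 6

# Unique variances and covariances among the observed items
observed_moments = items * (items + 1) // 2               # 78

# Free parameters: one loading and one residual variance per item,
# plus a correlation for each pair of factors
loadings = items                                           # 12
residuals = items                                          # 12
factor_correlations = factors * (factors - 1) // 2         # 15
free_parameters = loadings + residuals + factor_correlations

print(observed_moments - free_parameters)  # 39 degrees of freedom
```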
Problematic online gaming is related to low self-esteem, depression, and other negative consequences for psychological health. Below, you can complete the Problematic Online Gaming Questionnaire (short form) and get an assessment of your own addiction. If you wish to assess someone else, it is important that they complete the assessment themselves; don’t read it to them. After you submit your answers, an additional analysis of your responses will appear, comparing you to the sample collected by the research team and indicating your risk factors.
Please note: You must respond to every question to learn your results. Your responses are entirely local to your own computer (your responses will remain unknown to anyone but you). You must also complete this survey on the neoacademic.com website on this page (click “read more…” if you see it!).
- Pápay, O., Urbán, R., Griffiths, M., Nagygyörgy, K., Farkas, J., Kökönyei, G., Felvinczi, K., Oláh, A., Elekes, Z., & Demetrovics, Z. (2013). Psychometric properties of the Problematic Online Gaming Questionnaire Short-Form and prevalence of problematic online gaming in a national sample of adolescents. Cyberpsychology, Behavior, and Social Networking. DOI: 10.1089/cyber.2012.0484
- Demetrovics, Z., Urbán, R., Nagygyörgy, K., Farkas, J., Griffiths, M., Pápay, O., Kökönyei, G., Felvinczi, K., & Oláh, A. (2012). The development of the Problematic Online Gaming Questionnaire (POGQ). PLoS ONE, 7(5). DOI: 10.1371/journal.pone.0036417
In a recent article appearing in Organizational Psychology Review, Pillutla and Thau make some very strongly worded arguments about the role of theory development in psychological science. I’ll start exploring their paper with a quote in their own words:
The state of [industrial/organizational psychology] and its obsession with novel theoretical contributions is antithetical to the goals of the scientific method.
The authors lament the over-reliance on novel theory development as the foundation for a meaningful contribution to scientific research. They suggest, rightfully I think, that the pursuit of novel theory creates a perverse incentive structure for scientists, encouraging them to identify “facts” that may fit available data but do not really exist. This ultimately leads to theories that neither complement nor build upon prior theory – orphans of the scientific literature that provide no true value to our understanding of organizational phenomena. They label this approach the pursuit of the “interesting” at the expense of good science.
They further argue that this state of affairs is the result of a fundamental misunderstanding of the scientific process. The enterprise of science is not one where we should aim to identify facts, because facts do not really exist; instead, it is one where theory is used as a tool to incrementally better understand phenomena, each additional bit of research contributing to a better description of a real research problem. When we pursue conclusions like “this is how organizations function” or “this is how employees behave,” we are misleading both ourselves and those who rely upon our research.
Certainly, such misunderstandings have occurred before, with highly damaging implications for the progress of science. One such misunderstanding was the over-reliance on statistical significance testing, where “statistically significant” was mistakenly taken as synonymous with “real” or “important.” Although people still make such mistakes in interpretation, the movement toward effect size interpretation and model specification (as evidenced by changing APA reporting guidelines) is clearly targeted at reducing this continuing problem. In their article, Pillutla and Thau argue that theory development as the sole target of science is the research-methods equivalent of the statistical-significance interpretation problem – that this over-emphasis in academic publishing is pervasive and harmful. Like statistical significance testing, sometimes novel theory development is the right approach, but many times it isn’t.
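To make that distinction concrete, here is a toy calculation with made-up numbers: with a large enough sample, even a trivially small correlation clears the conventional significance threshold.

```python
import math

def correlation_t_statistic(r: float, n: int) -> float:
    """t statistic for testing a Pearson correlation against zero."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# A correlation of .05 explains only 0.25% of variance, yet with n = 5,000
# it comfortably exceeds the conventional cutoff of |t| = 1.96.
print(correlation_t_statistic(r=0.05, n=5000))  # ~3.54
```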
Through analysis of citations to one of the most dominant proponents of theory-building as the basis of all science, the authors also contend that management, business, and applied psychology have been disproportionately harmed by this viewpoint, in comparison to other areas of the social sciences.
This view is a bit extreme, but may be the front edge of a coming wave of reform. I’ve noticed myself over the last few years that fewer and fewer papers appearing in our top journals - Journal of Applied Psychology, Personnel Psychology, Academy of Management Journal, etc. – have solved any real problems for actual organizations. Instead, they focus on exploring and specifying increasingly minute aspects of organizations – interesting, sure – but not terribly useful in any tangible sense. From chatting with practitioners and other early career scholars, I’ve discovered this view is not all that unusual. One person at the Society for Industrial and Organizational Psychology conference this year even told me, “I haven’t seen anything I could actually use to help my employees in Journal of Applied Psychology for at least five years.” That is a terrible state of affairs. If we’re not solving real problems, what value do we really provide to both organizations (whom we study) and taxpayers (who pay us, or at least those of us working in state institutions)? The push to publish only in top tier journals – where such theory development is required – only exacerbates this problem.
Interesting theories are not, by themselves, worthy of investigation unless they are put to the service of explaining research problems… the quest for novelty and interestingness of facts has infused them with significance without any regard to the knowledge that they generate.
So are things as dire as Pillutla and Thau indicate? There have certainly been a number of high-profile cases in psychology lately that have caused the public to question the value of our entire field. And we should absolutely work to repair that damage, but the path to do so is unclear. Some researchers have taken a purely empirical approach, attempting to replicate controversial papers and examine their convergence, eschewing theory entirely in the quest for reliable knowledge. Is that enough? I’m guessing we’ll find out in the next five or ten years.
Day 3 was the final day of the SIOP conference. Per tradition, sessions were lightly attended – too much late night enjoyment at the end of Day 2, I suspect! I had a slow start myself, only getting to poster sessions a little after 10AM. One that caught my eye explored the effect of organizational support for the use of technology over time; surprisingly, the effect of strong organizational support on technology use weakened over time, not quite in line with what prior theory (and the researcher presenting!) predicted.
I also decided somewhat last-minute to attend a Master Tutorial on Bayesian statistics for I/O. Unfortunately, despite a full house and CE credits on the line for many, the speaker didn’t show! After about 15 minutes, I left and headed back to the poster session on Research Methods, with lots of interesting work, including a bit more on the equivalence of mTurk and undergraduate research samples. The added value on this paper? English-speaking, US-based mTurk samples are similar to undergraduate samples, but non-English-speaker samples are a little different.
Finally, I ended up at the closing plenary, given by Reverend TJ Martinez, founder of Cristo Rey Jesuit College Preparatory School in Houston. He related an inspiring story of starting a school with zero support and ending by graduating a sizable class of low-SES high school students – drawn from a population with an abysmally low high school completion rate – with a 100% college entrance rate. I am still not sure precisely what he had to do with I/O, but he was certainly an entertaining and admirable speaker.
At the very end, incoming SIOP President Tammy Allen laid out a five-point plan for SIOP in the coming year:
- Tie the varied local I/O advocacy and interest groups better to SIOP.
- Increase the visibility of I/O Psychology in Introductory Psychology courses and textbooks. Graduates of undergraduate psychology programs should, at minimum, be able to answer the question, “What is I/O Psychology?”, and this doesn’t happen in most programs.
- Work to improve SIOP’s branding, which may include the launch of a new practitioner-oriented journal.
- Improve national advocacy efforts on behalf of SIOP. Apparently we already have some folks on retainer, but this will expand.
- Better understand SIOP’s role and position in “science.” Psychology is viewed as one of the foundational areas of scholarship on which many other areas are based, but where does I/O fit specifically?
And that’s it for SIOP 2013! I’ve got another day in Houston – which will likely involve the Houston Aquarium and a nice dinner somewhere – and then back to cooler Norfolk (at a relatively chillier 70 degrees F!), where I’ll be writing a slew of follow-up emails to all the intriguing folks I met this year. And hopefully I’ll be sufficiently recovered from this trip in a year to attend SIOP 2014 in sunny Honolulu!
|14:04||#SIOP13 Day 3 begins with another SIOP tradition, for me anyway: I’ve lost my voice! Too much of a good thing I suppose!!|
|16:21||Fascinating poster on org support for tech use in orgs – changes over time in unexpected ways #siop13|
|16:22||@ericknud by that logic, wouldn’t you want to be unattractive? Inverted u-curve perhaps? #siop13|
|18:22||@ericknud yes, that’s what I’m saying – it is probably more complicated than just “moderately attractive is good”|
|18:24||#siop13 is both amazing and frustrating because you run into people you know constantly… Missed a whole session!|
|18:26||@TSP_Consulting @jbrodieg and what do #SIOP points get me, hmm? #siop13 or #siop14 perks perhaps?|
|18:41||Intro to Bayesian Statistics speaker is missing 10 mins into session… Uhoh #siop13|
|18:46||#SIOP13 skipped out on Bayesian tutorial without a speaker for research methods poster session… At least they’ll all show up!|
|18:51||@jsnread that was the consensus of the room!|
|18:53||@BelindaK04 what room is this?|
|21:55||#siop13 closing plenary… Rev Martinez is a pretty funny guy!|
|21:57||#siop13 closing plenary… Low income students as an investment opportunity for orgs… Future customers, future employees|
|22:10||#siop13 closing plenary… Inspiring speaker on the power of the few and disenfranchised to accomplish their goals|
|22:18||#siop13 #SIOP Prez Tammy Allen’s goals: tie local IO to SIOP, IO in intro, branding incl practitioner journal, natl advocacy, map of science|
|22:19||#siop13 video on Hawaii to encourage attendees at #siop14… Do we really need convincing??|
|23:06||End of Day 3 and end of #siop13… Great conference! So many followup emails to write!|
Day 2 of SIOP 2013 was a bit busier for me. After an early breakfast, I headed to my first session of the day at 8:30, a discussion of social media in selection, which ended up really being a discussion of LinkedIn in selection. Like Day 1, this was a bit depressing. Although there was a lot of talk about social media, there was not much data, which is what I was really hoping for. I did hear a few interesting tidbits in this session that I had not heard before. For example, a lot of managers look up job candidates on LinkedIn, where many pieces of information are shared that would not be part of a job application, like profile pictures and affiliated religious groups. Once a hiring manager sees such information, their judgment has potentially been irreparably affected – or more critically, the possibility that it has been so affected creates a substantial legal risk. How can an organization prove that manager wasn’t influenced by the job candidate’s race, skin color, national origin, gender or religion once they have been exposed to it?
The next session on social media was a little better, data-wise, but it wasn’t perfect. One of the presentations provided some data on the criterion-related validity of LinkedIn profiles for predicting job performance, as an extension of a previously published paper doing the same for Facebook profiles. As it turns out, LinkedIn profiles can provide valuable information to hiring managers. But I asked a question of the speakers: Aren’t we just setting ourselves up to redo this same study for every social media platform of the moment? If we seek a scientific understanding of the useful information provided by social media, don’t we need a better way to do it? Everyone seemed to think that was a great idea, but alas, I didn’t get any specific recommendations on how to go about it.
Next, I headed to a symposium on online simulation, which was absolutely packed. Much in line with what I noticed from Day 1, practitioners are hungry for data on how these technologies have and should affect their practice – and yet again, data was a bit light. They presented some interesting assessment tools, but they seemed like relatively small advancements over what we already have – for example, the use of branching video-based simulations rather than branching text or static video assessments. The practitioners did bring up some interesting issues they faced in launching these technologies globally. For example, when they tried to launch their video-based simulation in another country with poor Internet infrastructure, they discovered that bandwidth was not the guarantee that it is in the US – their client couldn’t view the videos at even quasi-decent quality. This resulted in some rather last minute changes to meet client needs.
After this symposium, I headed over to the poster session on training – in an hour, I got through only about 75% of the room. Hopefully I’ll have a bunch of papers incoming to my inbox in the next few days so that I can revisit them in more detail than a 3′ x 5′ poster can provide!
Finally, at 5PM, in a symposium with my colleagues at ICF and ARI on emerging training technologies, I gave my presentation on gamification, which was by a large margin the most successful presentation I have ever given at SIOP in the 9 years I have attended. In it, I presented a theoretical model of gamification providing two specific mechanisms by which gamification can affect learning in the context of education/training, and I tested one of those processes empirically, providing several practical recommendations for those wishing to gamify their own organizational learning. I had literally 25 minutes of questions and comments after the session ended!
All in all, an exceptionally successful Day 2! Day 3 is a bit lighter, and I suspect tonight will be a bit more intense, so Day 3 may also start a bit late. But I am still hopeful for some great learning and some great conversations.
|13:13||Hangovers and spotty session attendance… so starts Day 2 of #siop13… Although many of the hangovers won’t start for hours yet!|
|13:41||Yet another session on social media for selection… Some data in this one, I hope? #siop13|
|14:05||Information on protected classes may be more common on LinkedIn than in resumes… Greater legal risk (managers cannot unsee!) #siop13|
|14:06||@ChrisWieseIO The speakers seem to be mixing selection predictors and selection methods… Seems a very jumbled approach to me. #siop13|
|14:10||I’m getting a little tired of hearing “maybe we’ll have some empirical research later” in these social media symposia #siop13|
|14:26||@ericknud I’ve got several running right now… Just sad there are not more here #siop13|
|14:34||I am sad when people count the number of stat. significant relationships in their results as an indicator of result stability #siop13|
|14:47||@ericknud Open for now! Stick around, will probably head up front|
|15:17||@ericknud was talking to Don until just now, out in the hall now for a bagel|
|15:17||Discovered I was hoarse from last night’s activities while asking a question at a symposium… whoops #siop13|
|17:12||Full house, standing room only at innovations in online simulation #siop13|
|17:29||To practitioners, video-based virtual assessment center apparently = long-form video SJTs with open ended responses #siop13|
|17:36||Cultural issues within orgs can be unexpected and tricky for innovative assessment: “our org doesn’t use email” #siop13|
|17:55||Online assessment center design with branching reminds me of game design process #siop13|
|19:02||Did not even get all the way through the training poster session in an hour – too much interesting stuff! #siop13|
|19:03||RT @sandyfias: It makes me sad that about 8/3800 I/O psychologists at #siop13 r tweeting. And we ask how we can be more visible and rele …|
|19:03||@sandyfias I’d suggest we have a tweetup, but it would just be depressing! #siop13|
|20:22||Hope you’re attending my session at #siop13 on serious games and gamification in Room 339AB at 5PM today! (Room change!)|
|20:25||@BelindaK04 @PsychoSoAnt @sandyfias Are there enough people? I’ve got 3-4:30 tomorrow #siop13|
|21:55||Starting in 5 mins, session at #siop13 on serious games and gamification in Room 339AB!|
|22:05||Standing room only at the session! #siop13 http://t.co/HLZwNOwTpY|
|22:37||RT @jsnread: Gamified learning results in longer time spent on task, thus greater learning. Wish I could have gamified my dissertation. …|
|22:38||@neilmorelli thanks – got a little rushed at the end but hopefully it made sense! #siop13|
|23:27||Wow – 25 minutes of questions about #gamification research after session! Thanks everyone! #siop13|
|23:27||@jsnread ha, why not!?|
|23:50||#siop13 Day 2 is over! Now to the ODU alumni dinner (where I entertain) and the Minnesota alumni event (where I am entertaining)|
SIOP 2013 has begun, and Day 1 was a fascinating set of presentations. The day started with the opening plenary by SIOP President Doug Reynolds, who talked about Big Data and I/O Psychology’s role in it. For some reason, we are not leading the way, despite our expertise in synthesizing, analyzing, and interpreting massive datasets about behavior and personal characteristics. Sounds like it would be a happy marriage to me!
The biggest difference I noticed today in comparison to last year was the massive number of presentations on social media and mobile assessment. Mobile assessment presentations seem to focus on the apparent equivalence of computer-based and mobile-based assessment, but I’ve been seeing some results that contradict this… so more research is clearly needed. Social media research is progressing, but I heard “we don’t have data” more times than I can count. If you don’t have data, why are you presenting at SIOP? This is a data-hungry crowd! Perhaps the most disappointing was the session on “Empirical Evidence” for social media, in which only one presentation had any real data to speak of. I went in hopeful, though.
Day 1 was actually my lightest day, and I had a couple of time slots that were not scheduled. No worries – just walk 10 feet at SIOP and you’ll run into someone you know. The disadvantage is that it takes an hour to get from one side of the conference hotel to the other!
I also sat in a presentation where “Millennials” were predicted to be dramatically different from everyone else. Surprise! I hate such conclusions – Millennials are a diverse bunch, like every other generation. And I don’t say that just because I’m a Millennial.
Evening activities went pretty late into the night… but it’s SIOP… so that’s just what happens. Totally worth it. Day 2 summary tomorrow!
Archive of Day 1 Twitter posts follows:
|13:38||I wonder if this is the first #siop with a 2-minute silence for award winners… #siop13|
|13:48||So many #siop13 awards… where is the award for “best tweet”?|
|13:56||Production values have gone up a bit for #siop13 fellow announcements. The muzak is an… interesting touch.|
|14:07||RT @fanseel: Very proud of my #ugent colleague Filip Lievens being awarded fellowship by Society for Ind&Org Psychology #SIOP13 http …|
|14:10||Who has made the biggest impact on your professional development? Donate to #siop in their honor #siop13 http://t.co/1qyjQJeRes|
|14:21||Quite the risqué exposé on SIOP President Doug Reynolds from Tammy Allen #siop13|
|14:33||Will this address on #siop branding effect change differently than the last several years of addresses on branding? #SIOP13|
|14:38||@sioptweets #SIOP13 is the official SIOP hashtag, right? Or did we switch back to #siop2013 this year?|
|14:39||@ChrisWieseIO I prefer #siop = “scientist-practitioners who care about both” #siop13|
|14:43||Doug Reynolds saying we are behind on Big Data… But we’ve been doing a lot of this for years, just didn’t call it “Big Data” #siop13|
|14:57||@jsnread Thought so! Saw folks using #siop2013 instead of #siop13 and wanted to be sure. Last year was #siop12, didn’t think it would change|
|14:58||With the closing of the opening plenary, #SIOP13 has officially begun! First stop coffee and scones?|
|16:20||@neilmorelli Which presentation is this from? We have contradictory results.|
|16:21||At community of interest on virtual work #SIOP13|
|16:45||RT @ftcniek: JAP editor Kozlowski in panel session on causality: “I do not send out cross-sectional self-report studies to reviewers” #s …|
|18:01||Andalocua tapas bar near the conference hotel highly recommended! #siop13|
|19:45||It’s always 3 simultaneous sessions I want to attend or zero #siop13|
|20:35||In state of social media symposium #siop13|
|20:52||#siop13 symposium on social media so far: lots of ideas, not much data, not many specifics|
|21:01||Most common statement about social media in Schmit’s talk: “we don’t have much research” #siop13|
|21:05||43% of orgs ban social network sites on their networks according to SHRM survey; guess no one thought about smartphones #siop13|
|21:22||Most states prohibit using lawful off-work activity to make hiring decisions; probably includes Facebook photos of drinking, smoking #siop13|
|21:29||@ChrisWieseIO Agreed. Perhaps a new Journal of Social Media in Organizations? #siop13|
|21:44||Getting the impression that the speakers in this social media session don’t know what time it is set to end #siop13|
|21:52||Third speaker just started, 2 mins after end of session… Uh oh #siop13|
|22:01||Now at Empirical Evidence for social media, so hopefully more hard evidence in this one! #siop13|
|22:03||Not going to be “a lot of numbers”, uh oh again #siop13|
|22:04||I do support a new hashtag for common use at #siop13 suggested by speaker: #thispresentationisawesome|
|22:11||@fanseel So it seems. He has a theory though. Just didn’t test it in his study. Not sure why.|
|22:14||@fanseel Ha, yes exactly!|
|22:23||Millennial job applicants usually use combination of social media and websites to investigate orgs, but some use social media alone #siop13|
|22:28||21% of youths felt it inappropriate that the military use social media for recruiting #siop13|
|22:30||@DrDavidBallard I know of precisely 2 studies… I’m not sure if she found anything else in her SHRM-funded review! #siop13|
|22:53||Day 1 of sessions wrapped up! Now hours and hours of socializing and networking until passing out and doing it again tomorrow #siop13|
As in 2010, 2011, and 2012, I’ll be live-blogging the SIOP conference, which begins Thursday, April 11 and runs through Saturday, April 13. This post contains a list of all the sessions that I am interested in attending, which are generally focused on technology, training, and assessment. Barring any unexpected battery problems, my live blogging will be on Twitter, with a permanent record stored here.
In the table below, the symposium that I am presenting in is Session 206 (I-O’s Role in Emerging Training Technologies). As you can see, the schedule is a bit packed, so I won’t be attending all of these, but this is everything I’m interested in. If you’d like to meet up at any of these events, or if you think I missed something that I should definitely attend, please let me know!
| No. | Day | Start | End | Session Title | Room | Session Type |
|---|---|---|---|---|---|---|
| 1 | 4/11 | 8:30 | 10:00 | Opening Plenary Session | Grand A | Special Events |
| 16-16 | 4/11 | 11:00 | 12:00 | Antecedents and Consequences of Voluntary Professional Development Among STEM Majors | Ballroom | Poster |
| 75 | 4/11 | 3:30 | 5:00 | The Science and Practice of Social Media Use in Organizations | 346 AB | Master Tutorial |
| 76-9 | 4/11 | 3:30 | 4:30 | Predicting Persistence in Science, Technology, Engineering, and Math Fields | Ballroom | Poster |
| 89 | 4/11 | 5:00 | 6:00 | Back to the Future of Technology-Enhanced I-O Practice | 335 BC | Panel Discussion |
| 91 | 4/11 | 5:00 | 6:00 | Empirical Evidence for Successfully Using Social Media in Organizations | 340 AB | Symposium |
| 110 | 4/12 | 8:30 | 10:00 | The Promise and Perils of Social Media Data for Selection | Grand E | Symposium |
| 114 | 4/12 | 8:30 | 10:00 | Team Leadership in Culturally Diverse, Virtual Environments | Grand J | Symposium |
| 127-18 | 4/12 | 10:30 | 11:30 | Creation and Validation of a Technological Adaptation Scale | Ballroom | Poster |
| 138-32 | 4/12 | 11:30 | 12:30 | Personality Perceptions Based on Social Networking Sites | Ballroom | Poster |
| 148 | 4/12 | 12:00 | 1:30 | Innovations in Online Simulations: Design, Assessment, and Scoring Issues | 344 AB | Symposium |
| 159-22 | 4/12 | 1:00 | 2:00 | Learner Control: Individual Differences, Control Perceptions, and Control Usage | Ballroom | Poster |
| 188-22 | 4/12 | 3:30 | 4:30 | Another Look Into the File Drawer Problem in Meta-Analysis | Ballroom | Poster |
| 206 | 4/12 | 5:00 | 6:00 | I-O’s Role in Emerging Training Technologies | 342 | Symposium |
| 225 | 4/13 | 8:30 | 10:00 | New Insights Into Personality Test Faking: Consequences and Detection | 340 AB | Symposium |
| 274-27 | 4/13 | 12:00 | 1:00 | Using Bogus Items to Detect Faking in Service Jobs | Ballroom | Poster |
| 274-1 | 4/13 | 12:00 | 1:00 | Examining Ability to Fake and Test-Taker Goals in Personality Assessments | Ballroom | Poster |
| 274-13 | 4/13 | 12:00 | 1:00 | Applicant Withdrawal for Online Testing: Investigating Personality Differences | Ballroom | Poster |
| 280 | 4/13 | 12:00 | 1:30 | Technology Enhanced Assessments, A Measurement Odyssey | Grand F | Symposium |
| 305 | 4/13 | 2:00 | 3:00 | Advances in Technology-Based Innovative Item Types: Practical Considerations for Implementation | 337 AB | Symposium |
| 306-12 | 4/13 | 2:00 | 3:00 | Is Crowdsourcing Worthwhile? Measurement Equivalence Across Data Collection Techniques | Ballroom | Poster |
| 306-21 | 4/13 | 2:00 | 3:00 | Is the Policy Capturing Technique Resistant to Response Distortion? | Ballroom | Poster |
| 316 | 4/13 | 4:30 | 5:30 | Closing Plenary Session | Grand A | Special Events |