Social:Web-based experiments

From HandWiki
Revision as of 12:43, 5 February 2024 by JOpenQuest
Short description: Experiment conducted over the Internet

A web-based experiment or Internet-based experiment is an experiment that is conducted over the Internet. In such experiments, the Internet is either "a medium through which to target larger and more diverse samples with reduced administrative and financial costs" or "a field of social science research in its own right."[1] Psychology and Internet studies are probably the disciplines that have used these experiments most widely, although a range of other disciplines including political science and economics also use web-based experiments. Within psychology most web-based experiments are conducted in the areas of cognitive psychology and social psychology.[2][3] This form of experimental setup has become increasingly popular because researchers can cheaply collect large amounts of data from a wider range of locations and people. A web-based experiment is a type of online research method. Web-based experiments have become significantly more widespread since the COVID-19 pandemic, when researchers were unable to conduct lab-based experiments.[4]

Introduction

Experiments are an integral part of research; however, their integration with the Internet has been gradual. There are three main categories of experiments:

  • Controlled experiments, done in a laboratory setting, attempt to control for all variables and then test for a single effect.
  • Natural experiments, conducted after a large-scale event that was prohibitively difficult or impossible to control, collect as many variables as possible and then draw correlations.
  • Field experiments, observed in a natural setting where fewer controls can be applied, have the advantage of better external validity.

The adaptation of each type of experiment to the online setting faces some hurdles.

Benefits

Web-based experiments are significantly less expensive, potentially allowing the researcher to:

  • Reach more diverse samples, as well as rare or specific sub-populations[5]
  • Run experiments more quickly[5]
  • Recruit larger subject pools that provide higher statistical power[6][7]
  • Conduct cross-cultural social experiments in real time[8]
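The statistical-power benefit above can be illustrated with a back-of-the-envelope calculation. The sketch below uses a normal-approximation two-sample test; the effect size (d = 0.3) and the "lab-scale" versus "web-scale" sample sizes are illustrative assumptions, not figures from the cited studies:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05
    for a standardized effect size d with n_per_group subjects per arm."""
    z_crit = 1.96  # critical value for alpha = 0.05, two-sided
    return phi(d * sqrt(n_per_group / 2.0) - z_crit)

# The same modest effect (d = 0.3) at lab-scale vs. web-scale samples:
print(round(power_two_sample(0.3, 50), 2))   # ~0.32: likely to miss the effect
print(round(power_two_sample(0.3, 500), 2))  # ~1.0: near-certain detection
```

Tenfold-larger online samples can thus turn an underpowered design into one that almost certainly detects the same modest effect.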

These benefits have the potential to translate into greater external validity and generalizability for the study. For instance, in web-based experiments there is less reliance on data gathered from populations of Western undergraduate students who are often used as the default research subjects in social science disciplines.[9] Because participants remain in their homes or offices while participating in the experiment, scholars have also argued that such experiments have greater ecological validity.[5]

Criticisms and limitations

Web-based experiments may have weaker experimental controls compared to laboratory-based experiments, and may face greater difficulty devising procedures that ensure reliability and internal validity.[10] Online natural and field experiments may also face challenges generalizing findings beyond the online context in which they were conducted. Some potential difficulties faced by web-based experiments include:

  • Difficulty verifying the identity of subjects participating in the experiment
  • Experimental instructions ignored or read too carelessly, leading to lower quality data[11]
  • Shorter decision times of online participants triggering instinctive and emotional reasoning processes rather than cognitive and rational ones, which could cause subjects to make more pro-social decisions on average[1][12]
  • Significant distractions occurring during the course of the experiment unbeknownst to the researchers
  • Subjects selectively dropping out of the experiment, especially if drop-out is correlated with the independent variable(s)
  • Variance in the data due to network connection speed and reliability, browser and computer types, screen size and resolution, etc.[5]
  • Subjects taking the experiment less seriously and behaving with less risk-aversion[13]
  • Subjects not believing that they are interacting with real human partners
  • Subject concerns about compensation at the end of the experiment or anonymity of payment processing[14]
  • The non-representative nature of the mostly English-speaking computer users who participate[5]

In the face of these criticisms, some researchers have argued that brick-and-mortar experiments are just as affected by these problems, if not more so.[15][16][17]

Studies have been conducted to test the internal validity of web-based experiments, comparing across experimental conditions (online and offline) and successfully replicating findings. For example, Schoeffler et al. (2013) compared laboratory- and web-based results (62 and 1,168 subjects) of an auditory experiment and found no significant differences.[18] A paired experiment in behavioral economics split into online and traditional lab environments produced substantively similar results.[1] Uncompensated and unsupervised subjects on LabintheWild have been shown to replicate previous in-lab study results with comparable data quality.[19]

Methodologies

Experimental protocols have been suggested to prevent or control difficulties associated with web-based experimentation. Methods like sequential subject matching, background timing and mouse use tracking, and instantaneous compensation through PayPal have the potential to address many of the concerns about the internal validity of web-based experiments.[1] These methods control for differences in response times, address issues of selective attrition, concentration, and distraction, minimize subject concern about compensation, improve subject confidence that they have a real human partner in the experiment, and ensure that subjects have an appropriate understanding of the instructions and the decision problems in the experiment.[1]
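The sequential-matching idea can be sketched as a simple first-come, first-served queue that pairs participants in arrival order, so each subject faces a real human partner rather than a scripted one. This is a minimal illustration; the class and method names are invented for this sketch, not taken from the cited protocols:

```python
from collections import deque

class MatchQueue:
    """Pairs arriving participants in order, so each subject interacts
    with a real human partner. Illustrative sketch only."""

    def __init__(self):
        self.waiting = deque()   # subjects awaiting a partner
        self.pairs = []          # completed matches

    def arrive(self, subject_id):
        """Match the arriving subject with the longest-waiting one,
        or enqueue them if nobody is waiting. Returns the partner or None."""
        if self.waiting:
            partner = self.waiting.popleft()
            self.pairs.append((partner, subject_id))
            return partner
        self.waiting.append(subject_id)
        return None

q = MatchQueue()
q.arrive("s1")         # s1 waits for a partner
print(q.arrive("s2"))  # prints "s1": s1 and s2 are now matched
```

A real deployment would additionally time out subjects who wait too long, which ties this mechanism to the attrition concerns discussed above.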

Scholars have also formulated techniques to decrease or account for drop-outs, including the high-hurdle technique (clustering motivationally adverse information at the beginning of the study), the seriousness check (asking participants to estimate the probability that they will complete the study), and the warm-up phase (placing consent forms or other pre-study materials first to winnow the sample before the study begins).[5]
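In practice, a seriousness check amounts to screening on participants' self-reported completion estimates before analysis. The following sketch shows one plausible implementation; the field name and the 80% cut-off are illustrative assumptions, not values from the source:

```python
def seriousness_filter(responses, threshold=80):
    """Keep only participants whose self-reported probability (in %)
    of completing the study meets the threshold. The field name and
    the default cut-off are illustrative assumptions."""
    return [r for r in responses if r["completion_estimate"] >= threshold]

responses = [
    {"id": 1, "completion_estimate": 95},
    {"id": 2, "completion_estimate": 40},   # likely drop-out; excluded
    {"id": 3, "completion_estimate": 100},
]
print([r["id"] for r in seriousness_filter(responses)])  # [1, 3]
```

Screening before the experiment begins, rather than discarding data afterwards, avoids conditioning the analysis on behavior that the treatment itself might have influenced.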

Examples

Use in psychology

A wide range of psychology experiments are conducted on the web. The Web Experiment List provides a way to recruit participants and archives past experiments (over 700 and growing).[20] The free WEXTOR tool supports the design of web experiments by "dynamically creat[ing] the customized Web pages needed for the experimental procedure".[21] Web experiments have been used to validate results from laboratory research and field research and to conduct new experiments that are only feasible if done online.[5] Further, the materials created for web experiments can later be reused in a traditional laboratory setting if desired.

Interdisciplinary research using web experiments is rising. For example, a number of psychology and law researchers have used the web to collect data. Lora Levett and Margaret Bull Kovera examined whether opposing expert witnesses are effective in educating jurors about unreliable expert evidence.[22] Rather than sensitizing jurors to flaws in the other expert's testimony, the researchers found that jurors became more skeptical of all expert testimony. In their experiment, this led to more guilty verdicts. Levett and Kovera's research used a written transcript of a trial, which participants then read before making their decision. This type of stimulus has been criticized by some researchers as lacking ecological validity—that is, it does not closely approximate a real-life trial. Many recommend the use of video where possible. Researchers at New York University are currently conducting a psychology and law study that uses video of a criminal trial.[23]

Researchers at University of Salford are currently conducting a number of studies online to explore sound perception.[24] Sound experiments over the web are particularly difficult due to lack of control over sound reproduction equipment.

Salganik, Dodds, and Watts conducted an experiment to measure social influence, specifically in the popularity rating of songs. Their use of the Internet allowed them to recruit over 14,000 participants and to examine the relationship between individual and collective behavior.[7]

Use in economics

As more experiments have been conducted in economics, questions about appropriate methodology and study organization have been raised. Jerome Hergueux and Nicolas Jacquemet developed an "online laboratory" to compare social preferences and risk aversion online and in person. They administered a risk-aversion assessment, a public goods game, a trust game, a dictator game, and an ultimatum game to groups both online and in a lab as a way of assessing the internal validity of web-based experimentation in economics.[1]

Use in political science

An online field experiment conducted on 61 million Facebook users tested whether receiving information about voting, polling places, and the voting behavior of one's friends led individuals to seek out political information, influenced political self-expression, and changed real-world voting behavior.[8]

Use in Internet studies

Web-based experiments have particular salience in studies of how online communities operate. Internet studies, including studies of online communities and social networks, have used natural and field experiments to understand the effects of informal rewards in peer production on Wikipedia,[25][26] as well as the impact of early recognition and support on future successes on Kickstarter, Change.org, Epinions, and Wikipedia.[27] Another experiment looked at the effect on edit rates of introducing a program of intelligent task assignment on Wikipedia.[28]

References

  1. 1.0 1.1 1.2 1.3 1.4 1.5 Hergueux, J.; Jacquemet, N. (2014). "Social preferences in the online laboratory: a randomized experiment". Experimental Economics 18 (2): 251–283. doi:10.1007/s10683-014-9400-5. https://halshs.archives-ouvertes.fr/halshs-00984211/file/OnlineLabWP.pdf. 
  2. Reips, U.-D. (2007). The methodology of Internet-based experiments. In A. Joinson, K. McKenna, T. Postmes, & U.-D. Reips (Eds.), The Oxford Handbook of Internet Psychology (pp. 373-390). Oxford: Oxford University Press.
  3. Reips, U.-D. & Krantz, J. H. (2010). Conducting true experiments on the Web. In S. Gosling & J. Johnson, Advanced Internet Methods in the Behavioral Sciences (pp. 193-216). Washington, DC: American Psychological Association.
  4. Lourenco, Stella F.; Tasimi, Arber (2020-08-01). "No Participant Left Behind: Conducting Science During COVID-19" (in en). Trends in Cognitive Sciences 24 (8): 583–584. doi:10.1016/j.tics.2020.05.003. ISSN 1364-6613. PMID 32451239. 
  5. 5.0 5.1 5.2 5.3 5.4 5.5 5.6 Reips, Ulf-Dietrich (2002). "Standards for Internet-Based Experimenting". Experimental Psychology 49 (4): 243–56. doi:10.1026/1618-3169.49.4.243. PMID 12455331. http://www.uni-konstanz.de/iscience/reips/pubs/papers/ExPsyReipsReprint.pdf. 
  6. Kramer, A. D. I.; Guillory, J. E.; Hancock, J. T. (2014). "Experimental evidence of massive-scale emotional contagion through social networks". Proceedings of the National Academy of Sciences 111 (24): 8788–8790. doi:10.1073/pnas.1320040111. PMID 24889601. Bibcode2014PNAS..111.8788K. 
  7. 7.0 7.1 Salganik, M. J.; Dodds, P. S.; Watts, D. J. (2006). "Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market". Science 311 (5762): 854–856. doi:10.1126/science.1121066. PMID 16469928. Bibcode2006Sci...311..854S. 
  8. 8.0 8.1 Bond, R. M.; Fariss, C. J.; Jones, J. J.; Kramer, A. D. I.; Marlow, C.; Settle, J. E.; Fowler, J. H. (2012). "A 61-million-person experiment in social influence and political mobilization". Nature 489 (7415): 295–298. doi:10.1038/nature11421. PMID 22972300. Bibcode2012Natur.489..295B. 
  9. Henrich, J.; Heine, S. J.; Norenzayan, A. (2010). "The weirdest people in the world?". Behavioral and Brain Sciences 33 (2–3): 61–83. doi:10.1017/s0140525x0999152x. PMID 20550733. https://www2.psych.ubc.ca/~henrich/pdfs/WeirdPeople.pdf. 
  10. Hoffman, M., & Morgan, J. (2011). Who’s Naughty? Who’s Nice? Social Preferences in Online Industries. UC Berkeley Working Paper.
  11. Anderhub, V.; Müller, R.; Schmidt, C. (2001). "Design and evaluation of an economic experiment via the Internet". Journal of Economic Behavior & Organization 46 (2): 227–247. doi:10.1016/s0167-2681(01)00195-0. http://collections.unu.edu/view/UNU:1091. 
  12. Kahneman, D. (2003). "Maps of bounded rationality: Psychology for behavioral economics". The American Economic Review 93 (5): 1449–1475. doi:10.1257/000282803322655392. 
  13. Shavit, T.; Sonsino, D.; Benzion, U. (2001). "A comparative study of lotteries-evaluation in class and on the Web". Journal of Economic Psychology 22 (4): 483–491. doi:10.1016/S0167-4870(01)00048-4. 
  14. Eckel, C. C.; Wilson, R. K. (2006). "Internet cautions: Experimental games with Internet partners.". Experimental Economics 9: 53–66. doi:10.1007/s10683-006-4307-4. 
  15. Reips, U.-D. (1996, October). Experimenting on the World WideWeb. Paper presented at the 1996 Society for Computers in Psychology conference, Chicago.
  16. "Virtual labs: Is there wisdom in the crowd?". New Scientist. 16 March 2007. https://www.newscientist.com/blog/shortsharpscience/2007/03/virtual-labs-is-there-wisdom-in-crowd.html. 
  17. "A Web of research". http://www.apa.org/monitor/apr00/research.html. 
  18. Schoeffler et al. (2013). "An Experiment About Estimating the Number of Instruments in Polyphonic Music: A Comparison Between Internet and Laboratory Results"
  19. Reinecke, Katharina; Gajos, Krzysztof Z. (2015-01-01). "LabintheWild". Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. CSCW '15. New York, NY, USA: ACM. pp. 1364–1378. doi:10.1145/2675133.2675246. ISBN 9781450329224. 
  20. "Home". http://wexlist.net/. 
  21. "WEXTOR Webserver". http://wextor.eu. 
  22. Levett; Kovera (2008). "The Effectiveness of Opposing Expert Witnesses for Educating Jurors about Unreliable Expert Evidence". Law and Human Behavior 32 (4): 363–74. doi:10.1007/s10979-007-9113-9. PMID 17940854. 
  23. Virginia vs. McNamara
  24. "Internet sound experiments | psychoacoustic tests University of Salford". http://www.sound101.org/. 
  25. Restivo, Michael; Rijt, Arnout van de (2012-03-29). "Experimental Study of Informal Rewards in Peer Production". PLOS ONE 7 (3): e34358. doi:10.1371/journal.pone.0034358. ISSN 1932-6203. PMID 22479610. Bibcode2012PLoSO...734358R. 
  26. Zhu, Haiyi; Zhang, Amy; He, Jiping; Kraut, Robert E.; Kittur, Aniket (2013-01-01). "Effects of peer feedback on contribution". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '13. New York, NY, USA: ACM. pp. 2253–2262. doi:10.1145/2470654.2481311. ISBN 9781450318990. 
  27. Rijt, Arnout van de; Kang, Soong Moon; Restivo, Michael; Patil, Akshay (2014-05-13). "Field experiments of success-breeds-success dynamics" (in en). Proceedings of the National Academy of Sciences 111 (19): 6934–6939. doi:10.1073/pnas.1316836111. ISSN 0027-8424. PMID 24778230. Bibcode2014PNAS..111.6934V. 
  28. Cosley, Dan; Frankowski, Dan; Terveen, Loren; Riedl, John (2007-01-01). "SuggestBot". Proceedings of the 12th international conference on Intelligent user interfaces. IUI '07. New York, NY, USA: ACM. pp. 32–41. doi:10.1145/1216295.1216309. ISBN 978-1595934819. 
