Friday, 29 November 2013

Preparation for theme 4

I picked the paper "Historicizing New Media: A Content Analysis of Twitter" by Humphreys, Gill, Krishnamurthy, and Newbury from the Journal of Communication, Volume 63, Issue 3 (June 2013). It can be found at: http://onlinelibrary.wiley.com.focus.lib.kth.se/doi/10.1111/jcom.12030/full

In the paper, the authors apply a content analysis scheme to a relatively large sample of Twitter messages (tweets) in order to analyze and classify 'microblogging' content on Twitter in a historical context, comparing it with 18th- and 19th-century diaries.

The researchers repeatedly gathered public tweets over a three-week interval in early 2008. In total, they looked at over 100 000 tweets, narrowed them down to English-language tweets, and then randomly sampled just over 2 000 tweets for the content analysis.
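As a rough illustration, the filter-then-sample step could look something like the sketch below. This is not the authors' actual pipeline; the `is_english` check is a deliberately naive stand-in for whatever language detection they used, and the sample size and seed are arbitrary.

```python
import random

def is_english(text: str) -> bool:
    # Naive placeholder heuristic: treat pure-ASCII text as English.
    # A real study would use proper language identification.
    return text.isascii()

def sample_for_coding(tweets, sample_size, seed=42):
    # Keep only (heuristically) English tweets, then draw a random
    # sample of them for manual content analysis.
    english = [t for t in tweets if is_english(t)]
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(english, min(sample_size, len(english)))

tweets = ["Good morning!", "¡Hola a todos!", "Off to work", "Lunch time"]
print(sample_for_coding(tweets, 2))
```

With a corpus of 100 000+ tweets, `sample_size` would be set to around 2 000, as in the paper.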

Since the authors want to look at microblogging (on Twitter) from a historical perspective, it is important that they examine a large enough sample of tweets to draw statistically meaningful conclusions from their data. In other words, deploying an automatic analysis scheme to cover as many sources as possible is highly important, and it benefitted the paper in this regard. Although perhaps possible in this specific case, this type of study would have taken an enormous, maybe unfeasible, amount of time to conduct without the aid of automatic analysis.

At the same time, whenever computers analyze (human) content there is a chance of misinterpretation or computational error, which either yields false data or discards data that cannot be classified, thus conforming the results to the model underlying the content analysis algorithm. The authors acknowledge this problem: they mention that they used only 8 content topic categories, which excluded tweets that did not fit within this range. Another potential limitation is that the sample data dates all the way back to 2008, which is a long time considering the developments in mobile and tablet technology since then.
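The limitation of a fixed category scheme can be shown with a toy classifier. This is not the authors' coding scheme; the categories and keywords below are invented purely to show how content outside the predefined range ends up in an "unclassified" bucket and effectively drops out of the analysis.

```python
# Invented categories and keywords, for illustration only.
CATEGORY_KEYWORDS = {
    "food":    {"lunch", "coffee", "dinner"},
    "work":    {"meeting", "office", "deadline"},
    "weather": {"rain", "sunny", "snow"},
}

def classify(tweet: str) -> str:
    # Assign the first category whose keyword set overlaps the tweet's words.
    words = set(tweet.lower().split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        if words & keywords:
            return category
    # Anything the scheme doesn't recognize is lost to the analysis.
    return "unclassified"

print(classify("grabbing coffee before the deadline"))
print(classify("pondering the meaning of life"))
```

However many categories one defines, the last line's fate is the same: content the model was not built for simply conforms to (or falls out of) the model.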

Also, the report doesn't really say much about why the observed trends exist. While the authors discuss some details, such as how interactivity may or may not be greater today, how information seeking is (naturally) a new category, and how twitterers can potentially reach a much larger audience, the paper doesn't fully explain the indicated trends. Under Gregor's theory classification, this paper would therefore more likely fall under "Analysis" or "Prediction", rather than "Explanation and prediction".

---

The paper "Physical activity, stress, and self-reported upper respiratory tract infection" by Fondell, E., Lagerros, Y. T., Sundberg, C. J., Lekander, M., Bälter, O., Rothman, K., & Bälter, K. (2010) examined, in an extensive study of 1509 Swedish men and women, how stress and exercise affect reported cases of upper respiratory tract infection (URTI).

I say reported because the study doesn't seem to take the psychological side into account beyond the stress factor; it only considers the number of reported cases and the content of each report. The study was conducted over a four-month period in which an email questionnaire was answered every three weeks.

The researchers came to the conclusion that high physical activity was linked to a lower risk of contracting URTI - but I see a potential problem in that a reported case is directly translated into an actual case of URTI without a doctor's confirmation. A questionnaire with predefined answers may also affect the result, in that symptoms or data outside the box could be missed or misinterpreted.

---

Which are the benefits and limitations of using quantitative methods?

Quantitative methods have the benefit of (potentially) yielding statistical significance. They can employ automatic analysis schemes to greater and more accurate effect, since the target data should have limited variance. This means that huge amounts of data can be processed, especially in our digital age. Limitations, however, include misinterpretation and erroneous analysis algorithms, as well as sometimes incorrectly assuming that the target data has low variance, thus missing answers that fall outside the box of predefined parameters or questions. Lastly, the truthfulness or authenticity of answers may be harder to confirm. For instance, online anonymity can be both a boon (in getting more answers) and a fault (in getting false answers).

Which are the benefits and limitations of using qualitative methods?

Qualitative methods have the benefit of capturing answers with a range of variation that e.g. a predefined questionnaire might not even offer as an option. The truthfulness of answers also comes down to a more personal level, which mostly points towards higher accuracy in the actual answers given. At the same time, answers can of course still be faked, even when the researcher guarantees anonymity. Also, since the sample size is normally very small (compared to quantitative methods, at least), it is often difficult to draw generalized conclusions.

