torsdag 19 december 2013

Comments throughout the course


Hey Ingrid, I think it's interesting that you decided to pick a paper about music piracy, since some papers suggest it has a negative impact while others point to positive aspects. In this post you write that the study itself presented too little evidence to support its conclusion. What do you think would have been the ideal way to gather this evidence? I imagine that one has to look at numerous factors, both from the (quantitative) statistics of the artist's record and concert ticket sales, and from the qualitative views of artists or consumers who engage in bootlegging on a day-to-day basis.

Hey Adam & Ekaterina, thanks for your comments.
Since I consider quantitative methods to be of more use when you want statistical significance, I would argue that qualitative methods are better utilized prior to a survey, questionnaire or data gathering, in order to catch all possible answers. At the same time, Ekaterina, you're completely right that sometimes you need to explain the results you have at hand, in which case follow-up interviews (or other qualitative methods) are a very good idea.
In the end, I guess it all depends on the situation. If your research relies mainly on qualitative methods, then it might be better to start off with a questionnaire to, as you say Adam, form a better understanding of the subject.

Hi Johan. Thanks for an interesting post. Regarding your definition of a case study, I was quite intrigued by your explanation that a case study can be used to create a hypothesis. I've always considered case studies to be utilized after you've already formed a hypothesis and want to narrow it down to one or several specific cases, to investigate its validity or delve deeper into a problem. 
In the case of the second paper that you picked (closure of a state university due to traditional and social media), you write that a weak point was that the population was limited and thus hard to generalize from. Since its problem statement seemed to be confined to this one case, how would one have gathered a more general population, in your opinion? I think it is as you say earlier, that it "limits the unnecessary variation and sharpens the external validity." Stretching these results into a more general conclusion that encompasses other scenarios might take the problem too far outside its area, I reckon.

Hej Matteo!
I was referring to my research paper, which could have benefitted from interviewing people who used Twitter, not meaning that they should interview people ON Twitter. It was a bit poorly worded on my part.
However, I do think qualitative methods can be used simply to develop your idea or hypothesis further. And if a research project were to interview people on Twitter, this could definitely be utilized, even if you just gain a very small new insight. Quantitative or more comprehensive qualitative methods could then benefit from this new insight as the research develops.

Reflection of theme 6

This week I've had a couple of comments on my stance on combining qualitative and quantitative methods. My initial thought has been (since theme 4) that one should utilize qualitative methods before quantitative ones, to ensure a good survey or questionnaire that catches all possible responses.

However, I've naturally realized since then that how and when you use qualitative methods depends largely on the situation you're in. If you have great background knowledge, maybe you don't need any introductory interviews to help you start off your research. Instead you might want to understand your (quantitative) results better, for example by conducting final interviews to make sense of a large amount of data.

---

The paper about students' use of Welsh on Facebook - the first paper I picked for this theme - contained both qualitative and quantitative methods. Looking back at previous themes, I see some problems in its quite broad, exploratory problem statement. Using Gregor's 'Theory Types', I think it could be classified more as 'Analysis' than anything else, in large part because its results are hard to validate or compare with other studies. Although the results were comprehensive, the data was also more or less all self-reported by students, which is another problem in its own right.

Its qualitative method was built entirely around small focus groups, and a problem with this, as discussed, is that someone can take over the group discussion while others may be too embarrassed or shy to voice their opinion. At the same time, a focus group might discover new things together that one-on-one interviews would completely miss, for instance if one participant comes up with an idea that nobody else had thought of but everyone agrees upon, and the group is then able to expand on it together.

---

My second paper, about online participation in public service news in Europe, was a more fitting ending to this course, I think. It had a well-defined problem statement in investigating online participation, and built case studies on quantitative data. Although one might think it would have been good to also include interviews with people from each public service company, it wasn't really necessary, since the quantitative results spoke for themselves. In fact, applying interviews or other qualitative methods in this case might even have been wrong, simply because the problem statement, by looking only at the statistics gathered, in a way excluded potential wishful thinking from company directors and the like. And since the statistics were clear enough to understand on their own, the paper managed to answer its research questions. Therefore I would classify its 'Theory Type' as 'Explanation'. However, had the problem statement been broader, including people's perceptions and needs, it would have been very interesting to conduct interviews or focus group discussions to evaluate what more needs to be done, thus evolving this paper's 'Theory Type' to both 'Explanation' and 'Prediction'.

When discussing this paper with some classmates, I realized that my perceived issue with it being skewed, due to only taking data from 'uneventful' days, was perhaps not a big problem, since the study included five case studies in total, and so "they were in that case all skewed in the same way".

Reaching theoretical closure is something I would consider almost impossible, at least if your research question isn't extremely narrow. I've learnt that it's also interesting (in a research paper) to reach the conclusion that more research is needed in some aspects X and Y, or that results were difficult to analyze for some reason Z. This still brings the research community forward in a way.

fredag 13 december 2013

Preparation for theme 6

I read the research paper "Young Bilinguals' Language Behaviour in Social Networking Sites: The Use of Welsh on Facebook", published in the April 2013 issue of the Journal of Computer-Mediated Communication (impact factor 1.778).

The authors Cunliffe, Morris & Prys investigated the use of Welsh as a minority language in the social networking site Facebook.

The research draws on a selection from a one-year-long study conducted in 2010 among four schools in northern and southern Wales. The study itself contained both quantitative and qualitative methods, starting off with an extensive questionnaire and then narrowing down to small focus group discussions among the students of these four schools.

This is in contrast to what some peers on this course and I have previously discussed: that qualitative methods would be good for setting up a quantitative study, whose results would then be used in a verifying way. That way you would be able to capture a wide range of questions for the questionnaire, minimizing missed opportunities or results.

And so I think one limitation lies in the order in which the methods were used in the study referenced in the paper, because, as the authors say themselves, the answers from the students were self-reported. In connection to this, the authors argue that the larger sample size of the questionnaire made it possible to verify the answers of the focus groups.

However, had they created the questionnaire after actually talking face-to-face with the students, I think they would have been in a much stronger position to verify the focus group answers, simply because they would by then have known which answers conflicted among the students.

I did learn, however, how you could construct a study of a subject that, ideally (as the authors put it), would have been conducted using very private sources (private communications), yet keep it fully anonymous.

---

Select a media technology research paper that is using the case study research method. The paper should have been published in a high quality journal, with an “impact factor” of 1.0 or above. Your tasks are the following:

1. Briefly explain to a first year university student what a case study is.

One can compare a case study with a focus group that you interview or hold discussions with to find nuances and perspectives on a certain research problem that you have presented to them. However, the 'focus group' in this comparison can be an entire organization, a single person, or something else. A case study is also typically followed over a longer period of time than a typical focus group session. Eisenhardt (1989) explains how a case study utilizes data from different research methods, be they qualitative or quantitative. She writes that case studies can help provide description, test theory or generate theory.

2. Use the "Process of Building Theory from Case Study Research" (Eisenhardt, summarized in Table 1) to analyze the strengths and weaknesses of your selected paper.

I read the article "Public Service Broadcasting's Participation in the Reconfiguration of Online News Content" from the April 2013 issue of the Journal of Computer-Mediated Communication (impact factor 1.778), by Calvet, Montoya & García.

The authors base their paper on research conducted through an empirical study of five case studies (of public online news sites) in the European market.

I think the authors did a good job in defining a problem statement in terms of online participation possibilities, which could focus their research attention to this problem.

The authors picked five of the biggest European public service websites and compared them in terms of participatory options. At an initial glance one may think it would have been better to limit or randomize the sites, but I think selecting the largest countries and websites (cases) favors the statistical significance of the results.

Two samples were taken, one covering a week's worth of news in early spring 2010 and the other two days in May. These dates were carefully picked to avoid any major planned events. In my eyes, this may have skewed the results of the case study somewhat, since it's not too far-fetched to think that a news station may offer more interactivity or participatory options around a big news item.

Literature on the subject confirmed the results mentioned, such as the trend of user comments taking over the role previously held by forums and the like. The quantitative data obtained helped answer the two research questions, but theoretical closure is a bit far off, since the study could be complemented with more countries and public service news websites.

torsdag 12 december 2013

Reflection of theme 5

The lectures this week contained talks about the importance of the problem area and its implications for a research paper. In order to write a good paper, one must fully understand what one wants to achieve.

Reflecting back on the 'most academic' work I've written so far - my bachelor thesis - I realize now that redefining and reworking the actual problem area ("problemformulering") wasn't just a waste of time. Also, Haibo Li's notion of a more 'entrepreneurial' approach to arguing for and explaining results or facts in a research paper especially caught my attention, as I'm interested in entrepreneurship and start-ups in general.

Looking back on my entrepreneurship course, I saw clear connections between how you are supposed to present a business idea to a potential investor and how Li explained the fine balance between just stating cold facts and selling a product. I will definitely keep this in mind when writing my master's thesis next year.

Overall, the theme this week was rather interesting, because I've previously often approached prototype work as something more suitable for actual work projects than for research projects, where the solution is often not considered a product that will be used. Research results of prototype work, or rather evaluations of prototype work, are something I feel is a bit difficult to reference or 'reuse' in future projects, because they are so often closely tied to the problem area or implementation.

As I was thinking about prototypes, I came to think about how the scrum software development method is gaining a following in software companies because of its flexible and easily manageable development cycle, as well as its good results. Prototype work here is often minimal, as the goal is to produce a working subsystem with each implementation cycle ("sprint"). This production code is then evaluated, with feedback and extra features included in the next sprint. In other words, you have a "live" product that constantly evolves, rather than a number of prototypes before the actual product is developed.

Maybe research could be done similarly to the scrum methodology, in how you separate and limit the big main problem into smaller subsets of problems that you target each sprint. This way a research paper would 'evolve' and be forced to consist of small 'sub-solutions', which could perhaps result in more generalized and less 'hardwired' solutions to the problem area, and as such could be referenced and 'reused' to a greater extent. This methodology might be a hindrance, though, in situations or projects where it's harder to divide and conquer a difficult academic question.

fredag 6 december 2013

Preparation for theme 5


Comics, Robots, Fashion and Programming: outlining the concept of actDresses 
Fernaeus, Y. & Jacobsson, M. delve into a very unique subject: physical programming. They take an initial look at existing technologies (e.g. LEGO Mindstorms) and point to the fact that the user is forced to move to a setting outside the previous use of the artifact in question, e.g. programming at a PC when actually constructing a physical Mindstorms robot.

The authors point to the fact that clothing today acts as a communicative medium, where people's different clothing tells us if they are e.g. going outside, sitting indoors, exercising, etc. They draw similarities between programming languages and clothing as a physical language.

"actDresses" is presented as a framework for physical programming, using clothes as signifiers rather than actual code in a PC environment. To illustrate the concept, different cases are presented. However, the proof-of-concept prototypes are not evaluated through user studies, meaning this research paper acts more as a large hypothesis than as new theory.

Question: How would you evaluate or design a potential user study of an "actDresses" prototype? What would be the main questions you wanted answered?

---

1. How can media technologies be evaluated?
Since media technology is a very broad subject, I think it could be evaluated with a wide array of tools, although this of course depends somewhat on the technology in question. Overall, however, media technology is used by people and so benefits greatly from both qualitative and quantitative methods, concretized through interviews, questionnaires etc. about how the technology is used and/or perceived by the person.

A very good way to evaluate technology overall is of course to let users test it, and for that you need prototypes. A simple prototype could let people imagine how they would use it and give a first impression. A proof-of-concept prototype could let users actually test the idea practically, rather than imaginatively.

2. What role will prototypes play in research?
As an idea - hypothesis - or theory is formed, it is at first very abstract and often difficult to grasp. A prototype is a way of visualizing and 'humanizing' an idea, so that more people can understand it. Evaluating and developing this prototype might change the theory behind it, or reinforce it. 

3. Why could it be necessary to develop a proof of concept prototype?
Since a prototype is more a visualization of a concept than a full implementation, people (researchers) are often left wondering whether the wonderful visualization could also work in practice. Developing and implementing a proof of concept is therefore a way to further evaluate - prove - that the initial theory works. In contrast to a simple prototype that only visualizes an idea, a proof of concept is all about credibility rather than understandability.

4. What are characteristics and limitations of prototypes?
A prototype is often characterized by being a very simple draft and not necessarily very aesthetically pleasing. It's a first hands-on visualization of an abstract idea, and so needs a few evaluations and tests before it can be considered for more advanced proof-of-concept tests or even commercial use. Since prototypes are not fully developed, they can therefore be confusing to the user. In some cases (such as in the research paper "Turn Your Mobile Into the Ball: Rendering Live Football Game Using Vibration") users would benefit from training, which in turn potentially skewed the questionnaire results, as not everyone may have access to such training in a more general setting.


torsdag 5 december 2013

Reflection of theme 4

What I have learnt this week is that there certainly are many problems with both qualitative and quantitative methods. Since the paper I picked last Friday used automated information gathering, I didn't delve into the problem of questionnaire bias in a method I considered quantitative (although I did raise other issues with the paper). My vision of quantitative methods being devoid of (what I had previously considered exclusively) qualitative problems regarding feelings and opinions was shattered.

For instance, by looking at the example of questionnaire answers, I realize one must consider the qualitative aspects of how people reply to these things. Although I have both answered and created a few questionnaires, I haven't really thought much about how certain personalities or individuals get overrepresented in studies because they are more willing to take the time to answer a questionnaire. And so bias through non-participation, or through overrepresentation of certain groups of people, is certainly an issue. But the same can of course be said about qualitative methods, in how some people are more willing to e.g. attend an interview.

This week we also discussed how theory comes first and a quantitative study comes last, to 'prove' your theory. For instance, one might have an idea of how something works, but want confirmation by sending out a questionnaire. But how does one design this questionnaire so that it doesn't limit the answers or affect the outcome by its sheer design? In Wednesday's lecture it was mentioned that questionnaires definitely should be tested before they are sent out. So how do you test a survey before sending it out? Send it out twice, or to another huge group of people? The answer is by discussing with other people, interviewing them, etc. in a smaller, more manageable context - i.e. in a qualitative way.

In my paper, the 'theory' consisted of a hypothesis, or thought, that 'microblogging' is not an entirely new idea in today's society, and the authors in a way proved it by classifying thousands of tweets into historical categories. But it also raised new questions and brought to light new categories or classifications that the authors hadn't thought of. Because of this, the paper itself was limited in how the predetermined data categories shaped the result, and didn't pick up on data outside this state space.

This is why one could view qualitative methods in another light: to gain new insights, develop theory, etc. Especially if the theory is more or less just a hypothesis based on research on 18th- and 19th-century diaries (as in my paper) - it could very well be developed by interviewing and holding discussions with people using Twitter today, for instance. By looking at the two methods in this way, one does not have to discuss quantitative vs. qualitative methods as if one excluded the other, but rather how they can best be utilized together.

fredag 29 november 2013

Preparation for theme 4

I picked the paper "Historicizing New Media: A Content Analysis of Twitter" by Humphreys, Gill, Krishnamurthy & Newbury from the Journal of Communication, Volume 63, Issue 3 (June 2013). It can be found at: http://onlinelibrary.wiley.com.focus.lib.kth.se/doi/10.1111/jcom.12030/full

In the paper, the authors apply a content analysis scheme to a relatively large sample of Twitter messages (tweets) in order to analyze and classify 'microblogging' content on Twitter in a historical context, comparing it with 18th- and 19th-century diaries.

The study repeatedly gathered public tweets over a three-week interval in early 2008. In total, the authors looked at over 100 000 tweets, narrowed them down to English-language tweets, and then randomly sampled over 2 000 tweets for the content analysis.
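As a concrete illustration, the filter-then-sample pipeline described above could look something like this minimal Python sketch. The tweets, the language check and the sample size are all invented stand-ins; the authors' actual tooling is not described here.

```python
import random

# Toy corpus standing in for the gathered tweets (invented examples).
all_tweets = ["Hello world", "Hej världen åäö", "Good morning"] * 1000

def is_english(tweet):
    # Crude stand-in heuristic: a real study would use a proper
    # language detector rather than an ASCII-only check.
    return all(ord(ch) < 128 for ch in tweet)

# Narrow the corpus down to (approximately) English tweets ...
english = [t for t in all_tweets if is_english(t)]

# ... then draw a random sample for the content analysis.
random.seed(42)                      # seeded only for reproducibility
sample = random.sample(english, 50)  # the paper sampled over 2 000
```

The point of the random draw is that every remaining tweet has an equal chance of being selected, which is what lets the authors argue their classified subset is representative of the larger corpus.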

Since the authors want to look at microblogging (on Twitter) from a historical perspective, it's important that they look at a statistically significant number of tweets in order to draw any conclusions from their data. In other words, deploying an automatic analysis scheme to cover as many sources as possible is of high importance and benefitted the paper in this regard. Although perhaps possible in this specific case, this type of study would have taken an enormous, maybe unfeasible, amount of time to conduct without the aid of automatic analysis.

At the same time, every time computers analyze (human) content there is a chance of misinterpretation or computational errors, which either produces false data or removes data that cannot be classified, thus conforming the results to the model of the content analysis algorithm. The authors acknowledge this as a problem, mentioning how they used only 8 content topic categories, which excluded tweets not recognizable within this range. Another potential limitation is the fact that the sample data dates all the way back to 2008, which is a long time considering the developments in mobile and tablet technology since then.
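To make that limitation concrete, here is a hypothetical sketch of a fixed-category classifier silently dropping content it cannot recognize. The categories and keywords below are invented for illustration; the paper's actual eight-category coding scheme was different.

```python
# Invented example categories - NOT the paper's actual coding scheme.
CATEGORIES = {
    "food": ["lunch", "dinner", "coffee"],
    "work": ["meeting", "deadline", "office"],
    "travel": ["flight", "train", "airport"],
}

def classify(tweet):
    """Return the first matching category, or None if unclassifiable."""
    text = tweet.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return None  # falls outside the predefined state space

tweets = [
    "Just landed at the airport!",
    "Coffee first, then the deadline...",
    "Feeling nostalgic about old diaries",  # matches no category
]

# Tweets that match nothing are dropped, so the results are silently
# conformed to whatever the predefined categories can express.
dropped = [t for t in tweets if classify(t) is None]
```

The third tweet simply vanishes from the results, which is exactly the kind of conforming-to-the-model effect the authors acknowledge with their eight categories.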

Also, the report doesn't really say much about why. While the authors discuss some details, such as how interactivity may or may not be greater today, how information seeking (naturally) is a new category, and how twitterers can potentially reach a much larger audience, they don't really explain the indicated trends fully. Therefore, using Gregor's theory classification, this paper would more likely fall under "Analysis" or "Prediction" than "Explanation and prediction".

---

The paper "Physical activity, stress, and self-reported upper respiratory tract infection" by Fondell, E., Lagerros, Y. T., Sundberg, C. J., Lekander, M., Bälter, O., Rothman, K., & Bälter, K. (2010) examined, through an extensive study of 1509 Swedish men and women, how stress and exercise affect reported cases of upper respiratory tract infection (URTI).

I say reported because the study doesn't seem to take the psychological side into account beyond the stress factor, just the number of reported cases and the content of each report. The study was conducted over a four-month period, with an email questionnaire answered every three weeks.

The researchers came to the conclusion that high physical activity was linked to a lower risk of contracting URTI - but I see a potential problem in that a reported case is directly translated into an actual case of URTI without a doctor's confirmation. A questionnaire with predefined answers may also affect the result, in that symptoms or data outside the box could be missed or misinterpreted.

---

Which are the benefits and limitations of using quantitative methods?

Quantitative methods have the benefit of (potentially) giving statistical significance. They may employ automatic analysis schemes to greater, more accurate effect, seeing as the wanted data should have limited variance. This means that huge amounts of data can be processed, especially in our digital age. Limitations, however, include misinterpretation and erroneous analysis algorithms, as well as sometimes incorrectly assuming that the target data is of low variance, thus missing answers outside the box of predefined parameters or questions. Lastly, the truthfulness or authenticity of answers may be harder to confirm. For instance, online anonymity can be both a boon (in getting more answers) and a fault (in getting false answers).

Which are the benefits and limitations of using qualitative methods?

Qualitative methods have the benefit of capturing answers with a range of variation that e.g. a predefined questionnaire might not even have as options. The truthfulness of answers also comes down to a more personal level, mostly pointing towards higher accuracy in the actual answers given. At the same time, interviewees can of course still give false answers to a researcher promising anonymity. Also, since the sample size is normally very small (compared to quantitative methods, at least), it is often difficult to draw any generalized conclusions.