Thursday 19 December 2013

Comments throughout the course


Hey Ingrid, I think it's interesting that you decided to pick a paper about music piracy, since some papers suggest it has a negative impact while others point to positive aspects. In this post you write that the study itself presented too little evidence to support its conclusion. What do you think would have been ideal for gathering this evidence? I imagine that one has to look at numerous factors, both the (quantitative) statistics of the artist's record sales and concert ticket sales, and qualitative views from artists or consumers who engage in bootlegging on a day-to-day basis.

Hey Adam & Ekaterina, thanks for your comments.
Since I consider quantitative methods to be of more use when you want statistical significance, I would argue that qualitative methods are better utilized prior to a survey, questionnaire or data gathering, in order to catch all possible answers. At the same time, Ekaterina, you're completely right that sometimes you need to explain the results you have at hand, in which case follow-up interviews (or another qualitative method) are a very good idea.
In the end, I guess it all depends on the situation. If your research relies more on qualitative methods, then it might be better to start off with a questionnaire to, as you say Adam, form a better understanding of the subject.

Hi Johan. Thanks for an interesting post. Regarding your definition of a case study, I was quite intrigued by your explanation that a case study can be used to create a hypothesis. I've always considered case studies to be utilized after you've already formed a hypothesis and want to narrow it down to one or several specific cases, to investigate its validity or delve deeper into a problem. 
In the case of the second paper that you picked (the closure of a state university covered in traditional and social media), you write that a weak point was that the population was limited and thus hard to generalize from. Since its problem statement seemed to be confined to this one case, how would one have gathered a more general population, in your opinion? I think it is as you say earlier, that it "limits the unnecessary variation and sharpens the external validity." Stretching these results into a more general conclusion that encompasses other scenarios might be taking the problem too far outside its area, I reckon.

Hi Matteo!
I was referencing my research paper that could have benefitted from interviewing people who used Twitter, not meaning that they should interview people ON Twitter. It was a bit poorly worded by me. 
However, I do think qualitative methods can be used simply for developing your idea or hypothesis further. And if a research project were to interview people on Twitter, this could definitely be utilized, even if you only gain a very small new insight. Quantitative or more comprehensive qualitative methods could then benefit from this new insight as the research paper develops.

Reflection of theme 6

This week I've had a couple of comments on my stance on combining qualitative and quantitative methods. My initial thought has been (since theme 4) that one should utilize qualitative methods before quantitative ones, to ensure a good survey or questionnaire that would catch all possible responses.

However, I've since realized that how and when you use qualitative methods depends largely on the type of situation you're in. If you have great background knowledge, maybe you don't need any introductory interviews to help you start off your research. Instead you might want to understand your (quantitative) results better, for example by conducting final interviews to make sense of a large amount of data.

---

The paper about students' use of Welsh on Facebook - the first paper I had picked for this theme - contained both qualitative and quantitative methods. Looking back at previous themes, I now see some problems in its quite broad, exploratory problem statement. Using Gregor's theory types, I think it could be classified more as 'Analysis' than anything else, largely because its results are hard to validate or compare with other studies. Although the results were comprehensive, the data was also more or less all self-reported by students, which is a problem in its own right.

Its qualitative method was built around small focus groups, and a problem with this, as discussed, is that someone can take over the group discussion while others may be too embarrassed or shy to voice their opinion. At the same time, a focus group might discover things together that one-on-one interviews would completely miss, for instance if one participant comes up with an idea that nobody else had thought of, but that everyone agrees upon, and the group is then able to expand upon it together.

---

My second paper, about online participation in public service news in Europe, was a more fitting ending to this course, I think. It had a well-defined problem statement in investigating online participation, and built its case studies on quantitative data. Although one might think it would have been good to also include some interviews with people from each public service company, it wasn't really necessary, since the quantitative results spoke for themselves. In fact, applying interviews or other qualitative methods in this case might have been wrong, simply because the problem statement, by looking only at the statistics gathered, in a way excluded potential wishful thinking from company directors and the like. And since the statistics were clear enough to understand by themselves, the paper managed to answer its research questions. I would therefore classify its 'Theory Type' as 'Explanation'. However, had the problem statement been broader, including people's perceptions and needs, it would have been very interesting to conduct interviews or focus group discussions to evaluate what more needs to be done, thus evolving this paper's 'Theory Type' to both 'Explanation' and 'Prediction'.

When discussing this paper with some classmates, I realized that my impression of it being skewed, due to only taking data from 'uneventful' days, was perhaps not a big issue, since the study included five case studies in total, and so "they were in that case all skewed in the same way".

Reaching theoretical closure is something I would consider almost impossible, at least if your research question isn't extremely narrow. I've learnt that it's also interesting (in a research paper) to reach the conclusion that more research is needed in some aspects X and Y, or that results were difficult to analyze for some reason Z. This still brings the research community forward in a way.

Friday 13 December 2013

Preparation for theme 6

I read the research paper "Young Bilinguals' Language Behaviour in Social Networking Sites: The Use of Welsh on Facebook", published in the April 2013 issue of the Journal of Computer-Mediated Communication (impact factor 1.778).

The authors Cunliffe, Morris & Prys investigated the use of Welsh as a minority language in the social networking site Facebook.

The research draws on a selection from a year-long study conducted in 2010 among four schools in northern and southern Wales. The study itself contained both quantitative and qualitative methods, starting off with an extensive questionnaire and then narrowing down to small focus group discussions among the students of these four schools.

This is in contrast to what some peers on this course and I have previously discussed: that qualitative methods would be good for setting up a quantitative study, whose results would then be used in a verifying way. That way you would be able to catch a wide range of questions for the questionnaire, minimizing missed answers or results.

And so I think one limitation lies in the order in which the methods were used in the study the paper references, because, as the authors say themselves, the answers from the students were self-reported. In connection to this, the authors argue that the larger sample size of the questionnaire made it possible to verify the answers of the focus groups.

However, had they created the questionnaire after actually talking face-to-face with the students, I think they would have been in a much stronger position to verify the focus group answers, simply because they would by then have known which answers conflicted among the students.

I did learn, however, how you can construct a study of a subject that ideally (as the authors put it) would have been conducted using very private sources (private communications), yet keep it fully anonymous.

---

Select a media technology research paper that is using the case study research method. The paper should have been published in a high quality journal, with an “impact factor” of 1.0 or above. Your tasks are the following:

1. Briefly explain to a first year university student what a case study is.

One can compare a case study with a focus group that you interview or discuss with in order to find nuances and perspectives on a certain research problem that you have presented to them. However, the 'focus group' in this comparison can be an entire organization, a single person, or something else entirely. A case study is also typically followed over a longer period of time than a typical focus group session. Eisenhardt (1989) explains how a case study utilizes data from different research methods, be they qualitative or quantitative. She writes that case studies can help provide description, test theory or generate theory.

2. Use the "Process of Building Theory from Case Study Research" (Eisenhardt, summarized in Table 1) to analyze the strengths and weaknesses of your selected paper.

I read the article "Public Service Broadcasting's Participation in the Reconfiguration of Online News Content" from the April 2013 issue of the Journal of Computer-Mediated Communication (impact factor 1.778), by Calvet, Montoya & García.

The authors base their paper on an empirical study of five cases (public online news sites) in the European market.

I think the authors did a good job of defining a problem statement in terms of online participation possibilities, which let them focus their research attention on this problem.

The authors picked five of the biggest European public service websites and compared them in terms of participatory options. At an initial glance one may think it would have been better to limit or randomize the sites, but selecting the largest countries and websites (cases) I think favors the statistical significance of the results.

Two samples were taken, one covering a week's worth of news in early spring 2010, and the other two days in May. These dates were carefully picked to avoid any major planned events. In my eyes, this may in a way have skewed the results of the case study, since it's not too much of a stretch to think that a news site may offer more interactivity or participatory options around a big news item.

Literature on the subject confirmed the results mentioned, such as the trend of user comments taking over the role previously held by forums and the like. The quantitative data obtained helped answer the two research questions, but theoretical closure is a bit far off, since the study could be complemented with more countries and public service news websites.

Thursday 12 December 2013

Reflection of theme 5

The lectures this week contained talks about the importance of the problem area and its implications for a research paper. In order to write a good paper, one must fully understand what one wants to achieve.

Reflecting back on the 'most academic' work I've written so far - my bachelor thesis - I now realize how redefining and reworking the actual problem area ("problemformulering") wasn't just a waste of time. Also, Haibo Li's notion of a more 'entrepreneurial' approach to arguing for and explaining results or facts in a research paper especially caught my attention, as I'm interested in entrepreneurship and start-ups in general.

Looking back on my entrepreneurship course, I saw clear connections between how you are supposed to present a business idea to a potential investor and how Li explained the fine balance between just stating cold facts and selling a product. I will definitely keep this in mind when writing my master thesis next year.

Overall, the theme this week was rather interesting, because I've previously often approached prototype work as something more suitable for actual work projects than for research projects, where the solution is often not considered a product that will be used. Research results of prototype work, or rather evaluations of prototype work, are something I feel is a bit difficult to reference or 'reuse' in future projects, because they are so often closely tied to the problem area or implementation.

As I was thinking about prototypes, I came to think about how the Scrum software development method is gaining a following in software companies because of its flexible and easily manageable development cycle, as well as its good results. Prototype work here is often minimal, as the goal is to produce a working subsystem with each implementation cycle ("sprint"). This production code is then evaluated, with feedback and extra features included in the next sprint. In other words, you have a "live" product that constantly evolves, rather than a number of prototypes before the actual product is developed.

Maybe research could be done similarly to the Scrum methodology, in how you separate and limit the big, main problem into smaller subsets of problems that you target each sprint. This way a research paper would 'evolve' and be forced to consist of small 'sub-solutions', which could perhaps result in more generalized and less 'hardwired' solutions to the problem area, and as such could be referenced and 'reused' to a greater extent. This methodology might be a hindrance, though, in situations or projects where it's harder to divide and conquer a difficult academic question.

Friday 6 December 2013

Preparation for theme 5


Comics, Robots, Fashion and Programming: outlining the concept of actDresses 
Fernaeus, Y. & Jacobsson, M. delve into a rather unique subject: physical programming. They take an initial look at existing technologies (e.g. LEGO Mindstorms) and point out that the user is forced to move to a setting outside the artifact's normal context of use, e.g. programming at a PC when actually constructing a physical Mindstorms robot.

The authors point out that clothing today acts as a communicative medium, where people's different clothing tells us if they are e.g. going outside, sitting indoors or exercising. They draw parallels between programming languages and clothing as a physical language.

"actDresses" is formed as a framework for physical programming by using clothes as signifiers, rather than actual code in a PC environment. To illustrate the concept, different cases are presented. However, proof of concept prototypes are not evaluated using user studies, meaning this research paper acts more as a large hypothesis rather than new theory.

Question: How would you evaluate or design a potential user study of an "actDresses" prototype? What would be the main questions you would want answered?

---

1. How can media technologies be evaluated?
Since media technology is a very broad subject, I think it could be evaluated with a wide array of tools, although of course depending somewhat on the technology in question. Overall however, media technology is used by people and so benefits greatly from both qualitative and quantitative methods concretized through interviews, questionnaires etc. about how the technology is used and/or seen by the person. 

A very good way to evaluate technology overall is of course to let users test it, and for that you need prototypes. A simple prototype could let people imagine how they would use it and give a first impression. A proof-of-concept prototype could let the users actually test the idea in practice, rather than in their imagination.

2. What role will prototypes play in research?
As an idea, hypothesis or theory is formed, it is at first very abstract and often difficult to grasp. A prototype is a way of visualizing and 'humanizing' an idea so that more people can understand it. Evaluating and developing this prototype might change the theory behind it, or reinforce it.

3. Why could it be necessary to develop a proof of concept prototype?
Since a prototype is more of a visualization of a concept than a full implementation, people (researchers) are often left wondering whether the wonderful visualization could actually work. Developing and implementing a proof of concept is therefore a way to further evaluate - prove - that the initial theory works. In contrast to merely visualizing an idea (a simple prototype), a proof of concept is all about credibility rather than understandability.

4. What are characteristics and limitations of prototypes?
A prototype is often characterized by being a very simple draft and not necessarily very aesthetically pleasing. It's a first hands-on visualization of an abstract idea, and so needs a few evaluations and tests before it can be considered for more advanced proof-of-concept tests or even commercial use. Since prototypes are not fully developed, they can therefore be confusing to the user. In some cases (such as the research paper "Turn Your Mobile Into the Ball: Rendering Live Football Game Using Vibration") users would benefit from training, which in turn potentially skewed the questionnaire results, as not everyone would have access to training in a more general setting.


Thursday 5 December 2013

Reflection of theme 4

What I have learnt this week is that there certainly are many problems with qualitative as well as quantitative methods. Since the paper I picked last Friday used automated information gathering, I didn't delve into the problem of questionnaire bias in a method I considered quantitative (although I did raise other issues with the paper). My vision of quantitative methods as being devoid of what I had previously considered exclusively qualitative problems - those regarding feelings and opinions - was shattered.

For instance, by looking at the example of questionnaire answers, I realize one must consider qualitative aspects of how people reply to these things. Although I have both answered and created a few questionnaires, I haven't really thought much about how certain personalities or individuals get overrepresented in studies because they are more willing to take the time to answer a questionnaire. And so bias through non-participation, or through overrepresentation of certain groups of people, is certainly an issue. But the same can of course be said about qualitative methods, in how some people are more willing to e.g. attend an interview.

This week we discussed how theory comes first and the quantitative study last, to 'prove' your theory. For instance, one might have an idea of how something works but want confirmation by sending out a questionnaire. But how does one come up with a questionnaire whose sheer design doesn't limit the answers or affect the outcome? In Wednesday's lecture it was mentioned that questionnaires definitely should be tested before they are sent out. So how do you test a survey before you send it out? Send it out twice, or send it out to another huge group of people? The answer is by discussing it with other people, interviewing them, etc. in a smaller, more manageable context - i.e. in a qualitative way.

In my paper, the 'theory' consisted of a hypothesis, or thought, that 'microblogging' is not an entirely new idea in today's society, and the authors in a way proved it by classifying thousands of tweets into historical categories. But the study also raised new questions and brought to light new categories or classifications that the authors hadn't thought of. Because of this, the paper itself was limited in that the predetermined data categories shaped the result, so the analysis didn't pick up on data outside this state space.

This is why one could view qualitative methods in another light: to gain new insights, develop theory, and so on. Especially if the theory is more or less just a hypothesis based on research on 18th- and 19th-century diaries (as in my paper) - it could very well be developed by interviewing and discussing with people who use Twitter today, for instance. By looking at the two methods in this way, one does not have to discuss quantitative vs. qualitative methods as if one excluded the other, but rather how they can best be utilized together.

Friday 29 November 2013

Preparation for theme 4

I picked the paper "Historicizing New Media: A Content Analysis of Twitter" by Humphreys, Gill, Krishnamurthy, Newbury et al. from the Journal of Communication, Volume 63, Issue 3 (June 2013). It can be found at: http://onlinelibrary.wiley.com.focus.lib.kth.se/doi/10.1111/jcom.12030/full

In the paper, the authors apply a content analysis scheme to a relatively large sample of Twitter messages (tweets) in order to analyze and classify 'microblogging' content on Twitter in a historical context, comparing it with 18th- and 19th-century diaries.

The authors repeatedly gathered public tweets over a three-week interval in early 2008. In total, they looked at over 100 000 tweets, narrowed them down to English-language tweets, and then randomly sampled over 2 000 tweets for the content analysis.
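To make the pipeline concrete, here is a minimal sketch of what such a filter-then-sample step could look like. This is a hypothetical illustration only: the category names and the "lump into other" rule are my own assumptions for demonstration, not the authors' actual scheme or code (the paper relied on human coding).

```python
import random
from collections import Counter

# Illustrative category set -- the paper defined its own 8 content topics.
CATEGORIES = {"me now", "opinion", "information sharing", "other"}

def sample_english_tweets(tweets, k=2000, seed=1):
    """Keep English-language tweets, then draw a random subsample
    for content analysis (mirroring the sampling described above)."""
    english = [t for t in tweets if t["lang"] == "en"]
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(english, min(k, len(english)))

def tally(coded_labels):
    """Count coded tweets per category; any label outside the predefined
    scheme is lumped into 'other', showing how a fixed category set
    constrains what the results can express."""
    return Counter(lbl if lbl in CATEGORIES else "other" for lbl in coded_labels)
```

The `tally` helper also illustrates the limitation such a design carries: anything the scheme cannot express is flattened into a catch-all bucket.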

Since the authors want to look at microblogging (on Twitter) from a historical perspective, it's important that they look at a statistically significant number of tweets in order to draw any conclusions from their data. In other words, deploying an automatic analysis scheme to get as many sources as possible is of high importance, and it benefited the paper in this regard. Although perhaps possible in this specific case, this type of study would have taken an enormous, maybe infeasible, amount of time to conduct without the aid of automatic analysis.

At the same time, every time computers analyze (human) content there is a chance of misinterpretation or computational error, which either yields false data or discards data that cannot be classified, thus conforming the results to the model underlying the content analysis algorithm. The authors acknowledge this as a problem, mentioning that they only used 8 content topic categories, which excluded tweets that did not fit within this range. Another potential limitation is the fact that the sample data dates back to 2008, which is a long time considering the developments in mobile and tablet technology since then.

Also, the report doesn't really say much about why. While the authors discuss some details, such as how interactivity may or may not be greater today, how information seeking (naturally) is a new category, and how Twitter users can potentially reach a much larger audience, the paper doesn't fully explain the indicated trends. Therefore, under Gregor's theory classification this paper would more likely fall under "Analysis" or "Prediction" than "Explanation and prediction".

---

The paper Physical activity, stress, and self-reported upper respiratory tract infection by Fondell, E., Lagerros, Y. T., Sundberg, C. J., Lekander, M., Bälter, O., Rothman, K., & Bälter, K. (2010) examined, through an extensive study with 1509 Swedish men and women, how stress and exercise affect reported cases of upper respiratory tract infection (URTI).

I say reported because the study doesn't seem to take into account the psychological side beyond the stress factor, just the number of reported cases and the content of each report. The study was conducted over a four-month period in which an email questionnaire was answered every three weeks.

The researchers came to the conclusion that high physical activity was linked to a lower risk of contracting URTI - but I see a potential problem in that a reported case is directly translated into an actual case of URTI without a doctor's confirmation. Also, a questionnaire with predefined answers may affect the result in that symptoms or data outside the box could be missed or misinterpreted.

---

Which are the benefits and limitations of using quantitative methods?

Quantitative methods have the benefit of (potentially) giving statistical significance. They may employ automatic analysis schemes to greater, more accurate effect, seeing as the desired data should have limited variance. This means that huge amounts of data can be processed, especially in our digital age. Limitations, however, include misinterpretation and erroneous analysis algorithms, as well as sometimes incorrectly assuming that the target data has low variance, thus missing answers outside the box of predefined parameters or questions. Lastly, the truthfulness or authenticity of answers may be harder to confirm. For instance, online anonymity can be both a boon (in getting more answers) and a fault (in getting false answers).

Which are the benefits and limitations of using qualitative methods?

Qualitative methods have the benefit of capturing answers with a range of variation that e.g. a predefined questionnaire might not even offer as options. The truthfulness of answers also comes down to a more personal level, mostly pointing towards higher accuracy in the actual answers given. At the same time, interview answers can of course still be faked, even when the researcher promises anonymity. Also, since the sample size is normally very small (compared to quantitative methods, at least), it is often difficult to draw generalized conclusions.


Thursday 28 November 2013

Reflection of theme 3

This week I learned more about how other people define theory and how a lack of proper theory can make or break a paper. While discussing papers in seminar one, we decided to focus on one that highlighted problems, or rather a lack of problem definitions. By doing this, I think the people in our seminar group (including myself) learned more about how important it is to properly narrow down the scientific issue and define the problem in order to produce a paper of good scientific value.

At the same time, Stefan Hrastinski also discussed with us how a paper doesn't always have to come up with a perfect solution or follow the initial hypothesis or prediction. In fact, we discussed how the failure itself is also valuable knowledge, many times more interesting than the success story that many researchers strive for.

Thinking back on my bachelor thesis, I at least want to believe I had this in the back of my mind. And so I took the opportunity this week to revisit my old thesis and read it with a new mindset. I came to the conclusion that while many students look for a positive outcome of their experiment or investigation, we thankfully didn't fully fall into the same category. While we could definitely have benefitted from narrowing down our problem area, we acknowledged that (some of) the data we had didn't automatically lead to an easy explanation. In fact, we proposed new outlines for future research in our area based on our experiences and conclusions. That said, I don't quite believe we would have achieved a very high impact rating...

In keeping with this mindset, I also read a lot of blog posts regarding theory this week. Many seem to agree that theory is something very wide and hard to grasp, while at the same time acknowledging how theory is formed by different, all crucial, parts. These parts often concern the type of data that strengthens the hypotheses or predictions, but also, more specifically, how data gathering can differ and affect the perception of an entire paper. For instance, when discussing this among my peers, we established how studies with low participation counts automatically seem to be viewed with less credibility, but actually looking at how a study was conducted may change that perception once again.

Friday 22 November 2013

Preparation for theme 3

Select a research journal that you believe is relevant for media technology research. The journal should be of high quality, with an “impact factor” of 1.0 or above. Write a short description of the journal and what kind of research it publishes.

I chose the Journal of Communication (with an impact factor of 2.011), which is a journal covering communication theory in both social and cultural fields as well as computer-mediated communication. In other words, it's a journal with a wide coverage of communication theories.


Select a research paper that is of high quality and relevant for media technology research. The paper should have been published in a high quality journal, with an “impact factor” of 1.0 or above. Write a short summary of the paper and provide a critical examination of, for example, its aims, theoretical framing, research method, findings, analysis or implications. You can use some of the questions in Performing research article critiques as support for your critical examination.

I picked the article "To Your Health: Self-Regulation of Health Behavior Through Selective Exposure to Online Health Messages" (October 2013) by Knobloch-Westerwick, Johnson and Westerwick from the journal above, because I think it touches on an interesting subject: the choice to listen, or not, to health advice when it may conflict with your behaviors or beliefs.

It investigates the problem of reaching crucial target groups in online health campaigns. The authors conduct a study with 419 participants in which source credibility affected three defined motivations: bolstering, self-motivating, and self-defending. The method involved users browsing topics for a set amount of time, after which behaviors and perceptions were analyzed. The problem I find with this method is its very artificial sampling: having the users browse each topic for a set duration in a lab environment is far from a natural setting. Another point of critique lies in its very limited age and behavior diversity, all participants being young students with heavy Internet usage, which may not reflect the general populace.

The results support the notion that self-bolstering was reinforced by selective message exposure when the user was already familiar with messages that confirmed their own health behaviors. Self-motivation also came into effect for people falling short of promoted health behaviors, among other findings.


Briefly explain to a first year student what theory is, and what theory is not.

Theory is made up of several components. One may mistake citations, diagrams or data for theory in themselves, but theory is in fact built from many of these components together. Theory answers (or tries to answer) the question "Why?" by analyzing, explaining, predicting or designing. A hypothesis could be looked upon as the 'pre-form' of theory, in that it gives a prediction of results but doesn't really answer the question on a deeper level of logic. And so theory may be an explanation of results that is possible to test.


Describe the major theory or theories that are used in your selected paper. Which theory type (see Table 2 in Gregor) can the theory or theories be characterized as?

According to Gregor, there are five different types of theory. The one I think fits the research paper above best is Prediction, in how the authors set up a testable environment with some predictions beforehand, but fail to provide more diverse study participants and to explain all results thoroughly. However, the paper does come close to Explanation and prediction (EP), due to some specific explanations in its discussion.


Which are the benefits and limitations of using the selected theory or theories?

The good thing is that the theory is possible to test further, to verify and build upon these foundations. By illuminating specific problems and giving some insight into specific cases, one can also tailor a design proposal built on these predictions.

torsdag 21 november 2013

Reflection of theme 2

First of all, I'm sorry to say I missed the seminar due to moving to a new apartment. However, to delve deeper into the central themes presented by Adorno and Horkheimer in Dialectic of Enlightenment as well as gain perspective on one of the authors, I decided to read more about Adorno himself as well as read an article by him titled "Culture Industry Reconsidered" from 1963.

After reading up on Adorno, I realized that the culture industry is a pretty broad subject that can be defined in various ways. In a historical context, Adorno himself grew up in Nazi Germany, where he experienced first-hand the results of mass deception in the form of state propaganda from radio speeches and cinemas, for example. The culture industry in that time and place was severely limited and tainted by state ideals, which conformed people into a single-minded, brainwashed mass.

And so the link from the notion of mass deception in Nazi Germany to the broader perspective of an entire culture industry seems natural. The industry in culture industry is not an industrial one, but rather a sociological one built for the mind. Perhaps it's a bit harsh, but there are certainly parallels to state propaganda in how mainstream media produces money-making entities (e.g. singers) in what Adorno termed a "star system", aimed at a single-minded, pacified mass of people who are taught what to like and what to wear.






fredag 15 november 2013

Preparation for theme 2

What is Enlightenment?
Enlightenment has strong connotations of science prevailing over religion: a way of seeking knowledge scientifically through experiments and observation - empirical evidence - rather than through myth and religion. In Adorno & Horkheimer's text, "enlightenment" is explained as reducing the fear among people while reinforcing their knowledge of and control over the world.

What is the meaning and function of “myth” in Adorno and Horkheimer’s argument?
As science and "facts" are regarded today, so were myths and folktales once regarded historically, according to Adorno & Horkheimer. A "myth" could be considered an easy way out of hard questions, sometimes becoming part of a religion that served as the next level of "truth" or "knowledge" for people.

What are the “old” and “new” media that are discussed in the Dialectic of Enlightenment?
The "new" media is viewed harshly by Adorno & Horkheimer in that it pacifies the consumer by limiting their imagination, deceptively tricking them into consuming even more, simply to make money. In contrast, "old" media is explained as having less of a mass-consumer focus, where artistic values, deep discussion and critical thinking are encouraged on a much higher intellectual level.

What is meant by “culture industry”?
Adorno & Horkheimer use this notion in reference to the businesses whose goal is to entertain the masses, for instance the movie business, TV companies, book publishers, etc. The general philosophy discussed is one that views consumers as easily persuaded and tricked into consuming the "culture" that this "industry" produces. In particular, "new" media is seen as part of the culture industry, according to the authors.

What is the relationship between mass media and “mass deception”, according to Adorno and Horkheimer?
As mentioned above, the culture industry is just like any other industry interested in making people buy their product. Since culture can be seen as a way in itself to affect people, for instance letting James Bond drive a fancy new BMW, it is also deceptive in the way it subconsciously affects people. In this way, mass media is looked upon with distaste by Adorno & Horkheimer in how it manipulates people into making certain choices that benefit the producers of said media. 

Please identify one or two concepts/terms that you find particularly interesting. Motivate your choice.
I find the concepts of deception and pacification particularly interesting, especially looking at the very latest spawn of the culture industry: computer games. Take for example World of Warcraft, which is featured numerous times in the news because people have become so pacified by the game that they simply can't stop playing. Deception is also interesting because it relates to how we perceive the world. If all we see is a computer-generated world, the step to thinking that world is the real world isn't too far off. Putting this in the perspective of countries with news organizations that feed off fear and ignorance, the result is a warped, almost "mythical" view of the world.

torsdag 14 november 2013

Reflection of theme 1

First of all, I hope everything is well with Leif Dahlberg, who was the first person to welcome us to Media Technology at KTH. I feel he managed to inspire us all to study this program, something that has turned out to be filled with both courses built upon layers of logic (mathematics, programming, etc.) as well as courses filled with creativity and not necessarily 'statements of facts'.

Although it would have been good to further discuss this week's theme in the seminar groups and lectures, I feel like it has also been a good opportunity to think about these subjects on my own while reading fellow students' blog posts.

A lot of people, myself included, are quite puzzled by the choice to read 100-year-old philosophical pieces. However, as this course in essence is a preparation for the master thesis, I feel that our thinking about credibility, bias and good research can benefit from the notion of sense-data (as all blog posts have pointed out, data being perceived by the senses), and from the idea that logic and 'facts' are ultimately artificial things created by people. As I wrote before, one must consider who is given the mandate of credibility, given this view of knowledge being built in layers.

That being said, I'm looking forward to learning concrete methods for conducting scientific research in the coming themes.

torsdag 7 november 2013

Preparation for theme 1


- What does Russell mean by "sense data" and why does he introduce this notion?

Russell means to differentiate the act of sensing (the experience we have when smelling, hearing, etc.) from the result of said act. This result is the sense-data - what the color is, how it smells, and so forth. Fundamentally, Russell aims to investigate how we perceive and ultimately define "physical objects". "Is there any such thing as matter?", he asks. But the true goal, I feel, is to introduce Descartes' system of "methodical doubt", where scepticism is used to invite us all to view things in a new light.


- What is the meaning of the terms "proposition" and "statement of fact"? How do propositions and statements of fact differ from other kinds of verbal expressions?

A statement of fact is a proposition that is commonly agreed upon amongst peers, sometimes erroneously. Different social groups may have conflicting perceptions of facts; take for example religious Creationists in the U.S. who reject evolution. Russell writes about empirical knowledge, and how experience and sense-data make up what we today may propose to be a fact, later accepted as one. Propositions are, in turn, made up of data (be it historical knowledge or sense-data), of acquaintance with matters, or simply a description of an object or abstract idea. In all, a proposition is required to include constituents that we are acquainted with, according to Russell.

In the modern scientific society we live in today, the (indirect) consensus from scientific observation is that a proposition constitutes a statement of fact once it has been tested and thoroughly analyzed by the scientific community. One could argue, however, about who is given the mandate of credibility, and what impact this has on what counts as a "statement of fact".


- In chapter 5 ("Knowledge by Acquaintance and Knowledge by Description") Russell introduces the notion "definite description". What does this notion mean?

When describing objects - or propositions, for that matter - one can talk about ambiguous and definite descriptions, according to Russell. In regards to physical objects, sense-data and knowledge by acquaintance are what give our perception strength, but when looking at non-physical objects one must tune the very definition of the object itself more finely. It is therefore of value to look at a more definite description of said object, in the singular, to minimize the presence of ambiguity.


- In chapter 13 ("Knowledge, Error and Probable Opinion") and in chapter 14 ("The Limits of Philosophical Knowledge") Russell attacks traditional problems in theory of knowledge (epistemology). What are the main points in Russell's presentation?

Russell argues that our perceived knowledge may be based on either true or false beliefs, and that we cannot always tell which is which. He mentions the example of a newspaper announcing the death of a king - but what if the newspaper simply lied? That would be categorized as a false belief, resulting in no knowledge.

Russell means that because of this, self-evidence needs to play a critical part, where the sense-data is gradable in terms of truthfulness. Thus, he draws the conclusion that since knowledge needs to be inferred from self-evidence or intuitive knowledge, the majority of what we today call knowledge is simply probable opinion. Russell further argues that since knowledge and science are based on larger masses agreeing on these kinds of individual probable opinions, ideas based on probable opinion (or propositions, or theories) can never "transform [it] into indubitable knowledge".