Currently I am reading lots of research papers and books in order to build the theoretical framework for my dissertation. As I dig deeper and deeper into the older literature, I stumble across stunning pieces of thought. An excellent example is David Gelernter’s book Mirror Worlds (1992).
From the perspective of the early nineties, Gelernter describes a vision of future information systems called “mirror worlds”. While his implementation idea sounds a little like Second Life and virtual worlds, in my opinion the most significant part of the book is the first 30 pages. Gelernter motivates his ideas and explains the necessity of open information and the concept of information flows, which could be the foundation of little agents supporting people. It is hard to paraphrase, so I am going to let the original text speak:
First of all, Gelernter motivates mirror worlds with ubiquitous information on city life:
“Suppose you are sitting in a room somewhere in a city, and you catch yourself wondering – what’s going on out there? What’s happening? At this very instant, traffic on every street is moving or blocked, your local government is making brilliant decisions, public money is flowing out at a certain rate, the police are deployed in some pattern, there’s a fire here and there, the schools are staffed and attended in some way or other, oil and cauliflower are selling for whatever in local markets… This list could fill the rest of the book. Suppose you’d like to have some of this information? Why? Who are you to be so nosy? Let’s say you’re a commuter or an investment house or a school principal or a CEO or journalist or politician or policeman or even a mere, humble, tax-paying citizen. Let’s say you’re just curious.” (p. 3)
“The software model of your city, once it’s set up, will be available (like a public park) to however many people are interested, hundreds or thousands or millions at the same time. It will show each visitor exactly what he wants to see – it will sustain a million different views, a million different focuses on the same city simultaneously.” (p. 5)
He goes on to note that, while the task is very complex, the supporting software has to be based on only a few basic and simple principles:
“At the same time we develop vast complex software worlds, the simple machines of information structure are also just being invented. The wheel, the ramp, the wedge, the screw, the lever. Much of today’s software-structures research amounts precisely to this search for universal, simple information-machines that can support vast complex structures. It makes no sense to reinvent the bolt and the geartrain every time you design a mechanical device. Builders of information machinery too would prefer to start with the universal, basic stuff in hand. But what are the simple information machines?” (p. 10)
From today’s perspective, it is possible that we have already found the key mechanisms: status updates, following, and hashtags.
Afterwards, Gelernter introduces the concept of intelligent agents. He discusses a hospital example, where agents dig into available information and support users by hints and other information:
“A note might say that ‘Dr. X’s agent is surprised that you haven’t tried W yet.’ Some agents are public […], others designed only for their creators.” (p. 21)
We have such agents on Twitter today for lots of simple tasks. In a broader context, combined with data mining, professional tools like Akibot are stepping in. So Gelernter’s future is becoming reality.
Finally, the author discusses business uses and hits the nail on the head. Imagine the following passage with “Enterprise Microblogging” as its headline. It would make complete sense:
“Complex high-tech manufacturing requires teams of design, engineering, manufacturing-process and production specialists. Traditionally, new products went their way back and forth among these groups, from one to the next and (if necessary) back again until an initial idea has been transformed into a buildable product. Nowadays we hear that this ad hoc process is too expensive. An integrated, coordinated design process is essential, with designers and engineers and production people working simultaneously on the same project. […]
The problems in coordinated design center on information flow. Everyone needs to be up to date. Everyone needs to know immediately if his own group’s work has been jeopardized or in any way affected by another group’s decision.” (p. 26)
Given current developments with microblogging, ubiquitous microblogging and activity streams, Gelernter’s foresight is astonishing. A truly recommended read:
Gelernter, D. (1992). Mirror Worlds: Or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean. Oxford University Press.
I have not written very much here in the last weeks. The reason is that I was very busy organising our #ubimic initiative together with Martin, Lutz and Stefan. #ubimic stands for “ubiquitous microblogging”. In this vision everyone and everything uses microblogging to publish/subscribe to information. We are working with a number of worldwide partners including ESME, Communote, Akibot and various research partners.
You may want to subscribe to the #ubimic blog, too: http://ubimic.org/en/blog/.
The term ‘microblogging’ suggests that the only difference between Twitter and classic blogs is size. Pretty clearly, this is not the case. It feels like Twitter users are somehow more connected and everything is more interactive. I wrote down my thoughts on that in a working paper which I have now published on Sprouts.
The findings suggest that classical blogging and microblogging use the same concepts (channels and items) but differ in the support of interaction between them. The following figure shows the different forms of interaction in blogging and microblogging:
On the other hand, it seems clear that the foundation for the richer interaction experience of microblogging is its lack of interoperability and its centralised approach. Please see the working paper for the detailed argument:
Böhringer, M. (2009). “Really Social Syndication: A Conceptual View on Microblogging.” Sprouts: Working Papers on Information Systems, 9(31). http://sprouts.aisnet.org/9-31
Disclaimer: please note that this is a working paper which has not been through rigorous peer review.
As I wrote in the posting Microblogging – What’s next?, I strongly believe that there could be huge value in including non-human information sources in (enterprise) microblogging. The original idea for such a scenario, including processes, machines and sensors (see the image for examples from Twitter), goes back to my master thesis one year ago. It took until now for me to come up with a term that captures this idea: ubiquitous microblogging.
Obviously, ubiquitous microblogging leans on the well-known research field of ubiquitous computing. While the latter understands ubiquity as artificial computing devices being everywhere in the real world, ubiquitous microblogging means real-world objects being integrated and represented in an artificial computing space. In our definition, ubiquitous microblogging is a microblogging system including everyone and everything in an organisation. We therefore borrow the conceptual meaning of ‘ubiquitous’ in the sense of its Latin origin, ‘everywhere’.
Weiser (1991), in his vision of ubiquitous computing, stated that ‘the most profound technologies are those that disappear’. Figuratively speaking, this is also true for ubiquitous microblogging, as the goal behind our approach is to hide the complexity of real-world information access by providing a flat information space accessible through the simple following mechanism.
The approach of ubiquitous microblogging has much to do with the search for enterprise use cases of microblogging, and a growing number of researchers are thinking about this topic. Michael Rosemann from Queensland University of Technology described how microblogging could be used for business process management. Alexander Dreiling from SAP shows a prototype for collaborative modelling with Google Wave (is Wave microblogging? I am going to discuss this question in a future posting). But the other way round is also possible, as the team behind Akibot shows with their microblogging bot using NLP (Natural Language Processing). And finally, our research group is currently involved in several microblogging projects, including ‘microblogging for logistics’ (think of tweeting RFID chips).
To implement a full ubiquitous microblogging scenario, a lot of work still has to be done. Today’s examples from Twitter are individually programmed prototypes. For enterprise-wide ubiquitous microblogging we need much more sophisticated architectural approaches. Currently, we are thinking about what such a ‘microblogging middleware’ could look like.
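To make the idea more concrete, here is a minimal sketch of what the core of such a middleware could do: treat every source – human, sensor or ERP system – as a channel, and route its postings to everyone following it. All names and the API are purely illustrative, not an actual design.

```python
from collections import defaultdict


class MicroblogBus:
    """Illustrative in-memory 'microblogging middleware' core:
    every source (human, sensor, ERP system) is just a channel,
    and following a channel routes its postings to your inbox."""

    def __init__(self):
        self.followers = defaultdict(set)   # channel -> subscriber ids
        self.inboxes = defaultdict(list)    # subscriber id -> postings

    def follow(self, subscriber, channel):
        self.followers[channel].add(subscriber)

    def publish(self, channel, text):
        posting = f"{channel}: {text}"
        for subscriber in self.followers[channel]:
            self.inboxes[subscriber].append(posting)


bus = MicroblogBus()
bus.follow("martin", "rfid-reader-7")   # a machine source
bus.follow("martin", "erp-orders")      # an ERP event channel
bus.publish("rfid-reader-7", "pallet 4711 left gate B")
bus.publish("erp-orders", "new order #982 from customer X")
print(bus.inboxes["martin"])
```

The point of the sketch is that the same flat following mechanism serves people and machines alike; everything beyond that (persistence, access control, federation) is exactly the open architectural work mentioned above.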
Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 265(3), 94-104.
Our enterprise microblogging case study “Towards an understanding of social software: the case of Arinia” has been accepted for HICSS. HICSS is a leading Information Systems conference taking place in Koloa (Hawaii).
Here is the abstract:
“This paper presents the case of Arinia, a custom made piece of social software with strong similarities to today’s microblogging applications. Arinia has been in use in a medium-sized technology company for more than 10 years; therefore, it is considered that the software is a valuable source of insights into the underlying principles of microblogging in an enterprise context. Due to the unique nature of the case we used an interpretive approach to learn about Arinia, its users and their contexts, involving semi-structured interviews, a survey, quantitative usage data and an excerpt from the posting base in order to achieve a comprehensive view on the case. The results suggest that there is reasonable potential in sharing micro-level information inside organizations. In particular, the findings provide evidence of enabling factors and allow us to introduce the concept of the ‘information food chain’. Together, these findings present a foundation for further research on current microblogging applications.”
The citation will be as follows:
Barnes, Stuart J.; Böhringer, Martin; Kurze, Christian; Stietzel, Jacqueline: Towards an understanding of social software: the case of Arinia, in: Proceedings of the 43rd Hawaii International Conference on System Sciences (HICSS-43), Koloa, Kauai, Hawaii, 5–8 January 2010, in press.
I just came across an interesting piece of my master thesis which I should share with you. I wanted to identify important use cases for microblogging but discovered that there is a really broad spectrum of different scenarios. Therefore, I tried to order these thoughts along the different viewpoints of ‘microblogging stakeholders’. If you look at your employees, things like company culture, motivation or social awareness could be important, while your process manager is most probably thrilled by the documentation and tracking possibilities. This resulted in the following usage pyramid.
I chose the form of a pyramid because the basic levels seem to provide a very good argument for microblogging in terms of hard facts and ROI, while the benefits at the top are ‘softer’ and harder to quantify. Nevertheless, they are useful, and in my view the full spectrum is exactly what makes microblogging so powerful.
I have been involved in enterprise microblogging right from its start. When you are part of something, it is always hard to have a neutral opinion on it. However, due to my academic view on the topic I would claim to stand somewhat outside the hype centre. So I ask myself these days: what is next?
While others have already made predictions (e.g. Gartner’s view on the topic), I am especially interested in the level of microblogging adoption. Is ‘microblogging’ really what we see today, for example on Twitter? Is that the end of the development? This would mean that the only challenge in enterprise contexts is to adopt it in the right way to create enterprise twitterers.
When I look at microblogging, I primarily see a huge instrument for information transmission with the recipient choosing the sources. Today, the sources are mostly humans. There are funny exceptions like the London Tower Bridge (http://twitter.com/towerbridge) or the Tweeting Cat Door (http://twitter.com/GusAndPenny). However, we do not see such implementations in the enterprise. Most enterprise information would fit microblogging, but we cannot find it there. Think of new quotes, new orders or new customers (coming from an ERP system), or alerts from the fire control system, or oil level alerts from the company’s cars. There is huge potential in integrating the company’s stream of micro-information using microblogging. Human text messages are only one part of it.
The problem is that by starting with 100% human postings we started with the most difficult part. Every Twitter user knows the problems that come with following several hundred users: you simply cannot be aware of every posting, yet every posting might be worthwhile. Therefore, intelligent systems should help us find connections between postings and filter out the most important ones. The bad news is that this is very hard to achieve with unstructured, 140-character pieces of text. By combining these contents with well-structured streams of machine-readable data, we (or rather our computers) could better understand the whole information ecology evolving out of microblogging. I expect the future to bring further developments in this area. Let’s see.
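A toy sketch can illustrate why structured machine postings make filtering tractable where free text does not. The postings, sources and severity field below are all hypothetical examples, not a real data format:

```python
# Hypothetical mixed stream: human postings are plain text, machine
# postings carry structured, machine-readable fields alongside the text.
postings = [
    {"source": "colleague", "text": "off to lunch"},
    {"source": "fire-control", "text": "alarm in building C",
     "severity": "critical"},
    {"source": "fleet-car-12", "text": "oil level low",
     "severity": "warning"},
    {"source": "colleague2", "text": "great meeting today"},
]


def important(stream):
    """Surface structured postings first, ordered by severity.
    Unstructured human text offers nothing comparable to sort on."""
    rank = {"critical": 0, "warning": 1}
    structured = [p for p in stream if "severity" in p]
    return sorted(structured, key=lambda p: rank[p["severity"]])


for p in important(postings):
    print(p["source"], "-", p["text"])
```

With only 140 characters of prose, a filter would need language understanding to decide that a fire alarm outranks a lunch note; with a structured field, a one-line sort suffices.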
Yesterday evening I found a very interesting new research paper (via the Twitter search for ‘RT microblogging’). It was presented by Daniel R. Sandler at IPTPS ’09 on 21 April and deals with decentralized microblogging:
Sandler, D. R. (2009). Birds of a FETHR: Open, Decentralized Micropublishing. 8th International Workshop on Peer-to-Peer Systems (IPTPS ’09), Boston, MA, 21 April 2009.
Wow. At first this was quite a shock, as one of my current research projects deals with the same thing. On the other hand, it is great to find researchers with the same interests and thoughts. And a closer look at their work shows that they have a different solution for the same problem.
The first part of the paper is a great motivation for decentralized microblogging. They show the disadvantages of Twitter’s monolithic architecture, and I strongly agree with them. However, their solution is a new protocol, ‘FETHR’, which has to be spoken by all applications in their decentralized microblogging space. Furthermore, with FETHR the microblogging postings are pushed to the subscribers (rather than fetched by them).
Personally, I strongly believe that the big advantage of microblogging is its character as blogging enhanced with a social network (following/followers, @-references, replies) and combined with a publish-subscribe mechanism. There are already widespread web standards which could help us implement decentralized microblogging. In my opinion there is no need for a new protocol.
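The push-versus-fetch distinction mentioned above can be reduced to a few lines. This is only an illustrative sketch of the two delivery styles, not the actual FETHR protocol or any existing standard:

```python
class Publisher:
    """Toy publisher supporting both delivery styles."""

    def __init__(self):
        self.postings = []
        self.callbacks = []

    # Pull (classic feed style): subscribers poll for items
    # they have not seen yet.
    def fetch_since(self, index):
        return self.postings[index:]

    # Push: the publisher delivers each posting to its
    # subscribers the moment it is published.
    def subscribe(self, callback):
        self.callbacks.append(callback)

    def publish(self, text):
        self.postings.append(text)
        for deliver in self.callbacks:
            deliver(text)


received = []
pub = Publisher()
pub.subscribe(received.append)   # push subscriber
pub.publish("hello world")
print(received)                  # arrived without any polling
print(pub.fetch_since(0))        # the same item, fetched on demand
```

Both styles move the same postings; they differ in who initiates the transfer, which is exactly where a new wire protocol versus existing feed standards becomes a design decision.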
However, they wrote a great paper, they go in the right direction and they were the first to publish their approach. Kudos! I am looking forward to future discussions on the topic!
Have a look at this great project: http://socialcollider.net/.
The application is completely based on JavaScript - no Flash or other non-standard technologies. It visualises your (or anyone else’s) Twitter history:
It is always hard to explain the Web 2.0 phenomenon to people who are not used to it. I tried to compare microblogging with newspapers earlier. Another analogy came to mind on my way to work today.
Let us compare cars and their drivers with users on the web. The old web was built like a car without windows on the sides and in the back. It had no horn, no headlamps and no reflectors. The front window was only big enough for the driver to see one metre ahead. This is like the anonymous web user who finds his own lonely way through the huge web and its knowledge.
But this is not the way we build cars. What we need for effective travelling is awareness. For this reason we have all these windows, headlamps, indicators and brake lights. Modern cars even have infrared-based electronic systems to enhance the awareness of surrounding objects. We have navigation devices with integrated traffic jam detection, and so on.
This is exactly what happened to the web. Users are aware of each other. Comments on blogs lead to other blogs and persons. Social networking services like Facebook tell you what is new in your network. You can read recommendations from other users before you buy a book online. And so on. Web 2.0 today even goes beyond this car analogy, because the modern web is personalized. If every car had your name and email address on the bonnet, it would be something similar. Let us see what happens next.