Emerging Technologies Conference

A list of downloadable presentation files is available.

One of the sessions, on social software by Clay Shirky, has been widely blogged and discussed. Daniel Smith summarises:

What is it that makes a large, long-lived group successful? Clay’s answer: “It Depends!”

Some things are universally true:

– cannot separate technical and social patterns/concerns
– conversation can’t be forked
– cannot completely program social issues in tech
– the group will assert its rights somehow
– members are different from end-users – there will always be some group of users that cares more than average about the success of the group.
– there will be a core group which “gardens” the environment

Some more:

– The core group has rights that trump the individual in some situations
– The one-person, one-vote system does not always work (for example, a vote on creating a controversial newsgroup on Usenet: the people voting against it weren’t going to use it anyway)
– The people that want to have the discussion are the ones that matter

Clay thinks that you have to design a way to have members in good standing, and that you need barriers to participation. He thinks this is what killed Usenet. There must be some sort of segmentation of ability. Social software needs to be easy to use from the group’s point of view, as opposed to the individual’s. Ways need to be found to spare the group from scale, since two-way conversations do not scale well.

David Weinberger also has an elaborate discussion.

Personal Calendar

Phillip Windley has some interesting ideas related to events that he’s planning to attend and the courses he is teaching.

We should also look at how to integrate a calendar with a personal blog. One of the things I have been thinking about is how to integrate the notes that I make in my notebook with the blog. Both have ideas embedded in them, but the notebook is not searchable while the blog is; the notebook becomes a silo. So I could think of extending the blog to include both appointments and other events and daily personal notes, which can serve as links to the details in my notebook.
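As a rough illustration, here is a minimal Python sketch of such a combined entry, assuming a simple home-grown data model; the Entry and Event classes, their fields and the sample data are hypothetical, not the format of any particular blog or calendar tool.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Event:
    """An appointment or event attached to a blog entry (illustrative only)."""
    title: str
    start: datetime
    location: Optional[str] = None

@dataclass
class Entry:
    """A daily blog entry that can also carry events and a pointer to paper notes."""
    date: datetime
    text: str
    events: List[Event] = field(default_factory=list)
    notebook_ref: Optional[str] = None  # hypothetical pointer, e.g. "notebook 2, page 17"

    def matches(self, query: str) -> bool:
        """Crude full-text search across the entry text and its event titles."""
        haystack = self.text + " " + " ".join(e.title for e in self.events)
        return query.lower() in haystack.lower()

# Usage: a day's entry with a note, an appointment, and a pointer to the paper notebook.
entry = Entry(
    date=datetime(2003, 5, 5),
    text="Ideas on integrating the calendar with the blog.",
    events=[Event("Conference planning call", datetime(2003, 5, 6, 10, 0))],
    notebook_ref="notebook 2, page 17",
)
print(entry.matches("calendar"))  # True: the blog stays searchable, the notebook gets a pointer
```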

Global Finance Survey

The Economist has a survey of global finance.

Trade in goods and services is simple: what governments need to do, through the World Trade Organisation, through this or that regional trade agreement, or best of all unilaterally, is abolish their barriers. When it comes to finance, there is no such straightforward advice. “Let capital flow where it may” is bad policy. Finance must be intelligently regulated, at home as well as internationally, in ways that ordinary commerce does not require. When capital flows are liberalised, it needs to be done cautiously and within prudent limits. To that extent, global finance must indeed be impeded.

Governments and their advisers are a long way from understanding how this should ideally be done, let alone from putting any such understanding into practice. There is no detailed consensus on the right approach to international financial regulation, any more than there is on the domestic sort: there is plenty of activity, but for the most part it is co-operation without conviction.

The risks of international finance need to be frankly acknowledged, and then reduced so far as possible. That means weighing the costs and benefits of different kinds of capital mobility, and setting policies accordingly. It means abandoning certain orthodoxies of international economic policy. The danger cannot be eliminated altogether, but the remaining risk is worth taking because the potential gains from international capital flows are large, especially for the world’s poor countries.

To ignore that potential would be an even greater mistake than to liberalise recklessly. The global capital market is a treacherous aid to economic growth, but in the end, above all for the poor, an indispensable one.

Mobile High-Speed Wireless

Rafe Needleman writes about IPWireless and Flarion. Both have their own high-speed wireless technologies, which can be used by cellular operators. The challenge is figuring out what to do with the bandwidth. “The real issue with mobile high-speed wireless is the demand side. Do people really want cable-modem speed on their handhelds? I have no doubt that clever companies could come up with uses for all this bandwidth, but the carriers haven’t yet convinced the public that it needs videophones or push-to-talk service, both of which use a great deal of data bandwidth. Maybe if picture-phone services were more robust, or if carriers set up really useful wireless e-mail or instant messaging services (which don’t require massive bandwidth), consumers would get the portable wireless bug.”

There is an opportunity in emerging markets, especially in rural areas, for wireless services that can connect up villages using WiFi. The service needs to work over distances of 10-15 kilometres, or a few kilometres per hop if a relay approach is used.
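To get a feel for whether 10-15 kilometre hops are even plausible, here is a back-of-the-envelope link-budget sketch using the standard free-space path-loss formula; the transmit power, antenna gains and receiver sensitivity below are illustrative assumptions, not the specifications of any particular radio.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (standard formula: distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative (assumed) figures for a point-to-point 2.4 GHz village link:
TX_POWER_DBM = 15         # assumed radio output power
ANTENNA_GAIN_DBI = 24     # assumed high-gain directional antenna at each end
RX_SENSITIVITY_DBM = -85  # assumed receiver sensitivity at a low data rate

for km in (5, 10, 15):
    loss = fspl_db(km, 2400)
    received = TX_POWER_DBM + 2 * ANTENNA_GAIN_DBI - loss
    margin = received - RX_SENSITIVITY_DBM
    print(f"{km:>2} km: path loss {loss:5.1f} dB, link margin {margin:5.1f} dB")
```

A positive margin at 15 km suggests that point-to-point links with high-gain directional antennas are feasible in principle; real deployments would also have to contend with terrain, interference and line-of-sight requirements.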

Digital Newspapers Software

WSJ writes about Microsoft’s efforts:

Microsoft unveiled programs under development that will make newspaper articles as clear and easy to read on a computer screen as in the paper. With the new software, newspapers will deliver information and services like video messaging and bill payment to readers across a variety of computer-like devices.

Publishers who continue to view their Web sites as a side venture for their print editions are ignoring the next generation of readers, warned Mr. Gates. “We see the online newspaper as one where it takes all the strengths [of current newspapers] … and then adds in new capabilities,” he said.

The idea is to both duplicate the reader’s experience from the printed publication to the computer screen and allow for extra features, said Gregg Brown of Microsoft’s e-periodicals team. Not only will people be able to read about an upcoming concert, but they will have the ability to hear audio clips and even purchase tickets, he said.

Wikis

Many-to-Many is Corante’s new blog on social software. A post discusses blogs and wikis:

Context is gained from the page’s revisions, links and how it is referenced by and navigated from other pages. Wikis excel at logical context whereas blogs excel at temporal context.

Blogs emphasize form over function over aesthetics. The form of posts in reverse chronological order and blogrolls constricts possible uses and design. Sure, many a blogger has tweaked their template and design to achieve superior aesthetics, but they are bound to constraints that, if surpassed, lose recognition as a blog.

Wikis aren’t pretty, but that’s the point. Except in rare instances where design creates function, the more you design the more user functionality you sacrifice.

Wikis emphasize both reading and writing. Sure, they could be a little more readable, but that would come at a cost to writing, a cost to be carefully considered for a tool that enables a writable web.

I think I need to take a closer look at Wikis in the context of what I have been writing about the Memex, and my desire to have an outline (a sort of personal directory) for my blog.

Joi Ito’s post has an extended discussion on this topic.

TECH TALK: Constructing the Memex: Imagine

We have our own memory and we have Google as our other memory. (We also have the option of the Yahoo and DMOZ directories.) Now imagine if we could bridge the chasm between directories and search engines, making the result much more customized to our likes and to the trails that we leave as we surf the Internet, and also taking into account all that we write in emails, blogs or otherwise.

Imagine a system that uses our memory and knowledge as the starting point. We begin by outlining our interest areas – the topics that form the ecosystem of our lives. This is akin to the Yahoo or DMOZ directory of topics, only much more relevant to us. For example, in my case the main categories of this list would be something like this: Affordable Computing, ICT for Development, Emerging Markets, Enterprise Software, Information Management, New Technologies and India.

If one were to search for these topics in Google, the resulting set of links would be helpful only to a small degree, and only for the first few searches (since the results would be nearly the same each time over a short span).

These are wide topics, and they need to be narrowed down. What is needed is a taxonomy for each topic, which helps in further refining our interests. The Google search results, perhaps the Yahoo (or DMOZ) directory, and our own knowledge form the basis of this hierarchy. For example, my outline for Affordable Computing could look like this: Hardware (Thin Clients, Refurbished PCs, PDAs), Software (Linux, Applications, Language Computing), Communications (Ethernet, WiFi, WLL, VSAT).
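To make the outline concrete, here is a small Python sketch that holds the Affordable Computing branch above as a nested dictionary and walks it; the nested-dictionary representation is just one plausible way to hold such a personal outline, not a prescribed format.

```python
# The "Affordable Computing" branch of a personal topic outline, written as a
# nested dictionary; the names come straight from the text above.
taxonomy = {
    "Affordable Computing": {
        "Hardware": ["Thin Clients", "Refurbished PCs", "PDAs"],
        "Software": ["Linux", "Applications", "Language Computing"],
        "Communications": ["Ethernet", "WiFi", "WLL", "VSAT"],
    }
}

def leaves(tree):
    """Walk the outline and yield (path, topic) pairs for every leaf topic."""
    for key, value in tree.items():
        if isinstance(value, dict):
            for path, topic in leaves(value):
                yield [key] + path, topic
        else:
            for topic in value:
                yield [key], topic

for path, topic in leaves(taxonomy):
    print(" > ".join(path), "::", topic)  # e.g. Affordable Computing > Hardware :: Thin Clients
```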

This hierarchy of topics serves as the basis for our interests. It gives a unique lens and context to the information that we browse on the Web, write in emails and receive as attachments. These topics will evolve as our interests change and as we come across experts who may have done a better job in building out a certain part of the information ecosystem.

This is an evolving information base built not by a centralised organisation, but in a distributed manner by each of us. We all have expertise in specific areas. This was manifested in the early days of the Web through the millions of home pages created on Geocities and Tripod. At that time, the only way to build out these pages was through explicit and time-consuming personal involvement, something few of us were prepared to do. (Basically, the web was good for reading, but not as friendly for writing.)

So, now, imagine if each of us could build out these personal directories: outlines of topics and connections to other directories, people and documents. Much of this would happen automatically as we browsed and marked pages of interest, embellishing them with our comments. When we search, it would first scan our world of relevant information rather than the world wide web of documents.
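A toy Python sketch of that “scan our world first” behaviour follows; the index contents, URLs and comments are made up for illustration, and the fall-back to a general web search is left as a pluggable hook rather than any real search API.

```python
from typing import Dict, List

# A toy personal index: marked pages keyed by URL, each with our own comment.
# The URLs and comments are placeholders for illustration.
personal_index: Dict[str, str] = {
    "http://example.com/thin-clients": "good survey of thin client economics",
    "http://example.com/wifi-villages": "relay approach for rural WiFi",
}

def search(query: str, web_search=None) -> List[str]:
    """Scan our own marked-and-annotated pages first; only then fall back to the web."""
    query = query.lower()
    hits = [url for url, note in personal_index.items() if query in note.lower()]
    if hits:
        return hits
    # Fall back to a general web search only when the personal microcosm has nothing.
    return web_search(query) if web_search else []

print(search("wifi"))   # found in our own trail, no web search needed
print(search("memex"))  # nothing personal yet, so this would go out to the web
```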

In other words, each of us would have a microcosm of the information space, created and updated continuously by what we did. It would ensure that our ideas would have a context, that we would never forget something, and that we could leverage similar work done by millions of others like us. This is the real two-way web, linking not just documents but people, ideas and information.

Vannevar Bush imagined just such a system in 1945. He called it the Memex.

Next Week: Constructing the Memex (continued)
