Influence and Blogs

Jonathon Delacour connects Cialdini’s excellent book “Influence: The Psychology of Persuasion” and weblogs:

Cialdini identifies six principles of persuasion. One only needs to have had a weblog for about five minutes to see the relevance to blogging of Cialdini's ideas about how we are persuaded and how we reach decisions, particularly concerning whom one links to or adds to one's blogroll. If you're honest, you'll recognize that at least some of Cialdini's principles have determined your linking/blogrolling preferences:

Reciprocity: When we receive an unsolicited gift, we feel an obligation to give something in return. (If I put you on my blogroll, you'll feel obliged to put me on yours.)

Commitment and Consistency: Once we make a commitment, there is a natural tendency to behave in ways that are stubbornly consistent with our earlier decision, even if that decision turns out to be mistaken. (Now that you're on my blogroll, I'm unlikely to remove you.)

Social Proof: In a given situation, our view of whether a particular behaviour is correct or not is directly proportional to the number of other people we see performing that behaviour. (If all those other people have X on their blogrolls, then he definitely should be on my blogroll.)

Liking: We prefer to say yes to people we know and like, especially people who are physically attractive, who are similar to us, who praise us (subtly), whom we encounter regularly, and who are associated with individuals or events we admire. (The people I link to and have on my blogroll are similar to me, have praised me, and are associated with events or projects I'd like to be a part of. At the very least, since I'm never going to reach the A-list, I can bask in the A-listers' reflected glory.)

Authority: Since we have been socialized to obey legitimate authorities, we tend to also obey individuals whom we perceive to possess high levels of knowledge, wisdom, and power. (Anyone on the Technorati Top 100 must automatically be knowledgeable, wise, and powerful.)

Scarcity: We assign greater value to opportunities when they become less available and frequently assume that scarcity is an indicator of quality. (Since the A-list has so few members relative to the total blogging population, what A-listers write must necessarily be of high quality. Similarly, a link from an A-lister is enormously valuableregardless of the quality of the item at the end of that link.)

Tapping Students to Create Local Content

Satya wrote recently about an interesting idea on how to use school and college students to create local information resources:

Students will create databases of information relating to their locality and issues of immediate importance to the residents of the locality (neighbourhood, city, state or even country).

Students will learn how to gather information, analyse it, organise it and publish/disseminate it to those who can use the information to their advantage.

Possible focus areas to start with include:

  • local geography (creating a map of the school and its neighbourhood, a local GIS database with information on population, soil, climate, flora & fauna, pollution levels, civic facilities, utilities, infrastructure etc.)
  • local history (mapping history starting from the present day and going back in time)
  • local arts, crafts, literature, cultural traditions, practices etc.
  • local businesses (create an online directory of local businesses, maintain a local classifieds web site and publish a daily neighbourhood blog online or even a weekly/monthly newspaper including advertisements from local businesses/traders to meet costs)
  • availability/price of basic essentials, rental values, land values in the neighbourhood and comparison of different products and services and merchants in the neighbourhood
  • generating local neighbourhood census data with the students collecting the data themselves

The college students can focus on creating more value-added information, including economic and financial information, scientific and technical information, and tracking the activities of all local councillors and legislators.

Each school/college does all of the above for its neighbourhood, and the schools form a network so students can interact with their peers in other localities and share information and experiences through blogs, online groups and web sites. All the local databases (GIS data, classifieds etc.) can then be integrated to create larger city-wide, state-wide and nation-wide databases. These databases can then be commercialised, with the school/college serving as information consultant to local businesses and organisations: helping them address their specific information requirements using the databases, undertaking customised market research surveys or polls, or developing customised databases. All of this can generate sizeable revenues for the school/college as well.

This dovetails nicely with my IndiaMirror idea.

Can RSS, Sun, Apple challenge MS Office?

Steve Gillmor thinks so:

[RSS] could be as disruptive to personal computing as the digital video recorder has been to television… Generated by Weblog authoring tools such as the pioneering Radio UserLand, RSS feeds were consumed by a growing circle of cross-linking bloggers and a spillover audience from the trade press. But vendors and developers soon saw the opportunity to deliver content directly to the technical audience, and users saw a way to route around the growing inefficiency of e-mail and Web browsing.

Suddenly, the Windows advantage as the essential platform for applications was neutralized. In a pre-RSS world on a ThinkPad, I spent about 40 percent of my time in the browser, an equal amount in my e-mail client, and the rest in Word, Excel or PowerPoint. Now, on the Mac PowerBook, I spend 40 percent of my time in NetNewsWire (the leading Mac RSS reader), 20 percent in Entourage X (the Mac Office mail client), an equal amount in the Safari browser, and the rest in Word, Excel and PowerPoint.

With [Apple's] Safari, browsing is now an operating system service. So are spelling checking, Zip compression and, most important, instant messaging services. iChat AV brings usable videoconferencing to the table, integrating IM presence information with any tool that wants to take advantage of its service.

It's the combination of these system services that produces the RSS information router. IM presence can be used to signal users that important RSS items are available for immediate downloading, eliminating the latency of 30-minute RSS feed polling while shifting strategic information transfer out of e-mail and into collaborative groups.

Advances in RSS search, offline storage, authenticated feeds, embedded browser rendering and rich authoring tools are in progress, and all kinds of data are yielding to the RSS momentum.

Sure, but as one e-mailer asked me, "Why would developers switch to a platform of only 7 million users?" Perhaps they won't. But they will take a careful look at a Linux look-alike such as Sun's Java Desktop System, particularly with its forthcoming Looking Glass user interface and a rumored RSS tool based on Mozilla's cross-platform browser.

Sun has no problem disrupting Outlook's market share with a free RSS router, something Microsoft is loath to do. RSS puts users in charge and at a price they can afford: free.
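Gillmor's point about IM presence "eliminating the latency of 30-minute RSS feed polling" is really about switching from pull to push. A minimal sketch of that idea, with all names invented for illustration (this is not any real product's API):

```python
# Instead of every reader polling a feed on a timer, a presence channel
# notifies subscribers the moment a new item is published.

class FeedHub:
    """Holds RSS items and pushes new ones to subscribers immediately."""

    def __init__(self):
        self.items = []          # persistent store of published items
        self.subscribers = []    # callbacks standing in for IM notifications

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, item):
        self.items.append(item)
        for notify in self.subscribers:
            notify(item)         # zero polling latency: fires on publish

inbox = []
hub = FeedHub()
hub.subscribe(inbox.append)      # the "IM client" just collects items
hub.publish({"title": "RSS routers in 2004"})
```

The polling model would instead check `hub.items` every 30 minutes; the push model delivers the item the instant `publish` runs.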

RSS:2003::HTML:1994

Scott Rosenberg echoes what I feel about the potential of RSS.

The simple combination of blogs and RSS presages a whole new model for personal publishing and communication online that's already taking shape.

The beauty of RSS is that it lets you build an ad hoc network of experts and friends whose postings you want to tune in to. Then you don't have to think about it again. Along with blogs, RSS fulfills the Internet visionaries' prediction that we'd all find a set of "human filters" to help us navigate the new information seas.

As the lines between "publisher" and "subscriber," "producer" and "audience" get increasingly blurred and decreasingly useful, RSS will be at the center of the action — helping deliver on the Internet's promise of personal publishing for all.


Where’s the Syndication in RSS?

Gary Lawrence Murphy writes about the problem of aggregators hitting RSS feeds and the resulting increase in network traffic for the feed provider:

If your feed works, if you are successful in attracting subscriptions on a global scale, if you do it right, you are doomed.

As friends tell friends, as links lead to visits which lead to subscribers, the snowball rolls on towards that day like last Friday. RSS may have the potential to be a saver on bandwidth, but when you are getting hit once an hour or more by thousands of sites, 24,000 extra hits adds up, and it's all the worse when so many are using broken clients that ignore the caching rules.

This is where centralised aggregators can play a role.
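The "caching rules" Murphy says broken clients ignore are HTTP conditional GETs: a polite aggregator remembers the `ETag` and `Last-Modified` values from its last fetch and sends them back, so an unchanged feed costs a tiny 304 response instead of a full download. A sketch of the header logic only (no network calls, and the cache layout is my own invention):

```python
def conditional_headers(cache):
    """Build request headers from the previous response's validators."""
    headers = {}
    if cache.get("etag"):
        headers["If-None-Match"] = cache["etag"]
    if cache.get("last_modified"):
        headers["If-Modified-Since"] = cache["last_modified"]
    return headers

def handle_response(cache, status, headers, body=None):
    """Update the cache on 200; keep the old copy on 304 Not Modified."""
    if status == 304:
        return cache["body"]            # feed unchanged, nothing downloaded
    cache["etag"] = headers.get("ETag")
    cache["last_modified"] = headers.get("Last-Modified")
    cache["body"] = body
    return body

# First fetch returns the full feed plus an ETag validator...
cache = {}
handle_response(cache, 200, {"ETag": '"abc"'}, body="<rss>...</rss>")
# ...so the next poll can ask "only if it changed".
hdrs = conditional_headers(cache)
```

A client that skips this handshake re-downloads the whole feed on every poll, which is exactly the bandwidth snowball the quote describes.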

RSS and Handhelds

Dan Gillmor uses the Treo 600, finds an RSS reader for it, and envisions the future of "headline news – RSS style on handhelds". What he now wants: "This is a great start, being able to read this way. But the two-way Web means I need better ways to write, too. My blogging software doesn't give me an easy way to make a quick posting into just those two fields, with an extremely low-bandwidth page that's easily readable on the handheld."

I think I ought to get one of these devices. My cellphone is a 3-year-old triband Motorola L-series. I must be using one of the oldest cellphones in the world!

RSS in 2004

Come December and it is the season for predictions for the next year. Steve Gillmor on RSS:

RSS information routers will emerge in 2004 with the following characteristics:

  • Persistent storage of XHTML full-text/graphics/audio/video of RSS feeds
  • XPATH search across local and Net stores
  • Self-forming and reordering subscriptions lists based on the aggregated priorities of user-chosen domain experts
  • Use of IM notification for post notification to aggregate affinity groups and active conversations
  • Integration of Hydra-like collaborative tools for multi-author conference transcripts
  • Videoconferencing routing and broadcast/recording tools
  • Integration of speech recognition and real-time indexing to allow quoting of linear audio and video streams
  • Mesh networked peer-to-peer synchronization engine for item propagation across shared spaces on multiple clients, including phones, iPods and eventually Longhorn PDAs (circa 2006)
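The "XPATH search across local and Net stores" bullet is concrete enough to sketch. Python's standard library supports a limited XPath dialect; the two-item feed below is made up for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal RSS document standing in for a locally stored feed.
FEED = """<rss><channel>
  <item><title>RSS routers emerge</title></item>
  <item><title>Office strikes back</title></item>
</channel></rss>"""

def search_titles(feed_xml, term):
    """Case-insensitive search over item titles in a stored feed."""
    root = ET.fromstring(feed_xml)
    # XPath expression: every <title> under every <item>, at any depth
    titles = [t.text for t in root.findall(".//item/title")]
    return [t for t in titles if term.lower() in t.lower()]

matches = search_titles(FEED, "rss")
```

A real router would run the same expression across many persisted feeds rather than one in-memory string, but the query model is the same.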

Armed with these tools, new industries will emerge in rapid succession:

  • Metadata-driven directories that dynamically create RSS feeds based on affinity
  • Virtual conferences
  • IM/RSS presence networks for rich collaboration and e-mail replacement
  • Content-generation tools based on small, routable XHTML objects
  • A DRM network with enough creative and hardware support to blunt the Microsoft/RIAA DRM threat to peer-to-peer port hijacking.

Scott “Feedster” Johnson Interview

Scott has been doing fantastic work with Feedster. Excerpts from an interview in waffle:

In a world of essentially infinite information and the consequent infinite information overload, you have to pick how you want to deal with things, and we're seeing an increasing number of people turning to a temporal approach, i.e. "What's going on now?" as opposed to "What's best?" (Google). Our temporal, current orientation is a huge difference from Google. It sounds small but it really isn't.

One of the brilliant things that occurred, and I think we have to credit Dave for it, is correctly realizing that RSS + blogs was a natural pairing. Think of blogs as your friends. No one really wants to travel to see their friends, not when you have a lot of them. What you want is for them to come to you. And that's what RSS does: it lets your friends (blogs you read) come to you.

I think the next big thing to happen to the blogosphere is probably some type of commerce. Now I don't know if that's people paying for content, advertising or what, but it's bound to happen. And the people creating the content that makes up the blogosphere deserve something for their time. I think there needs to be a way to compensate people in small bits; otherwise the content will start to dry up.
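The "temporal versus best" distinction Johnson draws is just a change of sort key. A toy illustration with invented scores and dates (ISO dates sort correctly as plain strings):

```python
# Three posts with a made-up relevance score and an ISO publish date.
posts = [
    {"title": "RSS basics",    "score": 0.9, "posted": "2003-11-01"},
    {"title": "RSS today",     "score": 0.4, "posted": "2003-12-20"},
    {"title": "RSS deep dive", "score": 0.7, "posted": "2003-12-01"},
]

# Google-style: order by relevance ("What's best?")
by_relevance = sorted(posts, key=lambda p: p["score"], reverse=True)

# Feedster-style: order by recency ("What's going on now?")
by_recency = sorted(posts, key=lambda p: p["posted"], reverse=True)
```

The same index serves both answers; the product difference is which ordering you put in front of the user by default.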

Kottke’s Blog Redesign

Jason Kottke redesigns his weblog:

If you scroll down the front page of the site, you'll notice that sprinkled in with the regular posts are remaindered links (the 1-line, 1-link posts that have formerly lived in the sidebar), movie "reviews", book "reviews", and excerpts from comments I've made on other sites. Five types of content, one list.

A post is a post is a post. The newest content should appear at the top of the list of posts regardless of whether it's a short movie review, one-line link, latest photo, or any other type of update to your site that doesn't fit the typical title/text/category weblog paradigm, and each type of content should be displayed appropriately. And then if you want to view the complete list of movies, books, or all the remaindered links, you can.

What I've actually done is created 5 separate weblogs with MT and, using a bunch of MT plugins (MTSQL, Compare, MTAmazon, ExtraFields, etc.), have aggregated the 5 weblogs on the front page of the site. Which sounds complicated (and is!). But only in implementation (due to the limitations of the software). Really it's just the appropriate data presented with the appropriate design(s) in the appropriate context(s). One site, lots of content, many ways to view it.
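Stripped of the MT plugin gymnastics, Kottke's "five weblogs, one list" front page is a merge-and-sort over per-type post lists. A sketch with an invented post schema (these fields are not MT's actual data model):

```python
from itertools import chain

# Each content type keeps its own list, as in Kottke's five MT weblogs.
links  = [{"type": "link",  "title": "Remaindered link", "date": "2003-12-18"}]
movies = [{"type": "movie", "title": "Movie review",     "date": "2003-12-20"}]
books  = [{"type": "book",  "title": "Book review",      "date": "2003-12-19"}]

def front_page(*weblogs):
    """Merge any number of per-type post lists into one newest-first list."""
    return sorted(chain(*weblogs), key=lambda p: p["date"], reverse=True)

page = front_page(links, movies, books)
```

Per-type archive pages fall out for free: each source list is already the complete list of movies, books, or remaindered links.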

That reminds me: I haven't changed the look of my blog in the 18 months it's been up. Perhaps it's time to look at a fresh design soon.

Recipe Web

Les Orchard has some interesting ideas on building out [1 2] a microcontent client for recipes, based on RecipeML: “The real strength in a recipe web would come from cooking bloggers. Supply them with tools to generate RecipeML, post them on a blog server, and index them in an RSS feed. Then, geeks get to work building the recipe aggregators…Since I’d really like to play with some RDF concepts, maybe I’ll write some adaptors to munge RecipeML and MealMaster into RDF recipe data. Cross that with FOAF and other RDF whackyness, and build an empire of recipe data.”
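The "tools to generate RecipeML" Orchard imagines might look like this: take a blogger's structured recipe and emit markup for aggregators to index. The element names follow RecipeML's general shape (recipe, head/title, ingredients, directions) but this is a simplified sketch, not a validated rendering of the full schema:

```python
import xml.etree.ElementTree as ET

def to_recipe_xml(title, ingredients, directions):
    """Serialize a recipe into simplified RecipeML-style XML."""
    recipe = ET.Element("recipe")
    head = ET.SubElement(recipe, "head")
    ET.SubElement(head, "title").text = title
    ing_list = ET.SubElement(recipe, "ingredients")
    for item in ingredients:
        ET.SubElement(ing_list, "ing").text = item
    dirs = ET.SubElement(recipe, "directions")
    for step in directions:
        ET.SubElement(dirs, "step").text = step
    return ET.tostring(recipe, encoding="unicode")

xml = to_recipe_xml("Dal", ["1 cup lentils", "1 tsp turmeric"],
                    ["Boil", "Season"])
```

A blogging-tool plugin would attach this output to a post and advertise it in the site's RSS feed, which is the indexing path Orchard describes.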

A response from Troy Hakala:

We (Recipezaar) wrote a natural language recipe parser to make this possible and it's a difficult job.

Imagine a world of XML recipes distributed around the web on weblogs. An aggregator would need to aggregate millions of weblogs just to cull together a few hundred or thousand recipes. Now imagine millions of aggregator users doing this daily or hourly, the way they do today for weblogs. And if a weblogger had 1,000 recipes in their weblog archives, they wouldn't want millions of aggregators eating their bandwidth every day to maintain the database for each individual using an aggregator (webloggers today already complain about aggregators costing them too much money in bandwidth costs). Additionally, 99.999% of people who create recipes are unlikely to have a weblog to post their XML recipes, so you'd lose the majority of the potential content.

A centralized repository provides a place for regular users to post their recipes and get them seen by the largest number of people. And a centralized repository provides an easy way to search for recipes, browse for recipes, review & rate recipes, discuss recipes, etc. And let's talk numbers… Today, Recipezaar has 73,000 recipes in the database and, while it's the largest database of recipes on the internet, people still can't find a particular recipe because there is an infinite number of possible recipes that can be created. Having a few hundred or a few thousand recipes is not a useful database to people. More is better. And acquiring more via an aggregator is a big and expensive job.

Distributed databases are useful in some contexts and centralized databases are useful in others. Each has its own advantages and disadvantages, but like auctions, recipes are best stored centrally where everyone has access to them.

Adds Orchard:

One option, if the people behind RecipeZaar like the idea, is to borrow their parser via web service for use in my hypothetical MovableType plugin. This could also be used for any number of other blogging tools. On the upside, we get the benefit of all the work done by Troy and company, and they get to pull in more recipes. On the downside, we're dependent on a web service not under our control for the basic functionality of this plugin.

I'm excited to see more varieties of micro-content shared between the people of the web, but the thing I see least talked about is how this stuff will be authored. I read about data formats and all that, but in terms of user interface, we haven't progressed much past the HTML textarea. Also, I often see handwaving and assumptions that the content is really pretty simple — but as Troy Hakala would tell you, not even something as simple as a recipe is a slam dunk in terms of digestion by a machine. There needs to be some happy medium between a natural human expression of information and the rigorous structuring required by a machine, mediated by good user interface.

As I read all this, I couldn't help thinking that what we need is an Information Marketplace. I think I have to speed up the thinking and just get it done. There are many areas I can now think of applying it to: SMEs finding each other, an IndiaMirror, and now recipes.