Stigmergy yet again – well, there is no escaping it. While I wouldn’t call stigmergy a new paradigm (that’s too pompous – in any event, paradigms only have shape in a historical sense), stigmergy is one of the most fruitful mechanisms around that speaks to distributed cognition.
Check out this paper, nicely titled “Standing on the shoulders of ants: stigmergy in the Web”, by a student working on the stigmergic properties of the Web. It is a nice overture to further, more technical work, highlighting some important issues that need to be addressed. Two points are in order.
(1) I disagree that “we don’t yet have a clear definition of stigmergy” – I don’t think that one can prescribe necessary and sufficient conditions, but we can surely specify typical features.
(2) As to the claim that “combining bio-inspired designs and algorithms based on stigmergy with social network analysis might facilitate the creation of a more sophisticated web application” – this is already here, the preeminent example being Amazon’s recommendation algorithm.
Recommendation algorithms generally come in two varieties: collaborative filtering (CF) and cluster models (CM). CF attempts to mimic the process of “word-of-mouth” by which people recommend products or services to one another. CF runs on the notion that people who agreed in the past will agree in the future. It aggregates ratings of items to recognize similarities between users, and generates a new recommendation of an item by weighting the ratings of similar users for the same item. But this technique is computationally expensive because “the average customer vector is extremely sparse” (Linden, Smith, & York, 2003, p. 77). By contrast, CM divides the agent base into segments, treating the task as a classificatory problem. An agent is assigned a category comprised of similar agent profiles, and only then are recommendations generated. CM is computationally efficient since it searches only segments, rather than the complete database.

Amazon.com’s recommendation algorithm is a derivative form of CF and CM. Consider an example. A search on Amazon for “stigmergy” returns 176 items, the default sort being by relevance (as opposed to price, reviews, or publication date). Also given some prominence is a category “Customers who bought items in your Recent History also bought x, y, z . . .”, supplemented by Listmania, lists of salient material compiled by agents (all-comers, as in Wikipedia) who ostensibly have some intimacy with the topic. There are also so-called “reviews” of a given title. All this is over and above a record of my recent purchases, which included stigmergy-related material (assuming one hasn’t expunged Amazon’s cookies from one’s browser). For many titles there is even the opportunity to peruse the contents page, read an excerpt, and be enticed by the dustjacket hyperbole.
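The user-to-user flavour of CF described above can be sketched in a few lines. Everything here – the toy ratings, the function names – is an illustrative assumption, not Amazon’s actual implementation: unseen items are scored by the similarity-weighted ratings of other, like-minded users.

```python
from math import sqrt

# Hypothetical sparse ratings: user -> {item: rating}. Toy data for illustration.
ratings = {
    "ann": {"stigmergy": 5, "ants": 4, "webs": 1},
    "ben": {"stigmergy": 4, "ants": 5},
    "cal": {"webs": 5, "ants": 1},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    shared = set(u) & set(v)
    num = sum(u[i] * v[i] for i in shared)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(user, ratings, top_n=1):
    """Score items the user hasn't rated by similarity-weighted ratings of others."""
    seen = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, theirs)
        for item, r in theirs.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Note where the expense creeps in: every query walks the whole user base, and with millions of mostly-empty customer vectors that walk is exactly the sparsity problem Linden, Smith, and York point to.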
Furthermore, one can be alerted by email when a new title, or a new edition of a book, matching one’s previous trails of interest becomes available, a preorder entitling the buyer to a discount. This all adds up to a highly bespoke experience, better tailored than being in a bookstore: scanning the shelves, one is unlikely to stumble on a title one has yet to discover, because there is no “pheromone” trail. Rather than matching user to user, the Amazon algorithm finds items that customers tend to purchase together. It is computationally efficient (and easily scalable) because much of the computation has already been done offline. The stigmergic interest of Amazon’s algorithm is patently clear: an item-to-item search generates a trail that gives rise to novel patterns of behavior. CF’s great virtue is that suppliers can be finely attuned to consumer behavior. The downside is the risk of “a kind of dysfunctional communal narrowing of attention” that can be self-fulfilling (Clark, 2003, p. 158; Gureckis & Goldstone, 2006, p. 296). Excerpt from Stigmergic epistemology, stigmergic cognition.
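The item-to-item scheme can be sketched as a two-phase process; the baskets and item names below are toy assumptions, not Amazon’s production data. The expensive co-purchase counting happens offline, so the online “customers also bought” step is a cheap table lookup – which is also where the “pheromone trail” lives: each purchase strengthens a pairing that later shoppers will follow.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical order baskets (sets of items bought together). Illustrative only.
baskets = [
    {"stigmergy", "ant_colonies", "swarm_intel"},
    {"stigmergy", "swarm_intel"},
    {"ant_colonies", "web_science"},
    {"stigmergy", "ant_colonies"},
]

def build_copurchase_table(baskets):
    """Offline step: count how often each pair of items shares a basket."""
    table = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            table[a][b] += 1
            table[b][a] += 1
    return table

def also_bought(item, table, top_n=2):
    """Online step: a cheap lookup into the precomputed co-purchase table."""
    related = table.get(item, {})
    return sorted(related, key=related.get, reverse=True)[:top_n]

table = build_copurchase_table(baskets)
```

Because the table is keyed by item rather than by user, its size tracks the catalogue, not the customer base, which is what makes the approach easy to scale.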