In yesterday’s blog post, under the heading “5 Reasons Why Activity Streams Will Save You From Information Overload”, I mentioned how crucial collaborative and social filtering of relevant content and people is going to become within the Social Computing realm in the coming years, if it isn’t already, in order to allow knowledge workers to manage their various streams far more efficiently and effectively. I still believe it is going to be one of the Holy Grails of a successful Enterprise 2.0 deployment in any business. However, serendipity, once again, did its magic, and yesterday it pointed me to a couple of resources that have got me rather worried about a different scenario: what happens if we, human knowledge workers, are no longer in control of that social / collaborative filtering? Would it still make sense? Can we rely on the system to set the stage, and the landscape, for those filters of content that is supposedly relevant to us, but over which we have hardly any say? Can we trust those hidden algorithms, which are not very open and transparent in the first place, to present the results we actually want? Well, it looks like we may need to “Beware Online ‘Filter Bubbles’”. And pretty seriously!
Through the always resourceful Rachel Happe I bumped into a really interesting, and somewhat disturbing, TED Talk by Eli Pariser, under the rather suggestive title “Beware Online ‘Filter Bubbles’”, which confirms a growing trend, started by Google, amongst several others from the good old Web 1.0 world, and now followed by some of the most popular social networking sites of the moment, like Facebook. Oh, and talking about popular, allow me to point you folks to a fascinating read by the one and only Seth Godin, “What’s the point of popular?”, which fits in quite nicely with this blog entry on filtering relevant content from the Social Web effectively, considering what being popular could add to the mix. Well worth a read, for sure, with plenty of food for thought.
Now, back to Eli’s TED Talk. It lasts about nine minutes, and every single one of them is worth it. Eli demonstrates, quite effectively in my opinion (at least, he did with me!), how different Web sites and services are changing our very perception of the world, not only around us, but also elsewhere, whenever we are looking for relevant content. It started with Google, back in the day, and lately we are seeing it quite a bit with Facebook, amongst several others. So this is not something that harms the Social Web alone, but the Internet as a whole, as we have known it for years.
Eli explains what he has coined, as a growing problem, the “Filter Bubble”: basically, how we “don’t get exposed to information that could challenge or broaden our worldview” any longer when surfing the (Social) Web. The growing number of system-driven algorithms that supposedly filter the relevant content on the Web for us means we may need to start questioning the validity of those filtering mechanisms themselves, because in a good number of cases they no longer provide the desired results: they lack the embedded ethics with which humans have been surfacing not only relevant content, but also the important, the uncomfortable, the challenging, and other points of view.
He also talks about encouraging those system-driven algorithms, as much as we all love them, to have a touch of civic responsibility, to be more transparent, and to provide knowledge workers with a bit more control over what goes through and what doesn’t. Rachel Happe herself put it quite nicely in a recent blog post under the heading “Social Media Algorithms, Context and Decision Making”, explaining some of the problems that current filtering mechanisms seem to have. Another good friend, Larry Hawes, shared a worthwhile piece under “Filtering in Social Software: Protective Bubble V. Serendipitous Awareness”, where he also explores some of the issues with this Filter Bubble, after a conversation that took place on Twitter yesterday with a bunch of other folks on this very same topic.
Overall, I must confess I am very much in favour of personalisation of the content available to us out there, through the use of those social / collaborative filters under a given context, of course, which I still think is going to be critical. But at the same time, and like Eli mentions, I, too, would love to have some more control over what gets exposed to me AND, most importantly, what doesn’t! Larry explains quite nicely how it could work out eventually:
“Organizations must consciously balance the need to protect (and maximize the productivity of) their constituents from information overload with the desire to encourage and increase innovation (through serendipitous connection of individuals, their knowledge and ideas, and information they produce and consume.) That balance point is different for every organization and every individual who works in or with it.
Enterprise social software must be designed to accommodate the varying needs of organizations with respect to the productivity versus awareness issue. Personalization algorithms should be easily tunable, so an organization can configure an appropriate level of personalization […]”
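Just to make that “easily tunable” bit a little more concrete, here is a minimal, purely hypothetical sketch of what such a personalisation dial could look like (the names, weights and scoring below are entirely my own invention, not anything Larry, or any vendor, actually ships):

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance_to_me: float   # how closely the item matches my profile and past behaviour (0..1)
    novelty: float           # how far it sits outside my usual sources and topics (0..1)

def rank(items, personalisation_level):
    """Rank a stream with an organisation- or user-configurable dial.

    personalisation_level = 1.0 -> show me more of what I already like
    personalisation_level = 0.0 -> favour serendipitous, challenging content
    """
    def score(item):
        return (personalisation_level * item.relevance_to_me
                + (1 - personalisation_level) * item.novelty)
    return sorted(items, key=score, reverse=True)

stream = [
    Item("Yet another post on my favourite topic", relevance_to_me=0.9, novelty=0.1),
    Item("A challenging view from outside my network", relevance_to_me=0.3, novelty=0.9),
]

# A more "protective" organisation might set the dial high...
print([i.title for i in rank(stream, personalisation_level=0.8)])
# ...while one optimising for serendipitous awareness would turn it down.
print([i.title for i in rank(stream, personalisation_level=0.2)])
```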
Now, I do realise that this is probably a much bigger issue to tackle than a single blog post like this one, but I, for one, would really love for those algorithm-driven systems to start thinking about Smart Filtering, i.e. combining both machines AND humans to provide the best possible experience in helping track down relevant content without losing the larger context. Essentially, that’s what social and collaborative filtering should be like, don’t you think? That social aspect is us, the human side: we need to be part of the whole equation and participate actively, promoting transparency and smart personalisation where some control is left in our hands, before it’s too late and we watch the Web become steadily alienated from us all. As Eli mentioned in his talk, that was not the main purpose of the birth of the (Social) Web in the first place.
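And to sketch out what I mean by that Smart Filtering, here is another small, purely illustrative example (again, every name and weight below is made up by me, for the sake of the argument): a machine-computed relevance score blended with an explicit human / social signal, with the “why am I seeing this?” part kept visible so the knowledge worker stays in control:

```python
def smart_filter(items, followed_people, weight_machine=0.6, weight_human=0.4):
    """Blend a machine relevance score with a human / social signal,
    keeping the reasons visible so the knowledge worker stays in control."""
    results = []
    for item in items:
        human_signal = 1.0 if item["shared_by"] in followed_people else 0.0
        score = weight_machine * item["machine_score"] + weight_human * human_signal
        why = [f"machine relevance {item['machine_score']:.2f}"]
        if human_signal:
            why.append(f"shared by {item['shared_by']}, someone you follow")
        results.append({"title": item["title"], "score": round(score, 2), "why": why})
    return sorted(results, key=lambda r: r["score"], reverse=True)

stream = [
    {"title": "Algorithmically popular post", "machine_score": 0.95, "shared_by": "someone_unknown"},
    {"title": "Uncomfortable but important read", "machine_score": 0.40, "shared_by": "rachel"},
]

for entry in smart_filter(stream, followed_people={"rachel", "larry"}):
    print(entry["title"], "->", entry["score"], entry["why"])
```

With those (made-up) weights, the uncomfortable but humanly endorsed read rises above the algorithmically popular post, and the reader can see exactly why.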
It was born as an opportunity to help people connect with other people, and to connect people with relevant content, according to their needs and wants, as well as within the right context. And now, the last thing we would all want to witness, and experience first hand, is the Web losing that human touch of embedded ethics and civic responsibility to those system filters. We need to care about this. WE, and I am certain it is going to be down to us, need to keep humanising and socialising the Web, through that smart filtering, because if we don’t do it, no one else will. And that’s certainly not a prospect I am looking forward to any time soon. And I bet you aren’t either!
(I may need to even start thinking about changing my business card altogether…)
“Organizations must consciously balance the need to protect (and maximize the productivity of) their constituents from information overload …”
Really? If we take the talk of knowledge work seriously, then I do not think that the task of organizations is, or even can be, to protect their members. What should set them apart is fostering the best possible coordination and making the best possible decisions. There is not so much magic in what organizations should do.