
Tuesday, March 13, 2007

Web 3.0 (?)

Web2.0 has become the big buzz word of the year.

Reading certain news and hearing some people talk, it seems that we have gone back in time to 1999, but with tagging and social networking replacing portals and e-commerce as the bright ideas that will change the world and make (some of) us very rich. Web2.0 has its new heroes (the googles, the flickrs,...) and villains (guess who...). And obviously, there are the pundits who talk a lot about it, create all the hype and, in some cases, make a lot of money out of it.

But we cannot say that we did not learn our lessons: the Web2.0 bubble may burst as the .com bubble did a few years ago, so it is better to have a new concept ready for when that happens: and the term is, obviously, Web3.0.

Web3.0 is just a fancy name for a concept that has been lying around for a few years: the semantic web. I guess that the term semantic web sounds too geeky to attract venture capital, so somebody came up with the fancier Web3.0, and then publications such as the MIT Technology Review picked it up, so it is becoming mainstream in internet and technology circles. According to Nova Spivack, blogger and founder of a startup using semantic web technologies, there is even a Web4.0 waiting somewhere in the future...

But beyond all the naming fireworks, there is a more subtle issue around the concept itself of an intelligent (as in artificial intelligence) network. I've already talked about the good and the bad of the collective intelligence that some of the new internet-based technologies enable. The underlying question is how much intelligence you are ready to outsource to somebody else, be it some artificial intelligence search engine, be it the collective sitting somewhere in cyberspace or, most probably, a combination of both.

It is obvious that any technology, and Web *.0 is no exception, embodies in its design lots of cognitive and social assumptions, and when adopting those technological artifacts we are, to a certain extent, adopting those assumptions too. And that is fine, as long as you are aware of what those underlying assumptions are and what they mean for you.

An example with search engines: although Google's page ranking algorithm is kept as the company's major trade secret, it is well known that it is somehow based on the number of pages that link to a certain page. So, when I'm using Google as a search engine, I know that I'm actually looking, more or less, for the most popular pages about something, and hoping that the most popular are also the best. But of course, that is not always the case, so it is nice to have other search engines that are based on other criteria and even a different search space (some of them provided by Google itself, like Google Scholar for research papers).
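Just to make the "popularity as a proxy for quality" idea concrete, here is a toy sketch of link-based ranking in the spirit of PageRank. It is not Google's actual algorithm (which, as I said, is secret); the link graph, the damping factor and the iteration count are all made up for illustration.

```python
# Toy link-based ranking, PageRank-style. The graph below is invented:
# each key is a page, each value is the list of pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def rank(links, damping=0.85, iterations=50):
    """Spread each page's score across its outgoing links, repeatedly."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * score[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        score = new
    return score

# Pages with more (and better-ranked) incoming links end up with higher scores:
# here "c" wins, since almost everybody links to it.
print(sorted(rank(links).items(), key=lambda kv: -kv[1]))
```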

In the end, I use different search engines for different things, and I guess that the Web3.0 response to this would be to build some kind of intelligent agent that embodies part of the criteria I use to choose among the different criteria embodied in the different search engines. The only problem I see is that somewhere in this chain of building intelligence on top of intelligence there must be some place left for personal, private options and criteria.
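To show what I mean by an agent that embodies my own selection criteria, here is a deliberately simple sketch. The engines and the rules are hypothetical placeholders, not any real Web3.0 technology; the point is only that the personal criteria have to be written down somewhere, explicitly.

```python
# Hypothetical "agent" that routes a query to a search strategy based on
# simple, personal rules. Everything here is illustrative.
ENGINES = {
    "papers": "Google Scholar",      # research-oriented search space
    "popular": "Google web search",  # link-popularity ranking
    "news": "a news-specific engine",
}

def choose_engine(query: str) -> str:
    """Encode a bit of my own selection criteria as explicit rules."""
    q = query.lower()
    if "paper" in q or "study" in q:
        return ENGINES["papers"]
    if "today" in q or "latest" in q:
        return ENGINES["news"]
    return ENGINES["popular"]

print(choose_engine("latest semantic web news"))  # -> a news-specific engine
print(choose_engine("pagerank original paper"))   # -> Google Scholar
```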

I have to admit that I have not dug deep enough into semantic web theories and technologies to understand how personal options and individual differences can be implemented in a way that is also easy to understand and use. But it is also true that I have not seen this issue addressed by any of the Web3.0 visions and predictions I have come across so far. So, at least for the time being, I will remain on the skeptical side about Web3.0, and I'll be, at least, a little reluctant to outsource the small portion of intelligence I have left to some unknown agent...
