Web and education, a successful open entanglement.
Semantic Technologies for Learning and Teaching in the Web 2.0 era - A survey.
The oreChem Project: Integrating Chemistry Scholarship with the Semantic Web.
Why Bowl Alone When You Can Flashmob the Bowling Alley?: Implications of the Mobile Web for Online-Offline Reputation Systems.
Trust- and Distrust-Based Recommendations for Controversial Reviews.
The Devil's Long Tail: Religious Moderation and Extremism on the Web.
Lessons from the Net Neutrality Lobby: Balancing openness and control in a networked society.
The consumer and the Web: a critical revision of the contributions to Web science from the marketing and the consumer behaviour discipline.
Can cognitive science help us make information risk more tangible online?
Designing effective regulation for the "Dark side" of the Web.
On Measuring Expertise in Collaborative Tagging Systems.
Evaluating implicit judgements from Image search interactions.
Ephemeral emergents and anticipation in online connected creativity.
Federating Distributed Social Data to Build an Interlinked Online Information Society.
Kernel Models for Complex Networks.
Six Degrees of Separation in Online Society.
BFFE (Be Friends Forever): the way in which young adolescents are using social networking sites to maintain friendships and explore identity.
Class Association Structure Derived From Linked Objects.
Social Meaning on the Web: From Wittgenstein To Search Engines.
Interactive Information Access on the Web of Data.
Introducing new features to Wikipedia - Case studies for Web science.
Is web-based interaction reshaping the organizational dynamics of public administration?: A comparative empirical study on eGovernment.
Online Dispute Resolution: Designing new Legal Processes for Cyberspace.
Experiments for Web Science: Examining the Effect of the Internet on Collective Action.
LifeGuide: A platform for performing web-based behavioural interventions.
Securing Cyberspace: Realigning Economic Incentives in the ICT Value Net.
Cognition, Cognitive Technology and the Web.
Web and education, a successful open entanglement (Papers 1: Teaching and Learning) | E-learning, understood as the intensive use of Information and Communication Technologies in distance education, has radically changed the meaning of the latter. Today, the most widely accepted meaning of e-learning coincides with the fourth generation described by Taylor (1999), where there is an asynchronous process that allows students and teachers to interact in an educational process expressly designed in accordance with these principles. We prefer to speak of Internet-Based Learning or, better still, Web-Based Learning, for example, to explain the fact that distance education is carried out using the Internet, with the appearance of the virtual learning environment concept, a web space where the teaching and learning process is generated and supported (Sangrà, 2002). This entails overcoming the barriers of space and time of brick and mortar education (we prefer the term face-to-face) or of classical distance education using broadcasting and adopting a completely asynchronous mode... | |
Semantic Technologies for Learning and Teaching in the Web 2.0 era - A survey (Papers 1: Teaching and Learning) | The strengths of semantic technologies for learning and teaching and their benefits in the areas of digital libraries, virtual communities and e-learning have been identified and well established. The case for semantic technologies in education has rested on the expressive power of metadata to describe learning content, people, and services, and on how these could be intelligently matched for added-value services and an advanced learning experience. However, certain concerns about the feasibility of ontology consensus and of annotating the enormous amount of content currently available on the Web have arisen, making globally available and interoperable semantic-rich metadata for learning resources a long-term vision. At the same time, progress towards a more modest machine-readable Web has been made, and pragmatic solutions to interoperability based around REST and XML have emerged in the last few years along with prototypes of SPARQL server implementations and new RDF/OWL annotatio... | |
The oreChem Project: Integrating Chemistry Scholarship with the Semantic Web (Papers 1: Teaching and Learning) | The oreChem project is a collaboration between chemistry scholars and information scientists to develop and deploy the infrastructure, services, and applications to enable new models for research and dissemination of scholarly materials in the chemistry community. Although the focus of the project is chemistry, the work is being undertaken with an attention to general cyber infrastructure for eScience, thereby enabling the linkages among disciplines that are required to solve today’s key scientific challenges such as global warming. A key aspect of this work, and a core aim of this project, is the design and implementation of an interoperability infrastructure based on semantic Web principles that will allow chemistry scholars to share, reuse, manipulate, and enhance data that are located in repositories, databases, and Web services distributed across the network. | |
Why Bowl Alone When You Can Flashmob the Bowling Alley?: Implications of the Mobile Web for Online-Offline Reputation Systems (Papers 2: Trust and Distrust) | This paper explores the implications of the mobile web for expanded use of reputation management and collaborative content editing systems and enhanced forms of civic participation that may develop as a result. The paper bears upon the Websci’09 topics of Trust and Reputation, Networking (Social and Technical), and Government and Political Life. Previous scholarship has identified reputation tracking as a necessary precondition for the development of large-scale online participatory communities (Resnick 2000, Benkler 2006, Lev-On and Hardin 2007). Reputation systems incentivize good behavior while spreading out the costs of monitoring and policing bad behavior. They provide a crucial alternative to the logic of the firm, allowing for the scaling up of collaborative content creation in the absence of formal hierarchies of authority. To date, however, their use has been limited by a critical reality: reputation systems stop at cyberspace’s edge. Offline activity that is supporte... | |
Trust- and Distrust-Based Recommendations for Controversial Reviews (Papers 2: Trust and Distrust) | Systems that guide users through the vast amounts of online information are gaining tremendous importance. Among such applications are recommender systems (RSs) [1], which, given some information about their users' profiles and relationships, suggest items that might be of interest to them. Collaborative Filtering (CF) is a widely used strategy for generating recommendations by identifying users with tastes similar to the target user [8]. However, research has pointed out that people tend to rely more on recommendations from people they trust than on online RSs which generate recommendations based on anonymous people similar to them [9], and that humans score better on generating predictions for users with an eclectic profile [5]. These observations, combined with the growing popularity of online social networks and the trend to integrate e-commerce applications with recommender systems, have generated rising interest in trust-enhanced RSs (see e.g. [2, 6, 7, 10]). Such systems incorp... | |
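The trust-enhanced recommendation idea described in this abstract can be illustrated with a minimal sketch. This is not the paper's method: it simply predicts a target user's rating for an item as a trust-weighted average of the ratings given by users the target trusts. The function name, the trust weights, and the data shapes are all hypothetical.

```python
# Hedged sketch of trust-weighted rating prediction (illustrative only).
def trust_weighted_prediction(trust, ratings, item):
    """trust: user -> trust weight in [0, 1] from the target's viewpoint;
    ratings: user -> {item: score}. Returns None if no trusted user rated the item."""
    num = den = 0.0
    for user, weight in trust.items():
        if item in ratings.get(user, {}):
            num += weight * ratings[user][item]
            den += weight
    return num / den if den else None
```

A more trusted user's rating pulls the prediction further towards their score, which is the intuition behind preferring trusted sources over anonymous similar users.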
The Devil’s Long Tail: Religious Moderation and Extremism on the Web (Papers 2: Trust and Distrust) | Adam Smith’s Wealth of Nations offers a striking observation about religious groups: An unfettered religious market leads to a multitude of small sects and away from monopolistic churches. ‘Strict’ or ‘extreme’ religious views persist – because they cater for particular niches – but they remain small in terms of membership because the moderate religious centre-ground is where most believers and potential believers reside. However, the Web, because of the ease with which information is passed across it, caters for individuals with extreme interests or niche markets just as easily as it does for mainstream tastes. The Web removes many costs traditionally associated with market operations, thus allowing extreme or ‘strict’ religious groups and sects to flourish in an unprecedented way by making it more feasible to cater to the long tail of the religious market. If this is true, then it threatens Smith’s claim. It also undermines the associated idea that convergence on the middl... | |
Lessons from the Net Neutrality Lobby: Balancing openness and control in a networked society (Papers 3: Openness and Control) | As much of everyday life becomes mediated by networked Web technologies, our communication, decision-making, and social lives depend on infrastructures designed, built, and maintained by a variety of stakeholders. This has made the structure and regulation of the internet into a battleground. On one side are infrastructure owners who claim that increased traffic increases their costs, that they must begin to filter network traffic and charge higher prices for network access, and that they should retain the right to control pricing and business models. On the other are advocates who claim that equal access to the infrastructures of the web is important enough to be considered an extension of the free speech rights so culturally important, for example, in the United States. Between 2005 and 2007, this conflict of interpretation exploded into competing efforts to legislate for, or against, “Net Neutrality” in the US Congress.... | |
The consumer and the Web: a critical revision of the contributions to Web science from the marketing and the consumer behaviour discipline (Papers 3: Openness and Control) | In the effort to study the Web in an integral way, contemplating the multiple facets of this phenomenon – both in technological and in social and economic terms – it has become essential to incorporate the contributions of marketing and, in particular, consumer behaviour. Indeed, the spectacular development of the Web cannot be explained without taking into account the use made by consumers of this technology in the field of electronic commerce. This work is a critical review of the main contributions of marketing and, specifically, the discipline of consumer behaviour to Web science. It starts by positing a new framework to understand the growing body of research on marketing and consumer behaviour from the perspective of Web science. The state of the art is then reviewed for each line of investigation identified in the Web/consumer behaviour interface. This framework and review aim to facilitate scientific interchange and promote the multidisciplinary work required to understand the... | |
Can cognitive science help us make information risk more tangible online? (Papers 3: Openness and Control) | People are already using the Web for many aspects of their lives at work and at home. Individuals’ ability to assess the risks associated with operating in cyberspace will have direct costs and benefits, potentially impacting themselves, the corporate networks they have access to and broader society. Although individuals are equipped with cognitive tools that allow them to assess risk in a wide range of situations and contexts, much evidence suggests that people do not find risk in cyberspace a tangible concept. We need to provide users of the Web with a better intuition for the risks they are taking by using tools with high usability. Crucial to achieving this will be an understanding of how to make risk tangible using Web interfaces. Although existing research in cognitive psychology can undoubtedly provide important general principles for the design of effective risk communication strategies, it is not clear to what extent these principles can be applied to Web interfaces. We requir... | |
Designing effective regulation for the "Dark side" of the Web (Papers 3: Openness and Control) | Now that the initial excitement about the Internet as a space outside the reach of governmental control has evaporated and courts in several states have succeeded in applying national laws to ‘Cyberspace’, there is a consensus among scholars and activists that the Internet can very well be regulated through governmental intervention. Nevertheless, applying the ‘old’ tools of regulation, which have worked sufficiently well in the offline world, to the challenges amplified by the Internet – such as unlicensed file-sharing, defamation, or email spam – has either proven ineffective or produced severe side-effects. In this paper I will investigate a new and innovative approach to designing and implementing regulation based on non-profit and non-governmental entrepreneurship. For the remainder, please see the extended abstract attached below. | |
On Measuring Expertise in Collaborative Tagging Systems (Papers 4: Tagging and Search) | The rise of collaborative tagging and folksonomies provides users with a new means of searching for interesting or useful resources on the Web. However, the tasks of identifying resources which are of high quality - interesting, useful, or relevant - and identifying users who are knowledgeable with respect to a particular topic are not trivial. In this paper we discuss expertise in the context of collaborative tagging, and propose a graph-based algorithm, EXITS (Expertise Induced Topic Search), for ranking users in a folksonomy according to their expertise with respect to a particular topic. We carry out experiments on both simulated datasets and datasets obtained from Delicious to study the behaviour of the algorithm on different types of users. Our experiments also show that the algorithm is more resistant to spammers than other methods such as ranking users by how many times they have used a tag. | |
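The abstract does not specify how EXITS works, so the following is only an illustrative sketch of the general graph-based idea it gestures at: a HITS-style mutual reinforcement between user expertise and resource quality over the tagging graph for one topic. The function and its details are hypothetical, not the paper's algorithm.

```python
# Hedged sketch of mutual-reinforcement expertise ranking (not EXITS itself):
# a resource is good if tagged by expert users; a user is expert if they
# tag good resources. Iterate with normalisation until scores stabilise.
from collections import defaultdict

def rank_experts(taggings, topic, iterations=50):
    """taggings: iterable of (user, resource, tag) triples."""
    edges = [(u, r) for (u, r, t) in taggings if t == topic]
    expertise = {u: 1.0 for u, _ in edges}
    for _ in range(iterations):
        quality = defaultdict(float)          # resource scores from experts
        for u, r in edges:
            quality[r] += expertise[u]
        scores = defaultdict(float)           # user scores from good resources
        for u, r in edges:
            scores[u] += quality[r]
        norm = sum(scores.values()) or 1.0    # normalise to avoid blow-up
        expertise = {u: s / norm for u, s in scores.items()}
    return sorted(expertise.items(), key=lambda kv: -kv[1])
```

Note how this differs from simple tag-count ranking: a spammer who tags one obscure resource many times gains little, because the score flows through resource quality rather than raw activity.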
Evaluating implicit judgements from Image search interactions (Papers 4: Tagging and Search) | The by-product of search engine interactions - the records of what users have clicked as a result of a query - known as ``click through data'' is an increasingly popular source of implicit feedback. Recently, however, the validity of such feedback has been questioned, though the criticisms have focused almost solely on traditional web search, for which users are only presented with short abstracts to evaluate the resource before clicking - and creating the ``implicit feedback'' between the query and resource. In contrast, other search types vary significantly in what the user is presented with and hence what is used to pass judgment and create the ``implicit feedback''. Of note are web-based image search interactions, where the user is presented with a reduced thumbnail of the whole image. Image search click-through data has also been shown to have many potential uses in recent work; however, unlike traditional web search interactions, there has been no previous research that has... | |
Ephemeral emergents and anticipation in online connected creativity (Papers 4: Tagging and Search) | In this paper, we seek to address the challenge of online connected creativity by studying the emergence of novelty in ‘Sensory Threads’, a six-month project currently underway as part of the EPSRC cluster project New Research Processes and Business Models for the Creative Industries (http://www.creatorproject.org/, EP/G002088/1). The cluster brings together practitioners from the creative industries with researchers from varied traditions that span ICT, the arts and humanities, the social sciences, and business studies, through the establishment of pilot projects, such as Sensory Threads, that have a creative, technical and value-creating dimension. The paper extends previous conceptual and methodological work concerning the capture of the emergence of novelty and processes of learning and knowledge transfer in on-line settings comprising multiple actors, uncertain outcomes and diffuse data sources. | |
Federating Distributed Social Data to Build an Interlinked Online Information Society (Papers 4: Tagging and Search) | While research on relationships between the Semantic Web and Social Media has been originally motivated by the lack of semantics in mainstream Web 2.0 services, this vision can go far further, impacting the whole society regarding how information is shared, interlinked and managed on the Web. In this paper, we will show that it offers a growing field of possibilities to provide a Web where data is socially created and maintained through end-user interactions, but also machine-understandable and so re-usable for advanced querying purposes. We will especially emphasise the impacts of such Social Semantic Information Spaces to build an Interlinked Online Information Society, where any social data is the component of a worldwide collective intelligence ecosystem. | |
Kernel Models for Complex Networks (Papers 5: Social Networking) | We advocate the study of complex network models based on kernel functions. These are hybrids between macroscopic random graph models for networks with heavy tailed statistics (or other desired structural characteristics), and microscopic models that incorporate network semantics. In particular, kernel-based models assign explicit semantics to network elements (nodes and links), while they maintain the conceptual and implementational simplicity of random graph models. | |
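The kernel-based construction described in this abstract can be sketched in a few lines. This is a hedged toy illustration, not the authors' model: each node carries an attribute standing in for its "semantics", and a kernel function on attribute pairs gives the probability of a link between two nodes. The kernel used in the test is arbitrary.

```python
# Hedged sketch of a kernel-based random graph: link probabilities come
# from an explicit kernel over node attributes, so network elements keep
# semantics while generation stays as simple as a random graph model.
import random

def kernel_graph(attributes, kernel, seed=0):
    """attributes: list of per-node values; kernel(a, b) -> link probability."""
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(attributes)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < kernel(attributes[i], attributes[j]):
                edges.append((i, j))
    return edges
```

Choosing the kernel shapes the macroscopic statistics: a kernel that favours pairs with a high-attribute endpoint, for instance, concentrates links on a few hub nodes.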
Six Degrees of Separation in Online Society (Papers 5: Social Networking) | Six degrees of separation is the well-known idea that any two people on this planet can be connected via an average of six steps. Having been borne out in the real world, the theory even directly or indirectly motivated the invention of online societies. However, little effort has been devoted to checking whether the theory really holds for online societies, whose connection patterns may not be identical to the real world's. This paper tries to answer the question through both mathematical modeling and online measurements. The mathematical approach formulates the problem as a Minimum Diameter Problem in graph theory and evaluates the maximum and average numbers of connections between any two randomly selected community members. Measurements are conducted in three different kinds of online societies, namely ArnetMiner for academic researchers, Facebook for students, and Tencent QQ for teenagers in China. Analysis of these measurements verifies our theoretical findings. | |
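The measurement side of this study can be sketched with a breadth-first search over a social graph: the average distance over all reachable pairs approximates the "degrees of separation", and the maximum is the diameter. The toy adjacency-dict representation below is an assumption for illustration, not the paper's data format.

```python
# Hedged sketch: average and maximum separation via breadth-first search.
from collections import deque

def separations(graph, source):
    """BFS distances from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def degrees_of_separation(graph):
    """Return (average distance, diameter) over all reachable ordered pairs."""
    pair_distances = []
    for source in graph:
        for target, d in separations(graph, source).items():
            if target != source:
                pair_distances.append(d)
    return sum(pair_distances) / len(pair_distances), max(pair_distances)
```

On real networks of millions of members one would sample sources rather than run BFS from every node, but the quantity being estimated is the same.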
BFFE (Be Friends Forever): the way in which young adolescents are using social networking sites to maintain friendships and explore identity. (Papers 5: Social Networking) | Children have embraced social networking sites, and the age at which they join is falling (Ofcom, 2008; Livingstone, 2004). Friendship is especially important to early adolescents, as they turn from their family to the outside world (Erikson, 1968; Hartup, 2000; Dunn, 2004). An ethnographic approach was used to carry out research with children in early adolescence to find out the nature of their communication through social networking sites. Findings are considered from a psychosocial perspective, and indicate that friendships may be maintained beyond their natural course, but that ‘keeping in touch’ with old friends helps to establish identity, one of the major tasks of becoming an adolescent. The use of social networking sites appears to be an important source of support and comfort to the young adolescent who is experiencing transition cognitively, physically, and through change of school. Early adolescents are more likely to spend time talking to friends than on any other single activity, and c... | |
Class Association Structure Derived From Linked Objects (Papers 6: Web of Data) | The Web is being extended with more and more RDF data sources online as well as links between objects, even across data sources. To observe the macrostructure of these linked objects at a high level, we derive associations between classes from links between typed objects, which form a class association graph (CAG). In this paper, we report our experimental results on analyzing the complex network structure of CAG. We also present a big picture of the largest connected component of CAG. Therefore, a landscape of class associations brought by linked objects on the real semantic Web can be observed. | |
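The derivation step this abstract describes can be illustrated compactly: links between typed objects are collapsed into weighted edges between their classes, yielding the class association graph (CAG). The data shapes and names below are hypothetical, chosen only to show the collapse.

```python
# Hedged sketch of deriving a class association graph (CAG): each link
# between two typed objects contributes one unit of weight to the edge
# between their respective classes.
from collections import Counter

def class_association_graph(types, links):
    """types: object -> class name; links: (subject, object) pairs.
    Returns a Counter mapping (subject_class, object_class) to edge weight."""
    cag = Counter()
    for s, o in links:
        if s in types and o in types:  # skip untyped objects
            cag[(types[s], types[o])] += 1
    return cag
```

The resulting weighted directed graph is what one would then analyse for macrostructure, for example by extracting its largest connected component.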
Social Meaning on the Web: From Wittgenstein To Search Engines (Papers 6: Web of Data) | One could hypothesize that the essential bet of the Web is that in a decentralized information space multiple agents can share the meaning of a URI. On the Semantic Web, does a URI get its meaning from its owner, or from the formal interpretation of statements that use it? We consider the positions of Berners-Lee and Hayes, comparing them to the descriptivist and causal theories of names, and outlining the problems common to both. We reconcile these viewpoints by explicating the public language theory of meaning, where names are fundamentally given their meaning by social and linguistic agreement. This position was first articulated by Wittgenstein in a repudiation of his earlier strongly logicist viewpoint. This view is ultimately compatible with Frege, wherein the meaning of any expression, including URIs, is grounded not just in its formal truth value, but in its sense. The notion of sense can be reconstructed in terms of the socially-grounded norms t... | |
Interactive Information Access on the Web of Data (Papers 6: Web of Data) | Position paper for the first Web Science conference. | |
Introducing new features to Wikipedia - Case studies for Web science (Papers 6: Web of Data) | Wikipedia is a free web-based encyclopedia. It is written in collaboration by hundreds of thousands of contributors [1]. It runs on the MediaWiki wiki engine. Introducing new features to Wikipedia is never just a technical question, but a complex socio-technical process. Previous introductions of new features (the category system in 2004, parser functions in 2006, and flagged revisions in 2008) are described and analyzed. Based on these experiences, the design of a new feature (creating semantic annotations) is discussed. The interaction between the technical features and the community is shown to be an instantiation of the Web Science research cycle, thus testing the cycle as a methodological tool for web science. | |
Is web-based interaction reshaping the organizational dynamics of public administration?: A comparative empirical study on eGovernment. (Papers 7: Government, Citizens and Law on the Web) | EGovernment is often broadly defined as encompassing all uses of ICT within public administrations and government agencies and units. Some of these uses have more recently been considered triggers of important transformations in the way governments carry out their activities: particularly those uses involving the Internet and the so-called web 2.0 resources. These can be seen as providing new windows of interaction that foster communication and exchange of data with other social agents (citizens, firms, other institutions, etc.) and, within a single administration, among its different units, departments, or agencies. Since eGovernment has also been linked, at least in the prospective literature, to important changes in the inner workings and organisation of governments - in fact since the middle of the 90s it has been repeatedly seen as an ideal vehicle to overcome some of the long-standing traditional problems of public bureaucracies – we want to explore the relationship between these ... | |
Online Dispute Resolution: Designing new Legal Processes for Cyberspace (Papers 7: Government, Citizens and Law on the Web) | There has been considerable debate, during the last ten to fifteen years, about the impact of the Internet on law. The focus of this debate in the legal community has been on legal rules and doctrines, on the development of new rules and the revision of old ones. In both highly publicized areas, such as intellectual property and privacy, and less publicized areas, such as the formation of contracts and the protection of consumers, there have also been court cases that have tried, generally unsuccessfully, to enforce rules, establish standards and change behaviors. The purpose of the proposed paper is to examine how the Web changes not simply legal doctrines but legal processes. Indeed, it is new processes as much as or more than new doctrines that will shape the direction and evolution of a legal order for cyberspace. More specifically, those interested in the impact of information technologies on law need to rethink not only legal rules but those processes that are aimed at resolvi... | |
Experiments for Web Science: Examining the Effect of the Internet on Collective Action (Papers 7: Government, Citizens and Law on the Web) | The shift of much of political life onto the Internet and Web has implications for ‘what we know’ about political behaviour, requiring a re-evaluation of some of the micro-foundations of political science. Web-based experiments are an under-explored methodology to identify and investigate these Internet effects. This paper reports on one such experiment, which was used to explore how one particular characteristic of the Internet – the ability to feed real-time information about the preferences and behaviour of others back to an individual user – can affect people’s incentives to act collectively and to organise around public goods. Collective action has been a key puzzle of political science since the 1960s. In The Logic of Collective Action, Mancur Olson (1965) put forward a thesis of when individuals can be incentivized to act collectively. He argued that, when organising around collective goods, ‘small groups are more efficient and viable than large ones’ and that if they are ... | |
LifeGuide: A platform for performing web-based behavioural interventions (Papers 8: Life On-line) | Interventions designed to influence people's behaviour ('behavioural interventions') are a fundamental part of daily life, whether in the form of personal advice, support and skills training from professionals (e.g. educators, doctors) or general information disseminated through the media. However, personal advice and support are very costly, and it is impossible to provide everyone with 24-hour access to personal guidance on managing all their problems. General information provided through the media may not be seen as relevant to the particular problems of individuals, and provides no support to help people make desired changes to their behaviour. For the first time, the World Wide Web provides a cost-effective opportunity to provide open 24-hour access to extensive information and advice on any problem. Interactive technology means that the advice can now be specifically 'tailored' to address the particular situation, concerns, beliefs and preferences of each individual, and intensiv... | |
Securing Cyberspace: Realigning Economic Incentives in the ICT Value Net (Papers 8: Life On-line) | Malicious software (“malware”) has become a serious security threat for users of the Internet, whether they are large or small organizations or home users. Viruses, worms and the many other variants of malware have developed from a nuisance to sophisticated tools for criminals. Computers all across the world, by some estimates as many as one in five to one in ten, are infected with malware, typically without the knowledge of the owner of the machine. Many of these infected machines are connected through botnets: flexible remote-controlled networks of computers that operate collectively to provide a platform for criminal and fraudulent purposes. These activities include, but are not limited to, the distribution of spam (the bulk of spam now originates from botnets), various forms of socially engineered fraud such as phishing and whaling, attacks on websites and entire nations, as well as many other forms of abuse such as “click fraud” and “malvertising”. The analytical perspectives of schola... | |
Cognition, Cognitive Technology and the Web (Papers 8: Life On-line) | The aim of cognitive science is to explain how the human mind works, and passing the Turing Test (TT) is the explanatory goal of cognitive science: Once we have successfully designed a system that is capable of doing what any person can do, indistinguishably from any person, to any person, then we have a candidate explanation of how the human brain does it. In the Web era, increasingly powerful cognitive technology is available for people to use to do what they formerly had to do in their heads, as well as to enhance their performance capacity. The Web and Semantic Web form a comprehensive store of human knowledge in documentary and symbolic form while search services such as Google augment human recall. One could even say that it is becoming possible to offload more and more cognitive processing onto cognitive technology, liberating as well as augmenting the performance power of the human brain. What effect – if any – does this Web capability have on the search for the underlying me... |