Web Science 2009 List of Posters

How has Web 2.0 reshaped the presidential campaign in the United States?
Former President Bill Clinton's 1992 campaign for the White House, largely based on telephone and small computer networks, was high-tech for its time. Since then, there has been a slow but major shift from Web 1.0 to today's Web 2.0. Not only have these enhancements changed how candidates effectively communicate with a mass population, but they have also changed how they raise campaign funds. This is critical in today's presidential races, since candidates now need several hundred million dollars to compete successfully. This paper will address which technologies were used in the presidential campaigns of 1992, 2000, and 2008, and how. It will also examine how Web 2.0 has changed today's campaigns, indicating how it will shape US election campaigns of the future. In the 19th century, presidential campaigns were mostly run by traveling from city to city (mostly by train) and through the media coverage of the time (newspapers and telegraphs). That was changed by...
Capturing the structure of Internet auctions: the ratio of winning bids to the total number of bids
The structure of Internet auctions has not been sufficiently captured, even though Internet auctions have become popular and many people now use them. This paper considers auctions for an identical good over a certain period and introduces the ratio of winning bids to the total number of bids (RWT). A characteristic of RWT is that it pays attention to losing bids as well as winning bids. We consider that each bid in auction structure analysis corresponds to an incoming link in link structure analysis. We show that RWT captures the structure of auctions well. By observing RWT we can learn to what extent sniping bids are effective and predict to what extent providing quality assurance increases sellers' revenue.
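As a concrete illustration, the RWT statistic described above can be computed from bid data as follows; the data layout (each auction for the identical good given as the list of bids it received, with exactly one winning bid per auction) is an assumption for illustration, not the authors' actual format.

```python
def rwt(auctions):
    """Ratio of winning bids to the total number of bids (RWT).

    `auctions` is a list of auctions for the identical good over a
    period; each auction is the list of bids it received.  Every
    auction contributes exactly one winning bid, so RWT accounts for
    losing bids as well as winning ones.
    """
    total_bids = sum(len(bids) for bids in auctions)
    winning_bids = len(auctions)          # one winner per auction
    return winning_bids / total_bids if total_bids else 0.0

# Three auctions for the same good receiving 5, 3 and 2 bids each:
print(rwt([[10, 12, 15, 16, 20], [8, 9, 11], [7, 13]]))  # → 0.3
```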
WWW: The Darwinian Imperative
Why do we blog, or post videos, photos or maps? Why do we cooperate with and trust total strangers online? More generally, what shapes our behavior online? This paper argues that much of human behavior online can be grounded in modern evolutionary theory, particularly evolutionary psychology and sociobiology. Our goal is to develop a scientifically grounded understanding of the traits and motives behind human behavior online. We believe that the outcome of this research will eventually lead to the creation of online environments that are better suited for and compatible with our social human nature. In this abstract I give one example of this understanding and its implications.
Behavioral research on the WWW
Research on human behavior has behind it around 125 years of methods development in psychology, medicine, biology, political science, economics, sociology and related fields. The objective has been to establish experimental and observational scientific approaches that can produce valid and repeatable results, both to provide a foundation for decisions on practical issues (engineering solutions or societal decision making) and to test hypotheses and theories in the empirical sciences. All relevant scientific fields have developed their body of methods and principles (often against scientific positions that questioned the need for a rigorous methodology), but it still often requires hard effort to convince all actors of the benefit that a rigorously applied logic and methodology for empirical research provides. The question is whether we can transfer the traditional and proven empirical methodology from laboratory and field studies to the WWW, and even exploit new possibilities. This ...
Tools for Collective Simulation
Social networks can leverage collective intelligence toward understanding complex problems such as environmental sustainability, health and policy. This collective simulation relies on trustworthy systems that empower distributed and open investigation. We present a case study of the development of an application for the collective simulation of global product supply chains (Sourcemap.org). Experiential evidence from the development of this application suggests the importance of establishing trust through linked data and the need for a learning component as part of the user experience. Complex simulations can be intuitively understood by a wide audience through the design of multiple experiences based on the expertise and motivation of individual users. Pilot studies with product designers, restaurateurs and regional development agencies suggest that the collective simulation process can effect change in the behaviors of participants at the same time as contributing to a larger social ...
Dynamic characterization of a large Web graph
The Web is characterized by an extremely dynamic nature, as proved by the rapid and significant growth it has experienced in the last decade and by its continuous evolution through the creation or deletion of pages and hyperlinks. Consequently, analyzing the temporal evolution of the Web has become a crucial task that can provide search engines with valuable information for refining crawling policies, improving ranking models or detecting spam. Understanding how the Web evolves over time is a delicate challenge that requires integrating theoretical modeling efforts with empirical results. Obtaining such findings is very expensive in terms of bandwidth, computation time and human intervention. Robust software is required to gather the data and provide easy access to the collected information. Apart from commercial engines, there have been only a few attempts to perform such a task and to make the data available. Several previous works (e.g.,~\cite{ntoulas2004what,toyoda2006wha...
Internet Use by Transnational Advocacy Networks: a Case Study of the “No Software Patents” Campaign
This paper proposes to examine Internet use by transnational advocacy networks, also referred to as global activism (Bennett, 2003), by studying the case of the “No Software Patents” campaign of 2002-2005, which relied on conventional and non-conventional lobbying techniques in order to influence European Union policy-making. Transnational advocacy networks can be defined as being composed of “relevant actors working internationally on an issue, who are bound together by shared values, a common discourse, and dense exchanges of information and services” (Keck & Sikkink, 1998: 2). Examples of such advocacy groups include the alter-globalisation, human rights and environmental movements, yet precursors, such as the anti-slavery movement, existed already in the early nineteenth century. Following Chadwick, “campaigns that transcend the boundaries of a single nation-state existed long before the rise of the Internet. However, it is undeniable that during the last ten years tran...
Mapping the Australian Political Blogosphere
Tracing change over time. Most existing blog network analyses use generic network crawlers to provide long-term pictures of interconnections in the blogosphere. While interesting in their own right, these offer no information on how individual clusters and regions of the blogosphere may respond to specific topics of the day, and how the centre of such activity shifts between different regions on the map as topics change. Long-term analyses provide only a generic picture of which blogs may act as opinion leaders for the wider network; by breaking this down to take shorter-term snapshots of activity it becomes possible to identify a range of opinion leaders on specific issues. Our work addresses these shortcomings in a number of ways. First, we track blogging activity as it occurs, by scraping the content of new blog posts when they are announced through RSS feeds, rather than by crawling existing content in the blogosphere after the fact. Second, we utilise custom-made tools that disti...
Application of Common Sense Computing to enable the development of next-generation Semantic Web Applications
Unlike the early Web, today's Internet is a dynamic entity in which the core is no longer the information but the user. The first simple websites have evolved to become more and more interactive: from static to dynamically generated, from handcrafted to CMS-driven, from pure media to increasingly transactive. The aim of this project, born from the collaboration between Sitekit Solutions Ltd. (Scotland), the University of Stirling (Scotland) and the MIT Media Lab (USA), is to further develop and apply software-agent and natural-language-processing based technologies in order to blend a so-called OpenMind database with any given ontology, and hence build a novel intelligent software engine that can auto-categorise or auto-tag documents. The developed software engine will enable the development of future Semantic Web applications whose design and content can dynamically adapt to the user.
Discovering Social Relationships and Intentions in Web Forums Using Natural Language Processing Techniques
Natural Language Processing (NLP) is a complex task for computers. Natural language (NL) includes a series of aspects oriented to coupling messages to inner ideas. It is not necessary to express those ideas using strict constructions in NL; human brains are able to detect concepts and intentions in messages even in abstract or complex constructions. The multiple ways of expressing and interpreting thoughts are profoundly related to the complexity of the human brain, which allows flexibility in message passing. Nevertheless, this flexibility implies ambiguity, which causes frequent misunderstandings among humans, as well as complexity for computers. Besides, there are different cases of languages: different social groups speak different languages. Although the inner machinery of the brain is practically the same for every human, language is strongly related to the context in which people live. Our research focuses on the analysis of collaborative...
Historically, education has been able to adapt to and use the institutional, legal and social changes arising from the latest communication technologies of each era. For example, the invention of writing, the printing press or, more recently, radio, television, computers and the internet have led to important changes in teaching and learning processes. The latest technological infrastructure to which education has to adapt is the Web. Over recent years, we have seen how education in developed countries, whether distance or brick-and-mortar, is increasingly based on the Web. Given the dynamic nature of the Web, which has evolved and continues to evolve technically, socially and organisationally, education faces ever more challenges in making the most of its potential (Minguillón, 2008). Innovations are always one step ahead of their social and institutional adoption. Despite the fact that there are already studies on the usefulness of or need for Web 3.0, or the Semantic Web, in educat...
'Designing for Trust' for the Future Web
Trust is considered one of the most important social components, assets and constructs. Currently we are experiencing well-documented first signs of a crisis of trust on the Web. We attribute this to our inability to successfully apply our face-to-face experience to relationships that are web-based. We propose that the remedy lies in developing technical means that will enable the creation of remote justified trust. Such a concept of technology being 'designed for trust' requires a shared understanding of what trust is and how it can be communicated. We propose an approach (elaborated in the paper) that is close to both social and technical understandings of trust, demonstrating how such an approach can lead to design that supports trust.
Ensuring Consent and Revocation: Towards a Taxonomy of Consent
This poster presents the aims and objectives of EnCoRe, a large-scale multidisciplinary research project in e-security which is concerned with privacy controls for personal information. As the Internet continues to grow as the principal means of communication for many individuals, businesses, government bodies and institutions, there is an increasing need for secure means of sharing personal data. At present there is no uniform mechanism enabling a user to control the way in which such data is shared, stored and distributed. In particular, there is no commonly accepted framework or standard defining a means of granting consent to share data while guaranteeing that this consent can be revoked fully or partially at any time. EnCoRe is concerned with just this issue, and we consider the relevant motivation, open problems, and wider implications for society. We will discuss the involvement of the e-Security Group at the International Digital Laboratory (University of Warwick) in this progr...
Dark Web Patterns
The Internet now provides a myriad of new ways to interact that eliminate physical distance, time, and even culture and language as factors affecting the development, maintenance, evolution, and even power of social networks. In these new social spaces, concepts such as privacy, property rights, and social mores that have developed over thousands of years in physical societies are being vetted and charily adopted. We propose to develop a network monitoring system that can detect nefarious content, capture identifying information, build a stochastic network model of social relationships, and quickly determine whether the social network demonstrates dark-network properties. Using the classification of web spam as a standard, we seek to pinpoint, through clustering techniques, the properties that uniquely characterize malicious content and data sources; this serves as an initial approach to circumventing the proliferation of online dark networks.
Towards a Semantic Web Testbed for Collaborative Policy Development
The Transparent Accountable Datamining Initiative (TAMI) [1] was conceived to address the problem of accountability of data usage. One research theme focuses on an infrastructure for logging Web-based data flow among distributed data custodians. The infrastructure is designed to support compliance checks against data usage rules. Two results include: (i) TAMI scenarios that embody and identify many frequent real-world privacy protection requirements; and (ii) the AIR policy language [2] for modeling privacy protection laws and policies as well as enabling user-friendly explanations. This paper introduces our work on a testbed aimed at supporting physically distributed collaborative development and evolution of complex TAMI scenarios, the AIR policy language, and the transaction log ontology. Users of the testbed can collaboratively edit, publish and navigate TAMI scenario content through an interface that supports both human and Semantic Web capable services. We needed to allow us...
A Model of World Wide Web Evolution
World Wide Web evolution is a fact, but the institution that drives Web evolution remains unknown. Based on the Theory of Dialectics, this work describes a model of Web evolution consisting of two postulates and seven corollaries. We have successfully applied the model to analyze real-world Web-based businesses and solve their problems. The model is the first theoretical study of the objective laws behind Web evolution.
The Snowflake Number
This is a paper about mass hyper-personalization; more specifically, about how to measure personalization in web-based systems and beyond, using a series of metrics that we call 'snowflake numbers'.
Semantic Social Network Analysis
Social Network Analysis (SNA) has been widely studied since the middle of the 20th century. Research in this field tries to understand and exploit the key features of social networks in order to manage their life cycle and predict their evolution. Detecting strategic positions, roles and communities are among its main concerns. The increasingly popular Web 2.0 sites form the largest social network. Some researchers apply classical SNA methods to such online networks; others provide models to leverage the semantics of their representation. In this paper, we propose to leverage Semantic Web technologies to merge and exploit the best features of each of these approaches. Furthermore, we show how to facilitate and enhance the analysis of online social networks, exploiting the power of semantic social network analysis.
Linking Ireland's Past and the Digital Future
The Digital Humanities Observatory (DHO) was created to guide the development of digital humanities in Ireland and promote standards and best practice among those working in the digital humanities domain. This poster lays out some of the issues faced by digital humanities projects in Ireland and how the DHO is working with those researchers to overcome these problems. In it we highlight the importance of interoperability between the various repositories being set up across Ireland for the creation of a network of humanities resources, and how methods developed for Web 2.0 could be applied to the concept of authority lists.
Information and communication technologies (ICTs) enable the diffusion and appreciation of local culture in the information society. With the emergence of ICTs, popular culture and community identities (local traditions, music, food and life style) may become digital products available in a global market, through web portals and electronic commerce systems created as communication and place marketing strategies. In this way, ICTs may become tools that facilitate communication and reinforce the cultural identity of peripheral and marginal areas. The theoretical and conceptual foundations that enable the establishment of relationships between ICTs and the shaping of cultural economies on a local scale are introduced in this paper. A pilot survey conducted in a rural Galician municipality, in North West Spain, is also presented.
Theory of K-representations as a Source of an Advanced Language Platform for Semantic Web of a New Generation
Fomichov, V.A.: A comprehensive mathematical framework for bridging a gap between two approaches to creating a meaning-understanding Web. International Journal of Intelligent Computing and Cybernetics (Emerald Group Publishing Limited, UK). 2008, Vol. 1, No. 1. P. 143-163.
The Development of Trust within close relationships formed within Social Network sites
Social network sites have become a popular medium to develop and maintain relationships (Boyd & Donath, 2004; Donath, 2007). Through the ease with which people can communicate with offline friends and make new friends, millions of people are using social network sites as a way to socialize (Lampe et al., 2006) and form new relationships (Fono & Raynes-Goldie, 2006). However, social network site users who use such sites to form new relationships are potentially putting themselves at risk of developing relationships with people that they know little about and have not met face to face. Therefore, developing trusted relationships within social network sites appears much harder to achieve than offline, due to the lack of face-to-face contact and the ease with which deceptive information is passed off as being reliable (Barnes, 2007; Joinson & Dietz, 2002). In this research we focus on the importance of trust within close relationships formed within social network sites and explain how trus...
We analyzed human browsing behavior on a large-scale web-based system. Human-web interaction sequences were temporally segmented into blocks encompassing elemental and compound browsing tasks. The network representation of this browsing behavior resembles a complex network. The traversal topology has a small number of hubs, which contract and disperse navigational pathways. The underlying long-tail attributes of complex networks coincide with a broad user population and diminish in behaviorally focused user groups.
Information Dissemination in Unstructured Networks by Self-repulsive Random Walks
Distributed consensus algorithms play a key role in several self-organized systems and are used, for instance, in ad-hoc and sensor networks. In order to behave correctly they need to rely upon an efficient information dissemination mechanism, so as to make available, to each node in the network, a representative sample of the states of the other nodes. This is a non-trivial issue when agents communicate over an unstructured network, where each node has to rely only on local information to perform routing decisions. Often this issue is approached by disseminating the information by gossiping, i.e. by means of some kind of random walk. Recently we have proposed a self-repulsive random-walk policy which increases the speed of information propagation with respect to traditional policies by avoiding visits to those neighbors of the current node which are also neighbors of the most recently visited node (neighbor-avoiding random walks). This sort of self-repulsion of the path resu...
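A minimal sketch of the neighbor-avoiding policy described above, assuming a simple adjacency-set representation of the network; the function name and interface are illustrative, not the authors' actual implementation.

```python
import random

def neighbor_avoiding_walk(adj, start, steps, seed=None):
    """One realisation of a neighbor-avoiding random walk: from the
    current node, prefer moves to neighbors that are NOT also
    neighbors of the most recently visited node (nor that node
    itself).  `adj` maps each node to its set of neighbors."""
    rng = random.Random(seed)
    path = [start]
    prev = None
    for _ in range(steps):
        cur = path[-1]
        candidates = list(adj[cur])
        if prev is not None:
            # self-repulsion: exclude the previous node and its
            # neighborhood, unless that would leave no candidate
            allowed = [n for n in candidates
                       if n != prev and n not in adj[prev]]
            if allowed:
                candidates = allowed
        prev = cur
        path.append(rng.choice(candidates))
    return path

# On a 6-node ring the policy never backtracks: after the first step
# the walk keeps moving around the cycle in one direction.
ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(neighbor_avoiding_walk(ring, 0, 8, seed=1))
```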
An infrastructure for educational virtual laboratories based on Semantic Web and Grid Computing
Herold, M., WSMX Documentation, D.E.R.I. Galway, Editor. 2008.
Towards a reference architecture for Semantic Web applications
The Semantic Web currently has two complementary architectural approaches. "Bottom-up" emergent best practices and "top-down" prescriptive standards leave a gap regarding the concrete implementation of Semantic Web technologies. Based on the Web Science approach of combining empirical analysis with engineering, we propose seven reusable and domain-independent component patterns for implementing Semantic Web standards. The patterns are based on a survey of 50 Semantic Web applications and can be used as a starting point towards a reference architecture for Semantic Web applications. Together, the patterns and the reference architecture provide a common terminology for communicating concepts related to the implementation of Semantic Web technologies.
The Life and Mating Habits of the Brown Bordered DIV: Emergent Semantic Elements from Genetic Design on the Web
Genetic design techniques for the web apply genetic algorithms and evolutionary approaches to the problem of design and development on the web. We establish a scaffolding to model a number of genetic-algorithm adjustments, simulate the results, and draw implications about the influence of linked data and semantic clustering in distributed systems. The goal of this paper is to present genetic design as a tool uniquely suited to the design and evaluation of web systems, and to establish an understanding of the possibilities and problems of employing these techniques by studying the possible and probable outcomes associated with their use. Genetic design employs the principles of genetic algorithms (the computing technique that incorporates the process of evolutionary growth and selection in order to approach approximate answers to search and operation problems) as a solution to the problem of design: approximating the answer of what is the most ideal or usabl...
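The genetic-algorithm machinery this abstract builds on can be sketched as a minimal selection/crossover/mutation loop; the bit-string encoding of a "design" and the toy one-max fitness function are illustrative assumptions, not the authors' actual method.

```python
import random

def evolve(fitness, genome_len, pop_size=20, generations=50, seed=0):
    """Minimal genetic-algorithm loop: truncation selection, one-point
    crossover and a single point mutation per child, over bit-string
    'designs' (an illustrative encoding)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy "one-max" fitness: a design is better the more 1-bits it has.
best = evolve(sum, genome_len=16)
print(sum(best))
```

Because the fitter half survives each generation unchanged, the best fitness never decreases; the selection scheme is the simplest choice, not a claim about what the authors used.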
Towards a methodology for research on trust
It is clear that many human concepts (such as love, justice, equity, status, power, etc.) are not amenable to a universal objective computational definition that can be used as a yardstick for measurement. For well over one hundred years, methodological debate has persisted over the best way to study the social world. The two main positions are a quantitative approach (positivism), and a qualitative approach (constructivism and interpretivism) rooted in an idealist, hermeneutic and essentially relativist and postmodern view of the world. We propose a methodology applicable to trust that combines both approaches, and elaborate on such a methodology with reference to a trust application area. A synthesis of positivist and interpretivist methodology is outlined that shows how conflicting responses to factual questions and data can reveal previously hidden value systems and elucidate notions of trust.
CitationBase: A social tagging management for references
Social tagging is one of the major phenomena brought about by social media and technologies in Web 2.0. It allows users to organize and share their information and online resources on the Web (Li et al., 2008; Lin et al., 2008). It enables content management for communities as well as for individual users. For managing references, there are already some social networking websites available, such as CiteULike and Bibsonomy. In order to explore the possibilities of applying ontology and Semantic Web data-integration approaches to this area, we have implemented a system called CitationBase which aims to provide reference management for communities and individual users.
KuiPOLL: a Social Relationship Tool
This research proposes a knowledge user interface for an online collaborative tool used to create knowledge bases. Theoretically, KuiPOLL is derived from KUI (Knowledge Unifying Initiator) proposed by Sornlertlamvanich et al. [1, 2, 3, 4]. The development of KuiPOLL aims to manage and handle online opinion polls. It includes a feature for finding out what people think about interesting topics.
Service-Oriented Collective Intelligence for Multicultural Society
The Language Grid is a service-oriented collective intelligence platform. Its software has been continuously developed since April 2006, and Kyoto University started its non-profit operation in December 2007. Seventy groups worldwide have signed onto the agreement to participate in this initiative. Resources registered by participants to the Language Grid include machine translators that cover Chinese, English, French, German, Italian, Japanese, Korean, Spanish, and Portuguese. Morphological analyzers, dependency parsers, concept dictionaries, specialized dictionaries in disaster management, tourism, life sciences etc. have been already registered. Various types of application activities are ongoing in the Language Grid Association, a user group of the Language Grid: NPOs and universities have started supporting intercultural collaboration in hospitals, schools, and so on.
Homo iunctus: Modeling the web user
One of the main interests of the field of Human-Computer Interaction is user modeling. This effort has been extrapolated to the web realm. Here the process is more complex than in desktop applications, since the study subject is, most of the time, unreachable (the study subject is only a set of datagrams). It is the same case as in astronomy, which requires the use of indirect techniques for studying sky phenomena. The web user differs from the typical computer user in many respects. Here we propose a model for describing the web user and the use of this identity.
Methods for Re-imagining Social Tools in New Contexts
Digital exclusion refers to a lack of access to technological facilities, including the blossoming arena of social interaction. People without mobile phones or PCs cannot access email, SMS or social networking websites; this includes many groups, such as the elderly, who can become vulnerable without good social contact. These people could partake in such interactions if we could enable multimodal access to social networks through a wider variety of communication channels (for example, television and telephone). This poster describes how we have used methods from HCI and Social Theory to better understand social technology.
Interacting With Linked Data About Music
In this work we develop a novel method for visualizing and interacting with large amounts of structured data. By applying concepts from complex networks research to semantic graphs we create meaningful visualizations of the relationships between hundreds of thousands of individuals. These visualizations are applied to music collections and used to create a collection navigation tool. The visualizations are also applied to data collected from the Semantic Web and used as a music navigation and music discovery interface. Although we present use-cases related to music, the tools described could be used in many other domains.
Methodology for Identifying Relationships within a Corpus of Documents
One of the pressing issues with information management today is actually finding relevant information among the vast amounts of data available. This paper discusses a methodology which extrapolates meaning from the contents of the documents themselves. This methodology is well suited for use in a search engine, as it focuses on the concepts contained within the documents rather than on keywords found within them. The driving principle of this methodology is the simple fact that every language consists of 'words', and that in any language specific sets of words, in context (either derived or explicit) with each other, define concepts. By utilizing currently known statistical methods to determine which concepts a document potentially intersects, a statistical representation of the document, and of all potential concept intersections, can be created. If these statistical representations are mapped into a theoretical n-dimensional space such as a Banach space, where the axes are repre...
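The mapping of documents onto concept axes described above can be sketched as follows, under the simplifying assumptions that each concept is given as a set of words and each coordinate is a raw count of matching words; both the concept definitions and the counting rule are illustrative, not the paper's actual statistics.

```python
def concept_vector(document_words, concepts):
    """Map a document into a concept space: each axis is a named
    concept defined (for illustration) as a set of words, and the
    coordinate is the number of document words belonging to that
    set."""
    axes = sorted(concepts)
    return [sum(1 for w in document_words if w in concepts[c])
            for c in axes]

# Hypothetical concept definitions for illustration:
concepts = {"auction": {"bid", "seller", "price"},
            "trust": {"trust", "reputation", "fraud"}}
doc = ("the seller set a price and buyers bid "
       "while reputation signals trust").split()
print(concept_vector(doc, concepts))  # → [3, 2]
```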
The interplay of theory and observation: a proposition for structured research on human behavior on the web
The attempt of Web Science to develop a deeper understanding of human behavior on and with the web, as practiced today, struggles to transcend the stage of isolated case studies of individual phenomena with little or no connection to the nature of human behavior as a whole. The authors believe this state can be remedied by a more conscious combination of theoretical concepts of human behavior and empirical work. To this end this paper identifies four key challenges in sound Web Science and proposes a blueprint for research practices which is based on the school of critical rationalism.
Identifying Communities on the Web: A Proposal for the Analysis of Online Discussion Networks
In this paper we propose a methodology for the analysis of online discussion networks. Using data collected from the Slashdot discussion forum, comprising all the posts and attached comments published during one year, this paper reconstructs the discussion threads as hierarchical networks and proposes a model for their comparison and classification. In addition to the substantive topic of discussion, which corresponds to the different sections of the forum (such as Developers, Games, or Politics), we classify the threads according to structural features such as the maximum number of comments at any level of the network (i.e. the width) and the number of nested layers in the network (i.e. the depth). We find that some discussion networks display a tendency to cluster around the area that corresponds to wider and deeper structures, showing a significant departure from the structure exhibited by other types of discussions. We propose using this model to create a framework that allows...
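The width and depth features used for classification above can be sketched as follows; the representation of a thread as a map from comment id to parent id (with None for top-level replies to the post) is an assumption for illustration.

```python
def thread_features(comments):
    """Width and depth of a discussion thread: width is the maximum
    number of comments at any nesting level, depth is the number of
    nested layers.  `comments` maps each comment id to its parent
    comment id, or None for direct replies to the post."""
    level = {}
    def depth_of(c):
        if c not in level:
            p = comments[c]
            level[c] = 1 if p is None else depth_of(p) + 1
        return level[c]
    counts = {}
    for c in comments:
        d = depth_of(c)
        counts[d] = counts.get(d, 0) + 1
    width = max(counts.values(), default=0)
    depth = max(counts, default=0)
    return width, depth

# 'a' and 'b' reply to the post, 'c' replies to 'a', 'd' replies to 'c':
print(thread_features({'a': None, 'b': None, 'c': 'a', 'd': 'c'}))  # → (2, 3)
```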
A Web Browser Extension for Improving Security in Internet Auctions
Internet auctions are used every day by millions. However, despite frequent criticism (B. Gavish and C. Tucci, Reducing Internet Auction Fraud, ACM Communications, May 2008, vol. 51, no. 5, pp. 89-97), only the simplest reputation systems are used by the most popular Internet auctions today. While it is true that the human mind is the best possible method of evaluating this information, the task is time-consuming and error-prone. Our goal has been the improvement of trust management for Internet auction users. The TM system should increase the safety and comfort of the user by providing additional information not available on auction sites today. We have designed an extension for a popular Web browser (Firefox) that gives users access to our algorithms. (The algorithms themselves are part of a library of trust management tools developed in the uTrust project.) The extension obtains its information by automatically performing the task that is performed by an auction user: by crawling p...
Bridging the Scientific Divide: Building a common language between computer scientists and social scientists to understand the New Technologies and Digital Divide
The term Digital Divide refers to inequality of access to new technologies. Thus, the Digital Divide describes the social inequalities created between those who have access to new technologies and those who do not. Equal access to new technologies and the Internet indicates equal access to the information society, e-inclusion strategies and knowledge sharing. The statement of the problem addresses both the capacities and the capabilities of users and social groups in exploring and adapting to the online society. For this reason, the Digital Divide is considered a social problem that contains new forms of social exclusion with political, economic and cultural dimensions for users and social groups. At a first level, the phenomenon and the different ways it appears depend on demographic characteristics such as sex, age and place of residence (the difference between the metropolis and the region) (Cuneo, 2002). At a second level the Digital Divide deals...
From Barbie to Gargantua: A mapping of the representations of the female human body in Online Pornography
Pornography and sex remain two of the "dirtiest little secrets" throughout the course of human history. Sexual expression is probably the only social practice that is so repressed and yet manages to haunt our routine. Online sexuality is presented as a unique and absolute component that actually ensures the coherence and the stability of our social life. The point of this paper is to observe, record and describe, to the best possible extent, the up-to-date cybersex culture as seen on online sites containing pornographic content. Within the context of consumer society the human body becomes a bearer of symbolic value as well as a central element in the postmodern sense of personal and social identity. Consumer capitalism constructs and sells the image of a sexually active female body that is reproduced as a social figure, icon, tool and practice in pornographic internet sites. This body is also involved in a dynamic and ongoi...
E-Counseling: the new modality. Online Career Counseling - a challenging opportunity for Greek tertiary education.
In this context, the author presents a research project currently being designed for the provision of career counseling to students and graduates via the Internet. Setting: A forum is being designed in detail so that the "Counseling Act" can be effectively transferred to the online environment. Two procedures are designed to take place: a. an e-counselor - e-counselees procedure and b. an e-counselees - e-counselees procedure (peer e-counseling). Objective: To investigate whether the online forum environment is suitable and effective for the career counseling process in tertiary education. Participants: Students and graduates of the Panteion University of Social and Political Sciences. Method: Two to three months of e-counseling provision through the forum medium in a text-based form. Afterwards, a questionnaire (supplied either in the traditional way or via the Internet) is given to participants on issues relevant to the effectiveness of the new medium and especially the forum environment, their satisfaction of th...
For the last two years I have been writing and publishing papers in Information Systems conferences, journals and edited books, applying cultural theory to distributed web-based information systems – virtual worlds, online social networks, etc. I am the Secretary of IFIP WG 9.5 on Virtuality and Society. I would like to present a poster at WebSci09 outlining the ideas of the various critical theorists I have used and the systems I have analysed with them.
(Linguistic) Science Through Web Collaboration in the ANAWIKI project
Perhaps the greatest obstacle to progress towards systems able to extract semantic information from text is the lack of semantically annotated corpora large enough to be used to train and evaluate semantic interpretation methods. The community is beginning to realize that even the 1M word annotated corpora created in substantial efforts such as PropBank and OntoNotes are likely to be too small; but unfortunately, the creation of 100M-plus corpora via hand annotation is likely to be prohibitively expensive. Yet initiatives such as Wikipedia and, in the AI community, OpenMind CommonSense show that it is possible to get thousands of people to participate in science-through-the-Web initiatives. And the ESP game showed that the game format is a promising way to address the motivation issue. The goal of the ANAWIKI project is to experiment with Web collaboration as a solution to the problem of creating large-scale annotated corpora, both by developing tools through which memb...
Discovering DIMDIM: A heuristic evaluation of MOODLE's synchronous open source perspective
Moodle is a course management system (CMS) - a free, open source software package designed using sound pedagogical principles to help educators create effective online learning communities. Moodle has been widely and successfully used in many universities. It covers all the main aspects of asynchronous e-learning but lacks modern synchronous e-learning features such as hosting virtual classrooms. Dimdim steps in to fill the gap between Moodle and commercial Learning Management Systems by providing virtual classroom features while remaining available as open source software. In today's learning environment, with its continuous learning and training needs, face-to-face meetings with all learning participants present in one place are an unrealistic approach. Synchronous learning features are essential for a successful e-learning program implementation. Dimdim's virtual classroom solution enables the learning participants to conduct online virtual classroom meeting...
A Platform for Studying Progressive Self Management in Online Communities
Web 2.0 provides a myriad of ways of communicating and collaborating, such as email lists, instant messaging, blogs, wikis, tagging and commenting on others' posts. Increasingly, individuals and businesses are using this technology to build strong ad hoc online communities to pursue common shared goals, such as addressing social or environmental issues, collaborating on technical innovations, collectively creating new multimedia content or forming business networks in a particular field. As such value-generating online communities take on increasing social and economic significance, it is vital that we study and understand how best they can manage themselves to adapt to change and maintain the engagement of members. Currently, such self-management is not well understood and is poorly supported by web platforms. This poster outlines a platform developed to enable the collective and participative management of online communities and to study its use over time by a selection of different on...
The Plausibility of Estimating Language Diversity on the Internet through Wikipedia Projects
Estimating language diversity on the Internet through Wikipedia projects provides other benefits that go beyond easy and updated research. This article argues that Wikipedia projects have generally alleviated some of the language bias embedded in computer systems that Friedman and Nissenbaum (1995, 1997) identified as pre-existing, technical, and emergent biases. First, since Wikipedia projects are maintained by volunteers and charity organizations, they are often less influenced by major corporations and states than other websites are. Under-represented ethno-linguistic groups do not need governmental approval or industry attention in order to start a language version, thus avoiding unnecessary language/dialect politics and market influence. Second, because Wikipedia projects adopt Unicode, the international encoding standard that aims to accommodate all languages and scripts on the same webpage, they contain fewer technical biases than those website...
Typo-Squatting: The "Curse" of Popularity
Typo-squatting is the practice of registering a domain name that contains a typographical error relative to the name of a registered trademark or, more generally, of a more famous and widely known web site. In this paper we study analytical tools for the characterisation of typo-squatting and present the notion of the neighbourhood of a domain name. We apply our technique to a real scenario and analyse the spread of typo-squatting in the .uk registry.
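One simple way to make the neighbourhood notion concrete (our illustration; the paper's actual definition may differ) is to enumerate every label at edit distance one from a domain label:

```python
import string

def typo_neighbourhood(label):
    """All edit-distance-1 variants of a domain label: deletions,
    transpositions, substitutions and insertions over [a-z0-9-].
    This is one simplified notion of a domain's 'neighbourhood'."""
    alphabet = string.ascii_lowercase + string.digits + "-"
    splits = [(label[:i], label[i:]) for i in range(len(label) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    substitutes = {a + c + b[1:] for a, b in splits if b for c in alphabet}
    inserts = {a + c + b for a, b in splits for c in alphabet}
    return (deletes | transposes | substitutes | inserts) - {label}

neighbours = typo_neighbourhood("example")
print("exmaple" in neighbours, "exampel" in neighbours)  # True True
```

Checking which of these variants are actually registered, and by whom, then gives a measure of how heavily a popular name is typo-squatted.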
Extending SOA with Social Annotation of Services
S. Xuan, Semantic Web Services: An Unfulfilled Promise. IEEE IT Professional 9(4) (2007), 42–45.
Brazilian Institute for Web Science Research
This paper introduces the Brazilian Institute for Web Science Research, which will congregate 110 researchers from 10 Brazilian institutions. Investigations conducted within the Institute will range from understanding the impact the Web has on the daily lives of individuals to meeting the challenges of the Web graph. They will address the problems of developing software for Web-wide applications, of searching, retrieving and managing data stored in hundreds of millions of Web sites, and of proposing novel architectures that overcome the limitations of the current Web infrastructure.
Web Communities as a Tool for the Social Integration of Immigrants
We live in a time of great historical transformation, driven by the immense amount of information being processed and thus a multiplication of connections and networks. Nevertheless, in this era of accelerated migration there is no successful integration model and no successful attempt to approach this problem through the Internet. The recent proliferation of Web communities has brought to life new social dynamics and values that could help overcome some traditional problems of immigrants. This research will analyze Web communities and their values as a tool for the successful social integration of immigrants. Through this analysis I hope to contribute to a better understanding of the social dynamics of the Internet. 1. Introduction Even in the most developed and rich countries of the Western world, immigrants face many difficulties in their incorporation. Immigrants usually receive different treatment than native citizens, cope with specific barriers and stay for generations in lower economic a...
Online Health Search Affordances
Nettleton, S., R. Burrows, et al. (2005). "The mundane realities of the everyday lay use of the internet for health, and their consequences for media convergence." Sociology of Health & Illness 27(7): 972-992.
Semantically enhanced games for the Web
The growth of recent interest in casual multiplayer online games comes at a time when the Web is increasingly involved in connecting people, notably through Web-based social networking platforms. Online games are increasingly being embedded into social networking sites where they can form new connections between the network inhabitants. Semantic Web ontologies can be used to put social networking profiles in a format that can be used as a foundation for intelligent socially-aware software services. Defining an ontology for a game’s rule set and recording the significant events in a game session in an RDF-based format could bring the Semantic Web to games analysis and pave the way for new software services built around multiplayer online games. This paper proposes a methodology for defining the application logic layer of a games application from its rule set. We present a software framework that can translate game rules defined in a subset of OWL-DL combined with SWRL into a Java librar...
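A minimal sketch of what recording game events in an RDF-based format could look like, using plain N-Triples strings; the ex: vocabulary here is hypothetical and stands in for the OWL-DL ontology and SWRL rules the paper actually defines:

```python
# Record significant game events as RDF triples in N-Triples syntax.
# The ex: vocabulary is purely illustrative, not the paper's ontology.
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
EX = "http://example.org/game#"

def triple(s, p, o, literal=False):
    obj = f'"{o}"' if literal else f"<{o}>"
    return f"<{s}> <{p}> {obj} ."

events = []

def log_event(event_id, event_class, player, detail):
    """Append the triples describing one game-session event."""
    e = EX + event_id
    events.append(triple(e, RDF_TYPE, EX + event_class))
    events.append(triple(e, EX + "performedBy", EX + player))
    events.append(triple(e, EX + "detail", detail, literal=True))

log_event("e1", "Move", "alice", "pawn e2-e4")
print("\n".join(events))
```

Once session events sit in a triple store like this, game analysis becomes a matter of querying them alongside the players' social networking profiles.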
The Battle for the 2008 US Congressional Elections on the Web
With more and more people using the Web as an important source for gathering information on almost every issue, search engines are in the position to influence what is perceived as relevant information through their mechanism of ranking web pages. However, as studies in web spamming have shown, interested groups and individuals can also make use of similar mechanisms to fool search engines into ranking their pages higher than those of their rivals. During the 2006 US midterm election, a concerted effort to Google-bomb (mainly) Republican incumbents running for Congress openly took place. Google responded with changed algorithms to diminish their effect. At a time when the attention of the public and the mainstream media is focused on the presidential race, we decided to follow the most contested races for Congress. While a full analysis of the collected data will be more revealing once the results of the November 4th election are known, a preliminary analysis has already shown intere...
The case of the LHC initiation at CERN, as reported through the WWW
How is information propagated over the many media available on the WWW? When and how do unauthoritative influences take over? How can vital information concerning areas of fundamental societal value like health, science, education, or even basic factual news be protected from clearly unauthoritative influences which can do more harm than good? And how secure is this entire entity that we call the "Internet"? We propose to study the case of how, when and what information was reported in different web forums on and around September 10, 2008, about the LHC initiation at CERN in Geneva. It is a typical case where the initial, pure, concretely scientific information was suddenly overtaken by a wide range of unsubstantiated fears (further annotated with blatantly inappropriate language and comments in forums where innocent teachers and parents were directing their children to watch "the biggest experiment of the decade"). This further propagated back to the fairly authoritative news forums...
Knowledge-Enabled Research Support: RKBExplorer.com
As part of the ReSIST Project [1], we have developed a set of knowledge bases and the infrastructure that surrounds them to support all aspects of the project work and endeavour, using Semantic Web technologies throughout. Recently we have started to capture metadata on courseware and link it up to the other data on Computer Science and resilient systems. The system includes more than 20 individual knowledge bases, many containing over 10M RDF triples, along with knowledge capture utilities, knowledge publishing facilities, a coreference analysis and publishing subsystem, and the infrastructure required to enable the interoperation of these resources, giving users a unified view of the system as a whole [2]. We present an overview of the system, identifying all the major components. [1] "ReSIST: Resilience for Survivability in IST" EU-funded Network of Excellence, http://www.resist-noe.org/ [2] RKB Explorer Application, http://www.rkbexplorer.com/
From Game Neverending to Flickr. Tagging systems as ludic systems and their consequences.
Little to nothing has been written about the origins of Flickr, one of the first and most successful tagging Web services. Looking back at its early stage of development, however, one learns that in its first incarnation it was a massively multiplayer online game playable through the browser, entitled Game Neverending. After some releases, the game per se was dropped and its tools later evolved into what became Flickr, the widely hailed photo-sharing website famous for its innovative implementation of tagging and folksonomies. With now more than 54 million visitors since its inception, such an unquestionable achievement immediately raises deep questions about what motivated users to adopt this service. While such a transition from a MMORPG to a tagging system may be unique, it is nevertheless tempting to analyse the latter through a hermeneutical grid that retains the peculiar ludic dimension of video games. A close analysis of Flickr's mec...
Designing a website for creative learning
The Scratch Online Community is a website (Figure 1) that allows kids from around the world to share their own interactive media. In less than two years, more than 50,000 people have uploaded close to 350,000 Scratch projects, ranging from video games to animated stories to science simulations to dance projects. Continuous iterations in the design and moderation of the site have been guided by observations of the participation patterns that have emerged in the community around issues such as remixing, moral judgment and group formation. The Scratch website hopes to be an example of how web technologies can foster young people's involvement in participatory culture and the development of 21st century skills.
Extracting Expertise Using Online Shared Workspaces
We present an approach for extracting expertise from online research-oriented shared workspaces in two steps: a) content analysis of documents (mainly scientific deliverables) stored within shared workspaces; and b) log file analysis. The log files contain the transactions that happened in a shared workspace; we mainly use those log records that capture document-based events (e.g. read, create). After analyzing log files and documents, relevant key phrases are assigned to users as expertise elements. We also demonstrate a prototype that we developed, which uses the BSCW shared workspace for extracting expertise. As a use case, we used fifty submitted deliverables of the Ecospace project, a large European project in the area of Collaborative Working Environments (CWE), together with the project's log files from March 2005 to May 2008, comprising more than 30,000 records.
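The two-step idea (content analysis plus log weighting) can be sketched as follows; the documents, log records and event weights are invented for illustration, not Ecospace data or the prototype's actual key-phrase extraction:

```python
from collections import Counter
import re

# Toy inputs standing in for workspace documents and log records.
documents = {
    "d1": "semantic web services for collaborative working environments",
    "d2": "workflow integration in collaborative working environments",
}
log = [  # (user, event, document)
    ("ana", "create", "d1"), ("ana", "read", "d2"), ("bob", "read", "d2"),
]
STOP = {"for", "in", "the", "of"}

def expertise(user, weights={"create": 2, "read": 1}):
    """Score terms from documents a user touched, weighting
    authoring above reading; top terms become expertise elements."""
    scores = Counter()
    for u, event, doc in log:
        if u != user:
            continue
        for term in re.findall(r"[a-z]+", documents[doc]):
            if term not in STOP:
                scores[term] += weights.get(event, 0)
    return [t for t, _ in scores.most_common(3)]

print(expertise("ana"))  # ['collaborative', 'working', 'environments']
```

The real system assigns key phrases rather than single words, but the principle is the same: combine what a document says with how a user interacted with it.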
The WebStand Project
WebStand is a project funded by the French National Research Agency whose goal is to create a platform for the sociological analysis of Web data. We have applied this platform to the sociological analysis of W3C mailing lists to understand the process of standardization. An important aspect of the system is its ability to capture temporal aspects of the data, which are crucial for sociologists. This poster presents the outline of the project.
Wars, (empty) threats and other effects in buyer-seller cross-comments left on Internet auctions
An auction platform is a dynamic environment where a rich variety of social effects can be observed. Most of these effects remain unnoticed or even hidden from ordinary users. In-depth studies of such effects should allow us to identify and understand the key factors influencing users' behaviour. We analyzed material collected from the biggest Polish auction house. NLP algorithms were applied to extract sentiment-related content from the collected comments, and the emotional distance between negative, neutral and positive comments was calculated. The obtained results confirm the existence of the spiral-of-hatred effect but also indicate that much more complex patterns of mutual relations between sellers and buyers exist. The last section contains several suggestions which may prove useful for improving the trustworthiness of users' reports and the security of an auction platform in general.
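The emotional-distance computation can be sketched, under heavy assumptions, with a toy English lexicon; the actual study applied real NLP algorithms to Polish-language comments:

```python
# Toy sentiment lexicon; the actual study used NLP algorithms over
# Polish-language comments, not this word list.
LEXICON = {"great": 1, "fast": 1, "thanks": 1, "slow": -1, "fraud": -2, "never": -1}

def sentiment(comment):
    """Mean lexicon score per word: a crude stand-in for the
    sentiment extraction the abstract describes."""
    words = comment.lower().split()
    return sum(LEXICON.get(w, 0) for w in words) / max(len(words), 1)

comments = {
    "positive": ["great seller fast shipping thanks"],
    "neutral": ["item arrived as described"],
    "negative": ["slow shipping fraud never again"],
}
means = {cls: sum(map(sentiment, cs)) / len(cs) for cls, cs in comments.items()}

# One simple notion of emotional distance: the gap between mean scores.
distance = {(a, b): abs(means[a] - means[b])
            for a in means for b in means if a < b}
print(distance)
```

A spiral-of-hatred effect would show up in such scores as replies that are consistently more negative than the comments they answer.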
MaRVIN: A platform for large-scale analysis of Semantic Web data
Web Science involves, among other things, the analysis and interpretation of data and phenomena on the Web [5]. Since the datasets involved are typically very large, efficient techniques are needed for scalable execution of analysis jobs over these datasets. In contrast to other analysis tasks concerning Web data, many Semantic Web problems cannot be solved through the common strategy of divide-and-conquer, since the problems cannot be split into independent partitions. We present MaRVIN, a parallel and distributed platform for processing large amounts of RDF data on a network of loosely coupled machines using a peer-to-peer model. We believe that MaRVIN is well suited to aid many data processing and analysis tasks common in Web Science, and welcome this conference as a venue to discuss and investigate potential use-cases.
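A toy example of why divide-and-conquer fails for this kind of inference (our illustration, not MaRVIN's actual algorithm): deriving the transitive closure of rdfs:subClassOf needs premises that may sit in different partitions, so per-partition reasoning misses conclusions that reasoning over the combined data finds.

```python
def closure(triples):
    """Forward-chain the transitivity rule for subClassOf to a fixpoint."""
    facts = set(triples)
    while True:
        new = {(s, "subClassOf", o2)
               for s, _, o in facts
               for s2, _, o2 in facts if o == s2} - facts
        if not new:
            return facts
        facts |= new

part1 = {("a", "subClassOf", "b")}
part2 = {("b", "subClassOf", "c")}
local = closure(part1) | closure(part2)  # each partition closed alone
combined = closure(part1 | part2)        # closure over all the data
print(("a", "subClassOf", "c") in local)     # False
print(("a", "subClassOf", "c") in combined)  # True
```

This is why platforms in this space must keep routing triples between peers until no new inferences appear, rather than partitioning once and reasoning locally.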
A Web Portal based Framework for the Integration of Business Processes to Support the Networked Virtual University
This poster describes the work done to provide the web-based integration of business processes and educational Quality Assurance (QA) systems across the Networked Virtual University (NVU). This poster presents the semantic integration framework and highlights the interdisciplinary challenges met in the effort to enable the presence of an NVU on the web. The poster also presents the architectural design for a Web Portal Framework based on the Web Services for Remote Portlets (WSRP) standard and shows the architecture and components of a transparent Business Process Management System supporting the NVU [Ma et al 2007]. The architecture supports intelligent agents responsible for the transformation and mapping of semantic elements across NVU partner systems. The poster presents and evaluates a case study of an NVU portal system using WS-BPEL standards to provide semantic and business process integration through web service choreography and orchestration. The portlet-based architecture of ...
A scheme for enhancing trust in virtual citizen communities
Society has widely adopted the use of electronic data without sufficient attention to the problems of non-repudiation (NR). A universal, transparent scheme is needed to replace the traditional paper-based model that people are familiar with. A registration scheme is proposed that uses a network of registration servers run in a way that is robust to legal and technical challenge. Any user can register potential electronic evidence with one or more of these servers. This enables a user to later assert that they had the data at the time. Wide availability should induce proper behaviour between parties whether they use the scheme or not.
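The registration idea can be illustrated with a minimal hash-based sketch (the server network, robustness and legal aspects are omitted; all names here are ours, not the proposal's):

```python
import hashlib, time

# A registration server stores only the hash of the evidence with a
# timestamp, so a user can later prove they held the data at
# registration time by revealing it, without disclosing it up front.
registry = {}  # digest -> registration time

def register(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    registry.setdefault(digest, time.time())  # keep earliest registration
    return digest

def verify(data: bytes, digest: str) -> bool:
    return hashlib.sha256(data).hexdigest() == digest and digest in registry

receipt = register(b"contract draft v1")
print(verify(b"contract draft v1", receipt))   # True
print(verify(b"contract draft v2", receipt))   # False
```

Registering the same digest with several independent servers is what would make the assertion robust to the failure or compromise of any single one.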
InterDataNet: an Infrastructural Approach for the Web of Data
InterDataNet (IDN) is a project within the Web of Data whose goal is to provide a set of collaboration-oriented services for distributed data management to IDN-compliant applications. The IDN infrastructural solution can boost the Semantic Web vision by providing a way of achieving the collaborative creation of shared information (Web of Data) and conceptualization (Semantic Web). The IDN design pattern is intended to manage both ontologies and their instances as IDN-IM-compliant documents, formally encoded by the architecture in much the same way. As a consequence of being handled by the IDN-SA services, ontologies become globally addressable and reusable; moreover, they can benefit from the collaboration-oriented IDN-SA functions (versioning, traceability, replicability, etc.). IDN hence proposes itself to act as an interoperability application-independent...
The photo as a conversation -- a case study on Flickr
Flickr, an emblem of Web 2.0, supports a set of photography practices different from what traditional photo amateurs are used to. In this paper we carry on a distinction drawn by other authors between "Kodak culture" and what we call the "conversational use of photography". Using a base of Flickr data (5M users, 150M photos) extracted via the Flickr public API, we show that these two types of use coexist on the platform and that the rate of conversational use is surprisingly low. Beyond extensive basic figures, we have conducted several analyses identifying key features that explain how this minority can have a significant impact on the dynamism and organization of the whole community.
Role playing Games: Virtual or Reality?
Life in virtual worlds may be seen as something of little importance, yet it has whatever importance each person chooses to give it: it rests on a subjective dimension. It is important to understand games as laboratories of identity, places where players can experiment with themselves in a safer environment and start over anytime they want. We will present the general bibliography on the social and psychological condition of players and their motivation. We attempt to examine the relationship between the avatar and the personality of the player and, most importantly, the impact of the game on the mental health of the player. To sum up, we could consider virtual environments as a supplementary dimension of reality, where a person can discover himself and get involved in relationships as important as those in offline life.
Web science: a new computer-related curriculum
Degree curricula react extremely slowly to changes in society. This is all the more sensitive in technology-based disciplines like those proposed in the ACM computing curricula. This paper argues that computer-related curricula have already lived through three different stages with respect to the appearance and proliferation of the Web. Furthermore, these curricula seem to be exhausted and are facing the need for a new stage. This change is directly linked to the evolution of the Web towards Web 2.0 and the ever deeper participation of users in it. New ways of communication and new relationships between users on the Web require new education for those involved in it. Old curricula are not able to deal with all these new issues. A renovation is necessary. Is Web science the solution?
Government 2.0? Technology, Trust and Collaboration in the UAE Public Sector
A new wave of innovation is fostering cultural and technological changes on a global scale, with collaboration playing a critical role in transforming society, business and government. Over the past two decades, government leaders worldwide have been striving under different banners to develop governance models that contribute to societal and public sector development. Utilizing a variety of approaches—such as “whole-of-government”, “joined-up-government”, “networked-government”, “horizontal government” and “connected government” initiatives—public sector leaders have sought to transform government increasingly through collaborative approaches. Today, governments are still struggling to build the foundations of their future governance strategies with an emphasis on cross-agency collaboration. The increased usage of information technologies in government has spurred the promise of a less centralized and more collaborative approach. This promise is primarily derived from the tremendou...
In law we trust? Trusted Computing and legal responsibility for internet security
This paper analyses potential legal responses and consequences to the anticipated roll-out of Trusted Computing (TC). Taking the UK House of Lords report on personal internet security as a starting point for our analysis, we argue that TC constitutes such a dramatic shift in power away from users to the software providers that it is necessary for the legal system to respond. A possible response is to mirror the shift in power by a shift in legal responsibility, creating new legal liabilities and duties for software companies as the new guardians of internet security. Trusted Computing (TC), a project commenced by an industry organization known as the Trusted Computing Group (TCG), was set up to achieve higher levels of security for the information technology infrastructure. It was driven by the recognition that it is insufficient to rely on users themselves taking the necessary precautions, such as maintaining regularly updated firewalls and anti-virus systems. The notion of “trust” as used Tru...
On the Relationship Between Online Social Networks and the User-User Bonds that Create Them
Recent studies have extensively examined online social networks, but have analyzed only one network for each set of users. In this work, we use YouTube as a case study and examine multiple networks amongst the same set of users, each generated by a slightly different and real user-to-user relationship. We aim to understand the dependence of the structure and evolution of online social networks on the nature of user-user bonds that generate them. We report on a variety of properties of these networks in relation to the corresponding user-user bonds, and present a thorough study of the correlations between the networks, the neighbors of nodes, and the distribution of cluster sizes across networks. Insights gained from this study will be helpful in understanding the dynamics of social networks with applications in viral marketing, searching, controlling epidemic behaviors and the design of web applications.
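One simple way to quantify how much two networks over the same users agree (our illustration; the paper's own correlation measures may differ) is edge-set Jaccard similarity:

```python
# Two networks over the same users, generated by different bonds.
# The edges are illustrative, not YouTube data.
friends = {("u1", "u2"), ("u2", "u3"), ("u3", "u4")}
subscriptions = {("u1", "u2"), ("u3", "u4"), ("u4", "u5")}

def edge_jaccard(g1, g2):
    """Jaccard similarity of two undirected edge sets:
    |shared edges| / |all edges|, after normalizing endpoint order."""
    norm = lambda g: {tuple(sorted(e)) for e in g}
    a, b = norm(g1), norm(g2)
    return len(a & b) / len(a | b)

print(edge_jaccard(friends, subscriptions))  # 0.5
```

Computing this for every pair of bond-induced networks gives a first picture of which user-user relationships produce structurally similar graphs.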
Toward integrating travel information data using information extraction and instance matching
In this paper, we introduce a method for integrating travel information data embedded in web pages using information extraction and instance matching. Furthermore, we extend the concept of instance matching to find the connotative relationships between instances extracted from different sources in order to improve the result of the integration. We extracted more than 145,000 travel data items covering sights, routes, agents, hotels, restaurants and tickets from several different sources, and integrated them into comprehensive travel records.
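A minimal sketch of instance matching between two sources; the records, field names and threshold are hypothetical, and the paper's actual matching method may differ:

```python
import difflib

# Hypothetical records from two sources describing the same sight.
source_a = [{"name": "Summer Palace", "city": "Beijing", "ticket": "30 CNY"}]
source_b = [{"name": "The Summer Palace", "city": "Beijing", "route": "Line 4"}]

def match(a, b, threshold=0.8):
    """Treat two instances as the same entity when their names are
    similar enough and their cities agree."""
    sim = difflib.SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return sim >= threshold and a["city"] == b["city"]

# Merging matched instances yields a record with fields from both sources.
merged = [{**a, **b} for a in source_a for b in source_b if match(a, b)]
print(merged[0]["ticket"], merged[0]["route"])  # 30 CNY Line 4
```

This illustrates how matching lets complementary attributes (ticket price from one source, transit route from another) be combined into one comprehensive record.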
Cognitive Extension and the Web
[see attached paper for extended abstract] There has been a growing interest in recent years regarding the relationship between social interaction processes, technological artefacts and human cognition. Human cognition, it is argued, is often dependent on features of our social and technological environments, and changes to these environments can exert a profound influence on the kind of cognitive processing that we are capable of. Given this assertion, our attempts to understand a technology as pervasive as the Web assumes a new significance; for inasmuch as Web resources and technologies are apt for forms of cognitive extension and incorporation, we may fully expect such resources and technologies to fundamentally transfigure the space of human thought and reason. Our aim in this paper is to evaluate the legitimacy of this claim. We assess whether the current properties of the Web meet the kind of criteria for cognitive extension that have been proposed in the cognitive scientific a...
Think-Trust: Investigating Trust, Security, Dependability, Privacy & Identity from ICT and Societal perspectives
Society and technology are evolving and continuing to accelerate, leading to increased complexity, inter-dependencies, convergence and mass data collection/collation. Think-Trust aims to formulate recommendations on: 1. Policy environment: develop coherent ICT legal/administrative frameworks encompassing human behaviour relating to security, privacy and confidence. 2. Research agenda: encourage R&D that facilitates a secure Information Society and respects the freedom and privacy of its citizens, with due attention given to ICT infrastructures, networks, services and applications. Envisaged impacts include: (a) improving the confidence level of users of technology in the future Information Society, (b) ensuring that Europe is well-positioned to embrace ICT developments, and (c) finding the right balance between social, legal and technical requirements.
Online Networking as a Growing Multimodal and Multipurpose Media Practice: a Key Factor for Socio-Cultural Change
The different uses and functions through which citizens individually and collectively make use of the Internet bring about decisive modifications in the quantity and quality of people’s involvement in communication processes. Indeed, the uptake of the Internet in our everyday lives modifies the manner in which we manage and arrange our daily undertakings. In this regard, as far as communication, and specifically media practices, are concerned, there is every indication that, among other aspects, the rise of home Internet access necessarily plays a fundamental role in the development of increasingly personalized and widely participative practices, making the household a key context for the continual interplay between technology, audience and use factors, and thus, in particular, for the adoption of the Internet as a valuable, wide-ranging daily life instrument. In our research, we are trying to understand current patterns of transformation in communication practices owing to the ever-increasi...
Advergames Content Analysis: Applying a Methodological Toolkit based on Ludology Principles
Advergames made their appearance more as an evolution of interactive advertising than as actual games, and adopted advertising’s structural rules more than game-play structure. Nowadays, advergames tend to be more games than advertisements, and as such they must be methodologically approached and analyzed as games. Ludology focuses on understanding the structure, elements and mechanisms of games. In this work we apply the principles of ludology to advergames and introduce the use of a methodological toolkit based on these principles. Advergames content analysis may adopt ludology principles and use methodological toolkits based on them, but adaptation of the toolkit is needed in order to fit advergame requirements.
Unleashing Argumentation Support Systems on the Web: The case of CoPe_it!
Argumentation support systems have a long history. Generally speaking, they offer sophisticated support for sense- and/or decision-making, and have proven effective in addressing a wide range of concerns in various domains, such as engineering, law and medicine. In the majority of cases, however, these systems have largely remained within the communities in which they originated, thus failing to reach a wider audience. When investigating how the advent of the World Wide Web affected them, the results are rather disappointing: only Web-based discussion forums, which offer rather primitive support compared to argumentation support systems, have successfully migrated to the Web. One key factor contributing to the wide adoption of these forums is their emphasis on simplicity. On the other hand, the formal nature of sophisticated argumentation systems has been pointed out as an important barrier to their wide adoption, and as one factor that hinders them from making the step towards the World Wide We...
Detecting and Understanding Web Communities
Collective user activities on multiple, often heterogeneous and evolving Web sources contribute to the formation of Web communities, which are derived either from Web documents/pages, from users’ navigational tasks, or, more recently, from tags and social frameworks. Defining, deriving and exploiting communities is not a trivial task, since several parameters (large scale, complexity, evolving information, etc.) are involved. This paper aims to provide answers to crucial questions raised about communities emerging on the Web, and it summarizes different community definitions so that the problem of community detection (which has been extensively researched in the past) can be understood. The paper emphasizes and discusses the most important methodologies and techniques for dealing with large populations of Web documents participating in vast hyperlinked networks, or networks formed by crawling (part of) the Web and, more recently, networks reflecting the social relations and/or intera...
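The abstract above surveys community-detection techniques at a high level. As a hedged illustration (not the authors' method), the sketch below runs one standard algorithm, greedy modularity maximization, on a toy hyperlink-style graph using the `networkx` library; the graph and node names are invented for the example.

```python
# Illustrative only: a toy "Web" graph whose nodes are pages and whose
# edges are hyperlinks (treated as undirected here for simplicity).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # dense cluster A (triangle)
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # dense cluster B (triangle)
    ("a3", "b1"),                                # single bridge link
])

# Clauset-Newman-Moore greedy modularity maximization: repeatedly merge
# the pair of communities that most increases modularity.
communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(i, sorted(members))
```

On this toy graph the algorithm recovers the two triangles as separate communities, since the single bridge edge contributes little to modularity.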
Wikipedia2Onto --- Adding Wikipedia Semantics to Web Image Retrieval
This paper describes our preliminary attempt to automatically construct a large-scale multi-modality ontology for web image classification. For the text part, we take advantage of both the structural and content features of Wikipedia, and formalize real-world objects in terms of concepts and relationships. For the visual part, we train classifiers on both global and local features, and generate middle-level concepts from the training results. A variant of the association rule mining algorithm is further developed to refine the built ontology. Through experiments we show that our method allows automatic construction of a large-scale multi-modality ontology with high accuracy from a challenging web image data set.
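The abstract mentions a variant of association rule mining used to refine the ontology. The authors' variant is not specified here, so the following is a generic, pure-Python sketch of mining simple co-occurrence rules (A implies B) between concept labels attached to images; the data set, thresholds, and the `mine_rules` helper are all illustrative assumptions, not the paper's algorithm.

```python
# Generic single-antecedent association rule mining over "transactions"
# (here: sets of concepts detected per image). Invented example data.
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.4, min_confidence=0.7):
    """Return rules (x, y, support, confidence) for single-item antecedents."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n                      # fraction of transactions with both
        if support < min_support:
            continue
        for x, y in ((a, b), (b, a)):
            confidence = count / item_counts[x]  # P(y | x)
            if confidence >= min_confidence:
                rules.append((x, y, support, confidence))
    return rules

# Toy "image concept" transactions, e.g. concepts detected per web image.
images = [
    {"cat", "animal", "grass"},
    {"cat", "animal"},
    {"dog", "animal"},
    {"cat", "animal", "indoor"},
    {"car", "street"},
]
for x, y, s, c in sorted(mine_rules(images)):
    print(f"{x} -> {y}  support={s:.2f} confidence={c:.2f}")
```

A rule such as "cat implies animal" with high confidence could then suggest a subclass relationship worth adding to (or confirming in) the ontology.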
Why do people participate in cybercommunities?
This paper relates the rise of cybercommunities, and the reasons behind individuals’ participation in them, to the modern world. It makes use of Giddens’ theories of modernity as a set of analytical tools.
Phatic Technology and Modernity
This paper introduces the concept of phatic technology and analyses its role in modern society. A phatic technology is a technology that serves to establish, develop, and maintain human relationships. The primary function of this type of technology is to create a social context: its users form a social community with a collection of interactional goals, which may be relevant to all human interchanges in that social context. Some aspects of phatic technology have been noticed and used to classify different types of technology. However, the precise combination of characteristics that make up a phatic technology does not seem to have been recognised and generalised. Since the establishment, development, and maintenance of human relations is the primary characteristic of a phatic technology, an account of it may be expected to involve a social context with a rich sociology, namely a community constituted by individual users of the phatic technology, in which personal and group goals must be...
A Digital Citizens Bill of Rights
"[A] bill of rights is what the people are entitled to against every government on earth, general or particular, and what no just government should refuse." --Thomas Jefferson, December 20, 1787. This paper presents a comprehensive framework for discussing proposed rights of digital citizens in the digital age. Over the past decade, as the Internet has grown in influence and scope, many have proposed individual or partial lists of rights that citizens should have, but none of them have been comprehensive or complete. We present here a comprehensive list of “rights” that citizens should expect governments in the digital age to protect and enhance. The paper proposes the notion of “digital citizens,” who interact with agencies of government through voting, receipt of government services and day-to-day interactions over important public policy issues. We explore briefly the history of the Internet with regard to government and democrati...
Academic Internet Use in Korea: Issues and Lessons in e-Research
Since the 1995 inception of the Internet in South Korea (hereafter, Korea), the Internet has become an important medium for information and communication among college students due to its complete integration into everyday school life. This study examines the scholarly use and role of advanced computer and communications technologies in general, and the Internet in particular, via an open-ended, qualitative survey among Korean university students. Through word frequency analysis and semantic mapping, this paper identifies the key issues in academic Internet use. In addition to information science methods, content analysis is used to investigate the attitudinal and behavioral dimensions of scholarly Internet use. The results are expected to enable professors and policymakers to target populations who underutilize the educational potential of Internet technologies and to design e-learning programs for such students.
Chinese Citizens' Attitudes towards Internet Censorship: a Survey in Mainland China, 2005
The aim of this study is to describe Chinese citizens' attitudes towards Internet censorship and to explore some variables that may affect them. A dataset containing 1,620 cases was analyzed. It was found that interviewees’ Internet usage experience influenced their opinions of Internet censorship, and that this effect was shaped by respondents’ character. At the end of the paper, a structural equation model is constructed to illustrate the relations among respondents’ character, their perspective on the Internet’s function, and their attitude towards Internet censorship.
Preventing Cyber Bullying, and Parent and Teacher Education with 192 Leaflets and Newsletters
YASUDA Hirohiko, Eng. M., Teacher, Shimonoseki Technical High School, Shimonoseki City, Japan. Category: Cybercrime and Prevention. In Nagasaki, Japan, an elementary school student killed her classmate in 2004: angered by an e-mail the girl had written, she killed her. Last summer, a high school student in Kobe killed himself because bullies threatened him with ‘cyber bullying’ (Internet abuse). This January, the Ministry of Education, Culture, Sports, Science and Technology declared that written mental abuse by way of the Internet or mobile phone is considered a form of bullying. ‘Cyber bullying’ in particular is one of the most serious problems in Japanese schools these days. The purpose of this study is to help students learn information morality through case studies and to understand how our information network society will be changing. Students learn how i...
BlogosphereExplorer: Opening a Window to the Blogosphere
We think that blogs are a very important part of Web Science. BlogMiningExplorer is an ongoing project that exploits the power of the blogosphere, the pioneer of Web 2.0 applications. We launched this project in the hope that it can be a practicable blog mining service for individuals, commercial organizations, and government or regulatory organizations. We think that multiple-property mining and spatio-temporal evolution analysis make BlogMiningExplorer more powerful than even some commercial blog search engines such as Technorati.
Improving coursework for Web Engineering based on MVC pattern
Recently, there have been tremendous changes in web engineering with the adoption of Web 2.0 technologies, such as rich front-end applications, rapid application development based on lightweight frameworks, and service platform models built on open APIs. However, college coursework for web engineering does not cover these changes; most courses tend to focus on HTML and form processing with a database, or on learning specific languages (e.g. PHP, Java). As a result, many students lose interest and do not recognize novel methods for web development, despite the remarkable importance of web engineering. To address these limitations, we developed web engineering coursework 1) applying the MVC (model, view and controller) pattern to both server-side and front-end web technologies, 2) using lightweight frameworks for rapid application development (e.g. CakePHP, Ruby on Rails), and 3) using a database-less programming approach with open APIs. The proposed coursework was trialled for ...
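The coursework above is organized around the MVC pattern. As a minimal, framework-free sketch (the course itself uses frameworks such as CakePHP or Ruby on Rails), the toy example below shows the division of responsibilities among model, view, and controller; all class and method names are invented for illustration.

```python
# Minimal MVC separation in plain Python (illustrative, not the course code).

class Model:
    """Holds application data; knows nothing about presentation."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

class View:
    """Renders data; knows nothing about storage or request handling."""
    def render(self, items):
        return "\n".join(f"- {item}" for item in items)

class Controller:
    """Mediates: turns 'requests' into model updates and view calls."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_add(self, item):
        self.model.add(item)
        return self.view.render(self.model.items)

app = Controller(Model(), View())
app.handle_add("first post")
print(app.handle_add("second post"))
# prints:
# - first post
# - second post
```

The same division carries over to the server side (controller = request handler, model = database layer, view = template) and, as the coursework notes, to front-end code as well.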
MusicMash2: Mashing Linked Music Data via An OWL DL Web Ontology
MusicMash2 is an ontology-based semantic mashup application intended to integrate music-related content from various folksonomy-based tagging systems, linked open data, and music metadata Web services. MusicMash2 provides the functionality for users to search for information (including videos) related to artists, albums, and songs.