869 results for Web Information Gathering, Web Personalization, Concepts
Abstract:
A web wrapper extracts data from HTML documents. The accuracy and quality of the information extracted by a web wrapper rely on the structure of the HTML document. If an HTML document changes, the web wrapper may no longer function correctly. This paper presents an Adjacency-Weight method for use in the web wrapper extraction process, or in a wrapper self-maintenance mechanism, to validate web wrappers. The algorithm and data structures are illustrated with intuitive examples.
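The fragility the abstract describes can be seen in a minimal sketch. The wrapper below is a hypothetical illustration of the problem, not the paper's Adjacency-Weight method: it hard-codes an assumption about the page structure, and a small structural change makes it silently extract nothing.

```python
from html.parser import HTMLParser

# Hypothetical wrapper: assumes prices live in <span class="price"> elements.
class PriceWrapper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

w = PriceWrapper()
w.feed('<div><span class="price">9.99</span><span class="price">4.50</span></div>')
print(w.prices)  # ['9.99', '4.50']

# The same wrapper silently returns nothing once the site switches
# from <span> to <b> for prices -- the failure a validation method
# like Adjacency-Weight is meant to detect.
w2 = PriceWrapper()
w2.feed('<div><b class="price">9.99</b></div>')
print(w2.prices)  # []
```

A wrapper-validation mechanism would compare extraction results (or structural signatures) over time and flag the second case instead of passing empty output downstream.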
Abstract:
In the early 21st century, we need to prepare university students to navigate local and global cultures effectively and sensitively. These future professionals must develop comprehensive intercultural communication skills and understanding. Yet university assessment in Australia is often based on a western template of knowledge, which automatically places International and Indigenous students, as well as certain groups of local students, at a disadvantage in their studies. It also means that Australian students from dominant groups are not given the opportunity to develop these vital intercultural skills. This paper explores the issues embedded in themes 1 and 4 of this conference and provides details of an innovative website developed at Queensland University of Technology in Brisbane, Australia, which encourages academic staff to investigate the hidden assumptions that can underpin their assessment practices. The website also suggests strategies academics can use to ensure that their assessment becomes more socially and culturally responsive.
Abstract:
The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources like the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower level empirical computations over usage. Our aim is definitely not to claim logic-bad, NLP-good in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
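The claim that information extraction can populate RDF knowledge stores from unstructured text can be sketched in a toy form. The pattern and vocabulary below are illustrative assumptions, not the paper's system: a single "X is a Y" pattern is mapped to RDF-style `rdf:type` triples.

```python
import re

# Toy information-extraction sketch: turn "X is a Y" sentences from
# unstructured text into RDF-style (subject, predicate, object) triples.
# Real IE systems use far richer linguistic analysis; this only shows
# the shape of the text -> triple-store pipeline.
PATTERN = re.compile(r"(\w+) is a (\w+)")

def extract_triples(text):
    return [(s, "rdf:type", o) for s, o in PATTERN.findall(text)]

text = "Paris is a city. Python is a language."
print(extract_triples(text))
# [('Paris', 'rdf:type', 'city'), ('Python', 'rdf:type', 'language')]
```

Scaled up with robust NLP, this is the route the authors argue is the only feasible way to fill the SW's knowledge stores from the existing WWW.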
Abstract:
The impact and use of information and communication technology on learning outcomes for accounting students is not well understood. This study investigates the impact of design features of Blackboard used as a Web-based Learning Environment (WBLE) in teaching undergraduate accounting students. Specifically, this investigation reports on a number of Blackboard design features (e.g. delivery of lecture notes, announcements, online assessment and model answers) used to deliver learning materials regarded as necessary to enhance learning outcomes. Responses from 369 on-campus students provided data to develop a regression model that seeks to explain enhanced participation and mental effort. The final regression shows that student satisfaction with the use of a WBLE is associated with five design features or variables. These include usefulness and availability of lecture notes, online assessment, model answers, and online chat.
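The kind of regression the study describes can be sketched as follows. The survey data are not available, so the data here are synthetic (the only detail carried over is the sample size of 369); the feature names and coefficient values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 369  # same sample size as the study; the data below are synthetic

# Four hypothetical design-feature predictors on a 1-5 Likert scale:
# lecture-note usefulness/availability, online assessment,
# model answers, online chat.
X = rng.integers(1, 6, size=(n, 4)).astype(float)
true_beta = np.array([0.4, 0.3, 0.2, 0.1])  # assumed effect sizes
y = 1.0 + X @ true_beta + rng.normal(0, 0.3, n)  # satisfaction score

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(beta, 2))  # intercept followed by four coefficients
```

With 369 responses the estimated coefficients recover the assumed effects closely, which is the sense in which the study can associate satisfaction with specific design features.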
Abstract:
Interpolated data are an important part of environmental information exchange, as many variables can only be measured at discrete sampling locations. Spatial interpolation is a complex operation that has traditionally required expert treatment, making automation a serious challenge. This paper presents a few lessons learnt from INTAMAP, a project that is developing an interoperable web processing service (WPS) for the automatic interpolation of environmental data using advanced geostatistics, adopting a Service Oriented Architecture (SOA). The “rainbow box” approach we followed provides access to the functionality at a whole range of different levels. We show here how the integration of open standards, open source software and powerful statistical processing capabilities allows us to automate a complex process while offering users a level of access and control that best suits their requirements. This facilitates benchmarking exercises as well as the regular reporting of environmental information without requiring remote users to have specialised skills in geostatistics.
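The core operation such a service automates can be sketched with a deliberately simple interpolator. INTAMAP uses advanced geostatistics (e.g. kriging); the inverse-distance-weighting function below is only a stand-in showing what "estimate a value between sampling locations" means.

```python
import math

def idw(points, values, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y).

    A simple stand-in for the advanced geostatistical interpolation
    (e.g. kriging) that the INTAMAP WPS automates: each observed value
    contributes with weight 1 / distance**power.
    """
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return v  # query point coincides with a sampling location
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

# Three hypothetical sampling locations with measured values.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [10.0, 20.0, 30.0]
print(round(idw(pts, vals, 0.5, 0.5), 2))  # 20.0
```

A WPS wraps an operation like this (with proper geostatistics and uncertainty estimates) behind a standard request/response interface, so remote users need no expertise in the underlying method.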