138 results for World-wide-web
Abstract:
Teaching awards, grants and fellowships are strategies used to recognise outstanding contributions to learning and teaching, to encourage innovation, and to shift learning and teaching from the edge to centre stage. Examples range from school, faculty and institutional award and grant schemes to national schemes such as those offered by the Australian Learning and Teaching Council (ALTC), the Carnegie Foundation for the Advancement of Teaching in the United States, and the Fund for the Development of Teaching and Learning in higher education in the United Kingdom. The Queensland University of Technology (QUT) has experienced outstanding success in all areas of ALTC funding since the inception of the Carrick Institute for Learning and Teaching in 2004. This paper reports on a study of the critical factors that have enabled sustainable and resilient institutional engagement with ALTC programs. As a lens for examining the QUT environment and practices, the study draws upon the five conditions of the framework for effective dissemination of innovation developed by Southwell, Gannaway, Orrell, Chalmers and Abraham (2005, 2010):
1. Effective, multi-level leadership and management
2. Climate of readiness for change
3. Availability of resources
4. Comprehensive systems in institutions and funding bodies
5. Funding design
The discussion of the critical factors, and of the practical and strategic lessons learnt for successful university-wide engagement, offers insights for university leaders and staff who are responsible for learning and teaching award, grant and associated internal and external funding schemes.
Abstract:
Nowadays, everyone can effortlessly access a range of information on the World Wide Web (WWW). As information resources on the web continue to grow tremendously, it becomes progressively more difficult to meet users' high expectations and find relevant information. Although existing search engine technologies can find valuable information, they suffer from the problems of information overload and information mismatch. This paper presents a hybrid Web Information Retrieval approach that allows personalised search using an ontology, a user profile and collaborative filtering. The approach uses the ontology to find the context of a user query with minimal user involvement, updates the user profile automatically over time as the user's behaviour changes, and then draws on recommendations from similar users via a collaborative filtering technique. The proposed method is evaluated with the FIRE 2010 dataset and a manually generated dataset. Empirical analysis reveals that the Precision, Recall and F-Score of most queries for many users improve with the proposed method.
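As a concrete illustration of the collaborative-filtering step described above, the sketch below recommends query-expansion terms from the profiles of similar users. It is a minimal sketch under assumed data structures (sparse term-weight profiles, cosine similarity, a hypothetical recommend_terms helper), not the paper's actual implementation.

import math

def cosine_similarity(a, b):
    # Cosine similarity between two sparse term-weight profiles (dicts).
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend_terms(target, others, k=2):
    # Expand the target profile with terms drawn from the k most similar users.
    ranked = sorted(others.values(),
                    key=lambda profile: cosine_similarity(target, profile),
                    reverse=True)
    suggestions = {}
    for profile in ranked[:k]:
        for term, weight in profile.items():
            if term not in target:
                suggestions[term] = max(suggestions.get(term, 0.0), weight)
    return sorted(suggestions, key=suggestions.get, reverse=True)

profiles = {
    "u1": {"python": 0.9, "django": 0.7},
    "u2": {"python": 0.8, "flask": 0.6},
    "u3": {"gardening": 0.9},
}
print(recommend_terms({"python": 1.0}, profiles))  # ['django', 'flask']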
Abstract:
In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. To this end, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods to formulate web search queries that are capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. Several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques.
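To make the greedy scheduling idea concrete, here is a minimal sketch of a confidence-driven imputation loop. The cell representation and the estimate_confidence and run_query callables are illustrative assumptions, not WebPut's actual code.

def greedy_impute(missing, candidate_queries, estimate_confidence, run_query):
    # missing: set of (row, column) cells with absent values.
    # candidate_queries: maps each cell to its candidate imputation queries.
    filled = {}
    remaining = set(missing)
    while remaining:
        # Greedy step: issue the candidate query with the highest
        # estimated confidence across all still-missing cells.
        cell, query = max(
            ((c, q) for c in remaining for q in candidate_queries[c]),
            key=lambda pair: estimate_confidence(pair[1], filled),
        )
        value = run_query(query)      # e.g. a web search plus extraction
        if value is not None:
            filled[cell] = value      # completed values can raise the
                                      # confidence of later queries
        remaining.discard(cell)
    return filled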
Abstract:
The proliferation of the web presents an unsolved problem: automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters on a single mid-range machine, using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes: ClueWeb09 and ClueWeb12, which contain 500 million and 733 million web pages respectively, were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been demonstrated before; previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection for clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering of massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations that use ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing cluster quality where categorical labeling is unavailable or infeasible.
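The sketch below illustrates one ingredient that makes clustering at this scale affordable, namely compressed document representations: if documents are reduced to fixed-width binary signatures, nearest-centroid assignment becomes a cheap XOR-and-popcount. This is an illustrative simplification on my part; the EM-tree algorithm itself is considerably more involved.

def hamming(a, b):
    # Distance between two binary signatures: XOR, then count set bits.
    return bin(a ^ b).count("1")

def assign(signatures, centroids):
    # Map each document signature to the index of its nearest centroid.
    return [min(range(len(centroids)),
                key=lambda c: hamming(sig, centroids[c]))
            for sig in signatures]

docs = [0b1111000011110000, 0b1111000011110001,   # near centroid 0
        0b0000111100001111, 0b0000111100001110]   # near centroid 1
centroids = [0b1111000011110000, 0b0000111100001111]
print(assign(docs, centroids))  # [0, 0, 1, 1]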
Abstract:
Currently we are facing an overwhelming growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet grows dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, which causes the phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers see some important problems when turning to the Internet as a source of decision information. One of the most commonly raised obstacles is the distribution of relevant information among many sources, and therefore the need to visit different Web sources in order to collect all the important content and analyze it. A few research groups have recently turned to the problem of information extraction from the Web [13]. Most effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction, or information extraction from the Web) and toward understanding natural language texts by means of fact, entity and association recognition (referred to as information extraction). Data extraction efforts show some interesting results; however, proper integration of web databases is still beyond us. The information extraction field has recently been very successful in retrieving information from natural language texts, but it still lacks the ability to understand more complex information, which requires the use of common-sense knowledge, discourse analysis and disambiguation techniques.
Abstract:
Web service and business process technologies are widely adopted to facilitate business automation and collaboration. Given the complexity of business processes, showing a business process through different views is a sought-after feature, catering for the diverse interests, authority levels, etc., of different users. Aiming to implement such flexible process views in the Web service environment, this paper presents a novel framework named FlexView to support view abstraction and concretisation of WS-BPEL processes. In the FlexView framework, a rigorous view model is proposed to specify the dependency and correlation between the structural components of process views, with emphasis on the characteristics of WS-BPEL, and a set of rules is defined to guarantee the structural consistency between process views during transformations. A set of algorithms is developed to bring the abstraction and concretisation operations to the operational level, and a prototype is implemented as a proof of concept. © 2010 Springer Science+Business Media, LLC.
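As a rough illustration of view abstraction, the sketch below models a WS-BPEL-like process as a tree of activities and collapses selected structured activities into opaque placeholders. The Activity data model and the single collapsing rule are my own simplified assumptions, not FlexView's actual view model.

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    kind: str = "basic"                 # e.g. "basic", "sequence", "flow"
    children: list = field(default_factory=list)

def abstract_view(node, hide):
    # Collapsing rule: a hidden subtree becomes a single abstract activity,
    # kept in place so the view stays structurally consistent with the source.
    if node.name in hide:
        return Activity(name=node.name, kind="abstract")
    return Activity(node.name, node.kind,
                    [abstract_view(child, hide) for child in node.children])

process = Activity("main", "sequence", [
    Activity("receiveOrder"),
    Activity("payment", "flow", [Activity("charge"), Activity("notify")]),
    Activity("ship"),
])
view = abstract_view(process, hide={"payment"})
print([c.kind for c in view.children])  # ['basic', 'abstract', 'basic']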
Abstract:
We present an empirical evaluation and comparison of two content extraction methods for HTML: absolute XPath expressions and relative XPath expressions. We argue that relative XPath expressions, although not widely used, should be preferred over absolute XPath expressions when extracting content from human-created Web documents. Our evaluation of robustness covers four thousand queries executed on several hundred webpages. We show that, in referencing parts of real-world dynamic HTML documents, relative XPath expressions are on average significantly more robust than absolute ones.
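For illustration, the snippet below (the example HTML and expressions are mine, not the paper's test set) shows why a relative XPath expression tends to survive layout changes that break an absolute one; it uses the widely available lxml library.

from lxml import html

page = html.fromstring("""
<html><body>
  <div class="ad">banner</div>
  <div id="content"><span class="price">19.99</span></div>
</body></html>""")

absolute = "/html/body/div[2]/span"    # breaks if a div is inserted or removed
relative = "//span[@class='price']"    # anchored to a stable attribute instead

print(page.xpath(absolute)[0].text)    # '19.99' -- until the layout shifts
print(page.xpath(relative)[0].text)    # '19.99' -- robust to reordering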
Abstract:
This paper describes methods used to support collaboration and communication between practitioners, designers and engineers when designing ubiquitous computing systems. We tested methods such as "Wizard of Oz" and design games in a real domain, the dental surgery, in an attempt to create a system that is affordable, minimally disruptive of the natural flow of work, and improves human-computer interaction. In doing so we found that such activities allowed the practitioners to be on a 'level playing field' with designers and engineers. The findings we present suggest that dentists are willing to engage in detailed exploration and constructive critique of technical design possibilities if the design ideas and prototypes are presented in the context of their work practice and are of a resolution and relevance that allow them to explore and question jointly with the design team. This paper is an extension of a short paper submitted to the Participatory Design Conference, 2004.
Abstract:
Over the last few years, considerable public and commercial attention has turned to a phenomenon that is poised to fundamentally change the media landscape. Yahoo! bought Flickr. Google acquired YouTube. Rupert Murdoch bought MySpace and declared that the future of his NewsCorp empire lay in user-led content creation within such social media rather than in his many newspapers, television stations and other media interests (2005). Finally, TIME broke with its long-established tradition of nominating an outstanding individual as "Person of the Year" and instead chose "You": all of us who collaboratively create content online (2006). However, the significance of this user-led phenomenon lies not in such (ultimately unimportant) honours, nor merely in the content of flagship websites such as YouTube and Flickr; rather, as a logical consequence of its underlying principles (which we examine further here), it is found spread far more widely across the World Wide Web. What matters about the new phenomenon is not only the success of its most visible exponents, but also the "long tail" (Anderson 2006) of the many other user-led projects that have established themselves throughout the online world and are now beginning to spread even into the offline world.
Abstract:
One of the minimum requirements of most modern professional jobs is the ability to handle and manage the World Wide Web (WWW) and communications such as electronic mail (email). In my office, all staff, including administration, marketing, management, and all levels of quantity surveyors ranging from cadet to director, must manage electronic communication. One of the many aspects of my professional development I have struggled with is managing the tasks dictated to me through email.