632 results for Trophic web structure
at Queensland University of Technology - ePrints Archive
Abstract:
Searching for multimedia is an important activity for users of Web search engines. Studying users' interactions with Web search engine multimedia buttons, including image, audio, and video, is important for the development of multimedia Web search systems. This article provides results from a Weblog analysis study of multimedia Web searching by Dogpile users in 2006. The study analyzes the (a) duration, size, and structure of Web search queries and sessions; (b) user demographics; (c) most popular multimedia Web searching terms; and (d) use of advanced Web search techniques including Boolean and natural language. The current study findings are compared with results from previous multimedia Web searching studies. The key findings are: (a) since 1997, image search has consistently been the dominant media type searched, followed by audio and video; (b) multimedia search duration is still short (>50% of searching episodes are <1 min), using few search terms; (c) many multimedia searches are for information about people, especially in audio search; and (d) multimedia search has begun to shift from entertainment to other categories such as medical, sports, and technology (based on the most repeated terms). Implications for the design of Web multimedia search engines are discussed.
Abstract:
Over the last decade, the rapid growth and adoption of the World Wide Web has further exacerbated user needs for efficient mechanisms for information and knowledge location, selection, and retrieval. How to gather useful and meaningful information from the Web becomes challenging to users. The capture of user information needs is key to delivering users' desired information, and user profiles can help to capture information needs. However, effectively acquiring user profiles is difficult. It is argued that if user background knowledge can be specified by ontologies, more accurate user profiles can be acquired and thus information needs can be captured effectively. Web users implicitly possess concept models that are obtained from their experience and education, and use the concept models in information gathering. Prior to this work, much research has attempted to use ontologies to specify user background knowledge and user concept models. However, these works have a drawback in that they cannot move beyond the subsumption of super- and sub-class structure to emphasising the specific semantic relations in a single computational model. This has also been a challenge for years in the knowledge engineering community. Thus, using ontologies to represent user concept models and to acquire user profiles remains an unsolved problem in personalised Web information gathering and knowledge engineering. In this thesis, an ontology learning and mining model is proposed to acquire user profiles for personalised Web information gathering. The proposed computational model emphasises the specific is-a and part-of semantic relations in one computational model. The world knowledge and users' Local Instance Repositories are used to attempt to discover and specify user background knowledge. From a world knowledge base, personalised ontologies are constructed by adopting automatic or semi-automatic techniques to extract user interest concepts, focusing on user information needs. A multidimensional ontology mining method, Specificity and Exhaustivity, is also introduced in this thesis for analysing the user background knowledge discovered and specified in user personalised ontologies. The ontology learning and mining model is evaluated by comparing it with human-based and state-of-the-art computational models in experiments, using a large, standard data set. The experimental results are promising. The proposed ontology learning and mining model helps to develop a better understanding of user profile acquisition, thus providing better design of personalised Web information gathering systems. The contributions are increasingly significant, given both the rapid explosion of Web information in recent years and today's accessibility to the Internet and the full-text world.
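As a loose illustration of mining a personalised ontology along two dimensions, the sketch below scores each concept with a depth-based specificity and a descendant-coverage exhaustivity. The toy taxonomy and these stand-in definitions are assumptions made for illustration only; they are not the thesis's actual Specificity and Exhaustivity formulations.

```python
# Illustrative stand-ins for specificity (depth in the taxonomy) and
# exhaustivity (fraction of a concept's subtree matching user interests).
children = {
    "Science": ["Computer Science", "Biology"],
    "Computer Science": ["Information Retrieval", "Databases"],
    "Information Retrieval": [], "Databases": [], "Biology": [],
}
user_interest = {"Information Retrieval"}  # assumed user interest concepts

def descendants(concept):
    out = set()
    for child in children.get(concept, []):
        out.add(child)
        out |= descendants(child)
    return out

def specificity(concept, root="Science"):
    # deeper concepts are treated as more specific (assumed definition)
    depth, frontier = 0, {root}
    while concept not in frontier and frontier:
        frontier = {c for f in frontier for c in children.get(f, [])}
        depth += 1
    return depth

def exhaustivity(concept):
    # share of the concept's subtree covered by the user's interests (assumed)
    subtree = descendants(concept) | {concept}
    return len(subtree & user_interest) / len(subtree)

for concept in children:
    print(concept, specificity(concept), round(exhaustivity(concept), 2))
```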
Abstract:
This paper describes the approach taken to the clustering task at INEX 2009 by a group at the Queensland University of Technology. The Random Indexing (RI) K-tree has been used with a representation that is based on the semantic markup available in the INEX 2009 Wikipedia collection. The RI K-tree is a scalable approach to clustering large document collections. This approach has produced quality clustering when evaluated using two different methodologies.
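The Random Indexing step that the RI K-tree builds on can be illustrated with a short sketch: each term receives a sparse ternary index vector, and a document vector is the sum of its terms' index vectors. The dimensionality, sparsity, and hashing scheme below are illustrative assumptions, not the parameters used in the paper.

```python
import hashlib
import random

DIM = 1000       # reduced vector dimensionality (assumed for illustration)
NON_ZERO = 10    # non-zero entries per term index vector (assumed)

def index_vector(term):
    """Sparse ternary index vector for a term, seeded deterministically."""
    seed = int(hashlib.md5(term.encode("utf-8")).hexdigest(), 16)
    rng = random.Random(seed)
    positions = rng.sample(range(DIM), NON_ZERO)
    return {p: (1 if i % 2 == 0 else -1) for i, p in enumerate(positions)}

def document_vector(terms):
    """Document vector = sum of the index vectors of its terms."""
    vec = [0.0] * DIM
    for term in terms:
        for pos, val in index_vector(term).items():
            vec[pos] += val
    return vec

doc = "random indexing gives a reduced representation of a document".split()
print(sum(1 for x in document_vector(doc) if x != 0))  # number of active dimensions
```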
Abstract:
Stigmergy is a biological term used when discussing insect or swarm behaviour, and describes a model supporting environmental communication separately from artefacts or agents. This phenomenon is demonstrated in the behaviour of ants and their food-gathering process when following pheromone trails, or similarly termites and their mound-building process. What is interesting about this mechanism is that highly organised societies are achieved without any apparent management structure. Stigmergic behaviour is implicit in the Web, where the volume of users provides a self-organisation and self-contextualisation of content in sites which facilitate collaboration. However, the majority of content is generated by a minority of the Web participants. A significant contribution from this research would be to create a model of Web stigmergy, identifying virtual pheromones and their importance in the collaborative process. This paper explores how exploiting stigmergy has the potential to provide a valuable mechanism for identifying and analysing online user behaviour, recording actionable knowledge otherwise lost in existing Web interaction dynamics. Ultimately this might assist us in building better collaborative Web sites.
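As a rough illustration of the "virtual pheromone" idea, the sketch below lets user interactions deposit pheromone on content items while the pheromone evaporates over time, so frequently and recently used content rises in the ranking. The class, deposit amount, and evaporation rate are illustrative assumptions, not a model proposed in the paper.

```python
class PheromoneTrail:
    """Toy model of virtual pheromone: deposits from user actions, decay over time."""

    def __init__(self, deposit_amount=1.0, evaporation_rate=0.1):
        self.levels = {}
        self.deposit_amount = deposit_amount
        self.evaporation_rate = evaporation_rate

    def deposit(self, item):
        # each interaction (view, edit, link) reinforces the trail on that item
        self.levels[item] = self.levels.get(item, 0.0) + self.deposit_amount

    def evaporate(self):
        # periodic decay lets stale content fade from the ranking
        for item in self.levels:
            self.levels[item] *= (1.0 - self.evaporation_rate)

    def ranked(self):
        return sorted(self.levels.items(), key=lambda kv: kv[1], reverse=True)

trail = PheromoneTrail()
for page in ["faq", "faq", "news", "faq", "news", "contact"]:
    trail.deposit(page)
trail.evaporate()
print(trail.ranked())  # 'faq' first, then 'news', then 'contact'
```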
Abstract:
Nowadays people heavily rely on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This could pose serious difficulties to users seeking information or knowledge from different lingual sources, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in an alternative language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery in different language domains. This study is specifically focused on Chinese / English link discovery (C/ELD). Chinese / English link discovery is a special case of the cross-lingual link discovery task. It involves tasks including natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To justify the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to the research on natural language processing and cross-lingual information retrieval in CLLD: 1) a new simple, but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated for achieving a high precision of English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments on automatic generation of cross-lingual links that were carried out as part of the study. The overall major contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. It is important in CLLD evaluation to have this framework, which helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis have been utilised to quantify the system performance in the NTCIR-9 Crosslink task, which is the first information retrieval track of this kind.
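The link mining idea, estimating anchor probabilities from the existing link structure, can be sketched roughly as follows. The toy data, threshold, and function names are assumptions for illustration rather than the thesis implementation.

```python
from collections import Counter, defaultdict

# (anchor text, target article) pairs harvested from existing links (toy data)
existing_links = [
    ("queensland", "Queensland"),
    ("queensland", "Queensland"),
    ("queensland", "University_of_Queensland"),
    ("information retrieval", "Information_retrieval"),
]

anchor_targets = defaultdict(Counter)
for anchor, target in existing_links:
    anchor_targets[anchor.lower()][target] += 1

def recommend_link(anchor, min_probability=0.5):
    """Suggest the most probable target for an anchor phrase, if confident enough."""
    counts = anchor_targets.get(anchor.lower())
    if not counts:
        return None
    target, n = counts.most_common(1)[0]
    probability = n / sum(counts.values())
    return (target, probability) if probability >= min_probability else None

print(recommend_link("Queensland"))  # ('Queensland', 0.666...)
```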
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have proven the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, hence allowing analyses that would have been prohibitive on a single computer.
Abstract:
A business process is often modeled using some kind of a directed flow graph, which we call a workflow graph. The Refined Process Structure Tree (RPST) is a technique for workflow graph parsing, i.e., for discovering the structure of a workflow graph, which has various applications. In this paper, we provide two improvements to the RPST. First, we propose an alternative way to compute the RPST that is simpler than the one developed originally. In particular, the computation reduces to constructing the tree of the triconnected components of a workflow graph in the special case when every node has at most one incoming or at most one outgoing edge. Such graphs occur frequently in applications. Secondly, we extend the applicability of the RPST. Originally, the RPST was applicable only to graphs with a single source and single sink such that the completed version of the graph is biconnected. We lift both restrictions. Therefore, the RPST is then applicable to arbitrary directed graphs such that every node is on a path from some source to some sink. This includes graphs with multiple sources and/or sinks and disconnected graphs.
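The special case mentioned above, where every node has at most one incoming or at most one outgoing edge, is straightforward to test. A minimal sketch of that structural check follows; the edge-list representation and the example graph are assumptions for illustration.

```python
from collections import defaultdict

def has_simple_node_structure(edges):
    """True if every node has at most one incoming or at most one outgoing edge."""
    indegree, outdegree = defaultdict(int), defaultdict(int)
    nodes = set()
    for source, target in edges:
        outdegree[source] += 1
        indegree[target] += 1
        nodes.update((source, target))
    return all(indegree[n] <= 1 or outdegree[n] <= 1 for n in nodes)

# Example workflow graph: a split followed by a join; each node only splits or joins
edges = [("start", "split"), ("split", "a"), ("split", "b"),
         ("a", "join"), ("b", "join"), ("join", "end")]
print(has_simple_node_structure(edges))  # True
```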
Abstract:
This paper reports on a current initiative at Queensland University of Technology to provide timely, flexible and sustainable training and support to academic staff in blended learning and associated techno-pedagogies via a web-conferencing classroom and collaboration tool, Elluminate Live!. This technology was first introduced to QUT in 2008 as part of the university's ongoing commitment to meeting the learning needs of diverse student cohorts. The centralised Learning Design team, in collaboration with the university's department of eLearning Services, was given the task of providing training and support to academic staff in the effective use of the technology for teaching and learning, as part of the team's ongoing brief to support and enhance the provision of blended learning throughout the university. The resulting program, "Learning Design Live" (LDL), is informed by Rogers' theory of innovation and diffusion (2003) and structured according to Wilson's framework for faculty development (2007). This paper discusses the program's design and structure, considers the program's impact on academic capacity in blended learning within the institution, and reflects on future directions for the program and emerging insights into blended learning and participant engagement for both staff and students.
Abstract:
This paper presents a new approach to web browsing in situations where the user can only provide the device with a single input command device (switch). Switches have been developed for example for people with locked-in syndrome and are used in combination with scanning to navigate virtual keyboards and desktop interfaces. Our proposed approach leverages the hierarchical structure of webpages to operate a multi-level scan of actionable elements of webpages (links or form elements). As there are a few methods already existing to facilitate browsing under these conditions, we present a theoretical usability evaluation of our approach in comparison to the existing ones, which takes into account not only the average time taken to reach any part of a web page (such as a link or a form) but also the number of clicks necessary to reach the goal. We argue that these factors contribute together to usability. In addition, we propose that our approach presents additional usability benefits.
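The kind of theoretical comparison described above can be roughed out as in the sketch below: the expected number of scan steps to reach a target element under a flat scan of all elements versus a two-level scan that first selects a group. The step model and group size are illustrative assumptions, not the paper's actual evaluation.

```python
import math

def flat_scan_steps(n_elements):
    """Average number of highlight steps before the target is reached in a flat scan."""
    return (n_elements + 1) / 2

def two_level_scan_steps(n_elements, group_size):
    """Average steps when elements are grouped: select the group, then the element."""
    n_groups = math.ceil(n_elements / group_size)
    return (n_groups + 1) / 2 + (group_size + 1) / 2

n_links = 60
print(flat_scan_steps(n_links))          # 30.5 steps on average, one switch click
print(two_level_scan_steps(n_links, 8))  # 9.0 steps on average, but two switch clicks
```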
Abstract:
The aim of this project was to develop a general theory of stigmergy and a software design pattern to build collaborative websites. Stigmergy is a biological term used when describing some insect swarm-behaviour where 'food gathering' and 'nest building' activities demonstrate the emergence of self-organised societies achieved without an apparent management structure. The results of the project are an abstract model of stigmergy and a software design pattern for building Web 2.0 components exploiting this self-organizing phenomenon. A proof-of-concept implementation was also created demonstrating potential commercial viability for future website projects.
Abstract:
Information available on company websites can help people navigate to the offices of groups and individuals within the company. Automatically retrieving this within-organisation spatial information is a challenging AI problem. This paper introduces a novel unsupervised pattern-based method to extract within-organisation spatial information by taking advantage of HTML structure patterns, together with a novel Conditional Random Fields (CRF)-based method to identify different categories of within-organisation spatial information. The results show that the proposed method can achieve a high performance in terms of F-Score, indicating that this purely syntactic method based on web search and an analysis of HTML structure is well-suited for retrieving within-organisation spatial information.
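A very small sketch of the pattern-based side of this idea: HTML structure (here, table rows) is used to pair person names with location strings matched by simple patterns. The markup, regular expressions, and pairing rule are assumptions for illustration; the paper additionally classifies the extracted strings with a CRF.

```python
import re

html = """
<table>
  <tr><td>Dr Jane Smith</td><td>Room O-514, Gardens Point</td></tr>
  <tr><td>Prof John Doe</td><td>Level 7, S Block</td></tr>
</table>
"""

# a table row is an HTML structure pattern that pairs a name cell with a location cell
row_pattern = re.compile(r"<tr><td>(.*?)</td><td>(.*?)</td></tr>")
location_pattern = re.compile(r"\b(Room|Level|Block|Building)\b", re.IGNORECASE)

for person, cell in row_pattern.findall(html):
    if location_pattern.search(cell):
        print(person, "->", cell)
```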
Abstract:
Purpose: Peer-review programmes in radiation oncology are used to facilitate the process and evaluation of clinical decision-making. However, web-based peer-review methods are still uncommon. This study analysed an inter-centre, web-based peer-review case conference as a method of facilitating the decision-making process in radiation oncology. Methodology: A benchmark form was designed based on the American Society for Radiation Oncology targets for radiation oncology peer review. This was used for evaluating the contents of the peer-review case presentations on 40 cases, selected from three participating radiation oncology centres. A scoring system was used for comparison of data, and a survey was conducted to analyse the experiences of radiation oncology professionals who attended the web-based peer-review meetings in order to identify priorities for improvement. Results: The mean scores for the evaluations were 82.7, 84.5, 86.3 and 87.3% for cervical, prostate, breast and head and neck presentations, respectively. The survey showed that radiation oncology professionals were confident about the role of web-based peer review in facilitating the sharing of good practice, stimulating professionalism and promoting professional growth. The participants were satisfied with the quality of the audio and visual aspects of the web-based meeting. Conclusion: The results of this study suggest that simple inter-centre web-based peer-review case conferences are a feasible technique for peer review in radiation oncology. Limitations such as data security and confidentiality can be overcome by the use of appropriate structure and technology. To drive the issues of quality and safety a step further, small radiotherapy departments may need to consider web-based peer-review case conferences as part of their routine quality assurance practices.
Abstract:
The mixed double-decker Eu[Pc(15C5)4](TPP) (1) was obtained by base-catalysed tetramerisation of 4,5-dicyanobenzo-15-crown-5 using the half-sandwich complex Eu(TPP)(acac) (acac = acetylacetonate), generated in situ, as the template. For comparative studies, the mixed triple-decker complexes Eu2[Pc(15C5)4](TPP)2 (2) and Eu2[Pc(15C5)4]2(TPP) (3) were also synthesised by the raise-by-one-story method. These mixed ring sandwich complexes were characterised by various spectroscopic methods. Up to four one-electron oxidations and two one-electron reductions were revealed by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). As shown by electronic absorption and infrared spectroscopy, supramolecular dimers (SM1 and SM3) were formed from the corresponding double-decker 1 and triple-decker 3 in the presence of potassium ions in MeOH/CHCl3.