967 results for Cartes-Col·lections


Relevance:

10.00%

Publisher:

Abstract:

Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, so finding and adapting these models may be more effective than developing them from scratch. As process model repositories may be large, query evaluation may be time-consuming. Hence, we investigate the use of indexes to speed up this evaluation process. Experiments are conducted to demonstrate that our proposal achieves a significant reduction in query evaluation time.
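The abstract does not say which index structure is used, but a simple way to picture the idea is a label-based inverted index over the repository: each activity label points to the models containing it, so a query is evaluated in full only against the candidate models the index returns. A minimal, hypothetical sketch (model names and labels are invented):

    # Hypothetical sketch: a label-based inverted index over a process model repository.
    from collections import defaultdict

    class ProcessModelIndex:
        def __init__(self):
            self.by_label = defaultdict(set)  # activity label -> model ids
            self.models = {}                  # model id -> set of activity labels

        def add_model(self, model_id, activity_labels):
            self.models[model_id] = set(activity_labels)
            for label in activity_labels:
                self.by_label[label].add(model_id)

        def candidates(self, query_labels):
            """Models containing every query label; only these candidates
            need the full (and expensive) query evaluation step."""
            sets = [self.by_label.get(label, set()) for label in query_labels]
            return set.intersection(*sets) if sets else set()

    repo = ProcessModelIndex()
    repo.add_model("claims-v1", ["Register claim", "Assess claim", "Pay claim"])
    repo.add_model("claims-v2", ["Register claim", "Reject claim"])
    print(repo.candidates(["Register claim", "Assess claim"]))  # {'claims-v1'}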

Relevance:

10.00%

Publisher:

Abstract:

"What was silent will speak, what is closed will open and will take on a voice" (Paul Virilio). The fundamental problem in dealing with the digital is that we are forced to contend with a fundamental deconstruction of form. A deconstruction that renders our content and practice into a single state that can be openly and easily manipulated, reimagined and mashed together in rapid time to create completely unique artefacts and potentially unwranglable jumbles of data. Once our work is essentially broken down into this series of number sequences (or bytes), our sound, images, movies and documents (our memory files), we are left with nothing but choice... and this is the key concern. This absence of form transforms our work into new collections and poses unique challenges for the artist seeking opportunities to exploit the potential of digital deconstruction. It is through this struggle with the absent form that we are able to thoroughly explore the latent potential of content, exploit modern abstractions of time and devise approaches within our practice that actively deal with the digital as an essential matter of course.

Relevance:

10.00%

Publisher:

Abstract:

Last year, European Intellectual Property Review published an article comparing the latest version of the proposed US database legislation, the Collections of Information Antipiracy Bill, with the UK's Copyright and Rights in Databases Regulations 1997. Subsequently a new US Bill, the Consumer and Investor Access to Information Act, has emerged, the Antipiracy Bill has been amended and much debate has occurred, but the US seems no closer to enacting database legislation. This article briefly outlines the background to the US legislative efforts, examines the two Bills and draws some comparisons with the UK Regulations. A study of the US Bills clearly demonstrates the starkly divided opinion on database protection held by the Bills' proponents and the principal lobby groups driving the legislative efforts: the Antipiracy Bill is very protective of database producers' interests, whereas the Access Bill is heavily user-oriented. If the US experience is any indication, there will be a long horizon involved in achieving any consensus on international harmonisation of this difficult area.

Relevance:

10.00%

Publisher:

Abstract:

Campylobacter jejuni, followed by Campylobacter coli, contributes substantially to the economic and public health burden attributed to food-borne infections in Australia. Genotypic characterisation of isolates has provided new insights into the epidemiology and pathogenesis of C. jejuni and C. coli. However, currently available methods are not conducive to the large-scale epidemiological investigations that are necessary to elucidate the global epidemiology of these common food-borne pathogens. This research aims to develop high-resolution C. jejuni and C. coli genotyping schemes that are convenient for high-throughput applications. Real-time PCR and High Resolution Melt (HRM) analysis are fundamental to the genotyping schemes developed in this study and enable rapid, cost-effective interrogation of a range of different polymorphic sites within the Campylobacter genome. While the sources and routes of transmission of campylobacters are unclear, handling and consumption of poultry meat is frequently associated with human campylobacteriosis in Australia. Therefore, chicken-derived C. jejuni and C. coli isolates were used to develop and verify the methods described in this study.

The first aim of this study describes the application of MLST-SNP (Multi Locus Sequence Typing Single Nucleotide Polymorphisms) + binary typing to 87 chicken C. jejuni isolates using real-time PCR analysis. These typing schemes were developed previously by our research group using isolates from campylobacteriosis patients. The present study showed that SNP and binary typing, alone or in combination, are effective at detecting epidemiological linkage between chicken-derived Campylobacter isolates and enable data comparisons with other MLST-based investigations. SNP + binary types obtained from chicken isolates in this study were compared with a previously SNP + binary and MLST typed set of human isolates. Common genotypes between the two collections of isolates were identified, and ST-524 represented a clone that could be worth monitoring in the chicken meat industry. In contrast, ST-48, mainly associated with bovine hosts, was abundant in the human isolates. This genotype was, however, absent in the chicken isolates, indicating the role of non-poultry sources in causing human Campylobacter infections. This demonstrates the potential application of SNP + binary typing for epidemiological investigations and source tracing.

While MLST SNPs and binary genes comprise the more stable backbone of the Campylobacter genome and are indicative of long-term epidemiological linkage of the isolates, the development of a High Resolution Melt (HRM) based curve analysis method to interrogate the hypervariable Campylobacter flagellin-encoding gene (flaA) is described in Aim 2 of this study. The flaA gene product appears to be an important pathogenicity determinant of campylobacters and is therefore a popular target for genotyping, especially for short-term epidemiological studies such as outbreak investigations. HRM curve analysis based flaA interrogation is a single-step, closed-tube method that provides portable data that can be easily shared and accessed. Critical to the development of flaA HRM was the use of flaA-specific primers that did not amplify the flaB gene. HRM curve analysis based flaA interrogation was successful at discriminating the 47 sequence variants identified within the 87 C. jejuni and 15 C. coli isolates and correlated with the epidemiological background of the isolates.
In the combinatorial format, the resolving power of flaA was additive to that of SNP + binary typing and CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) HRM, and fits the PHRANA (Progressive Hierarchical Resolving Assays using Nucleic Acids) approach to genotyping. The use of statistical methods to analyse the HRM data enhanced the sophistication of the method. Therefore, flaA HRM is a rapid and cost-effective alternative to gel- or sequence-based flaA typing schemes.

Aim 3 of this study describes the development of a novel bioinformatics-driven method to interrogate Campylobacter MLST gene fragments using HRM, called ‘SNP Nucleated Minim MLST’ or ‘Minim typing’. The method involves HRM interrogation of MLST fragments that encompass highly informative “Nucleating SNPs” to ensure high resolution. Selection of fragments potentially suited to HRM analysis was conducted in silico using i) the “Minimum SNPs” and ii) the new ‘HRMtype’ software packages. Species-specific sets of six “Nucleating SNPs” and six HRM fragments were identified for both C. jejuni and C. coli to ensure high typeability and resolution relevant to the MLST database. ‘Minim typing’ was tested empirically by typing 15 C. jejuni and five C. coli isolates. The association of clonal complexes (CCs) with each isolate by ‘Minim typing’ and by SNP + binary typing was used to compare the two MLST interrogation schemes. The CCs linked with each C. jejuni isolate were consistent for both methods. Thus, ‘Minim typing’ is an efficient and cost-effective method to interrogate MLST genes. However, it is not expected to be independent of, or meet the resolution of, sequence-based MLST gene interrogation. ‘Minim typing’ in combination with flaA HRM is envisaged to comprise a highly resolving combinatorial typing scheme developed around the HRM platform that is amenable to automation and multiplexing.

The genotyping techniques described in this thesis involve the combinatorial interrogation of differentially evolving genetic markers on the unified real-time PCR and HRM platform. They provide high resolution and are simple, cost-effective and ideally suited to rapid and high-throughput genotyping of these common food-borne pathogens.
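The abstract does not state the selection criterion used by the ‘Minimum SNPs’ and ‘HRMtype’ software, but informative-SNP selection of this kind is commonly framed as maximising a resolving-power measure such as Simpson's index of diversity over known sequence types. A toy sketch under that assumption (the allele profiles below are invented):

    # Illustrative sketch: greedy selection of "nucleating" SNP positions that maximise
    # Simpson's index of diversity (D) over a set of known sequence-type profiles.
    from collections import Counter

    def simpsons_d(groups):
        """Simpson's index of diversity for a partition of n isolates into groups."""
        n = sum(groups)
        if n < 2:
            return 0.0
        return 1.0 - sum(g * (g - 1) for g in groups) / (n * (n - 1))

    def resolution(profiles, positions):
        """D of the partition induced by the alleles at the chosen positions."""
        groups = Counter(tuple(p[i] for i in positions) for p in profiles)
        return simpsons_d(list(groups.values()))

    def greedy_snp_set(profiles, n_positions, n_select):
        chosen = []
        for _ in range(n_select):
            best = max((p for p in range(n_positions) if p not in chosen),
                       key=lambda p: resolution(profiles, chosen + [p]))
            chosen.append(best)
        return chosen

    # Toy allele profiles (rows: sequence types, columns: candidate SNP positions).
    profiles = ["AGTC", "AGCC", "TGTC", "TCTA", "ACCA"]
    print(greedy_snp_set(profiles, n_positions=4, n_select=2))  # e.g. [0, 1] here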

Relevance:

10.00%

Publisher:

Abstract:

This paper describes the approach taken to the clustering task at INEX 2009 by a group at the Queensland University of Technology. The Random Indexing (RI) K-tree has been used with a representation that is based on the semantic markup available in the INEX 2009 Wikipedia collection. The RI K-tree is a scalable approach to clustering large document collections. This approach has produced quality clustering when evaluated using two different methodologies.
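As a rough illustration of the Random Indexing part of this representation: each term receives a sparse random index vector, and a document vector is the sum of the index vectors of its terms, giving a reduced-dimensional representation without an explicit matrix factorisation. The dimensionality, sparsity and per-term seeding below are illustrative choices, and the INEX semantic markup used in the paper is omitted:

    # Minimal Random Indexing sketch: sparse ternary index vectors per term,
    # document vectors formed by summing the index vectors of their terms.
    import random

    DIM, NONZERO = 1000, 10   # arbitrary illustrative settings

    def index_vector(term, dim=DIM, nonzero=NONZERO):
        rng = random.Random(term)                  # deterministic per term
        positions = rng.sample(range(dim), nonzero)
        return {p: (1 if i % 2 == 0 else -1) for i, p in enumerate(positions)}

    def document_vector(tokens):
        vec = [0] * DIM
        for term in tokens:
            for pos, val in index_vector(term).items():
                vec[pos] += val
        return vec

    doc = "random indexing scales to large document collections".split()
    print(sum(1 for v in document_vector(doc) if v != 0))  # number of active dimensions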

Relevance:

10.00%

Publisher:

Abstract:

Through an exploration of representations of metamorphosis and the creation of a body of written work, this thesis uses a critical examination of theoretical approaches to metamorphosis, in combination with textual analysis of representations of metamorphosis and creative practice as research, to arrive at the beginnings of an ethic of writing. The creative work, The Coming, consists of a collection of short fiction, The Coming, and two collections of poetry, Orison and Milagros. The exegesis, Transhuman Change: towards an ethic of writing, explores theories about metamorphosis as a figure for writing, as a trope, and as a motif for exploring identity, to contextualise the analysis of representations of metamorphosis from which the ethic is developed. With reference to the psychosexual development theory of Jacques Lacan and Elaine Scarry's philosophy of the body, pain, language and creativity, the exegesis examines existing approaches to metamorphosis and uses supplementary textual analysis of influential representations of metamorphosis, from Ovid to Pygmalion, X-Men and Extreme Makeover, to explore assumptions about the body, language, the self and gender in Western culture. The limitations of the performance of representations of metamorphosis as a figure for the self's survival of death are considered in the light of voice as a metonym for self, to propose an ethic which valorises life. The experience of sex and the construction of gender in representations of metamorphosis are considered in the light of Lacan's theory of desire and Scarry's theory of the body and language, to propose an ethic of representing gender ironically. The motif of the faithless lover and the Pygmalion myth are considered in the light of the (m)other's role in language, to propose an ethic in which indeterminacy constitutes the condition for being aware of oneself among selves. Each of the three proposals is discussed in relation to the short fiction, memoir and poems produced in the course of this research, to test their limits and possibilities as the foundation of an emerging ethic of writing.

Relevance:

10.00%

Publisher:

Abstract:

Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result, Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information. If these groups are misleading then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool.

Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results.

Evaluating document clustering is a difficult task. Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a “ground truth” solution. This allows comparison between different approaches. As the “ground truth” is created by humans it can suffer from the fact that not every human interprets a topic in the same manner. Whether a document belongs to a particular topic or not can be subjective.
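As an example of the extrinsic evaluation discussed above, purity scores a clustering against a human-labelled ground truth by crediting each cluster with its most frequent true category; like any extrinsic measure, it inherits whatever subjectivity is in that ground truth. A small self-contained sketch (labels invented):

    # Purity: an extrinsic clustering quality measure against a "ground truth" labelling.
    from collections import Counter

    def purity(clusters, truth):
        """clusters: dict cluster_id -> list of doc ids
           truth:    dict doc id -> ground-truth category"""
        total = sum(len(docs) for docs in clusters.values())
        correct = sum(Counter(truth[d] for d in docs).most_common(1)[0][1]
                      for docs in clusters.values() if docs)
        return correct / total

    clusters = {0: ["d1", "d2", "d3"], 1: ["d4", "d5"]}
    truth = {"d1": "sport", "d2": "sport", "d3": "politics",
             "d4": "politics", "d5": "politics"}
    print(purity(clusters, truth))  # 0.8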

Relevance:

10.00%

Publisher:

Abstract:

On 13 February 2008, Prime Minister Kevin Rudd made an apology to Australia's Indigenous Peoples on behalf of the Australian Parliament. The State Library of Queensland (SLQ), with assistance from Queensland University of Technology and Queensland's Aboriginal and Torres Strait Islander communities, has captured responses to this historic event. ‘Responses to the 2008 Apology’ is a collection of digital stories created as part of this research initiative. Until recently, digital storytelling has not generally been treated as a necessary addition to the research collections of Australian libraries. However, libraries increasingly aim to promote new literacies and active audiences as they seek innovative ways to encourage life-long learning by their users, and digital storytelling is one methodology that can contribute to these goals. The State Library of Queensland is the only Australian State Library to have undertaken a major role in the collection of digital stories. It currently leads the way with its Queensland Stories digital storytelling program. This presentation will report findings and outcomes from this research project.

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
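A rough way to picture the two operators is at the level of plain edge sets: the merged model is the union of the variants' edges, and a digest keeps only the edges that recur across variants. The actual algorithms work on annotated process graphs and track the provenance of each variant, so the following is only an illustrative simplification:

    # Simplified illustration: merged model = union of the variants' edges,
    # digest = edges shared by at least `min_variants` of them.
    from collections import Counter

    def merge(variants):
        merged = set()
        for edges in variants:
            merged |= edges
        return merged

    def digest(variants, min_variants=2):
        counts = Counter(edge for edges in variants for edge in edges)
        return {edge for edge, c in counts.items() if c >= min_variants}

    v1 = {("Receive claim", "Check policy"), ("Check policy", "Pay")}
    v2 = {("Receive claim", "Check policy"), ("Check policy", "Reject")}
    print(merge([v1, v2]))   # union of both variants
    print(digest([v1, v2]))  # {('Receive claim', 'Check policy')}, the recurring fragment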

Relevance:

10.00%

Publisher:

Abstract:

The Paediatric Spine Research group was formed in 2002 to perform high quality research into the prevention and management of spinal deformity, with an emphasis on scoliosis. The group has successfully built collaborative bridges between the scientific and research expertise at QUT, and the clinical skills and experience of the spinal orthopaedic surgeons at the Mater Children’s Hospital in Brisbane. Clinical and biomechanical research is now possible as a result of the development of detailed databases of patients who have innovative and unique surgical interventions for spinal deformity such as thoracoscopic scoliosis correction, thoracoscopic staple insertion for juvenile idiopathic scoliosis and minimally invasive growing rods. The Mater in Brisbane provides these unique datasets of spinal deformity surgery patients, whose procedures are not being performed anywhere else in the Southern Hemisphere. The most detailed is a database of thoracoscopic scoliosis correction surgery which now contains 180 patients with electronic collections of X-Rays, photographs and patient questionnaires. With ethics approval, a subset of these patients has had CT scans, and a further subset have had MRI scans with and without a compressive load to simulate the erect standing position. This database has to date contributed to 17 international refereed journal papers, a further 7 journal papers either under review or in final preparation, 53 national conference presentations and 35 international conference presentations. Major findings from selected journal publications will be presented. It is anticipated that as the surgical databases grow they will continue to provide invaluable clinical data which will feed into clinically relevant projects driven by both medical and engineering researchers whose findings will benefit spinal deformity patients and scientific knowledge worldwide.

Relevance:

10.00%

Publisher:

Abstract:

As organizations reach higher levels of Business Process Management maturity, they tend to accumulate large collections of process models. These repositories may contain thousands of activities and be managed by different stakeholders with varying skills and responsibilities. However, while being of great value, these repositories induce high management costs. Thus, it becomes essential to keep track of the various model versions as they may mutually overlap, supersede one another and evolve over time. We propose an innovative versioning model and associated storage structure, specifically designed to maximize sharing across process model versions, and to automatically handle change propagation. The focal point of this technique is to version single process model fragments, rather than entire process models. Indeed empirical evidence shows that real-life process model repositories have numerous duplicate fragments. Experiments on two industrial datasets confirm the usefulness of our technique.
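A toy illustration of the storage idea: if each version is stored as a list of references to content-addressed fragments, two versions that share a fragment store it only once, and a change to a shared fragment can be located, and then propagated, through those references. The fragment granularity, hashing scheme and propagation rule below are assumptions for illustration, not the design described in the paper:

    # Toy content-addressed fragment store: process model versions reference fragment
    # hashes, so identical fragments are stored once and shared across versions.
    import hashlib

    class FragmentStore:
        def __init__(self):
            self.fragments = {}   # hash -> fragment body
            self.versions = {}    # (model, version) -> list of fragment hashes

        def _put(self, fragment_body):
            h = hashlib.sha1(fragment_body.encode()).hexdigest()
            self.fragments.setdefault(h, fragment_body)
            return h

        def commit(self, model, version, fragment_bodies):
            self.versions[(model, version)] = [self._put(b) for b in fragment_bodies]

        def versions_using(self, fragment_hash):
            """Where a change to this fragment would need to be propagated."""
            return [key for key, hashes in self.versions.items() if fragment_hash in hashes]

    store = FragmentStore()
    store.commit("claims", 1, ["register claim", "assess claim", "pay claim"])
    store.commit("claims", 2, ["register claim", "assess claim", "reject claim"])
    shared = hashlib.sha1("assess claim".encode()).hexdigest()
    print(len(store.fragments))          # 4 fragments stored, not 6
    print(store.versions_using(shared))  # both versions reference the shared fragment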

Relevance:

10.00%

Publisher:

Abstract:

Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, so finding and adapting these models may be more effective and less error-prone than developing them from scratch. Since process model repositories may be large, query evaluation may be time-consuming. Hence, we investigate the use of indexes to speed up this evaluation process. To make our approach more applicable, we consider the semantic similarity between labels. Experiments are conducted to demonstrate that our approach is efficient.
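The abstract does not define the semantic similarity measure, which may well be based on synonyms or an ontology; as a stand-in, the sketch below matches a query label against repository labels using a normalised string similarity and a threshold, so near-identical labels still hit the index:

    # Stand-in for label similarity: a query label matches a repository label
    # whenever their similarity exceeds a threshold.
    from difflib import SequenceMatcher

    def label_similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def matching_labels(query_label, repository_labels, threshold=0.8):
        return [l for l in repository_labels
                if label_similarity(query_label, l) >= threshold]

    labels = ["Register claim", "Registration of claim", "Reject claim"]
    print(matching_labels("Register claims", labels))  # ['Register claim']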

Relevance:

10.00%

Publisher:

Abstract:

The impact of citizen journalism on the established journalism industry, and its role in the future news media mix, remain key topics in current journalism studies research, not least in the context of the current crisis facing many news organisations around the globe. The centrality of this issue is also reflected in the substantial number of ‘citizen journalism’ monographs and collections published across the last few years (see for example Paterson & Domingo, 2008; Boler, 2008; Allan & Thorsen, 2009; Neuberger, Nuernbergk, & Rischke, 2009; Gordon, 2009; Russell & Echchaibi, 2009; Meikle & Redden, forthcoming). With relatively few notable exceptions, much of the research and wider public discussion surrounding the citizen journalism phenomenon has employed a relatively narrow definition of the term, with many researchers focussing on citizen journalism projects which provide mainly political news and commentary, and on their role in influencing the political process especially in countries like the U.S.

Relevance:

10.00%

Publisher:

Abstract:

The Australian National Data Service (ANDS) was established in 2008 and aims to: influence national policy in the area of data management in the Australian research community; inform best practice for the curation of data; and transform the disparate collections of research data around Australia into a cohesive collection of research resources. One high-profile ANDS activity is to establish the population of Research Data Australia, a set of web pages describing data collections produced by or relevant to Australian researchers. It is designed to promote the visibility of research data collections in search engines, in order to encourage their re-use. As part of activities associated with the Australian National Data Service, an increasing number of Australian universities are choosing to implement VIVO, not as a platform to profile information about researchers, but as a 'metadata store' platform to profile information about institutional research data sets, both locally and as part of a national data commons. To date, the University of Melbourne, Griffith University, the Queensland University of Technology, and the University of Western Australia have all chosen to implement VIVO, with interest from other universities growing.

Relevance:

10.00%

Publisher:

Abstract:

For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: accommodating phone recognition errors and modelling uncertainty with probabilistic scores.

A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search by at least an order of magnitude with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS, by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing. Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability.

The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework is proposed for discriminatively training such a system to directly maximise the Figure of Merit. This results in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing for a substantial index compression factor in addition to an overall gain in the Figure of Merit.

These contributions significantly advance the state-of-the-art in phonetic STD by improving the utility of such systems in a wide range of applications.
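To make the approximate phone sequence matching at the heart of DMLS concrete, the sketch below scores a search term's phone sequence against an indexed sequence with a weighted edit distance whose substitution costs come from a phone error cost model; the cost table is invented, and DMLS itself operates over lattices and a hyper-sequence database rather than single sequences:

    # Weighted edit distance between a query phone sequence and an indexed phone
    # sequence, with substitution costs drawn from a phone error cost model.
    SUB_COST = {("ah", "ax"): 0.3, ("p", "b"): 0.5}   # confusable phone pairs cost less
    INS_DEL_COST = 1.0

    def sub_cost(a, b):
        if a == b:
            return 0.0
        return SUB_COST.get((a, b)) or SUB_COST.get((b, a)) or 1.0

    def phone_match_cost(query, indexed):
        m, n = len(query), len(indexed)
        d = [[0.0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            d[i][0] = i * INS_DEL_COST
        for j in range(1, n + 1):
            d[0][j] = j * INS_DEL_COST
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i][j] = min(d[i - 1][j] + INS_DEL_COST,
                              d[i][j - 1] + INS_DEL_COST,
                              d[i - 1][j - 1] + sub_cost(query[i - 1], indexed[j - 1]))
        return d[m][n]

    # A putative occurrence would be accepted if its match cost is below a threshold.
    print(phone_match_cost(["k", "ah", "t"], ["k", "ax", "t"]))  # 0.3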