970 results for data collections
Abstract:
This study is the first to employ an epidemiological framework to evaluate the ‘fit-for-purpose’ of ICD-10-AM external cause of injury codes, ambulance and hospital clinical documentation for injury surveillance. Importantly, this thesis develops an evidence-based platform to guide future improvements in routine data collections used to inform the design of effective injury prevention strategies. Quantification of the impact of ambulance clinical records on the overall information quality of Queensland hospital morbidity data collections for injury causal information is a unique and notable contribution of this study.
Abstract:
A varicella-zoster virus (VZV) vaccine is available overseas, and universal immunisation in childhood is recommended in the United States.1 Any decision to introduce the vaccine to Australia must be based on an assessment of potential benefits and harms. While there has been some assessment of VZV significance in populations in southern Australia,2 the impact on the NT population is not known. It is not a notifiable condition and information on morbidity and mortality is limited to a few data collections. These are hospital separation data, deaths registers, and in 1995 the inclusion of VZV congenital and neonatal complications in the Australian Paediatric Surveillance System. Hospital separation data were analysed to assess the importance of VZV as a cause of severe morbidity and mortality in the NT population.
Abstract:
This paper presents a summary of the key findings of the TTF TPACK Survey developed and administered for the Teaching the Teachers for the Future (TTF) Project implemented in 2011. The TTF Project, funded by an Australian Government ICT Innovation Fund grant, involved all 39 Australian Higher Education Institutions which provide initial teacher education. TTF data collections were undertaken at the end of Semester 1 (T1) and at the end of Semester 2 (T2) in 2011. A total of 12,881 participants completed the first survey (T1) and 5,809 participants completed the second survey (T2). Groups of like-named items from the T1 survey were subjected to a battery of complementary data analysis techniques. The psychometric properties of the four scales (Confidence - teacher items; Usefulness - teacher items; Confidence - student items; Usefulness - student items) were confirmed at both T1 and T2. Among the key findings summarised, at the national level, the scale Confidence to use ICT as a teacher showed measurable growth across the whole scale from T1 to T2, and the scale Confidence to facilitate student use of ICT also showed measurable growth across the whole scale from T1 to T2. Additional key TTF TPACK Survey findings are summarised.
Abstract:
We identify relation completion (RC) as a recurring problem that is central to the success of novel big data applications such as Entity Reconstruction and Data Enrichment. Given a semantic relation, RC attempts to link entity pairs between two entity lists under the relation. To accomplish the RC goals, we propose to formulate search queries for each query entity α based on some auxiliary information, so as to detect its target entity β from the set of retrieved documents. For instance, a pattern-based method (PaRE) uses extracted patterns as the auxiliary information in formulating search queries. However, high-quality patterns may decrease the probability of finding suitable target entities. As an alternative, we propose the CoRE method, which uses context terms learned from the text surrounding the expression of a relation as the auxiliary information in formulating queries. The experimental results based on several real-world web data collections demonstrate that CoRE reaches a much higher accuracy than PaRE for the purpose of RC.
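The query-formulation step described above can be illustrated with a minimal sketch. Everything here is hypothetical — the function name, the relation, and the context terms and weights — and it assumes the context terms have already been learned and weighted for the relation, which is the part of CoRE the abstract does not detail:

```python
# Hypothetical sketch of CoRE-style query formulation for relation completion.
def formulate_query(query_entity, context_terms, top_n=3):
    """Combine the query entity with the strongest learned context terms."""
    terms = sorted(context_terms, key=context_terms.get, reverse=True)[:top_n]
    return f'"{query_entity}" ' + " ".join(terms)

# Illustrative context terms for a relation like "company -> headquarters",
# weighted by how strongly each term surrounds expressions of the relation.
context = {"headquartered": 0.9, "based": 0.7, "offices": 0.4, "founded": 0.2}
print(formulate_query("Acme Corp", context))
# → "Acme Corp" headquartered based offices
```

The resulting query would then be sent to a search engine, with the target entity detected among the retrieved documents.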
Abstract:
Ever since Cox et al. published their paper, “A Secure, Robust Watermark for Multimedia”, in 1996 [6], there has been tremendous progress in multimedia watermarking. The same pattern re-emerged with Agrawal and Kiernan publishing their work “Watermarking Relational Databases” in 2001 [1]. However, little attention has been given to primitive data collections, with only a handful of research works known to the authors [11, 10]. This is primarily due to the absence of an attribute that differentiates marked items from unmarked items during the insertion and detection processes. This paper presents a distribution-independent watermarking model that is secure against secondary watermarking in addition to conventional attacks such as data addition, deletion and distortion. The low false positives and high capacity provide additional strength to the scheme. These claims are backed by experimental results provided in the paper.
Abstract:
In a pilot application based on a web search engine, called Web-based Relation Completion (WebRC), we propose to join two columns of entities linked by a predefined relation by mining knowledge from the web through a web search engine. To achieve this, a novel retrieval task, Relation Query Expansion (RelQE), is modelled: given an entity (query), the task is to retrieve documents containing entities in a predefined relation to the given one. Solving this problem entails expanding the query before submitting it to a web search engine, to ensure that mostly documents containing the linked entity are returned in the top K search results. In this paper, we propose a novel Learning-based Relevance Feedback (LRF) approach to solve this retrieval task. Expansion terms are learned from training pairs of entities linked by the predefined relation and applied to new entity-queries to find entities linked by the same relation. After describing the approach, we present experimental results on real-world web data collections, which show that the LRF approach always improves the precision of top-ranked search results, by up to 8.6 times over the baseline. Using LRF, WebRC also performs well above the baseline.
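The term-learning idea can be sketched minimally. This is only an illustration under strong assumptions — it treats expansion terms as frequent terms in snippets known to contain linked entity pairs, with a crude length-based stopword filter; the snippets, the relation, the entity name and the scoring are all hypothetical, not the paper's actual LRF method:

```python
# Hypothetical sketch of learning query-expansion terms from training pairs.
from collections import Counter

def learn_expansion_terms(training_snippets, top_n=2):
    """Count terms across snippets known to contain linked entity pairs,
    skipping very short words as a crude stopword filter."""
    counts = Counter(
        t for s in training_snippets for t in s.split() if len(t) > 3
    )
    return [term for term, _ in counts.most_common(top_n)]

# Snippets retrieved for training pairs under a relation like "book -> author".
snippets = [
    "author of the novel published by",
    "novel by the author released in",
]
terms = learn_expansion_terms(snippets)
# The learned terms are appended to a new entity-query before searching.
print(" ".join(['"Jane Doe"'] + terms))
# → "Jane Doe" author novel
```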
Abstract:
The current state of the prefabricated housing market in Australia is systematically profiled, guided by a theoretical systems model. Particular focus is given to two original data collections. The first identifies manufacturers and builders using prefabrication innovations, and the second compares the context for prefabricated housing in Australia with that of key international jurisdictions. The results indicate a small but growing market for prefabricated housing in Australia, often building upon expertise developed through non-residential building applications. The international comparison highlighted the complexity of the interactions between macro policy decisions, historical influences and the uptake of prefabricated housing. The data suggest that factors such as the small scale of the Australian market and a lack of investment in research, development and training have not encouraged prefabrication. A lack of clear regulatory policy surrounding prefabricated housing is common both in Australia and internationally, with local effects highlighted in regard to home warranties and housing finance. Future research should target the continuing lack of consideration of prefabrication from within the housing construction industry, and build upon the research reported in this paper to further quantify the potential end-user market and the continuing development of the industry.
Abstract:
Narrative text is a useful way of identifying injury circumstances from routine emergency department data collections. Automatically classifying narratives using machine learning techniques is promising, as it can reduce the tedious manual classification process. Existing works focus on Naive Bayes, which does not always offer the best performance. This paper proposes Matrix Factorization approaches, along with a learning enhancement process, for this task. The results are compared with the performance of various other classification approaches. The impact of the parameter settings on the classification results for a medical text dataset is discussed. With the right choice of dimension k, the Non-negative Matrix Factorization-based method achieves a 10-fold cross-validation accuracy of 0.93.
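A pipeline of this general shape can be sketched with scikit-learn. The narratives, labels, choice of k and the 2-fold evaluation below are all toy illustrations (the paper's dataset, preprocessing and learning enhancement process are not reproduced); the sketch only shows the idea of factorizing term weights into k latent dimensions before classifying:

```python
# Minimal sketch of NMF-based narrative classification (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy injury narratives and cause-of-injury labels (hypothetical).
narratives = [
    "fell from ladder while painting ceiling",
    "slipped on wet floor and fell at work",
    "cut finger with kitchen knife preparing food",
    "deep cut to hand from broken glass",
    "burned hand on hot stove while cooking",
    "scald burn from boiling water spill",
]
labels = ["fall", "fall", "cut", "cut", "burn", "burn"]

# tf-idf term weights -> k latent factors via NMF -> linear classifier.
model = make_pipeline(
    TfidfVectorizer(),
    NMF(n_components=3, init="nndsvda", max_iter=500),  # k = 3, illustrative
    LogisticRegression(max_iter=1000),
)
# The paper reports 10-fold CV; 2 folds here only because the toy set is tiny.
scores = cross_val_score(model, narratives, labels, cv=2)
print(scores)
```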
Abstract:
Over the past six months the project has undertaken three separate key data collection rounds. Each of these rounds focused on essentially different issues within the broader common construct of heavy vehicle road safety. This document first reports on two qualitative data collection rounds. The first details findings from discussions held in focus groups with 43 heavy vehicle drivers. The second qualitative study involved a series of interviews undertaken with 19 police officers from various levels of command and operations within the Royal Oman Police. The final data collection round reported on in this document is a roadside survey questionnaire undertaken with 400 heavy vehicle drivers.
Abstract:
With the explosion of information resources, there is an imminent need to understand interesting text features or topics in massive text information. This thesis proposes a theoretical model to accurately weight specific text features, such as patterns and n-grams. The proposed model achieves impressive performance in two data collections, Reuters Corpus Volume 1 (RCV1) and Reuters 21578.
Abstract:
Species biology drives the frequency, duration and extent of survey and control activities in weed eradication programs. Researching the key biological characters can be difficult when plants occur at limited locations and are controlled immediately by field crews who are dedicated to preventing reproduction. Within the National Four Tropical Weeds Eradication Program and the former National Siam Weed Eradication Program, key information needed by the eradication teams has been obtained through a combination of field, glasshouse and laboratory studies without jeopardising the eradication objective. Information gained on seed longevity, age to reproductive maturity, dispersal and control options has been used to direct survey and control activities. Planned and opportunistic data collections will continue to provide biological information to refine eradication activities.
Abstract:
Authority files serve to uniquely identify real world ‘things’ or entities like documents, persons, organisations, and their properties, like relations and features. Already important in the classical library world, authority files are indispensable for adequate information retrieval and analysis in the computer age. This is because, even more than humans, computers are poor at handling ambiguity. Through authority files, people tell computers which terms, names or numbers refer to the same thing or have the same meaning by giving equivalent notions the same identifier. Thus, authority files signpost the internet where these identifiers are interlinked on the basis of relevance. When executing a query, computers are able to navigate from identifier to identifier by following these links and collect the queried information on these so-called ‘crosswalks’. In this context, identifiers also go under the name controlled access points. Identifiers become even more crucial now that massive data collections like library catalogues or research datasets are releasing their previously contained data directly to the internet. This development has been coined Linked Open Data. The corresponding name for this internet is the Web of Data, as opposed to the classical Web of Documents.
Abstract:
Very little research has been carried out on detrital energetics and pathways in lotic ecosystems. Most investigations have concentrated on the degradation of allochthonous plant litter by fungi, with a glance at heterotrophic bacteria associated with decaying litter. In this short review, the author describes what is known of the detrition of plant litter in lotic waters, which results from the degradative activities of colonising saprophytic fungi and bacteria, and goes on to relate this process to those invertebrates that consume coarse and/or fine particulate detritus, or dissolved organic matter that aggregates into colloidal exopolymer particles. It is clear that many of the key processes involved in the relationships between the physical, chemical, biotic and biochemical elements present in running waters are very complex and poorly understood. Those few aspects for which there are reliable models with predictive power have resulted from data collections made over periods of 20 years or more. Comprehensive research of single catchments would provide a fine opportunity to collect data over a long period.
Abstract:
Physico-chemical data collections were aimed at assessing the interannual variability of the lagoon hydroclimate and the impact of an airport dam on the water quality of the Ebrié lagoon.
Abstract:
This study addresses the current Brazilian National Solid Waste Policy, regulated by Decree No. 7.404/10, focusing on the legal mechanisms that guarantee the integration of collectors of recyclable and reusable materials into the shared responsibility for the product life cycle, a social group with a historical past of labour exploitation and social invisibility. With the aim of analysing the conditions of applicability of the mechanisms in Law No. 12.305/10 directed at social and environmental recognition, as well as at the legal protection of this group's rights, we first clarify the basic conceptual aspects needed to understand the theme of social inequities, and examine the importance of the theory of fundamental human needs as a suitable instrument for interpreting this form of social exclusion. Furthermore, this work discusses the main contemporary theoretical currents used in the study of optimising the satisfaction of fundamental human needs, and theorises, philosophically, that such needs serve as a justificatory premise for the attribution of specific rights and institutional obligations. Methodologically, this is qualitative research: data were gathered deductively through a bibliographic review involving newspapers, magazines, books, dissertations, theses, projects, laws, decrees and internet searches on institutional websites. The procedural method adopted was descriptive-analytical; in addition, field research was conducted inductively in two recycling cooperatives in the city of Campina Grande-PB.
The studies revealed that the social group under analysis fits within the context of people whose satisfaction of fundamental needs requires optimisation, with consistent and sustainable theoretical argumentation to that effect. It was concluded that, despite the commitment expressed in Law No. 12.305/10 to valuing the work of waste pickers, an interpretative effort is needed regarding the mechanisms of social inclusion, economic empowerment and social and environmental recognition of this category. It was also concluded that the strategies for integrating waste pickers into the shared responsibility for the product life cycle, created by the solid waste legislation, were outlined from the recognition of waste pickers by public authorities in selective waste collection and from their insertion into reverse logistics, guaranteeing market conditions and access to resources; however, the main challenge appears to be innovation in the very way public policies for the sector are conceived.