955 results for Large datasets
Abstract:
Background: Room ventilation is a key determinant of airborne disease transmission. Despite this, ventilation guidelines in hospitals are not founded on robust scientific evidence related to prevention of airborne transmission. Methods: We sought to assess the effect of ventilation rates on influenza, tuberculosis (TB) and rhinovirus infection risk within three distinct rooms in a major urban hospital: a Lung Function Laboratory, an Emergency Department (ED) Negative-pressure Isolation Room and an Outpatient Consultation Room. Air exchange rate measurements were performed in each room using CO2 as a tracer. Gammaitoni and Nucci's model was employed to estimate infection risk. Results: Current outdoor air exchange rates in the Lung Function Laboratory and ED Isolation Room limited infection risks to between 0.1 and 3.6%. Influenza risk for individuals entering an Outpatient Consultation Room after an infectious individual departed ranged from 3.6 to 20.7%, depending on the duration for which each person occupied the room. Conclusions: Given the absence of definitive ventilation guidelines for hospitals, air exchange measurements combined with modelling afford a useful means of assessing, on a case-by-case basis, the suitability of room ventilation for preventing airborne disease transmission.
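The Gammaitoni-Nucci model generalises the Wells-Riley equation; in its steady-state special case the risk reduces to P = 1 - exp(-Iqpt/Q). A minimal sketch of the two calculations the abstract describes, assuming the standard tracer-decay formula for air exchange and the steady-state risk form; all numbers are purely illustrative, since the abstract reports neither quanta emission rates nor room volumes:

```python
import math

def air_changes_per_hour(c0, ct, c_out, hours):
    """Air exchange rate (ACH) from tracer-gas (CO2) decay:
    ACH = ln((C0 - C_out) / (C_t - C_out)) / t."""
    return math.log((c0 - c_out) / (ct - c_out)) / hours

def infection_risk(quanta_rate, breathing_rate, time_h, room_volume, ach):
    """Steady-state Wells-Riley risk (one infector, I = 1):
    P = 1 - exp(-q * p * t / Q), with Q = ACH * room volume."""
    q_flow = ach * room_volume  # outdoor air supply, m^3/h
    return 1.0 - math.exp(-quanta_rate * breathing_rate * time_h / q_flow)

# Illustrative values: CO2 decays from 1800 to 900 ppm (outdoor 400 ppm)
# over 30 minutes in a 50 m^3 room; quanta rate 100 /h, breathing 0.48 m^3/h
ach = air_changes_per_hour(1800, 900, 400, 0.5)
risk = infection_risk(quanta_rate=100, breathing_rate=0.48,
                      time_h=1.0, room_volume=50, ach=ach)
```

The transient Gammaitoni-Nucci form additionally tracks the quanta concentration over time, which is what allows the paper to estimate risk for someone entering a room after the infector has left.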
Abstract:
The automated extraction of roads from aerial imagery can be of value for tasks including mapping, surveillance and change detection. Unfortunately, there are no public databases or standard protocols for evaluating these techniques. Many techniques are further hindered by a reliance on manual initialisation, making large-scale application of the techniques impractical. In this paper, we present a public database and evaluation protocol for the evaluation of road extraction algorithms, and propose an improved automatic seed finding technique to initialise road extraction, based on a combination of geometric and colour features.
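The abstract does not specify the seed-finding algorithm, but the idea of combining a colour cue with a geometric check can be sketched as follows. This is a hypothetical toy version, not the paper's method: pixels close to a reference road colour are marked, and only elongated runs of marked pixels (a crude geometric filter) yield seed points; all thresholds are illustrative.

```python
import numpy as np

def find_road_seeds(image, road_colour, colour_tol=30.0, min_run=5):
    """Toy seed finder: keep pixels whose colour is near road_colour,
    then accept only horizontally elongated runs of such pixels,
    returning one (row, col) seed per accepted run."""
    dist = np.linalg.norm(image.astype(float) - road_colour, axis=-1)
    mask = dist < colour_tol
    seeds = []
    for r in range(mask.shape[0]):
        run_start = None
        for c in range(mask.shape[1] + 1):  # sentinel column ends open runs
            on = c < mask.shape[1] and mask[r, c]
            if on and run_start is None:
                run_start = c
            elif not on and run_start is not None:
                if c - run_start >= min_run:  # geometric (elongation) check
                    seeds.append((r, (run_start + c - 1) // 2))
                run_start = None
    return seeds
```

A real system would use both orientations and more discriminative geometric features, but the structure (colour model first, geometric verification second) matches the combination the abstract describes.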
Abstract:
Recent studies on automatic new topic identification in Web search engine user sessions demonstrated that neural networks are successful at automatic new topic identification. However, most of this work applied new topic identification algorithms to data logs from a single search engine. In this study, we investigate whether the application of neural networks for automatic new topic identification is more successful on some search engines than others. Sample data logs from the Norwegian search engine FAST (currently owned by Overture) and from Excite are used in this study. Findings of this study suggest that query logs with more topic shifts tend to provide more successful results on shift-based performance measures, whereas logs with more topic continuations tend to provide better results on continuation-based performance measures.
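The distinction between shift-based and continuation-based performance measures amounts to computing per-class precision and recall over the two labels. A minimal sketch, assuming the labels are simply 'shift' and 'cont' (the exact measure definitions in the study may differ):

```python
def class_precision_recall(true, pred, label):
    """Precision and recall for one class label; with label='shift'
    these are shift-based measures, with label='cont'
    continuation-based measures."""
    tp = sum(t == label and p == label for t, p in zip(true, pred))
    fp = sum(p == label and t != label for t, p in zip(true, pred))
    fn = sum(t == label and p != label for t, p in zip(true, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A log dominated by one class inflates that class's measures for a classifier biased towards it, which is consistent with the abstract's finding that shift-heavy logs score better on shift-based measures.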
Abstract:
Currently, well established clinical therapeutic approaches for bone reconstruction are restricted to the transplantation of autografts and allografts, and the implantation of metal devices or ceramic-based implants to assist bone regeneration. Bone grafts possess osteoconductive and osteoinductive properties; however, their application is associated with disadvantages. These include limited access and availability, donor site morbidity and haemorrhage, increased risk of infection, and insufficient transplant integration. As a result, recent research focuses on the development of complementary therapeutic concepts. The field of tissue engineering has emerged as an important alternative approach to bone regeneration. Tissue engineering unites aspects of cellular biology, biomechanical engineering, biomaterial sciences, and trauma and orthopaedic surgery. To obtain approval by regulatory bodies for these novel therapeutic concepts, the level of therapeutic benefit must be demonstrated rigorously in well characterized, clinically relevant animal models. Therefore, in this PhD project, a reproducible and clinically relevant, ovine, critically sized, high load bearing, tibial defect model was established and characterized as a prerequisite to assess the regenerative potential of a novel treatment concept in vivo involving a medical grade polycaprolactone and tricalciumphosphate based composite scaffold and recombinant human bone morphogenetic proteins.
Abstract:
Nowadays, business process management is an important approach for managing organizations from an operational perspective. As a consequence, it is common to see organizations develop collections of hundreds or even thousands of business process models. Such large collections of process models bring new challenges and provide new opportunities, as the knowledge that they encapsulate needs to be properly managed. Therefore, a variety of techniques for managing large collections of business process models are being developed. The goal of this paper is to provide an overview of the management techniques that currently exist, as well as the open research challenges that they pose.
Abstract:
With the growing number of XML documents on the Web it becomes essential to effectively organise these XML documents in order to retrieve useful information from them. A possible solution is to apply clustering on the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses only on one feature of the XML documents, this being either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods to utilise frequent pattern mining techniques to reduce the dimension; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information.
The explicit model uses a higher order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval, using the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis work contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that could be used in clustering.
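The implicit (VSM) model described above can be sketched as concatenating a structure part, an indicator over a vocabulary of frequent subtrees, with a content part, term frequencies over a term vocabulary, into one vector compared by cosine similarity. This is a schematic illustration only; the vocabularies, weighting scheme, and identifiers below are assumptions, not the thesis's actual representation:

```python
import math
from collections import Counter

def vsm_vector(subtree_ids, terms, subtree_vocab, term_vocab):
    """Implicit-model sketch: [frequent-subtree indicators | term counts].
    subtree_ids: set of frequent subtrees present in the document;
    terms: list of content terms extracted from the document."""
    tree_part = [1.0 if s in subtree_ids else 0.0 for s in subtree_vocab]
    counts = Counter(terms)
    term_part = [float(counts[t]) for t in term_vocab]
    return tree_part + term_part

def cosine(u, v):
    """Cosine similarity between two combined vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

The explicit model replaces this flat concatenation with a 3rd-order tensor so that the association between a particular subtree and the terms occurring inside it is preserved rather than flattened away.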
Abstract:
Audience Response Systems (ARS) have been successfully used by academics to facilitate student learning and engagement, particularly in large lecture settings. However, in large core subjects a key challenge is not only to engage students, but also to engage large and diverse teaching teams in order to ensure a consistent approach to grading assessments. This paper provides an insight into the ways in which ARS can be used to encourage participation by tutors in marking and moderation meetings. It concludes that ARS can improve the consistency of grading and the quality of feedback provided to students.
Abstract:
Airports, whether publicly or privately owned or operated, fill both public and private roles. They need to act as public infrastructure providers and as businesses which cover their operating costs. That leads to special governance concerns with respect to consumers and competitors which are only beginning to be addressed. These challenges are highlighted both by shifts in ownership status and by the expansion of roles performed by airports as passenger and cargo volumes continue to increase and as nearby urban areas expand outward towards airports. We survey five ways in which the regulatory shoe doesn't quite fit the needs. Our findings suggest that, while ad hoc measures limit political tension, new governance measures are needed.
Abstract:
This paper illustrates the damage identification and condition assessment of a three-story bookshelf structure using a new frequency response function (FRF) based damage index and Artificial Neural Networks (ANNs). A major obstacle to using measured frequency response function data is the large number of input variables presented to the ANNs. This problem is overcome by applying a data reduction technique called principal component analysis (PCA). In the proposed procedure, ANNs, with their powerful pattern recognition and classification ability, are used to extract damage information, such as damage locations and severities, from measured FRFs. Simple neural network models, trained by Back Propagation (BP), are developed to associate the FRFs with the damaged or undamaged state of the structure and with the location and severity of any damage. Finally, the effectiveness of the proposed method is illustrated and validated using real data provided by the Los Alamos National Laboratory, USA. The results show that the PCA-based Artificial Neural Network method is suitable and effective for damage identification and condition assessment of building structures. In addition, it is clearly demonstrated that the accuracy of the proposed damage detection method can be improved by increasing the number of baseline datasets and the number of principal components of the baseline dataset.
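The PCA preprocessing step described above, projecting long measured FRF vectors onto a few principal components before feeding them to an ANN, can be sketched with a plain SVD-based PCA. A minimal sketch assuming each row of the input matrix is one measured FRF sample; the paper's exact preprocessing details are not given in the abstract:

```python
import numpy as np

def pca_reduce(frf_matrix, n_components):
    """Project FRF samples (rows) onto the top principal components.
    Returns (scores, components, mean): scores are the low-dimensional
    ANN inputs; components and mean let new FRFs be projected the same way."""
    mean = frf_matrix.mean(axis=0)
    centred = frf_matrix - mean                      # centre each FRF bin
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:n_components]                   # top right-singular vectors
    scores = centred @ components.T                  # reduced coordinates
    return scores, components, mean
```

A new measurement `x` is reduced via `(x - mean) @ components.T`, so the trained network always sees inputs in the same reduced coordinate system as the baseline data.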
Abstract:
Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
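The forward recursion that the reduced dependence approximation builds on can be made concrete on a lattice small enough for exact computation: the normalizing constant is accumulated row by row, summing over all configurations of the previous row. A minimal sketch for a {0,1}-valued autologistic model with singleton parameter alpha and pairwise parameter beta (the paper's parameterisation may differ, and its contribution, handling irregular boundaries and missing regions, is not shown here):

```python
import math
from itertools import product

def autologistic_log_norm(n_rows, n_cols, alpha, beta):
    """Exact log normalizing constant of a small {0,1} autologistic
    lattice via forward recursion over rows (transfer-matrix style)."""
    states = list(product((0, 1), repeat=n_cols))

    def row_weight(row):  # singleton terms + horizontal interactions
        w = alpha * sum(row)
        w += beta * sum(row[i] * row[i + 1] for i in range(n_cols - 1))
        return w

    def vert_weight(prev, row):  # vertical interactions between two rows
        return beta * sum(a * b for a, b in zip(prev, row))

    # f[s] = summed weight of all lattices processed so far ending in row s
    f = {s: math.exp(row_weight(s)) for s in states}
    for _ in range(n_rows - 1):
        f = {s: math.exp(row_weight(s)) *
                sum(f[p] * math.exp(vert_weight(p, s)) for p in states)
             for s in states}
    return math.log(sum(f.values()))
```

The cost is governed by the 2^m row-configuration state space, which is why exact recursion is feasible only on narrow sublattices; the reduced dependence approximation combines such sublattice recursions to approximate the constant for the full lattice.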
Abstract:
1. Overview of hotspot identification (HSID) methods
2. Challenges with HSID
3. Bringing crash severity into the ‘mix’
4. Case Study: Truck Involved Crashes in Arizona
5. Conclusions
• Heavy duty trucks have different performance envelopes than passenger cars and have more difficulty weaving, accelerating, and braking
• Passenger vehicles have extremely limited sight distance around trucks
• Lane and shoulder widths affect truck crash risk more than passenger cars
• Using PDOEs to model truck crashes results in a different set of locations to examine for possible engineering and behavioral problems
• PDOE models point to higher societal cost locations, whereas frequency models point to higher crash frequency locations
• PDOE models are less sensitive to unreported crashes
• PDOE models are a great complement to existing practice
Abstract:
University classes in marketing are often large, and therefore require teams of teachers to cover all of the necessary activities. A major problem with teaching teams is the inconsistency that results from myriad individuals offering subjective opinions. This innovation uses the latest moderation techniques along with Audience Response Technology (ART) to enhance the learning experience by providing more consistent and reliable grading in large classes. Assessment items are moderated before they are graded in meetings that employ ART. Results show the process is effective when the teaching team is very large, or there is a diverse range of experienced and inexperienced teachers. This “behind the scenes” innovation is not immediately apparent to students, but results in more consistent grades, more useful feedback for students, and more confident graders.
Abstract:
Whereas many good examples can be found of the study of urban morphology informing the design of new residential areas in Europe, it is much more difficult to find examples relating to other land uses and outside of Europe. This paper addresses a particular issue, the control and coordination of large and complex development schemes within cities, and, in doing so, considers commercial and mixed-use schemes outside of Europe. It is argued that urban morphology has much to offer for both the design of such development and its implementation over time. Firstly, lessons are drawn from the work of Krier and Rossi in Berlin, the form-based guidance developed in Chelmsford, UK, and the redesign and coordination of the Melrose Arch project in Johannesburg, SA. A recent development at Boggo Road in Brisbane, Australia, is then subjected to a more detailed examination. It is argued that the scheme has been unsatisfactory in terms of both design and implementation. An alternative framework based on historical morphological studies is proposed that would overcome these deficiencies. It is proposed that this points the way to a general approach that could be incorporated within the planning process internationally.
Abstract:
There is a growing need for successful bone tissue engineering strategies, and advanced biomaterials that mimic the structure and function of native tissues carry great promise. Successful bone repair approaches may include an osteoconductive scaffold, osteoinductive growth factors, cells with an osteogenic potential and capacity for graft vascularisation. To increase the osteoinductivity of biomaterials, the local combination and delivery of growth factors has been developed. In the present study we investigated the osteogenic effects of calcium phosphate (CaP)-coated nanofiber mesh tube-mediated delivery of BMP-7 from a PRP matrix for the regeneration of critical sized segmental bone defects in a small animal model. Bilateral full-thickness diaphyseal segmental defects were created in twelve male Lewis rats and nanofiber mesh tubes were placed around the defects. Defects received either treatment with a CaP-coated nanofiber mesh tube (n = 6), an un-coated nanofiber mesh tube (n = 6), a CaP-coated nanofiber mesh tube with PRP (n = 6), or a CaP-coated nanofiber mesh tube in combination with 5 μg BMP-7 and PRP (n = 6). After 12 weeks, bone volume and biomechanical properties were evaluated using radiography, microCT, biomechanical testing and histology. The results demonstrated significantly higher biomechanical properties and bone volume for the BMP group compared to the control groups. These results were supported by the histological evaluations, where the BMP group showed the highest rate of bone regeneration within the defect. In conclusion, BMP-7 delivery via PRP enhanced functional bone defect regeneration, and together these data support the use of BMP-7 in the treatment of critical sized defects.