339 results for User friendly interface
at Queensland University of Technology - ePrints Archive
Abstract:
A worldwide interest is being generated in the use of fibre reinforced polymer (FRP) composites in the rehabilitation of reinforced concrete structures. As a replacement for traditional steel plates or external post-tensioning in strengthening applications, the various types of FRP plates, with their high strength-to-weight ratio and good resistance to corrosion, represent an ideal class of materials for external retrofitting. Within the last ten years, many design guidelines have been published to provide guidance for the selection, design and installation of FRP systems for the external strengthening of concrete structures. Use of these guidelines requires an understanding of a number of issues pertaining to the distinct properties and structural failure modes specific to these materials. A research initiative funded by the CRC for Construction Innovation was undertaken (primarily at RMIT) to develop a decision support tool and a user-friendly guide for the use of fibre reinforced polymer composites in the rehabilitation of concrete structures. The user guidelines presented in this report were developed after industry consultation and a comprehensive review of state-of-the-art technology. The scope of the guide was developed mainly from the outcomes of two workshops with the Queensland Department of Main Roads (QDMR). The document covers material properties, recommended construction requirements, design philosophy, flexural, shear and torsional strengthening of beams, and strengthening of columns. In developing this document, the guidelines published in FIB Bulletin 14 (2002) by Task Group 9.3 of the International Federation for Structural Concrete (FIB) and the American Concrete Institute Committee 440 report (2002) were consulted, in conjunction with the provisions of the Austroads Bridge Design Code (1992) and the Australian concrete structures code AS3600 (2002). The user guide concludes with design examples covering typical strengthening scenarios.
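Because the guide draws on the ACI 440 design approach cited above, a simplified sketch of an FRP flexural-strengthening check may help orient readers. It is illustrative only: it assumes the internal steel yields, applies the ACI 440.2R debonding strain limit, and omits the strain-compatibility iteration and initial substrate strain that a real design check requires; all section values below are hypothetical.

```python
import math

def frp_flexural_capacity(b, d, h, As, fy, fc, Af, Ef, tf, n_plies, eps_fu):
    """Simplified FRP flexural strengthening check (ACI 440.2R style).

    Illustrative only: assumes the steel yields, takes the FRP to be
    governed by the debonding strain limit, and uses a Whitney stress
    block; omits strain-compatibility iteration and initial substrate
    strain. Units: mm, N, MPa.
    """
    # Debonding strain limit, ACI 440.2R-08 Eq. (10-2), SI units
    eps_fd = min(0.41 * math.sqrt(fc / (n_plies * Ef * tf)), 0.9 * eps_fu)
    f_fe = Ef * eps_fd                       # effective FRP stress
    # Force equilibrium with an equivalent rectangular stress block
    a = (As * fy + Af * f_fe) / (0.85 * fc * b)
    psi_f = 0.85                             # FRP strength-reduction factor
    # Nominal moment: steel couple plus FRP couple (FRP at extreme fibre h)
    Mn = As * fy * (d - a / 2) + psi_f * Af * f_fe * (h - a / 2)
    return Mn, eps_fd

# Hypothetical 300 x 500 mm beam with one 1.2 mm CFRP ply on the soffit
Mn, eps_fd = frp_flexural_capacity(b=300, d=450, h=500, As=1230, fy=500,
                                   fc=32, Af=1.2 * 300, Ef=165000,
                                   tf=1.2, n_plies=1, eps_fu=0.017)
print(f"Mn = {Mn / 1e6:.1f} kN.m, debonding strain = {eps_fd:.4f}")
```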
Abstract:
Previous studies exploring the incidence and readmission rates of cardiac patients admitted to a coronary care unit (CCU) with type 2 diabetes [1] have been undertaken by the first author. Interviews with these patients regarding their experiences in managing their everyday conditions [2] provided the basis for developing the initial cardiac–diabetes self-management programme (CDSMP) [3]. Findings from each of these previous studies highlighted the complexity of self-management for patients with both conditions and contributed to the creation of a new self-management programme, the CDSMP, based on Bandura's (2004) self-efficacy theory [4]. From patient and staff feedback received on the CDSMP [3], it became evident that further revision of the programme was needed to improve patients' self-management levels, and that the possibility of incorporating information technology (IT) should be explored. Little is known about the applicability of different methods of technology for delivering self-management programmes to patients with chronic diseases, such as those with type 2 diabetes and cardiac conditions. Although there is some evidence supporting the benefits and the great potential of using IT to support self-management programmes, it is not strong, and further research on the use of IT in such programmes is recommended [5–7]. Therefore, this study was designed to pilot test the feasibility of the CDSMP incorporating telephone and text-messaging as follow-up approaches.
Abstract:
Expert elicitation is the process of determining what expert knowledge is relevant to support a quantitative analysis and then eliciting this information in a form that supports analysis or decision-making. The credibility of the overall analysis, therefore, relies on the credibility of the elicited knowledge. This, in turn, is determined by the rigor of the design and execution of the elicitation methodology, as well as by its clear communication to ensure transparency and repeatability. It is difficult to establish rigor when the elicitation methods are not documented, as often occurs in ecological research. In this chapter, we describe software that can be combined with a well-structured elicitation process to improve the rigor of expert elicitation and the documentation of its results.
Abstract:
In earlier cultures and societies, hazards and risks to human health were dealt with by methods derived from myth, metaphor and ritual. In modern society, however, notions of hazard and risk have been transformed from the level of a folk discourse to that of an expert-centred concept (Plough & Krimsky, 1987). With the professionalization of risk and hazard analysis came a preferred framework for decision-making based on a range of 'technical' methodologies (Giere, 1991). This is especially true of decision processes relating to risk assessment and management, and impact assessment. Such approaches, however, often entail narrow technically based theoretical assumptions about human behaviour, the natural world, and the methods used. They therefore carry 'in-built' error factors that contribute considerable uncertainty to the results.
Abstract:
This research project investigated a bioreactor system capable of high-density cell growth, intended for use in regenerative medicine and protein production. The bioreactor was based on a drip-perfusion concept and constructed at minimal cost from readily available components, with straightforward operating procedures. The study involved the design, construction and testing of the bioreactor, and the results showed promising three-dimensional cell growth within a polymer structure. The accessibility of this equipment and its capability for high-density, three-dimensional cell growth make it suitable for future research in pharmaceutical drug manufacturing and human organ and tissue regeneration.
Abstract:
Properly designed decision support environments encourage proactive and objective decision-making. The work presented in this paper concerns the development of a decision support environment and a tool to facilitate objective decision-making in dealing with road traffic noise. The decision support methodology incorporates noise amelioration strategies both within and outside the road reserve. The project is funded by the CRC for Construction Innovation and conducted jointly by RMIT University and the Queensland Department of Main Roads (MR), in collaboration with the Queensland Department of Public Works, Arup Pty Ltd, and the Queensland University of Technology. In this paper, the proposed decision support framework is presented as a flowchart, which guided the development of the decision support tool (DST). The underpinning concept is to establish and retain an information warehouse for each critical road segment (noise corridor) for a given planning horizon. It is understood that, in current practice, some components of the approach described are already in place, but they are not fully integrated and supported. The DST provides an integrated, user-friendly interface between traffic noise modelling software, noise management criteria and cost databases.
Abstract:
The road and transport industry in Australia and overseas has come a long way towards understanding the impact of road traffic noise on the urban environment. Most road authorities now have guidelines to help assess and manage the impact of road traffic noise on noise-sensitive areas and development. While several economic studies across Australia and overseas have tried to value the impact of noise on property prices, decision-makers investing in road traffic noise management strategies have relatively limited historical data and case studies to go on. The perceived success of a noise management strategy currently relies largely on community expectations at a given time, and is not necessarily based on an analysis of the costs and benefits, or on the long-term viability and value to the community of the proposed treatment options. With changing trends in urban design, it is essential that the 'whole-of-life' costs and benefits of noise ameliorative treatment options and strategies be identified and made available to decision-makers for future investment considerations. For this reason, the CRC for Construction Innovation Australia funded a research project, Noise Management in Urban Environments, to help decision-makers with future road traffic noise management investment decisions. RMIT University and the Queensland Department of Main Roads (QDMR) conducted the research work, in collaboration with the Queensland Department of Public Works, ARUP Pty Ltd, and the Queensland University of Technology. The research formed the basis for the development of a decision-support software tool, and helped collate technical and costing data for known noise amelioration treatment options. We intend that the decision support software tool (DST) should help an investment decision-maker to be better informed of suitable noise ameliorative treatment options on a project-by-project basis, and to identify the likely costs and benefits associated with each of those options. This handbook has been prepared as a procedural guide for conducting a comparative assessment of noise ameliorative options. The handbook outlines the methodology and assumptions adopted in the decision-support framework for the investment decision-maker and user of the DST. The DST has been developed to provide an integrated, user-friendly interface between road traffic noise modelling software, the relevant assessment criteria and the options analysis process. A user guide for the DST is incorporated in this handbook.
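The options analysis described above turns on comparing 'whole-of-life' costs of candidate treatments. Below is a minimal sketch of such a comparison; the treatment options, cost figures, horizon and discount rate are hypothetical illustrations, not values from the DST.

```python
def whole_of_life_cost(capital, annual_maintenance, horizon_years, discount_rate):
    """Present value of capital plus discounted maintenance over the horizon."""
    pv_maintenance = sum(annual_maintenance / (1 + discount_rate) ** t
                         for t in range(1, horizon_years + 1))
    return capital + pv_maintenance

# Hypothetical noise treatment options for one corridor (AUD)
options = {
    "noise barrier":      dict(capital=1_200_000, annual_maintenance=15_000),
    "low-noise pavement": dict(capital=800_000,   annual_maintenance=40_000),
    "facade treatment":   dict(capital=600_000,   annual_maintenance=5_000),
}

# Rank options by 20-year whole-of-life cost at a 6% discount rate
for name, o in sorted(options.items(),
                      key=lambda kv: whole_of_life_cost(**kv[1],
                                                        horizon_years=20,
                                                        discount_rate=0.06)):
    cost = whole_of_life_cost(**o, horizon_years=20, discount_rate=0.06)
    print(f"{name:20s} 20-year whole-of-life cost: ${cost:,.0f}")
```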
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of any type of media understandable not only to humans but also to machines. Owing to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below.
1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction, as it captures the common visual property of the objects. Image segmentation is often used as the first step in detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region-merging framework.
2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts.
3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts.
4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
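Phase 4 above reduces to inference over object/scene co-occurrence. As a rough illustration of the idea only (a naive-Bayes simplification, not the probabilistic graph model of the thesis; every probability below is invented), the following sketch scores candidate scene types given a set of annotated objects.

```python
from math import log

# Hypothetical scene priors and object-given-scene co-occurrence probabilities
scene_priors = {"beach": 0.3, "mountain": 0.3, "street": 0.4}
p_object_given_scene = {
    "beach":    {"sky": 0.9, "sand": 0.8, "water": 0.7, "building": 0.05},
    "mountain": {"sky": 0.9, "sand": 0.1, "water": 0.3, "building": 0.05},
    "street":   {"sky": 0.7, "sand": 0.02, "water": 0.05, "building": 0.9},
}

def infer_scene(annotated_objects):
    """Score each scene type by log prior plus summed log co-occurrence terms."""
    scores = {}
    for scene, prior in scene_priors.items():
        score = log(prior)
        for obj in annotated_objects:
            # Small floor avoids log(0) for unseen object/scene pairs
            score += log(p_object_given_scene[scene].get(obj, 1e-3))
        scores[scene] = score
    return max(scores, key=scores.get)

print(infer_scene({"sky", "sand", "water"}))   # -> beach
```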
Abstract:
Background: Small RNA sequencing is commonly used to identify novel miRNAs and to determine their expression levels in plants. There are several miRNA identification tools for animals, such as miRDeep, miRDeep2 and miRDeep*. miRDeep-P was developed to identify plant miRNAs using miRDeep's probabilistic model of miRNA biogenesis, but it depends on several third-party tools and lacks a user-friendly interface. The objective of our miRPlant program is to predict novel plant miRNAs while providing a user-friendly interface with improved prediction accuracy.
Results: We have developed a user-friendly plant miRNA prediction tool called miRPlant. We show, using 16 plant miRNA datasets from four different plant species, that miRPlant achieves at least a 10% improvement in accuracy over miRDeep-P, the most popular plant miRNA prediction tool. Furthermore, miRPlant uses a graphical user interface for data input and output, and identified miRNAs are shown with all RNAseq reads in a hairpin diagram.
Conclusions: We have developed miRPlant, which extends miRDeep* to various plant species by adopting suitable strategies to identify hairpin excision regions and hairpin structure filtering for plants. miRPlant does not require any third-party tools, such as mapping or RNA secondary structure prediction tools. miRPlant is also the first plant miRNA prediction tool that dynamically plots miRNA hairpin structures with small reads for identified novel miRNAs. This feature will enable biologists to visualise the novel pre-miRNA structure and the location of small RNA reads relative to the hairpin. Moreover, miRPlant can easily be used by biologists with limited bioinformatics skills.
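Hairpin structure filtering is described above only at a high level. Purely to give a feel for what such a filter checks (a crude stand-in, not miRPlant's actual algorithm), the sketch below scores a candidate precursor by how well its 5' arm base-pairs with its reversed 3' arm.

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def crude_hairpin_score(precursor_rna):
    """Fraction of positions where the 5' arm pairs with the reversed 3' arm.

    A real tool folds the sequence properly (e.g. by minimum free energy);
    this toy score only counts Watson-Crick matches arm against arm.
    """
    half = len(precursor_rna) // 2
    arm5 = precursor_rna[:half]
    arm3 = precursor_rna[-half:][::-1]          # reversed 3' arm
    matches = sum(1 for a, b in zip(arm5, arm3) if COMPLEMENT.get(a) == b)
    return matches / half

candidate = "GGGAUCCGUCGAAAGCUUUCGACGGAUCCC"   # made-up self-complementary hairpin
print(f"hairpin score: {crude_hairpin_score(candidate):.2f}")
```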
Abstract:
miRDeep and its variants are widely used to quantify known and novel microRNAs (miRNAs) from small RNA sequencing (RNAseq) data. This article describes miRDeep*, our integrated miRNA identification tool, which is modelled on miRDeep but improves the precision of detecting novel miRNAs by introducing new strategies to identify precursor miRNAs. miRDeep* has a user-friendly graphic interface and accepts raw data in FastQ and Sequence Alignment Map (SAM) or the binary equivalent (BAM) format. Known and novel miRNA expression levels, as measured by the number of reads, are displayed in an interface that shows each RNAseq read relative to the pre-miRNA hairpin. The secondary pre-miRNA structure and read locations for each predicted miRNA are shown and kept in a separate figure file. Moreover, the target genes of known and novel miRNAs are predicted using the TargetScan algorithm, and the targets are ranked according to confidence score. miRDeep* is an integrated standalone application in which sequence alignment, pre-miRNA secondary structure calculation and graphical display are coded purely in Java. The application can be run on an ordinary personal computer with 1.5 GB of memory. Further, we show that miRDeep* outperformed existing miRNA prediction tools on our LNCaP and other small RNAseq datasets. miRDeep* is freely available online at http://www.australianprostatecentre.org/research/software/mirdeep-star
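miRDeep* reports expression as the number of reads on each pre-miRNA hairpin. As a bare-bones illustration of that counting step (not miRDeep*'s Java implementation; the hairpin coordinates and file name below are hypothetical), the following tallies SAM alignments whose start falls inside known hairpin intervals.

```python
from collections import defaultdict

# Hypothetical pre-miRNA hairpin intervals: name -> (chrom, start, end), 1-based
hairpins = {
    "hsa-mir-21":  ("chr17", 59841266, 59841337),
    "hsa-mir-141": ("chr12", 6964097, 6964191),
}

def count_reads_per_hairpin(sam_path):
    """Count mapped SAM reads whose alignment start lies within a hairpin."""
    counts = defaultdict(int)
    with open(sam_path) as sam:
        for line in sam:
            if line.startswith("@"):             # skip header lines
                continue
            fields = line.rstrip("\n").split("\t")
            flag, chrom, pos = int(fields[1]), fields[2], int(fields[3])
            if flag & 4:                          # unmapped read
                continue
            for name, (hp_chrom, start, end) in hairpins.items():
                if chrom == hp_chrom and start <= pos <= end:
                    counts[name] += 1
    return counts

print(dict(count_reads_per_hairpin("small_rna.sam")))
```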
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits.
Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK.
Results: We have demonstrated the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.
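To give a flavour of locus-by-locus trait mapping (a crude fixed-effect regression scan, not the mixed-model LDLA method of the paper; the data are simulated inside the snippet), the sketch below computes a likelihood-ratio statistic at each marker.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_markers = 500, 50
genotypes = rng.integers(0, 3, size=(n, n_markers)).astype(float)  # 0/1/2 allele counts
qtl = 20
phenotype = 0.5 * genotypes[:, qtl] + rng.normal(size=n)           # QTL effect at marker 20

def lrt_statistic(y, x):
    """LRT comparing a single-marker linear model against the mean-only model."""
    rss0 = np.sum((y - y.mean()) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss0 / rss1)          # ~ chi2(1) under the null

stats = [lrt_statistic(phenotype, genotypes[:, j]) for j in range(n_markers)]
print("peak marker:", int(np.argmax(stats)), "LRT:", round(max(stats), 1))
```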
Abstract:
In this paper we describe the tag-based interaction afforded by a tag-based interface in online and mobile banking, and present our preliminary usability evaluation findings. We conducted a pilot usability study with a group of banking users, comparing the current 'conventional' interface with a tag-based interface. The results show that participants perceive the tag-based interface as more usable in both online and mobile contexts. Participants also rated the tag-based interface more highly despite being unfamiliar with it, and perceived it as more user-friendly. Additionally, the results highlight that tag-based interaction is more effective in the mobile context, especially for inexperienced mobile banking users. This in turn could have a positive effect on the adoption and acceptance of mobile banking in general, and in Australia specifically. We discuss our findings in more detail in the later sections of this paper and conclude with a discussion of future work.
Abstract:
Executive Summary
Emergency Departments (EDs) locally, nationally and internationally are becoming increasingly busy. Within this context, it can be challenging to deliver a health service that is safe, of high quality and cost-effective. Whilst various models are described in the literature that aim to measure ED 'work' or 'activity', they are often not linked to a measure of the costs of providing that activity. It is important for hospital and ED managers to understand and apply this link so that optimal staffing and financial resourcing can be justifiably sought. This research is timely given that Australia has moved towards a national Activity Based Funding (ABF) model for ED activity. ABF is believed to increase transparency of care and fairness (i.e. equal work receives equal pay). ABF involves a person-, performance- or activity-based payment system, and thus a move away from historical "block payment" models that do not incentivise efficiency and quality. The aim of the Statewide Workforce and Activity-Based Funding Modelling Project in Queensland Emergency Departments (SWAMPED) is to identify and describe best-practice ED workforce models within the current context of ED funding under an ABF model. The study comprises five distinct phases. This monograph (Phase 1) comprises a systematic review of the literature that was completed in June 2013. The remaining phases include a detailed survey of Queensland hospital EDs' resource levels, activity and operational models of care; development of new resource models; development of a user-friendly modelling interface for ED managers; and production of a final report that identifies policy implications. The anticipated deliverable of this research is an ABF-based Emergency Workforce Modelling Tool that will enable ED managers to profile both their workforce and their operational models of care. Additionally, the tool will assist in more accurately informing the staffing numbers required in the future, inform planning of expected expenditures, and be used for standardisation and benchmarking across similar EDs.
Summary of the Findings
Within the remit of this review of the literature, the main findings include:
1. EDs are becoming busier and more congested. Rising demand, barriers to ED throughput and transitions of care all contribute to ED congestion. In addition, requests by organisational managers and the community require continued broadening of the scope of services required of the ED, further increasing demand. As the population lives longer with more lifestyle diseases, its propensity to require ED care continues to grow.
2. Various models of care within EDs exist. Models often vary to account for site-specific characteristics such as staffing profile, ED geographical location (e.g. metropolitan or rural site) and patient demographic profile (e.g. paediatrics, older persons, ethnicity). Existing and new models implemented within EDs often depend on the target outcome requiring change; generally this is focussed on addressing issues at the input, throughput or output areas of the ED. Even for models targeting a similar demographic or illness, the structure and process elements underpinning the model can vary, which can affect outcomes and introduce variance in the patient and carer experience between and within EDs. Major models of care to manage throughput inefficiencies include:
A. Workforce models of care, which focus on the appropriate level of staffing for a given workload to provide prompt, timely and clinically effective patient care within an emergency care setting. The studies reviewed suggest that the early involvement of a senior medical decision-maker and/or specialised nursing roles such as Emergency Nurse Practitioners and Clinical Initiatives Nurses, or primary-contact or extended-scope Allied Health Practitioners, can facilitate patient flow and improve key indicators such as length of stay (LOS) and the number of patients who did not wait to be seen, among others.
B. Operational models of care, which focus on mechanisms for streaming (e.g. fast-tracking) or otherwise grouping patient care based on acuity and complexity, to assist with minimising any throughput inefficiencies. While studies support the positive impact of these models in general, it appears that they are most effective when adequately resourced.
3. Various methods of measuring ED activity exist. Measuring ED activity requires careful consideration of models of care and staffing profile, and the ability to account for factors including patient census, acuity, LOS, intensity of intervention and department skill-mix, plus an adjustment for non-patient-care time (see the sketch following this summary).
4. Gaps in the literature. Continued ED growth calls for new and innovative care delivery models that are safe, clinically effective and cost-effective. New roles and stand-alone service delivery models are often evaluated in isolation, without considering the global and economic impact on staffing profiles. Whilst various models for accounting for and measuring healthcare activity exist, costing and cost-effectiveness studies are lacking for EDs, making accurate and reliable assessment of care models difficult. There is a need to further understand, refine and account for measures of ED complexity that define a workload upon which resources and appropriate staffing determinations can be made into the future. There is also a need for continued monitoring and comprehensive evaluation of newly implemented workforce modelling tools. This research acknowledges those gaps and aims to:
• Undertake a comprehensive and integrated whole-of-department workforce profiling exercise relative to resources in the context of ABF;
• Inform workforce requirements based on traditional quantitative markers (e.g. volume and acuity) combined with qualitative elements of ED models of care;
• Develop a comprehensive and validated workforce calculation tool that can be used to better inform, or at least guide, workforce requirements in a more transparent manner.
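Finding 3 lists the ingredients of an ED activity measure: patient census, acuity, LOS, intervention intensity, skill-mix and non-patient-care time. Purely as a toy illustration of how such inputs might combine into a staffing estimate (every weight and figure below is hypothetical, not drawn from the SWAMPED tool), consider:

```python
# Hypothetical acuity weights: nursing hours per presentation by triage category
acuity_hours = {1: 6.0, 2: 3.5, 3: 2.0, 4: 1.0, 5: 0.5}

def estimated_fte(annual_presentations_by_triage,
                  non_patient_care_fraction=0.25,
                  annual_hours_per_fte=1700):
    """Convert weighted ED activity into a crude nursing FTE estimate."""
    patient_hours = sum(acuity_hours[cat] * n
                        for cat, n in annual_presentations_by_triage.items())
    # Gross up for handover, documentation, training, leave, etc.
    total_hours = patient_hours / (1 - non_patient_care_fraction)
    return total_hours / annual_hours_per_fte

# Hypothetical annual census by triage category
census = {1: 400, 2: 6000, 3: 22000, 4: 18000, 5: 4000}
print(f"Estimated nursing FTE: {estimated_fte(census):.1f}")
```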
Abstract:
Species identification based on short sequences of DNA markers, that is, DNA barcoding, has emerged as an integral part of modern taxonomy. However, software for the analysis of large and multilocus barcoding datasets is scarce. The Basic Local Alignment Search Tool (BLAST) is currently the fastest tool capable of handling large databases (e.g. >5000 sequences), but its accuracy is a concern, and it has been criticized for its local optimization. Current, more accurate software requires sequence alignment or complex calculations, which are time-consuming when dealing with large datasets, whether during data preprocessing or during the search stage. It is therefore imperative to develop a practical program for accurate and scalable species identification in DNA barcoding. In this context, we present VIP Barcoding: user-friendly software with a graphical user interface for rapid DNA barcoding. It adopts a hybrid, two-stage algorithm. First, an alignment-free composition vector (CV) method is utilized to reduce the search space by screening the reference database. The alignment-based K2P-distance nearest-neighbour method is then employed to analyse the smaller dataset generated in the first stage. In comparison with other software, we demonstrate that VIP Barcoding has (i) higher accuracy than Blastn and several alignment-free methods and (ii) higher scalability than alignment-based distance methods and character-based methods. These results suggest that the platform can handle both large-scale and multilocus barcoding data with accuracy and can contribute to DNA barcoding for modern taxonomy. VIP Barcoding is free and available at http://msl.sls.cuhk.edu.hk/vipbarcoding/.
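The two-stage design is concrete enough to sketch: an alignment-free k-mer composition vector narrows the database, then the Kimura two-parameter (K2P) distance picks the nearest neighbour among the survivors. The snippet below is a minimal illustration under strong assumptions (tiny invented sequences, and the K2P stage assumes candidates are already aligned to the query at equal length), not VIP Barcoding's implementation.

```python
import math
from collections import Counter
from itertools import product

KMER = 3
ALL_KMERS = ["".join(p) for p in product("ACGT", repeat=KMER)]

def composition_vector(seq):
    """Normalised k-mer frequency vector (the alignment-free screening stage)."""
    counts = Counter(seq[i:i + KMER] for i in range(len(seq) - KMER + 1))
    total = sum(counts.values())
    return [counts[k] / total for k in ALL_KMERS]

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norm

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def k2p_distance(s1, s2):
    """Kimura two-parameter distance between two aligned, equal-length sequences."""
    n = len(s1)
    p = sum((a, b) in TRANSITIONS for a, b in zip(s1, s2)) / n           # transitions
    q = sum(a != b and (a, b) not in TRANSITIONS
            for a, b in zip(s1, s2)) / n                                 # transversions
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)

def identify(query, reference, screen_top=2):
    """Stage 1: CV screening keeps the closest references; stage 2: K2P nearest neighbour."""
    qv = composition_vector(query)
    by_cv = sorted(reference,
                   key=lambda name: cosine_distance(qv, composition_vector(reference[name])))
    screened = by_cv[:screen_top]
    return min(screened, key=lambda name: k2p_distance(query, reference[name]))

reference = {  # made-up toy barcodes of equal length
    "species_A": "ACGTACGTACGTACGTACGT",
    "species_B": "ACGTACGAACGTACGTTCGT",
    "species_C": "TTGCATGCAAGCTTGCATGC",
}
print(identify("ACGTACGTACGAACGTACGT", reference))   # -> species_A
```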