906 results for Saccade threshold
Abstract:
We consider how data from scientific research should be used for decision making in health services. Our case study is whether a hand hygiene intervention to reduce the risk of nosocomial infection should be widely adopted. Improving hand hygiene has been described as the most important measure to prevent nosocomial infection.1 Transmission of microorganisms is reduced and fewer infections arise, which leads to a reduction in mortality2 and cost savings.3 Implementing a hand hygiene program is itself costly, so the extra investment should be tested for cost-effectiveness.4,5 The first part of our commentary is about cost-effectiveness models and how they inform decision making for health services. The second part is about how data on the effectiveness of hand hygiene programs arising from scientific studies are used, and makes 2 points: the .05 threshold for statistical inference used to judge effectiveness studies is not important for decision making,6,7 and potentially valuable evidence about effectiveness might be excluded by decision makers because it is deemed low quality.8 The ideas put forward will help researchers and health services decision makers to appraise scientific evidence in a more powerful way.
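To make the cost-effectiveness reasoning concrete, here is a minimal sketch of the decision rule such models feed into: adoption is favoured when the incremental net monetary benefit is positive at the decision maker's willingness-to-pay threshold. All figures below are hypothetical, not taken from the commentary.

```python
# Minimal cost-effectiveness decision sketch; all numbers are hypothetical.
def incremental_net_monetary_benefit(delta_cost, delta_effect, willingness_to_pay):
    """Net monetary benefit of adopting the intervention.

    delta_cost:   extra cost of the program per patient
    delta_effect: extra health gain per patient (e.g. QALYs)
    willingness_to_pay: value placed on one unit of health gain
    """
    return willingness_to_pay * delta_effect - delta_cost

# Hypothetical hand hygiene program: costs $30 more per patient,
# gains 0.002 QALYs per patient, at a $50,000/QALY threshold.
inmb = incremental_net_monetary_benefit(30.0, 0.002, 50_000)
print(f"Incremental net monetary benefit: ${inmb:.2f} per patient")
print("Adopt" if inmb > 0 else "Do not adopt")
```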
Abstract:
Speaker diarization is the process of annotating an input audio stream with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems that allow users to directly access the relevant segments of interest within a given audio recording, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracy. Speaker diarization therefore plays an important role as a preliminary step in the automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research focuses on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and the systems developed throughout this work are trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains, including telephone conversations and meeting audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving the detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
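As an illustration of the model-selection style of segmentation decision this work builds on, the sketch below implements the standard delta-BIC test for a speaker change point under multivariate Gaussian models; the thesis's Bayes factor criterion plays an analogous role but additionally models the uncertainty in the speaker model estimates. The window sizes and synthetic features are illustrative only.

```python
# Sketch of BIC-based speaker change detection over a window of feature
# vectors (e.g. MFCCs). This is the common single-Gaussian baseline, not
# the thesis's Bayes factor criterion.
import numpy as np

def delta_bic(X, t, penalty=1.0):
    """Delta-BIC for a candidate boundary at frame t within window X (n, d).
    Positive values favour a speaker change at t."""
    n, d = X.shape
    def logdet_cov(Z):
        # Full-covariance Gaussian fit; small ridge keeps the matrix SPD.
        cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)
        return np.linalg.slogdet(cov)[1]
    # Log-likelihood terms for one model vs. two models split at t.
    h0 = n * logdet_cov(X)
    h1 = t * logdet_cov(X[:t]) + (n - t) * logdet_cov(X[t:])
    # Penalty for the extra Gaussian's parameters (mean + covariance).
    p = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return 0.5 * (h0 - h1) - penalty * p

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 12)), rng.normal(2, 1, (200, 12))])
print(delta_bic(X, 200))        # large positive: boundary detected
print(delta_bic(X[:200], 100))  # near/below zero: same speaker
```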
Abstract:
A key issue in the field of inclusive design is the ability to provide designers with an understanding of people's range of capabilities. Since it is not feasible to assess product interactions with a large sample, this paper assesses a range of proxy measures of design-relevant capabilities. It describes a study that was conducted to identify which measures provide the best prediction of people's abilities to use a range of products. A detailed investigation with 100 respondents aged 50-80 years was undertaken to examine how they manage typical household products. Predictor variables included self-report and performance measures across a variety of capabilities (vision, hearing, dexterity and cognitive function), component activities used in product interactions (e.g. using a remote control, touch screen) and psychological characteristics (e.g. self-efficacy, confidence with using electronic devices). Results showed, as expected, a higher prevalence of visual, hearing, dexterity, cognitive and product interaction difficulties in the 65-80 age group. Regression analyses showed that, in addition to age, performance measures of vision (acuity, contrast sensitivity) and hearing (hearing threshold) and self-report and performance measures of component activities are strong predictors of successful product interactions. These findings will guide the choice of measures to be used in a subsequent national survey of design-relevant capabilities, which will lead to the creation of a capability database. This will be converted into a tool for designers to understand the implications of their design decisions, so that they can design products in a more inclusive way.
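As an illustration of the kind of regression analysis described, the sketch below fits a logistic model predicting product-interaction success from a few capability measures. The data are synthetic and the variable names are placeholders, not the study's dataset.

```python
# Illustrative only: relating capability measures to successful product
# interaction. Data below are synthetic; variable names (acuity,
# hearing_threshold, ...) are placeholders, not the study's measures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100
age = rng.uniform(50, 80, n)
acuity = rng.normal(0.3, 0.2, n)           # logMAR-style score (lower = better)
hearing_threshold = rng.normal(30, 10, n)  # dB HL (lower = better)
# Synthetic outcome: success less likely with age and poorer senses.
logit = 6 - 0.05 * age - 3 * acuity - 0.05 * hearing_threshold
success = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, acuity, hearing_threshold])
model = LogisticRegression(max_iter=1000).fit(X, success)
print(dict(zip(["age", "acuity", "hearing_threshold"], model.coef_[0])))
```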
Abstract:
Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
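For intuition about how damage evolution produces the strain-softening responses described, a one-dimensional scalar-damage sketch is given below. It is deliberately simpler than the paper's anisotropic, non-equilibrium formulation: stress is (1 - D) * E * strain, with the damage variable D growing once strain exceeds a threshold. All parameters are illustrative.

```python
# One-dimensional scalar-damage sketch (not the paper's anisotropic,
# non-equilibrium model). Parameters are illustrative.
import numpy as np

E = 30e9       # Young's modulus, Pa
eps_0 = 1e-3   # damage threshold strain
eps_f = 5e-3   # strain scale controlling the softening rate

def stress(eps):
    if eps <= eps_0:
        D = 0.0                       # elastic: no damage below threshold
    else:
        # Exponential softening law, a common choice in damage mechanics.
        D = 1.0 - (eps_0 / eps) * np.exp(-(eps - eps_0) / eps_f)
    return (1.0 - D) * E * eps

for eps in np.linspace(0, 8e-3, 9):
    print(f"strain {eps:.4f} -> stress {stress(eps)/1e6:7.2f} MPa")
```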
Abstract:
Quantitative imaging methods to analyze cell migration assays are not standardized. Here we present a suite of two-dimensional barrier assays describing the collective spreading of an initially confined population of 3T3 fibroblast cells. To quantify the motility rate we apply two different automatic image detection methods to locate the position of the leading edge of the spreading population after 24, 48 and 72 hours. These results are compared with a manual edge detection method where we systematically vary the detection threshold. Our results indicate that the observed spreading rates are very sensitive to the choice of image analysis tools: a standard measure of cell migration can vary by as much as 25% for the same experimental images, depending on the details of the image analysis tools. Our results imply that it is very difficult, if not impossible, to meaningfully compare previously published measures of cell migration, since previous results have been obtained using different image analysis techniques and the details of these techniques are not always reported. Using a mathematical model, we provide a physical interpretation of our edge detection results. The physical interpretation is important since edge detection algorithms alone do not specify any physical measure, or physical definition, of the leading edge of the spreading population. Our modeling indicates that variations in the image threshold parameter correspond to a consistent variation in the local cell density. This means that varying the threshold parameter is equivalent to varying the location of the leading edge within the range of approximately 1-5% of the maximum cell density.
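The threshold sensitivity reported above is easy to reproduce in miniature: the sketch below thresholds a synthetic one-dimensional density profile and shows how the detected leading-edge position shifts as the detection threshold varies between 1% and 5% of the maximum density. This is an illustration of the effect, not the paper's detection pipeline.

```python
# Sketch of how the detected leading-edge position depends on the chosen
# threshold, using a synthetic 1-D cell density profile (logistic front).
import numpy as np

x = np.linspace(0, 2000, 2001)                    # position, micrometres
front, width = 1200.0, 150.0
density = 1 / (1 + np.exp((x - front) / width))   # fraction of max density

for threshold in (0.01, 0.02, 0.05):   # 1%, 2%, 5% of maximum cell density
    edge = x[density >= threshold].max()
    print(f"threshold {threshold:.0%}: leading edge at x = {edge:.0f} um")
```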
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which delivers centimetre-precision positioning results when all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets, up to several metres, without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test. The criterion of the ratio test is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and user requirements. Undetected incorrect integers lead to hazardous results, so their rate should be strictly controlled; in ambiguity resolution this missed-detection rate is known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, the criteria table for the ratio test is computed based on extensive data simulations, and real-time users can determine the ratio test criterion by looking up the table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the ratio test threshold in the fixed failure rate approach are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method when a proper stochastic model is used.
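For readers unfamiliar with the test, the sketch below shows the basic form of the ratio test and how a fixed failure rate approach changes it: the critical value is looked up from a model-dependent criteria table rather than fixed empirically. The table entries shown are placeholders, not values from the paper.

```python
# Minimal ratio test sketch for ambiguity validation.
def ratio_test(q_best, q_second, critical_value):
    """Accept the best integer candidate if the second-best candidate's
    quadratic-form residual exceeds the best by at least the critical ratio."""
    ratio = q_second / q_best          # >= 1 by construction
    return ratio >= critical_value, ratio

# q values: quadratic-form residuals of the two best integer candidates.
accept, ratio = ratio_test(q_best=0.8, q_second=2.9, critical_value=3.0)
print(f"ratio = {ratio:.2f}, fixed: {accept}")

# Fixed failure rate: choose the critical value per scenario from a
# precomputed table (hypothetical entries) instead of a universal constant.
criteria_table = {("GPS", 0.001): 2.5, ("GPS+Compass", 0.001): 1.8}
cv = criteria_table[("GPS+Compass", 0.001)]
accept, ratio = ratio_test(0.8, 2.9, cv)
print(f"ratio = {ratio:.2f}, critical = {cv}, fixed: {accept}")
```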
Abstract:
Introduction. Calculating segmental (vertebral level-by-level) torso masses in Adolescent Idiopathic Scoliosis (AIS) patients allows the gravitational loading on the scoliotic spine during relaxed standing to be determined. This study used CT scans of AIS patients to measure segmental torso masses and explores how joint moments in the coronal plane are affected by changes in the position of the intervertebral joint's axis of rotation, particularly at the apex of a scoliotic major curve. Methods. Existing low-dose CT data from the Paediatric Spine Research Group were used to calculate vertebral level-by-level torso masses and joint torques occurring in the spine for a group of 20 female AIS patients (mean age 15.0 ± 2.7 years, mean Cobb angle 53 ± 7.1°). The image processing software ImageJ (v1.45, NIH, USA) was used to threshold the T1 to L5 CT images and calculate the segmental torso volume and mass corresponding to each vertebral level. Body segment masses for the head, neck and arms were taken from published anthropometric data. Intervertebral (IV) joint torques at each vertebral level were found using principles of static equilibrium together with the segmental body mass data. Summing the torque contributions for each level above the required joint gave the cumulative joint torque at that level. Since there is some uncertainty in the position of the coronal plane Instantaneous Axis of Rotation (IAR) for scoliosis patients, the IAR was assumed to be located at the centre of the IV disc. A sensitivity analysis was performed to assess the effect of the IAR position on the joint torques by moving it laterally 10 mm in each direction. Results. The magnitude of the torso masses from T1 to L5 increased inferiorly, with a 150% increase in mean segmental torso mass from 0.6 kg at T1 to 1.5 kg at L5. The magnitudes of the calculated coronal plane joint torques during relaxed standing were typically 5-7 Nm at the apex of the curve, with the highest apex joint torque of 7 Nm found in patient 13. Shifting the assumed IAR 10 mm towards the convexity of the spine increased the joint torque at that level by a mean of 9.0%, showing that the calculated joint torques were moderately sensitive to the assumed IAR location. When the IAR was moved 10 mm away from the convexity of the spine, the joint torque reduced by a mean of 8.9%. Conclusion. Coronal plane joint torques as high as 7 Nm can occur during relaxed standing in scoliosis patients, which may help to explain the mechanics of AIS progression. This study provides new anthropometric reference data on vertebral level-by-level torso mass in AIS patients, which will be useful for biomechanical models of scoliosis progression and treatment. However, the CT scans were performed supine (with no gravitational load on the spine), and curve magnitudes in this position are known to be smaller than those measured in standing.
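A minimal sketch of the static-equilibrium torque calculation and the IAR sensitivity analysis is given below. The segmental masses and lateral offsets are illustrative placeholders, not patient data, so the printed sensitivities will not match the paper's 9% figures.

```python
# Sketch of the cumulative coronal-plane joint torque calculation from
# segmental masses and moment arms. Values are illustrative placeholders.
g = 9.81  # m/s^2

# (segment mass in kg, lateral offset of its centre of mass from the
# assumed IAR, in metres) for segments above the joint of interest.
segments = [(0.6, 0.010), (0.7, 0.025), (0.9, 0.040), (1.1, 0.030)]

def joint_torque(segments, iar_shift=0.0):
    # Static equilibrium: torque = sum of (weight * lateral moment arm);
    # shifting the IAR laterally changes every moment arm by the same amount.
    return sum(m * g * (offset - iar_shift) for m, offset in segments)

base = joint_torque(segments)
for shift in (-0.010, 0.0, 0.010):   # +/- 10 mm IAR shift
    t = joint_torque(segments, shift)
    print(f"IAR shift {shift*1000:+.0f} mm: torque {t:.2f} Nm "
          f"({(t - base) / base:+.1%})")
```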
Abstract:
Experimentally, hydrogen-free diamond-like carbon (DLC) films have been assembled by means of pulsed laser deposition (PLD), in which energetic small carbon clusters are deposited on the substrate. In this paper, the chemisorption of energetic C2 and C10 clusters on the diamond (001)-(2×1) surface was investigated by molecular dynamics simulation. The influence of cluster size and impact energy on the structure of the deposited clusters is the main focus. The impact energy was varied from a few tens of eV to 100 eV. Chemisorption of C10 was found to occur only when its incident energy is above a threshold value (Eth), whereas the C2 cluster adsorbed readily on the surface even at much lower incident energies. With increasing impact energy, the structures of the deposited C2 and C10 clusters deviate from those of the free clusters. Finally, the growth of films synthesized from energetic C2 and C10 clusters was simulated. The statistics indicate that the C2 cluster has a high probability of adsorption and that films assembled from C2 present a slightly higher sp3 fraction than C10 films, especially at higher impact energies and lower substrate temperatures. Our results support the experimental findings. Moreover, the simulation elucidates the deposition mechanism at the atomic scale.
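One post-processing step that studies like this depend on is classifying film atoms as sp2- or sp3-bonded from their coordination numbers. The sketch below shows that step on random placeholder coordinates (no periodic boundaries), not actual simulation output; the bond cutoff is a typical but assumed value.

```python
# Sketch: estimating the sp3 fraction of a deposited carbon film by
# counting each atom's neighbours within a bond cutoff
# (4 neighbours ~ sp3, 3 ~ sp2). Coordinates are random placeholders.
import numpy as np

rng = np.random.default_rng(2)
positions = rng.uniform(0, 15.0, (300, 3))   # angstrom, toy carbon film
CUTOFF = 1.85                                # assumed C-C bond cutoff, angstrom

def hybridization_fractions(positions, cutoff=CUTOFF):
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    neighbours = (d < cutoff).sum(axis=1) - 1  # exclude self-distance (0)
    return np.mean(neighbours == 4), np.mean(neighbours == 3)

sp3, sp2 = hybridization_fractions(positions)
print(f"sp3-like fraction: {sp3:.2f}, sp2-like fraction: {sp2:.2f}")
```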
Abstract:
The impact-induced chemisorption of hydrocarbon molecules (CH3 and CH2) on the H-terminated diamond (001)-(2×1) surface was investigated by molecular dynamics simulation using the many-body Brenner potential. The deposition dynamics of the CH3 radical at impact energies of 0.1-50 eV per molecule was studied and the energy threshold for chemisorption was calculated. The impact-induced dissociation of hydrogen atoms and the dimer-opening mechanism on the surface were investigated. Furthermore, the probability of a dimer-opening event induced by chemisorption of CH3 was simulated by randomly varying the impact position as well as the orientation of the molecule relative to the surface. Finally, energetic hydrocarbons were modelled slowing down one after the other to simulate the initial fabrication of diamond-like carbon (DLC) films. The structural characteristics of films synthesized with different hydrogen fluxes were studied. Our results indicate that CH3, CH2 and H are highly reactive and important species in diamond growth. In particular, the fraction of C atoms in the film having sp3 hybridization is enhanced in the presence of H atoms, which is in good agreement with experimental observations.
Abstract:
The adsorption of low-energy C20 isomers on the diamond (001)-(2×1) surface was investigated by molecular dynamics simulation using the Brenner potential. The energy dependence of the chemisorption characteristics was studied. We found that there exists an energy threshold for chemisorption of C20 to occur. Between 10 and 20 eV, the C20 fullerene has a high probability of chemisorption and the adsorbed cage retains its original structure, which supports the experimental observations of memory effects. However, the structures of the adsorbed bowl and ring C20 isomers were different from their original ones. In this case, the local order in cluster-assembled films would differ from that of the free clusters.
Abstract:
The deposition of hyperthermal CH3 on the diamond (001)-(2×1) surface at room temperature has been studied by means of molecular dynamics simulation using the many-body hydrocarbon potential. An energy threshold effect has been observed: with a fixed collision geometry, chemisorption can occur only when the incident energy of CH3 is above a critical value (Eth). With increasing incident energy, dissociation of hydrogen atoms from the incident molecule was observed. The chemisorption probability of CH3 as a function of its incident energy was calculated and compared with that of C2H2. We found that below 10 eV, the chemisorption probability of C2H2 is much lower than that of CH3 on the same surface; notably, it is even lower than that of CH3 on a hydrogen-covered surface at the same impact energy. This indicates that the reactive CH3 molecule is a more important species than C2H2 in low-energy diamond synthesis, which is in good agreement with experimental observation.
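The chemisorption-probability curves these studies report come from repeated impact trials. The sketch below shows that estimation procedure with a toy threshold-plus-noise model standing in for each MD trajectory; the threshold and spread values are hypothetical.

```python
# Sketch of estimating a chemisorption probability curve from repeated
# impact trials. A real study would run one MD trajectory per trial; here
# a toy threshold model with orientation noise stands in for the outcome.
import numpy as np

rng = np.random.default_rng(3)
E_TH = 8.0      # hypothetical chemisorption threshold, eV
SPREAD = 2.0    # spread from random impact position/orientation, eV

def sticks(energy):
    # Toy outcome: chemisorption if energy clears an orientation-dependent
    # effective threshold (stands in for one full MD trajectory).
    return energy > E_TH + rng.normal(0.0, SPREAD)

for energy in (2, 5, 8, 11, 14):            # eV per molecule
    trials = [sticks(energy) for _ in range(500)]
    print(f"{energy:4d} eV: P(chemisorption) ~ {np.mean(trials):.2f}")
```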
Abstract:
In this paper, the collision of a C36 cluster with D6h symmetry on the diamond (001)-(2×1) surface was investigated using molecular dynamics (MD) simulation based on the semi-empirical Brenner potential. The incident kinetic energy of the C36 ranged from 20 to 150 eV per cluster, and the collision dynamics was investigated as a function of the impact energy Ein. The C36 cluster was first impacted towards the center of two dimers with a fixed orientation. It was found that when Ein was lower than 30 eV, C36 bounced off the surface without breaking up. Increasing Ein to 30-45 eV, bonds were formed between C36 and surface dimer atoms, and the adsorbed C36 retained its original free-cluster structure. Around 50-60 eV, the C36 rebounded from the surface with cage defects. Above 70 eV, fragmentation both in the cluster and on the surface was observed. Our simulation supports the experimental finding that, during low-energy cluster beam deposition, small fullerenes can keep their original structure after adsorption (i.e. the memory effect) if Ein is within a certain range. Furthermore, we found that the energy threshold for chemisorption is sensitive to the orientation of the incident C36 and to its impact position on the asymmetric surface.
Abstract:
Background. Knowledge of current trends in nurse-administered procedural sedation and analgesia (PSA) in the cardiac catheterisation laboratory (CCL) may provide important insights into how to improve the safety and effectiveness of this practice. Objective. To characterise current practice, as well as education and competency standards, regarding nurse-administered PSA in Australian and New Zealand CCLs. Design. A quantitative, cross-sectional, descriptive survey design was used. Methods. Data were collected using a web-based questionnaire on practice, educational standards and protocols related to nurse-administered PSA. Descriptive statistics were used to analyse the data. Results. A sample of 62 nurses, each from a different CCL, completed a questionnaire that focused on PSA practice. Over half of the estimated total number of CCLs in Australia and New Zealand was represented. Nurse-administered PSA was used in 94% (n = 58) of respondents' CCLs. All respondents indicated that benzodiazepines, opioids or a combination of both are used for PSA (n = 58); one respondent indicated that propofol was also used. 20% (n = 12) indicated that deep sedation is purposefully induced for defibrillation threshold testing and cardioversion without a second medical practitioner present. Sedation monitoring practices vary considerably between institutions. 31% (n = 18) indicated that comprehensive education about PSA is provided. 45% (n = 26) indicated that nurses who administer PSA should undergo competency assessment. Conclusion. By characterising nurse-administered PSA in Australian and New Zealand CCLs, a baseline for future studies has been established. Areas of particular importance for improvement include protocols for patient monitoring and comprehensive PSA education for CCL nurses in Australia and New Zealand.
Abstract:
Making institutional expectations explicit using clear and common language engages commencing students and promotes help-seeking behaviour. When first-year students enter university they cross the threshold into an unfamiliar environment (Devlin, Kift, Nelson, Smith & McKay, 2012). Universities endeavour to provide appropriate learning support services and resources; however, research suggests that there is limited uptake of these services, particularly among high-risk students (Nelson-Field & Goodman, 2005). The Successful Student Skills Checklist is a tool that will be trialled during the 2013 Orientation period at the QUT Caboolture campus. The new tool is a response to the university's commitment to provide “an environment where [students] are supported to take responsibility for their own learning, and to embrace an active role in succeeding to their full potential” (QUT, 2012, 6.2.1). This paper outlines the design of the support tool implemented during Orientation and discusses the anticipated outcomes of the trial.
Abstract:
During the last several decades, the quality of natural resources and their services has been exposed to significant degradation from increased urban populations, combined with the sprawl of settlements, the development of transportation networks and industrial activities (Dorsey, 2003; Pauleit et al., 2005). As a result of this environmental degradation, a sustainable framework for urban development is required to preserve the resilience of natural resources and ecosystems. Sustainable urban development refers to the management of cities with adequate infrastructure to support the needs of their populations for present and future generations, as well as to maintain the sustainability of their ecosystems (UNEP/IETC, 2002; Yigitcanlar, 2010). One of the important strategic approaches for planning sustainable cities is ‘ecological planning’. Ecological planning is a multi-dimensional concept that aims to preserve biodiversity richness and ecosystem productivity through the sustainable management of natural resources (Barnes et al., 2005). As stated by Baldwin (1985, p. 4), ecological planning is the initiation and operation of activities to direct and control the acquisition, transformation, disruption and disposal of resources in a manner capable of sustaining human activities with a minimum disruption of ecosystem processes. Ecological planning is therefore a powerful method for creating sustainable urban ecosystems. In order to explore the city as an ecosystem and investigate the interaction between the urban ecosystem and human activities, a holistic urban ecosystem sustainability assessment approach is required. Urban ecosystem sustainability assessment serves as a tool that helps policy- and decision-makers improve their actions towards sustainable urban development. Several methods are used in urban ecosystem sustainability assessment, among which sustainability indicators and composite indices are the most commonly used tools for assessing progress towards sustainable land use and urban management. Currently, a variety of composite indices are available to measure sustainability at the local, national and international levels. However, the main conclusion drawn from the literature review is that they are too broad to be applied to assess local and micro-level sustainability, and that benchmark values for most of the indicators do not exist owing to limited data availability and non-comparable data across countries. Mayer (2008, p. 280) underlines this by stating that "as different as the indices may seem, many of them incorporate the same underlying data because of the small number of available sustainability datasets". Mori and Christodoulou (2011) also argue that this relative evaluation and comparison produces biased assessments, as data only exist for some entities, which also means excluding many nations from evaluation and comparison. Thus, there is a need to develop an accurate and comprehensive micro-level urban ecosystem sustainability assessment method. In order to develop such a model, it is practical to adopt an approach that uses indicators for collecting data, designates certain threshold values or ranges, performs a comparative sustainability assessment via indices at the micro level, and aggregates these assessment findings to the local level. Through this approach and model, it is possible to produce sufficient and reliable data to enable comparison at the local level, and to provide useful results to inform local planning, conservation and development decision-making and so secure sustainable ecosystems and urban futures. To advance research in this area, this study investigated the environmental impacts of an existing urban context using a composite index, with the aim of identifying the interaction between urban ecosystems and human activities in the context of environmental sustainability. In this respect, the study developed a new comprehensive urban ecosystem sustainability assessment tool entitled the ‘Micro-level Urban-ecosystem Sustainability IndeX’ (MUSIX). The MUSIX model is an indicator-based indexing model that investigates the factors affecting urban sustainability in a local context. The model outputs provide local and micro-level sustainability reporting guidance to support policy-making concerning environmental issues. A multi-method research approach, based on both quantitative and qualitative analysis, was employed in the construction of the MUSIX model. First, qualitative research was conducted through an interpretive and critical literature review to develop the theoretical framework and select the indicators. Then, quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. The MUSIX model was tested in four pilot study sites selected from the Gold Coast City, Queensland, Australia. The model results captured the sustainability performance of the current urban settings with respect to six main issues of urban development: (1) hydrology, (2) ecology, (3) pollution, (4) location, (5) design and (6) efficiency. For each category, a set of core indicators was assigned which are intended to: (1) benchmark the current situation, strengths and weaknesses; (2) evaluate the efficiency of implemented plans; and (3) measure progress towards sustainable development. While the indicator set of the model provided specific information about the environmental impacts in the area at the parcel scale, the composite index score provided general information about the sustainability of the area at the neighbourhood scale. Finally, in light of the model findings, integrated ecological planning strategies were developed to guide the preparation and assessment of development and local area plans in conjunction with the Gold Coast Planning Scheme, which establishes regulatory provisions to achieve ecological sustainability through the formulation of place codes, development codes, constraint codes and other assessment criteria that provide guidance for best practice development solutions.
These relevant strategies can be summarised as follows:
• Establishing hydrological conservation through sustainable stormwater management, in order to preserve the Earth's water cycle and aquatic ecosystems;
• Providing ecological conservation through sustainable ecosystem management, in order to protect biological diversity and maintain the integrity of natural ecosystems;
• Improving environmental quality through developing pollution prevention regulations and policies, in order to promote high-quality water resources, clean air and enhanced ecosystem health;
• Creating sustainable mobility and accessibility through designing better local services and walkable neighbourhoods, in order to promote safe environments and healthy communities;
• Sustainable design of the urban environment through climate-responsive design, in order to increase the efficient use of solar energy and provide thermal comfort; and
• Use of renewable resources through creating efficient communities, in order to provide long-term management of natural resources for the sustainability of future generations.
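To make the indicator-to-index aggregation concrete, the sketch below normalizes a few parcel-scale indicators against benchmark ranges and averages them into a composite score, which is the general pattern a model like MUSIX follows. The indicator names, benchmark ranges and equal weights are illustrative assumptions, not the MUSIX specification.

```python
# Sketch of indicator-to-index aggregation: normalize each parcel-scale
# indicator against a benchmark range, then aggregate to a composite score.
def normalize(value, worst, best):
    """Map an indicator onto [0, 1], where 1 is the sustainable end."""
    score = (value - worst) / (best - worst)
    return min(max(score, 0.0), 1.0)

# (indicator, observed value, worst benchmark, best benchmark) - illustrative
parcel_indicators = [
    ("impervious surface (%)",   65.0, 100.0,  0.0),
    ("tree canopy cover (%)",    20.0,   0.0, 60.0),
    ("dwelling density (du/ha)", 18.0,   0.0, 40.0),
]

scores = {name: normalize(v, worst, best)
          for name, v, worst, best in parcel_indicators}
composite = sum(scores.values()) / len(scores)   # equal-weight aggregation

for name, s in scores.items():
    print(f"{name:28s} {s:.2f}")
print(f"{'composite index':28s} {composite:.2f}")
```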