920 results for Methods Time Measurement (MTM)
Abstract:
BACKGROUND Measurement of the global burden of disease with disability-adjusted life-years (DALYs) requires disability weights that quantify health losses for all non-fatal consequences of disease and injury. There has been extensive debate about a range of conceptual and methodological issues concerning the definition and measurement of these weights. Our primary objective was a comprehensive re-estimation of disability weights for the Global Burden of Disease Study 2010 through a large-scale empirical investigation in which judgments about health losses associated with many causes of disease and injury were elicited from the general public in diverse communities through a new, standardised approach. METHODS We surveyed respondents in two ways: household surveys of adults aged 18 years or older (face-to-face interviews in Bangladesh, Indonesia, Peru, and Tanzania; telephone interviews in the USA) between Oct 28, 2009, and June 23, 2010; and an open-access web-based survey between July 26, 2010, and May 16, 2011. The surveys used paired comparison questions, in which respondents considered two hypothetical individuals with different, randomly selected health states and indicated which person they regarded as healthier. The web survey added questions about population health equivalence, which compared the overall health benefits of different life-saving or disease-prevention programmes. We analysed paired comparison responses with probit regression analysis on all 220 unique states in the study. We used results from the population health equivalence responses to anchor the results from the paired comparisons on the disability weight scale from 0 (implying no loss of health) to 1 (implying a health loss equivalent to death). Additionally, we compared new disability weights with those used in WHO's most recent update of the Global Burden of Disease Study for 2004. FINDINGS 13,902 individuals participated in household surveys and 16,328 in the web survey. 
Analysis of paired comparison responses indicated a high degree of consistency across surveys: correlations between individual survey results and results from analysis of the pooled dataset were 0·9 or higher in all surveys except in Bangladesh (r=0·75). Most of the 220 disability weights were located on the mild end of the severity scale, with 58 (26%) having weights below 0·05. Five states had weights below 0·01, such as mild anaemia, mild hearing or vision loss, and secondary infertility. The health states with the highest disability weights were acute schizophrenia (0·76) and severe multiple sclerosis (0·71). We identified a broad pattern of agreement between the old and new weights (r=0·70), particularly in the moderate-to-severe range. However, in the mild range below 0·2, many states had significantly lower weights in our study than previously. INTERPRETATION This study represents the most extensive empirical effort yet to measure disability weights. By contrast with the popular hypothesis that disability assessments vary widely across samples with different cultural environments, we have reported strong evidence of highly consistent results.
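The mapping from paired-comparison responses to a latent severity scale can be illustrated with a toy example. The sketch below is not the study's probit regression model; it uses the simpler Thurstone Case V scaling on an invented win matrix (all counts and states hypothetical) to show the core idea of probit-transforming comparison proportions:

```python
import numpy as np
from scipy.stats import norm

def thurstone_scale(wins):
    """Thurstone Case V scaling from a paired-comparison win matrix.

    wins[i, j] = number of respondents who judged state i healthier
    than state j.  Returns a latent 'healthiness' score per state
    (mean zero; more negative = more severe).
    """
    n = wins + wins.T                        # comparisons per pair
    p = np.where(n > 0, wins / np.maximum(n, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)               # avoid infinite z-scores
    z = norm.ppf(p)                          # probit transform
    return z.mean(axis=1)                    # scale value per state

# Hypothetical data: 3 health states, state 0 mildest, state 2 most severe.
wins = np.array([[0, 80, 95],
                 [20, 0, 70],
                 [5, 30, 0]], dtype=float)
scores = thurstone_scale(wins)
print(scores)
```

The study's extra step, anchoring these relative scores onto the absolute 0-1 disability-weight scale, used the population health equivalence responses and is not reproduced here.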
Abstract:
The characterisation of facial expression through landmark-based analysis methods such as FACEM (Pilowsky & Katsikitis, 1994) has a variety of uses in psychiatric and psychological research. In these systems, important structural relationships are extracted from images of facial expressions by the analysis of a pre-defined set of feature points. These relationship measures may then be used, for instance, to assess the degree of variability and similarity between different facial expressions of emotion. FaceXpress is a multimedia software suite that provides a generalised workbench for landmark-based facial emotion analysis and stimulus manipulation. It is a flexible tool that is designed to be specialised at runtime by the user. While FaceXpress has been used to implement the FACEM process, it can also be configured to support any other similar, arbitrary system for quantifying human facial emotion. FaceXpress also implements an integrated set of image processing tools and specialised tools for facial expression stimulus production including facial morphing routines and the generation of expression-representative line drawings from photographs.
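As a hedged illustration of landmark-based measurement (not FACEM's actual feature set; landmark names and coordinates are invented), structural relationship measures can be computed as normalised distances between feature points:

```python
import numpy as np

# Landmark-based expression measures in the spirit of FACEM: distances
# between pre-defined facial feature points, normalised by face width so
# that measures are comparable across images.  Pixel coordinates invented.
landmarks = {
    "mouth_left":   (120, 200), "mouth_right": (180, 200),
    "brow_inner_l": (130, 100), "eye_top_l":   (132, 120),
    "face_left":    (80, 150),  "face_right":  (220, 150),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.hypot(*np.subtract(landmarks[a], landmarks[b])))

face_width = dist("face_left", "face_right")
mouth_width = dist("mouth_left", "mouth_right") / face_width
brow_eye = dist("brow_inner_l", "eye_top_l") / face_width
print(round(mouth_width, 3), round(brow_eye, 3))
```

A runtime-configurable system like FaceXpress would let the user define the landmark set and the relationship measures rather than hard-coding them as above.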
Abstract:
Non-use values (i.e. economic values assigned by individuals to ecosystem goods and services unrelated to current or future uses) provide one of the most compelling incentives for the preservation of ecosystems and biodiversity. Assessing the non-use values of non-users is relatively straightforward using stated preference methods, but the standard approaches for estimating the non-use values of users (stated decomposition) have substantial shortcomings which undermine the robustness of their results. In this paper, we propose a pragmatic interpretation of non-use values to derive estimates that capture their main dimensions, based on the identification of a willingness to pay for ecosystem protection beyond one's expected life. We empirically test our approach using a choice experiment conducted on coral reef ecosystem protection in two coastal areas in New Caledonia with different institutional, cultural, environmental and socio-economic contexts. We compute individual willingness to pay estimates, and derive individual non-use value estimates using our interpretation. We find that, at a minimum, estimates of non-use values may comprise between 25% and 40% of the mean willingness to pay for ecosystem preservation, less than has been found in most studies.
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. 
We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
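Both techniques reduce to matrix powers on a lattice. A minimal 1-D sketch, assuming nearest-neighbour hops on an 8-site pore with reflecting walls (not a geometry from the paper):

```python
import numpy as np

# 1-D pore of N lattice sites with reflecting walls; nearest-neighbour hops.
N = 8
A = np.zeros((N, N))                       # adjacency: allowed single-step moves
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1

# LPC: entry (i, j) of A^t counts the distinct length-t lattice paths i -> j.
t = 6
path_counts = np.linalg.matrix_power(A.astype(np.int64), t)

# MTM: normalise rows of (A + stay-put term) into a stochastic matrix P;
# the restricted propagator after t timesteps is the matrix power P^t.
P = A + np.eye(N)                          # allow resting on a site
P = P / P.sum(axis=1, keepdims=True)
propagator = np.linalg.matrix_power(P, t)

print(path_counts[0, 0], propagator[0].sum())
```

As a sanity check, `path_counts[0, 0]` for a walk of length 6 starting at the wall equals the Catalan number C_3 = 5, and every row of the propagator sums to 1.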
Abstract:
This paper presents an extension of the Rapidly-exploring Random Tree (RRT) algorithm to autonomous, drifting underwater vehicles. The proposed algorithm plans paths that guarantee convergence in the presence of time-varying ocean dynamics. The method uses 4-dimensional ocean-model prediction data as an evolving basis for expanding the tree from the start location to the goal. The performance of the proposed method is validated through Monte Carlo simulations. Results illustrate the importance of temporal variance in path execution and demonstrate the convergence guarantee of the proposed methods.
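For orientation, a bare-bones static 2-D RRT (no obstacles, no currents, all parameters invented) is sketched below; the paper's contribution lies in replacing the static extension step with 4-D ocean-model dynamics:

```python
import math
import random

def rrt(start, goal, steps=500, step_len=0.5, goal_tol=0.5, seed=1):
    """Minimal 2-D RRT: grow a tree from start toward uniform random
    samples until a node lands within goal_tol of the goal.  No
    obstacles and no time-varying currents (the paper's extensions)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}                 # child index -> parent index
    for _ in range(steps):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        # nearest existing tree node to the sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # steer one fixed-length step from the nearest node toward the sample
        new = (nx + step_len * (sample[0] - nx) / d,
               ny + step_len * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            return nodes, parent, len(nodes) - 1
    return nodes, parent, None

nodes, parent, goal_idx = rrt((0.0, 0.0), (9.0, 9.0))
print(len(nodes), goal_idx)
```

In the time-varying setting, the steer step would integrate vehicle motion through the predicted current field at the node's arrival time rather than moving along a straight segment.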
Abstract:
Background Drink driving remains an important issue to address in terms of health and injury prevention, even though research shows a steady decline in drink driving over time. This decline has been attributed to the introduction of countermeasures such as random breath testing (RBT), to changing community attitudes and norms leading to less acceptance of the behaviour and, to a lesser degree, to the implementation of programs designed to deter offenders from engaging in drink driving. Most of the research to date has focused on hard-core offenders - those with a high blood alcohol content at the time of arrest, and those with more than one offence. Aims There has been little research on differences within the first-offender population or on factors contributing to second offences. This research aims to fill that gap by reporting on those factors in a sample of offenders. Methods This paper reports on a study that involved interviewing 198 first offenders in court and following up this group 6-8 months post offence. Of the original participants, 101 offenders could be followed up, with 88 included in this paper on the basis that they had driven a vehicle since the offence. Results While the rate of reported apprehended second offences was low in that time frame (3%), a surprising number of offenders reported that they had driven under the influence at a much higher rate (27%). That is, a large proportion of first offenders were willing to risk the much larger penalties associated with a second offence in order to engage in drink driving. Discussion and conclusions Key characteristics of this follow-up group are examined to inform the development of an evidence-based brief intervention program that targets first-time offenders with the goal of decreasing the rate of repeat drink driving.
Abstract:
Background: This study attempted to develop health risk-based metrics for defining a heatwave in Brisbane, Australia. Methods: A Poisson generalised additive model was used to assess the impact of heatwaves on mortality and emergency hospital admissions (EHAs) in Brisbane. Results: In general, the higher the intensity and the longer the duration of a heatwave, the greater the health impacts. There was no apparent difference in EHA risk during different periods of a warm season. However, the risk of mortality was greater in the second half of a warm season than in the first half. While the elderly (>75 years) were particularly vulnerable to both the EHA and mortality effects of a heatwave, the risk of EHAs also significantly increased for two other age groups (0-64 years and 65-74 years) during severe heatwaves. Different patterns between cardiorespiratory mortality and EHAs were observed. Based on these findings, we propose the use of a tiered heat warning system based on the health risks of heatwaves. Conclusions: Health risk-based metrics are a useful tool for the development of local heatwave definitions. This tool may have significant implications for the assessment of heatwave-related health consequences and for the development of heatwave response plans and implementation strategies.
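The workhorse inside a Poisson generalised additive model is a Poisson log-linear fit. A minimal sketch with invented daily counts and a heatwave indicator (not the study's data or its smooth terms), fitted by iteratively reweighted least squares:

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Fit a Poisson log-linear model y ~ Poisson(exp(X @ b)) by
    iteratively reweighted least squares (the GLM core of a GAM;
    a real GAM adds spline basis columns for smooth terms)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        # Newton step: b += (X' W X)^{-1} X' (y - mu), with W = diag(mu)
        b = b + np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return b

# Hypothetical daily data: deaths rise on heatwave days.
heat = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1], float)
X = np.column_stack([np.ones_like(heat), heat])
true_b = np.array([2.0, 0.5])              # baseline and heatwave effect
y = np.round(np.exp(X @ true_b))           # deterministic 'counts' (7 and 12)
b = poisson_irls(X, y)
print(b)
```

The fitted heatwave coefficient is the log rate ratio of deaths on heatwave versus non-heatwave days; here it recovers log(12/7) exactly because the toy counts are noise-free.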
Abstract:
The technique of photo-CELIV (charge extraction by linearly increasing voltage) is one of the more straightforward and popular approaches to measure the faster carrier mobility in measurement geometries that are relevant for operational solar cells and other optoelectronic devices. It has been used to demonstrate a time-dependent photocarrier mobility in pristine polymers, attributed to energetic relaxation within the density of states. Conversely, in solar cell blends, the presence or absence of such energetic relaxation on transport timescales remains under debate. We developed a complete numerical model and performed photo-CELIV experiments on the model high efficiency organic solar cell blend poly[3,6-dithiophene-2-yl-2,5-di(2-octyldodecyl)-pyrrolo[3,4-c]pyrrole-1,4-dione-alt-naphthalene] (PDPP-TNT):[6,6]-phenyl-C71-butyric-acid-methyl-ester (PC70BM). In the studied solar cells a constant, time-independent mobility on the scale relevant to charge extraction was observed, where thermalisation of photocarriers occurs on time scales much shorter than the transit time. Therefore, photocarrier relaxation effects are insignificant for charge transport in these efficient photovoltaic devices.
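Photo-CELIV mobilities are conventionally extracted with the textbook Juška-type formula from the extraction-current transient. An illustrative calculation with invented device numbers (not values from this study):

```python
# Textbook photo-CELIV mobility estimate:
#   mu = 2 d^2 / (3 A t_max^2 [1 + 0.36 * dj / j0])
# d:     film thickness
# A:     voltage ramp rate of the linearly increasing extraction pulse
# t_max: time of the extraction-current peak
# dj/j0: peak height relative to the capacitive displacement step
# All numbers below are illustrative only.
d = 100e-9          # m
A = 2e5             # V/s (e.g. 2 V ramped over 10 us)
t_max = 2e-6        # s
dj_over_j0 = 1.0
mu = 2 * d**2 / (3 * A * t_max**2 * (1 + 0.36 * dj_over_j0))
print(f"{mu:.2e} m^2/Vs")
```

This gives roughly 6e-9 m^2/Vs (6e-5 cm^2/Vs), a plausible order of magnitude for organic blends; a time-dependent mobility would show up as a systematic drift of such estimates with delay time.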
Abstract:
Background Explosive ordnance disposal (EOD) technicians are often required to wear specialised clothing combinations that protect not only against the risk of explosion but also against potential chemical contamination. This heavy (>35 kg) and encapsulating ensemble is likely to increase physiological strain by increasing metabolic heat production and impairing heat dissipation. This study investigated the physiological tolerance times of two different chemical protective undergarments, commonly worn with EOD personal protective clothing, in a range of simulated environmental extremes and work intensities. Methods Seven males performed eighteen trials wearing the two ensembles. The trials involved walking on a treadmill at 2.5, 4 and 5.5 km.h-1 at each of the following environmental conditions: 21, 30 and 37°C wet bulb globe temperature (WBGT). A trial was ceased if the participant's core temperature reached 39°C, if heart rate exceeded 90% of maximum, if walking time reached 60 minutes, or on volitional fatigue. Results Physiological tolerance times ranged from 8 to 60 min and were similar in both ensembles (mean difference: 2.78 min, P>0.05). A significant effect on tolerance time was observed for environment (21>30>37°C WBGT, P<0.05) and work intensity (2.5>4>5.5 km.h-1, P<0.05). The majority of trials across both ensembles (101/126; 80.1%) were terminated because participants reached a heart rate greater than 90% of their maximum. Conclusions Physiological tolerance times wearing these two chemical protective undergarments, worn underneath EOD personal protective clothing, were similar and predominantly limited by cardiovascular strain.
Abstract:
Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
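ABC rejection itself is short. A hedged toy version for the motility-only scenario, replacing the discrete spreading model with a simple sqrt(4Dt) leading-edge law (illustrative numbers only, not the 3T3 data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: leading-edge position of a diffusing front,
# x_edge(t) ~ sqrt(4 D t).  A stand-in for the paper's discrete
# collective-spreading model; times and noise level are invented.
times = np.array([12.0, 24.0, 36.0, 48.0])          # hours

def edge(D):
    return np.sqrt(4.0 * D * times)

D_true = 1000.0                                      # um^2 / h
observed = edge(D_true) + rng.normal(0.0, 5.0, times.size)

# ABC rejection: sample D from a uniform prior, simulate the summary
# statistic (edge positions), and keep the draws closest to the data.
prior = rng.uniform(100.0, 3000.0, 200_000)
dist = np.abs(edge(prior[:, None]) - observed).mean(axis=1)
posterior = prior[dist < np.quantile(dist, 0.005)]   # keep best 0.5%

print(posterior.mean(), posterior.std() / posterior.mean())
```

The second printed quantity is the posterior coefficient of variation, the uncertainty measure the paper reports for its estimates of D and λ.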
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on an empirical stochastic model which may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single-receiver data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differencing method and the polynomial prediction method. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is therefore applicable to both differenced and undifferenced data processing modes. However, the methods may be limited to normal ionospheric conditions and to GNSS receivers with low noise autocorrelation. Experimental results also indicate that the proposed method can yield more realistic parameter precision.
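The time-differencing idea can be sketched in a few lines: differencing consecutive epochs (twice, here) cancels the slowly varying geometry, ionosphere, and ambiguity terms, leaving amplified white noise whose variance is a known multiple of the single-epoch variance. All numbers below are invented, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated single-receiver observation series: a slowly varying
# systematic part (geometry + ionosphere + constant ambiguity) plus
# white measurement noise.  Units and magnitudes are illustrative.
t = np.arange(0.0, 100.0, 1.0)                  # 1 Hz epochs
trend = 0.05 * t + 1e-4 * t**2                  # slow systematic part
sigma_true = 0.003                              # 3 mm observation noise
obs = trend + rng.normal(0.0, sigma_true, t.size)

# Second time-difference: removes constant and linear trend terms.
# For white noise, Var(second difference) = 6 * Var(single epoch).
dd = np.diff(obs, n=2)
sigma_hat = dd.std() / np.sqrt(6.0)
print(sigma_hat)
```

The factor 6 comes from the second-difference weights (1, -2, 1): 1 + 4 + 1 = 6. Correlated receiver noise would bias this estimate, which is why the method assumes low noise autocorrelation.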
Abstract:
This article aims to fill the gap of second-order accurate, unconditionally stable schemes for the time-fractional subdiffusion equation. Two fully discrete schemes are first proposed for the time-fractional subdiffusion equation, with space discretized by finite elements and time discretized by fractional linear multistep methods. These two methods are unconditionally stable with a maximum global convergence order of $O(\tau+h^{r+1})$ in the $L^2$ norm, where $\tau$ and $h$ are the step sizes in time and space, respectively, and $r$ is the degree of the piecewise polynomial space. The average convergence rates of the two methods in time are also investigated and shown to be $O(\tau^{1.5}+h^{r+1})$. Furthermore, two improved algorithms are constructed; they are also unconditionally stable and convergent of order $O(\tau^2+h^{r+1})$. Numerical examples are provided to verify the theoretical analysis. Comparisons between the present algorithms and existing ones are included, and show that our numerical algorithms exhibit better performance than the known ones.
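The first-order member of the fractional linear multistep family uses the Grünwald-Letnikov weights, the Taylor coefficients of the generating function (1-z)^α; the second-order schemes discussed in the paper use higher-order generating functions, but the weight recurrence follows the same pattern:

```python
import numpy as np

def gl_weights(alpha, n):
    """First n coefficients of (1 - z)^alpha, i.e. the weights of the
    first-order Grunwald-Letnikov fractional linear multistep method,
    computed by the standard one-term recurrence."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    return w

alpha = 0.5                      # fractional order of the time derivative
w = gl_weights(alpha, 6)
print(w)
```

Closed forms for the first weights, w_1 = -α and w_2 = α(α-1)/2, provide a quick correctness check on the recurrence.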
Abstract:
The fractional Fokker-Planck equation is an important physical model for simulating anomalous diffusion with external forces. Because of the non-local property of the fractional derivative, an interesting problem is to explore high-accuracy numerical methods for fractional differential equations. In this paper, a space-time spectral method is presented for the numerical solution of the time-fractional Fokker-Planck initial-boundary value problem. The proposed method employs Jacobi polynomials for the temporal discretization and Fourier-like basis functions for the spatial discretization. The diagonalizable trait of the Fourier-like basis functions leads to a reduced representation of the inner product in the Galerkin analysis. We prove that, with the present method, the time-fractional Fokker-Planck equation attains the same approximation order as the time-fractional diffusion equation developed in [23]. This indicates that exponential decay of the error may be achieved if the exact solution is sufficiently smooth. Finally, some numerical results are given to demonstrate the high-order accuracy and efficiency of the new numerical scheme. The results show that the errors of the numerical solutions obtained by the space-time spectral method decay exponentially.
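The exponential (spectral) convergence claim is easy to see in a generic setting. The demo below is not the paper's Fokker-Planck solver; it applies Fourier spectral differentiation to a smooth periodic function and reaches near machine precision with only a handful of modes:

```python
import numpy as np

# Spectral (Fourier) differentiation of a smooth periodic function:
# for analytic data the error decays exponentially in the number of
# modes, the hallmark behaviour of spectral methods.
N = 32
x = 2 * np.pi * np.arange(N) / N
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)          # i * wavenumber
u = np.exp(np.sin(x))                           # smooth periodic test function
du = np.fft.ifft(ik * np.fft.fft(u)).real       # spectral derivative
err = np.abs(du - np.cos(x) * u).max()          # exact derivative: cos(x) e^sin(x)
print(err)
```

Halving N to 16 already gives visible error while N = 32 is essentially exact, which is the "exponential decay" being claimed for sufficiently smooth solutions.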
Abstract:
Background: Overviews of systematic reviews (SRs) are useful for public health policy; however, there is an absence of Cochrane Overviews covering public health (PH) topics. Objectives: We sought to analyze the methodological approaches used in existing Cochrane Overviews and Protocols for overviews (primarily clinical in nature), and to compare these with the methods and approaches used in non-Cochrane PH overviews. The intent was to identify issues relevant to undertaking Cochrane overviews. Methods: We conducted a descriptive analysis of overviews published between 1999 and 2014. We searched the Cochrane Database of Systematic Reviews for Cochrane Protocols for overviews and Cochrane Overviews, and HealthEvidence.org for PH overviews. The primary characteristics of the overviews and elements of their methodology were extracted and compared. Results: A total of 61 overviews of SRs were included in our analysis: 21 Cochrane Protocols for overviews, 15 Cochrane Overviews, and 27 non-Cochrane PH overviews. The most significant differences are that non-Cochrane PH overviews tend to: include earlier and more reviews and greater numbers of participants; allow lower levels of evidence; use assessment tools other than AMSTAR (A Measurement Tool to Assess Systematic Reviews, a tool for assessing the quality of SRs); not assess the quality of evidence in reviews; search more databases overall; specify search limits, including English-only reviews; and not consider recent primary studies for inclusion. Some of these differences clearly relate to quality; however, many relate to the nuances of PH interventions. Conclusions: The methodology in Cochrane overviews and PH overviews varies widely. Future PH overviews may benefit from the Cochrane methodology, but the Cochrane approach requires modification to accommodate PH research methodology.
Additionally, the use of databases that pre-screen and quality assess relevant PH systematic reviews may help expedite the search process.
Abstract:
The development of methods for real-time crash prediction as a function of current or recent traffic and roadway conditions is gaining increasing attention in the literature. Numerous studies have modeled the relationships between traffic characteristics and crash occurrence, and significant progress has been made. Given the accumulated evidence on this topic and the lack of an articulate summary of research status, challenges, and opportunities, there is an urgent need to scientifically review these studies and to synthesize the existing state-of-the-art knowledge. This paper addresses this need by undertaking a systematic literature review to identify current knowledge, challenges, and opportunities, and then conducts a meta-analysis of existing studies to provide a summary impact of traffic characteristics on crash occurrence. Sensitivity analyses were conducted to assess quality, publication bias, and outlier bias of the various studies; and the time intervals used to measure traffic characteristics were also considered. As a result of this comprehensive and systematic review, issues in study designs, traffic and crash data, and model development and validation are discussed. Outcomes of this study are intended to provide researchers focused on real-time crash prediction with greater insight into the modeling of this important but extremely challenging safety issue.
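The pooling step of such a meta-analysis is compact. A hedged sketch with invented study effects (not results from the reviewed literature), using standard inverse-variance fixed-effect weighting of log odds ratios:

```python
import numpy as np

def fixed_effect_meta(log_or, se):
    """Inverse-variance fixed-effect pooling of study log odds ratios,
    the basic building block of a meta-analysis; random-effects and
    publication-bias adjustments would build on this."""
    w = 1.0 / np.asarray(se) ** 2          # precision weights
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = 1.0 / np.sqrt(np.sum(w))
    return pooled, pooled_se

# Hypothetical studies: effect of a traffic characteristic on crash odds.
log_or = np.array([0.40, 0.25, 0.55])      # per-study log odds ratios
se = np.array([0.10, 0.15, 0.20])          # their standard errors
pooled, pooled_se = fixed_effect_meta(log_or, se)
print(np.exp(pooled), pooled_se)
```

The exponentiated pooled estimate is the summary odds ratio; sensitivity analyses like those in the paper would repeat the pooling while excluding low-quality or outlying studies.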