990 results for Estimation errors
Abstract:
Osteoporosis (OP) is a systemic skeletal disease characterized by low bone mineral density (BMD) and micro-architectural (MA) deterioration. Clinical risk factors (CRF) are often used as an approximation of MA. MA can nevertheless be evaluated in daily practice with the trabecular bone score (TBS), which is very simple to obtain by reanalysing a lumbar DXA scan. TBS has proven diagnostic and prognostic value, partially independent of CRF and BMD. The aim of the OsteoLaus cohort is to combine, in daily practice, the CRF with the information given by DXA (BMD, TBS and vertebral fracture assessment (VFA)) to better identify women at high fracture risk. The OsteoLaus cohort (1400 women aged 50 to 80 years living in Lausanne, Switzerland) started in 2010. It is derived from the COLAUS cohort, which started in Lausanne in 2003; the main goal of COLAUS is to obtain information on the epidemiology and genetic determinants of cardiovascular risk in 6700 men and women. CRF for OP, bone ultrasound of the heel, lumbar spine and hip BMD, VFA by DXA and MA evaluation by TBS are recorded in OsteoLaus. Preliminary results are reported here. We included 631 women: mean age 67.4 ± 6.7 years, BMI 26.1 ± 4.6, mean lumbar spine BMD 0.943 ± 0.168 (T-score −1.4 SD) and TBS 1.271 ± 0.103. As expected, the correlation between BMD and site-matched TBS is low (r² = 0.16). The prevalence of vertebral fracture (VFx) grade 2/3, major OP fracture (Fx) and all OP Fx is 8.4%, 17.0% and 26.0%, respectively. Age- and BMI-adjusted odds ratios (per SD decrease) are 1.8 (1.2-2.5), 1.6 (1.2-2.1) and 1.3 (1.1-1.6) for BMD across these fracture categories, and 2.0 (1.4-3.0), 1.9 (1.4-2.5) and 1.4 (1.1-1.7) for TBS, respectively. Taken separately, a BMD < −2.5 SD or a TBS < 1.200 identifies only 32 to 37% of women with an OP Fx; combining the two criteria (BMD < −2.5 SD or TBS < 1.200) identifies 54 to 60% of women with an osteoporotic Fx. As in previously published studies, these preliminary results confirm the partial independence between BMD and TBS. More importantly, adding TBS to BMD significantly increases the identification of women with prevalent OP Fx who would have been misclassified by BMD alone. For the first time we can obtain complementary information on fracture (VFA), density (BMD) and micro- and macro-architecture (TBS and HSA) from a single, simple, cheap, low-ionizing-radiation device: DXA. Such complementary information is very useful for the patient in daily practice and will likely have an impact on cost-effectiveness analyses.
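The "age- and BMI-adjusted OR per SD decrease" quoted above is the kind of quantity that comes out of a logistic regression in which the predictor (BMD or TBS) is expressed in standard-deviation units. The sketch below, using synthetic data and statsmodels, is only meant to make that quantity concrete; it is not the OsteoLaus analysis, and every number in it is invented.

```python
# Minimal sketch of how an "age- and BMI-adjusted OR per SD decrease" is
# typically obtained with logistic regression. All data below are synthetic
# and the numbers are invented; this is not the OsteoLaus analysis itself.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 631
age = rng.normal(67.4, 6.7, n)
bmi = rng.normal(26.1, 4.6, n)
tbs = rng.normal(1.271, 0.103, n)

# synthetic prevalent-fracture outcome, loosely tied to low TBS and older age
lin = -1.6 - 8.0 * (tbs - tbs.mean()) + 0.03 * (age - age.mean())
fx = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# express TBS as a *decrease* in SD units so exp(coef) is the OR per SD decrease
tbs_sd_decrease = (tbs.mean() - tbs) / tbs.std()
X = sm.add_constant(np.column_stack([tbs_sd_decrease, age, bmi]))
res = sm.Logit(fx, X).fit(disp=0)

or_per_sd = np.exp(res.params[1])
ci_low, ci_high = np.exp(res.conf_int()[1])
print(f"adjusted OR per SD decrease in TBS: {or_per_sd:.2f} ({ci_low:.2f}-{ci_high:.2f})")
```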
Abstract:
In the current issue of Epidemiology, Danaei and colleagues elegantly estimated the direct effect of body mass index (BMI) on the risk of coronary heart disease (CHD) as well as its indirect effect, that is, the effect mediated by blood pressure, cholesterol, glucose, fibrinogen, and high-sensitivity C-reactive protein (hs-CRP). They analyzed data from 9 cohort studies including 58,322 patients and 9,459 CHD events, with baseline measurements between 1954 and 2001. Using sophisticated and cutting-edge methods for direct and indirect effect estimation, the authors estimated that half of the risk of overweight and obesity would be mediated by blood pressure, cholesterol, and glucose. A few additional percentage points of the risk would be mediated by fibrinogen and hs-CRP. How should we understand these estimates? Can we say that if obese persons reduce their body weight and reach a normal body weight, their excess risk of CHD would be reduced by half through an improvement in these mediators and by half through the reduction in BMI itself? Is that also true if these individuals are prevented from becoming obese in the first place? Can we also conclude that if these mediators are well controlled in obese individuals through means other than body weight reduction, their excess risk of CHD would be reduced by half? Let us confront these estimates with observations from studies evaluating two interventions to reduce body weight: bariatric surgery in patients with severe obesity, and intensive lifestyle intervention in overweight patients with diabetes.
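The statement that half of the excess risk is mediated can be made concrete with the crude "difference method": compare the exposure coefficient before and after conditioning on the mediators. The sketch below uses invented relative risks and is not the counterfactual direct/indirect-effect decomposition used by Danaei and colleagues.

```python
# Crude "difference method" illustration of "half of the excess risk is
# mediated": compare the exposure coefficient with and without the mediators
# in the model. The relative risks below are invented; this is not the
# counterfactual decomposition used in the commented study.
import math

rr_total = 1.60    # hypothetical total RR of obesity for CHD
rr_direct = 1.26   # hypothetical RR after conditioning on BP, cholesterol, glucose

proportion_mediated = 1.0 - math.log(rr_direct) / math.log(rr_total)
print(f"proportion of excess risk mediated ~ {proportion_mediated:.0%}")  # ~51%
```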
Abstract:
The objective of this work was to evaluate an estimation system for rice yield in Brazil, based on simple agrometeorological models and on the technological level of production systems. The estimation system incorporates the conceptual basis proposed by Doorenbos & Kassam for potential and attainable yields, with empirical adjustments for maximum yield and for crop sensitivity to water deficit, considering five categories of rice yield. Rice yield was estimated from 2000/2001 to 2007/2008 and compared to IBGE yield data. Regression analyses between model estimates and data from IBGE surveys resulted in significant coefficients of determination, with less dispersion in the South than in the North and Northeast regions of the country. The index of model efficiency (E1') ranged from 0.01 in the lower yield classes to 0.45 in the higher ones, and the mean absolute error ranged from 58 to 250 kg ha-1, respectively.
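The attainable-yield side of the Doorenbos & Kassam approach mentioned above reduces the potential yield in proportion to the relative evapotranspiration deficit through a crop-sensitivity factor ky. A minimal sketch of that relation follows; the parameter values are illustrative placeholders, not the calibration adopted in the paper.

```python
# Minimal sketch of the Doorenbos & Kassam (FAO-33) yield-response-to-water
# relation the estimation system builds on: potential yield is reduced in
# proportion to the relative evapotranspiration deficit through ky. Parameter
# values are illustrative placeholders, not the calibration used in the paper.

def attainable_yield(potential_yield, eta, etp, ky):
    """Attainable yield (kg/ha) given actual (ETa) and maximum (ETp) crop
    evapotranspiration (mm) and the yield-response factor ky."""
    relative_deficit = 1.0 - eta / etp
    return potential_yield * (1.0 - ky * relative_deficit)

if __name__ == "__main__":
    yp = 8000.0   # hypothetical potential rice yield, kg/ha
    ky = 1.1      # hypothetical crop sensitivity to water deficit
    for eta in (500.0, 420.0, 330.0):        # actual ET, mm
        ya = attainable_yield(yp, eta, etp=500.0, ky=ky)
        print(f"ETa = {eta:5.0f} mm -> attainable yield ~ {ya:7.1f} kg/ha")
```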
Abstract:
This article reports on a lossless data-hiding scheme for digital images in which the data-hiding capacity is determined either by the minimum acceptable subjective quality or by the demanded capacity. In the proposed method, data are hidden within the image prediction errors; well-known prediction algorithms such as the median edge detector (MED), gradient adjusted prediction (GAP) and the Jiang predictor are tested for this purpose. First, the histogram of the prediction errors of the image is computed; then, based on the required capacity or the desired image quality, the prediction-error values whose frequencies are larger than this capacity are shifted. The empty space created by this shift is used for embedding the data. Experimental results show the distinct superiority of the prediction-error histogram over the conventional image histogram, owing to the much narrower spectrum of the former. We have also devised an adaptive method for hiding data, in which subjective quality is traded for data-hiding capacity. Here the positive and negative error values are chosen such that the sum of their frequencies in the histogram is just above the given capacity or above a certain quality.
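A minimal sketch of the two ingredients described above, the MED predictor and histogram shifting of the prediction errors, is given below. It is a simplified illustration (a single peak bin, no overflow handling, embedding shown on the error array only), not the authors' full reversible scheme.

```python
# Simplified sketch: MED prediction errors and histogram-shift embedding.
# Single peak bin, no overflow handling; the complete reversible scheme of the
# paper is not reproduced here.
import numpy as np

def med_errors(img):
    """Prediction errors of the MED (LOCO-I) predictor for all pixels that
    have a left, top and top-left neighbour (first row/column are skipped)."""
    x = img.astype(np.int32)
    a = x[1:, :-1]    # left neighbour
    b = x[:-1, 1:]    # top neighbour
    c = x[:-1, :-1]   # top-left neighbour
    pred = np.where(c >= np.maximum(a, b), np.minimum(a, b),
           np.where(c <= np.minimum(a, b), np.maximum(a, b), a + b - c))
    return x[1:, 1:] - pred

def embed_in_errors(errors, bits):
    """Histogram-shift embedding: open an empty bin next to the histogram peak
    and map peak-valued errors to peak/peak+1 according to the payload bits.
    Modifies `errors` in place and returns the number of bits embedded."""
    vals, counts = np.unique(errors, return_counts=True)
    peak = int(vals[np.argmax(counts)])
    errors[errors > peak] += 1             # shift to free the bin at peak + 1
    flat = errors.reshape(-1)
    k = 0
    for i in range(flat.size):
        if k == len(bits):
            break
        if flat[i] == peak:
            flat[i] = peak + int(bits[k])  # bit 0 -> peak, bit 1 -> peak + 1
            k += 1
    return k

if __name__ == "__main__":
    # smooth synthetic image: its prediction-error histogram is far narrower
    # than its pixel histogram, which is what creates the embedding capacity
    img = (np.add.outer(np.arange(64), np.arange(64)) % 256).astype(np.uint8)
    err = med_errors(img)
    payload = np.random.default_rng(0).integers(0, 2, size=200)
    print("bits embedded:", embed_in_errors(err, payload))
```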
Abstract:
In a system where tens of thousands of words are made up of a limited number of phonemes, many words are bound to sound alike. This similarity of the words in the lexicon, as characterized by phonological neighbourhood density (PhND), has been shown to affect the speed and accuracy of word comprehension and production. Whereas there is a consensus about the interfering nature of neighbourhood effects in comprehension, the language production literature offers a more contradictory picture, with mainly facilitatory but also interfering effects reported on word production. Here we report both types of effect in the same study. Mixed-model multiple regression analyses were conducted on PhND effects on errors produced in a naming task by a group of 21 participants with aphasia. These participants produced more formal errors (an interfering effect) for words in dense phonological neighbourhoods, but produced fewer nonwords and semantic errors (a facilitatory effect) with increasing density. In order to investigate the nature of these opposite effects of PhND, we further analysed a subset of formal and nonword errors by distinguishing errors differing from the target by a single phoneme (corresponding to the definition of phonological neighbours) from those differing by two or more phonemes. This analysis confirmed that only formal errors that were phonological neighbours of the target increased in dense neighbourhoods, while all other errors decreased. Based on additional observations favouring a lexical origin of these formal errors (they exceeded the probability of producing a real-word error by chance, were of higher frequency, and preserved the grammatical category of the targets), we suggest that the interfering effect of PhND is due to competition between lexical neighbours and target words in dense neighbourhoods.
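The neighbour criterion used above to split the errors, differing from the target by exactly one phoneme (substitution, addition or deletion), can be checked with a phoneme-level edit distance. The sketch below is a generic illustration with made-up transcriptions, not the materials or coding scheme of the study.

```python
# Generic check of the phonological-neighbour criterion: an error counts as a
# neighbour of the target if its phoneme string is exactly one substitution,
# addition, or deletion away. The transcriptions below are made-up examples.

def phoneme_edit_distance(a, b):
    """Levenshtein distance over phoneme sequences (lists of phoneme strings)."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, start=1):
        cur = [i]
        for j, pb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (pa != pb)))    # substitution
        prev = cur
    return prev[-1]

def is_phonological_neighbour(target, error):
    return phoneme_edit_distance(target, error) == 1

if __name__ == "__main__":
    target = ["k", "a", "t"]                                  # hypothetical target
    print(is_phonological_neighbour(target, ["k", "a", "p"]))  # True: one phoneme off
    print(is_phonological_neighbour(target, ["k", "o", "p"]))  # False: two phonemes off
```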
Abstract:
Summary: Heritabilities and repeatabilities of harness racing performance measures based on individual race results
Abstract:
The purpose of this bachelor's thesis was to review scientific research articles in order to identify factors contributing to medication errors made by nurses in a hospital setting and to present methods to prevent such errors. Additionally, international and Finnish research was combined and the findings were considered in relation to the Finnish health care system. A literature review of 23 scientific articles was conducted. Data were searched systematically in the CINAHL, MEDIC and MEDLINE databases, and also manually. The literature was analysed and the findings were combined using inductive content analysis. The findings revealed that both organisational and individual factors contributed to medication errors. High workload, communication breakdowns, an unsuitable working environment, distractions and interruptions, and similar medication products were identified as organisational factors. Individual factors included nurses' failure to follow protocol, inadequate knowledge of medications and personal qualities of the nurse. Developing and improving the physical environment, error reporting, and medication management protocols were emphasised as methods to prevent medication errors. Investing in staff competence and well-being was also identified as a prevention method. The number of Finnish articles was small, and therefore the applicability of the findings to Finland is difficult to assess; however, the findings seem to fit the Finnish health care system relatively well. Further research is needed to identify the factors that contribute to medication errors in Finland, as this is a prerequisite for developing prevention methods that fit the Finnish health care system.
Abstract:
This paper deals with the goodness of the Gaussian assumption when designing second-order blind estimation methods in the context of digital communications. The low- and high-signal-to-noise ratio (SNR) asymptotic performance of the maximum likelihood estimator, derived assuming Gaussian transmitted symbols, is compared with the performance of the optimal second-order estimator, which exploits the actual distribution of the discrete constellation. The asymptotic study concludes that the Gaussian assumption leads to the optimal second-order solution if the SNR is very low or if the symbols belong to a multilevel constellation such as quadrature amplitude modulation (QAM) or amplitude-phase-shift keying (APSK). On the other hand, the Gaussian assumption can yield important losses at high SNR if the transmitted symbols are drawn from a constant-modulus constellation such as phase-shift keying (PSK) or continuous-phase modulation (CPM). These conclusions are illustrated for the problem of direction-of-arrival (DOA) estimation of multiple digitally modulated signals.
Abstract:
This paper analyzes the asymptotic performance of maximum likelihood (ML) channel estimation algorithms in wideband code division multiple access (WCDMA) scenarios. We concentrate on systems with periodic spreading sequences (period larger than or equal to the symbol span) where the transmitted signal contains a code-division-multiplexed pilot for channel estimation purposes. First, the asymptotic covariances of the training-only, semi-blind conditional maximum likelihood (CML) and semi-blind Gaussian maximum likelihood (GML) channel estimators are derived. Then, these formulas are further simplified assuming randomized spreading and training sequences under the approximation of high spreading factors and a high number of codes. The results provide a useful tool to describe the performance of the channel estimators as a function of basic system parameters such as the number of codes, the spreading factors, or the traffic-to-training power ratio.
Abstract:
In this paper, the theory of hidden Markov models (HMM) is applied to the problem of blind (without training sequences) channel estimation and data detection. Within an HMM framework, the Baum-Welch (BW) identification algorithm is frequently used to find maximum-likelihood (ML) estimates of the corresponding model. However, such a procedure assumes the model (i.e., the channel response) to be static throughout the observation sequence. By introducing a parametric model for time-varying channel responses, a version of the algorithm that is more appropriate for mobile channels, the time-dependent Baum-Welch (TDBW) algorithm, is derived. To compare algorithm behavior, a set of computer simulations for a GSM scenario is provided. Results indicate that, in comparison to other Baum-Welch (BW) versions of the algorithm, the TDBW approach attains a remarkable enhancement in performance, at the cost of only a moderate increase in computational complexity.
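For reference, the sketch below is a generic, textbook Baum-Welch re-estimation for a small discrete HMM; it only makes concrete the static-model assumption the abstract refers to (a single transition matrix A and emission matrix B held fixed over the whole observation sequence). The parametric time-varying extension (TDBW) proposed in the paper is not reproduced here.

```python
# Generic textbook Baum-Welch for a discrete HMM with Rabiner-style scaling.
# The parameters A and B are static over the sequence, which is exactly the
# assumption the TDBW variant in the paper relaxes.
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    obs = np.asarray(obs)
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)
    for _ in range(n_iter):
        # forward pass with per-step scaling
        alpha = np.zeros((T, n_states)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # backward pass reusing the same scaling factors
        beta = np.ones((T, n_states))
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta                       # state posteriors
        gamma /= gamma.sum(1, keepdims=True)
        xi = np.zeros((n_states, n_states))        # expected transition counts
        for t in range(T - 1):
            xi += (alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]) / c[t + 1]
        # M-step: re-estimate the (static) parameters
        pi = gamma[0]
        A = xi / gamma[:-1].sum(0)[:, None]
        B = np.vstack([gamma[obs == k].sum(0) for k in range(n_symbols)]).T
        B /= gamma.sum(0)[:, None]
    return pi, A, B

if __name__ == "__main__":
    obs = [0, 0, 1, 0, 2, 2, 1, 2, 2, 0, 0, 1] * 10   # toy observation sequence
    pi, A, B = baum_welch(obs, n_states=2, n_symbols=3)
    print(np.round(A, 2)); print(np.round(B, 2))
```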