983 results for "Estimation par maximum de vraisemblance" (maximum likelihood estimation)


Relevance: 20.00%

Abstract:

Parenteral anticoagulation is a cornerstone in the management of venous and arterial thrombosis. Unfractionated heparin has a wide dose/response relationship, requiring frequent and troublesome laboratory follow-up. Because of all these factors, low-molecular-weight heparin use has been increasing. Inadequate dosage has been pointed out as a potential problem because the use of subjectively estimated weight instead of actual measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weight of 28 patients in need of parenteral anticoagulation. Baseline and steady-state (after the second subcutaneous dose of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into 2 groups according to anticoagulation adequacy. Of the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID of enoxaparin. Only 4 (14.3%) of all patients had anti-Xa activity below the lower limit of the therapeutic range (<0.5 IU/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 IU/mL) during the initial, crucial phase of treatment. (C) 2011 Elsevier Inc. All rights reserved.
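As a rough illustration of the dosing arithmetic described in this abstract, the sketch below applies the 0.9 mg/kg-per-dose cutoff and the 0.5 IU/mL lower anti-Xa limit to hypothetical patient values; it is not the study's analysis code.

```python
# Minimal sketch (illustrative only): classify enoxaparin dosing adequacy as the
# abstract describes -- dose per kg of *measured* weight against the 0.9 mg/kg
# BID cutoff, and steady-state anti-Xa against the 0.5 IU/mL lower therapeutic
# limit. The patient records below are hypothetical, not the study data.

def classify(dose_mg: float, measured_weight_kg: float, anti_xa_iu_ml: float) -> dict:
    mg_per_kg = dose_mg / measured_weight_kg
    return {
        "mg_per_kg": round(mg_per_kg, 2),
        "group": 1 if mg_per_kg >= 0.9 else 2,      # group 1: >= 0.9 mg/kg per dose
        "subtherapeutic": anti_xa_iu_ml < 0.5,      # below lower therapeutic limit
    }

if __name__ == "__main__":
    # hypothetical patients: (prescribed dose in mg, measured weight in kg, anti-Xa in IU/mL)
    for dose, weight, anti_xa in [(80, 85, 0.62), (60, 92, 0.41)]:
        print(classify(dose, weight, anti_xa))
```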

Relevance: 20.00%

Abstract:

Background and Aim: Tissue injury leads to activation of coagulation and generation of thrombin. Inhibition of the thrombin receptor protease-activated receptor 1 (PAR-1) has been shown to reduce liver fibrosis in animals. This study aimed to evaluate the effect of PAR-1 gene polymorphisms on the rate of liver fibrosis (RF) in chronic hepatitis C. Methods: The polymorphisms studied were a C > T transition 1426 bp upstream of the translation start site (-1426C/T), a 13 bp repeat of the 5'-CGGCCGCGGGAAG-3' sequence preceding position -506 (-506I/D), and an A > T transversion in the intervening sequence (IVS) 14 bp upstream of the exon-2 start site (IVS-14A/T). A total of 287 European and 90 Brazilian patients were studied. Results: -1426C/T polymorphism: there was a trend toward higher RF in patients with the TT genotype (P = 0.06) and an association between the CC genotype and slow fibrosis (P = 0.03) in Europeans. In males, RF was significantly higher in those with the TT genotype compared to CT (P = 0.003) and CC (P = 0.007). There was a significant association between TT and fast fibrosis (P = 0.04). This was confirmed in an independent cohort of Brazilians, where RF was higher in TT than in CC (P = 0.03). Analysis of -506I/D showed no difference in RF or in the distribution of slow/fast fibrosis among genotypes in either population. Analysis of IVS-14A/T showed no difference between genotypes. Conclusion: These findings suggest that PAR-1 receptor polymorphisms influence the progression of liver fibrosis.

Relevance: 20.00%

Abstract:

Background: Food portion size estimation involves a complex mental process that may influence the evaluation of food consumption. Knowing the variables that influence this process can improve the accuracy of dietary assessment. The present study aimed to evaluate the ability of nutrition students to estimate food portions in usual meals and to relate food energy content to errors in food portion size estimation. Methods: Seventy-eight nutrition students, who had already studied food energy content, participated in this cross-sectional study on the estimation of food portions, organised into four meals. The participants estimated the quantity of each food, in grams or millilitres, with the food in view. Estimation errors were quantified and their magnitude was evaluated. Estimated quantities (EQ) lower than 90% and higher than 110% of the weighed quantity (WQ) were considered to represent underestimation and overestimation, respectively. The correlation between food energy content and estimation error was analysed with Spearman's correlation, and the comparison between the mean EQ and WQ was performed with the Wilcoxon signed rank test (P < 0.05). Results: A low percentage of estimates (18.5%) were considered accurate (+/- 10% of the actual weight). The most frequently underestimated food items were cauliflower, lettuce, apple and papaya; the most often overestimated items were milk, margarine and sugar. A significant positive correlation between food energy density and estimation error was found (r = 0.8166; P = 0.0002). Conclusions: The results revealed a low percentage of acceptable estimations of food portion size by nutrition students, with trends toward overestimation of high-energy food items and underestimation of low-energy items.
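The classification rule and the statistics named above can be sketched as follows; the numbers are made up and scipy is assumed to be available, so this is illustrative only.

```python
# A small sketch of the under/acceptable/over classification (90%/110% of the
# weighed quantity) and of the two tests mentioned in the abstract, on made-up data.
from scipy.stats import spearmanr, wilcoxon

def classify(eq: float, wq: float) -> str:
    ratio = eq / wq
    if ratio < 0.90:
        return "under"
    if ratio > 1.10:
        return "over"
    return "acceptable"

# hypothetical (estimated g, weighed g, energy density kcal/g) triples
data = [(120, 150, 0.3), (210, 200, 0.7), (60, 40, 3.9), (35, 30, 7.2)]
labels = [classify(eq, wq) for eq, wq, _ in data]
errors = [eq - wq for eq, wq, _ in data]            # estimation error per item
energy = [e for _, _, e in data]

rho, p_rho = spearmanr(energy, errors)              # energy density vs. error
w, p_w = wilcoxon([eq for eq, _, _ in data], [wq for _, wq, _ in data])
print(labels, round(rho, 3), round(p_w, 3))
```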

Relevance: 20.00%

Abstract:

Historically, the cure rate model has been used for modeling time-to-event data in which a significant proportion of patients are assumed to be cured of illnesses such as breast cancer, non-Hodgkin lymphoma, leukemia, prostate cancer, melanoma, and head and neck cancer. Perhaps the most popular type of cure rate model is the mixture model introduced by Berkson and Gage [1]. In this model, a certain proportion of the patients are assumed to be cured, in the sense that they do not present the event of interest during a long period of time and can be considered immune to the cause of failure under study. In this paper, we propose a general hazard model which accommodates comprehensive families of cure rate models as particular cases, including the model proposed by Berkson and Gage. The maximum likelihood estimation procedure is discussed. A simulation study analyzes the coverage probabilities of the asymptotic confidence intervals for the parameters. A real data set on children exposed to HIV by vertical transmission illustrates the methodology.
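Since the abstract does not reproduce its general hazard family, the sketch below illustrates only the classical Berkson-Gage mixture cure model, with an exponential latency distribution chosen purely for simplicity, fitted by maximum likelihood on simulated data.

```python
# Berkson-Gage mixture cure model, illustrative sketch: population survival
# S_pop(t) = pi + (1 - pi) * exp(-lam * t), where pi is the cured fraction.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, delta):
    """params = (logit of cure fraction pi, log of exponential rate lam)."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    surv = pi + (1.0 - pi) * np.exp(-lam * t)      # population survival
    dens = (1.0 - pi) * lam * np.exp(-lam * t)     # population density (events only)
    return -np.sum(delta * np.log(dens) + (1.0 - delta) * np.log(surv))

rng = np.random.default_rng(0)
n, true_pi, true_lam = 500, 0.3, 0.5
cured = rng.random(n) < true_pi
event_time = rng.exponential(1.0 / true_lam, n)
censor_time = rng.uniform(0.0, 10.0, n)
t = np.where(cured, censor_time, np.minimum(event_time, censor_time))
delta = (~cured & (event_time <= censor_time)).astype(float)   # 1 = event observed

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(t, delta), method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])
print(f"estimated cure fraction {pi_hat:.2f}, rate {lam_hat:.2f}")
```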

Relevance: 20.00%

Abstract:

To test a mathematical model for measuring blinking kinematics. Spontaneous and reflex blinks of 23 healthy subjects were recorded with two different temporal resolutions. A magnetic search coil was used to record 77 blinks sampled at 200 Hz and 2 kHz in 13 subjects. A video system with low temporal resolution (30 Hz) was employed to register 60 blinks of 10 other subjects. The experimental data points were fitted with a model that assumes that the upper eyelid movement can be divided into two parts: an impulsive accelerated motion followed by a damped harmonic oscillation. All spontaneous and reflex blinks, including those recorded with low resolution, were well fitted by the model with a median coefficient of determination of 0.990. No significant difference was observed when the parameters of the blinks were estimated with the under-damped or critically damped solutions of the harmonic oscillator. On the other hand, the over-damped solution was not applicable to fit any movement. There was good agreement between the model and numerical estimation of the amplitude but not of maximum velocity. Spontaneous and reflex blinks can be mathematically described as consisting of two different phases. The down-phase is mainly an accelerated movement followed by a short time that represents the initial part of the damped harmonic oscillation. The latter is entirely responsible for the up-phase of the movement. Depending on the instantaneous characteristics of each movement, the under-damped or critically damped oscillation is better suited to describe the second phase of the blink. (C) 2010 Elsevier B.V. All rights reserved.
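A minimal sketch of the two-phase functional form described above: a uniformly accelerated down-phase up to a switch time, followed by an under-damped harmonic oscillation. The parameterization is generic, not the authors' exact model, and the values are arbitrary.

```python
# Generic two-phase eyelid model: accelerated motion for t < t1, then a damped
# harmonic oscillation starting from the position reached at t1. Such a function
# could, for instance, be fitted to recorded blinks with scipy.optimize.curve_fit.
import numpy as np

def blink_position(t, a, t1, amp, gamma, omega, phi):
    """Upper-eyelid position over time (arbitrary units)."""
    t = np.asarray(t, dtype=float)
    phase1 = 0.5 * a * t**2                                   # impulsive accelerated motion
    phase2 = 0.5 * a * t1**2 + amp * np.exp(-gamma * (t - t1)) * np.sin(omega * (t - t1) + phi)
    return np.where(t < t1, phase1, phase2)                   # continuous at t1 when phi = 0

# evaluate on a 2 kHz grid, as in the search-coil recordings
t = np.linspace(0.0, 0.3, 600)
x = blink_position(t, a=-800.0, t1=0.05, amp=1.5, gamma=25.0, omega=60.0, phi=0.0)
print(x.min(), x[-1])   # maximum downward excursion and final position
```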

Relevance: 20.00%

Abstract:

To evaluate the effect of pregnancy and smoking on endothelial function using brachial artery flow-mediated dilation (FMD) and to determine the time necessary until the occurrence of maximum brachial artery dilation after stimulus. This was an observational study of 133 women, grouped as follows: non-smoking pregnant women (N = 47), smoking pregnant women (N = 33), non-smoking non-pregnant women (N = 34), and smoking non-pregnant women (N = 19). The diameter of the brachial artery was measured at baseline and at 30, 60, 90 and 120 s after stimulus. The relative change of the brachial artery diameter was determined at each of these four time points. FMD measured 60 s after stimulus was compared between the groups. The maximum FMD was observed at 60 s after cuff release in all groups. FMD was greater among non-smoking pregnant women compared to smoking pregnant women (11.50 ± 5.77 vs. 8.74 ± 4.83; p = 0.03) and also among non-smoking non-pregnant women compared to smoking non-pregnant women (10.52 ± 4.76 vs. 7.21 ± 5.57; p = 0.03). Maximum FMD was observed approximately 60 s after stimulus in all groups regardless of smoking and pregnancy status. Smoking appears to lead to endothelial dysfunction both in pregnant and non-pregnant women, as demonstrated by the lower FMD in smokers.
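The FMD computation implied above is a simple percent change from the baseline diameter at each time point after cuff release; the sketch below uses hypothetical diameters.

```python
# Minimal sketch of the FMD calculation: percent change of brachial artery
# diameter relative to baseline at 30, 60, 90 and 120 s. Diameters are
# hypothetical values in millimetres, not study data.
baseline = 3.4
post_release = {30: 3.6, 60: 3.8, 90: 3.7, 120: 3.6}   # seconds -> diameter (mm)

fmd = {s: 100.0 * (d - baseline) / baseline for s, d in post_release.items()}
peak_time = max(fmd, key=fmd.get)
print(fmd, f"peak FMD at {peak_time} s")
```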

Relevance: 20.00%

Abstract:

The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
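A compact sketch of that comparison, run on simulated data rather than the mortgage data set: probit maximum likelihood, followed by a crude random-walk Metropolis sampler under a flat prior truncated so that the slope coefficient is positive. It is only an illustration of the idea, not the authors' estimation code.

```python
# Probit ML estimation and a sign-restricted Bayesian counterpart on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta_true = np.array([-0.3, 0.8])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_log_lik(beta):
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

ml = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
print("ML estimate:", np.round(ml.x, 3))

# Random-walk Metropolis under a flat prior with the restriction beta[1] > 0:
# proposals violating the sign constraint have zero prior and are always rejected.
beta, draws = ml.x.copy(), []
for _ in range(5000):
    prop = beta + rng.normal(scale=0.1, size=2)
    if prop[1] > 0 and np.log(rng.random()) < neg_log_lik(beta) - neg_log_lik(prop):
        beta = prop
    draws.append(beta.copy())
print("posterior mean:", np.round(np.mean(draws[1000:], axis=0), 3))
```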

Relevance: 20.00%

Abstract:

In recent years, there has been increasing fish consumption in Brazil, largely due to the popularity of Japanese cuisine. No study, however, has previously assessed the presence of inorganic contaminants in species used in the preparation of Japanese food. In this paper, we determined total arsenic, cadmium, chromium, total mercury, and lead contents in 82 fish samples of tuna (Thunnus thynnus), porgy (Pagrus pagrus), snook (Centropomus sp.), and salmon (Salmo salar) marketed in Sao Paulo (Brazil). Samples were mineralized in HNO3/H2O2 for As, Cd, Cr and Pb, and in HNO3/H2SO4/V2O5 for Hg. Inorganic contaminants were determined after validation of the methodology using inductively coupled plasma optical emission spectrometry (ICP OES); for Hg, a hydride generator coupled to the ICP was used. Concentration ranges for the analyzed elements, in mg/kg (wet basis), were as follows: total As (0.11-10.82); Cd (0.005-0.047); Cr (0.008-0.259); Pb (0.026-0.481); and total Hg (0.0077-0.9681). As and Cr levels exceeded the maximum limits allowed by Brazilian law (1 and 0.1 mg/kg, respectively) in 51.2% and 7.3% of the samples studied. The most contaminated species were porgy (As = 95% and Cr = 10%) and tuna (As = 91% and Cr = 10%). An estimation of As, Cd, Pb, and Hg weekly intake was calculated considering a 60 kg adult and a consumption of 350 g of fish per week, with As and Hg presenting the highest dietary contributions, reaching 222% of the provisional tolerable weekly intake (PTWI) for As in porgy and 41% of the PTWI for Hg in tuna. (C) 2010 Elsevier Ltd. All rights reserved.
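The weekly-intake arithmetic (350 g of fish per week for a 60 kg adult) can be sketched as below; the PTWI value passed in is a placeholder, not the reference value applied by the authors.

```python
# Sketch of the weekly-intake calculation: element intake from 350 g of fish per
# week for a 60 kg adult, expressed as a percentage of a provisional tolerable
# weekly intake (PTWI). The PTWI used in the example call is hypothetical.
def percent_ptwi(conc_mg_per_kg: float, ptwi_mg_per_kg_bw: float,
                 portion_kg: float = 0.350, body_weight_kg: float = 60.0) -> float:
    weekly_intake_per_kg_bw = conc_mg_per_kg * portion_kg / body_weight_kg
    return 100.0 * weekly_intake_per_kg_bw / ptwi_mg_per_kg_bw

# e.g. a porgy sample at 10 mg/kg total As against a hypothetical PTWI of 0.015 mg/kg bw
print(round(percent_ptwi(10.0, 0.015), 1), "% of PTWI")
```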

Relevance: 20.00%

Abstract:

The magnitude of the basic reproduction ratio R0 of an epidemic can be estimated in several ways, namely, from the final size of the epidemic, from the average age at first infection, or from the initial growth phase of the outbreak. In this paper, we discuss this last method for estimating R0 for vector-borne infections. Implicit in these models is the assumption that there is an exponential phase of the outbreak, which implies that in all cases R0 > 1. We demonstrate that an outbreak is possible even in cases where R0 is less than one, provided that the vector-to-human component of R0 is greater than one and that a certain number of infected vectors are introduced into the affected population. This theory is applied to two real epidemiological dengue situations in the southeastern part of Brazil, one where R0 is less than one and the other where R0 is greater than one. In both cases, the model mirrors the real situations with reasonable accuracy.
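A hedged numerical illustration of the central point, using a generic next-generation description of a vector-borne infection (not the authors' model): when the vector-to-human component exceeds one, introducing many infected vectors produces an initial wave of human cases even though R0 < 1 and transmission eventually dies out.

```python
# Generic two-type next-generation illustration. One common convention takes
# R0 = sqrt(R_hv * R_vh), where R_hv is the number of vectors infected per
# infected human and R_vh the number of humans infected per infected vector.
import numpy as np

R_hv, R_vh = 0.4, 1.8                      # vector-to-human component above one
K = np.array([[0.0, R_vh],                 # rows/cols ordered as (humans, vectors)
              [R_hv, 0.0]])
print("R0 =", round(np.sqrt(R_hv * R_vh), 2))   # < 1, so no sustained growth

state = np.array([0.0, 50.0])              # introduce 50 infected vectors, no humans
for gen in range(4):
    state = K @ state                       # expected new infections per generation
    print(f"generation {gen + 1}: humans {state[0]:.1f}, vectors {state[1]:.1f}")
```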

Relevance: 20.00%

Abstract:

The objective of this study was to analyze electromyographic (EMG) data before and after normalization. One hundred normal subjects (with no signs or symptoms of temporomandibular disorders) participated in this study. Surface EMG of the masticatory muscles was performed. Two different tests were carried out: maximum voluntary clench (MVC) on cotton rolls and MVC in the intercuspal position. Normalization was done using the mean value of the EMG signal of the first examination. The coefficient of variation (CV) was lower for the normalized data. Normalization was effective in reducing the differences between records from the same subject and between different subjects.
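The normalization step and its effect on the coefficient of variation can be illustrated with the small sketch below, using made-up values for three subjects.

```python
# Each subject's EMG value is divided by the mean of that subject's first
# examination and expressed as a percentage of it; the pooled coefficient of
# variation (CV = std / mean) is then compared before and after. Values are
# hypothetical, not study data.
import numpy as np

def cv(x):
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

# hypothetical second-examination clench values (microvolts) for three subjects
second_exam = {"s1": 150.0, "s2": 60.0, "s3": 240.0}
# mean of each subject's first examination, used as that subject's reference
first_exam_mean = {"s1": 140.0, "s2": 55.0, "s3": 230.0}

raw = list(second_exam.values())
normalized = [second_exam[s] / first_exam_mean[s] * 100.0 for s in second_exam]

print(f"CV raw: {cv(raw):.2f}, CV normalized: {cv(normalized):.2f}")
```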

Relevance: 20.00%

Abstract:

The research diagnostic criteria for temporomandibular disorders (RDC/TMD) are used for the classification of patients with temporomandibular disorders (TMD). Surface electromyography of the right and left masseter and temporalis muscles was performed during maximum teeth clenching in 103 TMD patients subdivided according to the RDC/TMD into 3 non-overlapping groups: (a) 25 myogenous; (b) 61 arthrogenous; and (c) 17 psychogenous patients. Thirty-two control subjects matched for sex and age were also measured. During clenching, standardized total muscle activities (electromyographic potentials over time) differed significantly: 131.7 μV/μV·s % in the normal subjects, 117.6 μV/μV·s % in the myogenous patients, 105.3 μV/μV·s % in the arthrogenous patients, and 88.7 μV/μV·s % in the psychogenous patients (p < 0.001, analysis of covariance). Symmetry in the temporalis muscles was larger in normal subjects (86.3%) and in myogenous patients (84.9%) than in arthrogenous (82.7%) and psychogenous patients (80.5%) (p = 0.041). No differences were found for masseter muscle symmetry or torque coefficient (p > 0.05). Surface electromyography of the masticatory muscles allowed an objective discrimination among the different RDC/TMD subgroups. This evaluation could assist conventional clinical assessments. (C) 2007 Elsevier Ltd. All rights reserved.
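As a generic illustration of a left/right symmetry percentage for paired muscle activities, the sketch below uses a simple min/max ratio; this is not necessarily the index computed by the authors, and the values are hypothetical.

```python
# Simple symmetry percentage: 100% means identical left and right activities,
# lower values mean greater asymmetry. Inputs are hypothetical EMG activities
# in microvolts.
def symmetry_percent(left: float, right: float) -> float:
    return 100.0 * min(left, right) / max(left, right)

print(round(symmetry_percent(182.0, 210.0), 1))   # e.g. left vs. right temporalis
```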