953 results for quasi-likelihood
Abstract:
Background: A small number of patients develop acute severe dysphagia for which reoperation is necessary within 10 days of laparoscopic fundoplication. The aim of this study was to identify clinical variables that might predict the likelihood of this condition occurring, such that it could be avoided in the future. Methods: This was a prospective cohort study from three tertiary referral centres, using reoperation for acute dysphagia as the main outcome variable. Gastrointestinal symptom rating scale and psychological well-being index questionnaires were completed before laparoscopic fundoplication, and dysphagia scores were determined before operation and 1 year later. Standard preoperative assessment included gastroscopy, oesophageal manometry and pH studies. Results: Twelve (1.9 per cent) of the 617 patients suffered acute dysphagia, which was predicted by older age and female sex, and resulted in a longer duration of hospital stay. The condition was not predicted by any other demographic, clinical, investigative or operative variable. Conclusions: The study did not identify useful criteria by which severe acute dysphagia could be anticipated, and thereby avoided, following laparoscopic fundoplication.
Abstract:
Multi-environment trials (METs) used to evaluate breeding lines vary in the number of years that they sample. We used a cropping systems model to simulate the target population of environments (TPE) for 6 locations over 108 years for 54 'near-isolines' of sorghum in north-eastern Australia. For a single reference genotype, each of 547 trials was clustered into 1 of 3 'drought environment types' (DETs) based on a seasonal water stress index. Within sequential METs of 2 years duration, the frequencies of these drought patterns often differed substantially from those derived for the entire TPE. This was reflected in variation in the mean yield of the reference genotype. For the TPE and for 2-year METs, restricted maximum likelihood methods were used to estimate components of genotypic and genotype by environment variance. These also varied substantially, although not in direct correlation with frequency of occurrence of different DETs over a 2-year period. Combined analysis over different numbers of seasons demonstrated the expected improvement in the correlation between MET estimates of genotype performance and the overall genotype averages as the number of seasons in the MET was increased.
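As a concrete illustration of the variance-component step, the sketch below simulates a balanced line-by-trial yield table and estimates genotypic and genotype-by-environment variance components by REML, using statsmodels' MixedLM with a single all-encompassing group to accommodate crossed random effects. The line, trial and replicate counts, yield values and variable names are hypothetical stand-ins, not the study's sorghum data.

```python
# Hypothetical sketch: REML estimation of genotypic and genotype-by-environment
# variance components, in the spirit of the analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
genos, envs, reps = 54, 6, 2                # simulated stand-in dimensions
g = rng.normal(0, 0.4, genos)               # genotype main effects
ge = rng.normal(0, 0.3, (genos, envs))      # genotype-by-environment effects
rows = [(i, j, 4.0 + g[i] + ge[i, j] + rng.normal(0, 0.5))
        for i in range(genos) for j in range(envs) for _ in range(reps)]
df = pd.DataFrame(rows, columns=["geno", "env", "yield_t_ha"])

# A single all-encompassing group lets MixedLM treat genotype and
# genotype-by-environment as crossed variance components; fit() uses
# REML by default.
df["group"] = 1
vc = {"geno": "0 + C(geno)", "gxe": "0 + C(geno):C(env)"}
model = sm.MixedLM.from_formula("yield_t_ha ~ 1", groups="group",
                                vc_formula=vc, data=df)
result = model.fit(reml=True)
print(result.vcomp)   # estimated variance components (geno, gxe)
```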
Abstract:
Objectives: The aims of this study were to investigate the population pharmacokinetics of tacrolimus in adult kidney transplant recipients and to identify factors that explain variability. Methods: Population analysis was performed on retrospective data from 70 patients who received oral tacrolimus twice daily. Morning blood trough concentrations were measured by liquid chromatography-tandem mass spectrometry. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F), with the use of NONMEM (GloboMax LLC, Hanover, Md). Factors screened for influence on these parameters were weight, age, gender, postoperative day, days of tacrolimus therapy, liver function tests, creatinine clearance, hematocrit fraction, corticosteroid dose, and potential interacting drugs. Results: CL/F was greater in patients with abnormally low hematocrit fraction (data from 21 patients only), and it decreased with increasing days of therapy and AST concentrations (P
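The covariate-screening idea can be sketched in a few lines. The pooled maximum-likelihood fit below relates hypothetical steady-state troughs to CL/F through C_ss = (dose/tau)/(CL/F), with CL/F shifted multiplicatively in patients with a low hematocrit. It only illustrates the form of such a model, not the nonlinear mixed-effects analysis the study ran in NONMEM, and every number in it is made up.

```python
# Hypothetical pooled ML sketch of a CL/F covariate model; not the NONMEM
# population analysis used in the study.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 70
dose, tau = 5.0, 12.0                    # mg every 12 h (hypothetical)
low_hct = rng.integers(0, 2, n)          # 1 = abnormally low hematocrit
cl_true = 1.4 * 1.5 ** low_hct           # L/h; higher CL/F when Hct is low
conc = (dose / tau) / cl_true * np.exp(rng.normal(0, 0.25, n))

def nll(theta):
    base_cl, hct_ratio, sigma = np.exp(theta)   # keep parameters positive
    cl = base_cl * hct_ratio ** low_hct
    resid = np.log(conc) - np.log((dose / tau) / cl)
    return -norm.logpdf(resid, scale=sigma).sum()

fit = minimize(nll, x0=np.log([1.0, 1.0, 0.3]), method="Nelder-Mead")
base_cl, hct_ratio, _ = np.exp(fit.x)
print(f"CL/F ~ {base_cl:.2f} L/h; x{hct_ratio:.2f} when hematocrit is low")
```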
Abstract:
The Torres Strait in northernmost Queensland, Australia, is subject to periodic outbreaks of dengue. A large outbreak of dengue 2 in 1996-97 affected five islands, resulting in 200 confirmed cases. On most of the affected islands, rainwater tanks were a common breeding site for vector mosquitoes. Rainwater tanks, wells and household containers filled with water are the most common breeding sites for dengue mosquitoes (Aedes aegypti), the primary vector of dengue in Queensland. We report on surveys conducted in February 2002 to measure the productivity of rainwater tanks and wells on Yorke Is. (Torres Strait), the first time rainwater tank productivity has been measured in Australia. Of 60 rainwater tanks sampled, 10 had broken screens. Using a sticky emergence trap, 179 adult mosquitoes were collected: 63 Aedes scutellaris and 116 Culex quinquefasciatus. One unscreened tank produced 177 (99%) of the adults. A plankton net was used to sample 16 wells; the 12 positive wells yielded 111 immature mosquitoes (larvae and pupae), 57% Ae. scutellaris and 43% Cx. quinquefasciatus. The apparent displacement of Ae. aegypti by Ae. scutellaris is discussed. Measures to reduce the likelihood of future dengue outbreaks are recommended.
Abstract:
The pharmacotherapy currently recommended by the American College of Cardiology and the American Heart Association for heart failure (HF) is a diuretic, an angiotensin-converting enzyme inhibitor (ACEI), a β-adrenoceptor antagonist and (usually) digitalis. This current treatment of HF may be improved by optimising the dose of ACEI used, as increasing the dose of lisinopril increases its benefits in HF. Selective angiotensin receptor-1 (AT1) antagonists are effective alternatives for those who cannot tolerate ACEIs. AT1 antagonists may also be used in combination with ACEIs, as some studies have shown cumulative benefits for the combination. In addition to being used in Stage IV HF patients, in whom it has a marked benefit, spironolactone should be studied in less severe HF and in the presence of β-blockers. The use of carvedilol, extended-release metoprolol and bisoprolol should be extended to severe HF patients, as these agents have been shown to decrease mortality in this group. The ancillary properties of carvedilol, particularly antagonism at prejunctional β-adrenoceptors, may give it additional benefits over selective β1-adrenoceptor antagonists. Celiprolol and bucindolol are not the β-blockers of choice in HF, as they do not decrease mortality. Although digitalis does not reduce mortality, it remains the only option for a long-term positive inotropic effect, as the long-term use of the phosphodiesterase inhibitors is associated with increased mortality. The calcium-sensitising drug levosimendan may be useful in the hospital treatment of decompensated HF to increase cardiac output and improve dyspnoea and fatigue. The antiarrhythmic drug amiodarone should probably be used in patients at high risk of arrhythmic or sudden death, although this treatment may soon be superseded by the more expensive implanted cardioverter defibrillators, which are probably more effective and have fewer side effects. The natriuretic peptide nesiritide has recently been introduced for the hospital treatment of decompensated HF. Novel drugs that may be beneficial in the treatment of HF include the vasopeptidase inhibitors and the selective endothelin-A receptor antagonists, but these require much more investigation. However, disappointing results have been obtained in a large clinical trial of the tumour necrosis factor α antagonist etanercept, in which no likelihood of a difference between placebo and etanercept was observed. Small clinical trials with recombinant growth hormone to thicken ventricles in dilated cardiomyopathy have given variable results.
Abstract:
We focus on mixtures of factor analyzers as a method for model-based density estimation from high-dimensional data, and hence for the clustering of such data. This approach enables a normal mixture model to be fitted to a sample of n data points of dimension p, where p is large relative to n. The number of free parameters is controlled through the dimension of the latent factor space. Working in this reduced space allows each component-covariance matrix to be modelled with complexity lying between that of the isotropic and full covariance structure models. We illustrate the use of mixtures of factor analyzers in a practical example that considers the clustering of cell lines on the basis of gene expressions from microarray experiments.
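The key structural idea, that each component-covariance matrix is constrained to Lambda Lambda' + Psi with Lambda a p x q loading matrix and Psi diagonal, can be made concrete in a short sketch. The dimensions, loadings and mixing weights below are arbitrary illustrations, not fitted values from the microarray example.

```python
# Sketch of the factor-analytic covariance structure used by mixtures of
# factor analyzers; all numbers here are arbitrary, not fitted values.
import numpy as np
from scipy.stats import multivariate_normal

p, q, g = 50, 3, 2            # 50 variables, 3 latent factors, 2 components

def component(seed):
    r = np.random.default_rng(seed)
    mu = r.normal(size=p)                   # component mean
    lam = 0.5 * r.normal(size=(p, q))       # factor loadings (p x q)
    psi = np.diag(r.uniform(0.5, 1.5, p))   # diagonal uniquenesses
    return mu, lam @ lam.T + psi            # constrained covariance

params = [component(s) for s in range(g)]
weights = np.array([0.6, 0.4])

# Covariance parameters per component: p*q + p for the factor model
# (ignoring the q*(q-1)/2 rotational constraints) versus p*(p+1)/2 for
# an unrestricted covariance matrix.
print("factor-analytic:", p * q + p, "vs full:", p * (p + 1) // 2)

x = np.zeros(p)               # evaluate the mixture density at one point
dens = sum(w * multivariate_normal(mu, cov).pdf(x)
           for w, (mu, cov) in zip(weights, params))
print("mixture density at x:", dens)
```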
Abstract:
The extent to which density-dependent processes regulate natural populations is the subject of an ongoing debate. We contribute evidence to this debate showing that density-dependent processes influence the population dynamics of the ectoparasite Aponomma hydrosauri (Acari: Ixodidae), a tick species that infests reptiles in Australia. The first piece of evidence comes from an unusually long-term dataset on the distribution of ticks among individual hosts. If density-dependent processes are influencing either host mortality or vital rates of the parasite population, and those distributions can be approximated with negative binomial distributions, then general host-parasite models predict that the aggregation coefficient of the parasite distribution will increase with the average intensity of infections. We fit negative binomial distributions to the frequency distributions of ticks on hosts, and find that the estimated aggregation coefficient k increases with increasing average tick density. This pattern indirectly implies that one or more vital rates of the tick population must be changing with increasing tick density, because mortality rates of the tick's main host, the sleepy lizard, Tiliqua rugosa, are unaffected by changes in tick burdens. Our second piece of evidence is a re-analysis of experimental data on the attachment success of individual ticks to lizard hosts using generalized linear modelling. The probability of successful engorgement decreases with increasing numbers of ticks attached to a host. This is direct evidence of a density-dependent process that could lead to an increase in the aggregation coefficient of tick distributions described earlier. The population-scale increase in the aggregation coefficient is indirect evidence of a density-dependent process or processes sufficiently strong to produce a population-wide pattern, and thus also likely to influence population regulation. The direct observation of a density-dependent process is evidence of at least part of the responsible mechanism.
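The negative binomial fits behind the first piece of evidence are easy to reproduce in miniature. The sketch below fits the mean/aggregation (m, k) parameterisation by maximum likelihood to a hypothetical vector of per-host tick counts; smaller k means stronger aggregation.

```python
# Maximum-likelihood fit of a negative binomial in the (mean m,
# aggregation k) parameterisation; the counts are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

counts = np.array([0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 13, 21])  # ticks per host

def nll(theta):
    m, k = np.exp(theta)      # log-parameterisation keeps m, k positive
    # log NB pmf: ln Gamma(x+k) - ln Gamma(k) - ln x!
    #             + k ln(k/(k+m)) + x ln(m/(k+m))
    return -np.sum(gammaln(counts + k) - gammaln(k) - gammaln(counts + 1)
                   + k * np.log(k / (k + m)) + counts * np.log(m / (k + m)))

fit = minimize(nll, x0=np.log([counts.mean(), 1.0]), method="Nelder-Mead")
m_hat, k_hat = np.exp(fit.x)
print(f"mean = {m_hat:.2f}, aggregation k = {k_hat:.2f}")
```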
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood, on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with that of a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than the fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol.
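For contrast, the fully parametric comparator is straightforward to sketch: a logistic model for the probability of failure type 1 plus Weibull component hazards, fitted by maximizing the full likelihood, in which a failure of type j contributes pi_j f_j(t) and a censored subject contributes the sum of pi_j S_j(t). Everything below is simulated for illustration; the paper's ECM-based semi-parametric estimator is not reproduced.

```python
# Fully parametric mixture model for two competing risks: logistic mixing
# probability plus Weibull components, fitted by maximum likelihood.
# All data are simulated for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)                      # one covariate (e.g. dosage)
p1 = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))    # true P(type 1 | x)
cause = rng.binomial(1, p1)                 # 1 -> type 1, 0 -> type 2
t = np.where(cause == 1,
             2.0 * rng.weibull(1.5, n),     # type-1 failure times
             4.0 * rng.weibull(0.8, n))     # type-2 failure times
c = rng.uniform(0, 6, n)                    # censoring times
obs_t, delta = np.minimum(t, c), t <= c
cause_obs = np.where(delta, cause, -1)      # cause unknown if censored

def w_logpdf(t, sh, sc):                    # Weibull log-density
    return np.log(sh / sc) + (sh - 1) * np.log(t / sc) - (t / sc) ** sh

def w_sf(t, sh, sc):                        # Weibull survivor function
    return np.exp(-(t / sc) ** sh)

def nll(theta):
    b0, b1 = theta[:2]
    sh1, sc1, sh2, sc2 = np.exp(theta[2:])  # positive Weibull parameters
    pi1 = 1 / (1 + np.exp(-(b0 + b1 * x)))
    # failures contribute pi_j * f_j; censored subjects sum over both causes
    ll = np.where(cause_obs == 1, np.log(pi1) + w_logpdf(obs_t, sh1, sc1),
         np.where(cause_obs == 0, np.log(1 - pi1) + w_logpdf(obs_t, sh2, sc2),
                  np.log(pi1 * w_sf(obs_t, sh1, sc1)
                         + (1 - pi1) * w_sf(obs_t, sh2, sc2))))
    return -ll.sum()

fit = minimize(nll, x0=np.zeros(6), method="Nelder-Mead",
               options={"maxiter": 5000})
print("logistic coefficients (true 0.5, 1.0):", np.round(fit.x[:2], 2))
```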
Abstract:
Let X and Y be Hausdorff topological vector spaces, K a nonempty, closed, and convex subset of X, and C: K → 2^Y a point-to-set mapping such that, for each x ∈ K, C(x) is a pointed, closed, and convex cone in Y with int C(x) ≠ ∅. Given a mapping g: K → K and a vector-valued bifunction f: K × K → Y, we consider the implicit vector equilibrium problem (IVEP) of finding x* ∈ K such that f(g(x*), y) ∉ −int C(x*) for all y ∈ K. This problem generalizes the (scalar) implicit equilibrium problem and the implicit variational inequality problem. We propose the dual of the implicit vector equilibrium problem (DIVEP) and establish the equivalence between (IVEP) and (DIVEP) under certain assumptions. We also give characterizations of the set of solutions of (IVEP) in the cases of nonmonotonicity, weak C-pseudomonotonicity, C-pseudomonotonicity, and strict C-pseudomonotonicity, respectively. Under these assumptions, we conclude that the sets of solutions are nonempty, closed, and convex. Finally, we give some applications of (IVEP) to vector variational inequality problems and vector optimization problems.
Abstract:
In the present paper, we study the quasiequilibrium problem and the generalized quasiequilibrium problem of generalized quasi-variational inequality in H-spaces by a new method. Some new equilibrium existence theorems are given. Our results differ from the corresponding known results, or contain some recent results as special cases.
Abstract:
A model of iron carbonate (FeCO3) film growth is proposed, which is an extension of the recent mechanistic model of carbon dioxide (CO2) corrosion by Nesic et al. In the present model, film growth occurs by precipitation of iron carbonate once saturation is exceeded. The kinetics of precipitation depends on temperature and on local species concentrations, which are calculated by solving the coupled species transport equations. Precipitation tends to build up a layer of FeCO3 on the surface of the steel and reduce the corrosion rate. On the other hand, the corrosion process induces voids under the precipitated film, thus increasing the porosity and leading to a higher corrosion rate. Depending on the environmental parameters (temperature, pH, CO2 partial pressure, velocity, etc.), the balance of the two processes can lead to a variety of outcomes. Very protective films and low corrosion rates are predicted at high pH, temperature, CO2 partial pressure and Fe2+ ion concentration, as expected, owing to the formation of dense films. The model has been successfully calibrated against limited experimental data. Parametric testing of the model has been done to gain insight into the effect of various environmental parameters on iron carbonate film formation. The trends shown in the predictions agreed well with the general understanding of the CO2 corrosion process in the presence of iron carbonate films. The present model confirms that the concept of scaling tendency is a good tool for predicting the likelihood of protective iron carbonate film formation.
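The scaling-tendency concept lends itself to a back-of-envelope sketch: compare the iron carbonate precipitation rate with the corrosion rate, both expressed in the same units, with precipitation driven by the saturation ratio S = [Fe2+][CO3 2-]/Ksp. The rate constant, solubility product and concentrations below are placeholder values, not the model's calibrated kinetics.

```python
# Back-of-envelope sketch of the scaling-tendency concept. All constants
# are placeholders, not the calibrated values of the mechanistic model.

def saturation(fe2, co3, ksp):
    """Saturation ratio S = [Fe2+][CO3 2-]/Ksp (dimensionless)."""
    return fe2 * co3 / ksp

def scaling_tendency(s, k_precip, corrosion_rate):
    """ST = precipitation rate / corrosion rate, both in the same units.
    ST >> 1 suggests a dense protective film; ST << 1 a porous one."""
    precip_rate = k_precip * max(s - 1.0, 0.0)   # no precipitation below S = 1
    return precip_rate / corrosion_rate

s = saturation(fe2=5e-5, co3=1e-5, ksp=10**-10.5)  # mol/L; placeholder Ksp
st = scaling_tendency(s, k_precip=0.5, corrosion_rate=1.0)
print(f"S = {s:.1f}, scaling tendency = {st:.1f}")
```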
Abstract:
This paper deals with an n-fold Weibull competing risk model. A characterisation of the WPP (Weibull probability paper) plot is given, along with estimation of model parameters when modelling a given data set; these are illustrated through two examples. A study of the different possible shapes of the density and failure rate functions is also presented.
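The model is compact enough to state and plot: the unit fails at the first of n independent Weibull failure times, so the survivor function is the product of the component survivor functions, R(t) = prod_i exp(-(t/eta_i)^beta_i), and on WPP axes (y = ln(-ln R(t)) against x = ln t) it traces a convex curve whose slope rises from the smallest to the largest shape parameter. The parameters below are arbitrary illustrations.

```python
# WPP coordinates of a 2-fold Weibull competing risk model; the shape and
# scale parameters are arbitrary illustrations.
import numpy as np

betas = np.array([0.8, 2.5])          # shape parameters (hypothetical)
etas = np.array([1.0, 3.0])           # scale parameters (hypothetical)

t = np.logspace(-2, 1, 200)
cum_hazard = ((t[:, None] / etas) ** betas).sum(axis=1)   # -ln R(t)
x, y = np.log(t), np.log(cum_hazard)  # WPP transform of the model

# A single Weibull is a straight line of slope beta on WPP; the n-fold
# model is convex, with slope increasing from min(beta) to max(beta).
slope = np.gradient(y, x)
print(f"slope for small t: {slope[0]:.2f}, for large t: {slope[-1]:.2f}")
```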
Abstract:
It has been suggested that twinning may influence handedness through the effects of birth order, intra-uterine crowding and mirror imaging. The influence of these effects on handedness (for writing and throwing) was examined in 3657 monozygotic (MZ) and 3762 dizygotic (DZ) twin pairs (born 1893-1992). Maximum likelihood analyses revealed no effect of birth order on the incidence of left-handedness. Twins were no more likely to be left-handed than their singleton siblings (n = 1757), and there were no differences between the DZ co-twin and sibling-twin covariances, suggesting that neither intra-uterine crowding nor the experience of being a twin affects handedness. There was no evidence of mirror imaging: the co-twin correlations of monochorionic and dichorionic MZ twins did not differ. Univariate genetic analyses revealed common environmental factors to be the most parsimonious explanation of familial aggregation for the writing-hand measure, while additive genetic influences provided a better interpretation of the throwing-hand data.
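A back-of-envelope version of the univariate genetic analysis compares MZ and DZ twin correlations to apportion variance among additive genetic (A), common environment (C) and unique environment (E) sources. Falconer's formulas below are a crude stand-in for the maximum likelihood model fitting the study performed, and the correlations are hypothetical.

```python
# Falconer decomposition from twin correlations; a rough stand-in for the
# maximum likelihood ACE model fitting described above.
def ace(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)    # additive genetic share
    c2 = 2 * r_dz - r_mz      # common (shared) environment share
    e2 = 1 - r_mz             # unique environment plus measurement error
    return a2, c2, e2

# Similar MZ and DZ correlations point to C (the pattern reported for the
# writing hand); r_mz near twice r_dz points to A (as for throwing).
print(ace(r_mz=0.25, r_dz=0.20))   # C-dominated pattern, hypothetical values
print(ace(r_mz=0.30, r_dz=0.15))   # A-dominated pattern, hypothetical values
```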
Abstract:
Background: A knowledge of energy expenditure in infancy is required for the estimation of recommended daily amounts of food energy, for designing artificial infant feeds, and as a reference standard for studies of energy metabolism in disease states. Objectives: The objective of this study was to construct centile reference charts for total energy expenditure (TEE) in infants across the first year of life. Methods: Repeated measures of TEE using the doubly labeled water technique were made in 162 infants at 1.5, 3, 6, 9 and 12 months; in total, 322 TEE measurements were obtained. The LMS method with maximum penalized likelihood was used to construct the centile reference charts. Centiles were constructed for TEE expressed as MJ/day and also expressed relative to body weight (BW) and fat-free mass (FFM). Results: TEE increased with age and was 1.40, 1.86, 2.64, 3.07 and 3.65 MJ/day at 1.5, 3, 6, 9 and 12 months, respectively; the standard deviations were 0.43, 0.47, 0.52, 0.66 and 0.88, respectively. TEE per kilogram of body weight increased from 0.29 to 0.36 MJ/day/kg, and per kilogram of FFM from 0.36 to 0.48 MJ/day/kg. Conclusions: We have presented centile reference charts for TEE expressed as MJ/day and expressed relative to BW and FFM in infants across the first year of life. There was wide variation, or biological scatter, in TEE values at all ages. We suggest that these centile charts may be used to assess, and possibly quantify, abnormal energy metabolism in disease states in infants.
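The LMS method summarises each age by a skewness curve L(t), a median curve M(t) and a coefficient-of-variation curve S(t), from which any centile follows as M(1 + LSz)^(1/L), or M exp(Sz) when L = 0, with z the normal deviate of the chosen centile. The sketch below applies that formula with made-up L and S values, borrowing only the reported 6-month mean of 2.64 MJ/day as an illustrative median.

```python
# Converting LMS parameters at one age into centile values. The L and S
# values are made up; only the 2.64 MJ/day median is taken from the text.
import numpy as np
from scipy.stats import norm

def lms_centile(L, M, S, centile):
    """Measurement at the given centile for LMS parameters at one age."""
    z = norm.ppf(centile / 100.0)
    if abs(L) < 1e-8:                   # limiting case L -> 0
        return M * np.exp(S * z)
    return M * (1 + L * S * z) ** (1 / L)

L, M, S = -0.3, 2.64, 0.20              # hypothetical TEE chart at 6 months
for c in (3, 50, 97):
    print(f"P{c}: {lms_centile(L, M, S, c):.2f} MJ/day")
```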
Abstract:
Background: Patients with known or suspected coronary disease are often investigated to facilitate risk assessment. We sought to examine the cost-effectiveness of strategies based on exercise echocardiography and exercise electrocardiography. Methods and results: We studied 7656 patients undergoing exercise testing, of whom half underwent exercise echocardiography. Risk was defined by the Duke treadmill score for those undergoing exercise electrocardiography alone, and by the extent of ischaemia for those undergoing exercise echocardiography. Cox proportional hazards models, risk adjusted for pretest likelihood of coronary artery disease, were used to estimate time to cardiac death or myocardial infarction. Costs (including diagnostic and revascularisation procedures, hospitalisations, and events) were calculated, inflation-corrected to year 2000 using Medicare trust fund rates and discounted at a rate of 5%. A decision model was employed to assess the marginal cost-effectiveness (cost per life-year saved) of exercise echocardiography compared with exercise electrocardiography. Exercise echocardiography identified more patients as low-risk (51% vs 24%, p<0.001) and fewer as intermediate-risk (27% vs 51%, p<0.001) and high-risk (22% vs 4%); survival was greater in low- and intermediate-risk patients and lower in high-risk patients. Although initial procedural costs and revascularisation costs (in intermediate- and high-risk patients) were greater, exercise echocardiography was associated with a greater incremental life expectancy (0.2 years) and a lower use of additional diagnostic procedures when compared with exercise electrocardiography, especially in lower-risk patients. Using decision analysis, exercise echocardiography (Euro 2615 per life-year saved) was more cost-effective than exercise electrocardiography. Conclusion: Exercise echocardiography may enhance cost-effectiveness for the detection and management of at-risk patients with known or suspected coronary disease.
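Both quantitative steps can be sketched briefly: a Cox proportional hazards fit adjusted for pretest likelihood, and the marginal cost-effectiveness ratio. The data frame below is random and its covariates hypothetical; the cost figure is chosen only so that the ratio reproduces the Euro 2615 per life-year quoted above from the stated 0.2-year gain.

```python
# Sketch of a risk-adjusted Cox model plus a marginal cost-effectiveness
# ratio; the data are random and the figures illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 500
df = pd.DataFrame({
    "years": rng.exponential(8, n),             # follow-up (hypothetical)
    "event": rng.binomial(1, 0.15, n),          # cardiac death or MI
    "pretest_likelihood": rng.uniform(0.1, 0.9, n),
    "ischaemia_extent": rng.integers(0, 4, n),  # echo risk category
})
cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="event")
print(cph.summary[["coef", "exp(coef)"]])

# Marginal cost-effectiveness: extra cost per life-year saved. With a
# 0.2-year gain, an extra cost of Euro 523 gives the Euro 2615 quoted above.
extra_cost, extra_life_years = 523.0, 0.2
print(f"Euro {extra_cost / extra_life_years:.0f} per life-year saved")
```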