196 results for quasi-likelihood
Abstract:
Historically, few articles have addressed the use of district-level mill production data for analysing the effect of varietal change on sugarcane productivity trends. This appears to be due to a lack of compiled district data sets and of appropriate methods by which to analyse these data. Recently, varietal data on tonnes of sugarcane per hectare (TCH), sugar content (CCS), and their product, tonnes of sugar content per hectare (TSH), on a district basis have been compiled. This study was conducted to develop a methodology for regular analysis of such data from mill districts to assess productivity trends over time, accounting for variety and variety × environment interaction effects for 3 mill districts (Mulgrave, Babinda, and Tully) from 1958 to 1995. Restricted maximum likelihood methodology was used to analyse the district-level data, and best linear unbiased predictors for random effects and best linear unbiased estimates for fixed effects were computed in a mixed model analysis. In the combined analysis over districts, Q124 was the top-ranking variety for TCH, and Q120 was top-ranking for both CCS and TSH. Overall production for TCH increased over the 38-year period investigated. Some of this increase can be attributed to varietal improvement, although the predictors for TCH have shown little progress since the introduction of Q99 in 1976. Although smaller gains have been made in varietal improvement for CCS, overall production for CCS decreased over the 38 years due to non-varietal factors. Varietal improvement in TSH appears to have peaked in the mid-1980s. Overall production for TSH remained stable over time due to the varietal increase in TCH and the non-varietal decrease in CCS.
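As a rough illustration of the mixed-model analysis described above, the sketch below fits a REML linear mixed model in Python with statsmodels; the file name and column names (tch, year, variety, district) are hypothetical placeholders, not the study's actual data set:

```python
# Minimal REML mixed-model sketch (hypothetical data layout): year and district
# as fixed effects, variety as a random effect, so the predicted random effects
# play the role of the variety BLUPs discussed in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_tch.csv")  # assumed columns: tch, year, variety, district

model = smf.mixedlm("tch ~ year + C(district)", data=df, groups=df["variety"])
fit = model.fit(reml=True)            # restricted maximum likelihood, as in the study

print(fit.summary())                  # BLUEs for the fixed effects
print(fit.random_effects)             # BLUPs for the variety effects
```

A fuller treatment would also model the variety × environment interaction, e.g. through an additional variance component.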
Abstract:
When the data consist of certain attributes measured on the same set of items in different situations, they would be described as a three-mode three-way array. A mixture likelihood approach can be implemented to cluster the items (i.e., one of the modes) on the basis of both of the other modes simultaneously (i.e., the attributes measured in different situations). In this paper, it is shown that this approach can be extended to handle three-mode three-way arrays where some of the data values are missing at random in the sense of Little and Rubin (1987). The methodology is illustrated by clustering the genotypes in a three-way soybean data set where various attributes were measured on genotypes grown in several environments.
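A minimal sketch of the underlying idea, assuming diagonal-covariance Gaussian components and the three-way array flattened to items × (attributes·situations); the observed-entry weighting below shares its fixed points with the exact EM update but is a simplification of the paper's formulation:

```python
# EM for a Gaussian mixture with missing-at-random entries (NaN). Each row's
# responsibilities use only its observed columns; the M-step weights each entry
# by responsibility times the observed mask. Diagonal covariances only; assumes
# every column is observed for at least some items.
import numpy as np

def em_mixture_mar(X, k, n_iter=100, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    m = ~np.isnan(X)                                  # observed-entry mask
    Xf = np.where(m, X, 0.0)                          # zero-filled copy for arithmetic
    mu = Xf[rng.choice(n, k, replace=False)].copy()   # crude init from random rows
    var = np.ones((k, d))
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: component log likelihoods over observed entries only
        logr = np.zeros((n, k)) + np.log(weights)
        for j in range(k):
            ll = -0.5 * (np.log(2 * np.pi * var[j]) + (Xf - mu[j]) ** 2 / var[j])
            logr[:, j] += (ll * m).sum(axis=1)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-and-mask weighted means and variances
        for j in range(k):
            w = r[:, j][:, None] * m
            mu[j] = (w * Xf).sum(axis=0) / w.sum(axis=0)
            var[j] = (w * (Xf - mu[j]) ** 2).sum(axis=0) / w.sum(axis=0) + 1e-6
        weights = r.mean(axis=0)
    return r.argmax(axis=1)                           # hard cluster labels for the items
```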
Abstract:
We develop a new iterative filter diagonalization (FD) scheme based on Lanczos subspaces and demonstrate its application to the calculation of bound-state and resonance eigenvalues. The new scheme combines the Lanczos three-term vector recursion for the generation of a tridiagonal representation of the Hamiltonian with a three-term scalar recursion to generate filtered states within the Lanczos representation. Eigenstates in the energy windows of interest can then be obtained by solving a small generalized eigenvalue problem in the subspace spanned by the filtered states. The scalar filtering recursion is based on the homogeneous eigenvalue equation of the tridiagonal representation of the Hamiltonian, and is simpler and more efficient than our previous quasi-minimum-residual filter diagonalization (QMRFD) scheme (H. G. Yu and S. C. Smith, Chem. Phys. Lett., 1998, 283, 69), which was based on solving for the action of the Green operator via an inhomogeneous equation. A low-storage method for the construction of Hamiltonian and overlap matrix elements in the filtered-basis representation is devised, in which contributions to the matrix elements are computed simultaneously as the recursion proceeds, allowing coefficients of the filtered states to be discarded once their contribution has been evaluated. Application to the HO2 system shows that the new scheme is highly efficient and can generate eigenvalues with the same numerical accuracy as the basic Lanczos algorithm.
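The Lanczos three-term vector recursion at the core of the scheme can be sketched generically (a random symmetric matrix stands in for the Hamiltonian, and full reorthogonalization is omitted, as in the basic algorithm):

```python
# Lanczos three-term recursion: builds the tridiagonal representation T of a
# symmetric matrix H in the Krylov subspace of a start vector, then extracts
# eigenvalues of T. Generic sketch; H here is a random stand-in Hamiltonian.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos(H, v0, m):
    n = len(v0)
    alpha = np.zeros(m)              # diagonal of T
    beta = np.zeros(m - 1)           # off-diagonal of T
    v_prev = np.zeros(n)
    v = v0 / np.linalg.norm(v0)
    b = 0.0
    for j in range(m):
        w = H @ v - b * v_prev       # three-term recursion step
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j < m - 1:
            b = np.linalg.norm(w)    # assumes no exact breakdown (b > 0)
            beta[j] = b
            v_prev, v = v, w / b
    return alpha, beta

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
H = (A + A.T) / 2                    # symmetric test matrix
alpha, beta = lanczos(H, rng.standard_normal(200), 60)
ritz = eigh_tridiagonal(alpha, beta, eigvals_only=True)
print(ritz[:5])                      # Ritz values approximating extremal eigenvalues
```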
Population pharmacokinetics of tacrolimus in children who receive cut-down or full liver transplants
Abstract:
Background. The aim of this study was to investigate the population pharmacokinetics of tacrolimus in pediatric liver transplant recipients and to identify factors that may explain pharmacokinetic variability. Methods. Data were collected retrospectively from 35 children who received oral immunosuppressant therapy with tacrolimus. Maximum likelihood estimates were sought for the typical values of apparent clearance (CL/F) and apparent volume of distribution (V/F) with the program NONMEM. Factors screened for influence on the pharmacokinetic parameters were weight, age, gender, postoperative day, days since commencing tacrolimus therapy, transplant type (whole child liver or cut-down adult liver), liver function tests (bilirubin, alkaline phosphatase [ALP], aspartate aminotransferase [AST], gamma-glutamyl transferase [GGT], alanine aminotransferase [ALT]), creatinine clearance, hematocrit, corticosteroid dose, and concurrent therapy with metabolic inducers and inhibitors of tacrolimus. Results. No clear correlation existed between tacrolimus dosage and blood concentrations (r² = 0.003). Transplant type, age, and liver function test values were the most important factors (P
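For illustration, a common structural model behind such NONMEM analyses is the one-compartment model with first-order absorption, parameterized directly by CL/F and V/F; this is a generic sketch with assumed parameter values, not the covariate model fitted in the study:

```python
# One-compartment, first-order absorption PK model parameterized by apparent
# clearance (CL/F) and apparent volume (V/F). Illustrative sketch only; ka is
# an assumed extra parameter and all values below are placeholders.
import numpy as np

def conc_oral(t, dose, cl_f, v_f, ka):
    """Blood concentration at time t after a single oral dose (requires ka != ke)."""
    ke = cl_f / v_f                       # elimination rate constant
    return (dose * ka / (v_f * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.5, 24, 48)              # hours post-dose
c = conc_oral(t, dose=5.0, cl_f=12.0, v_f=300.0, ka=0.5)  # assumed values
print(c.max())                            # peak concentration of this profile
```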
Abstract:
We study the effect of quantum interference on the population distribution and absorptive properties of a V-type three-level atom driven by two lasers of unequal intensities and different angular frequencies. Three coupling configurations of the lasers to the atom are analysed: (a) both lasers coupled to the same atomic transition, (b) each laser coupled to a different atomic transition and (c) each laser coupled to both atomic transitions. Dressed states for the three coupling configurations are identified, and the population distribution and absorptive properties of the weaker field are interpreted in terms of transition dipole moments and transition frequencies among these dressed states. In particular, we find that in the first two cases there is no population inversion between the bare atomic states, but the population can be trapped in a superposition of the dressed states induced by quantum interference and the stronger field. We show that the trapping of the population, which results from the cancellation of transition dipole moments, does not prevent the weaker field from being coupled to the cancelled (dark) transitions. As a result, the weaker field can be strongly amplified on transparent transitions. In the case of each laser coupled to both atomic transitions the population can be trapped in a linear superposition of the excited bare atomic states, leaving the ground state unpopulated in the steady state. Moreover, we find that the absorption rate of the weaker field depends on the detuning of the strong field from the atomic resonances and the splitting between the atomic excited states. When the strong field is resonant with one of the atomic transitions a quasi-trapping effect appears in one of the dressed states. In the quasi-trapping situation all the transition dipole moments are different from zero, which allows the weaker field to be amplified on the inverted transitions. When the strong field is tuned halfway between the atomic excited states, the population is completely trapped in one of the dressed states and no amplification is found for the weaker field.
Abstract:
We describe in detail the theory underpinning the measurement of density matrices of a pair of quantum two-level systems (qubits). Our particular emphasis is on qubits realized by the two polarization degrees of freedom of a pair of entangled photons generated in a down-conversion experiment; however, the discussion applies in general, regardless of the actual physical realization. Two techniques are discussed, namely, a tomographic reconstruction (in which the density matrix is linearly related to a set of measured quantities) and a maximum likelihood technique which requires numerical optimization (but has the advantage of producing density matrices that are always non-negative definite). In addition, a detailed error analysis is presented, allowing errors in quantities derived from the density matrix, such as the entropy or entanglement of formation, to be estimated. Examples based on down-conversion experiments are used to illustrate our results.
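The linear tomographic reconstruction has a closed form for two qubits, ρ = (1/4) Σ_{ij} S_{ij} σ_i ⊗ σ_j with S_{ij} the 16 Pauli correlation values; the sketch below simulates the S_{ij} from a known test state rather than from real coincidence counts:

```python
# Linear tomographic reconstruction of a two-qubit density matrix from the 16
# Pauli correlation values S_ij = Tr[rho (sigma_i x sigma_j)]. Here the S_ij
# are computed from a known test state instead of measured photon counts.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

# Test state: the Bell state (|HH> + |VV>)/sqrt(2)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_true = np.outer(psi, psi.conj())

# "Measurements": the 16 two-qubit Pauli expectation values
S = np.array([[np.trace(rho_true @ np.kron(a, b)).real for b in paulis]
              for a in paulis])

# Linear reconstruction: rho = (1/4) * sum_ij S_ij sigma_i x sigma_j
rho = sum(S[i, j] * np.kron(paulis[i], paulis[j])
          for i in range(4) for j in range(4)) / 4

print(np.allclose(rho, rho_true))   # True: exact data recover the state
```

With noisy counts the linearly reconstructed matrix need not be non-negative definite, which is exactly what motivates the maximum likelihood technique the abstract discusses.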
Abstract:
Sequences from the tuf gene coding for the elongation factor EF-Tu were amplified and sequenced from the genomic DNA of Pirellula marina and Isosphaera pallida, two species of bacteria within the order Planctomycetales. A near-complete (1140-bp) sequence was obtained from Pi. marina and a partial (759-bp) sequence was obtained for I. pallida. Alignment of the deduced Pi. marina EF-Tu amino acid sequence against reference sequences demonstrated the presence of a unique 11-amino-acid sequence motif not present in any other division of the domain Bacteria. Pi. marina shared the highest percentage amino acid sequence identity with I. pallida but showed only a low percentage identity with other members of the domain Bacteria. This is consistent with the concept of the planctomycetes as a unique division of the Bacteria. Neither primary sequence comparison of EF-Tu nor phylogenetic analysis supports any close relationship between planctomycetes and the chlamydiae, which has previously been postulated on the basis of 16S rRNA. Phylogenetic analysis of aligned EF-Tu amino acid sequences performed using distance, maximum-parsimony, and maximum likelihood approaches yielded contradictory results with respect to the position of planctomycetes relative to other bacteria. It is hypothesized that long-branch attraction effects due to unequal evolutionary rates and mutational saturation effects may account for some of the contradictions.
Abstract:
The variation of seawater level resulting from tidal fluctuations is usually neglected in regional groundwater flow studies. Although the tidal oscillation is damped near the shoreline, there is a quasi-steady-state rise in the mean water-table position, which may have an influence on regional groundwater flow. In this paper the effects of tidal fluctuations on groundwater hydraulics are investigated using a variably saturated numerical model that includes the effects of a realistic mild beach slope, seepage face and the unsaturated zone. In particular the impact of these factors on the velocity field in the aquifer is assessed. Simulations show that the tidal fluctuation has substantial consequences for the local velocity field in the vicinity of the exit face, which affects the nearshore migration of contaminants in coastal aquifers. An overheight in the water table as a result of the tidal fluctuation is observed and this has a significant effect on groundwater discharge to the sea when the landward boundary condition is a constant water level. The effect of beach slope is very significant and simplifying the problem by considering a vertical beach face causes serious errors in predicting the water-table position and the groundwater flux. For media with a high effective capillary fringe, the moisture retained above the water table is important in determining the effects of the tidal fluctuations.
Abstract:
Significant pain continues to be reported by many hospitalized patients despite the numerous and varied educational programs developed and implemented to improve pain management. A theoretically based Peer Intervention Program was designed from a predictive model to address nurses' beliefs, attitudes, subjective norms, self-efficacy, perceived control and intentions in the management of pain with p.r.n. (as required) narcotic analgesia. The pilot study of this program utilized a quasi-experimental pre-post test design with patient-intervention, nurse-and-patient-intervention, and control conditions consisting of 24, 18 and 19 nurses, respectively. One week after the intervention, significant differences were found between the nurse-and-patient condition and the two other conditions in beliefs, self-efficacy and perceived control, with a positive trend in attitudes, subjective norms and intentions. The most positive aspects of the program were supportive interactive discussions with peers and an awareness and understanding of beliefs and attitudes and their roles in behavior.
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one-major-gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two-major-gene model with a heritability value of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model. The presence of the two major genes was confirmed with the addition of an unspecified number of polygenes.
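A toy version of the kind of simulation used in such a power study, generating F2 phenotypes under a one-major-gene-plus-polygenes model (effect sizes and variance components below are arbitrary placeholders):

```python
# Simulate F2 phenotypes under a mixed-inheritance model: one major gene with
# additive and dominance effects, plus normally distributed polygenic and
# environmental terms. Placeholder values for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n = 300                                   # F2 population size
a, d = 2.0, 0.5                           # major-gene additive / dominance effects
var_poly, var_env = 1.0, 1.0              # polygenic and environmental variances

# F2 genotype at the major gene: aa, Aa, AA with probabilities 1/4, 1/2, 1/4
geno = rng.choice([0, 1, 2], size=n, p=[0.25, 0.5, 0.25])
major = np.where(geno == 1, d, (geno - 1) * a)   # -a, d, +a genotype coding

pheno = major + rng.normal(0, np.sqrt(var_poly), n) + rng.normal(0, np.sqrt(var_env), n)

# A power study would repeatedly fit competing segregation models to such
# samples (as JSA does across all six generations) and count correct choices.
print(pheno.mean(), pheno.var())
```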
Abstract:
Participatory plant breeding (PPB) has been suggested as an effective alternative to formal plant breeding (FPB) as a breeding strategy for achieving productivity gains under low-input conditions. With genetic progress through PPB and FPB being determined by the same genetic variables, the likelihood of success of PPB approaches applied in low-input target conditions was analyzed using two case studies from FPB that have resulted in significant productivity gains under low-input conditions: (1) breeding tropical maize for low-input conditions by CIMMYT, and (2) breeding of spring wheat for the highly variable low-input rainfed farming systems in Australia. In both cases, genetic improvement was an outcome of long-term investment in a sustained research effort aimed at understanding the detail of the important environmental constraints to productivity and the plant requirements for improved adaptation to the identified constraints, followed up by the design and continued evaluation of efficient breeding strategies. The breeding strategies used differed between the two case studies but were consistent in their attention to the key determinants of response to selection: (1) ensuring adequate sources of genetic variation and high selection pressures for the important traits at all stages of the breeding program, (2) use of experimental procedures to achieve high levels of heritability in the breeding trials, and (3) testing strategies that achieved a high genetic correlation between performance of germplasm in the breeding trials and under on-farm conditions. The implications of the outcomes from these FPB case studies for realizing the positive motivations for adopting PPB strategies are discussed, with particular reference to low-input target environment conditions.
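The "key determinants of response to selection" invoked here map onto the standard quantitative-genetics expressions for direct and correlated response (textbook formulas, not taken from the article itself):

$$ R = i\,h\,\sigma_A, \qquad CR_{\mathrm{farm}} = i\,h_{\mathrm{trial}}\,r_g\,\sigma_{A,\mathrm{farm}} $$

where $i$ is the selection intensity, $h$ the square root of heritability in the breeding trials, $r_g$ the genetic correlation between trial and on-farm performance, and $\sigma_A$ the additive genetic standard deviation; points (1)-(3) above correspond to keeping $\sigma_A$ and $i$, $h$, and $r_g$ high, respectively.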
Abstract:
The 16S rRNA gene (16S rDNA) is currently the most widely used gene for estimating the evolutionary history of prokaryotes. To date, there are more than 30 000 16S rDNA sequences available from the core databases, GenBank, EMBL and DDBJ. This great number may cause a dilemma when composing datasets for phylogenetic analysis, since the choice and number of reference organisms are known to affect the resulting tree topology. A group of sequences appearing monophyletic in one dataset may not be so in another. This can be especially problematic when establishing the relationships of distantly related sequences at the division (phylum) level. In this study, a multiple-outgroup approach to resolving division-level phylogenetic relationships is suggested using 16S rDNA data. The approach is illustrated by two case studies concerning the monophyly of two recently proposed bacterial divisions, OP9 and OP10.
Abstract:
Economic globalisation is seen by many as a driving force for global economic growth. Yet opinion is divided about the benefits of this process, as highlighted by the WTO meeting in Seattle in late 1999. Proponents of economic globalisation view it as a positive force for environmental improvement and as a major factor increasing the likelihood of sustainable development through its likely boost to global investment. These proponents mostly appeal to analysis based on the environmental Kuznets curve (EKC) to support their views about environmental improvement. But EKC analysis has significant deficiencies. Furthermore, it is impossible to be confident that the process of economic globalisation will result in sustainable development if 'weak conditions' only are satisfied. 'Strong conditions' probably need to be satisfied to achieve sustainable development, and given current global institutional arrangements, these are likely to be violated by the economic globalisation process. Global political action seems to be needed to avert a deterioration in the global environment and to prevent unsustainability of development. This exposition demonstrates the limitations of EKC analysis, identifies positive and negative effects of economic globalisation on pollution levels, and highlights connections between globalisation and the debate about whether strong or weak conditions are required for sustainable development. The article concludes with a short discussion of the position of the WTO in relation to trade and the environment and the WTO's seeming de facto endorsement of weak conditions for sustainable development. It suggests that the WTO's relative neglect of environmental concerns is no longer politically tenable and needs to be reassessed in the light of recent developments in economic analysis. The skew of economic growth, e.g. in favour of developing countries, is shown to be extremely important from a global environmental perspective.
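For context, the EKC analyses criticized here typically rest on a reduced-form quadratic in income (a standard specification from the EKC literature, not a formula given in this article):

$$ E_{it} = \alpha_i + \beta_1 \ln y_{it} + \beta_2 (\ln y_{it})^2 + \varepsilon_{it} $$

where $E_{it}$ is a pollution indicator and $y_{it}$ per-capita income; the inverted-U shape requires $\beta_1 > 0$ and $\beta_2 < 0$, with the turning point at $\ln y^* = -\beta_1 / (2\beta_2)$.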
Abstract:
A deterministic mathematical model which predicts the probability of developing a new drug-resistant parasite population within the human host is reported. The model incorporates the host's specific antibody response to PfEMP1, and also investigates the influence of chemotherapy on the probability of developing a viable drug-resistant parasite population within the host. Results indicate that early treatment, and a high antibody threshold coupled with a long lag time between antibody stimulation and activity, are risk factors which increase the likelihood of developing a viable drug-resistant parasite population. High parasite mutation rates and fast PfEMP1 var gene switching are also identified as risk factors. The model output allows the relative importance of the various risk factors as well as the relationships between them to be established, thereby increasing the understanding of the conditions which favour the development of a new drug-resistant parasite population.
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
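Written out, the factorization described above takes the standard zero-augmented form (a sketch with the random effects suppressed; $z_{ij} = 1$ when no claim is made):

$$ L = \prod_{ij} \pi_{ij}^{\,z_{ij}} \left[ (1 - \pi_{ij})\, g(y_{ij};\, \mu_{ij}, \nu) \right]^{1 - z_{ij}} $$

so $\log L$ separates into a Bernoulli term for claim incidence (the $\pi_{ij}$) and a gamma term $g$ for claim magnitude given that a claim is made (the $\mu_{ij}$). Linking, e.g., $\mathrm{logit}(\pi_{ij})$ and $\log(\mu_{ij})$ to covariates plus random effects then yields the two independent GLMMs the paper fits.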