33 results for Likelihood functions
Abstract:
Smoothing the potential energy surface is a general and commonly applied strategy for structure optimization. We propose a combination of soft-core potential energy functions and a variation of the diffusion equation method to smooth potential energy surfaces, which is applicable to complex systems such as protein structures. The performance of the method was demonstrated by comparison with simulated annealing, using the refinement of the undecapeptide Cyclosporin A as a test case. Because the methods are heuristic and the results are only meaningful in a statistical sense, simulations were repeated many times using different initial conditions and structures.
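Since the abstract uses simulated annealing as its benchmark, a minimal sketch of that baseline may help. This is a generic illustration on an invented rugged 1-D surface, not the authors' soft-core/diffusion-equation method; all function names and parameters are assumptions. The restart loop mirrors the abstract's point that heuristic runs must be repeated from different initial conditions.

```python
import math
import random

def simulated_annealing(f, x0, rng, step=0.5, t0=5.0, cooling=0.995, iters=5000):
    """Minimize f starting from x0: accept uphill moves with probability
    exp(-delta/T), geometrically lowering the temperature T each iteration."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

def restarted_sa(f, n_restarts=20, seed=0):
    """Repeat the heuristic from random initial points and keep the best
    run -- results from a single run are only meaningful statistically."""
    rng = random.Random(seed)
    runs = [simulated_annealing(f, rng.uniform(-5.0, 5.0), rng)
            for _ in range(n_restarts)]
    return min(runs, key=lambda r: r[1])

# A rugged toy surface: global minimum f(0) = 0, many shallow local wells.
rugged = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_best, f_best = restarted_sa(rugged)
```

Smoothing methods like the one proposed aim to remove exactly the shallow wells that force this kind of repeated stochastic search.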
Abstract:
The financial and economic analysis of investment projects is typically carried out using the technique of discounted cash flow (DCF) analysis. This module introduces the concepts of discounting and DCF analysis for the derivation of project performance criteria such as the net present value (NPV), internal rate of return (IRR) and benefit-to-cost (B/C) ratio. These concepts and criteria are introduced with respect to a simple example, for which calculations using Microsoft Excel are demonstrated.
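The three criteria named above are simple to compute directly. The sketch below is a hypothetical illustration, not the module's Excel example: the cash-flow figures and function names are invented, and the IRR is found by bisection on the NPV.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at end of year t (t = 0 is today)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the discount rate at which NPV = 0
    (assumes a conventional project: one sign change in the cash-flow stream)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def bc_ratio(rate, benefits, costs):
    """Benefit/cost ratio: PV of the benefit stream over PV of the cost stream."""
    return npv(rate, benefits) / npv(rate, costs)

# Hypothetical project: 1000 outlay today, then 400 per year for 4 years.
flows = [-1000.0, 400.0, 400.0, 400.0, 400.0]
print(round(npv(0.10, flows), 2))  # NPV at a 10% discount rate -> 267.95
print(irr(flows))                  # rate at which NPV = 0, approx. 0.2186
```

A project is acceptable under all three criteria here: NPV > 0 at the chosen discount rate, IRR above that rate, and B/C ratio above one; the criteria can disagree when ranking mutually exclusive projects.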
Abstract:
Cells from patients with the genetic disorder ataxia-telangiectasia (A-T) are hypersensitive to ionizing radiation and radiomimetic agents, both of which generate reactive oxygen species capable of causing oxidative damage to DNA and other macromolecules. We describe in A-T cells constitutive activation of pathways that normally respond to genotoxic stress: basal levels of p53 and p21(WAF1/CIP1), phosphorylation on serine 15 of p53, and the Tyr15-phosphorylated form of cdc2 are chronically elevated in these cells. Treatment of A-T cells with the antioxidant alpha-lipoic acid significantly reduced the levels of these proteins, pointing to the involvement of reactive oxygen species in their chronic activation. These findings suggest that the absence of functional ATM results in a mild but continuous state of oxidative stress, which could account for several features of the pleiotropic phenotype of A-T.
Abstract:
Conservation of biodiversity can generate considerable indirect economic value, and this is being increasingly recognized in China. For a forest-ecosystem nature reserve, the most important of its values are its ecological functions, which provide human beings and other living things with beneficial environmental services. These services include water conservancy, soil protection, CO2 fixation and O2 release, nutrient cycling, pollutant decomposition, and disease and pest control. Based on a case study in Changbaishan Mountain Biosphere Reserve in Northeast China, this paper provides a monetary valuation of these services using opportunity cost and alternative cost methods. Under this approach, the reserve is valued at 510.11 million yuan (USD 61.68 million) per year, 10 times higher than the opportunity cost (51.78 million yuan/ha·a) of regular timber production. While China has heeded the United Nations Environment Programme (UNEP)'s call for economic evaluation of ecological functions, the assessment techniques used need to be improved, in China and in the West, for the reasons discussed.
Abstract:
The EphA4 receptor tyrosine kinase regulates the formation of the corticospinal tract (CST), a pathway controlling voluntary movements, and of the anterior commissure (AC), connecting the neocortical temporal lobes. To study EphA4 kinase signaling in these processes, we generated mice expressing mutant EphA4 receptors either lacking kinase activity or with severely downregulated kinase activity. We demonstrate that EphA4 is required for CST formation, acting as a receptor, a function that requires an active kinase domain. In contrast, the formation of the AC is rescued by kinase-dead EphA4, suggesting that in this structure EphA4 acts as a ligand, a function for which its kinase activity is not required. Unexpectedly, the cytoplasmic sterile-alpha motif (SAM) domain is not required for EphA4 functions. Our findings establish both kinase-dependent and kinase-independent functions of EphA4 in the formation of major axon tracts.
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics, 44: 2, 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure, and a naive implementation can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
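To make the setting concrete, here is a deliberately simplified sketch: a one-dimensional Gaussian-mixture EM that treats each bin as its midpoint weighted by its count. This is a midpoint approximation, not the exact binned-data EM of McLachlan and Jones (which replaces the midpoints by per-bin integrals of the component densities, and is what the paper generalizes to the multivariate case); all names below are invented.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_binned_mixture(midpoints, counts, k=2, iters=200):
    """Weighted EM for a k-component 1-D Gaussian mixture on histogram data,
    approximating each bin by its midpoint with multiplicity = its count."""
    n = sum(counts)
    lo, hi = min(midpoints), max(midpoints)
    # crude initialisation: spread the means over the occupied range
    mus = [lo + (j + 1) * (hi - lo) / (k + 1) for j in range(k)]
    sigmas = [(hi - lo) / (2 * k)] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each bin midpoint
        resp = []
        for x in midpoints:
            p = [weights[j] * normal_pdf(x, mus[j], sigmas[j]) for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([pj / s for pj in p])
        # M-step: count-weighted parameter updates
        for j in range(k):
            nj = sum(c * r[j] for c, r in zip(counts, resp))
            mus[j] = sum(c * r[j] * x for c, r, x in zip(counts, resp, midpoints)) / nj
            var = sum(c * r[j] * (x - mus[j]) ** 2
                      for c, r, x in zip(counts, resp, midpoints)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-6))
            weights[j] = nj / n
    return weights, mus, sigmas
```

With narrow bins the midpoint shortcut is close to the exact treatment, which is consistent with the abstract's observation that enough bins recover the unbinned fit almost exactly; the paper's contribution is making the exact bin integrals affordable in several dimensions.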
Abstract:
The numerical implementation of the complex image approach for the Green's function of a mixed-potential integral-equation formulation is examined and is found to be limited to low values of k₀ρ (in this context k₀ρ = 2πρ/λ₀, where ρ is the distance between the source and field points of the Green's function and λ₀ is the free-space wavelength). This is a clear limitation for problems of large dimension or high frequency, where this limit is easily exceeded. This paper examines the various strategies and proposes a hybrid method whereby most of the above problems can be avoided. An efficient integral method that is valid for large k₀ρ is combined with the complex image method in order to take advantage of the relative merits of both schemes. It is found that a wide overlapping region exists between the two techniques, allowing a very efficient and consistent approach for accurately calculating the Green's functions. In this paper, the method developed for the computation of the Green's function is used for planar structures containing both lossless and lossy media.
Abstract:
The neuropathological changes associated with Huntington's disease (HD) are most marked in the head of the caudate nucleus and, to a lesser extent, in the putamen and globus pallidus, suggesting that at least part of the language impairments found in patients with HD may result from non-thalamic subcortical (NTS) pathology. The present study aimed to test the hypothesis that a signature profile of impaired language functions is found in patients who have sustained damage to the non-thalamic subcortex, either focally induced or resulting from neurodegenerative pathology. The language abilities of a group of patients with Huntington's disease (n=13) were compared with those of an age- and education-matched group of patients with chronic NTS lesions following stroke (n=13) and a non-neurologically impaired control group (n=13). The three groups were compared on language tasks that assessed both primary and more complex language abilities. The primary language battery consisted of the Western Aphasia Battery and the Boston Naming Test, whilst the more complex cognitive-linguistic battery employed selected subtests from the Test of Language Competence-Expanded, the Test of Word Knowledge and the Word Test-Revised. On many of the tests of primary language function from the Western Aphasia Battery, both the HD and NTS participants performed in a similar manner to the control participants. The language performances of the HD participants were significantly more impaired (p<0.05 using modified Bonferroni adjustments) than the control group, however, on various lexico-semantic tasks (e.g. the Boston Naming Test and providing definitions), on both single-word and sentence-level generative tasks (e.g. category fluency and formulating sentences), and on tasks which required interpretation of ambiguous, figurative and inferential meaning.
The difficulties that patients with HD experienced with tasks assessing complex language abilities were strikingly similar, both qualitatively and quantitatively, to the language profile produced by NTS participants. The results provide evidence to suggest that a signature language profile is associated with damage to the non-thalamic subcortex resulting from either focal neurological insult or a degenerative disease.
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood based on the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
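To fix ideas, the fully parametric benchmark can be sketched by writing out the full likelihood of a two-failure-type mixture: a logistic model for the type probability and, as an assumed stand-in, Weibull component lifetimes. The data and names below are invented; the paper's semi-parametric method additionally leaves the component-baseline hazards unspecified, which this sketch does not attempt.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def weibull_pdf(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def weibull_surv(t, shape, scale):
    return math.exp(-(t / scale) ** shape)

def mixture_loglik(times, events, covariate, params):
    """Full log-likelihood of a two-type parametric mixture competing-risks
    model: P(type 1 | x) is logistic in x, each type's lifetime is Weibull.
    events[i] is 1 or 2 for an observed failure of that type, 0 if censored."""
    b0, b1, sh1, sc1, sh2, sc2 = params
    ll = 0.0
    for t, d, x in zip(times, events, covariate):
        p1 = logistic(b0 + b1 * x)
        if d == 1:       # observed type-1 failure at t
            ll += math.log(p1 * weibull_pdf(t, sh1, sc1))
        elif d == 2:     # observed type-2 failure at t
            ll += math.log((1.0 - p1) * weibull_pdf(t, sh2, sc2))
        else:            # censored at t: still at risk of either failure type
            ll += math.log(p1 * weibull_surv(t, sh1, sc1)
                           + (1.0 - p1) * weibull_surv(t, sh2, sc2))
    return ll
```

Maximizing this function over `params` with a general-purpose optimizer gives the fully parametric fit; the ECM algorithm of the paper instead alternates conditional maximization steps, with the baseline hazards left nonparametric.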
Abstract:
This paper deals with an n-fold Weibull competing risk model. A characterisation of the WPP plot is given along with estimation of model parameters when modelling a given data set. These are illustrated through two examples. A study of the different possible shapes for the density and failure rate functions is also presented. (C) 2003 Elsevier Ltd. All rights reserved.
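For reference, the single-Weibull case underlying the WPP (Weibull probability plot) can be sketched as follows: a single Weibull population plots as a straight line whose slope estimates the shape parameter, whereas the n-fold competing-risk model of the paper departs from a straight line. The function names and the median-rank plotting position are illustrative choices, not taken from the paper.

```python
import math

def wpp_points(times):
    """Weibull probability plot coordinates: x = ln t, y = ln(-ln(1 - F)),
    with F estimated by the median-rank plotting position (i - 0.3)/(n + 0.4)."""
    ts = sorted(times)
    n = len(ts)
    pts = []
    for i, t in enumerate(ts, start=1):
        f = (i - 0.3) / (n + 0.4)
        pts.append((math.log(t), math.log(-math.log(1.0 - f))))
    return pts

def fit_line(pts):
    """Least-squares slope and intercept; on a WPP the slope estimates the
    Weibull shape, and exp(-intercept/slope) estimates the scale."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

Curvature or kinks in the plotted points are the graphical signal that a single Weibull is inadequate, which is where characterisations of the competing-risk WPP shape, such as the one given in the paper, come into play.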
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon, and by Zhang and Einstein, in recent publications. The estimator is very useful in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works, nor the original work by Pahl, provides a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator together with the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
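The classical-ML route to interval estimates invoked above can be illustrated generically. The sketch below uses the MLE of an exponential mean with a Wald (normal-approximation) interval; it is not Pahl's trace-length estimator or the paper's confidence-interval construction, just the standard asymptotic recipe that makes such intervals available once an estimator is identified as the MLE.

```python
import math

def exp_mle_ci(sample, z=1.959963984540054):
    """MLE of an exponential mean with a 95% Wald confidence interval.
    For an exponential with mean m, the Fisher information is n/m^2, so the
    asymptotic standard error of the MLE (the sample mean) is m/sqrt(n)."""
    n = len(sample)
    mean_hat = sum(sample) / n            # the MLE of the exponential mean
    half = z * mean_hat / math.sqrt(n)    # z * asymptotic standard error
    return mean_hat, (mean_hat - half, mean_hat + half)
```

The same recipe, estimate plus or minus a normal quantile times the inverse-Fisher-information standard error, extends to joint confidence regions for several parameters, which is the form needed for the trace-length and spatial-intensity pair discussed in the paper.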