32 results for Non-Gaussian statistical mechanics
Abstract:
The effects of thermodynamic non-ideality on the forms of sedimentation equilibrium distributions for several isoelectric proteins have been analysed on the statistical-mechanical basis of excluded volume to obtain an estimate of the extent of protein solvation. Values of the effective solvation parameter delta are reported for ellipsoidal as well as spherical models of the proteins, taken to be rigid, impenetrable macromolecular structures. The dependence of the effective solvated radius upon protein molecular mass exhibits reasonable agreement with the relationship calculated for a model in which the unsolvated protein molecule is surrounded by a 0.52-nm solvation shell. Although the observation that this shell thickness corresponds to a double layer of water molecules may be of questionable relevance to mechanistic interpretation of protein hydration, it augurs well for the assignment of magnitudes to the second virial coefficients of putative complexes in the quantitative characterization of protein-protein interactions under conditions where effects of thermodynamic non-ideality cannot justifiably be neglected. (C) 2001 Elsevier Science B.V. All rights reserved.
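As a rough numerical illustration of the mass-radius relationship described above (not taken from the paper), one can estimate an effective solvated radius by treating the unsolvated protein as a compact sphere of assumed partial specific volume ~0.73 cm^3/g and adding the 0.52 nm shell quoted in the abstract; the protein masses below are standard textbook values, and the partial specific volume is an assumption.

```python
# All numbers except the 0.52 nm shell are assumptions/textbook values, not from the paper.
import math

N_A = 6.022e23  # Avogadro's number, molecules per mol

def solvated_radius_nm(molar_mass, v_bar=0.73, shell_nm=0.52):
    """Radius of the equivalent anhydrous sphere (molar_mass in g/mol,
    partial specific volume v_bar in cm^3/g) plus a uniform solvation shell."""
    r_dry_cm = (3.0 * molar_mass * v_bar / (4.0 * math.pi * N_A)) ** (1.0 / 3.0)
    return r_dry_cm * 1e7 + shell_nm  # 1 cm = 1e7 nm

for name, mass in [("lysozyme", 14_300), ("ovalbumin", 45_000), ("serum albumin", 66_500)]:
    print(f"{name}: effective solvated radius ~ {solvated_radius_nm(mass):.2f} nm")
```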
Abstract:
We show that stochastic electrodynamics and quantum mechanics give quantitatively different predictions for the quantum nondemolition (QND) correlations in travelling-wave second harmonic generation. Using phase-space methods and stochastic integration, we calculate correlations in both the positive-P and truncated Wigner representations, the latter being equivalent to the semiclassical theory of stochastic electrodynamics. We show that the semiclassical results differ from the fully quantum ones in the regions where the system performs best in relation to the QND criteria, and that they significantly overestimate the performance in these regions. (C) 2001 Published by Elsevier Science B.V.
Abstract:
Applying programming techniques to detailed data for 406 rice farms in 21 villages, for 1997, produces inefficiency measures which differ substantially from the results of simple yield and unit cost measures. For the Boro (dry) season, mean technical efficiency was 69.4 per cent, allocative efficiency was 81.3 per cent, cost efficiency was 56.2 per cent and scale efficiency 94.9 per cent. The Aman (wet) season results are similar, but a few points lower. Allocative inefficiency is due to overuse of labour, suggesting population pressure, and of fertiliser, where recommended rates may warrant revision. Second-stage regressions show that large families are more inefficient, whereas farmers with better access to input markets, and those who do less off-farm work, tend to be more efficient. The information on the sources of inter-farm performance differentials could be used by extension agents to help inefficient farmers. There is little excuse for such sub-optimal use of survey data, which are often collected at substantial cost.
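The abstract does not give the programming formulation, but the efficiency measures it reports are of the kind produced by data envelopment analysis. The sketch below is a minimal input-oriented, constant-returns DEA linear program on made-up data (the farm counts, inputs and output are hypothetical, not the study's survey), solved with scipy.

```python
# Minimal input-oriented, constant-returns DEA sketch on invented data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_farms = 8
X = rng.uniform(1, 10, size=(n_farms, 3))   # hypothetical inputs: land, labour, fertiliser
Y = (X @ np.array([0.5, 0.3, 0.2]) * rng.uniform(0.6, 1.0, n_farms)).reshape(-1, 1)  # output: yield

def technical_efficiency(k):
    # decision variables: [theta, lambda_1 ... lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n_farms)]
    # input constraints:  sum_j lambda_j * x_ji <= theta * x_ki
    A_in = np.hstack([-X[[k]].T, X.T])
    # output constraints: sum_j lambda_j * y_jr >= y_kr
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    bounds = [(None, None)] + [(0, None)] * n_farms
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x[0]

for k in range(n_farms):
    print(f"farm {k}: technical efficiency = {technical_efficiency(k):.3f}")
```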
Abstract:
This paper proposes a template for modelling complex datasets that integrates traditional statistical modelling approaches with more recent advances in statistics and modelling through an exploratory framework. Our approach builds on the well-known and long-standing traditional idea of 'good practice in statistics' by establishing a comprehensive framework for modelling that focuses on exploration, prediction, interpretation and reliability assessment, a relatively new idea that allows individual assessment of predictions. The integrated framework we present comprises two stages. The first involves the use of exploratory methods to help visually understand the data and identify a parsimonious set of explanatory variables. The second encompasses a two-step modelling process, in which non-parametric methods such as decision trees and generalized additive models are used to identify important variables and their relationship with the response before a final predictive model is considered. We focus on fitting the predictive model using parametric, non-parametric and Bayesian approaches. This paper is motivated by a medical problem where interest focuses on developing a risk stratification system for morbidity of 1,710 cardiac patients given a suite of demographic, clinical and preoperative variables. Although the methods we use are applied specifically to this case study, they can be applied across any field, irrespective of the type of response.
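A minimal sketch of the two-step idea on synthetic data: a non-parametric learner (here a decision tree) screens candidate variables, and a parametric model (here logistic regression, standing in for the final predictive model) is then fitted to the retained subset. All data and variable counts are hypothetical, not the cardiac case study.

```python
# Hypothetical data: 10 candidate explanatory variables, binary outcome.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: non-parametric screening -- rank candidate variables by tree-based importance.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
keep = np.argsort(tree.feature_importances_)[::-1][:3]   # retain a parsimonious subset

# Step 2: parametric predictive model fitted to the retained variables.
model = LogisticRegression().fit(X_train[:, keep], y_train)
print("retained variables:", keep)
print("held-out accuracy:", model.score(X_test[:, keep], y_test))
```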
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works, nor the original work by Pahl, provides a rigorous basis for the determination of a confidence interval for the estimator, or a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
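The abstract does not spell out the window-sampling likelihood, so the snippet below only illustrates the generic 'classical ML theory' recipe it appeals to: a Wald-type confidence interval built from the MLE and the inverse observed Fisher information, shown for a toy exponential-mean model rather than the trace-length problem itself.

```python
# Toy model only: exponential observations with mean mu; the MLE is the sample mean
# and the observed Fisher information is n / mu_hat^2, giving a Wald 95% interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.exponential(scale=3.0, size=80)   # hypothetical observations

mu_hat = data.mean()                         # maximum likelihood estimate
se = mu_hat / np.sqrt(data.size)             # sqrt of inverse observed information
z = stats.norm.ppf(0.975)
print(f"MLE = {mu_hat:.2f}, approximate 95% CI = ({mu_hat - z * se:.2f}, {mu_hat + z * se:.2f})")
```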
Abstract:
Optical tweezers are widely used for the manipulation of cells and their internal structures. However, the degree of manipulation possible is limited by poor control over the orientation of the trapped cells. We show that it is possible to controllably align or rotate disc-shaped cells (chloroplasts of Spinacia oleracea) in a plane-polarized Gaussian beam trap, using optical torques resulting predominantly from circular polarization induced in the transmitted beam by the non-spherical shape of the cells.
Abstract:
Commercial explosives behave non-ideally in rock blasting. A direct and convenient measure of non-ideality is the detonation velocity. In this study, an alternative model fitted to experimental unconfined detonation velocity data is proposed, and the effect of confinement on the detonation velocity is modelled. Unconfined data for several explosives showing various levels of non-ideality were successfully modelled. The effect of confinement on detonation velocity was modelled empirically based on field detonation velocity measurements. Confined detonation velocity is a function of the ideal detonation velocity, the unconfined detonation velocity at a given blasthole diameter and the rock stiffness. For a given explosive and charge diameter, detonation velocity increases as confinement increases. The confinement model is implemented in a simple engineering-based non-ideal detonation model. A number of simulations are carried out and analysed to predict the explosive performance parameters for the adopted blasting conditions.
Abstract:
The purpose of this work was to model lung cancer mortality as a function of past exposure to tobacco and to forecast age- and sex-specific lung cancer mortality rates. A 3-factor age-period-cohort (APC) model, in which the period variable is replaced by the product of average tar content and adult tobacco consumption per capita, was estimated for the US, UK, Canada and Australia by the maximum likelihood method. Age- and sex-specific tobacco consumption was estimated from historical data on smoking prevalence and total tobacco consumption. Lung cancer mortality was derived from vital registration records. Future tobacco consumption, tar content and the cohort parameter were projected by autoregressive integrated moving average (ARIMA) estimation. The optimal exposure variable was found to be the product of average tar content and adult cigarette consumption per capita, lagged 25-30 years, for both males and females in all 4 countries. The coefficient of the product of average tar content and tobacco consumption per capita differs by age and sex. In all models, there was a statistically significant difference in the coefficient of the period variable by sex. In all countries, male age-standardized lung cancer mortality rates peaked in the 1980s and declined thereafter. Female mortality rates are projected to peak in the first decade of this century. The multiplicative models of age, tobacco exposure and cohort fit the observed data between 1950 and 1999 reasonably well, and the time-series models yield plausible past trends of the relevant variables. Despite a significant reduction in tobacco consumption and in the average tar content of cigarettes sold over the past few decades, the effect on lung cancer mortality is delayed by the time lag between exposure and established disease. As a result, the burden of lung cancer among females is only just reaching, or soon will reach, its peak, but has been declining for 1 to 2 decades in men. Future sex differences in lung cancer mortality are likely to be greater in North America than in Australia and the UK due to differences in exposure patterns between the sexes. (c) 2005 Wiley-Liss, Inc.
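A hedged sketch of the projection step on synthetic data: an exposure series (tar content times consumption per capita, in arbitrary units) is extrapolated with an ARIMA model and then lagged roughly 27 years, within the 25-30 year window the abstract reports, before being related to mortality. The APC model itself is not fitted here.

```python
# Synthetic exposure series and an illustrative ARIMA projection (statsmodels).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
years = np.arange(1950, 2000)
# synthetic exposure = average tar content x adult cigarette consumption per capita
exposure = 100 + np.cumsum(rng.normal(0, 2, years.size))

fit = ARIMA(exposure, order=(1, 1, 1)).fit()
projected = fit.forecast(steps=20)              # projected exposure for 2000-2019

# mortality in year t is driven by exposure roughly 27 years earlier
all_years = np.arange(1950, 2020)
all_exposure = np.concatenate([exposure, projected])
lag = 27
driver_for_2010 = all_exposure[np.where(all_years == 2010 - lag)[0][0]]
print(f"exposure driving 2010 mortality (lag {lag} y): {driver_for_2010:.1f}")
```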
Abstract:
Purpose: To evaluate the clinical features, treatment, and outcomes of a cohort of patients with ocular adnexal lymphoproliferative disease classified according to the World Health Organization modification of the Revised European-American Classification of Lymphoid Neoplasms, and to perform a robust statistical analysis of these data. Methods: Sixty-nine cases of ocular adnexal lymphoproliferative disease, seen in a tertiary referral center from 1992 to 2003, were included in the study. Lesions were classified using the World Health Organization modification of the Revised European-American Classification of Lymphoid Neoplasms. Outcome variables included disease-specific survival, relapse-free survival, local control, and distant control. Results: Stage IV disease at presentation, aggressive lymphoma histology, the presence of prior or concurrent systemic lymphoma at presentation, and bilateral adnexal disease were significant predictors of reduced disease-specific survival, local control, and distant control. Multivariate analysis found that aggressive histology and bilateral adnexal disease were associated with significantly reduced disease-specific survival. Conclusions: The typical presentation of adnexal lymphoproliferative disease is with a painless mass, swelling, or proptosis; however, pain and inflammation occurred in 20% and 30% of patients, respectively. Stage at presentation, tumor histology, primary or secondary status, and whether the process was unilateral or bilateral were significant variables for disease outcome. In this study, distant spread of lymphoma was lower in patients who received greater than 20 Gy of orbital radiotherapy.
Abstract:
We investigate the nonclassicality of a photon-subtracted Gaussian field, which was produced in a recent experiment, using the negativity of the Wigner function and the nonexistence of a well-behaved positive P function. We obtain the condition for negativity of the Wigner function in the case of a mixed Gaussian incoming field, threshold photodetection and inefficient homodyne measurement. We show how similar the photon-subtracted state is to a superposition of coherent states.
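A small illustration of the Wigner-negativity criterion using QuTiP, with arbitrary parameters rather than the experimental ones: subtracting a photon from a squeezed vacuum (a Gaussian state) yields a state whose Wigner function dips below zero.

```python
# Arbitrary illustrative parameters; requires QuTiP.
import numpy as np
from qutip import basis, destroy, squeeze, wigner

N = 30                                    # Fock-space truncation
a = destroy(N)
sq_vac = squeeze(N, 0.5) * basis(N, 0)    # squeezed vacuum (a Gaussian state), r = 0.5
psi = (a * sq_vac).unit()                 # subtract one photon and renormalise

xvec = np.linspace(-4, 4, 201)
W = wigner(psi, xvec, xvec)
print("minimum of the Wigner function:", W.min())  # negative value signals nonclassicality
```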
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECMs assume non-zero entries in all their coefficient matrices. However, applications of VECMs to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECMs may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECMs may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm: a test of purchasing power parity and a three-variable system involving the stock market.
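For orientation, the sketch below fits a standard full-order VECM and runs a Granger non-causality test on synthetic I(1) data using statsmodels; it does not implement the authors' ZNZ-patterned algorithm or handle the I(2) case.

```python
# Synthetic example of the inference the ZNZ framework is concerned with.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(2)
n = 300
common = np.cumsum(rng.normal(size=n))                   # shared stochastic trend
data = pd.DataFrame({
    "x": common + rng.normal(scale=0.5, size=n),
    "y": common + rng.normal(scale=0.5, size=n),
    "z": np.cumsum(rng.normal(size=n)),                  # independent I(1) series
})

rank = select_coint_rank(data, det_order=0, k_ar_diff=1).rank   # choose cointegrating rank
res = VECM(data, k_ar_diff=1, coint_rank=max(rank, 1), deterministic="co").fit()
print(res.test_granger_causality(caused="z", causing=["x", "y"]).summary())
```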
Abstract:
Alcohol dependence is characterized by tolerance, physical dependence, and craving. The neuroadaptations underlying these effects of chronic alcohol abuse are likely due to altered gene expression. Previous gene expression studies using human post-mortem brain demonstrated that several gene families were altered by alcohol abuse. However, most of these changes in gene expression were small. It is not clear whether gene expression profiles have sufficient power to discriminate control from alcoholic individuals, or how consistent gene expression changes are when a relatively large sample size is examined. In the present study, microarray analysis (~47,000 elements) was performed on the superior frontal cortex of 27 individual human cases (14 well-characterized alcoholics and 13 matched controls). A partial least squares statistical procedure was applied to identify genes with altered expression levels in alcoholics. We found that genes involved in myelination, ubiquitination, apoptosis, cell adhesion, neurogenesis, and neural disease showed altered expression levels. Importantly, genes involved in neurodegenerative diseases such as Alzheimer's disease were significantly altered, suggesting a link between alcoholism and other neurodegenerative conditions. A total of 27 genes identified in this study were previously shown to be changed by alcohol abuse in earlier studies of human post-mortem brain. These results revealed a consistent re-programming of gene expression in alcohol abusers that reliably discriminates alcoholic from non-alcoholic individuals.
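A hedged sketch of PLS-based discrimination on simulated expression data: the gene counts, effect sizes and group labels below are made up, and this uses ordinary PLS regression for two-group discrimination rather than the study's exact procedure.

```python
# Simulated expression matrix and a PLS discrimination of two groups (scikit-learn).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_genes, n_cases = 2000, 27
X = rng.normal(size=(n_cases, n_genes))          # expression matrix (cases x genes)
y = np.array([1] * 14 + [0] * 13)                # 14 "alcoholic", 13 "control" labels
X[y == 1, :50] += 0.8                            # small simulated shift in a subset of genes

pls = PLSRegression(n_components=2).fit(X, y)
scores = pls.predict(X).ravel()
predicted = (scores > 0.5).astype(int)
print("training discrimination accuracy:", (predicted == y).mean())

# genes with the largest absolute weights on component 1 are candidate markers
top = np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:10]
print("top-ranked gene indices:", top)
```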
Abstract:
Aims: This paper describes the background to the establishment of the Substance Use Disorders Workgroup, which was charged with developing the research agenda for the next edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM). It summarizes 18 articles that were commissioned to inform that process. Methods: A preliminary list of research topics, developed at the DSM-V Launch Conference in 2004, led to the identification of topics that were the subject of formal presentations and detailed discussion at the Substance Use Disorders Conference in February 2005. Results: The 18 articles presented in this supplement examine: (1) categorical versus dimensional diagnoses; (2) the neurobiological basis of substance use disorders; (3) social and cultural perspectives; (4) the crosswalk between DSM-IV and the International Classification of Diseases, Tenth Revision (ICD-10); (5) comorbidity of substance use disorders and mental health disorders; (6) subtypes of disorders; (7) issues in adolescence; (8) substance-specific criteria; (9) the place of non-substance addictive disorders; and (10) the available research resources. Conclusions: In the final paper, a broadly based research agenda for the development of diagnostic concepts and criteria for substance use disorders is presented.
Abstract:
High-quality data about protein structures and their gene sequences are essential to understanding the relationship between protein folding and protein coding sequences. First, we constructed the EcoPDB database, a high-quality database of Escherichia coli genes and their corresponding PDB structures. Based on EcoPDB, we presented a novel approach based on information theory to investigate the correlation between cysteine synonymous codon usages and the local amino acids flanking cysteines, the correlation between cysteine synonymous codon usages and the synonymous codon usages of the local amino acids flanking cysteines, and the correlation between cysteine synonymous codon usages and the disulfide bonding states of cysteines in the E. coli genome. The results indicate that the nearest neighboring residues and their synonymous codons on the C-terminal side have the greatest influence on the usage of the synonymous codons of cysteines, and that synonymous codon usage has a specific correlation with the disulfide bond formation of cysteines in proteins. These correlations may result from the regulation of protein structures at the gene sequence level and may reflect the biological constraint that cysteines pair to form disulfide bonds. The results may also help to identify residues that are important for the synonymous codon selection of cysteines when introducing disulfide bridges in protein engineering and molecular biology. The approach presented in this paper can also be utilized as a complementary computational method and applied to analyse synonymous codon usages in other model organisms. (c) 2005 Elsevier Ltd. All rights reserved.
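As a toy illustration of the information-theoretic idea (with invented counts, not E. coli data), the mutual information between a cysteine's synonymous codon (TGT/TGC) and the identity of a flanking residue can be computed from a co-occurrence table:

```python
# Mutual information from a hypothetical codon-by-neighbour count table.
import numpy as np

def mutual_information(counts):
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nonzero = p > 0
    return float((p[nonzero] * np.log2(p[nonzero] / (px @ py)[nonzero])).sum())

# rows: codons TGT, TGC; columns: invented counts of three flanking residue types
counts = np.array([[120, 40, 30],
                   [60, 90, 45]], dtype=float)
print(f"MI = {mutual_information(counts):.3f} bits")
```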
Abstract:
We introduce a positive phase-space representation for fermions, using the most general possible multimode Gaussian operator basis. The representation generalizes previous bosonic quantum phase-space methods to Fermi systems. We derive equivalences between quantum and stochastic moments, as well as operator correspondences that map quantum operator evolution onto stochastic processes in phase space. The representation thus enables first-principles quantum dynamical or equilibrium calculations in many-body Fermi systems. Potential applications are to strongly interacting and correlated Fermi gases, including coherent behavior in open systems and nanostructures described by master equations. Examples of an ideal gas and the Hubbard model are given, as well as a generic open system, in order to illustrate these ideas.