997 results for basis-set
Abstract:
Smoking influences body weight such that smokers weigh less than non-smokers, and smoking cessation often leads to weight increase. The relationship between body weight and smoking is partly explained by the effect of nicotine on appetite and metabolism. However, the brain reward system is involved in the control of the intake of both food and tobacco. We evaluated the effect of single-nucleotide polymorphisms (SNPs) affecting body mass index (BMI) on smoking behavior, and tested the 32 SNPs identified in a meta-analysis for association with two smoking phenotypes, smoking initiation (SI) and the number of cigarettes smoked per day (CPD), in an Icelandic sample (N=34,216 smokers). Combined according to their effect on BMI, the SNPs correlate with both SI (r=0.019, P=0.00054) and CPD (r=0.032, P=8.0 × 10⁻⁷). These findings replicate in a second large data set (N=127,274, of whom 76,242 are smokers) for both SI (P=1.2 × 10⁻⁵) and CPD (P=9.3 × 10⁻⁵). Notably, the variant most strongly associated with BMI (rs1558902-A in FTO) did not associate with smoking behavior. The association with smoking behavior is not due to the effect of the SNPs on BMI. Our results strongly point to a common biological basis of the regulation of our appetite for tobacco and food, and thus of the vulnerability to nicotine addiction and obesity.
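The combined-SNP test can be illustrated with a toy weighted allele score; every number, name, and data value below is hypothetical, and this is not the authors' analysis pipeline:

```python
import numpy as np

def weighted_allele_score(dosages, betas):
    """Combine SNP dosages (0-2 effect alleles per SNP) into one score,
    weighting each SNP by its published per-allele effect on BMI."""
    return np.asarray(dosages, dtype=float) @ np.asarray(betas, dtype=float)

def pearson_r(x, y):
    """Plain Pearson correlation between the score and a phenotype."""
    x = np.array(x, dtype=float)
    y = np.array(y, dtype=float)
    x -= x.mean()
    y -= y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

# Toy data: 5 individuals, 3 SNPs (all values invented for illustration).
dosages = [[0, 1, 2], [1, 1, 0], [2, 2, 1], [0, 0, 1], [1, 2, 2]]
betas = [0.08, 0.03, 0.05]   # hypothetical per-allele BMI effects
scores = weighted_allele_score(dosages, betas)
cpd = [10, 8, 20, 5, 18]     # hypothetical cigarettes per day
r = pearson_r(scores, cpd)
```

The real analysis additionally adjusts for covariates and tests significance in samples of tens of thousands; the sketch only shows the score-then-correlate structure.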
Abstract:
In determining the irreducible characters of a group of Lie type, Lusztig developed a theory in which a so-called Fourier transform appears. This is a matrix that depends only on the Weyl group of the group of Lie type. Based on the properties such a Fourier matrix must satisfy, Geck and Malle set up a system of axioms. This enabled Broué, Malle and Michel to determine Fourier matrices for the spetses, about which much is still unknown. The aim of this thesis is an investigation and new interpretation of these Fourier matrices, which will hopefully yield further information about the spetses. The tools developed along the way are very versatile, since these matrices correspond to certain Z-algebras which essentially have the properties of table algebras. Table algebras play an important role in representation theory because, for example, representation rings are table algebras. In the theory of Kac-Moody algebras there is the so-called Kac-Peterson matrix, which also has the properties of our Fourier matrices. An important result of this thesis is that the Fourier matrices defined by G. Malle for the imprimitive complex reflection groups have the property that the structure constants of the associated algebras are integers. For this, exterior products of group rings of cyclic groups must be studied. Moreover, there is a connection to the Kac-Peterson matrices: we prove that by forming exterior products we pass from the matrices of type A(1)1 to those of type C(1)l. Lusztig recognized that some of his Fourier matrices belong to the representation ring of the quantum double of a finite group. It is therefore natural to try to identify the as-yet-unexplained matrices as such. Coste, Gannon and Ruelle study this representation ring and pose a number of important questions.
We answer one of these questions, namely to what extent one can reconstruct to which finite group given matrices belong. We compute the representation ring of the twisted quantum double for many examples by computer. Among other things, this requires explicitly computing elements of the third cohomology group H3(G,C×), which apparently had not previously been implemented in any computer algebra system. Unfortunately, no connection to the matrices arising from the spetses emerges here. The tools developed in this thesis enable a structural decomposition of the Z-rings with basis into known parts. In this way we can give constructions for most of the matrices of the spetses: the associated Z-algebras are quotient rings of tensor products of affine character rings and of representation rings of quantum doubles.
Abstract:
Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud work function (the dilute CAPE), the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterization that use the non-entraining parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
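The entraining-detraining plume model that these parameterizations share can be written, in its standard steady-state form (notation assumed here, not copied from the review):

```latex
% M is the bulk plume mass flux, E and D the entrainment and detrainment
% rates, \bar{\phi} the environmental and \phi_c the in-plume value of a
% conserved variable. The first equation is mass continuity; the second
% states that entrainment imports environmental air while detrainment
% exports in-plume air.
\frac{\partial M}{\partial z} = E - D,
\qquad
\frac{\partial (M\,\phi_c)}{\partial z} = E\,\bar{\phi} - D\,\phi_c
```

In a spectral model each cumulus element obeys such a pair with its own $M_i$, $E_i$, $D_i$; the bulk scheme's merit, as the abstract notes, is that the sums over elements satisfy equations of the same form.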
Abstract:
Perchlorate-reducing bacteria fractionate chlorine stable isotopes, giving a powerful approach to monitoring the extent of microbial consumption of perchlorate in contaminated sites undergoing remediation or in natural perchlorate-containing sites. This study reports the full experimental data and methodology used to re-evaluate the chlorine isotope fractionation of perchlorate reduction (Δ³⁷Cl(Cl⁻–ClO₄⁻)) in duplicate culture experiments of Azospira suillum strain PS at 37 °C, previously reported without a supporting data set by Coleman et al. [Coleman, M.L., Ader, M., Chaudhuri, S., Coates, J.D., 2003. Microbial isotopic fractionation of perchlorate chlorine. Appl. Environ. Microbiol. 69, 4997-5000] in a reconnaissance study, with the goal of increasing the accuracy and precision of the isotopic fractionation determination. The method, fully described here for the first time, allows the determination of a higher-precision Δ³⁷Cl(Cl⁻–ClO₄⁻) value, either from the accumulated chloride content and isotopic composition or from the residual perchlorate content and isotopic composition. The result sets agree perfectly, within error, giving an average Δ³⁷Cl(Cl⁻–ClO₄⁻) = −14.94 ± 0.15‰. Complementary use of chloride and perchlorate data allowed the identification and rejection of poor-quality data by applying mass and isotopic balance checks. This precise Δ³⁷Cl(Cl⁻–ClO₄⁻) value can serve as a reference point for comparison with future in situ or microcosm studies, but we also note its similarity to the theoretical equilibrium isotopic fractionation between a hypothetical chlorine species of redox state +6 and perchlorate at 37 °C, and suggest that the first electron transfer during perchlorate reduction may occur at isotopic equilibrium between an enzyme-bound chlorine and perchlorate.
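Determining an enrichment factor from residual-reactant content and isotopic composition is conventionally done with the linearised Rayleigh equation; a minimal sketch, assuming that form and using the reported −14.94‰ purely as an illustrative number:

```python
import math

def rayleigh_epsilon(delta0, delta_f, f):
    """Enrichment factor (permil) from the linearised Rayleigh equation
    delta_f = delta0 + eps * ln(f), where f is the fraction of the
    substrate (here perchlorate) remaining."""
    return (delta_f - delta0) / math.log(f)

def residual_delta(delta0, eps, f):
    """Predicted delta37Cl of the residual substrate pool."""
    return delta0 + eps * math.log(f)

# Illustrative numbers, not the paper's data set: start at 0 permil and
# use eps = -14.94 permil, the value reported for strain PS at 37 C.
eps = -14.94
d_half = residual_delta(0.0, eps, 0.5)   # residual pool becomes heavier
```

Inverting the same relation from measured residual-pool data recovers eps, which is the consistency the complementary chloride/perchlorate data sets exploit.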
Abstract:
A basic principle in data modelling is to incorporate available a priori information regarding the underlying data generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate the two types of prior knowledge: the underlying data generating mechanism exhibits known symmetric property and the underlying process obeys a set of given boundary value constraints. The class of orthogonal least squares regression algorithms can readily be applied to construct parsimonious grey-box RBF models with enhanced generalisation capability.
Abstract:
A new structure of Radial Basis Function (RBF) neural network, called the Dual-orthogonal RBF Network (DRBF), is introduced for nonlinear time series prediction. The hidden nodes of a conventional RBF network compute the Euclidean distance between the network input vector and the centres, and the node responses are radially symmetrical. But in time series prediction, where the system input vectors are lagged system outputs that are usually highly correlated, the Euclidean distance measure may not be appropriate. The DRBF network modifies the distance metric by introducing a classification function based on the estimation data set. Training the DRBF network consists of two stages: learning the classification-related basis functions and the important input nodes, followed by selecting the regressors and learning the weights of the hidden nodes. In both stages, a forward Orthogonal Least Squares (OLS) selection procedure is applied, initially to select the important input nodes and then to select the important centres. Simulation results for single-step and multi-step-ahead predictions over a test data set are included to demonstrate the effectiveness of the new approach.
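The forward OLS selection step can be sketched as a minimal greedy procedure based on the classic error-reduction ratio; function names and tolerances below are my own, not taken from the paper:

```python
import numpy as np

def forward_ols(P, y, n_select):
    """Forward orthogonal least squares: greedily pick the columns of
    the regressor matrix P with the largest error-reduction ratio,
    orthogonalising the remaining candidates against each chosen column
    (a minimal sketch of the classic OLS selection idea)."""
    P = P.astype(float).copy()
    y = y.astype(float)
    selected, yy = [], float(y @ y)
    for _ in range(n_select):
        # Error-reduction ratio of each candidate column.
        err = np.array([(p @ y) ** 2 / ((p @ p) * yy) if p @ p > 1e-12 else 0.0
                        for p in P.T])
        err[selected] = -1.0                 # never pick a column twice
        k = int(err.argmax())
        selected.append(k)
        w = P[:, k]
        for j in range(P.shape[1]):          # orthogonalise the rest
            if j not in selected:
                P[:, j] -= (w @ P[:, j]) / (w @ w) * w
    return selected
```

In the DRBF setting this procedure would run twice, first over candidate input nodes and then over candidate centres, exactly as the two-stage description above indicates.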
Abstract:
A fundamental principle in data modelling is to incorporate available a priori information regarding the underlying data generating mechanism into the modelling process. We adopt this principle and consider grey-box radial basis function (RBF) modelling capable of incorporating prior knowledge. Specifically, we show how to explicitly incorporate the two types of prior knowledge: (i) the underlying data generating mechanism exhibits known symmetric property, and (ii) the underlying process obeys a set of given boundary value constraints. The class of efficient orthogonal least squares regression algorithms can readily be applied without any modification to construct parsimonious grey-box RBF models with enhanced generalisation capability.
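One way to hard-wire the symmetry type of prior knowledge, sketched here with invented names and a Gaussian basis rather than anything taken from the paper, is to pair every centre with its mirror image so the model satisfies f(x) = f(-x) by construction:

```python
import numpy as np

def gaussian(r, width=1.0):
    """Gaussian radial basis function of the distance r."""
    return np.exp(-(r / width) ** 2)

def symmetric_rbf(x, centres, weights, width=1.0):
    """Grey-box RBF respecting a known even symmetry f(x) = f(-x):
    every centre c is paired with its mirror -c and the pair shares a
    single weight, so the symmetry holds exactly for any weights
    (an illustrative encoding, not the paper's exact formulation)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.zeros(len(x))
    for c, w in zip(centres, weights):
        out += w * (gaussian(np.abs(x - c), width) +
                    gaussian(np.abs(x + c), width))
    return out
```

Because the symmetry is structural, any weight-estimation method, including the OLS regression algorithms the abstract mentions, can be applied unchanged to the paired basis.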
Abstract:
The combination of the synthetic minority oversampling technique (SMOTE) and the radial basis function (RBF) classifier is proposed to deal with classification for imbalanced two-class data. In order to enhance the significance of the small and specific region belonging to the positive class in the decision region, the SMOTE is applied to generate synthetic instances for the positive class to balance the training data set. Based on the over-sampled training data, the RBF classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier structure and the parameters of RBF kernels are determined using a particle swarm optimization algorithm based on the criterion of minimizing the leave-one-out misclassification rate. The experimental results on both simulated and real imbalanced data sets are presented to demonstrate the effectiveness of our proposed algorithm.
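The SMOTE step, generating synthetic positive-class instances by interpolating between a positive sample and one of its nearest positive neighbours, can be sketched as follows; this is a minimal illustration, not the paper's implementation:

```python
import numpy as np

def smote(X_pos, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: for each new sample, pick a positive-class
    point, pick one of its k nearest positive-class neighbours, and
    return a random point on the segment between the two."""
    rng = np.random.default_rng(rng)
    X_pos = np.asarray(X_pos, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_pos))
        d = np.linalg.norm(X_pos - X_pos[i], axis=1)
        neigh = np.argsort(d)[1:k + 1]       # skip the point itself
        j = rng.choice(neigh)
        t = rng.random()                     # interpolation fraction
        out.append(X_pos[i] + t * (X_pos[j] - X_pos[i]))
    return np.array(out)
```

The synthetic points therefore lie inside the positive-class region, which is what enlarges the small positive decision region before the RBF classifier is trained.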
Abstract:
Paraconsistent logics are non-classical logics which allow non-trivial and consistent reasoning about inconsistent axioms. They have been proposed as a formal basis for handling inconsistent data, as commonly arise in human enterprises, and as methods for fuzzy reasoning, with applications in Artificial Intelligence and the control of complex systems. Formalisations of paraconsistent logics usually require heroic mathematical efforts to provide a consistent axiomatisation of an inconsistent system. Here we use transreal arithmetic, which is known to be consistent, to arithmetise a paraconsistent logic. This is theoretically simple and should lead to efficient computer implementations. We introduce the metalogical principle of monotonicity, which is a very simple way of making logics paraconsistent. Our logic has dialetheic truth values which are both False and True. It allows contradictory propositions and variable contradictions, but blocks literal contradictions. Thus literal reasoning, in this logic, forms an on-the-fly, syntactic partition of the propositions into internally consistent sets. We show how the set of all paraconsistent possible worlds can be represented in a transreal space. During the development of our logic we discuss how other paraconsistent logics could be arithmetised in transreal arithmetic.
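Transreal arithmetic totalises the real operations; a minimal sketch of its division (the naming is mine, and this shows only the arithmetic the logic is built on, not the logic itself):

```python
# Transreal arithmetic extends the reals with positive infinity,
# negative infinity, and the non-ordered element nullity (Phi), making
# division total: no quotient is ever undefined.
INF, NINF, PHI = float('inf'), float('-inf'), 'Phi'

def transreal_div(a, b):
    """Total division on (finite) transreal operands: positive/0 gives
    infinity, negative/0 gives negative infinity, and 0/0 gives the
    nullity Phi; ordinary quotients are unchanged."""
    if b != 0:
        return a / b
    if a > 0:
        return INF
    if a < 0:
        return NINF
    return PHI
```

It is this totality, every expression having a value, that makes the arithmetic a consistent substrate on which an inconsistency-tolerant logic can be built.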
Abstract:
The WFDEI meteorological forcing data set has been generated using the same methodology as the widely used WATCH Forcing Data (WFD) by making use of the ERA-Interim reanalysis data. We discuss the specifics of how changes in the reanalysis and processing have led to improvement over the WFD. We attribute improvements in precipitation and wind speed to the latest reanalysis basis data and improved downward shortwave fluxes to the changes in the aerosol corrections. Covering 1979–2012, the WFDEI will allow more thorough comparisons of hydrological and Earth System model outputs with hydrologically and phenologically relevant satellite products than using the WFD.
Abstract:
The glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) is an attractive target for the development of novel antitrypanosomatid agents. In the present work, comparative molecular field analysis and comparative molecular similarity index analysis were conducted on a large series of selective inhibitors of trypanosomatid GAPDH. Four statistically significant models were obtained (r² > 0.90 and q² > 0.70), indicating their predictive ability for untested compounds. The models were then used to predict the potency of an external test set, and the predicted values were in good agreement with the experimental results. Molecular modeling studies provided further insight into the structural basis for selective inhibition of trypanosomatid GAPDH.
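The q² statistic quoted above is the leave-one-out cross-validated coefficient; a minimal sketch of how it is computed, over a plain least-squares model for illustration since the CoMFA/CoMSIA fields themselves cannot be reproduced here:

```python
import numpy as np

def loo_q2(X, y):
    """Leave-one-out cross-validated q^2 = 1 - PRESS/TSS for an ordinary
    least-squares model; the same statistic is used to judge 3D-QSAR
    models (only the metric is shown here, not the field analysis)."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    press = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # leave sample i out
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2     # prediction error on i
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - press / tss
```

Unlike the fitted r², q² penalises models that cannot predict held-out compounds, which is why thresholds such as q² > 0.70 are read as evidence of predictive ability.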
Abstract:
When cement hydrated compositions are analyzed by the usual initial-mass-basis TG curves to calculate mass losses, the higher the amount of additive added, or the higher the combined water content, the greater the 'dilution' of the cement in the initial mass of the sample. In such cases, smaller mass changes are obtained in the different mass-loss steps, owing to the actually smaller content of cement in the initial mass of the composition. To compare on the same mass basis, and to avoid the erroneous estimates of initial component contents that would otherwise result, thermal analysis data and curves have to be transformed to a cement calcined basis, i.e. to the basis of the cement oxide mass present in the calcined samples, or to the basis of the initial cement mass in the sample. The paper shows and discusses the fundamentals of these bases of calculation, with examples on free and combined water analysis, on calcium sulfate hydration during false cement set, and on the quantitative evaluation and comparison of the activity of pozzolanic materials.
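The rebasing the paper describes can be sketched numerically; this is the generic rescaling idea under my own naming, not the authors' exact procedure:

```python
def to_calcined_basis(mass_loss_pct, calcined_residue_pct):
    """Rescale a TG mass-loss step from initial-sample basis to calcined
    (ignited-oxide) basis, so pastes with different additive or combined
    water contents are compared per unit of calcined mass rather than
    per unit of diluted initial sample."""
    return 100.0 * mass_loss_pct / calcined_residue_pct

# Illustrative numbers: a 4.5 % mass loss measured on a sample whose
# calcined residue is 75 % of the initial mass becomes 6.0 % when
# expressed on the calcined basis.
loss_cb = to_calcined_basis(4.5, 75.0)
```

Two pastes with different dilutions but the same hydration chemistry then report the same rebased losses, which is the point of the transformation.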
Abstract:
Diagnosis and planning stages are critical to the success of orthodontic treatment, and the orthodontist should have many elements that contribute to the most appropriate decision-making. The orthodontic set-up is an important resource in the planning of corrective orthodontic therapy. It consists of repositioning teeth that have been removed from the study dental casts and reassembling them on their remaining bases. When properly made, the set-up allows a three-dimensional preview of the problems and limitations of the case, assisting decision-making regarding tooth extractions in cases with space problems, the extent of anchorage loss and type of tooth movement, discrepancy of dental arch perimeter, and discrepancy of inter-arch tooth volume, among others, indicating the best treatment option. This paper outlines the most important steps in its fabrication, an evaluation system, and its application in the preparation of orthodontic treatment planning.
Abstract:
"Who studied what, when, and why?" This question frames the research problem and the thematic areas of this work, which contributes to the discussion of educational decisions at the societal, organisational, and individual levels. The starting point of the analysis is a thorough theoretical embedding of the topic, drawing on various concepts and a review of the state of research. Socio-structural characteristics, the significance of life orientations, and the complex of individual motivations are discussed and related, among other things, to Alfred Schütz's action-theoretical distinction between 'in-order-to' and 'because' motives. This conceptual framework and the hypotheses derived from it are examined in a quantitative empirical analysis. The data basis is the student survey (Studierendensurvey) of the AG Hochschulforschung at the University of Konstanz. Binary logistic regression analyses are used to identify particular influence structures and subject-specific profiles. The conception of intrinsic and extrinsic motivations, in particular, reveals clear differences between fields of study. Consideration of the period 1985-2007 also shows changes in the structures influencing the choice of subject, for example the declining importance of social origin for that choice. Finally, the relationship between these influence structures and satisfaction with one's studies is analysed; certain structures of subject choice are also relevant to student satisfaction and hence to academic success.