983 results for n-dimensional MacLaurine series
Abstract:
This paper describes a series of experiments undertaken to investigate the slamming of an Oscillating Wave Surge Converter in extreme sea states. These two-dimensional experiments were carried out in the wave flume at Ecole Centrale Marseille. Images from a high-speed camera are used to identify the physics of the slamming process. A single pressure sensor is used to record the characteristics of the pressure. Finally, numerical results are compared to the output from the experiments.
Abstract:
This paper studies the statistical distributions of worldwide earthquakes from 1963 to 2012. A Cartesian grid, dividing the Earth into geographic regions, is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare real-world data. Hierarchical clustering and multidimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of leading to a single parameter expressing the relationships within the seismic data. Classical and generalized (fractional) entropy and Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
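As an illustrative sketch (not taken from the paper), the classical Shannon entropy and Jensen–Shannon divergence used to compare regional event distributions can be computed as follows; the function names are ours:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions:
    H(m) - (H(p) + H(q)) / 2 with m the midpoint distribution."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

# Identical distributions give zero divergence;
# disjoint distributions give the maximum value ln 2.
p = np.array([0.5, 0.3, 0.2])
print(jensen_shannon(p, p))                       # → 0.0
print(jensen_shannon([1.0, 0.0], [0.0, 1.0]))     # → ln 2 ≈ 0.6931
```

The fractional (generalized) variants in the paper replace these classical measures, but the comparison pipeline over grid cells is the same.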
Abstract:
The spatial limits of the active site in the benzylic hydroxylase enzyme of the fungus Mortierella isabellina were investigated. Several molecular probes were used in incubation experiments to determine the acceptability of each compound by this enzyme. The yields of benzylic alcohols provided information on the acceptability of the particular compound into the active site, and the enantiomeric excess values provided information on the "fit" of acceptable substrates. Measurements of the molecular models were made using the Cambridge Scientific Computing Inc. CSC Chem 3D Plus modeling program. The dimensional limits of the aromatic binding pocket of the benzylic hydroxylase were tested using suitably substituted ethylbenzenes. Both the depth (para-substituted substrates) and width (ortho- and meta-substituted substrates) of this region were investigated, with results demonstrating absolute spatial limits in both directions in the plane of the aromatic ring: 7.3 Angstroms for the depth and 7.1 Angstroms for the width. A minimum requirement for the height of this region has also been established at 6.2 Angstroms. The region containing the active oxygen species was also investigated, using a series of alkylphenylmethanes and the fused ring systems in indan, 1,2,3,4-tetrahydronaphthalene and benzocycloheptene substrates. A maximum distance of 6.9 Angstroms (including the 1.5 Angstroms from the phenyl substituent to the active center of the heme prosthetic group of the enzyme) has been established extending directly in front of the aromatic binding pocket. The other dimensions in this region of the benzylic hydroxylase active site will require further investigation to establish maximum allowable values. An explanation of the stereochemical distributions in the obtained products has also been put forth that correlates well with the experimental observations.
Abstract:
The ab initio cluster model approach has been used to study the electronic structure and magnetic coupling of KCuF3 and K2CuF4 in their various ordered polytype crystal forms. Due to a cooperative Jahn-Teller distortion, these systems exhibit strong anisotropies. In particular, the magnetic properties differ strongly from those of isomorphic compounds. Hence, KCuF3 is a quasi-one-dimensional (1D) nearest-neighbor Heisenberg antiferromagnet, whereas K2CuF4 is the only ferromagnet among the K2MF4 series of compounds (M = Mn, Fe, Co, Ni, and Cu), which all behave as quasi-2D nearest-neighbor Heisenberg systems. Different ab initio techniques are used to explore the magnetic coupling in these systems. All methods, including unrestricted Hartree-Fock, are able to explain the magnetic ordering. However, quantitative agreement with experiment is reached only when using a state-of-the-art configuration interaction approach. Finally, an analysis of the dependence of the magnetic coupling constant on the distortion parameters is presented.
Abstract:
An efficient method of combining neutron diffraction data over an extended Q range with detailed atomistic models is presented. A quantitative and qualitative mapping of the organization of the chain conformation in both the glass and liquid phases has been performed. The proposed structural refinement method is based on the exploitation of the intrachain features of the diffraction pattern through the use of internal coordinates for bond lengths, valence angles and torsion rotations. Models are built stochastically by assignment of these internal coordinates from probability distributions with a limited number of variable parameters. Variation of these parameters is used in the construction of models that minimize the differences between the observed and calculated structure factors. A series of neutron scattering data of 1,4-polybutadiene in the region 20–320 K is presented. Analysis of the experimental data yields bond lengths for C-C and C=C of 1.54 and 1.35 Å respectively. Valence angles of the backbone were found to be 112° and 122.8° for CCC and CC=C respectively. The three torsion angles corresponding to the double bond and the adjacent α and β bonds were found to occupy cis and trans, s± and trans, and g± and trans states, respectively. We compare our results with theoretical predictions, computer simulations, RIS models, and previously reported experimental results.
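The core geometric step in building such models is converting internal coordinates (bond length, valence angle, torsion) into Cartesian positions. A minimal sketch of a standard placement routine (our own illustration, not the paper's code) using the refined values quoted above:

```python
import numpy as np

def place_atom(a, b, c, bond, angle, torsion):
    """Place a new atom from internal coordinates: bond length to c,
    valence angle b-c-new, and torsion a-b-c-new (angles in degrees)."""
    angle = np.radians(angle)
    torsion = np.radians(torsion)
    bc = (c - b) / np.linalg.norm(c - b)        # unit vector along b->c
    n = np.cross(b - a, bc)                     # normal to the a,b,c plane
    n /= np.linalg.norm(n)
    m = np.cross(n, bc)                         # in-plane, perpendicular to bc
    d = np.array([-bond * np.cos(angle),
                  bond * np.sin(angle) * np.cos(torsion),
                  bond * np.sin(angle) * np.sin(torsion)])
    return c + d[0] * bc + d[1] * m + d[2] * n

# Grow a C-C=C fragment with the refined geometry: C=C bond of 1.35 Å,
# CC=C valence angle of 122.8 degrees, trans (180 degrees) torsion.
a = np.array([0.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 0.0])
c = np.array([1.54, 0.0, 0.0])                  # C-C bond, 1.54 Å
new = place_atom(a, b, c, 1.35, 122.8, 180.0)
```

Drawing the internal coordinates from probability distributions and rebuilding the chain with this routine is what "stochastic model building" amounts to geometrically.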
Abstract:
The understanding of the statistical properties and of the dynamics of multistable systems is gaining more and more importance in a vast variety of scientific fields. This is especially relevant for the investigation of the tipping points of complex systems. Sometimes, in order to understand the time series of given observables exhibiting bimodal distributions, simple one-dimensional Langevin models are fitted to reproduce the observed statistical properties and used to investigate the projected dynamics of the observable. This is of great relevance for studying potential catastrophic changes in the properties of the underlying system or resonant behaviours like those related to stochastic resonance-like mechanisms. In this paper, we propose a framework for this kind of study, using simple box models of the oceanic circulation and choosing as observable the strength of the thermohaline circulation. We study the statistical properties of the transitions between the two modes of operation of the thermohaline circulation under symmetric boundary forcings and test their agreement with simplified one-dimensional phenomenological theories. We extend our analysis to include stochastic resonance-like amplification processes. We conclude that fitted one-dimensional Langevin models, when closely scrutinised, may turn out to be more ad hoc than they seem, lacking robustness and/or well-posedness. They should be treated with care, more as an empirical descriptive tool than as a methodology with predictive power.
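The kind of one-dimensional Langevin model discussed here can be sketched generically (our illustration, not the paper's box model) as overdamped motion in a double-well potential, integrated with the Euler–Maruyama scheme; the bimodal time series it produces is what such fits try to reproduce:

```python
import numpy as np

def simulate_langevin(n_steps=200000, dt=0.01, noise=0.6, seed=0):
    """Euler-Maruyama integration of dx = -V'(x) dt + sigma dW
    with the double-well potential V(x) = x^4/4 - x^2/2
    (minima at x = -1 and x = +1, barrier at x = 0)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 1.0
    for i in range(1, n_steps):
        drift = -(x[i - 1] ** 3 - x[i - 1])          # -V'(x)
        x[i] = x[i - 1] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

x = simulate_langevin()
# The trajectory visits both wells, giving a bimodal distribution
# with noise-induced transitions between the two "modes of operation".
```

The paper's point is that matching the stationary bimodal statistics with such a model does not guarantee that the fitted drift and noise describe the true transition dynamics.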
Abstract:
The Fourier series can be used to describe periodic phenomena such as the one-dimensional crystal wave function. Through the trigonometric treatments in Hückel theory, it is shown that Hückel theory is a special case of Fourier series theory. Thus, the conjugated π system is in fact a periodic system. This explains why such a simple theory as Hückel theory can be so powerful in organic chemistry: although it only considers the immediate neighboring interactions, it implicitly takes account of the periodicity of the complete picture in which all the interactions are considered. Furthermore, the success of the trigonometric methods in Hückel theory is not accidental, as it is based on the fact that Hückel theory is a specific example of the more general method of Fourier series expansion. It is also important for educational purposes to expand a specific approach such as Hückel theory into a more general method such as Fourier series expansion.
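The trigonometric structure referred to here is concrete: for a cyclic conjugated system of n carbons, the eigenvalues of the Hückel matrix are exactly the Fourier-type expression E_k = α + 2β cos(2πk/n). A small check (our illustration; α and β are the usual Coulomb and resonance integrals, here in units of |β|):

```python
import numpy as np

def huckel_ring_energies(n, alpha=0.0, beta=-1.0):
    """Huckel orbital energies of a cyclic conjugated system of n carbons,
    computed numerically from the Huckel (adjacency) matrix."""
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = alpha                 # Coulomb integral on the diagonal
        H[i, (i + 1) % n] = beta        # resonance integral to ring neighbors
        H[i, (i - 1) % n] = beta
    return np.sort(np.linalg.eigvalsh(H))

def fourier_ring_energies(n, alpha=0.0, beta=-1.0):
    """Closed-form trigonometric energies E_k = alpha + 2*beta*cos(2*pi*k/n)."""
    k = np.arange(n)
    return np.sort(alpha + 2 * beta * np.cos(2 * np.pi * k / n))

# For benzene (n = 6) the numerical diagonalization and the
# trigonometric (Fourier-type) formula give identical spectra.
print(np.allclose(huckel_ring_energies(6), fourier_ring_energies(6)))
```

The cosine formula is precisely the discrete Fourier mode structure of a periodic one-dimensional lattice, which is the paper's central observation.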
Abstract:
We examine to what degree we can expect to obtain accurate temperature trends for the last two decades near the surface and in the lower troposphere. We compare temperatures obtained from surface observations and radiosondes as well as satellite-based measurements from the Microwave Sounding Units (MSU), which have been adjusted for orbital decay and non-linear instrument-body effects, and reanalyses from the European Centre for Medium-Range Weather Forecasts (ERA) and the National Centers for Environmental Prediction (NCEP). In regions with abundant conventional data coverage, where the MSU has no major influence on the reanalysis, temperature anomalies obtained from microwave sounders, radiosondes and both reanalyses agree reasonably well. Where coverage is insufficient, in particular over the tropical oceans, large differences are found between the MSU and either reanalysis. These differences apparently relate to changes in satellite data availability and to differing satellite retrieval methodologies, to which both reanalyses are quite sensitive over the oceans. For NCEP, this results from the use of raw radiances directly incorporated into the analysis, which makes the reanalysis sensitive to changes in the underlying algorithms, e.g. those introduced in August 1992. For ERA, the bias-correction of the one-dimensional variational analysis may introduce an error when the satellite relative to which the correction is calculated is itself biased, or when radiances change on a time scale longer than a couple of months, e.g. due to orbit decay. ERA inhomogeneities are apparent in April 1985, October/November 1986 and April 1989. These dates can be identified with the replacements of satellites. It is possible that a negative bias in the sea surface temperatures (SSTs) used in the reanalyses may have been introduced over the period of the satellite record.
This could have resulted from a decrease in the number of ship measurements, a concomitant increase in the importance of satellite-derived SSTs, and a likely cold bias in the latter. Alternatively, a warm bias in SSTs could have been caused by an increase in the percentage of buoy measurements (relative to deeper ship intake measurements) in the tropical Pacific. No indications of uncorrected inhomogeneities in land surface temperatures could be found. Near-surface temperatures have biases in the boundary layer in both reanalyses, presumably due to the incorrect treatment of snow cover. The increase of near-surface temperatures relative to lower-tropospheric temperatures in the last two decades may be due to a combination of several factors, including high-latitude near-surface winter warming due to an enhanced NAO and upper-tropospheric cooling due to stratospheric ozone decrease.
Abstract:
We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk–return trade-offs. Using the Principal Component Analysis technique, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk–return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-) weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking; that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe in all treatments an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is not so clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk–return is more affected by the domain than by the stake size. After having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even more complex future descriptions of human attitudes towards risk.
Abstract:
Consideration of the geometrical features of the functional groups present in furosemide has enabled the synthesis of a series of ternary co-crystals with predictable structural features, containing a robust asymmetric two-dimensional network.
Abstract:
This paper describes a fast and reliable method for redistributing a computational mesh in three dimensions which can generate a complex three-dimensional mesh without any problems due to mesh tangling. The method relies on a three-dimensional implementation of the parabolic Monge–Ampère (PMA) technique for finding an optimally transported mesh. The method for implementing PMA is described in detail and applied to both static and dynamic mesh redistribution problems, studying both the convergence and the computational cost of the algorithm. The algorithm is applied to a series of problems of increasing complexity. In particular, very regular meshes are generated to resolve real meteorological features (derived from a weather forecasting model covering the UK area) in grids with over 2×10⁷ degrees of freedom. The PMA method computes these grids in times commensurate with those required for operational weather forecasting.
Abstract:
Among existing remote sensing applications, land-based X-band radar is an effective technique for monitoring wave fields, and spatial wave information can be obtained from the radar images. The two-dimensional Fourier Transform (2-D FT) is the common algorithm for deriving the spectra of radar images. However, the wave field in the nearshore area is highly non-homogeneous due to wave refraction, shoaling, and other coastal mechanisms. When applied to nearshore radar images, the 2-D FT leads to ambiguity of wave characteristics in the wave number domain. In this article, we introduce the two-dimensional Wavelet Transform (2-D WT) to capture the non-homogeneity of wave fields from nearshore radar images. The results show that the wave number spectra obtained by 2-D WT at six parallel space locations in the given image clearly present the shoaling of nearshore waves. The wave number of the peak wave energy increases along the inshore direction, and the dominant direction of the spectra changes from south-southwest (SSW) to west-southwest (WSW). To verify the results of the 2-D WT, wave shoaling in the radar images is calculated based on the dispersion relation. The theoretical calculation results agree on the whole with the results of the 2-D WT. The encouraging performance of the 2-D WT indicates its strong capability of revealing the non-homogeneity of wave fields in nearshore X-band radar images.
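The baseline 2-D FT step that the wavelet analysis improves upon can be sketched as follows: take the 2-D FFT of a sea-surface image and read off the wavenumber of the spectral peak. This is our own minimal illustration on a synthetic monochromatic wave, not the authors' processing chain:

```python
import numpy as np

def wavenumber_spectrum(field, dx, dy):
    """2-D Fourier transform of a sea-surface image; returns the power
    spectrum and the wavenumber axes in rad per metre (zero-centred)."""
    ny, nx = field.shape
    F = np.fft.fftshift(np.fft.fft2(field))
    power = np.abs(F) ** 2
    kx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx)) * 2 * np.pi
    ky = np.fft.fftshift(np.fft.fftfreq(ny, d=dy)) * 2 * np.pi
    return power, kx, ky

# Synthetic wave field: eta = cos(k0 * x) with k0 = 0.2 rad/m,
# sampled on a 128 x 128 grid with 5 m resolution.
dx = dy = 5.0
x = np.arange(128) * dx
eta = np.cos(0.2 * x)[None, :].repeat(128, axis=0)

power, kx, ky = wavenumber_spectrum(eta, dx, dy)
peak = np.unravel_index(np.argmax(power), power.shape)
# The spectral peak lies at |kx| close to the input k0 = 0.2 rad/m.
```

Because the FFT assigns one global spectrum to the whole image, a spatially varying (shoaling) wave field smears this peak; the windowed 2-D WT recovers a local spectrum at each location instead.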
Abstract:
Human parasitic diseases are a foremost threat to human health and welfare around the world. Trypanosomiasis is a very serious infectious disease against which the currently available drugs are limited and not effective. Therefore, there is an urgent need for new chemotherapeutic agents. One attractive drug target is the major cysteine protease from Trypanosoma cruzi, cruzain. In the present work, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) studies were conducted on a series of thiosemicarbazone and semicarbazone derivatives as inhibitors of cruzain. Molecular modeling studies were performed in order to identify the preferred binding mode of the inhibitors in the enzyme active site, and to generate structural alignments for the three-dimensional quantitative structure-activity relationship (3D QSAR) investigations. Statistically significant models were obtained (CoMFA, r² = 0.96 and q² = 0.78; CoMSIA, r² = 0.91 and q² = 0.73), indicating their predictive ability for untested compounds. The models were externally validated employing a test set, and the predicted values were in good agreement with the experimental results. The final QSAR models and the information gathered from the 3D CoMFA and CoMSIA contour maps provided important insights into the chemical and structural basis involved in the molecular recognition process of this family of cruzain inhibitors, and should be useful for the design of new structurally related analogs with improved potency.