942 results for Maximum Principles
Abstract:
Solving pharmaceutical crystal structures from powder diffraction data is discussed in terms of the methodologies that have been applied and the complexity of the structures that have been solved. The principles underlying these methodologies are summarized and representative examples of polymorph, solvate, salt and cocrystal structure solutions are provided, together with examples of some particularly challenging structure determinations.
Abstract:
This paper analyses 10 years of in-situ measurements of significant wave height (Hs) and maximum wave height (Hmax) from the ocean weather ship Polarfront in the Norwegian Sea. The 30-minute Ship-Borne Wave Recorder measurements of Hmax and Hs are shown to be consistent with theoretical wave distributions. The linear regression between Hmax and Hs has a slope of 1.53. Neither Hs nor Hmax shows a significant trend in the period 2000–2009. These data are combined with earlier observations. The long-term trend over the period 1980–2009 in annual Hs is 2.72 ± 0.88 cm/year. Mean Hs and Hmax are both correlated with the North Atlantic Oscillation (NAO) index during winter. The correlation with the NAO index is highest for the more frequently encountered (75th percentile) wave heights. The wave field variability associated with the NAO index is reconstructed using a 500-year NAO index record. Hs and Hmax are found to vary by up to 1.42 m and 3.10 m respectively over the 500-year period. Trends in all 30-year segments of the reconstructed wave field are lower than the trend in the observations during 1980–2009. The NAO index does not change significantly in 21st century projections from CMIP5 climate models under scenario RCP8.5, and thus no NAO-related changes are expected in the mean and extreme wave fields of the Norwegian Sea.
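As a rough illustration of the kind of fit and trend estimate quoted above (not the authors' pipeline; the arrays below are synthetic placeholders, whereas the real analysis uses the Polarfront Ship-Borne Wave Recorder record), a least-squares regression of Hmax on Hs and a linear trend in annual-mean Hs could be computed along these lines:

```python
import numpy as np

# Placeholder arrays standing in for the 30-minute wave observations.
rng = np.random.default_rng(0)
hs = rng.uniform(1.0, 8.0, size=5000)              # significant wave height (m)
hmax = 1.53 * hs + rng.normal(0.0, 0.5, size=5000) # synthetic Hmax with noise

# Least-squares slope of Hmax on Hs (the paper reports a slope of 1.53).
slope, intercept = np.polyfit(hs, hmax, 1)

# Linear trend in annual-mean Hs over a multi-year record, converted to cm/year.
years = np.arange(1980, 2010)
annual_hs = rng.normal(3.0, 0.3, size=years.size)  # placeholder annual means (m)
trend_m_per_yr, _ = np.polyfit(years, annual_hs, 1)

print(f"Hmax~Hs slope: {slope:.2f}, Hs trend: {100 * trend_m_per_yr:.2f} cm/yr")
```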
Abstract:
FeM2X4 spinels, where M is a transition metal and X is oxygen or sulfur, are candidate materials for spin filters, one of the key devices in spintronics. We present here a computational study of the inversion thermodynamics and the electronic structure of these (thio)spinels for M = Cr, Mn, Co, Ni, using calculations based on density functional theory with on-site Hubbard corrections (DFT+U). The analysis of the configurational free energies shows that different behaviour is expected for the equilibrium cation distributions in these structures: FeCr2X4 and FeMn2S4 are fully normal, FeNi2X4 and FeCo2S4 are intermediate, and FeCo2O4 and FeMn2O4 are fully inverted. We have analysed the role played by the size of the ions and by crystal field stabilization effects in determining the equilibrium inversion degree. We also discuss how the electronic and magnetic structure of these spinels is modified by the degree of inversion, assuming that this could be varied from the equilibrium value. We have obtained electronic densities of states for the completely normal and completely inverse cation distribution of each compound. FeCr2X4, FeMn2X4, FeCo2O4 and FeNi2O4 are half-metals in the ferrimagnetic state when Fe is in tetrahedral positions. When M fills the tetrahedral positions, the Cr-containing compounds and FeMn2O4 are half-metallic systems, while the Co and Ni spinels are insulators. The Co and Ni sulfide counterparts, together with inverse FeMn2S4, are metallic for any inversion degree. Our calculations suggest that the spin filtering properties of the FeM2X4 (thio)spinels could be modified via control of the cation distribution through variations in the synthesis conditions.
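As background to the configurational free-energy analysis mentioned above, the equilibrium inversion degree x (x = 0 fully normal, x = 1 fully inverse) is commonly discussed through an expression of the following generic form, in which the entropy term assumes ideal cation mixing over one tetrahedral and two octahedral sites per formula unit. This is an illustrative textbook sketch, not necessarily the statistical-mechanical treatment used in the study.

```latex
% Generic free energy of inversion for an AB2X4 spinel with inversion degree x,
% assuming ideal configurational entropy over one tetrahedral and two octahedral
% cation sites per formula unit (illustrative only).
F(x) = E(x) - T\,S_{\mathrm{conf}}(x), \qquad
S_{\mathrm{conf}}(x) = -k_{\mathrm{B}}\left[ x\ln x + (1-x)\ln(1-x)
  + 2\left(\frac{x}{2}\ln\frac{x}{2}
  + \Bigl(1-\frac{x}{2}\Bigr)\ln\Bigl(1-\frac{x}{2}\Bigr)\right)\right].
```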
Abstract:
Fire activity has varied globally and continuously since the last glacial maximum (LGM) in response to long-term changes in global climate and shorter-term regional changes in climate, vegetation, and human land use. We have synthesized sedimentary charcoal records of biomass burning since the LGM and present global maps showing changes in fire activity for time slices during the past 21,000 years (as differences in charcoal accumulation values compared to pre-industrial). There is strong broad-scale coherence in fire activity after the LGM, but spatial heterogeneity in the signals increases thereafter. In North America, Europe and southern South America, charcoal records indicate less-than-present fire activity during the deglacial period, from 21,000 to ∼11,000 cal yr BP. In contrast, the tropical latitudes of South America and Africa show greater-than-present fire activity from ∼19,000 to ∼17,000 cal yr BP and most sites from Indochina and Australia show greater-than-present fire activity from 16,000 to ∼13,000 cal yr BP. Many sites indicate greater-than-present or near-present activity during the Holocene, with the exception of eastern North America and eastern Asia from 8,000 to ∼3,000 cal yr BP, Indonesia and Australia from 11,000 to 4,000 cal yr BP, and southern South America from 6,000 to 3,000 cal yr BP, where fire activity was less than present. Regional coherence in the patterns of change in fire activity was evident throughout the post-glacial period. These complex patterns can largely be explained in terms of large-scale climate controls modulated by local changes in vegetation and fuel load.
Abstract:
Modification of graphene to open a robust gap in its electronic spectrum is essential for its use in field effect transistors and photochemistry applications. Inspired by recent experimental success in the preparation of homogeneous alloys of graphene and boron nitride (BN), we consider here engineering the electronic structure and bandgap of C2xB1−xN1−x alloys via both compositional and configurational modification. We start from the BN end-member, which already has a large bandgap, and then show that (a) the bandgap can in principle be reduced to about 2 eV with moderate substitution of C (x < 0.25); and (b) the electronic structure of C2xB1−xN1−x can be further tuned not only with composition x, but also with the configuration adopted by C substituents in the BN matrix. Our analysis, based on accurate screened hybrid functional calculations, provides a clear understanding of the correlation found between the bandgap and the level of aggregation of C atoms: the bandgap decreases most when the C atoms are maximally isolated, and increases with aggregation of C atoms due to the formation of bonding and anti-bonding bands associated with hybridization of occupied and empty defect states. We determine the location of valence and conduction band edges relative to vacuum and discuss the implications for the potential use of 2D C2xB1−xN1−x alloys in photocatalytic applications. Finally, we assess the thermodynamic limitations on the formation of these alloys using a cluster expansion model derived from first-principles.
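The cluster expansion mentioned in the final sentence parameterizes the configurational energy of the mixed C/BN lattice as a sum of effective interactions over symmetry-distinct clusters fitted to first-principles energies; schematically (the standard form, not the particular truncation used in the paper):

```latex
% Schematic cluster-expansion Hamiltonian: sigma is the occupation vector of the
% lattice, alpha runs over symmetry-distinct clusters (pairs, triplets, ...),
% m_alpha are multiplicities, J_alpha are effective cluster interactions fitted
% to first-principles energies, and <Pi_alpha> are cluster correlation functions.
E(\boldsymbol{\sigma}) \simeq \sum_{\alpha} m_{\alpha}\, J_{\alpha}\,
  \bigl\langle \Pi_{\alpha}(\boldsymbol{\sigma}) \bigr\rangle .
```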
Abstract:
Individual-based models (IBMs) can simulate the actions of individual animals as they interact with one another and the landscape in which they live. When used in spatially explicit landscapes, IBMs can show how populations change over time in response to management actions. For instance, IBMs are being used to design strategies for conservation and for the exploitation of fisheries, and to assess the effects on populations of major construction projects and of novel agricultural chemicals. In such real-world contexts, it becomes especially important to build IBMs in a principled fashion, and to approach calibration and evaluation systematically. We argue that insights from physiological and behavioural ecology offer a recipe for building realistic models, and that Approximate Bayesian Computation (ABC) is a promising technique for the calibration and evaluation of IBMs. IBMs are constructed primarily from knowledge about individuals. In ecological applications the relevant knowledge is found in physiological and behavioural ecology, and we approach these from an evolutionary perspective by taking into account how physiological and behavioural processes contribute to life histories, and how those life histories evolve. Evolutionary life history theory shows that, other things being equal, organisms should grow to sexual maturity as fast as possible, and then reproduce as fast as possible, while minimising the per capita death rate. Physiological and behavioural ecology are largely built on these principles together with the laws of conservation of matter and energy. To complete construction of an IBM, information is also needed on the effects of competitors, conspecifics and food scarcity; the maximum rates of ingestion, growth and reproduction; and life-history parameters. Using this knowledge about physiological and behavioural processes provides a principled way to build IBMs, but model parameters vary between species and are often difficult to measure. A common solution is to manually compare model outputs with observations from real landscapes and so obtain parameters that produce acceptable fits of model to data. However, this procedure can be convoluted and lead to over-calibrated and thus inflexible models. Many formal statistical techniques are unsuitable for use with IBMs, but we argue that ABC offers a potential way forward. It can be used to calibrate and compare complex stochastic models and to assess the uncertainty in their predictions. We describe methods used to implement ABC in an accessible way and illustrate them with examples and discussion of recent studies. Although much progress has been made, theoretical issues remain, and some of these are outlined and discussed.
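As a minimal sketch of how rejection-based ABC can calibrate a stochastic simulator such as an IBM (not the authors' workflow; simulate_ibm, the prior bounds, the summary statistic and the tolerance are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ibm(params, rng):
    """Hypothetical stand-in for an individual-based model run;
    returns a single summary statistic (final population size)."""
    growth, mortality = params
    pop = 100.0
    for _ in range(50):
        pop = max(pop + (growth - mortality) * pop + rng.normal(0.0, 5.0), 0.0)
    return pop

observed_summary = 250.0  # summary statistic from field observations (placeholder)
tolerance = 25.0          # acceptance threshold on |simulated - observed|
accepted = []

# Rejection ABC: sample parameters from the prior, run the simulator, and keep
# draws whose simulated summary lies within the tolerance of the observation;
# the accepted draws approximate the posterior distribution of the parameters.
for _ in range(20000):
    theta = rng.uniform(low=[0.05, 0.01], high=[0.30, 0.20])  # uniform priors
    if abs(simulate_ibm(theta, rng) - observed_summary) < tolerance:
        accepted.append(theta)

posterior = np.array(accepted)
if len(posterior):
    print("accepted:", len(posterior), "posterior mean:", posterior.mean(axis=0))
```

Smaller tolerances give a closer posterior approximation at the cost of more rejected simulations; in practice, variants such as ABC sequential Monte Carlo are often used to improve efficiency.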
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results immediately yield the corresponding results for pathwise solutions to stochastic differential equations driven by such a Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such a Gaussian process.
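For orientation, a large deviation principle for a family of capacities mirrors the classical Varadhan-type bounds, with the probability measure replaced by the capacity. Schematically, for a good rate function I and a speed λ(ε) → ∞ (the precise scaling and rate function for Gaussian rough-path capacities are the ones established in the paper):

```latex
% Schematic LDP for a family of capacities c_eps: upper bound on closed sets F,
% lower bound on open sets G, with good rate function I and speed lambda(eps).
\limsup_{\varepsilon \to 0} \frac{1}{\lambda(\varepsilon)}
  \log c_{\varepsilon}(F) \le -\inf_{x \in F} I(x),
\qquad
\liminf_{\varepsilon \to 0} \frac{1}{\lambda(\varepsilon)}
  \log c_{\varepsilon}(G) \ge -\inf_{x \in G} I(x).
```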
Abstract:
In this paper, I seek to undermine G.A. Cohen’s polemical use of a metaethical claim he makes in his article, ‘Facts and Principles’, by arguing that that use requires an unsustainable equivocation between epistemic and logical grounding. I begin by distinguishing three theses that Cohen has offered during the course of his critique of Rawls and contractualism more generally (the foundationalism about grounding thesis, the justice as non-regulative thesis, and the justice as all-encompassing thesis), and briefly argue that they are analytically independent of each other. I then offer an outline of the foundationalism about grounding thesis, characterising it, as Cohen does, as a demand of logic. That thesis claims that whenever a normative principle is dependent on a fact, it is so dependent in virtue of some other principle. I then argue that although this is true as a matter of logic, it, as Cohen admits, cannot be true of actual justifications, since logic cannot tell us anything about the truth as opposed to the validity of arguments. Facts about a justification cannot then be decisive for whether or not a given argument violates the foundationalism about grounding thesis. As long as, independently of actual justifications, theorists can point to plausible logically grounding principles, as I argue contractualists can, Cohen’s thesis lacks critical bite.
Abstract:
We describe the creation of a data set describing changes related to the presence of ice sheets, including ice-sheet extent and height, ice-shelf extent, and the distribution and elevation of ice-free land at the Last Glacial Maximum (LGM), which were used in LGM experiments conducted as part of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and the third phase of the Palaeoclimate Modelling Intercomparison Project (PMIP3). The CMIP5/PMIP3 data sets were created from reconstructions made by three different groups, which were all obtained using a model-inversion approach but differ in the assumptions used in the modelling and in the type of data used as constraints. The ice-sheet extent in the Northern Hemisphere (NH) does not vary substantially between the three individual data sources. The difference in the topography of the NH ice sheets is also moderate, and smaller than the differences between these reconstructions (and the resultant composite reconstruction) and ice-sheet reconstructions used in previous generations of PMIP. Only two of the individual reconstructions provide information for Antarctica. The discrepancy between these two reconstructions is larger than the difference for the NH ice sheets, although still less than the difference between the composite reconstruction and previous PMIP ice-sheet reconstructions. Although largely confined to the ice-covered regions, differences between the climate response to the individual LGM reconstructions extend over the North Atlantic Ocean and Northern Hemisphere continents, partly through atmospheric stationary waves. Differences between the climate response to the CMIP5/PMIP3 composite and any individual ice-sheet reconstruction are smaller than those between the CMIP5/PMIP3 composite and the ice sheet used in the last phase of PMIP (PMIP2).
Abstract:
The climates of the mid-Holocene (MH), 6,000 years ago, and of the Last Glacial Maximum (LGM), 21,000 years ago, have been extensively simulated, in particular in the framework of the Palaeoclimate Modelling Intercomparison Project. These periods are well documented by paleo-records, which can be used for evaluating model results for climates different from the present one. Here, we present new simulations of the MH and the LGM climates obtained with the IPSL_CM5A model and compare them to our previous results obtained with the IPSL_CM4 model. Compared to IPSL_CM4, IPSL_CM5A includes two new features: the interactive representation of plant phenology and marine biogeochemistry. However, one of the most important differences between these models is the latitudinal resolution and vertical domain of their atmospheric component, which have been improved in IPSL_CM5A and result in a better representation of the mid-latitude jet-streams. The Asian monsoon’s representation is also substantially improved. The global mean annual temperature simulated for the pre-industrial (PI) period is colder in IPSL_CM5A than in IPSL_CM4, but their climate sensitivity to a CO2 doubling is similar. Here we show that these differences in the simulated PI climate have an impact on the simulated MH and LGM climatic anomalies. The larger cooling response to LGM boundary conditions in IPSL_CM5A appears to be mainly due to differences between the PMIP3 and PMIP2 boundary conditions, as shown by a shortwave radiative forcing/feedback analysis based on a simplified perturbation method. It is found that the sensitivity computed from the LGM climate is lower than that computed from 2 × CO2 simulations, confirming previous studies based on different models. For the MH, the Asian monsoon, stronger in the IPSL_CM5A PI simulation, is also more sensitive to the insolation changes. The African monsoon is also further amplified in IPSL_CM5A due to the impact of the interactive phenology. Finally, the changes in variability for both models and for MH and LGM are presented taking the example of the El Niño-Southern Oscillation (ENSO), which is very different in the PI simulations. ENSO variability is damped in both model versions at the MH, whereas inconsistent responses are found between the two versions for the LGM. Part 2 of this paper examines whether these differences between IPSL_CM4 and IPSL_CM5A can be distinguished when comparing those results to palaeo-climatic reconstructions and investigates new approaches for model-data comparisons made possible by the inclusion of new components in IPSL_CM5A.
Abstract:
The combined influences of the westerly phase of the quasi-biennial oscillation (QBO-W) and solar maximum (Smax) conditions on the Northern Hemisphere extratropical winter circulation are investigated using reanalysis data and Center for Climate System Research/National Institute for Environmental Studies chemistry climate model (CCM) simulations. The composite analysis for the reanalysis data indicates a strengthened polar vortex in December followed by a weakened polar vortex in February–March for QBO-W during Smax (QBO-W/Smax) conditions. This relationship need not be specific to QBO-W/Smax conditions but may just require a strengthened vortex in December, which is more likely under QBO-W/Smax. Both the reanalysis data and CCM simulations suggest that dynamical processes of planetary wave propagation and meridional circulation related to QBO-W around the polar vortex in December are similar in character to those related to Smax; furthermore, both processes may work in concert to maintain a stronger vortex during QBO-W/Smax. In the reanalysis data, the strengthened polar vortex in December is associated with the development of a north–south dipole tropospheric anomaly in the Atlantic sector, similar to the North Atlantic Oscillation (NAO), during December–January. The structure of the north–south dipole anomaly has a zonal wavenumber 1 (WN1) component, where the longitude of the anomalous ridge overlaps with that of the climatological ridge in the North Atlantic in January. This implies amplification of the WN1 wave and results in the enhancement of the upward WN1 propagation from the troposphere into the stratosphere in January, leading to the weakened polar vortex in February–March. Although WN2 waves do not play a direct role in forcing the stratospheric vortex evolution, their tropospheric response to QBO-W/Smax conditions appears to be related to the maintenance of the NAO-like anomaly in the high-latitude troposphere in January. These results may provide a possible explanation for the mechanisms underlying the seasonal evolution of wintertime polar vortex anomalies during QBO-W/Smax conditions and the role of the troposphere in this evolution.