169 results for Simulated Annealing Calculations
Abstract:
The structures of 2-hydroxybenzamide (C7H7NO2) and 2-methoxybenzamide (C8H9NO2) have been determined in the gas phase by electron diffraction, using restraints on the structural parameters informed by quantum chemical calculations. Theoretical methods (HF and MP2/6-311+G(d,p)) predict four stable conformers for both 2-hydroxybenzamide and 2-methoxybenzamide. For both compounds, evidence for intramolecular hydrogen bonding is presented. In 2-hydroxybenzamide, the observed hydrogen bond is between the hydroxyl and carbonyl groups, while in 2-methoxybenzamide it is between one of the hydrogen atoms of the amide group and the methoxy oxygen atom.
Abstract:
Simulations of ozone loss rates from a three-dimensional chemical transport model and a box model during recent Antarctic and Arctic winters are compared with experimentally derived loss rates. The study focuses on the Antarctic winter 2003, during which the first Antarctic Match campaign was organized, and on the Arctic winters 1999/2000 and 2002/2003. The maximum ozone loss rates retrieved by the Match technique for the winters and levels studied reached 6 ppbv/sunlit hour, and both types of simulation could generally reproduce the observations within the 2-sigma error bars. In some cases, for example for the Arctic winter 2002/2003 at the 475 K level, excellent agreement within the 1-sigma level was obtained. An overestimation was also found in the box model simulation at some isentropic levels for the Antarctic winter and the Arctic winter 1999/2000, indicating an overestimation of chlorine activation in the model. Loss rates in the Antarctic show signs of saturation in September, which has to be considered in the comparison. Sensitivity tests were performed with the box model to assess the impact of the kinetic parameters of the ClO-Cl2O2 catalytic cycle and of the total bromine content on the ozone loss rate. These tests resulted in a maximum change in ozone loss rates of 1.2 ppbv/sunlit hour, generally under high solar zenith angle conditions. In some cases, better agreement was achieved with faster photolysis of Cl2O2 and an additional source of total inorganic bromine, but at the expense of overestimating the smaller ozone loss rates derived later in the winter.
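As an aside for readers unfamiliar with the unit quoted above, the sketch below (with entirely made-up numbers) illustrates the "ppbv/sunlit hour" arithmetic behind Match-style loss rates: the ozone decrease between two soundings of the same air parcel divided by the sunlit hours accumulated along its trajectory. It is a minimal sketch of the arithmetic only, not the Match retrieval itself.

```python
import numpy as np

# Minimal sketch with invented values: ozone mixing ratio at the first and
# second sounding of the same air parcel, and the sunlit hours in between.
o3_first = np.array([2850.0, 2790.0, 2700.0])   # ppbv at first sonde
o3_second = np.array([2770.0, 2705.0, 2596.0])  # ppbv at second sonde
sunlit_hours = np.array([18.0, 22.0, 26.0])     # sunlit hours along trajectory

# Per-match loss rate and the ensemble mean, i.e. the quantity quoted above.
loss_rate = (o3_first - o3_second) / sunlit_hours
print(loss_rate, loss_rate.mean())
```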
Abstract:
The crystal structure of an indomethacin–nicotinamide (1 : 1) cocrystal produced by milling has been determined from laboratory powder X-ray diffraction (PXRD) data. The hydrogen bonding motifs observed in the structure represent one of the most probable of all the possible combinations of donors and acceptors in the constituent molecules.
Abstract:
Purpose: To quantify to what extent the new registration method DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra) may reduce the required smoothing kernel width, and to investigate the minimum group size necessary for voxel-based morphometry (VBM) studies. Materials and Methods: A simulated atrophy approach was employed to explore the roles of smoothing kernel, group size, and their interactions in VBM detection accuracy. Group sizes of 10, 15, 25, and 50 were compared for kernels between 0 and 12 mm. Results: A smoothing kernel of 6 mm achieved the highest atrophy detection accuracy for groups of 50 participants, and 8–10 mm for groups of 25, at P < 0.05 with family-wise error correction. The results further demonstrated that a group size of 25 was the lower limit when two different groups of participants were compared, whereas a group size of 15 was the minimum for longitudinal comparisons, but only at P < 0.05 with false discovery rate correction. Conclusion: Our data confirm that DARTEL-based VBM generally benefits from smaller kernels and that different kernels perform best for different group sizes, with a tendency towards smaller kernels for larger groups. Importantly, kernel selection was also affected by the statistical threshold applied. This highlights that the choice of kernel in relation to group size should be considered with care.
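For readers unfamiliar with how a smoothing kernel quoted as an FWHM in millimetres is applied, the sketch below shows the standard FWHM-to-sigma conversion and Gaussian smoothing of a grey-matter map. The array shape, voxel size and kernel list are hypothetical stand-ins; this is not the study's actual processing pipeline, only the underlying operation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical grey-matter probability map; the shape and the 1.5 mm
# isotropic voxel size are stand-ins, not the study's data.
voxel_mm = 1.5
gm_map = np.random.rand(60, 72, 60).astype(np.float32)

def smooth_fwhm(image, fwhm_mm, voxel_mm):
    """Gaussian smoothing with the kernel given as FWHM in mm, the convention
    used in VBM; sigma = FWHM / (2 * sqrt(2 * ln 2)), converted to voxels."""
    if fwhm_mm == 0:
        return image
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    return gaussian_filter(image, sigma=sigma_vox)

# Kernels spanning the 0-12 mm range explored in the study.
smoothed = {fwhm: smooth_fwhm(gm_map, fwhm, voxel_mm) for fwhm in (0, 4, 6, 8, 10, 12)}
```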
Abstract:
Global warming is expected to enhance fluxes of fresh water between the surface and atmosphere, causing wet regions to become wetter and dry regions drier, with serious implications for water resource management. Defining the wet and dry regions as the upper 30% and lower 70% of the precipitation totals across the tropics (30° S–30° N) each month, we combine observations and climate model simulations to understand changes in the wet and dry regions over the period 1850–2100. Observed decreases in precipitation over dry tropical land (1950–2010) are also simulated by coupled atmosphere–ocean climate models (−0.3%/decade), with trends projected to continue into the 21st century. Discrepancies between observations and simulations over wet land regions since 1950 exist, relating to decadal fluctuations in the El Niño–Southern Oscillation, the timing of which is not represented by the coupled simulations. When atmosphere-only simulations are instead driven by observed sea surface temperatures, they are able to adequately represent this variability over land. Global distributions of precipitation trends are dominated by spatial changes in atmospheric circulation. However, the tendency for already wet regions to become wetter (precipitation increases with warming by 3% K−1 over wet tropical oceans) and the driest regions drier (precipitation decreases of −2% K−1 over dry tropical land regions) emerges over the 21st century in response to the substantial surface warming.
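A minimal sketch of the wet/dry partition described above, assuming a gridded monthly precipitation field over 30° S–30° N: each month, grid points above the 70th percentile of precipitation form the wet regime (upper 30%) and the remainder the dry regime (lower 70%). The data are synthetic and area weighting is omitted for brevity.

```python
import numpy as np

# Hypothetical monthly tropical precipitation field (months, lat, lon) over
# 30S-30N, in mm/day; the gamma-distributed values are synthetic stand-ins.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=1.5, size=(12, 60, 144))

wet_means, dry_means = [], []
for month in precip:
    # The 70th percentile splits each month's grid points into the driest 70%
    # (dry regime) and the wettest 30% (wet regime); area weighting is omitted.
    threshold = np.percentile(month, 70)
    wet_means.append(month[month >= threshold].mean())
    dry_means.append(month[month < threshold].mean())

print(np.mean(wet_means), np.mean(dry_means))
```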
Abstract:
The parameterisation of diabatic processes in numerical models is critical for the accuracy of weather forecasts and for climate projections. A novel approach to the evaluation of these processes in models is introduced in this contribution. The approach combines a suite of on-line tracer diagnostics with off-line trajectory calculations. Each tracer tracks the accumulated changes in potential temperature associated with a particular parameterised diabatic process in the model. A comparison of tracers therefore allows the identification of the most active diabatic processes and their downstream impacts. The tracers are combined with trajectories computed using model-resolved winds, allowing the various diabatic contributions to be tracked back to their time and location of occurrence. We have used this approach to investigate diabatic processes within a simulated extratropical cyclone. We focus on the warm conveyor belt, in which the dominant diabatic contributions come from large-scale latent heating and parameterised convection. By contrasting two simulations, one with standard convection parameterisation settings and another with reduced parameterised convection, the effects of parameterised convection on the structure of the cyclone have been determined. Under reduced parameterised convection conditions, the large-scale latent heating is forced to release convective instability that would otherwise have been released by the convection parameterisation. Although the spatial distribution of precipitation depends on the details of the split between parameterised convection and large-scale latent heating, the total precipitation amount associated with the cyclone remains largely unchanged. For reduced parameterised convection, a more rapid and stronger latent heating episode takes place as air ascends within the warm conveyor belt.
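The bookkeeping behind such a tracer approach can be illustrated with a toy example: given heating rates attributed to individual parameterised processes, interpolated to positions along a trajectory, each tracer simply accumulates its own contribution to the potential-temperature change. The process names, time step and heating rates below are invented for illustration and do not come from the study's model.

```python
import numpy as np

# Toy heating rates (K per hour) attributed to two parameterised processes,
# taken as already interpolated to hourly positions along one
# warm-conveyor-belt trajectory; names and numbers are invented.
dt_hours = 1.0
heating = {
    "large_scale_latent_heating": np.array([0.1, 0.4, 0.9, 1.2, 0.8, 0.3]),
    "parameterised_convection":   np.array([0.0, 0.2, 0.5, 0.3, 0.1, 0.0]),
}

# Each tracer accumulates the potential-temperature change of its own process,
# so the final values partition the total theta change along the trajectory.
accumulated = {name: np.cumsum(rates) * dt_hours for name, rates in heating.items()}
total_dtheta = sum(series[-1] for series in accumulated.values())
print({name: series[-1] for name, series in accumulated.items()}, total_dtheta)
```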
Abstract:
The results of high-resolution coupled global climate models (CGCMs) over South America are discussed. HiGEM1.2 and HadGEM1.2 simulations, with horizontal resolutions of ~90 and 135 km, respectively, are compared. Precipitation estimates from CMAP (Climate Prediction Center—Merged Analysis of Precipitation), CPC (Climate Prediction Center) and GPCP (Global Precipitation Climatology Project) are used for validation. HiGEM1.2 and HadGEM1.2 simulate seasonal mean precipitation spatial patterns similar to those of CMAP. The positioning and migration of the Intertropical Convergence Zone and of the Pacific and Atlantic subtropical highs are correctly simulated by the models. In HiGEM1.2 and HadGEM1.2, the intensity and location of the South Atlantic Convergence Zone are in agreement with the observed dataset. The simulated annual cycles are in phase with the rainfall estimates for most of the six regions considered. An important result is that HiGEM1.2 and HadGEM1.2 eliminate a common problem of coarse-resolution CGCMs, namely the simulation of a semiannual cycle of precipitation due to the semiannual solar forcing. Comparatively, the higher resolution of HiGEM1.2 reduces the dry biases over central Brazil during austral winter and spring and, during most of the year, over an oceanic box off eastern Uruguay.
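As a sketch of the kind of validation described above, the snippet below computes a regional-mean seasonal precipitation bias of a model against an observational estimate on a common grid; the arrays and the box indices are synthetic stand-ins, not the actual HiGEM/HadGEM or CMAP data.

```python
import numpy as np

# Hypothetical seasonal-mean precipitation (mm/day) from a model and from an
# observational estimate such as CMAP, already on a common grid; both arrays
# are synthetic stand-ins.
rng = np.random.default_rng(1)
model_precip = rng.gamma(2.0, 1.5, size=(36, 72))
obs_precip = rng.gamma(2.0, 1.6, size=(36, 72))

# Regional-mean bias over a latitude/longitude index box (standing in for,
# e.g., central Brazil); a negative value would indicate a dry bias there.
box = (slice(10, 20), slice(30, 45))
bias = model_precip[box].mean() - obs_precip[box].mean()
print(bias)
```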
Abstract:
Atmospheric aerosols are now actively studied, in particular because of their radiative and climate impacts. Estimates of the direct aerosol radiative perturbation, caused by extinction of incident solar radiation, usually rely on radiative transfer codes and involve simplifying hypotheses. This paper addresses two approximations that are widely used for the sake of simplicity and to limit the computational cost of the calculations. Firstly, it is shown that using a Lambertian albedo instead of the more rigorous bidirectional reflectance distribution function (BRDF) to model the ocean surface radiative properties leads to large relative errors in the instantaneous aerosol radiative perturbation. When averaging over the day, these errors cancel out to acceptable levels of less than 3% (except in the northern hemisphere winter). The second aim of this study is to address aerosol non-sphericity effects. Comparing an experimental phase function with an equivalent Mie-calculated phase function, we found acceptable relative errors if the aerosol radiative perturbation calculated for a given optical thickness is daily averaged. However, retrieval of the optical thickness of non-spherical aerosols assuming spherical particles can lead to significant errors, owing to significant differences between the spherical and non-spherical phase functions. Discrepancies in the aerosol radiative perturbation between the spherical and non-spherical cases are sometimes reduced and sometimes enhanced if the aerosol optical thickness for the spherical case is adjusted to fit the simulated radiance of the non-spherical case.
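A toy calculation, with an invented diurnal cycle, of why instantaneous albedo-related errors can largely cancel in the daily mean: if the sign of the Lambertian-versus-BRDF difference flips between morning and afternoon sun geometries, the day-averaged perturbation is much less affected than any single instant. The functional forms and numbers below are purely illustrative, not the paper's radiative transfer results.

```python
import numpy as np

# Invented diurnal cycle of the instantaneous aerosol radiative perturbation
# (W m-2) over ocean with a BRDF surface, and a "Lambertian" version whose
# error changes sign between morning and afternoon sun geometries.
hours = np.arange(24)
daytime = (hours > 6) & (hours < 18)
brdf = np.where(daytime, -8.0 * np.sin(np.pi * (hours - 6) / 12.0), 0.0)
lambertian = brdf * (1.0 + 0.15 * np.sin(np.pi * (hours - 12) / 6.0))

# Instantaneous relative errors reach ~15%, yet they largely cancel in the
# daily mean because of the morning/afternoon sign change (the paper quotes
# daily-mean errors below 3% outside northern-hemisphere winter).
inst_err = np.abs(lambertian[daytime] - brdf[daytime]) / np.abs(brdf[daytime])
daily_err = abs(lambertian.mean() - brdf.mean()) / abs(brdf.mean())
print(inst_err.max(), daily_err)
```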
Abstract:
Simulated multi-model “diversity” in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in the model components relevant for forcing calculations, and the associated “host-model uncertainties” are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties in aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with a globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is −4.47 W m−2 and the inter-model standard deviation is 0.55 W m−2, corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol, with an aerosol optical depth of 0.2 and a single scattering albedo of 0.8, the forcing changes to +1.04 W m−2 and the standard deviation increases to 1.01 W m−2, corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 W m−2 (8%) for clear-sky and 0.62 W m−2 (11%) for all-sky conditions. Scaling the forcing standard deviation for the purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 W m−2 in the AeroCom Direct Radiative Effect experiment.
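The relative standard deviations quoted above are simply the inter-model spread divided by the magnitude of the multi-model mean; the short calculation below reproduces the 12% and 97% figures from the numbers given in the abstract.

```python
# Relative standard deviation = inter-model spread divided by the magnitude of
# the multi-model mean, using the global-mean all-sky TOA forcings quoted above.
cases = {
    "scattering only, AOD 0.2":         (-4.47, 0.55),
    "scattering + absorption, SSA 0.8": (+1.04, 1.01),
}
for name, (mean_forcing, std) in cases.items():
    print(f"{name}: {std / abs(mean_forcing):.0%}")  # ~12% and ~97%
```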
Abstract:
As part of the Atmospheric Model Intercomparison Project (AMIP), the behaviour of 15 general circulation models has been analysed in order to diagnose and compare the ability of the different models to simulate Northern Hemisphere midlatitude atmospheric blocking. In accordance with the established AMIP procedure, the 10-year model integrations were performed using prescribed, time-evolving monthly mean observed SSTs spanning the period January 1979–December 1988. Atmospheric observational data (ECMWF analyses) over the same period have also been used to verify the model results. The models involved in this comparison represent a wide spectrum of model complexity, with different horizontal and vertical resolutions, numerical techniques and physical parametrizations, and exhibit large differences in blocking behaviour. Nevertheless, a few common features can be found, such as the general tendency to underestimate both blocking frequency and the average duration of blocks. The possible relationship between model blocking and model systematic errors has also been assessed, although without resorting to ad hoc numerical experimentation it is impossible to relate particular model deficiencies in representing blocking to specific parts of the model formulation with certainty.
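The two diagnostics mentioned above, blocking frequency and average block duration, can be computed from a daily blocked/not-blocked series as sketched below. The five-day persistence criterion is a common convention assumed here for illustration; the blocking index and thresholds actually used in the AMIP comparison are not specified in this abstract.

```python
import numpy as np

def blocking_stats(blocked, min_days=5):
    """Blocking frequency (fraction of blocked days) and mean event duration
    from a daily True/False series; the five-day persistence criterion is an
    assumed convention, not necessarily the one used in the AMIP study."""
    blocked = np.asarray(blocked, dtype=bool)
    padded = np.concatenate(([False], blocked, [False]))
    starts = np.flatnonzero(~padded[:-1] & padded[1:])
    ends = np.flatnonzero(padded[:-1] & ~padded[1:])
    durations = ends - starts                  # lengths of blocked spells
    events = durations[durations >= min_days]  # keep persistent events only
    frequency = blocked.mean()
    mean_duration = float(events.mean()) if events.size else 0.0
    return frequency, mean_duration

# Synthetic daily blocked/unblocked series for one 90-day winter season.
blocked_days = np.zeros(90, dtype=bool)
blocked_days[10:18] = True  # an 8-day block
blocked_days[40:46] = True  # a 6-day block
blocked_days[70:73] = True  # a 3-day spell, too short to count as an event
print(blocking_stats(blocked_days))
```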
Abstract:
Some climatological information from 14 atmospheric general circulation models is presented and compared in order to assess the ability of a broad group of models to simulate current climate. The quantities considered are cross sections of temperature, zonal wind, and meridional stream function together with latitudinal distributions of mean sea level pressure and precipitation rate. The nature of the deficiencies in the simulated climates that are common to all models and those which differ among models is investigated; the general improvement in the ability of models to simulate certain aspects of the climate is shown; consideration is given to the effect of increasing resolution on simulated climate; and approaches to understanding and reducing model deficiencies are discussed. The information presented here is a subset of a more voluminous compilation which is available in report form (Boer et al., 1991). This report contains essentially the same text, but results from all 14 models are presented together with additional results in the form of geographical distributions of surface variables and certain difference statistics.
Abstract:
Climatological information from fourteen atmospheric general circulation models is presented and compared in order to assess the ability of a broad group of models to simulate current climate. The quantities considered are cross sections of temperature, zonal wind and meridional stream function, together with latitudinal distributions of mean sea-level pressure and precipitation rate. The nature of the deficiencies in the simulated climates that are common to all models and those which differ among models is investigated; general improvement in the ability of models to simulate certain aspects of the climate is shown; consideration is given to the effect of increasing resolution on simulated climate; and approaches to the understanding and reduction of model deficiencies are discussed.
Abstract:
Results from nine coupled ocean-atmosphere simulations have been used to investigate changes in the relationship between the variability of monsoon precipitation over western Africa and tropical sea surface temperatures (SSTs) between the mid-Holocene and the present day. Although the influence of tropical SSTs on the African monsoon is generally overestimated in the control simulations, the models reproduce aspects of the observed modes of variability. Thus, most models reproduce the observed negative correlation between western Sahelian precipitation and SST anomalies in the eastern tropical Pacific, and many of them capture the positive correlation between SST anomalies in the eastern tropical Atlantic and precipitation over the Guinea coastal region. Although the response of the individual models to the change in orbital forcing between 6 ka and the present differs somewhat, eight of the models show that the strength of the teleconnection between SSTs in the eastern tropical Pacific and Sahelian precipitation is weaker in the mid-Holocene. Some of the models imply that this weakening was associated with a shift towards longer periodicities (from 3–5 years in the control simulations toward 4–10 years in the mid-Holocene simulations). The simulated reduction in the teleconnection between eastern tropical Pacific SSTs and Sahelian precipitation appears to be primarily related to a weakening of the atmospheric circulation bridge between the Pacific and West Africa but, depending on the model, other mechanisms, such as an increased importance of other modes of tropical ocean variability or increased local recycling of monsoonal precipitation, can also play a role.
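The teleconnection strength discussed above reduces, in its simplest form, to a correlation between two anomaly time series. The sketch below uses synthetic seasonal series with a negative relationship built in, standing in for eastern tropical Pacific SSTs and western Sahel rainfall; it is not the models' output or the paper's analysis.

```python
import numpy as np

# Synthetic seasonal anomaly series with an imposed negative relationship,
# standing in for eastern tropical Pacific SSTs and western Sahel rainfall.
rng = np.random.default_rng(3)
pacific_sst = rng.standard_normal(30)
sahel_precip = -0.5 * pacific_sst + rng.standard_normal(30)

# The teleconnection strength is summarised by the correlation coefficient
# between the two anomaly series (expected to be negative here).
r = np.corrcoef(pacific_sst, sahel_precip)[0, 1]
print(f"correlation: {r:.2f}")
```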
Abstract:
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre, studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, the distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of each modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and on detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). The voxel-based power calculations are validated by comparing the values predicted from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the predictions is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices about important factors that influence the outcome of studies that consume significant resources.
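A minimal sketch of the kind of power arithmetic referred to above, using the standard normal-approximation formulae for a two-group comparison: sample size per group for a fixed standardised effect size, and the minimum detectable effect for a fixed sample size. This ignores the voxel-wise thresholds and multiple-comparison corrections used in the actual voxel-based calculations and is not the authors' method.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Sample size per group for a two-group comparison at a standardised
    effect size (Cohen's d), using the usual normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

def detectable_effect(n, alpha=0.05, power=0.8):
    """Minimum detectable standardised effect size for n participants per group."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) * (2.0 / n) ** 0.5

print(n_per_group(0.8))       # roughly 25 per group for a large effect
print(detectable_effect(25))  # effect size detectable with 25 per group
```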