15 results for Physical experiments
in CentAUR: Central Archive University of Reading - UK
Abstract:
The D2 dopamine receptor exists as dimers or as higher-order oligomers, as determined from data from physical experiments. In this study, we sought evidence that this oligomerization leads to cooperativity by examining the binding of three radioligands ([3H]nemonapride, [3H]raclopride, and [3H]spiperone) to D2 dopamine receptors expressed in membranes of Sf9 cells. In saturation binding experiments, the three radioligands exhibited different Bmax values, and the Bmax values could be altered by the addition of sodium ions to assays. Despite labeling different numbers of sites, the different ligands were able to achieve full inhibition in competition experiments. Some ligand pairs also exhibited complex inhibition curves in these experiments. In radioligand dissociation experiments, the rate of dissociation of [3H]nemonapride or [3H]spiperone depended on the sodium ion concentration but was independent of the competing ligand. Although some of the data in this study are consistent with the behavior of a cooperative oligomeric receptor, not all of the data are in agreement with this model. It may, therefore, be necessary to consider more complex models for the behavior of this receptor.
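The Bmax values discussed above come from saturation binding analysis. As a minimal sketch (not the authors' analysis code), a one-site hyperbolic model can be fitted to specific-binding data to estimate Bmax and Kd; the data values below are hypothetical.

```python
# Minimal sketch: one-site saturation binding fit to estimate Bmax and Kd.
# Hypothetical data; not the assay values from the study.
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, Bmax, Kd):
    """Specific binding for a single non-cooperative site: B = Bmax*L/(Kd + L)."""
    return Bmax * L / (Kd + L)

free_ligand_nM = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])   # [radioligand], nM
bound_fmol_mg = np.array([0.9, 1.6, 2.6, 4.0, 5.0, 5.7, 6.3])     # specific binding

params, _ = curve_fit(one_site, free_ligand_nM, bound_fmol_mg, p0=[7.0, 0.5])
Bmax_est, Kd_est = params
print(f"Bmax ~ {Bmax_est:.2f} fmol/mg, Kd ~ {Kd_est:.2f} nM")
```

Different radioligands fitted this way can yield different Bmax estimates, which is the observation that motivates the cooperativity question in the abstract.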
Abstract:
This study was an attempt to identify the epistemological roots of knowledge when students carry out hands-on experiments in physics. We found that, within the context of designing a solution to a stated problem, subjects constructed and ran thought experiments intertwined within the processes of conducting physical experiments. We show that the process of alternating between these two modes, empirically experimenting and experimenting in thought, leads towards a convergence on scientifically acceptable concepts. We call this process mutual projection. In the process of mutual projection, external representations were generated. Objects in the physical environment were represented in an imaginary world, and these representations were associated with processes in the physical world. It is through this coupling that constituents of both the imaginary world and the physical world gain meaning. We further show that the external representations are rooted in sensory interaction and constitute a semi-symbolic pictorial communication system, a sort of primitive 'language', which is developed as the practical work continues. The constituents of this pictorial communication system are used in the thought experiments taking place in association with the empirical experimentation. The results of this study provide a model of physics learning during hands-on experimentation.
Abstract:
Plant root mucilages contain powerful surfactants that will alter the interaction of soil solids with water and ions, and the rates of microbial processes. The lipid composition of maize, lupin and wheat root mucilages was analysed by thin layer chromatography and gas chromatography-mass spectrometry. A commercially available phosphatidylcholine (lecithin), chemically similar to the phospholipid surfactants identified in the mucilages, was then used to evaluate its effects on selected soil properties. The lipids found in the mucilages were principally phosphatidylcholines, composed mainly of saturated fatty acids, in contrast to the lipids extracted from root tissues. At low tensions, lecithin reduced the water content held at any particular tension by as much as 10% in soil and 50% in acid-washed sand. Lecithin decreased phosphate adsorption in soil and increased the phosphate concentration in solution by 10%. The surfactant also reduced net rates of ammonium consumption and nitrate production in soil. These experiments provide the first evidence we are aware of that plant-released surfactants will significantly modify the biophysical environment of the rhizosphere.
Abstract:
Much uncertainty in the value of the imaginary part of the refractive index of mineral dust contributes to uncertainty in the radiative effect of mineral dust in the atmosphere. A synthesis of optical, chemical and physical in-situ aircraft measurements from the DODO experiments during February and August 2006 is used to calculate the refractive index of mineral dust encountered over West Africa. Radiative transfer modeling and measurements of broadband shortwave irradiance at a range of altitudes are used to test and validate these calculations for a specific dust event on 23 August 2006 over Mauritania. Two techniques are used to determine the refractive index: firstly, a method combining measurements of scattering, absorption, size distributions and Mie code simulations; and secondly, a method using composition measured on filter samples to apportion the content of internally mixed quartz, calcite and iron oxide-clay aggregates, where the iron oxide is represented by either hematite or goethite and the clay by either illite or kaolinite. The imaginary part of the refractive index at 550 nm (ni550) is found to range from 0.0001i to 0.0046i, and where filter samples are available, agreement between the two methods is found, depending on the mineral combination assumed. The refractive indices also agree well with AERONET data where comparisons are possible. ni550 is found to vary with dust source, which is investigated with the NAME model for each case. The relationships of both size distribution and ni550 with the accumulation-mode single scattering albedo at 550 nm (ω0550) are examined: size distribution is found to have no correlation with ω0550, while ni550 shows a strong linear relationship with ω0550. Radiative transfer modeling was performed with different refractive index representations (Mie-derived values, and also filter-sample compositions assuming both internal and external mixing). Our calculations indicate that the Mie-derived values of ni550 and the externally mixed dust where the iron oxide-clay aggregate corresponds to the goethite-kaolinite combination result in the best agreement with irradiance measurements. The radiative effect of the dust is found to be very sensitive to the mineral combination (and hence refractive index) assumed, and to whether the dust is assumed to be internally or externally mixed.
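As a minimal illustration of how a composite refractive index can be estimated from mineral composition, the sketch below applies a simple volume-weighted mixing rule. This is a common approximation, not necessarily the apportionment scheme used in the study, and the component indices and volume fractions are nominal assumptions.

```python
# Minimal sketch: volume-weighted mixing of complex refractive indices at 550 nm.
# Component values and volume fractions are nominal assumptions, not study data.
import numpy as np

components = {
    # name: (complex refractive index at 550 nm, volume fraction)
    "quartz":    (1.55 + 0.0000j, 0.55),
    "calcite":   (1.60 + 0.0000j, 0.20),
    "kaolinite": (1.56 + 0.0010j, 0.20),
    "goethite":  (2.10 + 0.0900j, 0.05),
}

fractions = np.array([f for _, f in components.values()])
indices = np.array([m for m, _ in components.values()])
assert abs(fractions.sum() - 1.0) < 1e-9

n_mix = np.sum(fractions * indices)   # volume-weighted (linear) mixing rule
print(f"mixed refractive index ~ {n_mix.real:.3f} + {n_mix.imag:.4f}i")
```

Because the imaginary part is dominated by the small iron-oxide fraction, changing the assumed iron-oxide mineral or its fraction moves ni550 strongly, consistent with the sensitivity to mineral combination noted above.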
Abstract:
The Danish Eulerian Model (DEM) is a powerful air pollution model, designed to calculate the concentrations of various dangerous species over a large geographical region (e.g. Europe). It takes into account the main physical and chemical processes between these species, the actual meteorological conditions, emissions, etc. This is a huge computational task and requires significant resources of storage and CPU time. Parallel computing is essential for the efficient practical use of the model. Some efficient parallel versions of the model were created over the past several years. A suitable parallel version of DEM, using the Message Passing Interface (MPI) library, was implemented on two powerful supercomputers at EPCC in Edinburgh, available via the HPC-Europa programme for transnational access to research infrastructures in the EC: a Sun Fire E15K and an IBM HPCx cluster. Although the implementation is, in principle, the same for both supercomputers, a few modifications had to be made to port the code successfully to the IBM HPCx cluster. Performance analysis and parallel optimization were carried out next. Results from benchmarking experiments are presented in this paper. Another set of experiments was carried out in order to investigate the sensitivity of the model to variation of some chemical rate constants in the chemical submodel; certain modifications of the code were necessary for this task. The results obtained will be used for further sensitivity analysis studies using Monte Carlo simulation.
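As a minimal, hedged sketch of the kind of MPI parallelization described above (not the DEM source code), the example below splits a gridded concentration field across ranks with mpi4py and assembles a global diagnostic with a reduction; the grid size and field values are illustrative.

```python
# Minimal sketch of MPI domain decomposition for a gridded concentration field.
# Illustrative only; this is not the DEM code.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

NY, NX = 480, 480              # illustrative global grid
rows_per_rank = NY // size     # assume NY is divisible by size, for simplicity

# Each rank owns a horizontal strip of the domain and advances it locally.
local_conc = np.zeros((rows_per_rank, NX))
local_conc[:] = rank           # placeholder "computation"

# Global diagnostic (e.g. total mass) assembled with a reduction on rank 0.
local_mass = local_conc.sum()
total_mass = comm.reduce(local_mass, op=MPI.SUM, root=0)
if rank == 0:
    print(f"total mass across {size} ranks: {total_mass:.1f}")
```

In a real chemistry-transport code the strips would also exchange halo rows with neighbouring ranks before each advection step; that communication pattern is what porting and optimization work on different clusters typically targets.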
Abstract:
Large scale air pollution models are powerful tools, designed to meet the increasing demand in different environmental studies. The atmosphere is the most dynamic component of the environment, where pollutants can be transported quickly over long distances. Therefore air pollution modeling must be done in a large computational domain. Moreover, all relevant physical, chemical and photochemical processes must be taken into account. In such complex models operator splitting is very often applied in order to achieve sufficient accuracy as well as efficiency of the numerical solution. The Danish Eulerian Model (DEM) is one of the most advanced such models. Its space domain (4800 × 4800 km) covers Europe, most of the Mediterranean, and neighbouring parts of Asia and the Atlantic Ocean. Efficient parallelization is crucial for the performance and practical capabilities of this huge computational model. Different splitting schemes, based on the main processes mentioned above, have been implemented and tested with respect to accuracy and performance in the new version of DEM. Some numerical results of these experiments are presented in this paper.
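As a minimal illustration of the operator-splitting idea mentioned above (not the actual DEM scheme), the sketch below advances a state by symmetric (Strang) splitting of two sub-operators; the operators themselves are placeholders.

```python
# Minimal sketch of symmetric (Strang) operator splitting over one time step.
# The sub-operators are placeholders, not the DEM advection/chemistry solvers.
import numpy as np

def advection_step(c, dt):
    """Placeholder transport operator (here: simple smoothing of gradients)."""
    return c + dt * 0.1 * (np.roll(c, 1) - 2 * c + np.roll(c, -1))

def chemistry_step(c, dt):
    """Placeholder chemistry operator (here: linear decay)."""
    return c * np.exp(-0.05 * dt)

def strang_split_step(c, dt):
    """Advance one step: half advection, full chemistry, half advection."""
    c = advection_step(c, dt / 2)
    c = chemistry_step(c, dt)
    c = advection_step(c, dt / 2)
    return c

c = np.ones(100)
for _ in range(10):
    c = strang_split_step(c, dt=1.0)
print(f"mean concentration after 10 steps: {c.mean():.3f}")
```

The symmetric ordering keeps the splitting error second order in the time step, which is why the choice and ordering of sub-operators matters for both accuracy and performance in models of this kind.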
Abstract:
To date, a number of studies have focused on the influence of sea surface temperature (SST) on global and regional rainfall variability, with the majority of these focusing on certain ocean basins, e.g. the Pacific, North Atlantic and Indian Oceans. In contrast, relatively little work has been done on the influence of the central South Atlantic, particularly in relation to rainfall over southern Africa. Previous work by the authors, using reanalysis data and general circulation model (GCM) experiments, has suggested that cold SST anomalies in the central southern Atlantic Ocean are linked to an increase in rainfall extremes across southern Africa. In this paper we present results from idealised regional climate model (RCM) experiments forced with both positive and negative SST anomalies in the southern Atlantic Ocean. These experiments reveal an unexpected response of rainfall over southern Africa. In particular, it was found that SST anomalies of opposite sign can cause similar rainfall responses in the model experiments, with isolated increases in rainfall over central southern Africa as well as a large region of drying over the Mozambique Channel. The purpose of this paper is to highlight this finding and explore explanations for the behaviour of the climate model. It is suggested that the observed changes in rainfall might result from the redistribution of energy (associated with upper-level changes to Rossby waves) or, of more concern, from model error, and therefore the paper concludes that the results of idealised regional climate models forced with SST anomalies should be viewed cautiously.
Abstract:
A method to solve a quasi-geostrophic two-layer model including the variation of static stability is presented. The divergent part of the wind is incorporated by means of an iterative procedure. The procedure is rather fast and the time of computation is only 60–70% longer than for the usual two-layer model. The method of solution is justified by the conservation of the difference between the gross static stability and the kinetic energy. To eliminate the side-boundary conditions the experiments have been performed on a zonal channel model. The investigation falls mainly into three parts. The first part (section 5) contains a discussion of the significance of some physically inconsistent approximations. It is shown that physical inconsistencies are rather serious, and for the inconsistent models that were studied the total kinetic energy increased faster than the gross static stability. In the next part (section 6) we study the effect of a Jacobian difference operator which conserves the total kinetic energy. The use of this operator in two-layer models gives a slight improvement but probably does not have any practical use in short-period forecasts. It is also shown that the energy-conservative operator will change the wave speed in an erroneous way if the wave number or the grid length is large in the meridional direction. In the final part (section 7) we investigate the behaviour of baroclinic waves for some different initial states and for two energy-consistent models, one with constant and one with variable static stability. According to the linear theory the waves adjust rather rapidly in such a way that the temperature wave lags behind the pressure wave, independent of the initial configuration. Thus, both models give rise to a baroclinic development even if the initial state is quasi-barotropic. The effect of the variation of static stability is very small; qualitative differences in the development are only observed during the first 12 hours. For an amplifying wave we get a stabilization over the troughs and a destabilization over the ridges.
Abstract:
We performed mutual tapping experiments between two humans to investigate the conditions required for synchronized motion. A transition from an alternative mode to a synchronization mode was discovered under the same conditions under which a subject changed from a reactive mode to an anticipation mode in single tapping experiments. Experimental results suggest that the cycle time for each tapping motion is tuned by a proportional control that is based on synchronization errors and cycle time errors. As the tapping frequency increases, the mathematical model based on feedback control in the sensory-motor closed loop predicts a discrete mode transition as the gain factors of the proportional control decrease. The conditions for synchronization are shown to be a consequence of the coupled dynamics arising from this feedback loop in the sensory-motor system.
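A minimal sketch of the kind of proportional error-correction model described above, assuming the next cycle time is corrected from the synchronization error and the cycle-time error; the gain values, target period and noise level are illustrative assumptions, not the fitted values from the study.

```python
# Minimal sketch: next tap interval corrected in proportion to the
# synchronization error and the cycle-time error. Gains and period are
# illustrative assumptions.
import numpy as np

def next_interval(interval, sync_error, target_period, k_sync=0.4, k_cycle=0.3):
    """Proportional correction of the tapping cycle time."""
    cycle_error = interval - target_period
    return interval - k_sync * sync_error - k_cycle * cycle_error

rng = np.random.default_rng(0)
target = 0.5                    # partner's tapping period, s
interval, phase = 0.6, 0.05     # initial own interval and asynchrony, s
for _ in range(20):
    phase += interval - target                  # accumulated asynchrony to partner
    interval = next_interval(interval, phase, target)
    interval += rng.normal(0, 0.005)            # motor noise
print(f"interval ~ {interval:.3f} s, asynchrony ~ {phase:.3f} s")
```

With these gains the coupled update is stable and the asynchrony decays toward zero; shrinking the gains (as the abstract suggests happens at higher tapping frequencies) pushes the system toward the loss of synchronization.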
Abstract:
Radiative forcing and climate sensitivity have been widely used as concepts to understand climate change. This work performs climate change experiments with an intermediate general circulation model (IGCM) to examine the robustness of the radiative forcing concept for carbon dioxide and solar constant changes. This IGCM has been specifically developed as a computationally fast model, but one that allows an interaction between physical processes and large-scale dynamics; the model allows many long integrations to be performed relatively quickly. It employs a fast and accurate radiative transfer scheme, as well as simple convection and surface schemes, and a slab ocean, to model the effects of climate change mechanisms on the atmospheric temperatures and dynamics with a reasonable degree of complexity. The climatology of the IGCM run at T21 resolution with 22 levels is compared to European Centre for Medium-Range Weather Forecasts reanalysis data. The response of the model to changes in carbon dioxide and solar output is examined when these changes are applied globally and when constrained geographically (e.g. over land only). The CO2 experiments have a roughly 17% higher climate sensitivity than the solar experiments. It is also found that a forcing at high latitudes causes a 40% higher climate sensitivity than a forcing only applied at low latitudes. It is found that, despite differences in the model feedbacks, climate sensitivity is roughly constant over a range of distributions of CO2 and solar forcings. Hence, in the IGCM at least, the radiative forcing concept is capable of predicting global surface temperature changes to within 30%, for the perturbations described here. It is concluded that radiative forcing remains a useful tool for assessing the natural and anthropogenic impact of climate change mechanisms on surface temperature.
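The radiative forcing framework referenced above relates global surface temperature change to forcing through a climate sensitivity parameter, ΔT ≈ λF. The sketch below illustrates that relation with purely nominal numbers, not results from the IGCM experiments.

```python
# Minimal sketch of the radiative forcing framework: dT ~= lambda * F.
# Numbers are nominal illustrations, not results from the IGCM experiments.
forcing_2xco2 = 3.7          # W m-2, commonly quoted value for CO2 doubling
climate_sensitivity = 0.8    # K per (W m-2), assumed sensitivity parameter

delta_T = climate_sensitivity * forcing_2xco2
print(f"predicted warming for 2xCO2: {delta_T:.1f} K")

# A forcing with, say, a 40% larger sensitivity parameter (as found for
# high-latitude forcing in the abstract) gives a proportionally larger response.
print(f"with 40% higher sensitivity: {1.4 * delta_T:.1f} K")
```

The usefulness of the concept rests on λ being approximately independent of the forcing agent and its spatial distribution, which is exactly what the experiments above test.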
Abstract:
We investigate the initialization of Northern-hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.
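A minimal sketch of the assimilation update described above: Newtonian relaxation of ice concentration toward observations, with the mean-thickness increment taken proportional to the concentration increment. The relaxation weight and the proportionality constant are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch: Newtonian relaxation of sea-ice concentration toward
# observations, with a proportional mean-thickness analysis update.
# Relaxation weight and proportionality constant are illustrative assumptions.
import numpy as np

def analysis_update(conc_b, thick_b, conc_obs, weight=0.5, thick_per_conc=2.0):
    """Return analysed ice concentration and grid-cell mean thickness.

    conc_b, thick_b : background concentration and mean thickness (m)
    conc_obs        : observed concentration
    weight          : Newtonian relaxation weight in [0, 1]
    thick_per_conc  : assumed metres of mean thickness per unit concentration
    """
    d_conc = weight * (conc_obs - conc_b)      # concentration analysis increment
    d_thick = thick_per_conc * d_conc          # proportional thickness increment
    conc_a = np.clip(conc_b + d_conc, 0.0, 1.0)
    thick_a = np.maximum(thick_b + d_thick, 0.0)
    return conc_a, thick_a

conc_a, thick_a = analysis_update(np.array([0.8]), np.array([1.6]), np.array([0.6]))
print(conc_a, thick_a)   # -> [0.7] [1.4]
```

Keeping the thickness increment tied to the concentration increment is what distinguishes this scheme from the two poorer-performing alternatives in the abstract (leaving mean thickness unchanged, or conserving actual ice thickness).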
Abstract:
We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
Abstract:
The "Vertical structure and physical processes of the Madden-Julian oscillation (MJO)" project comprises three experiments, designed to evaluate comprehensively the heating, moistening and momentum associated with tropical convection in general circulation models (GCMs). We consider here only those GCMs that performed all experiments. Some models display relatively higher or lower MJO fidelity in both initialized hindcasts and climate simulations, while others show considerable variations in fidelity between experiments. Fidelity in hindcasts is not meaningfully correlated with fidelity in climate simulations. The analysis of each experiment led to the development of process-oriented diagnostics, some of which distinguished between GCMs with higher or lower fidelity in that experiment. We select the most discriminating diagnostics and apply them to data from all experiments, where possible, to determine whether correlations with MJO fidelity hold across scales and GCM states. While normalized gross moist stability had a small but statistically significant correlation with MJO fidelity in climate simulations, we find no link with fidelity in medium-range hindcasts. Similarly, there is no association between timestep-to-timestep rainfall variability, identified from short hindcasts, and fidelity in medium-range hindcasts or climate simulations. Two metrics that relate precipitation to free-tropospheric moisture (the relative humidity for extreme daily precipitation, and variations in the height and amplitude of moistening with rain rate) successfully distinguish between higher- and lower-fidelity GCMs in hindcasts and climate simulations. To improve the MJO, developers should focus on relationships between convection and both total moisture and its rate of change. We conclude by offering recommendations for further experiments.
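A minimal sketch of one such diagnostic, the relative humidity associated with extreme daily precipitation, under the assumption that it is computed as a composite of humidity over the wettest days; the percentile threshold and the synthetic data are illustrative, not the project's exact recipe.

```python
# Minimal sketch of a precipitation-moisture diagnostic: the mean column
# relative humidity on days in the top 1% of daily rainfall. The percentile
# choice and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
rain = rng.gamma(shape=0.5, scale=8.0, size=5000)          # daily rain, mm/day
col_rh = np.clip(40 + 6 * np.sqrt(rain) + rng.normal(0, 5, rain.size), 0, 100)

def rh_for_extreme_precip(rain, rh, pct=99.0):
    """Mean relative humidity on days exceeding the given rain percentile."""
    threshold = np.percentile(rain, pct)
    return rh[rain >= threshold].mean()

print(f"RH for extreme daily precipitation: {rh_for_extreme_precip(rain, col_rh):.1f}%")
```

Comparing this composite between a model and observations indicates whether the model's heaviest rain occurs in a sufficiently moist free troposphere, which is the property the abstract links to MJO fidelity.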
Abstract:
An analysis of diabatic heating and moistening processes from 12-36 hour lead time forecasts from 12 general circulation models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 hours is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations, while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the YOTC analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large inter-model spread. In particular, there are large spreads in convective heating and moistening at mid-levels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of timestep behaviour shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.
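One simple way to quantify the timestep rainfall intermittency mentioned above is the lag-1 autocorrelation of timestep rainfall at a grid point; this is a generic sketch of such a diagnostic, not necessarily the measure used in the project, and the two synthetic series are illustrative.

```python
# Minimal sketch of a timestep intermittency diagnostic: the lag-1
# autocorrelation of timestep rainfall. Low or negative values indicate
# on/off "flickering" convection. Synthetic data only.
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

rng = np.random.default_rng(1)
smooth_rain = np.convolve(rng.gamma(1.0, 1.0, 2000), np.ones(10) / 10, mode="same")
flicker_rain = rng.gamma(1.0, 1.0, 2000) * rng.integers(0, 2, 2000)

print(f"smooth model:       r1 = {lag1_autocorr(smooth_rain):.2f}")
print(f"intermittent model: r1 = {lag1_autocorr(flicker_rain):.2f}")
```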