900 results for Which-way experiments
Abstract:
DNA-strand exchange is a vital step in the recombination process; a key intermediate is the four-way DNA Holliday junction, which is formed transiently in most living organisms. Here, the single-crystal structure at a resolution of 2.35 Å of such a DNA junction formed by d(CCGGTACCGG)2 is presented; the sequence has crystallized in a more highly symmetrical packing mode than that previously observed for it. In this case, the structure is isomorphous to that of the mismatch sequence d(CCGGGACCGG)2, which reveals the roles of both lattice and DNA sequence in determining the junction geometry. The helices cross as a right-handed X at an angle of 43.0°, larger than the 41.4° previously observed for this sequence. No metal cations were observed; the crystals were grown in the presence of group I counter-cations only.
Abstract:
Cleft lip and palate is the most common congenital condition affecting the face and cranial bones and is associated with a raised risk of difficulties in infant-caregiver interaction; the reasons for such difficulties are not fully understood. Here, we report two experiments designed to explore how adults respond to infant faces with and without cleft lip, using behavioural measures of attractiveness appraisal (‘liking’) and willingness to work to view or remove the images (‘wanting’). We found that infants with cleft lip were rated as less attractive and were viewed for shorter durations than healthy infants, an effect that was particularly apparent when the cleft lip was severe. Women rated the infant faces as more attractive than men did, but there were no differences in men's and women's viewing times of these faces. In a second experiment, we found that the presence of a cleft lip in domestic animals affected adults' ‘liking’ and ‘wanting’ responses in a way comparable to that seen for human infants. Adults' responses were remarkably similar for images of infants and animals with cleft lip, although no gender difference in attractiveness ratings or viewing times emerged for animals. We suggest that the presence of a cleft lip can substantially change the way in which adults respond to human and animal faces. Furthermore, women may respond differently from men when asked to appraise infant attractiveness, despite the fact that men and women ‘want’ to view images of infants for similar durations.
Abstract:
It is well known that gut bacteria contribute significantly to host homeostasis, providing a range of benefits such as immune protection and vitamin synthesis. They also supply the host with a considerable amount of nutrients, making this ecosystem an essential metabolic organ. In the context of increasing evidence of the link between the gut flora and the metabolic syndrome, understanding the metabolic interaction between the host and its gut microbiota is becoming an important challenge of modern biology [1-4]. Colonization (also referred to as the normalization process) designates the establishment of micro-organisms in a formerly germ-free animal. While it is a natural process occurring at birth, it is also used in adult germ-free animals to control the gut floral ecosystem and further determine its impact on host metabolism. A common procedure to control the colonization process is the gavage method, using a single micro-organism or a mixture of micro-organisms. This method results in very quick colonization and has the disadvantage of being extremely stressful [5]. It is therefore useful to minimize the stress and to obtain a slower colonization process, so that the impact of bacterial establishment on host metabolism can be observed gradually. In this manuscript, we describe a procedure to assess the modification of hepatic metabolism during a gradual colonization process using a non-destructive metabolic profiling technique. We propose to monitor gut microbial colonization by assessing the gut microbial metabolic activity reflected in the urinary excretion of microbial co-metabolites, using 1H NMR-based metabolic profiling. This allows an appreciation of the stability of gut microbial activity beyond the stable establishment of the gut microbial ecosystem, which is usually assessed by monitoring fecal bacteria by DGGE (denaturing gradient gel electrophoresis) [6]. The colonization takes place in a conventional open environment and is initiated by dirty litter soiled by conventional animals, which serve as controls. Since rodents are coprophagous, this ensures a homogeneous colonization, as previously described [7]. Hepatic metabolic profiling is measured directly on an intact liver biopsy using 1H high-resolution magic angle spinning (HR-MAS) NMR spectroscopy. This semi-quantitative technique offers a quick way to assess, without damaging the cell structure, major metabolites such as triglycerides, glucose and glycogen, in order to further estimate the complex interaction between the colonization process and hepatic metabolism [7-10]. This method can also be applied to any tissue biopsy [11,12].
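As an illustration of how the urinary 1H NMR profiles described above might be analysed, the following minimal sketch projects binned spectra onto their leading principal components, a standard way to follow a metabolic trajectory over the colonization period. The data shapes, bin counts and function names are illustrative assumptions, not part of the published protocol.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project binned 1H NMR spectra onto their leading principal components.

    spectra : (n_samples, n_bins) array, e.g. one row per urine sample
              collected as colonization progresses (hypothetical layout).
    """
    X = spectra - spectra.mean(axis=0)              # mean-centre each bin
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return (U * S)[:, :n_components]                # scores = U * Sigma

# Hypothetical data: 8 urine samples, 200 integrated spectral bins
rng = np.random.default_rng(0)
scores = pca_scores(rng.random((8, 200)))
print(scores.shape)  # (8, 2): one point per sample in the scores plane
```

Plotting successive samples in the scores plane would then show whether gut microbial activity has stabilised, complementing the DGGE monitoring mentioned above.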
Abstract:
In this paper a look is taken at how implant and electrode technology can be employed to create biological brains for robots, to enable human enhancement and to diminish the effects of certain neural illnesses. In all cases the end result is to increase the range of abilities of the recipients. An indication is given of a number of areas in which such technology has already had a profound effect, a key element being the need for a clear interface linking a biological brain directly with computer technology. The emphasis is placed on practical scientific studies that have been and are being undertaken and reported on. The area of focus is the use of electrode technology, where either a connection is made directly with the cerebral cortex and/or nervous system or implants into the human body are involved. The paper also considers robots that have biological brains, in which human neurons can be employed as the sole thinking machine for a real-world robot body.
Abstract:
Determination of the local structure of a polymer glass by scattering methods is complex owing to the number of spatial and orientational correlations, both from within the polymer chain (intrachain) and between neighbouring chains (interchain), from which the scattering arises. Recently, considerable advances have been made in the structural analysis of relatively simple polymers such as poly(ethylene) through the use of broad-Q neutron scattering data tightly coupled to atomistic modelling procedures. This paper presents the results of an investigation into the use of these procedures for the analysis of the local structure of atactic PMMA (a-PMMA), which is chemically more complex, with a much greater number of intrachain structural parameters. We have utilised high-quality neutron scattering data obtained using SANDALS at ISIS, coupled with computer models representing both the single chain and the bulk polymer system. Several different modelling approaches have been explored, encompassing techniques such as Reverse Monte Carlo refinement and energy minimisation, and their relative merits and successes are discussed. These different approaches highlight structural parameters which any realistic model of glassy atactic PMMA must replicate.
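To make the Reverse Monte Carlo refinement mentioned above concrete, here is a minimal sketch of a single RMC move fitted against an experimental structure factor. The function compute_sq stands in for whatever scattering calculation the model uses, and the move size, weighting and names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def rmc_step(positions, s_model, s_exp, sigma, compute_sq, max_move=0.1,
             rng=None):
    """One Reverse Monte Carlo move: displace one atom at random and accept
    or reject the move based on the change in chi-squared against the data.

    compute_sq(positions) must return the model structure factor on the
    experimental Q grid (a placeholder for the real scattering calculation).
    """
    if rng is None:
        rng = np.random.default_rng()
    chi2 = lambda s: np.sum((s - s_exp) ** 2) / sigma ** 2
    trial = positions.copy()
    i = rng.integers(len(trial))
    trial[i] += rng.uniform(-max_move, max_move, size=3)  # random displacement
    s_trial = compute_sq(trial)
    d_chi2 = chi2(s_trial) - chi2(s_model)
    # Metropolis-style acceptance: always accept improvements, sometimes
    # accept moves that worsen the fit, to avoid local minima
    if d_chi2 < 0 or rng.random() < np.exp(-d_chi2 / 2):
        return trial, s_trial
    return positions, s_model
```

In practice such moves would be combined with the chemical constraints (bond lengths, tacticity) that the single-chain and bulk models impose.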
Abstract:
Keith DeRose has argued that context shifting experiments should be designed in a specific way in order to accommodate what he calls a ‘truth/falsity asymmetry’. I explain and critique DeRose's reasons for proposing this modification to contextualist methodology, drawing on recent experimental studies of DeRose's bank cases as well as experimental findings about the verification of affirmative and negative statements. While DeRose's arguments for his particular modification to contextualist methodology fail, the lesson of his proposal is that there is good reason to pay close attention to several subtle aspects of the design of context shifting experiments.
Abstract:
A cloud-resolving model is modified to implement the weak temperature gradient approximation in order to simulate the interactions between tropical convection and the large-scale tropical circulation. The instantaneous domain-mean potential temperature is relaxed toward a reference profile obtained from a radiative–convective equilibrium simulation of the cloud-resolving model. For homogeneous surface conditions, the model state at equilibrium is a large-scale circulation with its descending branch in the simulated column. This is similar to the equilibrium state found in some other studies, but not all. For this model, the development of such a circulation is insensitive to the relaxation profile and the initial conditions. Two columns of the cloud-resolving model are fully coupled by relaxing the instantaneous domain-mean potential temperatures of the two columns toward each other. This configuration is energetically closed, in contrast to the reference-column configuration. No mean large-scale circulation develops over homogeneous surface conditions, regardless of the relative area of the two columns. The sensitivity to nonuniform surface conditions is similar to that obtained in the reference-column configuration if the two simulated columns have very different areas, but it is markedly weaker for columns of comparable area. The weaker sensitivity can be understood as a consequence of a formulation in which the energy budget is closed. The reference-column configuration has been used to study convection in a local region under the influence of a large-scale circulation. The extension to a two-column configuration is proposed as a methodology for studying the influence of changes in remote convection on local convection.
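A minimal sketch of the relaxation described above, assuming a simple Newtonian form with an arbitrary 3-hour time scale (the paper's actual formulation and parameters may differ); the second function shows the usual weak-temperature-gradient interpretation of the relaxation as a large-scale vertical advection.

```python
import numpy as np

def wtg_relaxation(theta_mean, theta_ref, tau=3.0 * 3600.0):
    """Tendency (K/s) relaxing the domain-mean potential temperature
    profile toward a reference profile; tau = 3 h is an assumed value."""
    return -(theta_mean - theta_ref) / tau

def wtg_velocity(theta_mean, theta_ref, dtheta_ref_dz, tau=3.0 * 3600.0):
    """Large-scale vertical velocity (m/s) implied by the relaxation,
    from w * d(theta_ref)/dz = (theta_mean - theta_ref) / tau."""
    return (theta_mean - theta_ref) / (tau * dtheta_ref_dz)

# Illustrative numbers: a 0.5 K warm anomaly against a 4 K/km stratification
theta_ref = 300.0 + 4e-3 * np.linspace(0.0, 15e3, 4)
print(wtg_velocity(theta_ref + 0.5, theta_ref, 4e-3))  # ~0.012 m/s upward
```

In the two-column configuration the same relaxation couples the columns to each other rather than to a fixed reference, which is what closes the energy budget.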
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, nor is there any agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts, comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on the skill of CMIP5 decadal hindcasts is not the aim of this paper; the results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) additional skill, though moderate, is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted as a starting point for comparing prediction quality across prediction systems; it can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including the adjustment of mean and conditional biases and consideration of how best to approach forecast uncertainty.
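To illustrate the two kinds of questions the framework poses, here is a hedged sketch of a deterministic skill score comparing initialized and uninitialized hindcasts, and a simple spread-error ratio probing whether ensemble spread represents forecast uncertainty. These are generic textbook diagnostics, not the specific metrics prescribed for CMIP5.

```python
import numpy as np

def msss(init, uninit, obs):
    """Mean squared skill score of initialized hindcasts relative to
    uninitialized projections; positive values mean initialization helps."""
    return 1.0 - np.mean((init - obs) ** 2) / np.mean((uninit - obs) ** 2)

def spread_error_ratio(ensemble, obs):
    """Mean ensemble spread divided by the RMSE of the ensemble mean.

    ensemble : (n_members, n_starts) hindcast values at one lead time.
    A ratio near 1 suggests the spread is, on average, a reasonable
    representation of forecast uncertainty."""
    err = ensemble.mean(axis=0) - obs
    rmse = np.sqrt(np.mean(err ** 2))
    spread = np.sqrt(np.mean(ensemble.var(axis=0, ddof=1)))
    return spread / rmse
```

Applied at smoothed regional scales and at the grid scale, such diagnostics would address questions (1) and (2) respectively.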
Abstract:
A prediction mechanism is necessary in human visual motion processing to compensate for delays in the sensory-motor system. In a previous study, “proactive control” was discussed as one example of the predictive function of human beings, in which the motion of the hands preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently visible tracking experiment in which a circular orbit is segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm acquired in the brain from environmental stimuli was shortened by more than 10%. This shortening accelerates the hand motion as soon as the visual information is cut off, causing the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
Abstract:
Observations have shown that the monsoon is a highly variable phenomenon of the tropical troposphere, which exhibits significant variance in the temporal range of two to three years. The reason for this specific interannual variability has not yet been identified unequivocally. Observational analyses have also shown that El Niño indices and western Pacific SSTs exhibit some power in the two-to-three-year period range, and it was therefore suggested that an ocean-atmosphere interaction could excite and support such a cycle. Land-surface-atmosphere interaction has been proposed as a similar possible driving mechanism. A rather different explanation could be provided by a forcing mechanism based on the quasi-biennial oscillation of the zonal wind in the lower equatorial stratosphere (QBO). The QBO is a phenomenon driven by equatorial waves with periods of a few days which are excited in the troposphere. Provided that the monsoon circulation reacts to the modulation of tropopause conditions forced by the QBO, this could explain monsoon variability in the quasi-biennial window. The possibility of QBO-driven monsoon variability is investigated in this study in a number of general circulation model experiments in which the QBO is assimilated to externally controlled phase states. These experiments show that the boreal summer monsoon is significantly influenced by the QBO. A QBO westerly phase implies less precipitation in the western Pacific, but more in India, in agreement with observations. The austral summer monsoon is exposed to similar but weaker mechanisms, and its precipitation does not change significantly.
Abstract:
The tropical tropopause is considered to be the main region of upward transport of tropospheric air carrying water vapor and other tracers into the tropical stratosphere. The lower tropical stratosphere is also the region where the quasi-biennial oscillation (QBO) in the zonal wind is observed. The QBO is positioned in the region where the upward transport of tropospheric tracers to the overworld takes place; hence the QBO can in principle modulate this transport through its secondary meridional circulation. This modulation is investigated in this study through an analysis of general circulation model (GCM) experiments with an assimilated QBO. The experiments show, first, that the temperature signal of the QBO modifies the specific humidity of the air transported upward and, second, that the secondary meridional circulation modulates the velocity of the upward transport. Thus, during the eastward phase of the QBO the upward-moving air is moister and the upward velocity is lower than during the westward phase. It was further found that the QBO period is too short to allow an equilibration of the moisture in the QBO region. This causes a QBO signal in the moisture which is considerably smaller than what could be obtained in the limiting case of indefinitely long QBO phases. It also allows a high sensitivity of the mean moisture over a QBO cycle to the El Niño-Southern Oscillation (ENSO) phenomenon or to major tropical volcanic eruptions. The interplay of sporadic volcanic eruptions, ENSO and the QBO can produce low-frequency variability in the water vapor content of the tropical stratosphere, which makes it difficult to isolate the QBO signal in observational data of water vapor in the equatorial lower stratosphere.
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using a Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
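The Dinkelbach-type procedure reduces a fractional objective to a sequence of parametric subproblems. The sketch below shows the generic iteration for maximizing a ratio N(x)/D(x) with D(x) > 0; the inner solver here is a crude grid search for illustration only, whereas for a generalized linear fractional program the inner problem would be a linear program.

```python
import numpy as np

def dinkelbach(solve_parametric, N, D, lam=0.0, tol=1e-9, max_iter=100):
    """Dinkelbach-type procedure for maximizing N(x)/D(x), with D(x) > 0.

    solve_parametric(lam) must return argmax_x [N(x) - lam * D(x)].
    """
    for _ in range(max_iter):
        x = solve_parametric(lam)
        if N(x) - lam * D(x) < tol:   # F(lam) ~ 0: lam is the optimal ratio
            return x, lam
        lam = N(x) / D(x)             # update the ratio estimate
    return x, lam

# Toy usage: maximize (x + 1) / (x**2 + 1) over a grid on [0, 2]
xs = np.linspace(0.0, 2.0, 2001)
N = lambda x: x + 1.0
D = lambda x: x ** 2 + 1.0
x_opt, ratio = dinkelbach(lambda lam: xs[np.argmax(N(xs) - lam * D(xs))], N, D)
print(x_opt, ratio)  # ~0.414 and ~1.207, i.e. x = sqrt(2) - 1
```

Minimizing the worst-link conditional PEP fits the same template, with the ratio and the inner problem defined accordingly.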
Abstract:
As laid out in its convention, there are eight different objectives for ECMWF. One of the major objectives consists of preparing, on a regular basis, the data necessary for medium-range weather forecasts. The interpretation of this item is that the Centre will make forecasts once a day for a prediction period of up to 10 days. It is also evident that the Centre should not carry out any real weather forecasting but merely disseminate to the Member Countries the basic forecasting parameters with an appropriate resolution in space and time. It follows that the forecasting system at the Centre must, from the operational point of view, be functionally integrated with the weather services of the Member Countries. The operational interface between ECMWF and the Member Countries must be properly specified in order to achieve reasonable flexibility for both systems. The problem of making numerical atmospheric predictions for periods beyond 4-5 days differs substantially from that of 2-3 day forecasting. From the physical point of view, we can define a medium-range forecast as one in which the initial disturbances have lost their individual structure. However, we are still interested in predicting the atmosphere in a similar way as in short-range forecasting, which means that the model must be able to predict the dissipation and decay of the initial phenomena and the creation of new ones. With this definition, medium-range forecasting is indeed very difficult and is generally regarded as more difficult than extended-range forecasting, where usually only time and space mean values are predicted. The predictability of atmospheric flow has been extensively studied in recent years in theoretical investigations and by numerical experiments. As discussed elsewhere in this publication (see pp 338 and 431), a 10-day forecast is apparently on the fringe of predictability.
Abstract:
The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
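For reference, here is a minimal stochastic EnKF analysis step with perturbed observations, whose ensemble-combination character is what the balance argument above relies on. The linear observation operator and Gaussian perturbations are standard textbook assumptions, not the specific configuration of the paper's simple balanced model.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng=np.random.default_rng(0)):
    """Stochastic EnKF analysis step with perturbed observations.

    Xf : (n_state, n_members) forecast ensemble
    y  : (n_obs,) observation vector
    H  : (n_obs, n_state) linear observation operator
    R  : (n_obs, n_obs) observation-error covariance
    """
    n = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)         # ensemble anomalies
    Pf = A @ A.T / (n - 1)                          # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    # Perturb the observations so the analysis ensemble keeps the right spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return Xf + K @ (Y - H @ Xf)
```

Because each analysis member is a linear combination of forecast members plus gain-weighted innovations, a balanced forecast ensemble tends to yield a near-balanced analysis, which is the advantage noted above; a small ensemble or overobserving can break this.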
Abstract:
Climate change is putting Colombian agriculture under significant stress and, if no adaptation is made, the sector will be severely affected during the coming decades. Ramirez-Villegas et al. (2012) set out a government-led, top-down, techno-scientific proposal for a way forward by which Colombian agriculture could adapt to climate change. However, this proposal largely overlooks the root causes of the vulnerability of Colombian agriculture, and of smallholders in particular. I discuss some of the hidden assumptions underpinning this proposal and the arguments employed by Ramirez-Villegas et al., drawing on the existing literature on Colombian agriculture and the wider scientific debate on adaptation to climate change. While technical measures may play an important role in the adaptation of Colombian agriculture to climate change, I question whether the actions listed in the proposal, on their own and specifically for smallholders, truly represent priority issues. I suggest that by i) looking at vulnerability before adaptation, ii) contextualising climate change as one of multiple exposures, and iii) truly putting smallholders at the centre of adaptation, i.e. learning about and with them, different and perhaps more urgent priorities for action can be identified. Ultimately, I argue that what is at stake is not only a list of adaptation measures but, more importantly, the scientific approach from which priorities for action are identified. In this respect, I propose that transformative adaptation, rather than a technical fix, represents a better approach for Colombian agriculture, and smallholders in particular, in the face of climate change.