992 results for Dimensional changes
Abstract:
A state-of-the-art chemistry climate model coupled to a three-dimensional ocean model is used to produce three experiments, all seamlessly covering the period 1950–2100, forced by different combinations of long-lived Greenhouse Gases (GHGs) and Ozone Depleting Substances (ODSs). The experiments are designed to quantify the separate effects of GHGs and ODSs on the evolution of ozone, as well as the extent to which these effects are independent of each other, by alternately holding one set of these two forcings constant in combination with a third experiment where both ODSs and GHGs vary. We estimate that up to the year 2000 the net decrease in the column amount of ozone above 20 hPa is approximately 75% of the decrease that can be attributed to ODSs due to the offsetting effects of cooling by increased CO2. Over the 21st century, as ODSs decrease, continued cooling from CO2 is projected to account for more than 50% of the projected increase in ozone above 20 hPa. Changes in ozone below 20 hPa show a redistribution of ozone from tropical to extra-tropical latitudes with an increase in the Brewer-Dobson circulation. In addition to a latitudinal redistribution of ozone, we find that the globally averaged column amount of ozone below 20 hPa decreases over the 21st century, which significantly mitigates the effect of upper stratospheric cooling on total column ozone. Analysis by linear regression shows that the recovery of ozone from the effects of ODSs generally follows the decline in reactive chlorine and bromine levels, with the exception of the lower polar stratosphere where recovery of ozone in the second half of the 21st century is slower than would be indicated by the decline in reactive chlorine and bromine concentrations. These results also reveal the degree to which GHG-related effects mute the chemical effects of N2O on ozone in the standard future scenario used for the WMO Ozone Assessment. Increases in the residual circulation of the atmosphere and chemical effects from CO2 cooling more than halve the increase in reactive nitrogen in the mid to upper stratosphere that results from the specified increase in N2O between 1950 and 2100.
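For illustration only, a minimal sketch (not the authors' model code; the forcing proxies and ozone series below are synthetic assumptions) of the kind of linear-regression attribution described above, separating an ozone time series into an ODS-proxy term and a GHG term:

```python
# Minimal sketch: attributing an ozone time series to an ODS proxy (e.g. equivalent
# effective stratospheric chlorine, "eesc") and a GHG-cooling term by multiple linear
# regression. All inputs are synthetic placeholders.
import numpy as np

years = np.arange(1950, 2101)
eesc = np.clip((years - 1960) / 40.0, 0, 1) * np.exp(-np.clip(years - 2000, 0, None) / 60.0)
ghg = (years - 1950) / 150.0                      # normalized GHG (CO2) forcing proxy
ozone = -30.0 * eesc + 12.0 * ghg + np.random.normal(0, 1.0, years.size)  # toy ozone anomaly (DU)

# Design matrix: intercept, ODS proxy, GHG proxy
X = np.column_stack([np.ones_like(years, dtype=float), eesc, ghg])
coefs, *_ = np.linalg.lstsq(X, ozone, rcond=None)
ods_term, ghg_term = coefs[1] * eesc, coefs[2] * ghg
print("ODS-attributed change 1960-2000:", ods_term[years == 2000] - ods_term[years == 1960])
print("GHG-attributed change 1960-2000:", ghg_term[years == 2000] - ghg_term[years == 1960])
```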
Abstract:
The vertical profile of global-mean stratospheric temperature changes has traditionally represented an important diagnostic for the attribution of the cooling effects of stratospheric ozone depletion and CO2 increases. However, CO2-induced cooling alters ozone abundance by perturbing ozone chemistry, thereby coupling the stratospheric ozone and temperature responses to changes in CO2 and ozone-depleting substances (ODSs). Here we untangle the ozone-temperature coupling and show that the attribution of global-mean stratospheric temperature changes to CO2 and ODS changes (which are the true anthropogenic forcing agents) can be quite different from the traditional attribution to CO2 and ozone changes. The significance of these effects is quantified empirically using simulations from a three-dimensional chemistry-climate model. The results confirm the essential validity of the traditional approach in attributing changes during the past period of rapid ODS increases, although we find that about 10% of the upper stratospheric ozone decrease from ODS increases over the period 1975–1995 was offset by the increase in CO2, and the CO2-induced cooling in the upper stratosphere has been somewhat overestimated. When considering ozone recovery, however, the ozone-temperature coupling is a first-order effect; fully 2/5 of the upper stratospheric ozone increase projected to occur from 2010–2040 is attributable to CO2 increases. Thus, it has now become necessary to base attribution of global-mean stratospheric temperature changes on CO2 and ODS changes rather than on CO2 and ozone changes.
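As a schematic illustration of the coupling discussed above (an assumed linear-response toy model, not the paper's formulation), the temperature response can be written in terms of CO2 and ozone, with ozone itself depending on both forcing agents:

```latex
\Delta T = a\,\Delta\mathrm{CO_2} + b\,\Delta\mathrm{O_3},
\qquad
\Delta\mathrm{O_3} = c\,\Delta\mathrm{ODS} + d\,\Delta\mathrm{CO_2}
\;\Longrightarrow\;
\Delta T = (a + b\,d)\,\Delta\mathrm{CO_2} + b\,c\,\Delta\mathrm{ODS}.
```

When the chemically induced term d ΔCO2 becomes comparable to c ΔODS, as the abstract projects for the recovery period, attribution to (CO2, O3) and attribution to (CO2, ODS) yield noticeably different CO2 coefficients.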
Abstract:
We investigate the behavior of a two-dimensional inviscid and incompressible flow when pushed out of dynamical equilibrium. We use the two-dimensional vorticity equation with spectral truncation on a rectangular domain. For a sufficiently large number of degrees of freedom, the equilibrium statistics of the flow can be described through a canonical ensemble with two conserved quantities, energy and enstrophy. To perturb the system out of equilibrium, we change the shape of the domain according to a protocol, which changes the kinetic energy but leaves the enstrophy constant. We interpret this as doing work on the system. Evolving along a forward and its corresponding backward process, we find numerical evidence that the distributions of the work performed satisfy the Crooks relation. We confirm our results by proving the Crooks relation for this system rigorously.
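For reference, the Crooks relation tested above has the standard form below, where W is the work done by deforming the domain, β the inverse temperature conjugate to the energy in the canonical ensemble (the enstrophy constraint enters through the equilibrium free energies), and ΔF the free-energy difference between the initial equilibria of the forward and backward protocols:

```latex
\frac{P_{F}(W)}{P_{B}(-W)} = e^{\beta\,(W - \Delta F)}
```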
Abstract:
Although promise exists for patterns of resting-state blood oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI) brain connectivity to be used as biomarkers of early brain pathology, a full understanding of the nature of the relationship between neural activity and spontaneous fMRI BOLD fluctuations is required before such data can be correctly interpreted. To investigate this issue, we combined electrophysiological recordings of rapid changes in multi-laminar local field potentials from the somatosensory cortex of anaesthetized rats with concurrent two-dimensional optical imaging spectroscopy measurements of resting-state haemodynamics that underlie fluctuations in the BOLD fMRI signal. After neural ‘events’ were identified, their time points served to indicate the start of an epoch in the accompanying haemodynamic fluctuations. Multiple epochs for both neural ‘events’ and the accompanying haemodynamic fluctuations were averaged. We found that the averaged epochs of resting-state haemodynamic fluctuations taken after neural ‘events’ closely resembled the temporal profile of stimulus-evoked cortical haemodynamics. Furthermore, we were able to demonstrate that averaged epochs of resting-state haemodynamic fluctuations resembling the temporal profile of stimulus-evoked haemodynamics could also be found after peaks in neural activity filtered into specific electroencephalographic frequency bands (theta, alpha, beta, and gamma). This technique allows investigation of resting-state neurovascular coupling using methodologies that are directly comparable to that developed for investigating stimulus-evoked neurovascular responses.
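A minimal sketch of the event-triggered averaging procedure described above, using synthetic data and assumed sampling parameters rather than the study's recordings:

```python
# Minimal sketch: averaging epochs of a continuous haemodynamic signal around detected
# neural 'events'. Sampling rate, window lengths, and the traces are assumptions.
import numpy as np

fs = 30.0                                  # haemodynamic sampling rate (Hz), assumed
t = np.arange(0, 600, 1 / fs)              # 10 min of synthetic recording
haemo = np.random.randn(t.size)            # placeholder resting-state haemodynamic trace
event_times = np.sort(np.random.uniform(10, 580, 50))   # detected neural 'event' times (s)

pre, post = 2.0, 15.0                      # epoch window around each event (s)
win = np.arange(int(-pre * fs), int(post * fs))
epochs = []
for ev in event_times:
    idx = int(round(ev * fs)) + win
    if idx[0] >= 0 and idx[-1] < haemo.size:
        epochs.append(haemo[idx])

# Average across epochs; this is the trace compared with stimulus-evoked responses
event_triggered_average = np.mean(epochs, axis=0)
print(event_triggered_average.shape)       # samples per epoch
```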
Abstract:
We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using the Principal Component Analysis technique, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Antoni Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; B. J. Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks and Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is not so clear: at the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that compared to gains-only treatments sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. After having described the properties of risk attitudes as captured by the SGG risk-elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond the incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Being faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful to describe behavior under uncertainty and to explain behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future more complex descriptions of human attitudes towards risk.
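A minimal sketch of how Principal Component Analysis can recover two such dimensions from lottery-panel choices; the data, latent dimensions, and panel structure below are synthetic assumptions, not the SGG dataset:

```python
# Minimal sketch: PCA on a subjects-by-panels choice matrix to extract an "average risk
# taking" dimension and a "sensitivity to risk-premium" dimension. Data are simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_subjects, n_panels = 200, 4
risk_taking = rng.normal(0, 1, n_subjects)          # latent average risk taking
sensitivity = rng.normal(0, 1, n_subjects)          # latent sensitivity to risk-return
risk_premium = np.linspace(0, 1, n_panels)          # higher premium in later panels
choices = (risk_taking[:, None]
           + sensitivity[:, None] * risk_premium[None, :]
           + rng.normal(0, 0.3, (n_subjects, n_panels)))

pca = PCA(n_components=2)
scores = pca.fit_transform(choices)
print("explained variance ratios:", pca.explained_variance_ratio_)
# scores[:, 0] ~ average risk taking; scores[:, 1] ~ responsiveness to the risk premium
```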
Abstract:
The current work discusses the compositional analysis of spectra that may be related to amorphous materials lacking discernible Lorentzian, Debye or Drude responses. We propose to model such responses using a 3-dimensional random RLC network in a descriptor formulation, which is converted into an input-output transfer function representation. A wavelet identification study of these networks is performed to infer the composition of the networks. It was concluded that wavelet filter banks enable a parsimonious representation of the dynamics in excited randomly connected RLC networks. Furthermore, chemometric classification using the proposed technique enables the discrimination of dielectric samples with different compositions. The methodology is promising for the classification of amorphous dielectrics.
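A minimal sketch of the wavelet-filter-bank idea applied to a network response, assuming the PyWavelets package and a synthetic damped-oscillation signal in place of a simulated RLC network:

```python
# Minimal sketch: decomposing a network-like impulse response with a discrete wavelet
# filter bank and using per-level energies as a compact feature vector for a classifier.
import numpy as np
import pywt

fs = 1e6
t = np.arange(0, 2e-3, 1 / fs)
# Placeholder for a simulated network response: a sum of damped oscillations
response = (np.exp(-t / 3e-4) * np.sin(2 * np.pi * 2e4 * t)
            + 0.5 * np.exp(-t / 1e-4) * np.sin(2 * np.pi * 8e4 * t))

coeffs = pywt.wavedec(response, 'db4', level=6)        # discrete wavelet filter bank
features = [np.sum(c ** 2) for c in coeffs]            # energy per decomposition level
print(features)                                        # feature vector for chemometric classification
```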
Abstract:
Studies that use prolonged periods of sensory stimulation report associations between regional reductions in neural activity and negative blood oxygenation level-dependent (BOLD) signaling. However, the neural generators of the negative BOLD response remain to be characterized. Here, we use single-impulse electrical stimulation of the whisker pad in the anesthetized rat to identify components of the neural response that are related to “negative” hemodynamic changes in the brain. Laminar multiunit activity and local field potential recordings of neural activity were performed concurrently with two-dimensional optical imaging spectroscopy measuring hemodynamic changes. Repeated measurements over multiple stimulation trials revealed significant variations in neural responses across session and animal datasets. Within this variation, we found robust long-latency decreases (300–2000 ms after stimulus presentation) in gamma-band power (30–80 Hz) in the middle-superficial cortical layers in regions surrounding the activated whisker barrel cortex. This reduction in gamma-frequency activity was associated with corresponding decreases in the hemodynamic responses that drive the negative BOLD signal. These findings suggest a close relationship between BOLD responses and neural events that operate over time scales that outlast the initiating sensory stimulus, and provide important insights into the neurophysiological basis of negative neuroimaging signals.
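A minimal sketch of how gamma-band (30–80 Hz) power in a post-stimulus window could be computed, with a synthetic LFP trace and an assumed sampling rate standing in for the recorded data:

```python
# Minimal sketch: band-pass the LFP to 30-80 Hz, take the Hilbert envelope as
# instantaneous gamma power, and compare a late post-stimulus window with baseline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # LFP sampling rate (Hz), assumed
t = np.arange(-1.0, 3.0, 1 / fs)              # time relative to stimulus (s)
lfp = np.random.randn(t.size)                 # placeholder single-trial LFP

b, a = butter(4, [30 / (fs / 2), 80 / (fs / 2)], btype='band')
gamma = filtfilt(b, a, lfp)
power = np.abs(hilbert(gamma)) ** 2           # instantaneous gamma-band power

baseline = power[(t >= -0.5) & (t < 0)].mean()
late = power[(t >= 0.3) & (t <= 2.0)].mean()  # 300-2000 ms window from the abstract
print("late-window change relative to baseline:", (late - baseline) / baseline)
```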
Abstract:
Among existing remote sensing applications, land-based X-band radar is an effective technique to monitor the wave fields, and spatial wave information could be obtained from the radar images. Two-dimensional Fourier Transform (2-D FT) is the common algorithm to derive the spectra of radar images. However, the wave field in the nearshore area is highly non-homogeneous due to wave refraction, shoaling, and other coastal mechanisms. When applied in nearshore radar images, 2-D FT would lead to ambiguity of wave characteristics in wave number domain. In this article, we introduce two-dimensional Wavelet Transform (2-D WT) to capture the non-homogeneity of wave fields from nearshore radar images. The results show that wave number spectra by 2-D WT at six parallel space locations in the given image clearly present the shoaling of nearshore waves. Wave number of the peak wave energy is increasing along the inshore direction, and dominant direction of the spectra changes from South South West (SSW) to West South West (WSW). To verify the results of 2-D WT, wave shoaling in radar images is calculated based on dispersion relation. The theoretical calculation results agree with the results of 2-D WT on the whole. The encouraging performance of 2-D WT indicates its strong capability of revealing the non-homogeneity of wave fields in nearshore X-band radar images.
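A minimal sketch of a spatially resolved 2-D wavelet decomposition on a toy shoaling wave field, assuming the PyWavelets package; it uses a stationary (undecimated) 2-D transform as a stand-in for the authors' 2-D WT to show how wavelet coefficients retain the position information that a single whole-image 2-D FT discards:

```python
# Minimal sketch: per-pixel 2-D wavelet coefficients of a synthetic radar-like image
# whose wavelength shortens toward shore; comparing subband energy offshore vs inshore
# exposes the spatial non-homogeneity of the wave field.
import numpy as np
import pywt

ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
k = 0.05 + 0.10 * x / nx                           # local wavenumber increases shoreward
image = np.sin(k * x)                              # toy "shoaling" wave field

coeffs = pywt.swt2(image, 'db2', level=3)          # stationary (undecimated) 2-D WT
for band, (approx, (ch, cv, cd)) in enumerate(coeffs):
    offshore = np.mean(ch[:, : nx // 2] ** 2)      # horizontal-detail energy, offshore half
    inshore = np.mean(ch[:, nx // 2 :] ** 2)       # horizontal-detail energy, inshore half
    print(f"band {band}: offshore={offshore:.3e}, inshore={inshore:.3e}")
```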
Abstract:
For many tasks, such as retrieving a previously viewed object, an observer must form a representation of the world at one location and use it at another. A world-based 3D reconstruction of the scene built up from visual information would fulfil this requirement, something computer vision now achieves with great speed and accuracy. However, I argue that it is neither easy nor necessary for the brain to do this. I discuss biologically plausible alternatives, including the possibility of avoiding 3D coordinate frames such as ego-centric and world-based representations. For example, the distance, slant and local shape of surfaces dictate the propensity of visual features to move in the image with respect to one another as the observer’s perspective changes (through movement or binocular viewing). Such propensities can be stored without the need for 3D reference frames. The problem of representing a stable scene in the face of continual head and eye movements is an appropriate starting place for understanding the goal of 3D vision, more so, I argue, than the case of a static binocular observer.
Abstract:
Allergic asthma represents an important public health issue, most common in the paediatric population, characterized by airway inflammation that may lead to changes in volatiles secreted via the lungs. Thus, exhaled breath has the potential to be a matrix with relevant metabolomic information to characterize this disease. Progress in biochemistry, health sciences and related areas depends on instrumental advances, and high-throughput, sensitive equipment such as comprehensive two-dimensional gas chromatography–time of flight mass spectrometry (GC × GC–ToFMS) was therefore considered. GC × GC–ToFMS analysis of the exhaled breath of 32 children with allergic asthma, of whom 10 also had allergic rhinitis, and 27 control children allowed the identification of several hundred compounds belonging to different chemical families. Multivariate analysis, using Partial Least Squares-Discriminant Analysis in tandem with Monte Carlo Cross Validation, was performed to assess the predictive power and to help the interpretation of recovered compounds possibly linked to oxidative stress, inflammation processes or other cellular processes that may characterize asthma. The results suggest that the model is robust, considering the high classification rate, sensitivity, and specificity. A pattern of six compounds belonging to the alkane family characterized the asthmatic population: nonane, 2,2,4,6,6-pentamethylheptane, decane, 3,6-dimethyldecane, dodecane, and tetradecane. To explore future clinical applications, and considering the future role of molecular-based methodologies, a compound set was established to allow rapid access to information from exhaled breath, reducing the time of data processing and thus providing a more expeditious method for clinical purposes.
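A minimal sketch of PLS-Discriminant Analysis with Monte Carlo cross validation on synthetic data with the same group sizes; the compound table, number of variables, and signal structure are assumptions, not the study's measurements:

```python
# Minimal sketch: PLS regression on a 0/1 class label used as a discriminant model,
# evaluated with repeated random train/test splits (Monte Carlo cross validation).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(1)
n_asthma, n_control, n_compounds = 32, 27, 50
X = rng.normal(0, 1, (n_asthma + n_control, n_compounds))
X[:n_asthma, :6] += 1.0                       # toy signal in six "alkane" variables
y = np.r_[np.ones(n_asthma), np.zeros(n_control)]

accs = []
for train, test in ShuffleSplit(n_splits=200, test_size=0.3, random_state=0).split(X):
    pls = PLSRegression(n_components=2).fit(X[train], y[train])
    pred = (pls.predict(X[test]).ravel() > 0.5).astype(float)
    accs.append(np.mean(pred == y[test]))
print("Monte Carlo CV classification rate:", np.mean(accs))
```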
Abstract:
Introduction: Computer software can be used to predict orthognathic surgery outcomes. The aim of this study was to subjectively compare the soft-tissue surgical simulations of 2 software programs. Methods: Standard profile pictures were taken of 10 patients with a Class III malocclusion and a concave facial profile who were scheduled for double-jaw orthognathic surgery. The patients had horizontal maxillary deficiency or horizontal mandibular excess. Two software programs (Dentofacial Planner Plus [Dentofacial Software, Toronto, Ontario, Canada] and Dolphin Imaging [version 9.0, Dolphin Imaging Software, Canoga Park, Calif]) were used to predict the postsurgical profiles. The predictive images were compared with the actual final photographs. One hundred one orthodontists, oral-maxillofacial surgeons, and general dentists evaluated the images and were asked whether they would use either software program to plan treatment for, or to educate, their patients. Results: Statistical analyses showed differences between the groups when each point was judged. Dolphin Imaging software had better prediction of the nasal tip, chin, and submandibular area. Dentofacial Planner Plus software was better in predicting the nasolabial angle and the upper and lower lips. The total profile comparison showed no statistical difference between the software programs. Conclusions: The 2 types of software are similar for obtaining 2-dimensional predictive profile images of patients with Class III malocclusion treated with orthognathic surgery. (Am J Orthod Dentofacial Orthop 2010; 137: 452.e1-452.e5)
Abstract:
Parasite virulence genes are usually associated with telomeres. The clustering of the telomeres, together with their particular spatial distribution in the nucleus of human parasites such as Plasmodium falciparum and Trypanosoma brucei, has been suggested to play a role in facilitating ectopic recombination and in the emergence of new antigenic variants. Leishmania parasites, as well as other trypanosomes, have unusual gene expression characteristics, such as polycistronic and constitutive transcription of protein-coding genes. Leishmania subtelomeric regions are even more unique because unlike these regions in other trypanosomes they are devoid of virulence genes. Given these peculiarities of Leishmania, we sought to investigate how telomeres are organized in the nucleus of Leishmania major parasites at both the human and insect stages of their life cycle. We developed a new automated and precise method for identifying telomere position in the three-dimensional space of the nucleus, and we found that the telomeres are organized in clusters present in similar numbers in both the human and insect stages. While the number of clusters remained the same, their distribution differed between the two stages. The telomeric clusters were found more concentrated near the center of the nucleus in the human stage than in the insect stage suggesting reorganization during the parasite's differentiation process between the two hosts. These data provide the first 3D analysis of Leishmania telomere organization. The possible biological implications of these findings are discussed.
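A minimal sketch of automated spot detection in a 3-D nuclear image and measurement of telomere distances from the nuclear centre, on a synthetic stack; thresholds and spot sizes are arbitrary placeholders rather than the authors' calibrated pipeline:

```python
# Minimal sketch: segment bright telomere-like spots in a 3-D stack, find their
# centres of mass, and compute their distance from the nuclear centre.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
stack = rng.normal(0, 0.05, (40, 128, 128))            # background noise
centres = rng.uniform(20, 100, (12, 3)) * [0.3, 1, 1]  # 12 synthetic telomere spots (z, y, x)
zz, yy, xx = np.indices(stack.shape)
for cz, cy, cx in centres:
    stack += np.exp(-(((zz - cz) ** 2) / 2 + ((yy - cy) ** 2 + (xx - cx) ** 2) / 4))

labels, n_spots = ndimage.label(stack > 0.5)            # segment bright spots
spot_xyz = np.array(ndimage.center_of_mass(stack, labels, range(1, n_spots + 1)))
nucleus_centre = np.array(stack.shape) / 2.0            # crude proxy for the nuclear centre
radial = np.linalg.norm(spot_xyz - nucleus_centre, axis=1)
print(n_spots, "spots; mean distance from nuclear centre:", radial.mean())
```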