952 results for Spatial computable general equilibrium model
Abstract:
Using a highly resolved atmospheric general circulation model, the impact of different glacial boundary conditions on precipitation and atmospheric dynamics in the North Atlantic region is investigated. Six 30-yr time slice experiments of the Last Glacial Maximum at 21 thousand years before present (ka BP) and of a less pronounced glacial state, the Middle Weichselian (65 ka BP), are compared to analyse the sensitivity to changes in the ice sheet distribution, in the radiative forcing, and in the prescribed time-varying sea surface temperatures and sea ice, which are taken from a lower-resolution but fully coupled atmosphere-ocean general circulation model. The strongest differences are found for simulations with different heights of the Laurentide ice sheet. A high surface elevation of the Laurentide ice sheet leads to a southward displacement of the jet stream and the storm track in the North Atlantic region. These changes in the atmospheric dynamics generate a band of increased precipitation in the mid-latitudes across the Atlantic to southern Europe in winter, while the precipitation pattern in summer is only marginally affected. The impact of the radiative forcing differences between the two glacial periods and of the prescribed time-varying sea surface temperatures and sea ice is of second-order importance compared with that of the Laurentide ice sheet: they affect the atmospheric dynamics and precipitation in a similar but less pronounced manner than the topographic changes.
Abstract:
Winter circulation types under preindustrial and glacial conditions are investigated and used to quantify their impact on precipitation. The analysis is based on daily mean sea level pressure fields of a highly resolved atmospheric general circulation model and focuses on the North Atlantic and European region. We find that glacial circulation types are dominated by patterns with an east-west pressure gradient, which clearly differs from the predominantly zonal patterns for the recent past. This is also evident in the frequency of occurrence of circulation types when projecting preindustrial circulation types onto the glacial simulations. The elevation of the Laurentide ice sheet is identified as a major cause for these differences. In areas of strong precipitation signals in glacial times, the changes in the frequencies of occurrence of the circulation types explain up to 60% of the total difference between preindustrial and glacial simulations.
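The "frequency of occurrence" attribution described above reduces to simple bookkeeping: with a mean precipitation value per circulation type and the occurrence frequencies of the types in each climate, the part of the total precipitation change explained by circulation changes is the frequency-difference-weighted sum over types. A toy sketch of that bookkeeping, with invented type frequencies and precipitation values that are not taken from the study:

```python
# Toy illustration of a circulation-type frequency decomposition:
# delta_P_explained = sum_k (f_glacial_k - f_preindustrial_k) * P_k,
# where P_k is the mean precipitation of circulation type k. All numbers
# below are invented for illustration; none come from the simulations.
import numpy as np

P_k  = np.array([3.0, 1.2, 0.8, 2.1])      # mean precip per type (mm/day)
f_pi = np.array([0.40, 0.25, 0.20, 0.15])  # preindustrial type frequencies
f_gl = np.array([0.55, 0.15, 0.15, 0.15])  # same types counted in the glacial run

explained = ((f_gl - f_pi) * P_k).sum()    # change explained by frequency shifts
total_change = 0.5                         # toy total glacial-minus-PI change (mm/day)
print(f"fraction explained by frequency changes: {explained / total_change:.0%}")
```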
Abstract:
To increase the sparse knowledge of long-term Southern Hemisphere (SH) climate variability, we assess an ensemble of four transient simulations over the last 500 yr performed with a state-of-the-art atmosphere-ocean general circulation model. The model is forced with reconstructions of solar irradiance, greenhouse gas (GHG) and volcanic aerosol concentrations. A 1990 control simulation shows that the model is able to represent the Southern Annular Mode (SAM) and, to some extent, the South Pacific Dipole (SPD) and the Zonal Wave 3 (ZW3). During the past 500 yr we find that SPD and ZW3 variability remain stable, whereas the SAM shows a significant shift towards its positive state during the 20th century. Regional temperatures over South America are strongly influenced both by changing GHG concentrations and by volcanic eruptions, whereas precipitation shows no significant response to the varying external forcing. For temperature this stands in contrast to proxy records, suggesting that SH climate is dominated by internal variability rather than external forcing. The underlying dynamics of the temperature changes generally point to a combination of several modes, thus hampering the possibility of regionally reconstructing the modes from proxy records. The linear imprint of the external forcing is as expected, i.e. a warming for an increase in the combined solar and GHG forcing and a cooling after volcanic eruptions. Dynamically, only the increase in the SAM with increased combined forcing is simulated.
Abstract:
The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper a new moment-type estimator for the number of motor units in a muscle is defined, derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown, and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte Carlo based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
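A minimal Monte Carlo sketch of a thinned-random-sum setup of the kind described above, under the simplifying assumptions that each of the N units fires independently with a known probability p and contributes an i.i.d. amplitude with known mean; the moment relation E[S] = Npμ then yields an estimator. The firing probability, amplitude distribution, and sample sizes are illustrative choices, not the paper's:

```python
# Sketch: moment-type estimation of the number of motor units N from
# compound responses modeled as random sums with Bernoulli(p) thinning.
import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(N, p, n_trials, mu=1.0, sigma=0.2):
    """Compound responses S = sum of amplitudes of the units that fire."""
    fires = rng.random((n_trials, N)) < p             # Bernoulli(p) thinning
    amps = rng.normal(mu, sigma, size=(n_trials, N))  # i.i.d. unit amplitudes
    return (fires * amps).sum(axis=1)

def moment_estimate_N(S, p, mu):
    """Moment estimator from E[S] = N * p * mu."""
    return S.mean() / (p * mu)

for N in (300, 600, 1000):
    S = simulate_responses(N, p=0.5, n_trials=200)
    N_hat = moment_estimate_N(S, p=0.5, mu=1.0)
    # Bootstrap percentile interval for N_hat
    boots = [moment_estimate_N(rng.choice(S, size=S.size), 0.5, 1.0)
             for _ in range(2000)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"true N={N:4d}  N_hat={N_hat:7.1f}  95% bootstrap CI [{lo:.0f}, {hi:.0f}]")
```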
Abstract:
Searching for the neural correlates of visuospatial processing using functional magnetic resonance imaging (fMRI) is usually done in an event-related framework of cognitive subtraction, applying a paradigm comprising visuospatial cognitive components and a corresponding control task. Besides methodological caveats of the cognitive subtraction approach, the standard general linear model with fixed hemodynamic response predictors bears the risk of being underspecified. It does not take into account the variability of the blood oxygen level-dependent signal response due to variable task demand and performance on the level of each single trial. This underspecification may result in reduced sensitivity regarding the identification of task-related brain regions. In a rapid event-related fMRI study, we used an extended general linear model including single-trial reaction-time-dependent hemodynamic response predictors for the analysis of an angle discrimination task. In addition to the already known regions in superior and inferior parietal lobule, mapping the reaction-time-dependent hemodynamic response predictor revealed a more specific network including task demand-dependent regions not being detectable using the cognitive subtraction method, such as bilateral caudate nucleus and insula, right inferior frontal gyrus and left precentral gyrus.
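A rough sketch of what an extended GLM design matrix with a single-trial reaction-time-dependent predictor can look like. The HRF shape, trial onsets, RTs, and betas below are toy values, and the model is a generic illustration rather than the study's actual analysis pipeline:

```python
# Sketch: a GLM with a fixed task regressor plus a parametric regressor that
# scales each trial's response by its mean-centred reaction time, so
# demand-dependent BOLD variance loads on a separate predictor.
import numpy as np

rng = np.random.default_rng(1)
TR, n_scans = 2.0, 200                    # 200 scans, 400 s run (toy values)

t = np.arange(0, 30, TR)
hrf = t**5 * np.exp(-t)                   # crude gamma-shaped HRF (illustrative)
hrf /= hrf.sum()

onsets = np.arange(10, 380, 20)           # trial onsets in seconds (toy)
rts = rng.uniform(0.6, 2.4, onsets.size)  # single-trial reaction times (toy)

def regressor(weights):
    """Weighted stick functions at trial onsets, convolved with the HRF."""
    stick = np.zeros(n_scans)
    stick[(onsets / TR).astype(int)] = weights
    return np.convolve(stick, hrf)[:n_scans]

X = np.column_stack([
    regressor(np.ones(onsets.size)),      # fixed-amplitude task predictor
    regressor(rts - rts.mean()),          # single-trial RT-dependent predictor
    np.ones(n_scans),                     # intercept
])

# Simulate a voxel whose response scales with task demand, then fit by OLS.
y = X @ np.array([2.0, 1.5, 0.1]) + rng.normal(0, 1, n_scans)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated betas (task, RT-dependent, intercept):", beta.round(2))
```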
Abstract:
An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
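A hedged sketch of the general idea of an empirical alternative: estimate the pooled density of test statistics across all hypotheses and rank by an estimated likelihood ratio against the theoretical null, rather than by |z| alone. This is a generic two-groups-style illustration with invented proportions and effect sizes, not the paper's exact procedure:

```python
# Sketch: ranking hypotheses by an empirically estimated likelihood ratio
# f_hat(z) / phi(z) adapts to the underlying pattern of departure (here, an
# asymmetric shift) and can beat the standard |z| ordering.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)
# 90% true nulls plus 10% non-nulls shifted to one side (a departure "pattern")
z = np.concatenate([rng.normal(0.0, 1.0, 4500), rng.normal(2.5, 1.0, 500)])

f_hat = gaussian_kde(z)          # pooled density: mixture of null and alternative
lr = f_hat(z) / norm.pdf(z)      # large where the data favour the alternative
k = 500                          # reject the same number under both orderings
top_lr = np.argsort(-lr)[:k]
top_abs = np.argsort(-np.abs(z))[:k]

truth = np.arange(z.size) >= 4500    # indices of the true non-nulls
print("true positives, empirical-LR ordering:", truth[top_lr].sum())
print("true positives, |z| ordering:         ", truth[top_abs].sum())
```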
Abstract:
A number of authors have studied the mixture survival model to analyze survival data with nonnegligible cure fractions. A key assumption made by these authors is the independence between the survival time and the censoring time. To our knowledge, no one has studied the mixture cure model in the presence of dependent censoring. To account for such dependence, we propose a more general cure model that allows for dependent censoring. In particular, we derive the cure models from the perspective of competing risks and model the dependence between the censoring time and the survival time using a class of Archimedean copula models. Within this framework, we consider parameter estimation, cure detection, and the two-sample comparison of latency distributions in the presence of dependent censoring when a proportion of patients is deemed cured. Large-sample results are obtained using martingale theory. We apply the proposed methodologies to the SEER prostate cancer data.
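A toy simulation of the model class described above: a cure fraction combined with survival and censoring times coupled through a Clayton copula (a member of the Archimedean family). The copula parameter, event rates, and cure fraction are illustrative assumptions, not estimates from the SEER data:

```python
# Sketch: mixture cure model with dependent censoring. A fraction of subjects
# is cured (event time = +inf); for the rest, survival and censoring are
# coupled through a Clayton copula via conditional inverse sampling.
import numpy as np

rng = np.random.default_rng(0)
n, p_uncured, theta = 5000, 0.6, 2.0    # theta > 0 gives positive dependence

u = rng.random(n)
w = rng.random(n)
# Conditional sampling from the Clayton copula C(u, v) = (u^-t + v^-t - 1)^(-1/t):
v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)

T = -np.log(1.0 - u) / 0.5              # uncured survival times: Exp(rate 0.5)
C = -np.log(1.0 - v) / 0.3              # censoring times: Exp(rate 0.3), dependent on T
T[rng.random(n) > p_uncured] = np.inf   # cured subjects never experience the event

time = np.minimum(T, C)                 # observed follow-up time
event = (T <= C).astype(int)            # 1 = event observed, 0 = censored
print(f"observed event rate:        {event.mean():.3f}")
print(f"median follow-up time:      {np.median(time):.2f}")
print(f"Kendall's tau for theta={theta:.0f}:  {theta / (theta + 2):.2f}")
```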
Abstract:
The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning.
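A toy rate-model sketch of the proposed mechanism: feedforward weights stay fixed while a task-specific top-down signal raises the neuronal gain and recruits extra recurrent inhibition, widening the response gap between two brightness levels. All numbers are invented for illustration and are not fitted to the paper's model:

```python
# Sketch: top-down modulation of gain (g) and recurrent inhibition (w_inh)
# improves brightness discrimination while feedforward input stays intact.
def v1_response(ff_input: float, g: float = 1.0, w_inh: float = 0.0) -> float:
    """Steady-state rate: gain-scaled rectified drive minus recurrent inhibition."""
    drive = ff_input - w_inh * max(ff_input, 0.0)
    return g * max(drive, 0.0)

bright, dim = 1.10, 1.00    # two line-segment brightness levels (toy)

for label, g, w_inh in [("before learning", 1.0, 0.0),
                        ("after learning ", 2.5, 0.3)]:
    gap = v1_response(bright, g, w_inh) - v1_response(dim, g, w_inh)
    print(f"{label}: response gap = {gap:.3f}")  # larger gap, easier discrimination
```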
Abstract:
Mild cognitive impairment (MCI) often refers to the preclinical stage of dementia, where the majority develop Alzheimer's disease (AD). Given that neurodegenerative burden and compensatory mechanisms might exist before accepted clinical symptoms of AD are noticeable, the current prospective study aimed to investigate the functioning of brain regions in the visuospatial networks responsible for preclinical symptoms in AD using event-related functional magnetic resonance imaging (fMRI). Eighteen MCI patients were evaluated and clinically followed for approximately 3 years. Five progressed to AD (PMCI) and eight remained stable (SMCI). Thirteen age-, gender- and education-matched controls also participated. An angle discrimination task with varying task demands was used. Brain activation patterns as well as task demand-dependent and -independent signal changes between the groups were investigated by using an extended general linear model including individual performance (reaction time [RT]) of each single trial. Similar behavioral (RT and accuracy) responses were observed between MCI patients and controls. A network of bilateral activations, e.g. dorsal pathway, which increased linearly with increasing task demand, was engaged in all subjects. Compared with SMCI patients and controls, PMCI patients showed a stronger relation between task demand and brain activity in left superior parietal lobules (SPL) as well as a general task demand-independent increased activation in left precuneus. Altered brain function can be detected at a group level in individuals that progress to AD before changes occur at the behavioral level. Increased parietal activation in PMCI could reflect a reduced neuronal efficacy due to accumulating AD pathology and might predict future clinical decline in patients with MCI.
Abstract:
Background: The goal of this study was to determine whether site-specific differences in the subgingival microbiota could be detected by the checkerboard method in subjects with periodontitis. Methods: Subjects with at least six periodontal pockets with a probing depth (PD) between 5 and 7 mm were enrolled in the study. Subgingival plaque samples were collected with sterile curets by a single-stroke procedure at six selected periodontal sites from 161 subjects (966 subgingival sites). Subgingival bacterial samples were assayed with the checkerboard DNA-DNA hybridization method identifying 37 species. Results: Probing depths of 5, 6, and 7 mm were found at 50% (n = 483), 34% (n = 328), and 16% (n = 155) of sites, respectively. Statistical analysis failed to demonstrate differences in the sum of bacterial counts by tooth type (P = 0.18) or specific location of the sample (P = 0.78). With the exceptions of Campylobacter gracilis (P < 0.001) and Actinomyces naeslundii (P < 0.001), analysis by general linear model multivariate regression failed to identify subject or sample location factors as explanatory of the microbiologic results. A trend toward a difference in bacterial load by tooth type was found for Prevotella nigrescens (P < 0.01). At a cutoff level of ≥1.0 × 10⁵, Porphyromonas gingivalis and Tannerella forsythia (previously T. forsythensis) were present at 48.0% to 56.3% and 46.0% to 51.2% of sampled sites, respectively. Conclusions: Given the similarities in the clinical evidence of periodontitis, the presence and levels of 37 species commonly studied in periodontitis are similar, with no differences between molar, premolar, and incisor/cuspid subgingival sites. This may facilitate microbiologic sampling strategies in subjects during periodontal therapy.
Abstract:
Four papers, written in collaboration with the author's graduate school advisor, are presented. In the first paper, uniform and non-uniform Berry-Esseen (BE) bounds on the convergence to normality of a general class of nonlinear statistics are provided; novel applications to specific statistics, including the non-central Student's, Pearson's, and the non-central Hotelling's statistics, are also stated. In the second paper, a BE bound on the rate of convergence of the F-statistic used in testing hypotheses under a general linear model is given. The third paper considers the asymptotic relative efficiency (ARE) between the Pearson, Spearman, and Kendall correlation statistics; conditions sufficient to ensure that the Spearman and Kendall statistics are equally (asymptotically) efficient are provided, and several models are considered which illustrate the use of such conditions. Lastly, the fourth paper proves that, in the bivariate normal model, the ARE between any of these correlation statistics possesses certain monotonicity properties; quadratic lower and upper bounds on the ARE are stated as direct applications of such monotonicity patterns.
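For orientation, the classical uniform Berry-Esseen bound for standardized i.i.d. sums, of which the bounds in the first two papers are generalizations to nonlinear statistics, has the form:

```latex
% S_n = X_1 + ... + X_n for i.i.d. X_i with mean mu, variance sigma^2, and a
% finite third absolute moment; Phi is the standard normal CDF and C is an
% absolute constant.
\[
  \sup_{x \in \mathbb{R}}
  \left| \mathbb{P}\!\left( \frac{S_n - n\mu}{\sigma\sqrt{n}} \le x \right)
         - \Phi(x) \right|
  \;\le\; \frac{C \,\mathbb{E}\,|X_1 - \mu|^{3}}{\sigma^{3}\sqrt{n}} .
\]
```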
Abstract:
The need for a stronger and more durable building material is becoming more important as the structural engineering field expands and challenges the behavioral limits of current materials. One of the demands for stronger material is rooted in the effects that dynamic loading has on a structure. High strain rates on the order of 10¹ s⁻¹ to 10³ s⁻¹, though a small part of the overall range of loading rates that can occur anywhere between 10⁻⁸ s⁻¹ and 10⁴ s⁻¹ at any point in a structure's life, have very important effects when considering dynamic loading on a structure. High strain rates such as these can cause the material and structure to behave differently than at slower strain rates, which necessitates testing materials under such loading to understand their behavior. Ultra-high performance concrete (UHPC), a relatively new material in the U.S. construction industry, exhibits many enhanced strength and durability properties compared to standard normal-strength concrete. However, the use of this material for high strain rate applications requires an understanding of UHPC's dynamic properties under corresponding loads. One such dynamic property is the increase in compressive strength under high strain rate load conditions, quantified as the dynamic increase factor (DIF). This factor allows a designer to relate the dynamic compressive strength back to the static compressive strength, which generally is a well-established property. Previous research establishes the relationships for the concept of DIF in design. The generally accepted methodology for obtaining high strain rates to study the enhanced behavior of compressive material strength is the split Hopkinson pressure bar (SHPB). In this research, 83 Cor-Tuf UHPC specimens were tested in dynamic compression using a SHPB at Michigan Technological University. The specimens were separated into two categories, ambient cured and thermally treated, with aspect ratios of 0.5:1, 1:1, and 2:1 within each category. There was statistically no significant difference in mean DIF for the aspect ratios and cure regimes considered in this study. DIFs ranged from 1.85 to 2.09. Failure modes were observed to be mostly Type 2, Type 4, or combinations thereof for all specimen aspect ratios when classified according to ASTM C39 fracture pattern guidelines. The Comite Euro-International du Beton (CEB) model for DIF versus strain rate does not accurately predict the DIF for the UHPC data gathered in this study. Additionally, a measurement system analysis was conducted to observe variance within the measurement system, and a general linear model analysis was performed to examine the interaction and main effects that aspect ratio, cannon pressure, and cure method have on the maximum dynamic stress.
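The DIF itself is a simple ratio of dynamic to quasi-static compressive strength; a minimal worked example with illustrative strengths, not measurements from this study:

```python
# Dynamic increase factor: DIF = f_dynamic / f_static. The strengths below
# are illustrative placeholders, not values measured in this research.
def dynamic_increase_factor(f_dynamic_mpa: float, f_static_mpa: float) -> float:
    """Ratio of high-strain-rate strength to quasi-static strength."""
    return f_dynamic_mpa / f_static_mpa

f_static = 200.0    # assumed quasi-static compressive strength (MPa)
f_dynamic = 390.0   # assumed strength from an SHPB test at a high strain rate
print(f"DIF = {dynamic_increase_factor(f_dynamic, f_static):.2f}")
# For comparison, the study reports DIFs between 1.85 and 2.09.
```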
Abstract:
The general model. The aim of this chapter is to introduce a structured overview of the different possibilities available to display and analyze brain electric scalp potentials. First, a general formal model of time-varying distributed EEG potentials is introduced. Based on this model, the most common analysis strategies used in EEG research are introduced and discussed as specific cases of this general model. Both the general model and the particular methods are also expressed in mathematical terms; it is, however, not necessary to understand these terms to understand the chapter. The general model that we propose here is based on the statement made in Chapter 3 that the electric field produced by active neurons in the brain propagates in brain tissue without delay in time. Contrary to other imaging methods that are based on hemodynamic or metabolic processes, EEG scalp potentials are thus "real-time," not delayed and not a priori frequency-filtered measurements. If only a single dipolar source in the brain were active, the temporal dynamics of the activity of that source would be exactly reproduced by the temporal dynamics observed in the scalp potentials produced by that source. This is illustrated in Figure 5.1, where the expected EEG signal of a single source with spindle-like dynamics in time has been computed. The dynamics of the scalp potentials exactly reproduce the dynamics of the source. The amplitude of the measured potentials depends on the relation between the location and orientation of the active source, its strength, and the electrode position.
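The single-source statement can be made concrete in a few lines: if each electrode's scalp potential is the source waveform scaled by a fixed lead-field gain, every channel reproduces the source dynamics exactly and instantaneously. The gains and the spindle-like waveform below are arbitrary illustrations, not values from the chapter:

```python
# Sketch: scalp potentials of a single dipolar source are the source waveform
# s(t) scaled per electrode by a fixed lead-field gain, with no time delay.
import numpy as np

t = np.linspace(0, 1, 500)                                         # 1 s at 500 Hz
s = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 0.5) / 0.12) ** 2)  # spindle-like source

lead_field = np.array([0.8, 0.3, -0.5, 0.1])   # illustrative gains for 4 electrodes
V = np.outer(lead_field, s)                    # channels x time scalp potentials

# Every channel is perfectly correlated (up to sign) with the source waveform:
for ch, v in enumerate(V):
    r = np.corrcoef(v, s)[0, 1]
    print(f"electrode {ch}: gain {lead_field[ch]:+.1f}, corr with source {r:+.2f}")
```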
Abstract:
The response of atmospheric chemistry and dynamics to volcanic eruptions and to a decrease in solar activity during the Dalton Minimum is investigated with the fully coupled atmosphere-ocean chemistry general circulation model SOCOL-MPIOM (modeling tools for studies of SOlar Climate Ozone Links coupled to the Max Planck Institute Ocean Model) covering the time period 1780 to 1840 AD. We carried out several sensitivity ensemble experiments to separate the effects of (i) reduced solar ultraviolet (UV) irradiance, (ii) reduced solar visible and near-infrared irradiance, (iii) enhanced galactic cosmic ray intensity together with less intense solar energetic proton events and auroral electron precipitation, and (iv) volcanic aerosols. The introduced changes of UV irradiance and volcanic aerosols significantly influence stratospheric dynamics in the early 19th century, whereas changes in the visible part of the spectrum and energetic particles have smaller effects. A reduction of UV irradiance by 15%, which represents the presently discussed highest estimate of UV irradiance change caused by solar activity changes, causes a global ozone decrease below the stratopause, reaching as much as 8% in the midlatitudes at 5 hPa, and a significant stratospheric cooling of up to 2 °C in the mid-stratosphere and up to 6 °C in the lower mesosphere. Changes in energetic particle precipitation lead only to minor changes in the yearly averaged temperature fields in the stratosphere. Volcanic aerosols heat the tropical lower stratosphere, allowing more water vapour to enter the tropical stratosphere, which, via HOx reactions, decreases upper stratospheric and mesospheric ozone by roughly 4%. Conversely, heterogeneous chemistry on aerosols reduces stratospheric NOx, leading to a 12% ozone increase in the tropics, whereas a decrease in ozone of up to 5% is found over Antarctica in boreal winter. The linear superposition of the different contributions is not equivalent to the response obtained in a simulation in which all forcing factors are applied together during the Dalton Minimum (DM); this effect is especially visible for NOx/NOy. Thus, this study also shows the non-linear behaviour of the coupled chemistry-climate system. Finally, we conclude that UV irradiance changes and volcanic eruptions in particular dominate the changes in ozone, temperature and dynamics, while the NOx field is dominated by the energetic particle precipitation. Visible radiation changes have only very minor effects on both stratospheric dynamics and chemistry.
Abstract:
In terms of atmospheric impact, the volcanic eruption of Mt. Pinatubo (1991) is the best characterized large eruption on record. We investigate here the model-derived stratospheric warming following the Pinatubo eruption as derived from SAGE II extinction data, including recent improvements in the processing algorithm. This method, termed SAGE_4λ, makes use of the four wavelengths (385, 452, 525 and 1024 nm) of the SAGE II data when available, and uses a data-filling procedure in the opacity-induced "gap" regions. Using SAGE_4λ, we derived aerosol size distributions that properly reproduce the extinction coefficients also at much longer wavelengths. This provides a good basis for calculating the absorption of terrestrial infrared radiation and the resulting stratospheric heating. However, we also show that the use of this data set in a global chemistry-climate model (CCM) still leads to stronger aerosol-induced stratospheric heating than observed, with temperatures in places even higher than the already too-high values found by many models in recent general circulation model (GCM) and CCM intercomparisons. This suggests that the overestimation of the stratospheric warming after the Pinatubo eruption may be ascribed not to an insufficient observational database but to the use of outdated data sets, to deficiencies in the implementation of the forcing data, or to radiative or dynamical model artifacts. Conversely, the SAGE_4λ approach reduces the infrared absorption in the tropical tropopause region, resulting in significantly better agreement with the post-volcanic temperature record at these altitudes.