948 results for diffusive viscoelastic model, global weak solution, error estimate
Abstract:
Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both data and the reference gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root mean square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age-gas age difference (Δage) at the firn-ice transition is calculated to be 182 +3/−9 yr. We further present the first intercomparison study of firn air models, where we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.
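The multi-tracer tuning described above minimises an objective root mean square criterion over the measured gases. The sketch below shows one generic way such a weighted RMS misfit could be written; the function name, dictionary layout and weighting scheme are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def weighted_rms_misfit(modeled, measured, sigma, weights):
    """Weighted RMS misfit between modeled and measured firn mixing ratios.

    modeled, measured : dict mapping tracer name -> array of mixing ratios
                        at the sampled depths
    sigma             : dict mapping tracer name -> combined 1-sigma
                        uncertainty (measurement + atmospheric history)
    weights           : dict mapping tracer name -> objective weight
    """
    total, norm = 0.0, 0.0
    for gas in measured:
        resid = (np.asarray(modeled[gas]) - np.asarray(measured[gas])) / sigma[gas]
        total += weights[gas] * np.mean(resid ** 2)
        norm += weights[gas]
    return float(np.sqrt(total / norm))
```

In practice the depth-diffusivity profile would be adjusted iteratively, for example with a standard optimizer, until this misfit stops decreasing.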
Abstract:
Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates, consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil-fuel combustion and cement production (EFF) are based on energy statistics, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated for the first time in this budget with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2 and land cover change (some including nitrogen–carbon interactions). All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2003–2012), EFF was 8.6 ± 0.4 GtC yr−1, ELUC 0.9 ± 0.5 GtC yr−1, GATM 4.3 ± 0.1 GtC yr−1, SOCEAN 2.5 ± 0.5 GtC yr−1, and SLAND 2.8 ± 0.8 GtC yr−1. For the year 2012 alone, EFF grew to 9.7 ± 0.5 GtC yr−1, 2.2% above 2011, reflecting a continued growing trend in these emissions; GATM was 5.1 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and, assuming an ELUC of 1.0 ± 0.5 GtC yr−1 (based on the 2001–2010 average), SLAND was 2.7 ± 0.9 GtC yr−1. GATM was high in 2012 compared to the 2003–2012 average, almost entirely reflecting the high EFF. The global atmospheric CO2 concentration reached 392.52 ± 0.10 ppm averaged over 2012. We estimate that EFF will increase by 2.1% (1.1–3.1%) to 9.9 ± 0.5 GtC in 2013, 61% above emissions in 1990, based on projections of world gross domestic product and recent changes in the carbon intensity of the economy. With this projection, cumulative emissions of CO2 will reach about 535 ± 55 GtC for 1870–2013, about 70% from EFF (390 ± 20 GtC) and 30% from ELUC (145 ± 50 GtC). This paper also documents any changes in the methods and data sets used in this new carbon budget from previous budgets (Le Quéré et al., 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center.
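As a consistency check, the decadal means quoted above can be balanced with the budget closure implied by the abstract (emission sources equal atmospheric growth plus ocean and land sinks); the small mismatch reflects rounding of the individual terms:

```latex
E_{\mathrm{FF}} + E_{\mathrm{LUC}} \;=\; G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}},
\qquad
8.6 + 0.9 \;\approx\; 4.3 + 2.5 + 2.8 \ \ \mathrm{GtC\,yr^{-1}}.
```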
Abstract:
OBJECTIVES Cerebral hypoxic-ischaemic injury following cardiac arrest is a devastating disease affecting thousands of patients each year. There is a complex interaction between post-resuscitation injury after whole-body ischaemia-reperfusion and cerebral damage which cannot be explored in in vitro systems alone; there is a need for animal models. In this study, we describe and evaluate the feasibility and efficiency of our simple rodent cardiac arrest model. METHODS Ten Wistar rats were subjected to 9 or 10 minutes of cardiac arrest. Cardiac arrest was induced with a mixture of the short-acting beta-blocking drug esmolol and potassium chloride. RESULTS All animals could be resuscitated within 1 minute and survived until day 5. The general health score and neurobehavioural testing indicated substantial impairment after cardiac arrest, without differences between groups. Histological examination of the hippocampus CA1 segment, the most vulnerable segment of the cerebrum, demonstrated extensive damage in the cresyl violet staining, as well as in the Fluoro-Jade B staining and in the Iba-1 staining, indicating recruitment of microglia after the hypoxic-ischaemic event. Again, there were no differences between the 9- and 10-minute cardiac arrest groups. DISCUSSION We were able to establish simple and reproducible 9- and 10-minute rodent cardiac arrest models with a well-defined no-flow time. Extensive damage can be found in the hippocampus CA1 segment. The lack of difference between the 9- and 10-minute cardiac arrest groups in the neuropsychological testing, the open field test and the histological evaluations is mainly due to the small sample size.
Abstract:
Field: Surgery. Abstract: OBJECTIVES: The number of heart transplantations is limited by donor organ availability. Donation after circulatory determination of death (DCDD) could significantly improve graft availability; however, organs undergo warm ischaemia followed by reperfusion, leading to tissue damage. Laboratory studies suggest that mechanical postconditioning [(MPC), brief, intermittent periods of ischaemia at the onset of reperfusion] can limit reperfusion injury; however, clinical translation has been disappointing. We hypothesized that MPC-induced cardioprotection depends on fatty acid levels at reperfusion. METHODS: Experiments were performed with an isolated rat heart model of DCDD. Hearts of male Wistar rats (n = 42) underwent working-mode perfusion for 20 min (baseline), 27 min of global ischaemia and 60 min of reperfusion with or without MPC (two cycles of 30 s reperfusion/30 s ischaemia) in the presence or absence of high fat [(HF), 1.2 mM palmitate]. Haemodynamic parameters, necrosis factors and oxygen consumption (O2C) were assessed. The recovery rate was calculated as the value at 60 min reperfusion expressed as a percentage of the mean baseline value. The Kruskal-Wallis test was used to provide an overview of differences between experimental groups, and pairwise comparisons were performed to compare specific time points of interest for parameters with significant overall results. RESULTS: Percent recovery of left ventricular (LV) work [developed pressure (DP)-heart rate product] at 60 min reperfusion was higher in hearts reperfused without fat versus with fat (58 ± 8 vs 23 ± 26%, P < 0.01) in the absence of MPC. In the absence of fat, MPC did not affect post-ischaemic haemodynamic recovery. Among the hearts reperfused with HF, two significantly different subgroups emerged according to recovery of LV work: low-recovery (LoR) and high-recovery (HiR) subgroups. At 60 min reperfusion, recovery was increased with MPC versus no MPC for LV work (79 ± 6 vs 55 ± 7, respectively, P < 0.05) in HiR subgroups and for DP (40 ± 27 vs 4 ± 2%), dP/dtmax (37 ± 24 vs 5 ± 3%) and dP/dtmin (33 ± 21 vs 5 ± 4%, P < 0.01 for all) in LoR subgroups. CONCLUSIONS: The effects of MPC depend on energy substrate availability: MPC increased recovery of LV work in the presence, but not in the absence, of HF. Controlled reperfusion may be useful for therapeutic strategies aimed at improving post-ischaemic recovery of cardiac DCDD grafts, and ultimately in increasing donor heart availability.
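The recovery rate defined in the methods can be written compactly; here X stands for any of the assessed haemodynamic parameters (for example LV work):

```latex
\mathrm{Recovery}\,(\%) \;=\; 100 \times \frac{X_{\text{60 min reperfusion}}}{\bar{X}_{\text{baseline}}}
```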
Abstract:
The Everglades Depth Estimation Network (EDEN) is an integrated network of realtime water-level monitoring, ground-elevation modeling, and water-surface modeling that provides scientists and managers with current (2000-present), online water-stage and water-depth information for the entire freshwater portion of the Greater Everglades. Continuous daily spatial interpolations of the EDEN network stage data are presented on a grid with 400-square-meter spacing. EDEN offers a consistent and documented dataset that can be used by scientists and managers to: (1) guide large-scale field operations, (2) integrate hydrologic and ecological responses, and (3) support biological and ecological assessments that measure ecosystem responses to the implementation of the Comprehensive Everglades Restoration Plan (CERP) (U.S. Army Corps of Engineers, 1999). The target users are biologists and ecologists examining trophic level responses to hydrodynamic changes in the Everglades. The first objective of this report is to validate the spatially continuous EDEN water-surface model for the Everglades, Florida, developed by Pearlstine et al. (2007) by using an independent field-measured dataset. The second objective is to demonstrate two applications of the EDEN water-surface model: to estimate site-specific ground elevation by using the validated EDEN water-surface model and observed water-depth data; and to create water-depth hydrographs for tree islands. We found that there are no statistically significant differences between model-predicted and field-observed water-stage data in both southern Water Conservation Area (WCA) 3A and WCA 3B. Tree island elevations were derived by subtracting field water-depth measurements from the predicted EDEN water surface. Water-depth hydrographs were then computed by subtracting tree island elevations from the EDEN water stage. Overall, the model is reliable, with a root mean square error (RMSE) of 3.31 cm. By region, the RMSE is 2.49 cm and 7.77 cm in WCA 3A and 3B, respectively. This new landscape-scale hydrological model has wide applications for ongoing research and management efforts that are vital to restoration of the Florida Everglades. The accurate, high-resolution hydrological data, generated over broad spatial and temporal scales by the EDEN model, provide a previously missing key to understanding the habitat requirements and linkages among native and invasive populations, including fish, wildlife, wading birds, and plants. The EDEN model is a powerful tool that could be adapted for other ecosystem-scale restoration and management programs worldwide.
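The two applications described above are simple differences of water-surface, water-depth, and ground-elevation quantities. A minimal sketch follows, assuming all inputs share the same vertical datum; the function names and arguments are illustrative and not part of the EDEN software itself.

```python
import numpy as np

def island_ground_elevation(eden_stage_at_survey, observed_water_depth):
    """Ground elevation = EDEN water-surface (stage) minus the field-measured
    water depth on the survey date (same vertical datum for both)."""
    return eden_stage_at_survey - observed_water_depth

def depth_hydrograph(eden_stage_series, ground_elevation):
    """Daily water depth over a tree island = EDEN stage minus ground elevation;
    negative values indicate the water surface lies below the island surface."""
    return np.asarray(eden_stage_series) - ground_elevation

def rmse(predicted, observed):
    """Root mean square error between model-predicted and field-observed stages,
    the statistic behind the reported 3.31 cm overall accuracy."""
    diff = np.asarray(predicted) - np.asarray(observed)
    return float(np.sqrt(np.mean(diff ** 2)))
```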
Abstract:
This paper proposes asymptotically optimal tests for unstable parameter processes under the realistic circumstance that the researcher has little information about the unstable parameter process and the error distribution, and suggests conditions under which knowledge of those processes does not provide asymptotic power gains. I first derive a test under a known error distribution, which is asymptotically equivalent to LR tests for correctly identified unstable parameter processes under suitable conditions. The conditions are weak enough to cover a wide range of unstable processes, such as various types of structural breaks and time-varying parameter processes. The test is then extended to semiparametric models in which the underlying distribution is unknown and treated as an infinite-dimensional nuisance parameter. The semiparametric test is adaptive in the sense that its asymptotic power function is equivalent to the power envelope under a known error distribution.
Abstract:
Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations, in which the use/non-use equation explained the probability of making a doctor visit in the past twelve months and the utilization equation estimated the demand for primary care conditional on at least one visit. Among independent variables, wage rate and income did not affect the demand for primary care, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01). Existence of a chronic disease was associated with 0.63 more visits, disability status was associated with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting doctors in the past twelve months was 85% and the average number of visits was 3.45. The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regressions, was a useful approach to demand estimation for primary care in urban settings.
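The two-equation structure (a probit participation equation plus a count model for utilization) can be sketched with standard tools. The snippet below uses synthetic placeholder data and statsmodels; it is a rough illustration of the modeling strategy, not the study's actual specification, and a truncated count model would be a more faithful choice for the conditional part.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 3)))           # placeholder covariates
visits = rng.negative_binomial(5, 0.6, size=n)          # placeholder annual visit counts

# Equation 1: probit for any doctor visit in the past twelve months
any_use = (visits > 0).astype(int)
probit_fit = sm.Probit(any_use, X).fit(disp=0)

# Equation 2: negative binomial for the number of visits among users
users = visits > 0
nb_fit = sm.NegativeBinomial(visits[users], X[users]).fit(disp=0)

print(probit_fit.params)
print(nb_fit.params)
```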
Abstract:
Conventional designs of animal bioassays allocate the same number of animals into control and dose groups to explore the spontaneous and induced tumor incidence rates, respectively. The purposes of such bioassays are (a) to determine whether or not the substance exhibits carcinogenic properties, and (b) if so, to estimate the human response at relatively low doses. In this study, it has been found that the optimal allocation to the experimental groups which, in some sense, minimizes the error of the estimated response for low-dose extrapolation is associated with the dose level and tumor risk. The number of dose levels has been investigated under an affordable experimental cost. The administered-dose pattern of 1 MTD, 1/2 MTD, 1/4 MTD, etc., plus a control gives the most reasonable arrangement for the low-dose extrapolation purpose. An arrangement of five dose groups may make the highest dose trivial. A four-dose design can circumvent this problem and also has one degree of freedom for testing the goodness-of-fit of the response model. An example using the data on liver tumors induced in mice in a lifetime study of feeding dieldrin (Walker et al., 1973) is implemented with the methodology. The results are compared with conclusions drawn from other studies.
Abstract:
This study proposed a novel statistical method that models the multiple outcomes and the missing data process jointly using item response theory. This method follows the "intent-to-treat" principle in clinical trials and accounts for the correlation between the outcomes and the missing data process. It may provide a good solution for studies of chronic mental disorders. The simulation study demonstrated that, if the true model is the proposed model with moderate or strong correlation, ignoring this correlation may lead to overestimation of the treatment effect and result in more type I error than the specified level. Even if the correlation is small, the performance of the proposed model is as good as that of the naïve response model. Thus, the proposed model is robust across different correlation settings when the data are generated by the proposed model.
Abstract:
The infant mortality rate (IMR) is considered to be one of the most important indices of a country's well-being. Countries around the world and health organizations such as the World Health Organization are dedicating their resources, knowledge and energy to reducing infant mortality rates. The well-known Millennium Development Goal 4 (MDG 4), whose aim is to achieve a two-thirds reduction of the under-five mortality rate between 1990 and 2015, is an example of this commitment. In this study our goal is to model the trends in IMR from the 1950s to the 2010s for selected countries. We would like to know how the IMR is changing over time and how it differs across countries. IMR data collected over time form a time series, and the repeated observations of an IMR time series are not statistically independent; so, in modeling the trend of IMR, it is necessary to account for these correlations. We proposed to use the generalized least squares method in a general linear models setting to deal with the variance-covariance structure in our model. In order to estimate the variance-covariance matrix, we referred to time-series models, especially the autoregressive and moving-average models. Furthermore, we compared results from the general linear model with a correlation structure to those from the ordinary least squares method, which does not take the correlation structure into account, to check how much the estimates change.
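The GLS-versus-OLS comparison described above can be sketched with statsmodels, which provides a feasible GLS estimator with autoregressive errors (GLSAR). The series below is synthetic and purely illustrative; the real analysis would use the observed IMR series for each country, and AR(1) is only one of the autoregressive/moving-average structures mentioned.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic annual IMR series (deaths per 1,000 live births), 1950-2010
rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
imr = 140.0 - 1.8 * (years - 1950) + np.cumsum(rng.normal(0.0, 1.5, years.size))
imr = np.clip(imr, 5.0, None)

X = sm.add_constant(years - 1950)

# OLS ignores the serial correlation between repeated IMR observations
ols_fit = sm.OLS(imr, X).fit()

# GLSAR iteratively estimates an AR(1) error structure, then refits by
# feasible generalized least squares
glsar_fit = sm.GLSAR(imr, X, rho=1).iterative_fit(maxiter=10)

print("OLS trend:   %.3f (se %.3f)" % (ols_fit.params[1], ols_fit.bse[1]))
print("GLSAR trend: %.3f (se %.3f)" % (glsar_fit.params[1], glsar_fit.bse[1]))
```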
Abstract:
This dataset characterizes the evolution of western African precipitation indicated by marine sediment geochemical records in comparison to transient simulations with the CCSM3 global climate model throughout the Last Interglacial (130-115 ka). It contains (1) defined tie-points (age models), newly published stable isotopes of benthic foraminifera and Al/Si log-ratios of eight marine sediment cores from the western African margin, and (2) annual and seasonal rainfall anomalies (relative to pre-industrial values) for six characteristic latitudinal bands in western Africa simulated by CCSM3 (two transient simulations: one non-accelerated and one accelerated experiment).
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Therefore, independently observed mean dynamic topography data are valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only the mean dynamic topography on the particular ocean model grid is required, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach to combining these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie, 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Further, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing steps of along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available. The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
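For reference, the first sentence corresponds to the standard geostrophic relations: with the mean dynamic topography defined as the mean sea surface height minus the geoid height, the surface velocities follow from its horizontal gradients (g is the gravitational acceleration and f the Coriolis parameter):

```latex
\eta = \mathrm{MSS} - N, \qquad
u_g = -\frac{g}{f}\,\frac{\partial \eta}{\partial y}, \qquad
v_g = \frac{g}{f}\,\frac{\partial \eta}{\partial x}
```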