926 results for truncation lag
Abstract:
A new spectral-based approach is presented to find orthogonal patterns from gridded weather/climate data. The method is based on optimizing the interpolation error variance. The optimally interpolated patterns (OIP) are then given by the eigenvectors of the interpolation error covariance matrix, obtained using the cross-spectral matrix. The formulation of the approach is presented, and it is applied to low-dimensional stochastic toy models and to various reanalysis datasets. In particular, it is found that the lowest-frequency patterns correspond to the largest eigenvalues, that is, variances, of the interpolation error matrix. The approach has been applied to the Northern Hemispheric (NH) and tropical sea level pressure (SLP) and to the Indian Ocean sea surface temperature (SST). Two main OIP patterns are found for the NH SLP, representing respectively the North Atlantic Oscillation and the North Pacific pattern. The leading tropical SLP OIP represents the Southern Oscillation. For the Indian Ocean SST, the leading OIP pattern shows a tripole-like structure having one sign over the eastern and north- and southwestern parts and an opposite sign in the remaining parts of the basin. The pattern is also found to have a high lagged correlation with the Niño-3 index at a 6-month lag.
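The abstract defines the OIP patterns as eigenvectors of the interpolation error covariance matrix, ordered by eigenvalue (variance). The sketch below illustrates only that generic final step, assuming an error covariance matrix has already been built (here a random symmetric placeholder); it is not the paper's spectral construction from the cross-spectral matrix.

```python
import numpy as np

def leading_patterns(err_cov, n_patterns=2):
    """Return the leading orthogonal patterns (eigenvectors) of a symmetric
    error covariance matrix, ordered by decreasing eigenvalue (variance)."""
    eigvals, eigvecs = np.linalg.eigh(err_cov)   # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # largest variance first
    return eigvals[order][:n_patterns], eigvecs[:, order][:, :n_patterns]

# Illustrative use on a random symmetric positive semi-definite placeholder matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
variances, patterns = leading_patterns(A @ A.T, n_patterns=2)
print(variances)
```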
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
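The key step of the nested approach is accumulating the per-stage variance components, from the shortest lag upward, into a rough variogram. A minimal sketch of that accumulation, assuming the components have already been estimated by a hierarchical analysis of variance or REML; the stage lags and component values below are illustrative only, not survey results.

```python
import numpy as np

# Illustrative values only: separating distances increasing in geometric
# progression from stage to stage, and the variance component per stage.
lags = np.array([1.0, 3.0, 9.0, 27.0])        # shortest to longest stage lag (m)
components = np.array([0.8, 0.5, 0.3, 0.2])   # estimated variance components

# Accumulating components from the shortest lag upward gives a rough
# (pointwise) variogram estimate at each stage's separating distance.
rough_variogram = np.cumsum(components)

for h, g in zip(lags, rough_variogram):
    print(f"gamma({h:g} m) ~= {g:.2f}")
```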
Abstract:
Many lowland rivers across northwest Europe exhibit broadly similar behavioural responses to glacial-interglacial transitions and landscape development. Difficulties exist in assessing these, largely because the evidence from many rivers remains limited and fragmentary. Here we address this issue in the context of the river Kennet, a tributary of the Thames, since c. 13,000 cal BP. Some similarities with other rivers are present, suggesting that regional climatic shifts are important controls. The Kennet differs from the regional pattern in a number of ways. The rate of response to sudden climatic change, particularly at the start of the Holocene and also mid-Holocene forest clearance, appears very high. This may reflect abrupt shifts between two catchment scale hydrological states arising from contemporary climates, land use change and geology. Stadial hydrology is dominated by nival regimes, with limited winter infiltration and high spring and summer runoff. Under an interglacial climate, infiltration is more significant. The probable absence of permafrost in the catchment means that a lag between the two states due to its gradual decay is unlikely. Palaeoecology, supported by radiocarbon dates, suggests that, at the very start of the Holocene, a dramatic episode of fine sediment deposition across most of the valley floor occurred, lasting 500-1000 years. A phase of peat accumulation followed as mineral sediment supply declined. A further shift led to tufa deposition, initially in small pools, then across the whole floodplain area, with the river flowing through channels cut in tufa and experiencing repeated avulsion. Major floods, leaving large gravel bars that still form positive relief features on the floodplain, followed mid-Holocene floodplain stability. Prehistoric deforestation is likely to be the cause of this flooding, inducing a major environmental shift with significantly increased surface runoff. Since the Bronze Age, predominantly fine sediments were deposited along the valley with apparently stable channels and vertical floodplain accretion associated with soil erosion and less catastrophic flooding. The Kennet demonstrates that, while a general pattern of river behaviour over time, within a region, may be identifiable, individual rivers are likely to diverge from this. Consequently, it is essential to understand catchment controls, particularly the relative significance of surface and subsurface hydrology. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Increased atmospheric deposition of inorganic nitrogen (N) may lead to increased leaching of nitrate (NO3-) to surface waters. The mechanisms responsible for, and controls on, this leaching are matters of debate. An experimental N addition has been conducted at Gardsjon, Sweden to determine the magnitude and identify the mechanisms of N leaching from forested catchments within the EU-funded project NITREX. The ability of INCA-N, a simple process-based model of catchment N dynamics, to simulate catchment-scale inorganic N dynamics in soil and stream water during the course of the experimental addition is evaluated. Simulations were performed for 1990-2002. Experimental N addition began in 1991. INCA-N was able to successfully reproduce stream and soil water dynamics before and during the experiment. While INCA-N did not correctly simulate the lag between the start of N addition and NO3- breakthrough, the model was able to simulate the state change resulting from increased N deposition. Sensitivity analysis showed that model behaviour was controlled primarily by parameters related to hydrology and vegetation dynamics and secondarily by in-soil processes.
Abstract:
The flavonoid class of plant secondary metabolites plays a multifunctional role in below-ground plant-microbe interactions, with its best known function as signals in the nitrogen-fixing legume-rhizobia symbiosis. Flavonoids enter rhizosphere soil as a result of root exudation and senescence, but little is known about their subsequent fate or impacts on microbial activity. Therefore, the present study examined the sorptive behaviour, biodegradation and impact on dehydrogenase activity (as determined by iodonitrotetrazolium chloride reduction) of the flavonoids naringenin and formononetin in soil. Organic carbon normalised partition coefficients, log K_oc, of 3.12 (formononetin) and 3.19 (naringenin) were estimated from sorption isotherms and, after comparison with literature log K_oc values for compounds whose soil behaviour is better characterised, the test flavonoids were deemed to be moderately sorbed. Naringenin (spiked at 50 μg g^-1) was biodegraded without a detectable lag phase, with concentrations reduced to 0.13 ± 0.01 μg g^-1 at the end of the 96 h time course. Biodegradation of formononetin proceeded after a lag phase of ~24 h, with concentrations reduced to 4.5 ± 1% of the sterile control after 72 h. Most probable number (MPN) analysis revealed that prior to the addition of flavonoids, the soil contained 5.4 × 10^6 MPN g^-1 (naringenin) and 7.9 × 10^5 MPN g^-1 (formononetin) catabolic microbes. Formononetin concentration had no significant (p > 0.05) effect on soil dehydrogenase activity, whereas naringenin concentration had an overall but non-systematic impact (p = 0.045). These results are discussed with reference to likely total and bioavailable concentrations of flavonoids experienced by microbes in the rhizosphere. (c) 2007 Elsevier Ltd. All rights reserved.
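The log K_oc values are obtained from sorption isotherms by normalizing a soil-water partition coefficient to the soil's organic carbon content. A minimal sketch of that calculation for a linear isotherm; the concentrations and the organic carbon fraction f_oc below are made-up placeholders, not values from the study.

```python
import numpy as np

# Illustrative isotherm data: aqueous (Cw, ug mL^-1) and sorbed (Cs, ug g^-1)
# equilibrium concentrations, and an assumed soil organic carbon fraction.
Cw = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
Cs = np.array([6.0, 13.0, 25.0, 52.0, 101.0])
f_oc = 0.025

# Linear isotherm Cs = Kd * Cw, fitted through the origin by least squares,
# then normalized to organic carbon: Koc = Kd / f_oc.
Kd = np.sum(Cw * Cs) / np.sum(Cw**2)
log_Koc = np.log10(Kd / f_oc)
print(f"Kd = {Kd:.1f} mL g^-1, log Koc = {log_Koc:.2f}")
```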
Abstract:
This study uses a Granger causality time series modeling approach to quantitatively diagnose the feedback of daily sea surface temperatures (SSTs) on daily values of the North Atlantic Oscillation (NAO) as simulated by a realistic coupled general circulation model (GCM). Bivariate vector autoregressive time series models are carefully fitted to daily wintertime SST and NAO time series produced by a 50-yr simulation of the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3). The approach demonstrates that there is a small yet statistically significant feedback of SSTs on the NAO. The SST tripole index is found to provide additional predictive information for the NAO beyond that available by using only past values of NAO; the SST tripole is Granger causal for the NAO. Careful examination of local SSTs reveals that much of this effect is due to the effect of SSTs in the region of the Gulf Stream, especially south of Cape Hatteras. The effect of SSTs on NAO is responsible for the slower-than-exponential decay in lag-autocorrelations of NAO notable at lags longer than 10 days. The persistence induced in daily NAO by SSTs causes long-term means of NAO to have more variance than expected from averaging NAO noise if there is no feedback of the ocean on the atmosphere. There are greater long-term trends in NAO than can be expected from aggregating just short-term atmospheric noise, and NAO is potentially predictable provided that future SSTs are known. For example, there is about 10%-30% more variance in seasonal wintertime means of NAO and almost 70% more variance in annual means of NAO due to SST effects than one would expect if NAO were a purely atmospheric process.
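A bivariate Granger causality test of this kind can be run with standard time series tooling. The sketch below uses statsmodels' grangercausalitytests on synthetic placeholder series standing in for the daily NAO and SST tripole indices; it illustrates the generic test, not the paper's carefully fitted VAR models.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic placeholder series standing in for daily NAO and SST tripole
# indices; in the study these come from the HadCM3 simulation.
rng = np.random.default_rng(1)
n = 2000
sst = np.zeros(n)
nao = np.zeros(n)
for t in range(1, n):
    sst[t] = 0.95 * sst[t - 1] + rng.standard_normal()                 # persistent ocean
    nao[t] = 0.6 * nao[t - 1] + 0.05 * sst[t - 1] + rng.standard_normal()  # weak SST feedback

data = pd.DataFrame({"nao": nao, "sst": sst})

# Tests whether lags of the second column (sst) add predictive information for
# the first (nao), i.e. whether SST is Granger causal for NAO, for lags 1..10 days.
results = grangercausalitytests(data[["nao", "sst"]], maxlag=10)
```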
Abstract:
The ability of chlorogenic acid to inhibit oxidation of human low-density lipoprotein (LDL) was studied by in vitro copper-induced LDL oxidation. Chlorogenic acid increased the lag time before LDL oxidation in a dose-dependent manner, by up to 176% of the control value, when added at concentrations of 0.25–1.0 μM. Dose-dependent increases in the lag time of LDL oxidation were also observed, but at much higher concentrations, when chlorogenic acid was incubated with LDL (up to 29.7% increase in lag phase for 10 μM chlorogenic acid) or plasma (up to 16.6% increase in lag phase for 200 μM chlorogenic acid) prior to isolation of LDL, and this indicated that chlorogenic acid was able to bind, at least weakly, to LDL. Bovine serum albumin (BSA) increased the oxidative stability of LDL in the presence of chlorogenic acid. Fluorescence spectroscopy showed that chlorogenic acid binds to BSA with a binding constant of 3.88 × 10^4 M^-1. BSA increased the antioxidant effect of chlorogenic acid, and this was attributed to copper ions binding to BSA, thereby reducing the amount of copper available for inducing lipid peroxidation.
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near‐Earth space, arising from both quasi‐steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang‐Sheeley‐Arge (WSA) empirical model. The mean‐square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event‐based analysis technique is developed in which high‐speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the numbers of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
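One reported check is whether the predictions carry a systematic timing lag, assessed through the mean-square error between observed and predicted speed series. A minimal sketch of such a check, scanning integer day shifts for the one that minimizes MSE (a shift of zero would indicate no systematic lag); the series below are placeholders, not L1 or WSA data.

```python
import numpy as np

def best_shift(observed, predicted, max_shift=7):
    """Shift the prediction by -max_shift..+max_shift samples (e.g. days) and
    return the shift that minimizes the mean-square error against observations."""
    best = (None, np.inf)
    for shift in range(-max_shift, max_shift + 1):
        if shift >= 0:
            o, p = observed[shift:], predicted[:len(predicted) - shift]
        else:
            o, p = observed[:shift], predicted[-shift:]
        mse = np.mean((o - p) ** 2)
        if mse < best[1]:
            best = (shift, mse)
    return best

# Placeholder daily series standing in for observed and modelled solar wind speed.
rng = np.random.default_rng(2)
obs = 400 + 50 * rng.standard_normal(365)
pred = np.roll(obs, 2) + 20 * rng.standard_normal(365)  # prediction lags the observations
print(best_shift(obs, pred))
```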
Abstract:
The climatology of the OPA/ARPEGE-T21 coupled general circulation model (GCM) is presented. The atmosphere GCM has a T21 spectral truncation and the ocean GCM has a 2°×1.5° average resolution. A 50-year climatic simulation is performed using the OASIS coupler, without flux correction techniques. The mean state and seasonal cycle for the last 10 years of the experiment are described and compared to the corresponding uncoupled experiments and to climatology when available. The model reasonably simulates most of the basic features of the observed climate. Energy budgets and transports in the coupled system, of importance for climate studies, are assessed and prove to be within available estimates. After an adjustment phase of a few years, the model stabilizes around a mean state where the tropics are warm and resemble a permanent ENSO, the Southern Ocean warms and almost no sea-ice is left in the Southern Hemisphere. The atmospheric circulation becomes more zonal and symmetric with respect to the equator. Once those systematic errors are established, the model shows little secular drift, the small remaining trends being mainly associated with horizontal physics in the ocean GCM. The stability of the model is shown to be related to qualities already present in the uncoupled GCMs used, namely a balanced radiation budget at the top of the atmosphere and a tight ocean thermocline.
Abstract:
Direct observations from an array of current meter moorings across the Mozambique Channel in the south-west Indian Ocean are presented covering a period of more than 4 years. This allows an analysis of the volume transport through the channel, including the variability on interannual and seasonal time scales. The mean volume transport over the entire observational period is 16.7 Sv poleward. Seasonal variations have a magnitude of 4.1 Sv and can be explained from the variability in the wind field over the western part of the Indian Ocean. Interannual variability has a magnitude of 8.9 Sv and is large compared to the mean. This time scale of variability could be related to variability in the Indian Ocean Dipole (IOD), showing that it forms part of the variability in the ocean-climate system of the entire Indian Ocean. By modulating the strength of the South Equatorial Current, the weakening (strengthening) tropical gyre circulation during a period of positive (negative) IOD index leads to a weakened (strengthened) southward transport through the channel, with a time lag of about a year. The relatively strong interannual variability stresses the importance of long-term direct observations.
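The reported link between the IOD and the channel transport, with a lag of about a year, is the kind of relationship a lagged correlation analysis reveals. A minimal sketch with placeholder monthly series (not the mooring data), where a positive lag means the index leads the transport.

```python
import numpy as np

def lagged_correlation(index, transport, max_lag_months=24):
    """Correlate a climate index with transport at lags 0..max_lag_months,
    where positive lag means the index leads the transport."""
    out = {}
    for lag in range(0, max_lag_months + 1):
        if lag == 0:
            x, y = index, transport
        else:
            x, y = index[:-lag], transport[lag:]
        out[lag] = np.corrcoef(x, y)[0, 1]
    return out

# Placeholder monthly series standing in for the IOD index and the
# Mozambique Channel transport anomaly (built to lag the index by ~12 months).
rng = np.random.default_rng(3)
iod = rng.standard_normal(120)
trans = np.concatenate([rng.standard_normal(12), 0.7 * iod[:-12]])
trans = trans + 0.5 * rng.standard_normal(120)

corrs = lagged_correlation(iod, trans)
print(max(corrs, key=lambda k: abs(corrs[k])))  # lag (months) of strongest correlation
```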
Abstract:
A multivariate fit to the variation in global mean surface air temperature anomaly over the past half century is presented. The fit procedure allows for the effect of response time on the waveform, amplitude and lag of each radiative forcing input, and each is allowed to have its own time constant. It is shown that the contribution of solar variability to the temperature trend since 1987 is small and downward; the best estimate is -1.3% and the 2σ confidence level sets the uncertainty range of -0.7 to -1.9%. The result is the same if one quantifies the solar variation using galactic cosmic ray fluxes (for which the analysis can be extended back to 1953) or the most accurate total solar irradiance data composite. The rise in the global mean air surface temperatures is predominantly associated with a linear increase that represents the combined effects of changes in anthropogenic well-mixed greenhouse gases and aerosols, although, in recent decades, there is also a considerable contribution by a relative lack of major volcanic eruptions. The best estimate is that the anthropogenic factors contribute 75% of the rise since 1987, with an uncertainty range (set by the 2σ confidence level using an AR(1) noise model) of 49–160%; thus, the uncertainty is large, but we can state that at least half of the temperature trend comes from the linear term and that this term could explain the entire rise. The results are consistent with the intergovernmental panel on climate change (IPCC) estimates of the changes in radiative forcing (given for 1961–1995) and are here combined with those estimates to find the response times, equilibrium climate sensitivities and pertinent heat capacities (i.e. the depth into the oceans to which a given radiative forcing variation penetrates) of the quasi-periodic (decadal-scale) input forcing variations. As shown by previous studies, the decadal-scale variations do not penetrate as deeply into the oceans as the longer term drifts and have shorter response times. Hence, conclusions about the response to century-scale forcing changes (and hence the associated equilibrium climate sensitivity and the temperature rise commitment) cannot be made from studies of the response to shorter period forcing changes.
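The fit described here lets each radiative forcing act through its own response time before contributing to the temperature series. A minimal sketch of that generic ingredient, convolving each forcing with a normalized exponential response of its own time constant and then solving a least-squares fit for the amplitudes; the forcing series, time constants and coefficients are placeholders, not the paper's inputs or results.

```python
import numpy as np

def exp_response(forcing, tau, dt=1.0):
    """Convolve a forcing series with a normalized exponential response
    of time constant tau (in the same time units as dt)."""
    t = np.arange(len(forcing)) * dt
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(forcing, kernel)[:len(forcing)]

# Placeholder annual forcing series and time constants (years).
rng = np.random.default_rng(4)
years = np.arange(1950, 2008)
forcings = {
    "solar": np.sin(2 * np.pi * (years - 1950) / 11.0),         # ~11-yr cycle
    "volcanic": -(rng.random(len(years)) < 0.1).astype(float),  # occasional dips
    "anthro": 0.02 * (years - 1950),                            # slow linear rise
}
taus = {"solar": 1.0, "volcanic": 2.0, "anthro": 15.0}

# Design matrix of lagged/smoothed forcings, synthetic "temperature", and fit.
X = np.column_stack([exp_response(f, taus[k]) for k, f in forcings.items()])
temperature = X @ np.array([0.1, 0.3, 1.0]) + 0.05 * rng.standard_normal(len(years))
amplitudes, *_ = np.linalg.lstsq(X, temperature, rcond=None)
print(dict(zip(forcings, amplitudes)))
```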
Abstract:
Aims: Accommodation to overcome hypermetropia is implicated in emmetropisation. This study recorded accommodation responses in a wide range of emmetropising infants and older children with clinically significant hypermetropia to assess common characteristics and differences. Methods: A PlusoptiX SO4 photorefractor in a laboratory setting was used to collect binocular accommodation data from participants viewing a detailed picture target moving between 33 cm and 2 m. 38 typically developing infants were studied between 6 and 26 weeks of age and were compared with cross-sectional data from children 5-9 years of age with clinically significant hypermetropia (n=15), corrected fully accommodative strabismus (n=14) and 27 age-matched controls. Results: Hypermetropes of all ages under-accommodated compared to controls at all distances, whether corrected or not (p < 0.00001), and the lag was related to manifest refraction. Emmetropising infants under-accommodated most in the distance, while the hypermetropic patient groups under-accommodated most for near. Conclusions: Better accommodation for near than distance is demonstrated in those hypermetropic children who go on to emmetropise. This supports the approach of avoiding refractive correction in such children. In contrast, hypermetropic children referred for treatment for reduced distance visual acuity are not likely to habitually accommodate to overcome residual hypermetropia left by an under-correction.
Abstract:
Protein oxidation within cells exposed to oxidative free radicals has been reported to occur in an uninhibited manner with both hydroxyl and peroxyl radicals. In contrast, THP-1 cells exposed to peroxyl radicals (ROO•) generated by thermal decomposition of the azo compound AAPH showed a distinct lag phase of at least 6 h, during which time no protein oxidation or cell death was observed. Glutathione appears to be the source of the lag phase, as cellular levels were observed to rapidly decrease during this period. Removal of glutathione with buthionine sulfoximine eliminated the lag phase. At the end of the lag phase there was a rapid loss of cellular MTT reducing activity and the appearance of large numbers of propidium iodide/annexin-V staining necrotic cells, with only 10% of the cells appearing apoptotic (annexin-V staining only). Cytochrome c was released into the cytoplasm after 12 h of incubation but no increase in caspase-3 activity was found at any time point. We propose that the rapid loss of glutathione caused by the AAPH peroxyl radicals resulted in the loss of caspase activity and the initiation of protein oxidation. The lack of caspase-3 activity appears to have caused the cells to undergo necrosis in response to protein oxidation and other cellular damage. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
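For the simplest, single-component case the connection between a zero-truncated Poisson fit and the Horvitz-Thompson population size estimate can be written in a few lines. The sketch below covers only that unmixed case, not the paper's nonparametric mixture (NPMLE) machinery; the capture counts are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def fit_truncated_poisson(counts):
    """MLE of the Poisson rate from zero-truncated counts: solves
    lambda / (1 - exp(-lambda)) = sample mean (valid when the mean exceeds 1)."""
    m = np.mean(counts)
    return brentq(lambda lam: lam / (1.0 - np.exp(-lam)) - m, 1e-8, 100.0)

def horvitz_thompson_size(counts):
    """Population size estimate N = n / (1 - exp(-lambda_hat)), where n is the
    number of observed (captured at least once) units."""
    lam = fit_truncated_poisson(counts)
    return len(counts) / (1.0 - np.exp(-lam))

# Illustrative capture counts: each entry is the number of times an observed
# individual was captured; zeros are unobservable by construction.
counts = np.array([1] * 60 + [2] * 25 + [3] * 10 + [4] * 5)
print(horvitz_thompson_size(counts))
```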