123 results for Time equivalent approach
Abstract:
A pot experiment was conducted to test the hypothesis that decomposition of organic matter in sewage sludge and the consequent formation of dissolved organic compounds (DOC) would lead to an increase in the bioavailability of heavy metals. Two Brown Earth soils, one with a clayey loam texture (CL) and the other a loamy sand (LS), were mixed with sewage sludge at rates equivalent to 0, 10 and 50 t dry sludge ha^-1, and the pots were sown with ryegrass (Lolium perenne L.). Organic matter content and heavy metal availability, assessed by soil extraction with 0.05 M CaCl2, were monitored for two years after addition of the sludge, while plant uptake was monitored for one year. The concentrations of Cd and Ni in both the ryegrass and the soil extracts increased slightly but significantly during the first year. In most cases this increase was most evident at the higher sludge application rate (50 t ha^-1). In the second year, however, metal availability reached a plateau. Zinc concentrations in the ryegrass did not increase, but those in the CaCl2 extracts did during the first year. In contrast, organic matter content decreased rapidly in the first months of the first year and much more slowly in the second (total decrease of 16%). The concentrations of DOC increased significantly in the more organic-rich CL soil over the course of the two years. The pattern followed by the decomposition of organic matter with time and the production of DOC may provide at least a partial explanation for the trend towards increased metal availability.
Abstract:
Much of the writing on urban regeneration in the UK has been focused on the types of urban spaces that have been created in city centres. Less has been written about the issue of when the benefits of regeneration could and should be delivered to a range of different interests, and the different time frames that exist in any development area. Different perceptions of time have been reflected in dominant development philosophies in the UK and elsewhere. The trickle-down agendas of the 1980s, for example, were criticised for their focus on the short-term time frames and needs of developers, often at the expense of those of local communities. The recent emergence of sustainability discourses, however, ostensibly changes the time focus of development and promotes a broader concern with new imagined futures. This paper draws on the example of development in Salford Quays, in the North West of England, to argue that more attention needs to be given to the politics of space-time in urban development processes. It begins by discussing the importance and relevance of this approach before turning to the case study and the ways in which the local politics of space-time has influenced development agendas and outcomes. The paper argues that such an approach harbours the potential for more progressive, far-reaching, and sustainable development agendas to be developed and implemented.
Abstract:
A new calibration curve for the conversion of radiocarbon ages to calibrated (cal) ages has been constructed and internationally ratified to replace IntCal98, which extended from 0-24 cal kyr BP (Before Present, 0 cal BP = AD 1950). The new calibration data set for terrestrial samples extends from 0-26 cal kyr BP, but with much higher resolution beyond 11.4 cal kyr BP than IntCal98. Dendrochronologically-dated tree-ring samples cover the period from 0-12.4 cal kyr BP. Beyond the end of the tree rings, data from marine records (corals and foraminifera) are converted to the atmospheric equivalent with a site-specific marine reservoir correction to provide terrestrial calibration from 12.4-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a coherent statistical approach based on a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The tree-ring data sets, sources of uncertainty, and regional offsets are discussed here. The marine data sets and the calibration curve for marine samples from the surface mixed layer (Marine04) are discussed in brief, but details are presented in Hughen et al. (this issue a). We do not make a recommendation for calibration beyond 26 cal kyr BP at this time; however, potential calibration data sets are compared in another paper (van der Plicht et al., this issue).
Abstract:
Many currently available drugs show unfavourable physicochemical properties for delivery into or across the skin, and temporary chemical modulation of the penetrant is one option for achieving improved delivery properties. Pro-drugs are chemical derivatives in which an active drug is covalently bonded to an inactive pro-moiety in order to overcome pharmaceutical and pharmacokinetic barriers. A pro-drug relies upon conversion within the body to release the parent active drug (and pro-moiety) to elicit its pharmacological effect. The main drawback of this approach is that the pro-moiety is essentially unwanted ballast which, when released, can lead to adverse effects. The term ‘co-drug’ refers to two or more therapeutic compounds active against the same disease bonded via a covalent chemical linkage, and it is this approach which is reviewed for the first time in the current article. For topically applied co-drugs, each moiety is liberated in situ, either chemically or enzymatically, once the stratum corneum barrier has been overcome by the co-drug. Advantages include synergistic modulation of the disease process, enhancement of drug delivery and pharmacokinetic properties, and the potential to enhance stability by masking labile functional groups. The amount of published work on co-drugs is limited, but the available data suggest the co-drug concept could provide a significant therapeutic improvement in dermatological diseases. However, the applicability of the co-drug approach is subject to strict limitations, pertaining mainly to the availability of compatible moieties and the physicochemical properties of the overall molecule.
Abstract:
This series of experiments investigated the role of a prefrontal cortical-dorsal striatal circuit in attention, using a continuous performance task of sustained and spatially divided visual attention. A unilateral excitotoxic lesion of the medial prefrontal cortex and a contralateral lesion of the medial caudate-putamen were used to "disconnect" the circuit. Control groups of rats with unilateral lesions of either structure were tested in the same task. Behavioral controls included testing the effects of the disconnection lesion on Pavlovian discriminated approach behavior. The disconnection lesion produced a significant reduction in the accuracy of performance in the attentional task but did not impair Pavlovian approach behavior or affect locomotor or motivational variables, providing evidence for the involvement of this medial prefrontal corticostriatal system in aspects of visual attentional function.
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near‐Earth space, arising from both quasi‐steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang‐Sheeley‐Arge (WSA) empirical model. The mean‐square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown to frequently be an inadequate “figure of merit” for assessing solar wind speed predictions. A complementary, event‐based analysis technique is developed in which high‐speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
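The systematic-lag check described in this abstract can be sketched as a lag-scanning mean-square error comparison (an illustrative sketch, not the CISM/WSA code; function names and the synthetic series are hypothetical):

```python
import numpy as np

def mse_at_lag(observed, predicted, lag):
    """MSE between the observed series and the model series offset by `lag` samples.
    A positive lag compares each prediction against a later observation."""
    if lag > 0:
        o, p = observed[lag:], predicted[:-lag]
    elif lag < 0:
        o, p = observed[:lag], predicted[-lag:]
    else:
        o, p = observed, predicted
    return float(np.mean((o - p) ** 2))

def best_lag(observed, predicted, max_lag):
    """Scan a symmetric window of lags and return the MSE-minimizing one.
    A best lag of 0 indicates no systematic timing offset in the predictions."""
    lags = range(-max_lag, max_lag + 1)
    return min(lags, key=lambda L: mse_at_lag(observed, predicted, L))
```

Applied to paired observed and predicted solar wind speed series, `best_lag` reproduces the kind of systematic-lag diagnostic reported above, while `mse_at_lag` at lag 0 gives the plain MSE figure of merit.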
Abstract:
The International System of Units (SI) is founded on seven base units, the metre, kilogram, second, ampere, kelvin, mole and candela corresponding to the seven base quantities of length, mass, time, electric current, thermodynamic temperature, amount of substance and luminous intensity. At its 94th meeting in October 2005, the International Committee for Weights and Measures (CIPM) adopted a recommendation on preparative steps towards redefining the kilogram, ampere, kelvin and mole so that these units are linked to exactly known values of fundamental constants. We propose here that these four base units should be given new definitions linking them to exactly defined values of the Planck constant h, elementary charge e, Boltzmann constant k and Avogadro constant N_A, respectively. This would mean that six of the seven base units of the SI would be defined in terms of true invariants of nature. In addition, not only would these four fundamental constants have exactly defined values but also the uncertainties of many of the other fundamental constants of physics would be either eliminated or appreciably reduced. In this paper we present the background and discuss the merits of these proposed changes, and we also present possible wordings for the four new definitions. We also suggest a novel way to define the entire SI explicitly using such definitions without making any distinction between base units and derived units. We list a number of key points that should be addressed when the new definitions are adopted by the General Conference on Weights and Measures (CGPM), possibly by the 24th CGPM in 2011, and we discuss the implications of these changes for other aspects of metrology.
Abstract:
We solve a Dirichlet boundary value problem for the Klein–Gordon equation posed in a time-dependent domain. Our approach is based on a general transform method for solving boundary value problems for linear and integrable nonlinear PDE in two variables. Our results consist of the inversion formula for a generalized Fourier transform, and of the application of this generalized transform to the solution of the boundary value problem.
Abstract:
We solve an initial-boundary value problem for the Klein-Gordon equation on the half line using the Riemann-Hilbert approach to solving linear boundary value problems advocated by Fokas. The approach we present can also be used to solve more complicated boundary value problems for this equation, such as problems posed on time-dependent domains. Furthermore, it can be extended to treat integrable nonlinearisations of the Klein-Gordon equation. In this respect, we briefly discuss how our results could motivate a novel treatment of the sine-Gordon equation.
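For reference, the linear Klein-Gordon equation treated in the two abstracts above, and the sine-Gordon nonlinearisation mentioned at the end, can be written in nondimensional form (with unit mass parameter; normalization conventions vary) as:

```latex
u_{tt} - u_{xx} + u = 0 \qquad \text{(linear Klein--Gordon)}
```

```latex
u_{tt} - u_{xx} + \sin u = 0 \qquad \text{(sine-Gordon)}
```

The sine-Gordon equation reduces to the linear Klein-Gordon equation for small amplitudes, since \(\sin u \approx u\), which is why an integrable treatment of the former can motivate, and be motivated by, results for the latter.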
The sequential analysis of repeated binary responses: a score test for the case of three time points
Abstract:
In this paper a robust method is developed for the analysis of data consisting of repeated binary observations taken at up to three fixed time points on each subject. The primary objective is to compare outcomes at the last time point, using earlier observations to predict this for subjects with incomplete records. A score test is derived. The method is developed for application to sequential clinical trials, as at interim analyses there will be many incomplete records occurring in non-informative patterns. Motivation for the methodology comes from experience with clinical trials in stroke and head injury, and data from one such trial is used to illustrate the approach. Extensions to more than three time points and to allow for stratification are discussed. Copyright © 2005 John Wiley & Sons, Ltd.
Abstract:
This study presents a new simple approach for combining empirical with raw (i.e., not bias corrected) coupled model ensemble forecasts in order to make more skillful interval forecasts of ENSO. A Bayesian normal model has been used to combine empirical and raw coupled model December SST Niño-3.4 index forecasts started at the end of the preceding July (5-month lead time). The empirical forecasts were obtained by linear regression between December and the preceding July Niño-3.4 index values over the period 1950–2001. Coupled model ensemble forecasts for the period 1987–99 were provided by ECMWF as part of the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) project. Taken alone, the empirical and raw coupled model ensemble forecasts have similar mean absolute error skill scores, relative to climatological forecasts, of around 50% over the period 1987–99. The combined forecast gives an increased skill score of 74% and provides a well-calibrated and reliable estimate of forecast uncertainty.
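The core of a Bayesian normal combination of two forecasts can be sketched as a precision-weighted average (a minimal sketch under Gaussian assumptions; the paper's actual prior specification and calibration of the raw ensemble are not reproduced here, and the numbers below are purely illustrative):

```python
def combine_normal(mu_emp, var_emp, mu_mod, var_mod):
    """Combine two independent Gaussian forecasts of the same quantity.
    The posterior precision is the sum of the two precisions, and the
    posterior mean is the precision-weighted average, so the less
    uncertain forecast dominates the combination."""
    precision = 1.0 / var_emp + 1.0 / var_mod
    mean = (mu_emp / var_emp + mu_mod / var_mod) / precision
    return mean, 1.0 / precision

# Hypothetical Niño-3.4 anomaly forecasts (illustrative values only):
# empirical regression gives +0.8 with SD 0.5, the ensemble +1.2 with SD 0.4.
mean, var = combine_normal(0.8, 0.5 ** 2, 1.2, 0.4 ** 2)
```

Because the combined variance is always smaller than either input variance, the combination yields sharper, and when the Gaussian assumptions hold, better-calibrated interval forecasts than either source alone.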
Abstract:
OBJECTIVES: This contribution provides a unifying concept for meta-analysis, integrating the handling of unobserved heterogeneity, study covariates, publication bias and study quality. It is important to consider these issues simultaneously to avoid the occurrence of artifacts, and a method for doing so is suggested here. METHODS: The approach is based upon the meta-likelihood in combination with a general linear nonparametric mixed model, which lays the ground for all inferential conclusions suggested here. RESULTS: The concept is illustrated with a meta-analysis investigating the relationship between hormone replacement therapy (HRT) and breast cancer. The phenomenon of interest has been investigated in many studies over a considerable time, and different results were reported. In 1992 a meta-analysis by Sillero-Arenas et al. concluded that there was a small but significant overall effect of 1.06 on the relative risk scale. Using the meta-likelihood approach, it is demonstrated here that this meta-analysis is affected by considerable unobserved heterogeneity. Furthermore, it is shown that new methods are available to model this heterogeneity successfully, and it is argued that available study covariates should be included to explain this heterogeneity in the meta-analysis at hand. CONCLUSIONS: The topic of HRT and breast cancer has again very recently become an issue of public debate, when results of a large trial investigating the health effects of hormone replacement therapy were published, indicating an increased risk for breast cancer (risk ratio of 1.26). Using an adequate regression model in the previously published meta-analysis, an adjusted estimate of effect of 1.14 can be given, which is considerably higher than the one published in the meta-analysis of Sillero-Arenas et al. In summary, it is hoped that the method suggested here contributes further to good meta-analytic practice in public health and clinical disciplines.
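The abstract's nonparametric mixed-model meta-likelihood is not reproduced here, but the standard DerSimonian-Laird random-effects pooling (a simpler, widely used alternative, named here explicitly because it is not the paper's method) illustrates how an estimate of unobserved heterogeneity enters the study weights:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study effect estimates (e.g. log relative
    risks) with known within-study variances. Returns the pooled effect, the
    DerSimonian-Laird between-study variance tau^2, and the pooled SE."""
    w = [1.0 / v for v in variances]            # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)          # moment estimate of heterogeneity
    wr = [1.0 / (v + tau2) for v in variances]  # heterogeneity-inflated weights
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    return pooled, tau2, math.sqrt(1.0 / sum(wr))
```

When tau^2 is large, the random-effects weights flatten out and small studies gain influence, which is one mechanism by which acknowledging heterogeneity can shift a pooled estimate, as in the adjustment from 1.06 to 1.14 described above.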
Abstract:
A sequential study design generally makes more efficient use of available information than a fixed sample counterpart of equal power. This feature is gradually being exploited by researchers in genetic and epidemiological investigations that utilize banked biological resources and in studies where time, cost and ethics are prominent considerations. Recent work in this area has focussed on the sequential analysis of matched case-control studies with a dichotomous trait. In this paper, we extend the sequential approach to a comparison of the associations within two independent groups of paired continuous observations. Such a comparison is particularly relevant in familial studies of phenotypic correlation using twins. We develop a sequential twin method based on the intraclass correlation and show that use of sequential methodology can lead to a substantial reduction in the number of observations without compromising the study error rates. Additionally, our approach permits straightforward allowance for other explanatory factors in the analysis. We illustrate our method in a sequential heritability study of dysplasia that allows for the effect of body mass index and compares monozygotes with pairs of singleton sisters. Copyright © 2006 John Wiley & Sons, Ltd.
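The intraclass correlation at the heart of the sequential twin method can be estimated with a one-way ANOVA decomposition (an illustrative sketch for k = 2 members per pair; the sequential stopping rules and covariate adjustment described above are not shown):

```python
import numpy as np

def intraclass_corr(pairs):
    """One-way ANOVA estimate of the intraclass correlation for n pairs
    (k = 2 observations per pair): ICC = (MSB - MSW) / (MSB + MSW)."""
    pairs = np.asarray(pairs, dtype=float)      # shape (n, 2)
    n = len(pairs)
    pair_means = pairs.mean(axis=1)
    grand_mean = pairs.mean()
    # between-pair mean square: k * SS(pair means) / (n - 1)
    msb = 2.0 * np.sum((pair_means - grand_mean) ** 2) / (n - 1)
    # within-pair mean square: SS(within) / (n * (k - 1))
    msw = np.sum((pairs - pair_means[:, None]) ** 2) / n
    return (msb - msw) / (msb + msw)
```

In a sequential design, this statistic would be recomputed for each group as pairs accrue and compared against pre-specified stopping boundaries, allowing the twin study to terminate early without inflating the error rates.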
Abstract:
Graphical tracking is a technique for crop scheduling in which the actual plant state is plotted against an ideal target curve that encapsulates all crop and environmental characteristics. Management decisions are made on the basis of the position of the actual crop relative to the ideal position. Because of the simplicity of the approach, graphical tracks can be developed on site without the need for controlled experimentation. Growth models and graphical tracks are discussed, and an implementation of the Richards curve for graphical tracking is described. In many cases, the more intuitively desirable growth models perform sub-optimally due to problems with the specification of starting conditions, environmental factors outside the scope of the original model, and the introduction of new cultivars. Accurate specification of a biological model requires detailed and usually costly study, and as such is not adaptable to a changing cultivar range and changing cultivation techniques. Fitting of a new graphical track for a new cultivar can be conducted on site and improved over subsequent seasons. Graphical tracking emphasises the current position relative to the objective, and as such does not require the time-consuming or system-specific input of an environmental history, although it does require detailed crop measurement. The approach is flexible and could be applied to a variety of specification metrics, with digital imaging providing a route for added value. For decision making regarding crop manipulation from the observed current state, there is a role for simple predictive modelling to indicate the short-term consequences of crop manipulation.
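One common parameterization of the Richards curve mentioned above can be sketched as follows (parameter names and this particular form are illustrative; several equivalent parameterizations of the Richards curve are in use, and the paper's exact implementation is not reproduced here):

```python
import math

def richards(t, A, k, t0, nu):
    """Generalized logistic (Richards) growth curve.
    A:  upper asymptote (e.g. target plant size at sale),
    k:  growth-rate parameter,
    t0: location parameter (point of fastest growth when nu = 1),
    nu: shape parameter; nu = 1 reduces to the ordinary logistic curve."""
    return A / (1.0 + nu * math.exp(-k * (t - t0))) ** (1.0 / nu)
```

Evaluating `richards` over the season gives the ideal target curve of a graphical track; fitting A, k, t0 and nu to one season's crop measurements and refining them in subsequent seasons matches the on-site, experimentation-free fitting process described above.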
Abstract:
This paper presents a multicriteria decision-making model for lifespan energy efficiency assessment of intelligent buildings (IBs). The decision-making model, called IBAssessor, is developed using an analytic network process (ANP) method and a set of lifespan performance indicators for IBs selected by a new quantitative approach called the energy-time consumption index (ETI). In order to improve the quality of decision-making, the authors draw on previous research achievements including a lifespan sustainable business model, the Asian IB Index, and a number of relevant publications. Practitioners can use the IBAssessor ANP model at different stages of an IB lifespan for either engineering or business oriented assessments. Finally, this paper presents an experimental case study to demonstrate how to use the IBAssessor ANP model to solve real-world design tasks.