48 results for: Accelerated failure time model; Correlated data; Imputation; Residuals analysis
Abstract:
Agro-hydrological models have been widely used for optimizing resource use and minimizing environmental consequences in agriculture. SMCRN is a recently developed, sophisticated model which simulates crop response to nitrogen fertilizer for a wide range of crops, together with the associated leaching of nitrate from arable soils. In this paper, we describe improvements to this model made by replacing the existing approximate hydrological cascade algorithm with a new, simple and explicit algorithm for the basic soil water flow equation, which not only enhanced the model's performance in hydrological simulation but was also essential for extending its application to situations where capillary flow is important. As a result, the updated SMCRN model can be used for more accurate study of water dynamics in the soil-crop system. The success of the update was demonstrated by the updated model consistently out-performing the original in drainage simulations and in predicting the time course of soil water content in different layers of the soil-wheat system. Tests of the updated SMCRN model against data from four field crop experiments showed that crop nitrogen offtakes and soil mineral nitrogen in the top 90 cm were in good agreement with the measured values, indicating that the model can make more reliable predictions of nitrogen fate in the crop-soil system and thus provides a useful platform for assessing the impacts of nitrogen fertilizer on crop yield and nitrogen leaching from different production systems. (C) 2010 Elsevier B.V. All rights reserved.
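For orientation, a minimal sketch of what an explicit scheme for the 1-D soil water flow equation involves (this is not the SMCRN algorithm itself, which the abstract does not specify; all parameter values and the Brooks-Corey-style closure functions below are illustrative assumptions):

# Explicit finite-difference update for d(theta)/dt = -dq/dz with
# Darcy flux q = -K(theta) * dH/dz; illustrative parameters only.
import numpy as np

def explicit_soil_water_step(theta, dz, dt, k_sat=1e-5, theta_s=0.45,
                             theta_r=0.05, psi_b=-0.2, lam=0.3):
    """One explicit time step over a vertical column (z positive down)."""
    s = np.clip((theta - theta_r) / (theta_s - theta_r), 1e-6, 1.0)
    k = k_sat * s ** (3 + 2 / lam)            # hydraulic conductivity
    psi = psi_b * s ** (-1 / lam)             # matric potential (m)
    h = psi - np.arange(len(theta)) * dz      # total head incl. gravity
    k_face = 0.5 * (k[:-1] + k[1:])           # conductivity at cell faces
    q = -k_face * (h[1:] - h[:-1]) / dz       # Darcy flux between cells
    dtheta = np.zeros_like(theta)
    dtheta[:-1] -= q * dt / dz                # water leaving upper cell
    dtheta[1:] += q * dt / dz                 # water entering lower cell
    return np.clip(theta + dtheta, theta_r, theta_s)

theta = np.full(30, 0.25)   # 30 layers, uniform initial water content
theta[0] = 0.40             # wetted surface layer
for _ in range(1000):       # dt must respect the explicit stability limit
    theta = explicit_soil_water_step(theta, dz=0.03, dt=1.0)

Because the update is explicit, the time step is limited by a diffusive stability condition; the attraction over an approximate cascade scheme is that upward (capillary) fluxes arise naturally whenever the head gradient points upward.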
Abstract:
Atmosphere-only and ocean-only variational data assimilation (DA) schemes are able to use window lengths that are optimal for the error growth rate, non-linearity and observation density of their respective systems. Typical window lengths are 6-12 hours for the atmosphere and 2-10 days for the ocean. In implementations of coupled DA schemes, however, it has been necessary to match the window length of the ocean to that of the atmosphere, which may sacrifice the accuracy of the ocean analysis in order to provide a more balanced coupled state. This paper investigates how extending the window length in the presence of model error affects both the analysis of the coupled state and the initialized forecast when using coupled DA with differing degrees of coupling. Results are illustrated using an idealized single-column model of the coupled atmosphere-ocean system. It is found that the analysis error from an uncoupled DA scheme can be smaller than that from a coupled analysis at the initial time, due to faster error growth in the coupled system. However, this does not necessarily lead to a more accurate forecast, due to imbalances in the coupled state. Instead, coupled DA is better able to update the initial state so as to reduce the impact of the model error on the accuracy of the forecast. The effect of model error is potentially most detrimental in the weakly coupled formulation, owing to the inconsistency between the coupled model used in the outer loop and the uncoupled models used in the inner loop.
Abstract:
Reanalysis data obtained from data assimilation are increasingly used for diagnostic studies of the general circulation of the atmosphere, for the validation of modelling experiments and for estimating energy and water fluxes between the Earth's surface and the atmosphere. Because fluxes are not specifically observed, but determined by the data assimilation system, they are influenced not only by the utilized observations but also by model physics and dynamics and by the assimilation method. In order to better understand the relative importance of humidity observations for the determination of the hydrological cycle, in this paper we describe an assimilation experiment using the ERA40 reanalysis system in which all humidity data have been excluded from the observational database. The surprising result is that the model, driven by the time evolution of wind, temperature and surface pressure, is able to almost completely reconstitute the large-scale hydrological cycle of the control assimilation without the use of any humidity data. In addition, analysis of the individual weather systems in the extratropics and tropics using an objective feature-tracking analysis indicates that the humidity data have very little impact on these systems. We include a discussion of these results and possible consequences for the way moisture information is assimilated, as well as the potential consequences for the design of observing systems for climate monitoring. It is further suggested, with support from a simple assimilation study with another model, that model physics and dynamics play a decisive role for the hydrological cycle, stressing the need to better understand these aspects of model parametrization.
Abstract:
For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory. Copyright © 2005 John Wiley & Sons, Ltd.
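To make the analysis concrete (a hedged sketch; the paper's own models and operators are not reproduced here): for a linear system x_{k+1} = M x_k observed through an operator H, the observability matrix stacks H, HM, HM^2, ... over the assimilation window, its singular value decomposition reveals which state directions the observations constrain, and Tikhonov filter factors damp the poorly observed directions when reconstructing the initial state:

# Illustrative only: SVD of the observability matrix of a small linear
# system, plus a Tikhonov-regularized reconstruction of the initial state.
import numpy as np

n = 20                                    # state dimension
M = np.diag(np.ones(n - 1), -1) * 0.9     # simple advection-like propagator
M += np.eye(n) * 0.1
H = np.zeros((3, n))
H[0, 2] = H[1, 9] = H[2, 16] = 1.0        # three point observations

blocks, HM = [], H.copy()                 # stack H, HM, HM^2, ... (10 steps)
for _ in range(10):
    blocks.append(HM)
    HM = HM @ M
O = np.vstack(blocks)                     # observability matrix

U, s, Vt = np.linalg.svd(O, full_matrices=False)
print("singular values:", np.round(s, 3))

# Reconstruct x0 from noisy observations y = O x0 + e:
rng = np.random.default_rng(0)
x0_true = rng.standard_normal(n)
y = O @ x0_true + 0.05 * rng.standard_normal(O.shape[0])
alpha = 0.1                               # regularization parameter (tunable)
f = s / (s**2 + alpha**2)                 # Tikhonov filter factors
x0_est = Vt.T @ (f * (U.T @ y))

Directions associated with small singular values are exactly those into which observational noise is most amplified, which is why the achievable reconstruction depends on the signal-to-noise ratio and on where in the window the observations sit.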
Abstract:
Given the growing impact of human activities on the sea, managers are increasingly turning to marine protected areas (MPAs) to protect marine habitats and species. Many MPAs have been unsuccessful, however, and lack of income has been identified as a primary reason for failure. In this study, data from a global survey of 79 MPAs in 36 countries were analysed and attempts were made to construct predictive models to determine the income requirements of any given MPA. Statistical tests were used to uncover possible patterns and relationships in the data, following two basic approaches. In the first of these, an attempt was made to build an explanatory "bottom-up" model of the cost structures that might be required to pursue various management activities. This proved difficult in practice owing to the very broad range of applicable data, spanning many orders of magnitude. In the second approach, a "top-down" regression model was constructed using logarithms of the base data, in order to address the breadth of the data ranges. This approach suggested that MPA size and visitor numbers together explained 46% of the minimum income requirements (P < 0.001), with area being the slightly more influential factor. The significance of area to income requirements was of little surprise, given its profile in the literature. However, the relationship between visitors and income requirements might go some way to explaining why northern-hemisphere MPAs with apparently high incomes still claim to be under-funded. The relationship between running costs and visitor numbers has important implications not only for determining a realistic level of funding for MPAs, but also for assessing from where funding might be obtained. Since a substantial proportion of the income of many MPAs appears to be utilized for amenity purposes, a case may be made for funds to be provided from the typically better-resourced government social and educational budgets as well as from environmental budgets. Similarly, visitor fees, already an important source of funding for some MPAs, might have a broader role to play in how MPAs are financed in the future. (C) 2007 Elsevier Ltd. All rights reserved.
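A minimal sketch of the "top-down" log-log regression described above, with made-up numbers standing in for the survey data (which are not reproduced here); taking logarithms compresses values spanning many orders of magnitude and turns a multiplicative relationship into a linear one:

# Illustrative log-log regression of income requirement on MPA size and
# visitor numbers; the data below are simulated, not the survey values.
import numpy as np

rng = np.random.default_rng(42)
area = rng.lognormal(3, 2, 79)        # MPA size, spanning orders of magnitude
visitors = rng.lognormal(8, 2, 79)    # annual visitor numbers
income = area**0.5 * visitors**0.3 * rng.lognormal(0, 1, 79)

X = np.column_stack([np.ones(79), np.log(area), np.log(visitors)])
y = np.log(income)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"coefficients: {beta.round(2)}, R^2 = {r2:.2f}")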
Abstract:
The completion of the Single European Market was expected to create a large market that would enable firms to capture economies of scale, which would in turn result in lower prices for European consumers. These benefits are only likely to be realised if consumers in the various countries of the EU wish to consume the same products and respond to similar marketing strategies (with respect to promotion, distribution, etc.). This study examines, through a model of yoghurt consumption, whether cultural differences continue to determine food-related behaviour in the EU. The model is derived from the marketing literature and views the consumption decision as the outcome of a multi-stage process in which yoghurt knowledge, attitudes to different yoghurt attributes (such as bio-bifidus, low-fat, organic) and overall attitude towards yoghurt as a product all feed into the frequency with which yoghurt is consumed at breakfast, as a snack and as a dessert. The model uses data collected from a consumer survey in 11 European countries and is estimated using probit and ordinal probit methods. The results suggest that important cultural differences continue to determine food-related behaviour in the 11 countries of the study. (c) 2004 Elsevier Ltd. All rights reserved.
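For concreteness, a sketch of the probit / ordinal (ordered) probit estimation stage using simulated data and hypothetical variable names (the survey variables are not public here); it assumes statsmodels' OrderedModel, available from version 0.12:

# Binary probit (consumes at breakfast: yes/no) and ordered probit
# (consumption frequency) on simulated stand-in data.
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
knowledge = rng.normal(size=n)            # yoghurt knowledge score
attitude = rng.normal(size=n)             # overall attitude to yoghurt
latent = 0.8 * knowledge + 1.2 * attitude + rng.normal(size=n)

eats_breakfast = (latent > 0).astype(int)
X = sm.add_constant(np.column_stack([knowledge, attitude]))
probit_fit = sm.Probit(eats_breakfast, X).fit(disp=False)

freq = np.digitize(latent, [-0.5, 1.0])   # 0 = never ... 2 = often
op_fit = OrderedModel(freq, np.column_stack([knowledge, attitude]),
                      distr="probit").fit(method="bfgs", disp=False)
print(probit_fit.params.round(2), op_fit.params.round(2))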
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment-selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336); the computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
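For readers unfamiliar with the efficient-score formulation (stated here in generic form, not the paper's own notation): each treatment-control comparison is summarized by the score statistic and the observed Fisher information evaluated at the null,

\[
Z = \left.\frac{\partial \ell(\theta)}{\partial \theta}\right|_{\theta=0},
\qquad
V = \left.-\frac{\partial^{2} \ell(\theta)}{\partial \theta^{2}}\right|_{\theta=0},
\qquad
Z \sim N(\theta V,\, V) \ \text{approximately, for small } \theta,
\]

so stopping boundaries defined on the (V, Z) plane apply unchanged whether the outcome is binary, normal or a failure time, and covariate adjustment or stratification simply changes how Z and V are computed.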
Abstract:
This investigation deals with the question of when a particular population can be considered to be disease-free. The motivation is the case of BSE, where specific birth cohorts may represent distinct disease-free subpopulations. The specific objective is to develop a statistical approach suitable for documenting freedom from disease, in particular freedom from BSE in birth cohorts. The approach is based upon a geometric waiting-time distribution for the occurrence of positive surveillance results and formalizes the relationship between design prevalence, cumulative sample size and statistical power. The simple geometric waiting-time model is then modified to account for the diagnostic sensitivity and specificity associated with the detection of disease. This is exemplified for BSE using two different models for the diagnostic sensitivity. The model is further modified in such a way that a set of different values for the design prevalence in the surveillance streams can be accommodated (prevalence heterogeneity), and a general expression for the power function is developed. For illustration, numerical results for BSE suggest that currently (data status September 2004) a birth cohort of Danish cattle born after March 1999 is free from BSE with a probability (power) of 0.8746 or 0.8509, depending on the choice of model for the diagnostic sensitivity.
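In its simplest form (a reconstruction consistent with the abstract, before the sensitivity, specificity and prevalence-heterogeneity extensions), the geometric waiting-time argument gives the power to detect disease at design prevalence \(\pi\) with test sensitivity \(Se\) after \(n\) cumulatively sampled animals as

\[
\text{power} = 1 - (1 - \pi\,Se)^{n},
\qquad\text{so}\qquad
n \ge \frac{\log \beta}{\log(1 - \pi\,Se)}
\]

achieves power \(1-\beta\); each tested animal is a Bernoulli trial, and the waiting time to the first positive result is geometric.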
Abstract:
The overall significance of the construction and building services sector internationally cannot be overemphasised. In the UK, the industry currently accounts for 10% of gross domestic product (GDP) and employs 2 million people, more than 1 in 14 of the total workforce. However, despite its output (approximately £65 billion annually) there has been a steady decline in the number of trade entrants into the construction and building services sector. Consequently, the available 'pool of labour' is inadequately resourced; productivity is low; the existing labour force is overstressed; there is an increase in site deaths; and a long-term labour shortage is envisaged. Today, the evidence seems to suggest that multiskilling is a tentative remedy for the skills crisis in the construction and building sectors. A 43-year time series of data on 23 manpower attributes was evaluated as part of this investigation. The developed linear regression models show that the concept of multiskilling obeys the 'law of diminishing returns': only a weak relationship was found between construction output and combinations of three or more manpower attributes. An optimisation model is prescribed for traditional trades.
Abstract:
A method is presented for determining the time to first division of individual bacterial cells growing on agar media. Bacteria were inoculated onto agar-coated slides and viewed by phase-contrast microscopy. Digital images of the growing bacteria were captured at intervals and the time to first division estimated by calculating the "box area ratio". This is the area of the smallest rectangle that can be drawn around an object, divided by the area of the object itself. The box area ratios of cells were found to increase suddenly during growth at a time that correlated with cell division as estimated by visual inspection of the digital images. This was caused by a change in the orientation of the two daughter cells that occurred when sufficient flexibility arose at their point of attachment. This method was used successfully to generate lag time distributions for populations of Escherichia coli, Listeria monocytogenes and Pseudomonas aeruginosa, but did not work with the coccoid organism Staphylococcus aureus. This method provides an objective measure of the time to first cell division, whilst automation of the data processing allows a large number of cells to be examined per experiment. (c) 2005 Elsevier B.V. All rights reserved.
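A hedged sketch of the "box area ratio" computation on a thresholded microscopy frame, using OpenCV's minimum-area (rotated) rectangle as the "smallest rectangle that can be drawn around an object" (the paper's own image-processing pipeline is not specified in the abstract):

# Box area ratio per object: area of the smallest enclosing rectangle
# divided by the area of the object itself.
import cv2
import numpy as np

def box_area_ratios(binary_img):
    contours, _ = cv2.findContours(binary_img.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    ratios = []
    for cnt in contours:
        obj_area = cv2.contourArea(cnt)
        if obj_area == 0:
            continue
        (_, (w, h), _) = cv2.minAreaRect(cnt)   # smallest rotated rectangle
        ratios.append((w * h) / obj_area)
    return ratios

# A straight rod-like cell gives a ratio near 1; when the two daughter
# cells flex at their attachment point after division, the ratio jumps.
img = np.zeros((60, 60), np.uint8)
img[28:32, 10:50] = 1                           # synthetic rod-shaped "cell"
print(box_area_ratios(img))                     # ~1 for a straight rod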
Abstract:
A radionuclide source term model has been developed which simulates the biogeochemical evolution of the Drigg low level waste (LLW) disposal site. The DRINK (DRIgg Near field Kinetic) model provides data regarding radionuclide concentrations in groundwater over a period of 100,000 years, which are used as input to assessment calculations for a groundwater pathway. The DRINK model also provides input to human intrusion and gaseous assessment calculations through simulation of the solid radionuclide inventory. These calculations are being used to support the Drigg post closure safety case. The DRINK model considers the coupled interaction of the effects of fluid flow, microbiology, corrosion, chemical reaction, sorption and radioactive decay. It represents the first direct use of a mechanistic reaction-transport model in risk assessment calculations.
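For orientation, a generic 1-D equation of the kind such mechanistic reaction-transport source-term models solve (illustrative only; not DRINK's exact formulation) couples advection, dispersion, linear sorption and radioactive decay:

\[
R\,\frac{\partial C}{\partial t}
= D\,\frac{\partial^{2} C}{\partial x^{2}}
- v\,\frac{\partial C}{\partial x}
- \lambda R C + S(x,t),
\qquad
R = 1 + \frac{\rho_b K_d}{\theta},
\]

where \(C\) is the aqueous radionuclide concentration, \(v\) the pore-water velocity, \(D\) the dispersion coefficient, \(\lambda\) the decay constant, \(K_d\) the sorption distribution coefficient entering the retardation factor \(R\), and \(S(x,t)\) the source term from waste degradation, corrosion and microbial activity.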
Abstract:
We test the response of the Oxford-RAL Aerosol and Cloud (ORAC) retrieval algorithm for MSG SEVIRI to changes in the aerosol properties used in the dust aerosol model, using data from the Dust Outflow and Deposition to the Ocean (DODO) flight campaign in August 2006. We find that using the observed DODO free-tropospheric aerosol size distribution and refractive index increases simulated top-of-atmosphere radiance at 0.55 µm by 10-15%, assuming a fixed aerosol optical depth of 0.5, with the difference reaching a maximum at low solar zenith angles. We test the sensitivity of the retrieval to the vertical distribution of the aerosol and find that this is unimportant in determining simulated radiance at 0.55 µm. We also test the ability of the ORAC retrieval, as used to produce the GlobAerosol dataset, to correctly identify continental aerosol outflow from the African continent, and we find that it poorly constrains aerosol speciation. We therefore develop spatially and temporally resolved prior distributions of aerosols to inform the retrieval, incorporating five aerosol models: desert dust, maritime, biomass burning, urban and continental. We use a Saharan Dust Index and the GEOS-Chem chemistry transport model to describe dust and biomass-burning aerosol outflow, and compare AOD using our speciation against the GlobAerosol retrieval during January and July 2006. We find AOD discrepancies of 0.2-1 over regions of intense biomass-burning outflow, where AOD from our aerosol speciation and the GlobAerosol speciation can differ by as much as 50-70%.
Abstract:
Earlier estimates of the City of London office market are extended by considering a longer time series of data, covering two cycles, and by explicitly modeling asymmetric space-market responses to employment and supply shocks. A long-run structural model linking real rental levels, office-based employment and the supply of office space is estimated, and rental adjustment processes are then modeled within an error correction model framework. Rental adjustment is found to be asymmetric, depending both on the direction of the supply and demand shocks and on the state of the space market at the time of the shock. Vacancy adjustment does not display asymmetries. There is also a supply adjustment equation. Two three-equation systems, one with symmetric rental adjustment and the other with asymmetric adjustment, are subjected to positive and negative shocks to employment, illustrating the differences between the two systems.
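Schematically (notation illustrative, not the paper's), the error correction framework pairs a long-run equilibrium with a short-run adjustment equation:

\[
r_t = \beta_0 + \beta_1\,emp_t + \beta_2\,sup_t + u_t,
\]
\[
\Delta r_t = \alpha\,\hat{u}_{t-1} + \gamma_1\,\Delta emp_t + \gamma_2\,\Delta sup_t + \varepsilon_t,
\]

where \(r\) is the real rental level, \(emp\) office-based employment and \(sup\) the supply of office space; asymmetry is introduced by allowing \(\alpha\) and the \(\gamma\) coefficients to differ for positive and negative shocks and across states of the space market.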
Abstract:
This paper presents a new approach to modelling flash floods in dryland catchments by integrating remote sensing and digital elevation model (DEM) data in a geographical information system (GIS). The spectral reflectance of channels affected by recent flash floods exhibits a marked increase, due to the deposition of fine sediments in these channels as the flood recedes. This allows the parts of a catchment that have been affected by a recent flood event to be discriminated from unaffected parts, using a time series of Landsat images. Using images of the Wadi Hudain catchment in southern Egypt, the hillslope areas contributing flow were inferred for different flood events. The SRTM3 DEM was used to derive flow direction, flow length, active channel cross-sectional areas and slope. The Manning Equation was used to estimate the channel flow velocities, and hence the time-area zones of the catchment. A channel reach that was active during a 1985 runoff event, and which receives no tributary flow, was used to estimate a transmission loss rate of 7.5 mm h−1, given the maximum peak discharge estimate. Runoff patterns resulting from different flood events are quite variable; however, the southern part of the catchment appears to have experienced more floods during the period of study (1984–2000), perhaps because the bedrock hillslopes in this area are more effective at runoff production than other parts of the catchment, which are underlain by unconsolidated Quaternary sands and gravels. Owing to high transmission loss, runoff generated within the upper reaches is rarely delivered to the alluvial fan and to Shalateen city, situated at the catchment outlet. The synthetic GIS-based time-area zones cannot, on their own, be relied upon to model the hydrographs; physical parameters, such as rainfall intensity, distribution and transmission loss, must also be considered.
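The Manning Equation in its standard SI form (the roughness values adopted in the study are not given in the abstract) is

\[
V = \frac{1}{n}\,R^{2/3}\,S^{1/2},
\]

where \(V\) is the mean flow velocity (m s−1), \(n\) Manning's roughness coefficient, \(R\) the hydraulic radius (cross-sectional area divided by wetted perimeter, m) and \(S\) the channel slope; dividing each DEM-derived flow length by \(V\) gives a travel time, from which the time-area zones follow.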
Abstract:
A One-Dimensional Time to Explosion (ODTX) apparatus has been used to study the times to explosion of a number of compositions based on RDX and HMX over a range of contact temperatures. The times to explosion at any given temperature tend to increase from RDX to HMX and with the proportion of HMX in the composition. Thermal ignition theory has been applied to the time-to-explosion data to calculate kinetic parameters. The apparent activation energy for all of the compositions lay between 127 kJ mol−1 and 146 kJ mol−1. There were, however, large differences in the pre-exponential factor, and it was this factor, rather than the activation energy, that controlled the time to explosion.
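Thermal ignition theory reduces, to first approximation, to an Arrhenius-type relationship between time to explosion t and absolute contact temperature T. A minimal sketch of extracting the apparent activation energy and pre-exponential factor from (T, t) pairs by a linearized fit (the numbers below are illustrative, not the paper's measurements):

# Fit ln(t) = ln(A) + Ea/(R*T): the slope of ln(t) against 1/T gives Ea/R.
import numpy as np

R_GAS = 8.314                                    # J mol^-1 K^-1
T = np.array([500.0, 520.0, 540.0, 560.0])       # contact temperature (K)
t_expl = np.array([300.0, 80.0, 25.0, 9.0])      # time to explosion (s)

slope, intercept = np.polyfit(1.0 / T, np.log(t_expl), 1)
Ea = slope * R_GAS                               # apparent activation energy
A = np.exp(intercept)                            # pre-exponential factor
print(f"Ea = {Ea / 1000:.0f} kJ/mol, A = {A:.3g}")

With similar activation energies across compositions, as reported above, the fitted intercept (the pre-exponential factor) is what separates the curves, which is consistent with it controlling the time to explosion.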