62 results for rainfall-runoff empirical statistical model
at University of Queensland eSpace - Australia
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
Abstract:
The use of a fitted parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program Fortran (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
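The parameter nonuniqueness described in the abstract above can be illustrated with a minimal Python sketch (a toy model, not PEST or HSPF): when two parameters enter the model only through their product, calibration cannot distinguish between very different parameter sets that fit the observations equally well. All names and values here are invented for illustration.

```python
import numpy as np

def runoff(params, t):
    """Toy runoff model: only the product a*b affects output,
    so infinitely many (a, b) pairs calibrate equally well."""
    a, b = params
    return a * b * np.exp(-0.5 * t)

t = np.linspace(0, 10, 50)
observed = runoff((2.0, 3.0), t)  # synthetic "field" data

# Two very different parameter sets with the same product a*b = 6
set1, set2 = (2.0, 3.0), (6.0, 1.0)
sse1 = np.sum((runoff(set1, t) - observed) ** 2)
sse2 = np.sum((runoff(set2, t) - observed) ** 2)
print(sse1, sse2)  # both are zero: calibration cannot distinguish them
```

Because both parameter sets reproduce the observations exactly, any prediction that depends on `a` or `b` individually is nonunique, which is the motivation for exploring predictive uncertainty rather than reporting a single calibrated set.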
Abstract:
Background: Published birthweight references in Australia do not fully take into account constitutional factors that influence birthweight and therefore may not provide an accurate reference to identify the infant with abnormal growth. Furthermore, studies in other regions that have derived adjusted (customised) birthweight references have applied untested assumptions in the statistical modelling. Aims: To validate the customised birthweight model and to produce a reference set of coefficients for estimating a customised birthweight that may be useful for maternity care in Australia and for future research. Methods: De-identified data were extracted from the clinical database for all births at the Mater Mother's Hospital, Brisbane, Australia, between January 1997 and June 2005. Births with missing data for the variables under study were excluded. In addition, the following were excluded: multiple pregnancies, births less than 37 completed weeks' gestation, stillbirths, and major congenital abnormalities. Multivariate analysis was undertaken. A double cross-validation procedure was used to validate the model. Results: The study of 42 206 births demonstrated that, for statistical purposes, birthweight is normally distributed. Coefficients for the derivation of customised birthweight in an Australian population were developed and the statistical model is demonstrably robust. Conclusions: This study provides empirical data as to the robustness of the model to determine customised birthweight. Further research is required to define where normal physiology ends and pathology begins, and which segments of the population should be included in the construction of a customised birthweight standard.
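The cross-validation step mentioned in the abstract above can be sketched in Python as a split-half ("double") cross-validation of a simple regression: fit on each half of the data and predict the other. This is an illustrative toy, not the study's actual model; the predictors, coefficients, and residual spread are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Hypothetical birth data: gestation (weeks) and maternal height (cm)
gest = rng.uniform(37, 42, n)
height = rng.uniform(150, 180, n)
bw = 180 * gest + 10 * height - 5000 + rng.normal(0, 300, n)  # grams

X = np.column_stack([np.ones(n), gest, height])

def fit_predict(train, test):
    """Least-squares fit on one half, prediction on the other."""
    beta, *_ = np.linalg.lstsq(X[train], bw[train], rcond=None)
    return X[test] @ beta

idx = rng.permutation(n)
a, b = idx[: n // 2], idx[n // 2 :]
err = np.concatenate([fit_predict(a, b) - bw[b], fit_predict(b, a) - bw[a]])
print(err.std())  # out-of-sample error near the simulated residual spread
```

If the model were overfitted, the out-of-sample error would be markedly larger than the in-sample residual spread; agreement between the two halves is the kind of robustness check the abstract refers to.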
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. 
This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method. (c) 2006 Elsevier B.V. All rights reserved.
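The multi-start idea behind the second enhancement described above (restarting gradient searches from widely separated points to escape local optima) can be sketched in Python with SciPy's Levenberg-Marquardt implementation; this is a simplification of the GML-based method, not the paper's algorithm, and the objective function is an invented example with many local minima.

```python
import numpy as np
from scipy.optimize import least_squares

xs = np.linspace(0, 6, 80)
y_obs = np.sin(3.0 * xs)  # synthetic observations, true frequency k = 3

def residuals(k):
    """Residuals of a sine fit: the cost surface in k has many local minima."""
    return np.sin(k[0] * xs) - y_obs

# Plain Levenberg-Marquardt from a single poor start may get trapped
single = least_squares(residuals, x0=[0.5], method="lm")

# Multi-start: launch LM from widely separated points and keep the best
best = min(
    (least_squares(residuals, x0=[k0], method="lm")
     for k0 in np.linspace(0.5, 5.0, 10)),
    key=lambda r: r.cost,
)
print(single.cost, best.cost)
```

The paper's technique is more refined (it places restarts maximally far from previous parameter trajectories rather than on a fixed grid), but the effect illustrated is the same: repeated local searches from dispersed starting points greatly improve the chance of locating the global objective function minimum.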
Abstract:
Numerical optimisation methods are being more commonly applied to agricultural systems models, to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared, with literature and our studies identifying evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-value genetic algorithm, are presented. The relative contributions of the range of operational options and parameters of this method are discussed, and general recommendations listed to assist practitioners applying evolutionary algorithms to the solution of agricultural systems. (C) 2001 Elsevier Science Ltd. All rights reserved.
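A real-value genetic algorithm of the kind reviewed above can be sketched in a few lines of Python: tournament selection, blend crossover, and Gaussian mutation applied to a population of real-valued vectors. The "profit" surface and all settings below are invented stand-ins, not the beef-property model from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def profit(x):
    """Stand-in 'gross margin' surface with its peak at x = (2, 5)."""
    return -((x[0] - 2.0) ** 2 + (x[1] - 5.0) ** 2)

def evolve(pop_size=40, n_gen=60, bounds=(0.0, 10.0), sigma=0.3):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(n_gen):
        fit = np.array([profit(ind) for ind in pop])
        # Tournament selection: keep the better of two random individuals
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # Blend crossover between pairs of selected parents
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
        # Gaussian mutation, clipped to the feasible bounds
        pop = np.clip(children + rng.normal(0, sigma, children.shape), lo, hi)
    return pop[np.argmax([profit(ind) for ind in pop])]

best = evolve()
print(best)  # close to the optimum at (2, 5)
```

In practice the operators and their settings (population size, mutation scale, crossover form) are exactly the "operational options and parameters" whose relative contributions the paper discusses.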
Abstract:
Traffic and tillage effects on runoff and crop performance on a heavy clay soil were investigated over a period of 4 years. Tillage treatments and the cropping program were representative of broadacre grain production practice in northern Australia, and a split-plot design was used to isolate traffic effects. Treatments subject to zero, minimum, and stubble mulch tillage each comprised pairs of 90-m² plots, from which runoff was recorded. A 3-m-wide controlled traffic system allowed one of each pair to be maintained as a non-wheeled plot, while the total surface area of the other received a single annual wheeling treatment from a working 100-kW tractor. Rainfall/runoff hydrographs demonstrate that wheeling produced a large and consistent increase in runoff, whereas tillage produced a smaller increase. Treatment effects were greater on dry soil, but were still maintained in large and intense rainfall events on wet soil. Mean annual runoff from wheeled plots was 63 mm (44%) greater than that from controlled traffic plots, whereas runoff from stubble mulch tillage plots was 38 mm (24%) greater than that from zero tillage plots. Traffic and tillage effects appeared to be cumulative, so the mean annual runoff from wheeled stubble mulch tilled plots, representing conventional cropping practice, was more than 100 mm greater than that from controlled traffic zero tilled plots, representing best practice. This increased infiltration was reflected in an increased yield of 16% compared with wheeled stubble mulch. Minimum tilled plots demonstrated a characteristic midway between that of zero and stubble mulch tillage. The results confirm that unnecessary energy dissipation in the soil during the traction process that normally accompanies tillage has a major negative effect on infiltration and crop productivity. Controlled traffic farming systems appear to be the only practicable solution to this problem.
Abstract:
Recently, methods for computing D-optimal designs for population pharmacokinetic studies have become available. However there are few publications that have prospectively evaluated the benefits of D-optimality in population or single-subject settings. This study compared a population optimal design with an empirical design for estimating the base pharmacokinetic model for enoxaparin in a stratified randomized setting. The population pharmacokinetic D-optimal design for enoxaparin was estimated using the PFIM function (MATLAB version 6.0.0.88). The optimal design was based on a one-compartment model with lognormal between subject variability and proportional residual variability and consisted of a single design with three sampling windows (0-30 min, 1.5-5 hr and 11-12 hr post-dose) for all patients. The empirical design consisted of three sample time windows per patient from a total of nine windows that collectively represented the entire dose interval. Each patient was assigned to have one blood sample taken from three different windows. Windows for blood sampling times were also provided for the optimal design. Ninety-six patients were recruited into the study who were currently receiving enoxaparin therapy. Patients were randomly assigned to either the optimal or empirical sampling design, stratified for body mass index. The exact times of blood samples and doses were recorded. Analysis was undertaken using NONMEM (version 5). The empirical design supported a one compartment linear model with additive residual error, while the optimal design supported a two compartment linear model with additive residual error as did the model derived from the full data set. A posterior predictive check was performed where the models arising from the empirical and optimal designs were used to predict into the full data set.
This revealed that the optimal-design-derived model was superior to the empirical design model in terms of precision and was similar to the model developed from the full dataset. This study suggests optimal design techniques may be useful, even when the optimized design was based on a model that was misspecified in terms of the structural and statistical models and when the implementation of the optimal designed study deviated from the nominal design.
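What a D-optimal design computes can be illustrated with a hedged Python sketch (not the PFIM implementation used in the study): choose the sampling times that maximise the determinant of an approximate Fisher information matrix. The one-compartment model, dose, parameter values, and time grid below are all assumptions for illustration.

```python
import numpy as np
from itertools import combinations

dose = 100.0  # hypothetical dose

def conc(t, cl, v):
    """One-compartment IV bolus concentration."""
    return (dose / v) * np.exp(-(cl / v) * t)

def fim(times, cl=2.0, v=10.0, eps=1e-5):
    """Approximate Fisher information (additive error, up to a constant):
    J^T J, with J the central-difference sensitivities to (cl, v)."""
    t = np.asarray(times, dtype=float)
    d_cl = (conc(t, cl + eps, v) - conc(t, cl - eps, v)) / (2 * eps)
    d_v = (conc(t, cl, v + eps) - conc(t, cl, v - eps)) / (2 * eps)
    j = np.column_stack([d_cl, d_v])
    return j.T @ j

candidates = np.arange(0.5, 12.5, 0.5)  # feasible sampling grid (hr)
best = max(combinations(candidates, 3),
           key=lambda ts: np.linalg.det(fim(ts)))
print(best)  # grid triple maximising det(FIM)
```

Real population designs (as in PFIM) additionally account for between-subject and residual variability, but the D-criterion being maximised has the same det(FIM) form.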
Abstract:
Traditional vegetation mapping methods use high cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns with transitional gradients from one vegetation community to another. Arbitrary, though often unrealistic, sharp boundaries can be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of Northeastern Australia. The paper presents the full cycle of this vegetation modelling approach including sampling sites, variable selection, model selection, model implementation, internal model assessment, model prediction assessments, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent data set model validation and scale assessments of model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly-studied areas. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
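The idea of fitting an empirical pressure model to simulation output and then inverting it for air data can be sketched in Python. This is a hypothetical single-port quadratic model with invented coefficients, far simpler than the HYFLEX nine-port algorithm, but the calibrate-then-invert structure is the same.

```python
import numpy as np

# Hypothetical "CFD" samples: a port-pressure ratio versus angle of attack
alpha = np.linspace(-10, 10, 21)               # deg
ratio = 1.0 + 0.03 * alpha + 0.002 * alpha**2  # synthetic pressure ratio

# Calibration step: fit an empirical quadratic pressure model
coeffs = np.polyfit(alpha, ratio, 2)

def estimate_alpha(measured, coeffs=coeffs):
    """Invert the quadratic model; return the root closer to zero
    (i.e., inside the assumed flight envelope)."""
    c2, c1, c0 = coeffs
    roots = np.roots([c2, c1, c0 - measured])
    return roots[np.argmin(np.abs(roots))]

# "Flight" measurement generated from the same relationship
true_alpha = 4.0
measured = 1.0 + 0.03 * true_alpha + 0.002 * true_alpha**2
print(estimate_alpha(measured))
```

A real flush air data system solves a multi-port nonlinear system for angle of attack, sideslip, and dynamic pressure simultaneously; the sketch isolates just the model-inversion step.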
The N-15 natural abundance (delta N-15) of ecosystem samples reflects measures of water availability
Abstract:
We assembled a globally-derived data set for site-averaged foliar delta(15)N, the delta(15)N of whole surface mineral soil and corresponding site factors (mean annual rainfall and temperature, latitude, altitude and soil pH). The delta(15)N of whole soil was related to all of the site variables (including foliar delta(15)N) except altitude and, when regressed on latitude and rainfall, provided the best model of these data, accounting for 49% of the variation in whole soil delta(15)N. As single linear regressions, site-averaged foliar delta(15)N was more strongly related to rainfall than was whole soil delta(15)N. A smaller data set showed similar, negative correlations between whole soil delta(15)N, site-averaged foliar delta(15)N and soil moisture variations during a single growing season. The negative correlation between water availability (measured here by rainfall and temperature) and soil or plant delta(15)N fails at the landscape scale, where wet spots are delta(15)N-enriched relative to their drier surroundings. Here we present global and seasonal data, postulate a proximate mechanism for the overall relationship between water availability and ecosystem delta(15)N, and propose a new mechanism accounting for the highly delta(15)N-depleted values found in the foliage and soils of many wet/cold ecosystems. These hypotheses are complemented by documentation of the present gaps in knowledge, suggesting lines of research which will provide new insights into terrestrial N-cycling. Our conclusions are consistent with those of Austin and Vitousek (1998) that foliar (and soil) delta(15)N appear to be related to the residence time of whole ecosystem N.
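The multiple regression underlying the "49% of variation" result above can be sketched in Python on invented data that merely echoes the reported directions of effect (soil delta(15)N declining with rainfall); none of the values below come from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120

# Hypothetical site data: rainfall (mm) and latitude (deg)
rain = rng.uniform(200, 3000, n)
lat = rng.uniform(-60, 60, n)
# Simulated soil d15N: declines with rainfall, rises away from the equator
d15n = 8.0 - 0.002 * rain + 0.02 * np.abs(lat) + rng.normal(0, 1, n)

# Multiple linear regression via least squares: d15N ~ rainfall + |latitude|
X = np.column_stack([np.ones(n), rain, np.abs(lat)])
beta, *_ = np.linalg.lstsq(X, d15n, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((d15n - pred) ** 2) / np.sum((d15n - d15n.mean()) ** 2)
print(beta, r2)  # negative rainfall coefficient; R^2 from the fit
```

The R² printed here plays the same role as the study's 49%: the fraction of variation in whole-soil delta(15)N explained jointly by latitude and rainfall.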
Abstract:
Centrifuge experiments modeling single-phase flow in prototype porous media typically use the same porous medium and permeant. Then, well-known scaling laws are used to transfer the results to the prototype. More general scaling laws that relax these restrictions are presented. For permeants that are immiscible with an accompanying gas phase, model-prototype (i.e., centrifuge model experiment-target system) scaling is demonstrated. Scaling is shown to be feasible for Miller-similar (or geometrically similar) media. Scalings are presented for a more general class, Lisle-similar media, based on the equivalence mapping of Richards' equation onto itself. Whereas model-prototype scaling of Miller-similar media can be realized easily for arbitrary boundary conditions, Lisle-similarity in a finite length medium generally, but not always, involves a mapping to a moving boundary problem. An exception occurs for redistribution in Lisle-similar porous media, which is shown to map to spatially fixed boundary conditions. Complete model-prototype scalings for this example are derived.
Abstract:
We attempt to generate new solutions for the moisture content form of the one-dimensional Richards' [1931] equation using the Lisle [1992] equivalence mapping. This mapping is used as no more general set of transformations exists for mapping the one-dimensional Richards' equation into itself. Starting from a given solution, the mapping has the potential to generate an infinite number of new solutions for a series of nonlinear diffusivity and hydraulic conductivity functions. We first seek new analytical solutions satisfying Richards' equation subject to a constant flux surface boundary condition for a semi-infinite dry soil, starting with the Burgers model. The first iteration produces an existing solution, while subsequent iterations are shown to endlessly reproduce this same solution. Next, we briefly consider the problem of redistribution in a finite-length soil. In this case, Lisle's equivalence mapping is generalized to account for arbitrary initial conditions. As was the case for infiltration, however, it is found that new analytical solutions are not generated using the equivalence mapping, although existing solutions are recovered.
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, then what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models there is a correlation between the rate of seismic energy release and the total root-mean-squared stress and the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce accelerating release of seismic energy with time prior to a large earthquake.
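Accelerating seismic release of the kind discussed above is commonly quantified by fitting a power-law time-to-failure curve to cumulative energy release. The Python sketch below recovers the parameters of such a curve from noise-free synthetic data; the functional form is the standard one from this literature, but all numbers are invented and this is not the paper's statistical model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic accelerating release: cumulative "Benioff strain" following
# the power-law time-to-failure form s(t) = A - B * (t_f - t)^m
t_f = 10.0  # assumed failure (mainshock) time
t = np.linspace(0, 9.5, 60)
strain = 5.0 - 2.0 * (t_f - t) ** 0.3

def model(t, a, b, m, tf=t_f):
    return a - b * (tf - t) ** m

# Recover (A, B, m) by nonlinear least squares
params, _ = curve_fit(model, t, strain, p0=[4.0, 1.0, 0.5])
print(params)
```

An exponent m well below 1 is what makes the release curve accelerate toward the failure time, which is the signature the forecasting literature looks for.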
Abstract:
Background: To investigate the association between selected social and behavioural (infant feeding and preventive dental practices) variables and the presence of early childhood caries in preschool children within the north Brisbane region. Methods: A cross-sectional sample of 2515 children aged four to five years was examined in a preschool setting using prevalence (percentage with caries) and severity (dmft) indices. A self-administered questionnaire obtained information regarding selected social and behavioural variables. The data were modelled using multiple logistic regression analysis at the 5 per cent level of significance. Results: The final explanatory model for caries presence in four to five year old children included the variables breast feeding from three to six months of age (OR=0.7, CI=0.5, 1.0), sleeping with the bottle (OR=1.9, CI=1.5, 2.4), sipping from the bottle (OR=1.6, CI=1.2, 2.0), ethnicity other than Caucasian (OR=1.9, CI=1.4, 2.5), annual family income $20,000-$35,000 (OR=1.7, CI=1.3, 2.3) and annual family income less than $20,000 (OR=2.1, CI=1.5, 2.8). Conclusion: A statistical model for early childhood caries in preschool children within the north Brisbane region has been constructed using selected social and behavioural determinants. Epidemiological data can be used for improved public oral health service planning and resource allocation within the region.
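How an odds ratio like the OR=1.9 reported above arises from binary outcome data can be sketched in Python with a crude 2x2-table estimate on simulated data. This is an illustration only: the data are invented, the effect sizes are merely seeded to resemble the reported ones, and a crude OR omits the adjustment for other covariates that the study's multiple logistic regression provides.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2515  # same sample size as the study, data entirely simulated

# Hypothetical binary predictors loosely mirroring the study's variables
bottle_sleep = rng.integers(0, 2, n)
low_income = rng.integers(0, 2, n)

# Simulate caries with log-odds built from assumed ORs of 1.9 and 2.1
logit = -1.5 + np.log(1.9) * bottle_sleep + np.log(2.1) * low_income
caries = rng.random(n) < 1 / (1 + np.exp(-logit))

# Crude (unadjusted) odds ratio for sleeping with the bottle: (a*d)/(b*c)
a = np.sum(caries & (bottle_sleep == 1)); b = np.sum(~caries & (bottle_sleep == 1))
c = np.sum(caries & (bottle_sleep == 0)); d = np.sum(~caries & (bottle_sleep == 0))
or_crude = (a * d) / (b * c)
print(round(or_crude, 2))  # crude estimate of the simulated effect
```

Fitting a logistic regression instead of tabulating would return coefficients whose exponentials are the adjusted odds ratios, which is what the study reports alongside their confidence intervals.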
Abstract:
Areas of the landscape that are priorities for conservation should be those that are both vulnerable to threatening processes and that, if lost or degraded, will result in conservation targets being compromised. While much attention is directed towards understanding the patterns of biodiversity, much less is given to determining the areas of the landscape most vulnerable to threats. We assessed the relative vulnerability of remaining areas of native forest to conversion to plantations in the ecologically significant temperate rainforest region of south central Chile. The area of the study region is 4.2 million ha and the extent of plantations is approximately 200000 ha. First, the spatial distribution of native forest conversion to plantations was determined. The variables related to the spatial distribution of this threatening process were identified through the development of a classification tree and the generation of a multivariate, spatially explicit statistical model. The model of native forest conversion explained 43% of the deviance and the discrimination ability of the model was high. Predictions were made of where native forest conversion is likely to occur in the future. Due to patterns of climate, topography, soils and proximity to infrastructure and towns, remaining forest areas differ in their relative risk of being converted to plantations. Another factor that may increase the vulnerability of remaining native forest in a subset of the study region is the proposed construction of a highway. We found that 90% of the area of existing plantations within this region is within 2.5 km of roads. When the predictions of native forest conversion were recalculated accounting for the construction of this highway, it was found that approximately 27000 ha of native forest had an increased probability of conversion. The areas of native forest identified to be vulnerable to conversion are outside of the existing reserve network. (C) 2004 Elsevier Ltd. All rights reserved.
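The classification-tree step described above can be sketched in Python with scikit-learn on invented site variables. The simulated rule below merely echoes the paper's finding that conversion concentrates near roads; the variables, thresholds, and sample are all assumptions, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 2000

# Hypothetical site variables: distance to roads (km) and slope (deg)
dist_road = rng.uniform(0, 20, n)
slope = rng.uniform(0, 45, n)

# Simulated conversion rule: near roads and on gentle slopes
converted = (dist_road < 2.5) & (slope < 20)

# Fit a shallow classification tree to recover the rule from the data
X = np.column_stack([dist_road, slope])
tree = DecisionTreeClassifier(max_depth=3).fit(X, converted)
print(tree.score(X, converted))
```

The fitted tree's split thresholds approximate the simulated cutoffs, which is how a classification tree surfaces interpretable variables and breakpoints (e.g., proximity to roads) driving a spatial threatening process.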