904 results for calibration of rainfall-runoff models


Relevance:

100.00%

Publisher:

Abstract:

The huge amount of CCTV footage available makes manual processing of these videos by human operators very burdensome. This has made automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task where the system is trained on normal data and is required to detect events which do not fit the learned ‘normal’ model. There is no precise definition of an abnormal activity; it depends on the context of the scene. Hence, different feature sets are required to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
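As a sketch of the novelty-detection formulation described above, the following example (not the paper's implementation; the feature vectors, component count and threshold are all assumptions) fits a Gaussian mixture model to "normal" feature vectors only and flags test samples whose likelihood under that model is low:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical optical-flow feature vectors (e.g. magnitude/orientation stats
# per spatio-temporal cell); "normal" training data clusters near the origin.
normal_train = rng.normal(0.0, 1.0, size=(500, 4))

# Fit the 'normal' model on normal data only (the novelty-detection setup).
gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_train)

# Score new observations: low log-likelihood under the normal model => anomaly.
normal_test = rng.normal(0.0, 1.0, size=(100, 4))
abnormal_test = rng.normal(6.0, 1.0, size=(100, 4))  # e.g. a fast-moving object

threshold = np.percentile(gmm.score_samples(normal_train), 1)  # 1st percentile
flags_normal = gmm.score_samples(normal_test) < threshold
flags_abnormal = gmm.score_samples(abnormal_test) < threshold
print(flags_normal.mean(), flags_abnormal.mean())  # few vs. nearly all flagged
```

The same scoring scheme applies whatever the feature set, which is why different features (motion, texture) can be swapped in to target different anomaly types.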


Due to knowledge gaps in relation to urban stormwater quality processes, an in-depth understanding of model uncertainty can enhance decision making. Uncertainty in stormwater quality models can originate from a range of sources, such as the complexity of urban rainfall-runoff-stormwater pollutant processes and the paucity of observed data. Unfortunately, studies of epistemic uncertainty, which arises from the simplification of reality, are limited, and such uncertainty is often deemed mostly unquantifiable. This paper presents a statistical modelling framework for ascertaining epistemic uncertainty associated with pollutant wash-off under a regression modelling paradigm, using Ordinary Least Squares Regression (OLSR) and Weighted Least Squares Regression (WLSR) methods with a Bayesian/Gibbs sampling statistical approach. The study results confirmed that WLSR, assuming probability-distributed data, provides more realistic uncertainty estimates of the observed and predicted wash-off values than OLSR modelling. It was also noted that the Bayesian/Gibbs sampling approach is superior to the classical statistical and deterministic approaches most commonly adopted in water quality modelling. The study outcomes confirmed that the prediction error associated with wash-off replication is relatively high due to limited data availability. The uncertainty analysis also highlighted the variability of the wash-off modelling coefficient k as a function of complex physical processes, which are primarily influenced by surface characteristics and rainfall intensity.
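The OLSR/WLSR contrast can be illustrated with a minimal sketch (the synthetic data and the weighting rule are assumptions; the paper's Bayesian/Gibbs sampling machinery is not reproduced here). WLSR down-weights the observations known to be noisier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical wash-off data: pollutant load vs. rainfall intensity, with
# noise that grows with intensity (heteroscedastic), as is common in wash-off.
intensity = rng.uniform(5.0, 80.0, size=60)
true_k = 0.35
load = true_k * intensity + rng.normal(0.0, 0.05 * intensity)

X = np.column_stack([np.ones_like(intensity), intensity])

# OLSR: equal weight for every observation.
beta_ols = np.linalg.lstsq(X, load, rcond=None)[0]

# WLSR: down-weight the noisier high-intensity observations (weight ~ 1/var).
w = 1.0 / intensity**2
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ load)

print(beta_ols[1], beta_wls[1])  # both near the true coefficient k = 0.35
```

Under heteroscedastic noise both estimators are unbiased, but the weighted fit yields uncertainty estimates that reflect the actual error structure, which is the study's central point.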


The motion response of marine structures in waves can be studied using finite-dimensional linear-time-invariant approximating models. These models, obtained using system identification with data computed by hydrodynamic codes, find application in offshore training simulators, hardware-in-the-loop simulators for positioning control testing, and also in initial designs of wave-energy conversion devices. Different proposals have appeared in the literature to address the identification problem in both the time and frequency domains, and recent work has highlighted the superiority of the frequency-domain methods. This paper summarises practical frequency-domain estimation algorithms that use constraints on model structure and parameters to refine the search for approximating parametric models. Practical issues associated with the identification are discussed, including the influence of radiation model accuracy on force-to-motion models, which are usually the ultimate modelling objective. The illustrative examples in the paper are obtained using a freely available MATLAB toolbox developed by the authors, which implements the estimation algorithms described.
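A minimal frequency-domain estimation sketch in the same spirit (this is not the authors' MATLAB toolbox; the target transfer function and the Levy-style linearised least-squares fit are illustrative assumptions):

```python
import numpy as np
from scipy.signal import freqs

# "True" transfer function (a hypothetical stand-in for hydrodynamic-code
# frequency-response data): H(s) = 4 / (s^2 + 2 s + 4).
b_true, a_true = [4.0], [1.0, 2.0, 4.0]
w = np.linspace(0.1, 10.0, 200)
_, H = freqs(b_true, a_true, worN=w)

# Levy's linearised least squares: fit H(s) ~ b0 / (s^2 + a1*s + a0)
# by solving  b0 - H*(a1*s + a0) = H*s^2  for the unknowns (b0, a1, a0).
s = 1j * w
A = np.column_stack([np.ones_like(s), -H * s, -H])   # columns: b0, a1, a0
rhs = H * s**2
A_ri = np.vstack([A.real, A.imag])                   # stack real/imag parts
rhs_ri = np.concatenate([rhs.real, rhs.imag])
b0, a1, a0 = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)[0]

print(b0, a1, a0)  # recovers approximately 4, 2, 4
```

Constraints on model structure (fixed relative degree, enforced stability, known zero at s = 0 for radiation models) are what distinguish the practical algorithms the paper summarises from this unconstrained fit.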


The dynamics describing the motion response of a marine structure in waves can be represented within a linear framework by the Cummins Equation. This equation contains a convolution term that represents the component of the radiation forces associated with fluid memory effects. Several methods have been proposed in the literature for the identification of parametric models to approximate and replace this convolution term. This replacement can facilitate the model implementation in simulators and the analysis of motion control designs. Some of the reported identification methods consider the problem in the time domain while other methods consider the problem in the frequency domain. This paper compares the application of these identification methods. The comparison is based not only on the quality of the estimated models, but also on the ease of implementation, ease of use, and the flexibility of the identification method to incorporate prior information related to the model being identified. To illustrate the main points arising from the comparison, a particular example based on the coupled vertical motion of a modern containership vessel is presented.
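The role of the parametric replacement can be sketched as follows (the fluid-memory kernel here is a hypothetical second-order stand-in, not hydrodynamic-code output): the convolution term is first evaluated directly, then reproduced by simulating an equivalent linear time-invariant system, which is what a simulator would do at run time:

```python
import numpy as np
from scipy.signal import TransferFunction, impulse, lsim

# Hypothetical radiation fluid-memory kernel, taken as the impulse response
# of a known 2nd-order system so that the comparison is self-consistent.
tf = TransferFunction([4.0], [1.0, 2.0, 4.0])
t = np.linspace(0.0, 20.0, 2001)
dt = t[1] - t[0]
_, k = impulse(tf, T=t)

# Input velocity signal (e.g. heave velocity in waves).
v = np.sin(1.5 * t)

# Direct evaluation of the convolution term in the Cummins equation.
mu_conv = np.convolve(k, v)[: t.size] * dt

# The same term via the parametric replacement: simulate the LTI system
# driven by v -- no convolution integral is needed at run time.
_, mu_ss, _ = lsim(tf, U=v, T=t)

print(np.max(np.abs(mu_conv - mu_ss)))  # small discretization error
```

The quality of the estimated model, the point on which the paper's comparison turns, is exactly how closely the simulated LTI output tracks the true convolution.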


Multivariate predictive models are widely used tools for the assessment of aquatic ecosystem health, and models have been successfully developed for the prediction and assessment of aquatic macroinvertebrates, diatoms, local stream habitat features and fish. We evaluated the ability of a modelling method based on the River InVertebrate Prediction and Classification System (RIVPACS) to accurately predict freshwater fish assemblage composition and assess aquatic ecosystem health in rivers and streams of south-eastern Queensland, Australia. The predictive model was developed, validated and tested in a region of comparatively high environmental variability due to the unpredictable nature of rainfall and river discharge. The model provided sufficiently accurate and precise predictions of species composition and was sensitive enough to distinguish test sites impacted by several common types of human disturbance (particularly impacts associated with catchment land use and the associated local riparian, in-stream habitat and water quality degradation). The total number of fish species available for prediction was low in comparison to similar applications of multivariate predictive models based on other indicator groups, yet the accuracy and precision of our model were comparable to the outcomes of such studies. In addition, our model, developed from sites sampled on one occasion and in one season only (winter), was able to accurately predict fish assemblage composition at sites sampled during other seasons and years, provided that they were not subject to unusually extreme environmental conditions (e.g. extended periods of low flow that restricted fish movement or resulted in habitat desiccation and local fish extinctions).
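A RIVPACS-style observed-to-expected (O/E) calculation can be sketched as follows (all probabilities below are hypothetical; the ≥50% occurrence cutoff is the convention commonly used with such models):

```python
import numpy as np

# RIVPACS-style expected richness: for a test site, the probability that each
# taxon occurs is its frequency in each reference-site group, weighted by the
# site's probability of belonging to that group (hypothetical numbers).
group_membership = np.array([0.6, 0.3, 0.1])           # P(site in group g)
taxon_freq = np.array([[0.9, 0.8, 0.1],                # P(taxon | group)
                       [0.7, 0.2, 0.0],
                       [0.1, 0.6, 0.9],
                       [0.8, 0.9, 0.4]])

p_taxon = taxon_freq @ group_membership                # P(taxon at site)
common = p_taxon >= 0.5                                # usual >=50% cutoff
expected = p_taxon[common].sum()                       # E

observed_taxa = np.array([True, False, True, True])    # field sample
observed = np.count_nonzero(observed_taxa & common)    # O (of predicted taxa)

print(observed / expected)   # O/E near 1 => healthy; well below 1 => impaired
```

The assessment logic is the same whether the taxa are macroinvertebrates or, as here, fish; only the size of the candidate taxon list changes.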


Finite element (FE) model studies have made important contributions to our understanding of the functional biomechanics of the lumbar spine. However, if a model is used to answer clinical and biomechanical questions about a certain population, the population's inherently large inter-subject variability has to be considered. Current FE model studies, however, generally account for only a single distinct spinal geometry with one set of material properties. This raises questions concerning their predictive power, their range of results and their agreement with in vitro and in vivo values. Eight well-established FE models of the lumbar spine (L1-5) from different research centres around the globe were subjected to pure and combined loading modes and compared to in vitro and in vivo measurements of intervertebral rotations, disc pressures and facet joint forces. Under pure moment loading, the predicted L1-5 rotations of almost all models fell within the reported in vitro ranges, and their median values differed on average by only 2° for flexion-extension, 1° for lateral bending and 5° for axial rotation. Predicted median facet joint forces and disc pressures were also in good agreement with published median in vitro values. However, the ranges of predictions were larger and exceeded those reported in vitro, especially for the facet joint forces. For all combined loading modes, except for flexion, predicted median segmental intervertebral rotations and disc pressures were in good agreement with measured in vivo values. In light of high inter-subject variability, the generalization of results of a single model to a population remains a concern. This study demonstrated that the pooled median of individual model results, similar to a probabilistic approach, can be used as an improved predictive tool to estimate the response of the lumbar spine.


The purpose of this book by two Australian authors is to: introduce the audience to the full complement of contextual elements found within program theory; offer practical suggestions for engaging with theories of change, theories of action and logic models; and provide substantial evidence for this approach through scholarly literature and practice case studies, together with the authors' combined 60 years of experience.


Designed for undergraduate and postgraduate students, academic researchers and industrial practitioners, this book provides comprehensive case studies on numerical computing of industrial processes and step-by-step procedures for conducting industrial computing. It assumes minimal knowledge in numerical computing and computer programming, making it easy to read, understand and follow. Topics discussed include fundamentals of industrial computing, finite difference methods, the Wavelet-Collocation Method, the Wavelet-Galerkin Method, High Resolution Methods, and comparative studies of various methods. These are discussed using examples of carefully selected models from real processes of industrial significance. The step-by-step procedures in all these case studies can be easily applied to other industrial processes without a need for major changes and thus provide readers with useful frameworks for the applications of engineering computing in fundamental research problems and practical development scenarios.
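As a flavour of the finite difference methods the book covers, here is a minimal explicit scheme for the 1-D heat equation (the grid, diffusivity and initial condition are illustrative assumptions, not examples taken from the book):

```python
import numpy as np

# Explicit finite-difference sketch for the 1-D heat equation
# u_t = alpha * u_xx on [0, 1] with u = 0 at both ends.
alpha, nx, nt = 1.0, 51, 400
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha       # respects the stability limit dt <= dx^2/(2*alpha)
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)          # initial temperature profile

for _ in range(nt):
    # Central difference in space, forward Euler in time (interior nodes only).
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# The exact solution decays as exp(-pi^2 * alpha * t); compare at t = nt * dt.
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * nt * dt)
print(np.max(np.abs(u - exact)))  # small discretization error
```

The same template (discretise, step, compare) is the step-by-step pattern the book applies to industrial processes of much higher complexity.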


Due to its ability to represent intricate systems with material nonlinearities as well as irregular loading, boundary, geometrical and material domains, the finite element (FE) method has been recognized as an important computational tool in spinal biomechanics. Current FE models generally account for a single distinct spinal geometry with one set of material properties despite inherently large inter-subject variability. The uncertainty and high variability in tissue material properties, geometry, loading and boundary conditions have cast doubt on the reliability of their predictions and their comparability with reported in vitro and in vivo values. A multicenter study was undertaken to compare the results of eight well-established models of the lumbar spine that have been developed, validated and applied for many years. Models were subjected to pure and combined loading modes and their predictions were compared to in vitro and in vivo measurements for intervertebral rotations, disc pressures and facet joint forces. Under pure moment loading, the predicted L1-5 rotations of almost all models fell within the reported in vitro ranges; their median values differed on average by only 2° for flexion-extension, 1° for lateral bending and 5° for axial rotation. Predicted median facet joint forces and disc pressures were also in good agreement with previously published median in vitro values. However, the ranges of predictions were larger and exceeded the in vitro ranges, especially for facet joint forces. For all combined loading modes, except for flexion, predicted median segmental intervertebral rotations and disc pressures were in good agreement with in vivo values. The simulations yielded median facet joint forces of 0 N in flexion, 38 N in extension, 14 N in lateral bending and 60 N in axial rotation that could not be validated due to the paucity of in vivo facet joint force measurements.
In light of high inter-subject variability, one must be cautious when generalizing predictions obtained from a single deterministic model. This study demonstrates, however, that the predictive power increases when FE models are combined. The median of the individual numerical results can hence be used as an improved tool to estimate the response of the lumbar spine.
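The pooled-median idea can be sketched in a few lines (the eight model predictions and the reference value below are hypothetical, not values from the study):

```python
import numpy as np

# Hypothetical flexion-extension rotations (degrees) predicted by eight FE
# models for one load case, plus an assumed in vitro reference value.
model_predictions = np.array([8.1, 9.4, 7.2, 10.3, 8.8, 6.9, 11.0, 9.0])
in_vitro_reference = 8.9

pooled_median = np.median(model_predictions)

# Compare the pooled median's error with the typical single-model error.
errors_individual = np.abs(model_predictions - in_vitro_reference)
error_pooled = abs(pooled_median - in_vitro_reference)
print(error_pooled, errors_individual.mean())
```

The median is preferred over the mean here because it is robust to a single outlying model, mimicking a cheap probabilistic ensemble.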


Constructed wetlands are among the most common Water Sensitive Urban Design (WSUD) measures for stormwater treatment. These systems have been extensively studied to understand their performance and influential treatment processes. Unfortunately, most past studies have been undertaken considering a wetland system as a lumped system, with a primary focus on the reduction of the event mean concentration (EMC) values of specific pollutant species or on total pollutant load removal. This research study adopted an innovative approach by partitioning the inflow runoff hydrograph and then investigating treatment performance in each partition and its relationship with a range of hydraulic factors. The study outcomes confirmed that, influenced by rainfall characteristics, the constructed wetland displays different treatment characteristics for the initial and later sectors of the runoff hydrograph. The treatment of small rainfall events (<15 mm) is comparatively better at the beginning of runoff events, while the pollutant load reductions for large rainfall events (>15 mm) are generally lower at the beginning and gradually increase towards the end of rainfall events. This highlights the importance of ensuring that the inflow into a constructed wetland has low turbulence in order to achieve consistent treatment performance for both small and large rainfall events.
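The hydrograph-partitioning idea can be sketched as follows (the hydrograph, concentrations, treatment factor and partition boundary are all hypothetical stand-ins for monitored data):

```python
import numpy as np

# Hypothetical inflow hydrograph (flow Q in L/s) and pollutant concentration
# C (mg/L), sampled every minute over one runoff event.
t = np.arange(0, 60)                                       # minutes
Q = np.interp(t, [0, 15, 59], [2.0, 20.0, 1.0])            # rise then recession
C_in = np.interp(t, [0, 10, 59], [120.0, 40.0, 20.0])      # first-flush pattern
C_out = 0.5 * C_in                                         # assumed 50% treatment

def emc(Q, C):
    """Event mean concentration: total load / total volume."""
    return np.sum(Q * C) / np.sum(Q)

# Lumped view vs. the partitioned view: split the hydrograph into initial and
# later sectors and assess the treatment in each separately.
split = 20                                                 # boundary (assumed)
for name, sl in [("whole event", slice(None)),
                 ("initial sector", slice(0, split)),
                 ("later sector", slice(split, None))]:
    print(name, round(emc(Q[sl], C_in[sl]), 1), round(emc(Q[sl], C_out[sl]), 1))
```

A lumped EMC hides exactly the initial-vs-later contrast that the partitioned figures expose.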


Railway capacity determination and expansion are very important topics. In prior research, however, the competition between different entities, such as train services and train types on different network corridors, has been ignored, poorly modelled, or assumed to be static. In response, a comprehensive set of multi-objective models has been formulated in this article to perform a trade-off analysis. These models determine the total absolute capacity of railway networks as the most equitable solution according to a clearly defined set of competing objectives. The models also perform a sensitivity analysis of capacity with respect to those competing objectives. The models were extensively tested on a case study and their significant worth is shown. They were solved using a variety of techniques, of which an adaptive ε-constraint method proved most effective. In order to identify only the best solution, a simulated annealing meta-heuristic was implemented and tested; a linearisation technique based upon separable programming was also developed and shown to be superior in terms of solution quality, though far less so in terms of computational time.
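The ε-constraint method referred to above can be sketched on a toy two-objective linear program (all coefficients hypothetical): one objective is optimised while the other is constrained to a sweep of minimum levels, tracing out the trade-off frontier:

```python
import numpy as np
from scipy.optimize import linprog

# Toy competing-capacity model: x1, x2 are trains/day of two services sharing
# track time via x1 + 2*x2 <= 10. Epsilon-constraint: maximize f1 = x1 while
# requiring f2 = x2 >= eps, then sweep eps to trace the Pareto frontier.
pareto = []
for eps in np.linspace(0.0, 4.0, 5):
    res = linprog(c=[-1.0, 0.0],                   # maximize x1
                  A_ub=[[1.0, 2.0], [0.0, -1.0]],  # shared track; x2 >= eps
                  b_ub=[10.0, -eps],
                  bounds=[(0, 8), (0, 4)])
    pareto.append((res.x[0], res.x[1]))

for x1, x2 in pareto:
    print(round(x1, 1), round(x2, 1))  # x1 falls as the x2 requirement grows
```

The adaptive variant cited in the article refines how the eps levels are chosen; this fixed sweep conveys the mechanism only.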


Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
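A minimal rejection-ABC sketch in the spirit of the study (the spreading model, prior and tolerance below are stand-in assumptions, not the paper's discrete model or data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the spreading experiment: leading-edge position grows
# roughly like sqrt(D * t) (purely illustrative).
t = np.array([12.0, 24.0, 36.0, 48.0])          # observation times (h)
D_true = 1000.0                                 # micron^2/h, hypothetical

def simulate(D):
    return np.sqrt(D * t) + rng.normal(0.0, 5.0, size=t.size)

observed = simulate(D_true)

# Rejection ABC: draw D from the prior, keep draws whose simulated summary
# statistic lies within a tolerance of the observed one.
prior_draws = rng.uniform(100.0, 3000.0, size=20000)
accepted = [D for D in prior_draws
            if np.linalg.norm(simulate(D) - observed) < 20.0]

post = np.array(accepted)
print(post.mean(), post.std() / post.mean())    # posterior mean and CV
```

The CV of the accepted sample is the ABC analogue of the 2-6% uncertainty the study reports for D; tighter tolerances trade acceptance rate for posterior accuracy.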


Background: Ephrin-B2 is the sole physiologically-relevant ligand of the receptor tyrosine kinase EphB4, which is over-expressed in many epithelial cancers, including 66% of prostate cancers, and contributes to cancer cell survival, invasion and migration. Crucially, however, the cancer-promoting EphB4 signalling pathways are independent of interaction with its ligand ephrin-B2, as activation of ligand-dependent signalling causes tumour suppression. Ephrin-B2, however, is often found on the surface of endothelial cells of the tumour vasculature, where it can regulate angiogenesis to support tumour growth. Proteolytic cleavage of endothelial cell ephrin-B2 has previously been suggested as one mechanism whereby the interaction between tumour cell-expressed EphB4 and endothelial cell ephrin-B2 is regulated to support both cancer promotion and angiogenesis.

Methods: An in silico approach was used to search accessible surfaces of 3D protein models for cleavage sites for the key prostate cancer serine protease, KLK4; this identified murine ephrin-B2 as a potential KLK4 substrate. Mouse ephrin-B2 was then confirmed as a KLK4 substrate by in vitro incubation of recombinant mouse ephrin-B2 with active recombinant human KLK4. Cleavage products were visualised by SDS-PAGE, silver staining and Western blot, and confirmed by N-terminal sequencing.

Results: At low molar ratios, KLK4 cleaved murine ephrin-B2, whereas other prostate-specific KLK family members (KLK2 and KLK3/PSA) were less efficient, suggesting that cleavage was KLK4-selective. The primary KLK4 cleavage site in murine ephrin-B2 was verified and shown to correspond to one of the in silico predicted sites, between extracellular domain residues arginine 178 and asparagine 179. Surprisingly, the highly homologous human ephrin-B2 was poorly cleaved by KLK4 at these low molar ratios, likely due to the three amino acid differences at this primary cleavage site.

Conclusion: These data suggest that in in vivo mouse xenograft models, endogenous mouse ephrin-B2, but not human tumour ephrin-B2, may be a downstream target of cancer cell-secreted human KLK4. This is a critical consideration when interpreting data from murine explants of human EphB4+/KLK4+ cancer cells, such as prostate cancer cells, where differential effects may be seen in mouse models as opposed to human clinical situations.
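The in silico screening step can be caricatured with a toy motif scan (the sequence is invented; the Arg/Lys P1 preference reflects the trypsin-like specificity generally attributed to KLK4, and real screening additionally requires the surface-accessibility checks on 3D models described above):

```python
# Minimal sketch of a sequence-level cleavage-site scan (illustrative only):
# list candidate sites where the residue before the scissile bond (P1) is
# Arg or Lys, the typical preference of trypsin-like serine proteases.
sequence = "GAIERLNASTARNDYA"   # hypothetical extracellular fragment

candidates = [i + 1 for i, aa in enumerate(sequence) if aa in "RK"]
for pos in candidates:
    p1 = sequence[pos - 1]
    p1prime = sequence[pos] if pos < len(sequence) else "-"
    print(f"cleave after residue {pos} ({p1}|{p1prime})")
```

Note that the second hit in this toy sequence is an Arg|Asn bond, the same residue pair reported for the verified murine site.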


This study compares Value-at-Risk (VaR) measures for Australian banks over a period that includes the Global Financial Crisis (GFC) to determine whether methodology and parameter selection are important for the capital adequacy holdings that will ultimately support a bank in a crisis period. The VaR methodology promoted under Basel II was widely criticised during the GFC for its failure to capture downside risk. However, results from this study indicate that 1-year parametric and historical models produce better measures of VaR than models with longer time frames. VaR estimates produced using Monte Carlo simulations show a high percentage of violations, but with a lower average violation magnitude when violations occur. VaR estimates produced by the ARMA-GARCH model also show a relatively high percentage of violations; however, the average magnitude of a violation is quite low. Our findings support the design of the revised Basel II VaR methodology, which has also been adopted under Basel III.
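Historical and parametric VaR, and the violation count used to judge them, can be sketched as follows (the returns are synthetic; the 1-year window and 99% confidence level are illustrative choices, not the study's data):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical daily returns for roughly one year of trading; a short window
# reacts faster to a change in volatility, which is the study's point.
returns = rng.normal(0.0, 0.012, size=250)
confidence = 0.99

# Historical VaR: the empirical 1st percentile of the return distribution.
var_hist = -np.percentile(returns, 100 * (1 - confidence))

# Parametric (variance-covariance) VaR: assumes normally distributed returns.
var_param = -(returns.mean() + norm.ppf(1 - confidence) * returns.std(ddof=1))

print(round(var_hist, 4), round(var_param, 4))  # both near 2.33 * sigma

# A "violation" occurs when the next day's loss exceeds the VaR estimate;
# for a well-calibrated 99% VaR, violations should occur on about 1% of days.
next_day = rng.normal(0.0, 0.012, size=250)
violations = np.mean(next_day < -var_param)
print(violations)
```

Backtests of exactly this kind (violation frequency plus violation magnitude) are the criteria the study uses to rank the competing methodologies.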


Molecular phylogenetic studies of homologous sequences of nucleotides often assume that the underlying evolutionary process was globally stationary, reversible, and homogeneous (SRH), and that a model of evolution with one or more site-specific and time-reversible rate matrices (e.g., the GTR rate matrix) is enough to accurately model the evolution of data over the whole tree. However, an increasing body of data suggests that evolution under these conditions is an exception, rather than the norm. To address this issue, several non-SRH models of molecular evolution have been proposed, but they either ignore heterogeneity in the substitution process across sites (HAS) or assume it can be modeled accurately using the Γ distribution. As an alternative to these models of evolution, we introduce a family of mixture models that approximate HAS without the assumption of an underlying predefined statistical distribution. This family of mixture models is combined with non-SRH models of evolution that account for heterogeneity in the substitution process across lineages (HAL). We also present two algorithms for searching model space and identifying an optimal model of evolution that is less likely to over- or underparameterize the data. The performance of the two new algorithms was evaluated using alignments of nucleotides with 10 000 sites simulated under complex non-SRH conditions on a 25-tipped tree. The algorithms were found to be very successful, identifying the correct HAL model with a 75% success rate (the average success rate for assigning rate matrices to the tree's 48 edges was 99.25%) and, for the correct HAL model, identifying the correct HAS model with a 98% success rate. Finally, parameter estimates obtained under the correct HAL-HAS model were found to be accurate and precise.
The merits of our new algorithms were illustrated with an analysis of 42 337 second codon sites extracted from a concatenation of 106 alignments of orthologous genes encoded by the nuclear genomes of Saccharomyces cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, S. castellii, S. kluyveri, S. bayanus, and Candida albicans. Our results show that second codon sites in the ancestral genome of these species contained 49.1% invariable sites, 39.6% variable sites belonging to one rate category (V1), and 11.3% variable sites belonging to a second rate category (V2). The ancestral nucleotide content was found to differ markedly across these three sets of sites, and the evolutionary processes operating at the variable sites were found to be non-SRH and best modeled by a combination of eight edge-specific rate matrices (four for V1 and four for V2). The number of substitutions per site at the variable sites also differed markedly, with sites belonging to V1 evolving slower than those belonging to V2 along the lineages separating the seven species of Saccharomyces. Finally, sites belonging to V1 appeared to have ceased evolving along the lineages separating S. cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, and S. bayanus, implying that they might have become so selectively constrained that they could be considered invariable sites in these species.
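The building block behind such substitution models can be sketched as follows (Jukes-Cantor matrices are used purely for illustration; the study's edge-specific matrices are far richer, and non-SRH modelling means different edges receive different Q matrices):

```python
import numpy as np
from scipy.linalg import expm

# A Markov substitution model turns a rate matrix Q into substitution
# probabilities over a branch of length t via P(t) = expm(Q * t).
def jc_rate_matrix(mu):
    """Jukes-Cantor: equal rates mu/3 between all pairs of A, C, G, T."""
    Q = np.full((4, 4), mu / 3.0)
    np.fill_diagonal(Q, -mu)
    return Q

# Two hypothetical rate categories standing in for the V1 (slow) and
# V2 (fast) variable-site classes described above.
Q_slow, Q_fast = jc_rate_matrix(0.1), jc_rate_matrix(1.0)

P_slow = expm(Q_slow * 1.0)   # branch length t = 1
P_fast = expm(Q_fast * 1.0)

# Rows sum to 1 (valid transition matrices); fast sites change more often.
print(P_slow[0, 0], P_fast[0, 0])  # probability of no net change at A
```

A mixture model in the paper's sense assigns each site a weight over such categories instead of forcing all rates through one predefined (e.g. Γ) distribution.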