39 results for Empirical Models

in Deakin Research Online - Australia


Relevance: 70.00%

Abstract:

Land-use patterns in the catchment areas of Sri Lankan reservoirs, quantified using Geographical Information Systems (GIS), were used to develop quantitative models for fish yield prediction. The validity of these models was evaluated by applying them to five reservoirs that were not used in model development, and by comparing the predictions with actual fish yield data for these reservoirs collected by an independent body. The robustness of the predictive models was tested by principal component analysis (PCA) of limnological characteristics, land-use patterns of the catchments and fish yields. The fish yields predicted for the five Sri Lankan reservoirs by the empirical models, which are based on the ratios of forest cover and/or shrub cover to reservoir capacity or reservoir area, were in close agreement with the observed fish yields. The scores of PCA ordinations of productivity-related limnological parameters and of land-use patterns were linearly related to fish yields. The relationship between the PCA scores of limnological characteristics and land-use types had the appropriate algebraic form, which substantiates the influence of limnological factors and land-use types on reservoir fish yields. The relatively high predictive power of the models developed on the basis of GIS methodologies suggests they can be used for more accurate assessment of reservoir fisheries. The study supports the importance of, and the need for, an integrated management strategy for the whole watershed to enhance fish yields.
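
A minimal sketch of how an empirical yield model of this kind might be fitted and applied; the data values, units and single-predictor linear form below are hypothetical illustrations, not the study's actual reservoirs or coefficients:

```python
import numpy as np

# Hypothetical data: ratio of forest cover to reservoir capacity for six
# reservoirs, and their observed fish yields (arbitrary consistent units).
ratio = np.array([0.8, 1.5, 2.1, 2.9, 3.4, 4.2])
yield_obs = np.array([95.0, 128.0, 155.0, 190.0, 212.0, 248.0])

# Fit a simple empirical model  yield = a + b * ratio  by least squares.
b, a = np.polyfit(ratio, yield_obs, 1)

def predict(r):
    """Predicted fish yield for a reservoir with forest-cover ratio r."""
    return a + b * r

# Apply the model to a "new" reservoir not used in fitting,
# mimicking the validation step described in the abstract.
pred = predict(2.5)
```

Validation then amounts to comparing such predictions against independently collected yield data for reservoirs held out of the fit.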

Relevance: 70.00%

Abstract:

Numerous mathematical models have been developed to evaluate both the initial and the transient-stage removal efficiency of deep bed filters. Microscopic models, using either trajectory analysis or convective-diffusion equations, were used to compute the initial removal efficiency. These models predicted the removal efficiency quantitatively under favorable filtration conditions, but underestimated it under unfavorable conditions. Semi-empirical formulations were therefore developed to compute initial removal efficiencies under unfavorable conditions. Correcting for the adhesion of particles onto filter grains also improved the removal efficiencies obtained from trajectory analysis. Macroscopic models were used to predict the transient-stage removal efficiency of deep bed filters. The O'Melia and Ali model assumed that particles are removed both by the filter grains and by particles already deposited on the grains; semi-empirical models of this kind were used to predict the ripening of filtration. Several modifications were made to the O'Melia and Ali model to predict the deterioration of particle removal during the transient stages of filtration. Models considering the removal of particles under favorable conditions and the accumulation of charges on the filter grains during the transient stages were also developed. This article evaluates those models and their applicability under different operating conditions of filtration.
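
A toy numerical sketch of the ripening idea described above, in which deposited particles act as additional collectors so that removal efficiency rises with the specific deposit; the functional form and all constants are illustrative placeholders, not the published O'Melia and Ali model:

```python
# Illustrative ripening model: single-collector removal efficiency grows
# with the specific deposit sigma, because already-deposited particles
# capture further incoming particles. All values are hypothetical.
eta0 = 0.05   # clean-bed removal efficiency (illustrative)
k = 40.0      # ripening coefficient (illustrative)
c_in = 1.0    # influent particle concentration (arbitrary units)
dt = 1.0      # time step (arbitrary units)

sigma = 0.0   # specific deposit accumulated so far
history = []
for step in range(50):
    eta = eta0 * (1.0 + k * sigma)    # efficiency rises as deposit grows
    removed = eta * c_in * dt * 1e-3  # deposit added during this step
    sigma += removed
    history.append(eta)

# During this transient (ripening) stage, eta increases monotonically;
# the deterioration stage seen in modified models is not represented here.
```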

Relevance: 60.00%

Abstract:

The influence of the grain size on the deformation of Mg–3Al–1Zn was examined in compression at 300 °C. At low strains the flow stress increases with increasing grain size. This is interpreted in terms of dynamic recrystallization. Empirical models of dynamic recrystallization are developed and employed to generate a microstructure map.

Relevance: 60.00%

Abstract:

A major challenge facing freshwater ecologists and managers is the development of models that link stream ecological condition to catchment scale effects, such as land use. Previous attempts to make such models have followed two general approaches. The bottom-up approach employs mechanistic models, which can quickly become too complex to be useful. The top-down approach employs empirical models derived from large data sets, and has often suffered from large amounts of unexplained variation in stream condition.

We believe that the limited success of both modelling approaches may be at least partly explained by scientists considering too broad a range of catchment types. Stratifying large sets of catchments into groups of similar type prior to modelling may therefore improve both types of models. This paper describes preliminary work using a Bayesian classification software package, 'Autoclass' (Cheeseman and Stutz 1996), to create classes of catchments within the Murray Darling Basin based on physiographic data.

Autoclass uses a model-based classification method that employs finite mixture modelling and trades off model fit versus complexity, leading to a parsimonious solution. The software provides information on the posterior probability that the classification is ‘correct’ and also probabilities for alternative classifications. The importance of each attribute in defining the individual classes is calculated and presented, assisting description of the classes. Each case is ‘assigned’ to a class based on membership probability, but the probability of membership of other classes is also provided. This feature deals very well with cases that do not fit neatly into a larger class. Lastly, Autoclass requires the user to specify the measurement error of continuous variables.
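
A minimal illustration of the finite-mixture idea underlying Autoclass: a two-component one-dimensional Gaussian mixture fitted by EM on synthetic data, retaining full class-membership probabilities for each case. Autoclass additionally searches over the number of classes, trades off fit against complexity, and handles measurement error, all of which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "catchment attribute" drawn from two latent classes.
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(6.0, 1.0, 200)])

# EM for a two-component Gaussian mixture.
mu = np.array([x.min(), x.max()])   # crude initial means
var = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior probability that each case belongs to each class.
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means and variances.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

# Each case is "assigned" to its most probable class, but the membership
# probabilities in resp are retained, as Autoclass does for uncertain cases.
labels = resp.argmax(axis=1)
```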

Catchments were derived from the Australian digital elevation model. Physiographic data were derived from national spatial data sets. There was very little information on measurement errors for the spatial data, so a conservative error of 5% of the data range was adopted for all continuous attributes. The incorporation of uncertainty into spatial data sets remains a research challenge.

The results of the classification were very encouraging. The software found nine classes of catchments in the Murray Darling Basin. The classes grouped together geographically, and followed altitude and latitude gradients, despite the fact that these variables were not included in the classification. Descriptions of the classes reveal very different physiographic environments, ranging from dry and flat catchments (i.e. lowlands), through to wet and hilly catchments (i.e. mountainous areas). Rainfall and slope were two important discriminators between classes. These two attributes, in particular, will affect the ways in which the stream interacts with the catchment, and can thus be expected to modify the effects of land use change on ecological condition. Thus, realistic models of the effects of land use change on streams would differ between the different types of catchments, and sound management practices will differ.

A small number of catchments were assigned to their primary class with relatively low probability. These catchments lie on the boundaries of groups of catchments, with the second most likely class being an adjacent group. The locations of these ‘uncertain’ catchments show that the Bayesian classification dealt well with cases that do not fit neatly into larger classes.

Although the results are intuitive, we cannot yet assess whether the classifications described in this paper would assist the modelling of catchment scale effects on stream ecological condition. It is most likely that catchment classification and modelling will be an iterative process, where the needs of the model are used to guide classification, and the results of classifications used to suggest further refinements to models.

Relevance: 60.00%

Abstract:

Although economists have developed a series of approaches to modelling the existence of labour market discrimination, rarely is this topic examined by analysing self-report survey data. After reviewing theories and empirical models of labour market discrimination, we examine self-reported experience of discrimination at different stages in the labour market, among three racial groups utilising U.S. data from the 2001-2003 National Survey of American Life. Our findings indicate that African Americans and Caribbean blacks consistently report more experience of discrimination in the labour market than their non-Hispanic white counterparts. At different stages of the labour market, including hiring, termination and promotion, these groups are more likely to report discrimination than non-Hispanic whites. After controlling for social desirability bias and several human capital and socio-demographic covariates, the results remain robust for African Americans. However, the findings for Caribbean blacks were no longer significant after adjusting for social desirability bias. Although self-report data is rarely utilised to assess racial discrimination in labour economics, our study confirms the utility of this approach as demonstrated in similar research from other disciplines. Our results indicate that after adjusting for relevant confounders self-report survey data is a viable approach to estimating racial discrimination in the labour market. Implications of the study and directions for future research are provided.

Relevance: 60.00%

Abstract:

Predicting hydrogen sulphide concentrations in sewer networks with modelling tools will help many stakeholders design appropriate mitigation strategies. However, hydrogen sulphide modelling in a sewer network depends crucially on hydraulic modelling of the sewer, and establishing precise hydrogen sulphide and hydraulic models requires detailed and accurate information about the sewer network structure and the model parameters. This paper outlines a novel approach to developing hydraulic and hydrogen sulphide models that predict the concentration of hydrogen sulphide in a sewer network. The approach combines the calculation of wastewater generation with the implementation of flow routing on the EPA SWMM 5.0 platform to allow hydrodynamic simulations. Dynamic wave routing, considered the best approach for routing flow in existing sewers, is used for the hydraulic simulations. The hydrogen sulphide build-up model comprises empirical models of hydrogen sulphide generation and emission. A trial of the model was conducted to simulate a sewer network in Seoul, South Korea, with some hypothetical data, and the model was also used to analyse chemical dosing of the sewer pipe. The results are promising, although calibration and validation of the model are still required. The presented methodology shows how the free SWMM platform can be used as a prediction tool for hydrogen sulphide generation. © 2014 Balaban Desalination Publications. All rights reserved.
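
A toy sketch of the kind of empirical sulphide build-up calculation that such an approach couples to the hydraulics: dissolved sulphide generated from the pipe's slime layer and partly emitted to the sewer atmosphere as the flow travels downstream. The rate expression and every coefficient below are hypothetical placeholders, not the paper's published model:

```python
# All coefficients are hypothetical placeholders for illustration only.
k_gen = 0.8              # sulphide generation coefficient (g/m^2/h)
k_emit = 0.3             # fraction of dissolved sulphide emitted per hour
area_per_m = 1.2         # wetted slime-layer area per metre of pipe (m^2/m)
flow_volume_per_m = 0.5  # wastewater volume per metre of pipe (m^3/m)

def sulphide_profile(length_m, velocity_m_per_h, s0=0.0, dx=10.0):
    """March down the pipe in steps of dx, updating dissolved sulphide (g/m^3)."""
    gen = k_gen * area_per_m / flow_volume_per_m  # volumetric generation, g/m^3/h
    s = s0
    for _ in range(int(length_m / dx)):
        dt = dx / velocity_m_per_h       # travel time across one step (h)
        s += (gen - k_emit * s) * dt     # explicit Euler update
    return s

# Sulphide at the outlet of a 500 m pipe flowing at 0.6 m/s.
s_out = sulphide_profile(length_m=500.0, velocity_m_per_h=3600.0 * 0.6)
```

In the paper's approach, the velocity and depth feeding such a calculation would come from SWMM's dynamic wave routing rather than being fixed by hand.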

Relevance: 60.00%

Abstract:

Reforestation is an important tool for reducing or reversing biodiversity loss and mitigating climate change. However, there are many potential compromises between the structural (biodiversity) and functional (carbon sequestration and water yield) effects of reforestation, which can be affected by decisions on the spatial design and establishment of plantings. We review the environmental responses to reforestation and show that manipulating the configuration of plantings (location, size, species mix and tree density) increases a range of environmental benefits. More extensive tree plantings (>10 ha) provide more habitat and greater improvements to carbon and water cycling. Planting a mixture of native trees and shrubs is best for biodiversity, while traditional plantation species, generally non-native, sequester carbon faster. Tree density can be manipulated at planting or during early development to accelerate structural maturity and to manage water yields. A diversity of habitats will be created by planting in a variety of landscape positions and by emulating the patchy distribution of forest types that characterized many regions prior to extensive landscape transformation. Areas with shallow aquifers can be planted to reduce water pollution, or avoided to maintain water yields. Reforestation should be used to build forest networks that are surrounded by low-intensity land use and that provide links within regions and between biomes. While there are adequate models for carbon sequestration and changes in water yields after reforestation, the quantitative understanding of changes in habitat resources and species composition is more limited. Development of spatial and temporal modelling platforms based on empirical models of the structural and functional outcomes of reforestation is essential for deciding how to reconfigure agricultural regions.
To build such platforms, we must quantify: (a) the influence of previous land uses, establishment methods, species mixes and interactions with adjacent land uses on environmental (particularly biodiversity) outcomes of reforestation and (b) the ways in which responses measured at the level of individual plantings scale up to watersheds and regions. Models based on this information will help widespread reforestation for carbon sequestration to improve native biodiversity, nutrient cycling and water balance at regional scales.

Relevance: 60.00%

Abstract:

Reforestation of agricultural land with mixed-species environmental plantings (native trees and shrubs) can contribute to mitigation of climate change through sequestration of carbon. Although soil carbon sequestration following reforestation has been investigated at site and regional scales, there are few studies across regions where the impact of a broad range of site conditions and management practices can be assessed. We collated new and existing data on soil organic carbon (SOC, 0–30 cm depth, N = 117 sites) and litter (N = 106 sites) under mixed-species plantings and an agricultural pair or baseline across southern and eastern Australia. Sites covered a range of previous land uses, initial SOC stocks, climatic conditions and management types. Differences in total SOC stocks following reforestation were significant at 52% of sites, with a mean rate of increase of 0.57 ± 0.06 Mg C ha⁻¹ y⁻¹. Increases were largely in the particulate fraction, which increased significantly at 46% of sites, compared with increases at 27% of sites for the humus fraction. Although the relative increase was highest in the particulate fraction, the humus fraction was the largest proportion of total SOC, so absolute differences in the two fractions were similar. Accumulation rates of carbon in litter were 0.39 ± 0.02 Mg C ha⁻¹ y⁻¹, increasing the total (soil + litter) annual rate of carbon sequestration by 68%. Previously-cropped sites accumulated more SOC than previously-grazed sites. The explained variance differed widely among empirical models of differences in SOC stocks following reforestation, according to SOC fraction and depth, for previously-grazed (R² = 0.18–0.51) and previously-cropped (R² = 0.14–0.60) sites. For previously-grazed sites, differences in SOC following reforestation were negatively related to total SOC in the pasture. By comparison, for previously-cropped sites, differences in SOC were positively related to mean annual rainfall.
This improved broad-scale understanding of the magnitude and predictors of changes in stocks of soil and litter C following reforestation is valuable for the development of policy on carbon markets and the establishment of future mixed-species environmental plantings.
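
The reported 68% increase in total sequestration follows directly from the two mean accumulation rates, as a quick check shows:

```python
# Reported mean accumulation rates (Mg C per ha per year).
soc_rate = 0.57     # soil organic carbon
litter_rate = 0.39  # litter

# Adding litter to the soil rate raises the total annual carbon
# sequestration rate by litter/soil, i.e. roughly 68%.
relative_increase = litter_rate / soc_rate  # ≈ 0.68
```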

Relevance: 30.00%

Abstract:

The co-occurrence of problem drinking and binge eating and purging has been well documented. However, there has been relatively little investigation of the etiological models that may underlie this co-occurrence. This study tests the hypotheses that impulsivity is heightened in eating-disordered women compared with controls, and that women with comorbid bulimia and alcohol use disorders show higher impulsivity than women with bulimia alone. The Impulsivity scale, the BIS/BAS scales, the State Anxiety Inventory and a behavioural measure of reward responsiveness (the CARROT) were administered to 22 women with bulimia, 23 women with comorbid bulimia and alcohol abuse/dependence, and 21 control women. As hypothesised, eating-disordered women scored higher than controls on several self-report measures of impulsivity and sorted cards faster during a financially rewarded trial on the behavioural task. Also as predicted, the comorbid women scored higher than the bulimic-only women on the Impulsivity scale. These findings suggest that individual differences in impulsiveness and a tendency to approach rewarding stimuli may contribute to the development of these disorders.

Relevance: 30.00%

Abstract:

This paper examines the recent spectacular corporate collapses of Parmalat in Europe, Enron and WorldCom in the USA, and HIH in Australia, and argues for a re-examination of corporate governance regulations, particularly accounting standards governing the valuation of assets. The recommendation put forward is based on empirical evidence arising from further examination of the results in Hossari and Rahman (2004). Specifically, it rests on the observation that, among the 48 financial ratios used across the 50-plus refereed studies, five ratios, all of which include assets as one of the variables, were relatively robust indicators of corporate collapse: Net Income/Total Assets, Current Assets/Current Liabilities, Total Liabilities/Total Assets, Working Capital/Total Assets, and Earnings Before Interest and Taxes/Total Assets. This paper suggests that it is not the failure of corporate collapse prediction models, but rather the erosion of the reliability of some key input data, namely assets and their valuation, that is largely responsible for the apparent failure of these models to capture impending collapses such as those witnessed in the recent past. These empirical findings support the argument that assets are soft targets for misrepresentation because of the leeway accounting standards grant in their valuation.
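
A small sketch computing the five asset-based ratios from a set of financial-statement items; the figures below are invented purely for illustration:

```python
# Hypothetical financial-statement items (in $ millions).
firm = {
    "net_income": 12.0,
    "total_assets": 400.0,
    "current_assets": 150.0,
    "current_liabilities": 120.0,
    "total_liabilities": 260.0,
    "working_capital": 30.0,   # current assets - current liabilities
    "ebit": 25.0,              # earnings before interest and taxes
}

# The five ratios identified as relatively robust indicators of collapse.
# Note that total assets appears in four of the five denominators, which
# is why unreliable asset valuations erode the models' predictions.
ratios = {
    "NI/TA": firm["net_income"] / firm["total_assets"],
    "CA/CL": firm["current_assets"] / firm["current_liabilities"],
    "TL/TA": firm["total_liabilities"] / firm["total_assets"],
    "WC/TA": firm["working_capital"] / firm["total_assets"],
    "EBIT/TA": firm["ebit"] / firm["total_assets"],
}
```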

Relevance: 30.00%

Abstract:

The five-factor 'Behavioural-Intentions Battery' was developed by Zeithaml, Berry and Parasuraman (1996) to measure customers' behavioural and attitudinal intentions. The structure of this model was re-examined by Bloemer, de Ruyter and Wetzels (1999) across different service industries; they concluded that service loyalty is a multidimensional construct consisting of four, not five, distinct dimensions. To date, neither model has been tested in a banking environment. This research independently tested the goodness of fit of both the four- and five-factor models against data collected from branch bank customers. Data were collected via a questionnaire from a sample of 348 banking customers. A confirmatory factor analysis conducted on the two opposing factor structures revealed that the five-factor structure has the superior model fit; however, the fit is only marginal.

Relevance: 30.00%

Abstract:

Efficiently inducing precise causal models that accurately reflect given data sets is the ultimate goal of causal discovery. The algorithm proposed by Wallace et al. [10] has demonstrated its ability to discover linear causal models from data. To explore ways to improve efficiency, this research examines three different encoding schemes and four search strategies. The experimental results reveal that (1) the specifying-parents encoding method is the best of the three encoding methods examined; and (2) in the discovery of linear causal models, local hill climbing works very well compared with more sophisticated methods such as Markov Chain Monte Carlo (MCMC), Genetic Algorithms (GA) and parallel MCMC search.
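
A generic hill-climbing skeleton of the kind that performed well in this comparison. The state space and scoring function below are toy stand-ins (maximizing the number of set bits in a bit vector), not the linear-causal-model scoring used in the paper:

```python
def hill_climb(initial, neighbours, score):
    """Greedy local search: repeatedly move to the best-scoring
    neighbour until no neighbour improves on the current state."""
    current = initial
    current_score = score(current)
    while True:
        best, best_score = current, current_score
        for n in neighbours(current):
            s = score(n)
            if s > best_score:
                best, best_score = n, s
        if best is current:           # no neighbour improved: local optimum
            return current, current_score
        current, current_score = best, best_score

# Toy problem: maximize the number of 1-bits in a 6-bit state.
# In causal discovery the neighbours would instead be models reachable
# by adding, removing or reversing a single edge.
def flip_one_bit(state):
    for i in range(len(state)):
        yield state[:i] + (1 - state[i],) + state[i + 1:]

state, value = hill_climb((0,) * 6, flip_one_bit, score=sum)
```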

Relevance: 30.00%

Abstract:

The use of ensemble models in many problem domains has increased significantly in the last few years. Ensemble modeling, in particular boosting, has shown great promise in improving the predictive performance of a model. Combining the ensemble members is normally done in a co-operative fashion, where each of the members performs the same task and their predictions are aggregated to obtain improved performance. However, it is also possible to combine the members in a competitive fashion, where the best prediction of the relevant ensemble member is selected for a particular input. This option has previously been somewhat overlooked. The aim of this article is to investigate and compare the competitive and co-operative approaches to combining the models in an ensemble. A comparison is made between a competitive ensemble model and MARS with bagging, a mixture of experts, a hierarchical mixture of experts and a neural network ensemble over several public-domain regression problems that have a high degree of nonlinearity and noise. The empirical results show a substantial advantage of competitive learning over co-operative learning for all the regression problems investigated. The requirements for creating efficient ensembles and the available guidelines are also discussed.
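
A minimal numeric illustration of the two combination schemes on a toy regression task. The two "experts" are deliberately simple functions, each accurate on one half of the input space, and the oracle-style selection by input sign stands in for the trained gating mechanism a real competitive ensemble would use:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
# True target: sinusoidal for x < 0, linear for x >= 0, plus noise.
y = np.where(x < 0, np.sin(3 * x), 0.5 * x) + rng.normal(0, 0.05, x.size)

# Two ensemble members, each good on one region of the input space.
pred_a = np.sin(3 * x)   # accurate for x < 0
pred_b = 0.5 * x         # accurate for x >= 0

# Co-operative combination: aggregate (average) all members' predictions.
coop = (pred_a + pred_b) / 2.0

# Competitive combination: select the relevant member for each input
# (here via the known split; in practice a gating model decides).
comp = np.where(x < 0, pred_a, pred_b)

mse_coop = np.mean((coop - y) ** 2)
mse_comp = np.mean((comp - y) ** 2)
```

Because each member is wrong on half the inputs, averaging drags the co-operative prediction toward the wrong expert, while per-input selection keeps only the accurate one, mirroring the advantage reported for competitive combination.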