122 results for Structure-based model


Relevance: 90.00%

Abstract:

Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap and the more recently proposed hook-and-loop resampling-based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but regains its power when a biased estimator is used.
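As a minimal illustration of how bias can reduce MSE (this is a generic shrinkage example, not the estimators or criteria analysed in the paper), the following Python sketch compares the unbiased sample mean with a simple biased shrinkage estimator by Monte Carlo; the shrinkage factor and signal parameters are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 1.0, 2.0, 10, 100_000  # illustrative values
shrink = 0.8  # arbitrary shrinkage factor: introduces bias, lowers variance

samples = rng.normal(mu, sigma, size=(trials, n))
unbiased = samples.mean(axis=1)          # unbiased sample mean
biased = shrink * unbiased               # biased (shrunk) estimator

mse_unbiased = np.mean((unbiased - mu) ** 2)
mse_biased = np.mean((biased - mu) ** 2)

# The shrunk estimator is biased (E[biased] = 0.8 * mu) yet has lower MSE here,
# because MSE = variance + bias^2 and shrinking reduces the variance term.
print(f"MSE unbiased: {mse_unbiased:.4f}, MSE biased: {mse_biased:.4f}")
```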

Relevance: 90.00%

Abstract:

Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane, which is located in an area of high environmental value. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port-specific stormwater model for the Fisherman Islands facility. This need has to be considered in the context of the proposed future developments of the Port area.

The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research to assist the Port in strengthening the environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. The project was separated into two stages. The first stage developed a quantitative understanding of the pollutant load generation potential of the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet-to-be-developed port expansion area in order to predict pollutant loads associated with stormwater flows from this area, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios.

Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. This uniqueness in land use results in distinctive stormwater quality characteristics different to other conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or as being similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies, where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in the computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. Consequently, no calibration was needed, because measured build-up and wash-off parameters were used.

Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of the input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce the runoff volume generated as well as the frequency of runoff events. Apart from initial losses, most of the other parameters used in the SWMM modelling are generic to most modelling studies. Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the investigation period does not fit a normal distribution. This is possibly because only one specific location, the Port of Brisbane, was considered, unlike the MUSIC model, which was developed from a range of areas with different geographic and climatic conditions. Consequently, the assumptions used in MUSIC are not fully applicable to the analysis of water quality in Port land uses. Therefore, when using the parameters included in this report for MUSIC modelling, it is important to note that this may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6.

Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken:
* It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios.
* In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
* The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 μm particle size range was predominant in suspended solids in pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 μm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
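The pollutant build-up and wash-off behaviour described in the study approach above can be illustrated with the exponential formulations of the kind available in SWMM-style water quality modelling; the coefficients in this Python sketch are illustrative placeholders, not the measured Port of Brisbane parameters.

```python
import math

def buildup(days_dry, b_max=50.0, k_b=0.4):
    """Exponential pollutant build-up (kg/ha) over a dry period (days).
    b_max and k_b are illustrative, not measured Port parameters."""
    return b_max * (1.0 - math.exp(-k_b * days_dry))

def washoff(initial_load, runoff_mm_per_hr, duration_hr, k_w=0.2):
    """Exponential wash-off: the remaining load decays with cumulative runoff."""
    remaining = initial_load * math.exp(-k_w * runoff_mm_per_hr * duration_hr)
    return initial_load - remaining  # load washed off during the event

load = buildup(days_dry=7)                       # antecedent dry period of 7 days
removed = washoff(load, runoff_mm_per_hr=5.0, duration_hr=2.0)
print(f"Build-up: {load:.1f} kg/ha, washed off: {removed:.1f} kg/ha")
```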

Relevance: 90.00%

Abstract:

Shrinking product lifecycles, tough international competition, swiftly changing technologies, ever-increasing customer quality expectations and demand for high-variety options are some of the forces that drive the next generation of development processes. To overcome these challenges, product design cost and development time have to be reduced and quality improved. Design reuse is considered one of the lean strategies for winning the race in this competitive environment. Design reuse can reduce product development time and product development cost as well as the number of defects, which ultimately influence product performance in cost, time and quality. However, little or no work has been carried out to quantify the effectiveness of design reuse on product development performance measures such as design cost, development time and quality. Therefore, in this study we propose a systematic design-reuse-based product design framework and develop a design leanness index (DLI) as a measure of the effectiveness of design reuse. The DLI is a representative measure of reuse effectiveness in cost, development time and quality. Through this index, a clear relationship between the reuse measure and product development performance metrics is established. Finally, a cost-based model is developed to maximise the design leanness index for a product within a given set of constraints, achieving leanness in the design process.
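The abstract does not give the DLI formulation. As a purely hypothetical illustration of an index that aggregates reuse effectiveness in cost, time and quality, one could combine normalised savings with weights, as in the Python sketch below; the function name, weights and formula are assumptions for illustration, not the published DLI.

```python
def design_leanness_index(cost_new, cost_reuse,
                          time_new, time_reuse,
                          defects_new, defects_reuse,
                          w_cost=0.4, w_time=0.3, w_quality=0.3):
    """Hypothetical leanness index in [0, 1]: weighted, normalised savings
    from design reuse relative to designing from scratch. Illustrative only."""
    saving_cost = (cost_new - cost_reuse) / cost_new
    saving_time = (time_new - time_reuse) / time_new
    saving_quality = (defects_new - defects_reuse) / defects_new
    return (w_cost * saving_cost + w_time * saving_time
            + w_quality * saving_quality)

# Example: reuse halves cost, cuts time by 40% and defects by 60% -> index 0.5.
print(design_leanness_index(100, 50, 10, 6, 20, 8))
```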

Relevance: 90.00%

Abstract:

This paper presentation addresses design-based research that became a catalyst for social change among a disadvantaged school community. The aim of the longitudinal research was to prototype an evidence-based model for whole-school digital and print literacy pedagogy renewal among students from low socioeconomic, Indigenous, and migrant backgrounds. Applying Anthony Giddens's principle of the “duality of structure”, the paper presentation interprets how the collective agency of researchers and the school community began to transform the structural properties of the institution in a two-way dynamism, so that the structural properties of the school were not outside of individual action, but were implicated in its reproduction and transformation.

Relevance: 90.00%

Abstract:

Introducing engineering-based model-eliciting experiences in the elementary curriculum is a new and increasingly important domain of research by mathematics, science, technology, and engineering educators. Recent research has raised questions about the context of engineering problems that are meaningful, engaging, and inspiring for young students. In the present study, an environmental engineering activity was implemented in two classes of 11-year-old students in Cyprus. The problem required students to develop a procedure for selecting among alternative countries from which to buy water. Students created a range of models that adequately solved the problem, although not all models took into account all of the data provided. The models varied in the number of problem factors taken into consideration and also in the different approaches adopted in dealing with the problem factors. At least two groups of students integrated into their models the environmental aspect of the problem (energy consumption, water pollution) and further refined their models. Results indicate that engineering model-eliciting activities can be introduced effectively into the elementary curriculum, providing rich opportunities for students to deal with engineering contexts and to apply their learning in mathematics and science to solving real-world engineering problems.

Relevance: 90.00%

Abstract:

Individual-based models describing the migration and proliferation of a population of cells frequently restrict the cells to a predefined lattice. An implicit assumption of this type of lattice-based model is that a proliferative population will always eventually fill the lattice. Here we develop a new lattice-free individual-based model that incorporates cell-to-cell crowding effects. We also derive approximate mean-field descriptions for the lattice-free model in two special cases motivated by commonly used experimental setups. Lattice-free simulation results are compared to these mean-field descriptions and to a corresponding lattice-based model. Data from a proliferation experiment are used to estimate the parameters for the new model, including the cell proliferation rate, showing that the model fits the data well. An important aspect of the lattice-free model is that the confluent cell density is not predefined, as it is with lattice-based models, but is an emergent property of the model. As a consequence of the more realistic, irregular configuration of cells in the lattice-free model, the population growth rate is much slower at high cell densities and the population cannot reach the same confluent density as an equivalent lattice-based model.
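A minimal lattice-free sketch of the crowding mechanism described above: agents occupy continuous positions, and a proliferation attempt is aborted when the daughter cell would overlap an existing cell (hard-core exclusion). The rates, exclusion radius and initial configuration are illustrative choices, not the estimated experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
radius, p_prolif, steps = 0.5, 0.05, 100            # illustrative values
cells = [rng.uniform(0, 20, size=2) for _ in range(10)]  # 10 initial cells

def crowded(pos, cells, radius):
    """A daughter cell cannot be placed within one cell diameter of another."""
    return any(np.hypot(*(pos - c)) < 2 * radius for c in cells)

for _ in range(steps):
    new_cells = []
    for c in cells:
        if rng.random() < p_prolif:
            angle = rng.uniform(0, 2 * np.pi)
            daughter = c + 2 * radius * np.array([np.cos(angle), np.sin(angle)])
            if not crowded(daughter, cells + new_cells, radius):
                new_cells.append(daughter)   # proliferation succeeds only if uncrowded
    cells.extend(new_cells)

# Growth slows as local density rises; the maximum (confluent) density emerges
# from the exclusion radius rather than being fixed in advance by a lattice.
print("final cell count:", len(cells))
```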

Relevance: 90.00%

Abstract:

We utilise the well-developed quantum decision models known to the QI community to create a higher order social decision making model. A simple Agent Based Model (ABM) of a society of agents with changing attitudes towards a social issue is presented, where the private attitudes of individuals in the system are represented using a geometric structure inspired by quantum theory. We track the changing attitudes of the members of that society, and their resulting propensities to act, or not, in a given social context. A number of new issues surrounding this "scaling up" of quantum decision theories are discussed, as well as new directions and opportunities.
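As a hedged illustration of a "geometric structure inspired by quantum theory" (the paper's actual model may differ), an agent's private attitude can be represented as a unit vector in a two-dimensional state space, with the propensity to act in a given context given by the squared projection onto an "act" basis vector. The class, update rule and parameters below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

class Agent:
    """Private attitude as a qubit-like unit vector; the acting probability is
    the squared projection onto the 'act' basis (Born-rule style). Illustrative."""
    def __init__(self, theta):
        self.state = np.array([np.cos(theta), np.sin(theta)])  # |attitude>

    def p_act(self, context_theta=0.0):
        # A social context is modelled here as a measurement basis rotated by
        # context_theta; the propensity to act is the squared overlap.
        act_basis = np.array([np.cos(context_theta), np.sin(context_theta)])
        return float(np.dot(self.state, act_basis) ** 2)

    def nudge(self, toward, rate=0.1):
        # Social influence rotates the attitude slightly toward another agent's state.
        self.state = (1 - rate) * self.state + rate * toward
        self.state /= np.linalg.norm(self.state)

society = [Agent(rng.uniform(0, np.pi / 2)) for _ in range(50)]
for _ in range(200):                       # simple pairwise interaction rounds
    i, j = rng.choice(len(society), size=2, replace=False)
    society[i].nudge(society[j].state)

print("mean propensity to act:", np.mean([ag.p_act() for ag in society]))
```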

Relevance: 90.00%

Abstract:

Airport systems are complex, and passenger dynamics within them appear to be similarly complicated. Passenger behaviours outside the standard processes are regarded as more significant in terms of public hazard and service rate issues. In this paper, we devised an individual agent decision model to simulate stochastic passenger behaviour in an airport departure terminal. Bayesian networks are implemented in the decision-making model to infer the probabilities that passengers choose to use particular in-airport facilities. We aim to understand the dynamics of passengers' discretionary activities.
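A small hand-rolled sketch of the kind of Bayesian-network inference described above; the node names and probabilities are invented for illustration, and the paper's network is more detailed. The probability that a passenger uses a discretionary facility is inferred by enumeration, conditioning on any observed evidence and marginalising over the rest.

```python
# Tiny Bayesian network: P(time), P(group), P(use_cafe | time, group).
# All names and numbers are invented for illustration.
P_time = {"long": 0.4, "short": 0.6}            # dwell time before boarding
P_group = {"yes": 0.3, "no": 0.7}               # travelling in a group?
P_use = {                                        # P(use cafe | time, group)
    ("long", "yes"): 0.8, ("long", "no"): 0.6,
    ("short", "yes"): 0.3, ("short", "no"): 0.1,
}

def p_use_given(time=None, group=None):
    """Inference by enumeration: marginalise over unobserved parent nodes."""
    num = den = 0.0
    for t, pt in P_time.items():
        if time is not None and t != time:
            continue
        for g, pg in P_group.items():
            if group is not None and g != group:
                continue
            joint = pt * pg
            num += joint * P_use[(t, g)]
            den += joint
    return num / den

print("P(use cafe | long dwell time) =", round(p_use_given(time="long"), 3))
print("P(use cafe)                   =", round(p_use_given(), 3))
```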

Relevance: 90.00%

Abstract:

We consider a hybrid model, created by coupling a continuum model and an agent-based model of infectious disease. The framework of the hybrid model provides a mechanism to study the spread of infection at both the individual and population levels. This approach captures the stochastic spatial heterogeneity at the individual level, which is directly related to deterministic population-level properties. This facilitates the study of spatial aspects of the epidemic process. A spatial analysis, involving counting the number of infectious agents in equally sized bins, reveals when the spatial domain is nonhomogeneous.
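The bin-counting diagnostic mentioned above can be sketched in a few lines of Python: agents' positions are binned into equally sized intervals and the counts compared against the uniform expectation, here with a chi-square-style statistic. The domain size, bin count, agent positions and test are illustrative choices, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
L, n_bins = 100.0, 10

# Illustrative positions of infectious agents: clustered, hence nonhomogeneous.
positions = np.concatenate([rng.normal(25, 5, 60), rng.normal(70, 4, 40)]) % L

counts, _ = np.histogram(positions, bins=n_bins, range=(0, L))
expected = len(positions) / n_bins                  # uniform (homogeneous) expectation
chi2 = np.sum((counts - expected) ** 2 / expected)  # large value -> spatial clustering

print("bin counts:", counts.tolist())
print("chi-square vs homogeneous:", round(chi2, 1))
```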

Relevance: 90.00%

Abstract:

Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than applying a two-step process of providing confidentiality for a message by encrypting it, and in a separate pass providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided by either stream ciphers with built-in authentication mechanisms or block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and a smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that considers these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. It analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows.

The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place. The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal states of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors.

The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure against this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack.

The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers can be considered as instances of this model, namely SSS, NLSv2 and SOBER-128. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions.

The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state. This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model, namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
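A toy Python sketch of the general direct-injection matrix model described above (not SSS, NLSv2 or SOBER-128 themselves): the register state is updated by a linear map over GF(2) with the message block XORed directly into the state, and the MAC is taken from the final state. The state size, transition matrix and tag extraction are illustrative assumptions.

```python
import numpy as np

N = 16                               # toy register size (bits); real ciphers are larger
rng = np.random.default_rng(4)
A = rng.integers(0, 2, size=(N, N))  # illustrative state-transition matrix over GF(2)

def accumulate(message_blocks, state=None):
    """Direct injection: s_{t+1} = A.s_t XOR m_t over GF(2); MAC = final state.
    A toy instance of the general matrix model, not an actual cipher."""
    s = np.zeros(N, dtype=np.uint8) if state is None else state.copy()
    for m in message_blocks:
        s = (A @ s + m) % 2          # linear update, then XOR in the message block
    return s                         # the tag would be (part of) the final state

msg = [rng.integers(0, 2, size=N, dtype=np.uint8) for _ in range(4)]
tag = accumulate(msg)

# Flipping a single message bit changes the tag through the linear accumulation;
# without a nonlinear filter, however, such a purely linear model is forgeable.
msg[0][0] ^= 1
print("tags differ:", not np.array_equal(tag, accumulate(msg)))
```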

Relevance: 90.00%

Abstract:

Designing the smart grid requires combining varied models. As their number increases, so does the complexity of the software. Having a well-thought-out architecture for the software then becomes crucial. This paper presents MODAM, a framework designed to combine agent-based models in a flexible and extensible manner, using well-known software engineering design solutions (the OSGi specification [1] and Eclipse plugins [2]). Details on how to build a modular agent-based model for the smart grid are given in this paper, illustrated by an example for a small network.

Relevance: 90.00%

Abstract:

Over the last decade, the majority of existing search techniques have been either keyword-based or category-based, resulting in unsatisfactory effectiveness. Meanwhile, studies have illustrated that more than 80% of users prefer personalized search results. As a result, many studies have devoted a great deal of effort (referred to as collaborative filtering) to investigating personalized notions for enhancing retrieval performance. One of the fundamental yet most challenging steps is to capture precise user information needs. Most Web users are inexperienced or lack the capability to express their needs properly, whereas existing retrieval systems are highly sensitive to vocabulary.

Researchers have increasingly proposed the utilization of ontology-based techniques to improve current mining approaches. The related techniques are not only able to refine search intentions within specific generic domains, but also to access new knowledge by tracking semantic relations. In recent years, some researchers have attempted to build ontological user profiles according to discovered user background knowledge. The knowledge is drawn from both global and local analyses, which aim to produce tailored ontologies from a group of concepts. However, a key problem that has not been addressed is how to accurately match diverse local information to universal global knowledge.

This research conducts a theoretical study on the use of personalized ontologies to enhance text mining performance. The objective is to understand user information needs through a "bag of concepts" rather than "words". The concepts are gathered from a general world knowledge base, the Library of Congress Subject Headings. To return desirable search results, a novel ontology-based mining approach is introduced to discover accurate search intentions and learn personalized ontologies as user profiles. The approach can not only pinpoint users' individual intentions in a rough hierarchical structure, but can also interpret their needs through a set of acknowledged concepts. Along with the global and local analyses, a solid concept matching approach is carried out to address the mismatch between local information and world knowledge. Relevance features, produced by the Relevance Feature Discovery model, are used as the representatives of local information. These features have been proven to be the best alternative to user queries for avoiding ambiguity, and consistently outperform the features extracted by other filtering models.

The two proposed approaches are both evaluated scientifically using the standard Reuters Corpus Volume 1 testing set. A comprehensive comparison is made with a number of state-of-the-art baseline models, including TF-IDF, Rocchio, Okapi BM25, the Pattern Taxonomy Model, and an ontology-based model. The gathered results indicate that the top precision can be improved remarkably with the proposed ontology mining approach, and that the matching approach is successful and achieves significant improvements in most information filtering measurements. This research contributes to the fields of ontological filtering, user profiling, and knowledge representation. The related outputs are critical when systems are expected to return proper mining results and provide personalized services. The scientific findings have the potential to facilitate the design of advanced preference mining models that impact people's daily lives.
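A simplified Python sketch of matching a personalized "bag of concepts" against documents; the concepts, weights and scoring below are illustrative assumptions, not the thesis's ontology-mining or Relevance Feature Discovery algorithms. Documents are ranked by the weighted overlap between their concepts and the user's ontological profile.

```python
# Illustrative user profile: concepts (e.g. drawn from a subject-heading scheme)
# weighted by how strongly they reflect the user's information need.
user_profile = {
    "Renewable energy": 0.9,
    "Solar power": 0.7,
    "Energy policy": 0.4,
}

# Documents annotated with concepts (instead of raw words).
documents = {
    "doc1": {"Solar power", "Photovoltaics", "Renewable energy"},
    "doc2": {"Coal mining", "Energy policy"},
    "doc3": {"Football", "Sports medicine"},
}

def score(doc_concepts, profile):
    """Rank by weighted concept overlap; unmatched concepts contribute nothing."""
    return sum(weight for concept, weight in profile.items()
               if concept in doc_concepts)

ranking = sorted(documents, key=lambda d: score(documents[d], user_profile),
                 reverse=True)
print(ranking)   # doc1 (1.6) > doc2 (0.4) > doc3 (0.0)
```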

Relevance: 90.00%

Abstract:

In this paper, a recently introduced model-based method for precedent-free fault detection and isolation (FDI) is modified to deal with multiple-input, multiple-output (MIMO) systems and is applied to an automotive engine with an exhaust gas recirculation (EGR) system. Using normal behavior data generated by a high-fidelity engine simulation, the growing structure multiple model system (GSMMS) approach is used to construct dynamic models of normal behavior for the EGR system and its constituent subsystems. Using the GSMMS models as a foundation, anomalous behavior is detected whenever the most recent modeling residuals depart in a statistically significant way from the residuals displayed during normal behavior. By reconnecting the anomaly detectors (ADs) to the constituent subsystems, EGR valve, cooler, and valve controller faults are isolated without the need for prior training on data corresponding to particular faulty system behaviors.
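A much-simplified Python sketch of the residual-monitoring idea (not the GSMMS algorithm itself): a model of normal behaviour yields a reference residual distribution, and an anomaly is flagged when the mean of the most recent residual window departs from it by a statistically significant margin. The residual model, window size and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Residuals of a normal-behaviour model (simply white noise here for illustration).
normal_residuals = rng.normal(0.0, 1.0, 5000)
mu, sigma = normal_residuals.mean(), normal_residuals.std()

def anomalous(recent_residuals, mu, sigma, z_threshold=3.0):
    """Flag an anomaly when the window mean departs significantly from normal."""
    window = np.asarray(recent_residuals)
    z = abs(window.mean() - mu) / (sigma / np.sqrt(len(window)))
    return z > z_threshold

healthy_window = rng.normal(0.0, 1.0, 50)   # behaves like the normal residuals
faulty_window = rng.normal(0.8, 1.0, 50)    # systematic degradation of the model fit

print("healthy flagged:", anomalous(healthy_window, mu, sigma))
print("faulty flagged: ", anomalous(faulty_window, mu, sigma))
```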

Relevance: 90.00%

Abstract:

Lyngbya majuscula is a cyanobacterium (blue-green alga) occurring naturally in tropical and subtropical coastal areas worldwide. Deception Bay, in northern Moreton Bay, Queensland, has a history of Lyngbya blooms and forms a case study for this investigation. The South East Queensland (SEQ) Healthy Waterways Partnership, a collaboration between government, industry, research and the community, was formed to address issues affecting the health of the river catchments and waterways of South East Queensland. The Partnership coordinated the Lyngbya Research and Management Program (2005-2007), which culminated in a Coastal Algal Blooms (CAB) Action Plan for harmful and nuisance algal blooms such as Lyngbya majuscula. This first phase of the project was predominantly scientific in nature and also facilitated the collection of additional data to better understand Lyngbya blooms. The second phase of the project, the SEQ Healthy Waterways Strategy 2007-2012, is now underway to implement the CAB Action Plan and as such is more management-focussed.

As part of the first phase of the project, a Science model for the initiation of a Lyngbya bloom was built using Bayesian Networks (BNs). The structure of the Science Bayesian Network was built by the Lyngbya Science Working Group (LSWG), which was drawn from diverse disciplines. The BN was then quantified with annual data and expert knowledge. Scenario testing confirmed the expected temporal nature of bloom initiation, and it was recommended that the next version of the BN be extended to take this into account. Elicitation for this BN thus occurred at three levels: design, quantification and verification. The first level involved construction of the conceptual model itself, definition of the nodes within the model and identification of sources of information to quantify the nodes. The second level included elicitation of expert opinion and representation of this information in a form suitable for inclusion in the BN. The third and final level concerned the specification of scenarios used to verify the model.

The second phase of the project provides the opportunity to update the network with the newly collected detailed data obtained during the previous phase. Specifically, the temporal nature of Lyngbya blooms is of interest: management efforts need to be directed to the periods when the Bay is most vulnerable to bloom initiation. To model the temporal aspects of Lyngbya we are using Object Oriented Bayesian Networks (OOBNs) to create ‘time slices’ for each of the periods of interest during the summer. OOBNs provide a framework to simplify knowledge representation and facilitate reuse of nodes and network fragments. An OOBN is more hierarchical than a traditional BN, with any sub-network able to contain other sub-networks. Connectivity between OOBNs is an important feature and allows information to flow between the time slices.

This study demonstrates more sophisticated use of expert information within Bayesian networks, combining expert knowledge with data (categorized using expert-defined thresholds) within an expert-defined model structure. Based on the results from the verification process, the experts are able to target areas requiring greater precision and those exhibiting temporal behaviour. The time slices incorporate the data for that time period for each of the temporal nodes (instead of using the annual data from the previous static Science BN) and include lag effects to allow the effect from one time slice to flow to the next. We demonstrate a steady increase in the probability of initiation of a Lyngbya bloom across the time slices and conclude that the inclusion of temporal aspects in the BN model is consistent with the perceptions of Lyngbya behaviour held by the stakeholders. This extended model provides a more accurate representation of the increased risk of algal blooms in the summer months and shows that the opinions elicited to inform a static BN can be readily extended to a dynamic OOBN, providing more comprehensive information for decision makers.
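A bare-bones Python sketch of propagating bloom-initiation probability across time slices with a lag effect, as described above; the structure, node names and probabilities are invented for illustration, whereas the actual OOBN has many more nodes and expert-elicited tables.

```python
# Invented conditional probability table: P(bloom_t | bloom_{t-1}, warm_t).
# The real OOBN has many more nodes and expert-elicited probabilities.
P_BLOOM = {
    (True, True): 0.80, (True, False): 0.50,
    (False, True): 0.30, (False, False): 0.05,
}

def propagate(p_initial, warm_by_slice):
    """Forward-propagate P(bloom) through time slices, carrying the lag effect."""
    p_bloom = p_initial
    history = []
    for warm in warm_by_slice:
        p_bloom = (p_bloom * P_BLOOM[(True, warm)]
                   + (1 - p_bloom) * P_BLOOM[(False, warm)])
        history.append(round(p_bloom, 3))
    return history

# Summer months: warm conditions in every slice; risk accumulates slice by slice.
print(propagate(p_initial=0.05, warm_by_slice=[True, True, True, True]))
```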

Relevance: 90.00%

Abstract:

Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to investigate. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. This technique is data-intensive, as explicit data at a fine level of detail are used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are, however, fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind.

What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into (a) assets, which describe the entities' physical characteristics, and (b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics remain the same; this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels), or to describe the assets and their relation to one another (e.g. the network assets).

Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. responses to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
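MODAM itself is built on OSGi and Eclipse plugins; the Python sketch below only illustrates the separation described above between assets (physical characteristics) and agents (behaviour), using a battery asset driven by two different agents. The class names, parameters and one-hour time step are illustrative assumptions, not MODAM's API.

```python
class BatteryAsset:
    """Physical characteristics only: no behaviour lives here."""
    def __init__(self, capacity_kwh, max_rate_kw, depth_of_discharge=0.8):
        self.capacity_kwh = capacity_kwh
        self.max_rate_kw = max_rate_kw
        self.min_soc = 1.0 - depth_of_discharge
        self.soc = 0.5                      # state of charge (fraction)

    def charge(self, kwh):
        self.soc = min(1.0, self.soc + kwh / self.capacity_kwh)

    def discharge(self, kwh):
        self.soc = max(self.min_soc, self.soc - kwh / self.capacity_kwh)

class PeakShavingAgent:
    """Behaviour: discharge the asset when network demand is high."""
    def __init__(self, asset, demand_threshold_kw):
        self.asset, self.threshold = asset, demand_threshold_kw

    def step(self, demand_kw):
        if demand_kw > self.threshold:
            self.asset.discharge(self.asset.max_rate_kw)  # one-hour time step assumed

class SelfConsumptionAgent:
    """Different behaviour, same asset type: store surplus solar generation."""
    def __init__(self, asset):
        self.asset = asset

    def step(self, surplus_kw):
        if surplus_kw > 0:
            self.asset.charge(min(surplus_kw, self.asset.max_rate_kw))

# Two physically identical batteries behave differently depending on their agent.
a1, a2 = BatteryAsset(10, 2), BatteryAsset(10, 2)
PeakShavingAgent(a1, demand_threshold_kw=5).step(demand_kw=7)
SelfConsumptionAgent(a2).step(surplus_kw=3)
print(a1.soc, a2.soc)   # 0.3 and 0.7
```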