932 results for Dynamic environments
Abstract:
To ensure the safe operation of Web-based systems in Web environments, we propose an SSPA (Server-based SHA-1 Page-digest Algorithm) to verify the integrity of Web contents before the server issues an HTTP response to a user request. In addition to standard security measures, our Java implementation of the SSPA, called the Dynamic Security Surveillance Agent (DSSA), provides further security in terms of content integrity for Web-based systems. Its function is to prevent the display, on client machines, of Web contents that have been altered through the malicious acts of attackers and intruders. This protects the reputation of organisations from cyber-attacks and ensures the safe operation of Web systems by dynamically monitoring the integrity of a Web site's content on demand. We discuss our findings in terms of the applicability and practicality of the proposed system. We also discuss its time metrics, specifically its computational overhead at the Web server as well as the overall latency from the clients' point of view, using different Internet access methods. The SSPA, our DSSA implementation, some experimental results and related work are all discussed.
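A minimal illustration of the page-digest idea behind the SSPA, sketched in Python for brevity (the paper's DSSA is implemented in Java); the digest store, function names, and serving logic are assumptions for the example, not the authors' API. A trusted SHA-1 digest is recorded for each page and re-checked before a response is served.

```python
import hashlib
from pathlib import Path

# Hypothetical sketch of a server-side page-digest check in the spirit of the
# SSPA: the digest of each page is recorded at publication time and re-verified
# before the server issues an HTTP response.

DIGEST_STORE = {}  # page path -> trusted SHA-1 hex digest

def register_page(path: str) -> None:
    """Record the trusted digest when the page is published."""
    DIGEST_STORE[path] = hashlib.sha1(Path(path).read_bytes()).hexdigest()

def verify_page(path: str) -> bool:
    """Return True only if the page still matches its trusted digest."""
    current = hashlib.sha1(Path(path).read_bytes()).hexdigest()
    return DIGEST_STORE.get(path) == current

# A response handler would serve the page only when verify_page() succeeds,
# otherwise log the mismatch and return an error page instead.
```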
Abstract:
Computational neuroscience aims to elucidate the mechanisms of neural information processing and population dynamics through a methodology of incorporating biological data into complex mathematical models. Existing simulation environments model at a particular level of detail; none allow a multi-level approach to neural modelling. Moreover, most are not engineered to produce compute-efficient solutions, an important issue because sufficient processing power is a major impediment in the field. This project aims to apply modern software engineering techniques to create a flexible, high-performance neural modelling environment that will allow rigorous exploration of model parameter effects and modelling at multiple levels of abstraction.
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models are also facilitated by quickly setting up alternative simulations.
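A hedged sketch of the dynamic agent composition pattern described above, not MODAM's actual API: agents are assembled at runtime from independently written atomic components, so new behaviour can be added without modifying existing code. The component names and attributes (load profile, solar panel, demand in kW) are illustrative assumptions.

```python
# Illustrative sketch of dynamic agent composition (not MODAM's actual API):
# an agent is assembled at runtime from independently developed atomic
# components, so the model can be extended without touching existing code.

class Component:
    def step(self, agent, t):
        raise NotImplementedError

class LoadProfile(Component):
    """Hypothetical atomic unit adding a constant electrical demand."""
    def __init__(self, kw):
        self.kw = kw
    def step(self, agent, t):
        agent.state["demand_kw"] = agent.state.get("demand_kw", 0.0) + self.kw

class SolarPanel(Component):
    """Hypothetical atomic unit offsetting demand around midday."""
    def __init__(self, peak_kw):
        self.peak_kw = peak_kw
    def step(self, agent, t):
        output = self.peak_kw * max(0.0, 1.0 - abs(t % 24 - 12) / 6.0)
        agent.state["demand_kw"] = agent.state.get("demand_kw", 0.0) - output

class Agent:
    """An agent is just the runtime composition of its components."""
    def __init__(self, components):
        self.components = list(components)
        self.state = {}
    def step(self, t):
        for c in self.components:   # components execute in the order supplied
            c.step(self, t)

# A "household" agent is composed without writing a dedicated class:
household = Agent([LoadProfile(kw=2.0), SolarPanel(peak_kw=1.5)])
household.step(t=12)
print(household.state)   # {'demand_kw': 0.5}
```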
Abstract:
In this paper, dynamic modeling and simulation of the hydropurification reactor in a purified terephthalic acid production plant are investigated using a gray-box technique to evaluate the activity of a palladium-on-carbon (0.5 wt.% Pd/C) catalyst. The reaction kinetics and catalyst deactivation trend are modeled with an artificial neural network (ANN), whose output is incorporated into the reactor's first-principles model (FPM). The simulation results reveal that the gray-box model (FPM and ANN) is about 32 percent more accurate than the FPM alone. The model demonstrates that the catalyst is deactivated after eleven months. Moreover, the catalyst lifetime decreases by about two and a half months for a 7 percent increase in reactor feed flowrate, while a 10 percent increase in hydrogen flowrate is predicted to extend the catalyst lifetime by about one month. Additionally, an increase in 4-carboxybenzaldehyde concentration in the reactor feed increases CO and benzoic acid formation; CO is a poison to the catalyst, and benzoic acid might affect the product quality. The model can be applied in actual working plants to analyze the efficiency of the Pd/C catalyst and the performance of the catalytic reactor.
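A rough sketch of the gray-box structure described above: a simplified first-principles estimate is corrected by a data-driven term standing in for the trained ANN. The equations, names, and parameters below are placeholders chosen for illustration, not the plant model.

```python
import numpy as np

# Hedged sketch of a gray-box model: a toy first-principles model (FPM) is
# corrected by a data-driven residual term standing in for the paper's ANN.

def fpm_outlet_concentration(c_in, flow, k_eff):
    """Toy first-principles estimate: first-order conversion in a plug-flow-like reactor."""
    residence_time = 1.0 / max(flow, 1e-9)
    return c_in * np.exp(-k_eff * residence_time)

def ann_correction(features, weights, bias):
    """Stand-in for the trained ANN: a single tanh layer predicting the FPM residual."""
    return np.tanh(features @ weights) @ bias

def gray_box(c_in, flow, k_eff, features, weights, bias):
    return fpm_outlet_concentration(c_in, flow, k_eff) + ann_correction(features, weights, bias)

# Example call with placeholder parameters (a real ANN would be trained on plant data):
rng = np.random.default_rng(0)
w, b = rng.normal(size=(3, 8)), rng.normal(size=8) * 0.01
print(gray_box(c_in=1.0, flow=2.0, k_eff=0.8,
               features=np.array([1.0, 2.0, 0.8]), weights=w, bias=b))
```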
Abstract:
There are currently 23,500 level crossings in Australia, broadly divided into two categories: active level crossings, which are fully automatic and have boom barriers, alarm bells, flashing lights, and pedestrian gates; and passive level crossings, which are not automatic and control road and pedestrian traffic solely with stop and give-way signs. Active level crossings are considered the gold standard for transport ergonomics when grade separation (i.e. constructing an over- or underpass) is not viable. In Australia, the current strategy is to upgrade passive level crossings with active controls each year, but active crossings are also associated with traffic congestion, largely as a result of extended closure times. The percentage of time level crossings are closed to road vehicles during peak periods increases with the frequency of train services. The popular perception appears to be that once a level crossing is upgraded, one can wipe one's hands and consider the job done. However, there may also be environments where active protection is not enough, but where the setting may not justify the capital costs of grade separation. Indeed, the associated congestion and traffic delay could compromise safety by contributing to risk-taking behaviour by motorists and pedestrians. In these environments it is important to understand what human factors issues are present and to ask whether a one-size-fits-all solution is indeed the most ergonomically sound solution for today's transport needs.
Abstract:
Despite great advances in very large scale integrated-circuit design and manufacturing, the performance of even the best available high-speed, high-resolution analog-to-digital converters (ADCs) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from lowered ADC performance can be unacceptably high, especially when higher accuracies have to be achieved (e.g., when the digitizer is part of a reference measuring system). Static and dynamic nonlinearities (estimated independently) are vital indices for evaluating the performance and suitability of ADCs to be used in such environments. Typically, the estimation of static nonlinearity takes 10-12 h or more (for a 12-b ADC), and dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal composed of a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of the implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. The good agreement in results and the achievable time savings indicate its suitability.
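One plausible construction, in Python, of the composite test signal described above: a high-frequency sinusoid swept across the converter's full-scale range by a slow ramp. The frequencies, amplitudes, and record length are assumptions, not the paper's test conditions.

```python
import numpy as np

# Plausible construction of the composite test signal (assumed parameters):
# a high-frequency sinusoid riding on a slow ramp, so the tone is swept across
# the converter's full-scale range over one ramp period.

fs = 100e6           # digitizer sampling rate (assumed)
f_sine = 1e6         # high-frequency tone exercising dynamic behaviour
f_ramp = 10.0        # slow ramp exercising each static code level
t = np.arange(0, 0.01, 1 / fs)            # 10 ms record

ramp = 2.0 * ((t * f_ramp) % 1.0) - 1.0   # sawtooth in [-1, 1]
signal = 0.6 * ramp + 0.4 * np.sin(2 * np.pi * f_sine * t)
signal = np.clip(signal, -1.0, 1.0)       # stay within nominal full scale

# `signal` would be applied to the ADC under test; static nonlinearity is then
# inferred from the slowly swept level and dynamic nonlinearity from the tone.
```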
Abstract:
This paper presents an approach for dynamic state estimation of aggregated generators by introducing a new correction factor for equivalent inter-area power flows. The spread of generators from the center of inertia of each area is summarized by the correction term α on the equivalent power flow between the areas, which is applied to the identification and estimation process. A nonlinear time-varying Kalman filter is applied to estimate the equivalent angles and velocities of coherent areas while reducing the effect of local modes on the estimated states. The approach is simulated on two test systems, and the results show the effect of the correction factor and the performance of the state estimation in capturing the inter-area dynamics of the system.
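A minimal sketch, assuming a linearised single-area model, of how a Kalman filter could track the equivalent angle and speed with a factor α scaling the equivalent tie-line flow; the paper's estimator is nonlinear and time-varying, and all matrices and numbers below are placeholders.

```python
import numpy as np

# Minimal Kalman-filter sketch (assumed linearised model, not the paper's
# nonlinear time-varying filter) tracking the equivalent rotor angle and speed
# of one aggregated area, with alpha scaling the equivalent inter-area flow.

dt, M, D, K, alpha = 0.01, 10.0, 1.0, 5.0, 0.95

# State x = [delta, omega]; linearised swing dynamics with corrected tie flow.
F = np.array([[1.0, dt],
              [-alpha * K * dt / M, 1.0 - D * dt / M]])
H = np.array([[1.0, 0.0]])               # measure the equivalent angle only
Q = np.diag([1e-6, 1e-5])                # process noise
R = np.array([[1e-4]])                   # measurement noise

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + (Kg @ (z - H @ x)).ravel()               # update
    P = (np.eye(2) - Kg @ H) @ P
    return x, P

x, P = kf_step(np.zeros(2), np.eye(2), z=np.array([0.05]))
print(x)
```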
Abstract:
Dynamic Bayesian Networks (DBNs) provide a versatile platform for predicting and analysing the behaviour of complex systems. As such, they are well suited to the prediction of complex ecosystem population trajectories under anthropogenic disturbances such as the dredging of marine seagrass ecosystems. However, DBNs assume a homogeneous Markov chain, whereas a key characteristic of complex ecosystems is the presence of feedback loops, path dependencies and regime changes, whereby the behaviour of the system can vary based on past states. This paper develops a method based on the small-world structure of complex systems networks to modularise a non-homogeneous DBN and enable the computation of posterior marginal probabilities given evidence in forward inference. It also provides an approximate solution for backward inference, as convergence is not guaranteed for a path-dependent system. When applied to the seagrass dredging problem, the incorporation of path dependency can implement conditional absorption and allows release from the zero state in line with environmental and ecological observations. As dredging has a marked global impact on seagrass and other marine ecosystems of high environmental and economic value, using such a complex systems model to develop practical ways to meet the needs of conservation and industry, through enhancing resistance and/or recovery, is of paramount importance.
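A hedged sketch of forward inference in a non-homogeneous (path-dependent) Markov chain of the kind the modularised DBN handles: the transition matrix applied at each step switches with a path-dependent regime rather than being fixed. The states, matrices, and switching rule are illustrative assumptions, not the published model.

```python
import numpy as np

# Forward propagation of a belief through a non-homogeneous Markov chain:
# the transition matrix depends on whether the population previously collapsed,
# so the process is path dependent (unlike a homogeneous DBN).

states = ["zero", "low", "high"]

T_normal = np.array([[0.70, 0.30, 0.00],   # some release from the zero state
                     [0.10, 0.60, 0.30],
                     [0.00, 0.20, 0.80]])

T_post_collapse = np.array([[0.95, 0.05, 0.00],   # recovery much slower after collapse
                            [0.30, 0.60, 0.10],
                            [0.10, 0.30, 0.60]])

def forward(prior, n_steps, collapse_threshold=0.5):
    belief, collapsed = prior.copy(), False
    for _ in range(n_steps):
        T = T_post_collapse if collapsed else T_normal
        belief = belief @ T
        collapsed = collapsed or belief[0] > collapse_threshold  # regime switch on path
    return dict(zip(states, belief.round(3)))

print(forward(np.array([0.0, 0.2, 0.8]), n_steps=12))
```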
Abstract:
This thesis develops a methodology to predict radio transmitter signal attenuation, via vertical density profiling of digitised objects, through the use of Light Detection and Ranging (LiDAR) measurements. The resulting map of indexed signal attenuation is useful for dynamic radio transmitter placement within the geospatial data set, without expensive and tedious radio measurements.
Abstract:
Predicting the temporal responses of ecosystems to disturbances associated with industrial activities is critical for their management and conservation. However, prediction of ecosystem responses is challenging due to the complexity and potential non-linearities stemming from interactions between system components and multiple environmental drivers. Prediction is particularly difficult for marine ecosystems due to their often highly variable and complex natures and the large uncertainties surrounding their dynamic responses. Consequently, current management of such systems often relies on expert judgement and/or complex quantitative models that consider only a subset of the relevant ecological processes. Hence there exists an urgent need for whole-of-system predictive models to support decision and policy makers in managing complex marine systems in the context of industry-based disturbances. This paper presents Dynamic Bayesian Networks (DBNs) for predicting the temporal response of a marine ecosystem to anthropogenic disturbances. The DBN provides a visual representation of the problem domain in terms of factors (parts of the ecosystem) and their relationships. These relationships are quantified via Conditional Probability Tables (CPTs), which estimate the variability and uncertainty in the distribution of each factor. The combination of qualitative visual and quantitative elements in a DBN facilitates the integration of a wide array of data, published and expert knowledge, and other models. Such multiple sources are often essential, as a single source of information is rarely sufficient to cover the diverse range of factors relevant to a management task. Here, a DBN model is developed for tropical, annual Halophila and temperate, persistent Amphibolis seagrass meadows to inform dredging management and help meet environmental guidelines. Specifically, the impacts of capital (e.g. new port development) and maintenance (e.g. maintaining channel depths in established ports) dredging are evaluated with respect to the risk of permanent loss, defined as no recovery within 5 years (Environmental Protection Agency guidelines). The model is developed using expert knowledge, existing literature, statistical models of environmental light, and experimental data. The model is then demonstrated in a case study through the analysis of a variety of dredging, environmental and seagrass ecosystem recovery scenarios. In spatial zones significantly affected by dredging, such as the zone of moderate impact, shoot density has a very high probability of being driven to zero by capital dredging due to the duration of such dredging. Here, fast-growing Halophila species can recover; however, the probability of recovery depends on the presence of seed banks. In contrast, slow-growing Amphibolis meadows have a high probability of suffering permanent loss. However, in the maintenance dredging scenario, due to the shorter duration of dredging, Amphibolis is better able to resist the impacts of dredging. For both types of seagrass meadows, the probability of loss was strongly dependent on the biological and ecological status of the meadow, as well as environmental conditions post-dredging. The ability to predict the ecosystem response under cumulative, non-linear interactions across a complex ecosystem highlights the utility of DBNs for decision support and environmental management.
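An illustrative sketch, not the published model, of how a CPT can link a dredging-driven light factor to next-step shoot density in a single DBN time slice; the two-node structure, state names, and all probabilities are assumptions made for the example.

```python
import numpy as np

# Toy DBN slice: P(density_t+1 | density_t, light_t) given as a CPT, with the
# light distribution standing in for the dredging-driven environmental driver.

light_levels = ["low", "adequate"]
density_states = ["zero", "sparse", "dense"]

cpt = {
    ("zero",   "low"):      [1.00, 0.00, 0.00],   # absorbing without a seed bank
    ("zero",   "adequate"): [0.80, 0.20, 0.00],   # some recovery, e.g. from seeds
    ("sparse", "low"):      [0.50, 0.45, 0.05],
    ("sparse", "adequate"): [0.05, 0.60, 0.35],
    ("dense",  "low"):      [0.10, 0.50, 0.40],
    ("dense",  "adequate"): [0.00, 0.15, 0.85],
}

def predict(density_belief, p_light_low, n_steps):
    b = np.array(density_belief, dtype=float)
    light = np.array([p_light_low, 1.0 - p_light_low])
    for _ in range(n_steps):
        nxt = np.zeros(3)
        for i, d in enumerate(density_states):
            for j, l in enumerate(light_levels):
                nxt += b[i] * light[j] * np.array(cpt[(d, l)])
        b = nxt
    return dict(zip(density_states, b.round(3)))

# During capital dredging, light is mostly low; the belief drifts towards zero.
print(predict([0.0, 0.1, 0.9], p_light_low=0.9, n_steps=6))
```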
Abstract:
The ability to test large arrays of cell and biomaterial combinations in 3D environments is still rather limited in the context of tissue engineering and regenerative medicine. This limitation can generally be addressed by employing highly automated and reproducible methodologies. This study reports on the development of a highly versatile and upscalable method, based on additive manufacturing, for the fabrication of arrays of scaffolds that are enclosed in individualized perfusion chambers. Devices containing eight scaffolds and their corresponding bioreactor chambers are simultaneously fabricated using a dual-extrusion additive manufacturing system. To demonstrate the versatility of the concept, the scaffolds, while enclosed in the device, are subsequently surface-coated with a biomimetic calcium phosphate layer by perfusion with simulated body fluid solution. 96 scaffolds are simultaneously seeded and cultured with human osteoblasts under highly controlled bidirectional perfusion dynamic conditions over 4 weeks. Both coated and noncoated scaffolds show homogeneous cell distribution and high cell viability throughout the 4-week culture period, and the CaP-coated scaffolds result in a significantly increased cell number. The methodology developed in this work exemplifies the applicability of additive manufacturing as a tool for further automation of studies in the field of tissue engineering and regenerative medicine.
Abstract:
There are 23,500 level crossings in Australia. In these types of environments it is important to understand what human factors issues are present and how road users and pedestrians engage with crossings. A series of on-site observations was performed over a 2-day period at a 3-track active crossing. This was followed by 52 interviews with local business owners and members of the public. Data were captured using a manual coding scheme for recording and categorising violations. Over 700 separate road user and pedestrian violations were recorded, with representations in multiple categories. Time stamping revealed that the crossing was active for 59% of the time in some morning periods. Further, trains could take up to 4 min to arrive following the first activation. Many pedestrians jaywalked under side rails and around active boom gates. In numerous cases pedestrians put themselves at risk in order to beat or catch the approaching train, ignoring signs to stop walking when the lights were flashing. Analysis of interview data identified themes associated with congestion, safety, and violations. This work offers insight into context-specific issues associated with active level crossing protection.
Abstract:
In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers and price-sensitive, lead-time-sensitive customers. Sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no-information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial-information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset prices at random intervals based on factors such as the number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no-information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial-information case, we model the problem as a Markovian game and use actor-critic based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multiseller environments with stochastic demands, price-sensitive customers, and inventory replenishments.
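A minimal Q-learning pricebot sketch in the spirit of the no-information case: the state summarises only the seller's own inventory and backlog, actions are discrete price levels, and rewards are per-interval profits. The price grid, learning parameters, and the `simulate` step are assumptions for illustration, not the paper's market model.

```python
import random
from collections import defaultdict

# Tabular Q-learning over discrete price actions; the market environment is
# left abstract (see the commented usage at the end).

PRICES = [8, 9, 10, 11, 12]                 # hypothetical discrete price grid
alpha, gamma, epsilon = 0.1, 0.95, 0.1      # learning rate, discount, exploration
Q = defaultdict(lambda: [0.0] * len(PRICES))

def choose_price(state):
    if random.random() < epsilon:                                   # explore
        return random.randrange(len(PRICES))
    return max(range(len(PRICES)), key=lambda a: Q[state][a])       # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# One pricing interval, given a market step `simulate(price)` returning
# (profit, next_state); `simulate` is assumed here, not defined:
# state = (inventory_bucket, backorder_bucket)
# a = choose_price(state)
# profit, next_state = simulate(PRICES[a])
# update(state, a, profit, next_state)
```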