935 results for System modeling


Relevance: 30.00%

Abstract:

This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored. The effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic, multi-agent robot soccer domain (RoboCup).
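A toy sketch of the modeling and predicting layers (our illustration, not the paper's implementation): opponent observations are accumulated on a coarse-grained grid, and the most frequently occupied cell serves as a naive prediction. The grid size and field coordinates are assumptions.

    import numpy as np

    GRID = (8, 6)  # assumed coarse-grained grid over the field

    class CoarseGridModel:
        def __init__(self, grid=GRID):
            self.counts = np.zeros(grid)

        def observe(self, x, y):
            """Record one observation of the opponent at field position (x, y) in [0, 1)^2."""
            i = int(x * self.counts.shape[0])
            j = int(y * self.counts.shape[1])
            self.counts[i, j] += 1

        def predict(self):
            """Return the grid cell the opponent occupies most often."""
            return np.unravel_index(np.argmax(self.counts), self.counts.shape)

    model = CoarseGridModel()
    for x, y in [(0.10, 0.20), (0.12, 0.22), (0.80, 0.90), (0.11, 0.21)]:
        model.observe(x, y)
    print(model.predict())  # -> (0, 1), the cell covering the repeated observations

A behavior selection module could then weight candidate actions toward or away from the predicted cell, which corresponds to the "exploiting" layer described above.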

Relevance: 30.00%

Abstract:

Nonlinear, non-stationary signals are commonly found in a variety of disciplines such as biology, medicine, geology and financial modeling. The complexity (e.g. nonlinearity and non-stationarity) of such signals and their low signal-to-noise ratios often make it a challenging task to use them in critical applications. In this paper we propose a new neural network based technique to address these problems. We show that a feed-forward, multi-layered neural network can conveniently capture the states of a nonlinear system in its connection weight-space, after a process of supervised training. The performance of the proposed method is investigated via computer simulations.
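A minimal sketch of the supervised-training idea (the library, the surrogate signal, and all parameters are our assumptions, not the paper's setup): a feed-forward network learns one-step prediction of a noisy nonlinear time series, so the system dynamics end up encoded in its connection weights.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    x = np.zeros(2000)
    x[0] = 0.5
    for t in range(1999):  # noisy logistic map as a stand-in nonlinear signal
        x[t + 1] = 3.8 * x[t] * (1.0 - x[t]) + 0.01 * rng.standard_normal()

    lag = 3  # embed the series using 3 past samples as network inputs
    X = np.column_stack([x[i : len(x) - lag + i] for i in range(lag)])
    y = x[lag:]

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X[:1500], y[:1500])  # supervised training on the first 1500 samples
    print("held-out R^2:", net.score(X[1500:], y[1500:]))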

Relevance: 30.00%

Abstract:

The design, development, and use of complex systems models raises a unique class of challenges and potential pitfalls, many of which are commonly recurring problems. Over time, researchers gain experience in this form of modeling, choosing algorithms, techniques, and frameworks that improve the quality, confidence level, and speed of development of their models. This increasing collective experience of complex systems modelers is a resource that should be captured. Fields such as software engineering and architecture have benefited from the development of generic solutions to recurring problems, called patterns. Using pattern development techniques from these fields, insights from communities such as learning and information processing, data mining, bioinformatics, and agent-based modeling can be identified and captured. Collections of such 'pattern languages' would allow knowledge gained through experience to be readily accessible to less-experienced practitioners and to other domains. This paper proposes a methodology for capturing the wisdom of computational modelers by introducing example visualization patterns and a pattern classification system for analyzing the relationship between micro and macro behavior in complex systems models. We anticipate that a new field of complex systems patterns will provide an invaluable resource for both practicing and future generations of modelers.

Relevance: 30.00%

Abstract:

The development of models in the Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, a model should be easy to apply within its scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as large-scale meshes typically have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define a model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme), or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples to illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
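A minimal usage sketch of the Data/PDE-class design described above; the module paths follow the publicly distributed esys-escript package and are our assumption, so details may differ from the prototype discussed in the talk.

    from esys.escript import kronecker, whereZero, sup
    from esys.escript.linearPDEs import LinearPDE
    from esys.finley import Rectangle  # Finley supplies the finite element domain

    domain = Rectangle(n0=40, n1=40)   # 40 x 40 element mesh on the unit square
    x = domain.getX()

    # Solve -div(A grad u) = Y with u = 0 on the x0 = 0 boundary (a Poisson problem):
    pde = LinearPDE(domain)
    pde.setValue(A=kronecker(domain),  # isotropic coefficient tensor
                 Y=1.0,                # constant source term
                 q=whereZero(x[0]),    # mask selecting the Dirichlet boundary
                 r=0.0)                # Dirichlet value on that boundary
    u = pde.getSolution()              # delegated to the numerical algorithm layer
    print("max u =", sup(u))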

Relevance: 30.00%

Abstract:

The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which combines the best characteristics of photons, whose mobility allows long-range entanglement to be established efficiently, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequences used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. Extensive numerical experiments show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. The systems examined here are therefore promising building blocks of future scalable architectures and can be used for proof-of-principle experiments in quantum information processing and quantum simulation.
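As an illustration of the simulation technique only (not the proposals' actual Hamiltonians), the following sketch integrates a Lindblad master equation for a single resonantly driven spin qubit with QuTiP; the drive amplitude and dephasing rate are assumed values.

    import numpy as np
    from qutip import sigmax, sigmaz, basis, mesolve

    drive = 0.1                                  # Rabi frequency (rotating frame)
    gamma = 0.005                                # assumed pure-dephasing rate

    H = 0.5 * drive * sigmax()                   # driven qubit in the rotating frame
    c_ops = [np.sqrt(gamma) * sigmaz()]          # collapse operator: dephasing

    rho0 = basis(2, 0) * basis(2, 0).dag()       # initial state |0><0|
    tlist = np.linspace(0.0, np.pi / drive, 200) # pulse length of a pi (X gate) rotation

    result = mesolve(H, rho0, tlist, c_ops, e_ops=[sigmaz()])
    print("final <sigma_z>:", result.expect[0][-1])  # near -1; dephasing degrades it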

Relevance: 30.00%

Abstract:

The Firenzuola turbidite system formed during a paroxysmal phase of thrust propagation, involving the upper Serravallian deposits of the Marnoso-arenacea Formation (MAF). During this phase the coeval growth of two major tectonic structures, the M. Castellaccio thrust and the Verghereto high, played a key role, causing a closure of the inner basin and a coeval shift of the depocentre to the outer basin. This work focuses on this phase of fragmentation of the MAF basin; it is based on a new, detailed, high-resolution stratigraphic framework, which was used to determine the timing of growth of the structures involved and their direct influence on sediment dispersal and on the lateral and vertical turbidite facies distribution. The Firenzuola turbidite system stratigraphy is characterized by the occurrence of mass-transport complexes (MTCs) and thick sandstone accumulation in the depocentral area, which passes into a finer drape over the structural highs; the differentiation between these two zones increases over time and ends with the deposition of marly units over the structural highs and the emplacement of the Visignano MTC. According to the stratigraphic pattern and turbidite facies characteristics, the Firenzuola System has been split into two phases, namely Firenzuola I and Firenzuola II: the former is quite similar to the underlying deposits, while the latter records the main fragmentation phase, testifying to the progressive isolation of the inner basin and a coeval shift of the depocentre to the outer basin. The final stratigraphic and sedimentological dataset has been used to create a quantitative high-resolution 3D facies distribution using the Petrel software platform. This model allows a detailed analysis of lateral and vertical facies variations that can be exported to several reservoir settings in hydrocarbon exploration and exploitation areas, since the facies distributions and geometries of the reservoir bodies of many sub-surface turbidite basins show a significant relationship to syndepositional structural activity but are beyond seismic resolution.

Relevance: 30.00%

Abstract:

This paper presents a general methodology for estimating and incorporating uncertainty in the controller and forward models for noisy nonlinear control problems. Conditional distribution modeling in a neural network context is used to estimate uncertainty around the prediction of neural network outputs. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localize the possible control solutions to consider. A nonlinear multivariable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis.
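A minimal sketch of conditional distribution modeling with a neural network (our illustration; the paper's architecture and data are not specified here): the network outputs a mean and a log-variance per input and is trained with the Gaussian negative log-likelihood, so predictive uncertainty is learned alongside the prediction.

    import torch
    import torch.nn as nn

    class GaussianNet(nn.Module):
        def __init__(self, n_in=1, n_hidden=32):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
            self.mean = nn.Linear(n_hidden, 1)
            self.log_var = nn.Linear(n_hidden, 1)

        def forward(self, x):
            h = self.body(x)
            return self.mean(h), self.log_var(h)

    def nll(mean, log_var, y):
        # Gaussian negative log-likelihood, up to an additive constant.
        return 0.5 * (log_var + (y - mean) ** 2 / log_var.exp()).mean()

    # Toy heteroscedastic data: the noise level grows with |x|.
    x = torch.linspace(-2, 2, 512).unsqueeze(1)
    y = torch.sin(3 * x) + 0.3 * x.abs() * torch.randn_like(x)

    net = GaussianNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        mean, log_var = net(x)
        loss = nll(mean, log_var, y)
        loss.backward()
        opt.step()

The learned variance can then localize the control search, since inputs with large predicted variance correspond to poorly constrained control solutions.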

Relevance: 30.00%

Abstract:

This paper assesses the extent to which the equity markets of Hungary, Poland, the Czech Republic and Russia have become less segmented. Using a variety of tests, it is shown that there has been a consistent increase in the co-movement of some Eastern European markets and developed markets. Using the variance decompositions from a vector autoregressive representation of returns, it is shown that for Poland and Hungary global factors are having an increasing influence on equity returns, suggestive of increased equity market integration. In this paper we model a system of bivariate equity market correlations as a smooth transition logistic trend model in order to establish how rapidly the countries of Eastern Europe are moving away from market segmentation. We find that Hungary is the country that is becoming integrated most quickly. © 2005 Elsevier Ltd. All rights reserved.
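A sketch of fitting a smooth transition logistic trend to a correlation series (synthetic data; the parameter names are ours): rho_t = a + b / (1 + exp(-g(t - m))), where g measures how rapidly the correlation moves from the initial level a toward the integrated level a + b.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic_trend(t, a, b, g, m):
        return a + b / (1.0 + np.exp(-g * (t - m)))

    t = np.arange(200, dtype=float)  # time index
    rng = np.random.default_rng(1)   # synthetic correlation series for illustration
    rho = logistic_trend(t, 0.1, 0.5, 0.08, 120) + 0.05 * rng.standard_normal(t.size)

    params, _ = curve_fit(logistic_trend, t, rho, p0=[0.0, 0.5, 0.05, 100.0])
    print("estimated transition speed g:", params[2])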

Relevance: 30.00%

Abstract:

Purpose: The purpose of this paper is to describe how the application of systems thinking to designing, managing and improving business processes has resulted in a new and unique holonic-based process modeling methodology known as process-oriented holonic modeling. Design/methodology/approach: The paper describes key systems thinking axioms that are built upon in an overview of the methodology; the techniques are described using an example taken from a large organization designing and manufacturing capital goods equipment operating within a complex and dynamic environment. These were produced in an 18-month project, using an action research approach, to improve quality and process efficiency. Findings: The findings of this research show that this new methodology can support process depiction and improvement in industrial sectors which are characterized by environments of high variety and low volume (e.g. projects such as the design and manufacture of a radar system or a hybrid production process) which do not provide repetitive learning opportunities. In such circumstances, the methodology has not only been able to deliver holonic-based process diagrams but also to transfer strategic vision from top management to middle and operational levels without being reductionistic. Originality/value: This paper will be of interest to organizational analysts looking at large complex projects who require a methodology that does not confine them to thinking reductionistically in "task-breakdown" based approaches. The novel ideas in this paper have a great impact on the way analysts should perceive organizational processes. Future research is applying the methodology in similar environments in other industries. © Emerald Group Publishing Limited.

Relevance: 30.00%

Abstract:

The application of systems thinking to designing, managing, and improving business processes has led to a new "holonic-based" process modeling methodology. The theoretical background and the methodology are described using examples taken from a large organization designing and manufacturing capital goods equipment operating within a complex and dynamic environment. A key point of differentiation attributed to this methodology is that it allows a set of models to be produced without taking a task breakdown approach but instead uses systems thinking and a construct known as the "holon" to build process descriptions as a system of systems (i.e., a holarchy). The process-oriented holonic modeling methodology has been used for total quality management and business process engineering exercises in different industrial sectors and builds models that connect the strategic vision of a company to its operational processes. Exercises have been conducted in response to environmental pressures to make operations align with strategic thinking as well as becoming increasingly agile and efficient. This unique methodology is best applied in environments of high complexity, low volume, and high variety, where repeated learning opportunities are few and far between (e.g., large development projects). © 2007 IEEE.

Relevance: 30.00%

Abstract:

A consequence of a loss-of-coolant accident is damage to adjacent insulation materials (IM). IM may then be transported to the containment sump strainers, where water is drawn into the ECCS (emergency core cooling system). Blockage of the strainers by IM leads to an increased pressure drop acting on the operating ECCS pumps. IM can also penetrate the strainers, enter the reactor coolant system and then accumulate in the reactor pressure vessel. An experimental and theoretical study that concentrates on mineral wool fiber transport in the containment sump and the ECCS is being performed. The study entails fiber generation and the assessment of fiber transport in single-effect and multi-effect experiments. The experiments include measurement of the terminal settling velocity, the strainer pressure drop, fiber sedimentation and resuspension in a channel flow, and jet flow in a rectangular tank. An integrated test facility is also operated to assess the compounded effects. Each experimental facility is used to provide data for the validation of equivalent computational fluid dynamics models. The channel flow facility allows the determination of the steady-state distribution of the fibers at different flow velocities. The fibers are modeled in the Eulerian-Eulerian reference frame as spherical wetted agglomerates. The fiber agglomerate size, density, the relative viscosity of the fluid-fiber mixture and the turbulent dispersion of the fibers all affect the steady-state accumulation of fibers at the channel base. In the current simulations, two fiber phases are considered separately. The particle size is kept constant while the density is modified, which affects both the terminal velocity and the volume fraction. The relative viscosity is only significant at higher concentrations. The numerical model finds that the fibers accumulate at the channel base even at high velocities; therefore, modifications to the drag and turbulent dispersion forces can be made to reduce fiber accumulation.
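As a single-effect illustration (our numbers, not the study's measurements), the terminal settling velocity of a fiber agglomerate modeled as a wetted sphere follows the Stokes formula v = (rho_p - rho_f) g d^2 / (18 mu), valid while the particle Reynolds number stays well below one:

    g = 9.81       # gravitational acceleration, m/s^2
    mu = 1.0e-3    # dynamic viscosity of water, Pa s
    rho_f = 998.0  # water density, kg/m^3
    rho_p = 1100.0 # assumed wetted-agglomerate density, kg/m^3
    d = 1.0e-4     # assumed agglomerate diameter, m

    v = (rho_p - rho_f) * g * d**2 / (18.0 * mu)
    Re = rho_f * v * d / mu  # check the creeping-flow assumption (Re << 1)
    print(f"v = {v:.2e} m/s, Re = {Re:.3f}")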

Relevance: 30.00%

Abstract:

Once, the factory worker was considered a necessary evil, soon to be replaced by robotics and automation. Today, many manufacturers appreciate that people in direct productive roles can provide important flexibility and responsiveness, and so contribute significantly to business success. The challenge is no longer to design people out of the factory, but to design factory environments that help to get the best performance from people. This paper describes research that has set out to help achieve this by expanding the capabilities of the simulation modeling tools currently used by practitioners.

Relevance: 30.00%

Abstract:

WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection oriented and can provide very high data rates, large service coverage, and flexible quality of service (QoS). Due to the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM is that it can efficiently model a large-scale system in which the number of stations or connections is generally very high, while traditional simulation and analytical (e.g., Markov model) approaches cannot perform well due to the high computational complexity. We model the bandwidth management scheme as a queuing network model (QNM) that consists of interacting multiclass queues for different service classes. Closed-form expressions for the state and blocking probability distributions are derived for these schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.
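For intuition about why sharing tends to beat partitioning, here is a classical Erlang-B baseline (our illustration, not the paper's maximum-entropy model; loads and pool sizes are assumed):

    def erlang_b(E: float, m: int) -> float:
        """Blocking probability for offered load E (erlangs) on m bandwidth units."""
        B = 1.0
        for k in range(1, m + 1):  # standard Erlang-B recursion
            B = E * B / (k + E * B)
        return B

    # Complete sharing: two classes share 20 units with a total load of 12 erlangs.
    print("shared pool blocking:", erlang_b(12.0, 20))
    # Complete partitioning: each class gets 10 dedicated units, 6 erlangs each.
    print("per-partition blocking:", erlang_b(6.0, 10))

With these numbers the shared pool blocks roughly 1% of requests versus about 4% per dedicated partition, illustrating the trunking gain that a partial sharing scheme like WPSS aims to retain while still protecting per-class QoS.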

Relevance: 30.00%

Abstract:

The number of interoperable research infrastructures has increased significantly with the growing awareness of the efforts made by the Global Earth Observation System of Systems (GEOSS). One of the Societal Benefit Areas (SBA) that is benefiting most from GEOSS is biodiversity, given the costs of monitoring the environment and managing complex information, from space observations to species records including their genetic characteristics. But GEOSS goes beyond simple data sharing to encourage the publishing and combination of models, an approach which can ease the handling of complex multi-disciplinary questions. It is the purpose of this paper to illustrate these concepts by presenting eHabitat, a basic Web Processing Service (WPS) for computing the likelihood of finding ecosystems with properties equal to those specified by a user. When chained with other services providing data on climate change, eHabitat can be used for ecological forecasting and becomes a useful tool for decision-makers assessing different strategies when selecting new areas to protect. eHabitat can use virtually any kind of thematic data that can be considered useful when defining ecosystems and their future persistence under different climatic or development scenarios. The paper will present the architecture and illustrate the concepts through case studies which forecast the impact of climate change on protected areas or on the ecological niche of an African bird.
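A sketch of invoking a WPS such as eHabitat from Python with OWSLib; the endpoint URL, process identifier, and input names below are hypothetical placeholders, not the actual eHabitat interface.

    from owslib.wps import WebProcessingService, monitorExecution

    wps = WebProcessingService("http://example.org/ehabitat/wps")  # hypothetical URL
    for p in wps.processes:
        print(p.identifier, "-", p.title)  # discover the processes the server offers

    # Execute a (hypothetical) similarity process for a protected area.
    execution = wps.execute(
        "ehabitat.similarity",             # hypothetical process identifier
        inputs=[("AreaOfInterest", "PA_12345")],
    )
    monitorExecution(execution)            # poll until the server finishes
    print(execution.status)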

Relevance: 30.00%

Abstract:

We combine all the known experimental demonstrations and spectroscopic parameters into a numerical model of the Ho3+-doped fluoride glass fiber laser system. Core-pumped and cladding-pumped arrangements were simulated for all the population-bottlenecking mitigation schemes that have been tested, and good agreement between the model and the previously reported experimental results was achieved in most, but not all, cases. In a similar way to Er3+-doped fluoride glass fiber lasers, we found that the best match with measurements required scaled-down rate parameters for the energy transfer processes that operate in moderately to highly concentrated systems. The model isolated the dominant processes affecting the performance of each of the bottlenecking mitigation schemes and pump arrangements. It was established that pump excited-state absorption is the main factor affecting the performance of the core-pumped demonstrations of the laser, while energy transfer between rare earth ions is the main factor controlling the performance in cladding-pumped systems.
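A minimal sketch of the kind of rate-equation model involved (all rate parameters are invented for illustration, not the paper's fitted values): two laser levels are integrated in time, and a lower-level lifetime longer than the upper-level lifetime produces the population bottleneck that the mitigation schemes address.

    import numpy as np
    from scipy.integrate import solve_ivp

    R_pump = 500.0  # pump rate into the upper level, 1/s (assumed)
    tau2 = 3.5e-3   # upper laser level lifetime, s (assumed)
    tau1 = 12e-3    # lower laser level lifetime, s (assumed; tau1 > tau2 -> bottleneck)

    def rates(t, n):
        n1, n2 = n  # fractional populations of the lower and upper laser levels
        dn2 = R_pump * (1.0 - n1 - n2) - n2 / tau2  # pumped from the ground state
        dn1 = n2 / tau2 - n1 / tau1                 # fed by upper-level decay
        return [dn1, dn2]

    sol = solve_ivp(rates, (0.0, 0.1), [0.0, 0.0])
    n1, n2 = sol.y[:, -1]
    print(f"steady state: n1 = {n1:.2f}, n2 = {n2:.2f}")  # n1 > n2: the bottleneck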