902 results for dynamic modeling and simulation
Abstract:
In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although many conceptual frameworks existed for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of design. Given that no well-defined formal model directly supported agent-oriented modeling, this study was centered on three main topics: (1) adapting a well-known formal model, predicate transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model-checking technology. PrT nets were extended with the notions of dynamic structure, agent communication, and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models into concrete models that can be verified with the model checker SPIN (Simple Promela Interpreter). This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets and a comprehensive methodology to guide the construction and analysis of formal MAS models.
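To make the PrT-net idea concrete, the following is a minimal sketch of a predicate transition net interpreter: tokens are typed values residing in places, and a transition fires under a variable binding that satisfies its guard. All class, place, and token names are illustrative; the dissertation's actual extensions (dynamic structure, agent communication, nested nets) are not reproduced here.

```python
# Minimal PrT-net interpreter sketch (illustrative; not the dissertation's
# actual formalism). A marking maps each place to a multiset of tokens.
from itertools import product

class PrTNet:
    def __init__(self, marking):
        self.marking = marking          # place name -> list of tokens
        self.transitions = []           # (input places, output map, guard)

    def add_transition(self, inputs, outputs, guard):
        self.transitions.append((inputs, outputs, guard))

    def enabled_bindings(self, inputs, guard):
        # Try every combination of one token per input place.
        for tokens in product(*(self.marking[p] for p in inputs)):
            binding = dict(zip(inputs, tokens))
            if guard(binding):
                yield binding

    def fire(self, idx):
        inputs, outputs, guard = self.transitions[idx]
        for binding in self.enabled_bindings(inputs, guard):
            for p in inputs:                       # consume input tokens
                self.marking[p].remove(binding[p])
            for p, make_token in outputs.items():  # produce output tokens
                self.marking[p].append(make_token(binding))
            return binding
        return None                                # transition not enabled

# Example: an idle agent picks up a pending request and becomes busy.
net = PrTNet({"idle": ["agent1"], "request": ["task7"], "busy": []})
net.add_transition(
    inputs=["idle", "request"],
    outputs={"busy": lambda b: (b["idle"], b["request"])},
    guard=lambda b: True,
)
print(net.fire(0), net.marking)
```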
Abstract:
With the exponentially increasing demand for GIS data visualization systems in areas such as urban planning, environmental and climate-change monitoring, weather simulation, and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, current web GIS techniques are suitable only for static vector and raster data with no dynamic overlay layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these open problems: how to provide a large-scale dynamic vector and raster data visualization service with dynamic overlay layers, accessible from various client devices through a standard web browser, and how to make this dynamic service as fast as its static counterpart. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance-improvement solution, is proposed, designed, and implemented. The components include quadtree-based indexing and parallel map tiling, the Legend String, vector data visualization with dynamic layer overlaying, vector data time-series visualization, an algorithm for vector data rendering, an algorithm for raster data re-projection, an algorithm for eliminating superfluous levels of detail, an algorithm for vector data gridding and re-grouping, and server-cluster-side vector and raster data caching.
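As an illustration of the quadtree-based indexing mentioned above, the sketch below maps a coordinate to a tile index and a quadtree key. The dissertation's exact scheme is not given here, so this follows the common WebMercator tile and quadkey conventions used by most web mapping stacks.

```python
# Quadtree tile indexing sketch (standard WebMercator/quadkey conventions,
# assumed here as a stand-in for the dissertation's own scheme).
import math

def latlon_to_tile(lat, lon, zoom):
    """Map a WGS84 coordinate to (x, y) tile indices at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_r = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_r) + 1.0 / math.cos(lat_r)) / math.pi) / 2.0 * n)
    return x, y

def tile_to_quadkey(x, y, zoom):
    """Interleave x/y bits into a quadtree key; each digit picks a quadrant."""
    digits = []
    for z in range(zoom, 0, -1):
        mask = 1 << (z - 1)
        digits.append(str((1 if x & mask else 0) + (2 if y & mask else 0)))
    return "".join(digits)

x, y = latlon_to_tile(25.76, -80.19, 12)   # Miami at zoom 12
print(x, y, tile_to_quadkey(x, y, 12))
```

A shared prefix of two quadkeys identifies their common ancestor tile, which is what makes this indexing convenient for parallel tiling and caching.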
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial-order models are automatically extracted from instrumented concurrent program executions, and potential atomicity-violation bugs are automatically verified against the partial-order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method for mining Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it exploits the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools must consider the tradeoff between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity-violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; and 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
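For context, the sketch below illustrates the bug class McPatom targets, under the same restriction the tool exploits (a pair of threads and a single shared variable): a check-then-act pair that is intended to be atomic but is not. The function names and the bank-balance scenario are illustrative only.

```python
# Atomicity-violation sketch: two accesses by one thread (a check and an
# act on `balance`) can be interleaved by the other thread's write.
import threading

balance = 100
lock = threading.Lock()

def withdraw_buggy(amount):
    global balance
    if balance >= amount:            # access 1: check
        # remote thread may interleave here
        balance = balance - amount   # access 2: act, not atomic with the check

def withdraw_safe(amount):
    global balance
    with lock:                       # the check-act pair becomes one atomic block
        if balance >= amount:
            balance = balance - amount

threads = [threading.Thread(target=withdraw_buggy, args=(100,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)   # may be -100 under an unserializable interleaving
```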
Abstract:
Virtual machines (VMs) are powerful platforms for building agile datacenters and emerging cloud systems. However, resource management for a VM-based system is still a challenging task. First, the complexity of application workloads, as well as the interference among competing workloads, makes it difficult to understand the VMs' resource demands for meeting their Quality of Service (QoS) targets; second, the dynamics of the applications and the system make it difficult to maintain the desired QoS target as the environment changes; third, the transparency of virtualization presents a hurdle for guest-layer applications and the host-layer VM scheduler to cooperate to improve application QoS and system efficiency. This dissertation proposes to address the above challenges through fuzzy-modeling- and control-theory-based VM resource management. First, a fuzzy-logic-based nonlinear modeling approach is proposed to accurately capture a VM's complex demands for multiple types of resources, automatically and online, based on the observed workload and resource usage. Second, to enable fast adaptation in resource management, the fuzzy modeling approach is integrated with a predictive controller to form a new Fuzzy Modeling Predictive Control (FMPC) approach that can quickly track applications' QoS targets and optimize resource allocations under dynamic changes in the system. Finally, to address the limitations of black-box resource management solutions, a cross-layer optimization approach is proposed to enable cooperation between a VM's host and guest layers and further improve application QoS and resource usage efficiency. The proposed approaches are prototyped on a Xen-based virtualized system and evaluated with representative benchmarks, including TPC-H, RUBiS, and TerraFly. The results demonstrate that the fuzzy-modeling-based approach improves the accuracy of resource prediction by up to 31.4% compared to conventional regression approaches. The FMPC approach substantially outperforms the traditional linear-model-based predictive control approach in meeting application QoS targets for an oversubscribed system, and it is able to manage dynamic VM resource allocations and migrations for over 100 concurrent VMs across multiple hosts with good efficiency. Finally, the cross-layer optimization approach further improves the performance of a virtualized application by up to 40% when resources are contended by dynamic workloads.
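The following is a minimal sketch of the FMPC idea under stated assumptions: a Takagi-Sugeno-style fuzzy model (a weighted blend of local linear rules) predicts QoS from a candidate resource allocation, and a one-step-ahead controller picks the allocation whose prediction best tracks the QoS target. The model structure, cost function, and all numbers are illustrative, not the dissertation's exact formulation.

```python
# FMPC sketch: fuzzy (Takagi-Sugeno style) model + one-step-ahead
# predictive control. Illustrative structure and constants only.
import numpy as np

def fuzzy_predict(model, alloc, workload):
    """Predict QoS as a membership-weighted blend of local linear rules."""
    x = np.array([alloc, workload])
    weights = np.array([np.exp(-np.sum((x - c) ** 2) / s)
                        for c, s in zip(model["centers"], model["spreads"])])
    weights /= weights.sum()
    local_outputs = np.array([a @ x + b for a, b in model["rules"]])
    return weights @ local_outputs

def fmpc_step(model, workload, qos_target, allocs, alloc_penalty=0.01):
    """Pick the allocation minimizing tracking error plus resource cost."""
    costs = [(fuzzy_predict(model, a, workload) - qos_target) ** 2
             + alloc_penalty * a for a in allocs]
    return allocs[int(np.argmin(costs))]

# Toy model with two local rules (numbers are arbitrary).
model = {"centers": [np.array([2.0, 5.0]), np.array([8.0, 5.0])],
         "spreads": [4.0, 4.0],
         "rules": [(np.array([0.8, -0.3]), 1.0), (np.array([0.2, -0.1]), 6.0)]}
print(fmpc_step(model, workload=5.0, qos_target=4.0, allocs=np.linspace(1, 10, 10)))
```

In the actual approach the fuzzy model is learned online from observed workload and resource usage, so the controller tracks the QoS target as both drift.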
Abstract:
An increase in demand for freight shipping in the United States has been predicted for the near future, and Longer Combination Vehicles (LCVs), which can carry more load per trip, appear to be a good solution. Currently, LCVs are not permitted in most US states, and little research has been conducted on the effects of these heavy vehicles on roads and bridges. This research studies these effects by comparing the dynamic and fatigue effects of LCVs with those of more common trucks. Ten steel and prestressed concrete bridges with span lengths ranging from 30' to 140' are designed and modeled using a grid system in MATLAB. Additionally, three real bridges, including two single-span simply supported steel bridges and a three-span continuous steel bridge, are modeled with the same MATLAB code. The equations of motion of three LCVs as well as eight other trucks are derived, and these vehicles are subjected to different road surface conditions and bumps while traveling over the designed and real bridges. The bridge equations of motion are formed using the mass, stiffness, and damping matrices; the truck-bridge interaction is taken into account; and the resulting differential equations are solved with the ODE solver in MATLAB to obtain the tire forces as well as the deflections and moments in the bridge members. The results of this study show that for most of the bridges, LCVs produce the smallest values of the Dynamic Amplification Factor (DAF), whereas single-unit trucks cause the highest DAF values. In most cases, the DAF values are also smaller than the 33% threshold suggested by the design code. Additionally, the fatigue analysis confirms that replacing the current truck traffic with higher-capacity LCVs only slightly decreases the remaining fatigue life of the bridges in most cases, which means that taking advantage of these larger vehicles can be a viable option for decision makers.
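The solution strategy described above can be sketched as follows: assemble the bridge mass, stiffness, and damping matrices, reduce the second-order system M q'' + C q' + K q = F(t) to first order, and integrate numerically. The study uses MATLAB's ODE solver; the sketch below uses scipy as a stand-in, with illustrative matrices and a toy load in place of the actual grid model and truck equations of motion.

```python
# Vehicle-bridge dynamics sketch: M q'' + C q' + K q = F(t), integrated as
# a first-order system. Matrices and forcing are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

M = np.diag([2.0e4, 2.0e4])                        # modal masses (kg)
K = np.array([[3.0e6, -1.0e6], [-1.0e6, 3.0e6]])   # stiffness (N/m)
C = 0.02 * M + 0.001 * K                           # Rayleigh damping

def axle_load(t):
    """Toy stand-in for the time-varying truck-bridge interaction force."""
    return np.array([8.0e4 * np.sin(2 * np.pi * 1.5 * t), 0.0])

def rhs(t, y):
    n = len(M)
    q, dq = y[:n], y[n:]
    ddq = np.linalg.solve(M, axle_load(t) - C @ dq - K @ q)
    return np.concatenate([dq, ddq])

sol = solve_ivp(rhs, (0.0, 5.0), np.zeros(4), max_step=1e-3)
print("peak deflection:", np.abs(sol.y[0]).max())
```

The DAF then follows as the ratio of the peak dynamic response to the corresponding static response for the same load.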
Abstract:
This paper presents a theoretical model for the vibration analysis of micro-scale fluid-loaded rectangular isotropic plates, based on Lamb's assumption of fluid-structure interaction and the Rayleigh-Ritz energy method. An analytical solution for this model is proposed, which can be applied to most boundary conditions. Dynamic experimental data for a series of microfabricated silicon plates are obtained using a base-excitation dynamic testing facility. The natural frequencies and mode shapes in the experimental results are in good agreement with the theoretical simulations for the lower-order modes. The presented theoretical and experimental investigations of the vibration characteristics of micro-scale plates are of particular interest for the design of microplate-based biosensing devices. Copyright © 2009 by ASME.
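For orientation, a commonly used consequence of Lamb's added-mass assumption is that fluid loading lowers each in-vacuo natural frequency through an added-virtual-mass factor. A generic form is shown below; the symbols are conventional and not necessarily the paper's notation.

```latex
% Generic fluid-loading frequency correction under Lamb's assumption
% (conventional added-virtual-mass form; not the paper's exact notation).
f_{\text{fluid}} = \frac{f_{\text{vac}}}{\sqrt{1+\beta}},
\qquad
\beta = \Gamma\,\frac{\rho_f\,a}{\rho_p\,h}
```

where f_vac is the in-vacuo natural frequency, ρ_f and ρ_p the fluid and plate densities, a a characteristic plate dimension, h the plate thickness, and Γ a nondimensional factor depending on the mode and boundary conditions.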
Abstract:
In the half-duplex relay channel with the decode-and-forward protocol, the relay introduces energy into the channel, as observed at the destination, over random time intervals. Consequently, during simulation the average signal power seen at the destination becomes known only at run-time. Therefore, in order to obtain specific performance measures at the signal-to-noise ratio (SNR) of interest, strategies are required to adjust the noise variance during simulation run-time. These strategies must result in the same performance as measured under real-world conditions. This paper introduces three noise power allocation strategies and demonstrates their applicability using numerical and simulation results.
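One plausible strategy of this kind (illustrative only; not necessarily one of the paper's three) is to measure the average signal power at the destination during the run and scale the noise variance so the realized SNR matches the target:

```python
# Run-time noise-power allocation sketch: the relay is active only in random
# slots, so the destination's average signal power is measured at run-time
# and the noise variance is set to hit the target SNR. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def noise_variance_for_target_snr(snr_db, n=10_000):
    active = rng.random(n) < 0.6                           # random relay-active slots
    signal = np.where(active, rng.choice([-1.0, 1.0], size=n), 0.0)  # BPSK when active
    p_avg = np.mean(signal ** 2)                           # measured average power
    sigma2 = p_avg / (10.0 ** (snr_db / 10.0))             # noise power for target SNR
    received = signal + rng.normal(scale=np.sqrt(sigma2), size=n)
    return p_avg, sigma2

print(noise_variance_for_target_snr(6.0))
```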
Abstract:
An experimental and numerical study of turbulent fire suppression is presented. For this work, a novel and canonical facility has been developed, featuring a buoyant, turbulent, methane or propane-fueled diffusion flame suppressed via either nitrogen dilution of the oxidizer or application of a fine water mist. Flames are stabilized on a slot burner surrounded by a co-flowing oxidizer, which allows controlled delivery of either suppressant to achieve a range of conditions from complete combustion through partial and total flame quenching. A minimal supply of pure oxygen is optionally applied along the burner to provide a strengthened flame base that resists liftoff extinction and permits the study of substantially weakened turbulent flames. The carefully designed facility features well-characterized inlet and boundary conditions that are especially amenable to numerical simulation. Non-intrusive diagnostics provide detailed measurements of suppression behavior, yielding insight into the governing suppression processes, and aiding the development and validation of advanced suppression models. Diagnostics include oxidizer composition analysis to determine suppression potential, flame imaging to quantify visible flame structure, luminous and radiative emissions measurements to assess sooting propensity and heat losses, and species-based calorimetry to evaluate global heat release and combustion efficiency. The studied flames experience notable suppression effects, including transition in color from bright yellow to dim blue, expansion in flame height and structural intermittency, and reduction in radiative heat emissions. Still, measurements indicate that the combustion efficiency remains close to unity, and only near the extinction limit do the flames experience an abrupt transition from nearly complete combustion to total extinguishment. Measurements are compared with large eddy simulation results obtained using the Fire Dynamics Simulator, an open-source computational fluid dynamics software package. Comparisons of experimental and simulated results are used to evaluate the performance of available models in predicting fire suppression. Simulations in the present configuration highlight the issue of spurious reignition that is permitted by the classical eddy-dissipation concept for modeling turbulent combustion. To address this issue, simple treatments to prevent spurious reignition are developed and implemented. Simulations incorporating these treatments are shown to produce excellent agreement with the experimentally measured data, including the global combustion efficiency.
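A simplified sketch of such a treatment is shown below: the mixing-limited (eddy-dissipation) reaction rate in a cell is zeroed unless the cell is hot enough to sustain a flame or lies next to an already burning cell, so that cold, isolated fuel-oxidizer mixtures cannot spuriously reignite. This illustrates the idea only; it is not FDS source code, and the threshold value is arbitrary.

```python
# Sketch of a guard against spurious reignition in an eddy-dissipation-style
# combustion model. Illustrative logic and constants; not FDS source code.
T_CRIT = 1700.0  # critical flame temperature, K (arbitrary illustrative value)

def may_burn(T_cell, burning_neighbor):
    """Gate the reaction: cold, isolated mixtures must not ignite."""
    return (T_cell >= T_CRIT) or burning_neighbor

def edc_rate(fuel, oxidizer, tau_mix, T_cell, burning_neighbor):
    """Mixing-limited rate, zeroed where reignition would be spurious."""
    if not may_burn(T_cell, burning_neighbor):
        return 0.0
    return min(fuel, oxidizer) / tau_mix

print(edc_rate(0.02, 0.05, 0.01, T_cell=400.0, burning_neighbor=False))   # 0.0
print(edc_rate(0.02, 0.05, 0.01, T_cell=1900.0, burning_neighbor=False))  # burns
```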
Abstract:
Water regimes in the Brazilian Cerrados are sensitive to climatological disturbances and human intervention. The risk that critical water-table levels are exceeded over long periods of time can be estimated by applying stochastic methods to model the dynamic relationship between water levels and driving forces such as precipitation and evapotranspiration. In this study, a transfer function-noise model, the so-called PIRFICT model, is applied to estimate the dynamic relationship between water-table depth and precipitation surplus/deficit in a watershed with a groundwater monitoring scheme in the Brazilian Cerrados. Critical limits were defined for a period in the Cerrados agricultural calendar, the end of the rainy season, when extremely shallow levels (< 0.5-m depth) can pose a risk to plant health and machinery before harvesting. By simulating the time-series models, the risk of exceeding critical thresholds during a continuous period of time (e.g., 10 days) is expressed as a probability. These simulated probabilities were interpolated spatially using universal kriging, incorporating drainage-basin information from a digital elevation model, which reduced the uncertainty of the resulting map. Three areas were identified as presenting potential risk at the end of the rainy season; these areas deserve attention with respect to water management and land-use planning.
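The risk estimate can be sketched as a Monte Carlo procedure: simulate many realizations of water-table depth from the calibrated time-series model and count those in which the depth stays shallower than 0.5 m for at least 10 consecutive days. The AR(1)-plus-impulse-response form below is a generic stand-in for the PIRFICT model, with illustrative parameters.

```python
# Monte Carlo exceedance-risk sketch with a generic transfer function-noise
# stand-in for the PIRFICT model. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def simulate_depth(precip_surplus, base=1.2, gain=-0.004, phi=0.95, sigma=0.02):
    """Depth below surface (m): exponential response to surplus + AR(1) noise."""
    response, noise, depths = 0.0, 0.0, []
    for p in precip_surplus:
        response = phi * response + gain * p           # impulse response to surplus
        noise = phi * noise + rng.normal(scale=sigma)  # correlated residual
        depths.append(base + response + noise)
    return np.array(depths)

def exceeds_critical(depths, limit=0.5, run=10):
    """True if depth < limit for at least `run` consecutive days."""
    count = 0
    for shallow in depths < limit:
        count = count + 1 if shallow else 0
        if count >= run:
            return True
    return False

surplus = rng.normal(4.0, 6.0, size=120)   # daily surplus (mm), end of rainy season
risk = np.mean([exceeds_critical(simulate_depth(surplus)) for _ in range(1000)])
print(f"P(depth < 0.5 m for >= 10 days) = {risk:.2f}")
```

Repeating this at each monitoring well yields the point probabilities that are then interpolated by universal kriging.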
Abstract:
Myocardial fibrosis detected via delayed-enhancement magnetic resonance imaging (MRI) has been shown to be a strong indicator of ventricular tachycardia (VT) inducibility. However, little is known about how inducibility is affected by the details of fibrosis extent, morphology, and border-zone configuration. The objective of this article is to systematically study the arrhythmogenic effects of fibrosis geometry and extent, specifically on VT inducibility and maintenance. We present a set of methods for constructing patient-specific computational models of human ventricles using in vivo MRI data from patients suffering from hypertension, hypercholesterolemia, and chronic myocardial infarction. Additional synthesized models with morphologically varied extents of fibrosis and gray zone (GZ) distribution were derived to study alterations in arrhythmia induction and reentry patterns. Detailed electrophysiological simulations demonstrated that (1) VT morphology was highly dependent on the extent of fibrosis, which acts as a structural substrate; (2) reentry tended to be anchored to the fibrosis edges and showed transmural conduction of activations through narrow channels formed within the fibrosis; and (3) increasing the extent of GZ within the fibrosis tended to destabilize the structural reentry sites and aggravate the VT compared with fibrotic regions of the same size and shape but with less or no GZ. The approach and findings represent a significant step toward patient-specific cardiac modeling as a reliable tool for VT prediction and patient management. Sensitivities to approximation nuances in the image-based reconstruction of structural pathology are also examined.
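Electrophysiological simulations of this kind typically solve a monodomain reaction-diffusion model; its generic form is shown below for orientation (the article's specific ionic model and conductivity settings are not reproduced here):

```latex
% Generic monodomain model underlying ventricular electrophysiology
% simulations; the article's specific settings are not reproduced.
\chi C_m \frac{\partial V_m}{\partial t}
  = \nabla \cdot \left( \sigma \nabla V_m \right)
  - \chi\, I_{\text{ion}}(V_m, \mathbf{w}) + I_{\text{stim}}
```

where V_m is the transmembrane potential, σ the conductivity tensor (reduced in fibrotic tissue and the GZ), χ the membrane surface-to-volume ratio, C_m the membrane capacitance, I_ion the ionic current of the chosen cell model with state variables w, and I_stim the pacing stimulus used to induce VT.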
Abstract:
The strategic orientations of a firm are considered crucial for enhancing firm performance, and their impact can be even greater when associated with dynamic capabilities, particularly in complex and dynamic environments. This study empirically analyzes the relationship between market, entrepreneurial, and learning orientations, dynamic capabilities, and performance using an integrative approach hitherto little explored. Using a sample of 209 knowledge-intensive business service firms, this paper applies structural equation modeling to explore both the direct effects of strategic orientations on performance and the mediating role of dynamic capabilities. The study demonstrates that learning orientation and one of the dimensions of entrepreneurial orientation have a direct positive effect on performance. On the other hand, dynamic capabilities mediate the relationships between some of the strategic orientations and firm performance. Overall, when dynamic capabilities are combined with the appropriate strategic orientations, they enhance firm performance. This paper contributes to a better understanding of the knowledge economy, given the important role knowledge-intensive business services play in such a dynamic and pivotal sector.
Abstract:
This thesis deals with optimization techniques and modeling of vehicular networks. Models based on integer linear programming (ILP) and heuristics made it possible to study the performance of 5G networks for vehicular applications. The software-defined networking (SDN) and network functions virtualization (NFV) paradigms made it possible to study the performance of different classes of service, such as the ultra-reliable low-latency communications (URLLC) class and the enhanced mobile broadband (eMBB) class, and how the functional split can positively affect network resource management. Two different protection techniques have been studied: Shared Path Protection (SPP) and Dedicated Path Protection (DPP). These protections achieve different network reliability levels, according to the needs of the end user. A simulator developed in Python was used to study the dynamic allocation of resources in a 5G metro network; through different provisioning algorithms and different dynamic resource management techniques, useful results have been obtained for understanding the needs of the vehicular networks that will exploit 5G. Finally, two models are shown for reconfiguring backup resources when shared-resource protection is used.
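The dynamic-provisioning idea behind such a simulator can be sketched as an event-driven loop: each arriving request is admitted on the least-loaded feasible candidate path, holds its capacity for its lifetime, and releases it on departure, with blocked requests counted. The topology, demands, and path choice below are illustrative; the thesis's actual algorithms and URLLC/eMBB class handling are richer than this.

```python
# Event-driven dynamic provisioning sketch on a toy 3-node topology.
# Illustrative only; not the thesis's simulator.
import heapq

CAPACITY = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 6}
PATHS = [[("A", "B"), ("B", "C")], [("A", "C")]]   # two candidate A->C routes
used = {link: 0 for link in CAPACITY}

def try_provision(demand):
    """Admit on the feasible path with most spare capacity; None if blocked."""
    feasible = [p for p in PATHS
                if all(used[l] + demand <= CAPACITY[l] for l in p)]
    if not feasible:
        return None
    best = max(feasible, key=lambda p: min(CAPACITY[l] - used[l] for l in p))
    for l in best:
        used[l] += demand
    return best

departures, blocked = [], 0
for t in range(8):                      # one arrival per step: demand 3, holds 2 steps
    while departures and departures[0][0] <= t:     # release departed requests
        _, path, d = heapq.heappop(departures)
        for l in path:
            used[l] -= d
    path = try_provision(3)
    if path is None:
        blocked += 1
    else:
        heapq.heappush(departures, (t + 2, path, 3))
print("blocked:", blocked, "usage:", used)
```

The same loop extends naturally to SPP versus DPP by reserving backup capacity that is either shared across disjoint working paths or dedicated per request.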
Abstract:
The main purpose of this work is to develop a numerical platform for the turbulence modeling and optimal control of liquid metal flows. Thanks to their favorable thermal properties, liquid metals are widely studied as coolants for heat transfer applications in the nuclear context. However, due to their low Prandtl numbers, the standard turbulence models commonly used for coolants such as air or water are inadequate, and advanced turbulence models able to capture the anisotropy of the flow and heat transfer are necessary. In this thesis, a new anisotropic four-parameter turbulence model is presented and validated. The proposed model is based on explicit algebraic models and solves four additional transport equations for dynamical and thermal turbulent variables. For the validation of the model, several flow configurations are considered at different Reynolds and Prandtl numbers, namely fully developed flows in a plane channel and a cylindrical pipe, and forced and mixed convection in a backward-facing step geometry. Since buoyancy effects cannot be neglected in liquid-metal-cooled fast reactors, the second aim of this work is to provide mathematical and numerical tools for the simulation and optimization of liquid metals in mixed and natural convection. Optimal control problems for turbulent buoyant flows are studied and analyzed with the Lagrange multiplier method. Numerical algorithms for optimal control problems are integrated into the numerical platform, and several simulations are performed to show the robustness, consistency, and feasibility of the method.
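Schematically, a four-parameter closure of this kind tracks dynamical and thermal time scales separately, so the turbulent Prandtl number is no longer assumed constant. A generic structure is shown below, with model functions and constants omitted; this is a schematic, not the thesis's exact formulation.

```latex
% Schematic structure of a four-parameter (k, eps, k_theta, eps_theta)
% closure for low-Prandtl fluids; not the thesis's exact formulation.
\nu_t = C_\mu\, f_\mu\, \frac{k^2}{\varepsilon},
\qquad
\alpha_t = C_\theta\, f_\theta\, k\, \tau_{\mathrm{mix}},
\qquad
\tau_{\mathrm{mix}} = f\!\left(\frac{k}{\varepsilon},\, \frac{k_\theta}{\varepsilon_\theta}\right)
```

where k and ε are the turbulent kinetic energy and its dissipation rate, and k_θ and ε_θ the temperature-fluctuation variance and its dissipation rate, all obtained from the four additional transport equations; the mixed time scale τ_mix couples the dynamical and thermal fields in the eddy thermal diffusivity α_t.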
Abstract:
Protected crop production is a modern and innovative approach to cultivating plants in a controlled environment to optimize growth, yield, and quality. This method uses structures such as greenhouses or tunnels to create a sheltered environment. These production systems are characterized by careful regulation of variables like temperature, humidity, light, and ventilation, which collectively create an optimal microclimate for plant growth. Heating, cooling, and ventilation systems maintain optimal growing conditions regardless of external weather fluctuations. Protected crop production plays a crucial role in addressing the challenges posed by climate variability, population growth, and food security. Similarly, animal husbandry involves providing adequate nutrition, housing, medical care, and environmental conditions to ensure animal welfare. Sustainability is a critical consideration in all forms of agriculture, including protected crop and animal production; in animal production it means producing animal products in a way that minimizes negative environmental impacts, promotes animal welfare, and ensures the long-term viability of the industry. The research activities performed during this PhD therefore fall within the field of precision agriculture and livestock farming. The focus is on the computational fluid dynamics (CFD) approach and on environmental assessment, applied to improve yield, resource efficiency, environmental sustainability, and cost savings. This represents a significant shift from traditional farming methods to a more technology-driven, data-driven, and environmentally conscious approach to crop and animal production. On one side, CFD is a powerful and precise computer modeling and simulation technique for airflows and thermo-hygrometric parameters, and it has been applied here to optimize the growth environment of crops and the efficiency of ventilation in pig barns. On the other side, sustainability has been investigated through Life Cycle Assessment analyses.