37 results for systems modeling


Relevance:

40.00%

Publisher:

Abstract:

A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and such a methodology is helpful for studying other distributed and concurrent systems as well. Providing the methodology is a challenge, however, because of the agent mobility in mobile agent systems.

The methodology was defined from the two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models on different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architectural configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked on a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture of mobile agent systems and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties of the medical information processing system such as reachability, concurrency, and safety.

The successful modeling and analysis of the software architecture of mobile agent systems lead to the conclusion that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and that the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, the system demonstrates the flexibility, efficiency, and low cost of mobile agent technologies.
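
As a rough illustration of the state-space exploration that underlies the SPIN verification described above, the following Python sketch exhaustively searches a toy transition system for a safety violation. The two-agent channel example and all names are hypothetical; the dissertation's actual models are PrT nets checked in SPIN.

```python
from collections import deque

def check_safety(initial, successors, is_safe):
    """Exhaustively explore a finite transition system (as an explicit-state
    model checker does) and return a path to an unsafe state, or None."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if not is_safe(state):
            return path                      # counterexample: safety violated
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # every reachable state is safe

# Toy example: two agents sharing one channel slot; unsafe if both hold it.
def successors(s):
    a, b = s
    moves = []
    if not (a or b):                         # channel free: either may acquire
        moves += [(1, b), (a, 1)]
    if a: moves.append((0, b))               # holder releases
    if b: moves.append((a, 0))
    return moves

print(check_safety((0, 0), successors, lambda s: not (s[0] and s[1])))
```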

Relevance:

40.00%

Publisher:

Abstract:

Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners who are contemplating an upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities, from the suburban neighborhood to the Central Business District (CBD), and utilize similar right-of-way (ROW) categories. This research is aimed at developing a method to assist transportation planners and decision makers in determining the more feasible of the two systems.

Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT but has significantly lower operating and maintenance (O&M) costs. This dissertation examines the factors impacting capacity and costs and develops capacity-based cost models for LRT and BRT systems. Various ROW categories and alignment configurations of the systems are also considered in the developed cost models. Kikuchi's fleet size model (1985) and a cost allocation method are used to estimate capacity and costs.

Comparisons between LRT and BRT are complicated by the many possible transportation planning and operation scenarios. To facilitate the process, a user-friendly computer interface integrating the established capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was developed in Microsoft Visual Basic to guide users through the comparison. The cost models and LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headways, vehicle sizes, and traffic signal timing at intersections. Planners can make the necessary changes and adjustments depending on their operating practices.
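
To make the capacity-based costing concrete, here is a minimal Python sketch of the two ingredients the abstract names: a Kikuchi-style fleet-size calculation and an annualized cost comparison. All input figures are hypothetical placeholders, not values from the dissertation.

```python
import math

def fleet_size(round_trip_min, headway_min):
    """Minimum vehicles needed to sustain a headway over a round trip
    (the logic behind Kikuchi-style fleet-size models)."""
    return math.ceil(round_trip_min / headway_min)

def annualized_cost(capital, life_yrs, rate, om_per_year):
    """Annualize capital with a capital recovery factor, then add O&M."""
    crf = rate * (1 + rate) ** life_yrs / ((1 + rate) ** life_yrs - 1)
    return capital * crf + om_per_year

# Hypothetical single-corridor inputs (vehicle capital only, for illustration).
for mode, veh_cost, life, om, veh_capacity in [
        ("LRT", 40e6, 30, 1.5e6, 180),
        ("BRT", 15e6, 12, 2.5e6, 90)]:
    n = fleet_size(round_trip_min=60, headway_min=6)
    line_capacity = (60 / 6) * veh_capacity       # passengers per hour
    total = annualized_cost(veh_cost * n, life, 0.04, om * n)
    print(f"{mode}: fleet={n}, capacity={line_capacity:.0f} pax/h, "
          f"annual cost=${total/1e6:.1f}M")
```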

Relevance:

40.00%

Publisher:

Abstract:

Groundwater systems of different densities are often mathematically modeled to understand and predict environmental behavior such as seawater intrusion or submarine groundwater discharge. Additional data collection may be justified if it cost-effectively aids in reducing the uncertainty of a model's predictions. Collecting salinity as well as temperature data could help reduce predictive uncertainty in a variable-density model. Before numerical models can be created, however, rigorous testing of the modeling code needs to be completed. This research documents the benchmark testing of a new modeling code, SEAWAT Version 4. The benchmark problems include various combinations of density-dependent flow resulting from variations in concentration and temperature. The verified code, SEAWAT, was then applied in two hydrological analyses to explore the capacity of a variable-density model to guide data collection.

The first analysis tested a linear method for guiding data collection by quantifying the contribution of different data types and locations toward reducing predictive uncertainty in a nonlinear variable-density flow and transport model. The relative contributions of temperature and concentration measurements at different locations within a simulated carbonate platform were assessed for predicting movement of the saltwater interface. Results showed that concentration data had greater worth than temperature data in reducing predictive uncertainty in this case. Results also indicated that a linear method can be used to quantify data worth in a nonlinear model.

The second hydrological analysis utilized a model to identify the transient response of the salinity, temperature, age, and amount of submarine groundwater discharge to changes in tidal ocean stage, seasonal temperature variations, and different types of geology. The model was compared to multiple kinds of data to (1) calibrate and verify the model and (2) explore the potential for the model to guide the collection of data using techniques such as electromagnetic resistivity, thermal imagery, and seepage meters. Results indicated that the model can give insight into submarine groundwater discharge and can be used to guide data collection.
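
The linear data-worth method can be illustrated with a first-order (Bayes linear) calculation: the worth of an observation is the reduction in prediction variance it buys. The sketch below assumes made-up sensitivities and covariances; it shows the form of the computation, not the dissertation's actual numbers.

```python
import numpy as np

def predictive_variance(C, y):
    """First-order prediction variance s^2 = y^T C y, where C is the
    parameter covariance and y the prediction sensitivity vector."""
    return y @ C @ y

def data_worth(C, y, H, R):
    """Reduction in prediction variance from adding observations with
    sensitivity matrix H and noise covariance R (Bayes linear update)."""
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)
    C_post = C - K @ H @ C
    return predictive_variance(C, y) - predictive_variance(C_post, y)

# Hypothetical 3-parameter model; compare one concentration observation
# against one temperature observation (sensitivities are invented).
C = np.diag([1.0, 0.5, 2.0])                 # prior parameter covariance
y = np.array([0.8, 0.1, 0.4])                # interface-position sensitivity
H_conc = np.array([[0.9, 0.2, 0.3]])         # concentration obs sensitivity
H_temp = np.array([[0.1, 0.7, 0.1]])         # temperature obs sensitivity
R = np.array([[0.05]])                       # observation noise variance
print("worth of concentration data:", data_worth(C, y, H_conc, R))
print("worth of temperature data:  ", data_worth(C, y, H_temp, R))
```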

Relevance:

40.00%

Publisher:

Abstract:

The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development and is widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address them. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e., objects and classes, and is one of the most commonly used UML diagrams in practice. Yet there has been little research on sequence-diagram modeling, and the current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would greatly benefit novice analysts who, unlike experienced systems analysts, do not possess the relevant prior experience that makes learning to develop a sequence diagram easy. There is a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of approaches provided in many textbooks as well as the practitioner literature. The results indicated that novice analysts were able to perform better using the CHOP technique, an outcome that seems to have been enabled by the pattern-based heuristics the technique provides. Novice analysts also rated the CHOP technique as more useful than the control technique, although not significantly easier to use. The study established that the CHOP technique is an effective sequence-diagram modeling technique for novice analysts.

Relevance:

40.00%

Publisher:

Abstract:

In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although there were many conceptual frameworks for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of the design.

Given that there was no well-defined formal model directly supporting agent-oriented modeling, this study centered on three main topics: (1) adapting a well-known formal model, predicate transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model checking technology. PrT nets were extended with the notions of dynamic structure, agent communication, and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models into concrete models that can be verified with the model checker SPIN (Simple Promela Interpreter).

This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets and a comprehensive methodology to guide the construction and analysis of formal MAS models.
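
The token-game semantics at the base of PrT nets can be sketched with an ordinary place/transition net in a few lines of Python. This is a simplification (plain tokens, no predicates or dynamic structure), with a hypothetical agent-messaging net as the example.

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire an enabled transition: consume input tokens, produce outputs."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net: an agent sends a message over a channel to a receiver.
send = {"in": {"agent_ready": 1}, "out": {"channel": 1}}
recv = {"in": {"channel": 1}, "out": {"msg_received": 1}}
m0 = {"agent_ready": 1}
m1 = fire(m0, send) if enabled(m0, send) else m0
m2 = fire(m1, recv) if enabled(m1, recv) else m1
print(m2)   # {'agent_ready': 0, 'channel': 0, 'msg_received': 1}
```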

Relevance:

40.00%

Publisher:

Abstract:

One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by an acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head, and pinnae (outer ears). As such, HRTFs are somewhat different for each listener. For a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs.

One can measure individual HRTFs using specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system. HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive, and resonant phenomena, which determine the HRTF. Using signal processing tools, such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of pinna features obtained from 3-D images of the 15 experimental subjects modeled in the project. These equations were used to yield “Model HRTFs” that can create elevation effects.

Listening tests conducted on 10 subjects show that these model HRTFs are 5% more effective than generic HRTFs at localizing sounds in the frontal plane. The number of reversals (perceiving a sound source above the horizontal plane when it is actually below it, and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple yet versatile because it relies on easy-to-measure parameters to create an individualized HRTF. This low-order parameterized model also reduces computational and storage demands while maintaining a sufficient number of perceptually relevant spectral cues.
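
The structural idea, approximating an HRIR by a direct path, a few pinna reflections, and one resonance, can be sketched as follows. The delays, gains, and resonance parameters below are hypothetical placeholders, not the fitted empirical equations from the study.

```python
import numpy as np
from scipy.signal import lfilter

def model_hrir(fs, delays_us, gains, f_res, q, n=256):
    """Low-order structural HRIR: a direct impulse plus a few scaled,
    delayed reflections, shaped by one second-order resonance."""
    h = np.zeros(n)
    h[0] = 1.0                                    # direct sound
    for d_us, g in zip(delays_us, gains):         # pinna reflections
        h[int(round(d_us * 1e-6 * fs))] += g
    w0 = 2 * np.pi * f_res / fs                   # resonance as 2-pole IIR
    r = 1 - w0 / (2 * q)                          # pole radius from Q
    b, a = [1 - r], [1, -2 * r * np.cos(w0), r * r]
    return lfilter(b, a, h)

# Hypothetical parameters for one elevation angle (illustration only).
hrir = model_hrir(fs=44100, delays_us=[80, 150, 260],
                  gains=[0.6, -0.4, 0.3], f_res=4000, q=5)
```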

Relevance:

30.00%

Publisher:

Abstract:

Three new technologies have been brought together to develop a miniaturized radiation monitoring system. The research involved (1) investigation of a new HgI₂ detector, (2) VHDL modeling, (3) FPGA implementation, and (4) in-circuit verification. The packages used included an EG&G HgI₂ crystal manufactured at zero gravity, Viewlogic's VHDL and synthesis tools, Xilinx's technology library and FPGA implementation tool, and a high-density device (the XC4003A). The results show (1) reduced cycle time between design and hardware implementation; (2) unlimited redesign and reimplementation using static RAM technology; (3) customer-based design, verification, and system construction; and (4) suitability for intelligent systems. These advantages surpass those of conventional chip design technologies and methods in ease of use, cycle time, and price for medium-sized VLSI applications. It is also expected that the density of these devices will improve radically in the near future.

Relevance:

30.00%

Publisher:

Abstract:

Clusters are aggregations of atoms or molecules, generally intermediate in size between individual atoms and aggregates large enough to be called bulk matter. Clusters can also be called nanoparticles, because their size is on the order of nanometers or tens of nanometers. A new field called nanostructured materials has begun to take shape, which takes advantage of these atom clusters. The ultra-small size of the building blocks leads to dramatically different properties, and it is anticipated that such atomically engineered materials will be able to be tailored to perform as no previous material could.

The idea of the ionized cluster beam (ICB) thin film deposition technique was first proposed by Takagi in 1972. It was based upon using a supersonic jet source to produce, ionize, and accelerate beams of atomic clusters onto substrates in a vacuum environment. Conditions for the formation of cluster beams suitable for thin film deposition have only recently been established, following twenty years of effort. Zinc clusters over 1,000 atoms in average size have been synthesized both in our lab and in that of Gspann. More recently, other methods of synthesizing clusters and nanoparticles, using different types of cluster sources, have come under development.

In this work, we studied different aspects of nanoparticle beams. The work includes refinement of a model of the cluster formation mechanism, development of a new real-time, in situ cluster size measurement method, and study of the use of ICB in the fabrication of semiconductor devices.

The formation process of the vaporized-metal cluster beam was simulated and investigated using classical nucleation theory and one-dimensional gas flow equations. Zinc cluster sizes predicted at the nozzle exit are in good quantitative agreement with experimental results in our laboratory.

A novel in situ real-time mass, energy, and velocity measurement apparatus has been designed, built, and tested. This small time-of-flight mass spectrometer is suitable for use in our cluster deposition systems and does not suffer from the problems associated with other methods of cluster size measurement, such as the need for specialized ionizing lasers, inductive electrical or electromagnetic coupling, dependence on the assumption of homogeneous nucleation, limits on the measurable size, and the lack of real-time capability. Ion energies measured with the electrostatic energy analyzer are in good agreement with values obtained from computer simulation. The velocity v is measured by pulsing the cluster beam and measuring the delay between the pulse and the analyzer output current. The mass of a particle is then calculated from m = 2E/v². The error in the measured value of the background gas mass is on the order of 28% of the mass of one N₂ molecule, which is negligible for the measurement of large clusters. This resolution in cluster size measurement is very acceptable for our purposes.

Selective area deposition onto conducting patterns overlying insulating substrates was demonstrated using intense, fully ionized cluster beams. Parameters influencing the selectivity are ion energy, repelling voltage, the ratio of the conductor to insulator dimensions, and substrate thickness.
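
The mass computation is direct once energy and velocity are measured. Here is a minimal sketch of m = 2E/v², with a hypothetical reading chosen so the result lands near the cluster sizes reported above:

```python
def cluster_mass(energy_ev, velocity_m_s):
    """Mass from the time-of-flight relation m = 2E/v^2 used by the
    spectrometer described above. Returns mass in kg and in amu."""
    E = energy_ev * 1.602e-19          # eV -> joules
    m_kg = 2 * E / velocity_m_s ** 2
    return m_kg, m_kg / 1.6605e-27     # kg -> atomic mass units

# Hypothetical reading: a 500 eV cluster ion timed over the flight path.
m_kg, m_amu = cluster_mass(energy_ev=500.0, velocity_m_s=1.2e3)
n_zn = m_amu / 65.38                   # zinc atoms per cluster (Zn = 65.38 amu)
print(f"cluster mass ~ {m_amu:.0f} amu ~ {n_zn:.0f} Zn atoms")
```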

Relevance:

30.00%

Publisher:

Abstract:

The Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language, owing to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and industrial production-quality tool support. However, there is a lack of precise definition of the semantics of individual UML notations and of the relationships among multiple UML models, which often leads to incomplete and inconsistent software designs in UML, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML and to provide dependability assurance for the realization of a UML design.

In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The proposed framework consists of components and of interactions through message passing, modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams, and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can then be adopted to check whether UML designs are complete and consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser automatically generates and weaves runtime monitoring code into system implementations for dependability assurance. The framework developed in this investigation is creative and flexible: it can be used not only to verify and validate UML designs but also to build models for various scenarios. As a result of this research, several kinds of previously ignored behavioral inconsistencies can be detected.
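
The runtime-monitoring idea, generating a check and weaving it into the implementation, can be sketched with a Python decorator. This is only an analogy for the property parser's code weaving; the class, the property, and all names are hypothetical.

```python
import functools

def monitor(invariant, name):
    """Weave a runtime property check around a method, in the spirit of
    the property parser described above (illustrative only)."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(self, *args, **kwargs):
            result = fn(self, *args, **kwargs)
            if not invariant(self):              # check after each call
                raise AssertionError(f"runtime property violated: {name}")
            return result
        return inner
    return wrap

class Account:
    def __init__(self):
        self.balance = 0

    @monitor(lambda self: self.balance >= 0, "balance never negative")
    def withdraw(self, amount):
        self.balance -= amount

acct = Account()
try:
    acct.withdraw(10)          # violates the woven property
except AssertionError as e:
    print(e)
```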

Relevance:

30.00%

Publisher:

Abstract:

A two-phase, three-dimensional computational model of an intermediate-temperature (120–190°C) proton exchange membrane (PEM) fuel cell is presented. This represents the first attempt to model PEM fuel cells employing intermediate-temperature membranes, in this case phosphoric acid-doped polybenzimidazole (PBI). To date, mathematical modeling of PEM fuel cells has been restricted to low-temperature operation, especially to cells employing Nafion® membranes, while research on PBI as an intermediate-temperature membrane has been solely experimental. This work advances the state of the art of both fields of research. With a growing trend toward higher-temperature operation of PEM fuel cells, mathematical modeling of such systems is necessary to help hasten the development of the technology and highlight areas where research should be focused.

The mathematical model accounts for all the major transport and polarization processes occurring inside the fuel cell, including the two-phase phenomenon of gas dissolution in the polymer electrolyte. Results are presented for polarization performance, flux distributions, concentration variations in both the gaseous and aqueous phases, and temperature variations for various heat management strategies. The model predictions matched published experimental data well and were self-consistent.

The major finding of this research was that, due to the transport limitations imposed by the use of phosphoric acid as a doping agent, namely the low solubility and diffusivity of dissolved gases and anion adsorption onto catalyst sites, catalyst utilization is very low (~1–2%). Significant cost savings were predicted with the use of advanced catalyst deposition techniques that would greatly reduce the thickness of the catalyst layer and thereby improve catalyst utilization. The model also predicted an increase in power output on the order of 50% if alternatives to phosphoric acid can be found that afford better transport properties of dissolved gases and reduced anion adsorption onto catalyst sites while maintaining stability and conductivity at elevated temperatures.
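
For orientation, a textbook polarization relation (open-circuit voltage minus activation, ohmic, and concentration losses) can be sketched in a few lines. This is not the dissertation's two-phase 3-D model; all parameter values below are illustrative.

```python
import numpy as np

def cell_voltage(i, E_oc=0.95, b=0.11, R=0.25, i0=1e-4, i_lim=1.4):
    """Textbook polarization curve: i in A/cm^2, voltages in V,
    R in ohm*cm^2, Tafel slope b in V/decade (all values illustrative)."""
    act = b * np.log10(i / i0)                    # activation (Tafel) loss
    ohm = R * i                                   # ohmic loss
    conc = -0.05 * np.log(1 - i / i_lim)          # concentration loss
    return E_oc - act - ohm - conc

for i in [0.1, 0.4, 0.8, 1.2]:
    print(f"i = {i:.1f} A/cm^2 -> V = {cell_voltage(i):.3f} V")
```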

Relevance:

30.00%

Publisher:

Abstract:

Purpose. The goal of this study was to improve the favorable molecular interactions between starch and PPC by the addition of the grafting monomers MA and ROM as compatibilizers, which would advance the mechanical properties of starch/PPC composites.

Methodology. Calculations based on DFT and semi-empirical methods were performed on three systems: (a) starch/PPC, (b) starch/PPC-MA, and (c) starch-ROM/PPC. The theoretical computations involved determining the optimal geometries, binding energies, and vibrational frequencies of the blended polymers.

Findings. Calculations performed on five starch/PPC composites revealed hydrogen bond formation as the driving force behind stable composite formation, confirmed by the negative relative energies of the composites, which indicate binding forces between the constituent co-polymers. The interaction between starch and PPC is also confirmed by the computed decrease in the stretching frequencies of the CO and OH groups participating in hydrogen bonding, which agrees qualitatively with the experimental values.

A three-step mechanism for grafting MA onto PPC was proposed to improve the compatibility of PPC with starch. Nine types of blends produced by covalent bond formation between starch and MA-grafted PPC were found to be energetically stable, with blends involving MA grafted at the 'B' and 'C' positions of PPC showing binding-energy increases of 6.8 and 6.2 kcal/mol, respectively, compared to the non-grafted starch/PPC composites. A similar increase in binding energies was also observed for three types of composites formed by hydrogen bonding between starch and MA-grafted PPC.

Next, the grafting of ROM onto starch and subsequent blend formation with PPC was studied. All four types of blends formed by the reaction of ROM-grafted starch with PPC were found to be more energetically stable than the starch/PPC composite and the starch/PPC-MA composites and blends. A blend of PPC and ROM grafted at the 'a&d12;' position on amylose exhibited a maximal increase of 17.1 kcal/mol compared with the starch/PPC-MA blend.

Conclusions. ROM was found to be a more effective compatibilizer than MA in improving the favorable interactions between starch and PPC. The 'a&d12;' position was found to be the most favorable attachment point of ROM to amylose for stable blend formation with PPC.
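
The binding energies compared throughout are differences between the total energy of a complex and the sum of the energies of its isolated parts. A minimal sketch of that arithmetic, with hypothetical total energies:

```python
HARTREE_TO_KCAL = 627.509

def binding_energy(e_complex, e_a, e_b):
    """Binding energy of a blend/composite relative to its isolated
    co-polymers: E_bind = E_AB - (E_A + E_B). Negative = stabilizing."""
    return (e_complex - (e_a + e_b)) * HARTREE_TO_KCAL

# Hypothetical total energies in hartree (illustration only).
e_starch, e_ppc, e_composite = -1832.41, -1145.87, -2978.31
print(f"binding energy: {binding_energy(e_composite, e_starch, e_ppc):.1f} kcal/mol")
```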

Relevance:

30.00%

Publisher:

Abstract:

Since the introduction of fiber-reinforced polymers (FRP) for the repair and retrofit of concrete structures in the 1980s, considerable research has been devoted to the feasibility of their application and to predictive modeling of their performance. However, the effects of flaws present in the constitutive components and of the practices in substrate preparation and treatment have not yet been thoroughly studied. This research investigates the effect of surface preparation and treatment for pre-cured FRP systems and of groove size tolerance for near-surface-mounted (NSM) FRP systems, and sets thresholds for guaranteed system performance. The study was conducted as part of National Cooperative Highway Research Program (NCHRP) Project 10-59B to develop construction specifications and a process control manual for the repair and retrofit of concrete structures using bonded FRP systems. The research included both analytical and experimental components. The experimental program for the pre-cured FRP systems consisted of twenty-four (24) reinforced concrete (RC) T-beams with various surface preparation parameters and surface flaws, including roughness, flatness, voids, and cracks (cuts). For the NSM FRP systems, twelve (12) additional RC T-beams were tested with different groove sizes for FRP bars and strips. The analytical program included developing an elaborate nonlinear finite element model using the general-purpose software ANSYS, with the bond interface between FRP and concrete modeled by a series of nonlinear springs. The model was validated against test data from the present study as well as data available in the literature, and was subsequently used to extend the experimental range of parameters for surface flatness in pre-cured FRP systems and for groove size in the NSM FRP systems. Test results, confirmed by further analyses, indicated that, contrary to the general belief in the industry, the impact of surface roughness on the global performance of pre-cured FRP systems was negligible. The study also verified that the threshold limits set for wet lay-up FRP systems can be extended to pre-cured systems. The study showed that larger surface voids and cracks (cuts) can adversely impact both the strength and the ductility of pre-cured FRP systems, whereas the frequency (or spacing) of surface cracks (cuts) may affect only system ductility rather than strength. Finally, within the range studied, a groove size tolerance of ±1/8 in. does not appear to have an adverse effect on the performance of NSM FRP systems.
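
Nonlinear interface springs in such models commonly follow a bilinear bond-slip law: linear ascent to a peak bond stress, then linear softening to debonding. The sketch below uses hypothetical parameters; the dissertation does not specify its spring law here.

```python
def bond_stress(slip, tau_max=5.0, s1=0.05, s_max=0.25):
    """Bilinear bond-slip law for an FRP-concrete interface spring
    (a common choice; parameters are hypothetical, in MPa and mm):
    linear ascent to (s1, tau_max), linear softening to zero at s_max."""
    s = abs(slip)
    if s <= s1:
        tau = tau_max * s / s1                        # elastic branch
    elif s <= s_max:
        tau = tau_max * (s_max - s) / (s_max - s1)    # softening branch
    else:
        tau = 0.0                                     # debonded
    return tau if slip >= 0 else -tau

for s in [0.02, 0.05, 0.15, 0.30]:
    print(f"slip = {s:.2f} mm -> bond stress = {bond_stress(s):.2f} MPa")
```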

Relevance:

30.00%

Publisher:

Abstract:

The availability and pervasiveness of new communication services, such as mobile networks and multimedia communication over digital networks, have resulted in strong demand for approaches to modeling and realizing customized communication systems. The stovepipe approach used to develop today's communication applications is no longer effective because it results in a lengthy and expensive development cycle. To address this need, the Communication Virtual Machine (CVM) technology has been developed by researchers at Florida International University. The CVM technology includes the Communication Modeling Language (CML) and the CVM platform for modeling and rapidly realizing communication models.

In this dissertation, we investigate the basic communication primitives needed to capture and specify an end-user's requirements for communication-intensive applications, and how these specifications can be automatically realized. To identify the basic communication primitives, we perform a feature analysis on a set of communication-intensive scenarios from the healthcare domain. Based on the feature analysis, we define a new version of CML that includes the meta-model definition (abstract syntax and static semantics) and a partial behavior model (operational semantics). To validate our CML definition, we present a case study that shows how one of the scenarios from the healthcare domain is modeled and automatically realized.
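
A flavor of what a communication meta-model captures can be given with a small Python sketch: participants, a connection, attached media, and a static-semantics check. The classes and the healthcare example are hypothetical simplifications, not the actual CML meta-model.

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    """A person or service taking part in a communication."""
    name: str

@dataclass
class Connection:
    """A connection between participants carrying typed media,
    echoing CML-style primitives (a simplified sketch only)."""
    participants: list
    media: list = field(default_factory=list)   # e.g. audio, video, file

    def validate(self):
        """A static-semantics check: a connection needs 2+ participants."""
        if len(self.participants) < 2:
            raise ValueError("a connection requires at least two participants")

conn = Connection([Participant("physician"), Participant("radiologist")],
                  media=["video", "medical-record"])
conn.validate()
print(conn)
```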

Relevance:

30.00%

Publisher:

Abstract:

High efficiency in the power converters placed between renewable energy sources and the utility grid is required to maximize the utilization of these sources. Power quality is another concern, one that requires large passive elements (inductors, capacitors) to be placed between these sources and the grid. The main objective is to develop a higher-level, high-frequency-based power converter system (HFPCS) that optimizes the use of hybrid renewable power injected into the power grid. The HFPCS provides high efficiency, reduced size of passive components, higher power density, lower harmonic distortion, higher reliability, and lower cost.

The dynamic model for each part of this system was developed, simulated, and tested, and the steady-state performance of the grid-connected hybrid power system with battery storage was analyzed. Various simulations were performed, and a number of algorithms were developed and tested to verify the effectiveness of the power conversion topologies. A modified hysteresis control strategy for the rectifier and the battery charging/discharging system was developed and implemented, and a voltage-oriented control (VOC) scheme was developed to control the energy injected into the grid. The developed HFPCS was compared experimentally with other currently available power converters. It was then employed inside a microgrid system infrastructure and connected to the power grid to verify its power transfer capabilities and grid connectivity. Grid connectivity tests verified the power transfer capabilities of the developed converter in addition to its ability to serve the load in a shared manner.

To investigate the performance of the developed system, an experimental setup for the HF-based hybrid generation system was constructed. We designed a board containing a digital signal processor chip on which the developed control system was embedded; the board was fabricated and experimentally tested, and the system's high-precision requirements were verified. Each component of the system was built and tested separately, and then the whole system was connected and tested. The simulation and experimental results confirm the effectiveness of the developed converter system for grid-connected hybrid renewable energy systems as well as for hybrid electric vehicles and other industrial applications.
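
Basic hysteresis band current control, which the modified strategy builds on, fits in a few lines: switch up when the current falls below the band, down when it rises above it. The sketch below uses made-up circuit values and illustrates the textbook rule, not the dissertation's modified controller.

```python
def hysteresis_switch(i_ref, i_meas, state, band=0.5):
    """Hysteresis band current control. Returns the new switch state:
    True drives the current up, False drives it down."""
    if i_meas < i_ref - band:
        return True
    if i_meas > i_ref + band:
        return False
    return state          # inside the band: keep the previous state

# Tiny simulation of one phase current chasing a reference (made-up L, V).
i, state, dt, L, V = 0.0, False, 1e-6, 2e-3, 200.0
for step in range(2000):
    i_ref = 10.0          # constant reference for illustration
    state = hysteresis_switch(i_ref, i, state)
    i += (V if state else -V) / L * dt    # di = (+/-V / L) * dt
print(f"final current: {i:.2f} A (reference 10 A, band +/-0.5 A)")
```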