967 results for Modeling methods
Abstract:
Petar Gospodinov, Dobri Dankov, Vladimir Rusinov, Stefan Stefanov - The cylindrical Couette flow of a rarefied gas between two rotating cylinders is investigated. Profiles of pressure, velocity, and temperature are obtained by the direct simulation Monte Carlo method (DSMC) and by numerically solving the Navier-Stokes equations for a compressible fluid. The results show very good agreement for small Knudsen numbers, Kn = 0.02. It is shown that, under different kinematic boundary conditions, the gas lags behind or leads the wall velocity, or behaves as a rigid elastic body. The results are important for solving non-planar microfluidics problems in which curvature effects must be taken into account.
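For reference, the continuum benchmark against which such DSMC results are compared has a closed form: for steady flow between rotating cylinders with no-slip walls, the azimuthal velocity is v_theta(r) = A*r + B/r. A minimal sketch (radii and rotation rates are illustrative, not taken from the paper):

```python
import numpy as np

def couette_velocity(r, R1, R2, omega1, omega2):
    """Analytic v_theta(r) = A*r + B/r for steady cylindrical Couette flow
    between rotating cylinders, assuming no-slip boundary conditions."""
    A = (omega2 * R2**2 - omega1 * R1**2) / (R2**2 - R1**2)
    B = (omega1 - omega2) * R1**2 * R2**2 / (R2**2 - R1**2)
    return A * r + B / r

# Illustrative geometry: inner cylinder rotating, outer cylinder at rest.
R1, R2 = 1.0, 2.0          # cylinder radii (arbitrary units)
omega1, omega2 = 1.0, 0.0  # angular velocities (rad/s)
r = np.linspace(R1, R2, 5)
print(couette_velocity(r, R1, R2, omega1, omega2))
```

At Kn = 0.02 the DSMC profiles should track this no-slip solution closely; the lag or lead of the gas relative to the wall speed reported for other boundary conditions is precisely the deviation from it.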
Abstract:
Heuristics, simulation, artificial intelligence techniques and combinations thereof have all been employed in the attempt to make computer systems adaptive, context-aware, reconfigurable and self-managing. This paper complements such efforts by exploring the possibility of achieving runtime adaptiveness using mathematically based techniques from the area of formal methods. It is argued that formal methods @ runtime represents a feasible approach, and promising preliminary results are summarised to support this viewpoint. The survey of existing approaches to employing formal methods at runtime is accompanied by a discussion of their challenges and of the future research required to overcome them. © 2011 Springer-Verlag.
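As a concrete (and deliberately simple) illustration of the idea, not an example from the paper: a runtime monitor that checks a safety property against the event stream of a running system, the kind of component a formal-methods-at-runtime approach could drive adaptation from.

```python
# Minimal runtime-monitor sketch (illustrative): check the safety property
# "a lock is never released unless it is currently held" as events arrive.
class LockMonitor:
    def __init__(self):
        self.held = set()

    def observe(self, event, lock_id):
        """Consume one runtime event; raise on a property violation."""
        if event == "acquire":
            self.held.add(lock_id)
        elif event == "release":
            if lock_id not in self.held:
                raise RuntimeError(f"violation: release of {lock_id} without acquire")
            self.held.discard(lock_id)

monitor = LockMonitor()
for event, lock in [("acquire", "L1"), ("release", "L1"), ("release", "L1")]:
    try:
        monitor.observe(event, lock)
    except RuntimeError as err:
        print(err)  # a self-managing system could trigger reconfiguration here
```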
Abstract:
Regional climate models (RCMs) provide reliable climate predictions for the next 90 years with high horizontal and temporal resolution. In the 21st century, a northward latitudinal and upward altitudinal shift in the distribution of plant species and phytogeographical units is expected. It is discussed how the modeling of phytogeographical units can be reduced to modeling plant distributions. The predicted shift of the Moesz line is studied as a case study (with three different modeling approaches) using 36 parameters of the REMO regional climate dataset, the ArcGIS geographic information software, and the periods 1961-1990 (reference period), 2011-2040, and 2041-2070. The disadvantages of this relatively simple climate envelope modeling (CEM) approach are then discussed and several ways of improving the model are suggested. Several statistical and artificial intelligence (AI) methods (logistic regression, cluster analysis and other clustering methods, decision trees, evolutionary algorithms, artificial neural networks) can support this development. Among them, artificial neural networks (ANNs) seem to be the most suitable, providing a black-box method for distribution modeling.
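A minimal climate-envelope sketch of the kind of statistical upgrade suggested above, using logistic regression (the data, parameters, and threshold are invented for illustration; the actual study used 36 REMO parameters):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical grid cells x 2 climate parameters (e.g., temperature, precipitation).
X_ref = rng.normal(size=(200, 2))
presence = (X_ref[:, 0] - 0.5 * X_ref[:, 1] > 0).astype(int)  # synthetic occurrences

model = LogisticRegression().fit(X_ref, presence)             # fit on reference period

X_future = X_ref + np.array([1.0, 0.0])                       # e.g., uniform warming
suitability = model.predict_proba(X_future)[:, 1]             # future suitability
shift = int(((suitability > 0.5) & (presence == 0)).sum())
print("cells newly suitable under the future scenario:", shift)
```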
Abstract:
In 2004 and 2005, we collected samples of phytoplankton, zooplankton, and macroinvertebrates in a small artificial pond in Budapest (Hungary). We set up a simulation model predicting the abundances of the cyclopoids, Eudiaptomus zachariasi, and Ischnura pumilio by considering only temperature and each population's abundance on the previous day. Phytoplankton abundance was simulated by considering not only temperature but also the abundances of the three groups above. When we ran the model with data series from internationally accepted climate change scenarios, the differing outcomes were discussed. A comparative assessment of the alternative climate change scenarios was also carried out with statistical methods.
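The model's structure, as described, is a one-day-lag recurrence: tomorrow's abundance is a function of today's temperature and today's abundance. A sketch with invented coefficients (the paper's fitted parameters are not reproduced here):

```python
import math

def next_abundance(n_today, temp_today, r=0.1, t_opt=20.0, width=5.0, k=1000.0):
    """Logistic growth scaled by a Gaussian temperature response (illustrative)."""
    growth = r * math.exp(-((temp_today - t_opt) / width) ** 2)
    return n_today + growth * n_today * (1.0 - n_today / k)

n = 50.0  # initial abundance (individuals per sample, hypothetical)
for day, temp in enumerate([18.0, 21.0, 24.0, 26.0], start=1):
    n = next_abundance(n, temp)
    print(f"day {day}: abundance {n:.1f}")
```

Running the same recurrence with temperature series from different climate change scenarios is what produces the diverging outcomes compared in the paper.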
Abstract:
Aims: In the Mediterranean areas of Europe, leishmaniasis is one of the most important emerging vector-borne diseases. Members of the genus Phlebotomus are the primary vectors of the genus Leishmania. In tracking the human health effects of climate change, it is an important interdisciplinary question whether the climatic requirements and geographical distributions of the vectors of human pathogens correlate with each other. Our study explored the potential effects of ongoing climate change, in particular a potential upward altitudinal and latitudinal shift in the distribution of the parasite Leishmania infantum, its vectors Phlebotomus ariasi, P. neglectus, P. perfiliewi, P. perniciosus, and P. tobbi, and some other sandfly species: P. papatasi, P. sergenti, and P. similis. Methods: Using a climate envelope modelling (CEM) method, we modelled the current and future (2011-2070) potential distributions of eight European sandfly species and L. infantum based on their current distributions, using the REMO regional climate model. Results: We found that by the end of the 2060s most parts of Western Europe could be colonized by sandfly species, mostly by P. ariasi and P. perniciosus. P. ariasi showed the greatest potential northward expansion. For all the studied vectors of L. infantum, the entire Mediterranean Basin and South-Eastern Europe appeared suitable. L. infantum can affect the Eastern Mediterranean, without notable northward expansion. Our model indicated a prolongation of 1 to 2 months in the potentially active period of P. neglectus, P. papatasi, and P. perniciosus for the 2060s in Southern Hungary. Conclusion: Our findings confirm the concerns that leishmaniasis can become a real hazard for a major part of the European population by the end of the 21st century, and that the Carpathian Basin is a particularly vulnerable area.
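A simpler envelope-style suitability test, close in spirit to classical CEM: a cell is deemed suitable when every climate parameter lies within the range observed over the species' current distribution (all data below are synthetic placeholders):

```python
import numpy as np

def envelope_mask(climate_grid, occupied_cells):
    """climate_grid: (n_cells, n_params); occupied_cells: (n_occupied, n_params).
    Marks cells whose every parameter falls inside the occupied min-max envelope."""
    lo = occupied_cells.min(axis=0)
    hi = occupied_cells.max(axis=0)
    return np.all((climate_grid >= lo) & (climate_grid <= hi), axis=1)

rng = np.random.default_rng(1)
grid = rng.normal(size=(1000, 3))           # hypothetical climate parameters per cell
occupied = grid[rng.choice(1000, size=80)]  # cells with known occurrences
print("suitable cells:", int(envelope_mask(grid, occupied).sum()))
```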
Abstract:
Three new technologies were brought together to develop a miniaturized radiation monitoring system. The research involved (1) investigation of a new HgI₂ detector, (2) VHDL modeling, (3) FPGA implementation, and (4) in-circuit verification. The components used included an EG&G HgI₂ crystal manufactured at zero gravity, Viewlogic's VHDL and synthesis tools, Xilinx's technology library and FPGA implementation tool, and a high-density device (XC4003A). The results show (1) reduced cycle time between design and hardware implementation; (2) unlimited redesign and reimplementation using static RAM technology; (3) customer-driven design, verification, and system construction; and (4) good suitability for intelligent systems. These advantages surpass conventional chip-design technologies and methods in ease of use, cycle time, and price for medium-sized VLSI applications. The density of these devices is also expected to improve radically in the near future.
Abstract:
Clusters are aggregations of atoms or molecules, generally intermediate in size between individual atoms and aggregates large enough to be called bulk matter. Clusters can also be called nanoparticles, because their size is on the order of nanometers or tens of nanometers. A new field called nanostructured materials has begun to take shape, which takes advantage of these atom clusters. The ultra-small size of the building blocks leads to dramatically different properties, and it is anticipated that such atomically engineered materials can be tailored to perform as no previous material could.

The idea of the ionized cluster beam (ICB) thin film deposition technique was first proposed by Takagi in 1972. It was based upon using a supersonic jet source to produce, ionize and accelerate beams of atomic clusters onto substrates in a vacuum environment. Conditions for formation of cluster beams suitable for thin film deposition have only recently been established, following twenty years of effort. Zinc clusters over 1,000 atoms in average size have been synthesized both in our lab and in that of Gspann. More recently, other methods of synthesizing clusters and nanoparticles, using different types of cluster sources, have come under development.

In this work, we studied different aspects of nanoparticle beams. The work includes refinement of a model of the cluster formation mechanism, development of a new real-time, in situ cluster size measurement method, and study of the use of ICB in the fabrication of semiconductor devices.

The formation process of the vaporized-metal cluster beam was simulated and investigated using classical nucleation theory and one-dimensional gas flow equations. Zinc cluster sizes predicted at the nozzle exit are in good quantitative agreement with experimental results in our laboratory.

A novel in situ real-time mass, energy and velocity measurement apparatus has been designed, built and tested. This small time-of-flight mass spectrometer is suitable for use in our cluster deposition systems and does not suffer from problems that affect other methods of cluster size measurement, such as the need for specialized ionizing lasers, inductive electrical or electromagnetic coupling, dependence on the assumption of homogeneous nucleation, limits on the measurable size range, and the lack of real-time capability. Ion energies measured with the electrostatic energy analyzer are in good accordance with values obtained from computer simulation. The velocity v is measured by pulsing the cluster beam and measuring the delay between the pulse and the analyzer output current. The mass of a particle is then calculated from m = 2E/v². The error in the measured value of the background gas mass is on the order of 28% of the mass of one N₂ molecule, which is negligible for the measurement of large clusters. This resolution in cluster size measurement is very acceptable for our purposes.

Selective area deposition onto conducting patterns overlying insulating substrates was demonstrated using intense, fully-ionized cluster beams. Parameters influencing the selectivity are ion energy, repelling voltage, the ratio of the conductor to insulator dimension, and substrate thickness.
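The mass determination reduces to arithmetic once E and v are measured; a sketch with illustrative numbers (not measurements from the dissertation):

```python
AMU = 1.660539e-27  # kg per atomic mass unit

def cluster_mass(energy_j, flight_length_m, delay_s):
    """m = 2E / v**2, with v taken from the pulsed time-of-flight delay."""
    v = flight_length_m / delay_s
    return 2.0 * energy_j / v**2

E = 8.0e-16   # ion energy from the electrostatic analyzer (J), hypothetical
L = 1.0       # drift length (m), hypothetical
t = 2.0e-4    # delay between beam pulse and analyzer current (s), hypothetical
print(f"cluster mass ~ {cluster_mass(E, L, t) / AMU:.0f} amu")
```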
Abstract:
Modern software systems are often large and complicated. To better understand, develop, and manage large software systems, researchers have studied software architectures, which provide the top-level structural design of software systems, for the last decade. One major research focus has been formal architecture description languages, but most existing work concentrates on descriptive capability and puts less emphasis on software architecture design methods and formal analysis techniques, which are necessary to develop correct software architecture designs.

Refinement is a general approach of adding details to a software design, and a formal refinement method can further ensure certain design properties. This dissertation proposes refinement methods, including a set of formal refinement patterns and complementary verification techniques, for software architecture design using the Software Architecture Model (SAM), which was developed at Florida International University. First, a general guideline for software architecture design in SAM is proposed. Second, specification construction through property-preserving refinement patterns is discussed. The refinement patterns are categorized into connector refinement, component refinement and high-level Petri net refinement; these three levels of refinement patterns apply to overall system interaction, architectural components, and the underlying formal language, respectively. Third, verification after modeling is discussed as a complementary technique to specification refinement. Two formal verification tools, the Stanford Temporal Prover (STeP) and the Simple Promela Interpreter (SPIN), are adopted into SAM to develop the initial models. Fourth, formalization and refinement of security issues are studied: a method for security enforcement in SAM is proposed, the Role-Based Access Control model is formalized using predicate transition nets and Z notation, and patterns for enforcing access control and auditing are proposed. Finally, modeling and refining a life insurance system demonstrates how to apply the refinement patterns for software architecture design using SAM and how to integrate the access control model.

The results of this dissertation demonstrate that a refinement method is an effective way to develop a high-assurance system. The method developed here extends existing work on modeling software architectures using SAM and makes SAM a more usable and valuable formal tool for software architecture design.
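For readers unfamiliar with the formalism underlying SAM, the firing rule of a basic place/transition net is easy to state in code; a toy sketch (SAM's high-level Petri nets add typed tokens and guards on top of this rule):

```python
# Toy place/transition net: a transition fires by consuming tokens from its
# input places and producing tokens in its output places.
marking = {"ready": 1, "processing": 0, "done": 0}
transitions = {
    "start":  ({"ready": 1}, {"processing": 1}),
    "finish": ({"processing": 1}, {"done": 1}),
}

def fire(marking, name):
    consume, produce = transitions[name]
    if any(marking[p] < n for p, n in consume.items()):
        raise ValueError(f"transition {name!r} is not enabled")
    for p, n in consume.items():
        marking[p] -= n
    for p, n in produce.items():
        marking[p] += n

fire(marking, "start")
fire(marking, "finish")
print(marking)  # {'ready': 0, 'processing': 0, 'done': 1}
```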
Abstract:
Purpose. The goal of this study was to improve the favorable molecular interactions between starch and PPC by the addition of the grafting monomers MA and ROM as compatibilizers, which would advance the mechanical properties of starch/PPC composites.

Methodology. Calculations based on DFT and semi-empirical methods were performed on three systems: (a) starch/PPC, (b) starch/PPC-MA, and (c) starch-ROM/PPC. The theoretical computations involved determining the optimal geometries, binding energies and vibrational frequencies of the blended polymers.

Findings. Calculations performed on five starch/PPC composites revealed hydrogen bond formation as the driving force behind stable composite formation, also confirmed by the negative relative energies of the composites, indicating binding forces between the constituent co-polymers. The interaction between starch and PPC is also confirmed by the computed decrease in the stretching frequencies of the CO and OH groups participating in hydrogen bond formation, which agrees qualitatively with the experimental values.

A three-step mechanism of grafting MA onto PPC was proposed to improve the compatibility of PPC with starch. Nine types of 'blends' produced by covalent bond formation between starch and MA-grafted PPC were found to be energetically stable, with blends involving MA grafted at the 'B' and 'C' positions of PPC showing binding-energy increases of 6.8 and 6.2 kcal/mol, respectively, compared to the non-grafted starch/PPC composites. A similar increase in binding energies was also observed for three types of 'composites' formed by hydrogen bond formation between starch and MA-grafted PPC.

Next, grafting of ROM onto starch and subsequent blend formation with PPC was studied. All four types of blends formed by the reaction of ROM-grafted starch with PPC were found to be more energetically stable than the starch/PPC composite and the starch/PPC-MA composites and blends. A blend of PPC and ROM grafted at the 'a&d12;' position on amylose exhibited a maximal increase of 17.1 kcal/mol compared with the starch/PPC-MA blend.

Conclusions. ROM was found to be a more effective compatibilizer than MA in improving the favorable interactions between starch and PPC. The 'a&d12;' position was found to be the most favorable attachment point of ROM to amylose for stable blend formation with PPC.
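The ranking logic behind these comparisons is simple energy bookkeeping; a sketch with invented placeholder energies (the study's actual DFT values are not reproduced):

```python
def binding_energy(e_complex, e_fragment_a, e_fragment_b):
    """E_bind = E(complex) - [E(fragment A) + E(fragment B)]; more negative
    means a more stable composite."""
    return e_complex - (e_fragment_a + e_fragment_b)

# Hypothetical total energies (kcal/mol, relative scale) for illustration only.
candidates = {
    "starch/PPC":     binding_energy(-12.3, -5.0, -4.1),
    "starch/PPC-MA":  binding_energy(-19.1, -5.0, -4.1),
    "starch-ROM/PPC": binding_energy(-29.4, -5.0, -4.1),
}
for blend, e in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{blend}: {e:+.1f} kcal/mol")
```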
Abstract:
A novel modeling approach is applied to karst hydrology. Long-standing problems in karst hydrology and solute transport are addressed using lattice Boltzmann methods (LBMs), which contrast with other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe. This research tests the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed, and implementation issues are discussed. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains, and the solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM's flexibility as a solute transport solver. The LBM is then applied to simulate solute transport and fluid flow in porous media traversed by larger conduits: a LBM-based macroscopic flow solver (Darcy's law-based) is linked with an anisotropic dispersion solver, and spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve the inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches. The analogy between Fick's second law (the diffusion equation) and the transient ground water flow equation is used to solve the transient head distribution, and an altered-velocity flow solver with a source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and the storage coefficient are linked with LB parameters. These capabilities complete the LBM's effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
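A minimal sketch of the diffusion analogy mentioned above: because the transient ground water flow equation has the form of Fick's second law, a diffusion-type lattice Boltzmann scheme can relax a head perturbation. The D1Q2 scheme and parameters below are illustrative and far simpler than the dissertation's solvers:

```python
import numpy as np

nx, steps, tau = 100, 500, 0.9            # grid cells, time steps, relaxation time
head = np.ones(nx)
head[nx // 2] = 2.0                       # initial head with a central pulse
f = np.stack([0.5 * head, 0.5 * head])    # two opposite-velocity populations

for _ in range(steps):
    rho = f[0] + f[1]                     # head = zeroth moment of populations
    feq = 0.5 * rho                       # equilibrium distribution
    f += (feq - f) / tau                  # BGK collision step
    f[0] = np.roll(f[0], 1)               # stream one population to the right
    f[1] = np.roll(f[1], -1)              # stream the other to the left

print("peak head after relaxation:", float((f[0] + f[1]).max()))
```

The effective diffusivity (playing the role of hydraulic diffusivity, i.e. transmissivity over storage coefficient) is set by the relaxation time tau; boundaries are periodic here for brevity.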
Abstract:
In the past two decades, multi-agent systems (MAS) have emerged as a new paradigm for conceptualizing large and complex distributed software systems. A multi-agent system view provides a natural abstraction for both the structure and the behavior of modern-day software systems. Although there were many conceptual frameworks for using multi-agent systems, there was no well-established and widely accepted method for modeling them. This dissertation research addressed the representation and analysis of multi-agent systems based on model-oriented formal methods. The objective was to provide a systematic approach for studying MAS at an early stage of system development to ensure the quality of the design.

Given that no well-defined formal model directly supported agent-oriented modeling, this study centered on three main topics: (1) adapting a well-known formal model, predicate transition nets (PrT nets), to support MAS modeling; (2) formulating a modeling methodology to ease the construction of formal MAS models; and (3) developing a technique to support machine analysis of formal MAS models using model checking technology. PrT nets were extended with notions of dynamic structure, agent communication and coordination to support agent-oriented modeling. An aspect-oriented technique was developed to address the modularity of agent models and the compositionality of incremental analysis. A set of translation rules was defined to systematically translate formal MAS models into concrete models that can be verified with the model checker SPIN (Simple Promela Interpreter).

This dissertation presents the framework developed for modeling and analyzing MAS, including a well-defined process model based on nested PrT nets and a comprehensive methodology to guide the construction and analysis of formal MAS models.
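The essence of the SPIN-based analysis is exhaustive exploration of the model's state space; a tiny explicit-state reachability sketch in the same spirit (the two-agent model below is invented and deliberately lacks a lock, so the checker finds a mutual-exclusion counterexample):

```python
def violates(state):
    return state == ("critical", "critical")  # both agents in the critical section

def successors(state):
    moves = {"idle": ["waiting"], "waiting": ["critical"], "critical": ["idle"]}
    a, b = state
    return [(na, b) for na in moves[a]] + [(a, nb) for nb in moves[b]]

seen, frontier = set(), [("idle", "idle")]
while frontier:
    state = frontier.pop()
    if state in seen:
        continue
    seen.add(state)
    if violates(state):
        print("counterexample reached:", state)
        break
    frontier.extend(successors(state))
else:
    print("property holds over", len(seen), "reachable states")
```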
Abstract:
Annual Average Daily Traffic (AADT) is a critical input to many transportation analyses. By definition, AADT is the average 24-hour volume at a highway location over a full year. Traditionally, AADT is estimated using a mix of permanent and temporary traffic counts. Because field collection of traffic counts is expensive, it is usually done only for major roads, leaving most local roads without any AADT information. However, AADTs are needed for local roads in many applications. For example, state Departments of Transportation (DOTs) use AADTs to calculate the crash rates of all local roads in order to identify the top five percent of hazardous locations for annual reporting to the U.S. DOT.

This dissertation develops a new method for estimating AADTs for local roads using travel demand modeling. A major component of the new method is a parcel-level trip generation model that estimates the trips generated by each parcel, using tax parcel data together with the trip generation rates and equations provided by the ITE Trip Generation Report. The generated trips are then distributed to existing traffic count sites using a parcel-level gravity model for trip distribution. The all-or-nothing assignment method is then used to assign the trips onto the roadway network to estimate the final AADTs. The entire process was implemented in the Cube demand modeling system with extensive spatial data processing in ArcGIS.

To evaluate the performance of the new method, data from several study areas in Broward County, Florida were used. The estimated AADTs were compared with those from two existing methods, using actual traffic counts as ground truth. The results show that the new method performs better than both existing methods. One limitation of the new method is its reliance on Cube, which limits the number of zones to 32,000; a study area exceeding this limit must be partitioned into smaller areas. Because AADT estimates for roads near the partition boundaries were found to be less accurate, further research could examine how best to partition a study area to minimize this impact.
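A sketch of the gravity-model distribution step (all numbers invented; the actual model calibrates against count data): trips generated at each parcel are split among count sites in proportion to site attraction and inversely with a power of distance:

```python
import numpy as np

parcel_trips = np.array([120.0, 80.0, 200.0])      # from ITE-based trip generation
site_attraction = np.array([1.0, 2.0, 1.5])        # hypothetical site attractions
distance = np.array([[1.0, 3.0, 5.0],
                     [2.0, 1.0, 4.0],
                     [6.0, 2.0, 1.0]])             # parcel-to-site distances (km)

beta = 1.5                                         # distance-decay exponent (assumed)
weight = site_attraction / distance**beta          # gravity weights
prob = weight / weight.sum(axis=1, keepdims=True)  # normalize per parcel
site_trips = parcel_trips @ prob                   # distribute trips to count sites
print(site_trips)
```

The subsequent all-or-nothing step would then load each trip onto the single shortest path through the network, accumulating link volumes into AADT estimates.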
Abstract:
• Premise of the study: Species in the aquatic genus Nymphoides have inflorescences that appear to arise from the petioles of floating leaves. The inflorescence-floating leaf complex can produce vegetative propagules and/or additional inflorescences and leaves. We analyzed the morphology of N. aquatica to determine how this complex relates to whole-plant architecture and whether whole-plant growth is sympodial or monopodial.
• Methods: We used dissections, measurements, and microscopic observations of field-collected plants and plants cultivated for 2 years in outdoor tanks in south Florida, USA.
• Key results: Nymphoides aquatica had a submerged plagiotropic rhizome that produced floating leaves in an alternate/spiral phyllotaxy. Rhizomes were composed of successive sympodial units that varied in the number of leaves produced before the apex terminated. The basic sympodial unit had a prophyll that subtended a renewal-shoot bud, a short-petioled leaf (SPL) with a floating lamina, and an inflorescence; the SPL axillary bud expanded as a vegetative propagule. Plants produced either successive basic sympodial units or expanded sympodia that intercalated long-petioled leaves between the prophyll and the SPL.
• Conclusions: Nymphoides aquatica grows sympodially, forming a rhizome composed of successive basic sympodia and expanded sympodial units. Variations on these types of sympodial growth help explain the branching patterns and leaf morphologies described for other Nymphoides species. Monitoring how these two sympodial phases are affected by water depth provides an ecologically meaningful way to assess N. aquatica's responses to altered hydrology.
Abstract:
The anisotropy of the Biscayne Aquifer, which serves as the source of potable water for Miami-Dade County, was investigated by applying geophysical methods. Electrical resistivity imaging, self potential, and ground penetrating radar techniques were employed in both regional and site-specific studies. In the regional study, electrical anisotropy and resistivity variation with depth were investigated with azimuthal square array measurements at 13 sites. The observed coefficient of electrical anisotropy ranged from 1.01 to 1.36. The general direction of measured anisotropy is uniform for most sites and trends W-E or SE-NW irrespective of depth. The measured electrical properties were used to estimate the anisotropic component of the secondary porosity and the hydraulic anisotropy, which ranged from 1 to 11% and from 1.18 to 2.83, respectively. 1-D sounding analysis was used to model the variation of formation resistivity with depth. Resistivities decreased from the NW (close to the margins of the Everglades) to the SE on the shores of Biscayne Bay. Porosity calculated from Archie's law ranged from 18 to 61%, with higher values found along the ridge. Higher anisotropy, porosities and hydraulic conductivities were found on the Atlantic Coastal Ridge, and lower values in low-lying areas west of the ridge. The cause of the higher anisotropy and porosity is attributed to higher dissolution rates of the oolitic facies of the Miami Formation composing the ridge. The direction of minimum resistivity from this study is similar to the predevelopment groundwater flow direction indicated in published modeling studies. Detailed investigations were carried out to evaluate the higher anisotropy at West Perrine Park, located on the ridge, and at the Snapper Creek Municipal well field, where the anisotropy trend changes with depth. The higher anisotropy is attributed to the presence of solution cavities oriented in the E-SE direction on the ridge. Similarly, the change in hydraulic anisotropy at the well field might be related to solution cavities, the surface canal and groundwater extraction wells.
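The two quantitative relations used above are compact; a sketch with illustrative values (not the study's field data):

```python
def anisotropy_coefficient(rho_max, rho_min):
    """Coefficient of electrical anisotropy from azimuthal resistivity extremes."""
    return (rho_max / rho_min) ** 0.5

def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Archie's law solved for porosity: phi = (a * rho_w / rho)**(1/m).
    The constants a and m are assumed here, not taken from the study."""
    return (a * rho_water / rho_bulk) ** (1.0 / m)

print(f"lambda = {anisotropy_coefficient(180.0, 110.0):.2f}")   # ~1.28
print(f"porosity = {archie_porosity(150.0, 20.0):.2f}")         # ~0.37
```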
Abstract:
Among the most surprising findings in Physics Education Research is the lack of positive results on attitudinal measures such as the Colorado Learning Attitudes about Science Survey (CLASS) and the Maryland Physics Expectations Survey (MPEX). The uniformity with which physics teaching manages to negatively shift attitudes toward physics learning is striking. Strategies that have been shown to improve conceptual learning, such as interactive engagement and studio-format classes, provide more authentic science experiences for students, yet do not seem to be sufficient to produce positive attitudinal results. Florida International University's Physics Education Research Group has implemented Modeling Instruction in University Physics classes as part of an overall effort toward building a research and learning community. Modeling Instruction is explicitly designed to engage students in scientific practices that include model building, validation, and revision. Results from a pre-instruction/post-instruction CLASS measurement show attitudinal improvements through both semesters of an introductory physics sequence, as well as over the entire two-course sequence. In this Brief Report, we report positive shifts from the CLASS in one section of a modeling-based introductory physics sequence, for both mechanics (N=22) and electricity and magnetism (N=23). Using the CLASS results and follow-up interviews, we examine how these results reflect on Modeling Instruction and the unique student community and population at FIU.