947 results for modeling and model calibration
Abstract:
The objective of this study is to demonstrate the use of the weak form partial differential equation (PDE) method for finite-element (FE) modeling of a new constitutive relation without the need for user subroutine programming. Viscoelastic asphalt mixtures were modeled with the weak form PDE-based FE method as the examples in this paper. A solid-like generalized Maxwell model was used to represent the deformation mechanism of a viscoelastic material; its constitutive relations were derived and implemented in the weak form PDE module of Comsol Multiphysics, a commercial FE program. The weak form PDE modeling of viscoelasticity was verified by comparing Comsol and Abaqus simulations that employed the same loading configurations and material property inputs in virtual laboratory test simulations; both produced identical axial and radial strain responses. The modeling was further validated by comparing the weak form PDE predictions with laboratory test results for six types of asphalt mixtures with two air void contents and three aging periods. The viscoelastic material properties, such as the coefficients of a Prony series model for the relaxation modulus, were obtained by conversion from the master curves of dynamic modulus and phase angle. Strain responses in compressive creep tests at three temperatures and in cyclic load tests were predicted with the weak form PDE modeling and found to be comparable with the laboratory measurements. It was demonstrated that weak form PDE-based FE modeling can serve as an efficient way to implement new constitutive models and can free engineers from user subroutine programming.
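For reference, the Prony series form of the relaxation modulus for a solid-like generalized Maxwell model mentioned in this abstract is standard; a minimal sketch of its evaluation follows, with placeholder coefficients rather than values from the paper:

```python
import numpy as np

# Prony series representation of the relaxation modulus for a solid-like
# generalized Maxwell model: E(t) = E_inf + sum_i E_i * exp(-t / tau_i).
# Coefficients below are illustrative placeholders, not the paper's values.
E_inf = 50.0                                # long-term (equilibrium) modulus, MPa
E_i = np.array([2000.0, 800.0, 300.0])      # spring moduli of the Maxwell arms, MPa
tau_i = np.array([0.01, 0.1, 1.0])          # relaxation times, s

def relaxation_modulus(t):
    """Evaluate E(t) of the generalized Maxwell (Prony series) model."""
    t = np.atleast_1d(t)
    return E_inf + np.sum(E_i * np.exp(-t[:, None] / tau_i), axis=1)

t = np.logspace(-3, 2, 50)
print(relaxation_modulus(t)[:5])
```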
Abstract:
The object of this paper is to present the University of Economics – Varna using a 3D model created with 3ds Max. Founded on 14 May 1920, the University of Economics – Varna is a cultural institution with a place and style of its own. With the emergence of three-dimensional modeling we entered a new stage in the evolution of computer graphics. The main goal is to preserve the historical vision, to demonstrate forward thinking, and to use future-oriented approaches.
Abstract:
This research develops a methodology and model formulation that suggests locations for rapid chargers to assist infrastructure development and enable greater battery electric vehicle (BEV) usage. The model considers the likely travel patterns of BEVs and their subsequent charging demands across a large road network, where no prior candidate site information is required. Using a GIS-based methodology, polygons are constructed to represent the charging demand zones for particular routes across a real-world road network. The use of polygons allows the maximum number of charging combinations to be considered whilst limiting the input intensity needed for the model. Further polygons are added to represent deviation possibilities, meaning that placement of charge points away from the shortest path is possible, subject to a penalty function. The model is validated by assessing the expected demand at current rapid charging locations and comparing it with recorded empirical usage data. Results suggest that the developed model provides a good approximation to real-world observations, and that for the provision of charging, location matters. Because no prior candidate site information is required, locations are chosen based on the weighted overlay between several different routes on which BEV journeys may be expected. In this way many locations, or types of locations, can be compared against one another and then analysed in relation to siting practicalities such as cost, land permission and infrastructure availability. Results show that efficient facility location can be achieved given numerous siting possibilities across a large road network. Slight improvements to the standard greedy adding technique are made by adding combination weightings that reward important long-distance routes requiring more than one charge to complete.
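A minimal sketch of the weighted greedy adding idea described above: at each step, pick the candidate site covering the most (weighted) uncovered route demand. The sites, routes, and demand weights here are hypothetical, and the paper's combination weightings and penalty function are not reproduced:

```python
# Weighted greedy adding for facility location: repeatedly choose the site
# with the largest marginal weighted coverage of still-uncovered routes.
def greedy_adding(candidates, routes, k):
    """candidates: {site: set of covered route ids}
    routes: {route id: demand weight (boosted for long-distance,
             multi-charge routes)}
    k: number of chargers to place."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            candidates,
            key=lambda s: sum(routes[r] for r in candidates[s] - covered),
        )
        chosen.append(best)
        covered |= candidates[best]
    return chosen

sites = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4, 5}}
demand = {1: 1.0, 2: 2.5, 3: 1.2, 4: 0.8, 5: 3.0}  # route 5 weighted up
print(greedy_adding(sites, demand, k=2))           # -> ['C', 'A']
```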
Abstract:
Modern high-power, pulsed lasers are driven by strong intracavity fluctuations. Critical in driving the intracavity dynamics are the nontrivial phase profiles generated and their periodic modification by nonlinear mode-coupling, spectral filtering or dispersion management. Understanding the theoretical origins of the intracavity fluctuations helps guide the design, optimization and construction of efficient, high-power and high-energy pulsed laser cavities. Three specific mode-locking components are presented for enhancing laser energy: waveguide arrays, spectral filtering and dispersion management. Each component drives strong intracavity dynamics that are captured through various modeling and analytic techniques.
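Intracavity dynamics of this kind are often modeled by a master mode-locking equation of cubic-quintic Ginzburg–Landau type; a standard form from the literature (not necessarily the exact model of this work) is

$$i\,u_z + \frac{D}{2}\,u_{tt} + |u|^2 u = i\,g(z)\left(1 + \tau\,\partial_t^2\right)u - i\,\gamma\,u, \qquad g(z) = \frac{2 g_0}{1 + \|u\|^2/e_0},$$

where $u(z,t)$ is the intracavity field envelope, $D$ the average cavity dispersion, $\tau$ the gain bandwidth, $\gamma$ the linear cavity loss, and $g(z)$ a saturating gain.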
Abstract:
Intertemporal choice is one of the crucial questions in economic modeling; it describes decisions that require trade-offs among outcomes occurring at different points in time. Exponential discounting is the most widespread approach in economic modeling, even though empirical studies show it has weak explanatory power. Generalized hyperbolic discounting, which psychologists find has the strongest descriptive validity, is in turn very complex and hard to use in economic models. Quasi-hyperbolic discounting spread quickly in response to this tension: it captures the most important properties of generalized hyperbolic discounting while remaining tractable in analytical modeling, and it is therefore common to substitute one for the other. This paper argues that this substitution leads to different conclusions in long-term decisions, especially in the case of series; hence results obtained with quasi-hyperbolic discounting for long-term questions should be revised wherever they assumed interchangeability with the generalized hyperbolic discounting model.
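The three discount functions compared in the paper have standard closed forms; a minimal sketch evaluating them side by side follows, with illustrative (uncalibrated) parameter values:

```python
# Standard discount functions: exponential, quasi-hyperbolic (beta-delta),
# and generalized hyperbolic (Loewenstein-Prelec). Parameters are
# illustrative, not calibrated to any dataset.
def exponential(t, delta=0.95):
    return delta ** t

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # full weight at t = 0, a uniform extra discount (beta) afterwards
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, gamma=1.0):
    # power-law tail: (1 + alpha*t)^(-gamma/alpha)
    return (1.0 + alpha * t) ** (-gamma / alpha)

for t in (0, 1, 10, 50):  # the gap between the two hyperbolic forms
    print(t, exponential(t), quasi_hyperbolic(t), generalized_hyperbolic(t))
```

The geometric tail of the quasi-hyperbolic form versus the power-law tail of the generalized hyperbolic form is what makes the two diverge over long horizons, which is the substitution problem the paper raises.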
Abstract:
The impact of climate change on the potential distribution of four Mediterranean pine species – Pinus brutia Ten., Pinus halepensis Mill., Pinus pinaster Aiton, and Pinus pinea L. – was studied with the Climate Envelope Model (CEM) to examine whether these species are suitable for use as ornamental plants without frost protection in the Carpathian Basin. The model was supported by the EUFORGEN digital area database (distribution maps), ESRI ArcGIS 10 software's Spatial Analyst module (modeling environment), PAST (statistical calibration of the model), and the REMO regional climate model (climatic data). The climate data were available in a 25 km resolution grid for the reference period (1961–1990) and two future periods (2011–2040, 2041–2070). The regional climate model was based on the IPCC SRES A1B scenario. While the potential distribution of P. brutia was not predicted to expand remarkably, an explicit shift of the distribution of the other three species was shown. Northwestern African distribution segments seem likely to be abandoned in the future. The current distribution of P. brutia may be highly endangered by climate change. P. halepensis in the southern part and P. pinaster in the western part of the Carpathian Basin may find suitable climatic conditions in the period 2041–2070.
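A minimal sketch of a climate envelope classification in the spirit of the CEM described above: percentile bounds are fitted on reference-period climate at the species' presence cells and projected onto future climate. All grids and cut-offs here are illustrative placeholders, not the study's data:

```python
import numpy as np

def fit_envelope(climate, presence, lo=5, hi=95):
    """Percentile bounds of each climate variable over presence cells."""
    occupied = climate[presence]
    return np.percentile(occupied, lo, axis=0), np.percentile(occupied, hi, axis=0)

def suitable(climate, bounds):
    """Cells whose climate falls inside the fitted envelope on every variable."""
    lower, upper = bounds
    return np.all((climate >= lower) & (climate <= upper), axis=1)

rng = np.random.default_rng(0)
clim_ref = rng.normal(size=(1000, 3))    # reference-period climate per grid cell
clim_fut = clim_ref + 0.8                # uniformly warmed future period
presence = clim_ref[:, 0] > 0.5          # hypothetical current distribution

bounds = fit_envelope(clim_ref, presence)
print(suitable(clim_fut, bounds).sum(), "future cells inside the envelope")
```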
Abstract:
The future northward expansion, caused by climate change, of the arthropod vectors of leishmaniasis appears to be an important veterinary and medical problem. Our aim was to build and evaluate a Climate Envelope Model (CEM) to assess the potential effects of climate change on eight European sandfly species. The studied species – Phlebotomus ariasi Tonn., P. neglectus Tonn., P. papatasi Scop., P. perfiliewi Parrot, P. perniciosus Newst., P. sergenti Parrot, P. similis Perfiliev, P. tobbi Adler, Theodor et Lourie – are important vectors of the parasite Leishmania infantum or other Leishmania species. The projections were based on the REMO regional climate model with a European domain. The climate data were available in a 25 km resolution grid for the reference period (1961–1990) and two future periods (2011–2040, 2041–2070). The regional climate model was based on the IPCC SRES A1B scenario. Three types of climatic parameters were used for every month (averaged over the 30-year periods). The model was supported by the VBORNET digital area database (distribution maps), ESRI ArcGIS 10 software's Spatial Analyst module (modeling environment), and PAST (statistical calibration of the model). Iterative model evaluation was performed by summing two types of model error based on an aggregated distribution. The results show that the best model performance is achieved by trimming 5 percentiles from each extreme of the mean temperature, 2 percentiles from each extreme of the minimum temperature, none from the minimum of precipitation, and 8 percentiles from the maximum of precipitation.
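The iterative evaluation described above can be sketched as a small search over percentile cut-offs that minimizes the summed model error. The error metric below (omission plus commission against the aggregated distribution) is a plausible stand-in, not the paper's exact definition, and all data are synthetic:

```python
import itertools
import numpy as np

def envelope(climate, presence, lo, hi):
    """Suitability mask from percentile bounds fitted on presence cells."""
    occ = climate[presence]
    lower = np.percentile(occ, lo, axis=0)
    upper = np.percentile(occ, hi, axis=0)
    return np.all((climate >= lower) & (climate <= upper), axis=1)

def summed_error(predicted, observed):
    # omission (presences missed) + commission (absences predicted present)
    return np.sum(observed & ~predicted) + np.sum(predicted & ~observed)

rng = np.random.default_rng(1)
clim = rng.normal(size=(2000, 3))               # hypothetical gridded climate
obs = (clim[:, 0] > -0.2) & (clim[:, 1] < 1.0)  # hypothetical distribution

scores = {
    (lo, hi): summed_error(envelope(clim, obs, lo, 100 - hi), obs)
    for lo, hi in itertools.product((0, 2, 5, 8), repeat=2)
}
print(min(scores, key=scores.get), "gives the lowest summed error")
```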
Abstract:
Modern software systems are often large and complicated. To better understand, develop, and manage large software systems, researchers have studied software architectures, which provide the top-level overall structural design of software systems, for the last decade. One major research focus on software architectures is formal architecture description languages, but most existing research concentrates on descriptive capability and puts less emphasis on software architecture design methods and formal analysis techniques, which are necessary to develop correct software architecture designs. Refinement is a general approach of adding details to a software design, and a formal refinement method can further ensure certain design properties. This dissertation proposes refinement methods, including a set of formal refinement patterns and complementary verification techniques, for software architecture design using the Software Architecture Model (SAM), which was developed at Florida International University. First, a general guideline for software architecture design in SAM is proposed. Second, specification construction through property-preserving refinement patterns is discussed. The refinement patterns are categorized into connector refinement, component refinement and high-level Petri net refinement; these three levels of refinement patterns apply to overall system interaction, architectural components, and the underlying formal language, respectively. Third, verification after modeling is discussed as a complementary technique to specification refinement. Two formal verification tools, the Stanford Temporal Prover (STeP) and the Simple Promela Interpreter (SPIN), are adopted into SAM to develop the initial models. Fourth, formalization and refinement of security issues are studied: a method for security enforcement in SAM is proposed, the Role-Based Access Control model is formalized using predicate transition nets and Z notation, and patterns for enforcing access control and auditing are proposed. Finally, modeling and refining a life insurance system demonstrates how to apply the refinement patterns for software architecture design using SAM and how to integrate the access control model. The results of this dissertation demonstrate that a refinement method is an effective way to develop a high-assurance system. The method developed in this dissertation extends existing work on modeling software architectures using SAM and makes SAM a more usable and valuable formal tool for software architecture design.
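SAM's behavior models rest on Petri nets; a minimal place/transition net with firing semantics is sketched below as an illustration of that underlying formalism. The net structure is a toy access-control step, not the dissertation's RBAC formalization:

```python
# Minimal place/transition Petri net: a marking maps places to token
# counts; a transition fires by consuming input tokens and producing
# output tokens.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)       # place -> token count
        self.transitions = transitions     # name -> (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

net = PetriNet({"request": 1},
               {"grant": ({"request": 1}, {"granted": 1})})
net.fire("grant")
print(net.marking)   # {'request': 0, 'granted': 1}
```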
Abstract:
Mediation techniques provide interoperability and support integrated query processing among heterogeneous databases. While such techniques help data sharing among different sources, they increase the risk to data security, such as violating access control rules. Successful protection of information by an effective access control mechanism is a basic requirement for interoperation among heterogeneous data sources. This dissertation first identified the challenges a mediation system must meet to achieve both interoperability and security in an interconnected and collaborative computing environment: (1) context-awareness, (2) semantic heterogeneity, and (3) multiple security policy specification. Few existing approaches address all three security challenges in a mediation system. This dissertation provides a modeling and architectural solution to the problem of mediation security that addresses the aforementioned security challenges. A context-aware flexible authorization framework was developed to deal with the security challenges faced by a mediation system. The authorization framework consists of two major tasks: specifying security policies and enforcing security policies. First, the security policy specification provides a generic and extensible method to model the security policies with respect to the challenges posed by the mediation system. The security policies in this study are specified as 5-tuples followed by a series of authorization constraints, which are identified based on the relationships among the different security components in the mediation system. Two essential features of mediation systems, i.e., relationships among authorization components and interoperability among heterogeneous data sources, are the focus of this investigation. Second, this dissertation supports effective access control on mediation systems while providing uniform access to heterogeneous data sources. The dynamic security constraints are handled in the authorization phase instead of the authentication phase, so the maintenance cost of the security specification can be reduced compared with related solutions.
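The abstract does not spell out the fields of the 5-tuple, so the sketch below uses assumed components (subject, object, action, effect, plus a context-evaluated constraint) purely to illustrate the policy-plus-constraint idea:

```python
# Hedged sketch of a 5-tuple security policy with an authorization
# constraint evaluated against the request context. Field names are
# assumptions for illustration, not the dissertation's definition.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    subject: str
    obj: str
    action: str
    effect: str                           # "allow" or "deny"
    constraint: Callable[[dict], bool]    # context-aware condition

def authorize(policies, subject, obj, action, context):
    for p in policies:
        if (p.subject, p.obj, p.action) == (subject, obj, action) \
                and p.constraint(context):
            return p.effect == "allow"
    return False                          # default deny

rules = [Policy("doctor", "record", "read", "allow",
                lambda ctx: ctx.get("location") == "hospital")]
print(authorize(rules, "doctor", "record", "read", {"location": "hospital"}))
```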
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in recent years, with several methods, techniques and tools developed; however, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures expressed in SAM. For formal verification, the technique applied was model checking and the model checker of choice was Spin; a SAM model is formally translated into a model in the input language of Spin and verified for correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined that includes the evaluation of test cases based on Petri net testing theory for use in the testing process at the design level; the information at the design level is additionally used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
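For illustration, a behavioral property of the kind checked with a model checker such as Spin might be written in linear temporal logic as

$$\square\big(\mathit{request} \rightarrow \Diamond\,\mathit{response}\big),$$

i.e., "it is always the case that a request is eventually followed by a response"; the proposition names are invented here, not taken from the dissertation.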
Abstract:
With the recent explosion in the complexity and amount of digital multimedia data, there has been a huge impact on the operations of organizations in distinct areas such as government services, education, medical care, business, and entertainment. To satisfy the growing demand for multimedia data management systems, an integrated framework called DIMUSE is proposed and deployed for distributed multimedia applications to offer a full scope of multimedia-related tools and provide appealing experiences for users. This research mainly focuses on video database modeling and retrieval by addressing a set of core challenges. First, a comprehensive multimedia database modeling mechanism called Hierarchical Markov Model Mediator (HMMM) is proposed to model high-dimensional media data including video objects, low-level visual/audio features, and historical access patterns and frequencies. The associated retrieval and ranking algorithms are designed to support not only general queries but also complicated temporal event pattern queries. Second, system training and learning methodologies are incorporated so that user interests are mined efficiently to improve retrieval performance. Third, video clustering techniques are proposed to continuously increase searching speed and accuracy by architecting a more efficient multimedia database structure. A distributed video management and retrieval system is designed and implemented to demonstrate the overall performance. The proposed approach is further customized for a mobile-based video retrieval system to address the perception subjectivity issue by considering individual users' profiles. Moreover, to deal with security and privacy issues and concerns in distributed multimedia applications, DIMUSE also incorporates a practical framework called SMARXO, which supports multilevel multimedia security control. SMARXO efficiently combines role-based access control (RBAC), XML and an object-relational database management system (ORDBMS) to achieve proficient security control. A distributed multimedia management system named DMMManager (Distributed MultiMedia Manager) is developed with the proposed framework DIMUSE to support multimedia capturing, analysis, retrieval, authoring and presentation in one single framework.
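The abstract does not detail HMMM's construction, but the general idea of ranking by affinities derived from historical access patterns can be sketched as follows; the co-access counts are hypothetical and HMMM's actual hierarchical model is considerably richer:

```python
import numpy as np

# Access-pattern affinities: co-access frequencies normalized into a
# row-stochastic matrix, then used to rank items related to a query item.
access = np.array([[9, 3, 0],    # access[i, j]: times videos i and j were
                   [3, 6, 2],    # used together in historical queries
                   [0, 2, 5]], dtype=float)
affinity = access / access.sum(axis=1, keepdims=True)

query_item = 0
ranking = np.argsort(-affinity[query_item])
print("videos ranked for item 0:", ranking)
```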
Abstract:
The availability and pervasiveness of new communication services, such as mobile networks and multimedia communication over digital networks, has resulted in strong demand for approaches to modeling and realizing customized communication systems. The stovepipe approach used to develop today's communication applications is no longer effective because it results in a lengthy and expensive development cycle. To address this need, the Communication Virtual Machine (CVM) technology was developed by researchers at Florida International University. The CVM technology includes the Communication Modeling Language (CML) and the CVM platform to model and rapidly realize communication models. In this dissertation, we investigate the basic communication primitives needed to capture and specify an end user's requirements for communication-intensive applications, and how these specifications can be automatically realized. To identify the basic communication primitives, we perform a feature analysis on a set of communication-intensive scenarios from the healthcare domain. Based on the feature analysis, we define a new version of CML that includes the meta-model definition (abstract syntax and static semantics) and a partial behavior model (operational semantics). To validate our CML definition, we present a case study that shows how one of the scenarios from the healthcare domain is modeled and automatically realized.
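As a rough illustration of the kind of communication primitives such a meta-model might capture, the sketch below models participants and a shared connection; the class and field names are assumptions for illustration, since CML's actual abstract syntax is defined in the dissertation:

```python
# Hedged sketch of communication primitives: participants with devices,
# joined by a connection carrying one or more media types.
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    devices: list[str] = field(default_factory=list)

@dataclass
class Connection:
    participants: list[Participant]
    media: list[str]          # e.g., audio, video, a patient record stream

conn = Connection(
    participants=[Participant("physician", ["laptop"]),
                  Participant("nurse", ["tablet"])],
    media=["audio", "patientRecord"],
)
print(len(conn.participants), "parties sharing", conn.media)
```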
Abstract:
Rapid advances in electronic communication devices and technologies have resulted in a shift in the way communication applications are developed. These new development strategies provide abstract views of the underlying communication technologies and lead to so-called user-centric communication applications. One user-centric communication (UCC) initiative is the Communication Virtual Machine (CVM) technology, which uses the Communication Modeling Language (CML) for modeling communication services and the CVM for realizing these services. In communication-intensive domains such as telemedicine and disaster management, there is an increasing need for user-centric communication applications that are domain-specific and that support the dynamic coordination of communication services commonly found in collaborative communication scenarios. However, UCC approaches like the CVM offer little support for the dynamic coordination of communication services arising from inherent dependencies between individual steps of a collaboration task. Users either have to coordinate communication services manually, or rely on process modeling techniques to build customized solutions for services in a specific domain, which are usually costly, rigidly defined and technology-specific. This dissertation proposes a domain-specific modeling approach to address this problem by extending the CVM technology with communication-specific abstractions of workflow concepts commonly found in business processes. The extension involves (1) the definition of the Workflow Communication Modeling Language (WF-CML), a superset of CML, and (2) the extension of the functionality of CVM to process communication-specific workflows. The definition of WF-CML includes the meta-model and the dynamic semantics for control constructs and concurrency. We also extended the CVM prototype to handle the modeling and realization of WF-CML models. A comparative study of the proposed approach with other workflow environments validates the claimed benefits of WF-CML and CVM.
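A minimal sketch of the control constructs and concurrency such a workflow extension addresses: two communication services run in parallel, followed by a dependent sequential step. The service names and scenario are invented, not taken from WF-CML:

```python
import asyncio

# Toy collaboration task: establish a call while a record transfers
# (concurrency), then begin consultation only after both complete
# (sequencing) -- the dependency pattern the abstract describes.
async def start_call(parties):
    print("call established:", parties)

async def send_record(record, to):
    print(f"{record} sent to {to}")

async def triage_workflow():
    await asyncio.gather(start_call(["physician", "nurse"]),
                         send_record("patientRecord", "physician"))
    print("consultation step begins")

asyncio.run(triage_workflow())
```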
Abstract:
Urban growth models have been used for decades to forecast urban development in metropolitan areas. Since the 1990s, cellular automata, with simple computational rules and an explicitly spatial architecture, have been heavily utilized in this endeavor. One such cellular-automata-based model, SLEUTH, has been successfully applied around the world to better understand and forecast not only urban growth but also other forms of land-use and land-cover change; like other models, however, it must be fed important information about which lands in the modeled area are available for development. Some of these lands fall into categories that exclude urban growth but are difficult to quantify, since their function is dictated by policy. One such category comprises voluntary differential assessment programs, whereby farmers agree not to develop their lands in exchange for significant tax breaks. Because the programs are voluntary, today's excluded lands may become available for development at some point in the future. Mapping the shifting mosaic of parcels enrolled in such programs allows this information to be used in modeling and forecasting. In this study, we added information about California's Williamson Act to SLEUTH's excluded layer for Tulare County. Assumptions about the voluntary differential assessments were used to create a sophisticated excluded layer that was fed into SLEUTH's urban growth forecasting routine. The results demonstrate not only a successful execution of this method but also high goodness-of-fit metrics both for the calibration of enrollment termination and for the urban growth modeling itself.
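A minimal sketch of the excluded-layer idea: SLEUTH's excluded layer assigns each cell a resistance value from 0 (available) to 100 (fully excluded), so voluntarily enrolled parcels can be given a partial weight reflecting that they may leave the program. The grids and the partial weight below are illustrative, not the study's values:

```python
import numpy as np

# Build an excluded-layer raster: permanent exclusions get 100, enrolled
# differential-assessment parcels get an assumed partial resistance.
fully_protected = np.array([[1, 0], [0, 0]], dtype=bool)  # e.g., water, parks
enrolled = np.array([[0, 1], [1, 0]], dtype=bool)         # Williamson Act parcels

excluded = np.zeros(fully_protected.shape, dtype=np.uint8)
excluded[fully_protected] = 100   # never available for development
excluded[enrolled] = 75           # assumed partial resistance weight
print(excluded)
```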
Abstract:
Recently, researchers have begun to investigate the benefits of cross-training teams. It has been hypothesized that cross-training should help improve team processes and team performance (Cannon-Bowers, Salas, Blickensderfer, & Bowers, 1998; Travillian, Volpe, Cannon-Bowers, & Salas, 1993). The current study extends previous research by examining different methods of cross-training (positional clarification and positional modeling) and their impact on team process and performance in both more complex and less complex environments. One hundred and thirty-five psychology undergraduates were placed in 45 three-person teams. Participants were randomly assigned to roles within teams, and teams were asked to "fly" a series of missions on a PC-based helicopter flight simulation. Results suggest that cross-training improves team mental model accuracy and similarity. Accuracy of team mental models was found to be a predictor of coordination quality, but similarity of team mental models was not. Neither similarity nor accuracy of team mental models predicted backup behavior (quality or quantity). As expected, both team coordination (quality) and backup behaviors (quantity and quality) were significant predictors of overall team performance. Contrary to expectations, there was no interaction between cross-training and environmental complexity. Results from this study further cross-training research by establishing positional clarification and positional modeling as training strategies for improving team performance.