906 results for Model driven architecture (MDA) initiative
Abstract:
In this paper, we consider a classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM; previous work deals only with the case when the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization gives the test designer more options: when traditional methods trigger a test explosion for large specification machines, tests with a lower, yet guaranteed, fault coverage can still be generated. The second generalization is that tests can be generated starting from a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. To solve the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones, and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present experimental results indicating that the proposed algorithm allows a trade-off to be obtained between the length and the fault coverage of test suites.
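The incremental-extension idea can be illustrated with a toy sketch. This is not the paper's algorithm (which relies on weaker sufficient conditions for completeness rather than explicit mutant enumeration); it only shows how a user-defined suite can be grown until every machine in an explicit fault domain is detected. All machine, state and input names below are hypothetical.

```python
from itertools import product

def run(fsm, state, inputs):
    """Apply an input sequence to a deterministic FSM given as a dict
    {(state, input): (next_state, output)}; return the output sequence."""
    outputs = []
    for i in inputs:
        state, o = fsm[(state, i)]
        outputs.append(o)
    return outputs

def extend_suite(spec, s0, mutants, suite, alphabet, max_len=4):
    """Greedily extend a user-defined test suite until every mutant FSM in
    the fault domain is detected by some test (incremental generation)."""
    suite = [list(t) for t in suite]
    for m0, mut in mutants:
        if any(run(spec, s0, t) != run(mut, m0, t) for t in suite):
            continue  # already detected by the user-defined tests
        for n in range(1, max_len + 1):
            seq = next((list(s) for s in product(alphabet, repeat=n)
                        if run(spec, s0, list(s)) != run(mut, m0, list(s))),
                       None)
            if seq:
                suite.append(seq)  # shortest distinguishing sequence found
                break
    return suite

# toy 2-state specification over inputs {a, b}
spec = {('s0', 'a'): ('s1', '0'), ('s0', 'b'): ('s0', '1'),
        ('s1', 'a'): ('s0', '1'), ('s1', 'b'): ('s1', '0')}
# single-state mutant: fewer states than the spec, as the first
# generalization in the paper allows
mut = {('m0', 'a'): ('m0', '0'), ('m0', 'b'): ('m0', '1')}
# start from a user-defined suite, as in the second generalization
suite = extend_suite(spec, 's0', [('m0', mut)], [['b']], ['a', 'b'])
```

The user-defined test `['b']` does not distinguish the mutant, so the sketch appends the shortest sequence that does.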
Abstract:
Neutron multiplicities for several targets, and spallation products of proton-induced reactions in thin targets of interest to accelerator-driven systems, obtained with the CRISP code, are reported. This code is a Monte Carlo calculation that simulates the intranuclear cascade and the evaporation/fission competition processes. Results are compared with experimental data, and the agreement can be considered quite satisfactory over a very broad energy range of incident particles and for different targets.
Abstract:
We consider the dynamics of cargo driven by a collection of interacting molecular motors in the context of an asymmetric simple exclusion process (ASEP). The model is formulated to account for (i) excluded-volume interactions, (ii) the observed asymmetry of the stochastic movement of individual motors, and (iii) interactions between motors and cargo. Items (i) and (ii) form the basis of ASEP models and have already been considered in studies of the motor density profile [A. Parmeggiani, T. Franosch, E. Frey, Phase coexistence in driven one-dimensional transport, Phys. Rev. Lett. 90 (2003) 086601-1–086601-4]. Item (iii) is new; it is introduced here as an attempt to describe explicitly the dependence of cargo movement on the dynamics of motors in this context. The steady-state solutions of the model indicate that the system undergoes a phase transition of the condensation type as the motor density varies. We study the consequences of this transition for the behavior of the average cargo velocity. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The variations of tropical precipitation are antiphased between the hemispheres on orbital timescales. This antiphasing arises through the alternating strength of incoming solar radiation in the two hemispheres, which affects monsoon intensity and hence the position of the meridional atmospheric circulation of the Hadley cells(1-4). Here we compare an oxygen isotopic record recovered from a speleothem from northeast Brazil for the past 26,000 years with existing reconstructions of precipitation in tropical South America(5-8). During the Holocene, we identify a similar, but zonally oriented, antiphasing of precipitation within the same hemisphere: northeast Brazil experiences humid conditions during low summer insolation and aridity when summer insolation is high, whereas the rest of southern tropical South America shows opposite characteristics. Simulations with a general circulation model that incorporates isotopic variations support this pattern as well as the link to insolation-driven monsoon activity. Our results suggest that convective heating over tropical South America and associated adjustments in large-scale subsidence over northeast Brazil lead to a remote forcing of the South American monsoon, which determines most of the precipitation changes in the region on orbital timescales.
Abstract:
Architectures based on Coordinated Atomic action (CA action) concepts have been used to build concurrent fault-tolerant systems. This conceptual model combines concurrent exception handling with action nesting to provide a general mechanism both for enclosing interactions among system components and for coordinating forward error recovery measures. This article presents an architectural model to guide the formal specification of concurrent fault-tolerant systems. The architecture provides built-in Communicating Sequential Processes (CSP) processes and predefined channels to coordinate exception handling among the user-defined components. Hence, safety properties concerning action scoping and concurrent exception handling can be proved using the FDR (Failures-Divergences Refinement) verification tool. As a result, a formal and general architecture supporting software fault tolerance is ready to be used, and to have its properties proved, as users define components with normal and exceptional behaviors. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
We consider the time evolution of an exactly solvable cellular automaton with random initial conditions both in the large-scale hydrodynamic limit and on the microscopic level. This model is a version of the totally asymmetric simple exclusion process with sublattice parallel update and thus may serve as a model for studying traffic jams in systems of self-driven particles. We study the emergence of shocks from the microscopic dynamics of the model. In particular, we introduce shock measures whose time evolution we can compute explicitly, both in the thermodynamic limit and for open boundaries where a boundary-induced phase transition driven by the motion of a shock occurs. The motion of the shock, which results from the collective dynamics of the exclusion particles, is a random walk with an internal degree of freedom that determines the jump direction. This type of hopping dynamics is reminiscent of some transport phenomena in biological systems.
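As an illustration only, an open-boundary TASEP can be simulated with a random-sequential update. The model in the abstract uses a sublattice-parallel update, so this sketch reproduces the general phenomenology (boundary-controlled bulk density), not the exact dynamics; all parameter values are invented.

```python
import random

def tasep_step(lattice, alpha, beta, rng):
    """One random-sequential sweep of the open-boundary TASEP: a particle
    hops one site to the right if that site is empty; when the boundary
    moves are selected, a particle is injected at the left with
    probability alpha and removed at the right with probability beta."""
    L = len(lattice)
    for _ in range(L + 1):
        k = rng.randrange(-1, L)            # -1 selects the injection move
        if k == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif k == L - 1:
            if lattice[k] == 1 and rng.random() < beta:
                lattice[k] = 0
        elif lattice[k] == 1 and lattice[k + 1] == 0:
            lattice[k], lattice[k + 1] = 0, 1

def density_profile(L, alpha, beta, sweeps, seed=1):
    """Time-averaged occupation per site, discarding the first half of
    the sweeps as a transient."""
    rng = random.Random(seed)
    lattice = [0] * L
    profile = [0.0] * L
    for t in range(sweeps):
        tasep_step(lattice, alpha, beta, rng)
        if t >= sweeps // 2:
            for i, s in enumerate(lattice):
                profile[i] += s
    n = sweeps - sweeps // 2
    return [p / n for p in profile]

profile = density_profile(L=60, alpha=0.3, beta=0.8, sweeps=8000)
bulk = sum(profile[20:40]) / 20   # bulk density, expected near alpha
```

With injection rate alpha = 0.3 below the removal rate beta, the bulk density settles near alpha, i.e. the low-density phase of the standard TASEP phase diagram.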
Abstract:
This thesis introduces an intelligent cross-platform architecture based on a multi-agent system, in order to equip simulation models with agents that have intelligent behavior, a reactive and pro-active nature, and rationality in decision making.
Abstract:
Parkinson’s disease (PD) is an increasingly common neurological disorder in an aging society. The motor and non-motor symptoms of PD advance with disease progression and occur with varying frequency and duration. In order to assess the full extent of a patient’s condition, repeated assessments are necessary to adjust the medical prescription. In clinical studies, symptoms are assessed using the unified Parkinson’s disease rating scale (UPDRS). On one hand, subjective rating using the UPDRS relies on clinical expertise; on the other hand, it requires the physical presence of patients in clinics, which implies high logistical costs. Another limitation of clinical assessment is that observation in hospital may not accurately represent a patient’s situation at home. For such reasons, the practical frequency of tracking PD symptoms may under-represent the true time scale of PD fluctuations and may result in an overall inaccurate assessment. Current technologies for at-home PD treatment are based on data-driven approaches for which the interpretation and reproduction of results are problematic. The overall objective of this thesis is to develop and evaluate unobtrusive computer methods for enabling remote monitoring of patients with PD. It investigates novel signal and image processing techniques, based on first principles and data-driven models, for the extraction of clinically useful information from audio recordings of speech (texts read aloud) and video recordings of gait and finger-tapping motor examinations. The aim is to map between the PD symptom severities estimated using the novel computer methods and the clinical ratings based on UPDRS part III (motor examination). A web-based test battery system consisting of self-assessment of symptoms and motor function tests was previously constructed for a touch-screen mobile device.
A comprehensive speech framework has been developed for this device to analyze text-dependent running speech by: (1) extracting novel signal features that are able to represent PD deficits in each individual component of the speech system, (2) mapping between clinical ratings and feature estimates of speech symptom severity, and (3) classifying between UPDRS part-III severity levels using speech features and statistical machine learning tools. A novel speech processing method called cepstral separation difference showed a stronger ability to classify between speech symptom severities compared with existing features of PD speech. In the case of finger tapping, the recorded videos of the rapid finger-tapping examination were processed using a novel computer vision (CV) algorithm that extracts symptom information from video-based tapping signals using motion analysis of the index finger, incorporating a face-detection module for signal calibration. This algorithm was able to discriminate between UPDRS part-III severity levels of finger tapping with high classification rates. Further analysis was performed on novel CV-based gait features, constructed using a standard human model, to discriminate between a healthy gait and a Parkinsonian gait. The findings of this study suggest that symptom severity levels in PD can be discriminated with high accuracy by combining first-principles (features) and data-driven (classification) approaches. On one hand, the processing of audio and video recordings allows remote monitoring of speech, gait and finger-tapping examinations by clinical staff; on the other hand, the first-principles approach eases the understanding of symptom estimates for clinicians. We have demonstrated that the selected features of speech, gait and finger tapping were able to discriminate between symptom severity levels, as well as between healthy controls and PD patients, with high classification rates.
The findings support the suitability of these methods as decision support tools in the context of PD assessment.
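The thesis's CV pipeline itself is not reproduced here; the following toy sketch only illustrates the kind of quantities one might extract from a one-dimensional tapping displacement signal (tapping rate and mean peak amplitude). Both features and the synthetic signal are hypothetical simplifications, not the thesis's actual method.

```python
import math

def tap_features(signal, fs):
    """Toy features from a 1-D finger-tap displacement signal:
    tapping rate (positive peaks per second) and mean peak amplitude."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] >= signal[i + 1] and signal[i] > 0]
    rate = len(peaks) * fs / len(signal)
    amp = sum(signal[i] for i in peaks) / len(peaks) if peaks else 0.0
    return rate, amp

# synthetic noise-free 3 Hz tapping signal, sampled at 100 Hz for 2 s
fs = 100
sig = [math.sin(2 * math.pi * 3 * t / fs) for t in range(2 * fs)]
rate, amp = tap_features(sig, fs)
```

On this clean synthetic signal the recovered rate matches the 3 Hz tapping frequency; real video-derived signals would of course require smoothing and calibration first.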
Abstract:
Vehicle activated signs (VAS) display a warning message when drivers exceed a particular speed threshold. VAS are often installed on local roads to display a warning message depending on the speed of the approaching vehicles. VAS are usually powered by electricity; however, battery- and solar-powered VAS are also commonplace. This thesis investigated the development of an automatic trigger speed for vehicle activated signs in order to influence driver behaviour, the effect of which was measured in terms of reduced mean speed and lower standard deviation. A comprehensive understanding of the effect of the VAS trigger speed on driver behaviour was established by systematically collecting data. Specifically, data on time of day, speed, length and direction of each vehicle were collected for this purpose, using Doppler radar installed at the road. A data-driven calibration method for the radar used in the experiment was also developed and evaluated. Results indicate that the trigger speed of the VAS had a variable effect on drivers' speed at different sites and at different times of the day. It is evident that the optimal trigger speed should be set near the 85th percentile speed in order to lower the standard deviation. In the case of battery- and solar-powered VAS, trigger speeds between the 50th and 85th percentiles offered the best compromise between safety and power consumption. Results also indicate that different classes of vehicles show differences in mean speed and standard deviation; on a highway, the mean speed of cars differs only slightly from the mean speed of trucks, whereas a significant difference was observed between vehicle classes on local roads. A differential trigger speed was therefore investigated for the sake of completeness. A data-driven approach using random forests was found to be appropriate for predicting trigger speeds for different types of vehicles and traffic conditions.
The fact that the predicted trigger speed was found to be consistently around the 85th percentile speed justifies the choice of the automatic model.
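The percentile rule of thumb can be sketched as follows. The nearest-rank percentile and the midpoint rule for battery/solar units are illustrative assumptions, not the thesis's random-forest model, and the speed sample is invented.

```python
def percentile(speeds, p):
    """Nearest-rank percentile (simple rounding scheme) of a speed sample."""
    s = sorted(speeds)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def trigger_speed(speeds, battery_or_solar=False):
    """Mains-powered VAS: trigger near the 85th percentile speed.
    Battery/solar VAS: a value between the 50th and 85th percentiles;
    the midpoint used here is an illustrative assumption."""
    if battery_or_solar:
        return 0.5 * (percentile(speeds, 50) + percentile(speeds, 85))
    return percentile(speeds, 85)

speeds = [38, 42, 45, 47, 48, 50, 51, 53, 55, 58, 60, 64]  # km/h, invented
```

The battery/solar variant deliberately triggers at a lower speed, trading some warning coverage for fewer sign activations and hence lower power consumption.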
Abstract:
The reliable evaluation of flood forecasts is a crucial problem for assessing flood risk and the consequent damages. Different hydrological models (distributed, semi-distributed or lumped) have been proposed to deal with this issue. The choice of the proper model structure has been investigated by many authors and is one of the main sources of uncertainty for a correct evaluation of the outflow hydrograph. In addition, the recent increase in data availability makes it possible to update hydrological models in response to real-time observations. For these reasons, the aim of this work is to evaluate the effect of different structures of a semi-distributed hydrological model on the assimilation of distributed, uncertain discharge observations. The study was applied to the Bacchiglione catchment, located in Italy. The first methodological step was to divide the basin into different sub-basins according to topographic characteristics. Secondly, two different structures of the semi-distributed hydrological model were implemented in order to estimate the outflow hydrograph. Then, synthetic observations of uncertain discharge values were generated as a function of the observed and simulated flow at the basin outlet, and assimilated into the semi-distributed models using a Kalman filter. Finally, different spatial patterns of sensor locations were assumed when updating the model state in response to the uncertain discharge observations. The results of this work point out that, overall, the assimilation of uncertain observations can improve hydrologic model performance. In particular, it was found that the model structure is an important factor, difficult to characterize, since it can induce different forecasts in terms of outflow discharge. This study is partly supported by the FP7 EU Project WeSenseIt.
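The assimilation step can be illustrated with a scalar Kalman filter update. This is a sketch only: the study assimilates distributed observations into a semi-distributed model state, and the numbers below are hypothetical.

```python
def kalman_update(x_prior, P_prior, z, R):
    """Scalar Kalman filter analysis step: blend the forecast discharge
    x_prior (error variance P_prior) with an uncertain observation z
    (error variance R)."""
    K = P_prior / (P_prior + R)            # Kalman gain
    x_post = x_prior + K * (z - x_prior)   # updated state estimate
    P_post = (1.0 - K) * P_prior           # reduced error variance
    return x_post, P_post

# hypothetical numbers: forecast 120 m3/s (variance 400)
# versus an uncertain observation of 100 m3/s (variance 100)
x_post, P_post = kalman_update(120.0, 400.0, 100.0, 100.0)
```

The gain weighs the observation by the relative confidence in model and sensor; a noisier sensor (larger R) pulls the state less, which is exactly why the spatial pattern and uncertainty of the discharge sensors matter.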
Abstract:
As a result of urbanization, stormwater runoff flow rates and volumes are significantly increased due to increasing impervious land cover and the decreased availability of depression storage. Storage tanks are the basic devices for efficiently controlling the flow rate in drainage systems during wet weather. The conception of vacuum-driven detention tanks presented in this paper allows the storage capacity to be increased by using the space above the free-surface water elevation at the inlet channel. Partial vacuum storage makes it possible to gain cost savings by reducing both the horizontal area of the detention tank and the necessary depth of its foundations. A simulation model of a vacuum-driven storage tank has been developed to estimate the potential profits of its application in an urban drainage system. Although SWMM5 has no direct options for vacuum tanks, existing functions (i.e. control rules) have been used to reflect its operation phases. The rainfall data used in the simulations were recorded at a rain gauge in Czestochowa during the years 2010-2012 with a time interval of 10 minutes. The simulation results give an overview of the practical operation and maintenance cost (energy demand) of vacuum-driven storage tanks depending on the ratio of vacuum-driven volume to total storage capacity. The following conclusions can be drawn from these investigations: vacuum-driven storage tanks are characterized by uncomplicated construction and control systems, and thus can be applied in newly developed as well as in existing urban drainage systems; the application of vacuum in underground detention facilities makes it possible to increase the storage capacity of existing reservoirs by using the space above the maximum depth, and the possible increase in storage capacity can reach even a few dozen percent at relatively low investment costs; and vacuum-driven storage tanks can be modelled in existing simulation software (i.e. SWMM) using options intended for pumping stations (including control and action rules).
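Mimicking the vacuum phases with SWMM5 control rules might look like the fragment below. The element names and depth thresholds are hypothetical, not taken from the paper; the fragment only shows the rule syntax of the [CONTROLS] section used to switch a pump standing in for the vacuum system.

```
[CONTROLS]
RULE VACUUM_ON
IF NODE DET_TANK DEPTH > 2.5
THEN PUMP VAC_PUMP STATUS = ON

RULE VACUUM_OFF
IF NODE DET_TANK DEPTH < 0.5
THEN PUMP VAC_PUMP STATUS = OFF
```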
Abstract:
As the service life of a water supply network (WSN) grows, the phenomenon of an aging pipe network becomes increasingly serious. Since an urban water supply network is a hidden underground asset, it is difficult for monitoring staff to classify the faults of the pipe network directly by means of modern detection technology. In this paper, based on the basic property data (e.g. diameter, material, pressure, distance to pump, distance to tank, load, etc.) of a water supply network, the decision tree algorithm C4.5 has been applied to classify the specific condition of water supply pipelines. Part of the historical data was used to build a decision tree classification model, and the remaining historical data was used to validate the established model. Statistical methods were used to assess the decision tree model, including basic statistics, Receiver Operating Characteristic (ROC) and Recall-Precision Curves (RPC). These methods have been successfully used to assess the accuracy of the established classification model of the water pipe network. The purpose of the classification model is to classify the specific condition of the water pipe network. It is important to maintain the pipelines according to the classification results, which comprise asset unserviceable (AU), near perfect condition (NPC) and serious deterioration (SD). Finally, this research focused on pipe classification, which plays a significant role in maintaining water supply networks in the future.
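C4.5 chooses splits by gain ratio: information gain normalized by the split information of the attribute. A minimal sketch on invented pipe records follows (the attribute values are hypothetical; AU/NPC/SD are the condition classes named in the abstract):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, attr, label):
    """C4.5 splitting criterion: information gain of `attr` divided by
    its split information."""
    n = len(rows)
    groups = {}
    for r in rows:
        groups.setdefault(r[attr], []).append(r[label])
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy([r[label] for r in rows]) - remainder
    split_info = entropy([r[attr] for r in rows])
    return gain / split_info if split_info else 0.0

# invented pipe records; AU / NPC / SD are the condition classes from the paper
pipes = [
    {'material': 'cast_iron', 'age': 'old', 'cond': 'SD'},
    {'material': 'cast_iron', 'age': 'old', 'cond': 'AU'},
    {'material': 'PVC',       'age': 'new', 'cond': 'NPC'},
    {'material': 'PVC',       'age': 'new', 'cond': 'NPC'},
    {'material': 'cast_iron', 'age': 'new', 'cond': 'NPC'},
    {'material': 'PVC',       'age': 'old', 'cond': 'SD'},
]
```

On this toy sample, `age` separates the classes better than `material`, so C4.5 would split on it first; a full implementation then recurses on each subset and prunes.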
Abstract:
Driven by Web 2.0 technology and the almost ubiquitous presence of mobile devices, Volunteered Geographic Information (VGI) is experiencing unprecedented growth. These notable technological advancements have opened fruitful perspectives also in the field of water management and protection, raising the demand for a reconsideration of policies that also takes into account the emerging trend of VGI. This research investigates the opportunity of leveraging such technology to involve citizens equipped with common mobile devices (e.g. tablets and smartphones) in a campaign for reporting water-related phenomena. The work is carried out in collaboration with ADBPO - Autorità di bacino del fiume Po (Po river basin Authority), i.e. the entity responsible for the environmental planning and protection of the basin of the river Po. This is the longest Italian river, spreading over eight of the twenty Italian regions and characterized by complex environmental issues. To enrich the ADBPO official database with user-generated content, a FOSS (Free and Open Source Software) architecture was designed which allows not only field data collection by users, but also data publication on the Web through standard protocols. The Open Data Kit suite allows users to collect georeferenced multimedia information using mobile devices equipped with location sensors (e.g. GPS). Users can report a number of environmental emergencies, problems or simple points of interest related to the Po river basin, taking pictures of them and providing other contextual information. Field-registered data are sent to a server and stored in a PostgreSQL database with the PostGIS spatial extension. GeoServer then provides data dissemination on the Web, while specific OpenLayers-based viewers were built to optimize data access on both desktop computers and mobile devices.
Besides proving the suitability of FOSS in the frame of VGI, the system represents a successful prototype for the exploitation of users' local, real-time information aimed at managing and protecting water resources.
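A PostGIS table for such georeferenced reports might look like the following. The schema (table and column names, categories) is an assumption for illustration, not ADBPO's actual database design; only the PostGIS constructs (the `geometry` typmod and the GiST spatial index) are standard.

```sql
-- Requires the spatial extension: CREATE EXTENSION postgis;
CREATE TABLE po_reports (
    id          serial PRIMARY KEY,
    category    text NOT NULL,         -- e.g. 'pollution', 'bank erosion'
    note        text,
    photo_url   text,                  -- picture taken with the mobile device
    reported_at timestamptz DEFAULT now(),
    geom        geometry(Point, 4326)  -- WGS84 position from the device GPS
);
CREATE INDEX po_reports_geom_idx ON po_reports USING gist (geom);
```

GeoServer can publish such a table directly as a WMS/WFS layer, which is how the stored reports would reach the OpenLayers viewers.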
Abstract:
Due to the increase in water demand and hydropower energy, it is becoming more important to operate hydraulic structures in an efficient manner while sustaining multiple demands. In particular, companies, governmental agencies and consulting offices require effective, practical integrated tools and decision support frameworks to operate reservoirs, cascades of run-of-river plants and related elements such as canals, by merging hydrological and reservoir simulation/optimization models with various numerical weather predictions and radar and satellite data. Model performance is highly related to the streamflow forecast, the related uncertainty and its consideration in decision making. While deterministic weather predictions and the corresponding streamflow forecasts restrict the manager to single deterministic trajectories, probabilistic forecasts can be a key solution by including uncertainty in flow forecast scenarios for dam operation. The objective of this study is to compare deterministic and probabilistic streamflow forecasts on a previously developed basin/reservoir model for short-term reservoir management. The study is applied to the Yuvacık Reservoir and its upstream basin, which is the main water supply of Kocaeli City, located in the northwestern part of Turkey. The reservoir represents a typical example owing to its limited capacity, downstream channel restrictions and high snowmelt potential. Mesoscale Model 5 and Ensemble Prediction System data are used as the main input, and the flow forecasts are produced for the year 2012 using HEC-HMS. A hydrometeorological rule-based reservoir simulation model is built with HEC-ResSim and integrated with the forecasts. Since the EPS-based hydrological model produces a large number of equally probable scenarios, it indicates how uncertainty spreads into the future. Thus, compared with the deterministic approach, it provides the operator with risk ranges in terms of spillway discharges and reservoir level.
The framework is fully data-driven, applicable and useful to the profession, and the knowledge can be transferred to other, similar reservoir systems.
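Condensing equally probable ensemble scenarios into operator-facing risk ranges can be sketched as follows. This is a toy illustration with invented numbers, not output of the HEC-HMS/HEC-ResSim chain, and the nearest-rank percentile is an assumed convention.

```python
def risk_range(ensemble, low=10, high=90):
    """Condense equally probable streamflow scenarios into a risk band:
    the (low, high) percentiles of peak discharge across ensemble members."""
    peaks = sorted(max(member) for member in ensemble)

    def pct(p):
        # nearest-rank percentile over the sorted peak discharges
        k = min(len(peaks) - 1, int(p / 100 * len(peaks)))
        return peaks[k]

    return pct(low), pct(high)

# ten invented, equally probable inflow scenarios (m3/s over four time steps)
ensemble = [[30 + i, 55 + 4 * i, 48 + 2 * i, 35] for i in range(10)]
lo, hi = risk_range(ensemble)
```

The same reduction can be applied to simulated spillway discharge or reservoir level, giving the operator a band instead of the single trajectory a deterministic forecast provides.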
Abstract:
Information Technology (IT) is a concept that has gained importance for organizations. It is expected that the strategic use of IT not only sustains the business operations of enterprises, but mainly leverages the initiative of new competitive strategies. However, these expectations about the gains from IT have not been met, and questions arise about the return on investments in IT. One of the causes is credited to the lack of alignment between business and IT strategies. The search for strategic alignment between IT and business leads to the need to measure it. This assessment can help identify whether the perceptions of business executives and IT executives about the strategic alignment of IT are similar or different. The objective of this work is to investigate the perceptions of business executives and IT executives regarding the IT strategic alignment implemented in a selected organization. A case study was conducted in a company that provides services to the financial market. As a result, this work identified that there is no statistically significant difference between the perceptions of business executives and IT executives regarding the level of IT strategic alignment maturity implemented in the organization, and it highlighted factors that promote this alignment: (a) senior management supports IT, (b) IT takes part in strategic planning, (c) IT understands the business of the company, and (d) there is a partnership between business and IT executives. Additionally, it was proposed that these similar perceptions result from the sharing of assumptions, knowledge and common expectations for IT strategic alignment between the two groups of executives interviewed, which led the company to achieve a higher level of IT strategic alignment. Each practice of strategic alignment was examined separately.
Although there were no statistically significant differences between the perceptions of business executives and IT executives overall, the practices of Communication, Measures of Value and Competence, and Skills were assessed more highly by business executives, while the practices of Governance and Partnerships were perceived more highly by IT executives. The practice of Scope and Architecture, as well as the overall IT strategic alignment, showed no differences in perception between the two groups of executives.