998 results for Variability Modeling


Relevance: 100.00%

Abstract:

American tegumentary leishmaniasis (ATL) is a disease transmitted to humans by female sandflies of the genus Lutzomyia. Several factors are involved in the disease transmission cycle. In this work, only rainfall and deforestation were considered to assess the variability in the incidence of ATL. To reach this goal, monthly records of the incidence of ATL in Orán, Salta, Argentina, for the period 1985-2007 were used. The square root of the relative incidence of ATL and the corresponding variance were formulated as time series, and these data were smoothed by moving averages of 12 and 24 months, respectively. The same procedure was applied to the rainfall data. Typical months (April, August, and December) were identified, which allowed us to describe the dynamical behavior of ATL outbreaks. These results were tested at the 95% confidence level. We conclude that the variability of rainfall alone would not be enough to explain the epidemic outbreaks of ATL in the period 1997-2000, but it consistently explains the situation observed in the years 2002 and 2004. Deforestation activities carried out in this region could explain the epidemic peaks observed in both years, as well as during the entire observation period except 2005-2007.
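
As an illustration of the smoothing procedure described in this abstract, the following Python sketch applies a 12-month moving average to the square root of a synthetic relative-incidence series (not the Orán data) and a 24-month moving average to its variance; the rolling-variance window is an added assumption.

    import numpy as np
    import pandas as pd

    # Hypothetical monthly ATL case counts and population (placeholders).
    rng = np.random.default_rng(0)
    months = pd.date_range("1985-01", "2007-12", freq="MS")
    cases = pd.Series(rng.poisson(5, len(months)), index=months)
    population = 60_000

    relative_incidence = cases / population
    y = np.sqrt(relative_incidence)          # square root of relative incidence

    # 12-month moving average of the transformed series; 24-month moving
    # average of its rolling variance (the variance window is an assumption).
    y_ma12 = y.rolling(window=12, center=True).mean()
    var_ma24 = (y.rolling(window=12, center=True).var()
                 .rolling(window=24, center=True).mean())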

Relevance: 100.00%

Abstract:

The usual way of modeling variability, via threshold voltage shift and drain current amplification, is becoming inaccurate as new sources of variability appear in sub-22nm devices. In this work we apply the four-injector approach for variability modeling to the simulation of SRAMs with predictive technology models from the 20nm down to the 7nm node. We show that SRAMs designed following the ITRS roadmap present stability metrics at least 20% higher than those obtained with a classical variability modeling approach. Speed estimation is likewise pessimistic, whereas leakage is underestimated if sub-threshold slope and DIBL mismatch, and their correlations with threshold voltage, are not considered.
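
To illustrate why ignoring sub-threshold slope (SS) and DIBL mismatch, and their correlations with threshold voltage (Vth), underestimates leakage, here is a minimal Monte Carlo sketch using the textbook sub-threshold form I_off ~ I0 * 10^((-Vth + DIBL*Vdd)/SS); the nominal values, sigmas, and correlations are illustrative assumptions, not the paper's predictive technology models.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    vdd = 0.7                                   # supply voltage [V] (placeholder)
    mean = np.array([0.30, 0.080, 0.10])        # Vth [V], SS [V/decade], DIBL [V/V]
    sigma = np.array([0.030, 0.006, 0.015])     # mismatch sigmas (placeholders)
    corr = np.array([[ 1.0, -0.6, -0.5],        # assumed correlation structure
                     [-0.6,  1.0,  0.4],
                     [-0.5,  0.4,  1.0]])
    cov = np.outer(sigma, sigma) * corr
    vth, ss, dibl = rng.multivariate_normal(mean, cov, n).T

    def leakage(vth, ss, dibl, i0=1e-7):
        # I_off ~ I0 * 10**((-Vth + DIBL*Vdd) / SS)
        return i0 * 10.0 ** ((-vth + dibl * vdd) / ss)

    full = leakage(vth, ss, dibl).mean()
    vth_only = leakage(vth, mean[1], mean[2]).mean()   # classical: Vth shift only
    print(f"mean leakage, full model / Vth-only model: {full / vth_only:.2f}")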

Relevance: 70.00%

Abstract:

Nowadays, Software Product Line (SPL) engineering [1] has been widely adopted in software development due to the significant improvements it provides, such as reduced cost and time-to-market and the flexibility to respond to planned changes [2]. SPL engineering takes advantage of features common to the products of a family through systematic reuse of core assets and effective management of the variabilities across products. SPL features are realized at the architectural level in product-line architecture (PLA) models. Therefore, suitable modeling and specification techniques are required to model variability. In fact, architectural variability modeling has become a challenge for SPL engineering, because PLA modeling requires modeling variability not only at the level of the external architecture configuration (see the literature reviews [3,4]) but also at the level of the internal specification of components [5]. In addition, PLA modeling requires preserving the traceability between features and PLAs. Finally, PLA modeling should guide architects in modeling the PLA core assets and variability, and in deriving the customized products. To address these needs, we present in this demonstration the FPLA Modeling Framework.
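
As a generic illustration of the PLA concepts listed above (features, internal variation points of components, feature-to-architecture traceability, and product derivation), here is a minimal Python sketch; it does not reproduce FPLA's actual metamodel, and all names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Feature:
        name: str
        mandatory: bool = True

    @dataclass
    class Component:
        name: str
        variation_points: dict = field(default_factory=dict)  # vp -> alternatives

    @dataclass
    class ProductLineArchitecture:
        features: list
        components: list
        # traceability: feature -> [(component, variation point, chosen alternative)]
        trace: dict = field(default_factory=dict)

        def derive(self, selected):
            """Resolve internal variation points for a feature selection."""
            for f in self.features:
                if f.mandatory and f.name not in selected:
                    raise ValueError(f"mandatory feature {f.name} not selected")
            config = {}
            for feat in selected:
                for comp, vp, choice in self.trace.get(feat, []):
                    config[(comp, vp)] = choice
            return config

    pla = ProductLineArchitecture(
        features=[Feature("Base"), Feature("Premium", mandatory=False)],
        components=[Component("Renderer", {"codec": ["h264", "av1"]})],
        trace={"Base": [("Renderer", "codec", "h264")],
               "Premium": [("Renderer", "codec", "av1")]},
    )
    print(pla.derive({"Base"}))   # {('Renderer', 'codec'): 'h264'}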

Relevance: 60.00%

Abstract:

Software Product Line (SPL) engineering is a software engineering approach to developing families of software systems that share common features and differ in other features according to the software systems requested. Adopting the SPL approach can yield several benefits, such as cost reduction, higher product quality, improved productivity, and shorter time to market. On the other hand, the SPL approach brings new challenges to software evolution that must be considered. Recent research has explored and proposed automated approaches, based on code analysis and traceability techniques, for change impact analysis in the context of SPL development. These approaches have limitations, such as the difficulty of customizing the analysis functionalities to address different change impact analysis strategies and of analyzing the impact of changes on fine-grained variability. This dissertation proposes a change impact analysis tool for SPL development, called Squid Impact Analyzer. The tool performs change impact analysis based on information from variability modeling, the mapping of variability to code assets, and the existing dependency relationships between code assets. The tool is assessed through an experiment that compares its change impact analysis results with real changes applied across several evolution releases of an SPL for media management on mobile devices.
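
A minimal sketch of the kind of change impact analysis described: starting from the code assets mapped to a changed feature, impacted assets are collected by traversing reverse dependency edges. The data structures and file names below are hypothetical, not taken from Squid Impact Analyzer.

    from collections import deque

    # feature -> code assets realizing it (variability-to-code mapping)
    feature_map = {"MediaCapture": {"camera.py"}, "Sharing": {"upload.py"}}

    # asset -> assets that depend on it (reverse dependency edges)
    reverse_deps = {
        "camera.py": {"gallery.py"},
        "gallery.py": {"ui.py"},
        "upload.py": {"ui.py"},
    }

    def impacted_assets(changed_feature):
        """BFS from a changed feature's assets along reverse dependencies."""
        frontier = deque(feature_map.get(changed_feature, ()))
        impacted = set(frontier)
        while frontier:
            asset = frontier.popleft()
            for dependent in reverse_deps.get(asset, ()):
                if dependent not in impacted:
                    impacted.add(dependent)
                    frontier.append(dependent)
        return impacted

    print(impacted_assets("MediaCapture"))  # {'camera.py', 'gallery.py', 'ui.py'}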

Relevance: 60.00%

Abstract:

The aim of our work is to present solutions and methodical support for automated techniques and procedures in domain engineering, in particular for variability modeling. Our approach is based on semantic modeling concepts, for which semantic descriptions, representation patterns, and inference mechanisms are defined. Model-driven techniques enriched with semantics thus allow flexibility and variability in the means of representation, the reasoning power, and the analysis depth required for the identification, interpretation, and adaptation of artifact properties and qualities.

Relevance: 60.00%

Abstract:

One of the most important challenges of this decade is the Internet of Things (IoT), which pursues the integration of real-world objects into the Internet. One of the key areas of the IoT is Ambient Assisted Living (AAL) systems, which should be able to react to variable and continuous changes while ensuring their acceptance and adoption by users. This means that AAL systems need to work as self-adaptive systems. The autonomy inherent to software agents makes them a suitable choice for developing self-adaptive systems. However, agents lack mechanisms to deal with the variability present in the IoT domain with regard to devices and network technologies. To overcome this limitation, we have previously proposed a Software Product Line (SPL) process for the development of self-adaptive agents for the IoT. Here we analyze the challenges posed by the development of agent-based self-adaptive AAL systems. To do so, we focus on the domain and application engineering of the self-adaptation concern of our SPL process. In addition, we provide a validation of our development process for AAL systems.

Relevance: 40.00%

Abstract:

In the context of investigating the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study examines the variability of scores from an AFIS when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios (LRs), that allows the evaluation of mark-to-print comparisons. Through its use of AFIS technology, this model benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR, which we refer to by the generic term between-finger variability. The issues addressed in relation to between-finger variability are the required sample size, the influence of the finger number and general pattern, and that of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination, as defined by the mark. Furthermore, strategies are presented for obtaining between-finger variability when these elements cannot be conclusively seen on the mark (or, for finger number, inferred from its position with respect to other marks). These results immediately allow case-by-case estimation of between-finger variability in an operational setting.
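
A minimal sketch of assigning the LR denominator from AFIS scores, assuming kernel density estimates over same-source and non-matching score samples; the score distributions and values below are synthetic placeholders, not casework data.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)

    # Synthetic stand-ins for AFIS scores; shapes and sizes are arbitrary.
    non_match = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)   # denominator data
    same_source = rng.lognormal(mean=5.0, sigma=0.4, size=2_000)  # numerator data

    f_num = gaussian_kde(same_source)
    f_den = gaussian_kde(non_match)        # between-finger variability density

    score = 120.0                          # score of the questioned mark-to-print pair
    lr = f_num(score)[0] / f_den(score)[0]
    print(f"LR at score {score}: {lr:.1f}")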

Relevance: 40.00%

Abstract:

Compartmental and physiologically based toxicokinetic modeling, coupled with Monte Carlo simulation, was used to quantify the impact of biological variability (physiological, biochemical, and anatomical parameters) on the values of a series of bio-indicators of exposure to metals and organic industrial chemicals. A variability extent index and the main parameters affecting the biological indicators were identified. Results show a large diversity in interindividual variability across the different categories of biological indicators examined. Measurement of the unchanged substance in blood, alveolar air, or urine is much less variable than measurement of metabolites, whether in blood or urine. In most cases, alveolar flow and cardiac output were identified as the prime parameters determining biological variability, suggesting the importance of workload intensity for the absorbed dose of inhaled chemicals.
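
A minimal sketch of the Monte Carlo approach, assuming a crude steady-state uptake/clearance balance for an inhaled chemical; the parameter distributions and the P95/P5 variability index are illustrative assumptions, not the study's models.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 50_000

    # Variable physiological/biochemical parameters (log-normal; CVs are placeholders).
    alveolar_flow = rng.lognormal(np.log(7.5), 0.20, n)   # L/min
    clearance = rng.lognormal(np.log(0.10), 0.30, n)      # L/min

    c_air, absorbed_fraction = 50.0, 0.5                  # exposure scenario (placeholders)

    uptake = alveolar_flow * c_air * absorbed_fraction    # amount taken up per minute
    c_blood_ss = uptake / clearance                       # steady-state blood level

    p5, p95 = np.percentile(c_blood_ss, [5, 95])
    print(f"variability extent (P95/P5): {p95 / p5:.2f}")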

Relevance: 40.00%

Abstract:

Optical density measurements were used to estimate the effect of heat treatments on the single-cell lag times of Listeria innocua, fitted to a shifted gamma distribution. The single-cell lag time was subdivided into a repair time (the shift of the distribution, assumed to be uniform for all cells) and an adjustment time (varying randomly from cell to cell). After heat treatments in which all of the cells recovered (sublethal), the repair time and the mean and variance of the single-cell adjustment time increased with the severity of the treatment. When the heat treatments resulted in a loss of viability (lethal), the repair time of the survivors increased with the decimal reduction of the cell numbers, independently of the temperature, while the mean and variance of the single-cell adjustment times remained the same irrespective of the heat treatment. Based on these observations and on modeling of the effect of the time and temperature of the heat treatment, we propose that the severity of a heat treatment can be characterized by the repair time of the cells whether the heat treatment is lethal or not, an extension of the F-value concept to sublethal heat treatments. In addition, the repair time can be interpreted as the extent or degree of injury under a multiple-hit lethality model. Another implication of these results is that the distribution of the time for cells to reach unacceptable numbers in food is not affected by the particular time-temperature combination producing a given decimal reduction.
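
A minimal sketch of the shifted gamma model of single-cell lag time described above, with the repair time as the uniform shift and a gamma-distributed adjustment time; all parameter values are placeholders.

    import numpy as np

    rng = np.random.default_rng(4)

    repair_time = 2.0           # h: the shift, common to all cells; grows with severity
    shape, scale = 3.0, 1.5     # gamma parameters of the adjustment time (placeholders)

    # lag = repair time + gamma-distributed adjustment time
    lags = repair_time + rng.gamma(shape, scale, 10_000)
    print(f"mean lag: {lags.mean():.2f} h, variance: {lags.var():.2f} h^2")
    # Per the abstract, lethal treatments change only repair_time (with the
    # decimal reduction), while the adjustment-time distribution stays the same.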

Relevance: 40.00%

Abstract:

The water stored in and flowing through the subsurface is fundamental for sustaining human activities and needs, feeding water and its constituents to surface water bodies and supporting the functioning of their ecosystems. Quantifying the changes that affect subsurface water is crucial for our understanding of its dynamics and of changes driven by climate change and other changes in the landscape, such as changes in land use and water use. It is inherently difficult to directly measure soil moisture and groundwater levels over large spatial scales and long time periods. Models are therefore needed to capture soil moisture and groundwater level dynamics over such large spatiotemporal scales. This thesis develops a modeling framework that allows long-term, catchment-scale screening of soil moisture and groundwater level changes. The novelty of this development resides in the explicit link drawn between catchment-scale hydroclimatic and soil hydraulic conditions, using observed runoff data as an approximation of the soil water flux and accounting for the effects of snow storage-melt dynamics on that flux. Both past and future relative changes can be assessed with this modeling framework, with future change projections based on common climate model outputs. By direct model-observation comparison, the thesis shows that the developed framework can reproduce the temporal variability of large-scale changes in soil water storage, as obtained from the GRACE satellite product, for most of the 25 large study catchments around the world. Compared with locally measured soil water content and groundwater levels in 10 U.S. catchments, the approach also reproduces reasonably well the relative seasonal fluctuations around long-term average values. The framework is further used to project soil moisture changes under expected future climate change for 81 catchments around the world. The projected changes depend on the radiative forcing scenario (RCP) considered but are overall large for the occurrence frequency of dry and wet events and for the inter-annual variability of seasonal soil moisture. These changes tend to be larger for dry events and the dry season than for the corresponding wet quantities, indicating increased drought risk for some parts of the world.
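
As a rough illustration of a catchment-scale screening model with snow storage-melt dynamics, here is a minimal daily bucket water balance in Python; it is a sketch under simple degree-day and storage-capacity assumptions, not the thesis's actual formulation.

    import numpy as np

    def water_balance(precip, temp, pet, ddf=3.0, smax=150.0):
        """Daily snow + soil-moisture bucket; returns the soil storage series [mm]."""
        snow, soil = 0.0, smax / 2.0
        out = np.empty(len(precip))
        for i, (p, t, e) in enumerate(zip(precip, temp, pet)):
            snowfall = p if t < 0.0 else 0.0       # precipitation falls as snow below 0 C
            melt = min(snow, max(0.0, ddf * t))    # degree-day snowmelt
            snow += snowfall - melt
            soil += (p - snowfall) + melt - min(e, soil)   # ET limited by storage
            soil = min(soil, smax)                 # excess above capacity -> runoff
            out[i] = soil
        return out

    rng = np.random.default_rng(5)
    days = np.arange(3 * 365)
    precip = rng.exponential(2.0, days.size)       # mm/day (synthetic forcing)
    temp = 10.0 * np.sin(2.0 * np.pi * days / 365.0) + rng.normal(0.0, 3.0, days.size)
    soil_series = water_balance(precip, temp, np.full(days.size, 2.0))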

Relevance: 30.00%

Abstract:

Below-cloud scavenging processes were investigated by means of a numerical simulation together with local atmospheric conditions and particulate matter (PM) concentrations at different sites in Germany. The below-cloud scavenging model was coupled with a bulk particulate matter counter (TSI Portacounter) dataset to predict the variability of particulate air concentrations during selected rain events. The TSI samples and meteorological parameters were obtained during three winter campaigns: at Deuselbach in March 1994 (comprising three different events), at Sylt in April 1994, and at Freiburg in March 1995. The results show good agreement between modeled and observed air concentrations, underlining the quality of the conceptual model used in the below-cloud scavenging numerical modeling. Modeled and observed data also showed significant squared Pearson correlation coefficients above 0.7, except for the Freiburg campaign event. The differences between the numerical simulations and the observed dataset are explained by changes in wind direction and, possibly, by the absence of mass advection terms in the model. These results validate previous work based on the same conceptual model.
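
A minimal sketch of the below-cloud scavenging concept, assuming exponential washout dC/dt = -Lambda*C with a power-law scavenging coefficient Lambda = a*R^b; the coefficients are generic placeholders, not the study's fitted values.

    import numpy as np

    def scavenged_concentration(c0, rain_rate, hours, a=1e-4, b=0.6):
        """Exponential washout with Lambda = a * R**b [1/s] (assumed power law)."""
        lam = a * rain_rate ** b               # scavenging coefficient [1/s]
        return c0 * np.exp(-lam * hours * 3600.0)

    c0 = 40.0                                  # initial PM concentration [ug/m3]
    for rate in (1.0, 5.0, 10.0):              # rain intensities [mm/h]
        print(rate, round(scavenged_concentration(c0, rate, hours=2.0), 2))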