958 results for Unobserved-component model
Abstract:
In recent years, the paradigm of component-based software engineering has become established in the construction of complex mission-critical systems. Due to this trend, there is a practical need for techniques that evaluate critical properties (such as safety, reliability, availability, or performance) of these systems. In this paper, we review several high-level techniques for the evaluation of safety properties of component-based systems, and we propose a new evaluation model (State Event Fault Trees) that extends safety analysis towards a lower abstraction level. This model possesses a state-event semantics and strong encapsulation, which is especially useful for the evaluation of component-based software systems. Finally, we compare the techniques and give suggestions for their combined usage.
Abstract:
Engineering adaptive software is an increasingly complex task. Here, we demonstrate Genie, a tool that supports the modelling, generation, and operation of highly reconfigurable, component-based systems. We showcase how Genie is used in two case studies: i) the development and operation of an adaptive flood warning system, and ii) a service discovery application. In this context, adaptation is enabled by the Gridkit reflective middleware platform.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015
Abstract:
A new mesoscale simulation model for solids dissolution, based on a computationally efficient and versatile digital modelling approach (DigiDiss), is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, its use is explored for single crystals (sugars) and clusters. The single crystals and clusters were first scanned using X-ray microtomography to obtain a digital version of their structures, and the digitised particles and clusters were used as a structural input to the digital simulation. The same particles were then dissolved in water, and the dissolution process was recorded by a video camera and analysed, yielding the overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce experimental behaviour, based on the known chemical and diffusion properties of the constituent phases. The paper discusses how further development of the modelling approach will need to include other important effects, such as complex disintegration effects (particle ejection, uncertainties in chemical properties). The nature of the digital modelling approach is well suited for future implementation with high-speed computation using hybrid conventional (CPU) and graphics processor (GPU) systems.
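The abstract does not detail DigiDiss's numerics, but the general idea of voxel-based dissolution can be sketched with a simple diffusion-dissolution update on a 2D grid. All parameters and the update rule below are illustrative, not the model's:

```python
import numpy as np

def dissolve(solid, steps=200, D=0.2, k=0.05, c_sat=1.0):
    """Toy 2D diffusion-dissolution update on a voxel grid.

    solid : 2D float array, remaining solid mass per voxel.
    The liquid concentration c diffuses (explicit 5-point Laplacian,
    periodic boundaries via np.roll); solid voxels release mass at a
    rate proportional to the local undersaturation (c_sat - c).
    """
    c = np.zeros_like(solid)
    for _ in range(steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c)
        c += D * lap                      # explicit step, stable for D <= 0.25
        rate = k * (c_sat - c) * (solid > 0)
        rate = np.minimum(rate, solid)    # cannot release more than is left
        solid = solid - rate
        c += rate
    return solid, c
```

Mass moved from solid to liquid is conserved by construction, which is a convenient sanity check for this class of schemes.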
Abstract:
Motivated by environmental protection concerns, monitoring the flue gas of thermal power plants is now often mandatory, due to the need to ensure that emission levels stay within safe limits. Optical gas sensing systems are increasingly employed for this purpose, with regression techniques used to relate gas optical absorption spectra to the concentrations of specific gas components of interest (NOx, SO2, etc.). Accurately predicting gas concentrations from absorption spectra remains a challenging problem due to the presence of nonlinearities in the relationships and the high-dimensional and correlated nature of the spectral data. This article proposes a generalized fuzzy linguistic model (GFLM) to address this challenge. The GFLM is made up of a series of "If-Then" fuzzy rules. The absorption spectra are the input variables in the rule antecedents, and each rule consequent is a general nonlinear polynomial function of the absorption spectra. Model parameters are estimated using least squares and gradient descent optimization algorithms. The performance of the GFLM is compared with other traditional prediction models, such as partial least squares, support vector machines, multilayer perceptron neural networks, and radial basis function networks, on two real flue gas spectral datasets: one from a coal-fired power plant and one from a gas-fired power plant. The experimental results show that the generalized fuzzy linguistic model has good predictive ability and is competitive with the alternative approaches, while having the added advantage of providing an interpretable model.
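The abstract describes the GFLM only at a high level. A minimal Takagi-Sugeno-style sketch conveys the rule structure it implies: Gaussian antecedent memberships over the inputs and per-rule consequents fitted globally by least squares. The GFLM itself uses general polynomial consequents and also gradient descent; the affine consequents, rule centers, and demo target below are illustrative:

```python
import numpy as np

def memberships(x, centers, widths):
    """Normalized firing strength of each rule (Gaussian antecedents).
    x: (n, d); centers, widths: (r, d) -> (n, r), rows summing to 1."""
    d2 = (((x[:, None, :] - centers[None, :, :]) / widths[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-0.5 * d2)
    return w / w.sum(axis=1, keepdims=True)

def design(x, w):
    """Stack each rule's membership-weighted affine regressors
    into one global design matrix of shape (n, r * (d + 1))."""
    phi = np.hstack([x, np.ones((len(x), 1))])
    return (w[:, :, None] * phi[:, None, :]).reshape(len(x), -1)

def fit(x, y, centers, widths):
    """Least-squares fit of all rule consequents at once."""
    A = design(x, memberships(x, centers, widths))
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta

def predict(x, centers, widths, theta):
    return design(x, memberships(x, centers, widths)) @ theta

# one-dimensional demo: three rules approximating a nonlinear target
x = np.linspace(-1.0, 1.0, 50)[:, None]
y = (x ** 2).ravel()
centers = np.array([[-1.0], [0.0], [1.0]])
widths = np.full((3, 1), 0.5)
theta = fit(x, y, centers, widths)
pred = predict(x, centers, widths, theta)
```

Because the consequents enter linearly, the global least-squares step is exact; gradient descent would only be needed for tuning the antecedent centers and widths.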
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation to recover independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery, the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In the simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not correctly unmix all sources, a conclusion based on a study of the mutual information. Nevertheless, some sources may be well separated, particularly when the number of sources is large and the signal-to-noise ratio (SNR) is high.
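The effect of the sum-to-one constraint can be seen in a small NumPy sketch: Dirichlet abundances (which sum to one and are therefore dependent) make the mixed data rank-deficient, so after whitening, ICA can recover at most p - 1 components from p sources. The band count and mixing matrix below are hypothetical, and the FastICA implementation is a generic one, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Abundance fractions that sum to one (the physical constraint noted in
# the abstract): Dirichlet samples are necessarily dependent sources.
S = rng.dirichlet([1.0, 1.0, 1.0], size=5000).T     # 3 sources, 5000 pixels
A = rng.uniform(0.2, 1.0, size=(4, 3))              # hypothetical 4-band mixing
X = A @ S                                           # observed "spectra"

def fastica(X, n_iter=200):
    """Minimal symmetric FastICA (tanh nonlinearity).

    Whitening drops near-zero covariance eigenvalues, so the sum-to-one
    constraint leaves only p - 1 = 2 recoverable components here."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    keep = d > 1e-10 * d.max()
    Z = (E[:, keep] / np.sqrt(d[keep])).T @ Xc      # whitened data
    k = Z.shape[0]
    W = np.linalg.qr(rng.normal(size=(k, k)))[0]
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)             # symmetric decorrelation
        W = U @ Vt
    return W @ Z

S_hat = fastica(X)   # only 2 of the 3 dependent sources can come back
```

The recovered components are mutually decorrelated by construction, but they cannot all match the three true abundances, which is the paper's central observation.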
Abstract:
The Ocean Model Intercomparison Project (OMIP) aims to provide a framework for evaluating, understanding, and improving the ocean and sea-ice components of global climate and earth system models contributing to the Coupled Model Intercomparison Project Phase 6 (CMIP6). OMIP addresses these aims in two complementary manners: (A) by providing an experimental protocol for global ocean/sea-ice models run with a prescribed atmospheric forcing, and (B) by providing a protocol for ocean diagnostics to be saved as part of CMIP6. We focus here on the physical component of OMIP, with a companion paper (Orr et al., 2016) offering details for the inert chemistry and interactive biogeochemistry. The physical portion of the OMIP experimental protocol follows that of the interannual Coordinated Ocean-ice Reference Experiments (CORE-II). Since 2009, CORE-I (Normal Year Forcing) and CORE-II have become the standard methods used to evaluate global ocean/sea-ice simulations and to examine mechanisms for forced ocean climate variability. The OMIP diagnostic protocol is relevant for any ocean model component of CMIP6, including the DECK (Diagnostic, Evaluation and Characterization of Klima experiments), historical simulations, FAFMIP (Flux Anomaly Forced MIP), C4MIP (Coupled Carbon Cycle Climate MIP), DAMIP (Detection and Attribution MIP), DCPP (Decadal Climate Prediction Project), ScenarioMIP (Scenario MIP), as well as the ocean/sea-ice OMIP simulations. The bulk of this paper offers scientific rationale for saving these diagnostics.
Abstract:
The Ocean Model Intercomparison Project (OMIP) is an endorsed project in the Coupled Model Intercomparison Project Phase 6 (CMIP6). OMIP addresses CMIP6 science questions, investigating the origins and consequences of systematic model biases. It does so by providing a framework for evaluating (including assessment of systematic biases), understanding, and improving ocean, sea-ice, tracer, and biogeochemical components of climate and earth system models contributing to CMIP6. Among the WCRP Grand Challenges in climate science (GCs), OMIP primarily contributes to the regional sea level change and near-term (climate/decadal) prediction GCs. OMIP provides (a) an experimental protocol for global ocean/sea-ice models run with a prescribed atmospheric forcing; and (b) a protocol for ocean diagnostics to be saved as part of CMIP6. We focus here on the physical component of OMIP, with a companion paper (Orr et al., 2016) detailing methods for the inert chemistry and interactive biogeochemistry. The physical portion of the OMIP experimental protocol follows the interannual Coordinated Ocean-ice Reference Experiments (CORE-II). Since 2009, CORE-I (Normal Year Forcing) and CORE-II (Interannual Forcing) have become the standard methods to evaluate global ocean/sea-ice simulations and to examine mechanisms for forced ocean climate variability. The OMIP diagnostic protocol is relevant for any ocean model component of CMIP6, including the DECK (Diagnostic, Evaluation and Characterization of Klima experiments), historical simulations, FAFMIP (Flux Anomaly Forced MIP), C4MIP (Coupled Carbon Cycle Climate MIP), DAMIP (Detection and Attribution MIP), DCPP (Decadal Climate Prediction Project), ScenarioMIP, HighResMIP (High Resolution MIP), as well as the ocean/sea-ice OMIP simulations.
Abstract:
The goal of this project is to learn the necessary steps to create a finite element model that can accurately predict the dynamic response of a Kohler Engines Heavy Duty Air Cleaner (HDAC). This air cleaner is composed of three glass-reinforced plastic components and two air filters. Several uncertainties arose in the finite element (FE) model due to the HDAC's component material properties and assembly conditions. To help understand and mitigate these uncertainties, analytical and experimental modal models were created concurrently to perform a model correlation and calibration. Over the course of the project, simple and practical methods were found for future FE model creation, and an experimental method for the optimal acquisition of experimental modal data was established. After the model correlation and calibration were performed, a validation experiment was used to confirm the FE model's predictive capabilities.
Abstract:
To characterize the relationship of the recently described SCI1 (stigma/style cell cycle inhibitor 1) gene with the auxin pathway, we took advantage of the Arabidopsis model system and its available tools. First, we analyzed the At1g79200 T-DNA insertion mutants and constructed various transgenic plants. The loss- and gain-of-function plants displayed cell number alterations in upper pistils that were controlled by the amino-terminal domain of the protein. These data also confirmed that this locus holds the functional homolog (AtSCI1) of the Nicotiana tabacum SCI1 gene. We then provide evidence that the auxin synthesis/signaling pathways are required for proper downstream control of cell number by AtSCI1: (a) its expression is downregulated in yuc2yuc6 and npy1 auxin-deficient mutants, (b) triple (yuc2yuc6sci1) and double (npy1sci1) mutants mimicked the auxin-deficient phenotypes, with no synergistic interactions, and (c) the increased upper pistil phenotype of these latter mutants, which is a consequence of an increased cell number, could be complemented by AtSCI1 overexpression. Taken together, our data strongly suggest that SCI1 is a component of the auxin signal transduction pathway controlling cell proliferation/differentiation in the stigma/style, representing a molecular effector of this hormone in pistil development.
Abstract:
Context. The turbulent pumping effect corresponds to the transport of magnetic flux due to the presence of density and turbulence gradients in convectively unstable layers. In the induction equation it appears as an advective term, and for this reason it is expected to be important in solar and stellar dynamo processes. Aims. We explore the effects of turbulent pumping in a flux-dominated Babcock-Leighton solar dynamo model with a solar-like rotation law. Methods. As a first step, only vertical pumping was considered, through the inclusion of a radial diamagnetic term in the induction equation. In a second step, a latitudinal pumping term was included, and then a near-surface shear was added. Results. The results reveal the importance of the pumping mechanism in solving current limitations of mean-field dynamo modeling, such as the storage of the magnetic flux and the latitudinal distribution of the sunspots. If a meridional flow is assumed to be present only in the upper part of the convective zone, it is the full turbulent pumping that regulates both the period of the solar cycle and the latitudinal distribution of sunspot activity. In models that consider shear near the surface, a second shell of toroidal field is generated above r = 0.95 R⊙ at all latitudes. If the full pumping is also included, the polar toroidal fields are efficiently advected inwards, and the toroidal magnetic activity survives only at the observed latitudes near the equator. With regard to the parity of the magnetic field, only models that combine turbulent pumping with near-surface shear always converge to the dipolar parity. Conclusions. This result suggests that, under the Babcock-Leighton approach, the equatorward migration of the observed magnetic activity is governed by the latitudinal pumping of the toroidal magnetic field rather than by a large-scale coherent meridional flow. Our results support the idea that the parity problem is related to the quadrupolar imprint of the meridional flow on the poloidal component of the magnetic field, and that turbulent pumping contributes positively to washing out this imprint.
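For reference, the pumping velocity enters the mean-field induction equation as an extra advective term; this is the standard textbook form, not necessarily the paper's exact formulation:

```latex
% Mean-field induction equation with turbulent pumping velocity \gamma:
\frac{\partial \overline{\mathbf{B}}}{\partial t}
  = \nabla \times \Big[ \big( \overline{\mathbf{U}} + \boldsymbol{\gamma} \big)
      \times \overline{\mathbf{B}}
      + \alpha \overline{\mathbf{B}}
      - \eta_t \, \nabla \times \overline{\mathbf{B}} \Big],
\qquad
\boldsymbol{\gamma} = -\tfrac{1}{2} \nabla \eta_t .
```

Here the radial part of the pumping velocity, proportional to the radial gradient of the turbulent diffusivity, corresponds to the diamagnetic term included first; the latitudinal part corresponds to the pumping component added in the second step.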
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we treated the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure on a mock sample of type Ia supernova observations and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying the procedure to the real data, we show that it determines the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Moreover, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
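The Fisher-matrix/PCA step can be sketched in a few lines of NumPy. The linear operator M below is a hypothetical stand-in for the (linearized) supernova distance-modulus relation, and the bin values parameterize H(z)/H0; bin count, noise level, and the number of retained modes are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: H(z)/H0 parameterized by its value in nz redshift
# bins, observed through a linear operator M (a stand-in for the
# linearized supernova distance-modulus relation) with noise sigma.
nz, nobs, sigma = 8, 200, 0.1
z = np.linspace(0.05, 1.5, nz)
h_true = np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)        # flat-LCDM fiducial
M = rng.uniform(0.0, 1.0, size=(nobs, nz))
d = M @ h_true + rng.normal(0.0, sigma, nobs)

# Fisher matrix of the bin values; its eigenvectors are the principal
# components, ordered by how well the data constrain them.
F = M.T @ M / sigma ** 2
evals, evecs = np.linalg.eigh(F)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# Maximum-likelihood bin estimates, then reconstruction from the
# best-constrained modes only (suppressing the noisiest directions).
h_ml = np.linalg.solve(F, M.T @ d / sigma ** 2)
k = 4
h_pca = evecs[:, :k] @ (evecs[:, :k].T @ h_ml)
```

Truncating to the best-constrained eigenvectors trades a small projection bias for a large variance reduction; the paper's extra step of freeing the high-redshift value of H targets exactly that residual bias.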
Abstract:
Background: The effects of renal denervation on cardiovascular reflexes and markers of nephropathy in diabetic-hypertensive rats have not yet been explored. Aim: To evaluate the effects of renal denervation on nephropathy development mechanisms (blood pressure, cardiovascular autonomic changes, renal GLUT2) in diabetic-hypertensive rats. Methods: Forty-one male spontaneously hypertensive rats (SHR) of ≈250 g were injected with STZ or not; 30 days later, surgical renal denervation (RD) or a sham procedure was performed; 15 days later, glycemia and albuminuria (ELISA) were evaluated. Catheters were implanted into the femoral artery to evaluate arterial pressure (AP) and heart rate variability (spectral analysis) one day later in conscious animals. The animals were then killed, the kidneys removed, and cortical renal GLUT2 quantified (Western blotting). Results: Higher glycemia (p < 0.05) and lower mean AP were observed in diabetic vs. nondiabetic animals (p < 0.05). Heart rate was higher in renal-denervated hypertensive and lower in diabetic-hypertensive rats (384.8 ± 37, 431.3 ± 36, 316.2 ± 5, and 363.8 ± 12 bpm in SHR, RD-SHR, STZ-SHR, and RD-STZ-SHR, respectively). Heart rate variability was higher in renal-denervated diabetic-hypertensive rats (55.75 ± 25.21, 73.40 ± 53.30, and 148.4 ± 93 in RD-SHR, STZ-SHR, and RD-STZ-SHR, respectively; p < 0.05), as was the LF component of AP variability (1.62 ± 0.9, 2.12 ± 0.9, and 7.38 ± 6.5 in RD-SHR, STZ-SHR, and RD-STZ-SHR, respectively; p < 0.05). Renal GLUT2 content was higher in all groups vs. SHR. Conclusions: Renal denervation in diabetic-hypertensive rats improved the previously reduced heart rate variability. GLUT2, equally overexpressed in response to diabetes and to renal denervation, may represent a maximal derangement effect of each condition.