936 results for: repository, process model, version, storage


Relevance: 40.00%

Abstract:

The development of self-adaptive software (SaS) has specific characteristics compared with traditional software, since changes can be incorporated at runtime. Automated processes have been used as a feasible solution for conducting software adaptation at runtime. In parallel, reference models have been used to aggregate knowledge and architectural artifacts, since they capture the essence of systems in specific domains. However, there is currently no reflection-based reference model for the development of SaS. The main contribution of this paper is therefore a reflection-based reference model for developing SaS that must adapt at runtime. To demonstrate the applicability of the model, a case study was conducted, and the results indicate good prospects for the model to contribute efficiently to the SaS area.
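The reflective adaptation idea can be sketched minimally; this is a hypothetical toy (Python introspection standing in for the reflective layer, with invented names like `Sorter` and `adapt`), not the paper's reference model:

```python
# Minimal sketch of reflective runtime adaptation (hypothetical example):
# a meta-level inspects and rewires a base-level component while it runs.

class Sorter:
    """Base level: the current strategy is an ordinary attribute."""
    def __init__(self):
        self.strategy = sorted  # default behavior

    def run(self, data):
        return self.strategy(data)

def adapt(component, attr, new_behavior):
    """Meta level: use reflection (hasattr/setattr) to swap behavior at runtime."""
    assert hasattr(component, attr), f"unknown attribute: {attr}"
    setattr(component, attr, new_behavior)

sorter = Sorter()
before = sorter.run([3, 1, 2])                           # ascending order
adapt(sorter, "strategy", lambda d: sorted(d, reverse=True))
after = sorter.run([3, 1, 2])                            # descending order
```

The point of the sketch is that the adaptation logic never touches the component's source; it only manipulates the component through its reflective interface.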

Relevance: 40.00%

Abstract:

Two versions of the threshold contact process - ordinary and conservative - are studied on a square lattice. In the first, particles are created on active sites, those having at least two nearest-neighbor sites occupied, and are annihilated spontaneously. In the conservative version, a particle jumps from its site to an active site. Mean-field analysis suggests the existence of a first-order phase transition, which is confirmed by Monte Carlo simulations. In the thermodynamic limit, the two versions are found to give the same results. (C) 2012 Elsevier B.V. All rights reserved.
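A Monte Carlo update for the ordinary version can be sketched as follows; the lattice size, rate `lam`, and step count are illustrative choices, not the simulation parameters used in the paper:

```python
# Sketch of the ordinary threshold contact process on an L x L lattice
# with periodic boundaries: creation is only allowed on empty sites with
# at least two occupied nearest neighbors; annihilation is spontaneous.
import random

def neighbors(i, j, L):
    """Nearest neighbors on a square lattice with periodic boundaries."""
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def step(lattice, L, lam, rng):
    """One attempted update: creation with probability lam/(1+lam),
    otherwise spontaneous annihilation, at a randomly chosen site."""
    i, j = rng.randrange(L), rng.randrange(L)
    if rng.random() < lam / (1.0 + lam):
        occupied = sum(lattice[a][b] for a, b in neighbors(i, j, L))
        if lattice[i][j] == 0 and occupied >= 2:   # site is active
            lattice[i][j] = 1
    else:
        lattice[i][j] = 0

rng = random.Random(0)
L, lam = 16, 4.0
lattice = [[1] * L for _ in range(L)]              # start fully occupied
for _ in range(20000):
    step(lattice, L, lam, rng)
density = sum(map(sum, lattice)) / L ** 2          # order parameter
```

Starting from the fully occupied state is the usual choice near a first-order transition, since the empty lattice is absorbing.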

Relevance: 40.00%

Abstract:

doi: 10.1111/j.1741-2358.2011.00526.x

Biological evaluation of the bone healing process after application of two potentially osteogenic proteins: an animal experimental model

Objective: The aim of this work was to analyse qualitatively and quantitatively the newly formed bone after insertion of rhBMP-2 and a protein extracted from Hevea brasiliensis (P-1), associated or not with a carrier, in critical bone defects created in Wistar rat calvarial bone, using histological and histomorphometrical analyses. Materials and methods: Eighty-four male Wistar rats were used, divided into two groups according to the period of time until sacrifice (2 and 6 weeks). Each of these groups was subdivided into six groups of seven animals each, according to the treatments: (1) 5 μg of pure rhBMP-2, (2) 5 μg of rhBMP-2/monoolein gel, (3) pure monoolein gel, (4) 5 μg of pure P-1, (5) 5 μg of P-1/monoolein gel and (6) critical bone defect controls. The animals were euthanised and the calvarial bone tissue removed for histological and histomorphometrical analyses. Results and conclusion: The results showed an improvement in the bone healing process using the rhBMP-2 protein, whether or not associated with a carrier material, relative to the other groups, and this process was shown to be time dependent.

Relevance: 40.00%

Abstract:

This article suggests a pricing model for commodities used to produce biofuel. The model is based on the concept that the deterministic component of the Wiener process is not constant and depends on time and exogenous variables. The model, which incorporates theory of storage, the convenience yield and the seasonality of harvests, was applied in the Brazilian sugar market. After predictions were made with the Kalman filter, the model produced results that were statistically more accurate than those returned by the two-factor model available in the literature.
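The Kalman-filter machinery behind such predictions can be sketched in one dimension; this is a generic toy with hypothetical numbers, not the authors' commodity model, where `drift(t)` plays the role of the time-dependent deterministic component of the Wiener process (here a seasonal harvest cycle):

```python
# One-dimensional Kalman filter for the state-space model
#   x_t = x_{t-1} + drift(t) + w_t,   Var(w) = q   (latent log-price)
#   y_t = x_t + v_t,                  Var(v) = r   (observation)
import math

def kalman_1d(ys, drift, q, r, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for t, y in enumerate(ys):
        # predict: apply the time-dependent deterministic drift
        x = x + drift(t)
        p = p + q
        # update: blend prediction and observation by the Kalman gain
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical seasonal drift with a 12-month harvest period:
drift = lambda t: 0.05 * math.sin(2 * math.pi * t / 12)
obs = [0.10, 0.20, 0.15, 0.30, 0.40, 0.35]
estimates = kalman_1d(obs, drift, q=0.01, r=0.04)
```

A two-factor model in this framework would simply carry a second state variable (e.g. the convenience yield) through the same predict/update cycle.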

Relevance: 40.00%

Abstract:

Background: Using univariate and multivariate variance-components linkage analysis methods, we studied possible genotype × age interaction in cardiovascular phenotypes related to the aging process in the Framingham Heart Study. Results: We found evidence for genotype × age interaction for fasting glucose and systolic blood pressure. Conclusions: There is polygenic genotype × age interaction for fasting glucose and systolic blood pressure, and quantitative trait locus × age interaction for a linkage signal for systolic blood pressure phenotypes located on chromosome 17 at 67 cM.
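The simplest way to see what a genotype × age interaction means is a regression with a product term; this toy (simulated data, ordinary least squares) is not the variance-components method used in the study, only an illustration of the interaction concept:

```python
# Toy genotype x age interaction: in  y = b0 + b1*g + b2*a + b3*g*a,
# a nonzero b3 means the genetic effect on the phenotype changes with age.
import random

rng = random.Random(1)
n = 200
geno = [rng.choice([0, 1, 2]) for _ in range(n)]      # allele counts
age = [rng.uniform(30, 80) for _ in range(n)]
# Simulated phenotype with a true interaction coefficient b3 = 0.05:
y = [1.0 + 0.2 * g + 0.03 * a + 0.05 * g * a + rng.gauss(0, 0.5)
     for g, a in zip(geno, age)]

def ols(X, y):
    """Ordinary least squares via the normal equations (Gauss-Jordan)."""
    k = len(X[0])
    A = [[sum(row[p] * row[q] for row in X) for q in range(k)]
         for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(len(X))) for p in range(k)]
    for c in range(k):
        piv = A[c][c]
        for j in range(c, k):
            A[c][j] /= piv
        b[c] /= piv
        for r2 in range(k):
            if r2 != c and A[r2][c] != 0.0:
                f = A[r2][c]
                for j in range(c, k):
                    A[r2][j] -= f * A[c][j]
                b[r2] -= f * b[c]
    return b

X = [[1.0, g, a, g * a] for g, a in zip(geno, age)]
b0, b1, b2, b3 = ols(X, y)        # b3 estimates the interaction effect
```

With 200 observations, the estimate of the product-term coefficient recovers the simulated interaction closely.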

Relevance: 40.00%

Abstract:

In this work, we report some results on the stochastic quantization of the spherical model. We start by reviewing some basic aspects of this method, with emphasis on the connection between the Langevin equation and supersymmetric quantum mechanics, aiming at applying this connection to the spherical model. An intuitive idea is that, when applied to the spherical model, this gives rise to a supersymmetric version that can be identified with the one studied in Phys. Rev. E 85, 061109 (2012). Before investigating this aspect in detail, we study the stochastic quantization of the mean spherical model, which is simpler to implement than the model with the strict constraint. We also highlight some points concerning more traditional methods discussed in the literature, such as canonical and path-integral quantization. To produce a supersymmetric version grounded in the Nicolai map, we investigate the stochastic quantization of the strict spherical model. We show that the result of this process is an off-shell supersymmetric extension of the quantum spherical model (with the precise supersymmetric constraint structure). This analysis establishes a connection between the classical model and its supersymmetric quantum counterpart. The supersymmetric version constructed in this way is more natural and gives further support and motivation for investigating similar connections in other models in the literature.
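The core of stochastic quantization, the Langevin equation relaxing to the distribution exp(-S), can be demonstrated numerically on a zero-dimensional toy action (a single Gaussian mode standing in for the spherical model; the parameters are illustrative):

```python
# Euler discretization of the stochastic-quantization Langevin equation
#   dx/dtau = -S'(x) + eta(tau),  <eta(tau) eta(tau')> = 2 delta(tau-tau'),
# whose samples equilibrate to the distribution proportional to exp(-S(x)).
import math
import random

def langevin(dS, steps, dt, rng, x0=0.0):
    x, samples = x0, []
    for _ in range(steps):
        x += -dS(x) * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

rng = random.Random(42)
# Gaussian action S(x) = x**2 / 2, so S'(x) = x; the equilibrium
# distribution is a standard normal with variance 1.
xs = langevin(lambda x: x, steps=200000, dt=0.01, rng=rng)
tail = xs[20000:]                       # discard thermalization
variance = sum(x * x for x in tail) / len(tail)
```

The measured variance approaches the exact value 1 as dt decreases and the fictitious-time trajectory lengthens; for the spherical model each mode obeys such an equation, coupled through the constraint.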

Relevance: 40.00%

Abstract:

Work carried out by Antonio Machado Carrillo, Juan Antonio Bermejo and Ignacio Lorenzo

Relevance: 40.00%

Abstract:

Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into actually achieved ones. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control. Parameters for this second-order system have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding emissions based on measured air-handling parameters.
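The dynamic-constraint idea, a commanded trajectory filtered through second-order dynamics with PD control, can be sketched as follows; the gains, time step, and the EGR-valve interpretation are illustrative assumptions, not the parameters identified from engine data in the paper:

```python
# Sketch of a dynamic constraint model: a PD controller tracking a
# commanded set-point through a double-integrator plant,
#   x'' = kp * (r - x) - kd * x',
# so the achieved trajectory lags and smooths the commanded one.
def dynamic_constraint(commanded, kp, kd, dt):
    x, v, achieved = 0.0, 0.0, []
    for r in commanded:
        a = kp * (r - x) - kd * v     # PD control law
        v += a * dt                   # semi-implicit Euler integration
        x += v * dt
        achieved.append(x)
    return achieved

# Hypothetical commanded step (e.g. an EGR-valve target) from 0 to 1:
cmd = [1.0] * 500
ach = dynamic_constraint(cmd, kp=25.0, kd=8.0, dt=0.01)
```

A quasi-static optimizer would treat every commanded point as instantly achieved; the filtered `ach` trajectory is what a transient optimizer must be constrained to.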

Relevance: 40.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
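One standard way to compensate a transport delay when aligning a lagged sensor trace with a reference signal is to pick the shift that maximizes their cross-correlation; this toy (hypothetical pulse signals, not the paper's own delay/lag compensation) shows the principle:

```python
# Estimate a transport delay by maximizing the cross-correlation between
# a reference signal and a delayed sensor signal.
def best_lag(ref, sig, max_lag):
    """Return the shift (in samples) of sig that best matches ref."""
    def score(lag):
        pairs = [(ref[i], sig[i + lag]) for i in range(len(ref) - max_lag)]
        return sum(a * b for a, b in pairs)
    return max(range(max_lag + 1), key=score)

ref = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0, 0, 0]
sig = [0, 0, 0, 0, 0, 1, 2, 3, 2, 1, 0, 0]   # same pulse, 3 samples later
lag = best_lag(ref, sig, max_lag=4)
```

Once the per-channel delays are known, each signal can be shifted back by its lag before the aligned records are assembled into training data.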

Relevance: 40.00%

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, in order to prevent extrapolation during the optimization process, has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from becoming quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but that are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and fuel consumption, is intended to improve rather than replace the manual calibration process.
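Statistical leverage, the diagonal of the hat matrix, is the quantity being constrained: a candidate point with leverage far above the training data's is an extrapolation. A one-dimensional toy (illustrative numbers, not the paper's multi-dimensional implementation) makes the idea concrete:

```python
# Leverage (hat-matrix diagonal) for simple linear regression y = b0 + b1*x:
#   h_i = 1/n + (x_i - mean)^2 / sum_j (x_j - mean)^2
# Points with large h_i sit far from the bulk of the data, so model
# predictions there are extrapolations.
def leverages(xs):
    n = len(xs)
    mean = sum(xs) / n
    sxx = sum((x - mean) ** 2 for x in xs)
    return [1.0 / n + (x - mean) ** 2 / sxx for x in xs]

xs = [1.0, 2.0, 3.0, 4.0, 10.0]      # the last point is far from the rest
h = leverages(xs)
# h is roughly [0.38, 0.28, 0.22, 0.20, 0.92]: the outlier dominates.
```

A useful sanity check is that the leverages always sum to the number of regression parameters (here 2), so bounding each candidate's leverage relative to the starting solution's distribution keeps the search inside the region the models have seen.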

Relevance: 40.00%

Abstract:

Despite numerous studies of nitrogen cycling in forest ecosystems, many uncertainties remain, especially regarding longer-term nitrogen accumulation. To contribute to filling this gap, the dynamic process-based model TRACE, which can simulate 15N tracer redistribution in forest ecosystems, was used to study N-cycling processes in a mountain spruce forest at the northern edge of the Alps in Switzerland (Alptal, SZ). Most modeling analyses of N cycling and C-N interactions have very limited ability to determine whether process interactions are captured correctly. Because the interactions in such a system are complex, it is possible to get whole-system C and N cycling right in a model without really knowing whether the way the model combines fine-scale interactions to derive whole-system cycling is correct. With the possibility of simulating 15N tracer redistribution in ecosystem compartments, TRACE provides a very powerful tool for validating the fine-scale processes captured by the model. We first adapted the model to the new site (Alptal, Switzerland; a long-term low-dose N-amendment experiment) by including a new algorithm for preferential water flow and by parameterizing differences in drivers such as climate, N deposition and initial site conditions. After calibrating key rates such as NPP and SOM turnover, we simulated patterns of 15N redistribution for comparison against 15N field observations from a large-scale labeling experiment. The comparison of the 15N field data with the modeled redistribution of the tracer in the soil horizons and vegetation compartments shows that the majority of fine-scale processes are captured satisfactorily. In particular, the model is able to reproduce the fact that the largest part of the N deposition is immobilized in the soil.
The discrepancies in 15N recovery in the LF and M soil horizons can be explained by the application method of the tracer and by the retention of the applied tracer by the well-developed moss layer, which is not considered in the model. Discrepancies in the dynamics of foliage and litterfall 15N recovery were also observed and are related to the longevity of the needles in our mountain forest. As a next step, we will use the final Alptal version of the model to calculate the effects of climate change (temperature, CO2) and N deposition on ecosystem C sequestration in this regionally representative Norway spruce (Picea abies) stand.
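The tracer-redistribution bookkeeping that such models perform can be illustrated with a minimal two-pool sketch; the pools, the deposition rate, and the transfer rate are invented for illustration and bear no relation to TRACE's actual structure or parameterization:

```python
# Minimal two-pool 15N tracer sketch: deposited tracer enters a soil pool
# and is transferred to vegetation at a first-order rate; mass-balance
# bookkeeping is exact by construction.
#   d(soil)/dt = dep - k * soil
#   d(veg)/dt  = k * soil
def tracer_pools(dep, k, steps, dt):
    soil, veg = 0.0, 0.0
    for _ in range(steps):
        flux = k * soil * dt        # soil -> vegetation transfer
        soil += dep * dt - flux
        veg += flux
    return soil, veg

# Slow transfer (small hypothetical k), 100 time units of deposition:
soil, veg = tracer_pools(dep=1.0, k=0.002, steps=1000, dt=0.1)
total_applied = 1.0 * 1000 * 0.1    # 100 units of tracer deposited
```

With a slow transfer rate most of the tracer remains immobilized in the soil pool, which is the qualitative pattern the 15N comparison is used to validate; the real model resolves many more pools (soil horizons, foliage, litterfall) with the same conservation discipline.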

Relevance: 40.00%

Abstract:

OBJECTIVES: Donation after circulatory declaration of death (DCDD) could significantly increase the number of cardiac grafts available for transplantation. Graft evaluation is particularly important in the setting of DCDD, given that conditions of cardio-circulatory arrest and warm ischaemia differ, leading to variable tissue injury. The aim of this study was to identify, at the time of heart procurement, means to predict contractile recovery following cardioplegic storage and reperfusion using an isolated rat heart model. Identification of reliable approaches to evaluating cardiac grafts is key to the development of protocols for heart transplantation with DCDD. METHODS: Hearts isolated from anaesthetized male Wistar rats (n = 34) were exposed to various perfusion protocols. To simulate DCDD conditions, rats were exsanguinated and maintained at 37°C for 15-25 min (warm ischaemia). Isolated hearts were perfused with modified Krebs-Henseleit buffer for 10 min (unloaded), arrested with cardioplegia, stored for 3 h at 4°C and then reperfused for 120 min (unloaded for 60 min, then loaded for 60 min). Left ventricular (LV) function was assessed using an intraventricular micro-tip pressure catheter. Statistical significance was determined using non-parametric Spearman rho correlation analysis. RESULTS: After 120 min of reperfusion, recovery of LV work, measured as the developed pressure (DP)-heart rate (HR) product, ranged from 0 to 15 ± 6.1 mmHg·beats·min⁻¹·10⁻³ following warm ischaemia of 15-25 min. Several haemodynamic parameters measured during early, unloaded perfusion at the time of heart procurement, including HR and the peak systolic pressure-HR product, correlated significantly with contractile recovery after cardioplegic storage and 120 min of reperfusion (P < 0.001). Coronary flow, oxygen consumption and lactate dehydrogenase release also correlated significantly with contractile recovery following cardioplegic storage and 120 min of reperfusion (P < 0.05).
CONCLUSIONS: Haemodynamic and biochemical parameters measured at the time of organ procurement could serve as predictive indicators of contractile recovery. We believe that evaluation of graft suitability is feasible prior to transplantation with DCDD and may, consequently, increase donor heart availability.
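The Spearman rank correlation used for these analyses is simply the Pearson correlation of the ranks; the sketch below uses invented toy data (hypothetical heart rates and recovery fractions, not the study's measurements) and the tie-free shortcut formula:

```python
# Spearman's rho via the classic formula rho = 1 - 6*sum(d^2)/(n*(n^2-1)),
# valid when there are no ties in either variable.
def spearman_rho(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical unloaded-perfusion heart rates vs. recovery of LV work;
# the association here is perfectly monotone, so rho is exactly 1.
hr = [180, 220, 150, 260, 200]
recovery = [0.20, 0.45, 0.10, 0.60, 0.35]
rho = spearman_rho(hr, recovery)
```

Because it depends only on ranks, rho is insensitive to the scale of the haemodynamic measurements, which is why it suits small-n physiological data like these.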