930 results for Dual-process Model
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
An economic model incorporating the labor resource and the process stage configuration is proposed for the design of g charts in which all design parameters can be varied adaptively. A random shift size is considered during the economic design selection. The results obtained for a benchmark of 64 process stage scenarios show that the configuration of activities and some process operating parameters influence the selection of the best control chart strategy. To model the random shift size, its exact distribution can be approximately fitted by a discrete distribution obtained from a relatively small sample of historical data; however, an accurate estimation of the inspection costs associated with the SPC activities remains far from being achieved. An illustrative example shows the implementation of the proposed economic model in a real industrial case. (C) 2011 Elsevier B.V. All rights reserved.
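The fitting step described in this abstract, approximating the shift-size distribution by a discrete one built from a small historical sample, can be sketched as follows. This is a minimal illustration, not the paper's model; the shift values and support points are made-up assumptions.

```python
from collections import Counter

def empirical_shift_distribution(shifts, support):
    """Approximate the shift-size distribution by a discrete one:
    each observed shift is assigned to the nearest support point,
    and relative frequencies over the support are returned."""
    counts = Counter(min(support, key=lambda b: abs(b - s)) for s in shifts)
    n = len(shifts)
    return {b: counts.get(b, 0) / n for b in support}

# Hypothetical historical shift magnitudes (in multiples of sigma).
shifts = [0.4, 0.6, 1.1, 0.9, 2.1, 1.0, 0.5, 1.9, 1.2, 0.8]
support = [0.5, 1.0, 1.5, 2.0]   # assumed discrete support points
dist = empirical_shift_distribution(shifts, support)
# dist -> {0.5: 0.3, 1.0: 0.5, 1.5: 0.0, 2.0: 0.2}
```

With only ten observations the fitted probabilities are coarse, which is the point made in the abstract: a relatively small historical sample already yields a usable discrete approximation.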
Abstract:
The Predispatch (PD) model calculates a short-term generation policy for power systems. In this work a PD model is proposed that improves two modelling aspects generally neglected in the literature: voltage/reactive power constraints and ramp rate constraints for generating units. Reactive power constraints turn the PD into a non-linear problem, and ramp rate constraints couple the problem dynamically in the time domain; introducing such constraints makes the PD harder to solve. The dual decomposition/Lagrangian relaxation technique is used in the solution approach for handling the dynamic constraints. As a result, the PD is decomposed into a series of independent Optimal Power Flow (OPF) subproblems, in which reactive power is represented in detail. The solutions of the independent OPF subproblems are coordinated by means of Lagrange multipliers, so that the dynamic constraints are iteratively satisfied. Dispatch policies calculated with and without the representation of ramp rate constraints are compared using the IEEE 30-bus test system. The results point out the importance of representing such constraints in the generation dispatch policy. © 2004 IEEE.
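The coordination scheme this abstract describes, independent subproblems tied together by Lagrange multipliers on the relaxed dynamic constraints, can be illustrated with a deliberately tiny example: one unit, two periods, a relaxed ramp limit, and a projected subgradient update on the multiplier. The quadratic tracking costs and all numbers are illustrative assumptions, not the paper's formulation.

```python
def coordinate_ramp(g1, g2, ramp, step=0.5, iters=100):
    """Dual decomposition toy: relax the ramp constraint p2 - p1 <= ramp
    with multiplier lam, solve the two decoupled per-period problems
    analytically, and update lam by a projected subgradient step."""
    lam = 0.0
    for _ in range(iters):
        p1 = g1 + lam / 2.0   # argmin of (p1 - g1)**2 - lam * p1
        p2 = g2 - lam / 2.0   # argmin of (p2 - g2)**2 + lam * p2
        lam = max(0.0, lam + step * (p2 - p1 - ramp))  # dual ascent
    return p1, p2, lam

# Targets g1 = 1, g2 = 3 violate the ramp limit of 1; coordination
# converges to p1 = 1.5, p2 = 2.5 with multiplier lam = 1.
p1, p2, lam = coordinate_ramp(1.0, 3.0, 1.0)
```

The same pattern scales to the paper's setting by replacing the analytic per-period minimisers with full OPF solves, one per period, coordinated through the multipliers.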
Abstract:
This paper presents a proposal of a model to measure the efficiency of outsourced companies in the aeronautical industry by applying the DEA and AHP methods. It also proposes an evaluation of the relation between the process variables and the value obtained for the efficiency. The criteria Quality, Time and Cost were considered the outputs of the process, and these criteria were quantified by AHP for the DEA matrix. The number of technical documents received by the outsourced companies was considered the input of the process. A further purpose is to separate the companies into groups considered able to receive an investment to improve their processes. Copyright © 2008 SAE International.
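The AHP quantification step mentioned above can be sketched as deriving priority weights from a pairwise comparison matrix. The row geometric mean method below is one standard approximation of the AHP priority vector; the comparison judgements for Quality, Time and Cost are made up for illustration, not taken from the paper.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using the row geometric mean method, normalised to sum 1."""
    n = len(pairwise)
    gms = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical judgements: Quality moderately preferred to Time,
# strongly preferred to Cost (reciprocals below the diagonal).
matrix = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
w_quality, w_time, w_cost = ahp_weights(matrix)
```

The resulting weights could then scale the three outputs before they enter the DEA matrix, which is how the abstract describes combining the two methods.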
Abstract:
In this paper we present a mixed integer model that integrates lot sizing and lot scheduling decisions for the production planning of a soft drink company. The main contribution of the paper is a model that differs from others in the literature in the constraints related to the scheduling decisions. The proposed strategy is compared to other strategies presented in the literature.
Correspondence between the self-dual model and the topologically massive electrodynamics: A new view
Abstract:
Following the study of Topologically Massive Theories under the Hamilton-Jacobi formalism, we now analyze the constraint structure of the Self-Dual model as well as its correspondence with Topologically Massive Electrodynamics. © 2013 American Institute of Physics.
Abstract:
The development of self-adaptive software (SaS) has specific characteristics compared to traditional software, since it allows changes to be incorporated at runtime. Automated processes have been used as a feasible solution to conduct software adaptation at runtime. In parallel, reference models have been used to aggregate knowledge and architectural artifacts, since they capture the essence of systems in specific domains. However, there is currently no reference model based on reflection for the development of SaS. Thus, the main contribution of this paper is a reference model based on reflection for the development of SaS that needs to adapt at runtime. To demonstrate the applicability of this model, a case study was conducted, with good prospects of contributing efficiently to the SaS area.
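As a rough illustration of the reflection idea underlying such models (not the paper's reference model itself), a component can inspect and rebind its own behaviour at runtime; the class and method names below are hypothetical.

```python
class AdaptiveComponent:
    """Minimal sketch of reflection-driven runtime adaptation: the
    component looks up one of its own behaviours by name (meta level)
    and rebinds it without stopping (base level)."""

    def __init__(self):
        self.strategy = self.fast   # initial behaviour

    def fast(self, x):
        return x * 2                # fast but unguarded

    def safe(self, x):
        return max(0, x) * 2        # clamps negative inputs first

    def handle(self, x):
        return self.strategy(x)

    def adapt(self, name):
        # Reflective step: resolve the new behaviour by name and rebind.
        self.strategy = getattr(self, name)

comp = AdaptiveComponent()
before = comp.handle(-3)   # -6 with the initial 'fast' behaviour
comp.adapt("safe")         # runtime adaptation via reflection
after = comp.handle(-3)    # 0 with the 'safe' behaviour
```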
Abstract:
doi: 10.1111/j.1741-2358.2011.00526.x
Biological evaluation of the bone healing process after application of two potentially osteogenic proteins: an animal experimental model
Objective: The aim of this work was to analyse qualitatively and quantitatively the newly formed bone after insertion of rhBMP-2 and a protein extracted from Hevea brasiliensis (P-1), associated or not with a carrier, in critical bone defects created in Wistar rat calvarial bone, using histological and histomorphometrical analyses.
Materials and methods: Eighty-four male Wistar rats were used, divided into two groups according to the period of time until sacrifice (2 and 6 weeks). Each of these groups was subdivided into six groups of seven animals each, according to the treatments: (1) 5 µg of pure rhBMP-2, (2) 5 µg of rhBMP-2/monoolein gel, (3) pure monoolein gel, (4) 5 µg of pure P-1, (5) 5 µg of P-1/monoolein gel and (6) critical bone defect controls. The animals were euthanised and the calvarial bone tissue removed for histological and histomorphometrical analyses.
Result and conclusion: The results showed an improvement in the bone healing process with the rhBMP-2 protein, associated or not with a carrier material, relative to the other groups, and this process was shown to be time dependent.
Abstract:
Background: Using univariate and multivariate variance components linkage analysis methods, we studied possible genotype × age interaction in cardiovascular phenotypes related to the aging process from the Framingham Heart Study.
Results: We found evidence of genotype × age interaction for fasting glucose and systolic blood pressure.
Conclusions: There is polygenic genotype × age interaction for fasting glucose and systolic blood pressure, and quantitative trait locus × age interaction for a linkage signal for systolic blood pressure phenotypes located on chromosome 17 at 67 cM.
Abstract:
This work investigated the effects of frequency and precision of feedback on the learning of a dual-motor task. One hundred and twenty adults were randomly assigned to six groups differing in knowledge of results (KR) frequency (100%, 66% or 33%) and precision (specific or general). In the stabilization phase, participants performed the dual task (a combination of linear positioning and manual force control) with the provision of KR. Ten non-KR adaptation trials were then performed for the same task, but with the introduction of an opposing electromagnetic traction force. The analysis showed a significant main effect for frequency of KR: participants who received KR in 66% of the stabilization trials showed superior adaptation performance compared with those who received it in 100% or 33%. This finding reinforces the idea that there is an optimal level of information, neither too high nor too low, for motor learning to be effective.
Abstract:
Model-based calibration has gained popularity in recent years as a method to optimize increasingly complex engine systems. However, virtually all model-based techniques are applied to steady-state calibration; transient calibration is by and large an emerging technology. An important piece of any transient calibration process is the ability to constrain the optimizer to treat the problem as a dynamic one and not as a quasi-static process. The optimized air-handling parameters corresponding to any instant of time must be achievable in a transient sense; this in turn depends on the trajectory of the same parameters over previous time instances. In this work, dynamic constraint models have been proposed to translate commanded air-handling parameters into those actually achieved. These models enable the optimization to be realistic in a transient sense. The air-handling system has been treated as a linear second-order system with PD control, whose parameters have been extracted from real transient data. The model has been shown to be the best choice relative to a list of appropriate candidates such as neural networks and first-order models. The selected second-order model was used in conjunction with transient emission models to predict emissions over the FTP cycle. It has been shown that emission predictions based on air-handling parameters predicted by the dynamic constraint model do not differ significantly from corresponding predictions based on measured air-handling parameters.
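The commanded-to-achieved translation described in this abstract can be sketched as a discretised second-order filter. The natural frequency, damping ratio and time step below are illustrative placeholders, not parameters identified in the paper, and the closed-loop PD dynamics are collapsed into the assumed second-order response.

```python
def achieved_trajectory(commanded, wn=2.0, zeta=0.9, dt=0.01):
    """Pass a commanded parameter trajectory through the second-order
    linear system x'' + 2*zeta*wn*x' + wn**2 * x = wn**2 * u, a simple
    stand-in for a dynamic constraint model (wn, zeta are assumed)."""
    x, v = commanded[0], 0.0
    achieved = []
    for u in commanded:
        a = wn * wn * (u - x) - 2.0 * zeta * wn * v   # acceleration
        v += a * dt                                   # forward Euler
        x += v * dt
        achieved.append(x)
    return achieved

# Step change in the commanded value: the achieved value lags behind,
# which is exactly what makes a quasi-static optimum unrealistic.
step = [0.0] + [1.0] * 1999
out = achieved_trajectory(step)
```

Feeding `out` rather than `step` into an emission model is the mechanism the abstract describes for keeping the optimizer honest about transient behaviour.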
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate the hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and uneven cylinder-to-cylinder EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
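One common way to handle the transport delays mentioned above is to estimate the lag that maximises the cross-correlation between a reference signal and the delayed sensor signal, then shift the sensor channel accordingly. This is a generic signal-alignment sketch on synthetic data, not the authors' specific method.

```python
def estimate_delay(reference, delayed, max_lag):
    """Estimate the transport delay (in samples) of a sensor signal
    relative to a reference by maximising the cross-correlation over
    candidate lags 0..max_lag."""
    def corr(lag):
        n = len(reference) - lag
        return sum(reference[i] * delayed[i + lag] for i in range(n))
    return max(range(max_lag + 1), key=corr)

# Synthetic example: the "sensor" sees the reference 3 samples late.
reference = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
delayed = [0, 0, 0, 0, 0, 1, 2, 3, 2, 1]
lag = estimate_delay(reference, delayed, max_lag=5)   # -> 3
```

In practice the reference would be a fast channel (e.g. commanded fuelling) and the delayed signal a slow emissions measurement, with the sensor's first-order lag compensated separately.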
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models; the current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, to prevent extrapolation during the optimization process, has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state but are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and fuel consumption, is intended to improve rather than replace the manual calibration process.
Abstract:
In the business literature, the conflicts among workers, shareholders and management have been studied mostly within the frame of stakeholder theory. Stakeholder theory recognizes this issue as an agency problem and tries to solve it by establishing a contractual relationship between the agent and the principals. However, as Marcoux pointed out, the appropriateness of the contract as a medium to reduce the agency problem should be questioned. As an alternative, the cooperative model minimizes agency costs by integrating the roles of workers, owners and management. Mondragon Corporation is a successful example of the cooperative model, having grown into the sixth largest corporation in Spain. However, the cooperative model has long been ignored in discussions of corporate governance, mainly because its success is extremely difficult to duplicate in practice. This thesis hopes to revitalize the scholarly examination of cooperatives by developing a new model that overcomes the fundamental problem of the cooperative model: limited access to capital markets. By dividing the ownership interest into a financial interest and a control interest, the dual ownership structure allows cooperatives to issue stock in the capital market by turning the financial interest into a financial product.