947 results for model efficiency


Relevance:

40.00%

Publisher:

Abstract:

For today's industrialised economies, remanufacturing represents perhaps the largest unexploited resource and opportunity for achieving greater economic growth in an environmentally conscious manner. The aim of this paper is to investigate the impact of remanufacturing on the economy from an economic-efficiency point of view. In the literature this phenomenon has been analysed in a static context. We use the multi-sector input–output framework in a dynamic context to study the intra-period relationships of the sectors of the economy. We extend the classical dynamic input–output model by taking the activity of remanufacturing into consideration. We try to answer the question of whether remanufacturing/reuse increases the growth potential of an economy. We present a sufficient condition concerning the efficiency of an economy with remanufacturing. Through this evaluation we analyse a possible sustainable development of the economy on the basis of the product recovery management of industries.
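
The abstract gives no equations, so the sketch below is only an illustration of the underlying input–output bookkeeping: a hypothetical recovery matrix R lowers the effective current-input matrix A, reducing the gross output needed to meet a given final demand. The paper's actual model is dynamic (it adds capital coefficients), which is omitted here; every number is invented.

```python
import numpy as np

# Static Leontief illustration with a hypothetical remanufacturing/recovery
# matrix R that replaces part of the new inputs; all coefficients invented.
A = np.array([[0.20, 0.30],    # current input coefficients
              [0.10, 0.25]])
R = np.array([[0.05, 0.02],    # inputs replaced by remanufactured goods
              [0.01, 0.04]])
d = np.array([100.0, 80.0])    # final demand per sector

def gross_output(A_eff, d):
    """Solve (I - A_eff) x = d for the gross output vector x."""
    return np.linalg.solve(np.eye(len(d)) - A_eff, d)

print("gross output, no remanufacturing  :", np.round(gross_output(A, d), 1))
print("gross output, with remanufacturing:", np.round(gross_output(A - R, d), 1))
```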

Relevance:

40.00%

Publisher:

Abstract:

Efficiency represents the ratio of work done to energy expended. In human movement, it is desirable to maximise the work done or minimise the energy expenditure. Whilst research has examined the efficiency of human movement for the lower and upper body, there is a paucity of research considering the efficiency of a total body movement. Rowing is a movement which engages all parts of the body to generate locomotion and is therefore a useful modality for measuring total body efficiency. The aim of this research was to develop a total body model of efficiency and to explore how the skill level of participants and the assumptions of the modelling process affected the efficiency estimates. Three studies were used to develop and evaluate the efficiency model. Firstly, the efficiency of ten healthy males was established using rowing, cycling and arm cranking. The model included internal work from motion capture, and efficiency estimates were comparable to published literature, indicating the suitability of the model for estimating efficiency. Secondly, the model was developed to include a multi-segmented trunk, and twelve novice and twelve skilled participants were assessed for efficiency. Whilst the efficiency estimates were similar to published results, novice participants were assessed as more efficient. Issues such as the unique physiology of trained rowers and the lack of energy transfers in the model were considered contributing factors. Finally, the model was redeveloped to account for energy transfers, after which skilled participants had higher efficiency at large workloads. This work presents a novel model for estimating efficiency during a rowing motion. The specific inclusion of energy transfers expands previous knowledge of internal work and efficiency, demonstrating the need to include energy transfers in the assessment of the efficiency of a total body action.
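
As a rough illustration of the quantities discussed above, the sketch below computes a gross-efficiency estimate as (external work + internal work) / metabolic energy, once treating every segment energy change as costly and once allowing changes to cancel across segments (energy transfers). The segment energies, external work and metabolic energy are invented values, not data from the studies.

```python
import numpy as np

external_work_j = 18000.0      # work at the ergometer handle (J), hypothetical
metabolic_energy_j = 95000.0   # from gas exchange (J), hypothetical

# Per-frame total mechanical energy of each body segment (J), hypothetical.
segment_energy = np.array([
    [120.0, 80.0, 60.0],
    [135.0, 70.0, 66.0],
    [128.0, 90.0, 58.0],
])

dE = np.diff(segment_energy, axis=0)        # frame-to-frame energy changes
# No transfers: every segment's energy change costs work.
internal_no_transfer = np.abs(dE).sum()
# Transfers allowed: changes may cancel across segments within a frame.
internal_with_transfer = np.abs(dE.sum(axis=1)).sum()

for label, w_int in [("no transfers", internal_no_transfer),
                     ("with transfers", internal_with_transfer)]:
    eff = (external_work_j + w_int) / metabolic_energy_j
    print(f"gross efficiency ({label}): {eff:.3f}")
```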

Relevance:

40.00%

Publisher:

Abstract:

According to Law 12.715/2012, the Brazilian government instituted guidelines for a program named Inovar-Auto. In this context, energy efficiency is a survival requirement for the Brazilian automotive industry from September 2016 onwards. As set out in the law, energy efficiency will not be calculated per model only; it will be calculated over the whole universe of new vehicles registered. In this scenario, the composition of vehicles sold in the market will be a key factor in the profits of each automaker, and energy efficiency and its consequences should be taken into consideration in all their aspects. This raises the following question: what is the long-term efficiency curve of an automaker that allows it to comply with the rules while balancing investment in technologies, increasing energy efficiency without affecting the competitiveness of its product lineup? Among the several variables to be considered, one can highlight the analysis of manufacturing costs, customer value perception and market share, which characterises this as a multi-criteria decision-making problem. To tackle the energy efficiency problem imposed by the legislation, this paper proposes a multi-criteria decision-making framework. The proposed framework combines a Delphi group and the Analytic Hierarchy Process to identify suitable alternatives for automakers to incorporate in the main Brazilian vehicle segments. A forecast model based on artificial neural networks was used to estimate vehicle sales demand in order to validate the expected results. The approach is demonstrated with a real case study using public vehicle sales data of Brazilian automakers and public energy efficiency data.
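
The abstract names the Analytic Hierarchy Process and three decision criteria (manufacturing cost, customer value perception, market share); the sketch below shows the standard AHP step of deriving criterion weights and a consistency ratio from a pairwise comparison matrix. The judgements in the matrix are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Illustrative pairwise comparison matrix over the three criteria named in
# the abstract; the judgements are invented.
criteria = ["manufacturing cost", "customer value", "market share"]
P = np.array([[1.0,   3.0, 2.0],
              [1/3.0, 1.0, 0.5],
              [0.5,   2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # priority vector (criterion weights)

n = P.shape[0]
ci = (eigvals[k].real - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # Saaty's random index for n = 3
for c, wi in zip(criteria, w):
    print(f"{c:20s} weight = {wi:.3f}")
print(f"consistency ratio = {cr:.3f} (acceptable if < 0.10)")
```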

Relevance:

40.00%

Publisher:

Abstract:

This paper provides an agent-based software exploration of the well-known free market efficiency/equality trade-off. Our study simulates the interaction of agents producing, trading and consuming goods in the presence of different market structures, and looks at how efficient the producers/consumers mapping turns out to be, as well as at the resulting distribution of welfare among agents at the end of an arbitrarily large number of iterations. Two market mechanisms are compared: the competitive market (a double auction market in which agents outbid each other in order to buy and sell products) and the random one (in which products are allocated randomly). Our results confirm that the superior efficiency of the competitive market (an effective, never-stopping producers/consumers mapping and a superior aggregate welfare) comes at a very high price in terms of inequality, above all when severe budget constraints are in play.
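
A toy version of the comparison is sketched below: a one-shot, stylised double auction (best bids matched to best asks) against random producer/consumer matching, plus the Gini coefficient as the kind of inequality measure one would apply to agents' accumulated welfare. The repeated, budget-constrained simulation of the paper is not reproduced; all draws and wealth vectors are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
valuations = rng.uniform(0.0, 1.0, N)   # consumers' willingness to pay, invented
costs = rng.uniform(0.0, 1.0, N)        # producers' costs, invented

def gini(wealth):
    """Standard Gini coefficient of a non-negative wealth/welfare vector."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n = len(w)
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

def total_surplus(v, c):
    gains = v - c
    return gains[gains > 0].sum(), int((gains > 0).sum())

# Competitive double auction: bids sorted descending meet asks sorted ascending.
s_comp, n_comp = total_surplus(np.sort(valuations)[::-1], np.sort(costs))
# Random allocation: producers and consumers are paired arbitrarily.
s_rand, n_rand = total_surplus(valuations, rng.permutation(costs))

print(f"competitive: {n_comp} trades, total surplus = {s_comp:.1f}")
print(f"random     : {n_rand} trades, total surplus = {s_rand:.1f}")

# How inequality would be quantified on final wealth vectors (made-up examples).
print(f"Gini of an equal wealth vector  : {gini(np.full(5, 10.0)):.2f}")
print(f"Gini of a concentrated vector   : {gini([1.0, 1.0, 1.0, 1.0, 46.0]):.2f}")
```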

Relevance:

40.00%

Publisher:

Abstract:

In an organisation, any optimisation process faces increasing challenges and requires new approaches to the organisational phenomenon. This work addresses the problem of efficiency dynamics through intangible variables that may support a different view of corporations. It focuses on the challenges that information management and the incorporation of context bring to competitiveness. Thus, this work presents the analysis and development of an intelligent decision support system in terms of a formal agenda built on a Logic Programming based methodology for problem solving, complemented with a computational approach grounded on Artificial Neural Networks. The proposed model is in itself fairly precise, with overall accuracy, sensitivity and specificity all higher than 90%. The proposed solution is unique in catering for the explicit treatment of incomplete, unknown, or even self-contradictory information, in either a quantitative or a qualitative arrangement.
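
For readers unfamiliar with the reported metrics, the snippet below shows how accuracy, sensitivity and specificity follow from a confusion matrix; the counts are invented and merely chosen so that all three values exceed 90%, as reported.

```python
# Invented confusion-matrix counts for a binary classifier.
tp, fn = 46, 4      # positive cases: correctly / incorrectly classified
tn, fp = 47, 3      # negative cases: correctly / incorrectly classified

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)       # true positive rate
specificity = tn / (tn + fp)       # true negative rate

print(f"accuracy    = {accuracy:.2%}")
print(f"sensitivity = {sensitivity:.2%}")
print(f"specificity = {specificity:.2%}")
```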

Relevance:

30.00%

Publisher:

Abstract:

The productivity associated with the disassembly methods commonly available today seldom makes disassembly the preferred end-of-life solution for massive take-back product streams. Systematic reuse of parts or components, or recycling of pure material fractions, is often not achievable in an economically sustainable way. In this paper a case-based review of current disassembly practices is used to analyse the factors influencing disassembly feasibility. Data mining techniques were used to identify the major factors influencing the profitability of disassembly operations. Case characteristics such as the involvement of the product manufacturer in the end-of-life treatment and continuous ownership are among the important dimensions. Economic models demonstrate that the efficiency of disassembly operations should be increased by an order of magnitude to assure the competitiveness of ecologically preferred, disassembly-oriented end-of-life scenarios for large waste electrical and electronic equipment (WEEE) streams. Technological means available to increase the productivity of disassembly operations are summarized. Automated disassembly techniques can contribute to the robustness of the process, but do not overcome the efficiency gap unless combined with appropriate product design measures. Innovative, reversible joints, collectively activated by external trigger signals, form a promising approach to low-cost mass disassembly in this context. A short overview of the state of the art in the development of such self-disassembling joints is included. (c) 2008 CIRP.
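
The order-of-magnitude argument can be made concrete with a deliberately simple break-even sketch: profit per unit as recovered value minus time-driven labour cost. All figures below are invented; only the structure of the comparison follows the abstract.

```python
# Illustrative break-even sketch for a manual disassembly operation;
# every figure is invented.
recovered_value_eur = 4.0       # value of reusable parts / pure fractions per unit
labour_rate_eur_h = 30.0        # cost of a disassembly workstation per hour
current_time_min = 12.0         # current disassembly time per unit

def profit(time_min):
    return recovered_value_eur - labour_rate_eur_h * time_min / 60.0

print(f"profit at current productivity : {profit(current_time_min):+.2f} EUR/unit")
print(f"profit at 10x productivity     : {profit(current_time_min / 10):+.2f} EUR/unit")
breakeven_min = recovered_value_eur / labour_rate_eur_h * 60.0
print(f"break-even disassembly time    : {breakeven_min:.1f} min/unit")
```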

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with the application of the lumped dissipation model in the analysis of reinforced concrete structures, emphasizing the nonlinear behaviour of the materials. The presented model is based on the original models developed by Cipollina and Florez-Lopez (1995) [12], Florez-Lopez (1995) [13] and Picon and Florez-Lopez (2000) [14]. However, some modifications were introduced into the functions that control the damage evolution in order to improve the results obtained. The efficiency of the new approach is evaluated by means of a comparison with experimental results on reinforced concrete structures such as simply supported beams, plane frames and beam-to-column connections. Finally, the adequacy of the numerical model in representing the global behaviour of framed structures is investigated and the limits of the analysis are discussed. (C) 2009 Elsevier Ltd. All rights reserved.
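
As a heavily simplified illustration of the lumped dissipation idea (and not the damage-evolution functions of the cited papers), the sketch below degrades a hinge's rotational stiffness by a factor (1 − d), with a linear, purely hypothetical damage law.

```python
import numpy as np

# Hypothetical lumped-damage hinge: stiffness degrades as (1 - d) * k0 and the
# damage variable d grows linearly with peak rotation beyond a threshold.
# This is NOT the evolution law of the cited papers, only an illustration.
k0 = 50e3            # elastic rotational stiffness (kN.m/rad), hypothetical
theta_cr = 0.002     # rotation at damage onset (rad), hypothetical
theta_u = 0.03       # rotation at full damage (rad), hypothetical

def damage(theta_max):
    return np.clip((theta_max - theta_cr) / (theta_u - theta_cr), 0.0, 1.0)

for theta in (0.001, 0.005, 0.01, 0.02):
    d = damage(theta)
    moment = (1.0 - d) * k0 * theta
    print(f"theta = {theta:.3f} rad  d = {d:.2f}  M = {moment:7.1f} kN.m")
```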

Relevance:

30.00%

Publisher:

Abstract:

Chloride attack in marine environments, or in structures where de-icing salts are used, will not always produce profiles with concentrations that decrease from the external surface to the interior of the concrete. Some profiles show chloride concentrations that increase from the surface up to a depth at which a peak is formed. This type of profile must be analyzed in a different way from the traditional model based on Fick's second law in order to generate more precise service life models. A model for forecasting the penetration of chloride ions as a function of time has previously been proposed for profiles that have formed a peak. To confirm the efficiency of this model, it is necessary to observe the behavior of a chloride profile with a peak in a specific structure over a period of time. To achieve this, two chloride profiles with different ages (22 and 27 years) were extracted from the same structure. The profile obtained from the 22-year sample was used to estimate the chloride profile at 27 years using three models: a) the traditional model using Fick's second law and extrapolating the value of Cs, the external surface chloride concentration; b) the traditional model using Fick's second law and shifting the x-axis to the peak depth; c) the previously proposed model. The results from these models were compared with the actual profile measured in the 27-year sample and the results were analyzed. The proposed model showed good precision for this case study, but it still needs to be tested on other structures in use.
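
The two "traditional" fitting strategies in items a) and b) can be written down directly from the error-function solution of Fick's second law, C(x, t) = Cs·erfc(x / (2·sqrt(D·t))), with the x-axis optionally shifted to the peak depth. The sketch below uses invented values of Cs, D and the peak depth.

```python
from math import erfc, sqrt

Cs = 0.60          # surface chloride concentration (% of concrete mass), hypothetical
D = 1.0e-12        # apparent diffusion coefficient (m^2/s), hypothetical
t = 27 * 365.25 * 24 * 3600.0   # 27 years in seconds
x_peak = 0.010     # depth of the measured peak (m), hypothetical

def fick(x, Cs, D, t, x0=0.0):
    """Chloride content at depth x; x0 shifts the origin to the peak depth."""
    if x < x0:
        return Cs
    return Cs * erfc((x - x0) / (2.0 * sqrt(D * t)))

for depth_mm in (5, 10, 20, 30, 40):
    x = depth_mm / 1000.0
    print(f"x = {depth_mm:2d} mm  "
          f"classic: {fick(x, Cs, D, t):.3f}  "
          f"shifted to peak: {fick(x, Cs, D, t, x_peak):.3f}")
```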

Relevance:

30.00%

Publisher:

Abstract:

Here, we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic program whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimum set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability, with additional constraints that force the feasibility and convergence of the target calculation layer. We also consider the case in which there is polytopic uncertainty in the steady-state model used in the target calculation. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one of the models of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
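
A minimal sketch of the intermediate (target-calculation) layer is given below: it finds the steady-state input/output pair, reachable under input bounds, that is closest to the RTO set point, using a hypothetical 2x2 linear model. The stability constraints tying this layer to the infinite-horizon MPC are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical discrete-time model x+ = A x + B u, y = C x.
A = np.array([[0.8, 0.1],
              [0.0, 0.9]])
B = np.array([[0.5, 0.0],
              [0.2, 0.4]])
C = np.eye(2)

gain = C @ np.linalg.solve(np.eye(2) - A, B)   # steady-state gain: y_s = G u_s
y_rto = np.array([4.0, 6.0])                   # optimum set point from the RTO layer
u_bounds = [(-1.0, 1.0), (-1.0, 1.0)]          # input constraints, hypothetical

def distance(u_s):
    """Squared distance between the reachable steady state and the RTO set point."""
    return np.sum((gain @ u_s - y_rto) ** 2)

res = minimize(distance, x0=np.zeros(2), bounds=u_bounds, method="SLSQP")
u_target = res.x
print("steady-state input target :", np.round(u_target, 3))
print("reachable output target   :", np.round(gain @ u_target, 3))
```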

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were estimated from the experimental results obtained. The adjusted model was used to analyze the impact of the initial concentration and flow rate of the reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the cost of treating wastewater contaminated with phenol in order to meet environmental standards. (C) 2009 Elsevier B.V. All rights reserved.
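
The full 53-reaction model cannot be reproduced from the abstract; the sketch below integrates a two-reaction caricature of Fenton chemistry (hydroxyl radical generation followed by phenol oxidation) just to show the structure of such a kinetic model. Rate constants and initial concentrations are hypothetical.

```python
from scipy.integrate import solve_ivp

# Two-reaction caricature (the paper's model has 53 reactions, 26 compounds):
#   Fe2+ + H2O2 -> Fe3+ + OH.         (k1)
#   OH.  + PhOH -> oxidised products  (k2)
k1, k2 = 70.0, 1.0e4        # L mol^-1 s^-1, hypothetical

def rhs(t, y):
    fe2, h2o2, oh, phenol = y
    r1 = k1 * fe2 * h2o2
    r2 = k2 * oh * phenol
    return [-r1, -r1, r1 - r2, -r2]

y0 = [1e-3, 1e-2, 0.0, 1e-3]          # mol/L: Fe2+, H2O2, OH., phenol (invented)
sol = solve_ivp(rhs, (0.0, 600.0), y0, method="LSODA", dense_output=True)

for t in (0, 60, 300, 600):
    print(f"t = {t:4d} s  phenol = {sol.sol(t)[3]:.2e} mol/L")
```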

Relevance:

30.00%

Publisher:

Abstract:

The leaf area index (LAI) of fast-growing Eucalyptus plantations is highly dynamic both seasonally and interannually, and is spatially variable depending on pedo-climatic conditions. LAI is very important in determining the carbon and water balance of a stand, but is difficult to measure during a complete stand rotation and at large scales. Remote-sensing methods allowing the retrieval of LAI time series with accuracy and precision are therefore necessary. Here, we tested two methods for LAI estimation from MODIS 250 m resolution red and near-infrared (NIR) reflectance time series. The first method involved the inversion of a coupled model of leaf reflectance and transmittance (PROSPECT4), soil reflectance (SOILSPECT) and canopy radiative transfer (4SAIL2). Model parameters other than the LAI were either fixed to measured constant values, or allowed to vary seasonally and/or with stand age according to trends observed in field measurements. The LAI was assumed to vary throughout the rotation following a series of alternately increasing and decreasing sigmoid curves. The parameters of each sigmoid curve that allowed the best fit of simulated canopy reflectance to MODIS red and NIR reflectance data were obtained by minimization techniques. The second method was based on a linear relationship between the LAI and values of the GEneralized Soil Adjusted Vegetation Index (GESAVI), which was calibrated using destructive LAI measurements made in two seasons on Eucalyptus stands of different ages and productivity levels. The ability of each approach to reproduce field-measured LAI values was assessed, and uncertainty in the results and parameter sensitivities were examined. Both methods offered a good fit between measured and estimated LAI (R^2 = 0.80 and R^2 = 0.62 for the model inversion and GESAVI-based methods, respectively), but the GESAVI-based method overestimated the LAI at young ages. (C) 2010 Elsevier Inc. All rights reserved.
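
The second (empirical) method lends itself to a short sketch: compute a soil-adjusted index from red/NIR reflectance and calibrate a linear LAI relation against destructive measurements. The GESAVI form assumed here, (NIR − a·RED − b)/(RED + Z), and all soil-line parameters, reflectances and LAI values are assumptions for illustration only.

```python
import numpy as np

# Assumed GESAVI form with hypothetical soil-line parameters a, b and
# adjustment term Z.
a_soil, b_soil, Z = 1.1, 0.02, 0.35

def gesavi(red, nir):
    return (nir - a_soil * red - b_soil) / (red + Z)

# Hypothetical field campaign: reflectances and destructive LAI measurements.
red = np.array([0.06, 0.05, 0.04, 0.05, 0.03])
nir = np.array([0.28, 0.33, 0.38, 0.30, 0.41])
lai_destructive = np.array([1.8, 2.6, 3.5, 2.2, 4.0])

index = gesavi(red, nir)
slope, intercept = np.polyfit(index, lai_destructive, 1)   # linear calibration
print(f"LAI = {slope:.2f} * GESAVI + {intercept:.2f}")
print("estimated LAI:", np.round(slope * index + intercept, 2))
```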

Relevance:

30.00%

Publisher:

Abstract:

Correct modeling of the partitioning of root water uptake over depth is an important issue in hydrological and crop growth models. Recently, a physically based model describing root water uptake was developed at the single-root scale and upscaled to the root-system scale assuming a homogeneous distribution of roots within each soil layer. Root water uptake partitioning is calculated over soil layers or compartments as a function of the respective soil hydraulic conditions, specifically the soil matric flux potential, root characteristics and a root system efficiency factor that compensates for within-layer root system heterogeneities. The performance of this model was tested in an experiment performed in two-compartment split-pot lysimeters with sorghum plants. The compartments were submitted to different irrigation cycles, resulting in contrasting water contents over time. The root system efficiency factor was determined to be about 0.05. Release of water from roots to soil was predicted and observed on several occasions during the experiment; however, the model predictions suggested root water release to occur more often and at a higher rate than observed. This may be due to internal root system resistances not being considered, thereby overestimating the ease with which roots can act as conductors of water. When these erroneous predictions are excluded from the dataset, statistical indices show the model performance to be good.
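
A minimal reading of the partitioning rule is sketched below: transpiration demand is split over layers in proportion to each layer's matric flux potential. This is not the exact upscaled model of the cited work (root characteristics, the 0.05 efficiency factor and compensation effects are omitted), and all numbers are invented.

```python
import numpy as np

transpiration_demand = 4.0                                   # mm/day, hypothetical
matric_flux_potential = np.array([2.0e-4, 8.0e-5, 1.5e-5])   # per layer, arbitrary units

# Partition the demand in proportion to each layer's matric flux potential.
weights = matric_flux_potential / matric_flux_potential.sum()
uptake = weights * transpiration_demand
for i, (w, u) in enumerate(zip(weights, uptake), start=1):
    print(f"layer {i}: fraction = {w:.2f}, uptake = {u:.2f} mm/day")
```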

Relevance:

30.00%

Publisher:

Abstract:

A reversible linear master equation model is presented for pressure- and temperature-dependent bimolecular reactions proceeding via multiple long-lived intermediates. This kinetic treatment, which applies when the reactions are measured under pseudo-first-order conditions, facilitates accurate and efficient simulation of the time dependence of the populations of reactants, intermediate species and products. Detailed exploratory calculations have been carried out to demonstrate the capabilities of the approach, with applications to the bimolecular association reaction C3H6 + H ⇌ C3H7 and the bimolecular chemical activation reaction C2H2 + ¹CH2 → C3H3 + H. The efficiency of the method can be dramatically enhanced through the use of a diffusion approximation to the master equation, and a methodology for exploiting the sparse structure of the resulting rate matrix is established.
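
Under pseudo-first-order conditions the master equation is a linear ODE system dp/dt = K p, which the sketch below solves with a matrix exponential for a three-state reactant/intermediate/product scheme. The rate coefficients are invented; for the large, sparse K matrices discussed in the abstract one would use sparse solvers rather than a dense exponential.

```python
import numpy as np
from scipy.linalg import expm

# Three-state scheme: reactants <-> intermediate -> products, with invented
# pseudo-first-order rate coefficients (s^-1).  Columns of K sum to zero, so
# total population is conserved.
k_f, k_b, k_p = 2.0e3, 5.0e2, 1.0e3
K = np.array([[-k_f,  k_b,          0.0],
              [ k_f, -(k_b + k_p),  0.0],
              [ 0.0,  k_p,          0.0]])

p0 = np.array([1.0, 0.0, 0.0])        # start with pure reactants
for t in (0.0, 5e-4, 2e-3, 1e-2):
    p = expm(K * t) @ p0
    print(f"t = {t:7.1e} s  populations = {np.round(p, 3)}")
```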

Relevance:

30.00%

Publisher:

Abstract:

Previous work has identified several shortcomings in the ability of four spring wheat models and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models, where the final soil water and nitrogen conditions of one crop define the starting conditions of the following crop. In an attempt to overcome these limitations and to reconcile a range of modelling approaches, existing model components that worked demonstrably well were combined with new components for aspects where existing capabilities were inadequate. This resulted in the Integrated Wheat Model (I_WHEAT), which was developed as a module of the cropping systems model APSIM. To increase the predictive capability of the model, process detail was reduced, where possible, by replacing groups of processes with conservative, biologically meaningful parameters. I_WHEAT does not contain a soil water or soil nitrogen balance; these are present as other modules of APSIM. In I_WHEAT, yield is simulated using a linear increase in harvest index, whereby nitrogen or water limitations can lead to early termination of grain filling and hence cessation of the harvest index increase. Dry matter increase is calculated either from the amount of intercepted radiation and the radiation conversion efficiency or from the amount of water transpired and the transpiration efficiency, depending on the most limiting resource. Leaf area and tiller formation are calculated from thermal time and a cultivar-specific phyllochron interval. Nitrogen limitation first reduces leaf area and then affects radiation conversion efficiency as it becomes more severe. Water or nitrogen limitations result in reduced leaf expansion, accelerated leaf senescence or tiller death. This reduces the radiation load on the crop canopy (i.e. the demand for water) and can make nitrogen available for translocation to other organs. Sensitive feedbacks between light interception and dry matter accumulation are avoided by having environmental effects act directly on leaf area development, rather than via biomass production. This makes the model more stable across environments without losing the interactions between the different external influences. When comparing model output with models tested previously using data from a wide range of agro-climatic conditions, yield and biomass predictions were equal to the best of those models, but improvements could be demonstrated in the simulation of leaf area dynamics in response to water and nitrogen supply, kernel nitrogen content, and total water and nitrogen use. I_WHEAT does not require calibration for any of the environments tested. Further model improvement should concentrate on improving phenology simulations, a more thorough derivation of coefficients to describe leaf area development and a better quantification of some processes related to nitrogen dynamics. (C) 1998 Elsevier Science B.V.
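
The limiting-resource rule for dry matter gain described above can be sketched in a few lines: the daily increment is the lesser of a radiation-driven estimate (intercepted radiation × radiation conversion efficiency) and a water-driven estimate (transpiration × transpiration efficiency). Coefficients and daily driving values are hypothetical, not I_WHEAT parameters.

```python
# Hypothetical efficiencies and daily driving variables.
radiation_use_efficiency = 1.2   # g dry matter per MJ intercepted radiation
transpiration_efficiency = 5.0   # g dry matter per kg water transpired (VPD term omitted)

days = [
    # (intercepted radiation MJ/m2, transpiration kg/m2)
    (8.0, 3.0),
    (9.0, 1.0),    # water-limited day
    (4.0, 2.5),    # radiation-limited day
]

biomass = 0.0
for rad, transp in days:
    gain = min(radiation_use_efficiency * rad,
               transpiration_efficiency * transp)   # most limiting resource wins
    biomass += gain
    print(f"radiation-driven: {radiation_use_efficiency * rad:5.1f}  "
          f"water-driven: {transpiration_efficiency * transp:5.1f}  "
          f"daily gain: {gain:5.1f} g/m2")
print(f"accumulated biomass: {biomass:.1f} g/m2")
```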

Relevance:

30.00%

Publisher:

Abstract:

A mixture model for long-term survivors has been adopted in various fields such as biostatistics and criminology where some individuals may never experience the type of failure under study. It is directly applicable in situations where the only information available from follow-up on individuals who will never experience this type of failure is in the form of censored observations. In this paper, we consider a modification to the model so that it still applies in the case where during the follow-up period it becomes known that an individual will never experience failure from the cause of interest. Unless a model allows for this additional information, a consistent survival analysis will not be obtained. A partial maximum likelihood (ML) approach is proposed that preserves the simplicity of the long-term survival mixture model and provides consistent estimators of the quantities of interest. Some simulation experiments are performed to assess the efficiency of the partial ML approach relative to the full ML approach for survival in the presence of competing risks.
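
The long-term survivor mixture model has the survival function S(t) = pi + (1 − pi)·S_u(t), where pi is the immune fraction. The sketch below writes the corresponding likelihood contributions, including the extra case of individuals known never to fail, using an exponential susceptible component and invented data; the paper's partial ML estimator is not reproduced.

```python
import numpy as np

# Cure-fraction mixture model with an exponential susceptible component
# S_u(t) = exp(-lam * t); parameters and data are invented.
pi_, lam = 0.3, 0.2           # cure fraction and hazard of the susceptible group

def log_lik(times, status):
    """status: 1 = observed failure, 0 = censored, 2 = known never to fail."""
    ll = 0.0
    for t, s in zip(times, status):
        su = np.exp(-lam * t)
        if s == 1:                      # failure from the cause of interest
            ll += np.log((1 - pi_) * lam * su)
        elif s == 0:                    # censored: could be either type
            ll += np.log(pi_ + (1 - pi_) * su)
        else:                           # known long-term survivor
            ll += np.log(pi_)
    return ll

times = np.array([1.2, 3.5, 0.7, 4.0, 2.2])
status = np.array([1, 0, 1, 2, 0])
print(f"log-likelihood at (pi={pi_}, lambda={lam}): {log_lik(times, status):.3f}")
```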