7 results for lean implementation time

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo


Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVE: Due to their toxicity, diesel emissions have been subjected to progressively more restrictive regulations in developed countries. However, in Brazil, the implementation of the Cleaner Diesel Technologies policy (Euro IV standards for vehicles produced in 2009 and low-sulfur diesel with 50 ppm of sulfur) was postponed until 2012 without a comprehensive analysis of the effect of this delay on public health parameters. We aimed to evaluate the impact of the delay in implementing the Cleaner Diesel Technologies policy on health indicators and monetary health costs in Brazil. METHODS: The primary estimator of exposure to air pollution was the concentration of ambient fine particulate matter (particles with aerodynamic diameter <2.5 μm [PM2.5]). This parameter was measured daily in six Brazilian metropolitan areas during 2007-2008. We calculated 1) the projected reduction in PM2.5 that would have been achieved if the Euro IV standards had been implemented in 2009 and 2) the expected reduction after implementation in 2012. The difference between these two time curves was transformed into health outcomes using previously established dose-response curves. The economic valuation was performed using the DALY (disability-adjusted life years) method. RESULTS: The delay in implementing the Cleaner Diesel Technologies policy will result in an estimated excess of 13,984 deaths up to 2040. Health expenditures are projected to increase by nearly US$ 11.5 billion over the same period. CONCLUSIONS: The present results indicate that a significant health burden will occur because of the postponement of the Cleaner Diesel Technologies policy. These results also reinforce the concept that health effects must be considered when revising fuel and emission policies.
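
As a rough illustration of the accounting described above, the sketch below converts the gap between two hypothetical PM2.5 trajectories into attributable deaths through a log-linear dose-response relationship and then into a DALY-based monetary cost. All parameters (relative risk per 10 μg/m³, baseline deaths, DALYs per death, value per DALY) are invented placeholders, not the study's values.

```python
# Hypothetical sketch of the abstract's accounting logic: turn the gap between two
# PM2.5 scenarios into attributable deaths and a DALY-based cost. All numbers below
# (relative risk, baseline mortality, DALYs per death, value per DALY) are
# illustrative placeholders, not the study's actual parameters.
import numpy as np

years = np.arange(2009, 2041)                      # projection horizon used in the abstract
pm25_delayed = np.full(years.size, 28.0)           # ug/m3 if Euro IV starts only in 2012
pm25_early   = np.full(years.size, 24.0)           # ug/m3 if Euro IV had started in 2009
pm25_early[:3] = 28.0 - np.linspace(0, 4.0, 3)     # hypothetical phase-in of the benefit

rr_per_10ug = 1.06         # hypothetical relative risk per 10 ug/m3 PM2.5 (all-cause mortality)
baseline_deaths = 90_000   # hypothetical annual baseline deaths in the studied metro areas
daly_per_death = 12.0      # hypothetical years of healthy life lost per premature death
usd_per_daly = 20_000.0    # hypothetical monetary value assigned to one DALY

def attributable_deaths(delta_pm25):
    """Excess deaths from a PM2.5 increment via a log-linear dose-response curve."""
    rr = rr_per_10ug ** (delta_pm25 / 10.0)
    attributable_fraction = (rr - 1.0) / rr
    return baseline_deaths * attributable_fraction

excess = attributable_deaths(pm25_delayed - pm25_early)   # per-year excess deaths
total_deaths = excess.sum()
total_cost = total_deaths * daly_per_death * usd_per_daly

print(f"Excess deaths 2009-2040: {total_deaths:,.0f}")
print(f"DALY-based cost: US$ {total_cost/1e9:,.1f} billion")
```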

Relevance:

40.00%

Publisher:

Abstract:

Companies are currently choosing to integrate logics and systems to achieve better solutions, including efforts to combine the logic of the material requirements planning (MRP) system with lean production systems. The purpose of this article was to design an MRP module as part of the implementation of an enterprise resource planning (ERP) system in a company that produces agricultural implements and has used the lean production system since 1998. The proposal is based on innovation theory, innovation network theory, lean production systems, ERP systems and hybrid production systems, which combine components of MRP systems with concepts of lean production systems. The analytical approach of innovation networks enables verification of the links and relationships among companies and among departments of the same corporation. The analysis begins with the MRP implementation project carried out in a Brazilian metallurgical company and follows the operationalisation of the MRP project through to the stabilisation of production. The main point is that the MRP system should support the company's operations by providing the agility to respond in time to demand fluctuations, facilitating the creation process and the control of branch offices in other countries that use components produced at the headquarters, hence ensuring more accurate inventory estimates. Consequently, the article presents the Enterprise Knowledge Development organisational modelling methodology in order to represent further models (goals, actors and resources, business rules, business processes and concepts) that should be included in this MRP implementation process for the new configuration of the production system.
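
For readers unfamiliar with what the MRP component actually computes, the sketch below shows the textbook netting and lead-time-offset logic (gross requirements minus on-hand stock and scheduled receipts, lot-for-lot ordering, release offset by the lead time). Item names, quantities and lead times are invented examples, not the company's system.

```python
# Minimal sketch of the core MRP calculation: netting gross requirements against
# on-hand stock and scheduled receipts, then offsetting planned orders by the lead
# time. Item names, quantities and lead times are invented examples.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    lead_time: int            # in periods (e.g. weeks)
    on_hand: int
    scheduled_receipts: dict  # period -> quantity already on order

def mrp_plan(item, gross_requirements):
    """Return planned order releases per period for one item (lot-for-lot sizing)."""
    on_hand = item.on_hand
    planned_releases = {}
    for period, gross in enumerate(gross_requirements):
        available = on_hand + item.scheduled_receipts.get(period, 0)
        net = max(0, gross - available)
        if net > 0:
            # Release the order lead_time periods earlier so it arrives on time.
            release_period = period - item.lead_time
            planned_releases[release_period] = planned_releases.get(release_period, 0) + net
        on_hand = max(0, available - gross)
    return planned_releases

blade = Item(name="harvester blade", lead_time=2, on_hand=120, scheduled_receipts={1: 50})
demand = [40, 60, 80, 100, 90, 70]   # gross requirements per period from the master schedule
print(mrp_plan(blade, demand))       # {0: 10, 1: 100, 2: 90, 3: 70}
```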

Relevance:

30.00%

Publisher:

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖_∞-minimization problem (the fact that ‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
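
The two norm facts quoted above can be checked numerically on toy data: minimizing ‖F_P‖_q with weights w selects the same candidate as minimizing ‖F_P‖_1 with weights w^q, and as q grows the q-norm ranking approaches the max-norm (ℓ∞) ranking. The weight vectors below are arbitrary toy examples, not outputs of GC_max or GC_sum.

```python
# Numerical illustration of two facts quoted above, on toy boundary-weight sets:
# (1) minimizing the l_q energy with weights w is the same as minimizing the l_1
#     energy with weights w**q, and (2) the l_q norms approach the l_inf (max) norm
#     as q grows. The weight vectors below are arbitrary toy examples.
import numpy as np

# Each row = weights w(e) on the boundary edges of one candidate segmentation P.
candidates = {
    "P1": np.array([5.0, 1.0, 1.0, 1.0]),   # one heavy edge
    "P2": np.array([3.0, 3.0, 3.0, 3.0]),   # several medium edges
}

def energy_q(w, q):
    return np.sum(w ** q) ** (1.0 / q)      # ||F_P||_q

for q in [1, 2, 4, 16, 64]:
    lq = {name: energy_q(w, q) for name, w in candidates.items()}
    l1_on_wq = {name: np.sum(w ** q) for name, w in candidates.items()}
    # Same argmin whether we minimize ||F_P||_q with w or ||F_P||_1 with w**q.
    assert min(lq, key=lq.get) == min(l1_on_wq, key=l1_on_wq.get)
    print(q, {k: round(v, 2) for k, v in lq.items()})

# For large q the q-norm minimizer (P2) matches the max-norm minimizer,
# while the l_1 energy alone would pick P1.
print("max norms:", {name: w.max() for name, w in candidates.items()})
```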

Relevance:

30.00%

Publisher:

Abstract:

Background: Several models have been designed to predict the survival of patients with heart failure. While these models are widely available and used both for risk stratification and for deciding among treatment options at the individual level, they have several limitations. Specifically, some clinical variables may have an influence on prognosis that changes over time. Statistical models that include such characteristics may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival while allowing time-varying effects for covariates known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with functional class III and IV heart failure between 2002 and 2004 and followed up to 2006 were analyzed using the Cox proportional hazards model, variations of the Cox model, and Aalen's additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. For the patients under study, age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was suggested for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger initial effect. High creatinine, associated with an increased risk of dying, also presented a stronger initial effect. The impacts of age and sodium were constant over time. Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables. Incorporating covariates with time-varying effects into heart failure prognostic models may reduce bias and increase the specificity of such models.
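
One standard way to implement the kind of time-varying effect described above is to split each subject's follow-up at a cut point and interact the covariate with a late-period indicator in a start/stop (counting-process) Cox model. The sketch below does this with the lifelines package on simulated toy data; the cut point, covariate and coefficients are illustrative assumptions, not the study's estimates.

```python
# Hedged sketch (not the paper's code) of one common way to let a covariate's effect
# change over time in a Cox model: split each subject's follow-up at a cut point,
# interact the covariate with a "late period" indicator, and fit a start/stop Cox
# model. Uses the lifelines package; all data below are simulated toy values.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
n, cut = 300, 12.0                      # subjects, follow-up split point (months)
hb = rng.normal(13.0, 1.5, n)           # toy "hemoglobin" covariate (g/dL)

# Simulate piecewise-exponential survival: hemoglobin is strongly protective early on
# (beta = -0.30) and has a weaker effect later (beta = -0.05).
def draw_time(x):
    h_early = 0.9 * np.exp(-0.30 * (x - 13.0))
    t = rng.exponential(1.0 / h_early)
    if t < cut:
        return t
    h_late = 0.9 * np.exp(-0.05 * (x - 13.0))
    return cut + rng.exponential(1.0 / h_late)

times = np.array([draw_time(x) for x in hb])
events = times < 48.0                    # administrative censoring at 48 months
times = np.minimum(times, 48.0)

# Long (start, stop] format: one row per subject per period, event only on the last row.
rows = []
for i, (x, t, e) in enumerate(zip(hb, times, events)):
    if t <= cut:
        rows.append((i, 0.0, t, bool(e), x, 0.0))
    else:
        rows.append((i, 0.0, cut, False, x, 0.0))
        rows.append((i, cut, t, bool(e), x, x))  # hb_late = hb in the late period, else 0
long_df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "hb", "hb_late"])

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # hb = early effect; hb + hb_late = effect after the cut point
```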

Relevance:

30.00%

Publisher:

Abstract:

Objective: This research aims to assess apprentices' and trainees' work conditions, psychosocial factors at work, and health symptoms after they join the labor force. Background: There are over 3.5 million young working students in Brazil, and this growing contingent faces difficult working conditions such as work pressure, heavy workloads, and a lack of safety training. Method: This study was carried out in a nongovernmental organization (NGO) with 40 young members of a first-job program in the city of Sao Paulo, Brazil. They filled out a comprehensive questionnaire covering sociodemographic variables, working conditions, and health symptoms. Individual and collective semi-structured interviews were also conducted. The empirical data were examined using content analysis. Results: The majority of participants mentioned difficulties in dealing with the pressure and their share of responsibilities at work. Body pains, headaches, sleep deprivation during the workweek, and frequent colds were mentioned. A lack of appropriate task and safety training contributed to the occurrence of work injuries. Conclusion: Having a full-time job during the day coupled with evening high school attendance may jeopardize these young people's health and future. Application: This study can contribute to the revision and implementation of work training programs for adolescents. It can also help in the creation of more sensible policies regarding youth employment.

Relevance:

30.00%

Publisher:

Abstract:

The growing demand for industrial products is imposing an increasingly intense level of competitiveness on industrial operations. Meanwhile, the convergence of information technology (IT) and automation technology (AT) is showing itself to be a tool of great potential for the modernization and improvement of industrial plants. However, for this technology to fully achieve its potential, several obstacles need to be overcome, including demonstrating the reasoning behind the estimates of benefits, investments and risks used to plan the implementation of corporate technology solutions. This article focuses on the evolutionary development of the processes for planning and adopting IT & AT convergence. It proposes incorporating IT & AT convergence practices into Lean Thinking/Six Sigma via a method for planning the convergence of technological activities, known as the Smarter Operation Transformation (SOT) methodology. The article illustrates the SOT methodology through its application in a Brazilian company in the consumer goods sector. This application shows that IT & AT convergence is achievable with low investment while reducing the risk of not meeting the targets of key indicators.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work is to present an efficient method for phasor estimation based on a compact Genetic Algorithm (cGA) implemented in a Field Programmable Gate Array (FPGA). To validate the proposed method, an Electrical Power System (EPS) simulated by the Alternative Transients Program (ATP) provides the data used by the cGA; these data are as close as possible to the actual data provided by the EPS. Real-life situations such as islanding, sudden load increases and permanent faults were considered. The implementation aims to exploit the inherent parallelism of Genetic Algorithms in a compact and optimized way, making the approach an attractive option for practical real-time estimation applications in Phasor Measurement Units (PMUs).
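
As a software-level illustration of the approach (not the paper's FPGA design), the sketch below uses a compact Genetic Algorithm, which replaces the population with a single probability vector, to recover a phasor's amplitude and phase from noisy samples of a 60 Hz signal. The signal parameters, bit widths and virtual population size are arbitrary choices made for the example.

```python
# Illustrative sketch (not the paper's FPGA implementation) of a compact Genetic
# Algorithm estimating a phasor's amplitude and phase from noisy 60 Hz samples.
# The cGA keeps only a probability vector instead of a full population, which is
# what makes it attractive for compact hardware implementations.
import numpy as np

rng = np.random.default_rng(1)
f, fs = 60.0, 1920.0                           # signal frequency and sampling rate (Hz)
t = np.arange(32) / fs
true_amp, true_phase = 10.0, 0.7               # phasor to be recovered (toy values)
samples = true_amp * np.cos(2 * np.pi * f * t + true_phase) + rng.normal(0, 0.1, t.size)

BITS = 12                                      # bits per parameter (amplitude, phase)

def decode(bits):
    """Map a 2*BITS bit string to (amplitude in [0, 20], phase in [-pi, pi])."""
    def to_frac(b):
        return int("".join(map(str, b)), 2) / (2 ** BITS - 1)
    amp = 20.0 * to_frac(bits[:BITS])
    phase = -np.pi + 2 * np.pi * to_frac(bits[BITS:])
    return amp, phase

def fitness(bits):
    amp, phase = decode(bits)
    model = amp * np.cos(2 * np.pi * f * t + phase)
    return -np.sum((model - samples) ** 2)     # higher is better

# Compact GA: a probability vector replaces the population.
p = np.full(2 * BITS, 0.5)
pop_size = 200                                 # virtual population size (update step 1/pop_size)
for _ in range(20000):
    a = (rng.random(p.size) < p).astype(int)
    b = (rng.random(p.size) < p).astype(int)
    winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
    differ = winner != loser
    p[differ] += np.where(winner[differ] == 1, 1.0, -1.0) / pop_size
    np.clip(p, 0.0, 1.0, out=p)

best = (p > 0.5).astype(int)
print("estimated amplitude/phase:", decode(best))
print("true amplitude/phase:", (true_amp, true_phase))
```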