993 results for Load factor


Relevance:

60.00%

Publisher:

Abstract:

In wood-processing industries (sawmills), timber is sawn on equipment such as band saws, circular saws, jointers and thicknessers that mechanically transform this resource and are driven by electric motors, which are not uncommonly poorly sized or operated under overload. A factor that is often missing in these industries, yet is of fundamental importance in the production process, is energy efficiency, which is achieved both through technological innovation and through all the practices and policies aimed at lowering energy consumption, reducing energy costs and increasing the amount of energy available without any change in generation. To this end, during the design of an electrical installation, both as a whole and in its various sectors, it is necessary to investigate, consider and use the variables and factors that put energy efficiency into practice. In this paper, these factors were therefore calculated and analyzed for a wood-processing industry (sawmill) in the municipality of Taquarivaí - SP, namely: active power, power factor, demand factor and load factor. They were low in relation to the values reported in the literature, which occurs when several devices are connected at the same time and is due to the conditions of wood processing: the motors show large variations in electricity consumption during sawing, owing to the effort under load and the idle periods between each machining operation on the equipment.
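The four indices named in this abstract have standard textbook definitions: power factor is the ratio of active to apparent power, load factor is the ratio of average to peak demand over the metering period, and demand factor is the ratio of peak demand to the total connected load. The sketch below only illustrates these definitions; the demand series, connected load and meter readings are hypothetical and are not data from the Taquarivaí sawmill.

```python
# Minimal sketch (not from the paper): computing the indices the abstract names
# from a hypothetical series of 15-minute demand readings. All values are
# illustrative assumptions, not measurements from the sawmill studied.

def power_factor(active_kw, apparent_kva):
    """Power factor = active power / apparent power."""
    return active_kw / apparent_kva

def load_factor(demand_series_kw):
    """Load factor = average demand / peak demand over the metering period."""
    return sum(demand_series_kw) / len(demand_series_kw) / max(demand_series_kw)

def demand_factor(peak_demand_kw, connected_load_kw):
    """Demand factor = peak demand / total connected (installed) load."""
    return peak_demand_kw / connected_load_kw

# Hypothetical 15-minute demand readings for one shift (kW)
demand = [12.0, 35.5, 8.2, 40.1, 5.0, 38.7, 9.9, 41.3]
connected_load = 75.0                    # sum of motor nameplate ratings, kW (assumed)
measured_kw, measured_kva = 30.0, 41.0   # assumed meter readings

print(f"power factor : {power_factor(measured_kw, measured_kva):.2f}")
print(f"load factor  : {load_factor(demand):.2f}")
print(f"demand factor: {demand_factor(max(demand), connected_load):.2f}")
```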

Relevance:

60.00%

Publisher:

Abstract:

The objective of this work is to determine the membership functions for the construction of a fuzzy controller to evaluate the energy situation of a company with respect to its load and power factors. The energy assessment of a company is performed by technicians and experts based on the load and power factor indices and on an analysis of the machines used in the production processes. This assessment is conducted periodically to detect whether the procedures followed by employees regarding the use of electric energy are correct. With a fuzzy controller, this assessment can be automated. The construction of a fuzzy controller begins with the definition of the input and output variables and their associated membership functions. An inference method and an output processor (defuzzifier) must also be defined. Finally, the help of technicians and experts is needed to build a rule base, consisting of the answers these professionals provide as a function of the characteristics of the input variables. The controller proposed in this paper has the load and power factors as input variables and the company's situation as output. Their membership functions represent fuzzy sets named by linguistic qualities, such as "VERY BAD" and "GOOD". With the Mamdani inference method and the Center of Area output processor chosen, the structure of the fuzzy controller is established; it only remains for technicians and experts in the energy field to determine a set of rules appropriate to the chosen company. Thus, the interpretation of the load and power factors by software meets the need for a single index that indicates, on an overall basis, how rationally and efficiently the energy is being used.
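As a rough illustration of the kind of controller described, the sketch below builds a two-input Mamdani system with centroid ("Center of Area") defuzzification using the scikit-fuzzy library. The library choice, universes, membership functions and the two rules are assumptions for illustration only; the paper leaves the rule base to be supplied by technicians and experts in the energy field.

```python
# Illustrative Mamdani controller with load factor and power factor as inputs
# and the company's energy "situation" as output. Membership functions and
# rules are placeholders, not the ones elicited from experts in the paper.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

load_f = ctrl.Antecedent(np.arange(0.0, 1.01, 0.01), 'load_factor')
power_f = ctrl.Antecedent(np.arange(0.0, 1.01, 0.01), 'power_factor')
situation = ctrl.Consequent(np.arange(0.0, 10.01, 0.1), 'situation')  # centroid defuzzification by default

load_f['low'] = fuzz.trapmf(load_f.universe, [0.0, 0.0, 0.3, 0.5])
load_f['high'] = fuzz.trapmf(load_f.universe, [0.4, 0.7, 1.0, 1.0])
power_f['low'] = fuzz.trapmf(power_f.universe, [0.0, 0.0, 0.7, 0.92])
power_f['high'] = fuzz.trapmf(power_f.universe, [0.85, 0.95, 1.0, 1.0])
situation['very_bad'] = fuzz.trimf(situation.universe, [0, 0, 4])
situation['good'] = fuzz.trimf(situation.universe, [6, 10, 10])

rules = [
    ctrl.Rule(load_f['low'] | power_f['low'], situation['very_bad']),
    ctrl.Rule(load_f['high'] & power_f['high'], situation['good']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['load_factor'] = 0.55
sim.input['power_factor'] = 0.93
sim.compute()
print(sim.output['situation'])  # crisp index for the company's energy situation
```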

Relevance:

60.00%

Publisher:

Abstract:

This paper presents the application of fuzzy theory to support the decision to implement an energy efficiency program in sawmills processing Pinus taeda and Pinus elliottii. The use of a system based on fuzzy theory for the analysis of consumption and of the specific factors involved is justified by the diversity of the indices and factors concerned. With fuzzy theory, a reliable system for verifying actual energy efficiency can be built. The indices and factors characteristic of this industrial activity were measured and used as the basis for the fuzzy system. Two subsystems were developed: a management subsystem and a technological subsystem. The management subsystem involves management practices in energy efficiency, the maintenance of plant and equipment, and the presence of qualified staff. The technological subsystem involves the power factor, the load factor, the demand factor and the specific consumption. The first response provides the possibility of increasing energy efficiency, and the second the level of energy efficiency in the industry studied. With this tool, energy conservation and energy efficiency programs can be developed for the timber industry, which has wide variation in its production processes. The systems developed can also be used in other industrial activities, provided that the indices and features characteristic of the sectors involved are used.

Relevance:

60.00%

Publisher:

Abstract:

Some factors, including deregulation in the U.S. and liberalization in Europe of the airline industry, are essential to understanding why the number of partnership agreements between airlines has increased over the last 25 years. These events, coupled with the continuous economic downturn and the 9/11 catastrophe, seem to be the perfect framework for the tendency to develop airline strategic alliances. However, it has been observed that this trend was not followed during the period 2005-2008. The purpose of this paper is to analyze whether a benefit was experienced by the major airlines that became members of the current three big alliances compared with the major airlines that decided not to become members, or were not admitted into the alliances, during 2005-2008. The methodology of this report includes an analysis of several airlines' performance figures, namely revenue passenger kilometers (RPKs), the passenger load factor (PLF) and the market share (MS). The figures are compared between the aligned airlines and others with similar business models. The value of this paper is to reveal whether being aligned provides advantages to major airlines under a bearish airline market in a globalized environment.
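For reference, the passenger load factor used as a performance figure here is conventionally defined as revenue passenger kilometers divided by available seat kilometers (ASKs). The snippet below applies that definition to purely hypothetical traffic figures; it does not reproduce any airline data from the paper.

```python
# Illustrative only: the standard passenger load factor definition applied to
# hypothetical annual traffic figures (billions of kilometres).
def passenger_load_factor(rpk, ask):
    """PLF = revenue passenger kilometres / available seat kilometres."""
    return rpk / ask

aligned = {"rpk": 95.0, "ask": 118.0}     # hypothetical alliance member
unaligned = {"rpk": 40.0, "ask": 54.0}    # hypothetical non-member with a similar model

for name, t in {"aligned": aligned, "unaligned": unaligned}.items():
    print(f"{name}: PLF = {passenger_load_factor(t['rpk'], t['ask']):.1%}")
```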

Relevance:

60.00%

Publisher:

Abstract:

The safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the pre-existing state of stress is hard to know, and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis is applicable to this kind of structure, and it has been regarded as a suitable tool ever since. In cases where no sliding occurs, the standard limit analysis theorems constitute an excellent tool because of their simplicity and robustness: it is not necessary to know the actual stress state; it is enough to find any equilibrium solution that satisfies the limit conditions of the material, in the certainty that its load will be equal to or lower than the actual load at the onset of collapse. Furthermore, this collapse-onset load is unique (uniqueness theorem) and can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, when collapse mechanisms involving sliding may exist, any solution must satisfy both the static and the kinematic constraints, as well as a special kind of disjunctive constraints linking the two, which can be formulated as complementarity constraints. In the latter case the existence of a unique solution is not guaranteed, so other methods are needed to treat the uncertainty associated with its multiplicity. In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This approach is easy to formulate mathematically but computationally intractable, because the complementarity constraints (y z = 0, y ≥ 0, z ≥ 0) are neither convex nor smooth: the resulting decision problem is NP-complete and the corresponding global optimization problem is NP-hard. Nevertheless, obtaining one solution (with no guarantee of success) is an affordable problem. This thesis proposes to solve that problem by Sequential Linear Programming, taking advantage of the special structure of the complementarity constraints, which written in bilinear form read y z = 0, y ≥ 0, z ≥ 0, and of the fact that the complementarity error (in bilinear form) is an exact penalty function. When it comes to finding the worst solution, however, the equivalent global optimization problem is intractable (NP-hard); moreover, as long as no maximum or minimum principle has been demonstrated, it is doubtful whether the effort spent in approximating this minimum is justified. Chapter 5 therefore obtains, for a simple example, the frequency distribution of the load factor over all possible collapse-onset solutions, by Monte Carlo sampling of solutions contrasted against an exact polytope computation method. The ultimate goal is to establish to what extent the search for the absolute minimum is justified and to propose an alternative, probability-based approach to safety assessment. The frequency distributions of the load factors obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, and all the more so the more perfect and continuous the contact is. These results confirm the interest of developing new probabilistic methods. Chapter 6 proposes a method of this type, based on obtaining multiple solutions from random starting points and qualifying the results by means of Order Statistics; the purpose is to determine, for each solution, the probability of the onset of collapse. In line with the reduction of expectations proposed by Ordinal Optimization, the method is applied to obtain a solution that lies within a given percentage of the worst ones. Finally, Chapter 7 proposes hybrid methods, incorporating metaheuristics, for the cases in which the search for the global minimum is justified.
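The order-statistics argument behind the Ordinal Optimization step in Chapter 6 can be stated compactly: if collapse-onset solutions are sampled independently from random starting points, the probability that at least one of n samples lies within the worst fraction alpha of all solutions is 1 - (1 - alpha)^n. The sketch below only illustrates that sample-size calculation; the "solver" is a placeholder random draw, not the thesis's Sequential Linear Programming formulation.

```python
# Order-statistics reasoning behind Ordinal Optimization: how many independent
# restarts are needed so that, with probability p, at least one sampled
# collapse-onset solution lies within the worst alpha-fraction of all solutions?
# The "solver" below is a placeholder random draw, not the thesis's SLP solver.
import math
import random

def restarts_needed(alpha: float, p: float) -> int:
    """Smallest n with 1 - (1 - alpha)**n >= p."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - alpha))

print(restarts_needed(alpha=0.05, p=0.95))   # 59 restarts to hit the worst 5% w.p. 0.95

# Toy Monte Carlo check with a placeholder load-factor distribution
random.seed(0)
def sample_load_factor():
    return random.gauss(2.0, 0.3)   # stand-in for one solution's load factor

n = restarts_needed(0.05, 0.95)
samples = [sample_load_factor() for _ in range(n)]
print(min(samples))   # candidate for a near-worst (lowest) load factor
```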

Relevance:

60.00%

Publisher:

Abstract:

The efficiency and rational use of energy in public lighting are highly relevant to the electric power system, because they help reduce both the need for investment in the construction of new sources of electricity generation and energy waste. The objective of this research is the development and application of the IDE (energy performance index), based on a fuzzy inference system and on indicators of the efficiency and rationality of electricity use. Fuzzy inference was chosen because of its capacity to reproduce part of human reasoning and to establish relations among the diverse indicators involved. To build the fuzzy inference system, the efficiency and rationality indicators were defined as input variables, the inference method was based on rules produced by a public lighting specialist, and the output is a real number that characterizes the IDE. The efficiency and rationality indicators are divided into two classes: global and specific. The global indicators are FP (power factor), FC (load factor) and FD (demand factor). The specific indicators are FU (utilization factor), ICA (energy consumption per illuminated area), IE (energy intensity) and IL (natural lighting intensity). For the application of this work, the public lighting of the "Armando de Salles Oliveira" university campus of the Universidade de São Paulo was selected and characterized. Thus, using the index developed in this work, the manager of the lighting system is able to evaluate the use of electricity and, in this way, to devise and simulate strategies aimed at saving it.

Relevance:

60.00%

Publisher:

Abstract:

This paper analyzes the impact of load factor, facility and generator types on the productivity of Korean electric power plants. In order to capture important differences in the effect of load policy on power output, we use a semiparametric smooth coefficient (SPSC) model that allows us to model heterogeneous performance across power plants and over time by allowing the underlying technologies to be heterogeneous. The SPSC model accommodates both continuous and discrete covariates. Various specification tests are conducted to assess the performance of the SPSC model. Using a unique generator-level panel dataset spanning the period 1995-2006, we find that the impact of load factor, generator and facility types on power generation varies substantially in terms of magnitude and significance across different plant characteristics. The results have strong implications for generation policy in Korea, as outlined in this study.
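The semiparametric smooth coefficient specification lets the input coefficients vary smoothly with plant characteristics such as the load factor. The sketch below shows one common local-constant kernel estimator of such a model on toy data; it is an illustration of the general technique, not the authors' estimator, and it omits the discrete covariates (generator and facility types) that their SPSC model also accommodates.

```python
# Minimal sketch of a smooth-coefficient regression: output = x' beta(z) + error,
# with beta(.) varying smoothly with a continuous covariate z (e.g. load factor).
# Local-constant kernel weighting is used for brevity; data are simulated.
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def smooth_coef(X, y, z, z0, h):
    """Estimate beta(z0) by kernel-weighted least squares."""
    w = gaussian_kernel((z - z0) / h)           # weights from the smoothing variable
    XtW = X.T * w                               # X' W
    return np.linalg.solve(XtW @ X, XtW @ y)    # (X' W X)^{-1} X' W y

# Toy data: coefficients on the inputs drift with the load factor z
rng = np.random.default_rng(1)
n = 500
z = rng.uniform(0.3, 0.9, n)                    # hypothetical load factors
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.column_stack([1.0 + z, 2.0 * z, 0.5 - z])
y = (X * beta_true).sum(axis=1) + 0.1 * rng.normal(size=n)

print(smooth_coef(X, y, z, z0=0.5, h=0.08))     # approx. [1.5, 1.0, 0.0]
```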

Relevance:

40.00%

Publisher:

Abstract:

The Federal Highway Administration (FHWA) mandated utilizing the Load and Resistance Factor Design (LRFD) approach for all new bridges initiated in the United States after October 1, 2007. To achieve part of this goal, a database for Drilled Shaft Foundation Testing (DSHAFT) was developed and reported on by Garder, Ng, Sritharan, and Roling in 2012. DSHAFT is aimed at assimilating high-quality drilled shaft test data from Iowa and the surrounding regions. It is currently housed on a project website (http://srg.cce.iastate.edu/dshaft) and contains data for 41 drilled shaft tests. The objective of this research was to utilize the DSHAFT database and develop a regional LRFD procedure for drilled shafts in Iowa, with preliminary resistance factors obtained using probability-based reliability theory. This was done by examining the current design and construction practices used by the Iowa Department of Transportation (DOT) as well as the recommendations given in the American Association of State Highway and Transportation Officials (AASHTO) LRFD Bridge Design Specifications and the FHWA drilled shaft guidelines. Various analytical methods were used to estimate the side resistance and end bearing of drilled shafts in clay, sand, intermediate geomaterial (IGM), and rock. Since most of the load test results obtained from Osterberg cell (O-cell) tests do not reach the 1-in. top displacement criterion used by the Iowa DOT or the top displacement criterion of 5% of the shaft diameter recommended by AASHTO, three improved procedures are proposed to generate and extend equivalent top load-displacement curves, enabling the quantification of measured resistances corresponding to these displacement criteria. Using the estimated and measured resistances, regional resistance factors were calibrated following the AASHTO LRFD framework and adjusted to resolve any anomalies observed among the factors. To illustrate the potential for successful use of drilled shafts in Iowa, the design procedure for drilled shaft foundations is demonstrated and the advantages of drilled shafts over driven piles are addressed in two case studies.
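For context, resistance factors in AASHTO-style regional calibrations are often obtained from the closed-form first-order second-moment (FOSM) approximation for lognormal load and resistance, sketched below. This is only an illustration of that generic formula: the report's own calibration follows the AASHTO LRFD framework and may use a different procedure, and the bias factors, coefficients of variation and load statistics shown are placeholder values, not ones derived from DSHAFT.

```python
# Sketch of the closed-form FOSM resistance-factor formula commonly used in
# AASHTO-style calibrations (lognormal load and resistance). Illustrative only:
# the statistics below are placeholders, not values calibrated from DSHAFT.
import math

def fosm_resistance_factor(lam_R, cov_R, beta_T,
                           QD_QL=3.0,               # dead-to-live load ratio (assumed)
                           gam_D=1.25, gam_L=1.75,  # Strength I load factors
                           lam_QD=1.05, cov_QD=0.10,
                           lam_QL=1.15, cov_QL=0.20):
    cov_Q2 = cov_QD**2 + cov_QL**2
    num = lam_R * (gam_D * QD_QL + gam_L) * math.sqrt((1 + cov_Q2) / (1 + cov_R**2))
    den = (lam_QD * QD_QL + lam_QL) * math.exp(
        beta_T * math.sqrt(math.log((1 + cov_R**2) * (1 + cov_Q2))))
    return num / den

# Placeholder resistance bias and COV for one analytical method, target beta = 3.0
print(round(fosm_resistance_factor(lam_R=1.10, cov_R=0.35, beta_T=3.0), 2))
```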

Relevance:

40.00%

Publisher:

Abstract:

Coxsackievirus B3 (CVB3) infection can result in myocarditis, which in turn may lead to a protracted immune response and subsequent dilated cardiomyopathy. Human decay-accelerating factor (DAF), a binding receptor for CVB3, was synthesized as a soluble IgG1-Fc fusion protein (DAF-Fc). In vitro, DAF-Fc was able to inhibit complement activity and block infection by CVB3, although blockade of infection varied widely among strains of CVB3. To determine the effects of DAF-Fc in vivo, 40 adolescent A/J mice were infected with a myopathic strain of CVB3 and given DAF-Fc treatment 3 days before infection, during infection, or 3 days after infection; the mice were compared with virus-only and sham-infected animals. Sections of heart, spleen, kidney, pancreas, and liver were stained with hematoxylin and eosin and submitted to in situ hybridization for both positive-strand and negative-strand viral RNA to determine the extent of myocarditis and viral infection, respectively. Salient histopathologic features, including myocardial lesion area, cell death, calcification and inflammatory cell infiltration, pancreatitis, and hepatitis, were scored without knowledge of the experimental groups. DAF-Fc treatment of mice either preceding or concurrent with CVB3 infection resulted in a significant decrease in myocardial lesion area and cell death and a reduction in the presence of viral RNA. All DAF-Fc treatment groups had reduced infectious CVB3 recoverable from the heart after infection. DAF-Fc may be a novel therapeutic agent for active myocarditis and acute dilated cardiomyopathy if given early in the infectious period, although more studies are needed to determine its mechanism and efficacy.

Relevance:

30.00%

Publisher:

Abstract:

About 95% of HTLV-1-infected patients remain asymptomatic throughout life, and the risk factors associated with the development of related diseases, such as HAM/TSP and ATL, are not fully understood. The human leukocyte antigen-G (HLA-G) molecule, a nonclassical HLA class I molecule encoded by the MHC, is expressed in several pathological conditions, including viral infection, and is related to immunosuppressive effects that allow virus-infected cells to escape the antiviral defense of the host. The 14-bp insertion/deletion polymorphism in exon 8 of the HLA-G gene influences the stability of the transcripts and could be related to the protection of HTLV-1-infected cells and to an increase in proviral load. The present study analyzed, by conventional PCR, the 14-bp insertion/deletion polymorphism of exon 8 of the HLA-G gene in 150 unrelated healthy subjects, 82 symptomatic HTLV-1-infected patients (33 ATL and 49 HAM), and 56 asymptomatic HTLV-1-infected patients (HAC). In addition, the proviral load was determined by quantitative real-time PCR in all infected groups and correlated with the 14-bp insertion/deletion genotypes. The heterozygous genotype frequencies were significantly higher in HAM patients, in the symptomatic group, and in infected patients compared to controls (p < 0.05). The proviral load was higher in the symptomatic group than in the HAC group (p < 0.0005). The comparison of proviral load and genotypes showed that the -14-bp/-14-bp genotype had a higher proviral load than the +14-bp/-14-bp and +14-bp/+14-bp genotypes. Although the HLA-G 14-bp polymorphism does not appear to be associated

Relevance:

30.00%

Publisher:

Abstract:

This study investigates the effects of chronic methionine intake on bradykinin (BK)-induced relaxation. Vascular reactivity experiments were performed on carotid rings from male Wistar rats. Treatment with methionine (0.1, 1 or 2 g kg⁻¹ per day) for 8 and 16 weeks, but not for 2 and 4 weeks, reduced the relaxation induced by BK. Indomethacin, a non-selective cyclooxygenase (COX) inhibitor, and SQ29548, a selective thromboxane A₂ (TXA₂)/prostaglandin H₂ (PGH₂) receptor antagonist, prevented the reduction in BK-induced relaxation observed in carotids from methionine-treated rats. Conversely, AH6809, a selective prostaglandin F₂α (PGF₂α) receptor antagonist, did not alter BK-induced relaxation in carotids from methionine-treated rats. The nitric oxide synthase (NOS) inhibitors L-NAME, L-NNA and 7-nitroindazole reduced the relaxation induced by BK in carotids from both control and methionine-treated rats. In summary, we found that chronic methionine intake impairs the endothelium-dependent relaxation induced by BK and that this effect is due to an increased production of endothelial vasoconstrictor prostanoids (possibly TXA₂) that counteracts the relaxant action displayed by the peptide.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to compare SEMG activity during axial load exercises performed on a stable base of support and on a medicine ball (a relatively unstable base). Twelve healthy male volunteers were tested (mean age 23 ± 7 years). Surface EMG was recorded from the biceps brachii, anterior deltoid, clavicular portion of the pectoralis major, upper trapezius and serratus anterior using surface differential electrodes. All SEMG data are reported as a percentage of the RMS mean values obtained in maximal voluntary contractions for each muscle studied. A three-way within-factor repeated-measures analysis of variance was performed to compare the RMS normalized values. The RMS normalized values of the deltoid were always greater during exercises performed on the medicine ball than during those performed on a stable base of support. The trapezius showed greater mean electrical activation amplitude in the wall-press exercise on the medicine ball, and the pectoralis major in the push-up. The serratus anterior and biceps did not show significant differences in electrical activation amplitude between the two bases of support. Independent of the base of support, none of the studied muscles showed significant differences in electrical activation amplitude during the bench-press exercise. The results contribute to identifying the levels of muscle activation amplitude during exercises that are common in the clinical practice of shoulder rehabilitation and the differences related to the type of base of support used.

Relevance:

30.00%

Publisher:

Abstract:

Biological nitrogen removal via the nitrite pathway in wastewater treatment is very important for saving aeration costs and electron donor for denitrification. Wastewater nitrification and nitrite accumulation were carried out in a biofilm airlift reactor with an autotrophic nitrifying biofilm. The biofilm reactor showed almost complete nitrification, and most of the oxidized ammonium was present as nitrite at ammonium loads of 1.5 to 3.5 kg N/m³·d. Nitrite accumulation was stably achieved by the selective inhibition of nitrite oxidizers through free ammonia and dissolved oxygen limitation. Stable 100% conversion to nitrite could also be achieved even in the absence of free ammonia inhibition of nitrite oxidizers. Batch ammonium oxidation and nitrite oxidation experiments with the nitrite-accumulating nitrifying biofilm showed that nitrite oxidation was completely inhibited when free ammonia was higher than 0.2 mg N/L. However, nitrite oxidation activity recovered as soon as the free ammonia concentration fell below this threshold level when dissolved oxygen concentration was not the limiting factor. Fluorescence in situ hybridization analysis of the cryosectioned nitrite-accumulating nitrifying biofilm showed that the β-subclass of Proteobacteria, to which ammonia oxidizers belong, was distributed in the outer part of the biofilm, whereas the α-subclass of Proteobacteria, to which nitrite oxidizers belong, was found mainly in the inner part of the biofilm. It is likely that dissolved oxygen deficiency or limitation in the inner part of the nitrifying biofilm, where the nitrite oxidizers are located, is responsible for the complete shutdown of nitrite oxidizer activity in the absence of free ammonia inhibition.