962 results for Residual autocorrelation and autocovariance matrices


Relevance: 100.00%

Abstract:

This study aimed to evaluate the use of rosemary (Rosmarinus officinalis) extract (RE), celery (Apium graveolens), and low levels of NO3 and NO2 as natural agents to enhance the quality of colonial salami. Salami was produced according to three treatments: (A) Control: 0.1% curing salt; (B) Rosemary: 0.05% curing salt + 0.5% RE; and (C) Rosemary+celery: 0.14% Veg 503 + 0.27% Veg 504 (sea salt plus celery) + 0.5% RE. There was no effect (P > 0.05) of the treatments on water activity, Na content, or residual NO3 and NO2. Fatty acids C18:2 and C20:4 were reduced (P < 0.05) during the ripening period in the Control treatment, indicating possible oxidation. The use of celery resulted in lower pH values (P < 0.05) in the salami. Reduced addition of NO3 and NO2 resulted in salami lighter in color (higher L* values, P < 0.05) on the 12th day of ripening. In conclusion, celery-based products proved to be an effective source of NO2 and NO3 for color development, but the low pH of the product indicates the need for better evaluation of its use in fermented salami. The RE reduced fat oxidation in salami, but this needs further evaluation.

Relevance: 100.00%

Abstract:

In Canada, type 2 diabetes is markedly prevalent in Indigenous communities. An ethnobotanical approach was used in collaboration with the Cree Nation of Eeyou Istchee to determine which plant-based treatments may be used to counter the various conditions that collectively constitute diabetes. The pharmacopoeias of two Cree communities, Waskaganish and Nemaska, were established and then compared with those of two communities studied previously, Whapmagoostui and Mistissini. Despite the geographic separation of these groups, their plant uses are largely similar, the only exception being the contrast between the Nemaska and Whapmagoostui communities. In addition, we completed the evaluation of the cytoprotective activity of the needles, bark and cones of black spruce (Picea mariana). Extracts from all plant organs showed concentration-dependent protection. The organ-specific response can vary with habitat: plants growing in bogs or in forests, on the coast or inland, differ in their effectiveness. In short, the bark shows a stronger dose-response relationship in the coastal forest, whereas the needles show no significant changes with their growth environment. The observed bioactivity correlates with phenolic content and not with antioxidant activity. These results help to clarify the antidiabetic activities, identified at the cellular level, of Canadian boreal forest plants used by Cree healers.

Relevance: 100.00%

Abstract:

My thesis is composed of three chapters related to the estimation of state-space and stochastic volatility models. In the first chapter, we develop a computationally efficient procedure for state smoothing in linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, of the Cholesky-factor algorithm, and of our new method, using operation counts and computational experiments. We show that for many important cases our method is the most efficient. The gains are particularly large when the dimension of the observed variables is large, or when repeated draws of the states are needed for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, which is used to analyze transaction count data in financial markets. In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on efficient draws of the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow for conditional dependence of the returns given the volatility, permitting different Student-t marginals with asset-specific degrees of freedom to capture the heterogeneity of returns.
We draw the volatility as a block in the time dimension and one asset at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters, and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and for two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models. We take the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity containing information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which permit the computation not only of posterior densities of the volatility but also of predictive densities of future volatility. We compare volatility forecasts and the hit rates of forecasts that do and do not use the information contained in realized volatility. This approach differs from existing ones in the empirical literature, which mostly restrict themselves to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns on stock indices and exchange rates. The competing models are applied to the second half of 2008, a notable period of the recent financial crisis.
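The first chapter's linear Gaussian state-space setting can be illustrated with a minimal univariate Kalman filter. This is a textbook sketch for an AR(1) latent state observed with noise, not the thesis's Cholesky-factor sampler, and all parameter values below are invented for the demonstration.

```python
import numpy as np

def kalman_filter(y, phi, sigma_state, sigma_obs, a0=0.0, p0=1.0):
    """Filter the state-space model
       alpha_t = phi * alpha_{t-1} + eta_t,   y_t = alpha_t + eps_t."""
    filtered_mean = np.empty(len(y))
    filtered_var = np.empty(len(y))
    a, p = a0, p0
    for t, obs in enumerate(y):
        a_pred = phi * a                         # one-step state prediction
        p_pred = phi ** 2 * p + sigma_state ** 2
        k = p_pred / (p_pred + sigma_obs ** 2)   # Kalman gain
        a = a_pred + k * (obs - a_pred)          # update with observation
        p = (1 - k) * p_pred
        filtered_mean[t], filtered_var[t] = a, p
    return filtered_mean, filtered_var

# simulate an AR(1) state observed with noise
rng = np.random.default_rng(0)
n, phi = 200, 0.95
alpha = np.zeros(n)
for t in range(1, n):
    alpha[t] = phi * alpha[t - 1] + rng.normal(scale=0.5)
y = alpha + rng.normal(scale=1.0, size=n)

m, v = kalman_filter(y, phi, sigma_state=0.5, sigma_obs=1.0)
# the filtered mean should track the latent state better than the raw data
print(np.mean((m - alpha) ** 2) < np.mean((y - alpha) ** 2))
```

The filter's mean-squared error against the simulated latent state should fall below that of the raw observations, which is the point of filtering; smoothing, as in the thesis, would condition on the whole sample rather than only the past.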

Relevance: 100.00%

Abstract:

This thesis, entitled 'Reliability Modelling and Analysis in Discrete Time', develops some concepts and models useful in the analysis of discrete lifetime data. The present study consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities in the component distributions, are investigated, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed; while describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in mixture distributions is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations, very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially for discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments.
Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are as efficient and convenient as many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation, which could be the subject of future work in this area.
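The two-component mixture results discussed above can be sketched numerically for the geometric case: each component has a constant discrete failure rate p, and the mixture's failure rate, written in terms of the component quantities, decreases toward the smaller component rate. The parameter values below are arbitrary.

```python
import numpy as np

def mixture_hazard(x, w, p1, p2):
    """Discrete hazard h(x) = P(X = x) / P(X >= x) for a two-component
    geometric mixture with component pmf p * (1-p)^x on x = 0, 1, 2, ..."""
    q1, q2 = 1 - p1, 1 - p2
    pmf = w * p1 * q1 ** x + (1 - w) * p2 * q2 ** x
    survival = w * q1 ** x + (1 - w) * q2 ** x   # P(X >= x) = (1-p)^x per component
    return pmf / survival

x = np.arange(50)
h = mixture_hazard(x, w=0.4, p1=0.3, p2=0.1)
# h[0] is the weighted average 0.4*0.3 + 0.6*0.1 = 0.18, and the hazard
# then decreases toward the smaller component rate min(p1, p2) = 0.1
print(round(h[0], 3))
```

Each geometric component has a constant failure rate, yet the mixture's failure rate is strictly decreasing, a classical illustration of how mixing changes reliability characteristics.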

Relevance: 100.00%

Abstract:

This thesis covers the electromechanical design and design optimization of widely tunable optical multi-membrane devices with vertically oriented cavities, based on the finite element method (FEM). A multi-membrane InP/air-gap Fabry-Pérot optical filter is presented and comprehensively analyzed. A systematic structural design procedure is presented. Accurate analytical electromechanical models of the devices have been derived; these can be invaluable tools for quickly providing clear insight at the beginning of the design phase. Using the FEM program, the stiffening effect caused by non-linear strain was investigated and its role in extending the mechanical tuning range of the devices was demonstrated. An interesting observation was that the normalized deflection-voltage relation has an invariant profile. The deformation of the membrane surfaces of the device geometries presented in this work proved to be an undesired, yet sometimes unavoidable, effect. It turns out, however, that the choice of the structural dimensions influences the degree of membrane deformation under actuation. This work presents an electromechanical quasi-3D model, implemented in FEMLAB, that can be applied generally to the modelling of thin structures by treating them as 2D objects and taking the third dimension to be a constant (e.g. the layer thickness) or a quantity given by a mathematical function. This assumption drastically reduces the computation time as well as the required memory. The model was further used to investigate the effect of scaling the tunable devices, and a novel scaling technique was derived and applied.
The results show that the resulting scaled device exhibits almost exactly the same mechanical tuning behaviour as the unscaled one. Including the influence of axial and gradient strain in the calculations required modifying the standard implementation of the 3D structural mechanics mode supplied with the FEM software used. The results of this study show a strong influence of strain on the tuning properties of the investigated devices. Furthermore, the results of the theoretical model calculations agreed very well with the experimental results.

Relevance: 100.00%

Abstract:

The traditional task of a central bank is to preserve price stability and, in doing so, not to impair the real economy more than necessary. To meet this challenge, it is of great relevance whether inflation is only driven by inflation expectations and the current output gap or whether it is, in addition, influenced by past inflation. In the former case, as described by the New Keynesian Phillips curve, the central bank can immediately and simultaneously achieve price stability and equilibrium output, the so-called ‘divine coincidence’ (Blanchard and Galí 2007). In the latter case, the achievement of price stability is costly in terms of output and will be pursued over several periods. Similarly, it is important to distinguish this latter case, which describes ‘intrinsic’ inflation persistence, from that of ‘extrinsic’ inflation persistence, where the sluggishness of inflation is not a ‘structural’ feature of the economy but merely ‘inherited’ from the sluggishness of the other driving forces, inflation expectations and output. ‘Extrinsic’ inflation persistence is usually considered to be the less challenging case, as policy-makers are supposed to fight against the persistence in the driving forces, especially to reduce the stickiness of inflation expectations by a credible monetary policy, in order to reestablish the ‘divine coincidence’. The scope of this dissertation is to contribute to the vast literature and ongoing discussion on inflation persistence: Chapter 1 describes the policy consequences of inflation persistence and summarizes the empirical and theoretical literature. Chapter 2 compares two models of staggered price setting, one with a fixed two-period duration and the other with a stochastic duration of prices. I show that in an economy with a timeless optimizing central bank the model with the two-period alternating price-setting (for most parameter values) leads to more persistent inflation than the model with stochastic price duration. 
This result amends earlier work by Kiley (2002), who found that the model with stochastic price duration generates more persistent inflation in response to an exogenous monetary shock. Chapter 3 extends the two-period alternating price-setting model to the case of 3- and 4-period price durations. This results in a more complex Phillips curve with a negative impact of past inflation on current inflation. As simulations show, this multi-period Phillips curve generates too low a degree of autocorrelation and too-early turning points of inflation, and is outperformed by a simple Hybrid Phillips curve. Chapter 4 starts from the critique by Driscoll and Holden (2003) of the relative real-wage model of Fuhrer and Moore (1995). Taking seriously the critique that Fuhrer and Moore's model collapses to a much simpler one without intrinsic inflation persistence if one takes their arguments literally, I extend the model by a term for inequality aversion. This model extension is not only in line with experimental evidence but results in a Hybrid Phillips curve with inflation persistence that is observationally equivalent to that presented by Fuhrer and Moore (1995). In Chapter 5, I present a model that allows us to study the relationship between fairness attitudes and time preference (impatience). In the model, two individuals take decisions in two subsequent periods. In period 1, both individuals are endowed with resources and can donate a share of their resources to the other individual. In period 2, the two individuals may join in common production after bargaining over the split of its output. The size of the production output depends on the relative share of resources at the end of period 1, as the human capital of the individuals, which is built by means of their resources, cannot be fully substituted for one another.
Therefore, it may be rational for a well-endowed individual in period 1 to act in a seemingly 'fair' manner and donate its own resources to its poorer counterpart. This decision also depends on the individuals' impatience, which is induced by the small but positive probability that production is not possible in period 2. As a general result, the individuals in the model economy are more likely to behave in a 'fair' manner, i.e., to donate resources to the other individual, the lower their own impatience and the higher the productivity of the other individual. As the (seemingly) 'fair' behavior is modelled as an endogenous outcome and is related to the aspect of time preference, the presented framework may help to further integrate behavioral economics and macroeconomics.
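The contrast between 'intrinsic' and 'extrinsic' persistence can be sketched with a toy backward-looking Phillips curve. This is a deliberately simplified illustration with no expectational term and invented coefficients, not any of the dissertation's models.

```python
import numpy as np

def simulate_inflation(gamma, kappa=0.1, rho=0.5, n=20000, seed=1):
    """pi_t = gamma * pi_{t-1} + kappa * x_t + e_t with an AR(1) output gap x.
    gamma > 0 adds 'intrinsic' persistence on top of the 'extrinsic'
    persistence inherited from the sluggish driving force x."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    pi = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
        pi[t] = gamma * pi[t - 1] + kappa * x[t] + 0.1 * rng.normal()
    return pi

def autocorr(s, lag=1):
    s = s - s.mean()
    return float(np.dot(s[:-lag], s[lag:]) / np.dot(s, s))

rho_extrinsic = autocorr(simulate_inflation(gamma=0.0))  # inherited only
rho_intrinsic = autocorr(simulate_inflation(gamma=0.7))  # structural lag added
print(rho_extrinsic < rho_intrinsic)
```

With gamma = 0 inflation is persistent only insofar as the output gap is; with gamma = 0.7 the first-order autocorrelation of inflation rises well above the inherited level, which is the distinction the text draws.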

Relevance: 100.00%

Abstract:

Habitat area requirements of forest songbirds vary greatly among species, but the causes of this variation are not well understood. Large area requirements could result from advantages for certain species of settling their territories near those of conspecifics. This phenomenon would result in spatial aggregations much larger than single territories. Species that aggregate their territories could show reduced population viability in highly fragmented forests, since remnant patches may remain unoccupied if they are too small to accommodate several territories. The objectives of this study were twofold: (1) to seek evidence of territory clustering of forest birds at various spatial scales (lags of 250-550 m), before and after controlling for habitat spatial patterns; and (2) to measure the relationship between spatial autocorrelation and apparent landscape sensitivity for these species. In analyses that ignored spatial variation of vegetation within remnant forest patches, nine of the 17 species studied significantly aggregated their territories within patches. After controlling for forest vegetation, the locations of eight of the 17 species remained significantly clustered. The aggregative pattern that we observed may thus be indicative of a widespread phenomenon in songbird populations. Furthermore, there was a tendency for species associated with higher forest cover to be more spatially aggregated.
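Territory clustering of the kind tested here is often summarized with Ripley's K. The sketch below compares a fabricated clustered point pattern against a random one on a hypothetical 1 km x 1 km plot; it uses no edge correction and invented coordinates, unlike the study's habitat-controlled analysis.

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K(r): density-scaled count of point pairs closer
    than r. A larger K than for a random pattern indicates clustering."""
    n = len(points)
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)              # ignore self-pairs
    return area / (n * (n - 1)) * (d < r).sum()

rng = np.random.default_rng(3)
area = 1000.0 ** 2                           # 1 km x 1 km study plot
# clustered pattern: 20 'colonies' of 10 territories, ~50 m spread each
centres = rng.uniform(0, 1000, size=(20, 2))
clustered = np.repeat(centres, 10, axis=0) + rng.normal(scale=50, size=(200, 2))
random_pts = rng.uniform(0, 1000, size=(200, 2))

r = 250.0                                    # lag comparable to the study's scales
print(ripley_k(clustered, r, area) > ripley_k(random_pts, r, area))
```

The clustered pattern accumulates many extra within-colony pairs at the 250 m lag, so its K value exceeds that of the random pattern.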

Relevance: 100.00%

Abstract:

The study of the genetic variance/covariance matrix (G-matrix) is a recent and fruitful approach in evolutionary biology, providing a window for investigating the evolution of complex characters. Although G-matrix studies were originally conducted for microevolutionary timescales, they can be extrapolated to macroevolution as long as the G-matrix remains relatively constant, or proportional, over the period of interest. A promising approach to investigating the constancy of G-matrices is to compare their phenotypic counterparts (P-matrices) in a large group of related species; if significant similarity is found among several taxa, it is very likely that the underlying G-matrices are also equivalent. Here we study the similarity of covariance and correlation structure in a broad sample of Old World monkeys and apes (Catarrhini). We made phylogenetically structured comparisons of correlation and covariance matrices derived from 39 skull traits, ranging from between species to the superfamily level. We also compared the overall magnitude of integration between skull traits (r(2)) for all catarrhine genera. Our results show that P-matrices were not strictly constant among catarrhines, but the amount of divergence observed among taxa was generally low. There was a significant and positive correlation between the amount of divergence in correlation and covariance patterns among the 30 genera and their phylogenetic distances derived from a recently proposed phylogenetic hypothesis. Our data demonstrate that the P-matrices remained relatively similar along the evolutionary history of catarrhines, and comparisons with the G-matrix available for a New World monkey genus (Saguinus) suggest that the same holds for all anthropoids.
The magnitude of integration, in contrast, varied considerably among genera, indicating that evolution of the magnitude, rather than of the pattern, of inter-trait correlations might have played an important role in the diversification of the catarrhine skull.
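Covariance-matrix similarity of this kind is often quantified with the random-skewers method; the sketch below applies that idea to invented 5x5 matrices, not to the paper's skull-trait data.

```python
import numpy as np

def random_skewers(m1, m2, n_vectors=1000, seed=0):
    """Random-skewers similarity of two covariance matrices: apply the same
    random unit-length 'selection gradients' to both and average the vector
    correlations of the predicted responses (1 = identical structure)."""
    rng = np.random.default_rng(seed)
    betas = rng.normal(size=(n_vectors, m1.shape[0]))
    betas /= np.linalg.norm(betas, axis=1, keepdims=True)
    r1, r2 = betas @ m1, betas @ m2          # response vectors (matrices symmetric)
    num = np.sum(r1 * r2, axis=1)
    den = np.linalg.norm(r1, axis=1) * np.linalg.norm(r2, axis=1)
    return float(np.mean(num / den))

rng = np.random.default_rng(42)
a = rng.normal(size=(5, 5))
p_matrix = a @ a.T                           # an invented positive-definite 'P-matrix'
perturbed = p_matrix + 0.1 * np.eye(5)       # nearly identical structure
b = rng.normal(size=(5, 5))
unrelated = b @ b.T                          # an independent random matrix

s1 = random_skewers(p_matrix, perturbed)
s2 = random_skewers(p_matrix, unrelated)
print(s1 > s2)
```

A matrix perturbed by a small diagonal term deflects the same selection gradients in nearly the same directions, so its skewers score is higher than that of an unrelated matrix, which is the logic behind comparing P-matrices across taxa.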

Relevance: 100.00%

Abstract:

Recent developments have highlighted the importance of forest amount at large spatial scales and of matrix quality for ecological processes in remnants. These developments, in turn, suggest the potential for reducing biodiversity loss through the maintenance of a high percentage of forest combined with sensitive management of anthropogenic areas. We conducted a multi-taxa survey to evaluate the potential for biodiversity maintenance in an Atlantic forest landscape that presented a favorable context from a theoretical perspective (a high proportion of mature forest partly surrounded by structurally complex matrices). We sampled ferns, butterflies, frogs, lizards, bats, small mammals and birds in the interiors and edges of large and small mature forest remnants and in two matrices (second-growth forests and shade cacao plantations), as well as trees in the interiors of small and large remnants. Considering the richness, abundance and composition of forest specialists and generalists, we investigated the biodiversity value of matrix habitats (comparing them with interiors of large remnants for all groups except trees), and evaluated area effects (for all groups) and edge effects (for all groups except trees) in mature forest remnants. Our results suggest that in landscapes comprising high amounts of mature forest and low-contrast matrices: (1) shade cacao plantations and second-growth forests harbor an appreciable number of forest specialists; and (2) most forest specialist assemblages are not affected by area or edge effects, while most generalist assemblages proliferate at the edges of small remnants. Nevertheless, differences in tree assemblages, especially among smaller trees, suggest that the observed patterns are unlikely to be stable over time.

Relevance: 100.00%

Abstract:

Psidium guajava "Paluma", a tropical tree species, is known to be an efficient ozone indicator in tropical countries. When exposed to ozone, this species displays a characteristic leaf injury identified by inter-veinal red stippling on adaxial leaf surfaces. Following 30 days of three ozone treatments consisting of carbon-filtered air (CF; AOT40 = 17 ppb h), ambient non-filtered air (NF; AOT40 = 542 ppb h) and ambient non-filtered air + 40 ppb ozone (NF+O3; AOT40 = 7802 ppb h), the amounts of residual anthocyanins and tannins present in 10 P. guajava "Paluma" saplings were quantified. Higher amounts of anthocyanins were found in the NF+O3 treatment (1.6%) than in the CF (0.97%) and NF (1.30%) treatments (p < 0.05), and higher amounts of total tannins in the NF+O3 treatment (0.16%) than in the CF treatment (0.14%). Condensed tannins showed the same tendency of enhanced amounts. Regression analyses using the amounts of tannins and anthocyanins, AOT40 and the leaf injury index (LII) showed a correlation between the leaf injury index and the quantities of anthocyanins and total tannins. These results are in accordance with the association between the incidence of red-stippled leaves and ozone-polluted environments.

Relevance: 100.00%

Abstract:

The molecular architecture of azopolymers may be controlled via chemical synthesis and through the selection of a suitable film-forming method, which is important for improving their properties for practical uses. Here we address the main challenge of combining the photoinduced birefringence features of azopolymers with the higher thermal and mechanical stabilities of poly(methyl methacrylate) (PMMA), using Atom Transfer Radical Polymerization (ATRP) to synthesize diblock and triblock copolymers of an azomonomer and the monomer methyl methacrylate. Langmuir-Blodgett (LB) films made with the copolymers mixed with cadmium stearate displayed essentially the same optically induced birefringence characteristics, in terms of maximum and residual birefringence and writing time, as the mixed LB films with the homopolymer poly[4-(N-ethyl-N-(2-methacryloxyethyl))amino-2'-chloro-4'-nitroazobenzene] (HPDR13), also synthesized via ATRP. In fact, the controlled architecture of HPDR13 chains led to Langmuir films that could be more closely packed and reach higher collapse pressures than the corresponding films obtained with HPDR13-conv, synthesized via conventional radical polymerization. This allowed LB films to be fabricated from neat HPDR13, which was not possible with HPDR13-conv. The enhanced organization in the LB films produced with controlled azopolymer chains, however, led to a smaller free volume available for isomerization of the azochromophores, thus yielding a lower photoinduced birefringence than in the HPDR13-conv films. The combination of ATRP synthesis and LB technology is thus promising for obtaining optical storage in films with improved thermal and mechanical processabilities, though a further degree of control must be sought to exploit film organization while maintaining the necessary free volume in the films.

Relevance: 100.00%

Abstract:

This work studies the behaviour of a residual soil improved by mechanical compaction techniques and by the addition of cement. In addition, numerical analyses of these treated materials were carried out for their use as bearing layers under shallow foundations. The experimental programme included the retrieval of undisturbed samples and of remoulded material for drained, saturated triaxial tests with internal strain measurement, in order to study the behaviour of the natural soil and of the soil treated either by compaction alone or by cement addition plus compaction. These tests were also essential for obtaining constitutive parameters for the numerical simulations. The finite element method was used to simulate the load-settlement behaviour of plates resting on the natural soil and on layers of improved soil. The hyperbolic model was employed in the numerical analysis to represent the stress-strain behaviour of the materials. The simulations of plate tests on improved soil layers showed a significant increase in bearing capacity, as well as a considerable reduction in settlements, compared with the behaviour of the natural soil.
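The 'hyperbolic model' referred to here is, in common geotechnical usage, the Duncan-Chang form q = strain / (1/Ei + strain/q_ult); treating it as such is an assumption of this sketch, and the parameter values are invented, not those fitted in the work.

```python
import numpy as np

def hyperbolic_stress(strain, e_i, q_ult):
    """Duncan-Chang hyperbolic stress-strain curve:
    q = strain / (1/E_i + strain/q_ult), with initial tangent modulus E_i
    and asymptotic deviator stress q_ult."""
    return strain / (1.0 / e_i + strain / q_ult)

strain = np.linspace(0, 0.1, 6)
# assumed values: E_i = 50 MPa expressed in kPa, q_ult = 400 kPa
q = hyperbolic_stress(strain, e_i=50_000.0, q_ult=400.0)
# stress rises monotonically toward the asymptote q_ult without exceeding it
print(np.all(np.diff(q) > 0), np.all(q < 400.0))  # → True True
```

The two parameters have direct physical readings (initial stiffness and asymptotic strength), which is why the model is convenient for calibrating finite element simulations from triaxial data.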

Relevance: 100.00%

Abstract:

This paper explores the institutional change introduced by the 2007 public disclosure of an education development index (IDEB, Basic Education Development Index) to identify the effect of education accountability on yardstick competition in education spending among Brazilian municipalities. Our results are threefold. First, political incentives are pervasive in the setting of education expenditures: the spatial strategic behavior in education spending is estimated to be lower for lame ducks and for incumbents with majority support in the city council, suggesting a strong relation between commitment and accountability, which reinforces yardstick competition theory. Second, we find a minor reduction (20%) in spatial interaction for public education spending after IDEB's disclosure, compared to the spatial correlation before the disclosure of the index. This suggests that the public release of information may decrease the importance of the neighbors' information in voters' decisions. Third, exploring the discontinuity of IDEB's disclosure rule around the cut-off of 30 students enrolled in the grade under assessment, our estimates suggest that the spatial autocorrelation, and hence yardstick competition, is reduced by 54%. Finally, an unforeseen result suggests that the disclosure of IDEB increases expenditures, by more than 100% according to our estimates.
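The spatial autocorrelation at the core of yardstick-competition tests is commonly measured with Moran's I. The sketch below computes it on a fabricated 10x10 lattice of 'municipalities' with rook-contiguity weights; the data and weights are invented, and the paper's actual estimation is a full spatial econometric model.

```python
import numpy as np

def morans_i(values, w):
    """Moran's I spatial autocorrelation for binary contiguity weights w:
    I = (n / sum(w)) * (z' W z) / (z' z), with z the centred values."""
    z = values - values.mean()
    return float((len(values) / w.sum()) * (z @ w @ z) / (z @ z))

rng = np.random.default_rng(7)
n = 10  # 10x10 lattice of 'municipalities'
coords = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
d = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)
w = (d == 1).astype(float)         # rook contiguity: cells sharing an edge

trend = coords.sum(1)              # smooth spatial gradient -> strong autocorrelation
noise = rng.normal(size=n * n)     # spatially random spending -> I near 0
print(morans_i(trend + noise, w) > morans_i(noise, w))
```

Spending that mimics neighbors produces a smooth spatial gradient and a high Moran's I, while spatially independent spending yields a value near zero; the paper's result is, in effect, that disclosure moves municipalities toward the second regime.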

Relevance: 100.00%

Abstract:

This work addresses biodiesel production by transesterification of waste frying oil as a technological alternative that both reduces greenhouse gas emissions and promotes rational use of oil which, when no longer discarded into the environment, becomes renewable energy. It proposes the location of a treatment plant for residual oils and fats to produce biodiesel, using location and routing models to improve the collection routes. To this end, questionnaires were administered in establishments that use oil or vegetable fat in their productive activities, in order to quantify the residue and to analyze the actions and environmental perception of the people who work directly with it, regarding the destination currently given to used oil and fat. Using two single-facility location models, the Center of Gravity method and the Ardalan model, a geographical point was indicated that minimizes the cost of transporting the waste to the treatment plant. Actions were proposed for the improvement of the collection routes for this residue using the sweep (scanning) routing method, as an illustration. The results demonstrated a lack of knowledge, among people who deal directly with large amounts of this waste, of the environmental impacts caused by its incorrect disposal. The two location models were consistent, since they pointed to neighborhoods in similar regions: Lagoa Nova/Morro Branco (Ardalan) and Nova Descoberta (Center of Gravity) as ideal sites for the treatment plant. However, it is suggested that other models be tested that take into account variables beyond those used here (supply of waste and the distance between points). The routing by the sweep method showed that it is possible, in a simple way, to optimize routes so as to reduce distances and therefore the logistics costs of collecting this waste.
A test route is presented that collects from the twenty largest suppliers of used frying oil in the sample, taking an 8-hour daily working shift as the main time constraint.
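The Center of Gravity step can be sketched in a few lines: the candidate site is the demand-weighted average of the supply points' coordinates. The coordinates and weekly volumes below are invented for illustration, not taken from the study's data.

```python
def center_of_gravity(points):
    """Weighted centre-of-gravity facility location: each coordinate of the
    candidate site is the volume-weighted average of the supply points."""
    total = sum(w for _, _, w in points)
    x = sum(xi * w for xi, _, w in points) / total
    y = sum(yi * w for _, yi, w in points) / total
    return x, y

# hypothetical supply points: (x km, y km, litres of waste oil per week)
suppliers = [(2.0, 3.0, 120), (8.0, 1.0, 60), (5.0, 9.0, 90), (1.0, 7.0, 30)]
cx, cy = center_of_gravity(suppliers)
print(round(cx, 2), round(cy, 2))  # → 4.0 4.8
```

The method minimizes a weighted sum of squared distances and so gives only a first approximation; the study complements it with the Ardalan model and with routing over the resulting collection points.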

Relevance: 100.00%

Abstract:

The work reported here involved an investigation into the grinding process, one of the last finishing processes carried out on a production line. Although several input parameters are involved in this process, attention today focuses strongly on the form and amount of cutting fluid employed, since these substances may be seriously pernicious to human health and to the environment, and involve high purchasing and maintenance costs when utilized and stored incorrectly. The type and amount of cutting fluid used directly affect some of the main output variables of the grinding process which are analyzed here, such as tangential cutting force, specific grinding energy, acoustic emission, diametrical wear, roughness, residual stress and scanning electron microscopy. To analyze the influence of these variables, an optimised fluid application methodology was developed (involving rounded 5, 4 and 3 turn diameter nozzles and high fluid application pressures) to reduce the amount of fluid used in the grinding process and improve its performance in comparison with the conventional fluid application method (of diffuser nozzles and lower fluid application pressure). To this end, two types of cutting fluid (a 5% synthetic emulsion and neat oil) and two abrasive tools (an aluminium oxide and a superabrasive CBN grinding wheel) were used. The results revealed that, in every situation, the optimised application of cutting fluid significantly improved the efficiency of the process, particularly the combined use of neat oil and CBN grinding wheel.