995 results for Output variables
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Sanding is a complex process involving many variables that affect the quality of the produced part; it is used mainly in the timber industry in the production of panels (MDF, MDP, HDF, etc.) and furniture. However, these industries apply the sanding process empirically, without optimizing it. The aim of this study was to compare the behavior of white aluminum oxide (white-OA) and black silicon carbide (black-SiC) sandpaper, analyzing process variables such as force, power, emission, vibration, and abrasive grain wear of the sandpaper, and their consequences on the surface finish of the workpiece. Flat sanding was performed on samples of Pinus elliottii, processed parallel to the fibers, which were sanded with white-OA and black-SiC sandpaper in three abrasive conditions (new, moderately worn, and severely worn) and three grain sizes (80, 100, and 120 mesh). Six replicates were performed for each condition tested, and in each trial the output variables of the sanding process (force, power, emission, and vibration) were captured, totaling 108 trials in two stages. After sanding, the surface quality of the samples was assessed by measuring the surface roughness Ra. From the experiment, it can be concluded that the white-OA abrasive tended to show higher force, power, and emission and lower vibration in the sanding process compared to black-SiC. The surface finish, however, was similar for the 80 and 100 mesh grain sizes under worn abrasive conditions; for the 120 mesh grain size, the roughness obtained with white-OA sandpaper was higher than with black-SiC for all sandpaper conditions, owing to its toughness.
Abstract:
Graduate Program in Design - FAAC
Abstract:
Graduate Program in Mechanical Engineering - FEB
Abstract:
Graduate Program in Mechanical Engineering - FEB
Abstract:
Given the increasingly strict environmental legislation of recent years, researchers and industries are seeking to reduce the amount of cutting fluid used in machining. Minimum quantity lubrication is a potential alternative for reducing environmental impacts and overall process costs. This technique can substantially reduce cutting fluid consumption in grinding, as well as provide better performance than conventional cutting fluid application (abundant fluid flow). The present work tests the viability of minimum quantity lubrication (with and without water) in the grinding of advanced ceramics, compared to the conventional method (abundant fluid flow). The measured output variables were grinding power, surface roughness, roundness errors, and wheel wear, complemented by scanning electron micrographs. The results show that minimum quantity lubrication with water (1:1) was superior to conventional lubrication-cooling in terms of surface quality and also reduced wheel wear compared to the other methods tested.
Abstract:
The objective of this work is to determine the membership functions for the construction of a fuzzy controller to evaluate the energy situation of a company with respect to its load and power factors. The energy assessment of a company is performed by technicians and experts based on the load and power factor indices and on an analysis of the machines used in the production processes. This assessment is conducted periodically to detect whether the procedures followed by employees regarding the use of electrical energy are correct. With a fuzzy controller, this assessment can be performed by machines. The construction of a fuzzy controller begins with the definition of the input and output variables and their associated membership functions. An inference method and an output processor (defuzzifier) must also be defined. Finally, the help of technicians and experts is needed to build a rule base, consisting of the answers these professionals provide as a function of the characteristics of the input variables. The controller proposed in this paper takes the load and power factors as input variables and the company's energy situation as output. Their membership functions represent fuzzy sets labeled with linguistic qualities such as "VERY BAD" and "GOOD". With the Mamdani inference method and the center-of-area output processor chosen, the structure of the fuzzy controller is established; it remains only for technicians and experts in the energy field to determine a set of rules appropriate for the chosen company. Thus, the software interpretation of the load and power factors meets the need for a single index that indicates, on an overall basis, how rationally and efficiently energy is being used.
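To make this structure concrete, below is a minimal sketch of such a Mamdani controller: two inputs (load factor and power factor), one output (the company's situation), min implication, max aggregation, and discrete center-of-area defuzzification. The membership function shapes and the two-rule base are illustrative assumptions, not the ones elicited from the paper's experts.

```python
import numpy as np

def left_shoulder(x, a, b):
    """Membership 1 below a, falling linearly to 0 at b."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def right_shoulder(x, a, b):
    """Membership 0 below a, rising linearly to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

# Discretized universe for the output "situation" (0 = VERY BAD, 1 = GOOD)
u = np.linspace(0.0, 1.0, 201)
situation = {
    "VERY BAD": left_shoulder(u, 0.1, 0.5),
    "GOOD":     right_shoulder(u, 0.5, 0.9),
}

def evaluate(load_factor, power_factor):
    """Mamdani inference with center-of-area defuzzification."""
    # Hypothetical two-rule base (the real one comes from domain experts):
    # R1: IF load IS high AND power factor IS high THEN situation IS GOOD
    # R2: IF load IS low  OR  power factor IS low  THEN situation IS VERY BAD
    r1 = min(right_shoulder(load_factor, 0.4, 0.8),
             right_shoulder(power_factor, 0.85, 0.95))
    r2 = max(left_shoulder(load_factor, 0.3, 0.7),
             left_shoulder(power_factor, 0.80, 0.92))
    # Min implication clips each output set; max aggregates the clipped sets.
    agg = np.maximum(np.minimum(r1, situation["GOOD"]),
                     np.minimum(r2, situation["VERY BAD"]))
    # Discrete center of area of the aggregated output set.
    return float((agg * u).sum() / agg.sum())

print(evaluate(load_factor=0.75, power_factor=0.96))  # index near 1 = GOOD
```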
Abstract:
Background & aims: The boundaries between the categories of body composition provided by vectorial analysis of bioimpedance are not well defined. In this paper, fuzzy set theory was used to model this uncertainty. Methods: An Italian database with 179 cases aged 18-70 years was divided randomly into a development sample (n = 20) and a testing sample (n = 159). Of the 159 registries in the testing sample, 99 contributed an unequivocal diagnosis. Resistance/height and reactance/height were the input variables of the model. The output variables were the seven categories of body composition of vectorial analysis. For each case, the linguistic model estimated the membership degree of each impedance category. Kappa statistics were used to compare these results with the previously established diagnoses. This demanded singling out one category from the output set of seven membership degrees. This procedure (the defuzzification rule) established that the category with the highest membership degree should be taken as the most likely category for the case. Results: The fuzzy model showed a good fit to the development sample. Excellent agreement was achieved between the defuzzified impedance diagnoses and the clinical diagnoses in the testing sample (Kappa = 0.85, p < 0.001). Conclusions: The fuzzy linguistic model was found to be in good agreement with clinical diagnoses. If the whole model output is considered, information on the extent to which each BIVA category is present can better advise clinical practice, with an enlarged nosological framework and diverse therapeutic strategies. (C) 2012 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
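In code, the defuzzification rule the abstract describes reduces to an argmax over the seven membership degrees. A minimal sketch follows; the category names and degrees are illustrative placeholders, not data from the paper.

```python
# Membership degree of each BIVA category for one case (illustrative values).
memberships = {
    "normal": 0.09, "obese": 0.05, "athletic": 0.10, "lean": 0.02,
    "cachectic": 0.01, "dehydrated": 0.03, "edematous": 0.70,
}
# Defuzzification rule: the category with the highest membership degree
# is taken as the most likely diagnosis for the case.
diagnosis = max(memberships, key=memberships.get)
print(diagnosis)  # -> "edematous"
```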
Abstract:
After sintering, advanced ceramics invariably exhibit distortions, caused in large part by the heterogeneous distribution of density gradients along the compacted piece. To correct these distortions, machining is generally used to bring pieces within dimensional and geometric tolerances; hence, narrow material removal limits are applied to minimize the generation of damage. An alternative is to machine the compacted piece before sintering, in the so-called green ceramic stage, which allows machining without damaging mechanical strength. Since the greatest concentration of density gradients is located in the outermost layers of the compacted piece, this study investigated the removal of different allowance values by means of green machining. The output variables are distortion after sintering, tool wear, cutting force, and the surface roughness of both the green and the sintered ceramics. The following results were noted: less distortion is verified in the sintered piece after removal of a 1 mm allowance; and the higher the tool wear, the worse the surface roughness of both green and sintered pieces.
Abstract:
The purpose of this study is to evaluate the influence of the cutting parameters of high-speed milling on the surface integrity of hardened AISI H13 steel. High-speed machining has been used intensively in the mold and die industry. The cutting parameters used as input variables were cutting speed (v_c), depth of cut (a_p), working engagement (a_e), and feed per tooth (f_z), while the output variables were three-dimensional (3D) workpiece roughness parameters, surface and cross-section microhardness, residual stress, and white layer thickness. The subsurface layers were examined by scanning electron and optical microscopy. Cross-section hardness was measured with an instrumented microhardness tester. Residual stress was measured by the X-ray diffraction method. From a statistical standpoint (the main effects of the input parameters were evaluated by analysis of variance), working engagement (a_e) was the cutting parameter with the strongest effect on most of the 3D roughness parameters. Feed per tooth (f_z) was the most important cutting parameter in cavity formation. Cutting speed (v_c) and depth of cut (a_p) did not significantly affect the 3D roughness parameters. Cutting speed showed the strongest influence on residual stress, while depth of cut exerted the strongest effect on the formation of white layer and on the increase in surface hardness.
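As a hedged illustration of the statistical step mentioned above (not the authors' code), the main effects of the four cutting parameters on a roughness parameter could be evaluated with an ANOVA along these lines; the data file and column names are hypothetical.

```python
# Main-effects ANOVA of the four cutting parameters on a 3D roughness
# parameter, sketched with statsmodels. "milling_trials.csv" and its
# columns (vc, ap, ae, fz, Sa) are assumed for illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("milling_trials.csv")
model = ols("Sa ~ C(vc) + C(ap) + C(ae) + C(fz)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests and p-values per factor
```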
Abstract:
The assessment of the RAMS (Reliability, Availability, Maintainability and Safety) performance of a system generally includes evaluating the "importance" of its components and/or of the basic parameters of the model through Importance Measures. The analytical equations proposed in this study allow the estimation of the first-order Differential Importance Measure from the Birnbaum measures of the components, under the hypothesis of uniform percentage changes of the parameters. Aging phenomena are introduced into the model by assuming exponential-linear or Weibull distributions for the failure probabilities. An algorithm based on a combination of Monte Carlo simulation and Cellular Automata is applied to evaluate the performance of a networked system made up of source nodes, user nodes, and directed edges subject to failure and repair. Importance Sampling techniques are used to estimate the first-order and total-order Differential Importance Measures through a single simulation of the system's "operational life". All the output variables are computed simultaneously on the basis of the same sequence of involved components, event types (failure or repair), and transition times. The failure/repair probabilities are forced to be the same for all components; the transition times are sampled from the unbiased probability distributions, or the sampling can be forced, for instance, by assuring the occurrence of at least one failure within the system's operational life. The algorithm allows considering different types of maintenance actions: corrective maintenance, performed either immediately upon component failure or, for hidden failures that are not detected until an inspection, upon finding that the component has failed; and preventive maintenance, performed at fixed intervals. A restoration factor can be used to determine the age of the component after a repair or any other maintenance action.
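A minimal numeric sketch of the first-order measure mentioned above: under the hypothesis of uniform percentage changes of the parameters, the Differential Importance Measure of component i reduces to DIM_i = B_i p_i / sum_j(B_j p_j), with B_i its Birnbaum measure and p_i its failure probability. The figures are illustrative, not from the study.

```python
import numpy as np

birnbaum = np.array([0.30, 0.12, 0.45])  # Birnbaum measures B_i (3 components)
p_fail   = np.array([0.02, 0.10, 0.05])  # component failure probabilities p_i

# First-order DIM under uniform percentage changes of the parameters.
dim = birnbaum * p_fail / np.sum(birnbaum * p_fail)
print(dim, dim.sum())  # DIMs are additive and sum to 1 over all parameters
```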
Abstract:
Research has shown repeatedly that the “feeling better” effect of exercise is far more moderate than generally claimed. Examinations of subgroups in secondary analyses also indicate that numerous further variables influence this relationship. One reason for inconsistencies in this research field is the lack of adequate theoretical analyses. Well-being output variables frequently possess no construct definition, and little attention is paid to moderating and mediating variables. This article integrates the main models in an overview and analyzes how secondary analyses define well-being and which areas of the construct they focus on. It then applies a moderator and/or mediator framework to examine which person and environmental variables can be found in the existing explanatory approaches in sport science and how they specify the influence of these moderating and mediating variables. Results show that the broad understanding of well-being in many secondary analyses makes findings difficult to interpret. Moreover, physiological explanatory approaches focus more on affective changes in well-being, whereas psychological approaches also include cognitive changes. The approaches focus mostly on either physical or psychological person variables and rarely combine the two, as in, for example, the dual-mode model. Whereas environmental variables specifying the treatment more closely (e.g., its intensity) are comparatively frequent, only the social support model formulates variables such as the framework in which exercise is presented. The majority of explanatory approaches use simple moderator and/or mediator models such as the basic mediated (e.g., distraction hypothesis) or multiple mediated (e.g., monoamine hypotheses) model. The discussion draws conclusions for future research.
Abstract:
An important step in assessing water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961–1990). The methodology compares different interpolation methods and alternatives to generate annual time series that minimise the bias with respect to observed values. The objective is to identify the best alternative for obtaining bias-corrected, monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of natural runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared, with emphasis on the ability of each method to reproduce the observed behaviour of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evapotranspiration and precipitation. In addition, the comparison with the global runoff reference of the UNH/GRDC dataset is evaluated, as a contrast against the "best estimator" of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method, and the best alternative for bias correction of the monthly direct runoff time series of the RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber (1904) also gives good results.
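One of the five functional forms of the aridity index referred to above is the Schreiber (1904) formula, under which mean annual runoff follows from precipitation P and potential evapotranspiration PET as Q = P * exp(-PET/P). A minimal sketch with illustrative values:

```python
import numpy as np

P   = np.array([600.0, 900.0, 1400.0])  # mean annual precipitation [mm]
PET = np.array([1100.0, 950.0, 700.0])  # potential evapotranspiration [mm]

aridity = PET / P                 # aridity index phi = PET / P
Q = P * np.exp(-aridity)          # Schreiber (1904) mean annual runoff [mm]
print(Q)
```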
Abstract:
Matlab, one of the mathematical software packages most widely used today in teaching and research, includes among its many tools one specifically for digital image processing. This digital image processing toolbox consists of a set of additional functions that extend the capability of Matlab's numerical environment and allow a large number of digital image processing operations to be carried out directly from the main program. However, although MATLAB has a good help section both online and within the main program itself, the available bibliography in Spanish is very limited, and in the particular case of the image processing toolbox it is practically nonexistent and highly specialized, requiring users to have a solid background in mathematics and digital image processing. Starting from an analysis of all the functions and possibilities available in the toolbox, this project classifies, summarizes, and explains each of them at user level, defining all possible input and output variables, describing the most common tasks in which each function is used, comparing results, and providing clarifying examples that help the reader understand their use and application. In addition, the reader is introduced to the general use of Matlab through an explanation of the program's essential operations, and the more advanced concepts of the toolbox are clarified so that extensive prior training is not necessary. Thus, any student or teacher who wants to get started in digital image processing with Matlab will have a document that serves both to consult and understand the operation of any function in the toolbox and to implement the most common operations in digital image processing.
Abstract:
The wake effect is one of the most important aspects to be analyzed during the engineering phase of every wind farm, since it entails a significant power deficit and an increase in turbulence levels, with a consequent decrease in lifetime. It depends on the wind farm design, the wind turbine type, and the atmospheric conditions prevailing at the site. Traditionally, industry has used analytical models, quick and robust, which allow wind farm engineering to be carried out flexibly at the preliminary stages. However, new models based on Computational Fluid Dynamics (CFD) are needed. These models must increase the accuracy of the output variables while avoiding an increase in computational time. Among them, elliptic models based on the actuator disk technique have come into widespread use in recent years. These models present three important problems when used by default for the solution of large wind farms: the estimation of the reference wind speed upstream of each rotor disk, turbulence modeling, and computational time. To minimize the consequences of these problems, this PhD Thesis proposes solutions implemented in the open-source CFD solver OpenFOAM and adapted to each type of site: a correction of the reference wind speed for the general elliptic models, the semi-parabolic model for large offshore wind farms, and the hybrid model for wind farms in complex terrain. All the models are validated in terms of power ratios against experimental data derived from real operating wind farms.
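The abstract contrasts CFD with the analytical wake models traditionally used by industry but does not name one; as a hedged illustration of that family, the sketch below implements the classic Jensen (Park) model, a top-hat wake expanding linearly with downstream distance. All numbers are illustrative.

```python
import numpy as np

def jensen_wake_speed(u0, ct, r0, k, x):
    """Wind speed on the wake axis x metres downstream of a rotor of
    radius r0, thrust coefficient ct, and wake decay constant k."""
    deficit = (1.0 - np.sqrt(1.0 - ct)) * (r0 / (r0 + k * x)) ** 2
    return u0 * (1.0 - deficit)

# 8 m/s free stream, Ct = 0.8, 40 m rotor radius, onshore decay k = 0.075
print(jensen_wake_speed(8.0, 0.8, 40.0, 0.075, np.array([200.0, 500.0, 1000.0])))
```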