881 results for Compressed workweek


Relevance:

10.00%

Publisher:

Abstract:

Theories of sparse signal representation, wherein a signal is decomposed into the sum of a small number of constituent elements, play increasingly important roles in both mathematical signal processing and neuroscience, despite the differences between the signal models used in the two domains. After reviewing preliminary material on sparse signal models, I use work on compressed sensing for the electron tomography of biological structures as a target for exploring the efficacy of sparse signal reconstruction in a challenging application domain. My research in this area addresses a topic of keen interest to the biological microscopy community and has resulted in the development of tomographic reconstruction software that is competitive with the state of the art in its field. Moving from the linear signal domain into the nonlinear dynamics of neural encoding, I explain the sparse coding hypothesis in neuroscience and its relationship with olfaction in locusts. I implement a numerical ODE model of the activity of the neural populations responsible for sparse odor coding in locusts, as part of a project involving offset spiking in the Kenyon cells, and I explain the validation procedures we devised to help assess the model's similarity to the biology. The thesis concludes with the development of a new, simplified model of locust olfactory network activity, which explains, with some success, statistical properties of the sparse coding processes carried out in the network.
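
As a concrete illustration of the sparse-reconstruction idea this abstract builds on, the following is a minimal sketch of iterative soft thresholding (ISTA) for an l1-regularised least-squares problem; it is a generic textbook method, not the thesis's tomography software.

```python
# Minimal sketch of sparse recovery by iterative soft thresholding (ISTA).
# Generic illustration of compressed-sensing reconstruction, not the
# thesis's actual electron-tomography code.
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Approximately solve min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

# Toy demo: recover a 5-sparse vector from 40 random projections of R^100.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.01)
```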

Relevance:

10.00%

Publisher:

Abstract:

This work analyses energy consumption in the earthenware (faience) industry and identifies energy-saving measures. In 2014, the specific energy consumption was 191 kgoe/t and the carbon intensity 2.15 tCO2e/t, representing reductions of 50.2% and 1.3% respectively compared with 2010. Total consumption amounted to 1108 toe, 66% of which corresponded to natural gas. An electrical energy analyser was used on the main consuming equipment; for the disaggregation of thermal consumption, readings were taken from the main natural gas meter and data from the environmental and energy audits were used. The firing process accounts for 58% of the facility's thermal consumption, followed by painting with 24%. Shaping is the sector with the highest electricity consumption, at 23% of the total. Thermal losses through the exhaust gases of the combustion equipment and through the kiln envelope, considering natural convection and radiation, correspond to about 6% of total thermal consumption, making measures on thermal insulation and on reducing excess air necessary. Installing variable-speed drives on the kiln's combustion-air fans could yield significant savings, particularly in natural gas consumption (a reduction of 4 toe/year and about €2500/year), with a payback time of under 1 year. However, the supply of combustion air to all burners, as well as complete combustion of the natural gas, must be ensured. Continuous operation of the kiln could increase its energy efficiency and reduce operating and maintenance costs, although the additional stock and labour costs would need to be assessed. The measures related to consumption monitoring, elimination of compressed-air leaks and installation of variable-speed drives on the kiln's combustion-air fans could reduce consumption by 26 toe and emissions by 66 tCO2e, worth almost €14,000 in total.
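
The payback claim can be checked with simple arithmetic. The sketch below computes a simple payback period from the reported annual saving; the investment figure is a hypothetical placeholder, since the abstract only bounds the payback at one year.

```python
# Back-of-the-envelope payback check for the variable-speed-drive measure.
# The abstract gives an annual saving (~4 toe/year, ~2500 EUR/year) and a
# payback under one year; the investment figure below is a hypothetical
# placeholder, not a number from the study.
annual_savings_eur = 2500.0
investment_eur = 2000.0            # assumed; the abstract only bounds it
payback_years = investment_eur / annual_savings_eur
print(f"simple payback: {payback_years:.2f} years")  # < 1 year iff investment < 2500 EUR
```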

Relevance:

10.00%

Publisher:

Abstract:

The Matosinhos Refinery is one of Galp Energia's industrial complexes. Its industrial wastewater treatment plant (ETARI), internally designated Unit 7000, comprises four treatment stages: pre-treatment, physico-chemical treatment, biological treatment and post-treatment. Given how interlinked these stages are, optimising each of them is essential. The aims of this work were to identify problems and/or opportunities for improvement in the pre-treatment, physico-chemical treatment and post-treatment stages and, above all, to optimise the biological treatment of the ETARI. In the pre-treatment it was found that the separation of oils and sludges was not effective, since emulsions of these two phases form. The addition of demulsifying agents was suggested as a solution, but proved economically unviable. As an alternative, techniques for treating the emulsion generated were suggested, such as solvent extraction, centrifugation, ultrasound and microwaves. In the physico-chemical treatment it was found that control of the dissolved-air saturation unit was based on the operators' visual assessment, which can lead to operating conditions far from the optimum for this treatment. An optimisation study of this unit was therefore suggested, in order to determine the optimum air-to-solids ratio for this effluent. In addition, coagulant consumption increased by about --% over the last year, so a feasibility study of electrocoagulation as a replacement for the existing coagulation system was suggested. In the post-treatment, the filter-washing process was identified as the step with optimisation potential. A preliminary study concluded that continuously washing one filter per shift improved filter performance. It was also found that introducing compressed air into the wash water promotes greater removal of debris from the sand bed, although this practice appears to affect filter performance negatively. In the biological treatment, problems were identified with the hydraulic retention time of biological treatment II, which showed high variability; although identified, this problem proved difficult to solve. Dissolved oxygen was not being monitored, so the installation of a dissolved-oxygen probe in a low-turbulence zone of the aeration tank was suggested. Oxygen was found to be distributed homogeneously throughout the aeration tank; an attempt was made to identify the factors influencing this parameter, but the high variability of the effluent and of the treatment conditions made this impossible. Phosphate dosing for biological treatment II was also found to be inefficient, since on --% of days low phosphate levels (< - mg/L) were recorded in the mixed liquor. Replacing the current gravity dosing system with a dosing pump was therefore proposed. Furthermore, consumption of this nutrient increased significantly over the last year (by about --%), which was found to be related to an increase in the microbial population over this period.
It was possible to relate the frequent appearance of sludge on the surface of the secondary clarifiers to sudden increases in conductivity, so storing the effluent in the storm basins in these situations was suggested. Nitrogen removal was found to be practically ineffective, since the conversion of ammoniacal nitrogen into nitrates was very low; bioaugmentation, or converting the activated-sludge system into a two-stage system, was therefore suggested. Finally, the temperature of the effluent entering the ETARI was found to be rather high for biological treatment (approximately --°C), so the installation of a temperature probe in the aeration tank was suggested in order to control the mixed-liquor temperature more effectively. Still regarding the biological treatment, a set of tools aimed at its optimised operation was developed, and several improvement suggestions were made: using the sludge volume index as an indicator of sludge quality instead of the sludge percentage; a set of flowcharts to guide field operators in troubleshooting; an "operating window" intended as an operating support guide; and frequent monitoring of the sludge age and of the food-to-microorganism ratio.
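
For reference, the two operating indicators proposed at the end of the abstract can be computed as follows. The formulas are the standard activated-sludge definitions; the input values are illustrative, not plant data.

```python
# Illustrative computation of the sludge volume index (SVI) and the
# food-to-microorganism (F/M) ratio. Standard textbook definitions;
# the numbers below are made up for the example.
def svi(settled_volume_ml_per_l, mlss_mg_per_l):
    """SVI in mL/g: settleability of the mixed liquor after 30 min."""
    return settled_volume_ml_per_l * 1000.0 / mlss_mg_per_l

def f_to_m(flow_m3_per_d, bod_mg_per_l, volume_m3, mlvss_mg_per_l):
    """F/M in kg BOD per kg MLVSS per day."""
    return (flow_m3_per_d * bod_mg_per_l) / (volume_m3 * mlvss_mg_per_l)

print(svi(300, 3000))                  # 100 mL/g: typically good settling sludge
print(f_to_m(5000, 250, 4000, 2500))   # 0.125 d^-1, within common operating ranges
```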

Relevance:

10.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

10.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more chemically complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite its effectiveness, this tool can only provide 2D projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool for exploring the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality-assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed tomogram. Regularly shaped nanoparticles were used as ground truth for this study. It is concluded that the fidelity of post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed tomogram, which motivates the development of an improved tomographic reconstruction process. This thesis therefore proposes a novel ET method, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) adaptively and simultaneously reconstructs the tomogram from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures; furthermore, the dictionary is adapted to the specific tomogram instance, thereby yielding better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm alternates between learning the sparsifying dictionary and employing it to remove artifacts and noise in one step, and restoring the tomogram data in the other step. Simulated and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method recovers high-fidelity information without the need to choose a sparsifying transform in advance or to ensure that the images strictly satisfy the preconditions of a particular transform (e.g. being strictly piecewise constant for Total Variation minimisation). It thereby also avoids artifacts that specific sparsifying transforms can introduce (e.g. the staircase artifacts that may result from Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible through the combined use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS of nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
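
To make the alternating scheme concrete, the following toy sketch sparse-codes signal patches against a dictionary and then refits the dictionary to the codes (a method-of-optimal-directions step). It illustrates the general dictionary-learning idea only; it is not the DLET implementation described above, and it operates on 1D patches for brevity.

```python
# Toy alternating dictionary-learning sketch: sparse coding + dictionary update.
import numpy as np

def sparse_code(D, X, k=3):
    """Greedy thresholding: keep the k largest-magnitude coefficients per patch."""
    C = D.T @ X                               # correlations (atoms x patches)
    drop = np.argsort(-np.abs(C), axis=0)[k:] # indices of discarded coefficients
    np.put_along_axis(C, drop, 0.0, axis=0)
    return C

def dict_update(X, C):
    """Method-of-optimal-directions step: least-squares fit of D to X ~ D C."""
    D = X @ C.T @ np.linalg.pinv(C @ C.T)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # renormalise atoms

rng = np.random.default_rng(1)
X = rng.standard_normal((16, 500))            # 500 patches of length 16
D = rng.standard_normal((16, 32))             # overcomplete dictionary, 32 atoms
D /= np.linalg.norm(D, axis=0)
for _ in range(10):                           # alternate coding and updating
    C = sparse_code(D, X, k=3)
    D = dict_update(X, C)
```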

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, the new generation of computers provides the high performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task for a robot and an essential prerequisite for moving through that environment. Traditionally, mobile robots have used a combination of sensors based on different technologies: lasers, sonars and contact sensors are typical in any mobile robotic architecture. Colour cameras, however, are an important sensor, because we want robots to use the same information that humans use to sense and move through different environments. Colour cameras are cheap and flexible, but a lot of work needs to be done to give robots sufficient visual understanding of scenes. Computer vision algorithms are computationally complex, but robots now have access to different, powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide 3D coloured point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video-game scene modelling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is available, furniture can be added using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene-mapping purposes. Different methods are tested and analysed on scenes with different distributions of visual and geometric features. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organising map has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organising maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organising neural models have the ability to provide a good representation of the input space; in particular, the GNG is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data.
This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our method achieves good compression results in such man-made scenarios, and the detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the strength of GPU technologies by producing a high-performance implementation of a common CAD/CAM technique called virtual digitizing.
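
As a concrete illustration of the GNG representation mentioned above, the sketch below shows the core adaptation step: the node closest to each input sample, and more weakly its topological neighbours, are pulled towards the sample. Node insertion, edge ageing and the GPU implementation are omitted; the parameter values are illustrative only.

```python
# Minimal sketch of the core Growing Neural Gas adaptation step.
import numpy as np

def gng_step(nodes, edges, x, eps_winner=0.05, eps_neighbor=0.006):
    """nodes: (n,3) positions; edges: set of (i,j) pairs; x: one 3D sample."""
    d = np.linalg.norm(nodes - x, axis=1)
    s = int(np.argmin(d))                      # best-matching node
    nodes[s] += eps_winner * (x - nodes[s])    # pull winner towards the input
    for i, j in edges:                         # pull its neighbours gently
        if s in (i, j):
            n = j if i == s else i
            nodes[n] += eps_neighbor * (x - nodes[n])
    return nodes

rng = np.random.default_rng(2)
cloud = rng.uniform(-1, 1, size=(10000, 3))    # stand-in for a 3D point cloud
nodes = rng.uniform(-1, 1, size=(20, 3))
edges = {(i, i + 1) for i in range(19)}        # toy chain topology
for x in cloud:
    nodes = gng_step(nodes, edges, x)
```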

Relevance:

10.00%

Publisher:

Abstract:

This thesis deals with the analysis of the shape of 2D objects. In computer vision there are many visual features from which information can be extracted; one of the most widely used is the shape, or contour, of objects. With suitable processing, this visual feature allows us to extract information from objects, analyse scenes, and so on. However, the contour or silhouette of an object contains redundant information. This excess data, which contributes no new knowledge, should be removed in order to speed up subsequent processing or to minimise the size of the contour representation for storage or transmission. This data reduction must be carried out without losing information that is important for representing the original contour. A reduced version of a contour can be obtained by removing intermediate points and joining the remaining points with line segments. This reduced representation of a contour is known as a polygonal approximation. Polygonal approximations of contours therefore represent a compressed version of the original information. Their main use is to reduce the volume of information needed to represent the contour of an object. In recent years, however, these approximations have also been used for object recognition, with polygonal approximation algorithms applied directly to extract the feature vectors used in the learning phase. The contributions of this thesis therefore focus on several aspects of polygonal approximations. In the first contribution, several polygonal approximation algorithms were improved by means of a preprocessing stage that speeds them up, even improving the quality of the solutions in less time. In the second contribution, a new polygonal approximation algorithm is proposed that obtains optimal solutions in less time than the other methods in the literature. In the third contribution, an approximation algorithm is proposed that is able to obtain the optimal solution in a few iterations in most cases. Finally, an improved version of the optimal algorithm for obtaining polygonal approximations is proposed, which solves an alternative optimisation problem.
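
To make the notion of a polygonal approximation concrete, here is a sketch of the classic Douglas-Peucker heuristic, which discards contour points lying within a tolerance of the approximating segment. The thesis targets optimal approximations; this is only the textbook baseline.

```python
# Douglas-Peucker polygonal approximation of a 2D contour (classic heuristic).
import numpy as np

def douglas_peucker(points, tol):
    """points: (n,2) array of contour points; returns the retained subset."""
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    ab = b - a
    # perpendicular distance of every interior point to the line through a, b
    p = points[1:-1] - a
    dist = np.abs(ab[0] * p[:, 1] - ab[1] * p[:, 0]) / (np.linalg.norm(ab) + 1e-12)
    i = int(np.argmax(dist)) + 1
    if dist[i - 1] <= tol:                   # all points close enough: keep ends
        return np.array([a, b])
    left = douglas_peucker(points[: i + 1], tol)   # recurse on both halves
    right = douglas_peucker(points[i:], tol)
    return np.vstack([left[:-1], right])           # drop the duplicated split point

t = np.linspace(0, 2 * np.pi, 200)
contour = np.c_[np.cos(t), np.sin(t)]        # dense circle contour
approx = douglas_peucker(contour, tol=0.01)  # far fewer points, same shape
```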

Relevance:

10.00%

Publisher:

Abstract:

This research studies the sintering of ferritic steel chips from the machining process. Metal powder obtained from chips produced by face milling of a ferritic steel was sintered. The chips were produced by machining and characterized by SEM and EDS, and then underwent high-energy milling; the resulting powder was also characterized by SEM and EDS. Three types of dies for uniaxial compression were constructed (l/d ratio greater than 2.5). The differences in die design lay essentially in the direction of load application: along the axial direction for the cylindrical die, and along the longer side for the rectangular dies. Two samples with different geometries, one cylindrical and one rectangular, were compacted at the same pressure of 700 MPa. The samples were sintered in a vacuum resistive furnace at a heating rate of 20 °C/min, with an isotherm at 1300 °C for 60 minutes, and a cooling rate of 25 °C/min to room temperature. The starting material of the rectangular sample was additionally annealed at up to 800 °C for 30 min. Sintered samples were characterized by scanning electron microscopy, optical microscopy and EDS. The sample compacted in the cylindrical die did not show a uniform density, which was reflected in the sintered microstructure by the irregular geometry of the pores, indicating that sintering was incomplete, reaching only its second stage. For the specimen compacted in the rectangular die, the analyses performed by scanning electron microscopy, optical microscopy and EDS indicate good densification and a homogeneous microstructure throughout. Additionally, the EDS analyses indicate no significant changes in chemical composition during the process steps. It is therefore concluded that recycling chips of the processed ferritic steel by powder metallurgy is feasible. This makes it possible to save raw material and energy by manufacturing components of known properties from chips generated by the machining process, with benefits for the environment.

Relevance:

10.00%

Publisher:

Abstract:

Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of wide-sense stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and for phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line-spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, called the Generalized Nested Sampler (GNS), that achieves optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from an order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value. The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has attracted great interest in recent times owing to its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can achieve stable phase retrieval using an order-wise minimal number of measurements.
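
The difference-set property underlying nested sampling can be illustrated in a few lines: a two-level nested array with N1 + N2 physical samplers yields a contiguous set of O(N1*N2) lags, which is what allows sub-Nyquist covariance estimation. This is the generic textbook construction, not the GNS or PNFS variants developed in the thesis.

```python
# Difference co-array of a two-level nested array (unit spacing).
import numpy as np

def nested_positions(n1, n2):
    """Dense level: 1..n1; sparse level: (n1+1)*(1..n2)."""
    dense = np.arange(1, n1 + 1)
    sparse = (n1 + 1) * np.arange(1, n2 + 1)
    return np.concatenate([dense, sparse])

pos = nested_positions(4, 4)                  # 8 physical samplers
diffs = np.unique((pos[:, None] - pos[None, :]).ravel())
contiguous = np.array_equal(diffs, np.arange(diffs.min(), diffs.max() + 1))
# 8 sensors cover every lag from -19 to 19 without gaps:
print(len(pos), "sensors ->", diffs.min(), "...", diffs.max(), "contiguous:", contiguous)
```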

Relevance:

10.00%

Publisher:

Abstract:

EUROFER is a steel alloy that is promising for use in nuclear reactors and in applications where the material is subjected to temperatures of up to 550 °C, a limit imposed by its low creep resistance above that temperature. One way to improve this property, so that the steel can work at higher temperatures, is to prevent sliding of its grain boundaries. Factors that influence this sliding are the morphology of the grains and the angle and migration speed of the grain boundaries. This speed can be decreased by the presence of a dispersed phase in the material, provided it is fine and homogeneously distributed. In this context, this work presents the development of a new metal-matrix composite (MMC) whose starting materials are the stainless steel EUROFER 97 and two different kinds of tantalum carbide (TaC): one with an average crystallite size of 13.78 nm, synthesized at UFRN, and another of 40.66 nm, supplied by Aldrich. In order to improve the mechanical properties of the metal matrix, nano-sized particles of the two types of TaC were added by powder metallurgy. This work discusses the effect of the carbide dispersion on the microstructure of the sintered parts. Pure steel powders and powders with additions of 3% UFRN TaC and 3% commercial TaC, respectively, were milled for the following times: (a) 5 hours in the planetary mill, for all powders; (b) 8 hours in the planetary mill, only for the steel powders with commercial TaC; and (c) 24 hours in the conventional ball mill, mixing the pure steel milled for 5 hours in the planetary mill with 3% commercial TaC. Each of the resulting particulate samples was cold compacted under a uniaxial pressure of 600 MPa in a cylindrical die of 5 mm diameter. Subsequently, the compacts were sintered in a vacuum furnace at temperatures from 1150 to 1250 °C, with increments of 20 °C and 10 °C per minute, held at these isotherms for 30, 60 and 120 minutes and cooled to room temperature. The distribution, size and dispersion of the steel and composite particles were determined by X-ray diffraction and scanning electron microscopy followed by chemical analysis (EDS). The structures of the sintered bodies were observed by optical microscopy and scanning electron microscopy with EDS, in addition to X-ray diffraction. Initial sintering studies of the EUROFER 97 steel gave a positive result regarding the improvement of the mechanical properties, independent of the processing, since sintered microhardness values were obtained that are close to, and even greater than, 100% of the value of 333.2 HV obtained for the pure steel as received in bar form.

Relevance:

10.00%

Publisher:

Abstract:

Carbon monoliths with high densities are studied as adsorbents for the storage of H2, CH4, and CO2 at ambient temperature and high pressures. The starting monolith A3 (produced by ATMI Co.) was activated under a CO2 flow at 1073 K, applying different activation times of up to 48 h. Micropore volumes and apparent surface areas were deduced from N2 and CO2 adsorption isotherms at 77 K and 273 K, respectively. CO2 and CH4 isotherms were measured up to 3 MPa and H2 up to 20 MPa. The BET surface area of the starting monolith (941 m2/g) could be increased significantly, up to 1586 m2/g, and the developed porosity consists almost exclusively of micropores <1 nm. Total storage amounts take into account the compressed gas in the void space of the material in addition to the adsorbed gas. Remarkably high total storage amounts are reached for CO2 (482 g/L), CH4 (123 g/L), and H2 (18 g/L). These values are much higher than for other sorbents with similar surface areas, owing to the high density of the starting monolith and of the activated ones, for which the density decreases only slightly (from 1.0 g/cm3 to 0.8 g/cm3 upon CO2 activation). The findings reveal the suitability of high-density activated carbon monoliths for gas storage applications. The amount of stored gas can be increased by more than 70% for H2 at 20 MPa, by almost 5.5 times for CH4 at 3 MPa, and by more than 7.5 times for CO2 at 3 MPa when these adsorbents are used for gas storage under the investigated conditions rather than simple compression. Furthermore, the results have recently been confirmed by a scale-up study in which 2.64 kg of high-density monolith adsorbent filled a 2.5 L tank cylinder (Carbon, 76, 2014, 123).
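
The "total storage" accounting described above can be sketched as follows: the adsorbed amount per litre of monolith plus the gas compressed in the void fraction. Ideal-gas density is used here as a crude stand-in for real-gas behaviour, and the input numbers are placeholders rather than the paper's measured values.

```python
# Rough total-storage estimate: adsorbed gas + compressed gas in the voids.
R = 8.314  # J/(mol K)

def total_storage_g_per_l(excess_ads_g_per_l, void_frac, p_pa, t_k, molar_mass_g):
    # ideal-gas density in g/L: (p / RT) mol/m3 * M g/mol / 1000 L/m3
    gas_density_g_per_l = p_pa * molar_mass_g / (R * t_k) / 1000.0
    return excess_ads_g_per_l + void_frac * gas_density_g_per_l

# Example: CH4 (16 g/mol) at 3 MPa, 298 K, 40% void space, 100 g/L adsorbed
# (all placeholder values) -> about 108 g/L total.
print(total_storage_g_per_l(100.0, 0.40, 3e6, 298.0, 16.0))
```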

Relevance:

10.00%

Publisher:

Abstract:

Biomass is considered the largest renewable energy source that can be used in an environmentally sustainable way. Pyrolysis of biomass makes it possible to obtain products with higher energy density and better use properties; the liquid resulting from this process is traditionally called bio-oil. The use of infrared burners in industrial applications has many technical and operational advantages, for example uniformity of the heat supply in the form of radiation and convection, with greater control of emissions owing to the passage of the exhaust gases through a macroporous ceramic bed. This paper presents a commercial infrared burner adapted with a proposed ejector, able to burn a hybrid configuration of liquefied petroleum gas (LPG) and diluted bio-oil. The dilution of the bio-oil with absolute ethanol aimed to decrease the viscosity of the fluid and to improve its stability and atomization. A two-stage (low heat/high heat) modulating temperature controller with a thermocouple was introduced, along with solenoid valves for the fuel supply. The infrared burner was tested with the diluted bio-oil atomized, and its performance was evaluated by means of an energy balance. For the thermodynamic analysis, the load was estimated using an aluminum plate located at the combustion-gas outlet, with the temperature distribution measured by thermocouples. The dilution reduced the viscosity of the bio-oil by 75.4% and increased its lower heating value (LHV) by 11%, providing stable combustion in the burner through atomization with compressed air and combined firing with LPG. With the hybrid fuel injected, heat transfer from the plate to the environment increased by 21.6% and the useful gain by 26.7%, owing to the improved first-law (thermodynamic) efficiency of the infrared burner.

Relevance:

10.00%

Publisher:

Abstract:

Digital material on DVD.

Relevance:

10.00%

Publisher:

Abstract:

57 leaves: illustrations, photographs.