974 results for Application area
Abstract:
The production and use of biofuels have increased in the present context of sustainable development. Biofuel production from plant biomass yields not only biofuel or ethanol but also co-products containing lignin, modified lignin, and lignin derivatives. This research investigated the utilization of lignin-containing biofuel co-products (BCPs) in pavement soil stabilization as a new application area. Laboratory tests were conducted to evaluate the performance and moisture susceptibility of two types of BCP-treated soil samples compared with untreated and traditional stabilizer-treated (fly ash) soil samples. The two types of BCPs investigated were (1) a liquid type with higher lignin content (co-product A) and (2) a powder type with lower lignin content (co-product B). Various additive combinations (co-product A and fly ash, co-products A and B, etc.) were also evaluated as alternatives to stand-alone co-products. Test results indicate that BCPs are effective in stabilizing the Iowa Class 10 soil classified as CL or A-6(8) and have excellent resistance to moisture degradation. Strengths and moisture resistance comparable to those achieved with the traditional additive (fly ash) could be obtained through the use of combined additives (co-product A + fly ash; co-product A + co-product B). Utilizing BCPs as a soil stabilizer appears to be one of many viable ways to improve the profitability of the bio-based products and bioenergy business. Future research is needed to evaluate the freeze-thaw durability and the resilient modulus of BCP-modified layers for a variety of pavement subgrade and base soil types. In addition, the long-term performance of these BCPs should be evaluated under actual field conditions and traffic loadings. Innovative uses of BCPs in pavement-related applications could not only provide additional revenue streams to improve the economics of biorefineries, but could also help to establish green road infrastructure.
Abstract:
The first objective of this master's thesis is to determine the most effective application areas of a robotized press brake bending cell, based on the shapes of the parts to be bent and the technical capabilities of the equipment. The second objective is to highlight the problems related to commissioning a robotized bending cell and to give practical guidelines for solving them. The results will subsequently be applied to the product development of the bending cell marketed by the target company. The main features of bending automation are examined by studying the operating principles and programming methods of robot cells available on the market. The thesis also presents the target company's own objectives and starting points for developing its bending methods, the most important of which are integrability into a flexible manufacturing system and developing the cell into a marketable product. The operation of the robotized bending cell is introduced by outlining the functions of the work cycle, together with the machinery of the cell and the data transfer between the machines. Particular attention is paid to the functions of the cell controller of the flexible manufacturing system and to the operation of the bending cell as part of the FMS. The analytical part examines the bendability of parts in a robotized production solution, taking as its starting point the constraints imposed by the press brake, the robot, the auxiliary equipment, and the part geometry, as well as the new possibilities offered by robotization. Based on the results, robotization is best suited to heavy or complex parts whose manual bending is time-consuming. Economically viable application areas were mapped by studying the effect of batch size, programming time, piece time, and number of bends on manufacturing costs. Robotization was found to be profitable in companies where production runs are frequently repeated and batch sizes exceed 150 pieces. The shape of the part and the number of bends affect the piece time, and thus the manufacturing costs, in many ways; in such cases the profitability of robotization must always be assessed case by case on the basis of the operations required by the work cycle.
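The cost reasoning can be summarised with a simple unit-cost sketch (an illustrative assumption, not the thesis's cost model): programming and setup effort is amortised over the batch, while the piece time contributes to every part.

```latex
% Hedged unit-cost sketch; the symbols are illustrative assumptions.
\[
  c_{\text{unit}} \approx \frac{t_{\text{prog}}\, r}{n} + t_{\text{piece}}\, r,
\]
% where $t_{\text{prog}}$ is the programming/setup time, $t_{\text{piece}}$ the cycle
% time per part, $r$ the hourly rate of the cell, and $n$ the batch size; robotised
% bending becomes attractive once $c_{\text{unit}}$ falls below the manual-bending
% unit cost.
```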
Abstract:
This thesis studies the properties and usability of operators called t-norms, t-conorms, and uninorms, as well as many-valued implications and equivalences. Weights and a generalized mean are embedded into these operators for aggregation; because the resulting operators are used for comparison tasks, they are referred to as comparison measures. The thesis illustrates how these operators can be weighted with differential evolution and aggregated with a generalized mean, and what kinds of comparison measures can be obtained from this procedure. New operators suitable for comparison measures are suggested: combination measures based on the use of t-norms and t-conorms, the generalized 3_-uninorm, and pseudo-equivalence measures based on S-type implications. The empirical part of the thesis demonstrates how these new comparison measures work in the field of classification, for example in the classification of medical data. The second application area is in the field of sports medicine: an expert system for determining an athlete's aerobic and anaerobic thresholds. The core of the thesis offers definitions of comparison measures and shows that, in comparison tasks, measures based on distance and measures based on many-valued logical structures yield essentially the same results. The approach taken in this thesis is highly practical, and the use of the measures has been validated mainly by empirical testing. In general, many different types of operators suitable for comparison tasks have been presented in the fuzzy logic literature, but there has been little or no experimental work with them.
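A minimal sketch of the aggregation idea, assuming a Łukasiewicz-style equivalence (1 - |a - b|) as the per-feature comparison and hand-picked weights and exponent; the thesis itself tunes the weights with differential evolution and also builds measures from t-norms, t-conorms, and uninorms.

```python
import numpy as np

def lukasiewicz_equivalence(a, b):
    # Many-valued equivalence on [0, 1]: eq(a, b) = 1 - |a - b|
    return 1.0 - np.abs(a - b)

def weighted_generalized_mean(values, weights, p):
    # M_p(x; w) = (sum_i w_i * x_i^p)^(1/p), with the weights summing to 1
    values = np.clip(values, 1e-12, 1.0)   # guard against 0 ** negative p
    return float(np.sum(weights * values ** p) ** (1.0 / p))

def comparison_measure(sample, ideal, weights, p):
    # Per-feature equivalences aggregated into a single similarity score
    return weighted_generalized_mean(lukasiewicz_equivalence(sample, ideal), weights, p)

# Hypothetical data: classify a sample by its similarity to class "ideal vectors"
sample = np.array([0.2, 0.7, 0.5])
ideals = {"class_A": np.array([0.1, 0.8, 0.4]),
          "class_B": np.array([0.9, 0.2, 0.6])}
weights = np.array([0.5, 0.3, 0.2])   # in the thesis these would be tuned by differential evolution
scores = {c: comparison_measure(sample, v, weights, p=2.0) for c, v in ideals.items()}
print(max(scores, key=scores.get))    # -> class_A (highest similarity)
```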
Abstract:
This master's thesis presents a multi-purpose camera system intended primarily to be integrated behind an image intensifier, with the aim of improving a soldier's night operation capability and performance in urban environments. One module of the system is an eyepiece display based on a self-emissive organic LED display element and conventional ocular optics. The thesis reviews the technology of the display element and its possible limitations, and investigates through optical measurements whether the conventional lens ocular could be replaced with a plastic prism ocular. This would enable the development of a smaller and lighter eyepiece display, which would improve the usability and competitiveness of the camera system.
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application for distance transforms, they are applied to gray level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been adapted to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning, whereas their use in image compression is very rare; this thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the chessboard distance of the binary image. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8-kernels method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
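A minimal sketch of the two-pass propagation described above (an illustration, not the thesis code: it assumes the integer DTOCS local step between 8-neighbours costs the absolute gray-level difference plus one, and that the binary mask marks the region of calculation while pixels outside it act as zero-distance seeds).

```python
import numpy as np

def dtocs(gray, calc_region, iterations=3):
    """Two-pass DTOCS sketch. The local step between 8-neighbours p and q is
    assumed to cost |gray(p) - gray(q)| + 1; pixels outside calc_region are
    zero-distance seeds, pixels inside start at infinity."""
    h, w = gray.shape
    g = gray.astype(float)
    dist = np.where(calc_region, np.inf, 0.0)

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # neighbours already visited in a raster scan
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]      # neighbours already visited in the reverse scan

    for _ in range(iterations):                  # complicated images may need several rounds
        for y in range(h):                       # forward pass
            for x in range(w):
                for dy, dx in fwd:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cand = dist[ny, nx] + abs(g[y, x] - g[ny, nx]) + 1
                        if cand < dist[y, x]:
                            dist[y, x] = cand
        for y in range(h - 1, -1, -1):           # backward pass
            for x in range(w - 1, -1, -1):
                for dy, dx in bwd:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cand = dist[ny, nx] + abs(g[y, x] - g[ny, nx]) + 1
                        if cand < dist[y, x]:
                            dist[y, x] = cand
    return dist
```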
Abstract:
The main objective of the work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting was considered as a replacement for the conventional methods used in paper making machines for longitudinal cutting, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s; since then, laser cutting and processing have been applied to paper materials in industry with varying levels of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional cutting methods in paper making machines are water jet cutting and rotating slitting blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material, and cutting speed in all locations where longitudinal cutting is needed. The literature review describes the advantages, disadvantages, and challenges of laser technology applied to the cutting of paper, with particular attention to cutting a moving paper web. Based on the studied laser cutting capabilities and the problems of conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges in the wet end was estimated to be the most promising application. This assessment was based on the rate of web break occurrence: up to 64% of all web breaks were found to occur in the wet end, particularly at the so-called open draws where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the cross-machine direction revealed that defects at the paper web edge were the main cause of tearing initiation and the consequent web breaks. It was assumed that laser cutting could improve the tensile strength of the cut edge owing to the high cutting quality and the sealing effect of the edge after laser cutting; studies of laser ablation of cellulose support this claim. The linear energy needed for cutting was calculated with regard to the paper web properties at the intended cutting location, and the calculated values were verified with a series of laser cutting trials. The laser energy needed for cutting in practice deviated from the calculated values, which can be explained by differences in radiative heat transfer during laser cutting and by the different absorption characteristics of dry and moist paper. Laser cut samples, both dry and moist (dry matter content about 25-40%), were tested for strength properties. The tensile strength and strain at break of the laser cut samples were similar to the corresponding values for non-laser cut samples. The chosen method, however, did not address the tensile strength of the laser cut edge in particular, so the assumption that laser cutting improves strength properties was not fully proved. The effect of laser cutting on possible contamination of mill broke (when the trimmed edge is recycled) was also evaluated. Laser cut samples, both dry and moist, were tested for their dirt particle content. The tests revealed that dust particles can accumulate on the surface of moist samples, which has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. The material loss due to evaporation during laser cutting and the amount of solid residues after cutting were evaluated.
Edge trimming with a laser would result in 0.25 kg/h of solid residues and 2.5 kg/h of material lost to evaporation. Schemes for implementing laser cutting and the required laser equipment were discussed. In general, a laser cutting system would require two laser sources (one for each cutting zone), a set of beam transfer and focusing optics, and cutting heads. To increase the reliability of the system, it was suggested that each laser source have double capacity, which would allow cutting to be performed with a single laser source working at full capacity for both cutting zones. Laser technology is already at the required level and does not need additional development. Moreover, the potential for increasing cutting speed is high thanks to the availability of high-power laser sources, which supports the trend towards higher paper machine speeds. A laser cutting system would require a special roll to support cutting; a scheme for such a roll and its integration into the paper making machine was proposed. Laser cutting could be performed at the central roll in the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runnability of the paper making machine. The economics of laser cutting were assessed by comparing a laser cutting system with water jet cutting operating under the same conditions. Laser cutting would still be about twice as expensive as water jet cutting, mainly due to the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss through evaporation, whereas water jet cutting causes almost none. Despite the difficulties of implementing laser cutting in a paper making machine, its implementation can be beneficial, above all because of the possibility of improving the strength properties of the cut edge and consequently reducing the number of web breaks. The capacity of laser cutting to reach cutting speeds exceeding the current speeds of paper making machines is another argument for considering laser cutting technology in the design of new high-speed paper making machines.
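For context, the linear (line) cutting energy referred to above is, in its simplest form, laser power divided by cutting speed; the numbers below are assumed for illustration only and are not values from the study.

```latex
% Hedged illustration of the linear (line) cutting energy; the numerical values
% are assumptions chosen for scale, not figures from the study.
\[
  E_{\mathrm{lin}} = \frac{P}{v},
  \qquad \text{e.g. } P = 500~\mathrm{W},\; v = 25~\mathrm{m/s}
  \;\Rightarrow\; E_{\mathrm{lin}} = 20~\mathrm{J/m}.
\]
```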
Abstract:
This thesis presents the design and implementation of the persistence, distribution, and versioning library CoObRA 2. First, the requirements for such a framework are gathered and existing technologies for this application area are reviewed. The approach used in the new library employs change protocols (change lists) to define the persistence data for documents and versions. This concept is underpinned by a mapping onto constructs from graph theory in order to define the semantics of the model, of the changes, and of their application. Regarding the implementation, the design of the library and the decisions that led to the chosen software architecture are explained in detail; this is a central aspect of the thesis, since the flexibility of the framework is an important requirement. Finally, possible uses are illustrated with concrete example applications, and experience already gained from deployment in CASE tools, research applications, and real-time simulation environments is presented.
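The change-protocol idea can be illustrated with a minimal, generic sketch (this is not the CoObRA 2 API; all names are hypothetical): a document is persisted as an ordered list of field changes, and any version can be reconstructed by replaying a prefix of that list.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class Change:
    # One entry of a change protocol: a field of an object is set to a new value.
    obj_id: str
    field_name: str
    old_value: Any
    new_value: Any

@dataclass
class ChangeLog:
    # Persistence as an ordered list of changes; a version is a position in the log.
    entries: List[Change] = field(default_factory=list)

    def record(self, change: Change) -> None:
        self.entries.append(change)

    def replay(self, upto: Optional[int] = None) -> Dict[str, Dict[str, Any]]:
        # Reconstruct the model state of a version by replaying the log up to 'upto'.
        state: Dict[str, Dict[str, Any]] = {}
        for change in self.entries[:upto]:
            state.setdefault(change.obj_id, {})[change.field_name] = change.new_value
        return state

# Two edits to the same object; version 1 reflects only the first edit.
log = ChangeLog()
log.record(Change("n1", "name", None, "Node A"))
log.record(Change("n1", "name", "Node A", "Node B"))
print(log.replay(upto=1))  # {'n1': {'name': 'Node A'}}
print(log.replay())        # {'n1': {'name': 'Node B'}}
```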
Abstract:
Virtual Reality (VR) is widely used in visualizing medical datasets. This interest has emerged due to the usefulness of its techniques and features, which include immersion, collaboration, and interactivity. In a medical visualization context, immersion is important because it allows users to interact directly and closely with detailed structures in medical datasets. Collaboration, on the other hand, is beneficial because it gives medical practitioners the chance to share their expertise and offer feedback and advice in a more effective and intuitive way. Interactivity is crucial in medical visualization and simulation systems, because responsive and instantaneous actions are key attributes in applications such as surgical simulations. In this paper we present a case study that investigates the use of VR in a collaborative networked CAVE environment from a medical volumetric visualization perspective. The study presents a networked CAVE application built to visualize and interact with volumetric datasets. We summarize the advantages of such an application and the potential benefits of our system, and describe the aspects related to this application area and the relevant issues of such implementations.
Abstract:
The aim of this study was to evaluate the plasma concentration of diclofenac sodium (DS) in dogs submitted to diclofenac phonophoresis and to evaluate whether phonophoresis induces greater absorption of this drug in dogs. Five dogs were used in eight different groups at different times: one group received oral administration of 40 mg of DS per dog, and seven groups received topical application of DS emulgel. The topical application area was 20 cm(2). Continuous ultrasound with a frequency of 1 MHz and an intensity of 0.4 W cm(-2) was used. Blood samples were collected before the treatment (T0) and 1 h (T1) and 4 h (T2) after ultrasound application for all groups. DS concentrations in plasma were measured by high performance liquid chromatography (HPLC). There was a significant increase in DS plasma concentration only at T1 in the oral administration group. It was not possible to detect any concentration of DS in the plasma of dogs after topical application of DS, even after DS phonophoresis. Facilitation of transdermal penetration by ultrasound was not verified under the protocol specified in this research.
ANN statistical image recognition method for computer vision in agricultural mobile robot navigation
Abstract:
The main application area of this project is the deployment of image processing and segmentation techniques for computer vision, through an omnidirectional vision system, on agricultural mobile robots (AMR) used for trajectory navigation and localization tasks. Computational methods based on the JSEG algorithm were used for the classification and characterization of such problems, together with Artificial Neural Networks (ANN) for image recognition. It was thus possible to run simulations and analyze the performance of the JSEG image segmentation technique on Matlab/Octave computational platforms, along with the application of a customized back-propagation Multilayer Perceptron (MLP) algorithm and statistical methods as structured heuristic methods in a Simulink environment. With these procedures in place, it was possible to classify and characterize the HSV color space segments and to recognize the segmented images, with reasonably accurate results. © 2010 IEEE.
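A hedged sketch of the recognition step, assuming a generic one-hidden-layer back-propagation MLP trained on mean-HSV features of a few hypothetical segments (this is not the paper's customized MLP or its JSEG pipeline).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_train(X, y, hidden=8, lr=0.1, epochs=2000):
    """Tiny back-propagation MLP (one hidden sigmoid layer) for classifying
    mean-HSV feature vectors of image segments. X: (n, 3) features in [0, 1];
    y: (n, k) one-hot class labels."""
    n_in, n_out = X.shape[1], y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sigm(X @ W1 + b1)                 # forward pass
        out = sigm(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # squared-error gradient at the output
        d_h = (d_out @ W2.T) * h * (1 - h)    # error back-propagated to the hidden layer
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)
    return W1, b1, W2, b2

def mlp_predict(X, W1, b1, W2, b2):
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    return np.argmax(sigm(sigm(X @ W1 + b1) @ W2 + b2), axis=1)

# Hypothetical mean-HSV values for "crop row" (greenish) and "soil" (brownish) segments
X = np.array([[0.30, 0.80, 0.55], [0.32, 0.75, 0.60],
              [0.08, 0.45, 0.40], [0.10, 0.50, 0.35]])
y = np.eye(2)[[0, 0, 1, 1]]
params = mlp_train(X, y)
print(mlp_predict(X, *params))   # should recover the labels [0 0 1 1]
```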
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Movement Sciences (Ciências da Motricidade) - IBRC
Abstract:
Graduate Program in Agronomy (Plant Protection) - FCA
Abstract:
Graduate Program in Mechanical Engineering - FEG
Abstract:
The aim of this research was to evaluate the effect of adjuvants on spray drift in applications of a 2,4-D + glyphosate mixture. The trial was carried out under field conditions in a completely randomized design. The treatments consisted of solutions containing the herbicide mixture 2,4-D + glyphosate (670 and 1068 g ha-1, respectively) plus the following adjuvants (v v-1): mineral oil (0.5%); anti-drift agent (0.09%); spreader-sticker A (0.1%); liquid fertilizer (0.05%); spreader-sticker B (0.25%); and the herbicides alone, without adjuvants (control). Nylon strings were used for drift determination outside the application area (1, 5, 10, 20, 50, 100 and 200 m away), with four replications, and six foam cylinders placed on the sprayer boom were used to collect the droplets subject to drift. The applications were performed simultaneously, using a specific salt tracer for each spray solution to quantify the deposits by spectrophotometry. It was not possible to verify any effect of the adjuvants on drift at the different distances from the application area. Based on the droplets collected above the spray boom, it was found that susceptibility to drift was lowest with the mineral oil and the anti-drift agent, while the drift risk was highest with the liquid fertilizer and spreader-sticker B.