891 resultados para Enzymes - Industrial applications


Relevância:

80.00%

Publicador:

Resumo:

Molecular dynamics simulations of silicate and borate glasses and melts: Structure, diffusion dynamics and vibrational properties. In this work, computer simulations of the model glass formers SiO2 and B2O3 are presented, using classical molecular dynamics (MD) simulations and quantum mechanical calculations based on density functional theory (DFT). The latter limits the system size to about 100-200 atoms. SiO2 and B2O3 are the two most important network formers for industrial applications of oxide glasses. Glass samples are generated by a quench from the melt with classical MD simulations and a subsequent structural relaxation with DFT forces. In addition, full ab initio quenches are carried out with a significantly faster cooling rate. In all cases, the structural properties are in good agreement with experimental results from neutron and X-ray scattering. A special focus is the study of vibrational properties, as they give access to low-temperature thermodynamic properties. The vibrational spectra are calculated by the so-called "frozen phonon" method. In all cases, the DFT curves show acceptable agreement with experimental results from inelastic neutron scattering. For the model glass former B2O3, a new classical interaction potential is parametrized, based on the liquid trajectory of an ab initio MD simulation at 2300 K and using a structural fitting routine. The inclusion of 3-body angular interactions leads to significantly improved agreement between the liquid properties of the classical MD and ab initio MD simulations. However, the generated glass structures in all cases show a significantly lower fraction of 3-membered planar boroxol rings than predicted by experimental results (f = 60%-80%). The largest boroxol ring fraction, f = 15±5%, is observed in the full ab initio quenches from 2300 K. For SiO2, the glass structures after the quantum mechanical relaxation are the basis for calculations of the linear thermal expansion coefficient αL(T), employing the quasi-harmonic approximation. The striking observation is a change of sign of αL(T), with a temperature range of negative αL(T) at low temperatures, in good agreement with experimental results.
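For context, the quasi-harmonic approximation referred to above derives αL(T) from the volume dependence of the frozen-phonon frequencies; in its standard textbook form (given here for orientation only, not quoted from the thesis):

\[
F(V,T) = E_0(V) + \sum_i \left[\tfrac{1}{2}\hbar\omega_i(V) + k_B T \,\ln\!\left(1 - e^{-\hbar\omega_i(V)/k_B T}\right)\right],
\qquad
\alpha_L(T) = \frac{1}{3\,V_{\mathrm{eq}}(T)}\,\frac{\mathrm{d}V_{\mathrm{eq}}(T)}{\mathrm{d}T},
\]

where V_eq(T) is the volume that minimizes F(V,T) at each temperature; a negative dV_eq/dT at low temperatures is what produces the sign change of αL(T) mentioned above.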

Relevância:

80.00%

Publicador:

Resumo:

Given that S. cerevisiae strains have changed genetically in the course of domestication and adaptation to different habitats, this work tested a representative selection of laboratory, commercial and naturally occurring Saccharomyces strains and their interspecies hybrids for the distribution of allelic variants of the hexokinase genes HXK1 and HXK2. Among the hexose transporters, Hxt3p was the main focus, since its essential role in the fermentation of glucose and fructose had already been demonstrated. This work showed that there are substantial differences in the fermentation of glucose and fructose between wine yeasts of the genus Saccharomyces, which partly correlate with structural variants of the hexose transporter Hxt3p. A total of 51 yeast strains were examined for their allelic variant of the HXT3 gene, yielding three main groups (the Fermichamp® type group, brewing yeasts and hybrid strains) with different HXT3 alleles. In the context of winemaking, significant nucleotide substitutions were found within the HXT3 gene of robust S. cerevisiae strains (e.g. sparkling-wine yeasts, commercial starter cultures) and hybrid strains. These yeasts were characterized by the ability to ferment the must despite stressful environmental conditions (such as high ethanol concentration, reduced ammonium content, or an unfavourable glucose:fructose ratio). The experiments indicate that the HXT3 allele variant of the strain Fermichamp®, which can be used as a starter culture, is responsible for the enhanced fructose consumption; the same behaviour was also observed in other strains carrying this allele variant. The S. cerevisiae strains Fermichamp® and 54.41, which are identical with respect to the Hxt3p amino acid sequence, were tested against two S. cerevisiae strains with the standard HXT3 allele type, Fermivin® and 33. The difference in hexose utilization between strains with the Fermichamp® and the standard allele type was most evident in the middle of fermentation. Both groups, with the HXT3 Fermichamp® as well as the Fermivin® allele type, fermented glucose faster than fructose; the difference between these HXT3 allele types in sugar utilization was that the Fermichamp® type showed a smaller gap between the consumption rates of the two hexoses than the Fermivin® type. Sugar-uptake measurements confirmed the relatively good fructose uptake of these strains. Likewise, the fructophilic character of the triple hybrid S. cerevisiae x S. kudriavzevii x S. bayanus strain HL78 correlated in transport experiments with an enhanced uptake of fructose compared to glucose; overall, this strain behaved similarly to the S. cerevisiae strains Fermichamp® and 54.41. In this work, a structural model of the hexose transporter Hxt3p was built, based on the structure of the proton/xylose symporter XylE from Escherichia coli, which shares 30% homology. Using the Hxt3p model, sequence regions of high variability (hotspots) were detected in three Hxt3p isoforms of the main groups (the Fermichamp® type group, brewing yeasts and hybrid strains). These significant amino acid substitutions, which may alter the physical and chemical properties of the carrier, are concentrated in three regions: the region between the N- and C-terminal domains, the cytosolic domain, and the outside loop between transmembrane regions 9 and 10.
Although the transport measurements revealed no relationship between strains with different HXT3 alleles and their tolerance to ethanol, a significant increase in sugar uptake was observed in the test strains after a preceding 24-hour incubation with 4 vol% ethanol. Overall, allelic variants of the HXT3 gene could be a useful criterion in the search for robust yeasts for winemaking or other industrial applications. The effect of these modifications on the structure and efficiency of the hexose transporter, as well as the possible link to ethanol resistance, requires further detailed investigation. No relationship was found between the low-variability allelic variants of the hexokinase genes HXK1 and HXK2 and sugar metabolism. However, the hexokinases of the strains examined generally showed a significantly lower affinity for fructose than for glucose, which is certainly a main cause of the increase in the fructose:glucose ratio during the fermentation of grape musts.

Relevância:

80.00%

Publicador:

Resumo:

Polylactic acid (PLA) is a bio-derived, biodegradable polymer with a number of mechanical properties similar to those of commodity plastics such as polyethylene (PE) and polyethylene terephthalate (PETE). There has recently been great interest in using PLA to replace these typical petroleum-derived polymers because of the developing trend toward more sustainable materials and technologies. However, PLA's inherently slow crystallization behavior is not compatible with prototypical polymer processing techniques such as molding and extrusion, which in turn inhibits its widespread use in industrial applications. In order to make PLA a commercially viable material, it must be processed in such a way that its tendency to form crystals is enhanced. The industry standard for producing PLA products is twin screw extrusion (TSE), in which polymer pellets are fed into a heated extruder, mixed at a temperature above the melting temperature, and molded into a desired shape. A relatively novel processing technique called solid-state shear pulverization (SSSP) processes the polymer in the solid state so that nucleation sites can develop and fast crystallization can occur. SSSP has also been found to enhance the mechanical properties of a material, but its powder output form is undesirable in industry. A new process called solid-state/melt extrusion (SSME), developed at Bucknell University, combines the TSE and SSSP processes in one instrument. This technique has been shown to produce moldable polymer products with increased mechanical strength. This thesis first investigated the effects of the TSE, SSSP, and SSME polymer processing techniques on PLA. The study sought to determine the process that yields products with the most enhanced thermal and mechanical properties. For characterization, percent crystallinity, crystallization half-time, storage modulus, softening temperature, degradation temperature and molecular weight were analyzed for all samples. Through these characterization techniques, it was observed that SSME-processed PLA had enhanced properties relative to TSE- and SSSP-processed PLA. Based on these findings, an optimization study for SSME-processed PLA was conducted in which throughput and screw design were varied. The optimization study determined that PLA processed with a low flow rate and a moderate screw design in an SSME process produced the polymer product with the largest increase in thermal properties and a high retention of polymer structure relative to TSE-, SSSP-, and all other SSME-processed PLA. It was concluded that the SSSP stage of processing causes scission of polymer chains, creating defects within the material, while the TSE stage allows these defects to be mixed thoroughly throughout the sample. The study showed that a proper SSME setup allows both an increase in nucleation sites within the polymer and sufficient mixing, which in turn leads to the development of a large number of crystals in a short period of time.

Relevância:

80.00%

Publicador:

Resumo:

Copper (Cu) and its alloys are used extensively in domestic and industrial applications. Cu is also an essential element in mammalian nutrition. Since both copper deficiency and copper excess produce adverse health effects, the dose-response curve is U-shaped, although its precise form has not yet been well characterized. Many animal and human studies have been conducted on copper, providing a rich database from which data suitable for modeling the copper dose-response relationship may be extracted. Possible dose-response modeling strategies are considered in this review, including those based on the benchmark dose and categorical regression. The usefulness of biologically based dose-response modeling techniques in understanding copper toxicity is difficult to assess at this time, since the mechanisms underlying copper-induced toxicity have yet to be fully elucidated. A dose-response modeling strategy is proposed for copper toxicity associated with both deficiency and excess. This strategy was applied to multiple studies of copper-induced toxicity, standardized with respect to severity of adverse health outcomes and selected on the basis of criteria reflecting the quality and relevance of individual studies. The use of a comprehensive database on copper-induced toxicity is essential for dose-response modeling, since there is insufficient information in any single study to adequately characterize copper dose-response relationships. The dose-response modeling strategy envisioned here is designed to determine whether the existing toxicity data for copper excess or deficiency can be used effectively to define the limits of the homeostatic range in humans and other species. By considering alternative techniques for determining a point of departure and for low-dose extrapolation (including categorical regression, the benchmark dose, and identification of observed no-effect levels), this strategy will identify which techniques are most suitable for this purpose. The analysis also serves to identify areas in which additional data are needed to better define the characteristics of dose-response relationships for copper-induced toxicity in relation to excess or deficiency.
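As a purely illustrative aside (the incidence data, model choice and parameter values below are invented, not taken from the review), the benchmark-dose concept mentioned above can be sketched in a few lines: fit a dose-response model to incidence data and solve for the dose that gives a specified extra risk.

```python
# Illustrative benchmark-dose (BMD) sketch: fit a simple logistic dose-response
# model to hypothetical incidence data and find the dose giving 10 % extra risk
# over background. Not the review's own data or model choice.
import numpy as np
from scipy.optimize import curve_fit, brentq

doses     = np.array([0.0, 1.0, 3.0, 10.0, 30.0])   # mg/kg-day, hypothetical
n_total   = np.array([50, 50, 50, 50, 50])
n_affect  = np.array([1, 2, 5, 14, 38])
incidence = n_affect / n_total

def logistic(d, a, b):
    """Probability of an adverse effect at dose d (log-dose logistic model)."""
    return 1.0 / (1.0 + np.exp(-(a + b * np.log(d + 1e-9))))

params, _ = curve_fit(logistic, doses, incidence, p0=[-3.0, 1.0])
p0 = logistic(0.0, *params)                          # background risk

# Extra risk: (P(d) - P(0)) / (1 - P(0)) = 0.10 defines the BMD10.
def extra_risk(d):
    return (logistic(d, *params) - p0) / (1.0 - p0) - 0.10

bmd10 = brentq(extra_risk, 1e-6, doses.max())
print(f"BMD10 ~ {bmd10:.2f} mg/kg-day")
```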

Relevância:

80.00%

Publicador:

Resumo:

Recent improvements in precursor chemistry, reactor geometry and run conditions extend the manufacturing capability of traditional flame aerosol synthesis from oxide nanoparticles to metals, alloys and complex inorganic salts. As an example of a demanding composition, we demonstrate here the one-step flame synthesis of nanoparticles of a 4-element non-oxide phosphor for upconversion applications. The phosphors are characterized in terms of emission capability, phase purity and thermal phase evolution. The preparation of flame-made beta-NaYF4 with dopants of Yb, Tm or Yb, Er furthermore illustrates the nanoparticle synthesis toolboxes now available, based on modified flame-spray synthesis from our laboratories at ETH Zurich. Since scaling concepts for flame synthesis, including large-scale filtration and powder handling, have become commercially available, the development of industrial applications of complex nanoparticles of metals, alloys or most other thermally stable inorganic compounds can now be considered a feasible alternative to traditional top-down manufacturing or liquid-intensive wet chemistry.

Relevância:

80.00%

Publicador:

Resumo:

Internet of Things-based systems are anticipated to gain widespread use in industrial applications. Standardization efforts such as 6LoWPAN and the Constrained Application Protocol (CoAP) have made the integration of wireless sensor nodes possible using Internet technology and web-like access to data (RESTful service access). While there are still some open issues, the interoperability problem in the lower layers can now be considered solved from an enterprise software vendor's point of view. One possible next step towards integrating real-world objects into enterprise systems, and solving the corresponding interoperability problems at higher levels, is to use semantic web technologies. We introduce an abstraction of real-world objects, called Semantic Physical Business Entities (SPBE), using Linked Data principles. We show that this abstraction fits nicely into enterprise systems, as SPBEs allow a business-object-centric view of real-world objects instead of a purely device-centric view. The interdependencies between how services in an enterprise system are currently used and how this can be done in a semantic, real-world-aware enterprise system are outlined, arguing for the need for semantic services and semantic knowledge repositories. We introduce a lightweight query language, which we use to perform a quantitative analysis of our approach to demonstrate its feasibility.
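As a purely illustrative sketch of the SPBE abstraction (the namespace, entity and property names are invented, and the paper's own query language is not reproduced here), a real-world object can be exposed as a Linked Data resource that enterprise services query by business meaning rather than by device address:

```python
# Minimal Linked Data sketch of a Semantic Physical Business Entity (SPBE).
# Namespace, entity and property names are illustrative assumptions only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/spbe/")

g = Graph()
pallet = URIRef(EX["pallet-42"])      # the business object, not the sensor node

g.add((pallet, RDF.type, EX.SemanticPhysicalBusinessEntity))
g.add((pallet, EX.containsProduct, Literal("vaccine batch 7")))
g.add((pallet, EX.lastReportedTemperature, Literal(9.5, datatype=XSD.double)))
g.add((pallet, EX.observedBy, URIRef("coap://[2001:db8::1]/temperature")))

# A business-level query: which entities report a temperature above 8 degrees?
results = g.query("""
    PREFIX ex: <http://example.org/spbe/>
    SELECT ?entity ?t WHERE {
        ?entity a ex:SemanticPhysicalBusinessEntity ;
                ex:lastReportedTemperature ?t .
        FILTER (?t > 8.0)
    }
""")
for row in results:
    print("temperature alert:", row.entity, row.t)

print(g.serialize(format="turtle"))
```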

Relevância:

80.00%

Publicador:

Resumo:

The Solver Add-in of Microsoft Excel is widely used in courses on Operations Research and in industrial applications. Since the 2010 version of Microsoft Excel, the Solver Add-in has included a so-called evolutionary solver. We analyze how this metaheuristic can be applied to the resource-constrained project scheduling problem (RCPSP). We present an implementation of a schedule-generation scheme in a spreadsheet which, combined with the evolutionary solver, can be used for devising good feasible schedules. Our computational results indicate that, using this approach, non-trivial instances of the RCPSP can be solved to optimality or near-optimality.
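For orientation, the decoding step that such a spreadsheet schedule-generation scheme performs can be sketched as a serial SGS; the instance data and the single-resource simplification below are assumptions for illustration, not the paper's implementation:

```python
# Minimal serial schedule-generation scheme (SGS) for the RCPSP: decode a
# priority list (as proposed by an evolutionary solver) into a feasible
# schedule. Single renewable resource; all data below are toy assumptions.
from collections import defaultdict

def serial_sgs(durations, demands, successors, capacity, priority):
    n = len(durations)
    preds = {j: [] for j in durations}                 # derive predecessors
    for j, succ in successors.items():
        for k in succ:
            preds[k].append(j)

    start, finish = {}, {}
    usage = defaultdict(int)                           # resource usage per period

    while len(start) < n:
        # next activity in priority order whose predecessors are all scheduled
        j = next(a for a in priority
                 if a not in start and all(p in finish for p in preds[a]))
        t = max((finish[p] for p in preds[j]), default=0)
        # shift right until capacity suffices over the whole processing time
        while any(usage[t + d] + demands[j] > capacity for d in range(durations[j])):
            t += 1
        start[j], finish[j] = t, t + durations[j]
        for d in range(durations[j]):
            usage[t + d] += demands[j]
    return start, max(finish.values())

# Toy instance: dummy start/end activities 0 and 3, resource capacity 4.
durations  = {0: 0, 1: 3, 2: 2, 3: 0}
demands    = {0: 0, 1: 2, 2: 3, 3: 0}
successors = {0: [1, 2], 1: [3], 2: [3], 3: []}
starts, makespan = serial_sgs(durations, demands, successors,
                              capacity=4, priority=[0, 1, 2, 3])
print(starts, makespan)        # e.g. {0: 0, 1: 0, 2: 3, 3: 5} and makespan 5
```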

Relevância:

80.00%

Publicador:

Resumo:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts are executed on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities for developing distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability of the timing behavior and the resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations needed to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing the communications to be scheduled and the memory usage to be adjusted at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
Distributed real-time embedded systems are becoming increasingly important to society. Demand for them is growing and we depend more and more on the services they provide. High-integrity systems are a particularly important subset, characterized by the fact that a failure in their operation can cause loss of human life, environmental damage or substantial financial losses. The need to satisfy strict timing requirements makes their development more complex. As embedded systems continue to spread through our society, it is necessary to keep development costs under control by using suitable techniques in their design, maintenance and certification; in particular, a flexible, hardware-independent technology is required. The evolution of communication networks and paradigms, together with the need for greater computing power and fault tolerance, has motivated the interconnection of electronic devices, and the communication mechanisms allow data to be transferred at high transmission speeds. In this context, the concept of a distributed system has emerged: systems whose components run on several nodes in parallel and interact with each other through communication networks. An interesting concept is that of real-time systems that are neutral with respect to the execution platform, characterized by the fact that this platform is unknown during their design. This property is relevant because such systems should run on the widest possible variety of architectures, have an average lifetime of more than ten years, and the place where they run may change. The Java programming language is a good basis for developing this type of system. For this reason RTSJ (Real-Time Specification for Java) was created, a language extension that enables the development of real-time systems. However, RTSJ does not provide facilities for developing distributed real-time applications. This is an important limitation, given that most current and future systems will be distributed. The DRTSJ (Distributed RTSJ) group was created under the Java Community Process (JSR-50) in order to define abstractions that address this limitation, but at present there is still no formal specification.
The aim of this thesis is to develop a communication middleware for building distributed real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind, such as predictability and reliability of the timing behaviour and resource usage. The design starts from the definition of a computational model which identifies, among other things: the communication model, the most suitable underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to execute synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behaviour. Mechanisms to monitor the functional and timing behaviour have also been included. Independence from the network protocol has been achieved by defining a network interface and specific modules. The JRMP protocol has also been modified to include different phases, non-functional parameters and message-size optimizations. Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for critical systems and there are no alternatives. This work proposes a predictable serialization, which has involved the development of a new compiler that generates optimized code according to the computational model. The proposed solution has the advantage that, at compile time, it allows the communications to be scheduled and the memory usage to be adjusted. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behaviour, the memory usage, the processor usage (end-to-end response time and response time in each functional block) and the network usage (real consumption compared with the estimate). The good results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests have shown that the design and the prototype are reliable for industrial applications with strict timing requirements.

Relevância:

80.00%

Publicador:

Resumo:

Industrial applications of computer vision sometimes require the detection of atypical objects that occur as small groups of pixels in digital images. These objects are difficult to single out because they are small and randomly distributed. In this work we propose an image segmentation method using the novel Ant System-based Clustering Algorithm (ASCA). ASCA models the foraging behaviour of ants, which move through the data space searching for high-data-density regions and leave pheromone trails along their paths. The pheromone map is used to identify the exact number of clusters and to assign the pixels to these clusters using the pheromone gradient. We applied ASCA to the detection of microcalcifications in digital mammograms and compared its performance with state-of-the-art clustering algorithms such as the 1D Self-Organizing Map, k-Means, Fuzzy c-Means and Possibilistic Fuzzy c-Means. The main advantage of ASCA is that the number of clusters does not need to be known a priori. The experimental results show that ASCA is more efficient than the other algorithms in detecting small clusters of atypical data.
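The abstract does not give the details of ASCA, but the general ant-system clustering idea it describes (ants biased toward dense regions deposit pheromone, and peaks of the pheromone map act as cluster centres) can be sketched as follows; all parameters and the 1-D toy data are assumptions, not the published algorithm:

```python
# Loose, illustrative sketch in the spirit of ant-system-based clustering:
# ants random-walk over a discretised feature space, biased toward cells with
# high data density and high pheromone, and deposit pheromone as they move.
# Peaks of the final pheromone map serve as cluster centres.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D feature (e.g. pixel intensity): one dense group, one small atypical group, noise.
data = np.concatenate([rng.normal(0.2, 0.02, 500),
                       rng.normal(0.8, 0.02, 30),
                       rng.uniform(0, 1, 50)])

bins = 100
density, edges = np.histogram(data, bins=bins, range=(0.0, 1.0))
pheromone = np.ones(bins)

n_ants, n_steps, evaporation, deposit = 50, 200, 0.05, 1.0
ants = rng.integers(0, bins, n_ants)

for _ in range(n_steps):
    for a in range(n_ants):
        i = ants[a]
        cand = np.clip([i - 1, i, i + 1], 0, bins - 1)        # stay / step left / step right
        attractiveness = (density[cand] + 1e-9) * pheromone[cand]
        probs = attractiveness / attractiveness.sum()
        ants[a] = rng.choice(cand, p=probs)
        pheromone[ants[a]] += deposit
    pheromone *= (1.0 - evaporation)                          # evaporation step

# Cluster centres = local maxima of the pheromone map above its mean level.
peaks = [i for i in range(1, bins - 1)
         if pheromone[i] > pheromone[i - 1]
         and pheromone[i] >= pheromone[i + 1]
         and pheromone[i] > pheromone.mean()]
centres = [(edges[i] + edges[i + 1]) / 2 for i in peaks]

# Assign each point to the nearest pheromone peak (gradient-like assignment).
labels = np.argmin(np.abs(data[:, None] - np.array(centres)[None, :]), axis=1)
print("estimated cluster centres:", np.round(centres, 2))
print("points per cluster:", np.bincount(labels))
```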

Relevância:

80.00%

Publicador:

Resumo:

Thermography is an inspection and diagnostic method based on the infrared radiation emitted by bodies. It allows this radiation to be measured at a distance and without contact, yielding a thermogram or thermographic image, the object of study of this project. All bodies at a certain temperature emit infrared radiation. However, a thermographic inspection must take into account the emissivity of the bodies, that is, their capacity to emit radiation, since this depends not only on the temperature of the body but also on its surface characteristics. The tools needed to obtain a thermogram are essentially a thermographic camera and software for analysing the images. The camera perceives the infrared emission of an object and converts it into a visible image, originally monochromatic, which is then coloured by the camera itself or by software to make the thermogram easier to interpret. Several techniques exist for obtaining these thermographic images, differing in how the heat energy is transferred to the body; they are classified as passive thermography, active thermography and vibrothermography. The method used in each case depends, among other factors, on the thermal characteristics of the body, the type of defect to be located, and the spatial resolution of the images. Accuracy is important when analysing the images in order to obtain diagnoses and detect defects, so the images are processed to minimise the effects caused by external factors, improve image quality and extract information from the inspections carried out. Thermography is a very flexible non-destructive testing method that offers many advantages, so its field of application is very wide, ranging from industrial applications to research and development. Surveillance and security, energy saving, medicine and the environment are some of the fields where thermography provides significant benefits. This project is a theoretical study of thermography in which each of the aspects mentioned is described in detail. It concludes with a practical application: building an infrared camera from a webcam and analysing the images obtained with it, which demonstrates some of the theories explained as well as the possibility of recognising objects by means of thermography.

Relevância:

80.00%

Publicador:

Resumo:

The study of materials, especially biological ones, by non-destructive means is becoming increasingly important in both scientific and industrial applications. The economic advantages of non-destructive methods are numerous. There are many physical procedures capable of extracting detailed information from the surface of wood with little or no prior treatment and minimal intrusion into the material. Among the various methods, optical and acoustic techniques stand out for their great versatility, relative simplicity and low cost. Starting from the application of simple physical principles and direct surface measurement, and through the development of suitable decision algorithms based on statistics, this thesis aims to establish simple, essentially minimum-cost technological solutions for determining the species and the surface defects of each wood sample while, as far as possible, not altering its working geometry. Three analyses were developed. The first optical method uses the properties of the light scattered by the wood surface when it is illuminated by a diffuse laser. This scattering produces a luminous speckle pattern whose statistical properties allow very precise properties of both the microscopic and macroscopic structure of the wood to be extracted. The analysis of the spectral properties of the scattered laser light generates more or less regular patterns related to the anatomical structure, composition, processing and surface texture of the wood under study, revealing characteristics of the material or of the quality of the processes it has undergone. The use of this type of laser also makes it possible to monitor industrial processes in real time and remotely, without interfering with other sensors. The second optical technique makes use of the statistical and mathematical study of the properties of digital images of the wood surface obtained with a high-resolution scanner. After isolating the most relevant details of the images, various automatic classification algorithms generate databases of the wood species to which the images belong, together with the error margins of these classifications. A fundamental part of the classification tools is based on the precise study of the colour bands of the different woods. Finally, numerous acoustic techniques, such as the analysis of pulses generated by acoustic impact, complement and refine the results obtained with the optical methods described, identifying superficial and deep structures in the wood as well as pathologies or deformations, aspects of particular value when wood is used in structures. The usefulness of these techniques is well established in industry, although their application has not spread sufficiently because of their high costs and the lack of standardization of the procedures, which means that each analysis is not comparable with its theoretical market equivalent.
At present, a large part of the research effort tends to take for granted that differentiating between species is a recognition mechanism peculiar to human beings, and concentrates the technology on the definition of physical parameters (elastic moduli, electrical or acoustic conductivity, etc.), using very expensive devices that are in many cases complex to apply in the field. Abstract: The study of materials, especially biological ones, by non-destructive techniques is becoming increasingly important in both scientific and industrial applications. The economic advantages of non-destructive methods are multiple and clear, given the costs and resources they require. There are many physical processes capable of extracting detailed information from the wood surface with little or no previous treatment and minimal intrusion into the material. Among the various methods, acoustic and optical techniques stand out for their great versatility, relative simplicity and low cost. This thesis aims to establish, from the application of simple principles of physics and direct surface measurement, and through the development of the most appropriate decision algorithms based on statistics, simple technological solutions with minimum cost for possible application in determining the species and the surface defects of each wood sample. Achieving reasonable accuracy without altering the working location or properties of the samples is the main objective. There are three lines of work. Empirical characterization of wood surfaces by means of iterative autocorrelation of laser speckle patterns: A simple and inexpensive method for the qualitative characterization of wood surfaces is presented. It is based on the iterative autocorrelation of laser speckle patterns produced by diffuse laser illumination of the wood surfaces. The method exploits the high spatial-frequency content of speckle images; a similar approach with raw conventional photographs taken in ordinary light would be very difficult. A few iterations of the algorithm are necessary, typically three or four, in order to visualize the most important periodic features of the surface. The processed patterns help in the study of surface parameters, in designing new scattering models and in classifying the wood species. Fractal-based image enhancement techniques inspired by differential interference contrast microscopy: Differential interference contrast microscopy is a very powerful optical technique for microscopic imaging. Inspired by the physics of this type of microscope, we have developed a series of image processing algorithms aimed at the magnification, noise reduction, contrast enhancement and tissue analysis of biological samples. These algorithms use fractal convolution schemes which provide fast and accurate results, with a performance comparable to the best current image enhancement algorithms. These techniques can be used as post-processing tools for advanced microscopy or as a means to improve the performance of less expensive visualization instruments. Several examples of the use of these algorithms to visualize microscopic images of raw pine wood samples with a simple desktop scanner are provided. Wood species identification using stress-wave analysis in the audible range: Stress-wave analysis is a powerful and flexible technique for studying the mechanical properties of many materials.
We present a simple technique to obtain information about the species of wood samples using stress-wave sounds in the audible range generated by collision with a small pendulum. Stress-wave analysis has been used for flaw detection and quality control for decades, but its use for material identification and classification is less often cited in the literature. Accurate wood species identification is a time-consuming task even for highly trained human experts. For this reason, the development of cost-effective techniques for automatic wood classification is a desirable goal. Our proposed approach is fully non-invasive and non-destructive, significantly reducing the cost and complexity of the identification and classification process.
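A minimal sketch of the iterative speckle-autocorrelation idea described in the first line of work, assuming an FFT-based (Wiener-Khinchin) autocorrelation and a synthetic image in place of a real scan:

```python
# Iterative autocorrelation sketch: the normalised autocorrelation of a
# speckle image is computed via FFT and fed back into itself a few times,
# progressively emphasising the dominant periodic features of the surface.
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

def autocorrelate(img):
    """Normalised 2-D autocorrelation via the Wiener-Khinchin relation."""
    f = fft2(img - img.mean())
    ac = fftshift(np.real(ifft2(np.abs(f) ** 2)))
    return ac / ac.max()

def iterative_autocorrelation(img, iterations=3):
    out = img.astype(float)
    for _ in range(iterations):            # typically three or four iterations suffice
        out = autocorrelate(out)
    return out

# Synthetic random field standing in for a scanned speckle image.
rng = np.random.default_rng(1)
speckle = rng.random((256, 256))
pattern = iterative_autocorrelation(speckle, iterations=3)
print(pattern.shape, pattern.max())
```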

Relevância:

80.00%

Publicador:

Resumo:

The geometrical factors defining an adhesive joint are of great importance, as the design greatly conditions the performance of the bonding. One of the most relevant geometrical factors is the thickness of the adhesive, as it decisively influences the mechanical properties of the bonding and has a clear economic impact on manufacturing processes, particularly in long production runs. Traditional mechanical joints (riveting, welding, etc.) are characterised by predictable performance and are very reliable in service conditions. Thus, structural adhesive joints will only be selected for industrial applications with demanding mechanical requirements and adverse environmental conditions if suitable reliability (equal to or higher than that of mechanical joints) is guaranteed. The objective of this paper is therefore to analyse the influence of the adhesive thickness on the mechanical behaviour of the joint and, by means of a statistical analysis based on the Weibull distribution, to propose the optimum adhesive thickness combining the best mechanical performance with high reliability. This procedure, which can be applied without great difficulty to other joints and adhesives, provides a general route to a more reliable use of adhesive bonds and, therefore, to their better and wider use in industrial manufacturing processes.
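As an illustration of the kind of Weibull reliability analysis referred to above (the strength values and the 99% reliability target below are invented for the example, not the paper's data):

```python
# Illustrative Weibull reliability sketch: fit joint strengths measured at one
# adhesive thickness to a two-parameter Weibull distribution and derive the
# stress level a joint survives with 99 % probability.
import numpy as np
from scipy import stats

strengths = np.array([18.2, 19.5, 20.1, 21.3, 21.8, 22.4,
                      23.0, 23.6, 24.1, 25.0])          # MPa, hypothetical

# Two-parameter Weibull fit (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)

# Survival of 99 %: F(s) = 0.01  ->  s = scale * (-ln 0.99)^(1/shape)
s_99 = scale * (-np.log(0.99)) ** (1.0 / shape)

print(f"Weibull modulus (shape):            {shape:.1f}")
print(f"Characteristic strength (scale):    {scale:.1f} MPa")
print(f"Design stress for 99% reliability:  {s_99:.1f} MPa")
```

Repeating such a fit for each candidate thickness is one way to compare thicknesses on both mean performance and scatter, which is the trade-off the paper describes.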

Relevância:

80.00%

Publicador:

Resumo:

The deviation of calibration coefficients of five cup anemometer models over time was analyzed. The analysis was based on a series of laboratory calibrations performed between January 2001 and August 2010, and it covered two different groups of anemometers: (1) anemometers not used for any industrial purpose (that is, simply stored); and (2) anemometers used in different industrial applications (mainly field applications such as wind farms). Results indicate a loss of performance of the studied anemometers over time. In the case of the unused anemometers, the degradation shows a clear pattern. In the case of the anemometers used in the field, the data analyzed also suggest a loss of performance, yet the degradation does not show a clear trend. A recalibration schedule is proposed based on the observed performance variations.
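A simple way to turn such calibration histories into a recalibration interval is sketched below; the coefficient values, tolerance and linear-drift assumption are illustrative only, not the paper's analysis:

```python
# Illustrative drift analysis: fit a linear trend to the calibration slope
# coefficient obtained at successive laboratory calibrations and estimate when
# the deviation from the initial value exceeds a chosen tolerance.
import numpy as np

years = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 7.0, 9.0])   # time since first calibration
coeff = np.array([0.0480, 0.0478, 0.0474, 0.0471,
                  0.0468, 0.0464, 0.0459])              # hypothetical slope coefficients

drift_per_year, intercept = np.polyfit(years, coeff, 1)
tolerance = 0.02 * coeff[0]                              # e.g. 2 % of the initial value

years_to_tolerance = tolerance / abs(drift_per_year)
print(f"estimated drift: {drift_per_year:.2e} per year")
print(f"recalibration suggested roughly every {years_to_tolerance:.1f} years")
```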

Relevância:

80.00%

Publicador:

Resumo:

The arrangement of atoms at the surface of a solid accounts for many of its properties: hardness, chemical activity, corrosion, etc. are dictated by the precise surface structure. Hence, finding it has a broad range of technical and industrial applications, and the ability to solve this problem opens the possibility of designing, by computer, materials with properties tailored to specific applications. Since the search space grows exponentially with the number of atoms, the solution cannot be obtained for arbitrarily large structures. At present, a trial-and-error procedure is used: an expert proposes a structure as a candidate solution and runs a local optimization procedure on it. The solution relaxes to the local minimum in the attractor basin corresponding to the initial point, which may or may not be the one containing the global minimum. This procedure is very time-consuming and, for reasonably sized surfaces, can take many iterations and much effort from the expert. Here we report on a visualization environment designed to steer this process in an attempt to solve bigger structures and reduce the time needed. The idea is to use an immersive environment to interact with the computation. It provides immediate feedback to assess the quality of the proposed structure, letting the expert explore the space of candidate solutions. The visualization environment is also able to communicate with the de facto local solver used for this problem. The user can then send trial structures to the local minimizer and track their progress as they approach the minimum, which allows simultaneous testing of candidate structures. The system has also proved very useful as an educational tool for the field.
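The propose-and-relax loop described above can be illustrated with a toy energy model; here a Lennard-Jones cluster and SciPy's general-purpose local optimiser stand in for the real surface-energy model and the de facto solver mentioned in the text (everything below is an assumption for illustration):

```python
# Toy "propose a candidate, relax to the nearest local minimum" loop.
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords):
    """Total Lennard-Jones energy of a small cluster (reduced units)."""
    pos = flat_coords.reshape(-1, 3)
    energy = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            energy += 4.0 * (r ** -12 - r ** -6)
    return energy

# Expert-proposed candidate: 4 atoms roughly on a tetrahedron, slightly perturbed.
rng = np.random.default_rng(2)
candidate = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],
                      [0.5, 1.0, 0.0], [0.5, 0.4, 0.9]]) \
            + 0.05 * rng.standard_normal((4, 3))

# Local relaxation: which minimum is reached depends on the attractor basin of
# the starting point, exactly as described for the surface-structure problem.
result = minimize(lj_energy, candidate.ravel(), method="L-BFGS-B")
print("relaxed energy:", round(result.fun, 3))
```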

Relevância:

80.00%

Publicador:

Resumo:

The laser welding process developed in recent years has demonstrated the application possibilities of this technology in different production sectors, mainly in the automotive industry, where its advantages in terms of productivity, efficiency and quality have been proven. The use of laser technology, whether hybrid (laser + arc) or pure laser welding, reduces the thermal input and thus limits the heat-affected zone of the workpiece without creating distortion, thereby reducing the post-weld straightening work needed to remove it. At the same time, the welding speed is increased, raising the productivity and quality of the joints. In the last decade, the use of high-power hybrid neodymium-YAG (Nd:YAG) laser-arc systems has become increasingly important. The installation of this type of high-power solid-state laser source has become possible in shipbuilding because of its advantages over the CO2 laser installations present in the shipyards that currently use this technology. CO2 lasers are characterised by high output power and beam transmission through a system of mirrors. In the case of Nd:YAG sources, thanks to the wavelength at which the laser beam is generated, the beam can be transmitted through optical fibre, even over large distances, which makes it possible to use the laser head far from the source and to integrate it into robotic units for three-dimensional welding jobs. The laser process distributes the heat input uniformly, and the mechanical characteristics of the resulting joints show that laser welding is suitable for use in shipbuilding, fulfilling the requirements of the Classification Societies. The energy efficiency of CO2 lasers, with values above 20%, together with the well-established techniques for their installation, are the reasons why this type of laser is the most widely used in industrial material processing. High-power Nd:YAG lasers have been on the market for a relatively short time, so their price is comparatively higher than that of CO2 lasers, and their maintenance costs, for the lamps or diodes needed to pump the solid-state medium, are likewise higher. On the other hand, the absorption of part of the energy by the plasma generated during the process does not occur with Nd:YAG lasers; part of that energy instead helps to stabilise the arc, so that less source power is needed, the shielding gases can be chosen for optimal arc stability, and the investment cost is reduced. Each industrial application therefore requires its own economic feasibility analysis. Depending on the power of the source and on the type of laser used, and hence on the wavelength of the electromagnetic radiation, there may be health risks, even from radiation reflected several metres away. The neodymium laser operates at a wavelength relatively close to the visible range, at which severe damage to the retina of the operators' eyes is possible if sufficient precautions are not taken. Preventive safety measures must therefore be established to shield the operators, as well as other personnel, from the risks to which they are exposed when this type of energy is used. Overall, the use of Nd:YAG lasers offers economically attractive application possibilities in shipbuilding, owing to their productivity, the higher joining rates that can be achieved, and the good mechanical properties of the joints.