954 results for industrial applications


Abstract:

Transport processes of anisotropic metallic nanoparticles, such as gold nanorods, in complex fluids and/or confined geometries play an important role in a wide range of biomedical and industrial applications. One route to a deep, fundamental understanding of the transport mechanisms is the use of two powerful methods: dynamic light scattering (DLS) and resonance-enhanced dynamic light scattering (REDLS) near an interface. In this work, nanomolar suspensions of gold nanorods stabilized with cetyltrimethylammonium bromide (CTAB) were investigated with DLS and, near an interface, with REDLS. With DLS, a wavelength-dependent enhancement of the anisotropic scattering was observed, which results from the excitation of the longitudinal surface plasmon resonance. The high scattering intensity near the longitudinal surface plasmon resonance frequency for rods oriented parallel to the exciting optical field made it possible to resolve the translational anisotropy in an isotropic medium. This wavelength-dependent anisotropic light scattering enables new applications, such as studying single-particle dynamics in complex environments by depolarized dynamic light scattering. Near an interface, a strong slowdown of the translational diffusion was observed, whereas the rotational diffusion showed a pronounced but weaker slowdown. To investigate the possible influence of charge on the solid interface, the metal was coated with electrically neutral poly(methyl methacrylate) (PMMA). In a further approach, the CTAB in the gold nanorod solution was replaced by the covalently bound 16-mercaptohexadecyltrimethylammonium bromide (MTAB), which resulted in a considerably weaker slowdown.
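As background (these are standard dynamic light scattering relations, not equations quoted from the thesis), the decay rates measured in polarized (VV) and depolarized (VH) scattering geometries are commonly related to the translational and rotational diffusion coefficients of anisotropic particles such as nanorods by:

```latex
% Standard DLS/DDLS relations (textbook form, not taken from the thesis):
% q: scattering vector, D_T / D_R: translational / rotational diffusion coefficients.
\Gamma_{VV} \simeq q^{2} D_T, \qquad
\Gamma_{VH} = q^{2} D_T + 6 D_R, \qquad
q = \frac{4\pi n}{\lambda}\,\sin\!\left(\frac{\theta}{2}\right)
```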

Abstract:

In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystals, and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but resemble a convex polyhedron with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which allows the interfacial tension of liquid-vapor as well as solid-liquid interfaces to be computed with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions. As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified; they are of leading order and therefore must not be neglected. A striking feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
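Purely as an illustration of the extrapolation step described above (not code from the thesis), interfacial tensions measured at several box sizes can be fitted to an ansatz containing a logarithmic correction and extrapolated to infinite volume; the functional form gamma(L) = gamma_inf + (a ln L + b)/L^2 and the data used below are assumptions made for the sake of the example.

```python
import numpy as np

def extrapolate_gamma(L, gamma_L):
    """Least-squares fit of gamma(L) = gamma_inf + (a*ln L + b)/L**2.

    The functional form is an illustrative assumption standing in for the
    finite-size scaling ansatz discussed in the thesis.
    """
    L = np.asarray(L, dtype=float)
    y = np.asarray(gamma_L, dtype=float)
    # design matrix columns: [1, ln(L)/L^2, 1/L^2] -> [gamma_inf, a, b]
    A = np.column_stack([np.ones_like(L), np.log(L) / L**2, 1.0 / L**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs[0], coeffs

# Made-up example data: interfacial tensions measured for growing box sizes
L_values = [8, 12, 16, 24, 32]
gamma_values = [1.32, 1.28, 1.26, 1.24, 1.23]
gamma_inf, _ = extrapolate_gamma(L_values, gamma_values)
print(f"extrapolated infinite-volume interfacial tension: {gamma_inf:.4f}")
```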

Abstract:

Molecular dynamics simulations of silicate and borate glasses and melts: structure, diffusion dynamics and vibrational properties. In this work, computer simulations of the model glass formers SiO2 and B2O3 are presented, using classical molecular dynamics (MD) simulations and quantum mechanical calculations based on density functional theory (DFT). The latter limits the system size to about 100-200 atoms. SiO2 and B2O3 are the two most important network formers for industrial applications of oxide glasses. Glass samples are generated by means of a quench from the melt with classical MD simulations and a subsequent structural relaxation with DFT forces. In addition, full ab initio quenches are carried out with a significantly faster cooling rate. The structural properties are, in all cases, in good agreement with experimental results from neutron and X-ray scattering. A special focus is on the study of vibrational properties, as they give access to low-temperature thermodynamic properties. The vibrational spectra are calculated by the so-called "frozen phonon" method. In all cases, the DFT curves show acceptable agreement with results from inelastic neutron scattering. For the model glass former B2O3, a new classical interaction potential is parametrized, based on the liquid trajectory of an ab initio MD simulation at 2300 K and a structural fitting routine. The inclusion of three-body angular interactions leads to significantly improved agreement between the liquid properties of the classical MD and ab initio MD simulations. However, the generated glass structures in all cases show a significantly lower fraction of three-membered planar boroxol rings than predicted by experiments (f = 60%-80%). The largest boroxol ring fraction, f = 15±5%, is observed in the full ab initio quenches from 2300 K. For SiO2, the glass structures after the quantum mechanical relaxation are the basis for calculations of the linear thermal expansion coefficient αL(T), employing the quasi-harmonic approximation. The striking observation is a change of sign of αL(T), with a temperature range of negative αL(T) at low temperatures, which is in good agreement with experimental results.
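For context, the quasi-harmonic approximation mentioned above connects the thermal expansion coefficient to the volume dependence of the phonon frequencies; in its standard textbook form (quoted here only as background, not as the exact expressions used in the thesis):

```latex
% Standard quasi-harmonic relations (background only):
% B_0: bulk modulus, gamma_i: mode Grueneisen parameters, c_{V,i}: mode heat capacities.
\alpha_V(T) = \frac{1}{B_0 V}\sum_i \gamma_i\, c_{V,i}(T),
\qquad
\gamma_i = -\frac{\partial \ln \omega_i}{\partial \ln V},
\qquad
\alpha_L(T) \approx \tfrac{1}{3}\,\alpha_V(T)
```

In this picture, a negative αL(T) at low temperatures corresponds to negative Grüneisen parameters of the modes that dominate the heat capacity in that range.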

Abstract:

Given that S. cerevisiae strains have changed genetically in the course of domestication and adaptation to different habitats, this work tested a representative selection of laboratory, commercial and wild Saccharomyces strains and their interspecies hybrids for the distribution of allelic variants of the hexokinase genes HXK1 and HXK2. Among the hexose transporters, the focus was on Hxt3p, since its essential role in the fermentation of glucose and fructose has already been established.
This work showed that there are substantial differences in the fermentation of glucose and fructose among wine yeasts of the genus Saccharomyces, which partly correlate with structural variants of the hexose transporter Hxt3p.
A total of 51 yeast strains were examined for their allelic variant of the HXT3 gene. Three main groups emerged (the Fermichamp®-type group, brewing yeasts, and hybrid strains), each with a distinct HXT3 allele. In the context of winemaking, significant nucleotide substitutions within the HXT3 gene were found in robust S. cerevisiae strains (e.g. sparkling wine yeasts and commercial starter cultures) and in hybrid strains. These yeasts were distinguished by their ability to ferment the must despite stressful environmental conditions (such as high ethanol concentration, reduced ammonium content, and an unfavourable glucose:fructose ratio).
The experiments indicate that the HXT3 allele variant of the strain Fermichamp®, which can be used as a starter culture, is responsible for the enhanced fructose degradation. The same behaviour was also observed for other strains carrying this allele variant. The S. cerevisiae strains Fermichamp® and 54.41, which have identical Hxt3p amino acid sequences, were tested against two S. cerevisiae strains with the standard HXT3 allele type, Fermivin® and 33. The difference in hexose utilization between strains with the Fermichamp® and the standard allele type was most pronounced in the middle of fermentation. Both groups, with the HXT3 Fermichamp® as well as the Fermivin® allele type, fermented glucose faster than fructose. The difference between these HXT3 allele types in sugar utilization was that the Fermichamp® type showed a smaller gap between the degradation rates of the two hexoses than the Fermivin® type. Sugar-uptake measurements confirmed the comparatively good fructose uptake of these strains.
Likewise, the fructophilic character of the triple hybrid S. cerevisiae x S. kudriavzevii x S. bayanus strain HL78 correlated in transport experiments with an enhanced uptake of fructose compared to glucose. Overall, this strain behaved similarly to the S. cerevisiae strains Fermichamp® and 54.41.
In this work, a structural model of the hexose transporter Hxt3p was built, based on the structure of the proton/xylose symporter XylE from Escherichia coli, which is 30% homologous. Using the Hxt3p model, sequence regions of high variability (hotspots) were detected in the three Hxt3p isoforms of the main groups (the Fermichamp®-type group, brewing yeasts, and hybrid strains). These significant amino acid substitutions, which may alter the physical and chemical properties of the carrier, are concentrated in three regions: the region between the N- and C-terminal domains, the cytosolic domain, and the outside loop between transmembrane regions 9 and 10.
Although the transport measurements revealed no relationship between strains with different HXT3 alleles and their ethanol tolerance, a significant increase in sugar uptake was observed in the test strains after a preceding 24-hour incubation with 4 vol% ethanol.
Overall, allelic variants of the HXT3 gene could be a useful criterion in the search for robust yeasts for winemaking or other industrial applications. The effect of these modifications on the structure and efficiency of the hexose transporter, as well as the possible link to ethanol resistance, remains to be investigated in detail.
No relationship was found between the low-variability allelic variants of the hexokinase genes HXK1 and HXK2 and sugar metabolism. However, the hexokinases of the strains examined generally showed a significantly lower affinity for fructose than for glucose. This is certainly a major cause of the increase in the fructose:glucose ratio during the fermentation of grape musts.

Abstract:

Polylactic acid (PLA) is a bio-derived, biodegradable polymer with mechanical properties similar to those of commodity plastics like polyethylene (PE) and polyethylene terephthalate (PETE). There has recently been great interest in using PLA to replace these typical petroleum-derived polymers because of the developing trend toward more sustainable materials and technologies. However, PLA's inherently slow crystallization behavior is not compatible with prototypical polymer processing techniques such as molding and extrusion, which in turn inhibits its widespread use in industrial applications. In order to make PLA a commercially viable material, it must be processed in such a way that its tendency to form crystals is enhanced. The industry standard for producing PLA products is twin screw extrusion (TSE), in which polymer pellets are fed into a heated extruder, mixed at a temperature above the melting temperature, and molded into a desired shape. A relatively novel processing technique called solid-state shear pulverization (SSSP) processes the polymer in the solid state so that nucleation sites can develop and fast crystallization can occur. SSSP has also been found to enhance the mechanical properties of a material, but its powder output form is undesirable in industry. A new process called solid-state/melt extrusion (SSME), developed at Bucknell University, combines the TSE and SSSP processes in one instrument. This technique has proven to produce moldable polymer products with increased mechanical strength. This thesis first investigated the effects of the TSE, SSSP, and SSME polymer processing techniques on PLA, seeking to determine the process that yields products with the most enhanced thermal and mechanical properties. For characterization, percent crystallinity, crystallization half-time, storage modulus, softening temperature, degradation temperature and molecular weight were analyzed for all samples. Through these characterization techniques, it was observed that SSME-processed PLA had enhanced properties relative to TSE- and SSSP-processed PLA. Because of these findings, an optimization study for SSME-processed PLA was conducted in which throughput and screw design were varied. The optimization study determined that PLA processed with a low flow rate and a moderate screw design in an SSME process produced the polymer product with the largest increase in thermal properties and a high retention of polymer structure relative to TSE-, SSSP-, and all other SSME-processed PLA. It was concluded that the SSSP part of processing scissions polymer chains, creating defects within the material, while the TSE part of processing allows these defects to be mixed thoroughly throughout the sample. The study showed that a proper SSME setup allows for both an increase in nucleation sites within the polymer and sufficient mixing, which in turn leads to the development of a large amount of crystals in a short period of time.
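For reference, isothermal crystallization kinetics are commonly summarized with the Avrami equation, from which the crystallization half-time quoted above follows; this is the standard textbook relation and not necessarily the exact analysis used in the thesis:

```latex
% Avrami equation for the relative crystallinity X(t); k: rate constant, n: Avrami exponent.
X(t) = 1 - \exp\!\left(-k\,t^{\,n}\right),
\qquad
t_{1/2} = \left(\frac{\ln 2}{k}\right)^{1/n}
```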

Abstract:

Copper (Cu) and its alloys are used extensively in domestic and industrial applications. Cu is also an essential element in mammalian nutrition. Since both copper deficiency and copper excess produce adverse health effects, the dose-response curve is U-shaped, although its precise form has not yet been well characterized. Many animal and human studies have been conducted on copper, providing a rich database from which data suitable for modeling the dose-response relationship for copper may be extracted. Possible dose-response modeling strategies are considered in this review, including those based on the benchmark dose and categorical regression. The usefulness of biologically based dose-response modeling techniques in understanding copper toxicity is difficult to assess at this time, since the mechanisms underlying copper-induced toxicity have yet to be fully elucidated. A dose-response modeling strategy is proposed for copper toxicity associated with both deficiency and excess. This strategy was applied to multiple studies of copper-induced toxicity, standardized with respect to severity of adverse health outcomes and selected on the basis of criteria reflecting the quality and relevance of individual studies. The use of a comprehensive database on copper-induced toxicity is essential for dose-response modeling, since there is insufficient information in any single study to adequately characterize copper dose-response relationships. The dose-response modeling strategy envisioned here is designed to determine whether the existing toxicity data for copper excess or deficiency may be effectively utilized in defining the limits of the homeostatic range in humans and other species. By considering alternative techniques for determining a point of departure and low-dose extrapolation (including categorical regression, the benchmark dose, and identification of observed no-effect levels), this strategy will identify which techniques are most suitable for this purpose. This analysis also serves to identify areas in which additional data are needed to better define the characteristics of dose-response relationships for copper-induced toxicity in relation to excess or deficiency.
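For orientation, the benchmark dose (BMD) referred to above is conventionally defined, for a dichotomous endpoint and a fitted dose-response model P(d), as the dose producing a specified benchmark response (BMR) in extra risk; the definition is quoted here only as general background:

```latex
% Standard benchmark-dose definition via extra risk (dichotomous endpoint):
\frac{P(\mathrm{BMD}) - P(0)}{1 - P(0)} = \mathrm{BMR}
```

The BMDL, a lower confidence limit on the BMD, is then typically used as the point of departure.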

Abstract:

Recent improvements in precursor chemistry, reactor geometry and run conditions extend the manufacturing capability of traditional flame aerosol synthesis of oxide nanoparticles to metals, alloys and complex inorganic salts. As an example of a demanding composition, we demonstrate here the one-step flame synthesis of nanoparticles of a four-element non-oxide phosphor for upconversion applications. The phosphors are characterized in terms of emission capability, phase purity and thermal phase evolution. The preparation of flame-made beta-NaYF4 with dopants of Yb, Tm or Yb, Er furthermore illustrates the nanoparticle synthesis toolbox, based on modified flame-spray synthesis, now available from our laboratories at ETH Zurich. Since scaling concepts for flame synthesis, including large-scale filtration and powder handling, have become commercially available, the development of industrial applications of complex nanoparticles of metals, alloys or most other thermally stable inorganic compounds can now be considered a feasible alternative to traditional top-down manufacturing or liquid-intense wet chemistry.

Abstract:

Internet of Things-based systems are anticipated to gain widespread use in industrial applications. Standardization efforts like 6LoWPAN and the Constrained Application Protocol (CoAP) have made the integration of wireless sensor nodes possible using Internet technology and web-like access to data (RESTful service access). While there are still some open issues, the interoperability problem in the lower layers can now be considered solved from an enterprise software vendor's point of view. One possible next step towards integrating real-world objects into enterprise systems, and solving the corresponding interoperability problems at higher levels, is to use semantic web technologies. We introduce an abstraction of real-world objects, called Semantic Physical Business Entities (SPBE), using Linked Data principles. We show that this abstraction fits nicely into enterprise systems, as SPBEs allow a business-object-centric view of real-world objects instead of a purely device-centric view. The interdependencies between how services in an enterprise system are currently used and how this can be done in a semantic, real-world-aware enterprise system are outlined, arguing for the need for semantic services and semantic knowledge repositories. We introduce a lightweight query language, which we use to perform a quantitative analysis of our approach to demonstrate its feasibility.
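The SPBE abstraction and the lightweight query language themselves are not reproduced in this abstract; as a rough illustration of the underlying Linked Data idea only, a real-world object can be described as RDF triples, for example with rdflib. All URIs, class and property names below are hypothetical placeholders, not the vocabulary defined by the authors.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary; the actual SPBE terms are defined in the paper.
SPBE = Namespace("http://example.org/spbe#")

g = Graph()
machine = SPBE.packagingMachine42        # a real-world business object
sensor = SPBE.temperatureSensor7         # a device attached to it

g.add((machine, RDF.type, SPBE.SemanticPhysicalBusinessEntity))
g.add((sensor, RDF.type, SPBE.Sensor))
g.add((machine, SPBE.observedBy, sensor))   # business-object-centric link
g.add((sensor, SPBE.lastReading, Literal(23.5, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```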

Abstract:

The Solver Add-in of Microsoft Excel is widely used in courses on Operations Research and in industrial applications. Since the 2010 version of Microsoft Excel, the Solver Add-in has included a so-called evolutionary solver. We analyze how this metaheuristic can be applied to the resource-constrained project scheduling problem (RCPSP). We present an implementation of a schedule-generation scheme in a spreadsheet which, combined with the evolutionary solver, can be used to devise good feasible schedules. Our computational results indicate that, using this approach, non-trivial instances of the RCPSP can be solved to optimality or near-optimality.
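To make the decoding step concrete, the following is a minimal serial schedule-generation scheme in Python; it is only an illustrative sketch (the paper's implementation lives in a spreadsheet), and the activity data, function name and priority list below are hypothetical. The evolutionary solver's role would be to search over the priority list that such a routine decodes into a feasible schedule.

```python
def serial_sgs(durations, preds, demands, capacity, priority):
    """Serial schedule-generation scheme for the RCPSP (single renewable resource).

    durations[j]: processing time of activity j
    preds[j]:     set of predecessors of j
    demands[j]:   resource requirement of j
    capacity:     per-period capacity of the resource
    priority:     list of all activities; earlier entries = higher priority
    Returns {activity: start time}.
    """
    horizon = sum(durations.values()) + 1      # crude scheduling horizon
    usage = [0] * horizon                      # resource usage profile
    start = {}

    def fits(t, j):
        # activity j can run in [t, t + durations[j]) without exceeding capacity
        return all(usage[t + k] + demands[j] <= capacity
                   for k in range(durations[j]))

    unscheduled = list(priority)
    while unscheduled:
        # highest-priority activity whose predecessors are all scheduled
        j = next(a for a in unscheduled if preds[a] <= set(start))
        t = max((start[p] + durations[p] for p in preds[j]), default=0)
        while not fits(t, j):
            t += 1
        start[j] = t
        for k in range(durations[j]):
            usage[t + k] += demands[j]
        unscheduled.remove(j)
    return start

# Toy instance (hypothetical): four activities, one resource of capacity 4
durations = {1: 3, 2: 2, 3: 2, 4: 1}
preds     = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}}
demands   = {1: 2, 2: 3, 3: 2, 4: 4}
print(serial_sgs(durations, preds, demands, capacity=4, priority=[1, 2, 3, 4]))
# -> {1: 0, 2: 3, 3: 5, 4: 7}
```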

Abstract:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. The need to satisfy strict timing requirements makes their development more complex, and as embedded systems continue to spread through society, their development costs must be kept under control by using suitable techniques for their design, maintenance and certification; in particular, a flexible, hardware-independent technology is required. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged as systems whose different parts are executed on several nodes that interact with each other via a communication network. An interesting concept in this context is that of platform-neutral real-time systems, which are designed without knowledge of the execution platform; this property is relevant because such systems should run on the widest possible variety of architectures, have a service life of more than ten years, and may change the place where they are executed. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety-Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind, such as predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing the communications to be scheduled and the memory usage to be adjusted at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (actual consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.

Abstract:

Industrial applications of computer vision sometimes require the detection of atypical objects that occur as small groups of pixels in digital images. These objects are difficult to single out because they are small and randomly distributed. In this work we propose an image segmentation method using the novel Ant System-based Clustering Algorithm (ASCA). ASCA models the foraging behaviour of ants, which move through the data space searching for high data-density regions and leave pheromone trails on their path. The pheromone map is used to identify the exact number of clusters, and the pixels are assigned to these clusters using the pheromone gradient. We applied ASCA to the detection of microcalcifications in digital mammograms and compared its performance with state-of-the-art clustering algorithms such as the 1D Self-Organizing Map, k-Means, Fuzzy c-Means and Possibilistic Fuzzy c-Means. The main advantage of ASCA is that the number of clusters does not need to be known a priori. The experimental results show that ASCA is more efficient than the other algorithms in detecting small clusters of atypical data.
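As a rough, self-contained illustration of the general idea described above (ants biased towards high data density deposit pheromone, the pheromone peaks define the clusters, and points are assigned by following the pheromone gradient), here is a toy one-dimensional sketch in Python. It is not the authors' ASCA algorithm; all parameters and the example data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def ant_cluster_1d(data, n_bins=64, n_ants=50, n_iter=200,
                   evaporation=0.05, deposit=1.0):
    """Toy ant-system clustering of 1-D data (e.g. pixel intensities).

    Ants wander over a binned data space, biased towards high data density and
    high pheromone; the local maxima of the pheromone map act as cluster
    centres and points are assigned by following the pheromone gradient uphill.
    Illustration of the general idea only, not the ASCA algorithm of the paper.
    """
    lo, hi = data.min(), data.max()
    bins = np.clip(((data - lo) / (hi - lo) * (n_bins - 1)).astype(int),
                   0, n_bins - 1)
    density = np.bincount(bins, minlength=n_bins).astype(float)
    density /= density.max()

    pheromone = np.zeros(n_bins)
    ants = rng.choice(bins, size=n_ants)            # start ants on data points

    for _ in range(n_iter):
        for a in range(n_ants):
            i = ants[a]
            neigh = np.array([max(i - 1, 0), i, min(i + 1, n_bins - 1)])
            w = density[neigh] + pheromone[neigh] + 1e-9
            ants[a] = rng.choice(neigh, p=w / w.sum())
            pheromone[ants[a]] += deposit           # pheromone deposit
        pheromone *= (1.0 - evaporation)            # pheromone evaporation

    # local maxima of the pheromone map define the cluster centres
    centres = [i for i in range(n_bins)
               if pheromone[i] > 0
               and pheromone[i] >= pheromone[max(i - 1, 0)]
               and pheromone[i] >= pheromone[min(i + 1, n_bins - 1)]]

    def uphill(i):                                  # follow the pheromone gradient
        while True:
            neigh = [max(i - 1, 0), i, min(i + 1, n_bins - 1)]
            j = max(neigh, key=lambda k: pheromone[k])
            if j == i:
                return i
            i = j

    labels = []
    for b in bins:
        peak = uphill(b)
        labels.append(centres.index(peak) if peak in centres else -1)
    return np.array(labels), centres, pheromone

# Made-up example: a large normal population plus a small atypical bright one
data = np.concatenate([rng.normal(0.3, 0.03, 950), rng.normal(0.8, 0.02, 50)])
labels, centres, _ = ant_cluster_1d(data)
print("clusters found:", len(centres))
```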

Abstract:

Thermography is a method of inspection and diagnosis based on the infrared radiation emitted by bodies. It allows this radiation to be measured at a distance and without contact, yielding a thermogram or thermographic image, the object of study of this project. All bodies at a certain temperature emit infrared radiation. However, a thermographic inspection must take into account the emissivity of the body, i.e. its capability to emit radiation, since the emission depends not only on the temperature of the body but also on its surface characteristics. The tools needed to obtain a thermogram are mainly a thermographic camera and software for its analysis. The camera detects the infrared emission of an object and converts it into a visible image, originally monochromatic, which is then colored by the camera itself or by software for easier interpretation of the thermogram. Several techniques exist for obtaining these thermographic images, differing in how heat energy is transferred to the body; they are classified into passive thermography, active thermography and vibrothermography. The method used in each case depends on the thermal characteristics of the body, the type of defect to be located and the spatial resolution of the images, among other factors. To analyze the images and thus obtain diagnoses and detect defects, accuracy is important. Image processing is therefore applied to minimize the effects caused by external factors, improve image quality and extract information from the inspections performed. Thermography is a very flexible non-destructive testing method that offers many advantages. For this reason its field of application is very wide, ranging from industrial applications to research and development. Surveillance and security, energy saving, medicine and the environment are some of the fields where thermography provides significant benefits. This project is a theoretical study of thermography, describing each of these aspects in detail. It concludes with a practical application: building an infrared camera from a webcam and analyzing the images obtained with it. This demonstrates some of the theories explained, as well as the possibility of recognizing objects by means of thermography.
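As a compact reminder of the physics sketched above (standard radiometric relations, given only as background), the power radiated by a grey body of emissivity ε and a simplified model of the signal reaching the camera can be written as:

```latex
% Stefan-Boltzmann law for a grey body and a simplified thermographic measurement model:
% eps: emissivity, tau: atmospheric transmittance, sigma: Stefan-Boltzmann constant.
W = \varepsilon\,\sigma\,T_{\mathrm{obj}}^{4},
\qquad
W_{\mathrm{cam}} \approx \varepsilon\,\tau\,\sigma\,T_{\mathrm{obj}}^{4}
  + (1-\varepsilon)\,\tau\,\sigma\,T_{\mathrm{refl}}^{4}
  + (1-\tau)\,\sigma\,T_{\mathrm{atm}}^{4}
```

This is why the emissivity and the surroundings must be accounted for when converting a thermogram into object temperatures.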

Abstract:

The study of materials, especially biological ones, by non-destructive means is becoming increasingly important in both scientific and industrial applications, and the economic advantages of non-destructive methods are manifold. Many physical procedures are capable of extracting detailed information from the surface of wood with little or no prior treatment and minimal intrusion into the material. Among the various methods, optical and acoustic techniques stand out for their great versatility, relative simplicity and low cost. Starting from simple physical principles and direct surface measurement, and through the development of the most suitable statistically based decision algorithms, this thesis aims to establish simple and essentially minimum-cost technological solutions for determining the species and the surface defects of each wood sample while, as far as possible, not altering its working geometry. Three lines of analysis were developed.
The first optical method uses the properties of the light scattered by the wood surface when it is illuminated by a diffuse laser. This scattering produces a speckle pattern whose statistical properties allow very precise information about both the microscopic and the macroscopic structure of the wood to be extracted. The analysis of the spectral properties of the scattered laser light generates more or less regular patterns related to the anatomical structure, composition, processing and surface texture of the wood under study, revealing characteristics of the material or of the quality of the processes it has undergone. A simple and inexpensive implementation is the iterative autocorrelation of laser speckle patterns: the method exploits the high spatial-frequency content of speckle images (a similar approach with raw conventional photographs taken under ordinary light would be very difficult), and a few iterations of the algorithm, typically three or four, are enough to visualize the most important periodic features of the surface. The processed patterns help in the study of surface parameters, in the design of new scattering models and in the classification of wood species. The use of this type of laser also opens the possibility of monitoring industrial processes in real time and at a distance, without interfering with other sensors.
The second optical technique is the statistical and mathematical analysis of digital images of the wood surface obtained with a high-resolution scanner. After the most relevant details of the images are isolated, automatic classification algorithms build databases of the wood species to which the images belong, together with the error margins of these classifications; a fundamental part of the classification tools is the precise study of the colour bands of the different woods. This work line is supported by fractal-based image enhancement techniques inspired by differential interference contrast microscopy: fractal convolution schemes provide magnification, noise reduction, contrast enhancement and tissue analysis of biological samples, with fast and accurate results comparable to the best current image enhancement algorithms, and can be used as post-processing tools for advanced microscopy or to improve the performance of less expensive visualization instruments. Several examples of the use of these algorithms to visualize microscopic images of raw pine wood samples acquired with a simple desktop scanner are provided.
Finally, acoustic techniques, such as the analysis of stress-wave pulses in the audible range generated by the impact of a small pendulum, complement and refine the results obtained with the optical methods, identifying superficial and deep structures in the wood as well as pathologies or deformations, aspects of particular interest for structural uses of timber. Stress-wave analysis has been used for flaw detection and quality control for decades, but its use for material identification and classification is less often reported in the literature; its usefulness is well established in industry, although its application is not yet widespread because of high costs and a lack of standardization of the procedures, which means that individual analyses are not comparable with their theoretical market equivalents. Accurate wood species identification is a time-consuming task for highly trained human experts, and much current research effort takes it for granted that distinguishing between species is a recognition mechanism specific to human beings, concentrating instead on the definition of physical parameters (moduli of elasticity, electrical or acoustic conductivity, etc.) measured with very expensive instruments that are often complex to apply in the field. For this reason, the development of cost-effective techniques for automatic wood classification is a desirable goal; the approach proposed here is fully non-invasive and non-destructive, significantly reducing the cost and complexity of the identification and classification process.
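The iterative autocorrelation mentioned in the first work line can be sketched with the Wiener-Khinchin relation (autocorrelation computed via the Fourier transform). The snippet below is an illustrative reconstruction under that assumption, not the authors' implementation.

```python
import numpy as np

def autocorrelate(image):
    """2-D autocorrelation via the Wiener-Khinchin theorem (FFT based)."""
    img = image - image.mean()                  # remove the mean (DC) level
    spectrum = np.abs(np.fft.fft2(img)) ** 2    # power spectrum
    acf = np.fft.ifft2(spectrum).real           # inverse FFT -> autocorrelation
    acf = np.fft.fftshift(acf)                  # put zero lag in the centre
    return acf / acf.max()

def iterative_autocorrelation(image, iterations=3):
    """Apply the autocorrelation repeatedly (typically three or four times,
    as stated in the abstract) so that periodic surface features dominate."""
    result = np.asarray(image, dtype=float)
    for _ in range(iterations):
        result = autocorrelate(result)
    return result

# Hypothetical use, with a speckle image already loaded as a 2-D array:
# pattern = iterative_autocorrelation(speckle_image, iterations=3)
```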

Abstract:

The geometrical factors defining an adhesive joint are of great importance, as its design strongly conditions the performance of the bond. One of the most relevant geometrical factors is the thickness of the adhesive, as it decisively influences the mechanical properties of the bond and has a clear economic impact on manufacturing processes, especially long production runs. Traditional mechanical joints (riveting, welding, etc.) are characterised by predictable performance and are very reliable in service conditions. Structural adhesive joints will therefore only be selected for industrial applications with demanding mechanical requirements and adverse environmental conditions if suitable reliability (equal to or higher than that of mechanical joints) is guaranteed. For this purpose, the objective of this paper is to analyse the influence of the adhesive thickness on the mechanical behaviour of the joint and, by means of a statistical analysis based on the Weibull distribution, to propose the optimum adhesive thickness combining the best mechanical performance with high reliability. This procedure, which can be applied without great difficulty to other joints and adhesives, provides a basis for a more reliable use of adhesive bonding and, therefore, for its better and wider use in industrial manufacturing processes.
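For reference, the two-parameter Weibull distribution underlying the statistical analysis gives the failure probability and the reliability at a load level x in its standard form (the exact parametrisation used in the paper may differ):

```latex
% Two-parameter Weibull distribution; beta: shape (Weibull modulus), eta: scale.
F(x) = 1 - \exp\!\left[-\left(\frac{x}{\eta}\right)^{\beta}\right],
\qquad
R(x) = 1 - F(x) = \exp\!\left[-\left(\frac{x}{\eta}\right)^{\beta}\right]
```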

Abstract:

The deviation over time of the calibration coefficients of five cup anemometer models was analyzed. The analysis was based on a series of laboratory calibrations performed between January 2001 and August 2010, and was carried out on two different groups of anemometers: (1) anemometers not used for any industrial purpose (that is, just stored); and (2) anemometers used in different industrial applications (mainly field applications such as wind farms). The results indicate a loss of performance of the studied anemometers over time. In the case of the unused anemometers, the degradation shows a clear pattern. In the case of the anemometers used in the field, the data also suggest a loss of performance, yet the degradation does not show a clear trend. A recalibration schedule is proposed based on the observed performance variations.
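As general background (this is the usual convention, not a detail taken from the paper), a cup anemometer calibration determines the slope and offset of a linear transfer function between the anemometer's output frequency and the wind speed, and "calibration coefficients" normally refers to these two constants:

```latex
% Linear cup anemometer transfer function; f: output (rotation) frequency,
% A (slope) and B (offset): the calibration coefficients obtained in the wind tunnel.
V = A\,f + B
```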