950 results for plasma materials processing applications
Abstract:
In this work, electrochemical maltose biosensors based on mutants of the maltose binding protein (MBP) are developed. A ruthenium(II) complex (Ru(II)), covalently attached to MBP, serves as an electrochemical reporter of MBP conformational changes. Biosensors were made through direct attachment of the Ru(II)-complex-modified MBP to gold electrode surfaces. The responses of several individual mutants were evaluated using square wave voltammetry, and a maltose-dependent change in faradaic current and capacitance was observed. It is thereby demonstrated that biosensors can be built generically on this family of bacterial periplasmic binding proteins (bPBPs), lending themselves to facile biorecognition element preparation and low-cost electrochemical transduction.
Abstract:
This PhD thesis addresses the design and implementation of signal processing applications on reconfigurable FPGA platforms. These platforms offer high logic capacity, incorporate dedicated signal processing elements and have a relatively low cost, which makes them ideal for signal processing applications that require intensive processing and high performance. However, the cost of hardware development on these platforms is high. While the growing logic capacity of FPGA devices allows complete systems to be developed, high-performance requirements often force operators to be optimized at a very low level. Besides the timing constraints imposed by this kind of application, there are also area constraints tied to the particular device, which make it necessary to evaluate and verify several implementation alternatives. The design and implementation cycle can become so long that new FPGA models, with greater capacity and higher speed, commonly appear before the system is completed, rendering the constraints that guided its design obsolete. Different methods can be used to improve productivity in the development of these applications and thereby shorten their design cycle. This thesis focuses on the reuse of previously designed and verified hardware components. Although conventional HDLs allow already-defined components to be reused, the specification can be improved to simplify the process of incorporating components into new designs. Thus, the first part of the thesis addresses the specification of designs based on predefined components. This specification seeks not only to improve and simplify the process of adding components to a description, but also to improve the quality of the specified design, offering richer configuration options and even the ability to report characteristics of the description itself. Reusing an already-described component depends to a large extent on the information provided for its integration into a system. In this respect, conventional HDLs provide only the input/output interface and a set of configuration parameters together with the component description, while the rest of the required information is usually supplied as external documentation.

The second part of the thesis proposes a set of encapsulations whose purpose is to attach to the component description information useful for its integration into other designs, including implementation details, support for configuring the component, and even information on how to configure and connect the component to perform a given function. Finally, a classic signal processing application, the fast Fourier transform (FFT), is chosen as a case study for both the proposed specification and the described encapsulations. The goal of this design is not only to provide examples of the proposed specification, but also to obtain an implementation of a quality comparable to results reported in the literature. To this end, the design targets FPGA implementation, exploiting both general-purpose logic elements and the low-level specific elements available in these devices. Finally, the resulting FFT specification is used to show how to incorporate into its interface information that assists in its selection and configuration from early stages of the design cycle.
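As an illustration of the kind of kernel targeted by the case study, the following minimal Python sketch shows a recursive radix-2 decimation-in-time FFT for power-of-two input lengths. It is only a reference model of the transform, not the FPGA implementation or HDL specification described in the thesis.

    import cmath

    def fft_radix2(x):
        """Recursive radix-2 DIT FFT; len(x) must be a power of two."""
        n = len(x)
        if n == 1:
            return list(x)
        even = fft_radix2(x[0::2])   # transform of even-indexed samples
        odd = fft_radix2(x[1::2])    # transform of odd-indexed samples
        out = [0j] * n
        for k in range(n // 2):
            twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + twiddle
            out[k + n // 2] = even[k] - twiddle
        return out

    # Example: 8-point transform of an impulse (all bins equal to 1 expected)
    print(fft_radix2([1, 0, 0, 0, 0, 0, 0, 0]))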
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
Abstract:
In the near future, wireless sensor networks (WSNs) will experience broad, large-scale deployment (millions of nodes nationwide) with multiple information sources per node and very specific signal processing requirements. In parallel, the widespread deployment of WSNs facilitates the definition and execution of ambitious studies with large input data sets and high computational complexity. These computational demands, very often heterogeneous and driven on demand, can only be satisfied by high-performance Data Centers (DCs). The high economic and environmental impact of energy consumption in DCs calls for aggressive energy optimization policies; the need for such policies has already been identified, but effective policies have not yet been proposed. In this context, this paper presents the following ongoing research lines and the results obtained. In the field of WSNs: energy optimization in the processing nodes at different abstraction levels, including reconfigurable application-specific architectures, efficient customization of the memory hierarchy, energy-aware management of the wireless interface, and design automation for signal processing applications. In the field of DCs: energy-optimal workload assignment policies in heterogeneous DCs, energy-aware resource management policies, and efficient cooling mechanisms, which together will help minimize the electricity bill of the DCs that process the data provided by the WSNs.
Abstract:
This project aims to create a general procedure for implementing image processing applications on IP video cameras and for distributing the resulting information through Service-Oriented Architectures (SOA). The main goal is to create an application that runs on an IP video camera and performs basic processing on the captured images (detection of colours, shapes and patterns), distributing the processing results through the SOA architectures described in the DPWS (Device Profile for Web Services) specification. The study focuses primarily on the automatic transformation of image processing code written in Matlab (.m files) into ANSI C code (.c files), which is then compiled for the camera's processor architecture (the CRIS architecture, a RISC-like architecture with a reduced instruction set).
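To give a flavour of the kind of basic processing mentioned (colour detection on captured frames), the sketch below thresholds an image in HSV space using OpenCV in Python. The project itself generates ANSI C from Matlab code for the camera's CRIS processor, so this is only an illustrative, hypothetical equivalent; the thresholds and file name are placeholders.

    import cv2
    import numpy as np

    def detect_red_mask(bgr_frame):
        """Return a binary mask of roughly red pixels in a BGR frame."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 120, 70])     # hue/saturation/value lower bound
        upper = np.array([10, 255, 255])   # hue/saturation/value upper bound
        return cv2.inRange(hsv, lower, upper)

    frame = cv2.imread("frame.jpg")        # placeholder captured frame
    mask = detect_red_mask(frame)
    coverage = mask.mean() / 255.0         # fraction of pixels flagged as red
    print(f"red coverage: {coverage:.1%}")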
Abstract:
In this paper we present an adaptive spatio-temporal filter that aims to improve the accuracy and temporal stability of low-cost depth cameras. The proposed system is composed of three blocks used to build a reliable depth map of static scenes. An adaptive joint-bilateral filter produces consistent depth maps by jointly considering depth and video information and by adapting its parameters to different levels of estimated noise. Kalman filters reduce the temporal random fluctuations of the measurements. Finally, an interpolation algorithm fills in consistent depth values in the regions where depth information is not available. Results show that this approach considerably improves depth map quality by exploiting spatio-temporal information and by adapting its parameters to different noise levels.
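A minimal sketch, assuming the simplest possible setting (one scalar Kalman filter per pixel with a static-scene, constant-depth model), of the temporal smoothing stage described above; the noise parameters and model are illustrative, not those of the paper.

    import numpy as np

    def kalman_depth_update(est, var, meas, meas_var, process_var=1e-4):
        """One per-pixel Kalman step for a (nearly) static depth value."""
        var = var + process_var              # predict: depth assumed constant
        gain = var / (var + meas_var)        # Kalman gain
        est = est + gain * (meas - est)      # correct with the new depth frame
        var = (1.0 - gain) * var
        return est, var

    h, w = 480, 640
    est = np.zeros((h, w))                   # running depth estimate
    var = np.full((h, w), 1.0)               # running estimate variance
    for _ in range(30):                      # 30 noisy synthetic depth frames
        meas = 2.0 + 0.05 * np.random.randn(h, w)
        est, var = kalman_depth_update(est, var, meas, meas_var=0.05 ** 2)
    print(est.mean())                        # close to 2.0 after filtering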
Abstract:
Negative co-occurrence is a common phenomenon in many signal processing applications. In some cases the signals involved are sparse, and this information can be exploited to recover them. In this paper, we present a sparse learning approach that explicitly takes into account negative co-occurrence. This is achieved by adding a novel penalty term to the LASSO cost function based on the cross-products between the reconstruction coefficients. Although the resulting optimization problem is non-convex, we develop a new and efficient method for solving it based on successive convex approximations. Results on synthetic data, for both complete and overcomplete dictionaries, are provided to validate the proposed approach.
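A plausible form of the resulting objective, written in LaTeX notation, is sketched below; the exact penalty used in the paper may differ, and nonnegative coefficients are assumed here so that the cross-product terms actually discourage joint activation.

    \min_{x \ge 0} \; \tfrac{1}{2}\,\lVert y - D x \rVert_2^2
        + \lambda \lVert x \rVert_1
        + \gamma \sum_{i \ne j} c_{ij}\, x_i x_j

where D is the (possibly overcomplete) dictionary, x the reconstruction coefficients, lambda the usual LASSO weight, and gamma c_{ij} >= 0 weights that penalize the co-activation of coefficient pairs that should not co-occur. The bilinear cross-product terms make the problem non-convex, which is why successive convex approximations are used.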
Abstract:
The use of biomass as a fuel for bio-energy generation is currently increasing, owing to its neutral environmental impact in terms of CO2 emissions. The biomass ash generated as a by-product of this energy therefore constitutes an environmental problem with a clear social and economic impact. This type of ash contains oxides that make it attractive as a partial replacement for Portland cement, providing an eco-efficient outlet for the residue while reducing the greenhouse gas emissions associated with cement manufacture. This research focuses on the development of new and innovative eco-efficient cement-based materials incorporating biomass ash for comprehensive application in construction. A biomass ash (CB) from a fluidized bed combustor, fed mainly with eucalyptus bark residues and supplied by the ENCE-Navia group (Asturias), is used. The first stage of the work is the characterization of this ash and the analysis of the viability of its valorization in cement-based materials. Within this analysis, activation of the CB ash by hydrothermal treatment (HT) is proposed under different conditions of activating medium, temperature and process time, with the aim of promoting the formation of hydrated phases that enhance the valorization of the ash in the construction materials field. The hydrated phase of interest obtained is tobermorite (Ca2.25(Si3O7.5(OH)1.5)(H2O)), a precursor of the C-S-H gel responsible for the development of mechanical strength in cement-based materials, and the HT process is optimized for its most efficient synthesis. A subsequent study of the mechanical and microstructural properties of eco-efficient cement pastes incorporating the CB ash and the hydrothermally treated ash (CB-TH) confirms that incorporating the untreated CB ash as a partial replacement for Portland cement is the more viable route.

As a next step in the development of these innovative eco-efficient cement-based materials, a multiscale study of the materials incorporating CB is carried out through different physical-mechanical and durability tests. The results indicate that the presence of the biomass ash has no negative effect on the physical properties of the eco-efficient mortars studied. Moreover, the addition of CB improves the durability of the material, since it modifies the microstructure in a way that hinders the transport of aggressive agents. Mortars with 10% and 20% partial substitution of cement by the CB biomass ash (CB-10 and CB-20) reach compressive strengths of 53.3 and 50.5 MPa after 28 days of curing, respectively, and are comparable to a traditional CEM I Portland cement of strength class 42.5 R. Finally, with a view to opening the market in the construction materials field to these new eco-efficient cements, specific properties related to different types of application are studied. In particular, the properties relevant to mortar tiles are analyzed in detail, and the results indicate a performance of the eco-efficient material incorporating CB similar to or better than that of Portland cement. The viability of structural applications of the developed eco-efficient cements is also assessed by studying the bond to steel, which is similar to that of the reference material. Regarding the extraction and characterization of the pore solution, all the eco-efficient matrices show a pH that guarantees passivation of the reinforcement; however, the high chloride content of the pore solution suggests that a more detailed analysis is advisable before applying the new eco-efficient materials in reinforced concrete. It is verified that all the matrices incorporating CB in percentages between 10% and 90% can be considered suitable as new, more eco-efficient construction materials for applications with different levels of mechanical demand and without environmental problems associated with leaching. This research thus fulfils the initial objectives of the thesis, obtaining new and innovative eco-efficient cement-based materials incorporating biomass ash (CB) with comprehensive application in the construction field.
Abstract:
The main goal of this paper is to present the initial version of a Textile Chemical Ontology, intended for textile professionals as a means of conceptualising and representing the banned and harmful chemical substances in this domain. After analysing different methodologies, "Methontology" was found to be the most appropriate for this purpose and is applied to the domain. In this manner, an initial set of concepts is defined, together with their hierarchy and the relationships between them. The paper shows the benefits of using the ontology through a real use case in the context of Information Retrieval. The potential shown by the proposed ontology in this preliminary evaluation encourages extending it with a larger number of concepts and relationships, and validating it within other Natural Language Processing applications.
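To give a flavour of what such a conceptualisation might look like, the hypothetical Python snippet below encodes a tiny fragment of an is-a hierarchy and one relationship; the actual concept names and relations of the Textile Chemical Ontology may differ.

    # Hypothetical fragment: concept -> parent concept (is-a hierarchy)
    is_a = {
        "BannedSubstance": "ChemicalSubstance",
        "HarmfulSubstance": "ChemicalSubstance",
        "AzoDye": "BannedSubstance",
    }

    # Hypothetical relationship: substance -> textile processes it appears in
    used_in = {"AzoDye": ["Dyeing", "Printing"]}

    def ancestors(concept):
        """Walk the is-a hierarchy upwards from a concept."""
        chain = []
        while concept in is_a:
            concept = is_a[concept]
            chain.append(concept)
        return chain

    print(ancestors("AzoDye"))   # ['BannedSubstance', 'ChemicalSubstance']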
Abstract:
There are a large number of image processing applications that operate under different performance requirements and with different available resources. Recent advances in image compression focus on reducing image size and processing time, but offer no real-time mechanism for adjusting the time/quality trade-off of the resulting image, for instance when transmitting the image content of web pages. In this paper we propose a method for encoding still images, based on the JPEG standard, that allows the compression/decompression time cost and image quality to be adjusted to the needs of each application and to the bandwidth conditions of the network. The real-time control is based on a collection of adjustable parameters relating both to aspects of the implementation and to the hardware on which the algorithm runs. The proposed encoding system is evaluated in terms of compression ratio, processing delay and quality of the compressed image, and compared with the standard method.
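A minimal sketch, using Pillow in Python, of the basic size/quality trade-off that such an adjustable quality parameter exposes; the paper's actual control parameters also cover implementation and hardware aspects not modelled here, and the input file name is a placeholder.

    import io
    from PIL import Image

    def encode_jpeg(img, quality):
        """Encode a PIL image as JPEG and return the compressed bytes."""
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    img = Image.open("page_image.png").convert("RGB")   # placeholder input
    for q in (30, 60, 90):                              # candidate quality levels
        size = len(encode_jpeg(img, q))
        print(f"quality={q:2d}  compressed size={size} bytes")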
Abstract:
Partial differential equation (PDE) solvers are commonly employed to study and characterize the parameter space of reaction-diffusion (RD) systems when investigating biological pattern formation. Increasingly, biologists wish to perform such studies on arbitrary surfaces representing ‘real’ 3D geometries for better insight. In this paper, we present a highly optimized CUDA-based solver for RD equations on triangulated meshes in 3D. We demonstrate our solver using a chemotactic model that can be used to study snakeskin pigmentation, for example. We employ a finite-element-based approach to perform explicit Euler time integration. We compare our approach to a naive GPU implementation and provide an in-depth performance analysis, demonstrating the significant speedup afforded by our optimizations. The optimization strategies that we exploit could be generalized to other mesh-based processing applications involving PDE simulations.
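A minimal CPU-side sketch in Python of one explicit Euler step for a two-species reaction-diffusion system on a mesh, assuming a precomputed sparse mesh Laplacian (for example a cotangent Laplacian) and illustrative Gray-Scott-like reaction terms; the paper's chemotactic model and CUDA finite-element solver are not reproduced here.

    import numpy as np
    import scipy.sparse as sp

    def euler_step(u, v, L, dt, Du, Dv, f, k):
        """One explicit Euler step of a Gray-Scott-like RD system on a mesh.

        L is an (n_vertices x n_vertices) sparse Laplacian of the
        triangulated mesh, assembled beforehand."""
        uvv = u * v * v                          # reaction cross term
        du = Du * (L @ u) - uvv + f * (1.0 - u)
        dv = Dv * (L @ v) + uvv - (f + k) * v
        return u + dt * du, v + dt * dv

    n = 1000                                     # number of mesh vertices
    L = sp.identity(n) * 0.0                     # placeholder Laplacian
    u = np.ones(n)
    v = np.zeros(n)
    for _ in range(100):
        u, v = euler_step(u, v, L, dt=1.0, Du=0.16, Dv=0.08, f=0.035, k=0.065)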
Abstract:
Effective July 21, 1977.
Abstract:
Accompanied by the Board's Rules for processing applications for permit filed by hospitals (Rule no. 4).
Abstract:
Papers in this issue of Natural Resources Research are from the “Symposium on the Application of Neural Networks to the Earth Sciences,” held 20–21 August 2002 at NASA Moffett Field, Mountain View, California. The Symposium was the Seventh International Symposium on Mineral Exploration (ISME-02). It was sponsored by the Mining and Materials Processing Institute of Japan (MMIJ), the US Geological Survey, the Circum-Pacific Council, and NASA. The ISME symposia have been held every two years to bring together scientists actively working on diverse quantitative methods applied to the earth sciences. Although the title, International Symposium on Mineral Exploration, suggests an exclusive focus on mineral exploration, the interests and presentations have always been wide-ranging, and the talks presented at this symposium are no exception.