825 results for Solution-processed
Abstract:
Uniformly distributed ZnO nanorods, 70-100 nm in diameter and 1-2 μm long, have been successfully grown at low temperature on GaN using an inexpensive aqueous solution method. The formation and growth of the ZnO nanorods are controlled by reactant concentration, temperature and pH; no catalyst is required. XRD studies show that the ZnO nanorods are single crystals and that they grow along the c-axis. Room-temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high-quality ZnO films. A mechanism for nanorod growth in the aqueous solution is proposed, and the dependence of the ZnO nanorods on the growth parameters was also investigated. As the growth temperature was changed from 60°C to 150°C, the morphology of the ZnO nanorods changed from a sharp tip (needle shape) to a flat tip (rod shape). Structures of this kind are useful in laser and field emission applications.
Abstract:
Uniformly distributed ZnO nanorods, 80-120 nm in diameter and 1-2 µm long, have been successfully grown at low temperature on GaN using an inexpensive aqueous solution method. The formation and growth of the ZnO nanorods are controlled by reactant concentration, temperature and pH; no catalyst is required. XRD studies show that the ZnO nanorods are single crystals and that they grow along the c-axis. Room-temperature photoluminescence measurements show high-intensity ultraviolet peaks at 388 nm, comparable to those found in high-quality ZnO films. A mechanism for nanorod growth in the aqueous solution is proposed, and the dependence of the ZnO nanorods on the growth parameters was also investigated. As the growth temperature was changed from 60°C to 150°C, the morphology of the ZnO nanorods changed from a sharp tip with a high aspect ratio to a flat tip with a smaller aspect ratio. Structures of this kind are useful in laser and field emission applications.
Abstract:
Many online services access a large number of autonomous data sources and at the same time need to meet different user requirements. It is essential for these services to achieve semantic interoperability among the entities exchanging information. In the presence of an increasing number of proprietary business processes, heterogeneous data standards, and diverse user requirements, it is critical that the services be implemented using adaptable, extensible, and scalable technology. The COntext INterchange (COIN) approach, inspired by the similar goals of the Semantic Web, provides a robust solution. In this paper, we describe how COIN can be used to implement dynamic online services where semantic differences are reconciled on the fly. We show that COIN is flexible and scalable by comparing it with several conventional approaches. For a given ontology, the number of conversions in COIN is quadratic in the number of distinctions of the semantic aspect with the largest number of distinctions. These semantic aspects are modeled as modifiers in a conceptual ontology; in most cases the number of conversions is linear in the number of modifiers, which is significantly smaller than in the traditional hard-wired middleware approach, where the number of conversion programs is quadratic in the number of sources and data receivers. In the example scenario in the paper, the COIN approach needs only 5 conversions to be defined, while traditional approaches require between 20,000 and 100 million. COIN achieves this scalability by automatically composing all the comprehensive conversions from a small number of declaratively defined sub-conversions.
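To make the scaling argument concrete, here is a minimal sketch of composing a full source-to-receiver conversion from per-modifier sub-conversions. The names (SUB_CONVERSIONS, compose), the modifiers and the rates are hypothetical toy choices, not the COIN implementation; the point is only that a handful of declaratively defined sub-conversions can cover any pair of contexts, instead of one hand-written converter per (source, receiver) pair.

```python
# Hypothetical sketch: compose a conversion between contexts from
# per-modifier sub-conversions (toy modifiers: currency and scale factor).
RATES = {"USD": 1.0, "EUR": 0.9}

def convert_currency(value, src, dst):
    # convert via a base currency using the toy rate table above
    return value / RATES[src] * RATES[dst]

def convert_scale(value, src, dst):
    # e.g. a value reported in thousands re-expressed in millions
    return value * src / dst

SUB_CONVERSIONS = {"currency": convert_currency, "scale": convert_scale}

def compose(value, src_ctx, dst_ctx):
    """Apply one sub-conversion per modifier whose value differs between contexts."""
    for modifier, fn in SUB_CONVERSIONS.items():
        if src_ctx[modifier] != dst_ctx[modifier]:
            value = fn(value, src_ctx[modifier], dst_ctx[modifier])
    return value

# Two sub-conversions cover any pair of contexts.
price = compose(1500.0,
                {"currency": "USD", "scale": 1_000},       # source context
                {"currency": "EUR", "scale": 1_000_000})   # receiver context
print(price)
```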
Abstract:
We present a technique for the rapid and reliable evaluation of linear-functional outputs of elliptic partial differential equations with affine parameter dependence. The essential components are (i) rapidly, uniformly convergent reduced-basis approximations — Galerkin projection onto a space WN spanned by solutions of the governing partial differential equation at N (optimally) selected points in parameter space; (ii) a posteriori error estimation — relaxations of the residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs; and (iii) offline/online computational procedures — stratagems that exploit the affine parameter dependence to decouple the generation and projection stages of the approximation process. The operation count for the online stage — in which, given a new parameter value, we calculate the output and the associated error bound — depends only on N (typically small) and the parametric complexity of the problem. The method is thus ideally suited to the many-query and real-time contexts. In this paper, building on this technique, we develop a robust inverse computational method for the very fast solution of inverse problems characterized by parametrized partial differential equations. The essential ideas are threefold: first, we apply the technique to the forward problem for the rapid certified evaluation of PDE input-output relations and associated rigorous error bounds; second, we incorporate the reduced-basis approximation and error bounds into the inverse problem formulation; and third, rather than regularize the goodness-of-fit objective, we may instead identify all (or almost all, in the probabilistic sense) system configurations consistent with the available experimental data — well-posedness is reflected in a bounded "possibility region" that, furthermore, shrinks as the experimental error is decreased.
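As a hedged illustration of the offline/online split under affine parameter dependence, the sketch below assumes a toy problem A(mu) = sum_q theta_q(mu) A_q with a precomputed reduced basis; the data, dimensions and function names are placeholders, not the authors' implementation, and the error estimator is omitted.

```python
# Minimal NumPy sketch of the offline/online decomposition for an affinely
# parametrized problem A(mu) = sum_q theta_q(mu) * A_q (illustrative only).
import numpy as np

def offline(A_q_list, f, basis):
    """Project each parameter-independent operator onto the reduced basis W_N."""
    AN_q = [basis.T @ A_q @ basis for A_q in A_q_list]   # small N x N matrices
    fN = basis.T @ f                                      # reduced right-hand side
    return AN_q, fN

def online(AN_q, fN, theta):
    """Given a new parameter value, assemble and solve the N x N reduced system."""
    AN = sum(t * A for t, A in zip(theta, AN_q))
    uN = np.linalg.solve(AN, fN)
    return fN @ uN          # a compliant linear-functional output

# Toy data: 2 affine terms, full dimension 100, reduced dimension N = 4.
rng = np.random.default_rng(0)
A_q_list = [np.eye(100), np.diag(rng.uniform(1, 2, 100))]
f = rng.standard_normal(100)
basis = np.linalg.qr(rng.standard_normal((100, 4)))[0]

AN_q, fN = offline(A_q_list, f, basis)        # expensive, done once
print(online(AN_q, fN, theta=[1.0, 0.5]))     # cheap: cost depends only on N
```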
Abstract:
We study the preconditioning of the symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on LP test problems from the NETLIB suite to demonstrate the potential of the preconditioning method discussed.
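A minimal sketch, assuming a standard augmented (KKT) system of the form [[D, A^T], [A, 0]] from an interior-point iteration and a generic block-diagonal preconditioner built from D and the Schur complement A D^{-1} A^T, applied with MINRES via SciPy. The specific block preconditioner studied in the paper is not reproduced here; the sizes and data are toy values.

```python
# Illustrative block-diagonal preconditioning of a symmetric indefinite
# augmented system, solved iteratively with MINRES.
import numpy as np
from scipy.sparse import bmat, diags, csc_matrix
from scipy.sparse.linalg import LinearOperator, minres, splu

m, n = 30, 80                                   # toy sizes: constraints x variables
rng = np.random.default_rng(1)
A = csc_matrix(rng.standard_normal((m, n)))
d = rng.uniform(0.1, 10.0, n)                   # positive diagonal from the IPM scaling
D = diags(d)

K = bmat([[D, A.T], [A, None]], format="csc")   # augmented (KKT) matrix
rhs = rng.standard_normal(n + m)

S = csc_matrix(A @ diags(1.0 / d) @ A.T)        # Schur complement A D^{-1} A^T
S_lu = splu(S)

def apply_prec(r):
    """Apply the block-diagonal preconditioner diag(D, A D^{-1} A^T) to a residual."""
    y = np.empty_like(r)
    y[:n] = r[:n] / d
    y[n:] = S_lu.solve(r[n:])
    return y

M = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, rhs, M=M)
print("converged" if info == 0 else "not converged")
```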
Abstract:
One of the tantalising remaining problems in compositional data analysis is how to deal with data sets containing components that are essential zeros. By an essential zero we mean a component that is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument was not sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second determining how the available unit is distributed among the non-zero parts. In this paper we suggest two such models: an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the estimability of parameters, the nature of the computational process for estimating both the incidence and compositional parameters given the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential-zero compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
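To make the two-stage structure concrete, here is a hedged simulation sketch of the independent binomial conditional logistic-normal idea: an incidence vector is drawn first, and the unit is then distributed over the non-zero parts via a logistic-normal transformation. All names, parameters and distributional shortcuts are illustrative; this is not the authors' model specification or estimation procedure.

```python
# Two-stage simulation sketch: (1) which parts are essential zeros,
# (2) a logistic-normal subcomposition on the remaining parts.
import numpy as np

rng = np.random.default_rng(2)

def simulate_composition(p_present, mu, cov):
    """p_present: per-part presence probabilities; (mu, cov): Gaussian on log scale."""
    D = len(p_present)
    present = rng.random(D) < p_present            # stage 1: incidence vector
    x = np.zeros(D)
    idx = np.flatnonzero(present)
    if idx.size == 0:
        return present, x
    z = rng.multivariate_normal(mu[idx], cov[np.ix_(idx, idx)])
    w = np.exp(z - z.max())                        # stage 2: additive logistic map
    x[idx] = w / w.sum()                           # non-zero subcomposition sums to 1
    return present, x

D = 5
present, comp = simulate_composition(np.full(D, 0.7), np.zeros(D), np.eye(D))
print(present, comp, comp.sum())
```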
Abstract:
Title: Data-Driven Text Generation using Neural Networks
Speaker: Pavlos Vougiouklis, University of Southampton
Abstract: Recent work on neural networks shows their great potential for tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models can be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media, along with the training methods that enable it to capture contextual information and effectively participate in public conversations, will be discussed.
Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015, he joined the Web and Internet Science (WAIS) research group of the University of Southampton, and he is currently working towards his PhD in the field of Neural Network Approaches for Natural Language Processing.
Title: Provenance is Complicated and Boring — Is there a solution?
Speaker: Darren Richardson, University of Southampton
Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse-meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of then communicating that provenance to casual users is not trivial: users should not have to have a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users an insight into the provenance, without having to build a bespoke system for each and every different provenance installation?
Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
Abstract:
Combines fiction and non-fiction stories about pollution: how it affects our lives, how water, air and soil can be affected, and how toxic pollution problems are being tackled around the world. Open-ended questions encourage students to reflect on these issues, which affect people in different parts of the world, and to form their own opinions.
Abstract:
A resource for the Citizenship subject aimed at pupils aged eleven to sixteen. It is structured in four sections: the first two focus on self-knowledge and on the ability to manage emotions and relationships. The units in section three are designed to help pupils understand how to develop a healthier, safer lifestyle, to think about the alternatives when making decisions about personal health, and to consider the consequences of those decisions. Section four deals with understanding the world of work and financial capability.
Abstract:
A book aimed at teachers who have to deliver 'Citizenship Education' in connection with 'Personal, Social and Health Education' at KS4 (Key Stage 4), secondary education. It offers planning suggestions and additional materials to accompany the student books 'Your life 4' and 'Your life 5'. It covers the following topics: development as a citizen, personal well-being (understanding oneself and relating to others), personal well-being (staying healthy), and economic well-being and financial capability.
Abstract:
Geography research centres are generally producers of a large volume of Geographic Information (GI), generated both by funded projects and by individual research initiatives. The Centro de Estudos de Geografia e Planeamento Regional (e-GEO) has been involved in several projects at local, regional, national and international scales. Recently, two issues came under debate. The first was that the spatial information produced by these research projects has not had the visibility that was expected: in most cases, the GI from these projects was not in a suitable format for researchers, or even the general public or interest groups, to search easily. The second issue was how to make these results accessible to everyone, everywhere, easily and at minimal cost to the Centre, given the current Portuguese economic context and e-GEO's interests. Both issues are resolved with a single answer: the deployment of a WebGIS on an Open Source platform. This paper illustrates the production of a tool for disseminating geographic information on the World Wide Web using only open-source software and freeware. The tool allows all the Centre's researchers to publish their GI, which becomes fully accessible to any end user. Making this kind of information fully accessible should have a major impact, shortening the distance between the work done by academics and the end user. We believe it is an excellent way for the public to access and interpret spatial information. In conclusion, this platform should help to close the gap between producers and users of geographic information, allowing interaction among all parties as well as the upload of new data under a set of rules intended for quality control.
Abstract:
This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of the incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (a so-called "mosaic") of the area where the mission takes place while, at the same time, localizing itself on this map, following the Concurrent Mapping and Localization strategy. The proposed methodology is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle. As the vehicle moves, a sequence of images of the sea floor is acquired by the camera. For every image of the sequence, a set of characteristic features is detected by means of a corner detector. Their correspondences are then found in the next image of the sequence. Solving the correspondence problem in an accurate and reliable way is a difficult task in computer vision. We consider different alternatives for solving this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and then selecting those that best characterize the point/matching pair and using them together to obtain a more robust characterization. Various alternatives are also studied for merging the information provided by the individual texture operators. Finally, the best approach in terms of robustness and efficiency is proposed. After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matches in the next frame. Our aim is then to recover the apparent motion of the camera from these features. Although a careful texture analysis is devoted to the matching procedure, some false matches (known as outliers) may still appear among the correct correspondences. For this reason, a robust estimation technique is used to estimate the planar transformation (homography) that explains the dominant motion of the image. Next, this homography is used to warp the processed image to the common mosaic frame, constructing a composite image formed by every frame of the sequence. With the aim of estimating the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements obtained by a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic grows in size, local image alignment errors increase the inaccuracies associated with the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself. In this situation new information becomes available, and the system can readjust the position estimates. Our proposal consists not only of localizing the vehicle, but also of readjusting its trajectory when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF). Kalman filtering provides an adequate framework for dealing with position estimates and their associated covariances. Finally, some experimental results are shown. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system. This setup enables a quantitative measurement of the accumulated errors of the mosaics created in the lab.
Then, the results obtained from real sea trials using the URIS underwater vehicle are shown.
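As a hedged illustration of the registration step described above (corner detection, matching between consecutive frames, robust homography estimation and warping into the mosaic frame), the OpenCV sketch below substitutes pyramidal Lucas-Kanade tracking for the thesis' texture-based matching and omits the sonar altimeter and the ASKF; the function names beyond the OpenCV API are placeholders.

```python
# Illustrative OpenCV sketch of pairwise registration and mosaic compositing.
import cv2
import numpy as np

def register_pair(prev_gray, curr_gray):
    """Estimate the homography mapping curr -> prev, rejecting outliers with RANSAC."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
    # Track the corners into the next frame (stand-in for texture-based matching).
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   corners, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(next_pts[good], corners[good], cv2.RANSAC, 3.0)
    return H

def add_to_mosaic(mosaic, frame, H_frame_to_mosaic):
    """Warp the new frame into the common mosaic frame and paste it in."""
    warped = cv2.warpPerspective(frame, H_frame_to_mosaic,
                                 (mosaic.shape[1], mosaic.shape[0]))
    mask = warped > 0
    mosaic[mask] = warped[mask]
    return mosaic
```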
Abstract:
Power quality encompasses both supply quality and customer-service quality. Supply quality, in turn, is considered to have two components: waveform quality and continuity. This thesis addresses continuity of supply through fault location. The problem is relatively well solved in transmission systems, where, thanks to the homogeneous characteristics of the line, measurements at both terminals and the availability of diverse equipment, the fault site can be located with relatively high accuracy. In distribution systems, however, fault location is a complex and still unsolved problem. The complexity is mainly due to non-homogeneous conductors, intermediate loads, lateral branches, and unbalance in both the system and the load. Moreover, these systems normally have measurements only at the substation, together with a simplified model of the circuit. The main efforts in fault location have been oriented towards developing methods that use the fundamental components of voltage and current at the substation to estimate the reactance up to the fault. Since obtaining the reactance allows the distance to the fault site to be quantified using the model, such methods are considered Model-Based Methods (MBM). However, some of their drawbacks are the need for a good system model and the possibility of locating several sites where the fault may have occurred, that is, multiple estimation of the fault site. As a contribution, this thesis presents an analysis and comparative test of several frequently cited MBMs. In addition, the solution is complemented with methods that use other types of information, such as that obtained from historical fault databases containing voltage and current records measured at the substation (not limited to the fundamental component). As tools for extracting information from these records, two classification techniques (LAMDA and SVM) are used and tested. These relate the features obtained from the signal to the zone under fault and are referred to in this document as Knowledge-Based Classification Methods (MCBC). The information used by the MCBCs is obtained from the voltage and current records measured at the distribution substation before, during and after the fault. The records are processed to obtain the following descriptors: a) the magnitude of the voltage variation (dV), b) the variation of the current magnitude (dI), c) the variation of the power (dS), d) the fault reactance (Xf), e) the frequency of the transient (f), and f) the maximum eigenvalue of the current correlation matrix (Sv), each of which was selected because it facilitates fault location. From these descriptors, different training and validation sets for the MCBCs are proposed, and, through a methodology that shows how relationships between these sets and the zones in which the fault occurs can be found, the best-performing sets are selected. The application results demonstrate that combining the MCBCs with the MBMs can reduce the problem of multiple estimation of the fault site.
The MCBC determines the fault zone, while the MBM finds the distance from the measurement point to the fault; their integration in a hybrid scheme takes the best characteristics of each method. In this document, a hybrid is understood as the complementary combination of the MBMs and the MCBCs. Finally, to validate the contributions of this thesis, a hybrid integration scheme for fault location is proposed and tested on two different distribution systems. Both the methods that use the system parameters and are based on impedance estimation (MBM) and those that use the descriptors as information and are based on classification techniques (MCBC) prove valid for solving the fault location problem. Both proposed methodologies have advantages and disadvantages, but according to the method-integration theory presented, a high degree of complementarity is achieved, which allows the formulation of hybrids that improve the results, reducing or avoiding the problem of multiple fault estimation.
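A hedged illustration of the knowledge-based classification step (MCBC) described above: an SVM maps per-event descriptor vectors to a faulted zone, and the result is paired with a model-based distance estimate in the hybrid scheme. The data, zone labels and the fixed distance value are placeholders, not the thesis' data, LAMDA implementation or MBM.

```python
# Toy SVM zone classifier over the descriptors listed in the text.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FEATURES = ["dV", "dI", "dS", "Xf", "f", "Sv"]        # descriptors from the abstract

rng = np.random.default_rng(3)
X_train = rng.standard_normal((200, len(FEATURES)))   # placeholder historical fault records
y_train = rng.integers(0, 4, 200)                     # placeholder faulted-zone labels (0..3)

zone_classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
zone_classifier.fit(X_train, y_train)

# Hybrid use: the classifier narrows the search to one zone, while an
# impedance-based (MBM) estimate gives the distance from the substation.
new_event = rng.standard_normal((1, len(FEATURES)))
zone = zone_classifier.predict(new_event)[0]
distance_km = 4.2                                     # placeholder MBM output
print(f"candidate fault: zone {zone}, about {distance_km} km from the substation")
```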