42 results for advanced editing at Universidad Politécnica de Madrid
Abstract:
The consideration of real operating conditions in the design and optimization of a multijunction solar cell receiver-concentrator assembly is indispensable. Such a requirement calls for suitable modeling and simulation tools to complement the experimental work and circumvent its well-known burdens and restrictions. Three-dimensional distributed models have been demonstrated in the past to be a powerful choice for the analysis of distributed phenomena in single- and dual-junction solar cells, as well as for the design of strategies to minimize solar cell losses when operating under high concentrations. In this paper, we present the application of these models to the analysis of triple-junction solar cells under real operating conditions. The impact of different chromatic aberration profiles on the short-circuit current of triple-junction solar cells is analyzed in detail using the developed distributed model. Current spreading conditions how strongly a given chromatic aberration profile affects the solar cell I-V curve. The focus is on determining the role of current spreading in the connection between the photocurrent profile, subcell voltage and current, and the sheet resistance of the semiconductor layers.
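A minimal illustration of why the photocurrent profile matters (a sketch with made-up numbers, not the distributed model used in the paper): in a series-connected triple-junction stack the terminal short-circuit current is approximately limited by the lowest subcell photocurrent, so chromatic aberration that starves one subcell reduces the usable current even if the total collected light is unchanged.

# Illustrative sketch only: series-connected triple-junction cell.
# The terminal short-circuit current is approximately limited by the
# lowest subcell photocurrent; the values below are made up.

def limiting_current(subcell_photocurrents):
    """Return the series-limited current (A) of a multijunction stack."""
    return min(subcell_photocurrents)

# Hypothetical top/middle/bottom photocurrents (A) for a uniform spot
# and for a spot distorted by chromatic aberration.
uniform = [5.1, 5.0, 5.3]
aberrated = [4.2, 5.6, 5.3]  # top subcell starved by the aberration

print("uniform spot  :", limiting_current(uniform), "A")
print("aberrated spot:", limiting_current(aberrated), "A")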
Abstract:
Nowadays, computer simulators are becoming basic tools for education and training in many engineering fields. In the nuclear industry, the role of simulation in the training of nuclear power plant operators is also recognized as being of the utmost relevance. As an example, the International Atomic Energy Agency sponsors the development of nuclear reactor simulators for education and arranges the supply of such simulation programs. Aware of this, in 2008 Gas Natural Fenosa, a Spanish gas and electric utility that owns and operates nuclear power plants and promotes university education in the nuclear technology field, provided the Department of Nuclear Engineering of Universidad Politécnica de Madrid with the Interactive Graphic Simulator (IGS) of the “José Cabrera” (Zorita) nuclear power plant, an industrial facility whose commercial operation ceased definitively in April 2006. It is a state-of-the-art full-scope real-time simulator that was used for the training and qualification of the plant control room operators, as well as to understand and analyse the plant dynamics and to develop, qualify, and validate its emergency operating procedures.
Abstract:
Jóvenes Nucleares (Spanish Young Generation in Nuclear, JJNN) is a non-profit organization and a commission of the Spanish Nuclear Society (SNE). The Universidad Politécnica de Madrid (Technical University of Madrid, UPM) is one of the most prestigious technical universities in Spain and has a very strong curriculum in nuclear engineering training and research. At the end of 2009, JJNN and the UPM started to plan a new, first-of-a-kind Seminar on Nuclear Safety focused on advanced reactors (Generation III, III+ and IV). The aim was to give a general description of safety in the new reactors, comparing them with the Generation II reactors already built, from a technical yet simple point of view and without requiring a strong background in nuclear engineering, so as to be of interest to as many people as possible.
Abstract:
This communication presents an overview of the first results and innovative methodologies, focusing on their possibilities and limitations for the reconstruction of recent floods and paleofloods around the world.
Abstract:
Preliminary studies have been performed to design a device for nuclear waste transmutation and hydrogen generation based on a gas-cooled pebble-bed accelerator-driven system, TADSEA (Transmutation Advanced Device for Sustainable Energy Application). In previous studies we addressed the viability of an ADS transmutation device that uses as fuel the waste from existing LWR power plants, encapsulated in graphite in the form of pebble beds and cooled by helium, which enables temperatures high enough (on the order of 1200 K) to generate hydrogen from water either by high-temperature electrolysis or by thermochemical cycles. To design this device, several configurations were studied, including different reflector thicknesses, in order to achieve the desired parameters: the transmutation of nuclear waste and the production of 100 MW of thermal power. In this paper, new studies on a deep-burn in-core fuel management strategy for LWR waste are presented. The fuel cycle of the TADSEA device has been analyzed on the basis of both the driver and the transmutation fuel proposed in the General Atomics design of the gas turbine-modular helium reactor. The transmutation results of the three fuel management strategies, using driver, transmutation, and standard LWR spent fuel, were compared, and several parameters describing the neutronic performance of the TADSEA core, such as the fuel and moderator temperature reactivity coefficients and the transmutation chain, are also presented.
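For orientation, the temperature reactivity coefficients mentioned above are conventionally estimated from effective multiplication factors computed at two temperatures; a minimal sketch with illustrative numbers (not TADSEA results) follows.

# Minimal sketch: temperature reactivity coefficient from two k_eff values.
# The k_eff and temperature values below are illustrative, not TADSEA results.

def reactivity(k_eff):
    """Reactivity rho = (k_eff - 1) / k_eff (dimensionless)."""
    return (k_eff - 1.0) / k_eff

def temperature_coefficient(k_cold, k_hot, t_cold, t_hot):
    """Average reactivity change per kelvin between the two states (1/K)."""
    return (reactivity(k_hot) - reactivity(k_cold)) / (t_hot - t_cold)

alpha = temperature_coefficient(k_cold=1.02, k_hot=1.01, t_cold=600.0, t_hot=1200.0)
print(f"alpha_T = {alpha:.2e} 1/K")  # a negative sign indicates stabilizing feedback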
Abstract:
Among the programming paradigms in computer science we find logic programming, whose main exponent is the Prolog language. Prolog programs consist of a set of predicates, each defined by rules that provide the programmer with a high level of abstraction and declarativeness. However, formulating programs with rules frequently implies that a predicate is recomputed several times for the same query, and Prolog uses a fixed order to evaluate rules and goals (SLD evaluation) that can fall into infinite loops when executing declaratively correct recursive rules. These limitations are tackled at their root by tabling, which is based on remembering, in a table, the calls made and their solutions. Thus, if a call is repeated, its solutions are already available and recomputation is avoided. Tabling also avoids infinite loops, since the calls that generate them are suspended until solutions for them have been computed. Implementing tabling is not straightforward. In particular, it requires three operations that cannot all be executed in constant time simultaneously: call suspension, call resumption, and variable access. The first part of the thesis compares three implementations of tabling on top of Ciao, each of which penalizes one of these operations. Each solution therefore has its advantages and drawbacks and performs better or worse depending on the program being executed. The second part of the thesis improves the functionality of tabling in order to combine it with constraints and to avoid unnecessary computations. Constraint programming allows equations to be solved as a means of programming, a highly declarative mechanism. We have developed a framework for combining tabling with constraints, prioritizing the flexibility, efficiency, and generality of our solution, and obtaining a synergy between both techniques that can be applied in many applications. Another fundamental aspect of tabling concerns the moment at which the solutions of a tabled call are returned. Local evaluation returns solutions only when all the solutions of the tabled call have been computed. In contrast, batched evaluation returns solutions one by one as they are computed, so it is better suited to problems where we are not interested in finding all solutions; however, its memory consumption is exponentially worse than that of local evaluation. The thesis presents swapping evaluation, which returns solutions as soon as they are computed but with a memory consumption similar to that of local evaluation. In addition, pruning operators such as once/1 are implemented to discard the search for alternative solutions once the desired solution has been found. Finally, Prolog adopts solutions for parallelism relatively easily thanks to its flexibility in execution control and to the fact that its assignments are logical. The third part of the thesis extends Ciao's conjunctive parallelism to work with non-deterministic programs, which presents two main problems: trapped goals and goal recomputation.
Classical solutions to trapped goals broke many invariants of Prolog execution, making them difficult to maintain and extend, and experience tells us they have fallen into disuse. We propose a modular, localized solution (based on the implementation of swapping evaluation) that does not break the invariants of Prolog execution while maintaining high parallel execution performance. Regarding the recomputation of parallel goals in the presence of non-determinism, we have adapted techniques derived from tabling to memorize the computations of these goals and avoid their recomputation.
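As a language-neutral analogy (a minimal Python sketch, not Ciao's actual tabling machinery): tabling behaves much like memoizing a recursive definition, in that repeated calls are answered from the table instead of being recomputed; real tabled Prolog additionally suspends calls on the same subgoal so that correct recursive rules do not loop.

# Python analogy of tabling (memoization): repeated calls are answered from a
# table instead of being recomputed. This is only a sketch of the idea; tabled
# Prolog also suspends repeated calls so recursive rules do not loop forever.

from functools import lru_cache

@lru_cache(maxsize=None)  # the "table" of calls and their answers
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # fast, because every intermediate call is answered from the table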
Abstract:
Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions of their input data sizes, without actually having to execute the programs. While a powerful resource analysis framework for object-oriented programs existed before this thesis, advanced aspects to improve the efficiency, the accuracy, and the reliability of the analysis results still need to be further investigated. This thesis tackles this need from the following four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap-allocated variables. In this thesis we present two extensions to this approach: the first extension is to consider array accesses (in addition to object fields), while the second extension focuses on handling cases for which the locality conditions cannot be proven unconditionally, by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results, and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified, but most analyzers still read and analyze the entire program at once in a non-incremental way. This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods in an incremental way. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy, unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA is able to derive upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used for automatically producing verified resource guarantees. (4) Distribution and concurrency are mainstream today. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
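A hedged sketch of the incremental idea in point (2) (illustrative only, not the algorithm implemented on top of COSTA): keep a graph of callers, and after a change reanalyze the changed method and propagate to its callers only while their stored summaries actually change.

# Minimal sketch of incremental reanalysis over a call graph (illustrative only,
# not the COSTA algorithm): when a method changes, reanalyze it and propagate to
# its callers only while the stored summary actually changes.

from collections import defaultdict, deque

# Toy call graph: main -> helper -> leaf
callees = {"main": ["helper"], "helper": ["leaf"], "leaf": []}
callers = defaultdict(set)
for caller, cs in callees.items():
    for c in cs:
        callers[c].add(caller)

summaries = {}  # method -> cost summary (here just a number)

def analyze(method):
    # Placeholder for the real per-method cost analysis.
    return 1 + sum(summaries.get(c, 0) for c in callees[method])

def reanalyze_after_change(changed):
    worklist = deque([changed])
    while worklist:
        m = worklist.popleft()
        new = analyze(m)
        if summaries.get(m) != new:      # propagate only real changes
            summaries[m] = new
            worklist.extend(callers[m])  # callers may be affected too

reanalyze_after_change("leaf")           # a change in "leaf" ripples upward
print(summaries)                         # {'leaf': 1, 'helper': 2, 'main': 3}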
Abstract:
There are many industries that use highly technological solutions to improve quality in all of their products. The steel industry is one example. Several automatic surface-inspection systems are used in the steel industry to identify various types of defects and to help operators decide whether to accept, reroute, or downgrade the material, subject to the assessment process. This paper focuses on promoting a strategy that considers all defects in an integrated fashion. It does this by managing the uncertainty about the exact position of a defect, due to different process conditions, by means of Gaussian additive influence functions. The relevance of the approach lies in making consistency and reliability between surface-inspection systems possible. The results obtained are an increase in confidence in the automatic inspection system and the ability to introduce improved prediction and advanced routing models. The prediction is provided to technical operators to help them in their decision-making process. It shows the improvement gained in reducing the 40% of coils that are downgraded at the hot strip mill because of specific defects. In addition, this technology facilitates an increase of 50% in the accuracy of the estimate of defect survival after the cleaning facility, in comparison with the former approach. The proposed technology is implemented by means of software-based, multi-agent solutions. It makes possible the independent treatment of information, presentation, quality analysis, and other relevant functions.
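A hedged sketch of how a Gaussian additive influence function can spread the evidence for a defect over neighbouring positions along the strip (illustrative parameters, not the calibration used in the paper):

# Minimal sketch: spread defect evidence along the strip length with Gaussian
# additive influence functions, so that positional uncertainty coming from
# different process conditions is accounted for. Parameters are illustrative.

import math

def gaussian_influence(position, center, sigma, weight=1.0):
    return weight * math.exp(-0.5 * ((position - center) / sigma) ** 2)

def defect_map(length_m, step_m, defects):
    """defects: list of (reported_position_m, positional_sigma_m, severity)."""
    grid = [i * step_m for i in range(int(length_m / step_m) + 1)]
    return [(x, sum(gaussian_influence(x, c, s, w) for c, s, w in defects))
            for x in grid]

# Two overlapping defects whose exact positions are uncertain.
for x, score in defect_map(10.0, 1.0, [(3.0, 1.0, 1.0), (4.5, 0.8, 0.6)]):
    print(f"{x:4.1f} m  influence = {score:.3f}")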
Abstract:
Photoreflectance (PR) is a convenient characterization tool able to reveal optoelectronic properties of semiconductor materials and structures. It is a simple, non-destructive, and contactless technique which can be used in air at room temperature. We will present experimental results of the characterization carried out by means of PR on different types of advanced photovoltaic (PV) structures, including quantum-dot-based prototypes of intermediate band solar cells, quantum-well structures, highly mismatched alloys, and III-V-based multi-junction devices, thereby demonstrating the suitability of PR as a powerful diagnostic tool. Examples will be given to illustrate the value of this spectroscopic technique for PV, including (i) the analysis of the PR spectra in search of critical points associated with absorption onsets; (ii) distinguishing signatures related to quantum confinement from those originating from delocalized band states; (iii) determining the intensity of the electric field related to built-in potentials at interfaces according to the Franz-Keldysh (FK) theory; and (iv) determining the nature of different oscillatory PR signals among those ascribed to FK oscillations, interferometric effects, and photorefractive effects. The aim is to attract the interest of researchers in the field of PV to modulation spectroscopies, as they can be helpful in the analysis of their devices.
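For reference, the built-in field in item (iii) is commonly extracted from the photon energies of the Franz-Keldysh oscillation extrema through the electro-optic energy; a standard form of the asymptotic relation (a general textbook expression, not necessarily the exact one used by the authors) is:

% n-th FKO extremum at photon energy E_n (E_g: band gap, \phi: phase factor):
n\pi = \phi + \frac{4}{3}\left(\frac{E_n - E_g}{\hbar\theta}\right)^{3/2},
\qquad
(\hbar\theta)^3 = \frac{e^{2}\hbar^{2}F^{2}}{8\mu_{\parallel}},
% so fitting (4/3)(E_n - E_g)^{3/2} against n yields \hbar\theta and hence the field F.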
Abstract:
This paper presents the results of experiments conducted within Work Package 10 (fusion experimental programme) of the HiPER project. The aim of these experiments was to study the physics relevant for advanced ignition schemes for inertial confinement fusion, namely fast ignition and shock ignition. Such schemes make it possible to achieve a higher fusion gain than the indirect-drive approach adopted at the National Ignition Facility in the United States, which is important for future inertial fusion energy reactors and for realising inertial fusion with smaller facilities.
Abstract:
Lately, videoconference applications have experienced an evolution towards the World Wide Web. New technologies have given browsers real-time communication capabilities. In this context, WebRTC aims to provide this functionality by following and defining standards. Being a new effort, WebRTC still lacks advanced videoconferencing services such as session recording, media mixing, and adaptation to varying network conditions. This paper analyzes these challenges and proposes, as a solution, an architecture based on a traditional communications entity: the Multipoint Control Unit (MCU).
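As a toy illustration of the media-mixing role such an MCU plays (a minimal sketch, not the architecture proposed in the paper): the unit combines one audio frame per participant into a single mixed frame, so each browser keeps only one upstream and one downstream media path.

# Toy sketch of MCU-style audio mixing (illustrative only): frames from all
# participants are summed sample by sample and clipped to the 16-bit PCM range.

def mix_frames(frames):
    """frames: list of equally long lists of 16-bit PCM samples."""
    mixed = []
    for samples in zip(*frames):
        value = sum(samples)
        mixed.append(max(-32768, min(32767, value)))  # prevent integer overflow
    return mixed

alice = [1000, -2000, 3000]
bob = [500, 2500, -1000]
print(mix_frames([alice, bob]))  # [1500, 500, 2000]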
Abstract:
Low resources in many African locations do not allow many African scientists and physicians to access the latest advances in technology. This deficiency hinders the daily life of African professionals, who often cannot afford, for instance, the cost of internet fees or software licenses. The AFRICA BUILD project, funded by the European Commission and formed by four European and four African institutions, intends to provide advanced computational tools to African institutions in order to solve current technological limitations. In the context of AFRICA BUILD we have carried out a series of experiments to test the feasibility of using Cloud Computing technologies in two different locations in Africa: Egypt and Burundi. The project aims to create a virtual platform to provide access to a wide range of biomedical informatics and learning resources to professionals and researchers in Africa.
Abstract:
Spanish Young Generation in Nuclear (Jóvenes Nucleares) is a commission of the Spanish Nuclear Society (SNE) whose main goal is to spread knowledge about nuclear energy in society. Following this motivation, two seminars have been carried out with the collaboration of the Technical University of Madrid: the Seminar on Nuclear Safety in Advanced Reactors (SRA) and the Seminar on Nuclear Fusion (SFN). The first one, which has been held every year since 2010, aims to clearly show the advances achieved in safety with the new reactors, from a technical but simple point of view and without requiring extensive prior knowledge of nuclear engineering. The second one, whose first edition was held in 2011, aims to give a general overview of the past, present, and future situation of nuclear fusion technology, and was born as a result of the increasing interest of our Spanish Young Generation members in this technology.
Abstract:
This paper presents a proposal for an advanced debate system for a digital democracy environment which overcomes the limitations of existing systems. We have been especially careful in applying security procedures in the telematic systems, since they must offer citizens the guarantees that society demands. New functional tools have been included to ensure user authentication and to permit anonymous participation, where the system is unable to disclose, or even to know, the identity of system users. The platform prevents non-entitled persons who do not belong to the authorized group from giving their opinion. Furthermore, this proposal allows the proper functioning of the system to be verified, free of tampering or fraud intended to alter the conclusions or outcomes of participation. All these tools guarantee important aspects of both a social and a technical nature, most importantly freedom of expression, equality, and auditability.
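A hedged sketch of one way authentication can coexist with anonymous participation (a simplified illustration, not the security procedures of the proposed platform): an eligibility check issues an opaque single-use token, and opinions are accepted against the token alone, so the debate service stores no user identity; real deployments typically add blind signatures so that even the token issuer cannot link tokens to identities.

# Simplified illustration only (not the platform's actual protocol): eligibility
# is checked once, an opaque single-use token is issued, and opinions are stored
# against the token with no user identifier attached.

import secrets

entitled_users = {"alice", "bob"}  # hypothetical register of entitled citizens
issued_tokens = set()              # stored without any link to identities

def issue_token(user_id):
    if user_id not in entitled_users:
        raise PermissionError("user is not entitled to participate")
    token = secrets.token_hex(16)
    issued_tokens.add(token)
    return token

def submit_opinion(token, text):
    if token not in issued_tokens:
        raise PermissionError("invalid participation token")
    issued_tokens.discard(token)   # single use
    return {"opinion": text}       # no user identifier is stored

t = issue_token("alice")
print(submit_opinion(t, "I support the proposal"))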
Abstract:
The creation of language resources is a time-consuming process requiring the efforts of many people. The use of resources collaboratively created by non-linguists can potentially ameliorate this situation. However, such resources often contain more errors than resources created by experts. For the particular case of lexica, we analyse the case of Wiktionary, a resource created along wiki principles, and argue that through the use of a principled lexicon model, namely lemon, the resulting data could be made more understandable to machines. We then present a platform called lemon source that supports the creation of linked lexical data along the lemon model. This tool builds on the concept of a semantic wiki to enable collaborative editing of the resources by many users concurrently. In this paper, we describe the model and the tool, and present an evaluation of its usability based on a small group of users.
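A hedged sketch of what a lemon-style linked lexical entry looks like (a minimal example using the Python rdflib library; the lemon namespace URI and property names follow the published lemon core vocabulary, but the exact modelling used by lemon source may differ):

# Minimal sketch of a lemon-style lexical entry as linked data (illustrative;
# namespace and property names assume the lemon core vocabulary).

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

LEMON = Namespace("http://lemon-model.net/lemon#")
EX = Namespace("http://example.org/lexicon/")

g = Graph()
g.bind("lemon", LEMON)

lexicon = EX["wiktionary-en"]
entry = EX["cat"]
form = EX["cat-canonical-form"]

g.add((lexicon, RDF.type, LEMON.Lexicon))
g.add((lexicon, LEMON.entry, entry))
g.add((entry, RDF.type, LEMON.LexicalEntry))
g.add((entry, LEMON.canonicalForm, form))
g.add((form, LEMON.writtenRep, Literal("cat", lang="en")))

print(g.serialize(format="turtle"))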