953 results for proposed solutions
Abstract:
As the energy of particle and heavy-ion accelerators such as CERN or GSI, of fusion reactors such as JET or ITER, and of other scientific experiments increases, remote handling techniques become ever more indispensable for interacting with the radioactive environment. Until now, the dose rate at CERN could reach values close to a few mSv for cooling times on the order of hours, which allowed human intervention for maintenance tasks. During the first plasma experiments at JET, values close to 200 μSv were reached after a cooling time of four months, and remote handling techniques were already in widespread use. There is a clear trend towards higher radiation levels in such facilities in the future. A clear example is ITER, where values of 450 Sv/h are expected at the centre of the torus after 11 days of cooling, while the new energy levels at CERN will also require a commitment to remote maintenance. This thesis is framed in these circumstances: it studies a bilateral control system based on force-position, avoiding the use of force/torque sensors, whose electronic content makes them particularly vulnerable in these environments. The work focuses on the teleoperation of industrial robots, whose well-known reliability and ease of adaptation to these environments, together with their low cost and high availability, make them an interesting alternative to expensive custom-made solutions for remote handling tasks. First, the kinematic problem of master-slave teleoperation with dissimilar kinematics is considered and a general method for solving it is developed, including the use of assistive forces to guide the operator. The experiments carried out with an ABB robot are then described in detail, showing the difficulties encountered and the recommendations for overcoming them. The kinematic study is concluded with a method for matching the workspaces of a dissimilar master and slave. The thesis then turns to dynamics, studying robot modelling with a view to obtaining a method for estimating the external forces acting on the robot. During the characterisation of the dynamic model, several tests are carried out to find a compromise between computational complexity and estimation error. The key points for modelling and characterising robots with a parallelogram structure are also given, and the desired control architecture is presented.
Once the complete model of the slave is obtained, different alternatives for estimating external forces in real time are investigated, minimising the differentiation of the position signal in order to minimise noise. The study starts with classical state observers and evolves towards a Luenberger-sliding observer whose implementation is relatively simple and whose results are convincing. The proposed observer is also analysed in a simulated bilateral control, comparing the force feedback obtained with classical position-error architectures against a force-position control in which the force is estimated rather than measured. The proposed solution is shown to give results comparable to the classical architectures, while providing an alternative for the teleoperation of industrial robots that would otherwise be impossible in radioactive environments. Finally, the problems arising from the practical application of teleoperation in the scenarios mentioned above are analysed. Because the conditions are prohibitive for any electronic equipment, the control systems must be placed far from the manipulators, leading to cable lengths of hundreds of metres. Under these conditions, PWM-based drives produce overvoltages that can be destructive for the system formed by the drive, the cabling and the actuator, and they must therefore be eliminated. This work proposes a solution based on a commercial LC filter and shows extensively that its inclusion has no adverse effect on the control of the actuator.
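To make the force-estimation idea concrete, the following sketch is an illustration only, not the observer implemented in the thesis: a Luenberger observer augmented with a sliding (signum) correction term reconstructs an external torque on a single joint from the measured position alone, without differentiating it. The inertia, damping, gains and the simulated contact torque are all invented for the example.

```python
import numpy as np

# Minimal sketch (illustrative values, not the thesis implementation): a
# Luenberger observer with a sliding correction term estimating the external
# torque acting on a single joint, using only the measured position.

I_j, b = 0.05, 0.4                 # joint inertia [kg m^2], viscous damping [N m s/rad]
dt, T = 1e-3, 2.0
A = np.array([[0.0, 1.0, 0.0],     # augmented model of [q, dq, tau_ext]
              [0.0, -b / I_j, 1.0 / I_j],
              [0.0, 0.0, 0.0]])    # external torque modelled as slowly varying
B = np.array([0.0, 1.0 / I_j, 0.0])
L_gain = np.array([60.0, 900.0, 150.0])   # Luenberger gains (chosen heuristically)
k_s = np.array([0.0, 0.0, 4.0])           # sliding term acting on the torque state

q, dq = 0.0, 0.0                   # "true" plant state for the simulation
x_hat = np.zeros(3)                # observer estimate of [q, dq, tau_ext]
est_at_contact = 0.0
for k in range(int(T / dt)):
    t = k * dt
    tau_cmd = 2.0 * np.sin(2 * np.pi * t)        # commanded motor torque
    tau_ext = 1.5 if 0.8 < t < 1.4 else 0.0      # simulated contact torque

    # Plant: I*ddq = tau_cmd - b*dq + tau_ext (explicit Euler integration).
    ddq = (tau_cmd - b * dq + tau_ext) / I_j
    q, dq = q + dt * dq, dq + dt * ddq

    # Observer driven only by the position innovation e = q - q_hat.
    e = q - x_hat[0]
    x_hat = x_hat + dt * (A @ x_hat + B * tau_cmd + L_gain * e + k_s * np.sign(e))

    if abs(t - 1.3) < dt / 2:
        est_at_contact = x_hat[2]

print(f"estimated external torque at t = 1.3 s: {est_at_contact:.2f} N m (true 1.5)")
```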
Abstract:
This PhD thesis contributes to the problem of autonomic fault diagnosis in telecommunication networks. In today's telecommunication networks, operators perform diagnosis tasks manually. These operations must be carried out by highly skilled network engineers, who find it increasingly difficult to manage properly the exponential growth of the network in size, complexity and heterogeneity. Moreover, with the advent of the Future Internet, the demand for systems that simplify and automate the management of telecommunication networks has grown in recent years. To collect the domain knowledge required to develop the proposed solutions and to ease their adoption by network operators, an agile acceptance-testing methodology for multi-agent systems is proposed, focused on simplifying communication between the different groups involved in any software development project: stakeholders and developers.
To contribute to solving the problem of autonomic fault diagnosis, an agent architecture capable of diagnosing faults in telecommunication networks autonomously is defined. This architecture extends the Belief-Desire-Intention (BDI) agent model with different diagnostic models that handle the different subtasks of the process. The proposed architecture combines different reasoning techniques to achieve its purpose: a structural model of the network, which uses ontology-based reasoning, and a causal fault model, which uses Bayesian reasoning to properly handle the uncertainty of the diagnosis process. To ensure the suitability of the proposed architecture in highly complex and heterogeneous situations, an argumentation framework is defined that allows agents running in federated domains to perform diagnosis. To apply this framework in a multi-agent system, a coordination protocol is proposed in which agents dialogue until a conclusion is reached for a specific diagnosis case. Future work includes extending the architecture to address other management problems such as self-discovery or self-optimisation, using reputation techniques within the argumentation framework to improve the extensibility of the diagnostic system in federated environments, and applying the proposed architectures to emerging network architectures, such as SDN, which offer greater capability to interact with the network.
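As an illustration of the Bayesian side of such a causal fault model, the sketch below computes the posterior probability of a root cause given observed symptoms by exact enumeration over a tiny two-cause, two-symptom noisy-OR model. All cause names, symptom names and probabilities are invented for the example; the thesis combines this kind of reasoning with the ontology-based structural model.

```python
import itertools

# Priors of root causes (illustrative numbers only).
prior = {"fibre_cut": 0.02, "router_overload": 0.10}

# P(symptom present | cause active), combined with a noisy-OR and a small leak.
p_sym = {
    "loss":    {"fibre_cut": 0.97, "router_overload": 0.60},
    "latency": {"fibre_cut": 0.30, "router_overload": 0.90},
}
leak = 0.01

def p_symptom(sym, active):
    """Noisy-OR probability that a symptom is observed given the active causes."""
    p_absent = 1 - leak
    for c in active:
        p_absent *= 1 - p_sym[sym][c]
    return 1 - p_absent

def posterior(target, evidence):
    """P(target cause active | observed symptoms), by exact enumeration."""
    num = den = 0.0
    causes = list(prior)
    for assignment in itertools.product([False, True], repeat=len(causes)):
        active = [c for c, a in zip(causes, assignment) if a]
        p = 1.0
        for c, a in zip(causes, assignment):
            p *= prior[c] if a else 1 - prior[c]
        for sym, observed in evidence.items():
            ps = p_symptom(sym, active)
            p *= ps if observed else 1 - ps
        den += p
        if target in active:
            num += p
    return num / den

print(posterior("fibre_cut", {"loss": True, "latency": False}))
```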
Abstract:
The infrastructure that human beings build to exploit natural resources and satisfy their needs produces both positive and negative impacts on the environment. Mexico has a wealth of natural resources and places favoured by nature, where the overload of anthropogenic activities creates environmental impact problems, especially in coastal zones and their surroundings. The objective of this work was to provide information about the main pressures the system receives and how these affect proposed integral solutions and the capacity of coastal zones to recover their state of equilibrium.
In this research, a methodology for the characterisation of coastal zones based on a systemic model was developed, with the purpose of providing a planning tool for environmentally sustainable projects. It integrates a database of best planning practices, which facilitates the diagnosis and evaluation of the adaptive resilience of the system. The systemic model was used as a way of organising the great complexity of the interrelations and interconnections among the many components, and thus gaining the knowledge needed for their characterisation. Based on the Zachman model, an analysis of the strengths and weaknesses of the system was performed, making it possible to visualise the impact of the risks to which a coastal zone is exposed. The main contributions of this work are the development of the COASTAL ZONE CHARACTERIZATION RECORD and the inclusion in that record of estimates of physical, environmental, social, economic and political resilience. The proposed methodology integrates the components, relationships and interconnections that exist in the coastal system. It has the advantage of being flexible: components can be added or discarded according to the particularities of each case study. In addition, it is proposed as an aid for periodic monitoring of the system, as part of an observatory integrated into a National System of Coastal Management that is put forward as a future line of research. As a case study, the characterisation of the complex Banco Chinchorro system was carried out, which led to the inclusion in the COASTAL ZONE CHARACTERIZATION RECORD of the lessons learned from the good and bad practices detected, improving the proposed methodology for coastal zone management.
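Purely as an illustration of how such a record and its aggregate resilience estimate might be encoded in software, the sketch below defines a small data structure with the five resilience dimensions; the field names, the 0-1 scoring scale, the equal weights and the example values are assumptions, not the record actually defined in the thesis.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a possible encoding of the characterization record's
# resilience estimates. Field names, scale and weighting are assumptions.

@dataclass
class CoastalCharacterizationRecord:
    site: str
    physical: float        # each dimension scored in [0, 1]
    environmental: float
    social: float
    economic: float
    political: float
    weights: dict = field(default_factory=lambda: {
        "physical": 0.2, "environmental": 0.2, "social": 0.2,
        "economic": 0.2, "political": 0.2,
    })

    def aggregate_resilience(self) -> float:
        """Weighted mean of the five resilience dimensions."""
        return sum(w * getattr(self, dim) for dim, w in self.weights.items())

record = CoastalCharacterizationRecord(
    site="Banco Chinchorro", physical=0.6, environmental=0.7,
    social=0.5, economic=0.4, political=0.5)
print(round(record.aggregate_resilience(), 2))
```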
Analysis of the ORCC and Vivado HLS Tools for the Synthesis of RVC-CAL Dataflow Models
Abstract:
This Final Degree Project studies how to generate VHDL (VHSIC Hardware Description Language) models from dataflow models written in RVC-CAL (Reconfigurable Video Coding – CAL Actor Language), using Vivado HLS (Vivado High Level Synthesis), one of the tools available in Xilinx's Vivado suite. Once the resulting VHDL model is obtained, the intention is to program it, with the Xilinx tools, onto an FPGA (Field Programmable Gate Array) or onto the Zynq device, also developed by Xilinx. RVC-CAL is a dataflow language that describes the functionality of functional blocks called actors. The functionality of an actor is defined by actions, which may differ within the same actor. Actors can communicate with one another and form a network of actors. Vivado HLS can produce a VHDL design from a model written in C, so generating VHDL models from RVC-CAL models requires a preliminary phase in which the RVC-CAL models are compiled into their C equivalents. The ORCC compiler (Open RVC-CAL Compiler) is the tool that produces C designs from RVC-CAL models. ORCC does not directly create executable code; it generates source code to be compiled by another tool, in this project the GCC (GNU C Compiler) under Linux. In short, this project comprises three clearly differentiated points of study: 1. Dataflow models in RVC-CAL are compiled with ORCC to obtain their C translation. 2. The equivalent C designs are synthesised with Vivado HLS to obtain the VHDL models. 3. The resulting VHDL models would be processed by the Xilinx tools to produce the bitstream programmed onto an FPGA or onto the Zynq device. In the study of the second point, a number of conflicting elements were found that affect the Vivado HLS synthesis of the C designs generated by ORCC. These elements are related to the way the C specification generated by ORCC is structured, which Vivado HLS cannot handle at certain moments of the synthesis. A "manual" transformation of the ORCC-generated designs is therefore proposed, affecting the original models as little as possible, so that the synthesis can be performed with Vivado HLS and the correct VHDL file can be produced.
This document is structured along the lines of a research work. First, the motivations and objectives that support this work, and that it is expected to achieve, are presented. Next, an analysis of the state of the art of the elements needed for its development is given, providing the basic concepts required to understand and study the document correctly: the RVC-CAL and VHDL languages are described, and the ORCC and Vivado tools are introduced, analysing the strengths and main features of both. Once the behaviour of both tools is known, the solutions developed in our study of the synthesis of RVC-CAL models are described, highlighting the conflicting points mentioned above that Vivado HLS cannot handle when synthesising the C designs generated by the ORCC compiler. The solutions proposed for these synthesis errors are then presented, with which a C specification better suited to a correct synthesis in Vivado HLS, and hence the appropriate VHDL models, is sought. Finally, as the end result of this work, a set of conclusions is drawn from all the analyses and developments carried out, together with a series of future lines of work with which the study could be continued and the research developed in this document completed.
Abstract:
The Distribution System Expansion Planning (PESD) problem seeks to determine guidelines for expanding the network in view of growing consumer demand. In this context, electricity distribution companies must propose actions on the distribution system so that the energy supply meets the standards required by the regulatory bodies. Traditionally, only the minimization of the overall investment cost of expansion plans is considered, neglecting reliability and robustness of the system. As a consequence, the resulting expansion plans lead the distribution system to configurations that are vulnerable to heavy load shedding when contingencies occur in the network. This work develops a methodology to bring reliability and risk into the traditional PESD problem, in order to choose expansion plans that maximize network robustness and thus mitigate the damage caused by contingencies. A multi-objective formulation of the PESD problem is proposed in which two objectives are minimized: the overall cost (which incorporates investment, maintenance, operation and energy production costs) and the implementation risk of the expansion plans. For both objectives, mixed-integer linear models are formulated and solved with the CPLEX solver through the GAMS software. To manage the search for optimal solutions, two evolutionary algorithms were implemented in C++: the Non-dominated Sorting Genetic Algorithm-2 (NSGA2) and the Strength Pareto Evolutionary Algorithm-2 (SPEA2). These algorithms proved effective in this search, as verified through expansion-planning simulations of two test systems adapted from the literature. The set of solutions found in the simulations contains expansion plans with different levels of overall cost and implementation risk, highlighting the diversity of the proposed solutions. Some of these topologies are illustrated to show their differences.
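To make the multi-objective ranking concrete, the sketch below shows the non-dominated sorting step at the heart of NSGA2-style algorithms, applied to the two objectives minimised here (overall cost and implementation risk). The candidate plans and their objective values are invented for the example, and the actual implementation in the work is in C++.

```python
# Minimal sketch of Pareto non-dominated sorting over (cost, risk) pairs.
# The candidate expansion plans below are hypothetical.

def dominates(a, b):
    """True if plan a is no worse than b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(objectives):
    """Return indices grouped into Pareto fronts (front 0 = non-dominated set)."""
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# (overall cost in M$, implementation risk index) for hypothetical plans
plans = [(12.0, 0.40), (10.5, 0.55), (14.0, 0.20), (11.0, 0.50), (13.5, 0.45)]
print(non_dominated_fronts(plans))
```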
Abstract:
After the power uprate of the IEA-R1 reactor from 2 MW to 5 MW, an increase in the corrosion rate was observed on the side plates of some fuel elements, and doubts arose about the flow-rate value used in the thermal-hydraulic analyses. To clarify and measure the actual flow distribution among the fuel elements that make up the IEA-R1 core, a full-scale prototype fuel element without nuclear material, called DMPV-01 (Device for Pressure and Flow Measurement), was designed and built in aluminium. The flow in the channel between two fuel elements is very difficult to estimate or measure, yet it is very important for the cooling of the side plates. This work presents the design and construction of an instrumented fuel element to measure the actual temperature on these side plates and thus better assess the fuel cooling conditions. Fourteen thermocouples were installed in the instrumented fuel element: four in each lateral channel and four in the central channel, plus one thermocouple at the element's inlet nozzle and another at the outlet nozzle. In each channel, three thermocouples measure the cladding temperature and one measures the coolant temperature. Three series of experiments, for three different configurations, were carried out with the instrumented fuel element. In two of them, an aluminium box was installed around the core to reduce the cross flow between fuel elements and to measure its impact on the temperature of the outer plates. Given the amount of information obtained, and its usefulness for the design, improvement and capability to build, assemble and manufacture instrumented fuel elements, this project is an important milestone in the study of research reactor cores. The proposed solutions can be widely applied to other research reactors.
Abstract:
Policy-makers often fret about the low number of university graduates in the fields of science, technology, engineering and mathematics (STEM). Proposed solutions often focus on providing better information for students and parents about the employability or average wages of different fields to emphasise that STEM professions pay. This paper argues that, from a personal point of view, students are actually making rational decisions, if all benefits and costs are factored into the equation. The authors conclude, therefore, that public policy needs to change the incentives to induce students to enter these fields and not just provide information about them.
Abstract:
The rapid increase in the number of immigrants from outside the EU coming to Germany has become the paramount political issue. According to new estimates, the number of individuals expected to arrive in Germany in 2015 and to apply for asylum there is 800,000, nearly twice as many as estimated in earlier forecasts. Various administrative, financial and social problems related to the influx of migrants are becoming increasingly apparent. The problem of 'refugees' (in public debate, the terms 'immigrants', 'refugees', 'illegal immigrants' and 'economic immigrants' have not been clearly defined and have often been used interchangeably) has been building up for over a year. Despite this, it was being disregarded by Angela Merkel's government, which was preoccupied with debates on how to rescue Greece. It was only daily reports of refugee centres being set on fire that convinced Chancellor Merkel to speak out and to make the immigration problem a priority issue (Chefsache). Neither the ruling coalition nor the opposition parties have a consistent idea of how Germany should react to the growing number of refugees; in this matter, divisions run across parties. Various solutions have been proposed, from liberalising the laws on the right to stay in Germany to combating illegal immigration more effectively, which would be possible if asylum-granting procedures were accelerated. The proposed solutions have not been properly thought through; instead, they are reactive measures inspired by the results of opinion polls, which is why their assumptions are often contradictory. The situation is similar regarding the actions proposed by Chancellor Merkel, which involve faster procedures to expel individuals with no right to stay in Germany and a plan to convince other EU states to accept 'refugees'. None of these ideas is new – they were already present in the German internal debate.
Abstract:
In Italy, many buildings were constructed without taking seismic action into account. The need to retrofit these buildings in accordance with the current Italian code was the motivation for this research. For seismic retrofitting, different approaches are proposed here depending on the type of structure and, in particular, on its deformability. For flexible structures, such as the steel racks used for ageing Parmigiano Reggiano cheese, passive energy-dissipation devices were used. Sensitivity analyses were carried out to determine the damping coefficient that minimizes the stress state in the sections of interest. The results of the analyses show the effectiveness of the proposed solutions and could represent a starting point for defining possible standard countermeasures for seismic retrofitting. For rigid structures, such as masonry bridges, criteria were defined for modelling and verifying the sections of interest, using simplified but proven models as a term of comparison.
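As a purely illustrative aside (a single-degree-of-freedom surrogate, not the rack models analysed in the work), the sketch below shows the kind of sensitivity sweep over the damping ratio such an analysis performs, trading off the resonant relative displacement against the high-frequency transmissibility under harmonic base excitation; the frequency ratios and the weighting of the two criteria are assumptions.

```python
import numpy as np

# Illustrative sensitivity sweep over the damping ratio of a damped SDOF
# oscillator under harmonic base excitation: more damping lowers the resonant
# relative displacement but worsens the high-frequency transmissibility.

zetas = np.linspace(0.02, 0.60, 59)          # candidate damping ratios

def relative_disp(zeta, r):
    """Steady-state relative displacement amplitude / base displacement."""
    return r**2 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

def transmissibility(zeta, r):
    """Absolute acceleration transmissibility of the mass."""
    return np.sqrt(1 + (2 * zeta * r) ** 2) / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

disp_res = relative_disp(zetas, r=1.0)        # response at resonance
tr_high = transmissibility(zetas, r=3.0)      # isolation at a higher frequency ratio
score = 0.5 * disp_res / disp_res.max() + 0.5 * tr_high / tr_high.max()  # arbitrary weights
print(f"damping ratio minimizing the combined index: {zetas[np.argmin(score)]:.2f}")
```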
Abstract:
This report gives an overview of the work being carried out, as part of the NEUROSAT project, in the Neural Computing Research Group at Aston University. The aim is to give a general review of the work and methods, with reference to other documents which provide the detail. The document is ongoing and will be updated as parts of the project are completed. Thus some of the references are not yet present. In the broadest sense, the Aston part of NEUROSAT is about using neural networks (and other advanced statistical techniques) to extract wind vectors from satellite measurements of ocean surface radar backscatter. The work involves several phases, which are outlined below. A brief summary of the theory and application of satellite scatterometers forms the first section. The next section deals with the forward modelling of the scatterometer data, after which the inverse problem is addressed. Dealiasing (or disambiguation) is discussed, together with proposed solutions. Finally a holistic framework is presented in which the problem can be solved.
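As one concrete picture of the dealiasing step, the sketch below applies a simple disambiguation rule: each cell keeps the candidate wind vector closest to a prior field such as a numerical weather prediction forecast. The candidate and prior values are invented, and this heuristic stands in for, rather than reproduces, the NEUROSAT approach.

```python
import numpy as np

# Illustrative dealiasing sketch: scatterometer inversion typically returns
# several candidate wind vectors per cell that fit the backscatter almost
# equally well; here each cell keeps the candidate closest to a prior field.

def dealias(candidates, prior):
    """candidates: list per cell of (u, v) solutions; prior: one (u, v) per cell."""
    chosen = []
    for cands, (up, vp) in zip(candidates, prior):
        errs = [np.hypot(u - up, v - vp) for u, v in cands]
        chosen.append(cands[int(np.argmin(errs))])
    return chosen

candidates = [
    [(6.0, 2.0), (-5.8, -2.1)],              # ~180 degree ambiguity in one cell
    [(3.0, -7.0), (-2.9, 7.2), (7.1, 2.8)],  # three near-equal solutions in another
]
prior = [(5.0, 1.0), (2.0, -6.0)]            # made-up forecast winds
print(dealias(candidates, prior))
```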
Abstract:
Distributive tactile sensing is a method of tactile sensing in which a small number of sensors monitors the behaviour of a flexible substrate which is in contact with the object being sensed. This paper describes the first use of fibre Bragg grating sensors in such a system. Two systems are presented: the first is a one-dimensional metal strip with an array of four sensors, which is capable of detecting the magnitude and position of a contacting load. This system is favourably compared experimentally with a similar system using resistive strain gauges. The second system is a two-dimensional steel plate with nine sensors which is able to distinguish the position and shape of a contacting load, or the positions of two loads simultaneously. This system is compared with a similar system using 16 infrared displacement sensors. Each system uses neural networks to process the sensor data to give information concerning the type of contact. Issues and limitations of the systems are discussed, along with proposed solutions to some of the difficulties.
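To illustrate the final processing stage, the sketch below trains a small neural network to map four strain-like readings to the position and magnitude of a contacting load. The simply supported beam used to generate the synthetic readings, and all numerical values, are assumptions standing in for the instrumented strip and its fibre Bragg grating measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative sketch: four "sensors" report the bending moment of a simply
# supported beam under a point load; a small network recovers (position, load).

rng = np.random.default_rng(0)
L, x_s = 1.0, np.array([0.2, 0.4, 0.6, 0.8])     # beam length, sensor stations

def readings(a, P):
    """Bending-moment proxy at each sensor for load P applied at position a."""
    m = np.where(x_s < a, P * (L - a) * x_s / L, P * a * (L - x_s) / L)
    return m + rng.normal(scale=0.002, size=m.shape)   # measurement noise

# Synthetic training set: random load positions and magnitudes.
a_tr = rng.uniform(0.1, 0.9, 2000)
P_tr = rng.uniform(0.5, 5.0, 2000)
X_tr = np.array([readings(a, P) for a, P in zip(a_tr, P_tr)])
y_tr = np.column_stack([a_tr, P_tr])

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)

a_true, P_true = 0.35, 2.0
print(net.predict(readings(a_true, P_true).reshape(1, -1)))  # roughly recovers [0.35, 2.0]
```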
Abstract:
Supply chain formation (SCF) is the process of determining the set of participants and exchange relationships within a network with the goal of setting up a supply chain that meets some predefined social objective. Many proposed solutions for the SCF problem rely on centralized computation, which presents a single point of failure and can also lead to problems with scalability. Decentralized techniques that aid supply chain emergence offer a more robust and scalable approach by allowing participants to deliberate between themselves about the structure of the optimal supply chain. Current decentralized supply chain emergence mechanisms are only able to deal with simplistic scenarios in which goods are produced and traded in single units only, and without taking into account production capacities or input-output ratios other than 1:1. In this paper, we demonstrate the performance of a graphical inference technique, max-sum loopy belief propagation (LBP), in a complex multi-unit supply chain emergence scenario which models additional constraints such as production capacities and input-to-output ratios. We also provide results demonstrating the performance of LBP in dynamic environments, where the properties and composition of participants are altered as the algorithm is running. Our results suggest that max-sum LBP produces consistently strong solutions on a variety of network structures in a multi-unit problem scenario, and that performance tends not to be affected by on-the-fly changes to the properties or composition of participants.
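As a toy illustration of the underlying message-passing computation (not the supply chain formulation of the paper), the sketch below runs max-sum on a two-variable factor graph and checks the result against exhaustive enumeration; all utilities are arbitrary, and on this tree-structured example the messages are exact.

```python
import numpy as np
from itertools import product

# Max-sum on a tiny factor graph: two binary variables with unary utilities
# f1, f2 and one pairwise utility g. Loopy belief propagation generalises this
# message passing to the larger, cyclic supply-chain graphs discussed above.

f1 = np.array([0.0, 1.2])              # unary utility of x1
f2 = np.array([0.5, 0.0])              # unary utility of x2
g = np.array([[1.0, 0.0],              # pairwise utility g[x1, x2]
              [0.2, 1.5]])

# Messages from the pairwise factor to each variable (maximise over the other).
m_g_to_x1 = np.max(g + f2[np.newaxis, :], axis=1)   # folds in x2's unary message
m_g_to_x2 = np.max(g + f1[:, np.newaxis], axis=0)   # folds in x1's unary message

x1 = int(np.argmax(f1 + m_g_to_x1))
x2 = int(np.argmax(f2 + m_g_to_x2))
print("max-sum assignment:", (x1, x2))

# Brute-force check of the utility-maximising assignment.
best = max(product([0, 1], repeat=2),
           key=lambda a: f1[a[0]] + f2[a[1]] + g[a[0], a[1]])
print("exhaustive assignment:", best)
```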
Abstract:
The growth in complexity and functional importance of integrated navigation systems (INS) leads to high losses when the equipment fails. The paper is devoted to the development of an INS diagnosis system that allows the cause of a malfunction to be identified. The proposed solutions make it possible to take into account any changes in the sensors' dynamic and accuracy characteristics by means of the appropriate error-model coefficients. Under actual conditions of INS operation, the determination of the current values of the sensor-model and estimation-filter parameters relies on identification procedures. The results of full-scale experiments are given, which corroborate the expediency of parametric identification of INS error models during bench testing.
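As a minimal illustration of such parametric identification (not the error models used in the paper), the sketch below estimates a single coefficient, a constant gyro bias, from simulated bench-test data with a scalar Kalman filter; the noise levels, the bias value and the random-constant bias model are assumptions.

```python
import numpy as np

# Illustrative sketch: identify a constant gyro bias from a static bench run
# with a scalar Kalman filter (random-constant model, so no process noise).

rng = np.random.default_rng(1)
true_bias = 0.03            # deg/s, constant during the bench run
sigma_meas = 0.1            # deg/s, measurement noise of the rate reading

x_hat, P = 0.0, 1.0         # bias estimate and its variance
R = sigma_meas ** 2
for _ in range(500):
    z = true_bias + rng.normal(scale=sigma_meas)   # rate reading, bench static
    K = P / (P + R)                                # Kalman gain
    x_hat = x_hat + K * (z - x_hat)                # measurement update
    P = (1 - K) * P

print(f"identified bias: {x_hat:.4f} deg/s (true {true_bias} deg/s)")
```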
Abstract:
This is a follow up to "Solution of the least squares method problem of pairwise comparisons matrix" by Bozóki published by this journal in 2008. Familiarity with this paper is essential and assumed. For lower inconsistency and decreased accuracy, our proposed solutions run in seconds instead of days. As such, they may be useful for researchers willing to use the least squares method (LSM) instead of the geometric means (GM) method.
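For readers comparing the two weighting schemes, the sketch below computes geometric means (GM) weights for a small, mildly inconsistent pairwise comparison matrix and then minimises the least squares method (LSM) objective numerically; the matrix is invented, and the generic optimiser used here is not the dedicated method proposed in the paper or in Bozóki's work.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative comparison of GM weights with a direct numerical minimisation of
# the LSM objective sum_ij (a_ij - w_i / w_j)^2 for a made-up 3x3 matrix.

A = np.array([[1.0,   2.0,  6.0],
              [1/2.0, 1.0,  4.0],
              [1/6.0, 1/4.0, 1.0]])
n = A.shape[0]

# Geometric means method: row-wise geometric mean, normalised to sum to one.
gm = np.prod(A, axis=1) ** (1.0 / n)
w_gm = gm / gm.sum()

def lsm_objective(w):
    return sum((A[i, j] - w[i] / w[j]) ** 2 for i in range(n) for j in range(n))

res = minimize(lsm_objective, w_gm,          # GM weights as the starting point
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
               bounds=[(1e-6, 1.0)] * n)
print("GM  weights:", np.round(w_gm, 4))
print("LSM weights:", np.round(res.x, 4))
```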