862 results for computing systems design
Abstract:
Aplanatic designs are of great interest in the field of optics, since they are free from spherical aberration and from linear coma in the axial direction. Nevertheless, no thin aplanatic design based on a lens can be found in the literature to date. This work presents the first aplanatic thin lens (in this case a dome-shaped faceted TIR lens performing light collimation), designed for LED illumination applications. Due to its TIR structure (defined as an anomalous microstructure, as we will see), this device presents good color-mixing properties as well as high optical efficiency, which we will show by means of raytrace simulations.
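For context (a textbook condition, not a result from the abstract), aplanatism is usually expressed through the Abbe sine condition: a system corrected for spherical aberration is also free of linear coma when

\[ \frac{n \sin\theta}{n' \sin\theta'} = m = \text{const.} \]

for all ray pairs, where n and n' are the object- and image-space refractive indices, θ and θ' the ray angles, and m the lateral magnification. For a collimator such as the lens above, with the source at the focus, this reduces to h = f sin θ, with h the exit ray height and f the focal length.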
Abstract:
LEDs are replacing fluorescent and incandescent bulbs as illumination sources due to their low power consumption and long lifetime. Visible Light Communication (VLC) exploits the short switching times of LEDs to transmit information. Although LED switching speeds are in the Mbps range, higher speeds (hundreds of Mbps) can be reached by using modulation techniques with high bandwidth efficiency. However, these techniques require a more complex driver, which drastically increases power consumption. In this work, an energy-efficiency analysis of the different VLC modulation techniques and drivers is presented. In addition, the design of new VLC driver schemes is described.
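As context for the bandwidth-efficiency argument (a standard relation, not a figure from the abstract), the bit rate achievable with M-ary modulation in a channel of bandwidth B is on the order of

\[ R_b \approx B \log_2 M, \]

so moving from on-off keying (M = 2) to, say, 16-QAM (M = 16) quadruples the bit rate within the same LED bandwidth, at the price of a driver that must synthesize multilevel waveforms and therefore consumes more power.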
Abstract:
Modern object-oriented languages like C# and Java enable developers to build complex applications in less time. These languages favor heap-allocated, pass-by-reference objects for user-defined data structures. This simplifies programming by automatically managing memory allocation and deallocation in conjunction with automated garbage collection. This simplification comes at the cost of performance: using pass-by-reference objects instead of lighter-weight pass-by-value structs can have a significant memory impact in some cases. These costs can be critical when such applications run in resource-limited environments such as mobile devices and cloud computing systems. We explore how a simple and uniform memory model can be exploited to improve performance. In this work we address the problem by providing an automated and sound static conversion analysis which identifies whether a by-reference type can be safely converted to a by-value type, where the conversion may result in performance improvements. This work focuses on C# programs. Our approach is based on a combination of syntactic and semantic checks to identify classes that are safe to convert. We evaluate the effectiveness of our work in identifying convertible types and the impact of the transformation. The results show that transforming reference types to value types can have a substantial performance impact in practice. In our case studies we optimize the Barnes-Hut program, whose total memory allocation decreased by 93% and whose execution time was reduced by 15%.
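As an illustration of the kind of rewrite such an analysis enables (a hypothetical sketch, not the paper's actual tool or code), a small C# class whose instances never escape, are never aliased and are never compared by reference identity can be converted from a reference type to a value type:

```csharp
// Hypothetical example: a small, immutable 3-D point of the kind an
// N-body simulation such as Barnes-Hut allocates in large numbers.

// Before: a class, so every point is a heap object with a header,
// traced and collected by the GC.
public class PointClass
{
    public double X, Y, Z;
    public PointClass(double x, double y, double z) { X = x; Y = y; Z = z; }
}

// After: a struct, so points live on the stack or inline inside arrays,
// removing per-object headers and GC pressure. The conversion is only
// safe if the analysis proves the type is never null-checked, never
// shared mutably, and never tested for reference equality.
public readonly struct PointStruct
{
    public readonly double X, Y, Z;
    public PointStruct(double x, double y, double z) { X = x; Y = y; Z = z; }
}
```

An array of PointStruct then stores all coordinates contiguously in one allocation, which is the kind of saving behind allocation reductions of the order reported for Barnes-Hut.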
Abstract:
There is increasing interest in the intersection of human-computer interaction and public policy. This day-long workshop will examine successes and challenges related to public policy and human-computer interaction, in order to provide a forum to create a baseline of examples and to start the process of writing a white paper on the topic.
Abstract:
Thermal loads produced by environmental actions generate appreciable stresses in massive statically indeterminate structures such as arch dams. Some studies point to ambient temperature variation as the second most frequent cause of repairs in concrete dams in service, and it is likewise a cause of cracking in an appreciable percentage of cases. Dams are singular infrastructures because of their dimensions, their service life, their impact on the territory and the risk their presence implies. Assessing that risk requires, among other tools, mathematical models that predict structural behavior, and such models must reproduce reality as faithfully as possible. Moreover, in a scenario of possible climate change with rising mean temperatures, society needs to know how sensitive infrastructures will behave under future climatic scenarios. Nevertheless, few studies have addressed the determination of the temperature field in concrete dams.
In this research, existing thermal calculation models have been improved by incorporating new physical heat-transfer phenomena between the structure and its surrounding environment, and new, more efficient methodologies have been proposed to quantify other heat-transfer mechanisms. The methodology has been applied to a case study for which an extensive record of concrete temperatures was available; the quality of the predictions produced by the various thermal models has been verified on this pilot case, the models have been compared with one another, and the consequences of the temperature predictions of some of the thermal models on the structural response of the case study have been determined. The thermal models have also been used to characterize arch dams thermally, studying the effect of certain atmospheric variables and geometric features of dams on their thermal response. In addition, a methodology has been proposed to evaluate the thermal and structural response of infrastructures to the meteorological changes induced by climate change; applied to a case study, an arch dam, it yielded the dam's future thermal and structural response under several climatic scenarios. In view of this possible change in meteorological variables, various adaptation measures are detailed and a modification of the Spanish dam design regulations is proposed regarding the calculation of the design temperature distribution. Finally, conclusions are drawn and future lines of research are suggested, both to extend knowledge of the temperature distribution inside dams and its consequences on structural response, and to develop new procedures for defining design thermal loads as well as possible adaptation measures against climate change.
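For orientation (a standard formulation of the physics involved, not equations quoted from the thesis), the temperature field T in the dam body obeys the heat conduction equation

\[ \rho c \, \frac{\partial T}{\partial t} = k \, \nabla^2 T, \]

with a mixed boundary condition on the air-exposed faces combining convection, absorbed solar radiation and long-wave radiative exchange,

\[ -k \, \frac{\partial T}{\partial n} = h \, (T_s - T_a) - \alpha \, q_s + \varepsilon \sigma \left( T_s^4 - T_{sky}^4 \right), \]

where ρ, c and k are the density, specific heat and conductivity of the concrete, h the convection coefficient, T_s and T_a the surface and air temperatures, α the solar absorptivity, q_s the incident solar irradiance, ε the surface emissivity and σ the Stefan-Boltzmann constant. Additional transfer mechanisms of the kind mentioned above enter as further terms in this surface energy balance.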
Abstract:
A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, specifically by their multiplication rules, which makes the model a syntactic abstraction of the way cells manipulate information. A NEP defines a theoretical computing device able to solve NP-complete problems efficiently in terms of time; in practice, NEPs simulated on conventional computers are expected to solve complex real-world problems (requiring high scalability) at the price of high spatial complexity. In the NEP model, cells are represented by words encoding their DNA sequences. Informally, at any moment of the computation, the state of the system is described by a collection of words, each representing one cell; these fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide through simple bio-operations, but only fit words (much as in natural selection) are kept for the next configuration. As a computing tool, a NEP defines a parallel and distributed symbolic-processing architecture, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have appeared, engendering a family of models named Networks of Bio-inspired Processors (NBP), and their computational completeness, efficiency and universality have been thoroughly investigated and proved; the NEP model can therefore be considered to have reached maturity.
The main motivation of this End of Degree project (EOG project for short) is to propose a practical approach that closes the gap between the theoretical NEP model and a real implementation executable on high-performance computing platforms, in order to solve the complex problems that today's society demands. Until now, the tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to their execution environment, whether through specific software frameworks (Hadoop) or direct hardware usage (GPUs). In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of the NEP model (or of its compatible variants), either locally, as a traditional application, or distributed using cloud services. Nepfix was developed over a 7-month cycle and is currently in its second iteration, the prototype phase having been left behind. It is designed as a modular, self-contained application written in Java 8: no specific execution environment is required, since any Java virtual machine is a valid container.
Nepfix consists of two components or modules. The first module corresponds to the execution, and therefore the simulation, of a NEP. Its development took the current state of the theoretical model as a reference, including the most common processors and filters of the NEP model family. This component also offers flexibility of execution: the simulator's capabilities can be extended without modifying Nepfix, using Python as a scripting language for custom logic. Within this component, a representation standard for the NEP model based on the JSON format has been defined, together with a way of representing and encoding words, which is necessary for communication between servers. An important characteristic of this component is that it can be considered an isolated application, so the distribution and execution strategies are totally independent of it.
The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component, since this front had not been explored by other research groups until now; at this stage the focus is not on results but on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs in particular. The main characteristic of cloud applications is that they are managed by the platform and are normally encapsulated in a container; in the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocols to communicate with the other instances. As added value, Nepfix addresses two distinct implementation perspectives of the distribution and execution model (developed in two different iterations), which have a very significant impact on the simulator's capabilities and restrictions. The first iteration uses an asynchronous execution model: the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that allows cloud tools to be used for transparent scaling (with respect to load balancing between processors), but it produces undesired effects such as non-determinism in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration corresponds to the synchronous execution model: the elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure, namely a RabbitMQ message queue server. Nevertheless, for sufficiently large problems the benefits outweigh the drawbacks, since distribution is immediate (there are no restrictions), although scaling is not trivial.
In conclusion, the concept of Nepfix as a computational framework can be considered successful: the technology is viable and the first results confirm that the properties originally sought have been achieved. Many fronts remain open for future research. This document proposes approaches to some of the problems identified, such as error recovery and the dynamic splitting of a NEP into different subdomains; other problems beyond the scope of this project remain open to future development, for example the standardization of word representation and optimizations of the synchronous execution model. Finally, some preliminary results of this EOG project were recently presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". Development has not stopped since then, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, problems and solutions produced during the seven-month development cycle are worth gathering and presenting; this work is thus more the beginning of a line of research that may have a wider impact on the scientific community than a mere End of Degree project.
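To make the JSON-based representation tangible (a purely illustrative sketch: the abstract states that the definition language is JSON-based but does not give its schema, so every field name below is hypothetical), a NEP definition might look like:

```json
{
  "name": "example-nep",
  "alphabet": ["a", "b", "c"],
  "stoppingCondition": { "type": "maxSteps", "value": 100 },
  "processors": [
    {
      "id": "p1",
      "initialWords": ["ab"],
      "rules": [ { "type": "substitution", "from": "a", "to": "c" } ],
      "inputFilter":  { "type": "contains", "symbols": ["a"] },
      "outputFilter": { "type": "excludes", "symbols": ["a"] }
    }
  ],
  "edges": [ ["p1", "p1"] ]
}
```

A simulator can then load such a document, instantiate the processors and filters it declares, and run the compute-synchronize cycle either locally or across cloud instances.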
Abstract:
Mobile robotics constitutes an area of development and exploitation of growing interest. There are examples of mobile robotics of outstanding relevance in industry, and strong growth is expected in the field of service robotics. The software architecture of mobile robots usually includes components in charge of governance, navigation, perception, etc., all of major importance. There is, however, one element that robots of this kind can hardly do without: the one in charge of controlling the speed of the device as it moves. The present project proposes the development of two PID controllers, one model-based and one non-model-based. These controllers must operate on a tricycle-configuration robot available at the Department of Computing Systems, and must therefore be programmed in the C language to run on the digital signal processor dedicated to that task in said robot (dsPIC33FJ128MC802).
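For reference (the standard control law, not a project-specific detail), a PID speed controller computes the actuation from the error e(t) = v_ref(t) - v(t) as

\[ u(t) = K_p \, e(t) + K_i \int_0^t e(\tau) \, d\tau + K_d \, \frac{de(t)}{dt}, \]

which on a DSP is implemented in the discrete form with sampling period T_s,

\[ u_k = K_p \, e_k + K_i \, T_s \sum_{j=0}^{k} e_j + K_d \, \frac{e_k - e_{k-1}}{T_s}. \]

In the model-based variant the gains are derived from an identified model of the robot's drive dynamics, whereas the non-model-based variant tunes them empirically, e.g. through Ziegler-Nichols-style experiments.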
Abstract:
The starting point for this thesis is the hypothesis that it is possible to establish global evaluation methods for the degree of utility of the construction systems used in building enclosures. Such methods should make it possible to determine, from a finite set of alternative systems, which are objectively the most suitable for selection in a specific decision context, and should provide an objective justification for that decision. Alongside this general hypothesis, a second, particular hypothesis was posed from the outset: construction systems based on prefabricated components, or on erection processes with a high degree of industrialization, would yield higher utility values than traditional masonry-based systems. To verify these two hypotheses, a coherent set of twelve building enclosure systems was initially selected to serve as a witness sample of their potential diversity, and a comparative evaluation was carried out. The proposed evaluation method considers a range of factors of diverse nature that cannot be reduced to a single parameter or magnitude allowing a linear assessment of relative suitability, nor an absolute ranking of the alternative construction systems. To resolve this tour de force, or methodological challenge, evaluation methodologies that allow such comparisons to be established rationally were applied: a family of methods originating in the exact sciences known as multi-criteria decision aid methods, in particular the so-called ELECTRE method.
The analysis was applied to twelve construction systems selected to adequately represent the three categories established to characterize all possible construction systems: weight, degree of prefabrication and degree of ventilation. Although the combination of these three basic categories yields a total of eighteen conceptual subcategories, twelve were finally adopted, as this number was considered sufficiently extensive for the proposed analysis while eliminating non-relevant types. Applying the proposed method to these twelve "witness" systems confirmed the higher utility of prefabricated, heavy, non-ventilated systems. Following the analysis in Part II of the thesis on the twelve witness systems, the construction systems included in the Catalogue of Constructive Elements of the CTE (2010 version) were mapped onto the eighteen subcategories defined in Part II, and all the façade enclosure systems included in the Catalogue were then parameterized. This systematic parameterization made it possible, by computing the mean values of the parameters of the systems belonging to each family established by the Catalogue, to obtain an indicative comparative characterization of the utility of those families, both for each individual parameter and as an overall assessment. Once the full parameterization was complete, a simulated application of the validation methodology developed in Part II of this thesis was carried out in order to check its suitability for the case. In conclusion, the development of a multi-criteria decision aid tool applied to the CTE Catalogue of Constructive Elements has proved technically feasible and yields significant results. Two construction systems were designed by applying the tool, one with a non-ventilated façade and one with a ventilated façade; comparing these two improved systems with the other systems analyzed confirms their high degree of objective utility. This exercise of designing a specific construction system that meets the requirements of a particular decision-maker thus demonstrates the usefulness of the proposed algorithm when applied to the design of construction systems. The thesis incorporates two methodological innovations and three instrumental innovations.
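As background on the method (standard ELECTRE definitions, not formulas taken from the thesis), ELECTRE builds a pairwise outranking relation between alternatives; the concordance index supporting the claim "system a is at least as good as system b" over criteria g_j with weights w_j is

\[ c(a,b) = \frac{\sum_{j \,:\, g_j(a) \ge g_j(b)} w_j}{\sum_j w_j}, \]

and a outranks b when c(a,b) reaches a chosen concordance threshold and no single criterion vetoes the claim (the discordance test). Ranking the twelve witness systems then follows from the resulting outranking graph rather than from a single aggregate score, which is what allows heterogeneous, non-commensurable criteria to be handled.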
Abstract:
A simple evolutionary process can discover sophisticated methods for emergent information processing in decentralized spatially extended systems. The mechanisms underlying the resulting emergent computation are explicated by a technique for analyzing particle-based logic embedded in pattern-forming systems. Understanding how globally coordinated computation can emerge in evolution is relevant both for the scientific understanding of natural information processing and for engineering new forms of parallel computing systems.
Abstract:
Wireless sensor networks applied to the automated control of environments represent an emerging computing paradigm in which multiple nodes, equipped with sensors, autonomous computing systems and wireless communication capability, form a network whose highly dynamic topology makes it possible to acquire information about the complex systems being monitored. One of the essential factors in increasing poultry productivity is the control of the animals' environment. The methods currently used for environmental monitoring and control cannot account for the large number of internal microenvironments in animal production facilities, and they also require complex cabled infrastructures. In this context, the objective of this work was to develop and test an automated environmental control system based on wireless sensors that assists in, and brings greater reliability to, the control of automated environments. The system monitors variables that influence poultry productivity, such as temperature, humidity and other physico-chemical variables of the poultry house. The infrastructure developed was tested in an experimental poultry house and resulted in a secure and highly scalable system capable of controlling and monitoring the environment while collecting and recording data. The ZigBee® protocol was used to manage the system's data flow. The communication efficiency of the system inside the poultry house was analyzed by monitoring lost data packets; the tests showed a loss of approximately 2% of the packets sent, demonstrating the efficiency of ZigBee® networks for managing the data flow inside the poultry house. It can therefore be concluded that deploying a ZigBee® network to automate animal production environments with real-time data collection is possible and viable, using an Internet-integrated system comprising electronic instrumentation, wireless communication and software engineering.
Abstract:
These days, as we face extremely powerful attacks on servers over the Internet (say, by Advanced Persistent Threat attackers or by surveillance by a powerful adversary), Shamir has claimed that “Cryptography is Ineffective”, and some understood it as “Cryptography is Dead!” In this talk I will discuss the implications for cryptographic systems design when facing such strong adversaries. Is crypto dead, or do we need to design it better, taking into account not only mathematical constraints but also systems-vulnerability constraints? Can crypto be effective at all when your computer or your cloud is penetrated? What is lost and what can be saved? These are very basic issues at this point in time, when we face a potential loss of privacy and security.
Abstract:
Paper submitted to Euromicro Symposium on Digital Systems Design (DSD), Belek-Antalya, Turkey, 2003.
Abstract:
In recent years, an important volume of research in Natural Language Processing has concentrated on the development of automatic systems to deal with affect in text. The approaches considered have dealt mostly with explicit expressions of emotion, at the word level. Nevertheless, expressions of emotion are often implicit, inferable from situations that have an affective meaning. Dealing with this phenomenon requires automatic systems to have “knowledge” of the situation, of the concepts it describes and of their interactions, so as to be able to “judge” it the way a person would. This necessity motivated us to develop the EmotiNet knowledge base, a resource for the detection of emotion in text based on commonsense knowledge about concepts, their interactions and their affective consequences. In this article, we briefly present the process undergone to build EmotiNet and subsequently propose methods to extend the knowledge it contains. We then analyse the performance of implicit affect detection using this resource, and compare the results obtained with EmotiNet to those of alternative methods for affect detection. Following the evaluations, we conclude that the structure and content of EmotiNet are appropriate for the automatic treatment of implicitly expressed affect, that the knowledge it contains can easily be extended, and that, overall, methods employing EmotiNet obtain better results than traditional emotion detection approaches.
Abstract:
This qualitative study focuses on what contributes to making a music information-seeking experience satisfying in the context of everyday life. Data were collected through in-depth interviews conducted with 15 younger adults (18 to 29 years old). The analysis revealed that satisfaction could depend on both hedonic (i.e., experiencing pleasure) and utilitarian outcomes. It was found that two types of utilitarian outcomes contributed to satisfaction: (1) the acquisition of music, and (2) the acquisition of information about music. Information about music was gathered to (1) enrich the listening experience, (2) increase one's music knowledge, and/or (3) optimize future acquisition. This study contributes to a better understanding of music information-seeking behavior in recreational contexts. It also has implications for music information retrieval systems design: results suggest that these systems should be engaging, include a wealth of extra-musical information, allow users to navigate among music items, and encourage serendipitous encountering of music.
Abstract:
This paper proposes to build on previous research on the use of real options in strategic decision making (Carayannis and Sipp, 2010) and to instill some real options-related concepts stemming from systems design, more particularly from engineering. It also builds on the previously established concepts of strategic knowledge serendipity and arbitrage, and of strategic knowledge co-opetition, co-evolution and co-specialization, developed by Carayannis (2009). The application of real options “in” systems, and of real options to innovation and innovation policies, demonstrates how embedded real options can be identified more effectively, and therefore how the decision whether or not to exercise them can be made more effectively.