932 results for multiplicity of solutions


Relevance:

80.00%

Publisher:

Abstract:

This paper explores the hitherto futile quest to develop disciplines on the trade- and investment-distorting effects of services subsidies. It sheds light on the multiplicity of factors that have weighed on the conduct of negotiations on subsidy disciplines in a services trade context at both the global and preferential levels, and advances a few thoughts on what the future may hold for the adoption of such disciplines. The analysis suggests that WTO Members are unlikely to reach a consensus any time soon on subsidy disciplines for services beyond those that currently (and timidly) obtain in the GATS and in many preferential trade agreements. The main reason behind this conclusion is a marked rise in the value of preserving policy space in a trading environment characterized by considerably greater global market contestability than two decades ago.

Relevance:

80.00%

Publisher:

Abstract:

This article presents the model and implementation of a multiagent fuzzy system (MAFS) that automates the search for solutions to telecommunications incidents expressed imprecisely by users and later registered in a knowledge base that preserves their original vagueness and the ancestor/descendant relationships between incidents. The processing of fuzzy incidents, whatever their causes, is based on a formula that transforms the intervals of the fuzzy incidents into a computational representation, and on the interaction between the different kinds of software agents and the human users. A search algorithm is used to find and suggest solutions to the incident originally stated, and it is illustrated with an example. Preliminary results show user satisfaction in a large percentage of the presented cases. The system is adaptive and allows new solutions to be recorded for future users.
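As an illustration of the matching step, here is a minimal Python sketch of how imprecisely described incidents could be compared against stored ones using fuzzy intervals. The triangular representation and the overlap similarity are assumptions made for the example; the paper's own transformation formula is not reproduced.

```python
from dataclasses import dataclass

@dataclass
class FuzzyInterval:
    """Triangular fuzzy number (low, peak, high) for an imprecise value."""
    low: float
    peak: float
    high: float

    def membership(self, x: float) -> float:
        """Degree to which crisp value x belongs to this interval."""
        if x <= self.low or x >= self.high:
            return 0.0
        if x <= self.peak:
            return (x - self.low) / (self.peak - self.low)
        return (self.high - x) / (self.high - self.peak)

def similarity(a: FuzzyInterval, b: FuzzyInterval, samples: int = 100) -> float:
    """Overlap-based similarity: area of min(a, b) over area of max(a, b)."""
    lo, hi = min(a.low, b.low), max(a.high, b.high)
    inter = union = 0.0
    for i in range(samples + 1):
        x = lo + (hi - lo) * i / samples
        ma, mb = a.membership(x), b.membership(x)
        inter += min(ma, mb)
        union += max(ma, mb)
    return inter / union if union else 0.0

# Knowledge base: (incident described as a fuzzy "severity", stored solution).
kb = [
    (FuzzyInterval(0, 2, 4), "restart the local modem"),
    (FuzzyInterval(3, 5, 7), "reassign the DSLAM port"),
    (FuzzyInterval(6, 8, 10), "escalate to network operations"),
]

query = FuzzyInterval(4, 6, 8)  # user's vague description of the incident
best = max(kb, key=lambda entry: similarity(query, entry[0]))
print("suggested solution:", best[1])
```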

Relevance:

80.00%

Publisher:

Abstract:

This paper presents and develops a generalized concept of Non-Strict Independent And-Parallelism (NSIAP). NSIAP extends the applicability of Independent And-Parallelism (IAP) by enlarging the class of goals that are eligible for parallel execution, while maintaining IAP's ability to run non-deterministic goals in parallel and to preserve the computational complexity the programmer expects from the execution of the program. First, a parallel execution framework is defined and some fundamental correctness results, in the sense of equivalence of solutions with the sequential model, are discussed for this framework. The issue of efficiency is then considered. Two new definitions of NSI are given for the cases of pure and impure goals respectively, and efficiency results are provided for programs parallelized under these definitions, including treatment of the case of goal failure: not only is a reduction of execution time guaranteed (modulo run-time overheads) in the absence of failure, but it is also shown that in the worst case of failure no slow-down will occur. In addition to applying to NSI, these results carry over to and complete previous results obtained in the context of IAP, which did not deal with the case of goal failure. Finally, some practical examples of the application of the NSIAP concept to the parallelization of a set of programs are presented, and performance results showing the advantage of using NSI are given.
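The paper's setting is logic programming, but the core intuition can be sketched in a language-neutral way: goals known to be independent (no shared unbound variables) may run concurrently, and a failing goal cancels its siblings so that, in the worst case, no more work is done than in sequential execution. The following Python sketch is an analogy under those assumptions, not the paper's formal execution model.

```python
from concurrent.futures import FIRST_EXCEPTION, ThreadPoolExecutor, wait

class GoalFailure(Exception):
    """Raised when a goal has no solution (logic-programming failure)."""

def solve_conjunction(goals):
    """Run independent goals in parallel; fail fast if any goal fails.

    `goals` are nullary callables that either return a binding (dict)
    or raise GoalFailure. Independence (no shared unbound variables)
    must be established beforehand, e.g. by compile-time analysis.
    """
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(g) for g in goals]
        done, pending = wait(futures, return_when=FIRST_EXCEPTION)
        # If some goal failed, cancel the rest: in the worst case we do
        # no more work than sequential execution (no slow-down).
        for f in pending:
            f.cancel()
        bindings = {}
        for f in done:
            bindings.update(f.result())  # re-raises GoalFailure if any
        return bindings

# Two goals with no shared unbound variables: eligible for NSIAP.
g1 = lambda: {"X": 3}
g2 = lambda: {"Y": 4}
print(solve_conjunction([g1, g2]))  # {'X': 3, 'Y': 4}
```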

Relevance:

80.00%

Publisher:

Abstract:

Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use this kind of system to connect with large communities in order to gather ideas for the improvement of products or services. Originating from simple suggestion boxes, Idea Management Systems have advanced beyond collecting ideas and aspire to be knowledge management solutions capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overflow and to recognizing submissions of questionable quality with a reasonable allocation of time and effort. This thesis focuses on the problem area of idea assessment and contributes a number of solutions that allow ideas submitted to an Idea Management System to be filtered, compared and evaluated. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system and to compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management, successfully recognizing the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources for undertaking experiments in the area of Idea Management Systems and has become a forum that has gathered a number of academic and industrial partners.
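As a hedged illustration of how an ontology enables cross-system comparison, the following Python sketch publishes one idea as RDF with rdflib; the gi2mo: class and property names are indicative only and should be checked against the published Gi2MO vocabulary.

```python
from rdflib import RDF, Graph, Literal, Namespace, URIRef

# Namespace of the Gi2MO ontology; the class/property names used below
# are illustrative assumptions, not verified against the vocabulary.
GI2MO = Namespace("http://purl.org/gi2mo/ns#")

g = Graph()
g.bind("gi2mo", GI2MO)

idea = URIRef("http://example.org/ims/idea/42")
g.add((idea, RDF.type, GI2MO.Idea))
g.add((idea, GI2MO.content, Literal("Add offline mode to the mobile client")))
g.add((idea, GI2MO.hasStatus, GI2MO.UnderReview))

# Because ideas from different Idea Management Systems share one schema,
# a single query can compare innovation across system boundaries.
for s in g.subjects(RDF.type, GI2MO.Idea):
    print("idea found:", s)
```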

Relevance:

80.00%

Publisher:

Abstract:

This doctoral dissertation studies the work of the Spanish architect Emilio Pérez Piñero, all of it within the field of spatial demountable and deployable bar structures. It compiles the documentation that makes his research transmissible and generalizes the study of the behaviour of deployable structures. Pérez Piñero's contributions are original and attractive but, so far, have had no followers; on the other hand, research in this field is scarce (actual realizations even more so), and such structures require solving their geometric definition as well as their mobility and structural behaviour. In the part corresponding to demountable structures, the study is limited to reticulated domes of a single layer, following the lattice and assembly system devised by Piñero, on the grounds that his contribution should be documented without venturing further into a field of research that already has abundant studies. The mathematical solution is given, together with a computer program for the complete geometric definition of the lattice employed. Deployable structures are characterized by the use of bars arranged in an "X" within the thickness of the structure, generating both flat and curved surfaces. In both cases, mobility in the mechanism phase is analysed, both for Piñero's solutions and for the complementary schemes presented. The geometric relationships that must hold for the movement of the bars to be possible are studied; these relationships, which determine the geometric definition, are particularly complex for structures deploying into spherical surfaces. Regarding structural behaviour, besides analysing Piñero's works, documenting and defining their components, several possible structures are proposed for each mechanism. In particular, the configuration of double-layer grids of constant depth is developed in great detail, including a comparative study of nine different variants. The wide range of possible applications of this structural type is shown.
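As a hedged illustration of the kind of geometric relationship involved, the sketch below checks one common form of the compact-foldability condition for a chain of scissor ("X") units, namely that the sum of the two bar semi-lengths is constant along the chain. This is the textbook condition for such chains, not necessarily Piñero's original formulation.

```python
def foldable(units, tol=1e-9):
    """Check a classic compact-foldability condition for a chain of
    scissor ("X") units: the sum of the two bar semi-lengths, measured
    from the pivot to the end hinges, must be constant along the chain.
    `units` is a list of (a, b) semi-length pairs.
    """
    total = units[0][0] + units[0][1]
    return all(abs(a + b - total) <= tol for a, b in units)

# A chain of identical units is trivially foldable ...
print(foldable([(1.0, 1.0)] * 4))          # True
# ... and unequal semi-lengths can still fold if the sums match,
# which is how curved (e.g. spherical) profiles are obtained.
print(foldable([(1.2, 0.8), (0.9, 1.1)]))  # True
print(foldable([(1.0, 1.0), (1.0, 1.3)]))  # False
```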

Relevance:

80.00%

Publisher:

Abstract:

The latest video coding standards, like HEVC (High Efficiency Video Coding, approved in January 2013), require for their implementation devices able to support a high computational load. Since a single Digital Signal Processor (DSP) is no longer sufficient, multicore devices have recently appeared on the market. However, due to their novelty, the working methodology for producing solutions for these configurations is at a very early stage, since currently most of the work must be performed manually. Consequently, the objective set is to find methodologies that ease this process. The study has focused on extending a methodology, still under development, for the generation of solutions for PCs and embedded systems. The RVC (Reconfigurable Video Coding) and HEVC standards have been employed, as well as DSPs from Texas Instruments. The work attempts to address all the factors that influence both the development and the deployment of these new video decoder implementations, ranging from the tools used to aspects of the partitioning of the algorithms, without causing a drop in application performance. The results of this study are a description of the methodology employed, a characterization of the software migration process, and performance measurements for the HEVC standard in an RVC-based implementation.
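As a rough illustration of the partitioning problem (not the actual tool chain or the RVC actors), the following Python sketch maps two stand-in decoder stages onto separate processes connected by queues, the multicore analogue of an actor pipeline.

```python
import multiprocessing as mp

def stage(fn, q_in, q_out):
    """Generic dataflow actor: consume from q_in, apply fn, produce to q_out."""
    while True:
        item = q_in.get()
        if item is None:           # end-of-stream marker
            q_out.put(None)
            return
        q_out.put(fn(item))

# Stand-ins for decoder stages; real RVC actors would go here.
def parse(bits):
    return ("syntax", bits)

def decode(syntax):
    return ("pixels", syntax)

if __name__ == "__main__":
    q0, q1, q2 = mp.Queue(), mp.Queue(), mp.Queue()
    workers = [
        mp.Process(target=stage, args=(parse, q0, q1)),
        mp.Process(target=stage, args=(decode, q1, q2)),
    ]
    for w in workers:
        w.start()
    for frame in ["frame0", "frame1", None]:   # None terminates the pipeline
        q0.put(frame)
    while (out := q2.get()) is not None:
        print("decoded:", out)
    for w in workers:
        w.join()
```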

Relevance:

80.00%

Publisher:

Abstract:

This thesis focuses on the application of statistical mechanics to the study of static and jammed packings of soft granular media. This approach lies between the traditional micro- and macromechanical ones: it tries to establish the expected macroscopic properties of a granular system by starting from a micromechanical analysis of the features of the particles and the interactions between them, and by considering the macroscopic constraints of the system. To do so, statistics is used together with some principles, concepts and definitions of continuum mechanics (e.g. stress and strain fields, elastic potential energy) as well as some homogenization techniques. The interaction between the particles of a granular system is also examined, revisiting theories of contact and of capillary forces (produced by liquid menisci when the media are wet). The basic idea of statistical mechanics is that among the solutions of a physical problem (e.g. the static arrangement of particles in mechanical equilibrium) there is a class that is compatible with our macroscopic knowledge of the system (volume, stress, elastic potential energy, etc.). This class still contains an enormous number of solutions. In the absence of further information there is no a priori reason for favoring any one of them over the others, so it is natural to assign equal statistical weights to all the solutions compatible with our requirements. This procedure leads to the most probable statistical distribution of some quantities, but it is necessary to guarantee that all the solutions are equally accessible through the assembling procedure, or protocol. This approach was originally set up for the study of ideal gases, but it can be extended to non-thermal systems too. In this connection, the first attempt for granular systems was the volume ensemble, developed about twenty years ago. Since then, this model has been followed and improved upon by many researchers around the world, while two other ensembles have also been set up: the energy and the force-moment (i.e. stress multiplied by volume) ensembles. Each ensemble is described by different macroscopic constraints, but all of them result in a Maxwell-Boltzmann statistical distribution precisely controlled by the respective constraints. Building on this previous work, in this thesis the classical statistical mechanics approach is adapted to the case of soft granular media. A general framework, which includes these ensembles and uses a force-moment phase space and a density-of-states function, is proposed. This theoretical development is complemented by molecular dynamics (MD, or DEM) simulations of the cyclic compression of 2D granular systems. The simulations consider a linear, damped elastic mechanical interaction to which, in some cases, the cohesive force produced by water menisci is added; they were run on single and parallel processors. The results not only show that the statistical distributions of the force-moment components obtained with a specific protocol appear to be universal, but also that there are many computational issues that can determine which packings, or solutions, are actually attained.
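As a purely numerical illustration of the kind of distribution involved (synthetic data, not the thesis results), the sketch below fits a Boltzmann-like exponential to a force-moment-type quantity; the control parameter, estimated by the sample mean, plays the role temperature plays for ideal gases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one force-moment component over many packings:
# Boltzmann-like statistics P(gamma) ~ exp(-gamma / chi), where chi is
# the ensemble's control parameter (the temperature analogue).
chi_true = 2.5
gamma = rng.exponential(chi_true, size=10_000)

# Maximum-likelihood estimate of the control parameter is the sample mean.
chi_hat = gamma.mean()
print(f"true chi = {chi_true}, estimated chi = {chi_hat:.3f}")

# Check the exponential form: the log-density should be close to linear.
counts, edges = np.histogram(gamma, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
slope = np.polyfit(centers[mask], np.log(counts[mask]), 1)[0]
print(f"log-density slope = {slope:.3f} (expected about {-1 / chi_true:.3f})")
```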

Relevance:

80.00%

Publisher:

Abstract:

Cloud computing and, more particularly, private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have given even more voice to these fears. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this paper, we propose a management architecture that tries to tackle these problems; it offers a common way of managing several cloud solutions and an interface that can be tailored to the needs of the user. This management architecture is designed in a modular way, using a generic information model. We have validated our approach by implementing the components needed for this architecture to support a sample private IaaS solution: OpenStack.
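A minimal sketch of the modular idea follows: a common management interface with one pluggable driver per cloud solution. The class and method names are illustrative, not the architecture's actual API.

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Common management interface; one driver per IaaS solution."""

    @abstractmethod
    def list_servers(self) -> list[str]: ...

    @abstractmethod
    def create_server(self, name: str, image: str) -> str: ...

class OpenStackDriver(CloudDriver):
    """Driver for one concrete backend (a stub; a real driver would
    call the OpenStack APIs)."""

    def list_servers(self):
        return ["vm-1", "vm-2"]

    def create_server(self, name, image):
        return f"openstack:{name}"

class CloudManager:
    """Single entry point that hides which backend serves each request,
    mitigating vendor lock-in and allowing several clouds at once."""

    def __init__(self):
        self.drivers: dict[str, CloudDriver] = {}

    def register(self, label: str, driver: CloudDriver):
        self.drivers[label] = driver

    def create_server(self, label, name, image):
        return self.drivers[label].create_server(name, image)

manager = CloudManager()
manager.register("private", OpenStackDriver())
print(manager.create_server("private", "web-01", "ubuntu-22.04"))
```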

Relevance:

80.00%

Publisher:

Abstract:

This paper tackles the optimization of applications in multi-provider hybrid cloud scenarios from an economic point of view. In these scenarios, the great majority of solutions offer automatic allocation of resources on different cloud providers based on their current prices. Our approach, however, introduces a novel solution by making maximum use of divide and conquer. This paper describes a methodology to create cost-aware cloud applications that can be broken down into the three most important components of cloud infrastructures: computation, network and storage. A real videoconference system has been modified in order to evaluate this idea with both theoretical and empirical experiments. This system has become a widely used tool in several national and European projects for e-learning and collaboration purposes.
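A minimal sketch of the underlying cost model: price each component separately per provider and place each one where it is cheapest. The prices and demand figures below are invented for the example.

```python
# Hypothetical monthly prices per provider; real figures would come
# from the providers' price lists.
prices = {
    "provider_a": {"cpu_hour": 0.10, "gb_out": 0.09, "gb_stored": 0.020},
    "provider_b": {"cpu_hour": 0.08, "gb_out": 0.12, "gb_stored": 0.025},
}

# Monthly demand of the application, broken into the three components.
demand = {"cpu_hour": 2000, "gb_out": 500, "gb_stored": 300}

def component_cost(provider, component):
    return prices[provider][component] * demand[component]

# Divide and conquer: place each component independently where cheapest.
placement = {
    comp: min(prices, key=lambda p: component_cost(p, comp))
    for comp in demand
}
total = sum(component_cost(placement[c], c) for c in demand)
print(placement)   # e.g. computation on one provider, network on another
print(f"total monthly cost: ${total:.2f}")
```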

Relevance:

80.00%

Publisher:

Abstract:

The aim of this thesis is to study in depth the use of models (conceptual and numerical) as a prediction and analysis tool for hydrogeological studies, mainly from the point of view of mine drainage. First, the basic concepts and the usual ranges of parameter variation in the modelling of groundwater flow and particle transport are developed, together with the recommended modelling process, analysing each of its stages step by step; this material is based on the author's experience and contrasted against the available bibliography. MODFLOW is then described as a modelling tool, assessing the advantages offered by its most common pre/post-processing software (Processing MODFLOW, Mod CAD and Visual MODFLOW). Third, the criteria and parameters required to develop a conceptual model are introduced: numerical discretization, definition of the boundary and initial conditions, and all the factors, whether anthropic or natural, that affect the system; the process of model creation, data input, execution, convergence criteria, calibration and extraction of results, as implemented in Visual MODFLOW, is developed. Next, five practical cases in which the author has applied MODFLOW and the different pre/post-processing packages (Processing MODFLOW, Mod CAD and Visual MODFLOW) are analysed, describing for each one the objectives, the conceptual model defined, the discretization, the parameter definition, the sensitivity analysis, the results obtained and the prediction of future states. Fifth, a program developed by the author is presented which improves the facilities offered by MODFLOW and Visual MODFLOW, extending the modelling possibilities and the connection to other computers. A series of solutions to the most typical problems that can arise during modelling with MODFLOW is then presented. Finally, the conclusions and recommendations reached are set out, with the purpose of assisting in the development of hydrogeological models, both conceptual and numerical.
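As a hedged, modern illustration of the same workflow (grid definition, boundary and initial conditions, parameters, solver, run), the following sketch scripts a minimal MODFLOW model with the open-source FloPy package rather than the GUI tools named above; it assumes an MF2005 executable is available on the system.

```python
import numpy as np
import flopy

# Minimal steady-state model: one layer, 10 x 10 cells of 100 m each.
m = flopy.modflow.Modflow("demo", exe_name="mf2005")
flopy.modflow.ModflowDis(m, nlay=1, nrow=10, ncol=10,
                         delr=100.0, delc=100.0, top=10.0, botm=0.0)

# Boundary and initial conditions: fixed heads on the west and east edges.
ibound = np.ones((1, 10, 10), dtype=int)
ibound[:, :, 0] = ibound[:, :, -1] = -1   # constant-head cells
strt = np.full((1, 10, 10), 10.0)
strt[:, :, -1] = 5.0                      # head drop drives west-to-east flow
flopy.modflow.ModflowBas(m, ibound=ibound, strt=strt)

flopy.modflow.ModflowLpf(m, hk=10.0)      # hydraulic conductivity
flopy.modflow.ModflowPcg(m)               # PCG solver
flopy.modflow.ModflowOc(m)                # output control

m.write_input()
# success, buff = m.run_model()           # requires the MODFLOW executable
```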

Relevance:

80.00%

Publisher:

Abstract:

Evolvable hardware (EH) is an interesting alternative to conventional digital circuit design: the autonomous generation of solutions for a given task permits self-adaptation of the system to changing environments, and such systems present inherent fault tolerance when evolution is performed intrinsically. Systems based on FPGAs that use Dynamic and Partial Reconfiguration (DPR) for evolving the circuit are an example. Also thanks to DPR, these systems can be provided with scalability, a feature that allows a system to change the number of allocated resources at run time in order to vary some feature, such as performance. The combination of both aspects leads to scalable evolvable hardware (SEH), which changes in size as an extra degree of freedom when trying to reach the optimal solution by means of evolution. The main contributions of this paper are an architecture for a scalable and evolvable hardware processing array system, some preliminary evolution strategies that take scalability into consideration, and experimental results showing the benefits of combining evolution and scalability. A digital image filtering application is used as a use case.
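A minimal sketch of the combined idea follows: a (1+4) evolution strategy whose genome carries both the configuration bits and the array size, so evolution can grow or shrink the circuit. The fitness function is a toy stand-in for filtering quality measured on hardware.

```python
import random

random.seed(1)

def fitness(genome):
    """Toy stand-in for filtering quality measured on the hardware:
    reward matching a target pattern, penalize resource usage (size)."""
    size = len(genome)
    quality = sum(genome) / size              # pretend '1' cells filter well
    return quality - 0.02 * size              # bigger arrays cost resources

def mutate(genome):
    g = genome[:]
    if random.random() < 0.2 and len(g) > 2:  # shrink the processing array
        g.pop()
    elif random.random() < 0.2:               # grow it (scalability via DPR)
        g.append(random.randint(0, 1))
    i = random.randrange(len(g))              # flip one configuration bit
    g[i] ^= 1
    return g

# (1+4) evolution strategy over a variable-size genome.
parent = [random.randint(0, 1) for _ in range(8)]
for gen in range(200):
    children = [mutate(parent) for _ in range(4)]
    parent = max([parent] + children, key=fitness)
print("best size:", len(parent), "fitness:", round(fitness(parent), 3))
```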

Relevance:

80.00%

Publisher:

Abstract:

Content protection is a key component of a successful multimedia services platform, as proven by the plethora of solutions currently on the market. In this paper we analyze a new network scenario where permanent bidirectional connectivity and video-aware encryption technologies allow the trusted operation of ubiquitous end devices. We propose new scalable models for a content protection architecture that may achieve dramatic improvements in robustness, reliability and scalability. Selective ciphering and countermeasures are included in these models, together with several examples of their application.
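As an illustration of selective ciphering (the scheme and frame types are simplified assumptions, not the paper's models), the sketch below encrypts only the key-frame payloads of a stand-in stream with AES-CTR from the Python cryptography package, leaving delta frames in the clear to reduce the load on constrained end devices.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def cipher_frame(payload: bytes, nonce: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(payload) + enc.finalize()

# Stand-in stream: (frame_type, payload). Only key frames are ciphered.
stream = [("I", b"keyframe-data"), ("P", b"delta-1"), ("P", b"delta-2")]

protected = []
for i, (ftype, payload) in enumerate(stream):
    if ftype == "I":                        # selective ciphering
        nonce = i.to_bytes(16, "big")       # unique per frame; never reuse
        protected.append((ftype, cipher_frame(payload, nonce)))
    else:
        protected.append((ftype, payload))

print(protected[0][1] != stream[0][1])      # True: key frame is ciphered
print(protected[1][1] == stream[1][1])      # True: delta frame is clear
```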

Relevance:

80.00%

Publisher:

Abstract:

Solid-state lasers (SSL) have been used in the microelectronics and photovoltaic (PV) industries for decades but, currently, laser technology appears as a key enabling technology to improve efficiency and reduce production costs in the fabrication of high-efficiency solar cells. Moreover, the fact that the interaction between the laser radiation and the device is normally localized and restricted to a controlled volume makes SSL a tool of choice for the implementation of low-temperature concepts in the PV industry. Specifically, SSL are ideally suited to improving the electrical performance of the contacts, further improving the efficiency of these devices. Advanced concepts based on standard laser firing or advanced laser doping techniques are optimal solutions for the back contact of a significant number of structures of growing interest in the c-Si PV industry, and a number of solutions have been proposed as well for emitter formation, to reduce metallization optical losses, or even to remove the contacts completely from the front of the cell. In this work we present our most recent results on SSL applications for contact optimization in c-Si solar cell technology, including applications in devices demanding low-temperature processes, such as heterojunction solar cells.

Relevance:

80.00%

Publisher:

Abstract:

The aim of this project is the study of scalability and high availability solutions in distributed systems, and their implementation in whichever of the systems analyzed by Telefónica Digital, PopBox and Rush, is deemed most suitable. Nowadays, many services and applications are hosted directly on the Web, reducing the cost of using certain services and improving the productivity and competitiveness of the companies that use them. The growth of cloud technologies experienced in recent years creates the need to build systems that are scalable, reliable and available as much of the time as possible. A failure in the service does not affect a single company but all the companies using the service. Throughout this project, the high availability and scalability solutions implemented in several distributed systems are studied and a critical evaluation of each one is made. The suitability of these solutions for the systems to which they will later be applied, PopBox and Rush, is also analyzed. Different solutions have been designed for the platforms involved, following several approaches and performing an exhaustive analysis of each one, taking into account the performance and reliability of each approach. Once the most suitable strategy was determined, a reliable implementation of the system was carried out. For each module implemented, a phase of unit and integration testing was conducted to ensure the good behaviour of the system and its integrity when changes are made. Specifically, the objectives to be achieved are the following: 1. Exhaustive analysis of the scalability and high availability systems that currently exist. 2. Design of a general, highly available and scalable solution taking into account the previous point. 3. Analysis of the suitability of the PopBox and Rush systems for the design of a scalable distributed environment. 4. Design and implementation of an ad-hoc solution in the chosen system.
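As a sketch of one classic building block behind such designs (illustrative, not the solution actually implemented for PopBox or Rush), the code below round-robins requests over replicas and routes around those that fail a health check: replicas are added for scalability, and failures are skipped for availability.

```python
import itertools

class Replica:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def health_check(self) -> bool:
        # A real check would probe the service, e.g. an HTTP GET /health
        # against the instance, with a timeout.
        return self.healthy

class LoadBalancer:
    """Round-robin over healthy replicas: scalability by adding replicas,
    availability by skipping the ones that fail their health check."""

    def __init__(self, replicas):
        self.replicas = replicas
        self._cycle = itertools.cycle(replicas)

    def route(self):
        for _ in range(len(self.replicas)):
            r = next(self._cycle)
            if r.health_check():
                return r
        raise RuntimeError("no healthy replicas: total outage")

nodes = [Replica("rush-1"), Replica("rush-2"), Replica("rush-3")]
lb = LoadBalancer(nodes)
nodes[1].healthy = False                     # simulate a node failure
print([lb.route().name for _ in range(4)])   # rush-2 is skipped
```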

Relevance:

80.00%

Publisher:

Abstract:

In the last decade we have seen how small, lightweight aerial platforms, known as Mini Unmanned Aerial Vehicles (MUAV), equipped with heterogeneous sensors have become a 'most wanted' Remote Sensing (RS) tool. Most of the off-the-shelf aerial systems found on the market provide waypoint navigation. However, they do not rely on a tool that computes the aerial trajectories considering all the aspects that allow the aerial missions to be optimized. One of the most demanded RS applications of MUAVs is image surveying. The images acquired are typically used to build a high-resolution image, i.e., a mosaic of the workspace surface, although the approach may be applied to any other application where a sensor-based map must be computed. This thesis provides a study of this application and a set of solutions and methods to address this kind of aerial mission using a fleet of MUAVs. In particular, a set of algorithms is proposed for map-based sampling and aerial coverage path planning (ACPP). Regarding map-based sampling, the approaches proposed consider workspaces with different shapes and surface characteristics. The workspace is sampled considering the sensor characteristics and a set of mission requirements. The algorithm applies different computational-geometry approaches, providing a unified way to deal with workspaces of different shapes and surface characteristics so that they can be surveyed by one or more MUAVs. This feature introduces an optimization step prior to path planning. After that, the ACPP problem is theorized and a set of ACPP algorithms to compute the MUAV trajectories is proposed. The problem addressed here is that of covering a wide area using MUAVs with limited autonomy, so the mission must be accomplished in the shortest possible time. The aerial survey is usually subject to a set of workspace restrictions, such as the take-off and landing positions and a safety distance between elements of the fleet; moreover, it has to avoid no-fly zones. Three different algorithms, based on graph searching, heuristic and meta-heuristic (e.g. memetic, evolutionary) approaches, have been studied to address this problem. Finally, an extended account of field experiments applying the above methods, together with the materials and methods adopted in outdoor missions, is presented. The reported outcomes demonstrate that the findings of this thesis improve ACPP missions for mapping purposes in an efficient and safe manner.
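As an illustration of the simplest ACPP building block (a sketch, not one of the thesis algorithms), the code below generates a boustrophedon sweep over a rectangular workspace, with row spacing derived from the sensor footprint and the required lateral overlap between image strips.

```python
def boustrophedon(width, height, footprint, overlap=0.3):
    """Back-and-forth sweep waypoints over a width x height workspace.

    footprint: across-track size of one image on the ground (meters).
    overlap:   required lateral overlap between adjacent image strips,
               so the rows can be mosaicked into one map.
    """
    spacing = footprint * (1.0 - overlap)   # distance between sweep rows
    waypoints, y, left_to_right = [], spacing / 2, True
    while y < height:
        xs = (0.0, width) if left_to_right else (width, 0.0)
        waypoints += [(xs[0], y), (xs[1], y)]
        left_to_right = not left_to_right   # reverse direction each row
        y += spacing
    return waypoints

# 100 m x 60 m field, 20 m camera footprint, 30 % lateral overlap.
for wp in boustrophedon(100, 60, 20):
    print(wp)
```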