907 results for User-Designer Collaboration, Problem Restructuring, Scenario Building


Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we have studied in depth the development of support models for distance collaborative learning and subsequently devised an architecture based on the principles of the Computer Supported Collaborative Learning (CSCL) paradigm. The proposed architecture addresses a specific problem: coordinating groups of students who perform collaborative distance learning activities. Our approach uses Cooperative Work, Artificial Intelligence and Human-Computer Interaction techniques, as well as ideas from the fields of Pedagogy and Psychology. We have designed a complete, open and generic solution. Our architecture exploits new information technologies to achieve an effective system for distance education. It is organised into four levels: Configuration, Experience, Organisation and Reflection. This model has been implemented in a system called DEGREE. In DEGREE, each level of the architecture gives rise to an independent subsystem related to the others. The application benefits from the use of shared structured workspaces. The configuration subsystem allows customising the elements that define an experience and a workspace. The experience subsystem gathers the users' contributions to build joint solutions to a given problem. The students' interventions build up a structure based on a generic conversation graph. Moreover, all user actions are registered in order to represent explicitly the complete process of reaching the group solution.
Those data are also stored in a common memory, which constitutes the organisation subsystem. The reflection subsystem studies the user interventions. This analysis allows us to infer conclusions about the way the group works and its attitudes towards collaboration. The inference process takes the observer's subjective knowledge into account. The process of developing the architecture and the system in parallel has run through a five-pass cycle involving successive stages of prototyping and formative evaluation. At each stage of that process, we considered the users' feedback to improve the architecture's functionalities as well as the system interface. This approach has allowed us to prove the usability and validity of our proposal.
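The generic conversation graph mentioned above can be pictured as a set of contribution types with allowed reply transitions. The sketch below is purely illustrative: the node types and transition rules are invented stand-ins, not the actual DEGREE graph, and the full action log mirrors the idea of the organisational memory.

```python
# Hypothetical contribution types and reply rules, invented for illustration
ALLOWED = {
    "proposal": {"agreement", "counter-proposal", "question"},
    "question": {"clarification"},
    "clarification": {"agreement", "counter-proposal"},
    "counter-proposal": {"agreement", "counter-proposal", "question"},
    "agreement": set(),
}

class Experience:
    def __init__(self):
        self.log = []  # full action log, in the spirit of the organisational memory

    def contribute(self, author, kind, reply_to=None):
        # a reply is only accepted if the graph allows this transition
        if reply_to is not None:
            parent_kind = self.log[reply_to][1]
            if kind not in ALLOWED[parent_kind]:
                raise ValueError(f"{kind!r} may not follow {parent_kind!r}")
        self.log.append((author, kind, reply_to))
        return len(self.log) - 1

exp = Experience()
root = exp.contribute("ana", "proposal")
q = exp.contribute("luis", "question", reply_to=root)
exp.contribute("ana", "clarification", reply_to=q)
```

Because every accepted contribution is appended to the log, the complete path to the group solution remains explicitly represented and available for later analysis.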

Relevance:

30.00%

Publisher:

Abstract:

Agricultural water management needs to evolve in view of increased water scarcity, especially where farming and natural protected areas are closely linked. At the study site of Doñana (southern Spain), water is shared by rice producers and a world-heritage biodiversity ecosystem. Our aim is to contribute to defining adaptation strategies that may build resilience to increasing water scarcity and minimize water conflicts between agricultural and natural systems. The analytical framework links a participatory process with quantitative methods to prioritize the adaptation options. Adaptation measures proposed bottom-up are evaluated by a multi-criteria analysis (MCA) that includes both socioeconomic criteria and criteria on the ecosystem services affected by the adaptation options. Criteria weights are estimated by three different methods (analytic hierarchy process, Likert scale and equal weights), which are then compared. Finally, scores from the MCA are input into an optimization model used to determine the optimal land-use distribution that maximizes utility and land-use diversification under different scenarios of funds and water availability. While our results show a spectrum of priorities among stakeholders, one overriding theme is to define a way to restore part of the rice fields to natural wetlands. These results hold true under the current climate scenario and even more so under an increased water scarcity scenario.
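The weighted-aggregation core of the MCA step described above can be sketched as follows. All measure names, per-criterion scores and weights are invented for illustration; in the study the AHP weights come from pairwise comparisons and the Likert weights from stakeholder ratings.

```python
# Illustrative adaptation measures with made-up per-criterion scores
measures = {
    "restore rice fields to wetland": [0.9, 0.4, 0.8],
    "improve irrigation efficiency":  [0.6, 0.8, 0.5],
    "crop diversification":           [0.5, 0.7, 0.6],
}
criteria = ["ecosystem services", "farm income", "water savings"]

# Three weighting schemes, mirroring the AHP / Likert / equal-weights comparison
weight_schemes = {
    "AHP":    [0.5, 0.2, 0.3],      # would be derived from pairwise comparisons
    "Likert": [0.4, 0.3, 0.3],      # would be normalised stakeholder ratings
    "equal":  [1 / 3, 1 / 3, 1 / 3],
}

def mca_score(scores, weights):
    """Weighted-sum aggregation of per-criterion scores."""
    return sum(s * w for s, w in zip(scores, weights))

for scheme, w in weight_schemes.items():
    ranking = sorted(measures, key=lambda m: mca_score(measures[m], w), reverse=True)
    print(scheme, "->", ranking[0])
```

With these illustrative numbers, wetland restoration tops the ranking under all three schemes, echoing the overriding theme reported in the abstract; the resulting scores would then feed the land-use optimization model.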

Relevance:

30.00%

Publisher:

Abstract:

Enabling real end-user development is the next logical stage in the evolution of Internet-wide service-based applications. Successful composite applications rely on heavyweight service orchestration technologies that raise the bar far above end-user skills. This weakness can be attributed to the fact that the composition model does not satisfy end-user needs, rather than to the actual infrastructure technologies. In our opinion, the best way to overcome this weakness is to offer end-to-end composition, from the user interface down to service invocation, plus an understandable abstraction of building blocks and a visual composition technique empowering end users to develop their own applications. In this paper, we present a visual framework for end users, called FAST, which fulfils this objective. FAST implements a novel composition model designed to empower non-programmer end users to create and share their own self-service composite applications in a fully visual fashion. We developed the environment implementing this model as part of the European FP7 FAST Project, which was used to validate the rationale behind our approach.
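The kind of composition model described above, building blocks with typed inputs and outputs that an end user wires together, can be sketched minimally as follows. This is not the actual FAST API; the `Block` class, the block names and the validation rule are assumptions made for illustration.

```python
class Block:
    """A building block with declared inputs/outputs and a behaviour."""
    def __init__(self, name, inputs, outputs, fn):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

def connect(producer, consumer):
    # a wiring is only legal if the producer offers everything the consumer needs,
    # so invalid compositions are rejected at composition time, not at run time
    missing = set(consumer.inputs) - set(producer.outputs)
    if missing:
        raise TypeError(f"{consumer.name} is missing inputs: {missing}")
    return lambda **kw: consumer.fn(**producer.fn(**kw))

# Two hypothetical blocks of a self-service application
search = Block("search form", inputs=["query"], outputs=["product_id"],
               fn=lambda query: {"product_id": len(query)})
detail = Block("product detail", inputs=["product_id"], outputs=["page"],
               fn=lambda product_id: {"page": f"<detail for #{product_id}>"})

app = connect(search, detail)
```

A visual editor would draw the same wiring as arrows between block ports; the declared input/output lists are what makes that drawing checkable.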

Relevance:

30.00%

Publisher:

Abstract:

Context: This paper addresses one of the major end-user development (EUD) challenges, namely, how to pack today's EUD support tools with composable elements. This would give end users better access to more components which they can use to build a solution tailored to their own needs. The success of later end-user software engineering (EUSE) activities largely depends on how many components each tool has and how adaptable those components are to multiple problem domains. Objective: A system for automatically adapting heterogeneous components to a common development environment would offer a sizeable saving of time and resources within the EUD support tool construction process. This paper presents an automated adaptation system for transforming EUD components to a standard format. Method: The system is based on the use of description logic. Starting from a generic UML2 data model, this description logic is able to check whether an end-user component can be transformed to this modeling language through subsumption or as an instance of the UML2 model. Besides, it automatically finds a consistent, non-ambiguous and finite set of XSLT mappings to prepare the data so that the component can be leveraged as part of a tool that conforms to the target UML2 component model. Results: The proposed system has been successfully applied to components from four prominent EUD tools, which were automatically converted to a standard format. To validate the proposed system, rich internet applications (RIAs), used as an operational support system for operators at a large services company, were developed from the automatically adapted standard-format components. These RIAs would have been impossible to develop using each EUD tool separately. Conclusion: The positive results of applying our system for automatically adapting components from current tool catalogues are indicative of the system's effectiveness.
Use of this system could foster the growth of web EUD component catalogues, leveraging a vast ecosystem of user-centred SaaS to further current EUSE trends.
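The check-then-map idea described in the Method can be caricatured in a few lines. The real system reasons with description logic over a UML2 data model and emits XSLT; here, purely as an assumed stand-in, a component is a dict of typed properties, "subsumption" is structural containment, and the mapping is a field-renaming table.

```python
# Invented stand-in for the generic UML2 component model
GENERIC_COMPONENT = {
    "name": str,
    "inputs": list,
    "outputs": list,
}

def subsumed_by_model(component):
    """True if the component can be treated as an instance of the generic model."""
    return all(key in component and isinstance(component[key], typ)
               for key, typ in GENERIC_COMPONENT.items())

def adapt(component, mapping):
    """Rename vendor-specific fields to standard ones (toy XSLT stand-in)."""
    return {mapping.get(k, k): v for k, v in component.items()}

# A hypothetical vendor component using its own field names
vendor_widget = {"title": "price chart", "inputs": ["ticker"], "outputs": ["svg"]}
mapping = {"title": "name"}          # the discovered, non-ambiguous mapping

standard = adapt(vendor_widget, mapping)
```

The point of the sketch is the division of labour: the subsumption check decides whether an adaptation exists at all, and the mapping makes the component usable by any tool that expects the standard format.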

Relevance:

30.00%

Publisher:

Abstract:

Since the memristor was first built in 2008 at HP Labs, countless devices and models have been presented, and new applications appear frequently. However, integrating the device at the circuit level is not straightforward, because available models are still immature and/or impose high computational loads, making their simulation long and cumbersome. This study assists circuit/system designers in the integration of memristors into their applications, while aiding model developers in the validation of their proposals. We introduce a memristor application framework to support the work of both the model developer and the circuit designer. First, the framework includes a library with the best-known memristor models, which is easily extensible with upcoming models. Systematic modifications have been applied to these models to provide better convergence and significant simulation speedups. Second, a quick device simulator allows studying the response of the models under different scenarios, helping the designer with the selection of stimuli and operation time. Third, fine-tuning of the device, including parameter variations and threshold determination, is also supported. Finally, SPICE/Spectre subcircuit generation is provided to ease the integration of the devices into application circuits. The framework gives the designer total control over convergence, computational load, and the evolution of system variables, overcoming the usual problems in the integration of memristive devices.
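One of the best-known models such a library would include is the linear ion-drift model of the original HP device. The sketch below integrates it with a plain Euler step; the parameter values and the sinusoidal stimulus are illustrative only, not taken from the paper.

```python
import math

# Illustrative device parameters (not from the paper)
R_ON, R_OFF = 100.0, 16e3      # on/off resistance, ohm
D = 10e-9                      # device thickness, m
MU_V = 1e-14                   # ion mobility, m^2 s^-1 V^-1
DT = 1e-6                      # integration time step, s

def simulate(v_of_t, steps, w0=0.1 * D):
    """Euler integration of dw/dt = MU_V * R_ON / D**2 * i(t) for the
    linear ion-drift model; returns the memristance trace."""
    w, trace = w0, []
    for n in range(steps):
        m = R_ON * w / D + R_OFF * (1 - w / D)   # memristance M(w)
        i = v_of_t(n * DT) / m                   # device current
        # update the state variable, with hard bounds 0 <= w <= D
        w = min(max(w + MU_V * R_ON / D**2 * i * DT, 0.0), D)
        trace.append(m)
    return trace

# 2 ms of a 50 Hz sinusoidal stimulus
trace = simulate(lambda t: math.sin(2 * math.pi * 50 * t), steps=2000)
```

A framework like the one described would wrap exactly this kind of loop behind a stimulus/operation-time selector, plus the window functions and convergence fixes that a bare Euler sketch omits.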

Relevance:

30.00%

Publisher:

Abstract:

A damage scenario model is developed and compared with the damage distribution observed after the 2011 Lorca earthquake. The strong ground motion models considered include five modern ground motion prediction equations (GMPEs) widely used worldwide. Capacity and fragility curves from the Risk-UE project are used to model building vulnerability and expected damage. Damage estimates resulting from different combinations of GMPE and capacity/fragility curves are compared with the actual damage scenario, establishing the combination that best explains the observed damage distribution. In addition, some recommendations are proposed, including correction factors for the fragility curves, in order to better reproduce the observed damage in masonry and reinforced concrete buildings. The lessons learned should help improve the simulation of expected damage from future earthquakes in Lorca or other regions of Spain with similar attenuation and vulnerability characteristics.
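Risk-UE-style fragility curves are lognormal in the demand parameter, so the damage-state probabilities used in such a comparison reduce to differences of normal CDFs. The sketch below shows that step; the median/dispersion values are illustrative placeholders, not the actual Risk-UE parameters.

```python
from math import erf, log, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_exceed(sd, median, beta):
    """P(damage state reached or exceeded | spectral displacement sd),
    for a lognormal fragility curve with given median and dispersion."""
    return phi(log(sd / median) / beta)

def damage_probabilities(sd, curves):
    """curves: {state: (median_sd, beta)}, ordered from slight to complete."""
    states = list(curves)
    exceed = [p_exceed(sd, m, b) for m, b in curves.values()]
    probs = {"none": 1.0 - exceed[0]}
    for i, state in enumerate(states):
        upper = exceed[i + 1] if i + 1 < len(states) else 0.0
        probs[state] = exceed[i] - upper   # P(exactly in this damage state)
    return probs

# Illustrative medians (cm) and dispersions, not the actual Risk-UE values
curves = {
    "slight":    (0.7, 0.8),
    "moderate":  (1.2, 0.8),
    "extensive": (2.0, 0.8),
    "complete":  (3.5, 0.8),
}
probs = damage_probabilities(1.5, curves)
```

Applying a correction factor to a fragility curve, as the abstract recommends, amounts to shifting the median (or scaling beta) before this computation, which reshapes the resulting damage distribution.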

Relevance:

30.00%

Publisher:

Abstract:

Is the fact that people like cycling the reason for them to cycle? Or is the fact that they do cycle the reason for them to like cycling? Or is it a combination of the two? Questions of this kind reflect a problem that can be called 'the cycle of cycling consideration': in order to include cycling in the set of options to be chosen from, an individual needs to hold positive beliefs about it, especially in 'low-cycling contexts'. However, positive beliefs are unlikely to be formed at low levels of mode familiarity, that is, with little acquaintance with the mode's features, functioning and imagery; at the same time, higher levels of familiarity are only reached as the time and intensity of bicycle use increase over an individual's life course. The problem looks like a chicken-and-egg recursive cycle, since the latter condition is hardly met in places where cycling is little practised. Indeed, inside the current conglomerate of technologies, infrastructures, regulations, user practices and cultural preferences that have grown around the automobile (the current "socio-technical system of urban mobility", Urry 2004; Geels 2005, 2012), cycling is commonly considered difficult, unsafe and abnormal. Consequently, the processes of familiarity formation remain disabled and, as a result, beliefs cannot rely on mode familiarity as a source of information and influence. Without cycling familiarity, positive beliefs can only originate in personal traits (affect, values, identities, willingness, etc.), which, in low-cycling contexts, confine the possibility of cycling consideration (and eventual adoption) mainly to 'cycling enthusiasts' willing to "go against the grain" (Horton & Parkin 2012), as previous research has shown, limiting the reach of promotion policies.
The research conducted in this thesis offers a new approach to the problem of the 'cycle of cycling consideration'. It proposes a model in which cycling familiarity is introduced as a construct mediating between the final behaviour (which mode of transport the individual chooses) and the set of psychosocial constructs that are supposed to precede mode choice (beliefs and attitudes). Cycling familiarity is conceived as a measure of the real and perceived relative intensity of use of a bicycle (building upon Diana & Mokhtarian 2009), which may be formed differently for utilitarian and non-utilitarian purposes. The construct is related to the amount of time, the intensity and the regularity with which an individual has used a bicycle over his or her life course. Familiarity with a mode of transport is thus conceived as an enabling condition for properly defining the decision-making context in which individual travel mode choices are taken, in line with research postulating alternative patterns of causality between cognitive choice processes and mode behaviours (Tardiff 1977; Dobson et al. 1978; Golob et al. 1979; Golob 2001; Schwanen et al. 2012; Diana et al. 2009; Vij & Walker 2014). The thesis thus argues that the usual unidirectional attitudes-to-behaviour scheme may not be fully valid in the case of cycling consideration, and explores the hypothesis that behaviour itself influences the formation of attitudes.
The familiarity construct is articulated theoretically and methodologically, and a cross-sectional design instrument is employed to test it. Results from a telephone survey of a representative sample of 736 commuters in the Spanish city of Vitoria-Gasteiz provide suggestive (although preliminary) evidence of the role of mode familiarity as a mediator in the relationship between cycling use and the formation of beliefs and attitudes towards cycling. Measures of both cycling consideration and cycling familiarity are defined for each individual using exploratory factor analysis (EFA). On the one hand, the EFA yields a structure of the 'consideration' construct formed by four factors, three associated with positive elements and one with negative elements: (1) how commuting by bicycle is considered green and smart (G&S); (2) its pleasant and suited character (P&S); (3) its efficiency as a mode of transport for commuting (E); and (4) the main drawbacks of its use, namely the difficulties implied (sweating and exposure to adverse weather) and the sense of unsafety it generates (feeling at risk of accidents and getting stressed by traffic) (D&U). On the other hand, cycling familiarity is measured on two distinct ordinal variables (based on utilitarian and non-utilitarian use respectively), placing each individual in one of four stages towards complete mode familiarity: not familiar; barely familiar; moderately familiar; fully familiar.
Analysing the four groups of respondents defined by these familiarity stages reveals statistically significant intergroup differences, especially for the measure related to utilitarian use. Consistently, people at the lower levels of cycling familiarity have a lower consideration of the positive aspects of cycling and, conversely, exhibit greater concern about its negative characteristics than individuals who are more familiar with utilitarian cycling. Using a bicycle even occasionally for practical purposes (shopping, errands, etc.), as opposed to not using it at all, appears associated with significantly higher scores on the three positive factors (G&S, E, P&S) and with significantly lower scores on the factor relating to the negative characteristics of cycling commuting (D&U). A similar pattern appears when comparing moderate with occasional use, especially for the consideration of the negative characteristics. These results are in line with previous literature based on similar variables (e.g. de Geus et al. 2008; Stinson & Bhat 2003, 2004; Hunt & Abraham 2006; and van Bekkum et al. 2011a, among others), but here the differences are observed in a low-cycling context and derive from an analysis of the entire population of commuters, which raises the reliability of the results.
The possibility that higher levels of utilitarian bicycle use may lead to a more positive consideration of cycling opens the way to the theoretical and policy implications discussed in the thesis. On this basis, it is argued that the conventional approach based on attitude change may not be the only, or the primary, route to fostering cycling. The results point to the potential of other causal schemes, based on more decentred and distributed patterns of influence, which take a more positive view of travel habits, conceptualising them as "embodied, pre-reflective intelligence" (Schwanen et al. 2012). Such schemes lead to a more practical approach to cycling promotion, with strategies that could build on 'taster' actions or greater 'exposure' to cycling.
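The four-stage ordinal familiarity measure and the group comparison described above can be sketched as follows. The frequency thresholds, the sample and the factor scores are invented for illustration; the thesis derives the actual measure via exploratory factor analysis, not from fixed cut-offs.

```python
STAGES = ["not familiar", "barely familiar", "moderately familiar", "fully familiar"]

def familiarity_stage(trips_per_month):
    """Map self-reported utilitarian bicycle use to one of four ordinal stages
    (thresholds are illustrative assumptions)."""
    if trips_per_month == 0:
        return STAGES[0]
    if trips_per_month < 4:
        return STAGES[1]
    if trips_per_month < 12:
        return STAGES[2]
    return STAGES[3]

# (utilitarian trips/month, score on a positive consideration factor), invented
sample = [(0, 2.1), (0, 2.4), (2, 3.0), (6, 3.6), (20, 4.2), (15, 4.0)]

by_stage = {}
for trips, score in sample:
    by_stage.setdefault(familiarity_stage(trips), []).append(score)

means = {stage: sum(v) / len(v) for stage, v in by_stage.items()}
```

A finding like the one reported would show up here as mean consideration scores rising monotonically across the four stages, which is the pattern the significance tests in the thesis examine.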

Relevance:

30.00%

Publisher:

Abstract:

The development of a web platform is a complex and interdisciplinary task in which people with different roles, such as project manager, designer or developer, participate. Different usability and User Experience evaluation methods can be used at each stage of the development life cycle, but not all of them have the same influence on the software development and on the final product or system. This article presents a study of the impact of these methods applied in the context of an e-Learning platform development. The results show that the impact was strong from a developer's perspective. Developer team members considered that usability and User Experience evaluation mainly allowed them to identify design mistakes, improve the platform's usability and better understand the end users and their needs. Interviews with potential users, clickmaps and scrollmaps were rated as the most useful methods. Finally, these methods were unanimously considered very useful in the context of the entire software development, comparable only to SCRUM meetings and ahead of the other factors involved.

Relevance:

30.00%

Publisher:

Abstract:

Automated Teller Machines (ATMs) are sensitive self-service systems that require important investments in security and testing. ATM certifications are testing processes for machines that integrate software components from different vendors, performed before their deployment for public use. This project originated from the need to optimize the certification process at an ATM manufacturing company. The process identifies compatibility problems between software components through testing. It comprises a huge number of manual user tasks, which makes the process very expensive and error-prone. Moreover, it is not possible to fully automate the process, as it requires human intervention to manipulate ATM peripherals. The project presented important challenges for the development team. First, this is a critical process, as all ATM operations rely on the software under test. Second, the context of use of ATM applications is vastly different from that of ordinary software. Third, an ATM's useful lifetime exceeds 15 years, and both new and old models need to be supported. Fourth, the know-how for efficient testing depends on each specialist and is not explicitly documented. Fifth, the huge number of tests and their importance demand user efficiency and accuracy. All these factors led us to conclude that, besides the technical challenges, the usability of the intended software solution was critical to the project's success. This business context is the motivation of this Master's Thesis project. Our proposal focused on the development process applied. By combining user-centred design (UCD) with agile development, we ensured both the high priority of usability and the early mitigation of software development risks caused by the technology constraints. We performed 23 development iterations and were finally able to deliver a working solution on time and in line with users' expectations.
The project was evaluated through usability tests, in which 4 real users participated in different tests in the real context of use. The results were positive according to several metrics: error rate, efficiency, effectiveness and user satisfaction. We discuss the problems found, the benefits and the lessons learned in the process. Finally, we measured the expected project benefits by comparing the effort required by the current and the new process (once the new software tool is adopted). The savings correspond to 40% less effort (man-hours) per certification. Future work includes additional evaluation of product usability in a real scenario (with customers) and measuring the benefits in terms of quality improvement.

Relevance:

30.00%

Publisher:

Abstract:

The Integral Masonry System (IMS), consisting of steel trusses intersecting along each of the three dimensional directions of space in walls and slabs and usable with any masonry material, had previously been validated for seismic areas by an earlier adobe test. This paper compares that test with the adaptation of the IMS using hollow brick. A prototype based on a two-storey model house (6m x 6m x 6m) was built at two different scales in order to maximize the load and size allowed by the shake table: the first at half the size of the whole building (3m x 3m x 3m) and the second at a quarter of the real size (3m x 3m x 6m). Both specimens suffered only mild to moderate damage while withstanding the highest seismic action the shake table could apply: the first test did not even crack, and the second showed very little damage. The thickness of the hollow-brick wall and the diameter of the three-dimensional truss reinforcement were scaled from the real size in order to verify the system's excellent structural behaviour against the previous structural model calculations. The aim of this study is to summarize the results of the research collaboration between the ETSAM-UPM and the PUCP, in whose laboratory these tests were carried out.

Relevance:

30.00%

Publisher:

Abstract:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their connections). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows reconfiguring part of the FPGA without disrupting the application. Finally, they offer high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments with high dependability requirements. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die. The Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can cause a system failure, which may be more or less tolerable depending on the dependability requirements. In the general case, such faults must be managed in some way.
This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between them are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows balancing redundancy level and dependability. For the purpose of validating dynamic fault-management techniques, two different solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as the modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space, decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at JTAG or SelectMAP physical level). The framework is configurable, scalable and versatile, and includes fault-injection capabilities. Simulation results for some scenarios are presented and discussed.
The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other. In addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault-injection capabilities are implemented. Test results for some scenarios are also presented and discussed. In summary, this thesis covers the whole process from describing radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault-management techniques and proposing Fault Manager architectures, to validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
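The C-Layer fault handling summarized above (detect a corrupted configuration frame, then repair it through partial reconfiguration) can be illustrated with a minimal scrubbing sketch. This is not the thesis's implementation: the frame size, the golden-copy comparison, and the single-bit upset model are all simplifying assumptions for illustration.

```python
import random

FRAME_WORDS = 41  # words per configuration frame (illustrative value)

def scrub(config, golden):
    """Read back each configuration frame, compare it against a golden
    reference, and rewrite any frame that differs (blind repair).
    Returns the indices of the repaired frames."""
    repaired = []
    for i, (frame, ref) in enumerate(zip(config, golden)):
        if frame != ref:           # readback comparison detects an upset
            config[i] = list(ref)  # partial reconfiguration of that frame
            repaired.append(i)
    return repaired

def inject_fault(config, rng):
    """Flip one random configuration bit (single-event upset model).
    Returns the index of the affected frame."""
    f = rng.randrange(len(config))
    w = rng.randrange(FRAME_WORDS)
    config[f][w] ^= 1 << rng.randrange(32)
    return f

rng = random.Random(42)
golden = [[0] * FRAME_WORDS for _ in range(100)]
config = [list(fr) for fr in golden]

hit = inject_fault(config, rng)    # one upset somewhere in the C-Layer
fixed = scrub(config, golden)      # scrub cycle finds and repairs it
assert fixed == [hit] and config == golden
```

A real C-Layer Fault Manager would perform the readback over JTAG or SelectMAP and could use per-frame ECC instead of a full golden copy; the sketch only shows the detect-and-repair loop itself.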

Relevância:

30.00%

Publicador:

Resumo:

The arch has been known as a bridge structural solution for centuries; in fact, the simple nature of the arch, which requires little tensile and shear strength, was an advantage when simple materials like stone and brick were the only option. With the passage of time, especially after the Industrial Revolution, new materials were adopted in the construction of arch bridges to reach longer spans. Nowadays a long-span arch bridge is made of steel, concrete, or a combination of the two ("CFST"); as a result of using these high-strength materials, very long spans can be achieved. The current record for the longest arch belongs to the Chaotianmen bridge over the Yangtze river in China, a steel arch with a 552-meter span, while the longest reinforced concrete arch is the Wanxian bridge, which also crosses the Yangtze with a 420-meter span. Today the designer is no longer limited by span length as long as the arch bridge remains the most suitable solution among the alternatives; cable-stayed and suspension bridges are more reasonable if a very long span is desired. As with any large structure, the economic and architectural aspects of a bridge are extremely important: a narrower bridge not only has a better appearance but also requires a smaller volume of material, which makes the design more economical. Designing such a bridge requires, besides high-strength materials, precise structural analysis approaches capable of integrating the material behaviour, the complex geometry of the structure, and the various types of loads that may be applied to the bridge during its service life. Depending on the design strategy, the analysis may only evaluate the linear elastic behaviour of the structure or may consider its nonlinear properties as well.
Although most structures in the past were designed to act in their elastic range, the rapid increase in computational capacity allows us to consider different sources of nonlinearity in order to achieve a more realistic evaluation where the dynamic behaviour of the bridge is important, especially in seismic zones where large movements may occur or the structure experiences the P-Δ effect during an earthquake. This type of analysis is computationally expensive and very time consuming, and in recent years several methods have been proposed to address this problem. Discussion of recent developments in these methods and their application to long-span concrete arch bridges is the main goal of this research. Accordingly, existing long-span concrete arch bridges have been studied to gather the key information about their geometry and material properties. Based on this information, several concrete arch bridges were designed for further study, with main spans ranging from 100 to 400 meters. The structural analysis methods implemented in this study are the following. Elastic analysis: Direct Response History Analysis (DRHA): this method solves the equation of motion directly over the time history of the applied acceleration or imposed load, in the linear elastic range. Modal Response History Analysis (MRHA): like DRHA, this method is based on the time history, but the equation of motion is reduced to single-degree-of-freedom systems and the response of each mode is calculated independently; performing this analysis requires less time than DRHA. Modal Response Spectrum Analysis (MRSA): as its name suggests, this method calculates the peak response of the structure for each mode and combines them using modal combination rules, based on the given ground-motion spectrum. It is expected to be the fastest of the elastic analyses.
Inelastic analysis: Nonlinear Response History Analysis (NL-RHA): the most accurate strategy for addressing significant nonlinearities in structural dynamics is undoubtedly the nonlinear response history analysis, which is similar to DRHA but extended to the inelastic range by updating the stiffness matrix at every iteration. This onerous task clearly increases the computational cost, especially for unsymmetrical structures that must be analyzed in a full 3D model to take torsional effects into consideration. Modal Pushover Analysis (MPA): the Modal Pushover Analysis is basically MRHA extended to the inelastic stage. MRHA alone cannot solve the dynamic system there, because the resisting force fs(u, u̇) is unknown in the inelastic stage; MPA overcomes this obstacle by using a previously recorded fs to evaluate the dynamic system. Extended Modal Pushover Analysis (EMPA): the Extended Modal Pushover is one of the most recently proposed methods; it evaluates the response of the structure under multi-directional excitation using the modal pushover strategy. For a given mode, the original pushover neglects the contribution of directions other than the characteristic one; this is reasonable in a regular symmetric building, but a structure with a complex shape, like a long-span arch bridge, may undergo strong modal coupling. This method aims to account for modal coupling while taking the same computation time as MPA. Coupled Nonlinear Static Pushover Analysis (CNSP): the EMPA adds the contribution of the non-characteristic direction to the formal MPA procedure.
However, the static pushovers in EMPA are performed individually for every mode, so the resulting values from the different modes can be combined, but this is only valid in the elastic phase: as soon as any element in the structure starts yielding, the neutral axis of that section is no longer fixed for both responses during the earthquake, meaning that the longitudinal deflection unavoidably affects the transverse one, and vice versa. To overcome this drawback, CNSP suggests executing the pushover analysis for the governing modes of each direction at the same time. This strategy is expected to be more accurate than MPA and EMPA; moreover, the calculation time is reduced because only one pushover analysis is required. Regardless of the strategy, the accuracy of the structural analysis depends strongly on the modelling and numerical integration approaches used in each method. Therefore, the widely used Finite Element Method is employed in all the analyses performed in this research. Chapter 2 starts with the gathered information about constructed long-span arch bridges and continues with the geometrical and material definition of the new models. Chapter 3 provides detailed information about the structural analysis strategies; furthermore, a step-by-step description of the procedure of each method is available in Appendix A. The document ends with the description of the results and the conclusions in Chapter 4.
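MRHA, as described above, reduces the response of each mode to a single-degree-of-freedom equation of motion. The following is a minimal sketch of integrating one such modal equation (unit modal mass) with the Newmark average-acceleration method; the period, damping ratio, and ground-motion pulse are illustrative assumptions, not data from the study.

```python
import math

def newmark_sdof(omega, zeta, ag, dt, beta=0.25, gamma=0.5):
    """Integrate u'' + 2*zeta*omega*u' + omega**2*u = -ag(t) for a
    unit-mass modal oscillator with the Newmark-beta method
    (beta=1/4, gamma=1/2: average acceleration, unconditionally stable).
    Returns the displacement history, one value per sample of ag."""
    c, k = 2.0 * zeta * omega, omega ** 2
    u, v = 0.0, 0.0
    a = -ag[0] - c * v - k * u                 # initial acceleration
    keff = 1.0 / (beta * dt ** 2) + gamma * c / (beta * dt) + k
    history = [u]
    for agi in ag[1:]:
        # effective load: external force plus terms from the previous state
        peff = (-agi
                + (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
                + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                       + dt * (0.5 * gamma / beta - 1.0) * a))
        un = peff / keff
        an = (un - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        vn = v + dt * ((1.0 - gamma) * a + gamma * an)
        u, v, a = un, vn, an
        history.append(u)
    return history

# one mode with a 1 s period and 5% damping, driven by a 2 s constant
# ground-acceleration pulse and then left in free vibration
dt = 0.01
omega = 2.0 * math.pi / 1.0
ag = [1.0] * 200 + [0.0] * 200
u = newmark_sdof(omega, 0.05, ag, dt)
```

In a full MRHA each mode would be integrated this way with its own participation factor, and the modal responses would then be superposed; MPA replaces the linear restoring force k*u here with the recorded pushover curve fs.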

Relevância:

30.00%

Publicador:

Resumo:

Wireless Sensor Networks (WSNs) are one of the fastest-growing sectors in wireless networking. The rapid adoption of these networks as a solution for many new applications has increased the traffic in the radio spectrum. Because WSNs operate in the license-free Industrial, Scientific and Medical (ISM) bands, the spectrum has become saturated to the point that, within a few years, the current mode of operation will no longer be viable. The Cognitive Radio (CR) paradigm has emerged as a solution to this problem. Adding cognitive capabilities to wireless sensor networks allows their use in applications with stricter requirements for reliability, coverage, or quality of service. The networks that combine all of these features are called Cognitive Wireless Sensor Networks (CWSNs). The improved performance of CWSNs allows their use in critical applications where they could not be used before, such as structural monitoring, medical care, military scenarios, or security monitoring systems. Nevertheless, these applications also need other features that cognitive radio does not provide directly, such as security. Security in CWSNs has not yet been fully explored because, unlike spectrum sensing or collaboration, it is not essential to their basic operation; however, its study and improvement are essential for the growth of CWSNs. Therefore, the main objective of this thesis is to study the impact of some cognitive radio attacks on CWSNs and to implement countermeasures using the new cognitive capabilities, especially at the physical layer, while taking into account the limitations of WSNs.
In the course of this thesis, security strategies against two kinds of attacks of particular importance in cognitive networks have been developed: the Primary User Emulation (PUE) attack and the eavesdropping attack against privacy. To mitigate the PUE attack, a countermeasure based on anomaly detection has been developed, with two different detection algorithms implemented: the cumulative sum algorithm and the data clustering algorithm. After verifying their validity, the two have been compared with each other, and the side effects that can disturb their performance have been investigated. To combat the eavesdropping attack, a countermeasure based on the injection of artificial noise has been developed, so that the attacker cannot distinguish the information-bearing signals from the noise while the communication of interest remains unaffected. The impact of this countermeasure on network resources has also been studied. As a parallel result, a testing framework for CWSNs has been developed, consisting of a simulator and a network of real cognitive nodes. These tools have been essential for the implementation and extraction of the results presented in this thesis.
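One of the two PUE detectors mentioned above is the cumulative sum algorithm. The following is a minimal one-sided CUSUM sketch over a received-signal-strength trace; the mean, drift, and threshold parameters are illustrative assumptions, not the values used in the thesis.

```python
def cusum_detect(samples, mean, drift, threshold):
    """One-sided CUSUM: accumulate positive deviations of the received
    signal strength from its expected mean; flag an anomaly (a possible
    primary-user-emulation transmitter) once the accumulated sum exceeds
    the threshold. Returns the index of the first alarm, or None."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - mean) - drift)  # drift absorbs normal noise
        if s > threshold:
            return i
    return None

# illustrative RSSI trace: legitimate background level, then a stronger
# emulated-primary signal appears at sample 50
trace = [1.0] * 50 + [3.0] * 50
alarm = cusum_detect(trace, mean=1.0, drift=0.5, threshold=10.0)
# the alarm fires a few samples after the change, not immediately
```

The drift and threshold trade detection delay against false-alarm rate, which is exactly the kind of side effect the thesis investigates when comparing CUSUM with the clustering detector.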

Relevância:

30.00%

Publicador:

Resumo:

Energy consumption in Wireless Sensor Networks (WSNs) is a long-standing problem that has been addressed from different areas and at many levels, and it should not be approached only from the point of view of network survival. A major portion of communication traffic has migrated to mobile networks and systems; the increased use of smart devices and the introduction of the Internet of Things (IoT) give WSNs a growing influence on the carbon footprint, so optimizing the energy consumption of wireless networks could considerably reduce their environmental impact. In recent years, another problem has been added to the equation: spectrum saturation. Wireless sensor networks usually operate in unlicensed bands such as the Industrial, Scientific and Medical (ISM) bands, shared with other networks (mainly Wi-Fi and Bluetooth) whose use has grown exponentially in recent years. To address the problem of efficient spectrum utilization, Cognitive Radio (CR) has emerged as the key technology enabling opportunistic access to the spectrum. The introduction of cognitive capabilities into WSNs allows optimizing their spectral occupation; Cognitive Wireless Sensor Networks (CWSNs) not only increase the reliability of communications, but also have a positive impact on parameters such as Quality of Service (QoS), network security, and energy consumption. These new opportunities open a wide field of research on energy consumption, but they also bring challenges: the spectrum-sensing stage, the collaboration among devices (which requires extra communication), and the changes in transmission parameters all increase the total energy consumption with respect to classic WSNs. When designing CWSN optimization strategies, it must also be kept in mind that WSN nodes are very limited in terms of memory, computational power, and energy, so light strategies requiring little computing capacity must be found. Since the field of energy conservation in WSNs has been widely explored, given that it is one of their main limitations, we assume that new strategies should emerge from the new capabilities added by cognitive networks.
In this PhD thesis we propose two strategies for reducing energy consumption in CWSNs, supported by three main pillars. The first is the cognitive capability added to WSNs, which provides the ability to adapt the transmission parameters according to the available spectrum. The second is collaboration, as an intrinsic characteristic of CWSNs. The third is game theory as the decision-making algorithm, widely used in WSNs because its lightness and simplicity make it suitable for CWSNs. As a first contribution of the thesis, a complete analysis of the possibilities that cognitive radio introduces for reducing consumption in WSNs is presented. From the conclusions of this analysis, the hypotheses of the thesis are stated, concerning the validity of using cognitive capabilities as a tool for reducing consumption in CWSNs. We then develop the main contributions of the thesis: the two energy-reduction strategies based on game theory and CR. The first uses a non-cooperative game played between pairs of nodes in a simple, selfish way. In the second strategy the game is still non-cooperative, but the concept of collaboration is added and the decisions are taken collaboratively. For each strategy we present the game model, the formal analysis of its equilibria and optima, and the description of the complete strategy, including the interaction between nodes. In order to test the strategies through simulation and implementation on real devices, we have developed a testing framework composed of a cognitive simulator based on Castalia and a testbed of cognitive nodes, developed at the B105 Lab, able to communicate in three different ISM bands. This framework is another contribution of the thesis and will support further research in the CWSN area.
Finally, the results of testing the strategies are presented and discussed. The first strategy provides energy savings of over 65% compared with a WSN without cognitive capabilities, and of around 25% compared with a cognitive strategy based on periodic spectrum sensing that changes channel according to a fixed noise threshold. The algorithm behaves similarly regardless of the noise level, as long as the noise is spatially uniform. This strategy, despite its simplicity, ensures energy-optimal behaviour thanks to the use of game theory in the design of the nodes' behaviour. The collaborative strategy improves on the previous one in terms of protection against noise in more complex noise scenarios, where it provides an improvement of 50% over the first strategy.
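The strategies above are built on a non-cooperative game played between pairs of nodes. The following is a minimal sketch of finding the pure-strategy Nash equilibria of such a two-player game; the payoff matrices (negated energy costs of staying on a shared channel versus paying the sensing/switching cost to move) are an illustrative toy model, not the game actually formulated in the thesis.

```python
from itertools import product

def pure_nash(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria of a two-player game,
    given the payoff matrices of the row player A and column player B.
    A cell is an equilibrium when neither player gains by deviating
    unilaterally."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for r, c in product(range(rows), range(cols)):
        best_a = all(payoff_a[r][c] >= payoff_a[r2][c] for r2 in range(rows))
        best_b = all(payoff_b[r][c] >= payoff_b[r][c2] for c2 in range(cols))
        if best_a and best_b:
            equilibria.append((r, c))
    return equilibria

# toy channel game between two nodes: strategy 0 = stay on the shared
# channel, strategy 1 = pay the sensing/switching cost and move away;
# payoffs are negated energy costs (illustrative numbers)
A = [[-4, -1],
     [-2, -3]]
B = [[-4, -2],
     [-1, -3]]
print(pure_nash(A, B))  # → [(0, 1), (1, 0)]
```

With these costs the equilibria are the anti-coordination outcomes (one node moves, the other stays), which is the kind of stable, energy-balancing behaviour a game-theoretic channel strategy is designed around.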

Relevância:

30.00%

Publicador:

Resumo:

Existe un amplio catálogo de posibles soluciones para resolver la problemática de las zapatas de medianería así como, por extensión, las zapatas de esquina como caso particular de las anteriores. De ellas, las más habitualmente empleadas en estructuras de edificación son, por un lado, la utilización de una viga centradora que conecta la zapata de medianería con la zapata del pilar interior más próximo y, por otro, la colaboración de la viga de la primera planta trabajando como tirante. En la primera solución planteada, el equilibrio de la zapata de medianería y el centrado de la respuesta del terreno se consigue gracias a la colaboración del pilar interior con su cimentación y al trabajo a flexión de la viga centradora. La modelización clásica considera que se logra un centrado total de la reacción del terreno, con distribución uniforme de las tensiones de contacto bajo ambas zapatas. Este planteamiento presupone, por tanto, que la viga centradora logra evitar cualquier giro de la zapata de medianería y que el pilar puede, por ello, considerarse perfectamente empotrado en la cimentación. En este primer modelo, el protagonismo fundamental recae en la viga centradora, cuyo trabajo a flexión conduce frecuentemente a unas escuadrías y a unas cuantías de armado considerables. La segunda solución, plantea la colaboración de la viga de la primera planta, trabajando como tirante. De nuevo, los métodos convencionales suponen un éxito total en el mecanismo estabilizador del tirante, que logra evitar cualquier giro de la zapata de medianería, dando lugar a una distribución de tensiones también uniforme. 
Los modelos convencionales existentes para el cálculo de este tipo de cimentaciones presentan, por tanto, una serie de simplificaciones que permiten el cálculo de las mismas, por medios manuales, en un tiempo razonable, pero presentan el inconveniente de su posible alejamiento del comportamiento real de la cimentación, con las consecuencias negativas que ello puede suponer en el dimensionamiento de estos elementos estructurales. La presente tesis doctoral desarrolla un contraste de los modelos convencionales de cálculo de cimentaciones de medianería y esquina, mediante un análisis alternativo con modelos de elementos finitos, con el objetivo de poner de manifiesto las diferencias entre los resultados obtenidos con ambos tipos de modelización, analizar cuáles son las variables que más influyen en el comportamiento real de este tipo de cimentaciones y proponer un nuevo modelo de cálculo, de tipo convencional, más ajustado a la realidad. El proceso de investigación se desarrolla mediante una etapa experimental virtual que utiliza como modelo un pórtico tipo de edificación, ortogonal, de hormigón armado, con dos vanos y número variable de plantas. Tras identificar el posible giro de la cimentación como elemento clave en el comportamiento de las zapatas de medianería y de esquina, se adoptan como variables de estudio aquellas que mayor influencia puedan tener sobre el citado giro de las zapatas y sobre la rigidez del conjunto del elemento estructural. Así, se han estudiado luces de 3 m a 7 m, diferente número de plantas desde baja+1 hasta baja+4, resistencias del terreno desde 100 kN/m2 hasta 300 kN/m2, relaciones de forma de la zapata de medianería de 1,5 : 1 y 2 : 1, aumento y reducción de la cuantía de armado de la viga centradora y variación del canto de la viga centradora desde el mínimo canto compatible con el anclaje de la armadura de los pilares hasta un incremento del 75% respecto del citado canto mínimo. 
The set of frames generated by applying these variables was calculated both by conventional methods and by the finite element method. The results reveal significant discrepancies between the two approaches, which lead to significant differences in the dimensioning of this type of foundation. The use of traditional methods results, on the one hand, in over-reinforcement of the strap beam and, on the other, in under-dimensioning of the strap beam depth, the size of the party-wall footing, and the reinforcement of the first-floor beam. After the analysis and discussion of the results, the thesis proposes a new alternative method, of conventional type and therefore suitable for manual calculation in a reasonable time, which yields the key parameters governing the behaviour of party-wall and corner footings and leads to a dimensioning better adjusted to the real needs of this type of foundation.
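The contrast at the heart of the thesis is between the uniform contact pressure assumed by the conventional models and the non-uniform pressure that accompanies any rotation of the footing. The latter can be illustrated with the classic rigid-footing formula q = N/(B·L) · (1 ± 6e/L), valid while the load resultant stays inside the middle third; a minimal sketch with hypothetical values:

```python
def soil_pressure(n, m, b, l):
    """Linear (trapezoidal) contact-pressure distribution under a rigid
    rectangular footing of width b and length l carrying axial load n
    and moment m, valid while the resultant stays inside the middle
    third (e = m/n <= l/6). Shown to contrast with the uniform
    distribution the conventional strap-beam model assumes.
    Units: kN, m -> kN/m2.
    """
    e = m / n
    if e > l / 6:
        raise ValueError("resultant outside middle third: pressure lifts off")
    q_mean = n / (b * l)
    q_max = q_mean * (1 + 6 * e / l)
    q_min = q_mean * (1 - 6 * e / l)
    return q_min, q_max
```

When the strap beam or tie only partially restrains the rotation of the party-wall footing, the residual moment m makes q_max exceed the mean pressure that the conventional (uniform) model checks against the allowable soil stress.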