Abstract:
Added-value products are being developed in the ceramic industry. Optical effects such as a bright metallic shine, and new functionalities such as hydrophobicity or bactericidal behaviour, are the properties now sought in tiles. In this study, we prepared glassy coatings for tiles based on a copper pigment using a conventional industrial process. The coatings obtained present different aesthetic finishes, including a bright metallic appearance that confers high decorative value on the tile. Furthermore, these metallic coatings are hydrophobic, with water contact angles as high as 115 degrees, and also show bactericidal behaviour. A surface microstructure with nanoparticles was found in the bactericidal-hydrophobic samples, resembling the surface of hydrophobic leaves. This structure was formed by the crystallization of CuO nanoparticles as tenorite, owing to copper saturation of the glassy matrix at the coating surface.
Abstract:
Critical infrastructures support everyday activities in modern societies, facilitating the exchange of services and resources of various kinds. Their functioning is the result of the integration of diverse technologies, systems and organizations into a complex network of interconnections. The benefits of networking are accompanied by new threats and risks. In particular, because of the increased interdependency, disturbances and failures may propagate and render the whole infrastructure network unstable. This paper presents a methodology for the resilience analysis of networked systems of systems. Resilience generalizes the concept of stability of a system around a state of equilibrium with respect to a disturbance, encompassing its ability to prevent, resist and recover. The methodology provides a tool for the analysis of off-equilibrium conditions that may occur in a single system and propagate through the network of dependencies. The analysis is conducted in two stages. The first stage is qualitative: it identifies the resilience scenarios, i.e. the sequences of events triggered by an initial disturbance, which include failures and the system response. The second stage is quantitative: the most critical scenarios can be simulated, for the desired parameter settings, in order to check whether they are successfully handled, i.e. recovered to nominal conditions, or whether they end in network failure. The proposed methodology aims to provide effective support for resilience-informed design.
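As an illustration of the quantitative stage, the following is a minimal sketch (not the authors' tool) of how a disturbance might be propagated over a dependency graph and checked for recovery to nominal conditions; the systems, thresholds and recovery rates are hypothetical.

```python
# Minimal illustrative sketch of the quantitative stage: propagate a disturbance
# over a dependency graph and check whether the network recovers to nominal
# conditions. All names, thresholds and rates are hypothetical.

DEPENDS_ON = {"power": [], "telecom": ["power"], "water": ["power", "telecom"]}
RECOVERY_RATE = {"power": 0.2, "telecom": 0.1, "water": 0.15}   # per time step
FAILURE_THRESHOLD = 0.3   # below this service level a system is considered failed

def simulate(initial_disturbance, steps=50):
    """Return the service-level trajectory of every system (1.0 = nominal)."""
    level = {name: 1.0 for name in DEPENDS_ON}
    level.update(initial_disturbance)            # apply the initial disturbance
    history = [dict(level)]
    for _ in range(steps):
        new = {}
        for name, deps in DEPENDS_ON.items():
            # a system cannot perform better than its weakest dependency
            cap = min([1.0] + [level[d] for d in deps])
            recovered = min(1.0, level[name] + RECOVERY_RATE[name])
            new[name] = min(cap, recovered)
        level = new
        history.append(dict(level))
    return history

if __name__ == "__main__":
    traj = simulate({"power": 0.1})              # scenario: severe power disturbance
    final = traj[-1]
    failed = [s for s, v in final.items() if v < FAILURE_THRESHOLD]
    print("final service levels:", final)
    print("recovered to nominal" if not failed else f"network failure in: {failed}")
```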
Abstract:
The development of PCB-integrable microsensors for monitoring chemical species is a goal in areas such as lab-on-a-chip analytical devices, diagnostic medicine and electronics for hand-held instruments, where device size is a major issue. Cellular phones have pervaded the world's population, and their usefulness has dramatically increased with the introduction of smartphones thanks to a combination of remarkable processing power in a confined space, geolocation and manifold telecommunication features. Therefore, a number of physical and chemical sensors that add value to the terminal for health monitoring, personal safety (at home, at work) and, eventually, national security have started to be developed, capitalizing also on the huge number of circulating cell phones. The chemical sensor-enabled “super” smartphone provides a unique (bio)sensing platform for monitoring airborne or waterborne hazardous chemicals or microorganisms, for both single-user and crowdsourced security applications. Some of the latest developments are illustrated with a few examples. Moreover, we have recently achieved, for the first time, (covalent) functionalization of p- and n-GaN semiconductor surfaces with tuneable luminescent indicator dyes of the Ru-polypyridyl family, as a key step in the development of innovative microsensors for smartphone applications. Chemical “sensoring” of GaN-based blue LED chips with those indicators has also been achieved by plasma treatment of their surface, and the micrometer-sized devices have been tested for monitoring O2 in the gas phase to show their full functionality. Novel strategies to enhance sensor sensitivity, such as changing the length and nature of the siloxane buffer layer, are also discussed.
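Luminescent O2 indicators of this kind are commonly read out through Stern-Volmer quenching; as a hedged illustration (the calibration constant and intensities below are hypothetical, not data from the paper), an O2 level could be estimated from the measured luminescence intensity as follows.

```python
# Illustrative Stern-Volmer readout for a luminescence-quenching O2 sensor:
#   I0 / I = 1 + Ksv * pO2   =>   pO2 = (I0 / I - 1) / Ksv
# The constant and intensities below are hypothetical, for illustration only.

KSV = 0.045        # Stern-Volmer constant, 1/hPa (hypothetical calibration)
I0 = 1000.0        # luminescence intensity in the absence of O2 (a.u.)

def o2_partial_pressure(intensity):
    """Estimate the O2 partial pressure (hPa) from the measured intensity (a.u.)."""
    if intensity <= 0 or intensity > I0:
        raise ValueError("intensity must lie in (0, I0]")
    return (I0 / intensity - 1.0) / KSV

if __name__ == "__main__":
    for i in (1000.0, 520.0, 98.0):
        print(f"I = {i:6.1f} a.u. -> pO2 ≈ {o2_partial_pressure(i):6.1f} hPa")
```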
Abstract:
The decision to select the most suitable type of energy storage system for an electric vehicle is always difficult, since many constraints must be taken into account. Sometimes this study is made by means of complex mathematical models that represent the behavior of a battery, an ultracapacitor or some other device. However, these models are usually too dependent on parameters that are not easily available, which often leads to unrealistic results. Besides, the more accurate the model, the more specific it needs to be, which becomes an issue when comparing systems of a different nature. This paper proposes a practical methodology to compare different energy storage technologies. This is done by means of a linear approximation of an equivalent circuit derived from laboratory tests. Through these tests, the internal resistance and the self-discharge rate are evaluated, making it possible to compare different energy storage systems regardless of their technology. Rather simple testing equipment is sufficient to give a comparative idea of the differences between systems, concerning issues such as efficiency, heating and self-discharge, when operating under a given scenario. For the sake of illustration, the proposed methodology is applied to four energy storage systems of a different nature.
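As a rough illustration of this kind of test-based characterization (not the authors' procedure), internal resistance can be estimated from the voltage step observed under a current step, and self-discharge from the open-circuit voltage decay; all numbers below are hypothetical.

```python
# Illustrative estimation of equivalent-circuit parameters from simple tests.
# Values are hypothetical; this is not the procedure used in the paper.

def internal_resistance(v_before, v_after, i_step):
    """Estimate internal resistance (ohm) from the voltage drop caused by a current step."""
    return (v_before - v_after) / i_step

def self_discharge_rate(v_start, v_end, hours):
    """Average open-circuit voltage decay (V/h) over a rest period."""
    return (v_start - v_end) / hours

if __name__ == "__main__":
    # hypothetical battery test: 3.30 V at rest, 3.18 V under a 10 A discharge step
    r_int = internal_resistance(3.30, 3.18, 10.0)
    # hypothetical rest test: 3.30 V dropping to 3.27 V after 48 h open circuit
    sd = self_discharge_rate(3.30, 3.27, 48.0)
    print(f"internal resistance ≈ {r_int * 1000:.1f} mOhm")
    print(f"self-discharge ≈ {sd * 1000:.2f} mV/h")
```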
Abstract:
Traditionally, the use of data analysis techniques has been one of the main ways of discovering knowledge hidden in large amounts of data collected by experts in different domains. Visualization techniques have also been used to enhance and facilitate this process.
However, there are serious limitations in the process of knowledge acquisition, as it is often a slow, tedious and frequently fruitless process, owing to the difficulty humans have in understanding large datasets. Another major drawback, rarely considered by the experts who analyze large datasets, is the involuntary degradation to which they subject the data during analysis tasks, prior to drawing the final conclusions. Degradation means that data can lose part of their original properties; it is usually caused by improper data reduction, which alters the original nature of the data and often leads to erroneous interpretations and conclusions that could have serious implications. Furthermore, this becomes critically important when the data belong to the medical or biological domain and people's lives depend on the final decision-making, which is sometimes conducted improperly. This is the motivation of this thesis, which proposes a new visual framework, called MedVir, that combines the power of advanced visualization and data mining techniques to address these major problems in the process of discovering valid information. The main objective is to make the knowledge acquisition process that experts face when working with large datasets in different domains easier, more understandable, more intuitive and faster. To achieve this, a strong reduction in the size of the data is first carried out in order to make the data easier for the expert to manage, while preserving, as far as possible, their original properties. Then, effective visualization techniques are used to represent the reduced data, allowing the expert to interact easily and intuitively with them and to carry out different data analysis tasks, thereby visually stimulating their comprehension. The underlying objective is thus to abstract the expert, as far as possible, from the complexity of the original data and to present a more understandable version, facilitating and accelerating the task of knowledge discovery. MedVir has been successfully applied, among other fields, to magnetoencephalography (MEG) data for predicting the outcome of Traumatic Brain Injury (TBI) rehabilitation. The results demonstrate the effectiveness of the framework in accelerating and facilitating knowledge discovery on real-world datasets.
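The MedVir implementation itself is not reproduced here; purely as an illustration of the "reduce, then visualize interactively" idea, the sketch below projects a high-dimensional dataset with PCA and plots it, with the dataset and labels being random placeholders.

```python
# Illustrative "reduce then visualize" pipeline (not the MedVir implementation).
# Requires numpy, scikit-learn and matplotlib.
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))         # placeholder for a high-dimensional dataset
labels = rng.integers(0, 2, size=200)  # placeholder for expert-provided classes

# Strong dimensionality reduction while trying to preserve the data structure
X2 = PCA(n_components=2).fit_transform(X)

# Simple visual representation the expert can inspect
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap="coolwarm", s=15)
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.title("2-D projection of a high-dimensional dataset (illustrative)")
plt.show()
```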
Abstract:
Game theory makes it possible to develop stochastic multi-robot patrolling models to protect critical infrastructures. The protection of critical infrastructures is a great concern for countries around the world, mainly due to the terrorist attacks of the last decade. In this document, the term infrastructure includes airports, nuclear power plants and many other facilities. The patrolling problem is defined as the activity of traversing a given environment to monitor any activity or to sense some environmental variables. If this activity were performed by a fleet of robots, they would have to visit a set of places of interest in an environment at irregular intervals of time for security purposes. This problem is solved using multi-robot patrolling models. To date, works in the literature have solved this problem by applying various mathematical principles. The multi-robot patrolling models developed in those works represent great advances in this field.
However, the models that obtain the best results are unfeasible for security applications due to their centralized and predictable nature. This thesis presents five distributed and unpredictable multi-robot patrolling models based on mathematical learning models derived from Game Theory. These multi-robot patrolling models aim at overcoming the disadvantages of previous work. To this end, the multi-robot patrolling problem was formulated using concepts of Graph Theory to represent the environment, and several normal-form games were defined at each vertex of the graph in this formulation. The multi-robot patrolling models developed in this research work have been validated and compared with the best-ranked multi-robot patrolling models in the literature. Both validation and comparison were performed using a patrolling simulator and real robots. Experimental results show that the multi-robot patrolling models developed in this research work improve on previous ones in 80% of the 150 case studies. Moreover, these multi-robot patrolling models exhibit several features that are valuable in security applications, such as distribution, robustness, scalability and dynamism. The achievements of this research work validate the potential of Game Theory to develop patrolling models for protecting infrastructures.
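As a toy illustration of the idea of making the next-vertex choice on a patrol graph stochastic and dependent on the state of the environment (this is not one of the five models proposed in the thesis), the graph, payoffs and decision rule below are hypothetical.

```python
# Toy stochastic patrolling decision on a graph (not the thesis models).
# A robot at a vertex picks the next vertex with probabilities derived from
# vertex "idleness" (time since last visit), giving unpredictable routes.
import random

GRAPH = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}  # hypothetical

def choose_next(current, idleness):
    """Pick a neighbour at random, weighted by how long it has been left unvisited."""
    neighbours = GRAPH[current]
    weights = [1.0 + idleness[v] for v in neighbours]   # simple payoff: idleness
    return random.choices(neighbours, weights=weights, k=1)[0]

if __name__ == "__main__":
    idleness = {v: 0 for v in GRAPH}
    position = "A"
    for step in range(10):
        idleness = {v: t + 1 for v, t in idleness.items()}
        position = choose_next(position, idleness)
        idleness[position] = 0
        print(f"step {step}: robot moves to {position}")
```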
Abstract:
Accurate design flood estimates associated with high return periods are necessary to design and manage hydraulic structures such as dams. In practice, the estimation of such quantiles is usually carried out via univariate flood frequency analyses, mostly based on the study of peak flows.
Nevertheless, the nature of floods is multivariate, and it is essential to consider representative flood characteristics, such as flood peak, hydrograph volume and hydrograph duration, to carry out an appropriate analysis, especially when the inflow peak is transformed into a different outflow peak during the routing process in a reservoir or floodplain. Multivariate flood frequency analyses have traditionally been performed by using standard bivariate distributions to model correlated variables, yet these entail shortcomings such as the need to use the same kind of marginal distribution for all variables and the assumption of a linear dependence relation between them. Recently, the use of copulas has spread in hydrology because of their benefits in the multivariate context, as they overcome the drawbacks of the traditional approach. A copula is a function that represents the dependence structure of the studied variables and allows their multivariate frequency distribution to be obtained from the marginal distributions, regardless of the kind of marginal distributions considered. The estimation of multivariate return periods, and therefore of multivariate quantiles, is also facilitated by the way in which copulas are formulated. The present doctoral thesis seeks to provide methodologies that improve the traditional techniques used by practitioners, in order to estimate more appropriate flood quantiles for dam design, dam management and flood risk assessment, through bivariate flood frequency analyses based on the copula approach. The flood variables considered for that goal are peak flow and hydrograph volume. In order to accomplish a complete study, the present research addresses: (i) a bivariate local flood frequency analysis focused on examining and comparing theoretical return periods, based on the natural probability of occurrence of a flood, with the return period associated with the risk of dam overtopping, to estimate quantiles at a given gauged site; (ii) the extension of the local approach to the regional one, supplying a complete procedure for performing a bivariate regional flood frequency analysis to either estimate quantiles at ungauged sites or improve at-site estimates at gauged sites; (iii) the use of copulas to investigate bivariate flood trends due to increasing urbanisation levels in a catchment; and (iv) the extension of observed flood series by combining the benefits of a copula-based model and a hydro-meteorological model.
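To illustrate how a copula links the marginal distributions to a joint return period (a generic sketch, not the thesis code), the example below evaluates a Gumbel-Hougaard copula and the "OR" joint return period of annual maxima, T = 1 / (1 - C(u, v)); the dependence parameter and probabilities are hypothetical.

```python
# Illustrative copula-based joint return period (not the thesis implementation).
# Gumbel-Hougaard copula: C(u, v) = exp(-((-ln u)^theta + (-ln v)^theta)^(1/theta))
# "OR" joint return period for annual maxima: T = 1 / (1 - C(u, v))
import math

def gumbel_copula(u, v, theta):
    return math.exp(-(((-math.log(u)) ** theta + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period_or(u, v, theta):
    """Return period (years) of exceeding the peak-flow OR the volume quantile."""
    return 1.0 / (1.0 - gumbel_copula(u, v, theta))

if __name__ == "__main__":
    theta = 2.0   # hypothetical dependence parameter (theta >= 1)
    u = 0.99      # non-exceedance probability of the peak-flow quantile
    v = 0.99      # non-exceedance probability of the volume quantile
    print(f"univariate return period of each marginal: {1 / (1 - u):.0f} years")
    print(f"joint 'OR' return period: {joint_return_period_or(u, v, theta):.0f} years")
```

Note that the joint "OR" return period is shorter than either univariate return period, since exceeding either variable counts as an event.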
Abstract:
Wireless sensor networks (WSNs) will be one of the most important users of wireless communication technologies in the coming years, and some challenges in this area must be addressed for their full development. Energy consumption and spectrum availability are two of the most severe constraints of WSNs due to their intrinsic nature. Cognitive capabilities have been introduced into these networks to address spectrum scarcity, but they could also be used to address energy challenges thanks to the new range of communication possibilities they offer. In this paper a new strategy based on game theory for cognitive WSNs is discussed. The strategy reduces energy consumption by taking advantage of the new capability to change the communication channel. Based on game theory, it decides when to change the transmission channel depending on the behavior of the rest of the network nodes. The strategy is lightweight yet achieves higher energy savings than non-cognitive networks and even than other strategies based on scheduled spectrum sensing. Simulations are presented for several scenarios, demonstrating energy savings of around 65% compared with WSNs without cognitive techniques.
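As a toy illustration of a game-theoretic channel-switch decision (not the strategy evaluated in the paper), a node can weigh the energy cost of switching against the expected cost of retransmissions on a congested channel; all costs and busy fractions below are hypothetical.

```python
# Toy channel-switch decision for a cognitive WSN node (illustrative only).
# The node switches channel when the expected payoff of switching (fewer
# retransmissions) outweighs the energy cost of the switch itself.

SWITCH_COST = 2.0   # energy units spent changing channel (hypothetical)
TX_COST = 1.0       # energy units per transmission attempt (hypothetical)

def expected_attempts(busy_fraction):
    """Expected transmission attempts if the channel is busy with this probability."""
    return 1.0 / max(1e-6, 1.0 - busy_fraction)

def should_switch(current_busy, candidate_busy):
    """Switch only if the energy saved on retransmissions exceeds the switch cost."""
    stay = TX_COST * expected_attempts(current_busy)
    move = SWITCH_COST + TX_COST * expected_attempts(candidate_busy)
    return move < stay

if __name__ == "__main__":
    # busy fractions observed from the behaviour of the other nodes (hypothetical)
    print(should_switch(current_busy=0.8, candidate_busy=0.2))   # True: switching pays off
    print(should_switch(current_busy=0.3, candidate_busy=0.2))   # False: not worth it
```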
Abstract:
The passive performance of buildings is nowadays a key issue, not only for reducing their energy consumption but also for decreasing "fuel poverty". Among building constructive systems, façades have the greatest influence on thermal performance in urban spaces. Lime renders are specialized systems that can improve not only the durability of the support but also its thermal properties. According to previous research, a modification of their radiative properties can reduce thermal fluxes by between 24% and 89%. In this paper, the influence of the aggregate content in lime pastes, as well as of the nature, colour and roughness of the aggregates, on reflectance in the visible, near-infrared and mid-infrared ranges is analyzed. Ten types of aerial lime mortars were prepared, and reflectance was determined by two methods. Finally, the effect of the resulting reflectance on façade constructive systems was analyzed using pseudo-time-dependent software, which shows that thermal fluxes can be cancelled out or significantly reduced when the nature of the aggregate is modified.
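To give a back-of-the-envelope sense of why surface reflectance matters for the flux through a façade (this is not the paper's model), the shortwave flux absorbed by a render scales as (1 - reflectance) × irradiance; the irradiance and reflectance values below are hypothetical.

```python
# Back-of-the-envelope illustration (not the paper's model): the shortwave flux
# absorbed by a render scales with (1 - reflectance) * solar irradiance.

IRRADIANCE = 800.0   # W/m2, hypothetical solar irradiance on the façade

def absorbed_flux(reflectance):
    return (1.0 - reflectance) * IRRADIANCE

if __name__ == "__main__":
    low, high = 0.35, 0.75   # hypothetical reflectances of two lime renders
    qa, qb = absorbed_flux(low), absorbed_flux(high)
    print(f"absorbed flux at reflectance {low}: {qa:.0f} W/m2")
    print(f"absorbed flux at reflectance {high}: {qb:.0f} W/m2")
    print(f"reduction: {100 * (qa - qb) / qa:.0f} %")
```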
Abstract:
The scientific method is a methodological approach to the process of inquiry, in which an empirically grounded theory of nature is constructed and verified [14]. It is a hard, exhaustive and dedicated multi-stage procedure that a researcher must perform to obtain valuable knowledge. To help researchers through this process, a recommender system, intended as a research assistant, is designed to provide them with useful tools and information at each stage of the procedure. A new similarity measure between research objects and a representational model, based on domain spaces, to handle them at different levels are introduced, together with a system that builds them from OAI-PMH (and RSS) resources. The approach seeks a sound balance between scientific insight into individual scientific creative processes and technical implementation using innovative technologies for large-scale information extraction, document summarization and semantic analysis.
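The thesis defines its own similarity measure, which is not reproduced here; as a generic illustration of comparing research objects represented as term vectors, the sketch below computes a cosine similarity over hypothetical term-frequency dictionaries.

```python
# Generic cosine similarity between research objects represented as term vectors
# (illustrative only; the thesis defines its own similarity measure).
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-frequency dictionaries."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

if __name__ == "__main__":
    # hypothetical term-frequency vectors extracted from two harvested records
    paper = {"recommender": 3, "oai-pmh": 1, "semantic": 2}
    dataset = {"recommender": 1, "semantic": 1, "summarization": 2}
    print(f"similarity: {cosine_similarity(paper, dataset):.2f}")
```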
Abstract:
The application of pervasive computing is extending from field-specific to everyday use. The Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive computing from other forms of computing lies in the use of contextual information. Some classical applications do not use any contextual information whatsoever. Others use only part of the contextual information, which is integrated in an ad hoc fashion using an application-specific implementation. This information is handled in a one-off manner because of the difficulty of sharing context across applications. As a matter of fact, the application type determines what the contextual information is. For instance, for an image editor, the image is the information and its metadata, like the time of the shot or the camera settings, are the context, whereas, for a file-system application, the image, including its camera settings, is the information and the metadata external to the file, like the modification date or the last-accessed timestamp, constitute the context. This means that contextual information is hard to share, and a communication middleware that supports context explicitly decidedly eases application development in pervasive computing.
However, the use of context should not be mandatory; otherwise compatibility with applications that do not use it would be lost, and the communication middleware would be reduced to a context middleware. SilboPS, our implementation of content-based publish/subscribe inspired by SIENA [11, 9], solves this problem by adding two new elements to the paradigm: the context and the context function. The context represents the contextual information attached to the message to be sent, or that required by the subscriber in order to receive notifications, whereas the context function is evaluated using the publisher's context and the subscriber's context to decide whether the current message and context are useful for the subscriber. In this manner, context-management logic is decoupled from context-function logic, increasing the flexibility of communication across different applications. Since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. In any case, a possible semantic mismatch is still present, because it depends on how each application interprets the data and cannot be resolved by an agnostic third party. The IoT environment introduces not only context but also scaling challenges. The number of sensors, the volume of data they produce and the number of applications that could be interested in harvesting such data are growing all the time. Today's response to this need is cloud computing, but it requires applications to be able to scale elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems like OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically and 2) how to monitor the application and know when it should scale in or out. E-SilboPS is the elastic version of SilboPS. It fits naturally as a solution for the monitoring problem thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently so as to meet workload demand without overprovisioning or underprovisioning. Additionally, it is based on a newly designed algorithm that shows how to add elasticity to an application under different state constraints: stateless, isolated stateful with external coordination, and shared stateful with general coordination. Its evaluation shows that remarkable speedups can be achieved, with the network layer being the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to adjacent configurations. This provides insight into the current trend of the whole system, in order to predict whether the next configuration would offset its cost with the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments, in order to find out which one is best for a given workload. Finally, the overhead introduced by the different configurations has been estimated to identify the primary limiting factor for throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment.
Depending on the type of workload, this can be as low as 10% in a local optimum or as high as 60% when an overprovisioned configuration is deployed for a given workload demand. This Karp-Flatt metric estimation is important for system management because it indicates the direction (scale in or out) in which the deployment has to be changed in order to improve its performance instead of simply using a scale-out policy.
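For reference, the Karp-Flatt metric mentioned above estimates the experimentally determined serial fraction (serial part plus overhead) from a measured speedup; a minimal sketch follows, where the measurements are made up for illustration and are not the thesis results.

```python
# Karp-Flatt experimentally determined serial fraction:
#   e = (1/psi - 1/p) / (1 - 1/p)
# where psi is the measured speedup on p processing units. A value of e that
# grows with p indicates that overhead, not just the serial part, limits scaling.

def karp_flatt(speedup, p):
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

if __name__ == "__main__":
    # hypothetical measurements (not the thesis results): speedup per deployment size
    for p, psi in [(2, 1.8), (4, 3.0), (8, 4.4)]:
        print(f"p = {p}: serial fraction + overhead ≈ {karp_flatt(psi, p):.2f}")
```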
Abstract:
This thesis is developed within the framework of satellite communications, in the innovative field of small satellites, also known as nanosatellites (<10 kg) or CubeSats, so called because of their cubic shape. These nanosatellites are characterized by their low cost, since they use commercial off-the-shelf (COTS) components, and by their small size and mass, such as 1U CubeSats (10 cm × 10 cm × 10 cm) with a mass of approximately 1 kg. This thesis builds on a proposal made by the author to put into orbit the first Peruvian satellite, Chasqui I, which was successfully launched from the International Space Station in 2014. The experience of that work led me to propose a constellation of small satellites, named Waposat, to provide a worldwide water-quality sensor monitoring service, which is the scenario used in this thesis. In this scenario, and given the limited power and data rate of nanosatellites, I propose to investigate a new communications architecture that optimally addresses the problems posed by nanosatellites in LEO orbit, whose communications are disruptive in nature, with emphasis on the link and application layers. This thesis presents and evaluates a new communications architecture to provide services to terrestrial sensor networks using a space Delay/Disruption Tolerant Networking (DTN) based solution. In addition, I propose a new multiple-access protocol based on extended unslotted ALOHA that takes into account the priority of gateway traffic, which we call ALOHA multiple access with gateway priority (ALOHAGP), with an adaptive contention mechanism. It uses satellite feedback to implement congestion control and to dynamically adapt the effective channel throughput in an optimal way. We assume a finite sensor population model and a saturated traffic condition in which every sensor always has frames to transmit. Network performance was evaluated in terms of effective throughput, delay and system fairness. In addition, a DTN convergence layer (ALOHAGP-CL) has been defined as a subset of the standard TCP-CL (Transmission Control Protocol Convergence Layer). This thesis shows that ALOHAGP/CL adequately supports the proposed DTN scenario, especially when reactive fragmentation is used. Finally, this thesis investigates the optimal transfer of DTN messages (bundles) using proactive fragmentation strategies to serve a ground sensor network through a nanosatellite communications link that uses the multiple-access mechanism with downlink-traffic priority (ALOHAGP).
The effective throughput has been optimized by adapting the protocol parameters as a function of the current number of active sensors reported by the satellite. Also, there is currently no method for advertising or negotiating the maximum size of a bundle that a bundle agent can accept for storage and delivery in satellite communications, so bundles that are too large may be dropped and bundles that are too small are inefficient. We have characterized this kind of scenario by obtaining a probability distribution of frame arrivals at the nanosatellite and a probability distribution of the nanosatellite visibility time, which together provide an optimal proactive fragmentation of DTN bundles. We have found that the effective throughput (goodput) of proactive fragmentation reaches a value only slightly lower than that of the reactive fragmentation approach. This contribution makes it possible to use proactive fragmentation optimally, with all its advantages, such as support for the DTN security model and a simple protocol implementation on platforms with severe CPU and memory limitations. The implementation of these contributions was initially envisaged as part of the payload of the nanosatellite QBito, which belongs to the constellation of 50 nanosatellites being developed under the QB50 project.
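As a rough illustration of proactive fragmentation (not the thesis algorithm, which derives fragment sizes from the measured arrival and visibility-time distributions), the sketch below sizes bundle fragments to fit an expected visibility window at a given data rate; all numbers are hypothetical.

```python
# Rough illustration of proactive DTN bundle fragmentation (not the thesis algorithm):
# size fragments so that each one fits within the expected contact (visibility) window.
import math

def fragment_bundle(bundle_bytes, data_rate_bps, expected_contact_s, overhead_bytes=50):
    """Split a bundle into fragments that each fit one expected contact window."""
    payload_per_contact = int(data_rate_bps / 8 * expected_contact_s) - overhead_bytes
    if payload_per_contact <= 0:
        raise ValueError("contact window too short for any payload")
    n = math.ceil(bundle_bytes / payload_per_contact)
    sizes = [payload_per_contact] * (n - 1) + [bundle_bytes - payload_per_contact * (n - 1)]
    return sizes

if __name__ == "__main__":
    # hypothetical link: 9600 bit/s, 300 s usable visibility, 1 MB bundle
    frags = fragment_bundle(bundle_bytes=1_000_000, data_rate_bps=9600, expected_contact_s=300)
    print(f"{len(frags)} fragments, sizes: {frags}")
```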
Abstract:
The electronic nature of low-barrier hydrogen bonds (LBHBs) in enzymatic reactions is discussed based on combined low-temperature neutron and X-ray diffraction experiments and on high-level ab initio calculations, using the model substrate benzoylacetone. This molecule has an LBHB, as the intramolecular hydrogen bond is described by a double-well potential with a small barrier for hydrogen transfer. From an "atoms in molecules" analysis of the electron density, it is found that the hydrogen atom is stabilized by covalent bonds to both oxygen atoms. Large atomic partial charges on the hydrogen-bonded atoms are found both experimentally and theoretically. Therefore, the hydrogen bond gains stabilization from covalency as well as from the normal electrostatic interactions found for long, weak hydrogen bonds. Based on comparisons with other systems having short, strong hydrogen bonds or LBHBs, it is proposed that all short-strong and LBHB systems possess similar electronic features in the hydrogen-bonded region, namely polar covalent bonds between the hydrogen atom and both heteroatoms in question.
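For orientation only (this particular functional form is not taken from the paper), a symmetric one-dimensional double-well potential along the proton-transfer coordinate x, with minima at ±x₀ and a barrier of height V_b at x = 0, can be written as:

```latex
% Illustrative symmetric double-well potential along the proton-transfer
% coordinate x (a generic form, not the potential fitted in the paper):
% minima at x = \pm x_0, barrier height V_b at x = 0.
\[
  V(x) = V_b \left( \frac{x^2}{x_0^2} - 1 \right)^{2},
  \qquad V(\pm x_0) = 0, \qquad V(0) = V_b .
\]
```

A "low-barrier" hydrogen bond corresponds to a small V_b, comparable to the zero-point energy of the proton, so the proton is effectively shared between the two minima.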
Abstract:
Sea urchin coelomocytes represent an excellent experimental model system for studying retrograde flow. Their extreme flatness allows for excellent microscopic visualization. Their discoid shape provides a radially symmetric geometry, which simplifies analysis of the flow pattern. Finally, the nonmotile nature of the cells allows for the retrograde flow to be analyzed in the absence of cell translocation. In this study we have begun an analysis of the retrograde flow mechanism by characterizing its kinetic and structural properties. The supramolecular organization of actin and myosin II was investigated using light and electron microscopic methods. Light microscopic immunolocalization was performed with anti-actin and anti-sea urchin egg myosin II antibodies, whereas transmission electron microscopy was performed on platinum replicas of critical point-dried and rotary-shadowed cytoskeletons. Coelomocytes contain a dense cortical actin network, which feeds into an extensive array of radial bundles in the interior. These actin bundles terminate in a perinuclear region, which contains a ring of myosin II bipolar minifilaments. Retrograde flow was arrested either by interfering with actin polymerization or by inhibiting myosin II function, but the pathway by which the flow was blocked was different for the two kinds of inhibitory treatments. Inhibition of actin polymerization with cytochalasin D caused the actin cytoskeleton to separate from the cell margin and undergo a finite retrograde retraction. In contrast, inhibition of myosin II function either with the wide-spectrum protein kinase inhibitor staurosporine or the myosin light chain kinase–specific inhibitor KT5926 stopped flow in the cell center, whereas normal retrograde flow continued at the cell periphery. These differential results suggest that the mechanism of retrograde flow has two, spatially segregated components. We propose a “push–pull” mechanism in which actin polymerization drives flow at the cell periphery, whereas myosin II provides the tension on the actin cytoskeleton necessary for flow in the cell interior.
Abstract:
The Daphniphyllum alkaloids are a group of highly complex polycyclic alkaloids. Examination of the structures of several members of this family of natural products led to a hypothesis about their mode of biosynthesis (depicted in Scheme SI). Based on this hypothetical biosynthetic pathway, a laboratory synthesis was designed that incorporated, as a key step, the novel one-pot transformation of dialdehyde 24 into pentacyclic unsaturated amine 25. This process turned out to be an exceptionally efficient way to construct the pentacyclic nucleus of the Daphniphyllum alkaloids. However, a purely fortuitous discovery, resulting from the accidental use of methylamine rather than ammonia, led to a great improvement in the synthesis and suggests an even more attractive possible biosynthesis.