28 results for Control Network
at Universidad Politécnica de Madrid
Abstract:
Groundwater protection is a priority of EU environmental policy. The EU has therefore established a framework for the prevention and control of pollution, which includes provisions for assessing the chemical status of groundwater and reducing the presence of contaminants in it. The measures include:
• criteria for assessing the chemical status of groundwater bodies
• criteria for identifying significant and sustained upward trends in contaminant concentrations, and for defining starting points for the reversal of such trends
• the prevention and limitation of indirect discharges of pollutants resulting from percolation through the soil or subsoil.
The basic tools for the development of these policies are the Water Framework Directive and the Groundwater Daughter Directive. According to them, a groundwater body is considered to be in good chemical status if:
• the measured or predicted concentration of nitrates does not exceed 50 mg/l, and the concentrations of active ingredients of pesticides, their metabolites and reaction products do not exceed 0.1 μg/l (0.5 μg/l for the total of the pesticides measured)
• the concentration of certain hazardous substances is below the threshold values set by the Member States; these cover, as a minimum, ammonium, arsenic, cadmium, chloride, lead, mercury, sulphates, trichloroethylene and tetrachloroethylene
• the concentration of any other contaminant fits the definition of good chemical status set out in Annex V of the Water Framework Directive
• where a quality standard or a threshold value is exceeded, an investigation confirms, among other things, the absence of significant risk to the environment.
Analyzing the statistical behaviour of the data from the monitoring networks can be considerably complex, because such data are typically positively skewed and asymmetrically distributed, owing to the presence of outliers and of different soil types and mixtures of pollutants. Furthermore, the distribution of certain components in groundwater may include concentrations below the detection limit, or may be non-stationary because of linear or seasonal trends. In the first case it is necessary to estimate the unknown values, using procedures that vary according to the percentage of values below the detection limit and the number of applicable detection limits. In the second case the trends must be removed before hypothesis tests are conducted on the residuals. This PhD thesis sets out the statistical basis for the rigorous analysis of quality-network data, in order to assess the chemical status of groundwater bodies, to determine significant and sustained upward trends in pollutant concentrations, and to detect significant deterioration, both where an environmental quality standard has been set by the competent environmental agency and where it has not. To design a methodology covering the whole range of cases, data from the Official Monitoring and Control Network for the Chemical Status of Groundwater of the Ministry of Agriculture, Food and Environment (Magrama) were analysed. Then, since River Basin Management Plans are the basic tool of the Directives, the Júcar River Basin was selected, owing to its designation as a pilot basin in the Common Implementation Strategy (CIS) of the European Commission. The main objective of the ad hoc working groups created for this purpose was to implement the Groundwater Daughter Directive and the related elements of the Water Framework Directive, especially data collection at the control stations and the preparation of the first River Basin Management Plan. Given the size of the area, and in order to analyse a single groundwater body (defined as the management unit in the Directives), the Plana de Vinaroz-Peñíscola was selected as a pilot area, and the procedures developed were applied there to determine the chemical status of that body. The data examined do not in general contain contaminant concentrations associated with point sources, so concentration values of the most common determinands, namely nitrates and chlorides, were selected for the study. The strategy designed combines trend analysis with the construction of confidence intervals where a quality standard exists, and of prediction intervals where no standard exists or the standard has been exceeded. Values below the detection limit were handled analogously, taking the values available in the Plana de Sagunto pilot area and simulating different degrees of censoring, in order to compare the results with the intervals produced from the actual data and thus verify the effectiveness of the method. The end result is a general methodology that integrates the existing cases and makes it possible to define the chemical status of a groundwater body, to verify the existence of significant impacts on groundwater quality, and to evaluate the effectiveness of the programmes of measures adopted within the framework of the River Basin Management Plan.
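By way of illustration, the short sketch below applies the kind of analysis this abstract describes — a linear trend test followed by a confidence interval compared against the 50 mg/l nitrate standard — to an invented series. It is a minimal sketch of the general approach, not the thesis's actual procedure, and all data and thresholds shown are hypothetical.

```python
# Hedged sketch of a groundwater-quality check: trend test plus a
# confidence interval against the 50 mg/l nitrate standard.
# The data and procedure are illustrative, not the thesis's own method.
import numpy as np
from scipy import stats

YEARS = np.arange(2000, 2012)                      # hypothetical sampling years
NITRATE = np.array([38.0, 40.5, 39.2, 42.1, 43.0,  # hypothetical mg/l values
                    41.8, 44.6, 45.1, 46.3, 45.9, 47.2, 48.0])
STANDARD = 50.0                                    # EU nitrate standard, mg/l

# 1) Trend analysis: is there a statistically significant upward slope?
res = stats.linregress(YEARS, NITRATE)
print(f"slope = {res.slope:.2f} mg/l per year, p = {res.pvalue:.4f}")

# 2) De-trend before testing the residuals (as the abstract recommends
#    for non-stationary series).
residuals = NITRATE - (res.intercept + res.slope * YEARS)
print(f"residual std after de-trending = {residuals.std(ddof=2):.2f} mg/l")

# 3) One-sided upper confidence limit of the mean concentration.
n = len(NITRATE)
mean = NITRATE.mean()
sem = NITRATE.std(ddof=1) / np.sqrt(n)
upper95 = mean + stats.t.ppf(0.95, df=n - 1) * sem
status = "good" if upper95 < STANDARD else "at risk"
print(f"95% upper confidence limit = {upper95:.1f} mg/l -> status: {status}")
```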
Abstract:
During sentence processing there is a preference to treat the first noun phrase found as the subject and agent, unless it is marked otherwise. This preference leads to a conflict in thematic role assignment when the syntactic structure conforms to a non-canonical object-before-subject pattern. Left perisylvian and fronto-parietal brain networks have been found to be engaged by increased computational demands during sentence comprehension, while event-related brain potentials have been used to study the on-line manifestation of these demands. However, evidence regarding the spatiotemporal organization of brain networks in this domain is scarce. In the current study we used magnetoencephalography to track brain activity spatio-temporally while Spanish speakers were reading subject- and object-first cleft sentences. Both kinds of sentences remained ambiguous between a subject-first and an object-first interpretation until the appearance of the second argument. The results show the time-modulation of a frontal network at the disambiguation point of object-first sentences. Moreover, the time windows in which these effects took place have previously been related to thematic role integration (300–500 ms) and to sentence reanalysis and the resolution of conflicts during processing (beyond 500 ms post-stimulus). These results point to frontal cognitive control as a putative key mechanism that may operate when a revision of the sentence structure and meaning is necessary.
Abstract:
The limitations of current networking technologies, identified at the Defense Advanced Research Projects Agency (DARPA) during 1995, have recently led to the proposal of a new network model called Active Networks. In this model, the nodes provide an execution environment in which the code used to process each packet is executed. The objective is a network technology that allows new network services to be designed and deployed quickly without modifying the network nodes. One network service that could benefit from this technology is the transmission of multicast data with different degrees of reliability. Current proposals for reliable multicast services provide a specific solution for each class of applications, and existing end-to-end protocols suffer from technical drawbacks related to limited reliability and the lack of effective congestion control mechanisms. This thesis contains original proposals that aim to solve part of the current drawbacks in the scope of Active Networks and reliable multicast with congestion control. Firstly, a generic reliable multicast network service will be specified. This service is derived from the requirements of a set of applications considered relevant, and provides different session classes and degrees of reliability. Starting from the definition of the specified generic service, a communications protocol based on Active Network technology will be designed to provide that service. The protocol will incorporate a congestion control mechanism so that the source dynamically adjusts the injected traffic to the load conditions of the network. This thesis also aims to deepen the study and analysis of Active Network technology, experimenting with it in order to provide feedback to its designers. This experimentation covers three areas: the services and protocols the technology can support, the Active Network model and architecture, and the currently available execution environments. As an additional contribution of this work, the above objectives will be validated through a prototype implementation of the protocol entities and their service interface on one of the available execution environments.
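The congestion control mechanism described adjusts the injected traffic to network load; a classic embodiment of that idea is additive-increase/multiplicative-decrease (AIMD). The sketch below is a minimal, generic illustration of such rate adaptation and is not the protocol designed in this thesis; all constants are invented.

```python
# Minimal AIMD rate controller sketch: the source raises its rate
# steadily while the network reports no congestion and halves it on a
# congestion signal. Illustrative only; not the thesis's protocol.

def aimd_step(rate_kbps: float, congested: bool,
              increase_kbps: float = 50.0,
              decrease_factor: float = 0.5,
              min_rate_kbps: float = 10.0) -> float:
    """Return the next sending rate given one round of feedback."""
    if congested:
        return max(min_rate_kbps, rate_kbps * decrease_factor)
    return rate_kbps + increase_kbps

# Simulate feedback rounds: False = no congestion, True = congestion.
feedback = [False, False, False, True, False, False, True, False]
rate = 100.0
for congested in feedback:
    rate = aimd_step(rate, congested)
    print(f"congested={congested!s:5} -> rate={rate:7.1f} kbps")
```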
Abstract:
The emerging use of real-time 3D-based multimedia applications imposes strict quality of service (QoS) requirements on both access and core networks. These requirements and their impact on the provision of end-to-end 3D videoconferencing services have been studied within the Spanish-funded VISION project, where different scenarios were implemented showing an agile stereoscopic video call that might be offered to the general public in the near future. In view of the requirements, we designed an integrated access and core converged network architecture which provides the requested QoS to end-to-end IP sessions. Novel functional blocks are proposed to control core optical networks, the functionality of the standard ones is redefined, and the signaling is improved to better meet the requirements of future multimedia services. An experimental test-bed to assess the feasibility of the solution was also deployed. In this test-bed, the set-up and release of end-to-end sessions meeting specific QoS requirements are shown, and the impact of QoS degradation is quantified in terms of the quality degradation perceived by the user. In addition, scalability results show that the proposed signaling architecture is able to cope with a large number of requests while introducing almost negligible delay.
Abstract:
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid", which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called "MagicBox", equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. There is therefore a large number of energy variables to monitor, which allows us to manage the energy performance of the house precisely by means of collaborative sensors. The experimental results, obtained on a real house, demonstrate the feasibility of the proposed collaborative system for reducing electrical power consumption and increasing energy efficiency.
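To give the flavour of the demand-side decisions such a system can make (the paper's actual control logic is not reproduced here), the following sketch applies an invented rule set to hypothetical PV, load and battery readings:

```python
# Hedged sketch of a demand-side management rule like the one such a
# house controller might apply: charge the battery when PV surplus
# exists, otherwise discharge or import. All names and thresholds are
# invented for illustration.

def dispatch(pv_w: float, load_w: float, soc: float) -> str:
    """Pick an action from PV generation, house load and battery
    state of charge (soc in [0, 1])."""
    surplus = pv_w - load_w
    if surplus > 0 and soc < 0.95:
        return "charge battery from PV surplus"
    if surplus > 0:
        return "export surplus to the grid"
    if soc > 0.20:
        return "discharge battery to cover the deficit"
    return "import from the grid"

for pv, load, soc in [(3200, 1100, 0.6), (3200, 1100, 0.97),
                      (400, 1500, 0.5), (400, 1500, 0.1)]:
    print(f"PV={pv}W load={load}W SoC={soc:.0%} -> {dispatch(pv, load, soc)}")
```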
Abstract:
EURATOM/CIEMAT and the Technical University of Madrid (UPM) have been involved in the development of an FPSC [1] (Fast Plant System Control) prototype for ITER, based on PXIe (PCI eXtensions for Instrumentation). One of the main focuses of this project has been data acquisition and all the related issues, including scientific data archiving. In addition, a new data archiving solution has been developed to demonstrate the achievable performance and the possible bottlenecks of scientific data archiving in Fast Plant System Control. The presented system implements a fault-tolerant architecture over a Gigabit Ethernet network in which FPSC data are reliably archived remotely, while remaining accessible for redistribution within the duration of a pulse. The storage service is supported by a clustering solution to guarantee scalability, so that FPSC management and configuration can be simplified and a unified view of all archived data provided. All the components involved have been integrated under EPICS [2] (Experimental Physics and Industrial Control System), implementing in each case the necessary extensions, state machines and configuration process variables. The prototyped solution is based on the NetCDF-4 file format [3], [4] (Network Common Data Format) in order to incorporate important features such as support for scientific data models, management of very large files, platform-independent encoding, and single-writer/multiple-readers concurrency. In this contribution, a complete description of the above solution is presented, together with the most relevant results of the tests performed, focusing on the benefits and limitations of the applied technologies.
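As a taste of the file format involved, the sketch below streams a sampled signal into a NetCDF-4 file using the widely available netCDF4 Python bindings; the variable layout and pulse naming are invented, and the real FPSC archiver is EPICS-integrated and considerably more elaborate.

```python
# Minimal NetCDF-4 archiving sketch (illustrative; not the FPSC code).
# Requires the netCDF4 package: pip install netCDF4
import numpy as np
from netCDF4 import Dataset

with Dataset("pulse_0001.nc", "w", format="NETCDF4") as ds:
    ds.description = "Hypothetical FPSC-style pulse archive"
    ds.createDimension("time", None)          # unlimited: grows as data arrive
    t = ds.createVariable("time", "f8", ("time",))
    sig = ds.createVariable("signal", "f4", ("time",), zlib=True)
    sig.units = "V"

    # Append samples in blocks, as a streaming archiver would.
    for block in range(3):
        ts = np.arange(block * 1000, (block + 1) * 1000) * 1e-3
        start = len(t)
        t[start:start + ts.size] = ts
        sig[start:start + ts.size] = np.sin(2 * np.pi * 5.0 * ts)

print("archived 3 blocks of 1000 samples to pulse_0001.nc")
```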
Abstract:
Communications-Based Train Control (CBTC) systems require high-quality radio data communications for train signaling and control. Currently, most of these systems use the 2.4 GHz band with proprietary radio transceivers and a leaky feeder as the distribution system. All of them demand a high-QoS radio network to improve the efficiency of railway networks. We present narrowband, broadband and correlated data measurements taken in the Madrid underground with a transmission system at 2.4 GHz in a 2 km test network in subway tunnels. The proposed architecture has a strong overlap between cells to improve reliability and QoS. The radio planning of the network is carefully described and modeled with narrowband and broadband measurements and statistics. The result is a network with 99.7% of packets transmitted correctly and an average propagation delay of 20 ms. These results fulfill the QoS specifications of CBTC systems.
Neural network controller for active demand side management with PV energy in the residential sector
Abstract:
In this paper, we describe the development of a control system for Demand-Side Management in the residential sector with Distributed Generation. The electrical system under study incorporates local PV energy generation, an electricity storage system, connection to the grid and a home automation system. The distributed control system is composed of two modules: a scheduler and a coordinator, both implemented with neural networks. The control system enhances the local energy performance, scheduling the tasks demanded by the user and maximizing the use of local generation.
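As a minimal stand-in for the scheduler module (the paper's network topology and training procedure are not given here), the sketch below scores hourly slots for a deferrable task with a single hand-weighted neuron and picks the best slot; all weights and forecasts are invented.

```python
# Toy stand-in for the neural scheduler: a single neuron with hand-set
# weights scores each hourly slot from forecast PV power and tariff,
# and the deferrable task is placed in the best slot. In the paper the
# weights would come from training; here they are invented.
import math

def slot_score(pv_kw: float, tariff_eur_kwh: float) -> float:
    # Prefer slots with local generation and cheap energy.
    z = 1.0 * pv_kw - 4.0 * tariff_eur_kwh - 0.5
    return 1.0 / (1.0 + math.exp(-z))       # sigmoid activation

pv_forecast = [0.0, 0.2, 1.5, 3.0, 2.8, 0.9]        # kW, hypothetical
tariffs     = [0.25, 0.25, 0.12, 0.12, 0.12, 0.25]  # EUR/kWh, hypothetical

scores = [slot_score(p, t) for p, t in zip(pv_forecast, tariffs)]
best = max(range(len(scores)), key=scores.__getitem__)
print(f"schedule the task in slot {best} (score {scores[best]:.2f})")
```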
Abstract:
The Session Initiation Protocol (SIP) is an application-layer control protocol standardized by the IETF for creating, modifying and terminating multimedia sessions. With the increasing use of SIP in large deployments, the current SIP design cannot handle overload effectively, and SIP networks may suffer congestion collapse under heavy offered load. This paper introduces a distributed end-to-end overload control (DEOC) mechanism, which is deployed at the edge servers of SIP networks and is easy to implement. By applying overload control closest to the source of traffic, DEOC keeps throughput high for SIP networks even when the offered load exceeds the capacity of the network. Besides, it responds quickly to sudden variations of the offered load and achieves good fairness. Theoretical analysis and extensive simulations verify that DEOC is effective in controlling overload in SIP networks.
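To illustrate the general idea of edge-based overload control (this is not the paper's exact DEOC algorithm), the sketch below adapts an admission probability at an edge proxy from downstream acceptance feedback; thresholds and constants are invented.

```python
# Hedged sketch of edge-based overload control in the spirit of DEOC:
# an edge proxy throttles new requests when the measured acceptance
# ratio downstream drops. Generic illustration, not the paper's
# exact algorithm.
import random

class EdgeThrottle:
    def __init__(self, target_accept: float = 0.9):
        self.target = target_accept
        self.admit_prob = 1.0

    def update(self, accepted: int, attempted: int) -> None:
        """Adapt the admission probability from recent feedback."""
        if attempted == 0:
            return
        ratio = accepted / attempted
        # Multiplicative decrease when overloaded, gentle recovery otherwise.
        if ratio < self.target:
            self.admit_prob = max(0.05, self.admit_prob * ratio / self.target)
        else:
            self.admit_prob = min(1.0, self.admit_prob + 0.05)

    def admit(self) -> bool:
        return random.random() < self.admit_prob

throttle = EdgeThrottle()
for accepted, attempted in [(90, 100), (60, 100), (40, 100), (85, 100)]:
    throttle.update(accepted, attempted)
    print(f"feedback {accepted}/{attempted} -> admit_prob={throttle.admit_prob:.2f}")
```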
Abstract:
Here, gray and white matter changes after four weeks of videogame practice were analyzed using optimized voxel-based morphometry (VBM), cortical surface and cortical thickness indices, and white matter integrity computed from several projection, commissural, and association tracts relevant to cognition. Beginning with a sample of one hundred young females, twenty right-handed participants were recruited for the study and assigned to a practice or a control group carefully matched by their general cognitive ability scores. After the first scan, the practice group played 'Professor Layton and The Pandora's Box' 4 h per week for four weeks. A second scan was obtained at the end of practice and intelligence was measured again. Image analyses revealed gray and white matter changes in the practice group. Gray matter changes theoretically relevant to intelligence were observed for the practice group, mainly in frontal clusters (Brodmann areas 9 and 10) and also in smaller parietal and temporal regions. White matter findings were focused on the hippocampal cingulum and the inferior longitudinal fasciculus. These gray and white matter changes, presumably induced by practice, did not interact with intelligence test scores.
Abstract:
Rapid prototyping environments can speed up the research of visual control algorithms. We have designed and implemented a software framework for the fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAVs). We have applied a combination of a proxy-based network communication architecture and a custom Application Programming Interface. This allows multiple experimental configurations, such as drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV: the Parrot AR.Drone. Real tests have been performed on this platform, and the results show that the extra communication delay introduced by the framework is comparatively low, while it adds new functionality and flexibility to the selected drone. The implementation is open source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW
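The proxy-based communication idea can be pictured with the minimal sketch below, in which client code drives one drone or a swarm through identical proxy objects; the API, message format and addresses are invented for illustration and are not the framework's real interface.

```python
# Hypothetical illustration of the proxy idea: application code talks
# to a local proxy object, which forwards commands over the network, so
# the same client code can drive one drone or a swarm. This API is
# invented; it is not the framework's real interface.
import socket
import json

class DroneProxy:
    """Forwards commands to a drone's network endpoint via UDP."""
    def __init__(self, host: str, port: int):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_velocity(self, vx: float, vy: float, vz: float) -> None:
        msg = json.dumps({"cmd": "vel", "vx": vx, "vy": vy, "vz": vz})
        self.sock.sendto(msg.encode(), self.addr)

# The same client code works for a swarm: just hold several proxies.
swarm = [DroneProxy("10.0.0.10", 5556), DroneProxy("10.0.0.11", 5556)]
for drone in swarm:
    drone.send_velocity(0.5, 0.0, 0.0)   # move the whole swarm forward
```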
Abstract:
This paper presents an adaptive control for the auxiliary circuit, called ARCN (Auxiliary Resonant Commutating Network), used to achieve ZVS in full active bridge converters over a wide load range. Depending on the load conditions, the proposed control adapts the timing of the ARCN to minimize the losses. The principle of operation and implementation considerations are presented for a three-phase full active bridge converter, and different methods to implement the control according to the specifications are proposed. The experimental results presented verify the proposed methodology.
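One simple way to realize such load-dependent timing — the paper's own control law may differ — is to pre-characterize the auxiliary-circuit activation time needed for ZVS at a few load points and interpolate at run time, as in this sketch with invented table values:

```python
# Hedged sketch of load-adaptive auxiliary-circuit timing: interpolate
# a pre-characterized table of ARCN activation times versus load
# current. Table values are invented; the paper's control law may differ.
import bisect

LOAD_A    = [0.0, 5.0, 10.0, 20.0, 40.0]          # load current, A
T_ARCN_NS = [900.0, 650.0, 450.0, 260.0, 120.0]   # activation time, ns

def arcn_time_ns(i_load: float) -> float:
    """Linear interpolation of the activation time for a load current."""
    i = min(max(i_load, LOAD_A[0]), LOAD_A[-1])
    k = min(bisect.bisect_right(LOAD_A, i) - 1, len(LOAD_A) - 2)
    frac = (i - LOAD_A[k]) / (LOAD_A[k + 1] - LOAD_A[k])
    return T_ARCN_NS[k] + frac * (T_ARCN_NS[k + 1] - T_ARCN_NS[k])

for i in (2.0, 10.0, 33.0):
    print(f"I_load = {i:5.1f} A -> ARCN on-time ~ {arcn_time_ns(i):6.1f} ns")
```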
Abstract:
The growth of the Internet has increased the need for scalable congestion control mechanisms in high-speed networks. In this context, we propose a rate-based explicit congestion control mechanism in which the sources are provided with the rate at which they can transmit. These rates are computed with a distributed max-min fair algorithm, SLBN. The novelty of SLBN is that it combines two interesting features not simultaneously present in existing proposals: scalability and fast convergence to the max-min fair rates, even under high session churn. SLBN is scalable because routers maintain only a constant amount of state information (just three integer variables per link) and incur only a constant amount of computation per protocol packet, independently of the number of sessions that cross the router. Additionally, SLBN does not require processing any data packet, and it converges independently of sessions' RTTs. Finally, by design, the protocol is conservative when assigning rates, even in the presence of high churn, which helps prevent link overshoots in transient periods. We claim that, with all these features, our mechanism is a good candidate for use in real deployments.
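The max-min fair rates that SLBN converges to can be computed centrally by the classic water-filling procedure; the sketch below shows that reference computation on an invented topology (SLBN itself reaches the same allocation in a distributed way, keeping only three integers per link):

```python
# Water-filling computation of max-min fair rates — the allocation that
# SLBN approximates in a distributed fashion. Topology and capacities
# are invented for illustration.

def max_min_fair(capacity: dict, sessions: dict) -> dict:
    """capacity: link -> capacity; sessions: session -> list of links."""
    rate = {}
    remaining = dict(capacity)
    active = {s: set(links) for s, links in sessions.items()}
    while active:
        # Bottleneck link: smallest fair share among its active sessions.
        def fair_share(link):
            users = [s for s, ls in active.items() if link in ls]
            return remaining[link] / len(users) if users else float("inf")
        bottleneck = min(remaining, key=fair_share)
        share = fair_share(bottleneck)
        # Freeze every session crossing the bottleneck at that share.
        for s in [s for s, ls in active.items() if bottleneck in ls]:
            rate[s] = share
            for link in active.pop(s):
                remaining[link] -= share
        remaining.pop(bottleneck, None)
    return rate

caps = {"L1": 10.0, "L2": 6.0}
sess = {"A": ["L1"], "B": ["L1", "L2"], "C": ["L2"]}
print(max_min_fair(caps, sess))   # e.g. {'B': 3.0, 'C': 3.0, 'A': 7.0}
```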
Abstract:
Networks are the substance of human communities and societies; they constitute the structural framework on which we relate to each other, and they determine the way we do it, the way information is disseminated, or even the way people get things done. But the prominence of networks goes beyond the importance they acquire in social networks. Networks are found within numerous known structures, from protein interactions inside a cell to router connections on the internet. Social networks have been present on the internet since its beginnings, in email for example: inside every email client there are contact lists that, added together, constitute a social network. However, it has been with the emergence of social network sites (SNS) that these kinds of web applications have reached general awareness. SNS are now among the most popular sites on the web and those with the highest traffic. Sites such as Facebook and Twitter hold astonishing figures of active users, traffic and time invested in the site. Nevertheless, SNS functionality is not restricted to contact-oriented social networks, those focused on building your own list of contacts and interacting with them. There are other examples of sites that leverage social networking to foster user activity and engagement around other types of content. Examples go from one of the earliest SNS, Flickr, the photography-sharing site, to Github, the most popular open-source social repository nowadays. It is not an accident that the popularity of these websites comes hand-in-hand with their social network capabilities. The scenario is even richer, because SNS interact with each other, sharing and exporting contact lists and authentication services, as well as providing a valuable channel for publicizing user activity on other sites. These interactions are very recent, and SNS are still one step away from overcoming their condition of data silos and reaching a state of full interoperability between sites, in the same way email and instant messaging networks work today. This work introduces a technology for building websites with distributed social network features. First, it presents a technique for building a middleware component that can provide any kind of content management feature to Ruby on Rails, a popular model-view-controller (MVC) web development framework. This technique gives developers a tool that lets them abstract away the complexities of content management and focus on the particulars of the content itself. The same technique is also used to provide the framework with social network features. A new metric of code reuse is described in order to assess the validity of this kind of middleware in MVC frameworks. Secondly, the features of the most popular social network sites are analysed in order to find the common patterns that appear in them. This analysis is the ground for defining the requirements that a framework for building social network websites must fulfil. A reference architecture supporting these features is then proposed.
This architecture has been implemented in a software component, called Social Stream, and tested in several social networks, both contact- and content-oriented, in the context of a neighbourhood association as well as in EU-funded research projects. It has also been the basis of several Master's theses. It has been released as free and open-source software, has attracted a growing community, and is now being used beyond the scope of this work. The social architecture has enabled the definition of a new social access control model that overcomes several limitations present in access control models for social networks. Furthermore, paradigms and case studies of distributed SNS have been analysed, gathering a set of features that a framework for building distributed social networks must fulfil. Finally, the architecture of the framework has been extended to support distributed SNS capabilities. Its implementation has also been validated in EU-funded research projects.
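To give the flavour of a social (tie-based) access control decision of the kind the architecture enables — the actual Social Stream model is richer — consider this toy check that grants access when the owner has a qualifying tie to the requester; all ties and policies are invented:

```python
# Toy tie-based access control check, in the spirit of the social
# access control model mentioned above (the real Social Stream model
# is richer). Ties are directed (sender, receiver, relation) triples.

TIES = {
    ("alice", "bob", "friend"),
    ("alice", "carol", "acquaintance"),
    ("bob", "alice", "friend"),
}

def can_read(owner: str, requester: str, policy: set) -> bool:
    """Grant access if the owner has a tie to the requester whose
    relation is allowed by the content's policy."""
    if owner == requester:
        return True
    return any((owner, requester, rel) in TIES for rel in policy)

photo_policy = {"friend"}                 # only friends may see the photo
print(can_read("alice", "bob", photo_policy))    # True
print(can_read("alice", "carol", photo_policy))  # False
```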
Abstract:
The main aim of this final-year project is to provide the EUITT-UPM (Escuela Universitaria de Ingenieros Técnicos de Telecomunicación – Universidad Politécnica de Madrid) with a measurement workbench to characterize photovoltaic (PV) modules under real operating conditions. It is necessary to check the operation of the PV modules to make sure that their characteristics are close to the ones given by the manufacturers. I will introduce the concept of photovoltaic solar energy and describe stand-alone systems as well as systems connected to the electricity distribution grid. I will discuss the photovoltaic phenomenon and describe PV modules in order to identify the parts making up a module. Finally, I shall describe the measurement workbench, explaining the practical case carried out at the university. By measuring the I-V curve of PV modules under real operating conditions and subsequently extrapolating the results to Standard Test Conditions (STC), the manufacturers' data can be compared with the data obtained in this study.
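As a worked example of the extrapolation step, the sketch below applies a simplified, IEC 60891-style correction of measured Isc and Voc to Standard Test Conditions; the temperature coefficients and the outdoor reading are invented, and the project would use the manufacturer's datasheet coefficients:

```python
# Simplified extrapolation of outdoor I-V measurements to Standard Test
# Conditions (1000 W/m2, 25 degC cell temperature), in the spirit of
# the procedure described above. Coefficients are typical
# crystalline-Si values chosen for illustration.

G_STC, T_STC = 1000.0, 25.0       # W/m2, degC

def to_stc(isc: float, voc: float, g: float, t_cell: float,
           alpha: float = 0.0005, beta: float = -0.0032) -> tuple:
    """alpha, beta: relative temperature coefficients of Isc and Voc
    (1/degC). Returns (Isc_STC, Voc_STC)."""
    isc_stc = isc * (G_STC / g) * (1 + alpha * (T_STC - t_cell))
    voc_stc = voc * (1 + beta * (T_STC - t_cell))
    return isc_stc, voc_stc

# Hypothetical outdoor reading: 820 W/m2, 48 degC cell temperature.
isc_stc, voc_stc = to_stc(isc=7.4, voc=20.1, g=820.0, t_cell=48.0)
print(f"Isc(STC) = {isc_stc:.2f} A, Voc(STC) = {voc_stc:.2f} V")
# Compare these with the manufacturer's datasheet Isc and Voc.
```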