827 results for provider
Abstract:
Article published in the journal Int Fam Plan Perspect. 2003 Sep;29(3):112-20
Abstract:
Fucus vesiculosus L. (Phaeophyceae) is the most abundant, and hence ecologically most important, primary producer, carbon sink and habitat provider in the western Baltic Sea. All F. vesiculosus L. specimens were collected on 23 April 2014 from a depth of 0.2-1 m in the non-tidal Kiel Fjord, western Baltic Sea (54°27'N; 10°12'E), where this species forms dense and almost monospecific stands on stones. After sampling, the algal thalli were stored in a cooling box with water from the sampling site, transported to Bremerhaven and kept at 10 °C for one day in filtered seawater. Experiments were conducted with vegetative apical tips (6.7±0.5 cm length), the actively growing region of F. vesiculosus, which were randomly selected and cut from 144 different individuals prior to the experiments. These tips were acclimated to laboratory conditions for three days in filtered seawater at 10 °C before the start of the experiment. Furthermore, 30 additional vegetative apices were freeze-dried to document the initial biochemical status of F. vesiculosus in its native habitat. A temperature gradient was installed in a walk-in constant cooling chamber (15 °C) across nine water baths (5, 10, 15, 20, 24, 26, 27, 28 and 29 °C, ±0.1 °C), which were controlled by thermostats (5, 10 and 15 °C: Huber Variostat CC + Pilot ONE, Peter Huber Kältemaschinen GmbH, Offenburg, Germany; 20 and 28 °C: Haake DC3, Thermo Fisher Scientific Inc., Waltham, USA; 24, 26, 27 and 29 °C: Haake DC10). Every temperature treatment consisted of four 2 L glass beakers (n = 4). In each beaker, four F. vesiculosus apices were grown in 2 µm-filtered North Sea water diluted 1:1 with demineralized water and enriched with nutrients after Provasoli (1968; 1/10 enrichment), yielding a salinity of about 15.6, which matched habitat conditions. The algae were exposed to an irradiance of 130 µmol photons m⁻² s⁻¹ ±10 % (Powerstar HGI-TS 150 W, OSRAM GmbH, Bad Homburg, Germany), measured at the top of the beaker, under a 16:8 h light:dark cycle. The medium in the beakers was changed every third or fourth day and aerated with artificial air containing 380 ppm CO2 (gas mixing device; HTK Hamburg GmbH, Hamburg, Germany). Before the experiment, the algae were acclimated to the final temperatures in steps of 5 °C for 2 days each, beginning at 10 °C. After 21 days of exposure, three out of four samples per replicate were freeze-dried for further biochemical analyses; afterwards the thermostats were turned off to reduce the temperature to 16±0.4 °C for another 10 days, permitting growth under post-culture conditions.
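The reported culture salinity follows directly from the 1:1 dilution; as a quick worked check, assuming full-strength North Sea water of salinity about 31.2 (a value implied by the result but not stated in the text):

$$S_{\text{culture}} = \frac{S_{\text{NSW}}\,V_{\text{NSW}} + 0 \cdot V_{\text{demin}}}{V_{\text{NSW}} + V_{\text{demin}}} = \frac{31.2}{2} \approx 15.6$$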
Abstract:
Decreases in seawater pH and carbonate saturation state (Ω) following the continuous increase in atmospheric CO2 represent a process termed ocean acidification, which is predicted to become a major threat to marine calcifiers in the near future. Segmented, tropical, marine green macroalgae of the genus Halimeda form a calcareous skeleton that involves biotically initiated and induced calcification processes influenced by cell physiology. As Halimeda is an important habitat provider and major carbonate sediment producer in tropical shallow areas, alterations of these processes due to ocean acidification may cause changes in the skeletal microstructure that have major consequences for the alga and its environment, but related knowledge is scarce. This study used scanning electron microscopy to examine changes in the CaCO3 segment microstructure of Halimeda opuntia specimens that had been exposed to artificially elevated seawater pCO2 of 650 µatm for 45 d. In spite of elevated seawater pCO2, the calcification of needles, located at the former utricle walls, was not reduced, as frequent initiation of new needle-shaped crystals was observed. Needle abundance per µm² was 22 % higher and needle crystals were 14 % longer, but the needles were 42 % thinner compared with the control treatment. Moreover, lifetime cementation of the segments decreased under elevated seawater pCO2 due to a loss of micro-anhedral carbonate, as indicated by significantly thinner calcified rims of central utricles (35-173 % thinner compared with the control treatment). Decreased micro-anhedral carbonate suggests that seawater within the inter-utricular space becomes CaCO3-undersaturated (Ω < 1) during nighttime under conditions of elevated seawater pCO2, thereby favoring CaCO3 dissolution over micro-anhedral carbonate accretion. Less-cemented segments of H. opuntia may impair the environmental success of the alga, its carbonate sediment contribution, and the temporary storage of atmospheric CO2 within Halimeda-derived sediments.
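For reference, the saturation state Ω used above is the standard carbonate-system quantity (a textbook relation, not restated in the source): dissolution of CaCO3 becomes thermodynamically favored once Ω falls below 1.

$$\Omega = \frac{[\mathrm{Ca}^{2+}]\,[\mathrm{CO}_3^{2-}]}{K^{*}_{\mathrm{sp}}}, \qquad \Omega < 1 \;\Rightarrow\; \text{CaCO}_3\ \text{dissolution favored}$$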
Abstract:
The CoastColour Round Robin (CCRR) project (http://www.coastcolour.org), funded by the European Space Agency (ESA), was designed to bring together a variety of reference datasets and to use these to test algorithms and assess their accuracy for retrieving water quality parameters. This information was then used to help end-users of remote sensing products select the most accurate algorithms for their coastal region. To facilitate this, an inter-comparison of the performance of algorithms for the retrieval of in-water properties over coastal waters was carried out. The comparison used three types of datasets on which ocean colour algorithms were tested: Medium Resolution Imaging Spectrometer (MERIS) Level 2 match-ups, in situ reflectance measurements, and data generated by a radiative transfer model (HydroLight). The description and comparison of these three datasets are the focus of this paper. The datasets mainly consist of 6,484 marine reflectance measurements associated with various geometries (sensor viewing and solar angles), sky conditions and water constituents: Total Suspended Matter (TSM) and Chlorophyll-a (CHL) concentrations, and the absorption of Coloured Dissolved Organic Matter (CDOM). Inherent optical properties were also provided in the simulated datasets (5,000 simulations) and from 3,054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, and of CHL and TSM as a function of reflectance, from the three datasets are compared, and match-up and in situ sites where deviations occur are identified. The distributions of the three reflectance datasets are also compared with the simulated and in situ reflectances used previously by the International Ocean Colour Coordinating Group (IOCCG, 2006) for algorithm testing, showing that the CCRR data clearly extend the earlier set towards more turbid waters.
Abstract:
Rising atmospheric CO2 concentrations could cause calcium carbonate subsaturation of Arctic surface waters within the next 20 years, making these waters corrosive for calcareous organisms. It is presently unknown what effects this will have on Arctic calcifying organisms and the ecosystems of which they are integral components. So far, acidification effects on crustose coralline red algae (CCA) have only been studied in tropical and Mediterranean species. In this work, we investigated calcification rates of the CCA Lithothamnion glaciale, collected in northwest Svalbard, in laboratory experiments under future atmospheric CO2 concentrations. The algae were exposed to simulated Arctic summer and winter light conditions in two separate experiments at optimum growth temperatures. We found a significant negative effect of increased CO2 levels on the net calcification rates of L. glaciale in both experiments. Annual mean net dissolution of L. glaciale was estimated to start at an aragonite saturation state between 1.1 and 0.9, which is projected to occur in parts of the Arctic surface ocean between 2030 and 2050 if emissions follow 'business as usual' scenarios (SRES A2; IPCC 2007). The massive skeleton of CCA, which consists of more than 80% calcium carbonate, is considered crucial for withstanding natural stresses such as water movement, overgrowth or grazing. The observed strong negative response of this Arctic CCA to increased CO2 levels suggests severe threats from the projected ocean acidification for an important habitat provider in the Arctic coastal ocean.
Abstract:
The i2010 European Union strategy aims to ensure European leadership in ICT and to put the benefits of the Information Society at the service of the economy, society and personal quality of life, bearing in mind that, to date, success in Europe has been based on promoting fair competition in telecommunications markets and on creating a borderless market for contents and digital media. In this line, the European Commission has established that the Member States should contribute actively to the development and secure use of telematic services among their citizens.
More specifically, it assigns national, regional and local Public Administrations a role as drivers of the Information Society, requiring them to gradually provide citizens with Internet-based access to all administrative procedures. As a first step towards the secure use of the telematic services offered by public institutions, it is necessary to provide citizens with a digital identity that enables them to identify themselves unequivocally to a Service Provider or to other citizens. For this reason, most European countries, and others in the rest of the world, are promoting reliable systems for electronic identity management (eIDM), so that citizens, businesses and government departments (even in different Member States) can identify and certify their operations with precision, speed and simplicity. However, the management of this identity by Public Administrations poses a major challenge, which becomes more difficult when interoperability between administrations of different countries is needed, since individuals and entities hold different identification credentials according to their own national legal frameworks. Aware of the problem, the European Union has launched a series of projects with the aim of achieving interoperability of eIDMs between public institutions of different Member States. However, the solutions adopted to date are insufficient because they do not foresee all possible cases of user interaction with the institutions. In particular, they do not take into account a very important feature offered by the various national legal systems, namely the delegation of identity, by which a citizen can authorize another to act on his or her behalf to access certain services provided by public institutions. This thesis makes a set of contributions that address different aspects of these problems and that, together, enable interoperability and identity delegation in certain Identity Management Systems applied to the Public Administration environment. In the case of delegation, a dynamic identity delegation system between two generic entities is defined, which solves the problem of delegated access to the telematic services offered by Public Administrations. The proposed solution is based on the generation of a delegation token, built from a Proxy Certificate, that allows the delegating entity (the delegator) to establish identity delegation to another entity (the delegatee) on the basis of a subset of the delegator's attributes, while also establishing, within the token itself, restrictions on the set of services accessible to the delegatee and the validity period of the delegation. In addition, the mechanisms necessary both to revoke a delegation token and to check whether a delegation token has been revoked are presented. To this end, a scheme for the unique identification of delegation tokens and the creation of a new entity, the Delegation Token Revocation Authority, are proposed. Among the distinguishing characteristics of the proposed delegation system are that it is secure enough to be used in the Public Administration environment, that it does not require off-line mechanisms to generate the delegation, and that delegation can be performed instantaneously, without complex procedures or the participation of a large number of entities. Furthermore, the proposed delegation token integrates seamlessly into current Public Key Infrastructure (PKI).
Thus, since most European Public Administrations base their digital identity systems on PKI and X.509 identity certificates, the solution can be adopted in a real environment without major changes or modifications of behavior. Regarding interoperability, an exhaustive analysis and evaluation is performed of the most significant Identity Management System proposals aimed at achieving interoperability that have been made to date within the European Union framework, and a high-level identity management interoperability architecture for Public Administrations is proposed. This architecture is sufficiently generic to be applied both to the pan-European environment and to national, regional or local environments, so that interoperability in identity management is guaranteed at all levels of Public Administration. Finally, through the integration of the proposed dynamic identity delegation solution and the proposed interoperability architecture, a solution to the problem of identity delegation in a pan-European identity management scenario is presented, leading to a global pan-European interoperability architecture with identity delegation support.
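As a rough illustration of the delegation token described above, the sketch below builds a short-lived proxy-style X.509 certificate signed by the delegator, embedding the permitted services and validity period. The extension OID, names and service labels are illustrative assumptions, not the thesis's actual token profile.

```python
# Hypothetical sketch of a delegation token: a short-lived X.509
# proxy-style certificate signed by the delegator, carrying the
# delegated service list and a validity period. The OID and all
# names below are made up for illustration.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# In a real deployment the delegator key/cert come from the national
# eID PKI; here we generate throwaway keys so the sketch is runnable.
delegator_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
delegatee_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

ALLOWED_SERVICES_OID = x509.ObjectIdentifier("1.3.6.1.4.1.99999.1")  # made-up OID

def issue_delegation_token(delegator_name, delegatee_name, services, days_valid):
    """Build a proxy-certificate-like token restricted to `services`."""
    now = datetime.datetime.now(datetime.timezone.utc)
    builder = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, delegatee_name)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, delegator_name)]))
        .public_key(delegatee_key.public_key())
        .serial_number(x509.random_serial_number())  # unique id a revocation authority can track
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=days_valid))
        # Restriction on accessible services, encoded in a private extension.
        .add_extension(
            x509.UnrecognizedExtension(ALLOWED_SERVICES_OID, ",".join(services).encode()),
            critical=False,
        )
    )
    return builder.sign(delegator_key, hashes.SHA256())

token = issue_delegation_token("Alice (delegator)", "Bob (delegatee)",
                               ["tax-filing", "address-change"], days_valid=7)
print(token.serial_number)  # identifier the revocation authority would index
```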
Abstract:
Multi-user videoconferencing systems offer communication between more than two users, who are able to interact through their webcams, microphones and other components. The use of these systems has increased recently thanks, on the one hand, to improvements in Internet access for companies, universities and homes, where available bandwidth has grown while the delay in sending and receiving packets has decreased. On the other hand, the advent of Rich Internet Applications (RIA) means that a large part of web application logic and control has started to be implemented in the web browser. This has allowed developers to create web applications with a level of complexity comparable to traditional desktop applications running directly on the operating system. More recently, the use of Cloud Computing systems has improved application scalability and reduced the cost of backend systems. This offers the possibility of implementing web services on the Internet without large upfront expenditure on infrastructure and resources, both hardware and software. Nevertheless, there are few initiatives that aim to implement videoconferencing systems taking advantage of Cloud systems. This dissertation proposes a set of techniques, interfaces and algorithms for the implementation of videoconferencing systems on public and private Cloud Computing infrastructures. The mechanisms proposed here are based on the implementation of a basic videoconferencing system that runs in the web browser without any prior installation. To this end, the development of this thesis starts from a RIA application built with current technologies that allow users to access their webcams and microphones from the browser and to send the captured data over their Internet connections. Furthermore, interfaces have been implemented that allow end users to participate in videoconferencing rooms managed on different Cloud providers' servers. To do so, this dissertation builds on the results obtained with the previous techniques, and the backend resources were implemented in the Cloud. A traditional videoconferencing service previously implemented in the department was modified to meet typical Cloud Computing infrastructure requirements. This allowed us to validate whether public Cloud Computing infrastructures are suitable for the traffic generated by this kind of system. The analysis focused on the network level and on the processing capacity and stability of the Cloud Computing systems. To broaden this validation, more general considerations were also covered, such as multimedia data processing in the Cloud, an area where research activity has increased in recent years. The last stage of this dissertation is the design of a new methodology for implementing these kinds of applications in hybrid clouds, reducing the cost of videoconferencing systems. Finally, this dissertation opens a discussion about the conclusions obtained throughout this study, yielding useful information from the different stages of the implementation of videoconferencing systems on Cloud Computing systems.
Abstract:
As defined in the ATM 2000+ Strategy (Eurocontrol 2001), the mission of the Air Traffic Management (ATM) System is: “For all the phases of a flight, the ATM system should facilitate a safe, efficient, and expedite traffic flow, through the provision of adaptable ATM services that can be dimensioned in relation to the requirements of all the users and areas of the European air space. The ATM services should comply with the demand, be compatible, operate under uniform principles, respect the environment and satisfy the national security requirements.” The objective of this paper is to present a methodology designed to evaluate the status of the ATM system in terms of the relationship between offered capacity and traffic demand, identifying weakness areas and proposing solutions. The first part of the methodology concerns the characterization and evaluation of the current system, while the second part proposes an approach to analyze its possible development limit. As part of the work, general criteria are established to define the framework in which the analysis and diagnostic methodology is placed: the use of Air Traffic Control (ATC) sectors as the analysis unit, the presence of network effects, the tactical focus, the relative character of the analysis, objectivity, and a high-level assessment that allows assumptions about the human and Communications, Navigation and Surveillance (CNS) elements, considered the typical high-density air traffic resources. The steps of the methodology start with the definition of indicators and metrics, such as the nominal criticality or the nominal efficiency of a sector; scenario characterization, where the necessary data are collected; network effects analysis, to study the relations among the constitutive elements of the ATC system; diagnosis by means of the “System Status Diagram”; analytical study of the ATC system development limit; and, finally, formulation of conclusions and proposals for improvement. This methodology was employed by Aena (the Spanish airport manager and air navigation service provider) and INECO (a Spanish transport engineering company) in the analysis of the Spanish ATM system within the framework of the Spanish airspace capacity sustainability programme, although it could be applied elsewhere.
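The abstract names the indicators but not their definitions. Purely as an illustration, the sketch below computes a sector "criticality" as the ratio of peak demand to declared capacity and flags weakness areas; this definition, the threshold and the sector data are assumptions, not the paper's.

```python
# Illustrative sector diagnostic, NOT the paper's actual metric definitions:
# "nominal criticality" is assumed here to be peak demand over declared
# capacity, and sectors above 1.0 are flagged as weakness areas.
from dataclasses import dataclass

@dataclass
class Sector:
    name: str
    declared_capacity: int   # flights/hour the sector can safely handle
    peak_demand: int         # flights/hour observed at the busiest period

def nominal_criticality(s: Sector) -> float:
    return s.peak_demand / s.declared_capacity

sectors = [Sector("LECM-N", 46, 52), Sector("LECB-E", 40, 33)]  # made-up data
for s in sectors:
    c = nominal_criticality(s)
    status = "weakness area" if c > 1.0 else "within capacity"
    print(f"{s.name}: criticality={c:.2f} ({status})")
```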
Abstract:
Internet use by citizens to interact with Public Administrations or to carry out e-commerce activities grows day by day, as evidenced by the various studies conducted in this area, such as those undertaken by the Observatorio Nacional de las Telecomunicaciones y la Sociedad de la Información (http://www.ontsi.red.es/ontsi/). It is therefore necessary to identify the parties involved in these transactions, as well as to provide them with the necessary confidentiality and to guarantee non-repudiation.
One of the instruments which, together with appropriate cryptographic mechanisms, provides these requirements is the web server electronic certificate (SSL certificate). There are numerous publications devoted to analyzing these cryptographic mechanisms, and many security studies related to the encryption algorithms, symmetric and asymmetric, and the size of the cryptographic keys. However, the security related to the use of the SSL/TLS protocols is closely linked to two lesser-known aspects: the degree of security with which the server certificates that implement these protocols are issued, and the use that software applications, especially web browsers, make of the fields contained in the certificate profile. Therefore, different server certificate profiles and different security levels associated with their issuance procedure give rise to different types of electronic certificates. If, in addition, the legal framework affecting each of them is considered, it can be concluded that there is a typology of server certificates with different degrees of security or trust. Additionally, there are other requirements that may go unnoticed by both certificate holders and users of e-commerce and e-Government services. For example, the degree of trust that web browsers grant to the Certification Authorities issuing the certificates and how these acquire that status, or the possibility of verifying the revocation status of the electronic certificate. This work analyzes all these requirements and establishes, based on them, the corresponding typology of web server electronic certificates. Specifically, the characteristics analyzed for each type of certificate are: legal security; technical standards; guarantees of the true identity of the domain; verification of revocation status; and requirements on the Certification Service Provider. The types of electronic certificates analyzed are: web server certificates, both self-signed and issued by a Certification Service Provider; single-domain and multi-domain certificates (wildcard and SAN); Extended Validation certificates; and “sede electrónica” certificates (specific certificates for websites of Spanish Public Administrations).
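A minimal sketch of how the certificate-profile fields discussed above (domain identity via the SAN extension, revocation pointers via CRL and OCSP) can be inspected programmatically, using the standard Python ssl module and the cryptography package; the host name is a placeholder.

```python
# Hedged sketch: inspect the profile fields of a live server certificate
# (subject, issuer, SAN list, CRL distribution points, OCSP responders).
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

def describe_server_certificate(host: str, port: int = 443) -> None:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("Subject:", cert.subject.rfc4514_string())
    print("Issuer: ", cert.issuer.rfc4514_string())

    # Domain identity: single-domain, wildcard, or multi-domain (SAN list).
    san = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
    print("SAN:", san.get_values_for_type(x509.DNSName))

    # Revocation-checking endpoints: CRL distribution points and OCSP (AIA).
    try:
        crl = cert.extensions.get_extension_for_oid(
            ExtensionOID.CRL_DISTRIBUTION_POINTS).value
        print("CRL points:", [dp.full_name for dp in crl])
    except x509.ExtensionNotFound:
        print("No CRL distribution points")
    try:
        aia = cert.extensions.get_extension_for_oid(
            ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
        ocsp = [d.access_location.value for d in aia
                if d.access_method == AuthorityInformationAccessOID.OCSP]
        print("OCSP responders:", ocsp)
    except x509.ExtensionNotFound:
        print("No AIA extension")

describe_server_certificate("www.example.org")  # placeholder host
```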
Abstract:
The purpose of this final degree project is the study and development of an Android-based application that will provide support and feedback to the public transport services in Krakow, Poland.
The main functionality of the system will be to track the position of a desired bus or tram line and display its position on the map. To achieve this, we will need three stages: the first will be to implement a system that sends the geographical position of the public transport vehicles; the second will be to collect these data on a web server; and the last will be to get the last location registered for the desired line and display it on the map. For tracking the vehicles, we would need access to a public API connected to each bus/tram GPS. As this does not exist in Krakow, or at least is not available for public use, we will develop a second Android application that will do the server-side job. In a simple interface we will be able to choose the line number and a code letter that identifies each vehicle (e.g. we can have six trams belonging to line 24 working at the same time). It will take the current mobile geolocation, which includes getting latitude, longitude and bearing from the GPS provider. Thus, we will be able to simulate how the system works in real time by using the server app inside a tram and, at the same time, using the client app to make requests and display the information for that tram. The client will also be able to check the route of the desired line without Internet access. We will store the route and stops for each line locally in the phone memory using XML files, because they require little space and it is useful to be able to check a route at any time, regardless of network access. The app will also offer the functionality of checking the timetable for a line, but in this case it will link to the official MPK website, so Internet access will be required. To store all the coordinates for each vehicle at every moment, we will need to create a database on a server. We have decided that the easiest way is to use MySQL and PHP for the deployment of the service: we will send GET and POST requests to the PHP scripts, which will make the corresponding queries to our database. Finally, based on all the collected data, we will be able to detect errors in the public transport timetables. We will check at what time a vehicle passed each stop and compare it with the official timetable to find timing mistakes, determining whether a deviation is an occasional event related to external factors (e.g. traffic jams, breakdowns...) or whether it happens so often that the official timetable should be reviewed and corrected.
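A minimal sketch of the final analysis step, assuming the stop-passage records have already been retrieved from the MySQL database; the data layout, stop names and thresholds are illustrative assumptions.

```python
# Illustrative delay analysis: compare recorded stop-passage times with the
# official timetable and separate one-off incidents from systematic delays.
from datetime import datetime, timedelta
from statistics import median

def parse(t: str) -> datetime:
    return datetime.strptime(t, "%H:%M")

# (stop, scheduled, observed) rows for one line over several days (made up).
rows = [
    ("Teatr Bagatela", "08:10", "08:12"),
    ("Teatr Bagatela", "09:10", "09:24"),   # big one-off delay (e.g. traffic jam)
    ("Teatr Bagatela", "10:10", "10:13"),
    ("Teatr Bagatela", "11:10", "11:12"),
]

delays = [(parse(obs) - parse(sched)) / timedelta(minutes=1)
          for _, sched, obs in rows]
typical = median(delays)

if typical > 2.0:
    print(f"Systematic delay of ~{typical:.0f} min: timetable should be revised")
else:
    outliers = [d for d in delays if d - typical > 5.0]
    print(f"Timetable OK (median delay {typical:.1f} min); "
          f"{len(outliers)} one-off incident(s), likely external factors")
```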
Abstract:
This thesis proposes a comprehensive approach to the monitoring and management of Quality of Experience (QoE) in multimedia delivery services over IP. It addresses the problem of preventing, detecting, measuring, and reacting to QoE degradations, under the constraints of a service provider: the solution must scale to a wide IP network delivering individual media streams to thousands of users. The proposed monitoring solution is called QuEM (Qualitative Experience Monitoring). It is based on detecting degradations in the network Quality of Service (packet losses, abrupt bandwidth drops...) and mapping each degradation event to a qualitative description of its effect on the perceived Quality of Experience (audio mutes, video artifacts...). This mapping relies on the transport and Network Abstraction Layer information of the coded streams, and allows a good characterization of the most relevant defects observed in this kind of service: screen freezing, macroblocking, audio mutes, video quality drops, delay issues, and service outages. The results have been validated through subjective quality assessment tests. The methodology used for those tests was in turn designed to mimic as closely as possible the viewing conditions of a real user of such services: the impairments to be evaluated are introduced at random points in a continuous video sequence.
Based on the monitoring solution, several applications have been proposed as well: an unequal error protection system which provides higher protection to the parts of the stream that are more critical for the QoE; a solution which applies the same principles to minimize the impact of incomplete segment downloads in HTTP Adaptive Streaming; and a selective scrambling algorithm which encrypts only the most sensitive parts of the media stream. A fast channel change application is also presented, as well as a discussion of how to apply the previous results and concepts to a 3D video scenario.
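A toy sketch of the qualitative mapping idea described above, from network-QoS degradation events to the perceptual defects they are likely to cause; the event fields and mapping rules are illustrative, not QuEM's actual classifier.

```python
# Hedged sketch of the QoS-to-QoE mapping: each network degradation event is
# translated into a qualitative description of its likely perceptual effect.
from dataclasses import dataclass

@dataclass
class QosEvent:
    kind: str            # "packet_loss" | "bandwidth_drop"
    stream_part: str     # "audio", "video_I_frame", "video_P_frame"
    duration_s: float

def qualitative_effect(e: QosEvent) -> str:
    if e.kind == "packet_loss" and e.stream_part == "audio":
        return f"audio mute (~{e.duration_s:.1f} s silence)"
    if e.kind == "packet_loss" and e.stream_part == "video_I_frame":
        return "screen freeze or macroblocking until the next random access point"
    if e.kind == "packet_loss":
        return "localized video artifacts (macroblocking)"
    if e.kind == "bandwidth_drop":
        return "video quality drop, or service outage if sustained"
    return "no user-visible defect expected"

for ev in [QosEvent("packet_loss", "video_I_frame", 0.2),
           QosEvent("bandwidth_drop", "video_P_frame", 3.0)]:
    print(qualitative_effect(ev))
```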
Abstract:
This thesis is a study of the network accesses used by the services to which users of telecare services are subscribed; at the end, it proposes a connection-drop forecasting model that keeps the network access from being an obstacle to service provision. To reach these objectives, the document begins by presenting what is currently understood by telemedicine and telecare services, paying attention to the actors involved and to the uses and benefits these services bring both to patients and to public administrations. Once we know what they consist of, we focus on the access networks used to provide telemedicine services, with their advantages and disadvantages. Since not all services have the same general requirements of reliability or transmission speed, we examine how the network provider can guarantee the needs of each type of service. The next step towards establishing the drop-forecasting model is to determine the technical and stakeholder requirements for providing a telecare service in a patient's home. This includes studying what equipment is needed, how to manage it, and how to mark traffic so that the network operator knows how to treat it according to the telecare service in use, leading to a model for supervising telecare links.
At this point we are ready to establish a forecasting model for connection drops, describing the logic needed for it and putting it into practice with two concrete examples: a home telemonitoring service and an ambulatory telemonitoring service. Finally, we recap what has been studied in this document and make a series of recommendations.
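The abstract does not spell out the forecasting logic; the sketch below shows one plausible shape for such a model, predicting a connection drop from recent packet-loss and latency trends. All thresholds and window sizes are assumptions.

```python
# Hedged sketch of a connection-drop forecasting rule for a telecare link:
# a drop is predicted when short-term loss or latency trends exceed
# thresholds derived from the link's recent history (illustrative only).
from collections import deque
from statistics import mean

class DropForecaster:
    def __init__(self, window: int = 12):
        self.loss = deque(maxlen=window)   # recent packet-loss samples (%)
        self.rtt = deque(maxlen=window)    # recent round-trip times (ms)

    def observe(self, loss_pct: float, rtt_ms: float) -> None:
        self.loss.append(loss_pct)
        self.rtt.append(rtt_ms)

    def drop_likely(self) -> bool:
        if len(self.loss) < self.loss.maxlen:
            return False                   # not enough history yet
        recent_loss = mean(list(self.loss)[-3:])
        rtt_rising = self.rtt[-1] > 1.5 * mean(self.rtt)
        return recent_loss > 5.0 or rtt_rising

f = DropForecaster()
for loss, rtt in [(0.1, 40)] * 10 + [(3.0, 70), (6.5, 120)]:
    f.observe(loss, rtt)
print("Pre-emptive failover advised:", f.drop_likely())
```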
Abstract:
Today P2P faces two important challenges: the design of mechanisms to encourage users' collaboration in multimedia live streaming services, and the design of reliable algorithms with QoS provision that encourage multimedia providers to employ the P2P topology in commercial live streaming systems. We believe that these two challenges are tightly related and that much remains to be done in this respect. This paper analyzes the effect of user behavior in a multi-tree P2P overlay and describes a business model based on a monetary discount as the incentive in a P2P-Cloud multimedia streaming system. We believe a discount model can boost users' cooperation and loyalty and enhance the overall system integrity and performance. Moreover, the model bounds the provider's revenue and cost when the P2P system is leveraged on a cloud infrastructure. Our case study shows that a streaming system provider can establish or adapt its business model by applying the described bounds to achieve a good discount-revenue trade-off and promote the system to users.
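As a hedged illustration of the discount-revenue trade-off, the sketch below models provider profit when cooperating peers offload delivery from the cloud; the formulas, prices and cooperation shares are assumptions, not the paper's model.

```python
# Illustrative P2P-Cloud discount model: users who contribute upload
# bandwidth get a monetary discount, which in turn reduces the provider's
# cloud delivery cost. All numbers and formulas are made up for the sketch.
def provider_profit(n_users, fee, discount_rate, coop_share,
                    cloud_cost_per_user):
    # Cooperating peers relay streams to others, so the provider only pays
    # cloud delivery costs for the non-cooperating share of the audience.
    revenue = n_users * fee * (1 - discount_rate * coop_share)
    cost = n_users * (1 - coop_share) * cloud_cost_per_user
    return revenue - cost

# Compare two operating points: no discount (little cooperation) versus a
# 20% discount that, by assumption, raises cooperation to 60% of users.
for discount, coop in [(0.00, 0.10), (0.20, 0.60)]:
    p = provider_profit(n_users=10_000, fee=5.0, discount_rate=discount,
                        coop_share=coop, cloud_cost_per_user=2.0)
    print(f"discount={discount:.0%}, cooperation={coop:.0%}: profit={p:,.0f}")
```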
Abstract:
This paper tackles the optimization of applications in multi-provider hybrid cloud scenarios from an economic point of view. In these scenarios, the great majority of solutions offer the automatic allocation of resources on different cloud providers based on their current prices. Our approach, however, introduces a novel solution by making maximum use of divide and conquer: the paper describes a methodology to create cost-aware cloud applications that can be broken down into the three most important components of cloud infrastructures: computation, network and storage. A real videoconferencing system has been modified in order to evaluate this idea with both theoretical and empirical experiments. This system has become a widely used tool in several national and European projects for e-learning and collaboration purposes.
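A toy sketch of the divide-and-conquer idea above: price each of the three components separately and place it with the cheapest provider; provider names, unit prices and demands are made up.

```python
# Illustrative per-component placement across providers. Splitting the app
# into computation, network and storage lets each piece go to the provider
# that prices it lowest, beating any single-provider deployment.
PRICES = {  # unit prices: compute $/vCPU-h, network $/GB out, storage $/GB-month
    "provider_A": {"computation": 0.048, "network": 0.090, "storage": 0.023},
    "provider_B": {"computation": 0.042, "network": 0.120, "storage": 0.020},
    "provider_C": {"computation": 0.055, "network": 0.070, "storage": 0.026},
}

DEMAND = {"computation": 2_000, "network": 15_000, "storage": 500}  # monthly units

def cheapest_split(prices, demand):
    plan, total = {}, 0.0
    for component, units in demand.items():
        best = min(prices, key=lambda p: prices[p][component])
        cost = prices[best][component] * units
        plan[component] = (best, cost)
        total += cost
    return plan, total

plan, total = cheapest_split(PRICES, DEMAND)
for comp, (prov, cost) in plan.items():
    print(f"{comp:12s} -> {prov} (${cost:,.2f}/month)")
single_best = min(sum(p[c] * DEMAND[c] for c in DEMAND) for p in PRICES.values())
print(f"split total: ${total:,.2f}/month vs single-provider minimum ${single_best:,.2f}")
```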
Abstract:
Today P2P faces two important challenges: the design of mechanisms to encourage users' collaboration in multimedia live streaming services, and the design of reliable algorithms with QoS provision that encourage multimedia providers to employ the P2P topology in commercial streaming services. We believe that these two challenges are tightly related and that much remains to be done in this respect. This paper proposes a novel monetary incentive for P2P multimedia streaming. The incentive model classifies users into groups according to the perceived video quality. We apply the model to a streaming system's billing model in order to evaluate its feasibility and visualize its quantitative effect on users' motivation and the provider's profit. We conclude that a monetary incentive can boost users' cooperation and loyalty and enhance the overall system integrity and performance. Moreover, the model defines the constraints on the provider's cost and profit when the system is leveraged on the cloud. Considering those constraints, a multimedia content provider can adapt the billing model of its streaming service and achieve a desirable discount-profit trade-off, which will also contribute to better promotion of the service among users on the Internet.
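A minimal sketch of the grouping step described above: classify users by the video quality they perceived during a billing period (here a mean opinion score) and assign a per-group discount; the group boundaries and discount values are illustrative assumptions, not the paper's.

```python
# Hedged sketch: users are grouped by perceived video quality and each group
# gets a billing discount; thresholds and rates are made up for illustration.
GROUPS = [  # (minimum mean opinion score, group name, discount on next bill)
    (4.0, "premium-quality", 0.00),   # got what they paid for: no discount
    (3.0, "degraded",        0.10),
    (0.0, "poor",            0.25),   # compensate heavily to retain loyalty
]

def classify(mos: float):
    for threshold, name, discount in GROUPS:
        if mos >= threshold:
            return name, discount
    raise ValueError("MOS must be non-negative")

for user, mos in [("u1", 4.6), ("u2", 3.4), ("u3", 2.1)]:
    name, disc = classify(mos)
    print(f"{user}: perceived quality {mos} -> group '{name}', discount {disc:.0%}")
```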