924 results for Scenario Programming, Markup Language, End User Programming
Abstract:
PURPOSE Confidence intervals (CIs) are integral to the interpretation of the precision and clinical relevance of research findings. The aim of this study was to ascertain the frequency of reporting of CIs in leading prosthodontic and dental implantology journals and to explore possible factors associated with improved reporting. MATERIALS AND METHODS Thirty issues of nine journals in prosthodontics and implant dentistry were accessed, covering the years 2005 to 2012: The Journal of Prosthetic Dentistry, Journal of Oral Rehabilitation, The International Journal of Prosthodontics, The International Journal of Periodontics & Restorative Dentistry, Clinical Oral Implants Research, Clinical Implant Dentistry and Related Research, The International Journal of Oral & Maxillofacial Implants, Implant Dentistry, and Journal of Dentistry. Articles were screened and the reporting of CIs and P values recorded. Other information including study design, region of authorship, involvement of methodologists, and ethical approval was also obtained. Univariable and multivariable logistic regression was used to identify characteristics associated with reporting of CIs. RESULTS Interrater agreement for the data extraction performed was excellent (kappa = 0.88; 95% CI: 0.87 to 0.89). CI reporting was limited, with mean reporting across journals of 14%. CI reporting was associated with journal type, study design, and involvement of a methodologist or statistician. CONCLUSIONS Reporting of CI in implant dentistry and prosthodontic journals requires improvement. Improved reporting will aid appraisal of the clinical relevance of research findings by providing a range of values within which the effect size lies, thus giving the end user the opportunity to interpret the results in relation to clinical practice.
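As a reminder of what is being counted in this study, a CI simply turns a point estimate into the "range of values within which the effect size lies" that the authors mention. A purely illustrative worked example (the sample size is hypothetical and not taken from the study), using the standard Wald interval for a proportion:

```latex
% 95% Wald confidence interval for a proportion \hat{p} observed among n articles
\hat{p} \;\pm\; z_{0.975}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
\qquad\text{e.g. } \hat{p}=0.14,\; n=100:\quad
0.14 \;\pm\; 1.96\sqrt{\frac{0.14\cdot 0.86}{100}} \;\approx\; 0.14 \pm 0.07 .
```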
Abstract:
The widespread use of wireless-enabled devices and the increasing capabilities of wireless technologies have promoted multimedia content access and sharing among users. However, the quality perceived by users still depends on multiple factors such as video characteristics, device capabilities, and link quality. While video characteristics include the temporal and spatial complexity of the video as well as its coding complexity, one of the most important device characteristics is battery lifetime. There is a need to assess how these aspects interact and how they impact overall user satisfaction. This paper advances previous work by proposing and validating a flexible framework, named EViTEQ, to be applied in real testbeds to satisfy the requirements of performance assessment. EViTEQ is able to measure network interface energy consumption with high precision, while being completely technology independent and assessing application-level quality of experience. The results obtained in the testbed show the relevance of combined multi-criteria measurement approaches, leading to a better evaluation of perceived end-user satisfaction.
Abstract:
This paper presents a new tool for large-area photo-mosaicking (LAPM tool). This tool was developed specifically for the purpose of underwater mosaicking, and it is aimed at providing end-user scientists with an easy and robust way to construct large photo-mosaics from any set of images. It is notably capable of constructing mosaics with an unlimited number of images on any modern computer (minimum 1.30 GHz, 2 GB RAM). The mosaicking process can rely on both feature matching and navigation data. This is complemented by an intuitive graphical user interface, which gives the user the ability to select feature matches between any pair of overlapping images. Finally, mosaic files are given geographic attributes that permit direct import into ArcGIS. So far, the LAPM tool has been successfully used to construct geo-referenced photo-mosaics with photo and video material from several scientific cruises. The largest photo-mosaic contained more than 5000 images for a total area of about 105,000 m². This is the first article to present and to provide a finished and functional program to construct large geo-referenced photo-mosaics of the seafloor using feature detection and matching techniques. It also presents concrete examples of photo-mosaics produced with the LAPM tool.
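The mosaicking pipeline described above rests on pairwise feature detection and matching between overlapping images. The abstract does not show the LAPM tool's own implementation; the following is a minimal sketch of that kind of step using OpenCV's ORB detector and a brute-force matcher (the library choice and file names are assumptions for illustration, not the tool's code):

```python
import cv2
import numpy as np

# Load two overlapping seafloor photos (hypothetical file names).
img1 = cv2.imread("photo_001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_002.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the best correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

# Estimate the homography that registers image 2 onto image 1 (one mosaic step).
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(inliers.sum())} inlier matches; homography:\n{H}")
```

In a full mosaic, such pairwise registrations are combined (here the abstract notes navigation data can also constrain the placement) before blending the images into a single geo-referenced product.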
Abstract:
In 2005, the International Ocean Colour Coordinating Group (IOCCG) convened a working group to examine the state of the art in ocean colour data merging, which showed that the research techniques had matured sufficiently for creating long multi-sensor datasets (IOCCG, 2007). As a result, ESA initiated and funded the DUE GlobColour project (http://www.globcolour.info/) to develop a satellite-based ocean colour data set to support global carbon-cycle research. It aims to satisfy the scientific requirement for a long (10+ year) time-series of consistently calibrated global ocean colour information with the best possible spatial coverage. This has been achieved by merging data from the three most capable sensors: SeaWiFS on GeoEye's Orbview-2 mission, MODIS on NASA's Aqua mission and MERIS on ESA's ENVISAT mission. In setting up the GlobColour project, three user organisations were invited to help. Their roles are to specify the detailed user requirements, act as a channel to the broader end-user community, and provide feedback and assessment of the results. The International Ocean Carbon Coordination Project (IOCCP), based at UNESCO in Paris, provides direct access to the carbon-cycle modelling community's requirements and to the modellers themselves who will use the final products. The UK Met Office's National Centre for Ocean Forecasting (NCOF) in Exeter, UK, provides an understanding of the requirements of oceanography users, and the IOCCG brings its understanding of global user needs and valuable advice on best practice within the ocean colour science community. The three-year project kicked off in November 2005 under the leadership of ACRI-ST (France). The first year was a feasibility demonstration phase that was successfully concluded at a user consultation workshop organised by the Laboratoire d'Océanographie de Villefranche, France, in December 2006. Error statistics and inter-sensor biases were quantified by comparison with in situ measurements from moored optical buoys and ship-based campaigns, and used as an input to the merging. The second year was dedicated to the production of the time series. In total, more than 25 Tb of input (level 2) data have been ingested and 14 Tb of intermediate and output products created, with 4 Tb of data distributed to the user community. Quality control (QC) is provided through the Diagnostic Data Sets (DDS), which are extracted sub-areas covering locations of in situ data collection or interesting oceanographic phenomena. This Full Product Set (FPS) covers global daily merged ocean colour products in the time period 1997-2006 and is freely available for use by the worldwide science community at http://www.globcolour.info/data_access_full_prod_set.html. The GlobColour service distributes global daily, 8-day and monthly data sets at 4.6 km resolution for chlorophyll-a concentration, normalised water-leaving radiances (412, 443, 490, 510, 531, 555, 620, 670, 681 and 709 nm), diffuse attenuation coefficient, coloured dissolved and detrital organic materials, total suspended matter or particulate backscattering coefficient, turbidity index, cloud fraction and quality indicators. Error statistics from the initial sensor characterisation are used as an input to the merging methods and propagate through the merging process to provide error estimates for the output merged products.
These error estimates are a key component of GlobColour, as they are invaluable to the users, particularly the modellers who need them in order to assimilate the ocean colour data into ocean simulations. An intensive phase of validation has been undertaken to assess the quality of the data set. In addition, inter-comparisons between the different merged datasets will help in further refining the techniques used. Both the final products and the quality assessment were presented at a second user consultation in Oslo on 20-22 November 2007, organised by the Norwegian Institute for Water Research (NIVA); presentations are available on the GlobColour WWW site. At the request of the ESA Technical Officer for the GlobColour project, the FPS data set was mirrored in the PANGAEA data library.
Abstract:
Purpose The purpose of this paper is to present the elements and evaluation methods that should be included in a framework for evaluating the achievements and impacts of transport projects supported in EC Framework Programmes (FP). Further, the paper discusses the potential of such an evaluation framework to produce recommendations on future transport research and policy objectives, as well as mutual learning as a basis for strategic long-term planning. Methods The paper describes the two-dimensional evaluation methodology developed in the course of the FP7 METRONOME project. The dimensions are: (1) achievement of project objectives and targets at different levels and (2) research project impacts according to four impact groups. The methodology uses four complementary approaches in evaluation, namely evaluation matrices, coordinator questionnaires, lead user interviews and workshops. Results Based on testing the methodology with a sample of FP5 and FP6 projects, the main results relating to the rationale, implementation and achievements of FP projects are presented. In general, achievement of objectives in both FPs was good. The strongest impacts were identified within the impact group of management and co-ordination. Scientific and end-user impacts of the projects were also adequate, but wider societal impacts were quite modest. The paper concludes with a discussion of both the theoretical and practical implications of the proposed methodology and by presenting some relevant future research needs.
Abstract:
Cultural content on the Web is available in various domains (cultural objects, datasets, geospatial data, moving images, scholarly texts and visual resources), concerns various topics, is written in different languages, is targeted at both laymen and experts, and is provided by different communities (libraries, archives, museums and the information industry) and individuals (Figure 1). The integration of information technologies and cultural heritage content on the Web is expected to have an impact on everyday life from the point of view of institutions, communities and individuals. In particular, collaborative environments can recreate 3D navigable worlds that offer new insights into our cultural heritage (Chan 2007). However, the main barrier is finding and relating cultural heritage information, both for end-users of cultural content and for the organisations and communities managing and producing it. In this paper, we explore several visualisation techniques for supporting cultural interfaces, where the role of metadata is essential for supporting search and communication among end-users (Figure 2). A conceptual framework was developed to integrate the data, purpose, technology, impact, and form components of a collaborative environment. Our preliminary results show that collaborative environments can help with cultural heritage information sharing and communication tasks because of the way in which they provide a visual context to end-users. They can be regarded as distributed virtual reality systems that offer graphically realised, potentially infinite, digital information landscapes. Moreover, collaborative environments also provide a new way of interaction between an end-user and a cultural heritage data set. Finally, the visualisation of a dataset's metadata plays an important role in helping end-users in their search for heritage content on the Web.
Abstract:
This article describes the design and implementation of the RVDynDB database (Rail Vehicle Dynamic parameters DataBase), which is intended to be an extensive repository of the public-domain models used in the dynamic simulation of rail vehicles worldwide. An XML data model was chosen for its flexibility, extensibility and platform independence: it facilitates the storage of data from very heterogeneous sources while allowing the contents of the database to be shared with other users over the Internet. The article also presents RVDynML (Rail Vehicle Dynamic parameters Markup Language), which defines the structure of the information stored in the database. Being an XML-based language, it could in time become a standard for exchanging data on the main construction parameters that define the dynamic behaviour of rail vehicles. A total of 173 bibliographic references were selected, and their data were used to build the database, which comprises 957 records. Finally, a specific MATLAB application was developed to manage searches of the database, using a Java API that provides a DOM interface for accessing, modifying, inserting and deleting the elements and attributes of an XML document.
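The abstract describes an XML data model (RVDynML) queried through a DOM interface. As a rough illustration of what such a workflow looks like, the sketch below parses a made-up vehicle record with Python's built-in DOM module; the element and attribute names are invented for the example and are not the actual RVDynML schema, and the thesis itself uses MATLAB with a Java DOM API rather than Python:

```python
from xml.dom.minidom import parseString

# A made-up record loosely in the spirit of RVDynML (not the real schema).
record = """<vehicle id="example-001">
  <source reference="hypothetical-ref-1995"/>
  <carbody mass="32000" unit="kg"/>
  <suspension stage="primary" stiffness_z="1.2e6" unit="N/m"/>
</vehicle>"""

doc = parseString(record)

# DOM access: read, modify, and add elements/attributes.
for elem in doc.getElementsByTagName("suspension"):
    print(elem.getAttribute("stage"), elem.getAttribute("stiffness_z"))

carbody = doc.getElementsByTagName("carbody")[0]
carbody.setAttribute("mass", "33500")          # modify an attribute

bogie = doc.createElement("bogie")             # insert a new element
bogie.setAttribute("wheelbase", "2.5")
doc.documentElement.appendChild(bogie)

print(doc.documentElement.toxml())
```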
Abstract:
Laboratory practice is a very important part of training in all educational programmes. Despite this importance, setting up a laboratory is not an easy task, since equipping it can involve a large economic outlay, both initially and afterwards. Distance education, and in particular virtual laboratories, i.e. simulations of a real laboratory using mathematical models, emerged as a solution. Because of their characteristics and flexibility, virtual laboratories have been developed across many teaching areas, but not all of them offer as many possibilities and facilities as electronics. Most of the Internet-accessible laboratories currently used in distance or online education are virtual. The main advantage of the laboratory developed here is that practical work is carried out by remotely controlling real instruments and circuits. The project consists of a software system that implements a remote laboratory for analogue electronics, which can be used as a complement to the training activities carried out in the laboratories of educational centres. The complete system also includes hardware controlled through standard communication buses, which allows different analogue circuits to be implemented so that practical exercises can be performed on real physical circuits. To make the laboratory as realistic as possible, the application the student operates is a 3D viewer, intended to increase the sense of reality when carrying out laboratory work remotely. The system relies on a client-server communication model: • Server: processes the actions performed by the client and controls and monitors the instruments and devices of the hardware system. • Client: the end user, who through the 3D viewer sends the actions to be performed to the server, which then processes them.
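The remote laboratory is organised as a client-server system in which the 3D viewer sends actions to a server that drives the real instruments. The sketch below shows only the shape of that exchange with a plain TCP socket in Python; the command names and the echo-style "instrument control" are placeholders for illustration, not the thesis software:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5050
srv = socket.create_server((HOST, PORT))   # listening socket, ready before the client connects

def server():
    """Accept one client and process its commands (placeholder logic)."""
    conn, _ = srv.accept()
    with conn:
        for line in conn.makefile("r"):
            cmd = line.strip()                     # e.g. "SET_SOURCE 5.0V"
            # A real server would drive the instrument bus here.
            conn.sendall(f"OK {cmd}\n".encode())
            if cmd == "QUIT":
                break

threading.Thread(target=server, daemon=True).start()

# Client side: the role played by the 3D viewer in the real system.
with socket.create_connection((HOST, PORT)) as cli:
    replies = cli.makefile("r")
    for cmd in ("SET_SOURCE 5.0V", "READ_MULTIMETER", "QUIT"):
        cli.sendall((cmd + "\n").encode())
        print("server replied:", replies.readline().strip())
srv.close()
```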
Abstract:
The main objective of this Final Degree Project is to provide an overview of the IMS ("IP Multimedia Subsystem") network architecture, covering the behaviour of the nodes that compose it and the functionalities that each of them implements. The project takes into account that IMS networks have transformed the telecommunications sector owing to the strong convergence of the services they offer. This technology opens up a wide range of multimedia applications, in both enterprise and residential environments. As a practical development, a high-level design is presented that consists of expanding the existing capacity of one of the IMS network nodes for a national operator offering the 'Business Trunking' service to large business customers. The goal of this case study is to analyse the impact that the inclusion of this new element has on the rest of the nodes in the network, demonstrating how easily the operator can expand the capacity of its network and the considerable benefits this brings to both the provider and the end user.
Abstract:
In the medium to long term after a nuclear accident, the contamination of drinking water sources, fish and other aquatic foodstuffs, irrigation supplies, and people's exposure during recreational activities may create considerable public concern, even though dose assessment may in certain situations indicate lesser importance than for other sources, as clearly experienced in the aftermath of past accidents. In such circumstances there are a number of available countermeasure options, ranging from specific chemical treatment of lakes to bans on fish ingestion or on the use of water for crop irrigation. The potential actions can be broadly grouped into four main categories: chemical, biological, physical and social. In some cases a combination of actions may be the optimal strategy, and a decision support system (DSS) like MOIRA-PLUS can be of great help in optimising a decision. A further option is of course not to take any remedial actions, although this may also have significant socio-economic repercussions which should be adequately evaluated. MOIRA-PLUS is designed to allow a reliable assessment of the long-term evolution of the radiological situation and of feasible alternative rehabilitation strategies, including an objective evaluation of their social, economic and ecological impacts in a rational and comprehensive manner. MOIRA-PLUS also features a decision analysis methodology, making use of multi-attribute analysis, which can take into account the preferences and needs of different types of stakeholders. The main functions and elements of the system are described summarily. The conclusions from end-users' experiences with the system are also discussed, including exercises involving the organisations responsible for emergency management and the affected services, as well as different local and regional stakeholders. MOIRA-PLUS has proven to be a mature system, user friendly and relatively easy to set up. It can contribute to better decision-making by enabling a realistic evaluation of the complete impacts of possible recovery strategies. Moreover, the interaction with stakeholders has allowed improvements to the system to be identified, which have recently been implemented.
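The decision-analysis layer mentioned above is based on multi-attribute analysis. As a generic, much-simplified illustration of that idea (the weights, attribute scores and strategy names below are invented for the example and are not MOIRA-PLUS outputs), a weighted-sum ranking of candidate countermeasure strategies can look like this:

```python
# Hypothetical normalised scores in [0, 1] (1 = best) for three attributes.
strategies = {
    "no action":   {"dose_reduction": 0.0, "economic_cost": 1.0, "social_impact": 0.4},
    "fish ban":    {"dose_reduction": 0.7, "economic_cost": 0.5, "social_impact": 0.3},
    "lake liming": {"dose_reduction": 0.8, "economic_cost": 0.6, "social_impact": 0.7},
}

# Stakeholder-dependent weights (sum to 1); in practice these come from elicitation.
weights = {"dose_reduction": 0.5, "economic_cost": 0.3, "social_impact": 0.2}

def utility(scores):
    """Additive multi-attribute value: weighted sum of attribute scores."""
    return sum(weights[a] * s for a, s in scores.items())

for name, scores in sorted(strategies.items(), key=lambda kv: utility(kv[1]), reverse=True):
    print(f"{name:12s} utility = {utility(scores):.2f}")
```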
Abstract:
This Master's thesis integrates with the Localiza system, previously developed for the on-demand location of people who require a certain degree of supervision. The project extends the functions of the Localiza system, adding a new feature that allows a user with reduced mobility to control their mobile device from their personal computer. It is integrated with the Localiza-related project entitled "Desarrollo de una herramienta software para el manejo de un teléfono móvil adaptada a personas con discapacidad física severa", which focuses on the development of the mobile application that communicates with the user's personal computer; the development of the system residing on the personal computer is the central scope of this thesis. Both projects are closely related and complement each other. The end user for whom the application is intended is a person with a motor disability who, with slight head movements, is able to control the mobile terminal remotely through a personal computer. The main objective of the project is the remote control of a mobile terminal from a personal computer, thereby facilitating access to the personal computer and to the mobile telephony environment for people with disabilities. Communication between the mobile terminal and the personal computer is carried out over the Bluetooth protocol, and the application residing on the personal computer has been developed on the Java platform.
Abstract:
FTTH (Fibre To The Home) is currently, together with mobile broadband, the main technological evolution in telecommunications networks and services. In the coming years, the deployment of FTTH networks is expected to increase significantly, thanks to the growing interest of both telecommunications operators and government agencies. This deployment (already becoming a reality in 2013) will massively bring very high-speed services (above 100 Mbps, even 1 Gbps) to households, imposing new requirements and features on the customer's home network. This opens up a new and emerging field of exploration with increasingly demanding requirements; indeed, the home network is one of the key elements for the success of FTTH networks and services. For all these reasons, finding solutions to the problems mentioned above has become a necessity for the telecommunications sector. In order to contribute to the analysis of solutions, this project focuses on two subjects, both related to the home-network issues already described. The first, covered in chapters 2, 3 and 4, is exploratory research and identification of advanced technology solutions for the home network: a detailed study of the current situation and future trends of the devices used in the home network, focused on the distribution of very high-bandwidth signals (around 100 Mbps) within the home. The second, covered in chapters 5 and 6, is the design and development of an application to determine the customer's quality of experience (QoE) of an IP television (IPTV) service. This type of service was selected because it places the highest demands on both the transport network and the home network and, at the same time, is the hardest to measure owing to the strong subjectivity component of the end user. A correctly designed home network must meet, in a balanced way, the requirements demanded both by the operator and by the customer or end user of the service. The operator's requirements focus mainly on controlling the investment (CAPEX) and the maintenance cost of the home network (OPEX). The user, on the other hand, requires a simple installation and a minimum number of elements to install (zero intrusion, no wiring). To meet these requirements, a range of devices and technologies seek the balance point between the needs of operators and end customers. The solutions currently in use can be divided into wired and wireless; hybrid solutions also exist. All of them are described in detail in chapters 3 and 4. The study concludes that, with current technology, wired solutions such as Ethernet or POF are preferable; PLC should not be used extensively (G.hn may be an alternative in the future) and, where no wiring is required, WiFi 11n in the 5 GHz band, as well as its evolutions WiFi 11ac and 11ad, should be used. The application developed, explained in chapters 5 and 6, captures and measures in real time the IPTV signal delivered to the user and, from these measurements, estimates the quality of the delivered signal, taking into account the type of decoder used on the customer's premises and the network employed (Telefónica's FTTH network). This application could be used in the technical support centres of telecommunications operators, thereby determining the relationship between the complaints received and the quality of service measured by the application. In addition to real-time measurements, the application writes the measurements taken and the alarms detected to log files, making the technical analysis of the problems and incidents recorded by those centres easier. The application can also be used in the certification process of home-network equipment (e.g. decoders) or even as a tool to explore theoretical parameters and quality-of-service measurement criteria in IPTV.
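The measurement application described above captures the IPTV stream, estimates the delivered quality and writes measurements and alarms to log files. The snippet below is only a toy sketch of that last idea: it derives a packet-loss ratio from simulated RTP-style sequence numbers and logs an alarm above a threshold. The threshold, the file name and the loss model are assumptions for illustration, not the thesis application:

```python
import logging
import random

logging.basicConfig(filename="iptv_qoe.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

LOSS_ALARM_THRESHOLD = 0.01   # hypothetical 1% loss alarm level

def loss_ratio(seq_numbers):
    """Estimate packet loss from a monotonically increasing sequence-number list."""
    expected = seq_numbers[-1] - seq_numbers[0] + 1
    return 1.0 - len(seq_numbers) / expected

# Simulated capture: 1000 packets with ~2% random loss.
received = [s for s in range(1000) if random.random() > 0.02] or [0]

ratio = loss_ratio(received)
logging.info("measured loss ratio: %.4f", ratio)
if ratio > LOSS_ALARM_THRESHOLD:
    logging.warning("ALARM: loss ratio %.2f%% exceeds %.2f%% threshold",
                    100 * ratio, 100 * LOSS_ALARM_THRESHOLD)
```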
Abstract:
Today, XML (Extensible Markup Language) is one of the most widely used formats for the exchange and storage of structured information on the World Wide Web. Applications that use XML files commonly assume a particular structure in them, and errors can occur if documents that do not follow it are used. In order to express this kind of constraint and to verify that a document satisfies it, the DTD was defined as part of the XML standard itself, although it soon proved rather limited in expressive power. For this reason XML Schema was created: an XML language for defining what structure other XML documents must have. Having a schema brings multiple advantages, the main one being the ability to validate documents against it to check whether their structure is correct, along with others such as automatic code generation. However, defining a common structure for several XML documents in an optimal way can become an arduous task if done manually. This problem can be overcome with a tool that automates the process of creating such XSDs. In this project we develop a Java tool that, from a set of input XML documents, automatically infers a schema against which all of them validate, expressing their structure in a complete and concise way. The tool allows several inference parameters to be chosen, so that the generated schema fits the user's purposes as closely as possible. It also generates a set of additional statistics that provide further information about the input files.
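The tool described in this abstract is a Java application; the following is a small, independent Python sketch of the core inference idea only: scan a set of sample XML documents and report, for each element, which child elements and attributes appear and which of them look optional. It is not the thesis tool, and it produces a plain summary rather than a full XSD:

```python
from collections import defaultdict
from xml.etree import ElementTree as ET

def scan(xml_texts):
    """Collect, per element name, the children/attributes seen and how often."""
    seen = defaultdict(lambda: {"count": 0,
                                "children": defaultdict(int),
                                "attrs": defaultdict(int)})

    def visit(elem):
        info = seen[elem.tag]
        info["count"] += 1
        for name in elem.attrib:
            info["attrs"][name] += 1
        for child in elem:
            info["children"][child.tag] += 1
            visit(child)

    for text in xml_texts:
        visit(ET.fromstring(text))
    return seen

samples = [                                   # made-up input documents
    "<book id='1'><title>A</title><year>1999</year></book>",
    "<book id='2'><title>B</title></book>",
]

for tag, info in scan(samples).items():
    print(f"element <{tag}> ({info['count']}x)")
    for attr, c in info["attrs"].items():
        print(f"  attribute {attr}: {'optional' if c < info['count'] else 'required'}")
    for child, c in info["children"].items():
        print(f"  child <{child}>: {'optional' if c < info['count'] else 'required'}")
```

A real inference tool would additionally derive content models (ordering, repetition) and emit them as XSD, which is where the configurable parameters mentioned in the abstract come into play.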
Abstract:
Telecommunications networks have always been expanding and, thanks to this, new services have appeared. The old mechanisms for carrying packets have become obsolete in the face of the new service requirements, because many services now operate in real time. Real-time traffic requires strict service guarantees: when this traffic is sent through the network, enough resources must be provided to avoid delays and information losses. When browsing the Internet and requesting web pages, data must be sent from a server to the user; if any packet is dropped during the transmission, it is simply sent again, and for the end user it hardly matters whether the page loads one or two seconds later. But if the user is holding a conversation with a VoIP program such as Skype, one or two seconds of delay in the conversation may be catastrophic, and neither party can understand the other. In order to support these new services, networks have to evolve, and for this purpose MPLS and QoS were developed. MPLS is a packet-carrying mechanism used in high-performance telecommunication networks that directs and carries data along pre-established paths. Packets are forwarded on the basis of labels, which makes the process faster than routing packets by their IP addresses. MPLS also supports Traffic Engineering (TE), the process of selecting the best paths for data traffic in order to balance the traffic load between the different links. In a network with multiple paths, routing algorithms calculate the shortest one, and most of the time all traffic is directed through it, causing overload and packet drops, while the other paths the network offers carry no traffic at all. This alone, however, is not enough to give real-time traffic the guarantees it needs: these mechanisms improve the network, but they do not change how the traffic is treated. That is why Quality of Service (QoS) was developed. Quality of service is the ability to give different priority to different applications, users or data flows, or to guarantee a certain level of performance to a data flow. Traffic is distributed into different classes and each of them is treated differently, according to its Service Level Agreement (SLA). Traffic with the highest priority has preference over lower classes, but this does not mean it monopolises all the resources. To achieve this goal, a set of policies is defined to control and shape how the traffic flows. The possibilities are endless and depend on how the network is to be structured. By using these mechanisms it is possible to provide real-time traffic with the necessary guarantees, distributing it into categories inside the network and offering the best possible service for both real-time and non-real-time data.
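Class-based treatment of the kind described above (each class served according to its priority without the highest class starving the rest) can be illustrated with a tiny weighted-scheduling sketch. The class names and weights below are invented for the example and are not tied to any particular router or scheduler implementation:

```python
from collections import deque

# Invented traffic classes with scheduling weights (higher = more service per round).
queues = {
    "voice":       deque(f"voice-{i}"       for i in range(5)),
    "video":       deque(f"video-{i}"       for i in range(5)),
    "best-effort": deque(f"best-effort-{i}" for i in range(5)),
}
weights = {"voice": 3, "video": 2, "best-effort": 1}

def weighted_round_robin(queues, weights):
    """Serve up to `weight` packets per class per round: priority without starvation."""
    order = []
    while any(queues.values()):
        for cls, q in queues.items():
            for _ in range(weights[cls]):
                if q:
                    order.append(q.popleft())
    return order

print(weighted_round_robin(queues, weights))
```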
Abstract:
The goal of this project is to present a study of the evolution over time of the different network access technologies that make use of the installed copper-pair plant, known as xDSL technologies, paying special attention to those most widely deployed and to those offering the highest access speeds. The study briefly reviews this evolution from HDSL, the first technology to use the local loop for transmitting digital data, up to the most widely used today, such as ADSL2+ and VDSL2. It then looks in depth at the development of high-speed, mainly asymmetric, access technologies, with an extensive study of how they work, from ADSL to VDSL2. The differences between them are presented, focusing on the advantages each one offers over its predecessors; the main aspects considered are the increasingly efficient use of the available frequency spectrum and the maximum theoretical speed that could be reached with each of them, both upstream and downstream. Several techniques that improve the performance of VDSL2 are reviewed briefly, and part of the study is devoted to bonding, the technique that allows several lines to be grouped into a single connection. To complement the study, a series of simulations is carried out to reflect the improvements achieved by the successive technologies, taking into account the points covered in the theoretical part and emphasising the advantages most valued by the end user. Using a loop simulator for different distances and a noise simulator and injector, different scenarios are evaluated in which the synchronisation speed obtained is measured for three technologies: ADSL2+, VDSL2 and VDSL2 bonding. The results are then used to assess the conditions under which each of them performs best.