979 results for Software Tools


Relevance:

60.00%

Publisher:

Abstract:

Interoperability on multiple levels, concerning both the ontologies themselves and their engineering activities, is a key requirement for ontology networks to be efficient, with minimal redundancy and high reuse. This requirement is strictly binding for software tools that can support some of these interoperability levels, yet such tools can be hindered by a lack of shared models and vocabularies describing the resources to be handled, as well as the ways of handling them. Here, three examples of metalevel vocabularies are proposed, each covering at least one distinct interoperability aspect: OMV for modeling the artifacts themselves, LIR for managing a multilingual layer on top of them, and C-ODO Light for modeling collaboration-supportive life cycle management tasks and processes. All of these models lend themselves to handling by dedicated software tools and are all being employed within NeOn products.
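
As an illustration of how such metalevel vocabularies lend themselves to tool support, the following sketch builds OMV-style metadata for an ontology artifact with rdflib; the property names follow the published OMV namespace, but the specific terms and the artifact URI should be treated as assumptions.

```python
# A minimal sketch: describing an ontology artifact with OMV-style
# metadata using rdflib. Property names are assumptions based on the
# OMV namespace; the artifact URI is hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

OMV = Namespace("http://omv.ontoware.org/2005/05/ontology#")

g = Graph()
g.bind("omv", OMV)

onto = URIRef("http://example.org/ontologies/fishery")  # hypothetical artifact
g.add((onto, RDF.type, OMV.Ontology))
g.add((onto, OMV.name, Literal("Fishery Domain Ontology")))
g.add((onto, OMV.version, Literal("1.0")))
g.add((onto, OMV.naturalLanguage, Literal("en")))

print(g.serialize(format="turtle"))
```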

Relevance:

60.00%

Publisher:

Abstract:

The rollout of digital terrestrial television (DTT) in Spain has posed a range of technical and practical challenges across many domains, from the legislation that standardizes common telecommunications infrastructure to the changes in the facilities and receivers through which the end user receives the services. Because of the complexity and the interdisciplinary nature of the knowledge involved, training Telecommunications graduates in this field is also a major challenge. This project is a first approach to a set of hardware and software tools that support the teaching of this broad discipline. The work is built around Labmu, a multiuser laboratory for digital television practicals developed by the company Xpertia, with the aim of understanding and documenting its possibilities. The DTA-111 modulator card and the StreamXpress software for Windows have also been documented. These systems offer many options for teaching digital television in every area, from source coding to decoding at the end user. In particular, both systems were tested with RF measurements on DTT broadcasts, and some ideas for future work with them have been outlined. The project is divided into six chapters. Chapter 1, "Introduction", presents the project. Chapter 2, "Labmu Hardware Composition", presents all the components of the Labmu laboratory, with a description and the technical characteristics of each, together with the interconnection and configuration used in this work. Chapter 3, "Labmu Software", describes, in the style of a user manual, all the software components and possibilities of Labmu. Chapter 4, "Measurements with Labmu", reports MER, CBER, VBER, C/N, and channel power measurements of the channels broadcast in the Community of Madrid, comparing the results with those of the Promax Prodig-5 analyzer. Chapter 5, "Receiver card and software", describes the DTA-111 card and the StreamXpress software; errors are inserted into the signal emitted by the card, measurements are taken with the Promax Prodig-5, and the threshold levels for correct reception are studied. Chapter 6, "Conclusions", presents the conclusions of the project and a plan for future work.
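
As a pointer to what the MER measurements in Chapter 4 involve, here is a minimal sketch of the Modulation Error Ratio computation on synthetic QPSK symbols; it is illustrative only and uses no data from Labmu or the Promax Prodig-5.

```python
# A minimal sketch of the Modulation Error Ratio (MER) computation used
# when characterizing a DTT signal. Symbols below are synthetic.
import numpy as np

def mer_db(ideal: np.ndarray, received: np.ndarray) -> float:
    """MER = 10*log10(sum |ideal|^2 / sum |received - ideal|^2)."""
    error = received - ideal
    return 10.0 * np.log10(np.sum(np.abs(ideal) ** 2) /
                           np.sum(np.abs(error) ** 2))

# Example: ideal QPSK symbols plus additive Gaussian noise.
rng = np.random.default_rng(0)
ideal = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=10_000)
received = ideal + 0.05 * (rng.normal(size=10_000) + 1j * rng.normal(size=10_000))
print(f"MER = {mer_db(ideal, received):.1f} dB")
```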

Relevance:

60.00%

Publisher:

Abstract:

Direct Steam Generation (DSG) in Linear Fresnel (LF) solar collectors is being consolidated as a feasible technology for Concentrating Solar Power (CSP) plants. The competitiveness of this technology relies on the following main features: water as the heat transfer fluid (HTF) in the Solar Field (SF), high superheated steam temperatures and pressures at the turbine inlet (500 °C and 90 bar), no heat tracing required to avoid HTF freezing, no HTF degradation, no environmental impact, no heat exchanger between the SF and the Balance Of Plant (BOP), and low installation and maintenance costs. LF solar collectors were recently developed as an alternative to Parabolic Trough Collector (PTC) technology. Their main advantages are reduced collector manufacturing and maintenance costs, linear rather than parabolic mirror shapes, fixed receiver pipes (no ball joints, reducing leakage at high pressures), lower susceptibility to wind damage, and light supporting structures that allow smaller driving devices. Companies such as Novatec, Areva, and Solar Euromed are investing in LF DSG technology and building pilot plants to demonstrate the benefits and feasibility of this solution for given locations and conditions (Puerto Errado 1 and 2 in Murcia, Spain; Liddell in Newcastle, Australia; Kogan Creek in South West Queensland, Australia; Kimberlina in Bakersfield, California, USA; Llo Solar in the Pyrénées, France; Dhursar in India; etc.). Several critical decisions must be taken to reach a compromise and an optimization between plant performance, cost, and durability. Some of these decisions concern the SF design: proper thermodynamic operating parameters, receiver material selection for high pressures, the number and location of phase separators and recirculation pumps, and a pipe distribution that reduces the number of tubes (reducing possible leak points, transient time, etc.). In view of these aspects, the correct selection of design parameters and their correct assessment are the main targets when designing DSG LF power plants. For this purpose, several commercial software tools for simulating solar thermal power plants have been developed in recent years; those most focused on LF DSG design are Thermoflex and System Advisor Model (SAM). Once the simulation tool is selected, the proposed SF configuration, which constitutes the main innovation of this work, is studied and compared with one of the most typical state-of-the-art configurations. The transient analysis must be simulated at a high level of detail; the behavior of the BOP during start-up, shutdown, standby, and partial loads is crucial to obtain the annual plant performance. An innovative SF configuration was proposed and analyzed to improve plant performance. Finally, it was demonstrated that thermal inertia and the BOP regulation mode are critical for plant behavior on days of low solar irradiation, with an impact on annual performance that depends on the power plant location.
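
To make the role of the SF design parameters concrete, the following back-of-the-envelope sketch estimates the net thermal power of a linear Fresnel solar field; all figures (DNI, aperture area, optical efficiency, loss coefficient) are illustrative assumptions, far simpler than what Thermoflex or SAM compute.

```python
# A rough sketch of a solar-field energy estimate. All numbers are
# illustrative assumptions, not parameters of the plants mentioned above.
def sf_thermal_power_mw(dni_w_m2: float, aperture_m2: float,
                        optical_eff: float, thermal_losses_w_m2: float) -> float:
    """Net thermal power of a linear Fresnel solar field, in MW."""
    gross = dni_w_m2 * aperture_m2 * optical_eff
    losses = thermal_losses_w_m2 * aperture_m2
    return max(gross - losses, 0.0) / 1e6

print(sf_thermal_power_mw(dni_w_m2=850, aperture_m2=180_000,
                          optical_eff=0.62, thermal_losses_w_m2=35))
```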

Relevance:

60.00%

Publisher:

Abstract:

After being designed, a product has to be manufactured, which means converting concepts and information into a real, physical object. This requires a large amount of resources and careful planning. The manufacturing of the product must be designed too, an activity called Industrialization Design. An accepted methodology for this activity is to start by defining simple structures and then progressively increase the level of detail of the manufacturing solution. The impact of decisions taken in the first stages of Industrialization Design is remarkable, so software tools to assist designers are required. In this paper, a Knowledge Based Application prototype for Industrialization Design is presented. The application is implemented within the CATIA V5/DELMIA environment. A case study with a simple product from the aerospace sector illustrates the prototype development.
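
A minimal sketch of the progressive-refinement idea described above, assuming a hypothetical data structure; the actual prototype is a Knowledge Based Application running inside CATIA V5/DELMIA.

```python
# A minimal sketch: start from a coarse manufacturing structure and
# refine it level by level. Classes and node names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ManufacturingNode:
    name: str
    level: int              # 0 = coarse concept, higher = more detail
    children: list = field(default_factory=list)

    def refine(self, *names: str) -> None:
        """Add one more level of detail under this node."""
        self.children = [ManufacturingNode(n, self.level + 1) for n in names]

plan = ManufacturingNode("wing-panel industrialization", 0)
plan.refine("forming", "drilling", "assembly")
plan.children[2].refine("jig setup", "riveting", "inspection")
```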

Relevance:

60.00%

Publisher:

Abstract:

Knowledge modeling tools are software tools that follow a modeling approach to help developers build a knowledge-based system. The purpose of this article is to show the advantages of using this type of tool in the development of complex knowledge-based decision support systems. To this end, the article describes the development of a system called SAIDA in the domain of hydrology with the help of the KSM modeling tool. SAIDA operates in real time on data recorded by sensors (rainfall, water levels, flows, etc.). It follows a multi-agent architecture to interpret the data, predict future behavior, and recommend control actions. The system includes an advanced knowledge-based architecture with multiple symbolic representations. KSM was especially useful for designing and implementing this complex knowledge-based architecture in an efficient way.
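
The interpret-predict-recommend cycle can be pictured with a minimal sketch; the threshold and the naive one-step prediction below are illustrative assumptions, not the knowledge bases SAIDA actually uses.

```python
# A minimal sketch of the interpret -> predict -> recommend cycle that
# SAIDA's agents follow; thresholds and the prediction are illustrative.
def process_reading(levels_m: list[float], flood_level_m: float = 4.0) -> str:
    trend = levels_m[-1] - levels_m[0]            # interpret sensor history
    predicted = levels_m[-1] + trend              # naive one-step prediction
    if predicted > flood_level_m:                 # recommend a control action
        return "open spillway gates"
    return "no action required"

print(process_reading([2.8, 3.1, 3.5]))  # -> "open spillway gates"
```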

Relevance:

60.00%

Publisher:

Abstract:

Modern sensor technologies and simulators applied to large and complex dynamic systems (such as road traffic networks, sets of river channels, etc.) produce large amounts of behavior data that are difficult for users to interpret and analyze. Software tools that generate presentations combining text and graphics can help users understand these data. In this paper we describe the results of our research on automatic multimedia presentation generation (including text, graphics, maps, images, etc.) for the interactive exploration of behavior datasets. We designed a novel user interface that combines automatically generated text and graphical resources. We describe the general knowledge-based design of our presentation generation tool, present the applications we developed to validate the method, and compare our approach with related work.
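
A minimal sketch of the template-based text generation step, under the assumption of a hypothetical dataset schema; the actual tool also plans graphics, maps, and images.

```python
# A minimal sketch of template-based text generation for a behavior
# dataset summary. Field names are hypothetical.
def summarize(dataset: dict) -> str:
    return (f"Between {dataset['start']} and {dataset['end']}, "
            f"{dataset['variable']} peaked at {dataset['peak']:g} "
            f"{dataset['units']} at station {dataset['station']}.")

print(summarize({"start": "08:00", "end": "10:00", "variable": "flow",
                 "peak": 312.5, "units": "m3/s", "station": "R-14"}))
```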

Relevance:

60.00%

Publisher:

Abstract:

EPICS (Experimental Physics and Industrial Control System) is a set of software tools and applications that provide an infrastructure for building distributed data acquisition and control systems. The use of such systems is increasing in large physics experiments like ITER, ESS, and FREIA, where advanced data acquisition systems based on FPGA technology, such as FlexRIO, are used more and more frequently. In the particular case of ITER (International Thermonuclear Experimental Reactor), the instrumentation and control system is supported by CCS (CODAC Core System), based on the RHEL (Red Hat Enterprise Linux) operating system, and by the plant design specifications, in which every CCS element, whether hardware, firmware, or software, is defined. This final degree project follows the methodology proposed in "Implementation of Intelligent Data Acquisition Systems for Fusion Experiments using EPICS and FlexRIO Technology" by Sanz et al. [1]. The goal is to produce a set of examples covering the complete design cycle, proposed as use cases of these technologies, together with a document describing the process followed and the source code of the resulting data acquisition system. The methodology comprises two distinct stages. In the first, the hardware is modeled with graphical design tools such as LabVIEW FPGA and then synthesized onto the FlexRIO device. In the second, the design cycle is completed by creating an EPICS controller that manages the device through a generic device support layer named NDS (Nominal Device Support). This layer integrates the developed data acquisition system into CCS, acting as its EPICS interface to the system. The use of FlexRIO technology entails the use of LabVIEW and LabVIEW FPGA as the programming and hardware description languages, respectively.
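
To illustrate the EPICS side of the integration, here is a minimal client sketch using the pyepics package; the process variable names are hypothetical, since real ITER/CCS naming follows its own conventions.

```python
# A minimal sketch of an EPICS client reading the records exposed by an
# NDS-based device support layer, via pyepics. PV names are hypothetical.
from epics import caget, caput

caput("DAQ:FLEXRIO:SamplingRate", 1_000_000)        # configure the device
waveform = caget("DAQ:FLEXRIO:CH0:Waveform")        # fetch acquired samples
state = caget("DAQ:FLEXRIO:State", as_string=True)  # device state record
print(state, None if waveform is None else len(waveform))
```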

Relevance:

60.00%

Publisher:

Abstract:

The research group is currently developing a biological computing model to be implemented with Escherichia coli bacteria and M13 bacteriophages, but it has to be modeled and simulated before any experiment in order to reduce the number of failed attempts, the time required, and the costs. The problem that gave rise to this project is that no software tool is able to simulate the biological process underlying that computational model, so one needs to be developed before any experimental implementation. Several software tools can simulate most of the biological processes and bacterial interactions on which this model is based, so the work consists of studying the available simulation tools, comparing them, and choosing the most appropriate one to be extended with the functionality this design requires. Directed evolution is a method used in biotechnology to obtain proteins or nucleic acids with properties not found in nature. It consists of three steps: 1) creating a library of mutants, 2) selecting the mutants with the desired properties, and 3) replicating the variants identified in the selection step. The new software tool will be verified by simulating the selection step of a directed evolution process applied to bacteriophages.
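
A minimal sketch of the selection step (step 2) that the new tool will simulate, assuming a hypothetical scalar binding score per phage mutant; real fitness models are far richer.

```python
# A minimal sketch of the selection step of directed evolution applied
# to a phage mutant library. Fitness values are synthetic.
import random

random.seed(1)
library = [{"id": i, "binding": random.gauss(0.5, 0.2)} for i in range(1000)]

def select(mutants: list[dict], threshold: float) -> list[dict]:
    """Step 2 of directed evolution: retain mutants above the threshold."""
    return [m for m in mutants if m["binding"] >= threshold]

survivors = select(library, threshold=0.8)
print(f"{len(survivors)} of {len(library)} mutants selected")
```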

Relevance:

60.00%

Publisher:

Abstract:

This document provides an overview of the set of tools currently available for the analysis and exploitation of vulnerabilities in computer systems, and more specifically in computer networks. On the one hand, the free software tools offered today for analyzing and detecting vulnerabilities in computer systems are described analytically: their operation, their options, and the motivation for using them, comparing them with other tools in some cases, describing their differences in others, and justifying their selection in all of them. On the other hand, these tools are used to develop concrete usage examples with different parameter selections, observing their behavior and trying to discern which data are useful for obtaining information about the vulnerabilities present in the system. In addition, a practical case is developed that puts the theoretical knowledge into practice, so that the reader can consolidate what has been learned by verifying, through a real case, the usefulness of the described tools. The results show that vulnerability analysis and detection by a competent system administrator provides an organization with a set of techniques to improve its computer security and thus avoid problems with potential attackers.
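
As a small example of driving one of the free tools discussed (nmap) programmatically, here is a sketch using the python-nmap wrapper; the target host and port range are placeholders, and such scans must only be run against systems you are authorized to test.

```python
# A minimal sketch of a port scan via the python-nmap wrapper around
# nmap. Target and port range are placeholders; scan only authorized hosts.
import nmap

nm = nmap.PortScanner()
nm.scan("192.168.1.10", "22-443")          # hypothetical lab host
for host in nm.all_hosts():
    for port in nm[host].all_tcp():
        state = nm[host]["tcp"][port]["state"]
        print(f"{host}:{port} -> {state}")
```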

Relevance:

60.00%

Publisher:

Abstract:

Analysis of multiple recordings of Chopin's four ballades and four scherzos by different performers. The aim of the project is to determine the objective differences between the performances and with respect to a reference score. OBJECTIVES: To analyze recordings of Chopin's works according to objective criteria such as tempo and dynamics. METHOD AND WORK PHASES: A differential study is carried out, using software tools, to determine how these objective parameters vary in each recording with respect to one established in advance as a reference. MEDIA: Digitized versions of the recordings and the critical edition of the scores by Jan Ekier (National Edition) are used. The analysis relies on the Sonic Visualiser software developed by the Centre for Digital Music (Queen Mary, University of London), together with the available plugins for temporal and spectral analysis. Other tools developed during the Mazurka Project of the Research Centre for the History and Analysis of Recorded Music are also used to facilitate the visualization of data in this document.
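
For readers who prefer a scriptable route, a comparable tempo and dynamics extraction can be sketched with librosa, although the project itself relied on Sonic Visualiser and its plugins; the audio file name below is a placeholder.

```python
# A minimal sketch of extracting the two objective criteria studied here,
# tempo and dynamics, from a digitized recording. File path is a placeholder.
import librosa

y, sr = librosa.load("ballade_op23_performer_A.wav")   # hypothetical file
tempo, beats = librosa.beat.beat_track(y=y, sr=sr)     # global tempo estimate
rms = librosa.feature.rms(y=y)[0]                      # dynamics proxy
print(f"tempo = {float(tempo):.1f} BPM, mean RMS = {rms.mean():.4f}")
```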

Relevance:

60.00%

Publisher:

Abstract:

Nowadays social networks are widely used throughout the world. There are different types of social networks with which we can connect with friends, expand our network of professional contacts, learn new things, and so on. Their heavy use has turned them into one of the most profitable businesses on the Internet, generating fortunes for their creators, mainly through advertising. Many social networks are created by companies with expert teams, but many others have been created by ordinary people with limited computer skills, often motivated by their hobbies or careers, who did not find anything useful on the web and decided to develop their own social networks with the help of software tools. One of those tools is the content management system (CMS), which saves a great deal of development time and does not require investing large amounts of money. This project is mainly about how to create social networks using these tools, and it aims to be clear enough for anyone, regardless of their level of technical knowledge, to develop their ideas. The first part of the project deals with social networks in general and their impact on today's society, showing that, due to the number of social networks and to mobile accessibility, their use has become an everyday activity. It also explains some methods for obtaining economic benefits from a social network and the advantages of specific social networks over generalist ones, concluding that specific social networks are gaining prominence over time. In addition, it discusses the criticism of social networks from the point of view of their users, highlighting the issues of advertising management and the privatization that social networks have brought about. A theoretical overview of the aforementioned tools, CMS, is then presented: how they work, how they are classified, and the advantages of using this software in our projects, chief among them the short development time and the low cost. Finally, the CMS studied in this project are chosen mainly on the basis of three criteria: license, usage share, and social networking features. The second part of the project deals with the chosen CMS: WordPress with its BuddyPress plugin, Elgg, and Joomla with its JomSocial plugin. The features of each are explained, and examples of real social networks built with these CMS are shown. This part of the project makes practical use of these CMS and details, step by step, the whole process of creating a social network (installation, configuration, and customization) for each CMS, as sketched below. The result is three social networks built with different CMS, which are assessed on the basis of the experience gained using them. The conclusion is that JomSocial is a good option for generalist social networks, but for specific networks both Elgg and BuddyPress are better, the latter having a slight advantage thanks to its large Spanish-speaking community.
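
As a taste of the installation step, the following sketch scripts a WordPress plus BuddyPress setup with WP-CLI driven from Python; it assumes WordPress is already downloaded and configured, that the wp binary is on the PATH, and that the theme choice is arbitrary.

```python
# A minimal sketch of scripting the WordPress + BuddyPress setup with
# WP-CLI. Assumes an existing WordPress install and `wp` on the PATH.
import subprocess

def wp(*args: str) -> None:
    subprocess.run(["wp", *args], check=True)

wp("plugin", "install", "buddypress", "--activate")  # add the social layer
wp("theme", "activate", "twentytwentyfour")          # hypothetical theme choice
```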

Relevance:

60.00%

Publisher:

Abstract:

In recent times, the traffic generated by mobile network users has grown very significantly, and this growth is expected to continue steadily over the next few years. The traffic carried by mobile networks increased fivefold between 2010 and 2013, and forecasts indicate a tenfold increase between 2013 and 2019. Furthermore, a large part of this traffic is generated inside buildings: currently between 70% and 80% of all mobile traffic, a percentage expected to rise to about 95% in the coming years. In this situation, with mobile traffic growing exponentially, especially indoors, the deployment of solutions specific to these environments is essential to avoid constant saturation of mobile networks. From the mobile operators' point of view, these solutions limit coverage problems, improve the efficiency of radio resource usage, and reduce infrastructure costs. From the users' point of view, these specific indoor deployments can sustain high data rates and meet the strict quality of service requirements of real-time services. The complexity of deploying specific indoor solutions varies considerably with the target environment. Residential scenarios are characterized by massive deployments of transmitters installed by the users themselves, so no prior planning is possible and performance can only be improved through self-configuration, self-optimization, and self-healing methods. Enterprise environments, in contrast, require a prior design and planning effort whose difficulty grows with the size of the deployment scenario and the number of transmitters required; the design and configuration of the elements of the deployed solution determine the proper operation of the network, the performance achieved, and the quality of service it can deliver. This Doctoral Thesis addresses two of the main problems in the field of specific indoor solutions: the difficulty of estimating the capacity and performance that self-deployed solutions can guarantee, and the complexity of designing and configuring indoor deployments in enterprise environments that require a considerable number of transmitters. For self-deployments in residential scenarios, the main original contributions of this Thesis are the design, development, and implementation of procedures that allow a simple and precise estimation of capacity and performance, together with the performance analysis of real residential self-deployments using these procedures. For deployments in enterprise scenarios, the original contributions consist of new techniques for the automatic design of specific indoor solutions. The results obtained have led to specific software tools for the performance analysis of self-deployments in real residential scenarios and for the design and configuration of deployments in enterprise scenarios. These software tools systematize the practical application of the contributions of this Thesis.
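
A minimal sketch of the kind of per-cell capacity estimate that underlies such procedures, using the Shannon bound under an assumed indoor SINR; the bandwidth and SINR figures are illustrative, not results from the Thesis.

```python
# A minimal sketch: Shannon capacity per small cell from an assumed
# indoor SINR. Numbers are illustrative assumptions.
import math

def shannon_capacity_mbps(bandwidth_hz: float, sinr_db: float) -> float:
    sinr = 10 ** (sinr_db / 10)
    return bandwidth_hz * math.log2(1 + sinr) / 1e6

# e.g. a 20 MHz femtocell carrier at 15 dB indoor SINR
print(f"{shannon_capacity_mbps(20e6, 15):.0f} Mbps")
```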

Relevance:

60.00%

Publisher:

Abstract:

In the cerebral cortex, most synapses are found in the neuropil, but relatively little is known about their three-dimensional organization. Using an automated dual-beam electron microscope that combines focused ion beam milling and scanning electron microscopy, we obtained 10 three-dimensional samples with an average volume of 180 µm³ from the neuropil of layer III of the young rat somatosensory cortex (hindlimb representation). We used specific software tools to fully reconstruct the 1695 synaptic junctions present in these samples and to accurately quantify the number of synapses per unit volume. These tools also allowed us to determine synapse positions and to analyze their spatial distribution using spatial statistical methods. Our results indicate that the distribution of synaptic junctions in the neuropil is nearly random, constrained only by the fact that synapses cannot overlap in space. A theoretical model based on random sequential adsorption, which closely reproduces the actual distribution of synapses, is also presented.
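
A minimal sketch of the random sequential adsorption idea, assuming an illustrative synapse radius and a cube of roughly the sample volume reported above; the actual model and measured parameters are those of the paper.

```python
# A minimal sketch of random sequential adsorption (RSA) in 3D: drop
# points uniformly in a volume, rejecting any that would overlap an
# already placed synapse. Radius and counts are illustrative.
import numpy as np

def rsa_3d(n_target: int, box: float, radius: float, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    placed: list[np.ndarray] = []
    attempts = 0
    while len(placed) < n_target and attempts < 100_000:
        p = rng.uniform(0, box, size=3)
        attempts += 1
        if all(np.linalg.norm(p - q) >= 2 * radius for q in placed):
            placed.append(p)
    return np.array(placed)

points = rsa_3d(n_target=170, box=5.65, radius=0.3)   # ~180 µm^3 cube
print(len(points), "synapses placed")
```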

Relevance:

60.00%

Publisher:

Abstract:

Within building energy saving strategies, BIPV (building-integrated photovoltaic) systems show promising potential based on the close relationship between these multifunctional systems and the overall building energy balance. Building integration of STPV (semi-transparent photovoltaic) elements deeply affects the building energy demand, since it influences the heating, cooling, and lighting loads as well as the local electricity generation. This work analyzes, over different window-to-wall ratios, the overall energy performance of five STPV elements, each with a specific degree of transparency, in order to assess their energy saving potential compared to a conventional solar control glass compliant with the local technical standard. A prior optical characterization, focused on measuring the spectral properties of the elements, was undertaken experimentally. The obtained data were used to perform simulations of a reference office building using a package of specific software tools (DesignBuilder, EnergyPlus, PVsyst, and COMFEN) to take proper account of the STPV peculiarities. To evaluate the global energy performance of the STPV elements, a new Energy Balance Index was formulated. The results show that for intermediate and large façade openings the energy saving potential provided by the STPV solutions ranges between 18% and 59% compared to the reference glass.
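
Since the abstract does not reproduce the definition, the following sketch shows one plausible way such an Energy Balance Index could combine the simulated loads and PV generation; treat the formula itself as an assumption for illustration only.

```python
# A hypothetical figure of merit combining the loads considered in the
# study; the actual Energy Balance Index formulated in the paper is not
# reproduced here, so this definition is an assumption.
def energy_balance_index(heating: float, cooling: float, lighting: float,
                         pv_generation: float, reference_demand: float) -> float:
    """Net annual demand of the STPV case relative to the reference glass."""
    net_demand = heating + cooling + lighting - pv_generation   # kWh/m2 per year
    return net_demand / reference_demand

print(energy_balance_index(40.0, 55.0, 12.0, 20.0, reference_demand=120.0))
```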