18 results for Access and Benefit Sharing
at Universidad Politécnica de Madrid
Abstract:
The emerging use of real-time 3D-based multimedia applications imposes strict quality of service (QoS) requirements on both access and core networks. These requirements, and their impact on providing end-to-end 3D videoconferencing services, have been studied within the Spanish-funded VISION project, in which different scenarios were implemented showing an agile stereoscopic video call that could be offered to the general public in the near future. In view of these requirements, we designed an integrated access and core converged network architecture that provides the requested QoS to end-to-end IP sessions. Novel functional blocks are proposed to control core optical networks, the functionality of the standard ones is redefined, and the signaling is improved to better meet the requirements of future multimedia services. An experimental test-bed was also deployed to assess the feasibility of the solution. In this test-bed, the set-up and release of end-to-end sessions meeting specific QoS requirements are demonstrated, and the impact of QoS degradation on user-perceived quality is quantified. In addition, scalability results show that the proposed signaling architecture is able to cope with a large number of requests while introducing almost negligible delay.
Abstract:
The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design time and run time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as a starting point the known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs such as loops and branches, and embedded component services.
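As a rough illustration of the kind of inference described above, the sketch below encodes a toy workflow as activities with input and output data items and propagates known attributes of the workflow inputs to every derived item by a simple fixpoint. The workflow, attribute names and encoding are invented for the example; they are not the paper's actual Horn clause representation or sharing-analysis algorithm.

```python
# Illustrative sketch (not the paper's encoding): domain attributes known for the
# workflow inputs are propagated to every derived data item by a simple fixpoint,
# mimicking how sharing analysis groups items into "concepts".
from collections import defaultdict

# activity name -> (input items, output items)
workflow = {
    "anonymize": ({"patient_record"}, {"anon_record"}),
    "aggregate": ({"anon_record", "hospital_stats"}, {"report"}),
}

# known attributes of the workflow inputs
attributes = defaultdict(set, {
    "patient_record": {"personal_data"},
    "hospital_stats": {"public"},
})

changed = True
while changed:  # fixpoint: keep propagating until nothing new is inferred
    changed = False
    for _, (ins, outs) in workflow.items():
        inferred = set().union(*(attributes[i] for i in ins))
        for out in outs:
            if not inferred <= attributes[out]:
                attributes[out] |= inferred
                changed = True

print(dict(attributes))
# 'report' ends up tagged with both 'personal_data' and 'public',
# which is the kind of fact a compliance check could exploit.
```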
Abstract:
In recent years there has been rapid growth of international trade in semi-finished products that are designed, produced and assembled in different locations across different countries, mainly for the following reasons: the development of information technologies, the reduction of transportation costs, the liberalisation of capital markets, the harmonisation of institutional factors, regional economic integration (which involves the reduction and elimination of trade barriers), the economic development of emerging countries, the use of economies of scale, and the deregulation of international trade. All these factors have increased competition in markets at a global level and have allowed companies easier access to potential markets, to the acquisition of skills and knowledge in other countries, and to international strategic alliances with third parties, creating a more demanding and uncertain environment for the companies that constitute an industry, with a direct impact on their operations and on the organization of their production. In order to adapt, be competitive and benefit from this new and more competitive global scenario, companies have outsourced parts of their production process to specialist suppliers, generating a new intermediate market that divides the production process, previously integrated in the companies that made up the industry, between two sets of companies specialized in that industry. This process usually occurs while preserving the industry in which it takes place, its services and products, the technology used and the original companies that formed it prior to vertical disintegration, because the change is beneficial both for the industry's original companies and for the companies in the new intermediate market, for various reasons. Vertical disintegration has consequences that completely transform the industry in which it takes place, as well as the modus operandi of the companies that are part of it, even of those that remain vertically integrated.
One of the most important features of the vertical disintegration of an industry is the possibility for a company to acquire from a third party the first part of the production process, or a semi-finished product, which is then finished by the acquiring company (the practice of outsourcing); likewise, a company can perform the first part of the production process, or produce a semi-finished product, which is then completed by a third company (the practice of fragmentation). The main objective of this research is to study the motives, facilitators, effects, consequences and the main significant microeconomic and macroeconomic factors that trigger or increase the practice of vertical disintegration in an industry. To this end, the research is divided into two completely differentiated lines: on the one hand, the study of the practice of outsourcing and, on the other, the study of fragmentation by the companies that constitute the automotive industry in Spain, since this is one of the most vertically disintegrated and fragmented industries and the sector is of major significance for the country's economy. First, a review is made of the existing literature on the following aspects: vertical disintegration, outsourcing, fragmentation, international trade theory, the history of the automobile industry in Spain, and the use of geographical agglomeration and information technologies in the automotive sector. The methodology used for each line has been different depending on the availability of data and the research approach: the microeconomic factors are studied through outsourcing, and the macroeconomic factors through fragmentation. In the study of outsourcing, an index based on external purchases over the total value of production is used, and its correlation and significance with the main economic variables that define an automotive company are studied using the statistical technique of linear regression. The variables related to market competition, the outsourcing of lower value-added activities and the increased modularisation of the activities of the value chain turned out to be significant for the practice of outsourcing. In the study of fragmentation, a set of macroeconomic factors commonly used in this type of research, related to the main economic indicators of a country, is selected, together with a set of macroeconomic factors not commonly used in this type of research, related to the economic freedom and international trade of a country. A logistic regression model is used to identify which factors are significant for the practice of fragmentation. Among all the factors used in the model, those related to economies of scale and service costs turned out to be significant. The statistical tests performed on the logistic regression model were satisfactory; hence the proposed logistic regression model can be considered solid, reliable, versatile and consistent with reality. From the results obtained in the studies of outsourcing and fragmentation, combined with the state of the art, it is concluded that the main factor triggering vertical disintegration in the automotive industry is competition in the vehicle market. The greater the vehicle demand, the lower the earnings and profitability for manufacturers.
To remain competitive, manufacturers differentiate their products from the competition by focusing on the activities that contribute the highest added value to the final product, outsourcing the lower value-added activities to specialist suppliers, and increasing the modularity of the activities of the value chain. Companies in the automotive industry specialize in one or more of these modularised activities which, combined with enabling factors such as economies of scale, information technologies, the advantages of economic globalisation and the geographical agglomeration of an industry, increase and encourage vertical disintegration in the automotive industry, triggering co-specialization in two clearly distinct sectors: the vehicle manufacturers sector and the specialist suppliers sector. Each of them specializes in certain activities and in specific products or services of the value chain, which generates the following consequences for the automotive industry: the transaction costs of the goods or services exchanged are reduced; the dependency between vehicle manufacturers and specialist suppliers grows, increasing cooperation and coordination, accelerating the learning process, enabling both to acquire new skills, knowledge and resources, and creating new competitive advantages for both; finally, the barriers to entry into the automotive industry and the number of companies are altered, changing its structure. As a future line of research, vehicle manufacturers will tend to focus on researching, designing and marketing the product or service, delegating assembly to new specialists in the field, the contract manufacturers; it would therefore be useful to investigate which motivating or facilitating factors exist in this respect and what consequences the introduction of contract manufacturers would have for the automotive industry.
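As a rough sketch of the statistical machinery described above, the following code fits a logistic regression on a synthetic dataset and reads significance off the coefficient p-values. The predictors, the data and the outsourcing-index comment are hypothetical placeholders, not the thesis's variables or results, and statsmodels is simply one convenient library choice for this kind of screening.

```python
# Hypothetical example of screening macro factors for "fragmentation" with a
# logistic regression; the data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "economies_of_scale": rng.normal(size=n),   # invented macro factors
    "service_costs":      rng.normal(size=n),
    "economic_freedom":   rng.normal(size=n),
})
# Synthetic binary outcome: does fragmentation occur for this observation?
latent = 1.2 * df["economies_of_scale"] - 0.9 * df["service_costs"]
df["fragmentation"] = (latent + rng.logistic(size=n) > 0).astype(int)

X = sm.add_constant(df[["economies_of_scale", "service_costs", "economic_freedom"]])
model = sm.Logit(df["fragmentation"], X).fit(disp=False)
print(model.summary())  # p-values indicate which factors come out significant

# (For the microeconomic line, the outsourcing index would simply be
#  external_purchases / total_production_value for each company.)
```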
Abstract:
Sensor networks are increasingly becoming one of the main sources of Big Data on the Web. However, the observations they produce are made available with heterogeneous schemas, vocabularies and data formats, making it difficult to share and reuse these data for purposes other than those for which they were originally set up. In this thesis we address these challenges, considering how we can transform raw streaming data into rich ontology-based information that is accessible through continuous queries over streaming data. Our main contribution is an ontology-based approach for providing data access and query capabilities to streaming data sources, allowing users to express their needs at a conceptual level, independent of implementation and language-specific details. We introduce novel query rewriting and data translation techniques that rely on mapping definitions relating streaming data models to ontological concepts. Specific contributions include:
• The syntax and semantics of the SPARQLStream query language for ontology-based data access, and a query rewriting approach for transforming SPARQLStream queries into streaming algebra expressions.
• The design of an ontology-based streaming data access engine that can internally reuse an existing data stream engine, complex event processor or sensor middleware, using R2RML mappings to define the relationships between streaming data models and ontology concepts.
Concerning the sensor metadata of such streaming data sources, we have investigated how raw measurements can be used to characterize streaming data, producing enriched data descriptions in terms of ontological models. Our specific contributions are:
• A representation of sensor data time series that captures gradient information useful for characterizing types of sensor data.
• A method for classifying sensor data time series and determining the type of data using data mining techniques, and a method for extracting semantic sensor metadata features from the time series.
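A minimal sketch of the general idea of ontology-based access to a raw stream, assuming a toy tuple layout and illustrative vocabulary terms (loosely modelled on SOSA/SSN-style names). The mapping dictionary plays the role that R2RML mappings play in the thesis, but neither the terms nor the tuples reflect the thesis's actual mappings or the SPARQLStream syntax.

```python
# Toy illustration: raw stream tuples are lifted to ontology-level observations
# via a hand-written mapping; vocabulary and tuple layout are invented.
from datetime import datetime

raw_stream = [
    ("ws01", "temp", 21.4, "2024-05-01T10:00:00+00:00"),
    ("ws01", "wind", 5.2,  "2024-05-01T10:00:00+00:00"),
]

mapping = {  # raw measurement kind -> (observation class, observed property)
    "temp": ("sosa:Observation", "ex:AirTemperature"),
    "wind": ("sosa:Observation", "ex:WindSpeed"),
}

def to_triples(record):
    sensor, prop, value, ts = record
    cls, observed_prop = mapping[prop]
    obs = f"ex:obs/{sensor}/{prop}/{ts}"
    return [
        (obs, "rdf:type", cls),
        (obs, "sosa:madeBySensor", f"ex:sensor/{sensor}"),
        (obs, "sosa:observedProperty", observed_prop),
        (obs, "sosa:hasSimpleResult", value),
        (obs, "sosa:resultTime", datetime.fromisoformat(ts)),
    ]

for record in raw_stream:
    for triple in to_triples(record):
        print(triple)
```

A continuous query posed over the ontology terms would then be rewritten, through such mappings, into operations over the underlying raw stream.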
Abstract:
The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. As everyone experiences, audiovisual data is omnipresent on today's Web. However, different interaction interfaces and, especially, diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable, contextualized and semantic annotation and search, independent of the source metadata format, and connect multimedia data to the Linked Data cloud. Some demonstrators of such applications are also presented in this article.
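A toy sketch of the interlingua idea: one accessor returns a common property regardless of the underlying metadata format. The per-format field names are real-world examples (Dublin Core, ID3v2, EXIF), but the mapping table and the function are invented for the illustration and are not the MAWG ontology or API.

```python
# Invented illustration of format-independent access to a common property.
from typing import Optional

def get_media_property(record: dict, fmt: str, prop: str) -> Optional[str]:
    """Return a common property (e.g. 'title') regardless of the source format."""
    mappings = {
        "dublin_core": {"title": "dc:title",         "creator": "dc:creator"},
        "id3":         {"title": "TIT2",              "creator": "TPE1"},
        "exif":        {"title": "ImageDescription",  "creator": "Artist"},
    }
    source_field = mappings.get(fmt, {}).get(prop)
    return record.get(source_field) if source_field else None

song = {"TIT2": "Ode to Interoperability", "TPE1": "MAWG"}
print(get_media_property(song, "id3", "title"))   # -> 'Ode to Interoperability'
```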
Abstract:
The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have added to the 'traditional' set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domains, respectively, as well as more elaborate tools such as Granger causality. This increase in the number of approaches to assess the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the MATLAB® environment (The MathWorks, Inc.), designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe that this toolbox will be very helpful to all researchers working in the emerging field of brain connectivity analysis.
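For readers unfamiliar with the 'traditional' linear measures mentioned above, the sketch below computes the cross-correlation and the magnitude-squared coherence of two synthetic signals with SciPy. It only illustrates the concepts and is not HERMES code (HERMES itself is a MATLAB toolbox).

```python
# Conceptual illustration of two classic linear interdependence measures.
import numpy as np
from scipy.signal import coherence, correlate

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.roll(x, 25) + 0.5 * rng.standard_normal(t.size)   # delayed, noisy copy of x

# Normalized cross-correlation and the lag (in samples) at which it peaks
xc = correlate(x - x.mean(), y - y.mean(), mode="full")
xc /= np.std(x) * np.std(y) * x.size
lag = np.argmax(xc) - (x.size - 1)
print(f"peak cross-correlation {xc.max():.2f} at lag {lag} samples")

# Magnitude-squared coherence as a function of frequency
f, Cxy = coherence(x, y, fs=fs, nperseg=512)
print(f"coherence at 10 Hz ~ {Cxy[np.argmin(np.abs(f - 10))]:.2f}")
```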
Abstract:
In this Thesis we have addressed the difficulties related to the development of context-aware human-machine interaction systems. This issue lies at the intersection of two research fields: interactive systems and contextual information sources. Traditionally, both fields have been integrated through domain-specific vertical solutions that allow interactive systems to access contextual information without having to deal with low-level procedures, but that restrict their interoperability with other applications and heterogeneous data sources. It is therefore essential to foster interoperable solutions that provide access to real-world information through homogeneous procedures. This issue matches the scenarios of 'Ubiquitous Computing' and the 'Internet of Things', which point toward a future in which many of the objects around us will be able to acquire meaningful information about the environment and communicate it to other objects and to people. Since interactive systems are able to obtain information from their environment through interaction with the user, they can play an important role in this scenario, both consuming real-world data and producing enriched information. This Thesis deals with the integration of both fields in this technological scenario. To this end, we first carried out an analysis of the most important initiatives for the definition and design of interactive systems, and of the main infrastructures for providing contextual information. On the basis of this study, the use of the W3C SCXML language is proposed both for the design of interactive systems and for the processing of data provided by different context sources. This work shows how the SCXML capabilities for combining information from different modalities can also be used to process and integrate contextual information from heterogeneous sensor sources, and therefore to develop context-aware interaction systems. Similarly, the Sensor Web initiative, and its semantic extension the Semantic Sensor Web, is presented as an appropriate approach to provide uniform access to, and delivery of, information for context-aware interactive systems. We then analysed the challenges of integrating both types of initiatives, SCXML and the (Semantic) Sensor Web, and identified a number of functionalities that must be implemented in order to carry out this integration. Using technologies that provide flexibility to the implementation process and that are based on current recommendations and standards, we implemented a series of experimental developments integrating the identified functionalities. Finally, in order to validate our approach, we conducted experiments in a testing environment that simulates a driving scenario.
In this framework, an interactive system accesses a semantic extension of a Telco platform, based on the Sensor Web standards, to acquire contextual information and to publish the observations that the user provides to the system. The results showed the feasibility of using the SCXML language to design context-aware interactive systems that need to access advanced sensor platforms to consume and publish information while interacting with the user. They also showed how the use of semantic technologies in the querying and publication of sensor data can help any application reuse and share the information published in Sensor Web infrastructures, and thus contribute to realising the future Internet of Things scenario.
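The following toy state machine hints at what an SCXML document expresses declaratively: states change in response to user and context events, and some transitions publish observations back to a sensor infrastructure. All states, events and the publishing stub are invented for the illustration and do not reflect the thesis's actual SCXML models or Sensor Web services.

```python
# Invented, event-driven sketch of the state-chart idea behind SCXML.
class DrivingAssistant:
    def __init__(self):
        self.state = "idle"

    def publish_observation(self, name, value):
        # Stand-in for pushing an observation to a Sensor Web style service.
        print(f"publish {name}={value}")

    def on_event(self, event, payload=None):
        if self.state == "idle" and event == "trip_started":
            self.state = "driving"
        elif self.state == "driving" and event == "context.rain_detected":
            self.publish_observation("wiper_status", "on")
            self.state = "driving_rain"
        elif event == "trip_ended":
            self.publish_observation("trip_summary", payload)
            self.state = "idle"

assistant = DrivingAssistant()
for event in ["trip_started", "context.rain_detected", "trip_ended"]:
    assistant.on_event(event, payload={"km": 12})
    print("state:", assistant.state)
```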
Abstract:
There are about 10¹⁴ neuronal synapses in the human brain. This huge number of connections provides the substrate for neuronal ensembles to become transiently synchronized, producing the emergence of cognitive functions such as perception, learning or thinking. Understanding the organization of this complex brain network on the basis of neurophysiological data represents one of the most important and exciting challenges for systems neuroscience. Several measures have recently been proposed to evaluate, at various scales (single cells, cortical columns, or brain areas), how the different parts of the brain communicate. We can classify them, according to their symmetry, into two groups: symmetric measures, such as correlation, coherence or phase synchronization indexes, evaluate functional connectivity (FC); asymmetric measures, such as Granger causality or transfer entropy, are able to detect effective connectivity (EC), revealing the direction of the interaction. In modern neuroscience, interest in functional brain networks has increased strongly with the advent of new algorithms to study the interdependence between time series, the emergence of modern complex network theory and the introduction of powerful techniques to record neurophysiological data, such as magnetoencephalography (MEG). However, this is still a young field with several unresolved methodological questions, some of which are tackled in this thesis.
First of all, the growing number of time series analysis algorithms to study brain FC/EC, along with their mathematical complexity, makes it desirable to arrange them into a single, unified toolbox that allows neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of them. I developed such a toolbox for this purpose, named HERMES (http://hermes.ctb.upm.es), which encompasses several of the most common indexes for the assessment of FC and EC and runs in the MATLAB® environment. I believe this toolbox will be very helpful to all researchers working in the emerging field of brain connectivity analysis and will be of great value for the scientific community. The second practical issue tackled in this thesis is the evaluation of the sensitivity to deep brain sources of two different types of MEG sensors, planar gradiometers and magnetometers, combined with a methodological comparison of two phase synchronization indexes: the phase locking value (PLV) and the phase lag index (PLI), the latter being less sensitive to the volume conduction effect. Comparing their performance when studying brain networks, magnetometers and PLV yielded more densely connected networks than planar gradiometers and PLI, respectively, owing to the spurious values introduced by volume conduction. However, when it came to characterizing epileptic networks, PLV gave better results, as the PLI-based FC networks were very sparse. Complex network analysis has provided new concepts that improve the characterization of interacting dynamical systems. Within this framework, a network is considered to be composed of nodes, symbolizing systems, whose interactions are represented by edges, and a growing number of network measures is being applied in network analysis. However, there is theoretical and empirical evidence that many of these indexes are strongly correlated with each other; therefore, in this thesis I reduced them to a small set that characterizes these networks efficiently and condenses the redundant information. In functional network analysis, selecting an appropriate threshold to decide whether a given connectivity value of the FC matrix is significant, and should be included in subsequent analysis, becomes a crucial step. In this thesis, I used surrogate data tests to make an individual, data-driven evaluation of the significance of each edge, which produced more accurate results than fixing the density of connections a priori. All these methodologies were applied to the study of epilepsy, analysing resting-state MEG functional networks in two groups of epileptic patients (idiopathic generalized and frontal focal epilepsy) compared with matched healthy control subjects. Epilepsy is one of the most common neurological disorders, affecting more than 55 million people worldwide and characterized by a predisposition to generate epileptic seizures of abnormal, excessive or synchronous neuronal activity; it is therefore a perfect scenario for this type of analysis and is of great interest from both the clinical and the research perspective. The results revealed specific disruptions in connectivity and showed that network topology is altered in epileptic brains, supporting the shift in emphasis from the 'focus' to the 'network' that is gaining importance in recent epilepsy research.
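The two phase-synchronization indexes compared above have compact definitions, PLV = |⟨e^{jΔφ}⟩| and PLI = |⟨sign(sin Δφ)⟩|, where Δφ is the instantaneous phase difference. The sketch below computes both on synthetic signals with NumPy/SciPy; it is only a conceptual illustration, not HERMES code.

```python
# Conceptual illustration of PLV and PLI on synthetic, phase-lagged signals.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs = 500
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * 8 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 8 * t + 0.6) + 0.3 * rng.standard_normal(t.size)  # ~constant lag

# Instantaneous phases via the analytic signal
dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))

plv = np.abs(np.mean(np.exp(1j * dphi)))        # phase locking value
pli = np.abs(np.mean(np.sign(np.sin(dphi))))    # phase lag index
print(f"PLV = {plv:.2f}, PLI = {pli:.2f}")
# PLI discards phase differences of 0 or ±pi, which is what makes it less
# sensitive to the spurious zero-lag coupling produced by volume conduction.
```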
Abstract:
The project arises from the need to develop improved teaching methodologies in the field of continuum mechanics. The objective is to offer students a learning process through which they acquire the necessary theoretical knowledge, cognitive skills, and the responsibility and autonomy needed for professional development in this area. Traditionally, the concepts of these subjects were taught through lectures and laboratory practice. During these lessons the students' attitude was usually passive, and therefore their effectiveness was poor. The proposed methodology has already been successfully employed at universities such as the University of Bochum (Germany) and the University of South Australia, and aims to improve the effectiveness of knowledge acquisition through the student's use of a virtual laboratory. This laboratory makes it possible to adapt the curricula and learning techniques to the European Higher Education Area and to improve current learning processes at the University School of Public Works Engineering (EUITOP) of the Technical University of Madrid (UPM), where there are no laboratories for this specialization. The virtual space is created using a software platform built on OpenSim, which manages 3D virtual worlds, and the LSL (Linden Scripting Language) language, which gives objects specific behaviours. The student or user can access this virtual world through their avatar (their character in the virtual world) and can carry out practical exercises within the space created for this purpose, at any time, using only a computer with Internet access and a viewer. The virtual laboratory has three areas: the virtual meeting rooms, where the avatar can interact with peers, solve problems and exchange the documentation available in the virtual library; the interactive game room, where the avatar has to solve a number of problems within a time limit; and the video room, where students can watch instructional videos and receive group lessons. Each audiovisual interactive element is accompanied by explanations framing it within its area of knowledge, and enables students to begin to acquire the vocabulary and practice of the profession for which they are being trained. Plane elasticity concepts are introduced through tension and compression tests on steel and concrete specimens. The behaviour of lattice and articulated structures is reinforced through interactive games, and the concepts of tension, compression, and local and global buckling are illustrated by tests that load articulated structures to failure. Concepts of pure bending and of simple and combined torsion are studied by observing a flexible specimen. Earthquake-resistant design of buildings is illustrated by a laboratory test video.
Abstract:
New digital artifacts are emerging in data-intensive science. For example, scientific workflows are executable descriptions of scientific procedures that define the sequence of computational steps in an automated data analysis, supporting reproducible research and the sharing and replication of best practice and know-how through reuse. Workflows are specified at design time and interpreted through their execution in a variety of situations, environments, and domains. Hence it is essential to preserve both their static and dynamic aspects, along with the research context in which they are used. To achieve this, we propose the use of multidimensional digital objects (Research Objects) that aggregate the resources used and/or produced in scientific investigations, including workflow models, the provenance of their executions, and links to the relevant associated resources, along with technological support for their preservation and efficient retrieval and reuse. In this direction, we specified a software architecture for the design and implementation of a Research Object preservation system, and realized this architecture with a set of services and clients, drawing together practices in digital libraries, preservation systems, workflow management, social networking and Semantic Web technologies. In this paper, we describe the backbone system of this realization, a digital library system built on top of dLibra.
Abstract:
Several studies find that the keys to the technical, social and economic development of developing countries, especially in Africa, are education, research and innovation. However, in the health sector in particular, training and research in Africa remain scarce for various reasons. On the one hand, African researchers face additional difficulties in obtaining funding compared with researchers in developed countries. On the other hand, researchers do not have the right tools and technologies to access information relevant to their work. In this context, the main objective of the AFRICA BUILD project is to improve and enhance health research and education in Africa through new technologies. One of the main results of this project is the AFRICA BUILD Portal (ABP), which aims to be a meeting point for African researchers, where they can find a wide range of tools that allow them not only to improve their education but also to share knowledge with other researchers. This portal, currently under development, aims to promote communication between African institutions and will provide advanced biomedical tools. The main objective of this Final Year Project is the improvement of the e-learning system that already existed in the ABP.
The new system will be: (i) more powerful, better optimised and richer in features; (ii) built on a strong social base, allowing users to contribute information and share knowledge; and (iii) distributed and scalable, capable of integrating different learning management systems as sources of information. The results obtained are positive, and the new e-learning system improves on the usability, functionality and appearance of the previous one. To conclude, the tool still has considerable room for improvement, but thanks to the work carried out the e-learning system is now more social, distributed, scalable and innovative.
Abstract:
Data-related properties of the activities involved in a service composition can be used to facilitate several design-time and run-time adaptation tasks, such as service evolution, distributed enactment, and instance-level adaptation. A number of these properties can be expressed using a notion of sharing. We present an approach for the automated inference of data properties based on sharing analysis, which is able to handle service compositions with complex control structures, involving loops and sub-workflows. The inferred properties can include data dependencies, information content, domain-defined attributes, and privacy or confidentiality levels, among others. The analysis produces characterizations of the data and the activities in the composition in terms of minimal and maximal sharing, which can then be used to verify the compliance of potential adaptation actions, or as supporting information in their generation. This sharing analysis approach can be used both at design time and at run time. In the latter case, the results of the analysis can be refined using the composition traces (execution logs) at the point of execution, in order to support run-time adaptation.
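As a rough illustration of how minimal and maximal sharing characterizations can support adaptation decisions, the sketch below checks whether two activities of a toy composition could be enacted independently. The composition and the sharing sets are invented, and a real analysis would derive them from the workflow structure rather than list them by hand.

```python
# Invented example: "may share" (maximal) and "must share" (minimal) data sets
# per activity, used to vet a distributed-enactment decision.
may_share = {
    "fetch_order": {"order", "customer"},
    "bill":        {"order", "invoice", "customer"},
    "ship":        {"order", "address"},
}
must_share = {
    "fetch_order": {"order"},
    "bill":        {"order", "invoice"},
    "ship":        {"order"},
}

def can_enact_independently(a, b):
    """Safe to enact on separate nodes only if they cannot touch the same item."""
    return not (may_share[a] & may_share[b])

def definitely_coupled(a, b):
    """Certainly needs coordination if both are guaranteed to share some item."""
    return bool(must_share[a] & must_share[b])

print(can_enact_independently("bill", "ship"))   # False: both may touch 'order'
print(definitely_coupled("bill", "ship"))        # True: both must touch 'order'
```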
Capacity Building through education, research and collaboration: AFRICA BUILD, an eHealth Case Study
Abstract:
AFRICA BUILD (AB) is a Coordination Action project under the 7th European Framework Programme whose aim is to improve the capacities for health research and education in Africa through Information and Communication Technologies (ICT). The project, started in 2012, has promoted health research, education and evidence-based practice in Africa through the creation of centers of excellence, by using ICT, know-how, eLearning and knowledge sharing through Web-enabled virtual communities.
Abstract:
Long-lasting insecticide-treated nets (LLITNs) constitute a novel alternative that combines physical and chemical tactics to prevent insect access and the spread of insect-transmitted plant viruses in protected enclosures. The approach is based on a slow-release insecticide-treated net with a large hole size that allows improved ventilation of greenhouses. The efficacy of a wide range of LLITNs was tested under laboratory conditions against Myzus persicae, Aphis gossypii and Bemisia tabaci. Two nets were selected for field tests under high insect infestation pressure in the presence of plants infected with Cucumber mosaic virus and Cucurbit aphid-borne yellows virus. The efficacy of Aphidius colemani, a parasitoid commonly used for the biological control of aphids, was studied in parallel field experiments.