91 results for Abstraction.


Relevance:

10.00%

Publisher:

Abstract:

Finding useful sharing information between instances in object-oriented programs has recently been the focus of much research. The applications of such static analysis are multiple: by knowing which variables definitely do not share in memory we can apply conventional compiler optimizations, find coarse-grained parallelism opportunities, or, more importantly, verify certain correctness aspects of programs even in the absence of annotations. In this paper we introduce a framework for deriving precise sharing information, based on abstract interpretation, for a Java-like language. Our analysis achieves precision in various ways, including support for multivariance, which allows separating different contexts. We propose a combined Set Sharing + Nullity + Classes domain which captures which instances do not share and which ones are definitely null, and which uses class information to refine the static information when inheritance is present. The set sharing abstraction allows a more precise representation of the existing sharing relations and is crucial for achieving precision during interprocedural analysis. Carrying the domains in a combined way facilitates their interaction in the presence of multivariance in the analysis. We show, through examples and experimentally, that both the set sharing component and the combined domain provide more accurate information than previous work based on pair-sharing domains, at reasonable cost.
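
As a toy illustration of the kind of information such a domain carries, the sketch below (Python, with invented variable names; not the paper's implementation) represents an abstract state as a set of sharing sets plus a nullity map, and shows the query that licenses optimizations:

# Sketch of a set-sharing + nullity abstraction (illustrative only).
# A sharing set is a set of variables that may point to a common object;
# the abstract state is a set of such sharing sets plus a nullity map.
state = {
    "sharing": {frozenset({"x", "y"}), frozenset({"z"})},
    "nullity": {"x": "maybe", "y": "maybe", "z": "non-null", "w": "null"},
}

def may_share(state, a, b):
    # Two variables may share iff some sharing set contains both.
    return any(a in s and b in s for s in state["sharing"])

def definitely_disjoint(state, a, b):
    # Safe for optimization or parallelization when no sharing set joins them.
    return not may_share(state, a, b)

print(may_share(state, "x", "y"))            # True: x and y may alias
print(definitely_disjoint(state, "x", "z"))  # True: provably no sharing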

Relevance:

10.00%

Publisher:

Abstract:

Most human-designed environments have specific geometrical characteristics: polygonal, rectangular, and circular shapes are common, with a series of typical relations between the different elements of the environment. Introducing this kind of knowledge into the mapping process of a mobile robot can notably improve the quality and accuracy of the resulting maps, and can also make them more suitable for higher-level reasoning. When mapping is formulated in a Bayesian probabilistic framework, a complete specification of the problem requires considering a prior over the environment. This prior over the structure of the environment can be applied in several ways; this dissertation presents two different frameworks, one based on geometric features and another employing a dense representation close to the raw measurement space. A feature-based approach implicitly imposes a prior on the environment; in this sense, solving SLAM by optimizing a graph of geometric features is a first step towards new mapping methods for structured scenarios. In the first framework, the prior is inferred by the system from a wide collection of generic geometric models, following an Expectation-Maximization approach to obtain the most probable structure and the most probable map. The structure of the environment is represented hierarchically, with different levels of abstraction for the geometrical elements describing it. Various experiments were conducted to show the versatility and good performance of the proposed method. In the second framework, the user can define different structural priors as sets of local constraints and energies between neighboring points of a range scan of the environment. The set of constraints applied to each group of points depends on the topology, which is inferred by the system itself, so new generic structural models for the environment can be incorporated easily and flexibly. Several experiments were carried out to demonstrate the flexibility and the good results of the proposed approach.
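
To make the "local constraints and energies" idea of the second framework concrete, here is a minimal sketch under assumed conventions (not the dissertation's actual formulation): an energy that is low when three consecutive range-scan points are collinear, i.e. when a straight-wall prior fits them.

import math

def collinearity_energy(p0, p1, p2):
    # Angle change between segments p0->p1 and p1->p2; a straight wall
    # gives zero change, a corner gives a large one.
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    diff = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
    return diff ** 2  # zero when perfectly collinear

scan = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.05), (2.1, 1.0)]
energies = [collinearity_energy(*scan[i:i + 3]) for i in range(len(scan) - 2)]
print(energies)  # small values suggest a straight segment, large ones a corner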

Relevance:

10.00%

Publisher:

Abstract:

Visualisation of program executions has been used in applications which include education and debugging. However, traditional visualisation techniques often fall short of expectations or are altogether inadequate for new programming paradigms, such as Constraint Logic Programming (CLP), whose declarative and operational semantics differ in some crucial ways from those of other paradigms. In particular, traditional ideas regarding the behaviour of data often cannot be lifted in a straightforward way to (C)LP from other families of programming languages. In this chapter we discuss techniques for visualising data evolution in CLP. We briefly review some previously proposed visualisation paradigms, and also propose a number of (to our knowledge) novel ones. The graphical representations have been chosen based on the perceived needs of a programmer trying to analyse the behaviour and characteristics of an execution. In particular, we concentrate on the representation of the run-time values of the variables, and the constraints among them. Given our interest in visualising large executions, we also pay attention to abstraction techniques, i.e., techniques which are intended to help in reducing the complexity of the visual information.

Relevance:

10.00%

Publisher:

Abstract:

One way or another, planning has always been involved in the development of tourism, although its application has not been entirely successful. The renewed value that planning is acquiring in different spheres, public and private, also reaches tourism, but its methods and techniques still require modification before they can be applied to sustainable tourism. Those modifications to the methods for planning sustainable tourism in the Region of Coquimbo are the object of this research. The research establishes the following hypothesis: "The integration of planning methodologies with the principles of sustainable tourism would make it possible to generate a model of sustainable tourism planning that incorporates the values of the territory, the needs of visitors, and the participation of the host community, and that is applicable to the Region of Coquimbo through a territorial-methodological framework." From this hypothesis, the general objective is defined: to elaborate a proposal for a territorial-methodological framework for sustainable tourism planning in the Region of Coquimbo. From this general objective follow the specific objectives: analyzing the concepts and theoretical approaches of planning applied to tourism; studying the tourism development planning instruments of the Region of Coquimbo; determining structuring categories and elements; and elaborating a territorial-methodological planning framework for the development of sustainable tourism in the Region of Coquimbo. The research methodology is qualitative, related mainly to grounded theory and to procedural theories of planning. Both methodologies were used in the construction of the generic model and in the elaboration of the procedural-territorial framework. The construction of the generic model was assimilated to theory building in grounded theory: concepts were generated from the theoretical framework (conceptualization or abstraction); categories of procedural, territorial, and sustainable-tourism analysis were constructed (open coding); procedural-territorial and sustainable-tourism synthesis categories were elaborated (axial coding); the relational dynamics of the procedural-territorial categories were analyzed (selective coding); and the generic model of sustainable tourism planning was established (theory), comprising two submodels: the General Conditions submodel and the Planning Stages submodel. Starting from the generic model, the defined and dynamically related categories were ranked according to the characteristics of the Region of Coquimbo. The sustainable-tourism requirements recommended by the National Tourism Service (SERNATUR) at the national, regional, and communal levels were included, and the stages absent from the tourism development plans formulated in the Region were incorporated, giving rise to the procedural-territorial framework applied in the Region of Coquimbo. The research yields a planning methodology for sustainable tourism (the generic model) that, when applied to a particular territorial system such as the Region of Coquimbo, generates a new, specific planning methodology for sustainable tourism in that region (the procedural-territorial framework). The generic model can be applied to any territory inside or outside the Region of Coquimbo, bringing out its own values, the needs of visitors, and the participation of the community, and generating its own procedural-territorial frameworks (considering the territory's own features, the requirements of sustainable tourism, and correcting the shortcomings of previous planning in that particular territory).

Relevance:

10.00%

Publisher:

Abstract:

Self-consciousness implies not only self or group recognition, but also real knowledge of one’s own identity. Self-consciousness is only possible if an individual is intelligent enough to formulate an abstract self-representation. Moreover, it necessarily entails the capability of referencing and using this elf-representation in connection with other cognitive features, such as inference, and the anticipation of the consequences of both one’s own and other individuals’ acts. In this paper, a cognitive architecture for self-consciousness is proposed. This cognitive architecture includes several modules: abstraction, self-representation, other individuals'representation, decision and action modules. It includes a learning process of self-representation by direct (self-experience based) and observational learning (based on the observation of other individuals). For model implementation a new approach is taken using Modular Artificial Neural Networks (MANN). For model testing, a virtual environment has been implemented. This virtual environment can be described as a holonic system or holarchy, meaning that it is composed of autonomous entities that behave both as a whole and as part of a greater whole. The system is composed of a certain number of holons interacting. These holons are equipped with cognitive features, such as sensory perception, and a simplified model of personality and self-representation. We explain holons’ cognitive architecture that enables dynamic self-representation. We analyse the effect of holon interaction, focusing on the evolution of the holon’s abstract self-representation. Finally, the results are explained and analysed and conclusions drawn.
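
The interplay between direct and observational learning of the self-representation can be caricatured in a few lines. The sketch below is a deliberately simplified stand-in (a scalar "trait" instead of a modular neural network, with invented names) for the mechanism described above, not the paper's implementation:

import random

class SelfModel:
    def __init__(self):
        self.estimate = 0.5  # abstract self-trait, e.g. "how bold am I?"

    def update_from_experience(self, own_action_strength, lr=0.1):
        # Direct learning: move the self-estimate toward observed own behavior.
        self.estimate += lr * (own_action_strength - self.estimate)

    def update_from_observation(self, other_estimate, lr=0.02):
        # Observational learning: a weaker pull toward what others do.
        self.estimate += lr * (other_estimate - self.estimate)

agent = SelfModel()
for _ in range(100):
    agent.update_from_experience(random.uniform(0.7, 0.9))
print(round(agent.estimate, 2))  # converges near the agent's actual behavior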

Relevance:

10.00%

Publisher:

Abstract:

In the near future, wireless sensor networks (WSNs) will experience broad, high-scale deployment (millions of nodes nationwide) with multiple information sources per node and very specific signal processing requirements. In parallel, the broad deployment of WSNs facilitates the definition and execution of ambitious studies, with large input data sets and high computational complexity. These computation resources, very often heterogeneous and driven on demand, can only be satisfied by high-performance Data Centers (DCs). The high economic and environmental impact of energy consumption in DCs requires aggressive energy optimization policies; the need for such policies has been identified, but they have not yet been successfully put forward. In this context, this paper presents the following ongoing research lines and the results obtained. In the field of WSNs: energy optimization in the processing nodes at different abstraction levels, including reconfigurable application-specific architectures, efficient customization of the memory hierarchy, energy-aware management of the wireless interface, and design automation for signal processing applications. In the field of DCs: energy-optimal workload assignment policies in heterogeneous DCs, resource management policies with energy consciousness, and efficient cooling mechanisms, all cooperating to minimize the electricity bill of the DCs that process the data provided by the WSNs.
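
As one possible reading of "energy-optimal workload assignment in heterogeneous DCs", the following sketch greedily places each job on the server with the lowest marginal energy cost; the server parameters are invented for illustration and are not taken from the paper:

servers = [
    {"name": "low-power", "idle_w": 60, "w_per_job": 8, "capacity": 10, "load": 0},
    {"name": "high-perf", "idle_w": 180, "w_per_job": 5, "capacity": 40, "load": 0},
]

def marginal_cost(s):
    # The first job on a sleeping server also pays its idle power.
    wake_penalty = s["idle_w"] if s["load"] == 0 else 0
    return wake_penalty + s["w_per_job"]

def assign(jobs):
    for _ in range(jobs):
        candidates = [s for s in servers if s["load"] < s["capacity"]]
        best = min(candidates, key=marginal_cost)  # cheapest next watt wins
        best["load"] += 1

assign(12)
# Consolidates on the low-power box first, spilling over only when it is full.
print([(s["name"], s["load"]) for s in servers])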

Relevance:

10.00%

Publisher:

Abstract:

Over the last decade, Grid computing paved the way for a new level of large scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and a correct analysis and understanding of the system behavior are needed. Large scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each of them; when the same procedures are adapted to Grid and cloud computing, the vast complexity of these systems can make the task extremely complicated. But the complexity of large scale distributed systems could be simply a matter of perspective: it may be possible to understand the Grid or cloud behavior as a single entity instead of as a set of resources. This abstraction can provide a different understanding of the system, describing large scale behavior and global events that would probably not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large scale distributed systems management techniques aimed at system performance optimization, increased dependability, and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
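
A minimal sketch of the "single entity" perspective, under assumed metrics (not the authors' framework): many per-resource measurements are collapsed into one global state vector, from which large-scale events can be detected that per-resource inspection might miss.

from statistics import mean, pstdev

def global_state(per_resource_cpu):
    # The cloud as one entity: aggregate instead of inspecting each resource.
    return {"mean_cpu": mean(per_resource_cpu), "spread": pstdev(per_resource_cpu)}

history = [global_state(sample) for sample in (
    [0.30, 0.35, 0.32, 0.31],  # normal operation
    [0.33, 0.36, 0.30, 0.34],  # normal operation
    [0.80, 0.85, 0.83, 0.88],  # global surge: visible at the entity level
)]
baseline = history[0]["mean_cpu"]
for t, s in enumerate(history):
    if s["mean_cpu"] > 2 * baseline:
        print(f"t={t}: large-scale event (mean CPU {s['mean_cpu']:.2f})")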

Relevance:

10.00%

Publisher:

Abstract:

Ubiquitous computing software needs to be autonomous, so that essential decisions such as how to configure its particular execution are self-determined. Moreover, data mining serves an important role for ubiquitous computing by providing intelligence to several types of ubiquitous computing applications; thus, automating ubiquitous data mining is also crucial. We focus on the problem of automatically configuring the execution of a ubiquitous data mining algorithm. In our solution, we generate configuration decisions in a resource-aware and context-aware manner, since the algorithm executes in an environment in which the context often changes and computing resources are often severely limited. We propose to analyze the execution behavior of the data mining algorithm by mining its past executions. By doing so, we discover the effects of resource and context states, as well as parameter settings, on data mining quality. We argue that a classification model is appropriate for predicting the behavior of an algorithm's execution, and we concentrate on decision tree classifiers. We also define a taxonomy of data mining quality so that, for model selection, each behavior model (each classifying by a different abstraction of quality) is scored on the tradeoff between its prediction accuracy and its classification specificity. Behavior model constituents and class label transformations are formally defined, and experimental validation of the proposed approach is performed.
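
A minimal sketch of the behavior-model idea using scikit-learn's decision tree; the features, labels, and data below are invented for illustration, and the paper's actual model constituents and label transformations differ:

from sklearn.tree import DecisionTreeClassifier

# Each row: [free_memory_mb, battery_pct, sample_rate_hz] from past executions.
X = [[512, 90, 100], [128, 40, 100], [512, 85, 10], [64, 20, 50], [256, 70, 25]]
y = ["high", "low", "medium", "low", "medium"]  # observed mining quality

# Train a behavior model that predicts quality from resource/context states
# and parameter settings.
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# At run time, predict the quality for the current context before configuring.
print(model.predict([[300, 60, 50]]))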

Relevance:

10.00%

Publisher:

Abstract:

Crop production is a major contributor to water use and abstraction. Sugar beet is an important crop on irrigated land in Spain, covering 70,000 ha. Crop and resource management are key factors for sustainable agriculture. The aim of this work is to model sugar beet growth and water consumption in order to quantify crop water use and virtual water content under different growing conditions.
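
For reference, virtual water content is typically computed as crop water use divided by yield; a toy example with placeholder numbers (not the study's results):

# Worked toy example of the quantity being estimated.
water_use_m3_per_ha = 6000.0  # assumed seasonal crop water use
yield_t_per_ha = 90.0         # assumed sugar beet yield

virtual_water_m3_per_t = water_use_m3_per_ha / yield_t_per_ha
print(f"{virtual_water_m3_per_t:.1f} m3 of water per tonne of beet")  # ~66.7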

Relevance:

10.00%

Publisher:

Abstract:

In this work, a complete hardware-software support platform for a WSN testbed, focused on developing wireless sensor applications in a simple and intuitive way, is presented as an alternative to the commercial-mote-based testbeds found in the state of the art. The main goal of this hardware-software platform is to provide the highest abstraction level for the management of WSNs in the simplest possible way, in order to achieve a fast profiling mechanism for reliable prototyping based on the Cookies platform, and to help users develop, test, and validate Cookie-based WSN applications.

Relevance:

10.00%

Publisher:

Abstract:

In this work, a complete set of libraries for developing wireless sensor applications in a simple and intuitive way is presented, in contrast to the most widespread application abstraction mechanisms, which are based on operating systems. The main goal of this software platform, named CookieLibs, is to provide the highest abstraction level for the management of WSNs in the simplest way for users who are not familiar with software design, in order to achieve a fast profiling mechanism for reliable prototyping based on the Cookies platform.
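
To convey what "highest abstraction level" can mean in practice, here is a hypothetical API sketch in the spirit of such libraries; the names (CookieNetwork, every, run) are invented for illustration and are not the actual CookieLibs interface:

class CookieNetwork:
    """Hypothetical facade: the user declares what to sense and how often,
    and the library would hide node firmware and radio details."""

    def __init__(self):
        self._tasks = []

    def every(self, seconds, sensor, callback):
        # In a real library this would be mapped to node/radio schedules.
        self._tasks.append((seconds, sensor, callback))

    def run(self):
        for seconds, sensor, callback in self._tasks:
            print(f"sampling '{sensor}' every {seconds}s -> {callback.__name__}")

def log_reading(value):
    print("reading:", value)

net = CookieNetwork()
net.every(60, "temperature", log_reading)
net.run()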

Relevance:

10.00%

Publisher:

Abstract:

This final degree project, "High-level modeling with SystemC", has as its main objective the modeling of some modules of an MPEG-2 video encoder using SystemC, a digital-system description language, at the TLM (Transaction Level Modeling) abstraction level. SystemC is a digital-system description language based on C++: it provides routines and libraries that implement special data types, structures, and processes for modeling digital systems; its full description can be found in [GLMS02]. The TLM abstraction level is characterized by separating the communication between modules from their functionality, placing more emphasis on what the communication achieves (where data come from and go to) than on its exact implementation; TLM and an implementation example are described in [RSPF] and [HG]. The architecture of the model is based on the MVIP-2 encoder described in [Gar04]. The implemented modules are:
· IVIDEOH: filters the input video in the horizontal dimension and stores the filtered video in memory.
· IVIDEOV: reads the video filtered by IVIDEOH, filters it in the vertical dimension, and writes the result to memory.
· DCT: reads the video filtered by IVIDEOV, applies the discrete cosine transform, and stores the transformed video in memory.
· QUANT: reads the video transformed by DCT, quantizes it, and stores the result in memory.
· IQUANT: reads the video quantized by QUANT, applies inverse quantization, and stores the result in memory.
· IDCT: reads the video processed by IQUANT, applies the inverse cosine transform, and stores the result in memory.
· IMEM: acts as the interface between the modules above and the memory; it manages simultaneous memory-access requests and guarantees exclusive access to the memory at each instant.
All these modules appear in grey in Figure 1, which shows the architecture of the model (see the PDF of the project). The figure also shows some modules in white; these are test modules added to simulate and exercise the model:
· CAMARA: simulates a black-and-white camera; it reads the luminance from a video file and sends it to the model through a FIFO.
· FIFO: acts as the interface between the camera and the model; it buffers the data sent by the camera until IVIDEOH reads them.
· CONTROL: controls the video-processing modules; each one notifies CONTROL when it finishes processing a video frame, and CONTROL starts whichever modules are needed to continue the encoding, ensuring the correct sequencing of the video processors.
· RAM: simulates a RAM memory, including a programmable access delay.
For testing, video files with the output of each processing module, message files, and a trace file showing the sequencing of the processors were also generated. As a result of the work in this project, it can be concluded that SystemC allows digital systems to be modeled quite easily (prior knowledge of C++ and object-oriented programming is needed) and supports models at a higher abstraction level than the RTL usual in Verilog and VHDL; in this project, TLM.
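
The sequencing job performed by the CONTROL module can be summarized language-agnostically; the Python sketch below (illustrative only, not SystemC/TLM code) expresses the same per-frame ordering constraint that CONTROL enforces over the processing modules:

# Toy of the CONTROL module's job: start each processing stage only once the
# previous one has finished the frame.
PIPELINE = ["IVIDEOH", "IVIDEOV", "DCT", "QUANT", "IQUANT", "IDCT"]

def encode_frame(frame_id):
    for stage in PIPELINE:
        # In the model, each stage signals CONTROL on completion; here the
        # sequential call expresses the same ordering constraint.
        print(f"frame {frame_id}: {stage} running")
    print(f"frame {frame_id}: done")

for frame in range(2):
    encode_frame(frame)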

Relevance:

10.00%

Publisher:

Abstract:

This work addresses the problem of modeling real dynamical systems from the study of their time series, using a standard formulation intended as a universal abstraction of dynamical systems, irrespective of their deterministic, stochastic, or hybrid nature. Deterministic and stochastic models are developed separately and then merged into a hybrid model that allows the study of generic mixed systems, that is, systems combining deterministic and random behavior. The model has two components: a deterministic one, consisting of a difference equation obtained from an autocorrelation study, and a stochastic one, which models the error made by the first. The stochastic component is a universal generator of probability distributions, based on a process composed of random variables uniformly distributed over an interval that varies in time. This universal generator is derived in the thesis from a new theory of supply and demand for a generic resource. The resulting model can be conceived as an entity with three fundamental elements: an engine generating deterministic dynamics, an internal noise source generating uncertainty, and an exposure to the environment representing the interactions of the real system with the outside world. In applications, these three elements are adjusted to the history of the time series of the dynamical system. Once its components have been adjusted, the model behaves adaptively, taking the new values of the system's time series as inputs and computing predictions of its future behavior. Each prediction is given as an interval within which any value is equiprobable and outside which any value has null probability; the model thus computes the future behavior and its level of uncertainty from the current state of the system. The model is applied in this thesis to very different systems, proving very flexible for studying fields of disparate nature: the exchange of telephone traffic between telephony operators, the evolution of financial markets, and the flow of information between Internet servers are studied in depth. All these systems are successfully modeled with the same language, despite being physically very different. The study of telephone networks shows that traffic patterns exhibit a strong weekly pseudo-periodicity contaminated with a great deal of noise, especially in the case of international calls. The study of financial markets shows that their fundamental nature is random, with a relatively bounded range of behavior; part of the thesis is devoted to explaining some of the most important empirical observations in financial markets, such as fat tails, power laws, and volatility clustering. Finally, it is shown that communication between Internet servers has, like financial markets, an underlying fully stochastic component with fairly docile behavior, this docility becoming more marked as the distance between servers increases. Two aspects of the model stand out: its adaptability and its universality. The first is due to the fact that, once the general parameters have been adjusted, the model is fed with the observable values of the system and can compute future behavior from them; despite having fixed parameters, the variability of the observables used as inputs leads to a great richness of possible outputs. The second is due to the generic formulation of the hybrid model and to the fact that its parameters are adjusted from external manifestations of the system under study rather than from its physical characteristics. These factors make the model usable in a great variety of fields. Finally, the thesis proposes other fields where very promising preliminary results have been obtained, such as financial risk modeling, routing algorithms in telecommunication networks, and climate change.
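
A toy version of the two-component idea, under simplifying assumptions (a one-lag linear difference equation and uniform residuals; not the thesis's estimation procedure), showing how interval predictions arise:

def fit_and_predict(series):
    # Deterministic part: x[t+1] ~ a*x[t], with a from least squares.
    num = sum(x1 * x0 for x0, x1 in zip(series, series[1:]))
    den = sum(x0 * x0 for x0 in series[:-1])
    a = num / den
    # Stochastic part: residuals assumed uniform in [min_err, max_err].
    errs = [x1 - a * x0 for x0, x1 in zip(series, series[1:])]
    lo, hi = min(errs), max(errs)
    point = a * series[-1]
    return point + lo, point + hi  # any value in the interval is equiprobable

series = [10.0, 10.4, 10.1, 10.6, 10.3, 10.8]
print(fit_and_predict(series))  # predicted interval for the next value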

Relevance:

10.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) are spearheading the efforts to build and deploy systems aiming to accomplish the ultimate objectives of the Internet of Things. Thanks to the sensors their nodes are equipped with, and to their ubiquity and pervasive capabilities, these networks are extremely suitable for many applications that so-called conventional cabled or wireless networks are unable to handle. One of these still underdeveloped applications is monitoring physical parameters of a person. This is an especially interesting application considering the person's age or activity, since any hazardous parameter detected can be notified not only to the monitored person as a warning, but also to any third party that may be helpful under critical circumstances, such as relatives or healthcare centers. We propose a system built to monitor a sportsman or sportswoman during a workout session or a sport-related indoor activity. Sensors have been deployed on several nodes acting as the nodes of a WSN, along with a semantic middleware development used to abstract away hardware complexity. The data extracted from the environment, combined with the information obtained from the user, form the basis of the services that can be offered.