40 results for System modelling
at Universidad Politécnica de Madrid
Abstract:
The main aim of this project is to provide the telecommunications engineer with an overview of the techniques used to model the auditory system. Auditory-system modelling is carried out with the following objectives: a) to interpret direct measurements, b) to unify the understanding of different phenomena, c) to guide amplification strategies that compensate for hearing loss, and d) to obtain experimentally testable predictions of behaviour at different levels of complexity. This work briefly covers the different techniques used to model the parts of the auditory system, from electroacoustic analogies and biophysical and binaural models to the implementation of auditory filters by means of signal processing. We conclude that modelling through electroacoustic analogies allows rapid implementation and understanding but has certain limitations. Simulations based on numerical analysis are accurate and very useful for both the middle and the inner ear. Signal processing is the most complete and most widely used approach, since it can model the outer and middle ear and also supports cochlear filters that are highly accurate and consistent with reality, which can then be embedded in perceptual models.
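To make the signal-processing approach concrete, below is a minimal sketch of one of the most widely used cochlear filter shapes, the gammatone, assuming the standard Glasberg and Moore ERB bandwidth parameterization and NumPy; it illustrates the general technique, not the specific filter banks surveyed in the project.

```python
import numpy as np

def gammatone_ir(fc, fs=16000.0, duration=0.05, order=4):
    """Impulse response of a gammatone auditory filter centred at fc (Hz).

    Bandwidth follows the common ERB parameterization
    ERB(fc) = 24.7 * (4.37 * fc / 1000 + 1)  [Glasberg & Moore].
    """
    t = np.arange(0, duration, 1.0 / fs)
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)
    b = 1.019 * erb                        # bandwidth scaling for a 4th-order filter
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))           # normalize peak amplitude

# A small filterbank: one channel per centre frequency.
bank = {fc: gammatone_ir(fc) for fc in (250.0, 500.0, 1000.0, 2000.0, 4000.0)}
signal = np.random.randn(16000)            # stand-in for an input sound
outputs = {fc: np.convolve(signal, ir, mode="same") for fc, ir in bank.items()}
```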
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and finally combines the results. In this way the number of noise sources is controlled at all times and the combinatorial explosion is therefore minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method builds on the fact that, although we strictly need to guarantee a given confidence interval for the final results of the search, we can use more relaxed confidence levels, which in turn imply a considerably smaller number of samples per simulation, in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
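As a rough illustration of the simulation-based reference against which such analytical noise models are checked, the following sketch estimates the round-off noise power of a toy non-linear fixed-point datapath by Monte-Carlo simulation; the datapath, word-length choices and sample counts are hypothetical, and the thesis's actual ME-gPC/MAA machinery goes far beyond this.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

def roundoff_noise_power(frac_bits, n_samples=100_000, seed=0):
    """Monte-Carlo estimate of output round-off noise for a toy non-linear datapath.

    Reference values come from double-precision evaluation; the quantized
    path rounds after every operation, as a fixed-point implementation would.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)
    y = rng.uniform(-1.0, 1.0, n_samples)
    ref = x * y + 0.5 * x * x                  # double-precision reference
    xq, yq = quantize(x, frac_bits), quantize(y, frac_bits)
    out = quantize(xq * yq, frac_bits)
    out = quantize(out + quantize(0.5 * xq * xq, frac_bits), frac_bits)
    return np.mean((out - ref) ** 2)

for bits in (8, 12, 16):
    print(bits, roundoff_noise_power(bits))    # noise power drops with word-length
```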
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
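The following is a minimal textbook-style OSEM update using a precomputed sparse system matrix, in the spirit of the reconstruction scheme described above; it assumes SciPy sparse matrices and omits the symmetry and redundancy exploitation that the paper relies on for speed.

```python
import numpy as np
from scipy import sparse

def osem(A, y, n_subsets=4, n_iters=5):
    """Ordered-subsets EM reconstruction with a sparse system matrix.

    A : (n_bins, n_voxels) sparse matrix, A[i, j] = P(count in bin i | decay in voxel j)
    y : measured sinogram counts, length n_bins.
    """
    A = sparse.csr_matrix(A)
    x = np.ones(A.shape[1])                           # uniform initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iters):
        for rows in subsets:
            As = A[rows]
            sens = np.asarray(As.sum(axis=0)).ravel() # subset sensitivity image
            proj = As @ x                             # forward projection
            ratio = y[rows] / np.maximum(proj, 1e-12) # measured / estimated
            x *= (As.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```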
Abstract:
A module to estimate the risk of ozone damage to vegetation has been implemented in the Integrated Assessment Modelling system for the Iberian Peninsula. It was applied to compute three different indices for wheat and Holm oak: daylight AOT40 (cumulative ozone concentration over 40 ppb), the cumulative ozone exposure index defined in Directive 2008/50/EC (AOT40-D) and PODY (Phytotoxic Ozone Dose over a given threshold of Y nmol m⁻² s⁻¹). The use of these indices led to remarkable differences in the spatial patterns of relative ozone risk to vegetation. Ozone critical levels were exceeded in most of the modelling domain, and soil moisture content was found to have a significant impact on the results. According to the model outputs, daylight AOT40 constitutes a more conservative index than AOT40-D. Additionally, flux-based estimates indicate high-risk areas in Portugal, for both wheat and Holm oak, that are not identified by the AOT40-based methods.
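For reference, the daylight AOT40 index accumulates the hourly excess over 40 ppb during daylight hours across the accumulation period; a direct one-function sketch (assuming hourly data in ppb and a matching daylight flag per hour) could look like this:

```python
def aot40(hourly_ppb, daylight_mask):
    """Daylight AOT40 (ppb·h): sum of hourly exceedances above 40 ppb.

    hourly_ppb    : sequence of hourly ozone concentrations in ppb
    daylight_mask : sequence of booleans, True for daylight hours
    """
    return sum(max(c - 40.0, 0.0)
               for c, day in zip(hourly_ppb, daylight_mask) if day)

# Example: three daylight hours at 55, 38 and 62 ppb contribute 15 + 0 + 22 = 37 ppb·h.
print(aot40([55.0, 38.0, 62.0, 70.0], [True, True, True, False]))
```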
Abstract:
This paper presents the development of the robotic multi-agent system SMART. In this system, the agent concept is applied to both hardware and software entities. The hardware agents are robots, with three and four legs, and an IP camera that takes images of the scene where the cooperative task is carried out. Hardware agents cooperate closely with software agents, which can be classified into image processing, communications, task management and decision making, and planning and trajectory generation agents. To model, control and evaluate the performance of cooperative tasks among agents, a kind of Petri net called a Work-Flow Petri Net is used. Experimental results show the good performance of the system.
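As a hedged illustration of how a place/transition net can coordinate cooperative tasks, here is a toy Python net with the usual enabling/firing semantics; the place and transition names are invented for the example, and the paper's Work-Flow Petri Net formalism adds structure beyond this minimal core.

```python
class PetriNet:
    """Minimal place/transition net for coordinating cooperative agent tasks."""

    def __init__(self, marking):
        self.marking = dict(marking)       # place -> token count
        self.transitions = {}              # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy workflow: the camera agent must deliver an image before a robot moves.
net = PetriNet({"image_requested": 1})
net.add_transition("process_image", ["image_requested"], ["image_ready"])
net.add_transition("plan_motion", ["image_ready"], ["robot_moving"])
net.fire("process_image")
net.fire("plan_motion")
print(net.marking)   # {'image_requested': 0, 'image_ready': 0, 'robot_moving': 1}
```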
Abstract:
The new reactor concepts proposed in the Generation IV International Forum (GIF) are conceived to improve the use of natural resources, reduce the amount of high-level radioactive waste and excel in reliability and safe operation. Among these novel designs, sodium fast reactors (SFRs) stand out due to their technological feasibility, as demonstrated in several countries during the last decades. As part of the EURATOM contribution to GIF, the CP-ESFR is a collaborative project whose objectives include performing extensive analysis of safety issues involving renewed SFR demonstrator designs. The verification, by code-to-code comparison, of computational tools able to simulate plant behaviour under postulated accident conditions was identified as a key point in ensuring reactor safety. Along this line, several organizations employed coupled neutronic and thermal-hydraulic system codes able to simulate the complex and specific phenomena involved in multi-physics studies adapted to this particular fast-reactor technology. The introduction of this paper discusses the framework of the study; the second section describes the envisaged plant design and the commonly agreed modelling guidelines. The third section presents a comparative analysis of the calculations performed by each organisation applying its models and codes to a commonly agreed transient, with the objective of harmonizing the models and validating the implementation of all relevant physical phenomena in the different system codes.
Abstract:
Meeting the limit values established in European legislation poses a significant challenge for large urban areas with intense road traffic, such as Madrid (Spain). Despite the permanent measures included in air quality plans, it is important to assess additional measures that may be applied temporarily under unfavourable conditions. This paper reports on the simulation of different traffic restriction strategies in Madrid for high-pollution episodes.
Abstract:
Improving air quality is an eminently interdisciplinary task. The wide variety of sciences and stakeholders involved calls for simple yet fully integrated and reliable evaluation tools. Integrated assessment modelling has proved to be a suitable solution for describing air pollution systems because it considers each of the stages involved: emissions, atmospheric chemistry and dispersion, the associated environmental impacts and the abatement potentials. Several integrated assessment models covering each of these stages are already available at the European scale, the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) model being the most recognized and widely used within the European policy-making context. However, it is desirable to address air quality at the national/regional scale under an integrated assessment framework, and European-scale models do not serve this purpose satisfactorily because they lack spatial resolution or detail in their ancillary data, mainly emission inventories and local meteorological patterns. The objective of this dissertation is to present the developments in the design and application of an integrated assessment model especially conceived for Spain and Portugal. The Atmospheric Evaluation and Research Integrated system for Spain (AERIS) is able to quantify concentration profiles for several pollutants (NO2, SO2, PM10, PM2.5, NH3 and O3), the atmospheric deposition of sulfur and nitrogen species and their related impacts on crops, vegetation, ecosystems and health as a response to percentage changes in the emissions of relevant sectors. The current version of AERIS considers 20 emission sectors, corresponding either to individual SNAP sectors or to macrosectors, whose contributions to air quality levels, deposition and impacts have been modelled through source-receptor matrices (SRMs). These matrices are proportionality constants that relate emission changes to different air quality indicators, and they have been derived through statistical parameterizations of an air quality modelling system (AQM). In the specific case of AERIS, its parent AQM relied on the WRF model for meteorology and on the CMAQ model for atmospheric chemical processes. The quantification of atmospheric deposition and of impacts on ecosystems, crops, vegetation and human health follows the standard methodologies established under international negotiation frameworks such as CLRTAP. The programming structure is MATLAB-based, allowing great compatibility with typical desktop software such as Microsoft Excel or ArcGIS. Regarding air quality levels, AERIS can provide annual and monthly mean concentration values, as well as the indicators established in Directive 2008/50/EC, namely the 19th highest hourly value for NO2; the 25th highest hourly value and the 4th highest daily value for SO2; the 36th highest daily value for PM10; and the 26th highest maximum daily 8-hour value, SOMO35 and AOT40 for O3. Regarding atmospheric deposition, the annual accumulated deposition per unit area of oxidized and reduced nitrogen species, as well as of sulfur, can be estimated. When these values are related to specific characteristics of the modelling domain, such as land use, forest and crop covers, population counts and epidemiological studies, a wide array of impacts can be calculated. Focusing on impacts on ecosystems and soils, AERIS can estimate critical-load exceedances and accumulated average exceedances for nitrogen and sulfur species. Damage to forests is estimated as an exceedance of the established critical levels of NO2 and SO2. Additionally, AERIS can quantify the damage caused by O3 and SO2 to grapes, maize, potato, rice, sunflower, tobacco, tomato, watermelon and wheat. Impacts on human health are modelled as a consequence of exposure to PM2.5 and O3 and quantified as losses in statistical life expectancy and as premature-mortality indicators. The accuracy of the integrated assessment model has been tested by statistically contrasting its results with those yielded by the conventional AQM, exhibiting a good level of agreement in most cases. Because impacts are not produced directly by the AQM, a credibility analysis was carried out by comparing the outputs of AERIS for a given emission scenario, through probability tests, against the performance of GAINS for the same scenario. This analysis revealed a good correspondence in the mean behaviour and in the probability distributions of the datasets. The verification tests applied to AERIS suggest that its results are consistent enough to be credited as reasonable and realistic. In conclusion, the main motivation for creating this model was to produce a reliable yet simple screening tool to support decision and policy making when analysing different "what-if" scenarios at a low computational cost. The interaction with policy makers and other stakeholders dictated reconciling the complexity of environmental modelling with the conciseness of policies, and AERIS reflects this in both its conceptual and its computational structure. It should be noted, however, that AERIS has been created for use exclusively within a policy-driven assessment framework and should by no means be considered a substitute for ordinary air quality models.
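At the heart of AERIS, as described above, source-receptor matrices turn scenario evaluation into inexpensive linear algebra: an indicator change is the matrix product of the SRM with the vector of sectoral emission changes. A minimal sketch with entirely hypothetical numbers:

```python
import numpy as np

# Hypothetical dimensions: 3 emission sectors, 4 receptor grid cells.
# srm[i, j] = change in an air-quality indicator at cell i per unit
# relative emission change (e.g. per 1 % reduction) in sector j.
srm = np.array([[0.020, 0.005, 0.001],
                [0.010, 0.015, 0.002],
                [0.004, 0.008, 0.012],
                [0.001, 0.002, 0.018]])

baseline = np.array([38.0, 42.0, 35.0, 30.0])    # e.g. annual mean NO2, µg/m³
delta_emissions = np.array([-30.0, 0.0, -10.0])  # % change per sector

scenario = baseline + srm @ delta_emissions      # linearized scenario response
print(scenario)
```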
Abstract:
Over the last decade, scientific studies have indicated an association between the air pollution to which people are exposed and a wide range of adverse health outcomes. We have developed a tool, based on a modelling system (MM5-CMAQ) running over Europe at 50 km spatial resolution with EMEP annual emissions, to produce a short-term forecast of the impact of air pollution on health. In order to estimate the mortality change forecast for the next 24 hours, we have chosen a log-linear (Poisson) regression form for the concentration-response (C-R) function. The parameters involved in the C-R function have been estimated from published epidemiological studies. Finally, we have derived from the C-R function the relationship between the concentration change and the mortality change, which is the final health impact function.
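For clarity, one common way to write the log-linear (Poisson) concentration-response form described above, with y_0 the baseline mortality rate and β taken from published epidemiological studies, is the following; the paper's exact coefficients are not reproduced here.

```latex
y = y_0\, e^{\beta C}
\quad\Longrightarrow\quad
RR = e^{\beta\,\Delta C},
\qquad
\Delta y = y_0\left(e^{\beta\,\Delta C} - 1\right)
```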
Abstract:
A mathematical model of the oculomotor plant, based on experimental data in cats, is presented: the system that generates, from the neuronal processes at the motoneuron, the control signals to the eye muscles that move the eye. In contrast with previous models, which base the eye-movement-related motoneuron behaviour on a first-order linear differential equation, non-linear effects are described: a dependency of the model parameters on the angular position of the eye.
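The classical first-order linear relation referenced here links the motoneuron firing rate R(t) to eye position θ(t) and its velocity; the abstract's non-linear extension makes the coefficients position-dependent. A hedged reconstruction, not the paper's exact formulation:

```latex
R(t) = k\,\theta(t) + r\,\dot{\theta}(t)
\qquad\longrightarrow\qquad
R(t) = k(\theta)\,\theta(t) + r(\theta)\,\dot{\theta}(t)
```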
Abstract:
Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors or smart objects) imposes special requirements in terms of lightness for data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning and propose a light inference system (LIS) aimed at simplifying embedded inference processes, offering a set of functionalities to avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications; it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. In order to test the LIS features in a real application scenario, an 'Activity Monitor' has been designed and implemented: a personal health-persuasive application that provides feedback on the user's lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user's activity level in a timely manner, to decide on the convenience of triggering notifications and to determine the best interface or channel for delivering these context-aware alerts.
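As a purely illustrative sketch of the kind of rule evaluation the Activity Monitor delegates to LIS, the toy function below decides whether to trigger a notification and over which channel; all names, thresholds and context fields are hypothetical, and the real system works over an ontology with an embedded rule engine rather than hard-coded conditions.

```python
def evaluate_rules(context):
    """Toy forward-chaining step: match context facts against fixed rules."""
    alerts = []
    if context["steps_last_hour"] < 100 and context["user_state"] == "awake":
        alerts.append(("low_activity", "suggest a short walk"))
    if context["heart_rate"] > 120 and context["activity"] == "resting":
        alerts.append(("anomaly", "elevated resting heart rate"))
    # Channel selection: prefer the device the user is currently wearing.
    channel = "watch" if context.get("watch_worn") else "phone"
    return [(kind, msg, channel) for kind, msg in alerts]

print(evaluate_rules({"steps_last_hour": 40, "user_state": "awake",
                      "heart_rate": 80, "activity": "resting",
                      "watch_worn": True}))
```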
Abstract:
The objective of this research was the implementation of a participatory process for the development of a tool to support decision making in water management. The process carried out aims at attaining an improved understanding of the water system and encouraging the exchange of knowledge and views between stakeholders in order to build a shared vision of the system. In addition, the process intends to identify the impacts of possible solutions to given problems, which will help in taking decisions.
Abstract:
The modelling of critical infrastructures (CIs) is an important issue that needs to be properly addressed, for several reasons. It is a basic support for making decisions about operation and risk reduction. It might help in understanding high-level states at the system-of-systems layer, which are not readily evident to the organisations that manage the lower-level technical systems. Moreover, it is also indispensable for setting a common reference between operators and authorities and for agreeing on the incident scenarios that might affect those infrastructures. So far, critical infrastructures have been modelled ad hoc, on the basis of knowledge and practice derived from less complex systems. As there is no theoretical framework, most of these efforts proceed without clear guides and goals, using informally defined schemas based mostly on boxes and arrows. Different CIs (the electricity grid, telecommunications networks, emergency support, etc.) have been modelled using particular schemas that were not directly translatable from one CI to another. If there is a desire to build a science of CIs, it is because there are observable commonalities that different CIs share. Up until now, however, those commonalities have not been adequately compiled or categorized, so building models of CIs rooted in such commonalities was not possible. This report explores the question of which elements underlie every CI and how those elements can be used to develop a modelling language that will enable CI modelling and, subsequently, the analysis of CI interactions, with a special focus on resilience.
Abstract:
Systems of Systems (SoS) present challenging features, and existing tools often prove inadequate for their analysis, especially for heterogeneous networked infrastructures. Most accident scenarios in networked systems cannot be addressed by a simplistic black-or-white (i.e. functioning or failed) approach. Slow deviations from nominal operating conditions may cause degraded behaviours that suddenly end up in unexpected malfunctioning, with large portions of the network affected. In this paper, we present a language for modelling networked SoS. The language makes it possible to represent interdependencies of various natures, e.g. technical, organizational and human. The representation of interdependencies is based on control relationships that exchange physical quantities and related information. The language also makes possible the identification of accident scenarios, by representing the propagation of failure events throughout the network. The results can be used for assessing the effectiveness of the mechanisms and measures that contribute to overall resilience, in both qualitative and quantitative terms. The presented modelling methodology is general enough to be applied in combination with existing system analysis techniques, such as risk assessment, dependability and performance evaluation.
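To illustrate the graded (rather than black-or-white) propagation the language is meant to capture, here is a small sketch that iterates degraded operation levels over weighted control links; the component names, weights and min-based update rule are illustrative assumptions, not the paper's semantics.

```python
def propagate_degradation(influence, levels, n_steps=10):
    """Fixed-point iteration of degraded operation levels over control links.

    influence[a] = list of (b, weight): node a's operability constrains b,
    scaled by a weight in (0, 1].  Levels lie in [0, 1], where 1 = nominal.
    """
    for _ in range(n_steps):
        updated = dict(levels)
        for a, edges in influence.items():
            for b, w in edges:
                # b cannot operate better than what a's degraded output allows.
                constrained = 1.0 - w * (1.0 - levels[a])
                updated[b] = min(updated[b], constrained)
        levels = updated
    return levels

# Toy scenario: a degraded power grid partially impairs telecom and pumping,
# and the telecom degradation in turn impairs emergency dispatch.
influence = {"grid": [("telecom", 0.8), ("pumping", 0.9)],
             "telecom": [("dispatch", 0.6)]}
levels = {"grid": 0.4, "telecom": 1.0, "pumping": 1.0, "dispatch": 1.0}
print(propagate_degradation(influence, levels))
```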
Abstract:
Diffusion controls the gaseous transport process in soils when advective transport is almost null. Knowledge of the soil structure and pore connectivity is critical to understanding and modelling soil aeration, the sequestration or emission of greenhouse gases and the volatilization of volatile organic chemicals, among other phenomena. In the last decades these issues have attracted increasing attention, as scientists have realized that soil is one of the most complex materials on Earth, within which many biological, physical and chemical processes that support life and affect climate change take place. A quantitative and explicit characterization of soil structure is difficult because of the complexity of the pore space, which is the main reason why most theoretical approaches to soil porosity are idealizations that simplify the system. In this work, we propose a more realistic attempt to capture the complexity of the system by developing a model that considers the size and location of pores in order to relate them in a network. In the model we interpret porous soils as heterogeneous networks where pores are represented by nodes, characterized by their size and spatial location, and links represent flows between them. We then analyse the community structure of soil porous media represented as networks. For different real soil samples, modelled as heterogeneous complex networks, spatial communities of pores have been detected depending on the values of the parameters of the porous-soil model used. These models are known as Heterogeneous Preferential Attachment (HPA) models. Through an exhaustive analysis of the model, analytical solutions are obtained for the degree densities and the degree distribution of the pore networks generated in the thermodynamic limit, and the networks are shown to exhibit properties similar to those observed in other complex networks. To study the topological properties of these networks in more detail, the presence of soil pore community structure is examined. The detection of communities of pores, as groups densely connected internally with only sparser connections between groups, could contribute to understanding the mechanisms of diffusion phenomena in soils.
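A minimal sketch of the pipeline described above, assuming NetworkX: generate a spatially embedded pore network whose link probability grows with pore size and decays with distance (heterogeneous preferential attachment in spirit), then detect communities by greedy modularity maximization. All parameters are illustrative, not fitted to real soil samples.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)

# Toy pore network: each node carries a size and a 3-D position.
n = 200
sizes = rng.lognormal(mean=0.0, sigma=0.5, size=n)
coords = rng.uniform(0.0, 1.0, size=(n, 3))

G = nx.Graph()
for i in range(n):
    G.add_node(i, size=sizes[i], pos=coords[i])

# Link probability: larger pores attach preferentially, nearby pores connect
# more easily (capped at 1); the 0.02 prefactor is an arbitrary choice.
for i in range(n):
    for j in range(i + 1, n):
        d = np.linalg.norm(coords[i] - coords[j])
        p = min(1.0, 0.02 * sizes[i] * sizes[j] / (d + 1e-9) ** 2)
        if rng.random() < p:
            G.add_edge(i, j)

# Detect spatial communities of pores.
communities = greedy_modularity_communities(G)
print(len(communities), [len(c) for c in communities[:5]])
```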