17 results for Transaction level modeling

at Universidad Politécnica de Madrid


Relevância:

100.00%

Resumo:

This final degree project (PFC), "High-level modeling with SystemC" ("Modelado de alto nivel con SystemC"), has as its main objective the modeling of several modules of an MPEG-2 video encoder using the SystemC digital-system description language at the TLM (Transaction Level Modeling) abstraction level. SystemC is a digital-system description language based on C++: a set of routines and libraries that implement data types, structures and processes specific to the modeling of digital systems. A complete description of the language can be found in [GLMS02]. The TLM abstraction level is characterized by separating the communication between modules from their functionality: it places more emphasis on the function of the communication (where data come from and where they go) than on its exact implementation. TLM and an example implementation are described in [RSPF] and [HG].

The architecture of the model is based on the MVIP-2 encoder described in [Gar04]. The implemented modules are:
· IVIDEOH: filters the input video in the horizontal dimension and stores the filtered video in memory.
· IVIDEOV: reads the video filtered by IVIDEOH, filters it in the vertical dimension and writes the result to memory.
· DCT: reads the video filtered by IVIDEOV, computes the discrete cosine transform and stores the transformed video in memory.
· QUANT: reads the video transformed by DCT, quantizes it and stores the result in memory.
· IQUANT: reads the video quantized by QUANT, performs the inverse quantization and stores the result in memory.
· IDCT: reads the video processed by IQUANT, computes the inverse cosine transform and stores the result in memory.
· IMEM: interface between the modules above and the memory; it manages simultaneous memory requests and guarantees exclusive access to the memory at each instant.

All these modules appear in grey in the figure showing the architecture of the model: Figure 1. Architecture of the model (see the PDF of the PFC). The figure also shows modules in white; these are test modules added to run simulations and exercise the model:
· CAMARA: simulates a black-and-white camera; it reads the luminance from a video file and sends it to the model through a FIFO.
· FIFO: interface between the camera and the model; it buffers the data sent by the camera until IVIDEOH reads them.
· CONTROL: controls the video-processing modules; they notify it when they finish processing a video frame, and it starts whichever modules are needed to continue the encoding, guaranteeing the correct sequencing of the video processors.
· RAM: simulates a RAM memory, including a programmable access delay.

For the tests, video files with the output of each processing module, message files and a trace file showing the sequencing of the processors were also generated.

As a result of the work in this PFC it can be concluded that SystemC makes the modeling of digital systems fairly straightforward (prior knowledge of C++ and object-oriented programming is required) and enables models at a higher abstraction level than the RTL usual in Verilog and VHDL; in this PFC, the TLM level.
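As a rough illustration of this modeling style, the following minimal SystemC sketch connects a producer and a consumer through an sc_fifo channel, mirroring the CAMARA -> FIFO -> IVIDEOH path; the module names, data type and FIFO depth are illustrative assumptions, not the PFC's actual code.

```cpp
// Minimal sketch (assumes SystemC 2.3+). Module names mirror the PFC's
// CAMARA -> FIFO -> IVIDEOH path but are illustrative, not the real code.
#include <systemc.h>

SC_MODULE(Camara) {                    // stand-in for the camera test module
    sc_fifo_out<int> out;              // luminance samples leave through a FIFO
    void run() {
        for (int px = 0; px < 8; ++px)
            out.write(px);             // blocking write of one sample
    }
    SC_CTOR(Camara) { SC_THREAD(run); }
};

SC_MODULE(IVideoH) {                   // stand-in for the horizontal filter
    sc_fifo_in<int> in;
    void run() {
        while (true) {
            int px = in.read();        // blocking read: communication is kept
            (void)px;                  // separate from the (omitted) filtering
        }
    }
    SC_CTOR(IVideoH) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    Camara cam("cam");
    IVideoH filt("filt");
    sc_fifo<int> channel(4);           // 4-slot channel between the modules
    cam.out(channel);
    filt.in(channel);
    sc_start(1, SC_MS);                // run the simulation
    return 0;
}
```

The point of the sketch is the TLM separation: the modules exchange data only through the channel's read/write interface, so the communication mechanism can later be refined without touching the processing functionality.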

Relevância:

30.00%

Resumo:

The purpose of this work is to propose a structure for simulating power systems using behavioral models of nonlinear DC to DC converters implemented through a look-up table of gains. The structure is specially designed for converters whose output impedance depends on the load current level, e.g. quasi-resonant converters. The proposed model is generic: its parameters can be obtained by directly measuring the transient response at different operating points. It also includes optional functionality for modeling converters with current limitation and current sharing when paralleled. The proposed structure also allows including additional characteristics of the DC to DC converter, such as efficiency as a function of the input voltage and output current, or overvoltage and undervoltage protections. In addition, the proposed model is valid for both overdamped and underdamped situations.
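As a sketch of how a gain look-up table of this kind might be evaluated, the snippet below linearly interpolates a gain between operating points measured at different load currents; the breakpoints, the linear interpolation and the clamping at the table edges are assumptions for illustration, not details taken from the paper.

```cpp
// Hedged sketch: gain LUT indexed by load current, linearly interpolated.
// Requires the table to be non-empty and sorted by ascending i_load.
#include <algorithm>
#include <vector>

struct GainPoint { double i_load; double gain; };  // one measured operating point

double lut_gain(const std::vector<GainPoint>& lut, double i_load) {
    if (i_load <= lut.front().i_load) return lut.front().gain;  // clamp low
    if (i_load >= lut.back().i_load)  return lut.back().gain;   // clamp high
    // find the first breakpoint at or above i_load, then interpolate
    auto hi = std::lower_bound(lut.begin(), lut.end(), i_load,
        [](const GainPoint& p, double i) { return p.i_load < i; });
    auto lo = hi - 1;
    double t = (i_load - lo->i_load) / (hi->i_load - lo->i_load);
    return lo->gain + t * (hi->gain - lo->gain);
}

int main() {
    std::vector<GainPoint> lut = {{0.5, 0.98}, {1.0, 0.95}, {2.0, 0.88}};
    double g = lut_gain(lut, 1.5);   // -> 0.915, halfway between the last two
    return g > 0.0 ? 0 : 1;
}
```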

Relevância:

30.00%

Resumo:

In the field of detection and monitoring of dynamic objects in quasi-static scenes, background subtraction techniques in which the background is modeled at the pixel level are extensively used, despite showing very significant limitations. In this work we propose a novel approach to background modeling that operates at the region level in a wavelet-based multi-resolution framework. Based on a segmentation of the background, each region is characterized independently as a mixture of K Gaussian modes, considering the model of the approximation and detail coefficients at the different wavelet decomposition levels. The background region characterization is updated over time, and elements of interest are detected by computing the distance between the background region models and those of each incoming image in the sequence. The inclusion of context in the modeling scheme through each region's characterization makes the model robust: it copes not only with gradual illumination and long-term changes, but also with sudden illumination changes and the presence of strong shadows in the scene.
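One update of such a per-region mixture could look roughly like the sketch below, which follows the conventional K-Gaussian background update (match within 2.5 standard deviations, exponential updating with a learning rate); K, the learning rate, the initial variance and the scalar feature are assumptions, not values from the paper.

```cpp
// Hedged sketch: one update of a region's K-Gaussian background model for a
// scalar feature x (e.g., a wavelet-coefficient statistic for the region).
#include <algorithm>
#include <array>
#include <cmath>

struct Mode { double mean, var, weight; };

constexpr int K = 3;                 // number of modes (assumed)
constexpr double ALPHA = 0.01;       // learning rate (assumed)

// Returns true if x matched a background mode; updates the model in place.
bool update_region(std::array<Mode, K>& modes, double x) {
    int match = -1;
    for (int k = 0; k < K && match < 0; ++k)
        if (std::abs(x - modes[k].mean) < 2.5 * std::sqrt(modes[k].var))
            match = k;
    if (match < 0) {
        // no mode matched: replace the weakest mode with one centred on x
        auto weakest = std::min_element(modes.begin(), modes.end(),
            [](const Mode& a, const Mode& b) { return a.weight < b.weight; });
        *weakest = {x, 30.0, ALPHA};        // initial variance is an assumption
        return false;                       // treat as foreground
    }
    for (int k = 0; k < K; ++k) {
        if (k == match) {
            double d = x - modes[k].mean;
            modes[k].weight += ALPHA * (1.0 - modes[k].weight);
            modes[k].mean   += ALPHA * d;
            modes[k].var    += ALPHA * (d * d - modes[k].var);
        } else {
            modes[k].weight *= (1.0 - ALPHA);  // decay unmatched modes
        }
    }
    return true;                             // matched: background
}
```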

Relevância:

30.00%

Resumo:

In arid countries worldwide, social conflicts between irrigation-based human development and the conservation of aquatic ecosystems are widespread and attract many public debates. This research focuses on the analysis of water and agricultural policies aimed at conserving groundwater resources and maintaining rural livelihoods in a basin in Spain's central arid region. Intensive groundwater mining for irrigation has caused overexploitation of the basin's large aquifer, degraded reputed wetlands and given rise to notable social conflicts over the years. With the aim of tackling the multifaceted socio-ecological interactions of complex water systems, the methodology used in this study consists of a novel integration, into a common platform, of an economic optimization model and the hydrology model WEAP (Water Evaluation And Planning system). This tool is used to analyze the spatial and temporal effects of different water and agricultural policies under different climate scenarios. It permits the prediction of climate and policy outcomes across farm types (water stress impacts and adaptation), at the basin level (aquifer recovery), and along the policies' implementation horizon (short and long run). Results show that the region's current quota-based water policies may contribute to reducing water consumption on the farms but will not recover the aquifer and will inflict income losses on rural communities. This situation would worsen in case of drought. Economies of scale and technology matter: larger farms with crop diversification, and those equipped with modern irrigation, will adapt better to water stress conditions. However, the long-term sustainability of the aquifer and the maintenance of rural livelihoods will be attained only if additional policy measures are put in place, such as the control of illegal abstractions and the establishment of a water bank. Within the policy domain, the research contributes to the new sustainable development strategy of the EU by concluding that, in water-scarce regions, effective integration of water and agricultural policies is essential for achieving the water protection objectives of EU policies. Therefore, the design and enforcement of well-balanced, region-specific policies is a major task faced by policy makers seeking successful water management that ensures nature protection and human development at tolerable social costs. From a methodological perspective, this research contributes to better addressing hydrological questions as well as economic and social issues in complex water and human systems. Its integrated vision provides a valuable illustration to inform water policy and management decisions in contexts of water-related conflicts worldwide.

Relevância:

30.00%

Resumo:

Energy management has always been recognized as a challenge in mobile systems, especially in modern OS-based mobile systems that support multiple functions. Nowadays it is common for a mobile system user to run multiple applications simultaneously while having a target battery lifetime in mind for a specific application. Traditional OS-level power management (PM) policies make their best effort to save energy under performance constraints, but fail to guarantee a target lifetime, leaving the painful trade-off between the total performance of the applications and the target lifetime to the user. This thesis provides a new way to deal with the problem. It advocates that a strong energy-aware PM scheme should first guarantee a user-specified battery lifetime to a target application by restricting the average power of the less important applications and, beyond that, maximize the total performance of the applications without harming the lifetime guarantee. To support this, energy, instead of CPU time or transmission bandwidth, should be globally managed by the OS as the first-class resource. As the first stage of a complete PM scheme, this thesis presents energy-based fair queuing scheduling, a novel class of energy-aware scheduling algorithms which, combined with a mechanism for restricting the battery discharge rate, systematically manage energy as the first-class resource with the objective of guaranteeing a user-specified battery lifetime for a target application in OS-based mobile systems. Energy-based fair queuing carries traditional fair queuing over to the energy management domain. It assigns a power share to each task and manages energy by serving energy to tasks in proportion to their assigned shares. The proportional energy use establishes a proportional share of the system power among tasks, which guarantees a minimum power for each task and thus avoids energy starvation of any task. Energy-based fair queuing treats all tasks equally and supports periodic time-sensitive tasks by allocating each of them a share of system power adequate to meet its highest energy demand across all periods. However, an overly conservative power share is usually required to guarantee that all time constraints are met. To support various types of time-sensitive tasks in general-purpose operating systems more effectively and flexibly, an extra real-time-friendly mechanism is introduced that combines priority-based scheduling with energy-based fair queuing. Since this mechanism bounds the maximum time a time-sensitive task can run with priority, power control and the meeting of time constraints can be traded off flexibly. A SystemC-based test bench was designed to assess the algorithms. Simulation results show the success of energy-based fair queuing in achieving proportional energy use, meeting time constraints, and trading off properly between the two.
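The scheduling core of the idea can be sketched in a few lines: each task carries a power share, and the scheduler always serves the task with the smallest share-normalized energy consumption, the energy analogue of virtual time in fair queuing. The task names, the 1 J quantum and first-wins tie-breaking below are illustrative assumptions, not the thesis' implementation.

```cpp
// Hedged sketch: energy-based fair queuing. Each task has a power share; the
// scheduler serves the task with the lowest share-normalised energy use.
#include <algorithm>
#include <vector>

struct Task {
    const char* name;
    double share;        // assigned power share (shares sum to 1)
    double energy_used;  // joules charged to the task so far
};

Task* pick_next(std::vector<Task>& tasks) {
    return &*std::min_element(tasks.begin(), tasks.end(),
        [](const Task& a, const Task& b) {
            return a.energy_used / a.share < b.energy_used / b.share;
        });
}

int main() {
    std::vector<Task> tasks = {{"video", 0.6, 0.0}, {"sync", 0.4, 0.0}};
    for (int slot = 0; slot < 10; ++slot)
        pick_next(tasks)->energy_used += 1.0;   // charge one 1 J quantum
    // after 10 quanta: video ~6 J, sync ~4 J, matching the 0.6/0.4 shares,
    // so neither task is energy-starved
    return 0;
}
```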

Relevância:

30.00%

Resumo:

This article presents the design, kinematic model and communication architecture of the multi-agent robotic system SMART. The philosophy behind this kind of system requires the communication architecture to account for the concurrency of the whole system. The proposed architecture combines different communication technologies (TCP/IP and Bluetooth) under one protocol designed for cooperation among agents and other elements of the system, such as IP cameras, the image processing library, the path planner, the user interface, the control block and the data block. The high-level control is modeled by Work-Flow Petri nets and implemented in C++ and C#. Experimental results show the performance of the designed architecture.
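A workflow Petri-net executor of the kind that could sit behind such high-level control reduces to markings, transitions and a firing rule, as the sketch below shows; the three-place net is invented for illustration and is not SMART's actual control net.

```cpp
// Hedged sketch: a minimal Petri-net executor (places hold tokens; a
// transition fires when every input place has a token).
#include <cstdio>
#include <vector>

struct Transition { std::vector<int> in, out; };   // indices into the marking

bool enabled(const std::vector<int>& marking, const Transition& t) {
    for (int p : t.in)
        if (marking[p] == 0) return false;
    return true;
}

void fire(std::vector<int>& marking, const Transition& t) {
    for (int p : t.in)  --marking[p];   // consume one token per input place
    for (int p : t.out) ++marking[p];   // produce one token per output place
}

int main() {
    std::vector<int> marking = {1, 0, 0};   // start: one token in place 0
    Transition plan = {{0}, {1}};           // e.g., plan a path: place 0 -> 1
    Transition move = {{1}, {2}};           // e.g., execute motion: 1 -> 2
    if (enabled(marking, plan)) fire(marking, plan);
    if (enabled(marking, move)) fire(marking, move);
    std::printf("marking: %d %d %d\n", marking[0], marking[1], marking[2]);
    return 0;                               // prints "marking: 0 0 1"
}
```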

Relevância:

30.00%

Resumo:

Modeling is an essential tool for the development of atmospheric emission abatement measures and air quality plans. Most often these plans concern urban environments with high emission density and population exposure. However, air quality modeling in urban areas is a rather challenging task. As environmental standards become more stringent (e.g. European Directive 2008/50/EC), more reliable and sophisticated modeling tools are needed to simulate measures and plans that may effectively tackle air quality exceedances, common in large urban areas across Europe, particularly for NO2. This also implies that emission inventories must satisfy a number of conditions, such as consistency across the spatial scales involved in the analysis, consistency with the emission inventories used for regulatory purposes, and versatility to match the requirements of different air quality and emission projection models. This study reports the modeling activities carried out in Madrid (Spain), highlighting the development and preparation of the atmospheric emission inventory as an illustrative example of the combination of models and data needed to develop a consistent air quality plan at the urban level. These activities included a series of source apportionment studies to quantify the contributions of international, national, regional and local sources, in order to understand to what extent local authorities can enforce meaningful abatement measures. Source apportionment studies were also conducted to quantify the contributions of individual sectors and to understand the maximum feasible air quality improvement achievable by reducing emissions from those sectors, thus targeting emission reduction policies at the most relevant activities. Finally, an emission scenario reflecting the effect of such policies was developed and the associated air quality was modeled.

Relevância:

30.00%

Resumo:

Nowadays, Software Product Line (SPL) engineering [1] has been widely adopted in software development due to the significant improvements it provides, such as reduced cost and time-to-market and the flexibility to respond to planned changes [2]. SPL takes advantage of the features common to the products of a family through the systematic reuse of core assets and the effective management of variability across the products. SPL features are realized at the architectural level in product-line architecture (PLA) models; therefore, suitable modeling and specification techniques are required to model variability. In fact, architectural variability modeling has become a challenge for SPL engineering because PLA modeling requires modeling variability not only at the level of the external architecture configuration (see the literature reviews [3,4]), but also at the level of the internal specification of components [5]. In addition, PLA modeling requires preserving the traceability between features and PLAs. Finally, it is important to take into account that PLA modeling should guide architects in modeling the PLA core assets and variability, and in deriving the customized products. To address these needs, this demonstration presents the FPLA Modeling Framework.
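As a tiny illustration of the kind of variability rule such a framework must enforce when deriving products, the sketch below validates a product configuration against requires/excludes constraints between features; the feature names and constraint set are invented and unrelated to FPLA's actual notation.

```cpp
// Hedged sketch: checking a product configuration against requires/excludes
// constraints between features.
#include <set>
#include <string>
#include <utility>
#include <vector>

using Config = std::set<std::string>;
using Rule   = std::pair<std::string, std::string>;

bool valid(const Config& cfg,
           const std::vector<Rule>& requires_rules,
           const std::vector<Rule>& excludes_rules) {
    for (const Rule& r : requires_rules)              // r.first needs r.second
        if (cfg.count(r.first) && !cfg.count(r.second)) return false;
    for (const Rule& r : excludes_rules)              // r.first forbids r.second
        if (cfg.count(r.first) && cfg.count(r.second)) return false;
    return true;
}

int main() {
    Config product = {"video", "encryption"};
    bool ok = valid(product,
                    {{"encryption", "keystore"}},     // encryption requires keystore
                    {{"video", "lowmem"}});           // video excludes lowmem
    return ok ? 0 : 1;                                // invalid here: no keystore
}
```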

Relevância:

30.00%

Resumo:

The SESAR (Single European Sky ATM Research) program is an ambitious research and development initiative to design the future European air traffic management (ATM) system. The study of the behavior of ATM systems using agent-based modeling and simulation tools can help the development of new methods to improve their performance. This paper presents an overview of existing agent-based approaches in air transportation (paying special attention to the challenges in the design of future ATM systems) and, subsequently, describes a new agent-based approach that we proposed in the CASSIOPEIA project, developed according to the goals of the SESAR program. In our approach, we use agent models for the different ATM stakeholders and, in contrast to previous work, our solution models new collaborative decision processes for traffic flow management, uses an intermediate level of abstraction (useful for simulations at larger scales), and was designed to be a practical, open and reusable tool for the development of different ATM studies. It was successfully applied in three studies related to the design of future ATM systems in Europe.
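The simulation skeleton underlying studies of this kind can be sketched as a set of stakeholder agents stepped once per decision round; the two agent types and the empty step bodies below are placeholders, not CASSIOPEIA's actual stakeholder models.

```cpp
// Hedged sketch: a generic agent-based simulation loop.
#include <memory>
#include <vector>

struct Agent {
    virtual void step(int tick) = 0;   // one decision round per tick
    virtual ~Agent() = default;
};

struct Airline : Agent {
    void step(int /*tick*/) override { /* e.g., reschedule delayed flights */ }
};

struct NetworkManager : Agent {
    void step(int /*tick*/) override { /* e.g., rebalance sector traffic */ }
};

int main() {
    std::vector<std::unique_ptr<Agent>> agents;
    agents.push_back(std::make_unique<Airline>());
    agents.push_back(std::make_unique<NetworkManager>());

    for (int tick = 0; tick < 100; ++tick)   // run 100 decision rounds
        for (auto& a : agents)
            a->step(tick);
    return 0;
}
```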

Relevância:

30.00%

Resumo:

Shopping agents are web-based applications that help consumers find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, to increase the level of support provided by this type of application. We illustrate the approach with a virtual sales assistant that dynamically configures a product according to the needs and preferences of customers.

Relevância:

30.00%

Resumo:

Sitting at the interface between Engineering, Computer Science and Biology, computational neuron mechanics appears as a new interdisciplinary field potentially able to tackle clinical problems from a new perspective. The field is multiscale by nature, ranging from the nanoscale (e.g., tubulin dimers) to the macroscale (e.g., brain tissue), and aims at tackling problems that are complex, and sometimes impossible, to study through experimental means. Computational modeling has been widely used in Neuroscience applications as diverse as neuronal growth and compound action potential propagation. However, in most of the modeling approaches in this field to date, the interactions between the cell and its surrounding media/stimulus have rarely been explored. Despite the tremendous importance of that relationship in several medical challenges, e.g., traumatic brain injury (TBI), cancer and Alzheimer's disease (AD), a bridge between the electrophysiological-chemical and mechanical properties of neurons from the molecular scale to the cell level is still lacking. To this end, this research proposes a multiscale computational framework particularized for two representative scenarios: axon growth and the electrophysiological-mechanical coupling of neurites. In the former, the relation between the molecular constituents of the axon during its growth and the resulting mechanical properties is explored; in the latter, a mechanical stimulus provokes functional deficits at the cell level as a consequence of electrophysiological-chemical alterations. The computational approach chosen in this work is the finite difference method (FDM), implemented in a new program called Neurite. Although the finite element method (FEM) is also explored as part of this research, the FDM provides the flexibility and versatility needed to implement biological models, as well as the mathematical simplicity to extend them to large-scale simulations at a low computational cost.

Focusing first on the effect of electrophysiological-chemical properties on mechanical properties, an adaptation of Neurite was developed to simulate microtubule polymerization during axonal growth and provide the axon's mechanical properties as a function of microtubule occupancy. After calibrating the axon growth model against experimental results available in the literature, the mechanical characteristics can be tracked during the simulation. The axon's mechanical properties show dramatic variations at the tip of the axon, where the growth cone supports the chemical and mechanical signaling. Based on the knowledge gained from the FDM scheme, and in order to go from 1D to 3D, this preliminary yet novel scheme paves the road for future studies with the FEM. Focusing then on the effect of mechanical properties on electrophysiological-chemical properties, Neurite was used to relate macroscopic mechanical loading to microscopic strains and strain rates, and to simulate electrical signal propagation along neurites under mechanical loading. The simulations were calibrated against experimental results published in the literature, providing a model able to predict the alteration of neuronal electrophysiological function under external damaging loads and linking mechanical injuries to the subsequent acute functional deficits. To undertake large-scale simulations, the explicit and implicit solvers were implemented for central processing units (CPUs) and graphics processing units (GPUs); other state-of-the-art architectures based on many integrated cores (MICs) were also considered. Scalability studies were carried out for both implementations, showing promising results for extremely large simulations on GPUs. This thesis opens the avenue for future mechanical modeling approaches aimed at linking electrophysiological-chemical properties to mechanical properties. Its overarching goal is to enhance the bioengineering and medical communities' knowledge of neuronal mechanics and of the functional deficits arising from damage produced by direct mechanical insults, such as TBI, or by evolving neurodegenerative illnesses, such as AD.
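As a hint of what the FDM core of such a simulator iterates, the sketch below performs one explicit finite-difference step of a 1D cable-like equation dV/dt = D d²V/dx² − V/τ; the governing equation, coefficients and zero-flux boundaries are illustrative assumptions, not Neurite's actual model.

```cpp
// Hedged sketch: one explicit FDM step of dV/dt = D * d2V/dx2 - V/tau on a
// 1D grid. For the explicit Euler update to be stable, dt must satisfy
// roughly dt <= dx*dx / (2*D).
#include <cstddef>
#include <vector>

void fdm_step(std::vector<double>& v, double D, double tau,
              double dt, double dx) {
    std::vector<double> next(v.size());
    for (std::size_t i = 1; i + 1 < v.size(); ++i) {
        // central second difference approximating d2V/dx2 at node i
        double lap = (v[i - 1] - 2.0 * v[i] + v[i + 1]) / (dx * dx);
        next[i] = v[i] + dt * (D * lap - v[i] / tau);  // explicit Euler update
    }
    next.front() = next[1];                 // zero-flux ends (assumed)
    next.back()  = next[next.size() - 2];
    v.swap(next);
}
```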

Relevância:

30.00%

Resumo:

Background: In recent years, Spain has implemented a number of air quality control measures that are expected to lead to a future reduction in fine particle concentrations and an ensuing positive impact on public health. Objectives: We aimed to assess the impact on mortality attributable to a reduction in fine particle levels in Spain in 2014 relative to the estimated level for 2007. Methods: To estimate exposure, we constructed fine particle distribution models for Spain for 2007 (reference scenario) and 2014 (projected scenario) with a spatial resolution of 16x16 km². In a second step, we used the concentration-response functions proposed by cohort studies carried out in Europe (European Study of Cohorts for Air Pollution Effects and the Rome longitudinal cohort) and North America (American Cancer Society cohort, Harvard Six Cities study and Canadian national cohort) to calculate the number of annual attributable deaths corresponding to all causes, all non-accidental causes, ischemic heart disease and lung cancer among persons aged over 25 years (2005-2007 mortality rate data). We examined the effect of the Spanish demographic shift in our analysis using 2007 and 2012 population figures. Results: Our model suggested a mean overall reduction in fine particle levels of 1 µg/m³ by 2014. Taking into account 2007 population data, between 8 and 15 all-cause deaths per 100,000 population could be postponed annually by the expected reduction in fine particle levels. For specific subgroups, estimates varied from 10 to 30 deaths for all non-accidental causes, from 1 to 5 for lung cancer, and from 2 to 6 for ischemic heart disease. The expected burden of preventable mortality would be even higher in the future due to Spanish population growth: taking into account the population older than 30 years in 2012, the absolute mortality impact estimate would increase by approximately 18%. Conclusions: Effective implementation of air quality measures in Spain, in a scenario with a short-term projection, would produce an appreciable decline in fine particle concentrations, and this, in turn, would lead to notable health-related benefits. Recent European cohort studies strengthen the evidence of an association between long-term exposure to fine particles and health effects, and could enhance health impact quantification in Europe. Air quality models can contribute to improved assessment of air pollution health impact estimates, particularly in study areas without air pollution monitoring data.
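The attributable-mortality arithmetic behind estimates of this kind typically combines a log-linear concentration-response function with a baseline mortality rate, roughly as sketched below; the relative risk, baseline rate and population are invented round numbers, not the study's inputs.

```cpp
// Hedged sketch: attributable deaths from a log-linear concentration-response
// function. All numbers are illustrative placeholders.
#include <cmath>
#include <cstdio>

int main() {
    const double rr_per_10 = 1.06;   // assumed relative risk per 10 ug/m3
    const double delta_c   = 1.0;    // modeled PM2.5 reduction, ug/m3
    const double base_rate = 0.009;  // assumed baseline deaths per person-year
    const double pop       = 100000; // population at risk

    double beta = std::log(rr_per_10) / 10.0;       // risk slope per ug/m3
    double af   = 1.0 - std::exp(-beta * delta_c);  // attributable fraction
    double postponed = base_rate * pop * af;        // deaths per year

    std::printf("deaths postponed per year: %.1f\n", postponed);  // ~5.2
    return 0;
}
```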

Relevância:

30.00%

Resumo:

A frame-level distortion model based on perceptual features of the human visual system is proposed to improve the performance of unequal error protection strategies and provide better quality of experience to users in Side-by-Side 3D video delivery systems.

Relevância:

30.00%

Resumo:

Tungsten and tungsten alloys are being considered as leading candidates for structural and functional materials in future fusion energy devices. The most attractive properties of tungsten for the design of magnetic and inertial fusion energy reactors are its high melting point, high thermal conductivity, low sputtering yield and low long-term radioactive disposal footprint. However, tungsten also presents a very low fracture toughness, mostly associated with inter-granular failure and bulk plasticity, which limits its applications. As a result of these varied and complex working conditions, the study, development and design of these materials is one of the most important challenges to emerge in recent years for the scientific community in the field of materials for energy applications. The plastic behavior of body-centered cubic (bcc) refractory metals like tungsten is governed by the kink-pair mediated, thermally activated motion of ½⟨111⟩ screw dislocations on the atomistic scale, and by ensembles and interactions of dislocations at larger scales. Modeling this complex behavior requires methods capable of rigorously resolving each relevant scale. The work presented in this thesis proposes a multiscale model that gives engineering-level answers to the technical specifications required for the use of tungsten in fusion energy reactors, supported by the rigorous physics underlying extensive atomistic simulations. First, the static and dynamic properties of screw dislocations in five interatomic potentials for tungsten are compared, determining which of them ensure the greatest physical fidelity and computational efficiency. The large strain rates associated with molecular dynamics techniques make the dislocation mobility functions obtained unsuitable for the next steps of the multiscale model, so mobility laws obtained by a different method must be employed. In this work, we suggest two alternative methods to obtain the dislocation mobility functions: a kinetic Monte Carlo model and analytical expressions. The set of parameters needed to formulate the kinetic Monte Carlo model and the analytical mobility law is calculated atomistically. These parameters include, but are not limited to, the enthalpy and energy barriers of kink pairs as a function of stress, the width of the kink pairs, and non-Schmid effects (both the twinning-antitwinning asymmetry and non-glide stresses). The function relating dislocation velocity to applied stress and temperature is used as the main source of constitutive information in a dislocation-based crystal plasticity framework. We validate the temperature dependence of the yield strength predicted by the model against existing experimental data from tensile tests on single-crystal tungsten, with excellent agreement between the simulations and the measured data. We then extend the model to a number of crystallographic orientations uniformly distributed in the standard triangle and study the effects of temperature and strain rate. Finally, we perform biaxial tensile tests and provide the yield surface as a function of temperature for some of the crystallographic orientations explored in the uniaxial tensile tests.
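The kinetic Monte Carlo core used for dislocation mobility reduces to two draws per step: select an event with probability proportional to its rate, then advance the clock by an exponentially distributed residence time. The sketch below shows that loop with invented constant rates standing in for the thesis' stress- and temperature-dependent kink-pair nucleation and propagation rates.

```cpp
// Hedged sketch of one kinetic Monte Carlo loop; rates are placeholders.
#include <cmath>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<double> rates = {3.0, 1.0, 0.5};   // event rates in 1/s (invented)
    double t = 0.0;

    for (int step = 0; step < 1000; ++step) {
        double total = 0.0;
        for (double r : rates) total += r;

        // 1) pick an event with probability rate / total
        double u = unif(rng) * total;
        int event = 0;
        double acc = rates[0];
        while (acc < u) acc += rates[++event];

        // 2) advance time by an exponential residence time with mean 1/total
        t += -std::log(1.0 - unif(rng)) / total;

        // ... apply 'event' here (e.g., nucleate or migrate a kink pair) ...
        (void)event;
    }
    return 0;
}
```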

Relevância:

30.00%

Resumo:

To date, only a few initiatives have been carried out in Spain to use mathematical models (e.g. DNDC, DayCent, FASSET and SIMSNIC) to estimate nitrogen (N) and carbon (C) dynamics as well as greenhouse gas (GHG) emissions in Spanish agrosystems. Modeling at this level may provide insight into both the complex relationships between the biological and physicochemical processes controlling GHG production and consumption in soils (e.g. nitrification, denitrification, decomposition) and the interactions between the C and N cycles within the different components of the plant-soil-environment continuum. Additionally, these models can simulate the processes behind the production, consumption and transport of GHGs (e.g. nitrous oxide, N2O, and carbon dioxide, CO2) in the short and medium term and at different scales. Other sources of potential pollution from soils can also be identified and quantified using these process-based models (e.g. NO3 and NH3).