952 results for subgrid-scale models


Relevance: 30.00%

Abstract:

This paper explores how simulation results change with the choice of trade specification and with the strength of economic agents' preference for traded variety, using two types of three-region, three-sector AGE model that include the Armington-Krugman-Melitz Encompassing module based on Dixon and Rimmer (2012). Simulation experiments reveal that: (1) the Melitz-type specification does not always enhance the effectiveness of a given policy change relative to the Krugman-type, especially when economic agents' preference for traded variety is weak; (2) there are likely to be points at which the effects obtained with the Melitz-type specification exceed those obtained with the Krugman-type; and (3) the preference for traded variety of producers in sectors exhibiting increasing returns to scale may be the engine of the explosive effects suggested by Fujita, et al. (2000).
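
The mechanism behind result (3) is the standard love-of-variety effect in CES (Dixit-Stiglitz) preferences: the lower the elasticity of substitution between varieties, the more the price index falls as the number of traded varieties grows. The sketch below is a minimal illustration of that effect only; the symmetric-variety price index and the parameter values are textbook assumptions, not taken from the paper's AGE model.

```python
import numpy as np

def ces_price_index(n, p, sigma):
    """Price index over n symmetric varieties priced at p,
    with elasticity of substitution sigma > 1:
    P = n**(1/(1-sigma)) * p  (falls as n grows)."""
    return n ** (1.0 / (1.0 - sigma)) * p

p = 1.0                      # common variety price
for sigma in (3.0, 6.0):     # low sigma = strong preference for variety
    gain = ces_price_index(10, p, sigma) / ces_price_index(20, p, sigma)
    print(f"sigma={sigma}: doubling varieties cuts the price index "
          f"by a factor of {gain:.3f}")
```

With sigma = 3 the price index falls by a factor of about 1.41 when varieties double, versus about 1.15 with sigma = 6, which is the sense in which a stronger preference for variety amplifies the gains from trade specifications that add varieties.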

Relevance: 30.00%

Abstract:

An important step in assessing water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by the regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961–1990). The methodology compares different interpolation methods and alternatives to generate annual time series that minimise the bias with respect to observed values. The objective is to identify the best alternative for obtaining bias-corrected, monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of natural runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared, with emphasis on the ability of each method to reproduce the observed behaviour of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evapotranspiration and precipitation. In addition, the comparison with respect to the global runoff reference of the UNH/GRDC dataset is evaluated as a contrast of the “best estimator” of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method, and that the best alternative for bias correction of the monthly direct runoff time series of RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber (1904) also gives good results.
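
As a rough illustration of the aridity-index approach, Schreiber's (1904) formula estimates mean annual runoff as Q = P·exp(−PET/P), where PET/P is the aridity index. The sketch below applies it together with a simple multiplicative bias correction; the sample values and the correction scheme are illustrative assumptions, not data or methods from the study.

```python
import numpy as np

def schreiber_runoff(precip, pet):
    """Mean annual runoff from Schreiber (1904): Q = P * exp(-PET/P),
    with the aridity index defined as PET/P."""
    return precip * np.exp(-pet / precip)

# Illustrative basin values (mm/year), not taken from the study.
precip = np.array([600.0, 900.0, 1400.0])
pet = np.array([900.0, 800.0, 700.0])
q_model = schreiber_runoff(precip, pet)

# Simple multiplicative bias correction against an observed reference,
# e.g. natural runoff estimated by a hydrological model such as SIMPA.
q_obs = np.array([110.0, 380.0, 900.0])
factor = q_obs.mean() / q_model.mean()
q_corrected = q_model * factor
print(q_model.round(1), q_corrected.round(1))
```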

Relevance: 30.00%

Abstract:

At present there is much literature on the advantages and disadvantages of different methods of statistical and dynamical downscaling of the climate variables projected by climate models. Less attention has been paid to indirect variables, such as runoff, which play a significant role in evaluating the impact of climate change on hydrological systems. Runoff presents a much greater bias in climate models than climate variables such as temperature or precipitation. It is therefore very important to identify the methods that minimise bias when downscaling runoff from the gridded results of climate models to the basin scale.

Relevance: 30.00%

Abstract:

Large-scale structure formation can be modeled as a nonlinear process that transfers energy from the largest scales to successively smaller scales until it is dissipated, in analogy with Kolmogorov’s cascade model of incompressible turbulence. However, cosmic turbulence is highly compressible, and vorticity plays a secondary role in it. The simplest model of cosmic turbulence is the adhesion model, which can be studied perturbatively or by adapting Kolmogorov’s non-perturbative approach to incompressible turbulence. This approach leads to observationally testable predictions, e.g., the power-law exponent of the matter density two-point correlation function.
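
The adhesion model describes the velocity field with Burgers' equation in the limit of small viscosity. The sketch below integrates the 1D viscous Burgers equation with a simple explicit finite-difference scheme to show how a smooth initial field steepens into a shock (where matter accumulates); the grid, initial condition and viscosity are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# 1D viscous Burgers equation u_t + u u_x = nu * u_xx (adhesion model in 1D),
# explicit finite differences on a periodic domain.
N, L, nu = 400, 2 * np.pi, 1e-2           # grid size, domain, small viscosity
dx = L / N
x = np.arange(N) * dx
u = np.sin(x)                             # smooth initial velocity field
dt = 0.2 * min(dx, dx * dx / (2 * nu))    # conservative stable step

t, t_end = 0.0, 1.5
while t < t_end:
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-u * ux + nu * uxx)
    t += dt

# The velocity gradient steepens into a shock around t ~ 1.
print("steepest gradient:", np.abs(np.gradient(u, dx)).max())
```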

Relevance: 30.00%

Abstract:

Analysis of river flow using hydraulic modelling, and its implications for derived environmental applications, is inextricably connected with the way in which the river boundary shape is represented. This relationship is scale-dependent on the modelling resolution, which in turn determines the importance of the subscale performance of the model and the way subscale (surface and flow) processes are parameterised. Commonly, the subscale behaviour of the model relies on a roughness parameterisation whose meaning depends on the dimensionality of the hydraulic model and the resolution of the topographic representation scale. The latter is, in turn, dependent on the resolution of the computational mesh as well as on the detail of the measured topographic data. Flow results are affected by these interactions between scale and subscale parameterisation according to the dimensionality of the approach. The aim of this dissertation is to evaluate the effect of these interactions on hydraulic modelling results. The current availability of high-resolution topographic sources motivates this research, which assesses the interactions using a roughness approach suited to each dimensionality. A 1D HEC-RAS model, a 2D raster-based diffusion-wave model with a scale-dependent distributed roughness parameterisation, and a 3D finite volume scheme with a porosity algorithm to incorporate complex topography have been used. Different topographic sources are assessed using the 1D scheme. LiDAR data are used to isolate the effects of mesh resolution from those of the topographic content of the DEM on 2D and 3D flow results. A distributed roughness parameterisation, using a roughness-height approach dependent on both mesh resolution and topographic content, is developed and evaluated for the 2D scheme. Grain-size data and fractal methods are used for the reconstruction of topography with microscale information, required for some applications but not easily available. The sensitivity of hydraulic parameters to this topographic parameterisation is evaluated in the 3D scheme at different mesh resolutions. Finally, the structural variability of the simulated flow is analysed and related to scale interactions. Model simulations demonstrate (i) the importance of the topographic source in 1D models; (ii) that mesh resolution is dominant in 2D and 3D simulations, whereas in a 1D model the topographic source and even the roughness parameterisation are more critical; (iii) an increased sensitivity to roughness parameterisation in 1D and 2D schemes with detailed topographic sources and finer mesh resolutions; and (iv) that topographic content and microtopography affect the vertical profile of computed 3D velocity in a depth-dependent way, whereas 2D results are not affected by variations in topographic content. Finally, the spatial analysis shows that mesh resolution controls high-resolution model results; that roughness parameterisation controls 2D simulation results for a constant mesh resolution; and that variations in topographic content and microtopography affect the organisation of flow results depth-dependently in the 3D scheme.

Topography plays a fundamental role in the distribution of water and energy in natural landscapes (Beven and Kirkby 1979; Wood et al. 1997). Hydraulic simulation combined with remote-sensing methods of terrain measurement constitutes a powerful research tool for understanding the behaviour of water flows as driven by the variability of the surface over which they flow. The representation of topography and its incorporation into the hydraulic scheme are of crucial importance to the results and determine the development of its applications in the environmental field. Any simulation is a simplification of a real-world process, and the degree of simplification therefore determines the meaning of the simulated results. This reasoning is particularly difficult to carry over to hydraulic simulation, where aspects of scale as different as the scale of the flow processes and that of the boundary representation are considered jointly, even at parameterisation stages (e.g. roughness parameterisation). On the one hand, this is because scale decisions condition one another (e.g. the dimensionality of the model conditions the scale of the boundary representation) and therefore interact closely in their results; on the other hand, it is due to the high numerical and computational requirements of an explicit high-resolution representation of the flow processes and of the mesh discretisation. Moreover, prior to hydraulic modelling, the terrain surface over which the water flows must itself be modelled and therefore has its own representation scale, which in turn depends on the scale of the measured topographic data from which the model is built. Ultimately, it is this topography that determines the spatial behaviour of the flow. The scale of the topography at its measurement and modelling stages (data resolution and topographic representation), prior to its incorporation into the hydraulic model, therefore produces an impact that adds to the overall impact resulting from the computational scale of the hydraulic model and its dimensionality. Understanding the interactions between complex boundary geometries and the structure of the flow by means of hydraulic modelling depends on the scales considered in the simplification of the hydraulic and terrain processes (model dimensionality, computational scale size and scale of the topographic data). The nature of the application of the hydraulic model (e.g. physical habitat, flood risk analysis, sediment transport) determines first the scale of the study and hence the detail of the processes to be simulated in the model (i.e. the dimensionality) and, consequently, the computational scale at which the calculations are carried out (i.e. the computational resolution). The latter in turn determines the geographic detail with which the boundary must be represented, in accordance with the resolution of the computational mesh. Parameterisation seeks to incorporate into the hydraulic model the quantification of the processes and physical conditions of the natural system, and must therefore include not only those processes that occur at the modelling scale but also those that occur at a subscale level, which must be defined through scaling relationships with the explicitly modelled variables.

In practice, this parameterisation is implemented by supplying data to the model, so the scale of the geographic data used to parameterise the model not only influences the results but also determines the importance of the subscale behaviour of the model and the way these processes must be parameterised (e.g. the natural variability of the terrain within the discretisation cell, or flow in the lateral and vertical directions in a one-dimensional model). In this thesis, the one-dimensional model HEC-RAS (HEC 1998b), a two-dimensional raster-based diffusion-wave model (Yu 2005) and a three-dimensional finite volume scheme with a porosity algorithm to incorporate topography (Lane et al. 2004; Hardy et al. 2005) have been used. The boundary geometry is defined by the topographic representation scale (mesh resolution and topographic content), which in turn depends on the scale of the cartographic source. All these scale factors interact in the response of the hydraulic model to topography. In recent years, methods such as fractal analysis and the geostatistical techniques used to represent and analyse geographic features (e.g. in the characterisation of surfaces (Herzfeld and Overbeck 1999; Butler et al. 2001)) have been promoting new approaches to the quantification of scale effects (Lam et al. 2004; Atkinson and Tate 2000; Lam et al. 2006) through the analysis of the spatial structure of the variable (e.g. Bishop et al. 2006; Ju et al. 2005; Myint et al. 2004; Weng 2002; Bian and Xie 2004; Southworth et al. 2006; Pozdnyakova et al. 2005; Kyriakidis and Goodchild 2006). These methods quantify both the range of values of the variable present at different scales and the homogeneity or heterogeneity of the spatially distributed variable (Lam et al. 2004). In this thesis, these techniques have been used to analyse the impact of topography on the structure of the simulated hydraulic results. High-resolution remote sensing data and GIS techniques are also being used to better understand scale effects in environmental models (Marceau 1999; Skidmore 2002; Goodchild 2003), and they are used in this thesis. As a body of research, this thesis addresses the interactions of these scales in hydraulic modelling from a global and interrelated point of view, although the structure and main focus of the experiments relate to the spatial notions of the representation scale within that global view of scale interactions. In theory, the topographic representation should characterise the surface over which the water runs at a discretisation scale adequate to the purpose and dimensionality of the model, so that it reflects the processes of interest. The roughness parameterisation should reflect the effects of surface variability at scales finer than those explicitly represented in the topographic mesh (i.e. the discretisation scale). Clearly, both concepts are physically related by a
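
The dissertation reconstructs microtopography from grain-size data and fractal methods. As a hedged illustration of one common fractal technique, the random midpoint-displacement sketch below superimposes a fractional-Brownian-like roughness profile on a coarse bed profile; the Hurst exponent, amplitudes and bed geometry are illustrative assumptions, not the dissertation's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def midpoint_displacement(n_levels, hurst, sigma0=0.01):
    """Synthetic microtopography profile (fractional-Brownian-like)
    built by random midpoint displacement; returns 2**n_levels + 1 points."""
    z = np.zeros(2 ** n_levels + 1)
    sigma = sigma0
    step = len(z) - 1
    while step > 1:
        half = step // 2
        for i in range(half, len(z) - 1, step):
            z[i] = 0.5 * (z[i - half] + z[i + half]) + rng.normal(0.0, sigma)
        sigma *= 0.5 ** hurst     # displacement amplitude decays with scale
        step = half
    return z

# Add sub-grid roughness (e.g. derived from grain-size statistics)
# onto a coarse-resolution bed profile.
coarse = np.linspace(100.0, 99.0, 2 ** 8 + 1)   # 1 m drop along the reach
bed = coarse + midpoint_displacement(8, hurst=0.7, sigma0=0.02)
print(bed[:5].round(3))
```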

Relevance: 30.00%

Abstract:

Spatial variability of Vertisol properties is relevant for identifying zones with physical degradation. In this sense, one has to face the problem of identifying the origin and distribution of spatial variability patterns. The objectives of the present work were (i) to quantify the spatial structure of different physical properties collected from a Vertisol, (ii) to search for potential correlations between different spatial patterns and (iii) to identify relevant components through multivariate spatial analysis. The study was conducted on a Vertisol (Typic Hapludert) dedicated to sugarcane (Saccharum officinarum L.) production during the last sixty years. We used six soil properties collected on a square grid (225 points): penetrometer resistance (PR), total porosity, fragmentation dimension (Df), vertical electrical conductivity (ECv), horizontal electrical conductivity (ECh) and soil water content (WC). All the original data sets were z-transformed before geostatistical analysis. Three different types of semivariogram model were necessary for fitting the individual experimental semivariograms, which suggests that the spatial variability patterns are of different natures. Soil water content rendered the largest nugget effect (C0 = 0.933), while soil total porosity showed the largest range of spatial correlation (A = 43.92 m). The bivariate geostatistical analysis also rendered significant cross-semivariance between different paired soil properties; however, four different semivariogram models were required in that case. This indicates an underlying co-regionalization between different soil properties, which is of interest for delineating management zones within sugarcane fields. Cross-semivariograms showed larger correlation ranges than the individual, univariate semivariograms (A ≥ 29 m). All the findings were supported by multivariate spatial analysis, which showed the influence of soil tillage operations, harvesting machinery and irrigation water distribution on the status of the investigated area.
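
The basic object fitted throughout this study is the experimental semivariogram, gamma(h) = mean of 0.5·(z_i − z_j)² over point pairs separated by roughly the lag h. The sketch below computes it for a 15 × 15 grid of 225 points like the sampling design described; the grid spacing, lag bins and simulated z-transformed values are assumptions, not the study's data.

```python
import numpy as np

def experimental_semivariogram(coords, values, lags, tol):
    """Isotropic experimental semivariogram:
    gamma(h) = mean of 0.5*(z_i - z_j)**2 over pairs with |d_ij - h| < tol."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[np.abs(d - h) < tol].mean() for h in lags])

# Illustrative 15 x 15 grid (225 points) with an assumed 3 m spacing;
# the z-transformed property values are simulated here.
rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.arange(15.0), np.arange(15.0))
coords = np.column_stack([gx.ravel(), gy.ravel()]) * 3.0
z = rng.standard_normal(coords.shape[0])
print(experimental_semivariogram(coords, z, lags=np.arange(3.0, 30.0, 3.0), tol=1.5))
```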

Relevance: 30.00%

Abstract:

Many cities in Europe have difficulties meeting the air quality standards set by European legislation, most particularly the annual mean limit value for NO2. Road transport is often the main source of air pollution in urban areas, and therefore there is an increasing need to estimate current and future traffic emissions as accurately as possible. As a consequence, a number of specific emission models and emission factor databases have been developed recently. They present important methodological differences, may result in largely diverging emission figures, and thus may lead to alternative policy recommendations. This study compares two approaches to estimating road traffic emissions in Madrid (Spain): the COmputer Programme to calculate Emissions from Road Transport (COPERT4 v.8.1) and the Handbook Emission Factors for Road Transport (HBEFA v.3.1), representative of the ‘average-speed’ and ‘traffic situation’ model types, respectively. The input information (e.g. fleet composition, vehicle kilometres travelled, traffic intensity, road type) was provided by the traffic model developed by the Madrid City Council, along with observations from field campaigns. Hourly emissions were computed for nearly 15 000 road segments distributed across 9 management areas covering the city of Madrid and its surroundings. Total annual NOX emissions predicted by HBEFA were 21% higher than those of COPERT. The discrepancies for NO2 were lower (13%) since the resulting average NO2/NOX ratios are lower for HBEFA. The largest differences are related to diesel vehicle emissions under “stop & go” traffic conditions, very common on the distributor/secondary roads of the Madrid metropolitan area. In order to understand the representativeness of these results, the resulting emissions were integrated into an urban scale inventory used to drive mesoscale air quality simulations with the Community Multiscale Air Quality (CMAQ) modelling system (1 km² resolution). Modelled NO2 concentrations were compared with observations through a series of statistics. Although there are no remarkable differences between the two model runs, the results suggest that HBEFA may overestimate traffic emissions. However, the results are strongly influenced by methodological issues and limitations of the traffic model. This study was useful to provide a first alternative estimate to the official emission inventory in Madrid and to identify the main features of the traffic model that should be improved to support the application of an emission system based on “real world” emission factors.
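
An "average-speed" model such as COPERT assigns each road segment an emission factor that is a function of mean travel speed, then multiplies by the vehicle-kilometres travelled. The sketch below shows only that general structure; the coefficient values of the speed curve are made-up placeholders, not COPERT4 or HBEFA values, which are pollutant- and vehicle-class-specific.

```python
def avg_speed_emission_factor(v_kmh, a=0.01, b=-1.1, c=45.0):
    """Generic average-speed emission factor curve, in g/km.
    The quadratic-in-speed shape and coefficients are illustrative only."""
    return a * v_kmh ** 2 + b * v_kmh + c

def segment_emissions(length_km, vehicles_per_hour, v_kmh, hours=1.0):
    """Emissions (g) for one road segment: VKT * EF(mean speed)."""
    vkt = length_km * vehicles_per_hour * hours
    return vkt * avg_speed_emission_factor(v_kmh)

# Congested "stop & go" vs free-flow conditions on the same segment.
for v in (15.0, 70.0):
    print(f"{v:5.1f} km/h -> {segment_emissions(0.5, 1200, v):8.0f} g/h")
```

The U-shaped speed dependence is what makes congested "stop & go" segments emit far more per kilometre than free-flowing ones, which is where the two model types in this study diverge most.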

Relevance: 30.00%

Abstract:

This paper analyzes the correlation between the fluctuations of the electrical power generated by the ensemble of 70 DC/AC inverters of a 45.6 MW PV plant. The use of real electrical power time series from a large collection of photovoltaic inverters of the same plant is an important contribution in the context of models built upon simplified assumptions to overcome the absence of such data. The data set is divided into three fluctuation categories with a clustering procedure that performs correctly with the clearness index and the wavelet variances. Afterwards, the time-dependent correlation between the electrical power time series of the inverters is estimated with the wavelet transform. The wavelet correlation depends on the distance between the inverters, the wavelet time scales and the daily fluctuation level. Correlation values for time scales below one minute are low, without dependence on the daily fluctuation level. For time scales above 20 minutes, positive high correlation values are obtained, and the decay rate with distance depends on the daily fluctuation level. At intermediate time scales the correlation depends strongly on the daily fluctuation level. The proposed methods have been implemented using free software. Source code is available as supplementary material.
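
One hedged way to reproduce the scale-dependent correlation described here is to decompose each inverter's power series with a discrete wavelet transform and correlate the detail coefficients level by level. The sketch below uses PyWavelets; the sampling rate, wavelet choice and synthetic signals are assumptions, not the paper's setup or its published source code.

```python
import numpy as np
import pywt

def wavelet_correlation(x, y, wavelet="db4", level=6):
    """Pearson correlation between the detail coefficients of x and y
    at each dyadic scale, from coarsest to finest."""
    cx = pywt.wavedec(x, wavelet, level=level)
    cy = pywt.wavedec(y, wavelet, level=level)
    # wavedec returns [cA_level, cD_level, ..., cD1]; skip the approximation.
    return [float(np.corrcoef(dx, dy)[0, 1]) for dx, dy in zip(cx[1:], cy[1:])]

# Two synthetic 1 s resolution power series sharing a ~100 s component
# (common cloud passages) plus independent fast fluctuations.
rng = np.random.default_rng(2)
t = np.arange(2 ** 12)
shared = np.sin(2 * np.pi * t / 100.0)
x = shared + 0.3 * rng.standard_normal(t.size)
y = shared + 0.3 * rng.standard_normal(t.size)
for lvl, r in zip(range(6, 0, -1), wavelet_correlation(x, y)):
    print(f"level {lvl} (~{2**lvl} s): r = {r:+.2f}")
```

As in the paper's qualitative finding, correlation is high at the coarse levels that capture the shared slow component and drops toward zero at the finest, noise-dominated scales.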

Relevance: 30.00%

Abstract:

One of the main drawbacks of wind energy is that it exhibits intermittent generation greatly depending on environmental conditions. Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective. Indeed, system operators and energy traders benefit from the use of forecasting techniques, because the reduction of the inherent uncertainty of wind power allows them to adopt optimal decisions. Wind power integration imposes new challenges as higher wind penetration levels are attained. Wind power ramp forecasting is an example of such a recent topic of interest. The term ramp refers to a large and rapid variation (1–4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be caused by a broad number of meteorological processes that occur at different temporal and spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. Ramp events may also be conditioned by features of the wind-to-power conversion process, such as yaw misalignment, wind turbine shut-down and the aerodynamic interaction between the wind turbines of a wind farm (wake effect). This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level, within a point-forecasting framework. Time series based models were implemented for very short-term prediction, characterised by prediction horizons of up to six hours ahead. As a first step, a methodology to characterise ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index related to the ramp intensity at each time step. The underlying idea is that ramps are characterised by high power output gradients evaluated over different time scales. A number of state-of-the-art time series based models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs). This allowed us to gain insight into how the complexity of the model contributes to the accuracy of the wind power time series modelling. The models were trained by minimising the mean squared error, and the final set-up of each model was determined through cross-validation. In order to investigate the contribution of the global scale to wind power ramp forecasting, a methodology was proposed to identify features in atmospheric raw data that are relevant for explaining wind power ramp events. It is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. The methodology was applied to reanalysis data generated with a general circulation model (GCM), yielding explanatory variables that were then used as exogenous inputs by the forecasting models. The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations. Adding atmospheric information yielded further improvements of a comparable magnitude, especially during ramp-down events. The results also suggested different levels of connection between ramp occurrence at the two wind farms and the global scale.
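
A minimal sketch of a ramp index in the spirit of the ramp function described above: power gradients are evaluated over several time scales and combined into a single per-timestep intensity. The actual thesis builds this index on the wavelet transform; the moving-difference filter, scales and synthetic series below are simplifying assumptions.

```python
import numpy as np

def ramp_index(power, scales=(1, 2, 4, 6)):
    """Per-timestep ramp intensity: mean absolute power gradient
    evaluated over several time scales (in number of steps)."""
    n = len(power)
    idx = np.zeros(n)
    for s in scales:
        grad = np.zeros(n)
        grad[s:] = np.abs(power[s:] - power[:-s]) / s
        idx += grad
    return idx / len(scales)

# Hourly normalised power with an artificial ramp-down around t = 50.
rng = np.random.default_rng(3)
p = np.clip(0.7 + 0.02 * rng.standard_normal(100), 0.0, 1.0)
p[50:60] -= np.linspace(0.0, 0.5, 10)      # 50% drop over a few hours
r = ramp_index(p)
print("ramp flagged at t =", np.argmax(r))
```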

Relevance: 30.00%

Abstract:

In recent years, new models for organizations working on poverty alleviation have emerged. One of them, the social enterprise, has attracted the attention of both academics and practitioners all over the world. Even if defined in different ways depending on the context, the social enterprise has an enormous potential to generate social benefits and to promote local agency and private initiative in poverty alleviation. In this sense, it is important to identify the main standards that permit the characterization of diverse social enterprises, in order to understand their main specificities and guarantee value generation for low-income populations. Another crucial factor is understanding innovation as a critical driver of social enterprise. A powerful tool for enhancing the impact and application of this model is Information and Communication Technologies (ICT): in the 21st century, these tools allow users to find new ways of collaboration, new sustainable business models and cost-effective ways of scaling up initiatives. This paper, a product of collaborative research between the Universidad Politécnica de Madrid and the Universidade Federal Fluminense, examines different business models for social enterprises and the role that ICT can play in the scale and impact of these initiatives.

Relevance: 30.00%

Abstract:

The physical appearance of granular media suggests the existence of geometrical scale invariance. The paper discusses how this physico-empirical property can be mathematically encoded, leading to two different generative models: a smooth one encoded by a differential equation, and another encoded by an equation derived from a measure-theoretic property.
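
Geometrical scale invariance in a grain-size distribution is commonly expressed as a power law, N(R > r) = k·r^(−D): the count of grains larger than r has no characteristic scale. The sketch below is a generic illustration of that property rather than either of the paper's two generative models; it samples such a distribution and recovers the exponent with the standard maximum-likelihood (Hill) estimator.

```python
import numpy as np

# Scale invariance as a power law: N(R > r) = k * r**(-D).
# Inverse-transform sampling from the Pareto-type survival function.
rng = np.random.default_rng(4)
D, r_min, n = 2.5, 0.1, 100_000
u = 1.0 - rng.uniform(size=n)          # uniform in (0, 1]
radii = r_min * u ** (-1.0 / D)

# Recover D with the maximum-likelihood (Hill) estimator.
D_hat = n / np.log(radii / r_min).sum()
print(f"true D = {D}, estimated D = {D_hat:.3f}")
```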

Relevance: 30.00%

Abstract:

Crowd-induced dynamic loading in large structures, such as gymnasiums or stadiums, is usually modelled as a series of harmonic loads defined in terms of their Fourier coefficients. Values of these coefficients obtained from full-scale measurements can be found in design codes. Recently, an alternative has been proposed, based on the random generation of load time histories that take into account the phase lag among individuals within the crowd. This paper presents the testing done on a structure designed to be a gymnasium. Two series of dynamic tests were performed on the gym slab. In the first, an electrodynamic shaker was placed at several locations; in the second, people located inside a marked area bounced and jumped guided by different metronome rates. A finite element model (FEM) is presented, and a comparison of numerically predicted and experimentally observed vibration modes and frequencies has been used to assess its validity. The second group of measurements will be compared with predictions made using the FE model and three alternatives for crowd-induced load modelling.
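
Crowd jumping loads of the kind described are usually written as a truncated Fourier series, F(t) = G·[1 + Σᵢ αᵢ·sin(2π·i·f·t + φᵢ)], with G the crowd weight, f the activity rate and αᵢ the dynamic load factors. The sketch below generates one such time history including random phase lags among individuals; the coefficient values and the phase-lag model are illustrative assumptions, not those used in the paper.

```python
import numpy as np

def crowd_load(t, weight_n, f_jump_hz, alphas, phases):
    """Harmonic crowd-load model:
    F(t) = G * (1 + sum_i alpha_i * sin(2*pi*i*f*t + phi_i))."""
    f = np.ones_like(t)
    for i, (a, phi) in enumerate(zip(alphas, phases), start=1):
        f += a * np.sin(2 * np.pi * i * f_jump_hz * t + phi)
    return weight_n * f

rng = np.random.default_rng(5)
t = np.linspace(0.0, 5.0, 2000)
n_people, g_person = 30, 700.0          # illustrative crowd size and weight (N)
alphas = (1.6, 0.9, 0.3)                # placeholder dynamic load factors

# Random phase lags among individuals smooth the total load history.
total = sum(
    crowd_load(t, g_person, 2.0, alphas, rng.uniform(0, 2 * np.pi, 3))
    for _ in range(n_people)
)
print(f"peak/static ratio: {total.max() / (n_people * g_person):.2f}")
```

Because the phases are randomised per person, the peak-to-static ratio of the summed load stays well below the perfectly synchronised bound 1 + Σαᵢ, which is precisely the effect the randomly generated load histories are meant to capture.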

Relevance: 30.00%

Abstract:

Sitting at the interface between Engineering, Computer Science and Biology, computational neuron mechanics appears as a new interdisciplinary field potentially able to tackle clinical problems from a new perspective. This field is multiscale by nature, ranging from the nanoscale (e.g., tubulin dimers) to the macroscale (e.g., brain tissue), and aims at tackling problems that are complex, and sometimes impossible, to study through experimental means. Computational modeling has been widely used in Neuroscience applications as diverse as neuronal growth or compound action potential propagation. However, in the majority of the modeling approaches in this field to date, the interactions between the cell and its surrounding media/stimulus have rarely been explored. Despite the tremendous importance of such relationships in several medical challenges—e.g., traumatic brain injury (TBI), cancer, Alzheimer’s disease (AD)—a bridge between the electrophysiological-chemical and mechanical properties of neurons from the molecular scale to the cell level is still lacking. To this end, this research proposes a multiscale computational framework particularized for two representative scenarios: axon growth and the electrophysiological-mechanical coupling of neurites. In the former case, the relation between the molecular constituents of the axon during its growth and its resulting mechanical properties is explored, whereas in the latter, a mechanical stimulus provokes functional deficits at the cell level as a consequence of electrophysiological-chemical alterations. The computational modeling approach chosen in this work is the finite difference method (FDM), implemented in a new program called Neurite. Although the finite element method (FEM) is also explored as part of this research, the FDM provides the flexibility and versatility needed to implement biological models, as well as the mathematical simplicity to extend them to large scale simulations at a low computational cost. Focusing first on the effect of electrophysiological-chemical properties on mechanical properties, an adaptation of Neurite was developed to simulate microtubule polymerization in axonal growth and provide the axon's mechanical properties as a function of microtubule occupancy. After calibrating the axon growth model against experimental results available in the literature, the mechanical characteristics can be tracked during the simulation. The axon's mechanical properties show dramatic variations at its tip, where the growth cone supports the chemical and mechanical signaling. Based on the knowledge gained from the FDM scheme, and in order to go from 1D to 3D, this preliminary yet novel scheme paves the road for future studies with FEM. Focusing then on the effect of mechanical properties on electrophysiological-chemical properties, Neurite was used to relate macroscopic mechanical loading to microscopic strains and strain rates, and to simulate electrical signal propagation along neurites under mechanical loading. The simulations were calibrated against experimental results published in the literature, thus providing a model able to predict the alteration of neuronal electrophysiological function under external damaging loads, and linking mechanical injuries to subsequent acute functional deficits. To undertake large scale simulations, although other state-of-the-art architectures based on many integrated cores (MICs) were considered, the explicit and implicit solvers were implemented for central processing units (CPUs) and graphics processing units (GPUs). Scalability studies were carried out for both implementations, showing promising results for extremely large scale simulations with GPUs. This thesis opens the avenue for future mechanical modeling approaches aimed at linking electrophysiological-chemical properties to mechanical properties. Its overarching goal is to enhance the bioengineering and medical communities’ knowledge of neuronal mechanics and of the functional deficits arising from damage produced by direct mechanical insults, such as TBI, or neurodegenerative illnesses, such as AD.
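
As background for the finite difference approach used above, the passive cable equation, τ·∂V/∂t = λ²·∂²V/∂x² − V (τ the membrane time constant, λ the electrotonic length constant), is the simplest neurite electrophysiology model and discretizes naturally on a 1D grid. The sketch below is a generic explicit-FDM illustration with textbook-style parameters, not code from Neurite.

```python
import numpy as np

# Passive cable equation  tau * dV/dt = lam**2 * d2V/dx2 - V
# (V = membrane potential deviation from rest), explicit FDM in 1D.
tau = 10.0          # membrane time constant, ms (illustrative)
lam = 0.5           # electrotonic length constant, mm (illustrative)
dx, nx = 0.05, 200  # 10 mm of neurite
dt = 0.2 * tau * dx**2 / lam**2          # stable explicit step
v = np.zeros(nx)
v[0] = 10.0

t, t_end = 0.0, 50.0
while t < t_end:
    d2v = (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2
    v = v + dt / tau * (lam**2 * d2v - v)
    v[0] = 10.0                          # clamped depolarization at near end, mV
    v[-1] = v[-2]                        # sealed (no-flux) far end
    t += dt

# The steady state decays roughly as exp(-x/lam) along the cable.
print(f"V at x = lam: {v[int(lam / dx)]:.2f} mV "
      f"(semi-infinite theory: ~{10 * np.exp(-1):.2f})")
```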

Relevance: 30.00%

Abstract:

With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is characterized by purely mechanistic criteria, i.e., functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells has only very recently been proposed (Jerusalem et al., 2013). In this paper, we present the implementation details of Neurite, the finite difference parallel program used in this reference. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The solvers implemented in Neurite (explicit and implicit) were therefore parallelized using graphics processing units in order to reduce the simulation costs of large scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiologically passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite’s mechanical behavior within its surrounding medium was adopted as the link between electrophysiology and mechanics (Jerusalem et al., 2013). This paper provides the details of the parallel implementation of Neurite, along with three application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted.
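
The Hodgkin-Huxley model mentioned for the active regions couples the membrane potential to three gating variables (n, m, h). The sketch below integrates a single membrane patch with the classic squid-axon parameters and forward Euler; it is a generic textbook illustration, not Neurite's implementation, and the injected current and spike-counting are illustrative choices.

```python
import numpy as np

# Minimal Hodgkin-Huxley point membrane (classic squid axon parameters),
# forward-Euler integration.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3      # uF/cm2, mS/cm2
ENa, EK, EL = 50.0, -77.0, -54.387          # mV

def rates(v):
    """Standard HH opening/closing rates (1/ms) at potential v (mV)."""
    an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
    am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    return an, bn, am, bm, ah, bh

dt, v = 0.01, -65.0                          # ms, mV
n, m, h = 0.317, 0.053, 0.596                # resting gating values
spikes, last_v = 0, v
for _ in range(int(50.0 / dt)):              # 50 ms with 10 uA/cm2 injected
    an, bn, am, bm, ah, bh = rates(v)
    n += dt * (an * (1 - n) - bn * n)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    i_ion = gNa * m**3 * h * (v - ENa) + gK * n**4 * (v - EK) + gL * (v - EL)
    v += dt * (10.0 - i_ion) / C
    if last_v < 0.0 <= v:                    # upward zero crossing = spike
        spikes += 1
    last_v = v
print("spikes in 50 ms:", spikes)
```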

Relevance: 30.00%

Abstract:

In crop insurance, the accuracy with which the insurer quantifies the actual risk is highly dependent on the availability of actual yield data. Crop models might be valuable tools for generating data on expected yields for risk assessment when no historical records are available. However, selecting a crop model for a specific objective, location and implementation scale is a difficult task. A look inside the different crop and soil modules, to understand how outputs are obtained, may facilitate model choice. The objectives of this paper were (i) to assess the usefulness of crop models within crop insurance analysis and design and (ii) to select the most suitable crop model for drought risk assessment in semi-arid regions in Spain. For that purpose, first a pre-selection of crop models simulating wheat yield under rainfed growing conditions at the field scale was made, and second, four selected models (Aquacrop, CERES-Wheat, CropSyst and WOFOST) were compared in terms of modelling approaches, process descriptions and model outputs. The outputs of the four models for the simulation of winter wheat growth are comparable when water is not limiting, but differences are larger when simulating yields under rainfed conditions. These differences in rainfed yields are mainly related to the dissimilar simulated soil water availability and the assumed linkages with dry matter formation. We conclude that for the simulation of winter wheat growth at the field scale in such semi-arid conditions, CERES-Wheat and CropSyst are preferred. WOFOST is a satisfactory compromise between data availability and complexity when detailed soil data are limited. Aquacrop integrates physiological processes into a few representative parameters, thus diminishing the number of input parameters, which is seen as an advantage when observed data are scarce; however, the high sensitivity of this model to low water availability limits its use in the region considered. Rather than using ensembles of crop models, we recommend that efforts be concentrated on selecting or rebuilding a model that includes approaches that better describe the agronomic conditions of the regions in which it will be applied. The use of methodologies as complex as crop models is associated with numerous sources of uncertainty, although these models are the best tools available for gaining insight into these complex agronomic systems.
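
A common building block of water-driven yield simulation of the kind Aquacrop is based on is the FAO-33 relationship of Doorenbos and Kassam, 1 − Ya/Ym = Ky·(1 − ETa/ETm): relative yield loss is proportional to the relative evapotranspiration deficit. The sketch below applies it for a rainfed season; the Ky value, potential yield and ET inputs are illustrative assumptions, not outputs of the four models compared here.

```python
def water_limited_yield(y_max, et_actual, et_max, ky=1.15):
    """FAO-33 yield response to water (Doorenbos and Kassam):
    1 - Ya/Ym = Ky * (1 - ETa/ETm)."""
    relative_loss = ky * (1.0 - et_actual / et_max)
    return max(0.0, y_max * (1.0 - relative_loss))

# Illustrative rainfed winter wheat season in a semi-arid region.
y_potential = 6.0                      # t/ha with no water limitation (assumed)
for eta in (450.0, 300.0, 200.0):      # actual seasonal ET, mm
    ya = water_limited_yield(y_potential, eta, et_max=450.0)
    print(f"ETa = {eta:.0f} mm -> yield = {ya:.2f} t/ha")
```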