59 results for Many-to-many assignment problem

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

A proper allocation of resources targeted to solve hunger is essential to optimize the efficacy of actions and maximize results. This requires an adequate measurement and formulation of the problem since, paraphrasing Einstein, the formulation of a problem is essential to reach a solution. Different measurement methods have been designed to count, score, classify and compare hunger at the local level and to allow comparisons between different places. However, the alternative methods produce significantly different results. These discrepancies make decisions on the targeting of resource allocations difficult. To assist decision makers, a new method that takes into account both the dimension of hunger and the coping capacities of countries is proposed, enabling geographical and sectoral priorities to be established for the allocation of resources.
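As a rough illustration of how such a prioritization might work (this is a hypothetical sketch, not the paper's actual method; the indicator names and weights are invented), a composite score can combine a hunger indicator with a coping-capacity indicator and rank countries by it:

```python
# Hypothetical sketch: rank countries for resource allocation by combining
# a hunger indicator (higher = worse) with a coping-capacity indicator
# (higher = better able to cope). Field names and weights are illustrative.

def priority_scores(countries, w_hunger=0.5, w_coping=0.5):
    """Return countries sorted by descending allocation priority.

    Each country is a dict with 'name', 'hunger' and 'coping' fields,
    both assumed to be on a 0-100 scale.
    """
    def score(c):
        # High hunger raises priority; high coping capacity lowers it.
        return w_hunger * c["hunger"] + w_coping * (100 - c["coping"])

    return sorted(countries, key=score, reverse=True)

data = [
    {"name": "A", "hunger": 80, "coping": 20},
    {"name": "B", "hunger": 60, "coping": 70},
    {"name": "C", "hunger": 30, "coping": 90},
]
ranked = priority_scores(data)   # A first: worst hunger, weakest coping
```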


Relevance: 100.00%

Abstract:

The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen?s ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo?Stiefel methods.
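The problem setup can be sketched with a Cowell-type propagation (direct numerical integration in Cartesian coordinates), which is one of the propagators the abstract compares; this is a minimal sketch with unit gravitational parameter and an assumed thrust level, not the paper's DROMO formulation or its analytic solution:

```python
import math

# Planar two-body motion with constant radial thrust, integrated with a
# fixed-step RK4 (Cowell-style: direct integration in Cartesian coordinates).
# mu = 1 and the thrust acceleration 'a_r' are illustrative values.

def deriv(state, mu=1.0, a_r=0.02):
    x, y, vx, vy = state
    r = math.hypot(x, y)
    # Gravity plus a constant-magnitude acceleration along the radial direction.
    ax = -mu * x / r**3 + a_r * x / r
    ay = -mu * y / r**3 + a_r * y / r
    return (vx, vy, ax, ay)

def rk4_step(state, h, **kw):
    k1 = deriv(state, **kw)
    k2 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k1)), **kw)
    k3 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k2)), **kw)
    k4 = deriv(tuple(s + h * k for s, k in zip(state, k3)), **kw)
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def propagate(state, h=0.01, steps=1000, **kw):
    for _ in range(steps):
        state = rk4_step(state, h, **kw)
    return state

# Tsien-problem initial conditions: circular orbit of radius 1.
final = propagate((1.0, 0.0, 0.0, 1.0))
```

For a sub-escape thrust level, the radius oscillates between the initial value and a larger bound, which is what the analytic solution describes in closed form.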

Relevance: 100.00%

Abstract:

The aim of this work is to solve a question raised for average sampling in shift-invariant spaces by using the well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem, which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case where the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
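The left-inverse idea at the core of the oversampling approach can be conveyed in the simplest (scalar, non-polynomial) setting: oversampling makes the sampling operator a tall matrix, and any left inverse of it supplies reconstruction coefficients. A toy NumPy sketch (illustrative only, not the paper's polynomial-matrix/pencil construction):

```python
import numpy as np

# Toy illustration: with oversampling, the sampling operator becomes a tall
# matrix G (more samples than unknowns). Any left inverse H with H @ G = I
# yields a reconstruction formula; the minimal-norm choice is the
# Moore-Penrose pseudoinverse. The paper works with *polynomial* matrices
# and matrix pencils; this numeric sketch only conveys the left-inverse idea.

rng = np.random.default_rng(0)
n_unknowns, n_samples = 3, 5           # oversampling: 5 samples, 3 unknowns
G = rng.standard_normal((n_samples, n_unknowns))

H = np.linalg.pinv(G)                  # left inverse: H @ G == I (G full rank)

x = rng.standard_normal(n_unknowns)    # "signal" coefficients
samples = G @ x                        # generalized samples
x_rec = H @ samples                    # exact reconstruction from the samples
```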

Relevance: 100.00%

Abstract:

The fuzzy min–max neural network classifier is a supervised learning method. This classifier takes a hybrid neural network and fuzzy systems approach. All input variables in the network are required to correspond to continuously valued variables, and this can be a significant constraint in many real-world situations where there are not only quantitative but also categorical data. The usual way of dealing with this type of variable is to replace the categories with numerical values and treat them as if they were continuously valued. But this method implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method. The procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. This provides greater flexibility and wider application. The proposed method is then applied to missing data imputation in voting intention polls. The micro data—the set of the respondents’ individual answers to the questions—of this type of poll are especially suited for evaluating the method, since they include a large number of numerical and categorical attributes.
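For the continuous part, the classifier represents each class with hyperboxes and grades points by a fuzzy membership in each box. The sketch below is a simplified membership function in that spirit (the exact formula and the paper's categorical extension are not reproduced; the sensitivity parameter and box corners are illustrative):

```python
# Simplified fuzzy min-max membership: a class is represented by hyperboxes
# [v, w]; membership decays linearly (sensitivity 'gamma') with the distance
# to the box along each dimension. A simplified variant of the classic
# min-max membership, for illustration only.

def membership(x, v, w, gamma=4.0):
    """Membership of point x in the hyperbox with min corner v, max corner w."""
    m = 1.0
    for xi, vi, wi in zip(x, v, w):
        below = max(0.0, vi - xi)      # distance below the min corner
        above = max(0.0, xi - wi)      # distance above the max corner
        m = min(m, max(0.0, 1.0 - gamma * max(below, above)))
    return m

box_v, box_w = (0.2, 0.2), (0.4, 0.5)
inside = membership((0.3, 0.3), box_v, box_w)    # fully inside -> 1.0
nearby = membership((0.45, 0.3), box_v, box_w)   # just outside -> 0 < m < 1
far = membership((0.9, 0.9), box_v, box_w)       # far outside -> 0.0
```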

Relevance: 100.00%

Abstract:

Zeldovič’s article “On Russian Dative Reflexive Constructions: Accidental or Compositional” is very interesting. It contains many insightful observations and is painstakingly argued. Its research object is the Russian dative reflexive construction (DRC), as in Ивану не работается ‘Ivan does not feel like working’. The aim of the article is to show that the DRC is fully compositional. Like many other works by Zeldovič, the article is written from the radical-pragmatic perspective and constitutes a very good illustration of this trend in linguistic research. The language material that it analyzes has often been investigated within more traditional frameworks, especially in Russian linguistics, which makes Zeldovič’s novel approach to the old problem particularly interesting. In this short note I would like (by way of discussion) to address two problems connected not so much with the DRC itself as with methodological issues concerning compositionality. I will dwell on two aspects: how we understand the very concept of compositionality, and what instruments we employ to demonstrate it.

Relevance: 100.00%

Abstract:

Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve the new challenges that arise, in both academic and real applications. There are several machine learning techniques, depending on both data characteristics and purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data), and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled whereas the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because of the cost of, or disregard for, the labeling process, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of data labels, is the relevance or otherwise of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, to the learning process.
A recent clustering trend, related to data relevance and called subspace clustering, holds that different clusters may be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As noted above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and data characteristics. Hence, for the first goal, three well-known clustering algorithms are used to cluster data with outliers and noise, in order to study critically how some of the best-known validation indices behave. The main goal of this work, however, is to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated using either known indices or expert opinions. Two algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. The first algorithm uses the available data labels to search for subspaces first, before searching for clusters. This algorithm assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques. The subspaces are then used to find clusters using traditional clustering techniques. The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. This algorithm assigns each instance to each cluster with a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach.
The different proposals are tested using various real and synthetic databases, and comparisons with other methods are included where appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems today: human brain modeling. Specifically, expert neuroscientists do not agree on a neuron classification for the cerebral cortex, which makes impossible not only any modeling attempt but also day-to-day work, given the lack of a common way to name neurons. Machine learning techniques may therefore help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
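The label-guided clustering idea can be illustrated with a minimal seeded k-means (a generic semi-supervised scheme, not either of the thesis's two algorithms): labeled instances seed the initial centroids, which are then refined over labeled and unlabeled data alike:

```python
# Minimal seeded k-means sketch: labeled points seed the initial centroids,
# then standard k-means iterations run over labeled *and* unlabeled data.
# A generic semi-supervised clustering scheme, for illustration only.

def seeded_kmeans(points, seeds, n_iter=10):
    """points: list of tuples; seeds: dict label -> list of labeled points."""
    # Initialize one centroid per known label as the mean of its seed points.
    labels = sorted(seeds)
    centroids = [tuple(sum(c) / len(pts) for c in zip(*pts))
                 for pts in (seeds[l] for l in labels)]
    for _ in range(n_iter):
        groups = [[] for _ in centroids]
        for p in points:
            # Assign each point (labeled or not) to its nearest centroid.
            i = min(range(len(centroids)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[i])))
            groups[i].append(p)
        # Recompute centroids; keep the old one if a group went empty.
        centroids = [tuple(sum(c) / len(g) for c in zip(*g)) if g else c0
                     for g, c0 in zip(groups, centroids)]
    return labels, centroids

data = [(0.1, 0.0), (0.0, 0.2), (0.2, 0.1),      # cluster around (0.1, 0.1)
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]      # cluster around (5.0, 5.0)
labels, cents = seeded_kmeans(data, {"a": [(0.0, 0.0)], "b": [(5.0, 5.0)]})
```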

Relevance: 100.00%

Abstract:

The objective of this thesis is to model some processes from nature, such as evolution and co-evolution, and to propose techniques that ensure these learning processes actually take place and are useful for solving complex problems such as the game of Go. Go is an ancient and very complex game with simple rules that remains a challenge for Artificial Intelligence. This dissertation covers several approaches that have been applied to this problem and proposes solving it using competitive and cooperative co-evolutionary learning methods, together with other techniques proposed by the author. To study, implement and validate these methods, several neural network structures and a freely available framework were used, and many programs were coded. The proposed techniques were implemented by the author, many experiments were performed to find the best configuration ensuring that co-evolution progresses, and the results are discussed. In co-evolutionary learning processes, certain pathologies can be observed that may hinder the progress of co-evolution. This dissertation introduces techniques to address pathologies such as loss of gradient, cycling dynamics and forgetting. According to some authors, one way to address these co-evolution pathologies is to introduce more diversity into the evolving populations. This thesis proposes techniques to introduce more diversity, together with diversity measurements for neural network structures to monitor diversity during co-evolution. The evolved genotype diversity was analyzed in terms of its impact on the global fitness of the evolved strategies and their generalization. Additionally, a memory mechanism was introduced into the neural network structures to reinforce certain strategies in the genes of the evolved neurons, so that good strategies, once learned, are not forgotten. The dissertation also presents work by other authors in which cooperative and competitive co-evolution has been applied.
The Go board size used in this thesis was 9x9, but the approach can easily be scaled to larger boards. The author believes that the programs coded and the techniques introduced in this dissertation can also be applied to other domains.
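The overall loop of competitive co-evolution can be sketched with a toy example (far simpler than the Go setup, and not the thesis's implementation): two populations of bit strings play a simple positional game, and each individual's fitness comes from encounters with the rival population:

```python
import random

# Toy competitive co-evolution: two populations of bit strings play a simple
# positional game (A scores where it has a 1 against B's 0, and vice versa).
# Fitness is the mean score against the rival population; the best half of
# each population survives and is refilled with mutated copies. Purely
# illustrative of the loop structure, not the thesis's Go setup.

random.seed(42)
L, POP, GENS = 8, 10, 20

def game(a, b):
    """Return (score_a, score_b) for one encounter."""
    sa = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    sb = sum(1 for x, y in zip(a, b) if y == 1 and x == 0)
    return sa, sb

def mutate(ind, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

def step(pop_a, pop_b):
    fit_a = [sum(game(a, b)[0] for b in pop_b) / len(pop_b) for a in pop_a]
    fit_b = [sum(game(a, b)[1] for a in pop_a) / len(pop_a) for b in pop_b]
    def select(pop, fit):
        ranked = [p for _, p in sorted(zip(fit, pop), key=lambda t: -t[0])]
        survivors = ranked[: len(pop) // 2]
        return survivors + [mutate(random.choice(survivors))
                            for _ in range(len(pop) - len(survivors))]
    return select(pop_a, fit_a), select(pop_b, fit_b), max(fit_a), max(fit_b)

pop_a = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
pop_b = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for gen in range(GENS):
    pop_a, pop_b, best_a, best_b = step(pop_a, pop_b)
```

Because each side's fitness depends on the other's current population, this loop exhibits exactly the pathologies the abstract names (cycling, loss of gradient, forgetting) unless diversity is maintained.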

Relevance: 100.00%

Abstract:

The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization, and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have been added to the ‘traditional’ set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domain, respectively, as well as more elaborate tools such as Granger causality. This increase in the number of approaches for assessing the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab® environment (The Mathworks, Inc), which is designed to study functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe that this toolbox will be very helpful to all the researchers working in the emerging field of brain connectivity analysis.
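Among the 'traditional' linear measures the abstract mentions, lagged cross-correlation is the simplest; a generic NumPy sketch (a textbook formulation, not HERMES's implementation):

```python
import numpy as np

# Normalized cross-correlation between two series at a given lag: one of the
# 'traditional' linear interdependence measures. Generic textbook version,
# not taken from the HERMES toolbox.

def xcorr(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    if lag >= 0:
        a, b = x[: len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[: len(y) + lag]
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

t = np.arange(500)
x = np.sin(0.1 * t)
y = np.sin(0.1 * (t - 5))          # y lags x by 5 samples
# Scanning lags recovers the delay between the two signals.
best = max(range(-10, 11), key=lambda k: xcorr(x, y, k))
```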

Relevance: 100.00%

Abstract:

Transport is the foundation of any economy: it boosts economic growth, creates wealth, enhances trade, geographical accessibility and the mobility of people. Transport is also a key ingredient for a high quality of life, making places accessible and bringing people together. The future prosperity of our world will depend on the ability of all of its regions to remain fully and competitively integrated in the world economy. Efficient transport is vital to making this happen. Operations research can help in efficiently designing and operating transport systems. Planning and operational processes are fields that are rich in combinatorial optimization problems. These problems can be analyzed and solved through the application of mathematical models and optimization techniques, which may lead to an improvement in the performance of the transport system, as well as to a reduction in the time required for solving these problems. The latter aspect is important because it increases the flexibility of the system: the system can adapt more quickly to changes in the environment (e.g., weather conditions, crew illness, failures). These disturbing changes (called disruptions) often force the schedule to be adapted. The direct consequences are delays and cancellations, implying many schedule adjustments and huge costs. Consequently, robust schedules and recovery plans must be developed in order to cope with disruptions. This dissertation makes contributions to two different fields of application: rail and air. Robust planning and recovery methods are presented. In the field of railway transport we develop several mathematical models that answer the needs of RENFE (the major railway operator in Spain): 1. We study the rolling stock assignment problem: here, we introduce some robust aspects in order to improve operations that are likely to fail.
Once the rolling stock assignment is known, we propose a robust routing model that aims to identify the train units' sequences while minimizing the expected delays and the human resources needed to perform the sequences. 2. It is widely accepted that the sequential solving approach produces solutions that are not global optima. Therefore, we develop an integrated and robust model to determine the train schedule and rolling stock assignment. We also propose an integrated model to study the rolling stock circulations. Circulations are determined by the rolling stock assignment and the routing of the train units. 3. Although our aim is to develop robust plans, disruptions are still likely to occur and recovery methods will be needed. Therefore, we propose a recovery method that recovers the train schedule and rolling stock assignment in an integrated fashion while considering the passenger demand. In the field of air transport we develop several mathematical models that answer the needs of IBERIA (the major airline in Spain): 1. We look at the airline scheduling problem and develop an integrated approach that optimizes schedule design, fleet assignment and passenger use so as to reduce costs and create fewer incompatibilities between decisions. Robust itineraries are created to reduce passenger misconnections. 2. Air transport operators continuously face competition from other air operators and from different modes of transport (e.g., High Speed Rail). Consequently, airline profitability is critically influenced by the airline's ability to estimate passenger demand and construct profitable flight schedules. We consider multi-modal competition, including air and rail, and develop a new approach that estimates the demand associated with a given schedule, and generates airline schedules and fleet assignments using an integrated schedule design and fleet assignment optimization model that captures the impact of schedule decisions on passenger demand.
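At its core, assigning rolling stock (or a fleet) to services is an assignment problem; a brute-force sketch of the one-to-one case makes the objective concrete (real instances require integer-programming solvers, and this toy ignores robustness, circulations and sequencing; the cost matrix is invented):

```python
from itertools import permutations

# Toy assignment: assign train units to services at minimum total cost by
# enumerating permutations. Real rolling stock assignment is solved with
# (integer) optimization models; this only illustrates the objective.

def best_assignment(cost):
    """cost[i][j] = cost of assigning unit i to service j (square matrix)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

cost = [[4, 1, 3],     # unit 0's cost on services 0, 1, 2
        [2, 0, 5],
        [3, 2, 2]]
perm, total = best_assignment(cost)   # unit i serves service perm[i]
```

Brute force is O(n!); the Hungarian algorithm or an LP/MILP formulation handles realistic sizes, and the integrated models in the abstract couple this choice with scheduling and routing decisions.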


Relevance: 100.00%

Abstract:

Nitrous oxide (N2O) is a potent greenhouse gas (GHG) stemming mostly from the nitrogen fertilization of agricultural soils. Identifying fertilizer management strategies that reduce these emissions without incurring yield penalties is of both economic and environmental concern. To that end, this Thesis evaluated: (i) direct fertilizer management strategies (nitrification/urease inhibitors); and (ii) interactions of fertilizers with (1) water management, (2) crop residues and (3) different plant species. For this purpose, meta-analyses, laboratory incubations, greenhouse assays and field experiments were carried out.

Nitrification and urease inhibitors are commonly proposed as measures to reduce nitrogen (N) losses, so their application should be associated with an efficient use of N by crops (NUE). However, their effect on yields is variable. To evaluate, in a first phase, their effectiveness in increasing NUE and crop productivity, a meta-analysis was carried out. The nitrification inhibitors dicyandiamide (DCD) and 3,4-dimethylpyrazole phosphate (DMPP) and the urease inhibitor N-(n-butyl) thiophosphoric triamide (NBPT) were selected for the analysis, since they are generally considered the best commercially available options. Our results showed that their use can be recommended in order to increase both crop yield and NUE (mean increases of 7.5% and 12.9%, respectively). However, their effectiveness was found to depend largely on the environmental and management factors of the studies evaluated. A greater response was found in coarse-textured soils, irrigated systems and/or crops receiving high rates of N fertilizer. In alkaline soils (pH ≥ 8), the urease inhibitor NBPT produced the largest effect. Since its use represents an additional cost for farmers, understanding the practices that maximize its effectiveness is necessary before making effective comparisons with other practices that increase crop productivity and NUE.

Based on the results of the meta-analysis, NBPT was selected as an inhibitor with great potential. Initially developed to reduce ammonia (NH3) volatilization, in recent years some researchers have demonstrated in field studies a mitigating effect of this inhibitor on N2O losses from fertilized soils under low soil moisture conditions. Given the high variability of field experiments, where soil moisture changes rapidly, it has been impossible to understand mechanistically the potential of urease inhibitors (UIs) to reduce N2O emissions and its dependence on the soil water-filled pore space (WFPS). A laboratory incubation was therefore performed to assess which biotic mechanism is mainly responsible for N2O emissions when UIs are applied under different soil moisture conditions (40, 60 and 80% WFPS), and to analyze to what extent WFPS regulates the effect of the inhibitor on N2O emissions. A second UI (PPDA) was used to compare the effect of NBPT with that of another commercially available urease inhibitor, allowing us to check whether the effect of NBPT is specific to that inhibitor or not. N2O emissions at 40% WFPS were negligible, being significantly lower than those of all fertilizer treatments at 60 and 80% WFPS. Compared with urea without inhibitor, NBPT+U reduced N2O emissions at 60% WFPS but had no effect at 80% WFPS. The application of PPDA significantly increased emissions with respect to urea at 80% WFPS, whereas no significant effect was found at 60% WFPS. At 80% WFPS denitrification was the main source of N2O emissions in all treatments, while at 60% WFPS both nitrification and denitrification played a relevant role. These results show that proper management of NBPT can be an effective strategy to mitigate N2O emissions.

To transfer the results of the previous studies to real field conditions, an experiment was carried out in which the effectiveness of NBPT in reducing N losses and increasing productivity was evaluated in a rainfed Mediterranean barley (Hordeum vulgare L.) crop. Crop yield, soil mineral N concentrations, dissolved organic carbon (DOC), denitrification potential, and NH3, N2O and nitric oxide (NO) fluxes were determined. The addition of the inhibitor reduced NH3 emissions by 58% during the 30 days following urea application, and net N2O and NO emissions by 86 and 88%, respectively, during the 95 days following urea application. The use of NBPT also increased grain yield by 5% and N uptake by 6%, although neither increase was statistically significant. Under the given experimental conditions, these results demonstrate the potential of the urease inhibitor NBPT to mitigate NH3, N2O and NO emissions from arable soils fertilized with urea, by slowing urea hydrolysis and thus releasing lower NH4+ concentrations into the upper soil layer.

Drip irrigation combined with split application of N fertilizer dissolved in the irrigation water (i.e. drip fertigation) is usually considered an efficient practice for water and nutrient use. Some of the main factors (WFPS, NH4+ and NO3-) regulating the emissions of GHGs (i.e. N2O, CO2 and CH4) and NO can easily be manipulated through drip fertigation without incurring yield penalties. Management options to reduce these emissions were therefore evaluated in a field experiment during a melon (Cucumis melo L.) crop. Treatments included different irrigation frequencies (weekly/daily) and types of N fertilizer (urea/calcium nitrate) applied by fertigation. Fertigation with urea instead of calcium nitrate increased N2O and NO emissions by a factor of 2.4 and 2.9, respectively (P < 0.005). Daily irrigation reduced NO emissions by 42% (P < 0.005) but increased CO2 emissions by 21% (P < 0.05) compared with weekly irrigation. Analyzing the yield-scaled Global Warming Potential as well as the NO emission factors, we conclude that weekly fertigation with a nitrate-based fertilizer is the best option for combining agronomic productivity with environmental sustainability in this type of agroecosystem.

Agricultural soils in semiarid Mediterranean areas are characterized by low organic matter content and low fertility levels. The application of crop residues and/or manures is a sustainable and cost-effective alternative to overcome this problem. However, these practices could induce important changes in the N2O emissions of these agroecosystems, with additional impacts on CO2 emissions. In this context, a field experiment was carried out during a barley (Hordeum vulgare L.) crop under Mediterranean conditions to evaluate the effect of combining maize crop residues with different N fertilizer inputs (pig slurry and/or urea) on these emissions. The incorporation of maize stubble increased N2O emissions over the experimental period by 105%. However, NO emissions were significantly reduced in the plots amended with stubble. Partial substitution of urea by pig slurry reduced net N2O emissions by 46 and 39%, with and without crop residue incorporation, respectively. Net NO emissions were reduced by 38 and 17% for the same treatments. The DOC:NO3- molar ratio proved to predict N2O and NO emissions consistently. The main interaction effect between the N fertilizer and the maize stubble occurred 4-6 months after their application, producing an increase in N2O and a decrease in NO. The substitution of urea by pig slurry can be considered a good management strategy, since the use of this organic residue reduced the emissions of N oxides.

Grasslands worldwide provide numerous ecosystem services but are also an important source of N2O emissions, especially in response to N deposition by grazing livestock. To explore the role of plants as mediators of these emissions, we analyzed whether N2O emissions depend on grass species richness and/or on the specific species composition, in the absence and presence of urine deposition. The hypotheses were: 1) N2O emissions are negatively related to plant productivity; 2) four-species mixtures produce lower emissions than monocultures (since their productivity will be higher); 3) emissions are lower in combinations of species with contrasting root morphology and high root biomass; and 4) the identity of the key species for reducing N2O depends on whether urine is present or not. Monocultures and two- and four-species mixtures of common grassland species with divergent functional traits were established: Lolium perenne L. (Lp), Festuca arundinacea Schreb. (Fa), Phleum pratense L. (Php) and Poa trivialis L. (Pt), and N2O emissions were quantified for 42 days. No relationship was found between species richness and N2O emissions. However, these emissions were significantly lower in certain species combinations. In the absence of urine, Fa+Php plant communities acted as an N2O sink, whereas monocultures of these species were an N2O source. With urine application, the Lp+Pt community reduced N2O emissions by 44% (P < 0.001) compared with Lp monocultures. The N2O reductions found in certain species combinations could be explained by higher total productivity and by complementarity in root morphology. This study shows that grass species composition is a key component defining N2O emissions from grassland ecosystems. Selecting specific plant combinations according to the expected N deposition may therefore be key to mitigating N2O emissions.
With that aim, this Thesis evaluated: (i) the use of nitrification and urease inhibitors; and (ii) interactions of N fertilizers with (1) water management, (2) crop residues and (3) plant species richness/identity. Meta-analysis, laboratory incubations, greenhouse mesocosm and field experiments were carried out in order to understand and develop effective mitigation strategies. Nitrification and urease inhibitors are proposed as means to reduce N losses, thereby increasing crop nitrogen use efficiency (NUE). However, their effect on crop yield is variable. A meta-analysis was initially conducted to evaluate their effectiveness at increasing NUE and crop productivity. Commonly used nitrification inhibitors (dicyandiamide (DCD) and 3,4-dimethylpyrazole phosphate (DMPP)) and the urease inhibitor N-(n-butyl) thiophosphoric triamide (NBPT) were selected for analysis, as they are generally considered the best available options. Our results show that their use can be recommended in order to increase both crop yields and NUE (grand mean increases of 7.5% and 12.9%, respectively). However, their effectiveness depended on the environmental and management factors of the studies evaluated. Larger responses were found in coarse-textured soils, irrigated systems and/or crops receiving high nitrogen fertilizer rates. In alkaline soils (pH ≥ 8), the urease inhibitor NBPT produced the largest effect size. Given that their use represents an additional cost for farmers, understanding the best management practices that maximize their effectiveness is paramount to allow effective comparison with other practices that increase crop productivity and NUE. Based on the meta-analysis results, NBPT was identified as a mitigation option with large potential. Urease inhibitors (UIs) have been shown to promote high N use efficiency by reducing ammonia (NH3) volatilization.
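The pooled effects quoted above (e.g. the grand mean increases of 7.5% and 12.9%) are typically obtained by inverse-variance weighting of per-study log response ratios. The following is a minimal sketch of that procedure, not the meta-analysis itself; all study values and function names are hypothetical:

```python
import math

def log_response_ratio(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Effect size (lnRR) and its sampling variance for one study:
    inhibitor treatment vs. conventional-fertilizer control."""
    lnrr = math.log(mean_t / mean_c)
    var = sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2)
    return lnrr, var

def weighted_mean_effect(studies):
    """Inverse-variance pooling of lnRR, back-transformed to a % change."""
    num = den = 0.0
    for s in studies:
        lnrr, var = log_response_ratio(*s)
        w = 1.0 / var          # more precise studies get more weight
        num += w * lnrr
        den += w
    pooled = num / den
    return (math.exp(pooled) - 1.0) * 100.0  # grand mean % increase

# Hypothetical studies: (mean_t, sd_t, n_t, mean_c, sd_c, n_c), e.g. yield in t/ha
studies = [
    (5.4, 0.5, 4, 5.0, 0.6, 4),
    (3.3, 0.4, 3, 3.1, 0.3, 3),
    (7.6, 0.8, 5, 7.0, 0.7, 5),
]
print(round(weighted_mean_effect(studies), 1))
```

A random-effects model, which the published meta-analyses in this area normally use, would additionally add a between-study variance component to each weight.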
In recent years, however, some field studies have shown effective mitigation by UIs of N2O losses from fertilized soils under conditions of low soil moisture. Given the inherently high variability of field experiments, where soil moisture content changes rapidly, it has been impossible to understand mechanistically the potential of UIs to reduce N2O emissions and its dependency on the soil water-filled pore space (WFPS). An incubation experiment was carried out to assess the main biotic mechanism behind N2O emissions when UIs are applied under different soil moisture conditions (40, 60 and 80% WFPS), and to analyze to what extent the soil WFPS regulates the effect of the inhibitor on N2O emissions. A second UI (i.e. PPDA) was also used to compare the effect of NBPT with that of another commercially available urease inhibitor; this allowed us to determine whether or not the effect of NBPT was inhibitor-specific. The N2O emissions at 40% WFPS were almost negligible, being significantly lower in all fertilized treatments than those produced at 60 and 80% WFPS. Compared to urea alone, NBPT+U reduced the N2O emissions at 60% WFPS but had no effect at 80% WFPS. The application of PPDA significantly increased the emissions with respect to U at 80% WFPS, whereas no significant effect was found at 60% WFPS. At 80% WFPS, denitrification was the main source of N2O emissions for all treatments. Both nitrification and denitrification had a determinant role in these emissions at 60% WFPS. These results suggest that adequate management of the UI NBPT can provide, under certain soil conditions, an opportunity for N2O mitigation. We translated our previous results to realistic field conditions by means of a field experiment with a barley crop (Hordeum vulgare L.) under rainfed Mediterranean conditions, in which we evaluated the effectiveness of NBPT in reducing N losses and increasing crop yields.
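The WFPS levels that define these treatments (40, 60 and 80%) are derived from soil water content and bulk density via the standard conversion, sketched below. The particle density of 2.65 g cm-3 is the usual assumption for mineral soils; the function name and example values are illustrative:

```python
PARTICLE_DENSITY = 2.65  # g cm-3, typical assumption for mineral soils

def wfps(gravimetric_water, bulk_density, particle_density=PARTICLE_DENSITY):
    """Water-filled pore space (%) from gravimetric water content
    (g water per g dry soil) and bulk density (g cm-3)."""
    volumetric_water = gravimetric_water * bulk_density   # cm3 water / cm3 soil
    porosity = 1.0 - bulk_density / particle_density      # cm3 pores / cm3 soil
    return 100.0 * volumetric_water / porosity

# e.g. 0.25 g g-1 water at a bulk density of 1.3 g cm-3:
print(round(wfps(0.25, 1.3), 1))  # ≈ 63.8% WFPS
```

In an incubation such as the one described, the relation is typically inverted to find the mass of water to add to each soil core to hit a target WFPS.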
Crop yield, soil mineral N concentrations, dissolved organic carbon (DOC), denitrification potential, and NH3, N2O and nitric oxide (NO) fluxes were measured during the growing season. The inclusion of the inhibitor reduced NH3 emissions in the 30 d following urea application by 58% and net N2O and NO emissions in the 95 d following urea application by 86 and 88%, respectively. NBPT addition also increased grain yield by 5% and N uptake by 6%, although neither increase was statistically significant. Under the experimental conditions presented here, these results demonstrate the potential of the urease inhibitor NBPT in abating NH3, N2O and NO emissions from arable soils fertilized with urea, slowing urea hydrolysis and releasing lower concentrations of NH4+ to the upper soil layer. Drip irrigation combined with split application of N fertilizer dissolved in the irrigation water (i.e. drip fertigation) is commonly considered best management practice for water and nutrient efficiency. Some of the main factors (WFPS, NH4+ and NO3−) regulating the emissions of GHGs (i.e. N2O, carbon dioxide (CO2) and methane (CH4)) and NO can easily be manipulated by drip fertigation without yield penalties. In this study, we tested management options to reduce these emissions in a field experiment with a melon (Cucumis melo L.) crop. Treatments included drip irrigation frequency (weekly/daily) and type of N fertilizer (urea/calcium nitrate) applied by fertigation. Crop yield, environmental parameters, soil mineral N concentrations, and N2O, NO, CH4 and CO2 fluxes were measured during the growing season. Fertigation with urea instead of calcium nitrate increased N2O and NO emissions by a factor of 2.4 and 2.9, respectively (P < 0.005). Daily irrigation reduced NO emissions by 42% (P < 0.005) but increased CO2 emissions by 21% (P < 0.05) compared with weekly irrigation.
Based on yield-scaled Global Warming Potential as well as NO emission factors, we conclude that weekly fertigation with a NO3−-based fertilizer is the best option to combine agronomic productivity with environmental sustainability. Agricultural soils in semiarid Mediterranean areas are characterized by low organic matter contents and low fertility levels. Application of crop residues and/or manures as amendments is a cost-effective and sustainable alternative to overcome this problem. However, these management practices may induce important changes in the nitrogen oxide emissions from these agroecosystems, with additional impacts on CO2 emissions. In this context, a field experiment was carried out with a barley (Hordeum vulgare L.) crop under Mediterranean conditions to evaluate the effect of combining maize (Zea mays L.) residues and N fertilizer inputs (organic and/or mineral) on these emissions. Crop yield and N uptake, soil mineral N concentrations, dissolved organic carbon (DOC), denitrification capacity, and N2O, NO and CO2 fluxes were measured during the growing season. The incorporation of maize stover increased N2O emissions during the experimental period by c. 105%. Conversely, NO emissions were significantly reduced in the plots amended with crop residues. The partial substitution of urea by pig slurry reduced net N2O emissions by 46 and 39%, with and without the incorporation of crop residues respectively. Net emissions of NO were reduced by 38 and 17% for the same treatments. The molar DOC:NO3− ratio was found to be a robust predictor of N2O and NO fluxes. The main effect of the interaction between crop residue and N fertilizer application occurred in the medium term (4-6 months after application), enhancing N2O emissions and decreasing NO emissions as a consequence of residue incorporation. The substitution of urea by pig slurry can be considered a good management strategy, since N2O and NO emissions were reduced by the use of the organic residue.
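The two summary metrics used in these last two studies, yield-scaled Global Warming Potential and the molar DOC:NO3− ratio, are simple quotients. A hedged sketch of both, assuming IPCC AR4 100-year GWP factors (N2O = 298, CH4 = 25) and the molar masses of C and N; the exact factors and accounting boundaries used in the thesis may differ, and all example numbers are illustrative:

```python
GWP_N2O = 298  # kg CO2-eq per kg N2O (IPCC AR4, 100-yr horizon; an assumption here)
GWP_CH4 = 25   # kg CO2-eq per kg CH4

def yield_scaled_gwp(n2o_kg_ha, ch4_kg_ha, co2_kg_ha, yield_t_ha):
    """Global Warming Potential of the measured fluxes, scaled by crop yield
    (kg CO2-eq per t of product)."""
    gwp = n2o_kg_ha * GWP_N2O + ch4_kg_ha * GWP_CH4 + co2_kg_ha
    return gwp / yield_t_ha

def molar_doc_no3_ratio(doc_mg_kg, no3_n_mg_kg):
    """Molar DOC:NO3- ratio from dissolved organic C (mg C kg-1 soil)
    and nitrate (mg NO3-N kg-1 soil)."""
    return (doc_mg_kg / 12.01) / (no3_n_mg_kg / 14.01)

# Illustrative values: 1 kg N2O ha-1 over a 5 t ha-1 crop, and a soil with
# 120.1 mg C and 14.01 mg NO3-N per kg
print(yield_scaled_gwp(1.0, 0.0, 0.0, 5.0))        # 59.6 kg CO2-eq t-1
print(round(molar_doc_no3_ratio(120.1, 14.01), 2))  # 10.0
```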
Grassland ecosystems worldwide provide many important ecosystem services, but they also function as a major source of N2O, especially in response to N deposition by grazing animals. In order to explore the role of plants as mediators of these emissions, we tested whether and how N2O emissions depend on grass species richness and/or specific grass species composition in the absence and presence of urine deposition. We hypothesized that: 1) N2O emissions relate negatively to plant productivity; 2) four-species mixtures have lower emissions than monocultures (as they are expected to be more productive); 3) emissions are lowest in combinations of species with diverging root morphology and high root biomass; and 4) the identity of the key species that reduce N2O emissions is dependent on urine deposition. We established monocultures and two- and four-species mixtures of common grass species with diverging functional traits: Lolium perenne L. (Lp), Festuca arundinacea Schreb. (Fa), Phleum pratense L. (Php) and Poa trivialis L. (Pt), and quantified N2O emissions for 42 days. We found no relation between plant species richness and N2O emissions. However, N2O emissions were significantly reduced in specific plant species combinations. In the absence of urine, plant communities of Fa+Php acted as a sink for N2O, whereas the monocultures of these species constituted a N2O source. With urine application, Lp+Pt plant communities reduced (P < 0.001) N2O emissions by 44% compared to monocultures of Lp. Reductions in N2O emissions by species mixtures could be explained by higher total biomass productivity and by complementarity in root morphology. Our study shows that plant species composition is a key component underlying N2O emissions from grassland ecosystems. Selection of specific grass species combinations in the context of the expected nitrogen deposition regimes may therefore provide a key management practice for mitigation of N2O emissions.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Nondestructive techniques are widely used to assess existing timber structures. The models proposed for these methods are usually developed in the laboratory using small clear wood specimens, but in real situations many anomalies, defects and instances of biological damage are found in wood. In these cases, the existing models only indicate that the values are outside normality, without providing any further information. To address this problem, a study of non-destructive probing methods for wood was performed, testing the behaviour of four different techniques (penetration resistance, pullout resistance, drill resistance and chip drill extraction) on wood samples with different types of biological damage, simulating an in-situ test. The wood samples were obtained from existing Spanish timber structures with biotic damage caused by borer insects, termites, brown rot and white rot. The study concludes that all of the methods offer more or less detailed information about the degree of deterioration of wood, but that the first two (penetration and pullout resistance) cannot distinguish between pathologies. On the other hand, drill resistance and chip drill extraction make it possible to differentiate pathologies and even to identify species or damage location. Finally, the techniques used were compared to characterize their advantages and disadvantages.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Agile development methodologies have surged in industrial environments in recent years owing to the speed and reliability of the development processes they propose. The DevOps philosophy, and specifically methodologies derived from it such as Continuous Delivery or Continuous Deployment, promote fully automated management of the application lifecycle, from the source code to the applications running in production environments. Automation is seen as a means of producing repeatable, reliable and fast processes. However, not all parts of the Continuous methodologies are fully automated. In particular, the management of runtime parameter configuration is a problem that has been aggravated by the elasticity and scalability provided by cloud computing technologies. Most current deployment tools can automate the deployment of runtime parameter configuration, but they offer no support for setting those parameters or for validating the files they deploy, mainly because of the wide range of configuration options and the fact that the value of many of those parameters is set on the basis of preferences expressed by the user. This makes it seem that any solution to the problem must be tailored to a specific application rather than offering a general solution. In order to solve this problem, I propose a configuration model that can be inferred from existing configuration instances, that can reflect user preferences, and that can be used to facilitate configuration processes. The configuration model can be used as the basis of an interactive configuration process capable of guiding a human operator through the configuration of an application for its deployment in a given environment, or of automatically detecting configuration changes and producing a valid configuration that accommodates those changes. Moreover, the configuration model should be managed like any other software artifact and should be incorporated into the usual management practices. For that reason, I also propose a service management model that includes information on runtime parameter configuration and that is able to describe and manage current architectural proposals such as microservice architectures. ABSTRACT Agile development methodologies have risen in popularity within the industry in recent years due to the speed and reliability of the processes they propose. The DevOps philosophy, and specifically the methodologies derived from it such as Continuous Delivery and Continuous Deployment, push for a totally automated management of the application lifecycle, from the source code to the software running in the production environment. Automation in this regard is used as a means to produce repeatable, reliable and fast processes. However, not all parts of the Continuous methodologies are completely automated. In particular, management of runtime parameter configuration is a problem whose impact on the deployment process has increased due to the scalability and elasticity provided by cloud technologies.
Most deployment tools nowadays can automate the deployment of runtime parameter configuration, but they offer no support for parameter setting or configuration validation, as the range of different configuration options and the fact that the value of many of those parameters is based on user preference seem to imply that any solution to the problem will have to be tailored to a specific application. With the aim of solving this problem, I propose a configuration model that can be inferred from existing configurations and that reflects user preferences in order to ease the configuration process. The configuration model can be used as the base of an interactive configuration process capable of guiding a human operator through the configuration of an application for its deployment in a specific environment, or of automatically detecting configuration changes and producing valid runtime parameter configurations that take those changes into account. Additionally, the configuration model should be managed as any other software artefact and should be incorporated into current management practices. I also propose a service management model that includes the configuration information and that is able to describe and manage current architectural practices such as the microservices architecture.
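The core idea, inferring a configuration model from existing configuration instances and then using it to validate a new configuration, can be illustrated with a deliberately minimal sketch. The flat key-value format, the parameter names and the notion of "model" used here are assumptions for illustration; the thesis model is far richer:

```python
def infer_model(instances):
    """Infer a minimal configuration model from existing configuration
    instances: for each parameter, record its value type and the set of
    observed values (a crude stand-in for user preferences)."""
    model = {}
    for inst in instances:
        for key, value in inst.items():
            entry = model.setdefault(key, {"type": type(value), "seen": set()})
            entry["seen"].add(value)
    return model

def validate(config, model):
    """Report parameters that are unknown, of the wrong type, or missing."""
    errors = []
    for key, value in config.items():
        if key not in model:
            errors.append(f"unknown parameter: {key}")
        elif not isinstance(value, model[key]["type"]):
            errors.append(f"bad type for {key}: {type(value).__name__}")
    for key in model:
        if key not in config:
            errors.append(f"missing parameter: {key}")
    return errors

# Hypothetical existing deployments of the same application
instances = [
    {"port": 8080, "log_level": "info", "workers": 4},
    {"port": 9090, "log_level": "debug", "workers": 8},
]
model = infer_model(instances)
# Flags the string-typed "port" and the missing "workers"
print(validate({"port": "8080", "log_level": "info"}, model))
```

An interactive configuration process, as described in the abstract, would iterate over the model's parameters, propose observed values as defaults, and re-validate after each answer from the operator.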

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of the resulting communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts execute on several nodes that interact with each other via a communication network. Java’s popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed.
The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem. Currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, such as predictability and reliability of timing behavior and resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor functional and timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations. Although serialization is one of the fundamental operations needed to ensure proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model.
The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements. Distributed real-time embedded systems are increasingly important to society. Demand for them is growing and we depend more and more on the services they provide. High-integrity systems constitute a subset of great importance, characterized by the fact that a failure in their operation can cause loss of human life, damage to the environment or heavy financial losses. The need to satisfy strict timing requirements makes their development more complex. As embedded systems continue to spread through our society, it is necessary to keep development costs contained through the use of suitable techniques in their design, maintenance and certification; in particular, a flexible, hardware-independent technology is required. The evolution of communication networks and paradigms, as well as the need for greater computing power and fault tolerance, has motivated the interconnection of electronic devices, whose communication mechanisms allow data transfer at high transmission speeds. In this context, the concept of a distributed system has emerged: systems whose components execute on several nodes in parallel and interact with each other through communication networks. An interesting class are real-time systems that are neutral with respect to their execution platform, characterized by the absence of knowledge of that platform during their design. This property is relevant because such systems should run on the widest possible variety of architectures, have an average lifetime of more than ten years, and the place where they run may change. The Java programming language is a good basis for developing this type of system. For this reason the RTSJ (Real-Time Specification for Java) was created, an extension of the language to allow the development of real-time systems. However, the RTSJ provides no facilities for the development of distributed real-time applications. This is an important limitation, given that most current and future systems will be distributed. The DRTSJ (Distributed RTSJ) group was created under the Java Community Process (JSR-50) in order to define the abstractions that address this limitation, but at present there is still no formal specification. The objective of this thesis is to develop a communication middleware for the development of distributed real-time systems in Java, based on the integration between the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented taking into account the main requirements, such as the predictability and reliability of timing behavior and resource usage. The design starts from the definition of a computational model which identifies, among other things: the communication model, the most suitable underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to execute synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees adequate timing behavior. Mechanisms to supervise functional and timing behavior have also been included. Independence from the network protocol has been sought by defining a network interface and specific modules. The JRMP protocol has also been modified to include the different phases, non-functional parameters and message-size optimizations. Although serialization is one of the fundamental operations for ensuring the proper transmission of data, current implementations are not suitable for critical systems and there are no alternatives. This work proposes a predictable serialization, which has entailed the development of a new compiler for the generation of optimized code according to the computational model. The proposed solution has the advantage of allowing us, at compilation time, to schedule the communications and to adjust the memory usage. In order to validate the design and implementation, a demanding validation process was carried out with emphasis on: the functional behavior, the memory usage, the processor usage (end-to-end response time and the response time of each functional block) and the network usage (real consumption in accordance with the estimate). The good results obtained in an industrial application developed by Thales Avionics (a flight management system) and in exhaustive tests have shown that the design and the prototype are reliable for industrial applications with strict timing requirements.
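The thesis' predictable serialization is built on a compiler for Java; as a language-agnostic illustration of the underlying idea — fixing each message layout ahead of time so that buffer sizes and (de)serialization costs are known before any invocation runs — here is a minimal Python sketch. The message layout (method id, deadline, two double arguments) is invented for the example:

```python
import struct

# Fixed wire layout for a hypothetical remote invocation, big-endian:
# method id (u16), deadline in microseconds (u64), two f64 arguments.
INVOCATION_FMT = ">HQdd"
INVOCATION_SIZE = struct.calcsize(INVOCATION_FMT)  # known statically: 26 bytes

def serialize_invocation(method_id, deadline_us, arg1, arg2):
    """Pack an invocation into a fixed-size buffer; no dynamic allocation
    decisions are left to run time."""
    return struct.pack(INVOCATION_FMT, method_id, deadline_us, arg1, arg2)

def deserialize_invocation(buf):
    """Unpack a buffer produced by serialize_invocation."""
    return struct.unpack(INVOCATION_FMT, buf)

msg = serialize_invocation(7, 500_000, 1.5, -2.0)
assert len(msg) == INVOCATION_SIZE  # size fixed before execution
print(INVOCATION_SIZE)
```

Because every message size is a compile-time constant, worst-case transmission times and buffer memory can be bounded offline, which is the property the middleware needs in order to schedule communications and dimension memory ahead of execution.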