15 results for Freezing and processing

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

Following the processing and validation of JEFF-3.1 performed in 2006 and presented at ND2007, and as a consequence of the latest update of this library (JEFF-3.1.2) in February 2012, a new processing and validation of the JEFF-3.1.2 cross-section library is presented in this paper. The processed library in ACE format at ten different temperatures was generated with the NJOY-99.364 nuclear data processing system. In addition, NJOY-99 inputs are provided to generate the PENDF, GENDF, MATXSR and BOXER formats. The library has undergone strict QA procedures, being compared with other available libraries (e.g. ENDF/B-VII.1) and with other processing codes such as the PREPRO-2000 codes. A set of 119 criticality benchmark experiments taken from ICSBEP-2010 has been used for validation purposes.
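As a minimal sketch of the kind of statistic used in this sort of benchmark validation, the snippet below computes calculated-to-experimental (C/E) ratios of k-eff and the resulting mean bias. The benchmark names and values are placeholders, not results from the paper.

```python
# C/E statistics for criticality benchmark validation (illustrative data only).
from statistics import mean, stdev

benchmarks = [
    # (benchmark id, calculated k-eff, experimental k-eff)
    ("HEU-MET-FAST-001", 0.99962, 1.00000),
    ("PU-SOL-THERM-011", 1.00138, 1.00000),
    ("LEU-COMP-THERM-008", 0.99851, 1.00000),
]

ratios = [calc / exp for _, calc, exp in benchmarks]
bias_pcm = [(r - 1.0) * 1e5 for r in ratios]  # deviation in pcm

print(f"mean C/E = {mean(ratios):.5f}")
print(f"mean bias = {mean(bias_pcm):+.1f} pcm, spread = {stdev(bias_pcm):.1f} pcm")
```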

Relevance:

100.00%

Publisher:

Abstract:

Long-length ultrafine-grained (UFG) Ti rods are produced by equal-channel angular pressing via the conform scheme (ECAP-C) at 200 °C, followed by drawing at 200 °C. The evolution of the microstructure, macrotexture and mechanical properties (yield strength, ultimate tensile strength, failure stress, uniform elongation, elongation to failure) of pure Ti during this thermo-mechanical processing is studied. Special attention is also paid to the effect of microstructure on the mechanical behavior of the material after macrolocalization of plastic flow. The number of ECAP-C passes varies in the range of 1-10. The microstructure becomes more refined with an increasing number of ECAP-C passes; the formation of a homogeneous microstructure with a grain/subgrain size of 200 nm and its saturation after 6 ECAP-C passes are observed. Strength properties increase with the number of ECAP-C passes and saturate after 6 passes at a yield strength of 973 MPa, an ultimate tensile strength of 1035 MPa, and a true failure stress of 1400 MPa (from 625, 750, and 1150 MPa in the as-received condition). The uniform elongation decreases after ECAP-C processing, whereas the reduction of area and the true strain to failure do not. The sample after 6 ECAP-C passes is subjected to drawing at 200 °C, resulting in a reduction of the grain/subgrain size to 150 nm, the formation of a $(10\bar{1}0)$ fiber texture with respect to the rod axis, and a further increase of the yield strength up to 1190 MPa, the ultimate tensile strength up to 1230 MPa, and the true failure stress up to 1600 MPa. It is demonstrated that UFG CP Ti has low resistance to macrolocalization of plastic deformation and high resistance to crack formation after necking.

Relevance:

90.00%

Publisher:

Abstract:

The author participated in the 6th EU Framework Project "Q-PorkChains" (FP6-036245-2) from 2007 to 2009. From the work reports of China and other countries, it was found that, compared with other countries, China has serious problems in pork quality and safety. A comparison of pork chain management between China and Spain showed that the difference in governance structure is one of the main differences between the two countries. In China, spot-market relationships still dominate the governance structure of the pork chain, especially between the numerous household pig holders and the great number of small slaughterhouses, while in Spain chain agents commonly cooperate through cooperatives or integrations. Recent studies on quality management at the chain level have also shown that supply chain integration has a direct effect on quality management practices (Han, 2010). The author therefore investigated governance structure choices in supply chain management. This was set as the first research objective: to explain the governance structure choice process and its influencing factors in supply chain management, analyzing pork chain cases in Spain and China. During the investigation, the author also noticed that the international trade of pork between Spain and China has not been smooth since the signature of the bilateral agreement on pork trade in 2007. Thus, the second objective of the research is to identify and solve the problems existing in the international pork chain between Spain and China. For the first objective, to explain governance structure choices in supply chain management, the thesis conducts research in three main sections. First, the thesis gives a literature overview in chapter two on Supply Chain Management (SCM), agri-food chain management and pork chain management. It concludes that SCM is a systems approach that views the supply chain as a whole and manages the total flow of goods from the supplier to the ultimate customer. It includes the bi-directional flow of products (materials and services) and information, and the associated managerial and operational activities. It is also customer-focused, creating a unique and individual source of customer value through an appropriate use of resources, leading to customer satisfaction and building competitive chain advantages. Agri-food chain management and pork chain management are applications of SCM in the agri-food and pork sectors, respectively. The research then gives a comparative study, in chapter three, of the pork chain and pork chain management in Spain and China. Many differences are found, the main one being the governance structure in pork chain management. Furthermore, the author presents an empirical study on governance structure choice in chapter five. It is concluded that the governance structure of a supply chain consists of a collection of rules, institutions and constraints structuring the transactions between the various stakeholders.
Based on the literature closely related to governance structure, such as transaction cost economics, transaction value analysis and resource-based view theories, seven hypotheses are proposed. Hypothesis 1: transaction cost has a positive relationship with governance structure choice. Hypothesis 2: uncertainty has a positive relationship with transaction cost; higher uncertainty exerts higher transaction cost. Hypothesis 3: the relationship between asset specificity and transaction cost is positive. Hypothesis 4: collaboration advantages and governance structure choice have a positive relationship. Hypothesis 5: willingness to collaborate has a positive relationship with collaboration advantages. Hypothesis 6: capability to collaborate has a positive relationship with collaboration advantages. Hypothesis 7: uncertainty has a negative effect on collaboration advantages. It is noted that, as the transaction cost value is negative, the transaction cost mentioned in the hypotheses is its absolute value. To test the seven hypotheses, a Structural Equation Model (SEM) is applied, using data collected from 350 pork slaughtering and processing companies in Jiangsu, Shandong and Henan Provinces in China. Based on the empirical SEM model and its results, the seven hypotheses are confirmed, and the author draws several conclusions. It is found that the governance structure choice of the chain depends not only on transaction cost but also on collaboration advantages. Exchange partners establish more stable and more intense relationships to reduce transaction cost and to maximize collaboration advantages. "Collaboration advantages" is defined in this thesis as the joint value achieved through the transactions (mutual activities) of agents in supply chains. This value takes the form of improvements, mainly in mutual logistics systems, cash response, information exchange, technological and innovative improvements, and quality management improvements. Governance structure choice is jointly decided by transaction cost and collaboration advantages: chain agents adopt different governance structures to coordinate, in order to decrease their transaction cost and to increase their collaboration advantages. In China's pork chain, spot-market relationships dominate the governance structure among the numerous backyard pig farmers and small family slaughterhouses, as they are connected by acquaintance relationships and the transaction cost is therefore low. Their relationship is reliable because they know each other in the neighborhood; as a result, the spot-market relationship is suitable for their exchange. However, transactions between large-scale slaughtering and processing industries and small-scale pig producers are becoming difficult. The information-withholding and hold-up behavior of small-scale pig producers increases the transaction cost between them and the large-scale slaughtering and processing industries. Thus, through more intense and stable relationships with pig producers, processing industries reduce transaction cost and improve collaboration advantages with their chain partners, in which quality and safety collaboration advantages are increased, meaning that processing industries are able to provide consumers with products of better quality and higher safety. It is also concluded that transaction cost is influenced mainly by uncertainty and asset specificity, in line with the new institutional economics theories developed by O. E. Williamson.
In China's pork chain, behavioral uncertainty is created by the hold-up behavior of the great number of small pig producers, while the big slaughtering and processing industries have strong asset specificity. On the other hand, collaboration advantages are influenced by the chain agents' willingness to collaborate and their capability to cooperate. With the fast growth of large-scale slaughtering and processing industries, these industries are more willing to cooperate with their chain members and more capable of creating joint value together with other chain agents. Therefore, they are now the main chain agents driving more intense and stable governance structures in China's pork chain. For the second objective, to identify and solve the problems in the international pork chain between Spain and China, the research gives an analysis of the international pork chain in chapter four. This study explains, from the chain perspective, why the international trade of pork between Spain and China is not sufficient. The first obstacle found is the high quality and safety requirements set by the Chinese government, which make it difficult for Spanish companies to obtain authorization to export. Other important reasons why Spain does not export large quantities of pork products to China are that Spanish pork is not competitive in price compared with that of other countries such as Denmark, the United States and Canada, and that Chinese consumers do not have sufficient information on Spanish pork products. It is concluded that China's government places too much concern on the quality and safety requirements for Spanish pork products, which makes trade difficult to complete. The two countries need to establish a more stable and intense trade relationship. They should also make information exchange sufficient and efficient and try to break down trade barriers. Spanish companies should consider proper price strategies to win the Chinese pork market.
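As a minimal sketch of how the seven hypothesized paths could be written down as a structural equation model, the snippet below uses the Python `semopy` package with lavaan-style syntax. All latent variable and indicator names are hypothetical; the thesis's actual survey items differ, and the data here is random placeholder data standing in for the 350-company survey.

```python
# Hypothetical SEM specification mirroring the seven hypotheses:
# H2, H3: TransactionCost ~ Uncertainty + AssetSpecificity
# H5, H6, H7: CollabAdvantage ~ Willingness + Capability + Uncertainty
# H1, H4: Governance ~ TransactionCost + CollabAdvantage
import numpy as np
import pandas as pd
from semopy import Model

model_desc = """
TransactionCost =~ tc1 + tc2 + tc3
Uncertainty =~ un1 + un2
AssetSpecificity =~ as1 + as2
CollabAdvantage =~ ca1 + ca2 + ca3
Willingness =~ wi1 + wi2
Capability =~ cp1 + cp2
Governance =~ gv1 + gv2
TransactionCost ~ Uncertainty + AssetSpecificity
CollabAdvantage ~ Willingness + Capability + Uncertainty
Governance ~ TransactionCost + CollabAdvantage
"""

cols = ["tc1", "tc2", "tc3", "un1", "un2", "as1", "as2", "ca1", "ca2", "ca3",
        "wi1", "wi2", "cp1", "cp2", "gv1", "gv2"]
data = pd.DataFrame(np.random.randn(350, len(cols)), columns=cols)  # placeholder

model = Model(model_desc)
model.fit(data)
print(model.inspect())  # path coefficients and significance
```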

Relevance:

90.00%

Publisher:

Abstract:

Freezing of water or salt solution in concrete pores is a main cause of severe damage and of a significant reduction of service life. Most freeze-thaw (F-T) accelerated tests measure the scaling of concrete by weighing. This paper presents complementary procedures, based on the use of strain gauges and ultrasonic pulse velocity (UPV), for measuring the deterioration of concrete due to freezing and thawing. These non-destructive testing (NDT) procedures are applied to two types of concrete, one susceptible to F-T damage and the other not. The results show a good correlation between scaling and the measurements obtained with NDT, with NDT offering the advantage of detecting the damage earlier and of allowing continuous measurement.
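The snippet below is a minimal sketch of the kind of correlation check the abstract reports: comparing cumulative scaling against the relative drop in UPV across freeze-thaw cycles. The numbers are illustrative, not measured data from the paper.

```python
# Correlating scaling mass loss with a UPV-based damage indicator.
import numpy as np

ft_cycles = np.array([0, 7, 14, 28, 56])
scaling_g_m2 = np.array([0.0, 35.0, 120.0, 410.0, 980.0])     # cumulative scaling
upv_m_s = np.array([4650.0, 4600.0, 4430.0, 4100.0, 3550.0])  # pulse velocity

# A simple damage indicator: relative drop in UPV against the initial value.
upv_drop = 1.0 - upv_m_s / upv_m_s[0]

r = np.corrcoef(scaling_g_m2, upv_drop)[0, 1]
print(f"Pearson r between scaling and UPV drop: {r:.3f}")
```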

Relevance:

90.00%

Publisher:

Abstract:

Soybean meal (SBM) is the main protein source in livestock feeds. The United States (USA), Brazil (BRA) and Argentina (ARG) are the major SBM-exporting countries. The nutritive value of SBM varies because genetics, environment, farming conditions and processing of the beans strongly influence the content and availability of the major nutrients. The present research was conducted to determine the influence of origin (USA, BRA and ARG) on the nutritive value and protein quality of SBM.

Relevance:

90.00%

Publisher:

Abstract:

Virtual certification replaces part of the experimental techniques required for rail vehicle certification with computer simulations. In this paper, several works in which these techniques were used in vehicle design and track maintenance processes are presented. Dynamic simulation of multibody systems was used to virtually apply the EN 14363 standard to certify the dynamic behaviour of vehicles. The works described are: assessment of a freight bogie design adapted to metre gauge, assessment of a railway track layout for a subway network, design of a freight bogie for higher speed and axle load, and processing of the data acquired by a track recording vehicle for track maintenance.

Relevance:

90.00%

Publisher:

Abstract:

In this work we propose an image acquisition and processing methodology (framework) developed for in-field grape and leaf detection and quantification, based on six steps: 1) image segmentation through Fuzzy C-Means clustering with Gustafson-Kessel distance (FCM-GK); 2) use of the FCM-GK outputs (centroids) as seeds for K-Means clustering; 3) identification of the clusters generated by K-Means using a Support Vector Machine (SVM) classifier; 4) morphological operations over the grape and leaf clusters in order to fill holes and to eliminate small pixel clusters; 5) creation of a mosaic image by means of the Scale-Invariant Feature Transform (SIFT) in order to avoid overlapping between images; 6) calculation of the areas of leaves and grapes and location of the centroids of the grape bunches. Image data are collected using a colour camera fixed to a mobile platform. This platform was developed to provide a stabilized surface that guarantees that the images are acquired parallel to the vineyard rows; in this way, the platform avoids the image distortions that lead to poor estimation of the areas. Our preliminary results are promising, although they show that it is still necessary to implement a camera stabilization system to avoid undesired camera movements, as well as a parallel processing procedure to speed up the mosaicking process.
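As a minimal sketch of steps 2-4 of the pipeline, the snippet below seeds K-Means with centroids assumed to come from an FCM-GK stage, labels the clusters with an SVM, and cleans the resulting mask morphologically. The FCM-GK stage and the SIFT mosaicking are omitted, and the image, centroids and training data are synthetic placeholders, not the authors' implementation.

```python
# Sketch of cluster seeding (step 2), SVM labelling (step 3) and
# morphological clean-up (step 4) on a stand-in image.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from scipy import ndimage

rgb = np.random.randint(0, 255, (120, 160, 3)).astype(float)  # stand-in image
pixels = rgb.reshape(-1, 3)

# Step 2: seed K-Means with centroids assumed to come from FCM-GK.
fcm_gk_centroids = np.array([[30, 60, 25], [90, 30, 110], [200, 200, 190]], float)
km = KMeans(n_clusters=3, init=fcm_gk_centroids, n_init=1).fit(pixels)

# Step 3: an SVM trained on labelled examples assigns each cluster a class.
train_X = np.vstack([fcm_gk_centroids + np.random.randn(3) for _ in range(30)])
train_y = np.tile(["leaf", "grape", "background"], 30)
svm = SVC().fit(train_X, train_y)
cluster_classes = svm.predict(km.cluster_centers_)

# Step 4: binary mask of grape pixels, hole filling, small-cluster removal.
grape_mask = (cluster_classes[km.labels_] == "grape").reshape(rgb.shape[:2])
grape_mask = ndimage.binary_fill_holes(grape_mask)
grape_mask = ndimage.binary_opening(grape_mask)
print("grape area (pixels):", int(grape_mask.sum()))
```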

Relevance:

90.00%

Publisher:

Abstract:

We discuss from a practical point of view a number of issues involved in writing distributed Internet and WWW applications using LP/CLP systems. We describe PiLLoW, a public-domain Internet and WWW programming library for LP/CLP systems that we have designed in order to simplify the process of writing such applications. PiLLoW provides facilities for accessing documents and code on the WWW; parsing, manipulating and generating HTML and XML structured documents and data; producing HTML forms; writing form handlers and CGI scripts; and processing HTML/XML templates. An important contribution of PiLLoW is to model HTML/XML code (and, thus, the content of WWW pages) as terms. The PiLLoW library has been developed in the context of the Ciao Prolog system, but it has been adapted to a number of popular LP/CLP systems, which support most of its functionality. We also describe the use of concurrency and of a high-level model of client-server interaction, Ciao Prolog's active modules, in the context of WWW programming. We propose a solution for client-side downloading and execution of Prolog code using generic browsers. Finally, we also provide an overview of related work on the topic.
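PiLLoW itself represents pages as Prolog terms; since this listing uses Python for all sketches, the snippet below is only a language-neutral illustration of that idea (nested terms rendered to markup), not PiLLoW's actual API.

```python
# Represent HTML as nested terms (here, tuples) and render the term to text.
def html2text(term):
    """Render a term: a string, a (tag, attrs, children) tuple, or a list."""
    if isinstance(term, str):
        return term
    if isinstance(term, list):
        return "".join(html2text(t) for t in term)
    tag, attrs, children = term
    attr_text = "".join(f' {k}="{v}"' for k, v in attrs.items())
    return f"<{tag}{attr_text}>{html2text(children)}</{tag}>"

page = ("html", {}, [
    ("head", {}, ("title", {}, "Demo")),
    ("body", {}, [
        ("h1", {}, "Hello"),
        ("p", {"class": "intro"}, "Pages as terms are easy to manipulate."),
    ]),
])
print(html2text(page))
```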

Relevance:

90.00%

Publisher:

Abstract:

Manufacturing of high-performance polymer-matrix composites is normally carried out in an autoclave, using prepreg tapes stacked and consolidated under the simultaneous application of pressure and temperature. High autoclave pressures reduce the porosity in the laminate and ensure excellent mechanical properties. However, this manufacturing route is expensive in terms of capital investment and processing time, hindering its application in many industrial sectors. This fact has driven the demand for alternative, out-of-autoclave processing routes. These techniques claim to produce composite parts faster and at lower cost, but the mechanical performance is also reduced due to the lower fiber content and the higher porosity. Current numerical models are able to simulate the mechanisms of void growth in polymer-matrix composites processed in autoclave. However, these models are restricted to small spherical voids surrounded by a viscous resin. Their validity has not been proved for long cylindrical voids in a viscous matrix surrounded by aligned fibers, the standard morphology observed in out-of-autoclave composites. In addition, there is clear experimental evidence of the detrimental effect of voids on the mechanical performance of composites, but there is no detailed information regarding the influence of curing conditions on the actual volume fraction, shape and spatial distribution of voids within the laminate. The standard techniques for microstructural characterization of composites (optical or electron microscopy, X-ray radiography, ultrasonics, acoustic emission) provide information in two dimensions, are not always suitable to determine the porosity or void population, and cannot provide 3D information.
In this work, the effect of the curing cycle on the development of voids during consolidation of Hexply AS4/8552 prepregs at low pressure by compression molding was studied in unidirectional and multiaxial panels. They were manufactured using three different curing cycles, carefully designed following the rheological and thermal analysis of the raw prepregs. The void volume fraction, shape and spatial distribution were analyzed in detail by means of X-ray computed microtomography, a non-destructive technique that has demonstrated its potential for analyzing the microstructural features of composites. It was demonstrated that the final void volume fraction depended on the evolution of the dynamic viscosity throughout the cycle, and that most of the initial voids were the result of air entrapment and wrinkles created during lay-up. In the case of the multiaxial panels, the porosity was also affected by the stacking sequence. Voids were rod-shaped, oriented parallel to the fibers, and concentrated in channels along the fiber orientation. X-ray computed tomography analysis of the voids along the fiber direction revealed a cellular structure with an approximate cell diameter of 1 mm; the cell walls were fiber-rich regions, while the porosity was localized at the center of the cells. This porosity distribution within the laminate was the result of inhomogeneous consolidation. This information is critical to optimize processing parameters and to provide inputs for virtual testing and virtual processing tools.
In addition, the matrix-controlled mechanical properties of the panels were measured in order to establish the relationship between processing conditions and mechanical performance. The interlaminar shear strength (ILSS) and the interlaminar toughness (GIc and GIIc) were selected to evaluate the effect of porosity on the mechanical performance of the unidirectional panels. The ILSS was strongly affected by the porosity when the void content was higher than 1%. The same trend was observed in the case of GIIc, while GIc was insensitive to the void volume fraction. Additionally, the mechanical performance of the multiaxial panels in compression, low-velocity impact and compression after impact (CAI) was measured to address the effect of processing conditions. The compressive strength decreased with porosity and ply clustering. However, the porosity did not influence the impact resistance or the compression-after-impact strength, because the effect of porosity was masked by other factors such as the damage due to impact or the laminate lay-up. Finally, the effect of the processing conditions on the compaction behavior of unidirectional AS4/8552 panels manufactured by compression molding was simulated using the finite element method, as a first approximation to more complex and accurate models for out-of-autoclave curing and consolidation of composite laminates. The model parameters were obtained from rheological and thermo-mechanical experiments carried out on raw prepreg samples. The predictions of the thickness change during consolidation were in reasonable agreement with the experimental results.
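As a minimal sketch of how a void volume fraction can be estimated from a segmented microtomography volume, the snippet below thresholds a grayscale voxel stack and counts void voxels. The volume and threshold are synthetic stand-ins, not the study's data.

```python
# Void volume fraction from a (synthetic) reconstructed tomography volume.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(loc=0.8, scale=0.1, size=(64, 64, 64))  # stand-in scan
volume[20:24, 10:50, 30:34] = 0.1  # an elongated, rod-like "void"

void_threshold = 0.4  # grayscale level separating voids from material
void_mask = volume < void_threshold

void_fraction = void_mask.mean()
print(f"void volume fraction: {100 * void_fraction:.2f} %")
```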

Relevance:

90.00%

Publisher:

Abstract:

Neural bruise prediction models have been implemented in the laboratory for the most traded fruit species and varieties, allowing the prediction of the acceptability or rejectability of damages with respect to the EC standards. Different models have been built for both quasi-static (compression) and dynamic (impact) loads, covering the whole commercial ripening period of the fruits. A simulation process has been developed that gathers the information from the laboratory bruise models and the load-sensor calibrations of different electronic devices (IS-100 and DEA-1, for impact and compression loads respectively). An evaluation methodology has been designed that combines the information on the mechanical properties of the fruits with the loading records of the electronic devices. The evaluation system allows the current state of the fruit handling process and machinery to be determined.
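The snippet below is a minimal sketch of a neural acceptance/rejection model of the kind the abstract describes: a small classifier mapping load features to a bruise-acceptability label. The features, the labelling rule and the data are hypothetical stand-ins for laboratory bruise data.

```python
# Hypothetical neural bruise-acceptability classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# features: [impact energy (mJ), peak force (N), days of ripening]
X = rng.uniform([0, 0, 0], [80, 60, 30], size=(200, 3))
# Toy rule standing in for laboratory data: high energy on ripe fruit
# exceeds the EC-standard bruise size -> rejectable (label 1).
y = ((X[:, 0] * (1 + X[:, 2] / 30)) > 60).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

sample = [[45.0, 30.0, 25.0]]  # one load recorded by an instrumented device
print("rejectable" if clf.predict(sample)[0] else "acceptable")
```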

Relevance:

90.00%

Publisher:

Abstract:

Recent developments in the area of multiscale modeling of fiber-reinforced polymers are presented. The overall strategy takes advantage of the separation of length scales between the different entities (ply, laminate, and component) found in composite structures. This allows us to carry out multiscale modeling by computing the properties of one entity (e.g., individual plies) at the relevant length scale, homogenizing the results into a constitutive model, and passing this information to the next length scale to determine the mechanical behavior of the larger entity (e.g., the laminate). As a result, high-fidelity numerical simulations of the mechanical behavior of composite coupons and small components are nowadays feasible starting from the matrix, fiber, and interface properties and their spatial distribution. Finally, a roadmap is outlined for extending the current strategy to include functional properties and processing into the simulation scheme.
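As the simplest possible example of the homogenization step described above, the snippet below applies the classical rule of mixtures to a unidirectional ply (Voigt estimate along the fibers, Reuss estimate transverse). The property values are illustrative, roughly in the range of carbon/epoxy; the paper's actual homogenization is based on computational micromechanics, not this closed-form rule.

```python
# Rule-of-mixtures homogenization of fiber/matrix stiffness into ply constants.
def ply_moduli(E_fiber: float, E_matrix: float, Vf: float):
    """Homogenize fiber and matrix stiffness into ply-level elastic constants."""
    E1 = Vf * E_fiber + (1.0 - Vf) * E_matrix          # longitudinal (Voigt)
    E2 = 1.0 / (Vf / E_fiber + (1.0 - Vf) / E_matrix)  # transverse (Reuss)
    return E1, E2

E1, E2 = ply_moduli(E_fiber=230e9, E_matrix=3.5e9, Vf=0.60)
print(f"E1 = {E1 / 1e9:.1f} GPa, E2 = {E2 / 1e9:.1f} GPa")
# These ply constants would feed a laminate-level model (e.g., classical
# laminate theory), which in turn feeds the component-level simulation.
```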

Relevance:

90.00%

Publisher:

Abstract:

The growing interest in finding frozen precooked products that resemble natural products, withstand processing with minimum damage, and remain stable during preservation and reheating prior to consumption has generated an increase in studies of new products in this field of research. The characteristics of each food matrix, the composition and structure of the ingredients, and the effect of the interactions between them alter the texture, structure, and physical and sensory properties of the food product, as well as its acceptance by the consumer. In this context, the research conducted in this doctoral thesis was carried out on mashed potato, considered as a semi-solid food matrix, and focused on analysing the effects of concentration and modification of the composition of the matrix on the rheological and textural properties, the physicochemical and structural properties, and the sensory attributes of mashed potato when various functional ingredients are added to it, such as pea fibre, inulin, olive oil, soy protein isolate, omega-3 fatty acids and/or mixtures of these ingredients. Four studies were conducted for this purpose. Rheological properties were determined by oscillatory dynamic tests and steady-state tests, and instrumental texture parameters by backward extrusion and cone penetration tests. Structural changes were studied by ion chromatography with a pulsed amperometric detector, gas chromatography with a flame ionisation detector, and scanning electron microscopy. The sensory attributes of the various mashed potato mixtures were evaluated by generating the descriptors that best defined the sensory quality of the products, using a panel of trained judges, and the overall acceptance of the new products was evaluated by a panel of consumers.
In the first study, frozen natural mashed potato incorporating cryoprotectants was enriched with insoluble dietary fibre (pea fibre), soluble dietary fibre (inulin) and mixtures of the two. Pea fibre had a significant negative influence on the texture of the mashed potato, producing an increase in hardness and granularity, whereas inulin produced a softening of the system. In the second study, fresh and frozen/thawed natural mashed potato prepared with and without cryoprotectants was enriched with soluble dietary fibre (inulin), extra virgin olive oil and mixtures of the two. The addition of these two ingredients softened the matrix of the system, with a synergic effect between the two functional ingredients. Inulin had a more significant effect on the apparent viscosity of the product, whereas extra virgin olive oil had a more significant effect on its pseudoplasticity, consistency index and plastic viscosity. The freezing and thawing process that was used contributed to a reduction in the size of the inulin particles, making them imperceptible to the palate and producing creamier products with greater overall acceptability than their fresh equivalents. In the third study, fresh and frozen/thawed natural mashed potato incorporating cryoprotectants was enriched with mixtures of soluble dietary fibre (inulin) and soy protein isolate. The results showed that the freezing and thawing process did not affect the degree of polymerisation of the inulin, and the chemical structure of the inulin was also not affected by the incorporation of soy. The freezing/thawing process and the addition of high concentrations of inulin and low concentrations of soy protein isolate favoured a decrease in the contribution of the viscous component to the viscoelastic properties of the mashed potato. Creaminess was the only sensory attribute that presented a significant linear correlation between the scores given by trained and untrained panellists. Lastly, fresh and frozen/thawed natural mashed potato optimised with cryoprotectants was prepared and enriched with the sum of docosahexaenoic acid (DHA, C22:6 n-3) and eicosapentaenoic acid (EPA, C20:5 n-3) and with α-linolenic acid (ALA, C18:3 n-3), all microencapsulated. The freezing and thawing process did not affect the fatty acid profile of the mashed potato. The addition of omega-3 obtained from microencapsulated linseed and fish oils improved the nutritional indicators that define the quality of the fat, producing a healthier product.
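As a minimal sketch of how a consistency index and flow (pseudoplasticity) index of the kind reported here can be extracted from steady-state shear data, the snippet below fits the power-law model sigma = K * gamma_dot**n. The data points are synthetic, not the thesis measurements.

```python
# Fitting a power-law flow model to (synthetic) steady-state shear data.
import numpy as np
from scipy.optimize import curve_fit

def power_law(gamma_dot, K, n):
    return K * gamma_dot**n

shear_rate = np.array([0.1, 1.0, 10.0, 50.0, 100.0])           # 1/s
shear_stress = np.array([12.0, 55.0, 240.0, 620.0, 900.0])     # Pa (illustrative)

(K, n), _ = curve_fit(power_law, shear_rate, shear_stress, p0=(50.0, 0.5))
print(f"consistency index K = {K:.1f} Pa.s^n, flow index n = {n:.2f}")
# n < 1 indicates shear-thinning (pseudoplastic) behaviour, as reported
# for the mashed potato systems in the abstract.
```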

Relevance:

90.00%

Publisher:

Abstract:

Current fusion devices include multiple diagnostics and hundreds or even thousands of signals. This situation often forces the use of distributed data acquisition systems as the best approach. In this type of distributed system, one of the most important issues is the synchronization between signals, so that the temporal correlation between the acquired samples of all channels is as accurate as possible. In recent decades, many fusion devices have used different types of video cameras to provide inside views of the vessel during operation and to monitor plasma behavior. The synchronization between each video frame and the rest of the signals acquired from the other diagnostics is essential in order to know the plasma evolution correctly, since all the information can then be analyzed jointly with accurate knowledge of its temporal correlation. The system described in this paper allows timestamping of image frames in a real-time acquisition and processing system using IEEE 1588 clock distribution. The system has been implemented using FPGA-based devices together with an IEEE 1588 synchronized timing card (see Fig. 1). The solution is based on a previous system [1] that provides image acquisition and real-time image processing based on PXIe technology. This architecture is fully compatible with the ITER Fast Controllers [2] and offers integration with EPICS to control and monitor the entire system. However, that set-up is not able to timestamp the acquired frames, since the frame grabber module does not provide any type of timing input (IRIG-B, GPS, PTP). To overcome this limitation, an IEEE 1588 PXI timing device is used to provide an accurate way of synchronizing distributed data acquisition systems using the Precision Time Protocol (PTP) IEEE 1588-2008 standard. This local timing device can be connected to a master clock device for global synchronization. The timing device has a timestamp buffer for each PXI trigger line and requires that a software application assign each frame the corresponding timestamp. This operation is critical and cannot be achieved if the frame rate is high. To solve this problem, a solution has been designed that distributes the clock from the IEEE 1588 timing card to all FlexRIO devices [3]. This solution uses two PXI trigger lines, which provide the capacity to assign timestamps to every acquired frame and to register events by hardware in a deterministic way. The system thus provides a solution for timestamping frames in order to synchronize them with the rest of the signals.
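The snippet below is a minimal sketch of the association step the abstract describes: pairing frames, identified by a frame counter, with timestamps latched into a FIFO on each trigger edge. The structures and values are hypothetical; in the real system this pairing is done deterministically by hardware, precisely because a software loop like this cannot keep up at high frame rates.

```python
# Pairing frames with PTP timestamps latched on hardware trigger edges.
from collections import deque

timestamp_fifo = deque()  # filled by the timing card, one entry per trigger

def on_trigger(ptp_time_ns: int) -> None:
    """Trigger edge: the timing card latches a PTP timestamp into the FIFO."""
    timestamp_fifo.append(ptp_time_ns)

def on_frame(frame_id: int):
    """Frame arrival: pop the oldest latched timestamp and attach it."""
    stamp = timestamp_fifo.popleft()
    return {"frame": frame_id, "t_ns": stamp}

# Simulated sequence: triggers latch timestamps, frames consume them in order.
for t in (1_000_000, 1_040_000, 1_080_000):
    on_trigger(t)
tagged = [on_frame(i) for i in range(3)]
print(tagged)
```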

Relevance:

90.00%

Publisher:

Abstract:

Los sistemas empotrados han sido concebidos tradicionalmente como sistemas de procesamiento específicos que realizan una tarea fija durante toda su vida útil. Para cumplir con requisitos estrictos de coste, tamaño y peso, el equipo de diseño debe optimizar su funcionamiento para condiciones muy específicas. Sin embargo, la demanda de mayor versatilidad, un funcionamiento más inteligente y, en definitiva, una mayor capacidad de procesamiento comenzaron a chocar con estas limitaciones, agravado por la incertidumbre asociada a entornos de operación cada vez más dinámicos donde comenzaban a ser desplegados progresivamente. Esto trajo como resultado una necesidad creciente de que los sistemas pudieran responder por si solos a eventos inesperados en tiempo diseño tales como: cambios en las características de los datos de entrada y el entorno del sistema en general; cambios en la propia plataforma de cómputo, por ejemplo debido a fallos o defectos de fabricación; y cambios en las propias especificaciones funcionales causados por unos objetivos del sistema dinámicos y cambiantes. Como consecuencia, la complejidad del sistema aumenta, pero a cambio se habilita progresivamente una capacidad de adaptación autónoma sin intervención humana a lo largo de la vida útil, permitiendo que tomen sus propias decisiones en tiempo de ejecución. Éstos sistemas se conocen, en general, como sistemas auto-adaptativos y tienen, entre otras características, las de auto-configuración, auto-optimización y auto-reparación. Típicamente, la parte soft de un sistema es mayoritariamente la única utilizada para proporcionar algunas capacidades de adaptación a un sistema. Sin embargo, la proporción rendimiento/potencia en dispositivos software como microprocesadores en muchas ocasiones no es adecuada para sistemas empotrados. En este escenario, el aumento resultante en la complejidad de las aplicaciones está siendo abordado parcialmente mediante un aumento en la complejidad de los dispositivos en forma de multi/many-cores; pero desafortunadamente, esto hace que el consumo de potencia también aumente. Además, la mejora en metodologías de diseño no ha sido acorde como para poder utilizar toda la capacidad de cómputo disponible proporcionada por los núcleos. Por todo ello, no se están satisfaciendo adecuadamente las demandas de cómputo que imponen las nuevas aplicaciones. La solución tradicional para mejorar la proporción rendimiento/potencia ha sido el cambio a unas especificaciones hardware, principalmente usando ASICs. Sin embargo, los costes de un ASIC son altamente prohibitivos excepto en algunos casos de producción en masa y además la naturaleza estática de su estructura complica la solución a las necesidades de adaptación. Los avances en tecnologías de fabricación han hecho que la FPGA, una vez lenta y pequeña, usada como glue logic en sistemas mayores, haya crecido hasta convertirse en un dispositivo de cómputo reconfigurable de gran potencia, con una cantidad enorme de recursos lógicos computacionales y cores hardware empotrados de procesamiento de señal y de propósito general. Sus capacidades de reconfiguración han permitido combinar la flexibilidad propia del software con el rendimiento del procesamiento en hardware, lo que tiene la potencialidad de provocar un cambio de paradigma en arquitectura de computadores, pues el hardware no puede ya ser considerado más como estático. 
El motivo es que como en el caso de las FPGAs basadas en tecnología SRAM, la reconfiguración parcial dinámica (DPR, Dynamic Partial Reconfiguration) es posible. Esto significa que se puede modificar (reconfigurar) un subconjunto de los recursos computacionales en tiempo de ejecución mientras el resto permanecen activos. Además, este proceso de reconfiguración puede ser ejecutado internamente por el propio dispositivo. El avance tecnológico en dispositivos hardware reconfigurables se encuentra recogido bajo el campo conocido como Computación Reconfigurable (RC, Reconfigurable Computing). Uno de los campos de aplicación más exóticos y menos convencionales que ha posibilitado la computación reconfigurable es el conocido como Hardware Evolutivo (EHW, Evolvable Hardware), en el cual se encuentra enmarcada esta tesis. La idea principal del concepto consiste en convertir hardware que es adaptable a través de reconfiguración en una entidad evolutiva sujeta a las fuerzas de un proceso evolutivo inspirado en el de las especies biológicas naturales, que guía la dirección del cambio. Es una aplicación más del campo de la Computación Evolutiva (EC, Evolutionary Computation), que comprende una serie de algoritmos de optimización global conocidos como Algoritmos Evolutivos (EA, Evolutionary Algorithms), y que son considerados como algoritmos universales de resolución de problemas. En analogía al proceso biológico de la evolución, en el hardware evolutivo el sujeto de la evolución es una población de circuitos que intenta adaptarse a su entorno mediante una adecuación progresiva generación tras generación. Los individuos pasan a ser configuraciones de circuitos en forma de bitstreams caracterizados por descripciones de circuitos reconfigurables. Seleccionando aquellos que se comportan mejor, es decir, que tienen una mejor adecuación (o fitness) después de ser evaluados, y usándolos como padres de la siguiente generación, el algoritmo evolutivo crea una nueva población hija usando operadores genéticos como la mutación y la recombinación. Según se van sucediendo generaciones, se espera que la población en conjunto se aproxime a la solución óptima al problema de encontrar una configuración del circuito adecuada que satisfaga las especificaciones. El estado de la tecnología de reconfiguración después de que la familia de FPGAs XC6200 de Xilinx fuera retirada y reemplazada por las familias Virtex a finales de los 90, supuso un gran obstáculo para el avance en hardware evolutivo; formatos de bitstream cerrados (no conocidos públicamente); dependencia de herramientas del fabricante con soporte limitado de DPR; una velocidad de reconfiguración lenta; y el hecho de que modificaciones aleatorias del bitstream pudieran resultar peligrosas para la integridad del dispositivo, son algunas de estas razones. Sin embargo, una propuesta a principios de los años 2000 permitió mantener la investigación en el campo mientras la tecnología de DPR continuaba madurando, el Circuito Virtual Reconfigurable (VRC, Virtual Reconfigurable Circuit). En esencia, un VRC en una FPGA es una capa virtual que actúa como un circuito reconfigurable de aplicación específica sobre la estructura nativa de la FPGA que reduce la complejidad del proceso reconfiguración y aumenta su velocidad (comparada con la reconfiguración nativa). 
Es un array de nodos computacionales especificados usando descripciones HDL estándar que define recursos reconfigurables ad-hoc: multiplexores de rutado y un conjunto de elementos de procesamiento configurables, cada uno de los cuales tiene implementadas todas las funciones requeridas, que pueden seleccionarse a través de multiplexores tal y como ocurre en una ALU de un microprocesador. Un registro grande actúa como memoria de configuración, por lo que la reconfiguración del VRC es muy rápida ya que tan sólo implica la escritura de este registro, el cual controla las señales de selección del conjunto de multiplexores. Sin embargo, esta capa virtual provoca: un incremento de área debido a la implementación simultánea de cada función en cada nodo del array más los multiplexores y un aumento del retardo debido a los multiplexores, reduciendo la frecuencia de funcionamiento máxima. La naturaleza del hardware evolutivo, capaz de optimizar su propio comportamiento computacional, le convierten en un buen candidato para avanzar en la investigación sobre sistemas auto-adaptativos. Combinar un sustrato de cómputo auto-reconfigurable capaz de ser modificado dinámicamente en tiempo de ejecución con un algoritmo empotrado que proporcione una dirección de cambio, puede ayudar a satisfacer los requisitos de adaptación autónoma de sistemas empotrados basados en FPGA. La propuesta principal de esta tesis está por tanto dirigida a contribuir a la auto-adaptación del hardware de procesamiento de sistemas empotrados basados en FPGA mediante hardware evolutivo. Esto se ha abordado considerando que el comportamiento computacional de un sistema puede ser modificado cambiando cualquiera de sus dos partes constitutivas: una estructura hard subyacente y un conjunto de parámetros soft. De esta distinción, se derivan dos lineas de trabajo. Por un lado, auto-adaptación paramétrica, y por otro auto-adaptación estructural. El objetivo perseguido en el caso de la auto-adaptación paramétrica es la implementación de técnicas de optimización evolutiva complejas en sistemas empotrados con recursos limitados para la adaptación paramétrica online de circuitos de procesamiento de señal. La aplicación seleccionada como prueba de concepto es la optimización para tipos muy específicos de imágenes de los coeficientes de los filtros de transformadas wavelet discretas (DWT, DiscreteWavelet Transform), orientada a la compresión de imágenes. Por tanto, el objetivo requerido de la evolución es una compresión adaptativa y más eficiente comparada con los procedimientos estándar. El principal reto radica en reducir la necesidad de recursos de supercomputación para el proceso de optimización propuesto en trabajos previos, de modo que se adecúe para la ejecución en sistemas empotrados. En cuanto a la auto-adaptación estructural, el objetivo de la tesis es la implementación de circuitos auto-adaptativos en sistemas evolutivos basados en FPGA mediante un uso eficiente de sus capacidades de reconfiguración nativas. En este caso, la prueba de concepto es la evolución de tareas de procesamiento de imagen tales como el filtrado de tipos desconocidos y cambiantes de ruido y la detección de bordes en la imagen. En general, el objetivo es la evolución en tiempo de ejecución de tareas de procesamiento de imagen desconocidas en tiempo de diseño (dentro de un cierto grado de complejidad). 
En este caso, el objetivo de la propuesta es la incorporación de DPR en EHW para evolucionar la arquitectura de un array sistólico adaptable mediante reconfiguración cuya capacidad de evolución no había sido estudiada previamente. Para conseguir los dos objetivos mencionados, esta tesis propone originalmente una plataforma evolutiva que integra un motor de adaptación (AE, Adaptation Engine), un motor de reconfiguración (RE, Reconfiguration Engine) y un motor computacional (CE, Computing Engine) adaptable. El el caso de adaptación paramétrica, la plataforma propuesta está caracterizada por: • un CE caracterizado por un núcleo de procesamiento hardware de DWT adaptable mediante registros reconfigurables que contienen los coeficientes de los filtros wavelet • un algoritmo evolutivo como AE que busca filtros wavelet candidatos a través de un proceso de optimización paramétrica desarrollado específicamente para sistemas caracterizados por recursos de procesamiento limitados • un nuevo operador de mutación simplificado para el algoritmo evolutivo utilizado, que junto con un mecanismo de evaluación rápida de filtros wavelet candidatos derivado de la literatura actual, asegura la viabilidad de la búsqueda evolutiva asociada a la adaptación de wavelets. En el caso de adaptación estructural, la plataforma propuesta toma la forma de: • un CE basado en una plantilla de array sistólico reconfigurable de 2 dimensiones compuesto de nodos de procesamiento reconfigurables • un algoritmo evolutivo como AE que busca configuraciones candidatas del array usando un conjunto de funcionalidades de procesamiento para los nodos disponible en una biblioteca accesible en tiempo de ejecución • un RE hardware que explota la capacidad de reconfiguración nativa de las FPGAs haciendo un uso eficiente de los recursos reconfigurables del dispositivo para cambiar el comportamiento del CE en tiempo de ejecución • una biblioteca de elementos de procesamiento reconfigurables caracterizada por bitstreams parciales independientes de la posición, usados como el conjunto de configuraciones disponibles para los nodos de procesamiento del array Las contribuciones principales de esta tesis se pueden resumir en la siguiente lista: • Una plataforma evolutiva basada en FPGA para la auto-adaptación paramétrica y estructural de sistemas empotrados compuesta por un motor computacional (CE), un motor de adaptación (AE) evolutivo y un motor de reconfiguración (RE). Esta plataforma se ha desarrollado y particularizado para los casos de auto-adaptación paramétrica y estructural. • En cuanto a la auto-adaptación paramétrica, las contribuciones principales son: – Un motor computacional adaptable mediante registros que permite la adaptación paramétrica de los coeficientes de una implementación hardware adaptativa de un núcleo de DWT. – Un motor de adaptación basado en un algoritmo evolutivo desarrollado específicamente para optimización numérica, aplicada a los coeficientes de filtros wavelet en sistemas empotrados con recursos limitados. – Un núcleo IP de DWT auto-adaptativo en tiempo de ejecución para sistemas empotrados que permite la optimización online del rendimiento de la transformada para compresión de imágenes en entornos específicos de despliegue, caracterizados por tipos diferentes de señal de entrada. – Un modelo software y una implementación hardware de una herramienta para la construcción evolutiva automática de transformadas wavelet específicas. 
• Por último, en cuanto a la auto-adaptación estructural, las contribuciones principales son: – Un motor computacional adaptable mediante reconfiguración nativa de FPGAs caracterizado por una plantilla de array sistólico en dos dimensiones de nodos de procesamiento reconfigurables. Es posible mapear diferentes tareas de cómputo en el array usando una biblioteca de elementos sencillos de procesamiento reconfigurables. – Definición de una biblioteca de elementos de procesamiento apropiada para la síntesis autónoma en tiempo de ejecución de diferentes tareas de procesamiento de imagen. – Incorporación eficiente de la reconfiguración parcial dinámica (DPR) en sistemas de hardware evolutivo, superando los principales inconvenientes de propuestas previas como los circuitos reconfigurables virtuales (VRCs). En este trabajo también se comparan originalmente los detalles de implementación de ambas propuestas. – Una plataforma tolerante a fallos, auto-curativa, que permite la recuperación funcional online en entornos peligrosos. La plataforma ha sido caracterizada desde una perspectiva de tolerancia a fallos: se proponen modelos de fallo a nivel de CLB y de elemento de procesamiento, y usando el motor de reconfiguración, se hace un análisis sistemático de fallos para un fallo en cada elemento de procesamiento y para dos fallos acumulados. – Una plataforma con calidad de filtrado dinámica que permite la adaptación online a tipos de ruido diferentes y diferentes comportamientos computacionales teniendo en cuenta los recursos de procesamiento disponibles. Por un lado, se evolucionan filtros con comportamientos no destructivos, que permiten esquemas de filtrado en cascada escalables; y por otro, también se evolucionan filtros escalables teniendo en cuenta requisitos computacionales de filtrado cambiantes dinámicamente. Este documento está organizado en cuatro partes y nueve capítulos. La primera parte contiene el capítulo 1, una introducción y motivación sobre este trabajo de tesis. A continuación, el marco de referencia en el que se enmarca esta tesis se analiza en la segunda parte: el capítulo 2 contiene una introducción a los conceptos de auto-adaptación y computación autonómica (autonomic computing) como un campo de investigación más general que el muy específico de este trabajo; el capítulo 3 introduce la computación evolutiva como la técnica para dirigir la adaptación; el capítulo 4 analiza las plataformas de computación reconfigurables como la tecnología para albergar hardware auto-adaptativo; y finalmente, el capítulo 5 define, clasifica y hace un sondeo del campo del hardware evolutivo. Seguidamente, la tercera parte de este trabajo contiene la propuesta, desarrollo y resultados obtenidos: mientras que el capítulo 6 contiene una declaración de los objetivos de la tesis y la descripción de la propuesta en su conjunto, los capítulos 7 y 8 abordan la auto-adaptación paramétrica y estructural, respectivamente. Finalmente, el capítulo 9 de la parte 4 concluye el trabajo y describe caminos de investigación futuros. ABSTRACT Embedded systems have traditionally been conceived to be specific-purpose computers with one, fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions. 
However, demands for versatility, more intelligent behaviour and, in short, increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments in which embedded systems were progressively being deployed. As a result, there is an increasing need for systems to respond by themselves to events not anticipated at design time, such as: changes in the characteristics of the input data and of the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing but, in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. Such systems are generally known as self-adaptive and are capable, among other things, of self-configuration, self-optimisation and self-repair.
Traditionally, the software part of a system has been almost the only place where some degree of adaptation capability could be provided. However, the performance-to-power ratios of software-driven devices such as microprocessors are, in many situations, inadequate for embedded systems. In this scenario, the resulting rise in application complexity is being partly addressed by increasing device complexity, in the form of multi- and many-core devices; unfortunately, this keeps increasing power consumption. Besides, design methodologies have not improved at the same pace, so the computational power available from all these cores is not fully leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being fully satisfied.
The traditional solution to improve performance-to-power ratios has been the switch to hardware implementations, mainly using ASICs. However, their cost is prohibitive except in some mass-production cases and, besides, the static nature of their structure complicates meeting the adaptation needs. Advances in fabrication technology have turned the once slow, small FPGA, used as glue logic in bigger systems, into a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose processing cores. Its reconfiguration capabilities allow software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can now be changed (reconfigured) at run time while the rest remains active. Moreover, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered by the field known as Reconfigurable Computing.
One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, which guides the direction of change.
It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it, generation after generation. Individuals are circuit configurations, represented as bitstreams that describe reconfigurable circuits. By selecting those that behave best, i.e., those with a higher fitness value after evaluation, and using them as parents of the following generation, the EA creates a new offspring population by means of so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives (a minimal sketch of this loop is given below).
The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 1990s was a major obstacle to advances in EHW: closed (not publicly documented) bitstream formats, dependence on manufacturer tools with very limited DPR support, slow reconfiguration, and the potential hazard that random bitstream modifications pose to device integrity are some of the reasons. However, a proposal from the early 2000s, the Virtual Reconfigurable Circuit (VRC), allowed research in this field to continue while DPR technology kept maturing. In essence, a VRC is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, since it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead, due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum operating frequency (a behavioural model of a VRC node is also sketched below).
The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate for advancing research in self-adaptive systems. Combining a self-reconfigurable computing substrate that can be dynamically changed at run time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters.
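To make the evolutionary loop described above concrete, the following minimal sketch shows a generational EA over a population of circuit configurations. Every name in it (evaluate_circuit, GENOME_LEN and so on) is hypothetical: on a real EHW platform, evaluating a genome would mean reconfiguring the fabric with the corresponding bitstream and measuring the circuit's behaviour, which the placeholder fitness merely stands in for.

```python
import random

# Hypothetical sketch of the generational EA loop used in EHW.
GENOME_LEN = 64       # length of a configuration genome (assumed)
POP_SIZE = 16
GENERATIONS = 100
MUTATION_RATE = 0.05

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def evaluate_circuit(genome):
    # Placeholder fitness: on real hardware this step is replaced by
    # reconfiguration plus measurement against system objectives.
    return sum(genome)

def mutate(genome):
    # Bit-flip mutation over the configuration genome.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=evaluate_circuit, reverse=True)
    parents = scored[: POP_SIZE // 2]           # truncation selection
    offspring = [mutate(random.choice(parents))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring            # elitist replacement

best = max(population, key=evaluate_circuit)
print("best fitness:", evaluate_circuit(best))
```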
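Likewise, a behavioural model of a VRC node makes its trade-off visible: every function is always physically present, and a multiplexer driven by a slice of the configuration register merely selects one of them, which is why reconfiguration is fast while area and delay overheads are large. The function set below is illustrative and not taken from any particular VRC design.

```python
# Illustrative behavioural model of VRC processing elements: all
# functions coexist; the configuration register only drives the
# selection multiplexers.
FUNCTIONS = [
    lambda a, b: a & b,           # 0: bitwise AND
    lambda a, b: a | b,           # 1: bitwise OR
    lambda a, b: a ^ b,           # 2: bitwise XOR
    lambda a, b: (a + b) & 0xFF,  # 3: 8-bit add
    lambda a, b: max(a, b),       # 4: maximum
    lambda a, b: (a + b) // 2,    # 5: average
]

def vrc_node(a, b, cfg):
    """cfg is this node's slice of the configuration register."""
    return FUNCTIONS[cfg](a, b)

# A 'reconfiguration' is just rewriting the register contents:
config_register = [3, 0, 5]                 # one selector per node
inputs = [(10, 20), (7, 12), (100, 50)]
outputs = [vrc_node(a, b, c) for (a, b), c in zip(inputs, config_register)]
print(outputs)  # -> [30, 4, 75]
```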
Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, compared to standard techniques, is the required goal of evolution. The main quest lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks, such as filtering of unknown and changing types of noise and edge detection, constitutes the selected proofs of concept. In general, evolving image processing behaviours unknown at design time (within a certain complexity range) is the required goal. In this case, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture adaptable through reconfiguration whose evolvability had not been previously checked.
In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time-accessible library
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes an efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements featuring position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array
The main contributions of this thesis can be summarised in the following list.
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an evolutionary algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems (a minimal sketch of such an optimisation loop is given after this section).
– A run-time self-adaptive DWT IP core for embedded systems that allows online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signal.
– A software model and a hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array using a library of simple reconfigurable processing elements (a configuration-genome sketch is also given after this section).
– The definition of a library of such processing elements suited to the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach of virtual reconfigurable circuits. The implementation details of both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at FPGA CLB level and at processing-element level are proposed and, using the RE, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults (a schematic fault-injection sweep is sketched after this section).
– A dynamic filtering quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.
This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the frame of reference of this dissertation is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to host self-adaptive hardware; and, finally, chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, its development and the results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
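As an illustration of the parametric path referenced in the list above, the sketch below shows what a register-based coefficient-adaptation loop could look like. The thesis' own simplified mutation operator and fast filter-evaluation mechanism are not reproduced here; a plain Gaussian perturbation in a (1+1) scheme and a dummy quality proxy stand in for them, and all names and the starting coefficients are assumptions for illustration only.

```python
import random

# Hypothetical sketch of parametric self-adaptation of DWT filter
# coefficients. On the real platform, candidate coefficients are
# written into the DWT core's reconfigurable registers and the
# compression efficiency on sample images gives the fitness.
coeffs = [0.026749, -0.016864, -0.078223, 0.266864, 0.602949,
          0.266864, -0.078223, -0.016864, 0.026749]   # CDF 9/7-like, assumed

def write_registers(c):
    # Placeholder for writing c into the DWT core's registers.
    pass

def compression_quality(c):
    # Placeholder fitness: would measure rate/distortion of the
    # transform on sample images of the deployment environment.
    return -sum((x - y) ** 2 for x, y in zip(c, coeffs))  # dummy

def mutate(c, sigma=0.01):
    # Gaussian perturbation standing in for the thesis' simplified
    # mutation operator, whose details are not reproduced here.
    return [x + random.gauss(0.0, sigma) for x in c]

best, best_fit = coeffs, compression_quality(coeffs)
for _ in range(500):                     # (1+1) evolution strategy
    child = mutate(best)
    write_registers(child)
    fit = compression_quality(child)
    if fit >= best_fit:
        best, best_fit = child, fit
```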
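For the structural path, the essential data structure is an array configuration expressed as a grid of indices into the library of position-independent partial bitstreams; the RE then reconfigures each node accordingly. The library contents and names below are assumptions for illustration, not the library actually defined in the thesis.

```python
import random

# Hypothetical structural configuration genome for the 2D systolic
# array: each gene indexes a run-time library of position-independent
# partial bitstreams for the processing nodes.
PE_LIBRARY = ["identity", "min", "max", "add", "sub", "average"]  # assumed
ROWS, COLS = 4, 4

def random_config():
    return [[random.randrange(len(PE_LIBRARY)) for _ in range(COLS)]
            for _ in range(ROWS)]

def reconfigure_array(config):
    # Placeholder for the hardware RE: for each node, download the
    # partial bitstream selected by the gene. Position independence
    # means the same bitstream works at any node location.
    for r in range(ROWS):
        for c in range(COLS):
            bitstream = PE_LIBRARY[config[r][c]]
            # re.write_partial_bitstream(node=(r, c), bs=bitstream)
            _ = bitstream

config = random_config()
reconfigure_array(config)
print(config)
```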
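The systematic fault analysis mentioned above can be pictured as the sweep below: a fault is injected into each processing element in turn, and then into every pair of elements for the accumulated-fault case, checking in each case how well functionality is recovered by remapping around the damaged nodes. Everything here is a schematic stand-in for the real CLB-level fault models and measurements.

```python
from itertools import combinations

# Schematic fault-injection sweep over the systolic array: one fault
# per processing element, then every pair of accumulated faults.
NODES = [(r, c) for r in range(4) for c in range(4)]

def evolve_with_faults(faulty):
    # Placeholder: re-run the evolutionary mapping while treating the
    # nodes in 'faulty' as unusable; return the best fitness reached.
    return 1.0 - 0.01 * len(faulty)    # dummy degradation model

# Single-fault analysis
single = {n: evolve_with_faults({n}) for n in NODES}

# Two accumulated faults
double = {pair: evolve_with_faults(set(pair))
          for pair in combinations(NODES, 2)}

print("worst single-fault fitness:", min(single.values()))
print("worst double-fault fitness:", min(double.values()))
```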

Relevância:

90.00% 90.00%

Publicador:

Resumo:

Volcanic activity affects many facets of human activity, not always negatively. However, danger and risk are the stronger motives for studying volcanic activity. Safety reasons call for sustained surveillance and monitoring of volcanic activity in order to guarantee the life and safety of the human settlements in the vicinity of volcanic edifices. This thesis defines and implements a system for monitoring crustal movements in the islands of Tenerife and La Palma, where the social impact of an increase or variation in volcanic activity is very severe. Apart from the high demographic density of the archipelago, the population increases significantly in different periods throughout the year owing to tourism, which is the islands' main source of revenue. The population and the tourist centres are spread predominantly along the coasts and also along the flanks of the volcanic edifices. Perhaps the preservation of these social and socio-economic structures is the most important reason justifying the monitoring of volcanic activity in the Canary Islands. Recently, more and more work has been devoted to attempting to predict volcanic activity using the new geodetic monitoring systems, since volcanic activity manifests itself beforehand as deformation of the Earth's crust and changes in gravity in the zone where volcanic events are later recorded. The new devices and sensors developed in recent years in areas such as geodesy, Earth observation from space and satellite positioning have made it possible to observe and measure both the deformation produced in the terrain and the changes in gravity before, during and after the volcanic events that occur. These new devices and sensors have changed the geodetic techniques and methodologies in use before their appearance, renewing classical methods and developing new ones that are already consolidating as proven, recognised methodologies for volcanic monitoring. Since the end of the 1990s, several projects have been developed in the Canary Islands whose main objectives have been, on the one hand, the development of new observation and monitoring techniques and, on the other, the design of an appropriate volcanic monitoring methodology. The study and development of GNSS techniques for monitoring crustal deformations and their velocity field in the islands of Tenerife and La Palma is presented here. In its implementation, the use of the geodetic and monitoring infrastructure already existing in the archipelago has been taken into account in order to optimise costs, complementing it with new stations to give full coverage of the two islands. The results obtained in the projects, which are described in this report, have provided new perspectives on the geodetic monitoring of volcanic activity and revealed new zones of interest that were previously unknown in the environment of the Canary Islands.
Special care has been taken with the treatment and propagation of errors throughout the whole process of observing, measuring and processing the recorded data, all in order to quantify the degree of reliability of the results obtained. Also in this sense, the results obtained have been verified against others from satellite radar observation systems, incorporating into this study the implications that the joint use of radar and GNSS technologies will have in the future for monitoring deformations of the Earth's crust.
ABSTRACT
Volcanic activity occurs in many aspects of human activity, and not always in a negative manner. Nonetheless, research into volcanic activity is more likely to be motivated by its danger and risk. There are safety reasons that influence the monitoring of volcanic activity in order to guarantee the life and safety of human settlements near volcanic edifices. This thesis defines and implements a system for monitoring movements of the Earth's crust in the islands of Tenerife and La Palma, where the social impact of an increase (or variation) of volcanic activity is very severe. Aside from the high demographic density of the archipelago, the population increases significantly in different periods throughout the year due to tourism, which represents a major source of revenue for the islands. The population and the tourist centres are mainly spread along the coasts and also along the flanks of the volcanic edifices. Perhaps the preservation of these social and socio-economic structures is the most important reason that justifies monitoring volcanic activity in the Canary Islands. Recently, more and more work has been done with the intention of predicting volcanic activity, using new geodetic monitoring systems, since volcanic activity is evident prior to eruption because of a deformation of the Earth's crust and changes in the force of gravity in the zone where volcanic events will later be recorded. The new devices and sensors that have been developed in recent years in areas such as geodesy, the observation of the Earth from space, and satellite positioning have allowed us to observe and measure the deformation produced in the Earth as well as the changes in the force of gravity before, during, and after volcanic events occur. These new devices and sensors have changed the geodetic techniques and methodologies that were used previously. The classic methods have been renovated and newer ones developed that are now vouched for as proven, recognised methodologies to be used for volcanic monitoring. Since the end of the 1990s, various projects have been developed in the Canary Islands whose principal aims have been the development of new observation and monitoring techniques on the one hand, and the design of an appropriate volcanic monitoring methodology on the other. The study and development of GNSS techniques for the monitoring of crustal deformations and their velocity field is presented here. To carry out the study, the use of the existing geodetic and monitoring infrastructure in the archipelago has been taken into account in order to optimise costs, besides complementing it with new stations for total coverage of both islands. The results obtained in the projects, which are described below, have produced new perspectives in the geodetic monitoring of volcanic activity and new zones of interest which were previously unknown in the environment of the Canary Islands.
Special care has been taken with the treatment and propagation of errors during the entire process of observing, measuring, and processing the recorded data, all of it in order to quantify the degree of trustworthiness of the results obtained. In this same sense, the results have been verified against others obtained from satellite radar observation systems, and the study also incorporates the implications that the joint use of radar and GNSS technologies will have for the future monitoring of deformations of the Earth's crust.
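As a minimal illustration of the kind of error propagation involved, the sketch below fits a station velocity to a synthetic GNSS displacement time series by weighted least squares and propagates the observation uncertainties into the covariance of the estimated parameters. This is a generic textbook computation under assumed noise values, not the processing chain actually used in the thesis.

```python
import numpy as np

# Weighted least-squares fit of a linear motion model x(t) = x0 + v*t
# to a GNSS coordinate time series, propagating the observation
# uncertainty into the parameter covariance. Synthetic data only.
rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 0.1)                  # epochs in years
sigma = 0.003                                  # 3 mm observation noise (assumed)
x = 0.010 + 0.012 * t + rng.normal(0, sigma, t.size)  # 12 mm/yr motion (assumed)

A = np.column_stack([np.ones_like(t), t])      # design matrix [1, t]
W = np.eye(t.size) / sigma**2                  # weight matrix

N = A.T @ W @ A                                # normal matrix
params = np.linalg.solve(N, A.T @ W @ x)       # estimated [x0, v]
cov = np.linalg.inv(N)                         # propagated covariance

v, sigma_v = params[1], np.sqrt(cov[1, 1])
print(f"velocity = {v*1000:.2f} mm/yr +/- {sigma_v*1000:.2f} mm/yr")
```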