951 results for "reasoning with different levels of abstraction"


Relevance: 100.00%

Abstract:

Postgraduate Programme in Genetics and Animal Breeding - FCAV

Relevance: 100.00%

Abstract:

Meditation is a mental training which involves attention and the ability to maintain focus on a particular object. In this study we applied a specific attentional task simply to measure the performance of participants with different levels of meditation experience, rather than evaluating meditation practice per se or task performance during meditation. Our objective was to evaluate the performance of regular meditators and non-meditators during an fMRI-adapted Stroop Word-Colour Task (SWCT), which requires attention and impulse control, using a block-design paradigm. We selected 20 right-handed regular meditators and 19 non-meditators matched for age, years of education and gender. Participants had to choose the colour (red, blue or green) of single words presented visually in three conditions: congruent, neutral and incongruent. Non-meditators showed greater activity than meditators in the right medial frontal, middle temporal, precentral and postcentral gyri and the lentiform nucleus during the incongruent condition. No regions were more activated in meditators relative to non-meditators in the same comparison. Non-meditators thus showed an increased pattern of brain activation relative to regular meditators at the same behavioural performance level, suggesting that meditation training improves efficiency, possibly via improved sustained attention and impulse control.
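As an illustration of the three SWCT conditions used in this study, the following Python sketch generates congruent, neutral and incongruent word-colour pairs. It is a schematic reconstruction, not the authors' stimulus code; the neutral word list is a placeholder.

```python
import random

COLOURS = ["red", "blue", "green"]           # response set used in the study
NEUTRAL_WORDS = ["chair", "table", "house"]  # hypothetical neutral words

def make_trial(condition):
    """Return a (word, ink_colour) pair for one SWCT trial."""
    ink = random.choice(COLOURS)
    if condition == "congruent":        # word and ink colour match
        word = ink
    elif condition == "neutral":        # non-colour word
        word = random.choice(NEUTRAL_WORDS)
    elif condition == "incongruent":    # colour word different from the ink
        word = random.choice([c for c in COLOURS if c != ink])
    else:
        raise ValueError(condition)
    return word, ink

# One short block per condition, as in a block-design fMRI paradigm.
for cond in ("congruent", "neutral", "incongruent"):
    print(cond, [make_trial(cond) for _ in range(4)])
```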

Relevance: 100.00%

Abstract:

Harvest is one of the main stages in the soybean seed production process. The appropriate moment to harvest can vary depending on the type of harvest and the moisture content of the seeds. The objective of this research was therefore to determine the physiological changes in soybean seeds harvested manually and mechanically at different moisture contents. To this end, seeds of the cultivars Embrapa 48 and FTS Águia, which differ in the lignin content of the seed coat, were harvested manually and mechanically at 18.0, 15.0 and 12.0% moisture. After harvest, the seeds were dried, and seed quality was evaluated by germination, seedling emergence, tetrazolium (viability and vigour), accelerated ageing and seedling emergence speed index tests. The seeds were then stored (20 °C and 45% relative humidity) and analysed immediately after harvest and after six months of storage. The results indicate that the physiological quality of soybean seeds is not affected when manual harvest is performed at seed moisture contents from 11.4% to 18.4%, or mechanical harvest at moisture contents from 12.0% to 15.9%; physiological quality was maintained even after six months of storage.
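The reported tolerance windows can be read as a simple decision rule. The sketch below is purely illustrative: it encodes the moisture ranges quoted in the abstract and flags whether a given harvest falls inside the range for which no loss of physiological quality was observed.

```python
# Moisture windows (%) within which seed physiological quality was
# reportedly unaffected; values taken from the abstract.
SAFE_MOISTURE = {
    "manual": (11.4, 18.4),
    "mechanical": (12.0, 15.9),
}

def harvest_within_safe_window(method: str, moisture_pct: float) -> bool:
    """True if the harvest method/moisture pair lies in the reported window."""
    low, high = SAFE_MOISTURE[method]
    return low <= moisture_pct <= high

print(harvest_within_safe_window("manual", 18.0))      # True
print(harvest_within_safe_window("mechanical", 18.0))  # False
```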

Relevance: 100.00%

Abstract:

The aim of this PhD thesis, developed in the framework of the Italian Agroscenari research project, is to compare current irrigation volumes in two study areas in Emilia-Romagna with the likely irrigation requirements under climate change conditions. The comparison was carried out between the reference period 1961-1990, as defined by the WMO, and the period 2021-2050, for which multi-model climate projections over the two study areas were available. The projections were analysed in terms of their impact on irrigation demand and adaptation strategies: for fruit and horticultural crops in the Faenza study area, with a detailed analysis of kiwifruit vines, and for horticultural crops in the Piacenza plain, focusing on the irrigation water needs of tomato. We produced downscaled climate projections (based on the IPCC A1B emission scenario) for the two study areas. The impacts of climate change over 2021-2050 on crop irrigation water needs and other agrometeorological indices were assessed by means of the CRITERIA water balance model, in its two available versions, CRITERIA BdP (local) and CRITERIA Geo (spatial), which offer different levels of detail. For both areas we found an irrigation demand increase of about +10% when comparing 2021-2050 with the reference years 1961-1990, but no substantial difference from more recent years (1991-2008), mainly because a projected increase in spring precipitation compensates for the projected higher summer temperature and evapotranspiration. Consequently, no dramatic increase in irrigation volumes with respect to current volumes is forecast.
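The CRITERIA model is far more detailed, but the logic by which irrigation demand emerges from precipitation and evapotranspiration can be conveyed with a minimal daily bucket water balance. The sketch below is a simplification under assumed parameters (storage capacity, refill trigger), not the project's code.

```python
def irrigation_demand(precip, et_crop, capacity=50.0, trigger=0.5, initial=50.0):
    """Minimal daily bucket water balance (all values in mm).

    Irrigation refills the soil store to capacity whenever the stored
    water drops below trigger * capacity. Parameters are illustrative
    assumptions, not CRITERIA settings.
    """
    store, total_irrigation = initial, 0.0
    for p, et in zip(precip, et_crop):
        store = max(min(capacity, store + p) - et, 0.0)
        if store < trigger * capacity:          # irrigation rule
            total_irrigation += capacity - store
            store = capacity
    return total_irrigation

# Toy comparison: a wetter spring can offset higher summer ET, leaving
# total irrigation demand almost unchanged.
baseline = irrigation_demand([2, 0, 0, 5, 0, 0, 0, 0], [4, 5, 5, 5, 5, 5, 5, 5])
future   = irrigation_demand([4, 0, 0, 7, 0, 0, 0, 0], [4, 6, 6, 6, 6, 6, 6, 6])
print(baseline, future)
```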

Relevance: 100.00%

Abstract:

Hepatitis B x protein (HBx) is a non-structural, multifunctional protein of the hepatitis B virus (HBV) that modulates a variety of host processes. Owing to its transcriptional activity, which can alter the expression of growth-control genes, it has been implicated in hepatocarcinogenesis. Increased expression of HBx has been reported in liver tissue samples of hepatocellular carcinoma (HCC), and a specific anti-HBx immune response can be detected in the peripheral blood of patients with chronic HBV. However, its role has not yet been clarified. We therefore performed a cross-sectional analysis of the anti-HBx-specific T cell response in HBV-infected patients at different stages of disease. A total of 70 HBV-infected subjects were evaluated: 15 affected by chronic hepatitis (CH; median age 45 yrs), 14 by cirrhosis (median age 55 yrs), 11 with dysplastic nodules (median age 64 yrs), 15 with HCC (median age 60 yrs) and 15 with IC (median age 53 yrs). All patients were infected by virus genotype D with different levels of HBV viremia, and most of them (91%) were HBeAb positive. The HBx-specific T cell response was evaluated by anti-interferon (IFN)-gamma Elispot assay after in vitro stimulation of peripheral blood mononuclear cells, using 20 overlapping synthetic peptides covering the whole HBx protein sequence. HBx-specific IFN-gamma-secreting T cells were found in 6 out of 15 patients with chronic hepatitis (40%), 3 out of 14 with cirrhosis (21%), 5 out of 11 with dysplastic macronodules (54%), and 10 out of 15 HCC patients (67%). The number of responding patients was significantly higher in HCC than in IC (p=0.02) and cirrhosis (p=0.02). A central region of the X protein, around peptides 86-88, was preferentially recognized. The HBx response did not correlate with clinical features of disease (AFP, MELD). The HBx-specific T-cell response thus seems to increase with progression of the disease, being higher in subjects with dysplastic or neoplastic lesions, and may represent an additional tool for monitoring patients at high risk of developing HCC.
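In outline, the reported group comparisons amount to exact tests on the responder counts given above. The sketch below recomputes the HCC versus cirrhosis comparison with scipy's Fisher exact test; the abstract reports p = 0.02, and the exact value here depends on which test the authors actually used.

```python
from scipy.stats import fisher_exact

# Responder counts taken from the abstract: HBx-specific IFN-gamma
# responses in HCC (10/15) versus cirrhosis (3/14) patients.
hcc = (10, 5)         # responders, non-responders
cirrhosis = (3, 11)

odds_ratio, p_value = fisher_exact([hcc, cirrhosis])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# The abstract reports p = 0.02 for this comparison.
```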

Relevance: 100.00%

Abstract:

This thesis work is devoted to the conceptual and technical development of the Adaptive Resolution Scheme (AdResS), a molecular dynamics method that allows the simulation of a system with different levels of resolution simultaneously. The simulation domain is divided into high- and low-resolution zones and a transition region that links them, through which molecules can freely diffuse.

The first issue addressed in this work is the thermodynamic consistency of the method, which is tested and verified in a model liquid of tetrahedral molecules. The results allow the introduction of the concept of the Thermodynamic Force, an external field able to correct the spurious density fluctuations present in the transition region in usual AdResS simulations.

AdResS is also applied to a system where two different representations with the same degree of resolution are coupled. This simple test extends the method from an Adaptive Resolution Scheme to an Adaptive Representation Scheme, providing a way of coupling different force fields based on thermodynamic consistency arguments. The Thermodynamic Force is successfully applied to this example as well.

An alternative approach, deducing the Thermodynamic Force from pressure consistency considerations, allows the interpretation of AdResS as a first step towards a molecular dynamics simulation in the Grand Canonical ensemble. Additionally, such a definition leads to a practical way of determining the Thermodynamic Force, tested in the well-studied tetrahedral liquid. The effects of AdResS and this correction on the atomistic domain are analyzed by inspecting the local distribution of velocities, radial distribution functions, pressure and particle number fluctuations. Their comparison with analogous results from purely atomistic simulations shows good agreement, which is greatly improved under the effect of the external field.

A further step in the development of AdResS, necessary for several applications in biophysics and materials science, is its application to multicomponent systems. To this aim, the high-resolution representation of a model binary mixture is coupled to a systematically parametrized coarse-grained representation. The Thermodynamic Force, whose development requires a more delicate treatment, also gives satisfactory results.

Finally, AdResS is tested in systems including two-body bonded forces, through the simulation of a model polymer allowed to adaptively change its representation. It is shown that the distribution functions characterizing the polymer structure are in practice not affected by the change of resolution.

The technical details of the implementation of AdResS in the ESPResSo package conclude this thesis work.
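At the core of AdResS is a smooth resolution weight w(x) that equals 1 in the atomistic zone, 0 in the coarse-grained zone, and ramps across the transition region; pair forces are blended as F = w_i w_j F_atomistic + (1 - w_i w_j) F_cg. The one-dimensional sketch below uses a cos² ramp, a common choice in the AdResS literature, which is assumed here rather than taken from the thesis.

```python
import math

def weight(x, hybrid_start, hybrid_width):
    """Resolution weight: 1 in the atomistic zone, 0 in the coarse-grained
    zone, smooth cos^2 ramp across the transition (hybrid) region."""
    if x <= hybrid_start:
        return 1.0
    if x >= hybrid_start + hybrid_width:
        return 0.0
    return math.cos(math.pi * (x - hybrid_start) / (2.0 * hybrid_width)) ** 2

def blended_force(f_atomistic, f_cg, w_i, w_j):
    """AdResS pair-force interpolation between the two representations."""
    lam = w_i * w_j
    return lam * f_atomistic + (1.0 - lam) * f_cg

# Two molecules inside the transition region (illustrative positions/forces):
w1, w2 = weight(1.2, 1.0, 1.0), weight(1.8, 1.0, 1.0)
print(blended_force(f_atomistic=-3.5, f_cg=-2.0, w_i=w1, w_j=w2))
```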

Relevance: 100.00%

Abstract:

This work illustrates a soil-tunnel-structure interaction study performed with an integrated, geotechnical and structural, approach based on 3D finite element analyses and validated against experimental observations. The study aims at analysing the response of reinforced concrete framed buildings on discrete foundations in interaction with metro lines. It refers to the case of the twin tunnels of the Milan (Italy) metro line 5, recently built in coarse-grained materials using EPB machines, for which subsidence measurements collected along ground and building sections during tunnelling were available. Settlements measured under free-field conditions are first back-interpreted using Gaussian empirical predictions. Then, the analysis of the in situ measurements is extended to include the evolving response of a 9-storey reinforced concrete building while being undercrossed by the metro line. In the finite element study, the soil mechanical behaviour is described using an advanced constitutive model. The latter, when combined with a proper simulation of the excavation process, proves to realistically reproduce the subsidence profiles under free-field conditions and to capture the interaction phenomena occurring between the twin tunnels during excavation. Furthermore, when the numerical model is extended to include the building, schematised in a detailed manner, the results are in good agreement with the monitoring data for different stages of the twin tunnelling. Thus, they indirectly confirm the satisfactory performance of the adopted numerical approach, which also allows a direct evaluation of the structural response as an outcome of the analysis. Further analyses are also carried out modelling the building with different levels of detail. The results highlight that, in this case, the simplified approach based on the equivalent-plate schematisation is inadequate to capture the real tunnelling-induced displacement field. The overall behaviour of the system proves to be mainly influenced by the buried portion of the building, which plays an essential role in the interaction mechanism due to its high stiffness.
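The Gaussian empirical prediction mentioned above is the classical transverse settlement trough, S(x) = S_max · exp(-x²/(2i²)), with i the distance of the inflection point from the tunnel axis. The sketch below evaluates it with illustrative parameter values, not the Milan metro line 5 measurements.

```python
import math

def settlement(x, s_max, i):
    """Transverse Gaussian settlement trough (Peck-type empirical curve).
    x: horizontal offset from the tunnel axis (m)
    s_max: maximum settlement above the axis (m)
    i: horizontal distance to the inflection point (m)"""
    return s_max * math.exp(-x**2 / (2.0 * i**2))

# Illustrative values only:
s_max, i = 0.012, 9.0
for x in (0, 5, 10, 20):
    print(f"x = {x:>2} m  ->  settlement = {1000 * settlement(x, s_max, i):.2f} mm")
```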

Relevance: 100.00%

Abstract:

In the present multi-modal study we aimed to investigate the role of visual exploration in relation to neuronal activity and performance during visuospatial processing. To this end, event-related functional magnetic resonance imaging (er-fMRI) was combined with simultaneous eye-tracking recording and transcranial magnetic stimulation (TMS). Two groups of twenty healthy subjects each performed an angle discrimination task with different levels of difficulty during er-fMRI. The number of fixations, as a measure of visual exploration effort, was chosen to predict blood oxygen level-dependent (BOLD) signal changes using the general linear model (GLM). Without TMS, a positive linear relationship between visual exploration effort and the BOLD signal was found in a bilateral fronto-parietal cortical network, indicating that activity in these regions reflects the increased number of fixations and the higher brain activity due to higher task demands. Furthermore, the relationship found between the number of fixations and performance demonstrates the relevance of visual exploration for visuospatial task solving. In the TMS group, offline theta-burst TMS (TBS) was applied over the right posterior parietal cortex (PPC) before the fMRI experiment started. Compared to controls, TBS led to a reduced correlation between visual exploration and BOLD signal change in regions of the fronto-parietal network of the right hemisphere, indicating a disruption of the network. In contrast, an increased correlation was found in regions of the left hemisphere, suggesting an attempt to compensate for the function of the disturbed areas. TBS also led to fewer fixations and faster response times while keeping accuracy at the same level, indicating that, without stimulation, subjects explored more than was actually needed.
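Using the number of fixations as a predictor of BOLD signal changes amounts, per voxel, to an ordinary GLM fit. The sketch below shows only that regression step on synthetic data; a real fMRI analysis would add HRF convolution, nuisance regressors and preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-trial data: fixation counts and one voxel's BOLD response,
# with an assumed positive coupling (slope 0.8) plus noise.
fixations = rng.integers(3, 12, size=40)
bold = 0.8 * fixations + rng.normal(0.0, 1.5, size=40)

# GLM design matrix: intercept + fixation regressor.
X = np.column_stack([np.ones_like(fixations), fixations])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"estimated slope (BOLD change per fixation): {beta[1]:.2f}")
```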

Relevance: 100.00%

Abstract:

Along a downstream stretch of the River Mureş, Romania, adult males of two feral fish species, European chub (Leuciscus cephalus) and sneep (Chondrostoma nasus), were sampled at four sites with different levels of contamination. Fish were analysed for the biochemical markers hsp70 (in liver and gills) and hepatic EROD activity, as well as several biometrical parameters (age, length, wet weight, condition factor). None of the biochemical markers correlated with any biometrical parameter; biomarker reactions were therefore related to site-specific criteria. While the hepatic hsp70 level did not differ among the sites, significant elevation of the hsp70 level in the gills revealed proteotoxic damage in chub at the most upstream site, where we recorded the highest heavy metal contamination of the investigated stretch, and in both chub and sneep at the site directly downstream of the city of Arad. In both species, significantly elevated hepatic EROD activity downstream of Arad indicated that fish from these sites are also exposed to organic chemicals. The results were indicative of impaired fish health at least at three of the four investigated sites. The approach of relating biomarker responses to analytical data on pollution was shown to fit well with recent EU demands for further enhanced efforts in the monitoring of Romanian water quality.

Relevance: 100.00%

Abstract:

This study tested the hypothesis that career indecisiveness among men tends to be associated with different levels of self-reported psychological adjustment, and with different remembrances of parental (maternal and paternal) acceptance and behavioral control in childhood, from those of women. One hundred twenty-six respondents ages 17 through 54 (M = 23.7 years, SD = 8.21 years) participated in this study. Thirty-seven were males; 90 were females. Measures used in this study included the Career Decision Scale, the adult version of the Parental Acceptance-Rejection/Control Questionnaire for mothers and for fathers, and the adult version of the Personality Assessment Questionnaire. Both men and women remembered their mothers as well as their fathers as having been loving in childhood. Additionally, men and women remembered both parents as being moderately behaviorally controlling in childhood. Finally, both men and women reported a fair level of psychological maladjustment, and on average both men and women were fairly indecisive about their careers. Results of the analyses supported the hypothesis in that career indecisiveness among women, but not men, was significantly correlated with remembered maternal and paternal acceptance in childhood, as well as with self-reported psychological adjustment and age. However, only women's self-reported psychological adjustment made a significant and unique contribution to variations in their reports of career indecisiveness. None of the predictor variables was significantly associated with career indecisiveness among men.
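The "significant and unique contribution" reported for women's psychological adjustment corresponds to the partial coefficient of that predictor in a multiple regression including the other predictors. The sketch below illustrates this with synthetic data; the variable names and effect size are assumptions, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 90  # the study's female subsample size

# Synthetic stand-ins for the study's measures.
maternal_acceptance = rng.normal(size=n)
paternal_acceptance = rng.normal(size=n)
adjustment = rng.normal(size=n)
age = rng.normal(size=n)
# Assume indecisiveness depends mainly on adjustment, as reported.
indecisiveness = 0.6 * adjustment + rng.normal(scale=0.8, size=n)

# Multiple regression: each slope is a predictor's unique contribution.
X = np.column_stack([np.ones(n), maternal_acceptance,
                     paternal_acceptance, adjustment, age])
beta, *_ = np.linalg.lstsq(X, indecisiveness, rcond=None)
print("partial coefficients:", np.round(beta[1:], 2))
```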

Relevance: 100.00%

Abstract:

My dissertation focuses on developing methods for detecting gene-gene/environment interactions and imprinting effects in human complex diseases and quantitative traits. It includes three sections: (1) generalizing the Natural and Orthogonal Interaction (NOIA) model, a coding technique originally developed for gene-gene (GxG) interactions, to reduced models; (2) developing a novel statistical approach for modeling gene-environment (GxE) interactions influencing disease risk; and (3) developing a statistical approach for modeling genetic variants displaying parent-of-origin effects (POEs), such as imprinting. In the past decade, genetic researchers have identified a large number of causal variants for human genetic diseases and traits by single-locus analysis, and interaction has now become a hot topic in the search for the complex networks of multiple genes or environmental exposures contributing to an outcome. Epistasis, also known as gene-gene interaction, is the departure from additive genetic effects of several genes on a trait, meaning that the same alleles of one gene can display different genetic effects under different genetic backgrounds. In this study, we implement the NOIA model for association studies with interaction for human complex traits and diseases, and compare the performance of the new statistical models we developed against the usual functional model in both simulation studies and real data analysis. Both simulation and real data analysis revealed higher power of the NOIA GxG interaction model for detecting both main genetic effects and interaction effects. Through application to a melanoma dataset, we confirmed the previously identified regions significantly associated with melanoma risk at 15q13.1, 16q24.3 and 9p21.3, and identified potential interactions with these regions that contribute to melanoma risk. Based on the NOIA model, we developed a novel statistical approach for modeling effects from a genetic factor and a binary environmental exposure that jointly influence disease risk. Both simulation and real data analyses revealed higher power of the NOIA model for detecting both main genetic effects and interaction effects for both quantitative and binary traits. We also found that the parameter estimates from logistic regression for binary traits are no longer statistically uncorrelated under the alternative model when there is an association. Applying our novel approach to a lung cancer dataset, we confirmed four SNPs in the 5p15 and 15q25 regions to be significantly associated with lung cancer risk in the Caucasian population: rs2736100, rs402710, rs16969968 and rs8034191. We also validated that rs16969968 and rs8034191 in the 15q25 region interact significantly with smoking in the Caucasian population, and our approach identified a potential interaction of SNP rs2256543 in 6p21 with smoking contributing to lung cancer risk. Genetic imprinting is the best-known cause of POEs, whereby a gene is differentially expressed depending on the parental origin of the same alleles. Genetic imprinting affects several human disorders, including diabetes, breast cancer, alcoholism and obesity, and has been shown to be important for normal embryonic development in mammals. Traditional association approaches ignore this important genetic phenomenon.
In this study, we propose a NOIA framework for single-locus association studies that estimates both main allelic effects and POEs. We develop statistical (Stat-POE) and functional (Func-POE) models, and demonstrate the conditions for orthogonality of the Stat-POE model. We conducted simulations for both quantitative and qualitative traits to evaluate the performance of the statistical and functional models with different levels of POEs. Our results showed that the newly proposed Stat-POE model, which ensures orthogonality of the variance components if Hardy-Weinberg equilibrium (HWE) holds or the minor and major allele frequencies are equal, had greater power for detecting the main allelic additive effect than the Func-POE model, which codes according to allelic substitutions, for both quantitative and qualitative traits. The power for detecting the POE was the same for the Stat-POE and Func-POE models under HWE for quantitative traits.
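The orthogonality property at the heart of the Stat-POE model can be checked numerically: given genotype frequencies, statistical codings are constructed so that the regressors are uncorrelated. The sketch below obtains such a coding generically, by projecting the functional dominance scores against the additive ones under the genotype distribution; this illustrates the idea, and is not the authors' closed-form NOIA parametrisation.

```python
import numpy as np

def statistical_coding(p_AA, p_Aa, p_aa):
    """Orthogonalise the functional additive/dominance coding against the
    genotype distribution (generic projection illustrating the NOIA idea)."""
    freqs = np.array([p_AA, p_Aa, p_aa])
    add = np.array([1.0, 0.0, -1.0])   # functional additive scores
    dom = np.array([0.0, 1.0, 0.0])    # functional dominance scores

    def centre(v):                      # remove the frequency-weighted mean
        return v - np.dot(freqs, v)

    add, dom = centre(add), centre(dom)
    # Project the dominance scores out of the additive direction.
    dom -= (np.dot(freqs, add * dom) / np.dot(freqs, add * add)) * add
    return add, dom, freqs

# Hardy-Weinberg genotype frequencies for allele frequency p = 0.3:
p = 0.3
add, dom, freqs = statistical_coding(p * p, 2 * p * (1 - p), (1 - p) ** 2)
# Orthogonality: frequency-weighted inner product of the regressors is ~0.
print(np.dot(freqs, add * dom))
```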

Relevance: 100.00%

Abstract:

The aim of the present work was to examine the differences between two groups of fencers with different levels of competition, elite and medium level. The timing parameters of the reaction response were compared, together with the kinetic variables that determine the sequence of segmental participation during a lunge with a change of target during the movement. A total of 30 male sword fencers participated, 13 elite and 17 medium level. Two force platforms recorded the horizontal component of the force and the start of the movement. One system filmed the movement in 3D, recording the spatial positions of 11 markers, while another system projected a mobile target onto a screen. For synchronisation, an electronic signal enabled all the systems to be started simultaneously. Among the timing parameters of the reaction response, the choice reaction time (CRT) to the target change during the lunge was measured. The results revealed differences between the groups in flight time, horizontal velocity at the end of the acceleration phase, and the length of the lunge, all of which were higher in the elite group, as well as in other variables related to the temporal sequence of the movement. No significant differences were found in the simple reaction time or in the CRT. According to the literature, CRT appears to improve with sports practice, although this factor did not differentiate the elite from the medium-level fencers. The coordination of fencing movements, that is, the right technique, constitutes a factor that differentiates elite fencers from medium-level ones.
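Given synchronised recordings, the CRT reduces to the interval between the target change and the onset of the motor response on the force platform. The sketch below computes this over hypothetical event timestamps; the values are invented for illustration.

```python
# Hypothetical synchronised event timestamps (s) for a set of lunges:
# target change projected on screen vs. first force-platform response onset.
trials = [
    {"target_change": 0.512, "movement_onset": 0.798},
    {"target_change": 1.930, "movement_onset": 2.241},
    {"target_change": 3.405, "movement_onset": 3.689},
]

crt = [t["movement_onset"] - t["target_change"] for t in trials]
print("choice reaction times (ms):", [round(1000 * x) for x in crt])
print(f"mean CRT = {1000 * sum(crt) / len(crt):.0f} ms")
```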

Relevance: 100.00%

Abstract:

A number of research and development activities are exploring Time and Space Partitioning (TSP) to implement safe and secure flight software. This approach makes it possible to execute different real-time applications with different levels of criticality on the same computer board. In order to do that, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
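Temporal partitioning of the kind XtratuM provides is typically realised as a static cyclic schedule: the hypervisor grants the CPU to each partition in fixed windows of a repeating major frame, so one partition can never steal time from another. The Python sketch below simulates such a schedule; partition names and window lengths are illustrative, and in XtratuM the actual scheduling plan is defined in its configuration file.

```python
# A static cyclic schedule: (partition name, window length in ms).
MAJOR_FRAME = [
    ("ORK+/Ada application", 20),
    ("payload processing",   15),
    ("I/O partition",         5),
]

def simulate(cycles):
    """Print which partition owns the CPU over `cycles` major frames."""
    t = 0
    for _ in range(cycles):
        for name, length in MAJOR_FRAME:
            print(f"t = {t:3d} ms: {name} runs for {length} ms")
            t += length

simulate(cycles=2)  # temporal isolation: windows never overlap
```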

Relevance: 100.00%

Abstract:

The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern in the design of resource-constrained embedded systems. This situation is even more demanding nowadays: embedded systems can no longer be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, they are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run-time, leading to adaptive scenarios.
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot match the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices; among them is dynamic and partial reconfiguration, a technique which consists of substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis for dealing with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run-time, trading them off against the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions, which can be scaled by adding or removing the composing blocks. This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as device reconfiguration control, run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are area-constrained routing for reconfigurable modules and an inter-module communication strategy which introduces neither extra delay nor logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphical user interface.
The tool has specific features to cope with modular and regular architectures, including support for module relocation and an inter-module communication scheme based on the symmetry of the architecture. The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks. This is achieved by duplicating the logic with exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also developed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation ability, which allows a single configuration bitstream to be employed for all the positions where a module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process while also reducing the length of the partial bitstreams. The height of the reconfigurable modules can be lower than the height of a clock region; the Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer: in the case of Virtex-5, even though the maximum reported frequency of the ICAP is 100 MHz, valid operation at 250 MHz has been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGAs has also been considered. The Reconfiguration Engine can also be used to inject faults into real hardware devices, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features.
The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise-removal image filters. Scalability has also been exploited in this application: scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system, and thus constitute an example of the dynamic quality scalability tackled in this thesis. Two variants have been proposed: the first one consists of a single dynamically scalable evolvable core, and the second one contains a variable number of processing cores. Scalable video is a flexible approach to video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded with different levels of quality, spatial or temporal resolution, by discarding the undesired information. The interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, as an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run-time. The use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image by smoothing the blocking artifacts generated in the encoding loop. This is one of the most computationally intensive tasks of the standard, and furthermore, it is highly dependent on the scalability level selected at the decoder. Therefore, the deblocking filter has been selected as a proof of concept of the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel to change its level of parallelism, following a wavefront computational pattern. The scalable architecture is offered together with a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists of generic architectural templates which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, named Virtual Borders, which introduces neither delay nor area overhead.
- A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, appearing during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually, in order to optimize the system parameters. It is based on a model known as the multi-dimensional multiple-choice Knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and the replication of a module in multiple positions.
- A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt to fluctuations in the amount of resources available in the system in an autonomous way.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process a whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.
This document is organized in seven chapters. In the first one, an introduction to the technology framework of this thesis, specially focused on dynamic and partial reconfiguration, is provided. The need for the dynamically scalable processing architectures proposed in this work is also motivated in this chapter. In chapter 2, the dynamically scalable architectures are described; the description includes most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMS tool provided to implement them, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. The final conclusions of this thesis, and the description of future work, are addressed in chapter 7.
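The wavefront pattern used for the deblocking filter processes macroblock (x, y) once its left and upper neighbours are done, so all macroblocks on one anti-diagonal are independent; scaling the architecture changes how many of them are filtered per step. The Python sketch below schematises this scheduling; it stands in for the thesis hardware, whose two-stage horizontal/vertical pattern is more elaborate.

```python
def wavefront_schedule(cols, rows, units):
    """Group macroblocks into processing steps under a wavefront dependency
    (MB(x, y) waits for MB(x-1, y) and MB(x, y-1)) with a limited number
    of parallel processing units. Returns the list of steps."""
    steps = []
    for d in range(cols + rows - 1):                 # anti-diagonals
        diag = [(x, d - x) for x in range(cols) if 0 <= d - x < rows]
        for i in range(0, len(diag), units):         # split by unit count
            steps.append(diag[i:i + units])
    return steps

# Scaling the architecture from 1 to 3 units shortens the schedule:
for units in (1, 2, 3):
    print(units, "unit(s):", len(wavefront_schedule(4, 3, units)), "steps")
```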

Relevance: 100.00%

Abstract:

Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase in engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open-source hypervisor designed as a generic virtualization layer for heterogeneous multicores. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance and automotive domains; the impact of MultiPARTES on the targeted domains is also discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY) mixed-criticality integration is considered for multicore processors. Key challenges are the combination of software virtualization and hardware segregation, and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight, etc.) along with development and certification methodology.