950 results for Quality function deployment


Relevance: 30.00%

Abstract:

Pencil beam scanned (PBS) proton therapy has many advantages over conventional radiotherapy, but its effectiveness for treating mobile tumours remains questionable. Gating dose delivery to the breathing pattern is a well-developed method in conventional radiotherapy for mitigating tumour motion, but its clinical efficiency for PBS proton therapy is not yet well documented. In this study, the dosimetric benefits and the treatment efficiency of beam gating for PBS proton therapy have been comprehensively evaluated. A series of dedicated 4D dose calculations (4DDC) has been performed on 9 different 4DCT(MRI) liver data sets, which provide realistic 4DCTs with motion information extracted from 4DMRI. The value of 4DCT(MRI) is its capability of providing not only patient geometries and deformable breathing characteristics, but also variations in the breathing patterns between breathing cycles. In order to monitor target motion and derive a gating signal, we simulate time-resolved beam's eye view (BEV) x-ray images as an online motion surrogate. 4DDCs have been performed using three amplitude-based gating window sizes (10/5/3 mm), with motion surrogates derived from either pre-implanted fiducial markers or the diaphragm. In addition, gating has also been simulated in combination with up to 19 times rescanning, using either volumetric or layered approaches. The quality of the resulting 4DDC plans has been quantified in terms of the plan homogeneity index (HI), total treatment time and duty cycle. Results show that neither beam gating nor rescanning alone can fully retrieve the plan homogeneity of the static reference plan. Especially for variable breathing patterns, reductions of the effective duty cycle to as low as 10% have been observed with the smallest gating window (3 mm), implying that gating on its own would, for such cases, result in much longer treatment times. In addition, when rescanning is applied on its own, large differences between volumetric and layered rescanning have been observed as a function of the increasing number of re-scans. However, once gating and rescanning are combined, HI values within 2% of the static plan could be achieved in the clinical target volume, with only moderately prolonged treatment times, irrespective of the rescanning strategy used. Moreover, these results are independent of the motion surrogate used. In conclusion, our results suggest that image-guided beam gating, combined with rescanning, is a feasible, effective and efficient motion mitigation approach for PBS-based liver tumour treatments.
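To make the gating mechanism concrete, here is a minimal sketch (not the study's code) of how an amplitude-based gating window translates into an effective duty cycle: the beam is on only while the motion surrogate stays inside the window, so the duty cycle is the beam-on fraction of the breathing trace. The 25 Hz sampling, the drifting sinusoid and the end-exhale baseline heuristic are illustrative assumptions.

    import numpy as np

    def effective_duty_cycle(amplitude_mm, window_mm, baseline=None):
        # Fraction of time a motion surrogate stays inside an
        # amplitude-based gating window. The window centre defaults to
        # an end-exhale estimate (5th percentile of the trace).
        amplitude_mm = np.asarray(amplitude_mm, dtype=float)
        if baseline is None:
            baseline = np.percentile(amplitude_mm, 5)
        beam_on = np.abs(amplitude_mm - baseline) <= window_mm / 2.0
        return beam_on.mean()

    # Variable breathing: a 4 mm sinusoid with slow baseline drift,
    # sampled at 25 Hz for two minutes.
    t = np.arange(0, 120, 1 / 25)
    trace = 4 * np.sin(2 * np.pi * t / 4.5) + 0.02 * t
    for w in (10, 5, 3):
        print(f"{w} mm window: duty cycle = {effective_duty_cycle(trace, w):.0%}")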

Relevance: 30.00%

Abstract:

Characterization of spatial and temporal variation in grassland productivity and nutrition is crucial for a comprehensive understanding of ecosystem function. Although within-site heterogeneity in soil and plant properties has been shown to be relevant for plant community stability, spatiotemporal variability in these factors is still understudied in temperate grasslands. Our study aimed to detect whether soil characteristics and plant diversity could explain the observed small-scale spatial and temporal variability in grassland productivity, biomass nutrient concentrations, and nutrient limitation. To this end, we sampled 360 plots of 20 cm × 20 cm each at six consecutive dates in an unfertilized grassland in Southern Germany. Nutrient limitation was estimated using nutrient ratios in plant biomass. Absolute values of, and spatial variability in, productivity, biomass nutrient concentrations, and nutrient limitation were strongly associated with sampling date. In April, spatial heterogeneity was high and most plots showed phosphorus deficiency, while later in the season nitrogen was the major limiting nutrient. Additionally, a small but significant positive association between plant diversity and biomass phosphorus concentrations was observed, which should be tested in more detail. We discuss how low biological activity, e.g. of soil microorganisms, might have influenced the observed heterogeneity of plant nutrition in early spring, in combination with reduced active acquisition of soil resources by plants. These early-season conditions are particularly relevant for future studies as they differ substantially from the more thoroughly studied later-season conditions. Our study underlines the importance of considering small spatial scales and temporal variability to better elucidate mechanisms of ecosystem functioning and plant community assembly.
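Nutrient limitation "estimated using nutrient ratios in plant biomass" is typically implemented as a threshold rule on the biomass N:P ratio. The sketch below uses one widely cited convention (N:P < 14 suggesting N limitation, > 16 suggesting P limitation); the study's actual criteria are not given in the abstract, so the thresholds are an assumption.

    def nutrient_limitation(n_mg_g, p_mg_g, low=14.0, high=16.0):
        # Classify a biomass sample by its N:P mass ratio, using the
        # common (assumed) thresholds of 14 and 16.
        ratio = n_mg_g / p_mg_g
        if ratio < low:
            return "N-limited"
        if ratio > high:
            return "P-limited"
        return "co-limited"

    # April-like sample (high N:P -> phosphorus deficiency) and a
    # later-season sample (low N:P -> nitrogen limitation):
    print(nutrient_limitation(n_mg_g=22.0, p_mg_g=1.1))  # P-limited
    print(nutrient_limitation(n_mg_g=14.0, p_mg_g=1.4))  # N-limited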

Relevance: 30.00%

Abstract:

The main goal of this study was to relate physical changes in image quality, measured by the Modulation Transfer Function (MTF), to diagnostic accuracy. One hundred and fifty Kodak Min-R screen/film combination conventional craniocaudal mammograms obtained with the Pfizer Microfocus Mammographic system were selected from the files of the Department of Radiology at M.D. Anderson Hospital and Tumor Institute. The mammograms included 88 cases with a variety of benign diagnoses and 62 cases with a variety of malignant biopsy diagnoses. The average age of the patient population was 55 years. 70 cases presented calcifications, with 30 cases having calcifications smaller than 0.5 mm. 46 cases presented irregularly bordered masses larger than 1 cm. 30 cases presented smoothly bordered masses, with 20 larger than 1 cm. Four separate copies of the original images were made, each having a different change in the MTF, using a defocusing technique whereby copies of the original were obtained by light exposure through different thicknesses (spacing) of transparent film base. The mammograms were randomized and evaluated by three experienced mammographers for the degree of visibility of various anatomical breast structures and pathological lesions (masses and calcifications), subjective image quality, and mammographic interpretation. 3,000 separate evaluations were analyzed by several statistical techniques, including Receiver Operating Characteristic (ROC) curve analysis, the McNemar test for differences between proportions, and the Landis et al. method of agreement (weighted kappa) for ordinal categorical data. Results from the statistical analysis show: (1) There were no statistically significant differences in the diagnostic accuracy of the observers when diagnosing from mammograms with the same MTF. (2) There were no statistically significant differences in diagnostic accuracy for each observer when diagnosing from mammograms with the different MTFs used in the study. (3) There were statistically significant differences in detail visibility between the copies and the originals; detail visibility was better in the originals. (4) Feature interpretations were not significantly different between the originals and the copies. (5) Perception of image quality did not affect image interpretation. Continuation and improvement of this research can be accomplished by using a case population more sensitive to MTF changes, i.e., asymptomatic women with minimal breast cancer; involving more observers (including less experienced radiologists and experienced technologists); and using a minimum of 200 benign and 200 malignant cases.
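For readers unfamiliar with the two paired-data statistics named above, the sketch below shows how they are typically computed with standard Python libraries; the 2×2 counts and the ordinal visibility ratings are invented for illustration and are not the study's data.

    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar
    from sklearn.metrics import cohen_kappa_score

    # McNemar test on paired correct/incorrect calls for the same cases
    # read on an original and on a degraded-MTF copy.
    table = np.array([[62, 9],   # correct on both / only on original
                      [7, 72]])  # only on copy / wrong on both
    print(mcnemar(table, exact=True).pvalue)

    # Linearly weighted kappa for two observers rating detail
    # visibility on an ordinal 1-5 scale.
    obs1 = [5, 4, 4, 3, 2, 5, 1, 3, 4, 2]
    obs2 = [5, 4, 3, 3, 2, 4, 2, 3, 4, 1]
    print(cohen_kappa_score(obs1, obs2, weights="linear"))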

Relevance: 30.00%

Abstract:

Cancer patients increasingly request alternative therapies such as imagery techniques and support groups. Although research suggests evidence of enhanced psychosocial functioning with supportive group therapy and enhanced immune function with imagery techniques, studies are anecdotal or limited to case studies or descriptive reports. The efficacy of these alternative therapies should be validated by randomized, controlled trials, and the mechanisms of action mediating immune function and outcome examined. In a 12-month pilot study, we evaluated the feasibility of conducting a controlled study with clinical trial methodology to test the effects of imagery/relaxation and support on quality of life, emotional well-being, and immune function in women after breast cancer. Using a randomized pre-post test design with three intervention waves, we assigned women (n = 47) to standard care (n = 15), standard care plus six weekly support sessions (n = 16), or standard care plus six weekly imagery/relaxation sessions (n = 16). The primary aim of this pilot study is to determine the feasibility of conducting a clinical trial of alternative therapies in a clinical care setting. Secondary aims are to determine parameter estimates for the effects of the two treatment groups on quality of life, coping, social support, and immune function, and to describe methodology issues related to trials of alternative therapies. The research provides direction for future studies of alternative therapies by describing the recruitment, clinical trial experience, and related methodology issues. The study extends previous work by differentiating the effects of a support group from those of mental imagery among outpatient groups that are homogeneous regarding cancer type and treatment stage. The study provides data for future longitudinal studies of disease progression by differentiating the effectiveness of interventions designed to enhance quality of life, coping, social support, and immune function and, subsequently, alter the clinical course of disease.

Relevance: 30.00%

Abstract:

"Technology assessment is a comprehensive form of policy research that examines the short- and long-term social consequences of the application or use of technology" (US Congress 1967).^ This study explored a research methodology appropriate for technology assessment (TA) within the health industry. The case studied was utilization of external Small-Volume Infusion Pumps (SVIP) at a cancer treatment and research center. Primary and secondary data were collected in three project phases. In Phase I, hospital prescription records (N = 14,979) represented SVIP adoption and utilization for the years 1982-1984. The Candidate Adoption-Use (CA-U) diffusion paradigm developed for this study was germane. Compared to classic and unorthodox curves, CA-U more accurately simulated empiric experience. The hospital SVIP 1983-1984 trends denoted assurance in prescribing chemotherapy and concomitant balloon SVIP efficacy and efficiency. Abandonment of battery pumps was predicted while exponential demand for balloon SVIP was forecast for 1985-1987. In Phase II, patients using SVIP (N = 117) were prospectively surveyed from July to October 1984; the data represented a single episode of therapy. The questionnaire and indices, specifically designed to measure the impact of SVIP, evinced face validity. Compeer group data were from pre-SVIP case reviews rather than from an inpatient sample. Statistically significant results indicated that outpatients using SVIP interacted socially more than inpatients using the alternative technology. Additionally, the hospital's education program effectively taught clients to discriminate between self care and professional SVIP services. In these contexts, there was sufficient evidence that the alternative technology restricted patients activity whereas SVIP permitted patients to function more independently and in a social lifestyle, thus adding quality to life. In Phase III, diffusion forecast and patient survey findings were combined with direct observation of clinic services to profile some economic dimensions of SVIP. These three project phases provide a foundation for executing: (1) cost effectiveness analysis of external versus internal infusors, (2) institutional resource allocation, and (3) technology deployment to epidemiology-significant communities. The models and methods tested in this research of clinical technology assessment are innovative and do assess biotechnology. ^

Relevance: 30.00%

Abstract:

Microzooplankton (the 20 to 200 µm size class of zooplankton) is recognised as an important part of marine pelagic ecosystems. In terms of biomass and abundance, heterotrophic dinoflagellates are one of the important groups of organisms in microzooplankton. However, their rates (grazing and growth), feeding behaviour and prey preferences are poorly known and understood. A set of data was assembled in order to derive a better understanding of heterotrophic dinoflagellate rates in response to parameters such as prey concentration, prey type (size and species), temperature and their own size. With these objectives, the literature was searched for laboratory experiments that studied the effect of one or more of these parameters. The criteria for selection and inclusion in the database included: (i) a controlled laboratory experiment with a known dinoflagellate feeding on a known prey; (ii) the presence of ancillary information about the experimental conditions and the organisms used (cell volume, cell dimensions, and carbon content). Rates and ancillary information were reported in units chosen to meet each experimenter's needs, creating a need to harmonize the data units after collection. In addition, different units can link to different mechanisms (carbon to the nutritive quality of the prey, volume to size limits). As a result, grazing rates are available as pg C dinoflagellate⁻¹ h⁻¹, µm³ dinoflagellate⁻¹ h⁻¹ and prey cells dinoflagellate⁻¹ h⁻¹; clearance rate was calculated if not given, and growth rate is expressed per day.
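Because each grazing rate is reported in carbon, volume and prey-cell units, harmonization amounts to multiplying a cells-based ingestion rate by the prey's per-cell carbon content or volume; clearance rate, when missing, is ingestion divided by prey concentration. A minimal sketch with illustrative prey values:

    def harmonize_grazing(rate_cells_per_grazer_h, prey_volume_um3,
                          prey_carbon_pg):
        # Express a cells-based ingestion rate (per grazer per hour) in
        # the three unit systems used in the compilation.
        return {
            "prey cells": rate_cells_per_grazer_h,
            "pg C": rate_cells_per_grazer_h * prey_carbon_pg,
            "um3": rate_cells_per_grazer_h * prey_volume_um3,
        }

    def clearance_rate_ml(ingestion_cells_per_grazer_h, prey_cells_per_ml):
        # Volume of water swept clear per grazer per hour, in ml.
        return ingestion_cells_per_grazer_h / prey_cells_per_ml

    # Illustrative prey: 150 um3 cell volume, 30 pg C per cell, offered
    # at 5,000 cells per ml.
    print(harmonize_grazing(1.5, prey_volume_um3=150.0, prey_carbon_pg=30.0))
    print(clearance_rate_ml(1.5, prey_cells_per_ml=5000))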

Relevance: 30.00%

Abstract:

The CoastColour Round Robin (CCRR) project (http://www.coastcolour.org), funded by the European Space Agency (ESA), was designed to bring together a variety of reference datasets and to use these to test algorithms and assess their accuracy for retrieving water quality parameters. This information was then developed to help end-users of remote sensing products to select the most accurate algorithms for their coastal region. To facilitate this, an inter-comparison of the performance of algorithms for the retrieval of in-water properties over coastal waters was carried out. The comparison used three types of datasets on which ocean colour algorithms were tested. The description and comparison of the three datasets are the focus of this paper; they include the Medium Resolution Imaging Spectrometer (MERIS) Level 2 match-ups, in situ reflectance measurements, and data generated by a radiative transfer model (HydroLight). The datasets mainly consisted of 6,484 marine reflectances associated with various geometrical (sensor viewing and solar angles) and sky conditions and water constituents: Total Suspended Matter (TSM) and Chlorophyll-a (CHL) concentrations, and the absorption of Coloured Dissolved Organic Matter (CDOM). Inherent optical properties were also provided in the simulated datasets (5,000 simulations) and from 3,054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, and of CHL and TSM as a function of reflectance, from the three datasets are compared. Match-up and in situ sites where deviations occur are identified. The distributions of the three reflectance datasets are also compared to the simulated and in situ reflectances used previously by the International Ocean Colour Coordinating Group (IOCCG, 2006) for algorithm testing, showing a clear extension of the CCRR data, which covers more turbid waters.

Relevance: 30.00%

Abstract:

Microzooplankton (the 20 to 200 µm size class of zooplankton) is recognised as an important part of marine pelagic ecosystems. In terms of biomass and abundance, pelagic ciliates are one of the important groups of organisms in microzooplankton. However, their rates (grazing and growth), feeding behaviour and prey preferences are poorly known and understood. A set of data was assembled in order to derive a better understanding of pelagic ciliate rates in response to parameters such as prey concentration, prey type (size and species), temperature and their own size. With these objectives, the literature was searched for laboratory experiments that studied the effect of one or more of these parameters. The criteria for selection and inclusion in the database included: (i) a controlled laboratory experiment with a known ciliate feeding on a known prey; (ii) the presence of ancillary information about the experimental conditions and the organisms used (cell volume, cell dimensions, and carbon content). Rates and ancillary information were reported in units chosen to meet each experimenter's needs, creating a need to harmonize the data units after collection. In addition, different units can link to different mechanisms (carbon to the nutritive quality of the prey, volume to size limits). As a result, grazing rates are available as pg C ciliate⁻¹ h⁻¹, µm³ ciliate⁻¹ h⁻¹ and prey cells ciliate⁻¹ h⁻¹; clearance rate was calculated if not given, and growth rate is expressed per day.

Relevance: 30.00%

Abstract:

In this study, we examine the effects of tariff reduction on firms' quality upgrading by employing an Indonesian plant-product-level panel dataset matched with a plant-level dataset. We explore the effects of lower output and input tariffs separately, by focusing on the apparel industry. By estimating the Berry-type demand function, we derive product-quality indicators based on the Khandelwal (Review of Economic Studies, 2010) methodology, which enables us to isolate quality upgrading from changes in prices. Our findings are as follows. First, a reduction in output tariffs does not affect product quality upgrading. Second, a reduction in input tariffs boosts quality upgrading in general. In particular, this impact is greater for import firms, which is consistent with the fact that the source of the boost is the import of high-quality foreign inputs.
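A common implementation of the Khandelwal-style quality measure proceeds by fixing an elasticity of substitution σ, regressing ln q + σ ln p on fixed effects, and reading quality (up to scale) off the residual. The sketch below uses synthetic data and an assumed σ = 4 purely to show the mechanics; the paper's own Berry-type estimation of the demand parameters is not reproduced.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic plant-product panel: log quantity, log price, year.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "lnq": rng.normal(5.0, 1.0, 200),
        "lnp": rng.normal(0.0, 0.3, 200),
        "year": rng.choice([2000, 2001, 2002], 200),
    })

    sigma = 4.0                      # assumed elasticity of substitution
    df["y"] = df["lnq"] + sigma * df["lnp"]

    # Year fixed effects absorb common demand shifters; the scaled
    # residual is the quality index.
    fit = smf.ols("y ~ C(year)", data=df).fit()
    df["quality"] = fit.resid / (sigma - 1.0)
    print(df["quality"].describe())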

Relevance: 30.00%

Abstract:

Next generation telecommunications infrastructures are considered a principal example of a new technology for sustainable economic growth. Their deployment is expected to yield a wealth of innovations, new sources of employment and an improved quality of life, hopefully converted into economic growth. In line with these prospects, public administrations at supranational, national, regional and local levels have encouraged the development of these new infrastructures. Moreover, in times of economic crisis, public assistance to deploy such networks carries the promise of placing a weak economy on the road to prosperity. However, such arguments and political claims clearly require rigorous assessment. In particular, any such assessment must adequately address the appropriate form of modelling that best captures the key elements of identifiable progress from next generation access networks (NGAN).

Relevance: 30.00%

Abstract:

Neuro-evolutive development from birth until the age of six years is a decisive factor in a child's quality of life. Early detection of developmental disorders in early childhood can facilitate the necessary diagnosis and/or treatment. Primary-care pediatricians play a key role in such detection, as they can undertake the preventive and therapeutic actions required to promote a child's optimal development. However, lack of time and limited specific knowledge in primary care prevent the continuous application of procedures for the early detection of anomalies. This research paper focuses on the deployment and evaluation of a smart system that enhances the screening of language disorders in primary care. Pediatricians get support to proceed with the early referral of language disorders: the proposed model provides them with a decision-support tool for referral actions that trigger essential diagnostic and/or therapeutic actions for comprehensive individual development. The research started from a sample of 60 cases of children with language disorders. Validation was carried out in two complementary steps: first, by a team of seven experts from the fields of neonatology, pediatrics, neurology and language therapy, and, second, through the evaluation of 21 further, previously diagnosed cases. The results obtained show that the therapists positively accepted the system's proposal in 18 cases (86%) and suggested a system redesign for a single referral to a speech therapist in the three remaining cases.

Relevance: 30.00%

Abstract:

Recently, three-dimensional (3D) video has decisively burst onto the entertainment industry scene, arriving in households even before the standardization process has been completed. 3D television (3DTV) adoption and deployment can be seen as a major leap in television history, similar to previous transitions from black and white (B&W) to color, from analog to digital television (TV), and from standard definition to high definition. In this paper, we analyze current 3D video technology trends in order to define a taxonomy of the availability and possible introduction of 3D-based services. We also propose an audiovisual network services architecture which provides a smooth transition from two-dimensional (2D) to 3DTV in an Internet Protocol (IP)-based scenario. Based on subjective assessment tests, we also analyze the factors that will influence the quality of experience in these 3D video services, focusing on the effects of both coding and transmission errors. In addition, examples of the application of the architecture and results of the assessment tests are provided.
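Subjective assessment results of the kind referred to above are conventionally summarised as a Mean Opinion Score (MOS) per test condition together with a confidence interval, in the style of ITU-R BT.500. A small sketch with invented viewer ratings:

    import numpy as np
    from scipy import stats

    def mos_with_ci(scores, confidence=0.95):
        # Mean Opinion Score and t-based confidence interval for one
        # test condition (e.g. one coding or packet-loss level).
        scores = np.asarray(scores, dtype=float)
        mos = scores.mean()
        half = stats.t.ppf((1 + confidence) / 2, len(scores) - 1) \
               * stats.sem(scores)
        return mos, (mos - half, mos + half)

    # Hypothetical 1-5 ratings from 15 viewers for one error condition.
    ratings = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 2, 4, 3, 4, 4]
    print(mos_with_ci(ratings))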

Relevance: 30.00%

Abstract:

The complexity of planning a wireless sensor network depends on the aspects being optimized and on the application requirements. Even though Murphy's law applies everywhere in reality, a good planning algorithm will help designers become aware of the weak points of their design and improve them before the problems are exposed in a real deployment. A 3D multi-objective planning algorithm is proposed in this paper to provide solutions for the locations of nodes and their properties. It employs a purpose-built ray-tracing scheme for modelling the sensing signal and radio propagation; it is therefore sensitive to obstacles and makes the models of sensing coverage and link quality more practical compared with other heuristics that use ideal unit-disk models. The proposed algorithm aims at an overall optimization of hardware cost, coverage, link quality and lifetime. Each of these metrics is therefore modelled and normalized to compose a desirability function. An evolutionary algorithm is designed to efficiently tackle this NP-hard multi-objective optimization problem. The proposed algorithm is applicable to both indoor and outdoor 3D scenarios. The different parameters that affect performance are analyzed through extensive experiments, and two state-of-the-art algorithms are rebuilt and tested with the same configuration as the proposed algorithm. The results indicate that the proposed algorithm converges efficiently within 600 iterations and performs better than the compared heuristics.
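The abstract does not give the exact composition rule, but a desirability function over normalized metrics is commonly built as a weighted geometric mean, which collapses to zero whenever any single objective fails completely. A sketch of that composition, with made-up weights and values:

    def desirability(metrics, weights):
        # Weighted geometric mean of metrics already normalized to
        # [0, 1] with 1 = best (cost-type metrics inverted beforehand).
        d = 1.0
        for name, value in metrics.items():
            d *= max(value, 1e-12) ** weights[name]
        return d

    candidate = {"coverage": 0.92, "link_quality": 0.80,
                 "lifetime": 0.65, "cost": 0.70}   # cost already inverted
    weights = {"coverage": 0.4, "link_quality": 0.2,
               "lifetime": 0.2, "cost": 0.2}
    print(desirability(candidate, weights))  # scalar fitness for the EA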

Relevance: 30.00%

Abstract:

Many computer vision and human-computer interaction applications developed in recent years require evaluating complex, continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of this kind of function often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It provides an improvement upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation on modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, and in particular on the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
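To illustrate the underlying idea (uniform breakpoints rather than the paper's nearly optimal designs), the sketch below builds a piecewise-linear table for the Gaussian and checks the classic O(h²) error decay that makes large subinterval budgets effective; GPU texture units evaluate exactly this kind of table with their fixed-function linear interpolation.

    import numpy as np

    def pwl_table(f, a, b, n):
        # Sample f at n+1 uniform breakpoints; linear interpolation
        # between samples mimics fixed-function texture filtering.
        x = np.linspace(a, b, n + 1)
        return x, f(x)

    def gauss(x):
        return np.exp(-0.5 * x * x)

    xq = np.linspace(0.0, 4.0, 100001)
    for n in (64, 128):
        xk, yk = pwl_table(gauss, 0.0, 4.0, n)
        err = np.abs(np.interp(xq, xk, yk) - gauss(xq)).max()
        print(f"{n} segments: max abs error = {err:.2e}")
    # Doubling the subinterval count cuts the error roughly 4x (O(h^2)).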

Relevance: 30.00%

Abstract:

Embedded systems have traditionally been conceived to be specific-purpose computers with one fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions.
However, demands for versatility, more intelligent behaviour and, in summary, an increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the increasingly dynamic operating environments where they were progressively being deployed. This brought as a result an increasing need for systems to respond by themselves to events unforeseen at design time, such as: changes in input data characteristics and in the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing, but in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair. Traditionally, the soft part of a system has mostly been the only place to provide it with some degree of adaptation capability. However, the performance-to-power ratios of software-driven devices like microprocessors are not adequate for embedded systems in many situations. In this scenario, the resulting rise in application complexity is being partly addressed by raising device complexity in the form of multi- and many-core devices; but sadly, this keeps increasing power consumption. Besides, design methodologies have not improved accordingly, so the computational power available from all these cores cannot be completely leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution to improve performance-to-power ratios has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are highly prohibitive except for some mass-production cases and, besides, the static nature of their structure complicates the solution to the adaptation needs. Advances in fabrication technologies have meant that the once slow, small FPGA, used as glue logic in bigger systems, has grown into a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can be changed (reconfigured) at run time while the rest remain active. Besides, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, that guides the direction of change.
It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it, generation after generation. Individuals are circuit configurations in the form of bitstreams that encode reconfigurable circuit descriptions. By selecting those that behave best, i.e., with a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by applying so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle for advancement in EHW: closed (not publicly known) bitstream formats; dependence on manufacturer tools with very limited support for DPR; slow reconfiguration speed; and the fact that random bitstream modifications could be hazardous for device integrity are some of the reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), made it possible to keep investigating in this field while DPR technology kept maturing. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum frequency of operation. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate to advance research in self-adaptive systems. Combining a self-reconfigurable computing substrate able to be dynamically changed at run time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed to contribute to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters.
Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation. The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented to image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main quest lies in reducing the supercomputing resources reported in previous works for the optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks such as the filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, the required goal is evolving, at run time, image processing behaviours that are unknown at design time (within a certain complexity range). Here, the mission of the proposal is the incorporation of DPR in EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously studied. In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a library accessible at run time
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array
The main contributions of this thesis can be summarised in the following list:
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an evolutionary algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, featuring a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.
– The definition of a library of such processing elements suited for the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR in EHW systems, overcoming the main drawbacks of the previous approach based on virtual reconfigurable circuits (VRCs). Implementation details for both approaches are also originally compared in this work.
– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at FPGA CLB level and at processing-element level are proposed, and, using the RE, a systematic fault analysis is performed for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits online adaptation to different types of noise and different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.
This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. Next, the reference framework in which this dissertation is framed is analysed in the second part: chapter 2 features an introduction to the notions of self-adaptation and autonomic computing as a more general research field than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique to drive adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology to host self-adaptive hardware; and finally chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part of the work follows, containing the proposal, development and results obtained: while chapter 6 contains a statement of the thesis goals and the description of the proposal as a whole, chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9 in part 4 concludes the work and describes future research paths.
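As a closing illustration of the Adaptation Engine idea (a sketch, not the thesis implementation), the loop below evolves array configurations drawn from a function library under an elitist, mutation-only scheme; in the real platform, fitness would be evaluated on the hardware reconfigured by the RE rather than against the toy target used here.

    import random

    # Individuals are array configurations: one library-function index
    # per processing node. A toy target stands in for evaluating the
    # reconfigured circuit's behaviour.
    LIBRARY_SIZE, NODES, POP, GENS = 8, 16, 20, 300
    random.seed(1)
    TARGET = [random.randrange(LIBRARY_SIZE) for _ in range(NODES)]

    def fitness(ind):
        return sum(g == t for g, t in zip(ind, TARGET))

    def mutate(ind, p=0.05):
        return [random.randrange(LIBRARY_SIZE) if random.random() < p
                else g for g in ind]

    pop = [[random.randrange(LIBRARY_SIZE) for _ in range(NODES)]
           for _ in range(POP)]
    for gen in range(GENS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == NODES:
            break                        # adequate configuration found
        parents = pop[:POP // 4]         # elitist selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(POP - len(parents))]
    best = max(pop, key=fitness)
    print(f"generation {gen}: best fitness {fitness(best)}/{NODES}")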