955 results for grattacielo-soft kill option-shenzhen-parametric design


Relevance:

100.00%

Publisher:

Abstract:

Statically balanced compliant mechanisms require no holding force throughout their range of motion while maintaining the advantages of compliant mechanisms. In this paper, a postbuckled fixed-guided beam is proposed to provide the negative stiffness to balance the positive stiffness of a compliant mechanism. To that end, a curve decomposition modeling method is presented to simplify the large deflection analysis. The modeling method facilitates parametric design insight and elucidates key points on the force-deflection curve. Experimental results validate the analysis. Furthermore, static balancing with fixed-guided beams is demonstrated for a rectilinear proof-of-concept prototype.
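To make the balancing idea concrete, here is a minimal numeric sketch (Python, with made-up stiffness values; this is not the paper's curve-decomposition model): a negative-stiffness element cancels the mechanism's positive stiffness, so the combined force-deflection curve is flat and, with suitable preload, the holding force is near zero throughout the range.

```python
import numpy as np

k_pos = 2.0  # N/mm: positive stiffness of the compliant mechanism (illustrative)

def f_mechanism(x):
    """Restoring force of the compliant mechanism, modelled as a linear spring."""
    return k_pos * x

def f_beam(x, k_neg=-2.0, preload=0.0):
    """Post-buckled fixed-guided beam idealised as a negative-stiffness element
    (a crude stand-in for the decomposed force-deflection curve)."""
    return preload + k_neg * x

x = np.linspace(0.0, 2.0, 21)          # deflection range, mm
f_total = f_mechanism(x) + f_beam(x)   # balanced: stiffnesses cancel
print(np.ptp(f_total))                 # ~0 N: no holding force anywhere in range
```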

Relevance:

100.00%

Publisher:

Abstract:

One of the most important issues in the contemporary debate is the long-term sustainability of society as we understand it today. Human beings are recovering the lost sensitivity that conceived us as one more piece within the natural cycle of life. We have finally understood that we cannot be self-sufficient and independent of the natural environment that surrounds us. Beyond respect and care, the gateway to the infinite knowledge that nature offers us at all levels and at all scales stands open. Within the architectural discipline there have been remarkable examples, such as Antoni Gaudí or Frei Otto, who referenced their work in the natural world and found in it the strategies and bases for architectural design. However, they have been a minority within the enormous cast of architects who defend the right angle. In recent decades the trend is changing. We are not referring so much to the growing sensitivity towards energy efficiency that has led to a renewed appreciation of vernacular architecture, transferring its wisdom to bioclimatic strategies. We refer to a specific case within the wide range of architectural forms that have appeared thanks to the incorporation of computational tools into design and production. The architectures that interest us are those that exploit these techniques to analyse and interpret the complex and highly efficient strategies found in nature and transfer them to the architectural discipline. This trend, framed within Biomimicry or biomimetics, is known as Bioarchitecture.

This thesis deals with morphology and, above all, with morphogenesis. Morphology refers to the study of a concrete form that allows us to understand a specific case; our focus, however, is on morphogenesis, that is, on the processes that generate those forms, in order to reproduce patterns and produce ranges of adaptable and reconfigurable cases. Studying form does not make this a "formalist" thesis in the pejorative, gestural sense usually attributed to the term. The research conceives form as the natural world does: form as a synthesis of efficiency. There is no gratuitous natural form, none that does not fulfil a specific function or that is not developed with the minimum material and the minimum possible energy. This quest for the "efficient form" is what takes us beyond formalist architecture.

The path of morphological research is traced, as the title of the thesis indicates, following the concrete guiding thread of the radiolaria. These single-celled microorganisms possess skeletons so complex that understanding their morphology requires a broad journey spanning more than 4,000 years of human knowledge: from the discovery of the Platonic solids, polyhedra that configure many of the global shapes of these skeletons, to the application of generative algorithms, which make it possible to understand and reproduce the behavioural patterns behind the packing and irregular tessellation systems of radiolarian skeletons.

The thesis does not aim to pose the problem from a biological or palaeontological point of view, although the first chapter inevitably carries out a referenced analysis of the current state of scientific knowledge. Morphological questions are analysed in greater depth, together with the different positions from which these microorganisms have served as references in the architectural discipline. We also found it necessary to analyse other natural patterns that share generative strategies with radiolarian skeletons. As noted, the second chapter follows an itinerary from the most basic geometries to the most complex ones related to the generative strategies of the shapes detected in the microorganisms. The analysis of these geometries is interleaved with examples of applications in architecture, design and art, and closes with a time chart that synthesises and relates the three lines of research addressed: natural, geometric and architectural. After the two central chapters, the final chapter recapitulates the strategies analysed and applies the knowledge acquired in the thesis through different prototypes, ranging from traditional analytical drawing to digital fabrication and parametric design, passing through analogue models in plaster, metal bars, resin, silicone, latex, etc.
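As a small companion to the generative-algorithm thread, the sketch below (Python with NumPy/SciPy; my own illustration, not taken from the thesis) builds a spherical Voronoi diagram, one common computational route to radiolarian-like irregular lattice tessellations on a sphere.

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

# Sample quasi-uniform points on the unit sphere (the "pore centres").
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# The spherical Voronoi diagram yields an irregular polygonal tessellation
# reminiscent of a radiolarian lattice skeleton.
sv = SphericalVoronoi(pts, radius=1.0)
sv.sort_vertices_of_regions()
print(len(sv.regions), "cells,", len(sv.vertices), "lattice nodes")
```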

Relevance:

100.00%

Publisher:

Abstract:

The design and construction processes in Architecture have historically shown very deficient optimisation when compared with other typically industrial activities. The constant aspiration to effective industrialisation, both to reach higher quality levels and to save resources, now receives an unbeatable opportunity from the field of information technology: Building Information Modelling, or BIM. What may initially seem to be merely a certain type of computer program is in fact a "process" concept that subverts many routines habitual today in the development of architectural projects and constructions. Including and developing data linked to the project, from its beginning to the end of its life cycle, brings the opportunity to create a dynamic, updatable virtual reality that can in turn be tested and optimised in every respect: before and during execution, as well as throughout the service life. Added to this is the opportunity to transmit the complete project data efficiently, with hardly any loss or rework, to the manufacturing chain, which facilitates the move to a truly significant industrialisation in building. Faced with a worldwide call to optimise resources and an undeniable interest in increasing economic benefits by reducing the uncertainty of processes, BIM is an unquestionable option for improvement, as has been acknowledged through its imminent mandatory implementation by governments (e.g. the United Kingdom in 2016 and Spain in 2018).

The modification of processes and professional roles brought about by the incorporation of BIM is highly significant and will shape the professional practice of future graduates in the Architecture, Engineering and Construction (AEC) disciplines. Universities must respond nimbly to these new needs, incorporating this methodology into formal teaching and contributing a synergetic vision that extracts the educational benefits underlying the BIM framework itself. In this sense BIM, by gathering the data set on a single virtual model, offers a singularly interesting potential. The three-dimensional reality of the model, continuously developed and updated, offers students a radically different management of graphic representation, in which the partial views of sections and plans, so hard to assimilate at the start of university education, become a mere a-posteriori request, extracted from the virtual model as needed. Design is always carried out on the single model itself, whatever working view is chosen at any moment, with the data and their construction relations always up to date and fully coherent. This condensed description of BIM's characteristics prefigures a large part of the educational benefits offered by BIM processes, especially as regards integrated design development and information management (including ICT). The ease of visual understanding of architectural elements, technical systems, their intrinsic relations and construction processes also stands out. To this is added the experimental development that the BIM platform offers through its collaborative software: simulation of the structural, energy and economic behaviour, among many others, of the virtual model based on the data inherent to the project.

This thesis describes an overall study to make explicit both the qualities of and the possible reservations about the use of BIM processes within a specific discipline: the teaching of Architecture. To this end, a general literature review on BIM and a specific one on teaching in Architecture were carried out, and the experiences of different interest groups were analysed within the concrete framework of the teaching of Architecture at Universidad Europea de Madrid. The analysis of the benefits of, or reservations about, the use of BIM was approached through surveys of students and interviews with AEC professionals, whether involved with BIM or not; diverse educational experiences are described, and the academic management of their experimental implementation is analysed. The conclusions of the study make it possible to synthesise an implementation of BIM methodology which, for greater clarity and ease of communication and handling, has been condensed into an eminently graphic Implementation Framework. It gives guidance on teaching actions for the development of specific competences, relying on the conceptual flexibility of the study plans within the European Higher Education Area (Bologna Declaration) to incorporate the new teaching tool naturally, at the service of the legally established educational objectives. The global approach of the proposed Implementation Framework facilitates the planning of educational actions with an overall perspective: combining one-off and vehicular BIM formats, establishing transversal synergies and harmonising resources, so that the methodology can benefit both the assimilation of the knowledge and skills established for the degree and the BIM learning flow itself. Likewise it reserves, even visually, those areas of knowledge in which, at least in current planning, the inclusion of BIM processes is not considered advantageous with respect to other methodologies, or is even inadequate for the established teaching objectives.

It is this last categorisation that characterises the conclusions of this research, centred on: 1. the unquestionable need to teach BIM concepts and processes from the very early stages of university education in Architecture; 2. the additional educational benefits that BIM brings to the development of the very diverse competences contemplated in the academic curriculum; and 3. the specificity of the architect's professional role, which will demand a careful and weighted implementation of BIM that respects the traditionally effective methodologies of creative development and adds value through a symbiotic reorientation with parametric design and digital fabrication, enabling an ultimately generative design.

Relevance:

100.00%

Publisher:

Abstract:

2′-O-(2-methoxyethyl) (2′-MOE) RNA possesses favorable pharmacokinetic properties that make it a promising option for the design of oligonucleotide drugs. Telomerase is a ribonucleoprotein that is up-regulated in many types of cancer, but its potential as a target for chemotherapy awaits the development of potent and selective inhibitors. Here we report inhibition of human telomerase by 2′-MOE RNA oligomers that are complementary to the RNA template region. Fully complementary oligomers inhibited telomerase in a cell extract with IC50 values of 5–10 nM at 37°C. IC50 values for mismatch-containing oligomers varied with length and phosphorothioate substitution. After introduction into DU 145 prostate cancer cells, inhibition of telomerase activity persisted for up to 7 days, equivalent to six population doublings. Inside cells, discrimination between complementary and mismatch-containing oligomers increased over time. Our results reveal two oligomers as especially promising candidates for the initiation of in vivo preclinical trials and emphasize that conclusions regarding oligonucleotide efficacy and specificity in cell extracts do not necessarily offer accurate predictions of activity inside cells.
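For readers unfamiliar with IC50 figures, the sketch below (Python) uses the standard Hill dose-response model, with an assumed IC50 of 7.5 nM (mid-range of the reported 5–10 nM) and an assumed Hill coefficient of 1, to show how residual telomerase activity would fall with oligomer concentration; it is illustrative, not a model from the paper.

```python
def residual_activity(conc_nM, ic50_nM=7.5, hill=1.0):
    """Fraction of telomerase activity remaining at inhibitor concentration
    conc_nM, using the standard Hill model (assumed parameters)."""
    return 1.0 / (1.0 + (conc_nM / ic50_nM) ** hill)

# At the IC50 itself, exactly 50% of activity remains by definition.
for c in [1.0, 7.5, 10.0, 100.0]:
    print(f"{c:6.1f} nM -> {100 * residual_activity(c):5.1f}% activity")
```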

Relevance:

100.00%

Publisher:

Abstract:

Purpose: The purpose of this work is to increase the possibilities of designing building components for specific demands, so as to increase the building's value, and to investigate how these possibilities are affected by automating the production process. Method: The theoretical framework on which this study is based was collected through literature studies and was then combined with empirical data obtained through qualitative methods such as interviews and planned observations. A case study was made of the building Ormhuset in Jönköping. Findings: The objective of this work is to investigate the possibilities for designing roofs using new automation methods in the production process of wooden roof structures. This study implies that parametric design can be used to generate new, innovative shapes and designs that are optimised according to specific criteria. Furthermore, increased use of automation in the production process of wooden roof trusses results in cheaper roof trusses, regardless of their shapes. The generated optimised designs are therefore cheaper and easier to produce when more automation is used in the production process. Implications: If parametric design is used, almost any kind of shape can be generated and optimised. To ensure the manufacturability of a design, an early connection between architect and manufacturer is important. Furthermore, increased use of automation can lead to easier and faster production of roof trusses, and investing in more automation can be relevant for companies with large production volumes. Using digital files to control the manufacturing machines saves time. There are alternative manufacturing methods for advanced wooden roof structures that are better suited to production but cannot be rationalised in the same way as roof trusses. Constraints on increased automation are often a high investment cost and limited space. Limitations: Had the study been performed on a case other than Ormhuset and with other respondents, the results might have differed, although they could be similar; this study is therefore not generally valid but only shows one possible outcome.
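A toy example of what "parametric design optimised according to specific criteria" can mean in practice is sketched below (Python; the gable-truss layout, material-length criterion and parameter values are hypothetical, not the models used in the study): geometry is generated from a few parameters and the cheapest feasible variant is selected automatically.

```python
import math

def total_member_length(span, rise, n_panels):
    """Material proxy for a symmetric gable truss with vertical webs:
    bottom chord + two rafters + a vertical post at each panel point.
    (A hypothetical toy layout for illustration.)"""
    rafters = 2 * math.hypot(span / 2, rise)
    posts = sum(
        rise * (1 - abs(2 * i / n_panels - 1))  # gable profile height at panel i
        for i in range(1, n_panels)
    )
    return span + rafters + posts

# Parametric sweep: explore rises and pick the cheapest design
# that still satisfies a minimum roof pitch of 15 degrees.
span = 12.0                                   # m
candidates = [r / 10 for r in range(5, 41)]   # rise from 0.5 m to 4.0 m
feasible = [r for r in candidates
            if math.degrees(math.atan2(r, span / 2)) >= 15]
best = min(feasible, key=lambda r: total_member_length(span, r, 8))
print(f"best rise: {best:.1f} m, "
      f"total member length: {total_member_length(span, best, 8):.2f} m")
```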

Relevance:

40.00%

Publisher:

Abstract:

Segment poses and joint kinematics estimated from skin markers are highly affected by soft tissue artifact (STA) and its rigid motion component (STARM). While four-marker clusters could decrease the non-rigid motion of STA during gait, other data, such as marker location or STARM patterns, would be crucial to compensate for STA in clinical gait analysis. The present study proposed 1) to devise a comprehensive average map illustrating the spatial distribution of STA over the lower limb during treadmill gait and 2) to analyze STARM from four-marker clusters assigned to areas extracted from this spatial distribution. All experiments used a stereophotogrammetric system to track the skin markers and a bi-plane fluoroscopic system to track the knee prosthesis. The spatial distribution of STA was computed on 19 subjects using 80 markers placed on the lower limb. Three different areas were extracted from the distribution map of the thigh. Marker displacement reached maxima of 24.9 mm and 15.3 mm in the proximal areas of the thigh and shank, respectively. STARM was larger on the thigh than on the shank, with RMS errors in cluster orientation between 1.2° and 8.1°. The translation RMS errors were also large (3.0 mm to 16.2 mm). No marker cluster correctly compensated for STARM. However, the coefficient of multiple correlation exhibited excellent scores between skin and bone kinematics, as well as for STARM between subjects. These correlations highlight dependencies between STARM and the kinematic components. This study provides new insights for modeling STARM in gait.
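As a small illustration of the RMS error metric used above, the following Python sketch computes the RMS difference between a cluster-based orientation curve and the bone (fluoroscopy-based) reference over a gait cycle; the curves and the synthetic STARM component are made up, not the study's data.

```python
import numpy as np

def rms_error(cluster_angles, bone_angles):
    """RMS difference between cluster-based and bone-based orientation
    curves over a gait cycle (per column if 2-D input is given)."""
    d = np.asarray(cluster_angles) - np.asarray(bone_angles)
    return np.sqrt(np.mean(d**2, axis=0))

# Toy gait cycle: a bone flexion curve plus a synthetic STARM-like
# rigid-motion component (offset + oscillation; values illustrative).
t = np.linspace(0, 1, 101)
bone = 30 * np.sin(2 * np.pi * t)              # deg
sta_rm = 4 * np.sin(2 * np.pi * t + 0.8) + 2   # deg
print(rms_error(bone + sta_rm, bone))          # -> RMS orientation error, deg
```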

Relevance:

40.00%

Publisher:

Abstract:

A low-cost, compact embedded design approach for actuating soft robots is presented. The complete fabrication procedure and mode of operation were demonstrated, and the performance of the complete system was shown by building a microcontroller-based hardware system, which was used to actuate a soft robot for bending motion. The actuation system, including the electronic circuit board and actuation components, was embedded in a 3D-printed casing to ensure a compact approach to actuating soft robots. Results show the viability of the system in actuating and controlling silicone-based soft robots to achieve bending motions. Qualitative measurements of uniaxial tensile testing, bending distance and pressure were obtained. This electronic design is easy to reproduce and to integrate into any specified soft robotic device requiring pneumatic actuation.
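The abstract does not detail the firmware, but a typical low-cost scheme for pneumatic actuation is simple on/off (bang-bang) pressure control with hysteresis; the Python sketch below simulates one such loop (the control scheme, chamber model and all values are assumptions for illustration, not the paper's design).

```python
def bang_bang_step(pressure, target, pump_on, hysteresis=5.0):
    """One step of on/off pressure control: pump on below target - h,
    off above target + h, unchanged inside the band (hypothetical scheme)."""
    if pressure < target - hysteresis:
        return True
    if pressure > target + hysteresis:
        return False
    return pump_on  # inside the hysteresis band: keep the current state

# Toy simulation: a chamber that gains pressure while the pump runs
# and slowly leaks otherwise (units arbitrary, labelled kPa).
p, pump = 0.0, False
for _ in range(50):
    pump = bang_bang_step(p, target=40.0, pump_on=pump)
    p += 2.0 if pump else -0.5
print(f"final pressure: {p:.1f} kPa, pump on: {pump}")
```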

Relevance:

40.00%

Publisher:

Abstract:

Laterally loaded piles are a typical situation in a large number of cases where deep foundations are used. The dissertation reported here focuses on the numerical simulation of laterally loaded piles. In the first chapter the best model settings are discussed at length, to establish a clear picture of the effects of interface adoption, model dimensions, refinement clusters and mesh coarseness. At a second stage, three distinct parametric analyses study the sensitivity of the model response to variations in the interface reduction factor, Eps50 and tensile cut-off. In addition, the adoption of an advanced soil model (NGI-ADP) is analysed, in order to capture the complex behaviour (different undrained shear strengths are involved) that governs the resisting process of clay under short-term static loads. Once a definitive model was set, a series of analyses was carried out with the objective of defining resistance-deflection (P-y) curves from Plaxis3D (2013) data. The major result of a large number of comparisons with curves from the American Petroleum Institute (API) recommendations is that the empirical curves have almost the same ultimate resistance but a higher initial stiffness. In the second part of the thesis, a simplified preliminary structural design of a jacket structure is carried out to evaluate the environmental forces that act on it and on its pile foundations. Finally, the lateral response of the piles is studied using the empirical curves.
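For context on the comparison, the API static p-y curve for soft clay follows Matlock's cube-root form; a minimal Python sketch is given below (illustrative parameter values, not the dissertation's case). The cube root implies a very steep slope at small deflections, consistent with the observation above that the empirical curves show a higher initial stiffness than the numerical ones.

```python
import numpy as np

def api_soft_clay_py(y, p_ult, y50):
    """Static p-y curve for soft clay after Matlock, as adopted in the API
    recommendation: p = 0.5 * pu * (y / y50)^(1/3), capped at pu
    (the cap is reached at y = 8 * y50)."""
    y = np.asarray(y, dtype=float)
    p = 0.5 * p_ult * np.cbrt(y / y50)
    return np.minimum(p, p_ult)

# Illustrative parameters only.
y = np.linspace(0.0, 0.1, 11)                       # lateral deflection, m
print(api_soft_clay_py(y, p_ult=150.0, y50=0.01))   # soil resistance, kN/m
```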

Relevance:

40.00%

Publisher:

Abstract:

Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
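As a sketch of the kind of error bookkeeping described above (my own simplification, not the paper's exact decomposition), the Python snippet below computes the user error as the planned-target-to-needle-tip distance and combines independent error sources by root-sum-square, a common first-order error budget.

```python
import numpy as np

def tip_to_target_error(planned, tip):
    """User error as defined above: Euclidean distance from the planned
    target to the actual needle tip, both expressed in the same frame."""
    return float(np.linalg.norm(np.asarray(planned) - np.asarray(tip)))

def combined_rms(*component_rms):
    """Root-sum-square combination of independent error sources
    (a first-order budget; the paper's decomposition may differ)."""
    return float(np.sqrt(sum(e**2 for e in component_rms)))

print(tip_to_target_error([10.0, 52.3, -7.1], [11.2, 54.0, -6.0]))  # mm
print(combined_rms(1.1, 2.0, 1.5))  # e.g. tracking + registration + deformation, mm
```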

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVES: In 2003 the International Breast Cancer Study Group (IBCSG) initiated the TEXT and SOFT randomized phase III trials to answer two questions concerning adjuvant treatment for premenopausal women with endocrine-responsive early breast cancer: (1) What is the role of aromatase inhibitors (AI) for women treated with ovarian function suppression (OFS)? (2) What is the role of OFS for women who remain premenopausal and are treated with tamoxifen? METHODS: TEXT randomized patients to receive exemestane or tamoxifen with OFS. SOFT randomized patients to receive exemestane with OFS, tamoxifen with OFS, or tamoxifen alone. Treatment was for 5 years from randomization. RESULTS: TEXT and SOFT successfully met their enrollment goals in 2011. The 5738 enrolled women had lower-risk disease and lower observed disease-free survival (DFS) event rates than anticipated. Consequently, 7 and 13 additional years of follow-up for TEXT and SOFT, respectively, were required to reach the targeted DFS events (median follow-up of about 10.5 and 15 years). To provide timely answers, protocol amendments in 2011 specified analyses based on chronological time and median follow-up. To assess the AI question, exemestane + OFS versus tamoxifen + OFS, a combined analysis of TEXT and SOFT became the primary analysis (n = 4717). The OFS question became the primary analysis from SOFT, assessing the unique comparison of tamoxifen + OFS versus tamoxifen alone (n = 2045). The first reports are anticipated in mid- and late 2014. CONCLUSIONS: We present the original designs of TEXT and SOFT and the adaptations made to ensure timely answers to two questions concerning optimal adjuvant endocrine treatment for premenopausal women with endocrine-responsive breast cancer. TRIAL REGISTRATION: TEXT, ClinicalTrials.gov NCT00066703; SOFT, ClinicalTrials.gov NCT00066690.

Relevance:

40.00%

Publisher:

Abstract:

Los sistemas empotrados han sido concebidos tradicionalmente como sistemas de procesamiento específicos que realizan una tarea fija durante toda su vida útil. Para cumplir con requisitos estrictos de coste, tamaño y peso, el equipo de diseño debe optimizar su funcionamiento para condiciones muy específicas. Sin embargo, la demanda de mayor versatilidad, un funcionamiento más inteligente y, en definitiva, una mayor capacidad de procesamiento comenzaron a chocar con estas limitaciones, agravado por la incertidumbre asociada a entornos de operación cada vez más dinámicos donde comenzaban a ser desplegados progresivamente. Esto trajo como resultado una necesidad creciente de que los sistemas pudieran responder por si solos a eventos inesperados en tiempo diseño tales como: cambios en las características de los datos de entrada y el entorno del sistema en general; cambios en la propia plataforma de cómputo, por ejemplo debido a fallos o defectos de fabricación; y cambios en las propias especificaciones funcionales causados por unos objetivos del sistema dinámicos y cambiantes. Como consecuencia, la complejidad del sistema aumenta, pero a cambio se habilita progresivamente una capacidad de adaptación autónoma sin intervención humana a lo largo de la vida útil, permitiendo que tomen sus propias decisiones en tiempo de ejecución. Éstos sistemas se conocen, en general, como sistemas auto-adaptativos y tienen, entre otras características, las de auto-configuración, auto-optimización y auto-reparación. Típicamente, la parte soft de un sistema es mayoritariamente la única utilizada para proporcionar algunas capacidades de adaptación a un sistema. Sin embargo, la proporción rendimiento/potencia en dispositivos software como microprocesadores en muchas ocasiones no es adecuada para sistemas empotrados. En este escenario, el aumento resultante en la complejidad de las aplicaciones está siendo abordado parcialmente mediante un aumento en la complejidad de los dispositivos en forma de multi/many-cores; pero desafortunadamente, esto hace que el consumo de potencia también aumente. Además, la mejora en metodologías de diseño no ha sido acorde como para poder utilizar toda la capacidad de cómputo disponible proporcionada por los núcleos. Por todo ello, no se están satisfaciendo adecuadamente las demandas de cómputo que imponen las nuevas aplicaciones. La solución tradicional para mejorar la proporción rendimiento/potencia ha sido el cambio a unas especificaciones hardware, principalmente usando ASICs. Sin embargo, los costes de un ASIC son altamente prohibitivos excepto en algunos casos de producción en masa y además la naturaleza estática de su estructura complica la solución a las necesidades de adaptación. Los avances en tecnologías de fabricación han hecho que la FPGA, una vez lenta y pequeña, usada como glue logic en sistemas mayores, haya crecido hasta convertirse en un dispositivo de cómputo reconfigurable de gran potencia, con una cantidad enorme de recursos lógicos computacionales y cores hardware empotrados de procesamiento de señal y de propósito general. Sus capacidades de reconfiguración han permitido combinar la flexibilidad propia del software con el rendimiento del procesamiento en hardware, lo que tiene la potencialidad de provocar un cambio de paradigma en arquitectura de computadores, pues el hardware no puede ya ser considerado más como estático. 
El motivo es que como en el caso de las FPGAs basadas en tecnología SRAM, la reconfiguración parcial dinámica (DPR, Dynamic Partial Reconfiguration) es posible. Esto significa que se puede modificar (reconfigurar) un subconjunto de los recursos computacionales en tiempo de ejecución mientras el resto permanecen activos. Además, este proceso de reconfiguración puede ser ejecutado internamente por el propio dispositivo. El avance tecnológico en dispositivos hardware reconfigurables se encuentra recogido bajo el campo conocido como Computación Reconfigurable (RC, Reconfigurable Computing). Uno de los campos de aplicación más exóticos y menos convencionales que ha posibilitado la computación reconfigurable es el conocido como Hardware Evolutivo (EHW, Evolvable Hardware), en el cual se encuentra enmarcada esta tesis. La idea principal del concepto consiste en convertir hardware que es adaptable a través de reconfiguración en una entidad evolutiva sujeta a las fuerzas de un proceso evolutivo inspirado en el de las especies biológicas naturales, que guía la dirección del cambio. Es una aplicación más del campo de la Computación Evolutiva (EC, Evolutionary Computation), que comprende una serie de algoritmos de optimización global conocidos como Algoritmos Evolutivos (EA, Evolutionary Algorithms), y que son considerados como algoritmos universales de resolución de problemas. En analogía al proceso biológico de la evolución, en el hardware evolutivo el sujeto de la evolución es una población de circuitos que intenta adaptarse a su entorno mediante una adecuación progresiva generación tras generación. Los individuos pasan a ser configuraciones de circuitos en forma de bitstreams caracterizados por descripciones de circuitos reconfigurables. Seleccionando aquellos que se comportan mejor, es decir, que tienen una mejor adecuación (o fitness) después de ser evaluados, y usándolos como padres de la siguiente generación, el algoritmo evolutivo crea una nueva población hija usando operadores genéticos como la mutación y la recombinación. Según se van sucediendo generaciones, se espera que la población en conjunto se aproxime a la solución óptima al problema de encontrar una configuración del circuito adecuada que satisfaga las especificaciones. El estado de la tecnología de reconfiguración después de que la familia de FPGAs XC6200 de Xilinx fuera retirada y reemplazada por las familias Virtex a finales de los 90, supuso un gran obstáculo para el avance en hardware evolutivo; formatos de bitstream cerrados (no conocidos públicamente); dependencia de herramientas del fabricante con soporte limitado de DPR; una velocidad de reconfiguración lenta; y el hecho de que modificaciones aleatorias del bitstream pudieran resultar peligrosas para la integridad del dispositivo, son algunas de estas razones. Sin embargo, una propuesta a principios de los años 2000 permitió mantener la investigación en el campo mientras la tecnología de DPR continuaba madurando, el Circuito Virtual Reconfigurable (VRC, Virtual Reconfigurable Circuit). En esencia, un VRC en una FPGA es una capa virtual que actúa como un circuito reconfigurable de aplicación específica sobre la estructura nativa de la FPGA que reduce la complejidad del proceso reconfiguración y aumenta su velocidad (comparada con la reconfiguración nativa). 
Es un array de nodos computacionales especificados usando descripciones HDL estándar que define recursos reconfigurables ad-hoc: multiplexores de rutado y un conjunto de elementos de procesamiento configurables, cada uno de los cuales tiene implementadas todas las funciones requeridas, que pueden seleccionarse a través de multiplexores tal y como ocurre en una ALU de un microprocesador. Un registro grande actúa como memoria de configuración, por lo que la reconfiguración del VRC es muy rápida ya que tan sólo implica la escritura de este registro, el cual controla las señales de selección del conjunto de multiplexores. Sin embargo, esta capa virtual provoca: un incremento de área debido a la implementación simultánea de cada función en cada nodo del array más los multiplexores y un aumento del retardo debido a los multiplexores, reduciendo la frecuencia de funcionamiento máxima. La naturaleza del hardware evolutivo, capaz de optimizar su propio comportamiento computacional, le convierten en un buen candidato para avanzar en la investigación sobre sistemas auto-adaptativos. Combinar un sustrato de cómputo auto-reconfigurable capaz de ser modificado dinámicamente en tiempo de ejecución con un algoritmo empotrado que proporcione una dirección de cambio, puede ayudar a satisfacer los requisitos de adaptación autónoma de sistemas empotrados basados en FPGA. La propuesta principal de esta tesis está por tanto dirigida a contribuir a la auto-adaptación del hardware de procesamiento de sistemas empotrados basados en FPGA mediante hardware evolutivo. Esto se ha abordado considerando que el comportamiento computacional de un sistema puede ser modificado cambiando cualquiera de sus dos partes constitutivas: una estructura hard subyacente y un conjunto de parámetros soft. De esta distinción, se derivan dos lineas de trabajo. Por un lado, auto-adaptación paramétrica, y por otro auto-adaptación estructural. El objetivo perseguido en el caso de la auto-adaptación paramétrica es la implementación de técnicas de optimización evolutiva complejas en sistemas empotrados con recursos limitados para la adaptación paramétrica online de circuitos de procesamiento de señal. La aplicación seleccionada como prueba de concepto es la optimización para tipos muy específicos de imágenes de los coeficientes de los filtros de transformadas wavelet discretas (DWT, DiscreteWavelet Transform), orientada a la compresión de imágenes. Por tanto, el objetivo requerido de la evolución es una compresión adaptativa y más eficiente comparada con los procedimientos estándar. El principal reto radica en reducir la necesidad de recursos de supercomputación para el proceso de optimización propuesto en trabajos previos, de modo que se adecúe para la ejecución en sistemas empotrados. En cuanto a la auto-adaptación estructural, el objetivo de la tesis es la implementación de circuitos auto-adaptativos en sistemas evolutivos basados en FPGA mediante un uso eficiente de sus capacidades de reconfiguración nativas. En este caso, la prueba de concepto es la evolución de tareas de procesamiento de imagen tales como el filtrado de tipos desconocidos y cambiantes de ruido y la detección de bordes en la imagen. En general, el objetivo es la evolución en tiempo de ejecución de tareas de procesamiento de imagen desconocidas en tiempo de diseño (dentro de un cierto grado de complejidad). 
En este caso, el objetivo de la propuesta es la incorporación de DPR en EHW para evolucionar la arquitectura de un array sistólico adaptable mediante reconfiguración cuya capacidad de evolución no había sido estudiada previamente. Para conseguir los dos objetivos mencionados, esta tesis propone originalmente una plataforma evolutiva que integra un motor de adaptación (AE, Adaptation Engine), un motor de reconfiguración (RE, Reconfiguration Engine) y un motor computacional (CE, Computing Engine) adaptable. El el caso de adaptación paramétrica, la plataforma propuesta está caracterizada por: • un CE caracterizado por un núcleo de procesamiento hardware de DWT adaptable mediante registros reconfigurables que contienen los coeficientes de los filtros wavelet • un algoritmo evolutivo como AE que busca filtros wavelet candidatos a través de un proceso de optimización paramétrica desarrollado específicamente para sistemas caracterizados por recursos de procesamiento limitados • un nuevo operador de mutación simplificado para el algoritmo evolutivo utilizado, que junto con un mecanismo de evaluación rápida de filtros wavelet candidatos derivado de la literatura actual, asegura la viabilidad de la búsqueda evolutiva asociada a la adaptación de wavelets. En el caso de adaptación estructural, la plataforma propuesta toma la forma de: • un CE basado en una plantilla de array sistólico reconfigurable de 2 dimensiones compuesto de nodos de procesamiento reconfigurables • un algoritmo evolutivo como AE que busca configuraciones candidatas del array usando un conjunto de funcionalidades de procesamiento para los nodos disponible en una biblioteca accesible en tiempo de ejecución • un RE hardware que explota la capacidad de reconfiguración nativa de las FPGAs haciendo un uso eficiente de los recursos reconfigurables del dispositivo para cambiar el comportamiento del CE en tiempo de ejecución • una biblioteca de elementos de procesamiento reconfigurables caracterizada por bitstreams parciales independientes de la posición, usados como el conjunto de configuraciones disponibles para los nodos de procesamiento del array Las contribuciones principales de esta tesis se pueden resumir en la siguiente lista: • Una plataforma evolutiva basada en FPGA para la auto-adaptación paramétrica y estructural de sistemas empotrados compuesta por un motor computacional (CE), un motor de adaptación (AE) evolutivo y un motor de reconfiguración (RE). Esta plataforma se ha desarrollado y particularizado para los casos de auto-adaptación paramétrica y estructural. • En cuanto a la auto-adaptación paramétrica, las contribuciones principales son: – Un motor computacional adaptable mediante registros que permite la adaptación paramétrica de los coeficientes de una implementación hardware adaptativa de un núcleo de DWT. – Un motor de adaptación basado en un algoritmo evolutivo desarrollado específicamente para optimización numérica, aplicada a los coeficientes de filtros wavelet en sistemas empotrados con recursos limitados. – Un núcleo IP de DWT auto-adaptativo en tiempo de ejecución para sistemas empotrados que permite la optimización online del rendimiento de la transformada para compresión de imágenes en entornos específicos de despliegue, caracterizados por tipos diferentes de señal de entrada. – Un modelo software y una implementación hardware de una herramienta para la construcción evolutiva automática de transformadas wavelet específicas. 
• Por último, en cuanto a la auto-adaptación estructural, las contribuciones principales son: – Un motor computacional adaptable mediante reconfiguración nativa de FPGAs caracterizado por una plantilla de array sistólico en dos dimensiones de nodos de procesamiento reconfigurables. Es posible mapear diferentes tareas de cómputo en el array usando una biblioteca de elementos sencillos de procesamiento reconfigurables. – Definición de una biblioteca de elementos de procesamiento apropiada para la síntesis autónoma en tiempo de ejecución de diferentes tareas de procesamiento de imagen. – Incorporación eficiente de la reconfiguración parcial dinámica (DPR) en sistemas de hardware evolutivo, superando los principales inconvenientes de propuestas previas como los circuitos reconfigurables virtuales (VRCs). En este trabajo también se comparan originalmente los detalles de implementación de ambas propuestas. – Una plataforma tolerante a fallos, auto-curativa, que permite la recuperación funcional online en entornos peligrosos. La plataforma ha sido caracterizada desde una perspectiva de tolerancia a fallos: se proponen modelos de fallo a nivel de CLB y de elemento de procesamiento, y usando el motor de reconfiguración, se hace un análisis sistemático de fallos para un fallo en cada elemento de procesamiento y para dos fallos acumulados. – Una plataforma con calidad de filtrado dinámica que permite la adaptación online a tipos de ruido diferentes y diferentes comportamientos computacionales teniendo en cuenta los recursos de procesamiento disponibles. Por un lado, se evolucionan filtros con comportamientos no destructivos, que permiten esquemas de filtrado en cascada escalables; y por otro, también se evolucionan filtros escalables teniendo en cuenta requisitos computacionales de filtrado cambiantes dinámicamente. Este documento está organizado en cuatro partes y nueve capítulos. La primera parte contiene el capítulo 1, una introducción y motivación sobre este trabajo de tesis. A continuación, el marco de referencia en el que se enmarca esta tesis se analiza en la segunda parte: el capítulo 2 contiene una introducción a los conceptos de auto-adaptación y computación autonómica (autonomic computing) como un campo de investigación más general que el muy específico de este trabajo; el capítulo 3 introduce la computación evolutiva como la técnica para dirigir la adaptación; el capítulo 4 analiza las plataformas de computación reconfigurables como la tecnología para albergar hardware auto-adaptativo; y finalmente, el capítulo 5 define, clasifica y hace un sondeo del campo del hardware evolutivo. Seguidamente, la tercera parte de este trabajo contiene la propuesta, desarrollo y resultados obtenidos: mientras que el capítulo 6 contiene una declaración de los objetivos de la tesis y la descripción de la propuesta en su conjunto, los capítulos 7 y 8 abordan la auto-adaptación paramétrica y estructural, respectivamente. Finalmente, el capítulo 9 de la parte 4 concluye el trabajo y describe caminos de investigación futuros. ABSTRACT Embedded systems have traditionally been conceived to be specific-purpose computers with one, fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions. 
However, demands for versatility, more intelligent behaviour and, in summary, an increased computing capability began to clash with these limitations, intensified by the uncertainty associated to the more dynamic operating environments where they were progressively being deployed. This brought as a result an increasing need for systems to respond by themselves to unexpected events at design time, such as: changes in input data characteristics and system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in functional specifications caused by dynamically changing system objectives. As a consequence, systems complexity is increasing, but in turn, autonomous lifetime adaptation without human intervention is being progressively enabled, allowing them to take their own decisions at run-time. This type of systems is known, in general, as selfadaptive, and are able, among others, of self-configuration, self-optimisation and self-repair. Traditionally, the soft part of a system has mostly been so far the only place to provide systems with some degree of adaptation capabilities. However, the performance to power ratios of software driven devices like microprocessors are not adequate for embedded systems in many situations. In this scenario, the resulting rise in applications complexity is being partly addressed by rising devices complexity in the form of multi and many core devices; but sadly, this keeps on increasing power consumption. Besides, design methodologies have not been improved accordingly to completely leverage the available computational power from all these cores. Altogether, these factors make that the computing demands new applications pose are not being wholly satisfied. The traditional solution to improve performance to power ratios has been the switch to hardware driven specifications, mainly using ASICs. However, their costs are highly prohibitive except for some mass production cases and besidesthe static nature of its structure complicates the solution to the adaptation needs. The advancements in fabrication technologies have made that the once slow, small FPGA used as glue logic in bigger systems, had grown to be a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal and general purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture since hardware cannot be considered as static anymore. This is so, since, as is the case with SRAMbased FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can now be changed (reconfigured) at run-time while the rest remains active. Besides, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is actually covered under the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is turning hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural, biological species, that guides the direction of change. 
It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered as universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to get adapted to its surrounding environment by progressively getting better fitted to it generation after generation. Individuals become circuit configurations representing bitstreams that feature reconfigurable circuit descriptions. By selecting those that behave better, i.e., with a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population by using so called genetic operators like mutation and recombination. As generations succeed one another, the whole population is expected to approach to the optimum solution to the problem of finding an adequate circuit configuration that fulfils system objectives. The state of reconfiguration technology after Xilinx XC6200 FPGA family was discontinued and replaced by Virtex families in the late 90s, was a major obstacle for advancements in EHW; closed (non publicly known) bitstream formats; dependence on manufacturer tools with highly limiting support of DPR; slow speed of reconfiguration; and random bitstream modifications being potentially hazardous for device integrity, are some of these reasons. However, a proposal in the first 2000s allowed to keep investigating in this field while DPR technology kept maturing, the Virtual Reconfigurable Circuit (VRC). In essence, a VRC in an FPGA is a virtual layer acting as an application specific reconfigurable circuit on top of an FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources; routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast given it only involves writing this register, which drives the selection signals of the set of multiplexers. However, large overheads are introduced by this virtual layer; an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces maximum frequency of operation. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate to advance research in self-adaptive systems. Combining a selfreconfigurable computing substrate able to be dynamically changed at run-time with an embedded algorithm that provides a direction for change, can help fulfilling requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed to contribute to autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing any of its two constituent parts: an underlying hard structure and a set of soft parameters. 
Two main lines of work derive from this distinction: parametric self-adaptation on one side and structural self-adaptation on the other.

The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for the online parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented towards image compression. Hence, adaptive and improved compression efficiency, compared to standard techniques, is the required goal of evolution. The main challenge lies in reducing the supercomputing resources reported in previous works for this optimisation process, in order to make it suitable for embedded systems; a toy software sketch of such a parametric loop is given after the platform description below.

Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks, such as the filtering of unknown and changing types of noise, and edge detection, are the selected proofs of concept. In general, the required goal is evolving image processing behaviours that are unknown at design time (within a certain complexity range). Here, the mission of the proposal is the incorporation of DPR into EHW in order to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously verified.

In order to achieve the two stated goals, this thesis originally proposes an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:

• a CE featuring a DWT hardware processing core, adaptable through reconfigurable registers that hold the wavelet filter coefficients;

• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems characterised by scarce computing resources;

• a new, simplified mutation operator for the selected EA that, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation.

In the case of structural adaptation, the platform proposal takes the form of:

• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes;

• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of computational functionalities for the nodes, available in a run-time accessible library;

• a hardware RE that exploits the native DPR capabilities of FPGAs and makes efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time;

• a library of reconfigurable processing elements, characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array.
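Before listing the contributions, the promised toy sketch may clarify what parametric self-adaptation means in practice. In the fragment below, everything is an illustrative assumption (the filter model, the energy-compaction fitness proxy and the generic Gaussian mutation are stand-ins, not the thesis' DWT core, evaluation mechanism or simplified mutation operator): a lightweight (1+4) evolution strategy tunes the coefficients of a small analysis filter so that filtering plus downsampling concentrates as much signal energy as possible in the low-pass branch, a crude stand-in for compression efficiency.

# Toy sketch of parametric self-adaptation: an EA tunes the
# coefficients of a low-pass analysis filter (standing in for the
# register-held DWT coefficients) to maximise energy compaction on
# sample data. All settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(256))   # smooth-ish sample "image line"

def fitness(coeffs):
    # Energy in the downsampled low-pass branch relative to the input
    # energy (higher means more compaction into that branch). The
    # filter is normalised so growth of the coefficients is not rewarded.
    c = coeffs / np.linalg.norm(coeffs)
    low = np.convolve(signal, c, mode="same")[::2]
    return np.sum(low**2) / np.sum(signal**2)

# (1+4) evolution strategy with generic Gaussian mutation: the kind of
# lightweight numerical EA that fits a resource-constrained system.
parent = rng.standard_normal(4) * 0.1          # 4-tap filter genome
for generation in range(300):
    offspring = [parent + rng.standard_normal(4) * 0.05 for _ in range(4)]
    parent = max(offspring + [parent], key=fitness)

print(np.round(parent, 3), round(fitness(parent), 4))

On the real platform, the winning coefficients would be written into the CE's reconfigurable registers, and fitness would be measured on the hardware DWT core itself rather than on a software model.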
The main contributions of this thesis can be summarised in the following list.

• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.

• Regarding parametric self-adaptation, the main contributions are:

– A CE adaptable through reconfigurable registers that enables the parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.

– An AE based on an Evolutionary Algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.

– A run-time self-adaptive DWT IP core for embedded systems that allows for the online optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.

– A software model and a hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.

• Lastly, regarding structural self-adaptation, the main contributions are:

– A CE adaptable through native FPGA fabric reconfiguration, characterised by a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements.

– The definition of a library of such processing elements, suited for the autonomous run-time synthesis of different image processing tasks.

– The efficient incorporation of DPR into EHW systems, overcoming the main drawbacks of the previous approach based on virtual reconfigurable circuits. Implementation details of both approaches are also originally compared in this work.

– A fault-tolerant, self-healing platform that enables online functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at the FPGA CLB level and at the processing-element level are proposed and, using the RE, a systematic fault analysis is performed for one fault in every processing element and for two accumulated faults.

– A dynamic filtering-quality platform that permits online adaptation to different types of noise and to different computing behaviours, considering the available computing resources. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are also evolved, considering dynamically changing computational filtering requirements.

This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. The second part analyses the reference framework in which this dissertation is framed: chapter 2 introduces the notions of self-adaptation and autonomic computing as a research field more general than the very specific one of this work; chapter 3 introduces evolutionary computation as the technique that drives adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology that hosts self-adaptive hardware; and chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part contains the proposal, its development and the results obtained: chapter 6 states the thesis goals and describes the proposal as a whole, while chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9, in part 4, concludes the work and describes future research paths.