204 results for Glue.


Relevance:

10.00%

Publisher:

Abstract:

Polymeric adhesives have been used for many applications, such as suturing and embolization in place of classic surgical methods, as well as for dental uses. In this work both subjects were investigated and the results are presented in two parts. In the first, new dentinal adhesives with different polymerizable groups (methacrylic or vinyl-ether) were synthesized. A low sensitivity to hydrolysis and properties equal to or better than those of existing commercial products were considered essential. Moreover, these monomers need to polymerize by radical photopolymerization, and functional groups with different characteristics were tested. All these products were characterized by the microtensile bond strength test to determine the bonding strength between the adhesive and the tooth. Concerning embolization, cyanoacrylates are nowadays the most widely used adhesives in surgery and must therefore meet several requirements. For instance, the polymerization time and the adhesive strength need to be low, in order to avoid diffusion of the products in the body and adhesion to the catheter. To overcome these problems we developed new cyanoacrylates, which polymerize practically instantly upon contact with blood but do not adhere strongly to the catheter, thanks to the presence of fluorine atoms linked to the ester chain. The synthesis of these products was carried out in several steps, including the depolymerization of the corresponding oligomers at high temperature under acidic conditions. Two types of adhesion strength were determined. The bonding strength between human veins and a microcatheter was determined in vitro, using organic materials as the most realistic model. Another test, on two layers of skin, was conducted to verify the possible use of these new cyanoacrylates as a glue for sutures. In conclusion, we were able to demonstrate that some of the prepared monomers possess an adhesive strength and a polymerization time lower than those of the commercial product Glubran2.

Relevance:

10.00%

Publisher:

Abstract:

Prospective study of 75 patients with perianal Crohn's disease, aimed at comparing the results of the emerging medical and surgical therapies. The first procedure is common to all patients and consists of incision of the abscesses, fistulectomy and placement of draining setons in the fistula tracts to control sepsis. The patients are then divided into five groups and undergo treatments aimed at closing the fistula tracts: systemic therapy with Infliximab, systemic therapy with Adalimumab, endoanal flap, fibrin glue instillation, or placement of biological prostheses. We observed complete closure of the fistula tracts in 60% of the patients treated with Infliximab, 53% of those treated with Adalimumab, 40% of those treated with fibrin glue, 80% of those who underwent an endoanal flap and 60% of those treated with biological prostheses. The excellent results achieved with the various local surgical techniques make them a valid alternative to therapy with biological drugs. These new techniques are indeed essential for the treatment of those patients who, after therapy with biological drugs, have not achieved complete resolution of the condition (rescue therapy). Biological therapy and the new surgical techniques are therefore complementary: the former improves the quality of the mucosa of the anal canal and lower rectum, on which it then becomes easier to act with the latter, with an ever-increasing success rate.

Relevance:

10.00%

Publisher:

Abstract:

In computer graphics, within geometric modelling, boolean operations between solids are used to manipulate and create new objects. These operations, such as union, intersection and difference, are applied to the surfaces of 3D objects exactly as they are applied to other sets. In this way, new complex shapes can be obtained as combinations of other, generally simpler ones. The work carried out in this thesis is part of a pre-existing project, Mesh Glue, developed to allow the manipulation of three-dimensional models by means of boolean operators. In this work, the logic for applying boolean operators present in Mesh Glue has been extended to also handle scenarios with meshes that have tangent faces. Furthermore, Mesh Glue has been integrated into a larger project: Mesh Craft. Mesh Craft is a modelling environment that uses the Leap Motion Controller as its input system, a device capable of identifying the fingers of a hand and tracking their movements with high precision.
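To make the boolean-operation idea concrete, here is a minimal Python sketch of union, intersection and difference applied to solids. It uses signed distance fields rather than the boundary meshes that Mesh Glue operates on, so it illustrates only the set-operation semantics, not the thesis's mesh algorithm; all names and shapes are illustrative.

```python
import numpy as np

# Solids as signed distance fields (SDFs): negative inside, positive outside.
def sphere(center, radius):
    return lambda p: np.linalg.norm(p - center, axis=-1) - radius

def box(center, half_size):
    def sdf(p):
        q = np.abs(p - center) - half_size
        return (np.linalg.norm(np.maximum(q, 0.0), axis=-1)
                + np.minimum(q.max(axis=-1), 0.0))
    return sdf

# Boolean operations on SDF solids: union = pointwise min, intersection = max,
# difference = intersection with the complement of the second solid.
def union(a, b):        return lambda p: np.minimum(a(p), b(p))
def intersection(a, b): return lambda p: np.maximum(a(p), b(p))
def difference(a, b):   return lambda p: np.maximum(a(p), -b(p))

if __name__ == "__main__":
    s = sphere(np.array([0.0, 0.0, 0.0]), 1.0)
    c = box(np.array([0.5, 0.0, 0.0]), np.array([0.6, 0.6, 0.6]))
    shape = difference(union(s, c), sphere(np.array([0.0, 0.8, 0.0]), 0.5))
    points = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
    print(shape(points))  # negative value -> point lies inside the combined solid
```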

Relevance:

10.00%

Publisher:

Abstract:

Primary ciliary dyskinesia (PCD) is an autosomal recessive disease with an incidence estimated between 1:2,000 and 1:40,000. Ciliated epithelia line the airways, nasal and sinus cavities, Eustachian tube and fallopian tubes. Congenital abnormalities of ciliary structure and function impair mucociliary clearance. As a consequence, patients present with chronic sinopulmonary infections, recurrent glue ear and female subfertility. Similarities in the ultrastructure of respiratory cilia, nodal cilia and sperm result in patients with PCD also presenting with male infertility, abnormalities of left-right asymmetry (most commonly situs inversus totalis) and congenital heart disease. Early diagnosis is essential to ensure specialist management of the respiratory and otological complications of PCD. Diagnostic tests focus on analysis of ciliary function and electron microscopy structure. Analysis is technically difficult and labour intensive. It requires expertise for interpretation, restricting diagnosis to specialist centres. Management is currently based on the consensus of experts, and there is a pressing need for randomised clinical trials to inform treatment.

Relevance:

10.00%

Publisher:

Abstract:

Hydrogels are considered promising for disc regeneration strategies. However, it is currently unknown whether the destruction of the natural interface between the nucleus and the surrounding structures caused by nucleotomy, together with an inadequate annulus closure, diminishes the mechanical competence of the disc. This in vitro study aimed to clarify these mechanisms and to evaluate whether hydrogels are able to restore the biomechanical behaviour of the disc. Nucleus pressure in an ovine intervertebral disc was measured in vivo during day and night and translated into an in vitro axial compressive day (15 min) and night (30 min) load. The effects of different defects on disc height and nucleus pressure were subsequently measured in vitro using 30 ovine motion segments. The following cases were considered: intact; annulus incision repaired by suture and glue; annulus incision with removal and re-implantation of nucleus tissue; and implantation of two different hydrogels, repaired by suture and glue. The intradiscal pressure in vivo was 0.75 MPa during the day and 0.5 MPa during the night, corresponding to in vitro axial compressive forces of 130 N and 58 N, respectively. The compression test showed that neither the implantation of hydrogels nor the re-implantation of the natural nucleus, assumed to be the ideal implant, was able to restore the mechanical functionality of an intact disc. The results indicate the importance of the natural anchorage of the nucleus to its surrounding structures and the relevance of an appropriate annulus closure. Therefore, even hydrogels that are able to mimic the mechanical behaviour of the native nucleus may fail to restore the mechanical behaviour of the disc.

Relevance:

10.00%

Publisher:

Abstract:

Oesophageal and fundic varices are among the most frequent complications of cirrhosis and portal hypertension. Due to their significant morbidity and mortality, bleeding from oesophageal or fundic varices represents a challenge for the emergency medical team as well as for the gastroenterologist. A patient with variceal bleeding should be accurately monitored, and his/her hemodynamic parameters should be kept stable with the administration of plasma expanders and blood units when indicated. Antibiotic prophylaxis in this setting (norfloxacin or ceftriaxone) has been demonstrated to significantly reduce morbidity and mortality. Additionally, the early administration of vasoactive compounds, such as terlipressin, somatostatin or octreotide, is associated with beneficial effects in reducing the bleeding. An upper gastrointestinal endoscopy should generally be performed within the first twelve hours after the onset of bleeding in order to obtain an accurate diagnosis and to provide adequate treatment. Endoscopic procedures to control the bleeding include rubber band ligation, treatment of the varix with a sclerosing agent and injection of tissue glue into the varix. In case of recurrent bleeding, beyond the above methods, techniques such as the transjugular portocaval shunt, surgical shunt procedures and embolisation of splanchnic blood vessels represent additional therapeutic options. However, they are associated with very high mortality rates, and their indication has to be discussed case by case by an interdisciplinary team of experts. Future therapies include the optimisation and improvement of the current medical and endoscopic armamentarium, as well as the application of treatments to novel targets, such as the coagulation cascade.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: To compare four different implantation modalities for the repair of superficial osteochondral defects in a caprine model using autologous, scaffold-free, engineered cartilage constructs, and to describe the short-term outcome of successfully implanted constructs. METHODS: Scaffold-free, autologous cartilage constructs were implanted within superficial osteochondral defects created in the stifle joints of nine adult goats. The implants were distributed between four 6-mm-diameter superficial osteochondral defects created in the trochlea femoris and secured in the defect using a covering periosteal flap (PF) alone or in combination with adhesives (platelet-rich plasma (PRP) or fibrin), or using PRP alone. Eight weeks after implantation surgery, the animals were killed. The defect sites were excised and subjected to macroscopic and histopathologic analyses. RESULTS: At 8 weeks, implants that had been held in place exclusively with a PF were well integrated both laterally and basally. The repair tissue manifested an architecture similar to that of hyaline articular cartilage. However, most of the implants that had been glued in place in the absence of a PF were lost during the initial 4-week phase of restricted joint movement. The use of human fibrin glue (FG) led to massive cell infiltration of the subchondral bone. CONCLUSIONS: The implantation of autologous, scaffold-free, engineered cartilage constructs might best be performed beneath a PF without the use of tissue adhesives. Successfully implanted constructs showed hyaline-like characteristics in adult goats within 2 months. Long-term animal studies and pilot clinical trials are now needed to evaluate the efficacy of this treatment strategy.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: To retrospectively evaluate our experience with frontal sinus obliteration using hydroxyapatite cement (BoneSource; Stryker Biotech Europe, Montreux, Switzerland) and compare it with fat obliteration over approximately the same period. Frontal sinus obliteration with hydroxyapatite cement represents a new technique for obliteration of the frontal sinus after mucocele resection. METHODS: Exploration of the frontal sinus was performed using bicoronal, osteoplastic flaps, with mucosal removal and duct obliteration with tissue glue and muscle or fascia. Flaps were elevated over the periorbita, and Silastic sheeting was used to protect the BoneSource material from exposure as it dried. The frontal table was replaced when appropriate. RESULTS: Sixteen patients underwent frontal sinus obliteration with fat (fat obliteration group), and 38 patients underwent obliteration with BoneSource (BoneSource group). Fat obliteration failed in 2 patients, who underwent subsequent BoneSource obliteration, and none of the patients in the BoneSource group has required removal of material because of recurrent complications. Frontobasal trauma (26 patients [68%] in the BoneSource group and 9 patients [56%] in the fat obliteration group) was the most common history of mucocele formation in both groups. Major complications in the BoneSource group included 1 patient with a skin fistula, which was managed conservatively, and 1 patient with recurrent ethmoiditis, which was managed surgically. Neither complication was directly attributed to the use of BoneSource. A contour deficit of the frontal bone occurred in 1 patient in the fat obliteration group and in none in the BoneSource group. Two patients in the fat obliteration group had donor site complications (hematoma and infection). Thirteen patients in the BoneSource group had at least 1 prior attempt at mucocele drainage, and no statistical relation existed between recurrent surgery and preservation of the anterior table. CONCLUSION: Hydroxyapatite is a safe, effective material to obliterate frontal sinuses infected with mucoceles, with minimal morbidity and excellent postoperative contour.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Several technical advances in thoracic aortic surgery, such as the use of antegrade cerebral perfusion, avoidance of cross-clamping and the application of glue, have beneficially influenced postoperative outcome. The aim of the present study was to analyse the impact of these developments on the outcome of patients undergoing surgery of the thoracic aorta. METHODS AND RESULTS: Between January 1996 and December 2005, 835 (37.6%) of 2215 aortic patients underwent surgery on the ascending thoracic aorta or the aortic arch at our institution. All in-hospital data were assessed. Two hundred and forty-one patients (28.8%) suffered from acute type A dissection (AADA). The overall aortic caseload increased from 41 patients in 1996 to 141 in 2005 (+339%). The increase was more pronounced for thoracic aortic aneurysms (TAA) (+367.9%) than for acute type A aortic dissections (+276.9%). Especially in TAA, combined procedures increased, and the proportion of patients with impaired left ventricular function (EF <50%) rose from 14% in 1996 to 24% in 2005. Average age remained stable. Logistic regression revealed a significant decrease in mortality (for AADA) and in the overall incidence of neurological deficits. CONCLUSIONS: Technical advances in the field of thoracic aortic surgery led to a decrease in mortality and morbidity, especially in the incidence of adverse neurological events, in a large cohort of patients. Long-term outcome and quality of life have improved since antegrade cerebral perfusion was introduced.

Relevance:

10.00%

Publisher:

Abstract:

Integrated pest management (IPM) is a viable alternative to traditional pest control methods. A paired sample design was utilized to measure the effect of IPM education on the number of cockroaches in a 200-unit, seven-story public housing building for the elderly in Houston, TX. Glue traps were placed in 71 randomly selected apartments (5 traps/unit) and left in place for two nights. Baseline cockroach counts were shared with the property manager, maintenance/janitorial staff, service coordinator, pest control professional and tenant representatives at the end of a one-day “Integrated Pest Management in Multi-Family Housing” training course. There was a significant decrease in the average number of cockroaches after IPM education and implementation of IPM principles (P < 0.0003). Positive changes in behavior by members of the IPM team and changes in the housing authority's operational plan were also found. Paired t-tests comparing the difference between mean cockroach counts at baseline and follow-up by location within the apartment all demonstrated a significant decrease in the number of cockroaches. Results supported the premise that IPM education and the implementation of IPM principles are effective measures to change pest control behaviors and control cockroaches. Cockroach infestations in multi-story housing are not solely determined by the actions of individual tenants; the actions of other residents, property managers and pest control professionals are also important factors in pest control. Findings support the implementation of IPM education and the adoption of IPM practices by public housing authorities. This study adds to existing evidence that clear communication of policies, a team approach and a commitment to ongoing inspection and monitoring of pests, combined with corrective action to eliminate food, water and harborage and the judicious use of low-risk pesticides, have the potential to improve the living conditions of elderly residents living in public housing.
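For readers unfamiliar with the paired design used here, the short sketch below shows how such a before/after comparison is typically computed with a paired t-test in Python; the counts are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of a paired (before/after) t-test, as used in paired sample designs.
# The counts below are hypothetical examples, not the study's data.
from scipy import stats

baseline  = [34, 12, 57, 8, 21, 44, 19, 63, 5, 30]   # trap counts per unit before IPM
follow_up = [10,  4, 22, 6,  9, 18,  7, 25, 3, 11]   # counts in the same units after IPM

t_stat, p_value = stats.ttest_rel(baseline, follow_up)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates a significant change in mean counts within the same units.
```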

Relevance:

10.00%

Publisher:

Abstract:

Aggregation of algae, mainly diatoms, is an important process in marine systems leading to the settling of particulate organic carbon, predominantly in the form of marine snow. Exudation products of phytoplankton form transparent exopolymer particles (TEP), which act as the glue for particle aggregation. Heterotrophic bacteria interacting with phytoplankton may influence TEP formation and phytoplankton aggregation; this bacterial impact has not been explored in detail. We hypothesized that bacteria attaching to Thalassiosira weissflogii might interact in a yet-to-be-determined manner, which could impact TEP formation and aggregate abundance. The role of individual new bacterial isolates, either T. weissflogii-attaching or free-living, in TEP production and diatom aggregation was investigated in vitro. T. weissflogii did not aggregate in axenic culture, and striking differences in aggregation dynamics and TEP abundance were observed when diatom cultures were inoculated with either diatom-attaching or free-living bacteria. The data indicated that free-living bacteria might not influence aggregation, whereas bacteria attaching to diatom cells may increase aggregate formation. Interestingly, photosynthetically inactivated T. weissflogii cells did not aggregate regardless of the presence of bacteria. Comparison of aggregate formation, TEP production, aggregate sinking velocity and solid hydrated density revealed remarkable differences. Both photosynthetically active T. weissflogii and specific diatom-attaching bacteria were required for aggregation. It was concluded that interactions between heterotrophic bacteria and diatoms increased aggregate formation and particle sinking and thus may enhance the efficiency of the biological pump.

Relevance:

10.00%

Publisher:

Abstract:

Strengthening of existing structures with externally bonded fiber reinforced polymers (FRP) has become the most common application of advanced composite materials in construction. These materials exhibit many advantages compared with traditional ones (corrosion resistance, light weight, ease of application, etc.). But despite the extensive research carried out, there are still doubts about some aspects of their behaviour, and applications are carried out only with the help of guidelines, without official regulations. The aim of this work is to improve the knowledge of this retrofitting technique, particularly in regard to the flexural strengthening of masonry structures. Reinforced concrete is often the strengthened material, and externally glued FRP plates are used to improve its flexural, shear or compressive (by wrapping) capacity. However, the use of this technique on other materials, such as masonry structures, looks promising. Unreinforced masonry is good at supporting compressive stresses but poor at withstanding tensile ones. Gluing composite plates can improve the flexural capacity of masonry elements subject to bending, but a proper bond between the FRP sheet and the masonry must be ensured, especially in old buildings whose surface may be damaged by exposure or ageing. The main objectives of the work and the methodology followed are described in Chapter II. An extensive overview of the state of the art is given in Chapter III. Section III.1 covers the physical and mechanical properties of fibers, matrices and composites and their main applications, with particular emphasis on durability. Section III.2 includes a historical overview of theoretical and empirical research on concrete structures strengthened in flexure with glued FRP plates. Section III.3 focuses on the critical issue of bonding between FRP and substrate. Several theoretical models for preventing debonding of the FRP laminate are reviewed, distinguishing between detachment at the plate end and debonding in intermediate zones induced by cracks. A lack of agreement among the proposals is observed. Some experimental studies on bonding between masonry and FRP are also reviewed in this chapter. The particular characteristics of masonry structures are analyzed in Section III.4, together with some empirical and theoretical investigations on improving their flexural capacity with FRP sheets. The mechanical behaviour of strengthened walls subject to pure bending (without compression) has been established by several authors, but this is an unusual situation for real masonry. Neither the mechanical behaviour of walls subject to combined bending and compression nor the influence of the axial load on the final capacity of the strengthened element has been adequately studied. Regarding theoretical studies, the different proposals are based on analytical methods for reinforced concrete and share common design principles. However, they differ, above all, in three aspects: 1) the constitutive law of the masonry, 2) the value of the ultimate FRP strain, and 3) the failure mode that the design should aim for. In spite of this, good agreement between each experimental program and its theoretical study is often exhibited, owing to the enormous disparity in the test parameters considered. Each experimental program usually presents a characteristic failure mode, and the proposed formulation turns out to be appropriate for that mode. 
It therefore seems necessary to develop a method for FRP-strengthened walls subject to bending and compression that is valid for all failure modes (whether due to the FRP or to the masonry). Some common types of damage in masonry subject to bending are also explained in Section III.4, together with examples of FRP strengthening used to repair or prevent such damage. Two small experimental programs were carried out at the Eduardo Torroja Institute to improve the knowledge on this topic. The first concerns the bond between FRP plates and damaged masonry (Section IV.1), and the second the mechanical behaviour of strengthened masonry specimens subject to out-of-plane bending combined with axial force (Section IV.2). In Chapter V, several bond models for preventing debonding at the FRP plate end are checked, and it is confirmed that their predictions are highly disparate. A pure-shear test database is compiled with results from the existing literature and from the experimental program described in Section IV.1. This database makes it possible to determine which of the considered models is most suitable for designing the anchorage length of FRP glued to masonry. In Chapter VI, a method is proposed to check, at the ultimate limit state, unreinforced masonry sections with external FRP strengthening subject to combined bending and compression. The method is based on the procedure used for reinforced concrete sections, adapted to strengthened masonry. A bilinear constitutive law is used for the masonry (in accordance with CTE DB SE-F); its simplicity helps to develop the formulation, and it has proven suitable for predicting the bending capacity both for FRP failures and for masonry crushing. With regard to the FRP, the design strain is limited, taking into account different aspects that prevent the plate from reaching its ultimate strength, such as intermediate debonding induced by crack opening or environmental damage. A "bond factor" is proposed, obtained by means of an experimental bending test database that includes 68 results from the existing literature and from the experimental program described in Section IV.2. The proposed formulation has also been checked with the help of this bending database. The effects of the main parameters, such as the axial load, the FRP design effective strain and the FRP stiffness, on the bending capacity of the strengthened element are studied in Chapter VII. Finally, the main conclusions of the work and possible future lines of research are presented in Chapter VIII.
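As a purely illustrative aid (the thesis's own formulation is not reproduced here), the sketch below shows what a generic bilinear stress-strain law for masonry in compression, of the kind mentioned above, looks like in code: linear-elastic up to a yield strain, then a constant plateau at the design strength up to the ultimate strain. The numerical values are hypothetical placeholders.

```python
# Generic bilinear constitutive law for masonry in compression (illustrative only).
# f_d: design compressive strength (MPa), eps_y: strain ending the linear branch,
# eps_u: ultimate strain. All values below are hypothetical placeholders.
def masonry_stress(eps, f_d=5.0, eps_y=0.002, eps_u=0.0035):
    """Return compressive stress (MPa) for a given compressive strain."""
    if eps <= 0.0:
        return 0.0                    # no tensile strength assumed
    if eps <= eps_y:
        return f_d * eps / eps_y      # linear-elastic branch
    if eps <= eps_u:
        return f_d                    # plastic plateau
    return 0.0                        # beyond the ultimate strain: crushing

if __name__ == "__main__":
    for eps in (0.0005, 0.002, 0.003, 0.004):
        print(eps, masonry_stress(eps))
```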

Relevance:

10.00%

Publisher:

Abstract:

Computing the modal parameters of structural systems often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all measurements, while the other sensors change their position from one setup to the next. One possibility is to process the setups separately resulting in different modal parameter estimates for each setup. Then, the reference sensors are used to merge or glue the different parts of the mode shapes to obtain global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a new state space model that processes all setups at once. The result is that the global mode shapes are obtained automatically, and only a value for the natural frequency and damping ratio of each mode is estimated. We also investigate the estimation of this model using maximum likelihood and the Expectation Maximization algorithm, and apply this technique to simulated and measured data corresponding to different structures.
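To make the traditional merging step concrete, here is a minimal sketch of how partial mode shapes from several setups can be glued using the shared reference sensors: each setup is rescaled by a least-squares fit of its reference components to those of a chosen base setup, and the moving-sensor components are then assembled into a global mode shape. This illustrates the conventional per-setup approach described above, not the state space model proposed in the paper; the data layout and variable names are illustrative.

```python
import numpy as np

def glue_mode_shapes(setups, n_dofs_global):
    """Merge partial mode shapes from several setups via the shared reference sensors.

    Each setup is a dict with:
      'ref_idx' - global indices of the reference (fixed) sensors,
      'mov_idx' - global indices of the moving sensors,
      'ref_phi' - real-valued mode-shape components at the reference sensors,
      'mov_phi' - mode-shape components at the moving sensors.
    The first setup is taken as the scaling base. Illustrative sketch only.
    """
    base = setups[0]
    phi_global = np.full(n_dofs_global, np.nan)
    phi_global[base['ref_idx']] = base['ref_phi']
    phi_global[base['mov_idx']] = base['mov_phi']

    for s in setups[1:]:
        # Least-squares scale factor aligning this setup's reference components
        # with those of the base setup.
        alpha = np.dot(base['ref_phi'], s['ref_phi']) / np.dot(s['ref_phi'], s['ref_phi'])
        phi_global[s['mov_idx']] = alpha * np.asarray(s['mov_phi'])
    return phi_global
```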

Relevance:

10.00%

Publisher:

Abstract:

Computing the modal parameters of large structures in Operational Modal Analysis often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all the measurements, while the other sensors are moved from one setup to the next. One possibility is to process the setups separately, which results in different modal parameter estimates for each setup. Then the reference sensors are used to merge or glue the different parts of the mode shapes to obtain global modes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a state space model that can be used to process all setups at once, so that the global mode shapes are obtained automatically and only a single value for the natural frequency and damping ratio of each mode is computed. We also show how this model can be estimated using maximum likelihood and the Expectation Maximization algorithm. We apply this technique to real data measured at a footbridge.

Relevance:

10.00%

Publisher:

Abstract:

Embedded systems have traditionally been conceived as specific-purpose computers with one fixed computational task for their whole lifetime. Stringent requirements in terms of cost, size and weight forced designers to highly optimise their operation for very specific conditions. 
However, demands for versatility, more intelligent behaviour and, in summary, increased computing capability began to clash with these limitations, intensified by the uncertainty associated with the more dynamic operating environments where such systems were progressively being deployed. As a result, there was an increasing need for systems to respond by themselves to events not foreseen at design time, such as: changes in the input data characteristics and in the system environment in general; changes in the computing platform itself, e.g., due to faults and fabrication defects; and changes in the functional specifications caused by dynamically changing system objectives. As a consequence, system complexity is increasing, but in turn, autonomous lifetime adaptation without human intervention is progressively being enabled, allowing systems to take their own decisions at run time. Such systems are known, in general, as self-adaptive, and are capable, among other things, of self-configuration, self-optimisation and self-repair. Traditionally, the soft part of a system has so far mostly been the only place to provide some degree of adaptation capability. However, the performance-to-power ratio of software-driven devices such as microprocessors is in many situations not adequate for embedded systems. In this scenario, the resulting rise in application complexity is being partly addressed by raising device complexity in the form of multi- and many-core devices; but unfortunately, this keeps increasing power consumption. Besides, design methodologies have not improved accordingly, so the computational power available from all these cores cannot be fully leveraged. Altogether, these factors mean that the computing demands posed by new applications are not being wholly satisfied. The traditional solution to improve the performance-to-power ratio has been the switch to hardware-driven specifications, mainly using ASICs. However, their costs are highly prohibitive except for some mass-production cases and, besides, the static nature of their structure complicates meeting the adaptation needs. Advancements in fabrication technologies have allowed the once slow, small FPGA, used as glue logic in bigger systems, to grow into a very powerful, reconfigurable computing device with a vast amount of computational logic resources and embedded, hardened signal-processing and general-purpose processing cores. Its reconfiguration capabilities have enabled software-like flexibility to be combined with hardware-like computing performance, which has the potential to cause a paradigm shift in computer architecture, since hardware can no longer be considered static. This is so because, as is the case with SRAM-based FPGAs, Dynamic Partial Reconfiguration (DPR) is possible. This means that subsets of the FPGA computational resources can now be changed (reconfigured) at run time while the rest remain active. Moreover, this reconfiguration process can be triggered internally by the device itself. This technological boost in reconfigurable hardware devices is covered under the field known as Reconfigurable Computing. One of the most exotic fields of application that Reconfigurable Computing has enabled is the one known as Evolvable Hardware (EHW), in which this dissertation is framed. The main idea behind the concept is to turn hardware that is adaptable through reconfiguration into an evolvable entity subject to the forces of an evolutionary process, inspired by that of natural biological species, that guides the direction of change. 
It is yet another application of the field of Evolutionary Computation (EC), which comprises a set of global optimisation algorithms known as Evolutionary Algorithms (EAs), considered universal problem solvers. In analogy to the biological process of evolution, in EHW the subject of evolution is a population of circuits that tries to adapt to its surrounding environment by becoming progressively better fitted to it, generation after generation. Individuals are circuit configurations in the form of bitstreams that encode reconfigurable circuit descriptions. By selecting those that behave better, i.e., those with a higher fitness value after being evaluated, and using them as parents of the following generation, the EA creates a new offspring population using so-called genetic operators such as mutation and recombination. As generations succeed one another, the whole population is expected to approach the optimum solution to the problem of finding an adequate circuit configuration that fulfils the system objectives. The state of reconfiguration technology after the Xilinx XC6200 FPGA family was discontinued and replaced by the Virtex families in the late 90s was a major obstacle to advancements in EHW: closed (not publicly known) bitstream formats; dependence on manufacturer tools with very limited support for DPR; slow reconfiguration speed; and random bitstream modifications being potentially hazardous for device integrity are some of these reasons. However, a proposal in the early 2000s, the Virtual Reconfigurable Circuit (VRC), allowed research in this field to continue while DPR technology kept maturing. In essence, a VRC in an FPGA is a virtual layer acting as an application-specific reconfigurable circuit on top of the FPGA fabric that reduces the complexity of the reconfiguration process and increases its speed (compared to native reconfiguration). It is an array of computational nodes specified using standard HDL descriptions that define ad-hoc reconfigurable resources: routing multiplexers and a set of configurable processing elements, each one containing all the required functions, which are selectable through functionality multiplexers, as in microprocessor ALUs. A large register acts as configuration memory, so VRC reconfiguration is very fast, given that it only involves writing this register, which drives the selection signals of the set of multiplexers. However, this virtual layer introduces large overheads: an area overhead due to the simultaneous implementation of every function in every node of the array plus the multiplexers, and a delay overhead due to the multiplexers, which also reduces the maximum operating frequency. The very nature of Evolvable Hardware, able to optimise its own computational behaviour, makes it a good candidate to advance research in self-adaptive systems. Combining a self-reconfigurable computing substrate that can be dynamically changed at run time with an embedded algorithm that provides a direction for change can help fulfil the requirements for autonomous lifetime adaptation of FPGA-based embedded systems. The main proposal of this thesis is hence directed at contributing to the autonomous self-adaptation of the underlying computational hardware of FPGA-based embedded systems by means of Evolvable Hardware. This is tackled by considering that the computational behaviour of a system can be modified by changing either of its two constituent parts: an underlying hard structure and a set of soft parameters. 
Two main lines of work derive from this distinction: on one side, parametric self-adaptation and, on the other, structural self-adaptation.

The goal pursued in the case of parametric self-adaptation is the implementation of complex evolutionary optimisation techniques in resource-constrained embedded systems for on-line parameter adaptation of signal processing circuits. The application selected as proof of concept is the optimisation of Discrete Wavelet Transform (DWT) filter coefficients for very specific types of images, oriented towards image compression. Hence, adaptive and improved compression efficiency, as compared to standard techniques, is the required goal of evolution. The main challenge lies in reducing the supercomputing resources reported in previous works for this optimisation process, in order to make it suitable for embedded systems. Regarding structural self-adaptation, the thesis goal is the implementation of self-adaptive circuits in FPGA-based evolvable systems through an efficient use of native reconfiguration capabilities. In this case, the evolution of image processing tasks, such as the filtering of unknown and changing types of noise and edge detection, is the selected proof of concept. In general, the required goal is evolving image processing behaviours (within a certain complexity range) that are unknown at design time. Here, the mission of the proposal is the incorporation of DPR into EHW to evolve a systolic array architecture, adaptable through reconfiguration, whose evolvability had not been previously verified.

In order to achieve the two stated goals, this thesis proposes, as an original contribution, an evolvable platform that integrates an Adaptation Engine (AE), a Reconfiguration Engine (RE) and an adaptable Computing Engine (CE). In the case of parametric adaptation, the proposed platform is characterised by:
• a CE featuring a DWT hardware processing core, adaptable through reconfigurable registers that hold the wavelet filter coefficients
• an evolutionary algorithm as AE that searches for candidate wavelet filters through a parametric optimisation process specifically developed for systems with scarce computing resources
• a new, simplified mutation operator for the selected EA which, together with a fast evaluation mechanism for candidate wavelet filters derived from the existing literature, ensures the feasibility of the evolutionary search involved in wavelet adaptation
In the case of structural adaptation, the platform proposal takes the form of:
• a CE based on a reconfigurable 2D systolic array template composed of reconfigurable processing nodes
• an evolutionary algorithm as AE that searches for candidate configurations of the array using a set of node functionalities available in a library accessible at run time
• a hardware RE that exploits the native DPR capabilities of FPGAs and makes efficient use of the available reconfigurable resources of the device to change the behaviour of the CE at run time
• a library of reconfigurable processing elements, characterised by position-independent partial bitstreams, used as the set of available configurations for the processing nodes of the array
The main contributions of this thesis can be summarised in the following list.
• An FPGA-based evolvable platform for parametric and structural self-adaptation of embedded systems, composed of a Computing Engine, an evolutionary Adaptation Engine and a Reconfiguration Engine. This platform is further developed and tailored for both parametric and structural self-adaptation.
• Regarding parametric self-adaptation, the main contributions are:
– A CE adaptable through reconfigurable registers that enables parametric adaptation of the coefficients of an adaptive hardware implementation of a DWT core.
– An AE based on an Evolutionary Algorithm specifically developed for numerical optimisation, applied to wavelet filter coefficients in resource-constrained embedded systems.
– A run-time self-adaptive DWT IP core for embedded systems that allows on-line optimisation of transform performance for image compression in specific deployment environments characterised by different types of input signals.
– A software model and a hardware implementation of a tool for the automatic, evolutionary construction of custom wavelet transforms.
• Lastly, regarding structural self-adaptation, the main contributions are:
– A CE adaptable through native FPGA fabric reconfiguration, based on a two-dimensional systolic array template of reconfigurable processing nodes. Different processing behaviours can be automatically mapped onto the array by using a library of simple reconfigurable processing elements (a sketch of this representation is given at the end of this section).
– The definition of a library of such processing elements suited for the autonomous run-time synthesis of different image processing tasks.
– The efficient incorporation of DPR into EHW systems, overcoming the main drawbacks of the previous approach based on virtual reconfigurable circuits. Implementation details of both approaches are also compared for the first time in this work.
– A fault-tolerant, self-healing platform that enables on-line functional recovery in hazardous environments. The platform has been characterised from a fault-tolerance perspective: fault models at the FPGA CLB level and at the processing-element level are proposed and, using the RE, a systematic fault analysis is carried out for one fault in every processing element and for two accumulated faults.
– A dynamic filtering-quality platform that permits on-line adaptation to different types of noise and different computing behaviours, taking the available computing resources into account. On one side, non-destructive filters are evolved, enabling scalable cascaded filtering schemes; on the other, size-scalable filters are evolved to meet dynamically changing computational filtering requirements.
This dissertation is organised in four parts and nine chapters. The first part contains chapter 1, the introduction to and motivation of this PhD work. The reference framework of the dissertation is then analysed in the second part: chapter 2 introduces the notions of self-adaptation and autonomic computing as the more general research field within which this very specific work sits; chapter 3 introduces evolutionary computation as the technique that drives adaptation; chapter 4 analyses platforms for reconfigurable computing as the technology that hosts self-adaptive hardware; and chapter 5 defines, classifies and surveys the field of Evolvable Hardware. The third part contains the proposal, its development and the results obtained: chapter 6 states the thesis goals and describes the proposal as a whole, while chapters 7 and 8 address parametric and structural self-adaptation, respectively. Finally, chapter 9, in part 4, concludes the work and outlines future research paths.
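To give a flavour of the structural representation used by the second line of work, the following Python sketch encodes an array configuration as a grid of processing-element identifiers drawn from a library, applies a structural mutation that reconfigures a single node, and pushes data through the array. The library contents, array size and dataflow model shown here are purely illustrative assumptions, not the thesis architecture or its actual PE library; the evolutionary loop from the earlier sketch would search over these configurations, with fitness measured on image data.

    # Hypothetical structural-adaptation genome: a 2D systolic array whose nodes
    # each take one processing element (PE) from a run-time accessible library.
    import random

    # Assumed PE library: each entry combines a west and a north input into one
    # output, standing in for the position-independent partial bitstreams.
    PE_LIBRARY = {
        "identity": lambda w, n: w,
        "maximum":  lambda w, n: max(w, n),
        "minimum":  lambda w, n: min(w, n),
        "average":  lambda w, n: (w + n) // 2,
        "abs_diff": lambda w, n: abs(w - n),
    }
    ROWS, COLS = 4, 4

    def random_configuration():
        # One individual: a PE name per node of the array
        names = list(PE_LIBRARY)
        return [[random.choice(names) for _ in range(COLS)] for _ in range(ROWS)]

    def mutate(config):
        # Structural mutation: reconfigure one node with a different library PE
        new = [row[:] for row in config]
        r, c = random.randrange(ROWS), random.randrange(COLS)
        new[r][c] = random.choice([n for n in PE_LIBRARY if n != new[r][c]])
        return new

    def run_array(config, west_inputs, north_inputs):
        # Each node combines its west and north inputs and forwards the result
        # to its east and south neighbours (a simplified systolic dataflow)
        west = list(west_inputs)            # one value per row
        for c in range(COLS):
            north = north_inputs[c]
            for r in range(ROWS):
                out = PE_LIBRARY[config[r][c]](west[r], north)
                west[r], north = out, out
        return west                          # east-side outputs, one per row

    if __name__ == "__main__":
        cfg = random_configuration()
        print(run_array(cfg, [10, 20, 30, 40], [5, 15, 25, 35]))
        cfg_mutated = mutate(cfg)

In the actual platform, the equivalent of mutate() is realised by the Reconfiguration Engine writing a different partial bitstream into the region of the fabric that holds the selected node, so changing a node's behaviour does not disturb the rest of the array.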