36 results for number and operation
at Universidad Politécnica de Madrid
Abstract:
The quasisteady structure of the corona of a laser-irradiated pellet is completely determined for arbitrary Z_i (ion charge number) and r_c/r_a (ratio of critical and ablation radii), and for heat-flux saturation factor f above approximately 0.04. The ion-to-electron temperature ratio at r_c grows sensibly with Z_i; all other quantities depend weakly and nonmonotonically on Z_i. For r_c/r_a close to unity, and all Z_i of interest (Z_i < 47), the flow is subsonic at r_c. For a given laser power W, flux saturation may decrease (low f) or increase (high f) the ablation pressure P_a relative to the value obtained when saturation is not considered; in some cases a decrease in f with W fixed increases P_a. For intermediate f (~0.1), P_a ∝ (W/r_c^2)^(2/3) ρ_c^(1/3) (ρ_c = critical density), independently of r_c/r_a; for f ~ 0.6, P_a is larger by a factor of about (r_c/r_a)^(1/3). For r_c/r_a > 1.2 roughly, the mass ablation rate is C(Z_i) [(m_i/kZ_i)^(7/2) K r_a^11 P_a^5]^(1/6), independent of ρ_c and f, and barely dependent on Z_i (m_i is ion mass; k, Boltzmann's constant; K, heat-conductivity coefficient; and C, a tabulated function).
Abstract:
Speed enforcement on public roadways is an important issue in order to guarantee road safety and to reduce the number and seriousness of traffic accidents. Traditionally, this task has been partially solved using radar and/or laser technologies and, more recently, using video-camera-based systems. All these systems have significant shortcomings that have yet to be overcome. The main drawback of classical Doppler radar technology is that the velocity measurement fails when several vehicles are in the radar's beam. Modern radar systems are able to measure both the speed of and the range to a vehicle. However, this is not enough to discriminate the lane in which the vehicle is driving. The limitation of several vehicles in the beam is overcome using laser technology. However, laser systems have another important limitation: they cannot measure the speed of several vehicles simultaneously. Novel video-camera systems, based on license plate identification, solve the previous drawbacks, but they can only measure average speed, never top speed. This paper studies the feasibility of using an interferometric linear frequency modulated continuous wave (LFMCW) radar to improve top-speed enforcement on roadways. Two different systems, based on down-the-road and across-the-road radar configurations, are presented. The main advantage of the proposed solutions is that they can simultaneously measure the speed, range, and lane of several vehicles, allowing the unambiguous identification of offenders. A detailed analysis of the operation and accuracy of these solutions is reported. In addition, the feasibility of the proposed techniques has been demonstrated with simulations and real experiments using a Ka-band interferometric radar developed by our research group.
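The separation of speed and range described above can be illustrated with the textbook triangular-LFMCW relations: the up- and down-chirp beat frequencies mix a range term and a Doppler term, and their half-sum and half-difference recover each. This is a generic sketch, not the authors' Ka-band implementation; all parameter values below are illustrative assumptions.

```python
# Textbook triangular-LFMCW relations (not the authors' implementation):
# the up- and down-chirp beat frequencies mix a range term and a Doppler
# term; their half-sum and half-difference separate the two.

C = 3.0e8  # speed of light, m/s

def range_and_speed(f_beat_up, f_beat_down, f_carrier, bandwidth, t_sweep):
    """Recover range (m) and closing speed (m/s) of an approaching target."""
    f_range = 0.5 * (f_beat_up + f_beat_down)    # 2*R*B/(c*T)
    f_doppler = 0.5 * (f_beat_down - f_beat_up)  # 2*v*f_carrier/c
    rng = C * t_sweep * f_range / (2.0 * bandwidth)
    speed = C * f_doppler / (2.0 * f_carrier)
    return rng, speed

# Illustrative Ka-band numbers (assumptions, not the system's actual
# parameters): 34 GHz carrier, 150 MHz sweep in 1 ms; a car at 50 m
# closing at 30 m/s gives a 50 kHz range term and a 6.8 kHz Doppler term.
f0, B, T = 34e9, 150e6, 1e-3
f_r = 2 * 50.0 * B / (C * T)
f_d = 2 * 30.0 * f0 / C
rng, speed = range_and_speed(f_r - f_d, f_r + f_d, f0, B, T)  # ≈ (50.0, 30.0)
```

Because each target yields its own beat-frequency pair, several vehicles in the beam can be resolved at once, which is the advantage the abstract claims over classical Doppler-only radar.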
Abstract:
This paper proposes a way to quantify the emissions of mercury (Hg) and CO2 associated with the manufacture and operation of compact fluorescent lamps with integrated ballasts (CFLi), as well as the economic cost of using them under different operating cycles. The main purpose of this paper is to find simple criteria for reducing the polluting emissions under consideration and the economic cost of CFLi to a minimum. A lifetime model is proposed that allows the emissions and costs to be described as a function of the degradation caused by turning the CFLi on and by its continuous operation. An idealized model of a CFLi is defined that combines characteristics stated by different manufacturers. In addition, two CFLi models representing poor-quality products are analyzed. It was found that the emissions and costs per unit time of operation of the CFLi depend linearly on the number of times per unit time it is turned on and on the time of continuous operation. The optimal conditions (lowest emissions and costs) depend on the place of manufacture, the place of operation and the quality of the components of the lamp/ballast. Finally, it was also found that for each lamp there are off intervals during which the emissions of pollutants and the costs are identical regardless of how often the lamp is turned on or the time it remains on. For CO2 emissions, this interval extends up to 5 minutes; for the cost, up to 7 minutes; and for Hg emissions, up to 43 minutes. It is therefore advisable not to turn a CFLi on sooner than 43 minutes after the last time it was turned off.
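The linear dependence described above implies a simple break-even rule for each quantity (CO2, cost, Hg): switching the lamp off only pays when the off interval exceeds the ratio of the fixed per-switch-on cost to the per-minute cost of staying on. A minimal sketch of that rule; the coefficients used below are made-up placeholders, not the paper's values.

```python
# Break-even rule implied by a linear emissions/cost model
# (illustrative coefficients only, not the paper's data).

def break_even_minutes(cost_per_switch_on, cost_per_minute_on):
    """Off interval below which switching a CFLi off is not worthwhile.

    cost_per_switch_on: fixed cost (emissions, money, ...) of one extra
        switch-on, including the lifetime degradation it causes.
    cost_per_minute_on: cost per minute of leaving the lamp on.
    """
    return cost_per_switch_on / cost_per_minute_on

# A hypothetical switch-on that "costs" as much Hg as 43 minutes of
# operation gives the 43-minute indifference interval quoted above.
threshold = break_even_minutes(43.0, 1.0)  # → 43.0
```

The paper's three thresholds (5, 7 and 43 minutes) then simply reflect three different per-switch-on/per-minute cost ratios for CO2, money and Hg respectively.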
Abstract:
Systems used for localizing targets such as goods, individuals, or animals commonly rely on operational means to meet the final application demands. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced upon deploying a localization network that integrates energy harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references like those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts.
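The milliwatt budget discussed above can be made concrete with a back-of-the-envelope helper: a harvester delivering less than the burst power can still work if it accumulates energy between bursts. All figures below are illustrative assumptions, not measurements from the thesis.

```python
# Energy-budget helper for burst-mode harvesting (illustrative numbers,
# not measurements from the thesis).

def charge_time_s(burst_power_w, burst_duration_s, harvest_power_w):
    """Seconds a harvester needs to accumulate one burst's worth of energy."""
    burst_energy_j = burst_power_w * burst_duration_s
    return burst_energy_j / harvest_power_w

# A hypothetical 5 mW, 20 ms identifier broadcast fed by a steady 1 mW
# harvester needs roughly 0.1 s of accumulation per burst.
t_charge = charge_time_s(5e-3, 20e-3, 1e-3)
```

Under these assumptions a sub-milliwatt source is perfectly adequate between sporadic transmissions, which is why burst capability, rather than continuous power, is the stated requirement.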
Many works on localization problems assume that devices have certain capabilities to determine unknown locations based on range-based techniques or fingerprinting; this cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are therefore not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most. Such short-lived solutions are not particularly desirable in the framework considered. In tracking, the challenge most often addressed is attaining high precision levels through complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with just part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The usage of harvesting modules in the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; it may also be an advantage in economic terms. There is a second kind of node: battery-powered (without kinetic energy harvesters), and therefore dependent on temperature and battery replacements. In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. They are also battery-powered, and are used to retrieve information from the network so that it can be presented to users.
The system operational chain starts at the kinetic-powered nodes broadcasting their own identifier. If an identifier is received at a battery-powered node, the latter stores it for its records. Later, as the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection record comprises, at least, a node identifier and the position read from the GPS module of the battery-operated node prior to the detection. The characteristics of the system presented give the aforementioned operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements at the kinetic modules--reindeer movements in our application. Not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, so localization errors rise even further. Let us recall at this point that such behavior is tied to the aforementioned power-saving policies to extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user is aware of such a detection: it takes some time to find a hotspot. Tracking is posed as a problem of a single kinetically powered target and a population of battery-operated nodes with higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study is again focused on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter.
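Of the three filters mentioned, the alpha-beta filter is the simplest; a minimal generic 1-D version is sketched below. This is the textbook form, not the thesis implementation, and the default gains are illustrative.

```python
# Minimal 1-D alpha-beta tracker (generic textbook form, not the thesis
# implementation; the default gains are illustrative).

def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.005, x0=0.0, v0=0.0):
    x, v = x0, v0
    estimates = []
    for z in measurements:
        x_pred = x + v * dt        # predict position, keep velocity
        r = z - x_pred             # innovation (measurement residual)
        x = x_pred + alpha * r     # correct position
        v = v + (beta / dt) * r    # correct velocity
        estimates.append(x)
    return estimates
```

With alpha = 1 and beta = 0 the filter simply follows the measurements; smaller gains trade responsiveness for rejection of noisy inputs such as the distorted reference positions discussed above.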
The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, such statistical characterization is used to forecast performance figures given specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and to become more environmentally friendly by diminishing the number of batteries that can potentially be lost. Whether it is applicable or not depends ultimately on the conditions and requirements imposed by users' needs and operational environments, which is, as has been explained, one of the topics of this Thesis.
Abstract:
The increasingly demanding conditions in the power generation sector require applying all available technologies to optimize processes and reduce costs. An integrated asset management strategy, combining technical analysis with operation and maintenance management, can help to improve plant performance, flexibility and reliability. In order to deploy such a model, it is necessary to combine plant data and specific equipment condition information with different systems devoted to analyzing performance and equipment condition, and to take advantage of the results to support operation and maintenance decisions. This model, which has been dealt with in some detail for electricity transmission and distribution networks, is not yet broadly extended in the power generation sector; it is proposed in this study for the case of a combined power plant. Its application would translate into direct benefits for operation and maintenance and for the interaction with the energy market.
Abstract:
Buses are considered a slow, low-comfort and low-reliability transport system, hence their negative, poor image. In the framework of the 3iBS project (2012), several examples of innovative and/or effective solutions regarding the Level of Service (LoS) were analysed, aiming to provide operators, practitioners and policy makers with a set of Good Practice Guidelines to strengthen the competitiveness of the bus in the urban environment. The identification of the key indicators regarding vehicles, infrastructure and operation was possible through the analysis of a set of case studies, among which Barcelona (Spain), Cagliari (Italy), London (United Kingdom), and Paris and Nantes (France). A cross-comparison between the case studies was carried out to contrast the level of achievement of the different criteria considered. The information provided on regulatory, financial and technical issues allows the identification of a number of specific factors influencing the implementation of a high-quality transport scheme, and sets the basis for the elaboration of a set of Guidelines for the implementation of an intelligent, innovative and integrated bus system, including the main barriers to be tackled.
Abstract:
We study the evolution of a viscous fluid drop rotating about a fixed axis at constant angular velocity Ω or constant angular momentum L, surrounded by another viscous fluid. The problem is considered in the limit of large Ekman number and small Reynolds number. The analysis is carried out by combining asymptotic methods and full numerical simulation by means of the boundary element method. We pay special attention to the stability/instability of equilibrium shapes and the possible formation of singularities representing a change in the topology of the fluid domain. When the evolution is at constant Ω, depending on its value, drops can take the form of a flat film whose thickness goes to zero in finite time or of an elongated filament that extends indefinitely. When evolution takes place at constant L and axial symmetry is imposed, thin films surrounded by a toroidal rim can develop, but the film thickness does not vanish in finite time. When axial symmetry is not imposed and L is sufficiently large, drops break axial symmetry and, depending on the value of L, reach an equilibrium configuration with a 2-fold symmetry or break up into several drops with a 2- or 3-fold symmetry. The mechanism of breakup is also described.
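The difference between the two driving conditions can be stated in one standard relation; here I(t) denotes the drop's moment of inertia about the rotation axis (notation assumed, not taken from the abstract):

```latex
L(t) = I(t)\,\Omega(t)
\quad\Longrightarrow\quad
\Omega(t) = \frac{L}{I(t)} \quad \text{at constant } L .
```

At constant L the angular velocity thus decreases as the deforming drop moves mass away from the axis, whereas at constant Ω no such self-regulation occurs, consistent with the qualitatively different evolutions reported above.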
Abstract:
Its excellent mechanical properties, corrosion resistance, and light weight, which make it easy to apply and inexpensive to ship to the worksite, are the basis of the extended use of fiber-reinforced polymer (FRP) as external strengthening for structures. FRP strengthening is a rapid operation calling for only limited labor and lightweight ancillary equipment, all of which minimizes both the interruption of facility usage and user inconvenience. These advantages have aroused considerable interest in civil engineering science and technology and have led to countless applications the world over. Research studies on the shear strength of FRP-strengthened members have been much fewer in number and more controversial than the research on flexural strengthening, for which a more or less standardized and generally accepted procedure has been established. The research conducted and a host of applications around the world have shown that FRP strengthening is an effective technique for raising ultimate shear strength, but they have also revealed a need for further experimental and theoretical research to advance the understanding of the mechanisms involved and to establish suitable design procedures that optimize the excellent properties of this material. The models that explain reinforced concrete (RC) shear strength behavior are complex and cannot be directly transposed to engineering formulas. The standards presently in place generally establish shear capacity empirically as the sum of the capacities of the concrete and the passive reinforcement. When members are externally strengthened with FRP, the models are obviously even more complex. 
The existing guides and recommendations propose calculating capacity by adding the external strength provided by the FRP to the contributions of the concrete and passive reinforcement. The suitability of this approach is questionable, however, because it fails to consider a possible interaction between the passive reinforcement and the external strengthening. This is the subject of the present work, which focuses on the shear strengthening of reinforced concrete members with externally bonded unidirectional carbon fiber sheets and epoxy resin. Initially, a thorough literature review on the shear behavior of reinforced concrete beams with and without external FRP strengthening was performed, paying special attention to the acting mechanisms studied to date. This review covered the most important models, both for describing the concrete-FRP bond phenomenon and for calculating the FRP shear contribution, through separate databases of pull-out tests and of shear tests on reinforced concrete beams externally strengthened with FRP. On this basis, the mechanisms acting in the FRP shear contribution for reinforced concrete beams were set out, together with the way the main existing design guidelines address them. Likewise, a stress-resistance model for the FRP is defined, and two models are proposed for calculating the effective stresses or strains: one based on the bond model proposed by Oller (2005), and the other on a multivariate regression over the mechanisms described. To complement the theoretical part, an experimental program was developed that, in addition to contributing more records to the meager existing database, sheds more light on the points considered poorly resolved. The test program included 32 tests on 16 beams 4.5 m long (2 tests per beam), externally shear-strengthened with unidirectional CFRP sheets. Finally, these studies have allowed modifications to the existing codes and guidelines to be proposed.
Abstract:
Under-deck cable-stayed bridges are very effective structural systems in which the strong contribution of the stay cables under live loading allows for the design of very slender decks for persistent and transient loading scenarios. Their behaviour when subjected to seismic excitation is investigated herein, and a set of design criteria is presented relating to the type and arrangement of bearings, the number and configuration of struts, and the transverse distribution of stay cables. The nonlinear behaviour of these bridges when subjected to both near-field and far-field accelerograms has been thoroughly investigated through the use of incremental dynamic analyses. An intensity measure that reflects the pertinent contributions to the response when several vibration modes are activated is proposed and shown to be effective for the analysis of this structural type. The under-deck cable-stay system contributes very positively to reducing the response when the bridges are subjected to very strong seismic excitation. For such scenarios, the reduction in the stiffness of the deck because of crack formation, when prestressed concrete decks are used, mobilises the cable system and enhances the overall performance of the system. Sets of natural accelerograms compliant with the prescriptions of Eurocode 8 were also applied in order to propose a set of design criteria for this bridge type in earthquake-prone areas. Particular attention is given to outlining the optimal strategies for the deployment of bearings.
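Incremental dynamic analysis, as used above, repeatedly scales each accelerogram and reruns the nonlinear model until a collapse criterion is reached. A schematic loop follows; `run_nonlinear_analysis` is a hypothetical solver callback standing in for the actual finite element model, and the collapse threshold is an illustrative assumption.

```python
# Schematic incremental dynamic analysis (IDA) loop. The solver callback
# and the collapse threshold are illustrative stand-ins, not the study's
# actual model or criteria.

def incremental_dynamic_analysis(accelerogram, run_nonlinear_analysis,
                                 im_scales, collapse_response=0.10):
    """Return (intensity scale, peak response) pairs up to collapse.

    run_nonlinear_analysis: hypothetical callback taking a scaled
        accelerogram and returning the peak response quantity
        (e.g. peak drift); it stands in for the structural solver.
    """
    curve = []
    for s in im_scales:
        scaled = [a * s for a in accelerogram]
        peak = run_nonlinear_analysis(scaled)
        curve.append((s, peak))
        if peak >= collapse_response:  # stop once collapse is reached
            break
    return curve
```

Plotting intensity against peak response for each record yields the IDA curves from which record-to-record variability and collapse intensities are read off.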
Abstract:
The aim of this research was to implement a methodology based on a supervised classifier using the Mahalanobis distance to characterize the grapevine canopy and assess leaf area and yield from RGB images. The method automatically processes sets of images and calculates the areas (numbers of pixels) corresponding to seven different classes (Grapes, Wood, Background, and four classes of Leaf of increasing leaf age). Each class is initialized by the user, who selects a set of representative pixels for it in order to induce the clustering around them. The proposed methodology was evaluated with 70 grapevine (V. vinifera L. cv. Tempranillo) images acquired in a commercial vineyard located in La Rioja (Spain), after several defoliation and de-fruiting events on 10 vines, with a conventional RGB camera and no artificial illumination. The segmentation results showed a performance of 92% for leaves and 98% for clusters, and made it possible to assess the grapevine's leaf area and yield with R2 values of 0.81 (p < 0.001) and 0.73 (p = 0.002), respectively. This methodology, which operates with a simple image acquisition setup and guarantees the right number and kind of pixel classes, has proven suitable and robust enough to provide valuable information for vineyard management.
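The classification step described above can be sketched generically: estimate a mean and covariance per class from the user-selected seed pixels, then assign every pixel to the class with the smallest Mahalanobis distance. This is a generic reimplementation of the idea, not the authors' code, and the seed values are made-up RGB samples.

```python
# Generic Mahalanobis-distance pixel classifier (a sketch of the idea,
# not the authors' code; seed values below are made-up RGB samples).
import numpy as np

def fit_classes(seed_pixels):
    """seed_pixels: dict class_name -> (n, 3) array of user-selected RGB samples."""
    stats = {}
    for name, samples in seed_pixels.items():
        mu = samples.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(samples, rowvar=False))
        stats[name] = (mu, inv_cov)
    return stats

def classify(pixels, stats):
    """Assign each (R, G, B) row in `pixels` to the nearest class."""
    names = list(stats)
    dists = []
    for name in names:
        mu, inv_cov = stats[name]
        d = pixels - mu
        # Squared Mahalanobis distance of every pixel to this class.
        dists.append(np.einsum('ij,jk,ik->i', d, inv_cov, d))
    best = np.argmin(np.vstack(dists), axis=0)
    return [names[i] for i in best]

# Toy seeds for two of the seven classes.
seeds = {
    "Grapes": np.array([[90, 30, 90], [100, 40, 100], [110, 30, 95],
                        [95, 45, 105], [105, 35, 85]], dtype=float),
    "Leaf": np.array([[30, 120, 30], [40, 130, 45], [35, 140, 35],
                      [45, 125, 40], [25, 135, 50]], dtype=float),
}
stats = fit_classes(seeds)
```

Using the class covariance, rather than plain Euclidean distance, lets each class have its own shape in RGB space, which is what makes the seed-based initialization effective with as few as a handful of representative pixels.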
Abstract:
The general objective of this work is to analyze the regulatory processes underlying flowering transition and inflorescence and flower development in grapevine. Most of these crucial developmental events take place within buds growing during two seasons in two consecutive years. During the first season, the shoot apical meristem within the bud differentiates all the basic elements of the shoot, including flowering transition in lateral primordia and the development of inflorescence primordia. These events practically end with bud dormancy. In the second season, buds resume shoot growth, associated with flower formation and development. In grapevine, the lateral meristems can give rise either to tendril or to inflorescence primordia, which are homologous organs. With this purpose, we performed global transcriptome analyses along the bud annual cycle and during inflorescence and tendril development. In addition, we approached the genomic analysis of the MIKC-type MADS-box gene family in grapevine to identify all its members and assign them putative biological functions. Regarding the bud developmental cycle, the results indicate that the main factors explaining the global gene expression differences were the processes of bud dormancy and active growth, as well as stress responses. Non-dormant buds exhibited up-regulation of functional categories typical of actively proliferating and growing cells (photosynthesis, cell cycle regulation, chromatin assembly), whereas in dormant ones the main up-regulated functional categories were associated with stress response pathways, together with transcripts related to starch catabolism. Major transcriptional changes during the dormancy period were associated with the para/endodormancy, endo/ecodormancy and ecodormancy/bud break transitions. Global transcriptional analyses along tendril and inflorescence development suggested that these two homologous organs share a common transcriptional program related to cell proliferation functions. 
Both structures showed a progressive decrease in the expression of categories such as cell cycle, auxin metabolism/signaling, DNA metabolism and chromatin assembly, as well as of a cluster of five transcripts belonging to the GROWTH-REGULATING FACTOR (GRF) transcription factor family, which are known to control cell proliferation in other species and to determine the size of lateral organs. However, they also showed organ-specific transcriptional programs that can be related to their differential organ structure and function. Tendrils showed higher transcription of genes related to photosynthesis, hormone signaling and secondary metabolism than inflorescences, while inflorescences showed higher transcriptional activity of genes encoding transcription factors (especially those belonging to the MADS-box gene family). Further analysis along inflorescence development evidenced the relevance of additional functions likely related to processes of flower development, such as fatty acid and lipid metabolism, jasmonate signaling and oxylipin biosynthesis. The transcriptional analyses performed highlighted the relevance of several groups of transcriptional regulators in the developmental processes studied. The expression profiles along bud development revealed significant differences for some MADS-box subfamilies in relation to other plant species, such as the members of the FLC and SVP subfamilies, suggesting new roles for these groups in grapevine. In this way, it was found that VvFLC2 and VvAGL15.1 could participate, together with some members of the SPL-L family, in dormancy regulation, as has been shown for some of them in other woody plants. Similarly, the expression patterns of the VvFLC1, VvFUL and VvSOC1.1 genes (together with VvFT, VvMFT1 and VFL) could indicate that they play a role in flowering transition in grapevine, in parallel to their roles in other plant systems. 
The expression levels of VFL, the grapevine LEAFY homolog, could be crucial to specify the development of inflorescence and flower meristems instead of tendril meristems. MADS-box genes VvAP3.1 and 2, VvPI, VvAG1 and 3, VvSEP1-4, as well as VvBS1 and 2 are likely associated with the events of flower meristems and flower organs differentiation, while VvAP1 and VvFUL-L (together with VvSOC1.1, VvAGL6.2) could be involved on tendril development given their expression patterns. In addition, the biological function ofVvAP1 and VvTFL1A was analyzed using a gene silencing approach in transgenic grapevine plants. Our preliminary results suggested a possible role for both genes in the initiation and differentiation of tendrils. Finally, the genomic analysis of the MADS-box gene family in grapevine revealed differential features regarding number and expression pattern of genes putatively involved in the flowering transition process as compared to those involved in the specification of flower and fruit organ identity. Altogether, the results obtained allow identifying putative candidate genes and pathways regulating grapevine reproductive developmental processes paving the way to future experiments demonstrating specific gene biological functions. RESUMEN El objetivo general de este trabajo es analizar los procesos regulatorios subyacentes a la inducción floral así como al desarrollo de la inflorescencia y la flor en la vid. La mayor parte de estos eventos cruciales tienen lugar en las yemas a lo largo de dos estaciones de crecimiento consecutivas. Durante la primera estación, el meristemo apical contenido en la yema diferencia los elementos básicos del pámpano, lo cual incluye la inducción de la floración en los meristemos laterales y el subsiguiente desarrollo de primordios de inflorescencia. Estos procesos prácticamente cesan con la entrada en dormición de la yema. 
En la segunda estación, se reanuda el crecimiento del pámpano acompañado por la formación y desarrollo de las flores. En la vid, los meristemos laterales pueden dar lugar a primordios de inflorescencia o de zarcillo que son considerados órganos homólogos. Con este objetivo llevamos a cabo un estudio a nivel del transcriptoma de la yema a lo largo de su ciclo anual, así como a lo largo del desarrollo de la inflorescencia y del zarcillo. Además realizamos un análisis genómico de la familia MADS de factores transcripcionales (concretamente aquellos del tipo MIKC) para identificar todos sus miembros y tratar de asignarles posibles funciones biológicas. En cuanto al ciclo de desarrollo de la yema, los resultados indican que los principales factores que explican las diferencias globales en la expresión génica fueron los procesos de dormición de la yema y el crecimiento activo junto con las respuestas a diversos tipos de estrés. Las yemas no durmientes mostraron un incremento en la expresión de genes contenidos en categorías funcionales típicas de células en proliferación y crecimiento activo (como fotosíntesis, regulación del ciclo celular, ensamblaje de cromatina), mientras que en las yemas durmientes, las principales categorías funcionales activadas estaban asociadas a respuestas a estrés, así como con el catabolismo de almidón. Los mayores cambios observados a nivel de transcriptoma en la yema coincidieron con las transiciones de para/endodormición, endo/ecodormición y ecodormición/brotación. Los análisis transcripcionales globales a lo largo del desarrollo del zarcillo y de la inflorescencia sugirieron que estos dos órganos homólogos comparten un programa transcripcional común, relacionado con funciones de proliferación celular. 
Both structures showed a progressive decrease in the expression of genes belonging to functional categories such as cell-cycle regulation, auxin metabolism/signaling, DNA metabolism and chromatin assembly, as well as of a group of five transcripts of the GROWTH-REGULATING FACTOR (GRF) family of transcription factors, which have been associated with the control of cell proliferation and the determination of lateral organ size in other species. However, the analyses also revealed transcriptional programs that could be related to the different structure and function of these organs. Tendrils showed higher transcriptional activity of genes related to photosynthesis, hormone signaling and secondary metabolism than inflorescences, whereas the latter showed higher transcriptional activity of genes encoding transcription factors (especially those belonging to the MADS-box family). Additional analyses along inflorescence development highlighted the relevance of other functions possibly related to flower development, such as lipid and fatty-acid metabolism, jasmonate-mediated signaling and oxylipin biosynthesis. The transcriptional analyses carried out revealed the relevance of several groups of transcription factors in the processes studied. The expression profiles studied along bud development showed significant differences in some of the MADS gene subfamilies with respect to other plant species, such as those observed for members of the FLC and SVP subfamilies, suggesting that they could perform new functions in grapevine. In this regard, the genes VvFLC2 and VvAGL15.1 were found to potentially participate, together with some members of the SPL-L family, in the regulation of dormancy. 
Similarly, the expression patterns of VvFLC1, VvFUL and VvSOC1.1 (together with VvFT, VvMFT1 and VFL) could indicate that they play a role in the regulation of flowering induction in grapevine, as observed in other plant systems. The expression levels of VFL, the grapevine homolog of the A. thaliana LEAFY gene, could be crucial for specifying the development of inflorescence and flower meristems instead of tendril meristems. The genes VvAP3.1 and 2, VvPI, VvAG1 and 3, VvSEP1-4, as well as VvBS1 and 2, appear to be associated with the differentiation of flower meristems and flower organs, whereas VvAP1 and VvFUL-L (together with VvSOC1.1 and VvAGL6.2) could be involved in tendril development, given their expression patterns. Additionally, the biological function of the VvAP1 and VvTFL1A genes was analyzed through a gene-silencing strategy. Preliminary data suggest a possible role for both genes in tendril initiation and differentiation. Finally, the genomic analysis of the MADS family in grapevine revealed differences with respect to other plant species in the number of members and the expression pattern of genes putatively involved in flowering induction, as compared to those related to the specification of floral organ identity and fruit development. Altogether, the results obtained have allowed the identification of candidate genes and pathways possibly participating in the regulation of grapevine reproductive development, laying the groundwork for future experiments aimed at determining specific gene biological functions.
Resumo:
Engineering degree models have historically been diverse in Europe, and Spain is now adopting the Bologna process for European universities. Separate from the older universities, which were in part technically active, civil engineering (Caminos, Canales y Puertos) started in Spain at the end of the 18th century, adopting the French model of Upper Schools for state civil servants with an entrance examination. After the intense wars following 1800, the Ingenieros de Montes appeared as an Upper School to conserve forest regions, and in 1855 the Ingenieros Agrónomos followed, to promote the related techniques and practices. Other engineering branches also appeared as Upper Schools, but oriented more towards private industry. All of these Upper Schools acquired associated Lower Schools of Ingeniero Técnico. Recently, both types grew considerably in number and evolved, linked also to recognized professions. Spanish society, within the European Community, evolved markedly around the year 2000, in part very successfully, but with severe imbalances that led to severe youth unemployment in the 2008-2011 crisis. With the Bologna process, major formal changes arrived from 2010-11, accepted with intense adaptation. The Lower Schools are converging towards the Upper Schools, and since 2010-11 both have offered various four-year degrees (Grado), some linked to the pre-existing professions, as well as diverse Masters. Their acceptance among incoming students has started relatively well and will evolve, and the acceptance of the new degrees for employment in Spain, Europe or elsewhere will be essential. Each Grado now has a fairly rigid curriculum and program; MOODLE was introduced to connect students, and specific uses of personal computers are taught in each subject. 
The Escuela de Agrónomos, reorganized under its old name in its former buildings at the entrance of the Moncloa Campus, offers Grados in Agronomic Engineering and Science for various public and private agricultural activities, Food Engineering for food-related activities and control, Agro-Environmental Engineering, more oriented to environmental activities, and in part Biotechnology, also taught in laboratories at the Montegancedo Campus for Plant Biotechnology and Computational Biotechnology. The curricula include basic subjects, engineering, practical work, visits, English, a final-year project, and stays. Some Masters will lead to specific professional diplomas; the list now includes Agro-Engineering, Agro-Forestry Biotechnology, Agricultural and Natural Resource Economics, Complex Physical Systems, Gardening and Landscaping, Rural Engineering, Phytogenetic Resources, Plant Genetic Resources, Environmental Technology for Sustainable Agriculture, and Technology for Human Development and Cooperation.
Resumo:
The design, construction and operation of the tunnels of the M-30, the major ring road of the city of Madrid (Spain), represent a very interesting project in which a wide variety of situations (geometrical, topographical, etc.) had to be covered under variable traffic conditions. For these reasons, the M-30 project is a remarkable technical challenge which, after its completion, has become an international reference. From the "design for safety" perspective, a holistic approach was used to deal with new technologies, the integration of systems and the development of procedures, in order to reach the maximum safety level. At the same time, one of the primary goals was to achieve reasonably homogeneous characteristics permitting the operation of a network of tunnels as a single infrastructure. In the case of the ventilation system, these goals led to innovative solutions and coordination efforts of great interest. Accordingly, this paper describes the principal ideas underlying the conceptual solution developed, focusing on the main peculiarities of the project.
Resumo:
The one-dimensional self-similar motion of an initially cold, half-space plasma of electron density n0, produced by the (anomalous) absorption of a laser pulse of irradiation and ion-electron energy exchange, involves three dimensionless numbers: ε = nc/n0 (assumed small), Zi (ion charge number), and a parameter a
Resumo:
The optimization of parameters such as power consumption, the amount of logic resources used or the memory footprint has always been one of the main concerns when designing embedded systems. This is because they are systems with a limited amount of resources, traditionally employed for a specific purpose that remains unchanged throughout the system's lifetime. However, the use of embedded systems has spread to application areas outside their traditional scope, characterized by a higher computational demand. Thus, for example, some of these systems must carry out intensive multimedia signal processing or transmit data through high-capacity communication systems. Moreover, the operating conditions of the system may vary at run time. This happens, for example, if its operation depends on data measured by the system itself or received through the network, on the user's demands at each moment, or on internal conditions of the device itself, such as battery life. As a consequence of the existence of dynamic operating requirements, it is necessary to move towards dynamic management of the system resources. Although software is inherently flexible, it does not offer computing power as high as hardware does. Therefore, reconfigurable hardware appears as a suitable solution to deal more flexibly with dynamically variable requirements in systems with a high computational demand. Hardware flexibility and adaptability require reconfigurable devices that allow the modification of their functionality on demand. 
In this thesis, FPGAs (Field Programmable Gate Arrays) have been selected as the most appropriate devices available today to implement systems based on reconfigurable hardware. Among all the existing possibilities for exploiting the reconfiguration capability of commercial FPGAs, dynamic and partial reconfiguration has been selected. This technique consists in substituting part of the device logic while the rest continues operating. The dynamic and partial reconfiguration capability of FPGAs is employed in this thesis to address the flexibility and computational-capacity requirements demanded of embedded devices. The main proposal of this doctoral thesis is the use of spatially scalable processing architectures, which are able to adapt their functionality and performance at run time, establishing a trade-off between these parameters and the amount of logic they occupy in the device. This is what we refer to as architectures with a scalable footprint. In particular, highly parallel, modular, regular architectures with high communication locality are proposed for this purpose. The size of these architectures can be modified by adding or removing some of their composing modules, in either one or two dimensions. This strategy makes it possible to implement scalable solutions without having to keep a separate version of each design for every possible architecture size. This significantly reduces the time needed to modify the size, as well as the amount of memory needed to store all the configuration files. Instead of proposing architectures for specific applications, generic processing patterns have been chosen, which can be tuned to solve different problems in the state of the art. 
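A minimal sketch of the scalable-footprint idea, in Python: an array of identical processing modules whose size can be changed at run time, trading performance for device area. The class name and all figures (slice counts, throughput) are invented for illustration; in the real system, scaling would trigger partial reconfiguration of the FPGA rather than updating a counter.

```python
# Toy model of a spatially scalable core: performance and area both grow
# with the number of instantiated modules, so the system can trade one
# for the other at run time. All figures are illustrative.

class ScalableCore:
    def __init__(self, module_area_slices, module_throughput):
        self.module_area = module_area_slices  # area cost per module (slices)
        self.module_tp = module_throughput     # work units/s per module
        self.modules = 1                       # current array size

    def scale_to(self, n_modules):
        """Add or remove modules (would trigger partial reconfiguration)."""
        self.modules = max(1, n_modules)

    @property
    def area(self):
        return self.modules * self.module_area

    @property
    def throughput(self):
        return self.modules * self.module_tp

core = ScalableCore(module_area_slices=400, module_throughput=30.0)
core.scale_to(4)                   # grow the array to meet demand
print(core.area, core.throughput)  # 1600 120.0
core.scale_to(2)                   # shrink to free device resources
print(core.area, core.throughput)  # 800 60.0
```

Because only whole modules are added or removed, one partial bitstream per module type suffices, instead of one full bitstream per array size.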
In this regard, patterns based on systolic schemes, as well as wavefront-type patterns, are proposed. In order to offer a complete solution, other aspects related to the design and operation of the architectures have been addressed, such as the control of the FPGA reconfiguration process, the integration of the architectures into the rest of the system, and the techniques necessary for their implementation. Regarding implementation, different device-dependent low-level aspects have been covered. Among the proposals made in this respect in this doctoral thesis are a router capable of guaranteeing the correct routing of the reconfigurable modules within the area reserved for them, and an inter-module communication strategy that introduces no delay and does not need to use configurable resources of the device. The proposed design flow has been automated by means of a tool called DREAMS. The tool modifies the netlists corresponding to each of the reconfigurable modules of the system, previously generated with commercial tools. The proposed flow is therefore understood as a post-processing stage that adapts those netlists to the requirements of dynamic and partial reconfiguration. The tool carries out this modification completely automatically, so the productivity of the design process clearly increases. To facilitate the process, the tool has been provided with a graphical interface. The proposed design flow, and the tool that supports it, have specific features to address the design of the dynamically scalable architectures proposed in this thesis. 
Among them are support for the relocation of reconfigurable modules to device positions other than where the module was originally implemented, and the generation of communication structures compatible with the symmetry of the architecture. The router has also been used in this thesis to obtain symmetric routing between equivalent nets. This possibility has been exploited to increase the protection of circuits with high security requirements against side-channel attacks, by implementing complementary logic with identical routing. To control the FPGA reconfiguration process, this thesis proposes a reconfiguration engine specially adapted to the requirements of the dynamically scalable architectures. In addition to controlling the reconfiguration port, the reconfiguration engine has been given the ability to relocate reconfigurable modules to arbitrary positions of the device, at run time. In this way, it is sufficient to generate a single bitstream for each reconfigurable module of the system, regardless of the position where it will finally be reconfigured. The strategy followed to implement the module relocation process differs from existing proposals in the state of the art, since it consists in composing the configuration files at run time. This increases the speed of the process, while reducing the length of the partial configuration files to be stored in the system. The reconfiguration engine supports reconfigurable modules with a height smaller than the height of a clock region of the device. Internally, the engine merges the frames describing the new module with the configuration previously existing in the device. The scaling of the processing architectures proposed in this thesis can also benefit from this mechanism. 
Direct access to an external memory where partial bitstreams can be stored has also been incorporated. To accelerate the reconfiguration process, the ICAP has been operated above the maximum clock frequency recommended by the manufacturer. Thus, in the case of Virtex-5, although the maximum clock frequency should be 100 MHz, the reconfiguration port has been operated at frequencies of up to 250 MHz, including the run-time relocation process. The possibility of porting the reconfiguration engine to future FPGA families has also been considered. Furthermore, the reconfiguration engine can be used to inject faults into the hardware device itself, making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by generating configuration files into which an error has been intentionally introduced, so that their functionality is modified. In order to verify the validity and the benefits of the architectures proposed in this thesis, two main application lines have been followed. First, their use is proposed as part of an adaptive platform based on evolvable hardware, with scalability, adaptability and fault-recovery capabilities. Second, a scalable deblocking filter, suited to scalable video coding, has been developed as an application example of the proposed wavefront-type architectures. Evolvable hardware consists in the use of evolutionary algorithms to design hardware autonomously, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a library of presynthesized elements, according to the decisions taken by the evolutionary algorithm, instead of defining their configuration at design time. 
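The fault-emulation approach mentioned above (configuration files with an intentionally introduced error) can be sketched as follows. This is a hypothetical illustration that treats a partial bitstream as raw bytes and flips a single configuration bit; a real flow would have to skip the bitstream header and target only configuration frames.

```python
# Sketch of bitstream-based fault emulation: flip one bit of a partial
# configuration file so that, once reconfigured into the device, the
# module behaves erroneously. Offsets and file contents are invented.

def inject_fault(bitstream: bytes, bit_index: int) -> bytes:
    """Return a copy of the bitstream with one bit flipped."""
    data = bytearray(bitstream)
    byte, bit = divmod(bit_index, 8)
    data[byte] ^= 1 << bit
    return bytes(data)

golden = bytes([0x00, 0xFF, 0xAA, 0x55])   # stand-in partial bitstream
faulty = inject_fault(golden, bit_index=17)

assert faulty != golden                     # exactly one bit differs
assert inject_fault(faulty, 17) == golden   # the injection is reversible
```

Reversibility matters in practice: reconfiguring the original (golden) bitstream removes the emulated fault, which is how transient faults can be modelled.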
In this way, the configuration of the core can change when the environmental conditions do, at run time, thus achieving autonomous control of the dynamic reconfiguration process. The system is therefore capable of autonomously optimizing its own configuration. Evolvable hardware has an inherent self-repair capability. The evolvable architectures proposed in this thesis have been shown to be tolerant to transient, permanent and cumulative faults. The evolvable platform has been used to implement noise-removal filters. Scalability has also been exploited in this application. Scalable evolvable architectures allow the autonomous adaptation of the processing cores to fluctuations in the amount of resources available in the system. They therefore constitute an example of dynamic scalability to achieve a given quality level, which may vary at run time. Two variants of scalable evolvable systems have been proposed. The first consists of a single evolvable processing core, while the second is formed by a variable number of processing arrays. Scalable video coding, unlike non-scalable codecs, allows the decoding of video sequences with different levels of quality, temporal resolution or spatial resolution, by discarding the undesired information. Different algorithms support this feature. In particular, the Scalable Video Coding (SVC) standard, proposed as an extension of H.264/AVC, is employed, since the latter is widely used both in industry and in research. In order to exploit all the flexibility offered by the standard, the characteristics of the decoder must be adaptable at run time. The use of dynamically scalable architectures is proposed in this thesis with this aim. 
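As an illustration of the evolvable-hardware principle (each processing element of the array is selected from a presynthesized library according to the decisions of an evolutionary algorithm), the following sketch runs a minimal (1+1) evolutionary strategy. The element library, the target configuration and the fitness function are toy stand-ins, not the thesis implementation; there, fitness would come from evaluating the filtered image against a reference.

```python
# Minimal (1+1) evolution-strategy sketch of the evolvable-hardware idea:
# a candidate "configuration" picks, for each cell of a processing array,
# one element from a presynthesized library; the winner would be mapped
# to the FPGA via partial reconfiguration. Library/target are invented.
import random

LIBRARY = ["min", "max", "avg", "identity"]               # presynthesized elements
TARGET = ["avg", "avg", "max", "identity", "min", "avg"]  # toy optimum

def fitness(cfg):
    # Stand-in for evaluating the filtered image against a reference.
    return sum(a == b for a, b in zip(cfg, TARGET))

def evolve(generations=2000, seed=1):
    rng = random.Random(seed)
    parent = [rng.choice(LIBRARY) for _ in TARGET]
    for _ in range(generations):
        child = parent[:]
        child[rng.randrange(len(child))] = rng.choice(LIBRARY)  # mutate one cell
        if fitness(child) >= fitness(parent):  # keep child if not worse
            parent = child
    return parent

best = evolve()
assert fitness(best) == len(TARGET)  # the toy target is reached
```

Accepting equally fit children lets the search drift across fitness plateaus, a common choice in evolvable-hardware work; only this selection loop, not the circuit evaluation, is shown here.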
The deblocking filter is an algorithm aimed at improving the visual perception of the reconstructed image by smoothing the blocking artifacts generated in the encoder loop. It is one of the most data-processing-intensive tasks of H.264/AVC and SVC, and its computational load is highly dependent on the scalability level selected at the decoder. Therefore, the deblocking filter has been selected as a proof of concept for the application of dynamically scalable architectures to video compression. The proposed architecture allows computing units to be added or removed, following a wavefront scheme. The architecture has been proposed together with a macroblock-level parallel processing scheme for the deblocking filter, such that when the size of the architecture changes, the macroblock filtering order changes accordingly. The proposed pattern is based on dividing the processing of each macroblock into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock. The main original contributions of this thesis are the following: - The use of highly regular, modular, parallel architectures with intense communication locality to implement dynamically reconfigurable processing cores. - The use of two-dimensional, mesh-type architectures to build dynamically scalable architectures with a scalable footprint. In this way, the architectures make it possible to establish a trade-off between the area they occupy in the device and the performance they offer at each moment. Generic processing templates, of systolic or wavefront type, are proposed, which can be adapted to different processing problems. 
- A design flow, and a tool supporting it, for the design of dynamically reconfigurable systems, focused on the design of the highly parallel, modular and regular architectures proposed in this thesis. - A communication scheme between reconfigurable modules that introduces no delay and requires no dedicated logic resources. - A flexible router, capable of solving the routing conflicts associated with the design of dynamically reconfigurable systems. - An optimization algorithm for systems formed by multiple scalable cores, which optimizes the parameters of such a system by means of a genetic algorithm. It is based on a model known as the knapsack problem. - A reconfiguration engine adapted to the requirements of highly regular and modular architectures. It combines a high reconfiguration speed with the ability to relocate modules at run time, including support for the reconfiguration of regions occupying less than a clock region, as well as the replication of a reconfigurable module in multiple positions of the device. - A fault-injection mechanism that, using the system's reconfiguration engine, makes it possible to evaluate the effects of permanent and transient faults on reconfigurable architectures. - The demonstration of the possibilities of the architectures proposed in this thesis for the implementation of evolvable hardware systems with a high data-processing capacity. - The implementation of scalable evolvable hardware systems, capable of autonomously dealing with fluctuations in the amount of resources available in the system. - A parallel processing strategy for the deblocking filter, compatible with the H.264/AVC and SVC standards, that reduces the number of macroblock cycles needed to process a video frame. 
- A dynamically scalable architecture enabling the implementation of a new deblocking filter, fully compatible with the H.264/AVC and SVC standards, that exploits macroblock-level parallelism. This document is organized in seven chapters. The first offers an introduction to the technological framework of this thesis, particularly focused on the dynamic and partial reconfiguration of FPGAs, and motivates the need for the dynamically scalable architectures proposed here. Chapter 2 describes the dynamically scalable architectures; this description includes most of the architectural contributions of this thesis. The design flow adapted to these architectures is proposed in Chapter 3. The reconfiguration engine is proposed in Chapter 4, while the use of these architectures to implement evolvable hardware systems is addressed in Chapter 5. The scalable deblocking filter is described in Chapter 6, and the final conclusions of this thesis, together with the description of future work, are addressed in Chapter 7. ABSTRACT The optimization of system parameters, such as power dissipation, the amount of hardware resources and the memory footprint, has always been a main concern when dealing with the design of resource-constrained embedded systems. This situation is even more demanding nowadays. Embedded systems can no longer be considered only as specific-purpose computers, designed for a particular functionality that remains unchanged during their lifetime. Instead, embedded systems are now required to deal with more demanding and complex functions, such as multimedia data processing and high-throughput connectivity. In addition, system operation may depend on external data, user requirements or internal variables of the system, such as the battery lifetime. All these conditions may vary at run time, leading to adaptive scenarios. 
As a consequence of both the growing computational complexity and the existence of dynamic requirements, dynamic resource management techniques for embedded systems are needed. Software is inherently flexible, but it cannot meet the computing power offered by hardware solutions. Therefore, reconfigurable hardware emerges as a suitable technology to deal with the run-time variable requirements of complex embedded systems. Adaptive hardware requires the use of reconfigurable devices, whose functionality can be modified on demand. In this thesis, Field Programmable Gate Arrays (FPGAs) have been selected as the most appropriate commercial technology existing nowadays to implement adaptive hardware systems. There are different ways of exploiting reconfigurability in reconfigurable devices; among them is dynamic and partial reconfiguration, a technique which consists in substituting part of the FPGA logic on demand, while the rest of the device continues working. The strategy followed in this thesis is to exploit the dynamic and partial reconfiguration of commercial FPGAs to deal with the flexibility and complexity demands of state-of-the-art embedded systems. The proposal of this thesis to deal with run-time variable system conditions is the use of spatially scalable processing hardware IP cores, which are able to adapt their functionality or performance at run time, trading them off against the amount of logic resources they occupy in the device. This is referred to as a scalable footprint in the context of this thesis. The distinguishing characteristic of the proposed cores is that they rely on highly parallel, modular and regular architectures, arranged in one or two dimensions. These architectures can be scaled by means of the addition or removal of the composing blocks. 
This strategy avoids implementing a full version of the core for each possible size, with the corresponding benefits in terms of scaling and adaptation time, as well as bitstream storage memory requirements. Instead of providing specific-purpose architectures, generic architectural templates, which can be tuned to solve different problems, are proposed in this thesis. Architectures following both systolic and wavefront templates have been selected. Together with the proposed scalable architectural templates, other issues needed to ensure the proper design and operation of the scalable cores, such as the device reconfiguration control, the run-time management of the architecture and the implementation techniques, have also been addressed in this thesis. With regard to the implementation of dynamically reconfigurable architectures, device-dependent low-level details are addressed. Some of the aspects covered in this thesis are the area-constrained routing of reconfigurable modules, and an inter-module communication strategy which introduces neither extra delay nor logic overhead. The system implementation, from the hardware description to the device configuration bitstream, has been fully automated by modifying the netlists corresponding to each of the system modules, which are previously generated using the vendor tools. This modification is therefore envisaged as a post-processing step. Based on these implementation proposals, a design tool called DREAMS (Dynamically Reconfigurable Embedded and Modular Systems) has been created, including a graphic user interface. The tool has specific features to cope with modular and regular architectures, including the support for module relocation and the inter-module communication scheme based on the symmetry of the architecture. 
The core of the tool is a custom router, which has also been exploited in this thesis to obtain symmetrically routed nets, with the aim of enhancing the protection of critical reconfigurable circuits against side-channel attacks. This is achieved by duplicating the logic with exactly equal routing. In order to control the reconfiguration process of the FPGA, a Reconfiguration Engine suited to the specific requirements set by the proposed architectures was also designed. In addition to controlling the reconfiguration port, the Reconfiguration Engine has been enhanced with an online relocation ability, which allows a single configuration bitstream to be employed for all the positions where the module may be placed in the device. Unlike existing relocation solutions, which are based on bitstream parsers, the proposed approach is based on the online composition of bitstreams. This strategy increases the speed of the process, while the length of the partial bitstreams is also reduced. The height of the reconfigurable modules can be lower than the height of a clock region. The Reconfiguration Engine manages the merging of the new and the existing configuration frames within each clock region. The process of scaling the hardware cores up and down also benefits from this technique. A direct link to an external memory where partial bitstreams can be stored has also been implemented. In order to accelerate the reconfiguration process, the ICAP has been overclocked beyond the speed reported by the manufacturer. In the case of Virtex-5, even though the maximum frequency of the ICAP is reported to be 100 MHz, valid operation at 250 MHz has been achieved, including the online relocation process. Portability of the reconfiguration solution to today's and, probably, future FPGAs has also been considered. 
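The frame-merging step behind the online composition of bitstreams can be modelled roughly as follows: a module shorter than a clock region owns only a slice of each configuration frame, so its words must be spliced into the frames already configured in the device. Frame size, word layout and offsets here are invented for illustration; real devices add headers, CRCs and device-specific frame addressing.

```python
# Simplified model of frame merging for online relocation: splice the
# words of a sub-clock-region module into the existing configuration
# frames of the target region. Sizes and contents are illustrative.

FRAME_WORDS = 8  # words per configuration frame (invented figure)

def merge_module(region_frames, module_frames, word_offset):
    """Splice module_frames into region_frames starting at word_offset."""
    merged = []
    for existing, new in zip(region_frames, module_frames):
        frame = list(existing)
        frame[word_offset:word_offset + len(new)] = new
        merged.append(frame)
    return merged

region = [[0] * FRAME_WORDS for _ in range(3)]  # current region configuration
module = [[7, 7], [8, 8], [9, 9]]               # 2-word-high relocatable module
out = merge_module(region, module, word_offset=3)
assert out[0] == [0, 0, 0, 7, 7, 0, 0, 0]       # rest of the region untouched
```

Because only the offset changes between target positions, one stored module bitstream can be composed into any compatible location at run time, which is the point of the relocation strategy described above.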
The reconfiguration engine can also be used to inject faults into real hardware devices, thereby making it possible to evaluate the fault tolerance offered by the reconfigurable architectures. Faults are emulated by introducing partial bitstreams intentionally modified to provide erroneous functionality. To prove the validity and the benefits offered by the proposed architectures, two demonstration application lines have been envisaged. First, scalable architectures have been employed to develop an evolvable hardware platform with adaptability, fault tolerance and scalability properties. Second, they have been used to implement a scalable deblocking filter suited to scalable video coding. Evolvable hardware is the use of evolutionary algorithms to design hardware in an autonomous way, exploiting the flexibility offered by reconfigurable devices. In this case, the processing elements composing the architecture are selected from a presynthesized library of processing elements, according to the decisions taken by the algorithm, instead of being decided at design time. This way, the configuration of the array may change as run-time environmental conditions do, achieving autonomous control of the dynamic reconfiguration process. Thus, the self-optimization property is added to the native self-configurability of the dynamically scalable architectures. In addition, evolvable hardware adaptability inherently offers self-healing features. The proposal has proved to be fault-tolerant, since it is able to self-recover from both transient and cumulative permanent faults. The proposed evolvable architecture has been used to implement noise-removal image filters. Scalability has also been exploited in this application. Scalable evolvable hardware architectures allow the autonomous adaptation of the processing cores to a fluctuating amount of resources available in the system. Thus, they constitute an example of the dynamic quality scalability tackled in this thesis. 
Two variants have been proposed: the first consists of a single dynamically scalable evolvable core, while the second contains a variable number of processing cores. Scalable video is a flexible approach to video compression, which offers scalability at different levels. Unlike non-scalable codecs, a scalable video bitstream can be decoded at different levels of quality, spatial or temporal resolution, by discarding the undesired information. Interest in this technology has been fostered by the development of the Scalable Video Coding (SVC) standard, an extension of H.264/AVC. In order to exploit all the flexibility offered by the standard, it is necessary to adapt the characteristics of the decoder to the requirements of each client at run time; the use of dynamically scalable architectures is proposed in this thesis with this aim. The deblocking filter algorithm is responsible for improving the visual perception of a reconstructed image by smoothing the blocking artifacts generated in the encoding loop. It is one of the most computationally intensive tasks of the standard and, furthermore, it is highly dependent on the scalability level selected in the decoder. Therefore, the deblocking filter has been selected as a proof of concept for the implementation of dynamically scalable architectures for video compression. The proposed architecture allows the run-time addition or removal of computational units working in parallel, changing its level of parallelism while following a wavefront computational pattern. The scalable architecture is complemented by a scalable parallelization strategy at the macroblock level, such that when the size of the architecture changes, the macroblock filtering order is modified accordingly. The proposed pattern is based on the division of the macroblock processing into two independent stages, corresponding to the horizontal and vertical filtering of the blocks within the macroblock.
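The wavefront pattern at macroblock level can be sketched as follows. The diagonal grouping below assumes each macroblock depends only on its left and top neighbours; it is a simplified illustration of the dependency relaxation obtained by splitting the horizontal and vertical filtering stages, not the thesis' exact ordering.

```python
def wavefront_schedule(width_mb, height_mb):
    """Group macroblocks into diagonal waves: under a left/top
    dependency, all MBs with equal x + y can be filtered in parallel."""
    waves = {}
    for y in range(height_mb):
        for x in range(width_mb):
            waves.setdefault(x + y, []).append((x, y))
    return [waves[w] for w in sorted(waves)]

def cycles(width_mb, height_mb, units):
    """Macroblock cycles needed when 'units' computational units
    filter macroblocks of the same wave simultaneously."""
    return sum(-(-len(wave) // units)  # ceiling division per wave
               for wave in wavefront_schedule(width_mb, height_mb))

# A 4x3 MB frame has 6 waves; adding a second parallel unit at run time
# reduces the macroblock cycle count, and removing it restores the
# sequential order, mirroring the scalable filtering strategy.
```

Changing `units` at run time corresponds to adding or removing computational units, and the schedule (hence the macroblock filtering order) adapts automatically.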
The main contributions of this thesis are:
- The use of highly parallel, modular, regular and local architectures to implement dynamically reconfigurable processing IP cores, for data-intensive applications with flexibility requirements.
- The use of two-dimensional mesh-type arrays as architectural templates to build dynamically reconfigurable IP cores with a scalable footprint. The proposal consists of generic architectural templates, which can be tuned to solve different computational problems.
- A design flow and a tool targeting the design of DPR systems, focused on highly parallel, modular and local architectures.
- An inter-module communication strategy, named Virtual Borders, which does not introduce delay or area overhead.
- A custom and flexible router to solve the routing conflicts, as well as the inter-module communication problems, that appear during the design of DPR systems.
- An algorithm addressing the optimization of systems composed of multiple scalable cores, whose sizes can be decided individually to optimize the system parameters. It is based on a model known as the multi-dimensional multi-choice knapsack problem.
- A reconfiguration engine tailored to the requirements of highly regular and modular architectures. It combines a high reconfiguration throughput with run-time module relocation capabilities, including support for reconfigurable regions smaller than a clock region and for replication in multiple positions.
- A fault injection mechanism which takes advantage of the system reconfiguration engine, as well as of the modularity of the proposed reconfigurable architectures, to evaluate the effects of transient and permanent faults in these architectures.
- The demonstration of the possibilities of the architectures proposed in this thesis to implement evolvable hardware systems, while keeping a high processing throughput.
- The implementation of scalable evolvable hardware systems, which are able to adapt autonomously to fluctuations in the amount of resources available in the system.
- A parallelization strategy for the H.264/AVC and SVC deblocking filter, which reduces the number of macroblock cycles needed to process a whole frame.
- A dynamically scalable architecture that permits the implementation of a novel deblocking filter module, fully compliant with the H.264/AVC and SVC standards, which exploits the macroblock-level parallelism of the algorithm.
This document is organized in seven chapters. The first provides an introduction to the technology framework of this thesis, especially focused on dynamic and partial reconfiguration, and motivates the need for the dynamically scalable processing architectures proposed in this work. Chapter 2 describes the dynamically scalable architectures, including most of the architectural contributions of this work. The design flow tailored to the scalable architectures, together with the DREAMs tool provided to implement them, is described in chapter 3. The reconfiguration engine is described in chapter 4. The use of the proposed scalable architectures to implement evolvable hardware systems is described in chapter 5, while the scalable deblocking filter is described in chapter 6. Final conclusions of this thesis and the description of future work are addressed in chapter 7.
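The multi-dimensional multi-choice knapsack formulation listed among the contributions can be illustrated with a small brute-force sketch. The cores, size variants, area costs and throughput values below are entirely hypothetical, and the exhaustive search only stands in for the actual optimization algorithm, which must scale to realistic problem sizes.

```python
from itertools import product

# Each core offers several size variants, given as (area, throughput).
# Exactly one variant must be chosen per core (the multi-choice part),
# subject to a resource budget (one dimension here: area).
cores = [
    [(2, 3), (4, 7), (6, 10)],   # hypothetical variants of core 0
    [(1, 1), (3, 5), (5, 8)],    # hypothetical variants of core 1
]
AREA_BUDGET = 8

def best_configuration(cores, budget):
    """Exhaustively pick one variant per core, maximizing total
    throughput without exceeding the area budget."""
    best_gain, best_choice = -1, None
    for choice in product(*[range(len(c)) for c in cores]):
        area = sum(cores[i][v][0] for i, v in enumerate(choice))
        gain = sum(cores[i][v][1] for i, v in enumerate(choice))
        if area <= budget and gain > best_gain:
            best_gain, best_choice = gain, choice
    return best_choice, best_gain

choice, gain = best_configuration(cores, AREA_BUDGET)
```

In a real multi-dimensional instance, the budget check would cover several resource types at once (e.g. slices, BRAMs, DSPs), one constraint per dimension.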