22 results for Modular integrated utility systems
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Neuronal networks exhibit diverse types of plasticity, including the activity-dependent regulation of synaptic functions and refinement of synaptic connections. In addition, continuous generation of new neurons in the “adult” brain (adult neurogenesis) represents a powerful form of structural plasticity, establishing new connections and possibly implementing pre-existing neuronal circuits (Kempermann et al, 2000; Ming and Song, 2005). Neurotrophins, a family of neuronal growth factors, are crucially involved in the modulation of activity-dependent neuronal plasticity. The first evidence for the physiological importance of this role evolved from the observations that the local administration of neurotrophins has dramatic effects on the activity-dependent refinement of synaptic connections in the visual cortex (McAllister et al, 1999; Berardi et al, 2000; Thoenen, 1995). Moreover, the local availability of critical amounts of neurotrophins appears to be relevant for the ability of hippocampal neurons to undergo long-term potentiation (LTP) of synaptic transmission (Lu, 2004; Aicardi et al, 2004). To achieve a comprehensive understanding of the modulatory role of neurotrophins in integrated neuronal systems, information on the mechanisms of local neurotrophin synthesis and secretion, as well as on the distribution of their cognate receptors, is of crucial importance. In the first part of this doctoral thesis I used electrophysiological approaches and real-time imaging techniques to investigate additional features of the regulation of neurotrophin secretion, namely the capability of the neurotrophin brain-derived neurotrophic factor (BDNF) to undergo synaptic recycling. In cortical and hippocampal slices as well as in dissociated cell cultures, neuronal activity rapidly enhances the neuronal expression and secretion of BDNF, which is subsequently taken up by the neurons themselves, but also by perineuronal astrocytes, through the selective activation of BDNF receptors. Moreover, internalized BDNF becomes part of the releasable pool of the neurotrophin, which is promptly recruited for activity-dependent recycling. Thus, we described for the first time that neurons and astrocytes contain an endocytic compartment competent for BDNF recycling, suggesting a specialized form of bidirectional communication between neurons and glia. The mechanism of BDNF recycling is reminiscent of that for neurotransmitters and identifies BDNF as a new modulator implicated in neuro- and glio-transmission. In the second part of this doctoral thesis I addressed the role of BDNF signaling in adult hippocampal neurogenesis. I generated a transgenic mouse model to specifically investigate the influence of BDNF signaling on the generation, differentiation, survival and connectivity of newborn neurons in the adult hippocampal network. I demonstrated that the survival of newborn neurons critically depends on the activation of the BDNF receptor TrkB. The TrkB-dependent decision regarding life or death of these newborn neurons takes place right at the transition point of their morphological and functional maturation. Before newborn neurons start to die, they exhibit a drastic reduction in dendritic complexity and spine density compared to wild-type newborn neurons, indicating that this receptor is required for the connectivity of newborn neurons. Both the failure to become integrated and the subsequent dying lead to impaired LTP.
Finally, mice lacking a functional TrkB in the restricted population of newborn neurons show behavioral deficits, namely increased anxiety-like behavior. These data suggest that the integration and establishment of proper connections by newly generated neurons into the pre-existing network are relevant features for regulating the emotional state of the animal.
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs, to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
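As a toy illustration of the design-space pruning task mentioned above, the sketch below scores candidate 2D-mesh shapes for a given core count by average hop distance (a latency proxy) plus a router-count penalty (an area proxy). The cost weights and the mesh-only search space are invented for illustration; this is not SunFloor's actual cost model.

```python
# Hypothetical sketch of NoC topology design-space pruning (not SunFloor's model).
import itertools

def mesh_avg_hops(nx, ny):
    """Average Manhattan hop distance between distinct nodes of an nx-by-ny mesh."""
    nodes = list(itertools.product(range(nx), range(ny)))
    dists = [abs(a[0] - b[0]) + abs(a[1] - b[1])
             for a, b in itertools.combinations(nodes, 2)]
    return sum(dists) / len(dists)

def score(nx, ny, hop_weight=1.0, area_weight=0.1):
    # Lower is better: latency proxy plus an area penalty per router (weights assumed).
    return hop_weight * mesh_avg_hops(nx, ny) + area_weight * nx * ny

cores = 16
candidates = [(x, cores // x) for x in range(1, cores + 1) if cores % x == 0]
best = min(candidates, key=lambda c: score(*c))
print("best mesh aspect ratio for", cores, "cores:", best)
```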
Abstract:
This thesis presents the outcomes of my Ph.D. course in telecommunications engineering. The focus of my research has been on Global Navigation Satellite Systems (GNSS) and in particular on the design of aiding schemes operating at both the position and the physical level, and on the evaluation of their feasibility and advantages. Assistance techniques at the position level are considered to enhance receiver availability in challenging scenarios where satellite visibility is limited. Novel positioning techniques relying on peer-to-peer interaction and exchange of information are thus introduced. More specifically, two different techniques are proposed: the Pseudorange Sharing Algorithm (PSA), based on the exchange of GNSS data, which makes it possible to obtain a coarse position where the user has scarce satellite visibility, and the Hybrid approach, which also improves the accuracy of the positioning solution. At the physical level, aiding schemes are investigated to improve the receiver’s ability to synchronize with satellite signals. An innovative code acquisition strategy for dual-band receivers, the Cross-Band Aiding (CBA) technique, is introduced to speed up initial synchronization by exploiting the exchange of time references between the two bands. In addition, vector configurations for code tracking are analyzed and their feedback generation process thoroughly investigated.
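For context, here is a minimal sketch of the standard iterative least-squares position solution that pseudoranges, whether measured locally or shared by peers as in the PSA, ultimately feed. The synthetic satellite geometry and the unweighted solver are illustrative assumptions; real receivers add weighting and atmospheric corrections.

```python
# Sketch: iterative least-squares GNSS fix from pseudoranges (illustrative only).
import numpy as np

def ls_fix(sat_pos, pseudoranges, iters=10):
    x = np.zeros(4)                           # [x, y, z, c*dt] initial guess
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        pred = rho + x[3]                     # predicted pseudoranges
        H = np.hstack([-(sat_pos - x[:3]) / rho[:, None],
                       np.ones((len(rho), 1))])   # geometry matrix
        dx, *_ = np.linalg.lstsq(H, pseudoranges - pred, rcond=None)
        x += dx
    return x

# Example with 4 synthetic satellites (metres, ECEF-like frame; invented geometry)
sats = np.array([[2.0e7, 0, 0], [0, 2.0e7, 0],
                 [0, 0, 2.0e7], [1.5e7, 1.5e7, 1.0e7]])
truth = np.array([6.37e6, 0, 0, 100.0])       # position plus clock bias (m)
pr = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
print(ls_fix(sats, pr))                       # recovers the true state
```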
Abstract:
This dissertation deals with the design and the characterization of novel reconfigurable silicon-on-insulator (SOI) devices to filter and route optical signals on-chip. Design is carried out through circuit simulations based on basic circuit elements (Building Blocks, BBs) in order to prove the feasibility of an approach allowing the design of Photonic Integrated Circuits (PICs) to move toward the system level. CMOS compatibility and large integration scale make SOI one of the most promising materials for realizing PICs. The concepts of the generic foundry and of BB-based circuit simulations for design are emerging as a solution to reduce costs and increase circuit complexity. To validate the BB-based approach, the development of some of the most important BBs is performed first. A novel tunable coupler is also presented and is demonstrated to be a valuable alternative to the known solutions. Two novel multi-element PICs are then analysed: a narrow-linewidth single-mode resonator and a passband filter with widely tunable bandwidth. Extensive circuit simulations are carried out to determine their performance, taking fabrication tolerances into account. The first PIC is based on two Grating Assisted Couplers in a ring resonator (RR) configuration. It is shown that a trade-off between performance, resonance bandwidth and device footprint has to be made. The device could be employed to realize reconfigurable add-drop de/multiplexers. Sensitivity with respect to fabrication tolerances and spurious effects is however observed. The second PIC is based on an unbalanced Mach-Zehnder interferometer loaded with two RRs. Overall good performance and robustness to fabrication tolerances and nonlinear effects have confirmed its applicability to the realization of flexible optical systems. Simulated and measured device behaviour is shown to be in agreement, thus demonstrating the viability of a BB-based approach to the design of complex PICs.
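As a flavor of the building-block models such circuit simulations compose, here is a sketch of the textbook add-drop ring resonator drop-port response. The coupling coefficients, loss and effective index below are illustrative assumptions, not the measured parameters of the thesis devices.

```python
# Sketch: textbook add-drop ring resonator drop-port transfer function.
import numpy as np

def drop_port_power(wavelength_nm, L_um=100.0, n_eff=2.4, t1=0.95, t2=0.95, a=0.99):
    """Drop-port power of an add-drop ring.
    t1, t2: through-coupling coefficients; a: single-pass amplitude transmission.
    All parameter values here are assumptions for illustration."""
    lam_um = wavelength_nm * 1e-3
    phi = 2 * np.pi * n_eff * L_um / lam_um        # round-trip phase
    k1 = np.sqrt(1 - t1**2)
    k2 = np.sqrt(1 - t2**2)
    num = k1 * k2 * np.sqrt(a) * np.exp(1j * phi / 2)
    den = 1 - t1 * t2 * a * np.exp(1j * phi)
    return np.abs(num / den) ** 2

lams = np.linspace(1540, 1560, 20001)              # nm, spans a few resonances
T = drop_port_power(lams)
print("peak drop transmission:", T.max())
```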
Abstract:
Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control, and as a side effect even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while respecting their autonomy? A solution is to regulate agents’ behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and increasing system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent will follow norms or violate them depending on the conditions. Under what conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events may not all occur at the same instant. Accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their system representation, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
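Below is a minimal sketch of defeasible derivation with a superiority relation, in the spirit of the temporal modal defeasible logic the thesis formalizes, here stripped of the temporal and modal dimensions. The rule base is an invented toy example, not the thesis's formal system.

```python
# Toy defeasible reasoning: rules, a superiority relation, and proof by defeat.
# Defeasible rules: (name, antecedents, conclusion); '~p' is the negation of 'p'.
rules = [
    ("r1", ["adopted_norm"], "obligation"),        # norms normally bind
    ("r2", ["emergency"],    "~obligation"),       # emergencies excuse
]
superiority = {("r2", "r1")}                       # r2 overrides r1
facts = {"adopted_norm", "emergency"}

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly_proved(lit):
    """lit is proved if some applicable rule for lit beats every applicable
    rule for its negation (a simplification of full defeasible logic)."""
    pro = [r for r in rules if r[2] == lit and all(a in facts for a in r[1])]
    con = [r for r in rules if r[2] == neg(lit) and all(a in facts for a in r[1])]
    return any(all((p[0], c[0]) in superiority for c in con) for p in pro)

print(defeasibly_proved("obligation"))     # False: r2 defeats r1
print(defeasibly_proved("~obligation"))    # True
```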
Abstract:
The rational construction of the house. The writings and projects of Giuseppe Pagano
Description, themes and research objectives
The research aims at analysing the architecture of Giuseppe Pagano, which focuses on the theme of dwelling, through the reading of three of his house projects. On the one hand, these projects represent “minor” works not thoroughly known by Pagano’s contemporary critics; on the other, they emphasise a particular methodological approach, which serves the author to explore a theme closely linked to his theoretical thought. The house project is a key to Pagano’s research, given its ties to the socio-cultural and political conditions in which the architect was working, so that it becomes a mirror of a specific theoretical path of his, always in a state of becoming. Pagano understands architecture as a “servant of the human being”, subject to a “utilitarian slavery”, since it is a clear, essential and “modest” answer to specific human needs, free from aprioristic aesthetic and formal choices. It is a rational architecture in the strict sense; it constitutes a perfect synthesis between cause and effect and between function and form. The house needs to accommodate these principles because it is closely intertwined with human needs and intimately linked to a specific place, climatic conditions and technical and economic possibilities. Besides, differently from his public and collective masterpieces such as the Palazzo Gualino, the Istituto di Fisica and the Università Commerciale Bocconi, the house projects are representative of a precise design will, expressed in a more authentic way, partially freed from political influences and dogmatic preoccupations and, therefore, far from the attempt to pursue a specific expressive language. I believe that the house project better represents the “ingenuity”, freshness and “sincerity” that Pagano identifies with minor architecture, thereby revealing a more authentic expression of his understanding of a project. Therefore the thesis, by tracing Pagano’s theoretical research through the analysis of some of his designed and built works, attempts to identify a specific methodological approach to his projects, which, developed through time, achieves a certain clarity in the 1930s. In fact, this methodological approach becomes more evident in his last projects, mainly regarding the house and the urban space. These reflect the attempt to respond to new social needs and, at the same time, they are also an expression of a freer idea of built architecture, closely linked with the place and with the human being who dwells in it. The three chosen projects (Villa Colli, La Casa a struttura d’acciaio and Villa Caraccio) confront Pagano with different places, different clients and different economic and technical conditions, which, given the author’s biography, correspond to important historical and political conditions. This is the reason why the projects become apparently distant works, both linguistically and conceptually, to the point that one can define them as “eclectic”. However, I argue that this eclecticism is actually an added value to the architectural work of Pagano, stemming from the use of a method which, having as its basis the postulate of a rational architecture as essence and logic of building, finds specific variations depending on the multiple variables to be addressed by the project.
This is the methodological heritage that Pagano learns from tradition, especially that of rural residential architecture, defined by Pagano as a “dictionary of the building logic of man”, an “a-stylistic background”. For Pagano this traditional architecture is a clear expression of the relationships between a theme and its development, an architectural “fact” resolved with purely technical and utilitarian aims and with a spontaneous development far from any aprioristic theoretical principle. Architecture, therefore, cannot be an invention for Pagano, and the personal contribution of each architect has to consider his/her close relationship with the specific historical context, the place and new building methods. These are basic principles of the methodological approach that drives a great deal of his research and that also allows his thought to be modern. I argue that both ongoing and new collaborations with younger protagonists of the culture and architecture of the period are significant for the development of his methodology. These encounters represent the will to spread his own understanding of the “new architecture” as well as a way of self-renewal, by confronting the self with new themes and realities and by learning from his collaborators.
Thesis outline
The thesis is divided into two principal parts, each articulated in four chapters, attempting to offer a new reading of the theory and work of Pagano by emphasising the central themes of the research. The first chapter is an introduction to the thesis and to the theme of the rational house, as understood and developed in its typological and technical aspects by Pagano and by other protagonists of Italian rationalism in the 1930s. Here the attention is on two different aspects defining, according to Pagano, the house project: on the one hand, the typological renewal, aimed at defining a “standard form” as a clear and essential answer to certain needs and variables of the project leading to different formal expressions; on the other, the building, understood as a technique to “produce” architecture, where new technologies and new materials are not merely tools but also essential elements of the architectural work. In this way the villa becomes different from the theme of the common house or from that of the minimalist house, by using rules in the choice of materials and techniques that differ every time depending on the theme under exploration and on the contingency of place. Also visible is the rigorous rationalism that distinguishes the author’s appropriation of certain themes of rural architecture. The pages of “Casabella” and the events of the contemporary Triennali form the preliminary material for the writing of this chapter, given that they are primary sources for identifying projects and writings produced by Pagano and contemporary architects on this theme. These writings and projects, when compared, reconstruct the evolution of the idea of the rational house and, specifically, of the personal research of Pagano. The second part regards the reading of three of Pagano’s house projects as a built verification of his theories. This section constitutes the central part of the thesis, since it is aimed at detecting a specific methodological approach showing a theoretical and ideological evolution expressed in the vast edited literature.
The three projects that have been chosen explore the theme of the house, looking at the various research themes that the author proposes and that find continuity in the affirmation of a specific rationalism, focussed on concepts such as essentiality, utility, functionality and building honesty. These concepts guide the thought and the activities of Pagano, also reflecting a social and cultural period. The projects span from the theme of the villa moderna, Villa Colli, which, inspired by the architecture of northern Europe, anticipates a specific rationalism of Pagano based on rigour, simplicity and essentiality, to the theme of the common house, Casa a struttura d’acciaio, la casa del domani, which ponders the definition of new living spaces and, moreover, new concepts of standardisation, economic efficiency and new materials responding to the changing needs of modern society. Finally, the third project, Villa Caraccio, returns to the theme of the villa, revisiting it with new perspectives. These perspectives find, in the solution of the open plan, in the openness to nature and landscape, and in the revisiting of local materials and building systems, that idea of the freed house which clearly expresses a new theoretical thought.
Methodology
It needs to be noted that, due to the lack of an official archive of Pagano’s work, the analysis of his work has been difficult, and this explains the necessity of reading the articles and drawings published in the pages of «Casabella» and «Domus». As for the projects of Villa Colli and Casa a struttura d’acciaio, parts of the original drawings have been consulted. These drawings are not published and are kept in the private archives of Pagano’s collaborators. The consultation of these documents has permitted the analysis of the cited works, which have been subject to a more complete reading following the different proposed solutions, making it possible to understand the project path. The projects are analysed through the method of comparison and critical reading, which here means graphical elaborations and analytical schemes, mostly reconstructed on the basis of the original projects and, where possible, also on a photographic investigation. The focus is on the project theme which, beginning with a specific living (dwelling) typology, finds variations because of the historico-political context in which Pagano is embedded and which partially shapes his research and theoretical thought, then translated into the built work. The analysis of the work follows, beginning, where possible, from a reconstruction of the evolution of the project as elaborated on the basis of the original documents, and ending with an analysis of the constructive principles and the composition. This second phase employs a methodology proposed by Pagano in his article Piante di ville, which, as expected, focuses on the plan as the essential tool to identify the “true practical and poetic qualities of the construction” (Pagano, «Costruzioni-Casabella», 1940, p. 2). The reading of the project is integrated with constructive analyses related to the technical aspects of the house which, in the case of Casa a struttura d’acciaio, play an important role in the project, while in Villa Colli and Villa Caraccio they are principally linked to the choice of materials for the construction of the different architectural elements. These are nonetheless key factors in the composition of the work.
Future work could extend this reading to other house projects, deepening a research path that could be completed with the consultation of archival materials, which are missing at present. Finally, in the appendix I present a critical selection of Pagano’s writings, which recall the themes discussed and embodied by the three projects. The texts have been selected among the articles published in «Casabella» and in other journals, completing the reading of the project work, which cannot be detached from his theoretical thought. Moving from theory to project, we follow a path that leads us to define and deepen the central theme of the thesis: rational building as the principal feature of Pagano’s architectural research, which is paraphrased in multiple ways in his designed and built works.
Abstract:
The last decades have seen an unrivaled growth and diffusion of mobile telecommunications. Several standards have been developed for these purposes, from GSM mobile phone communications to WLAN IEEE 802.11, providing different services for the transmission of signals ranging from voice to high-data-rate digital communications and Digital Video Broadcasting (DVB). In this wide research and market field, this thesis focuses on Ultra Wideband (UWB) communications, an emerging technology for providing very high data rate transmissions over very short distances. In particular, the presented research deals with the circuit design of enabling blocks for MB-OFDM UWB CMOS single-chip transceivers, namely the frequency synthesizer and the transmission mixer and power amplifier. First we discuss three different models for the simulation of charge-pump phase-locked loops, namely the continuous-time s-domain and discrete-time z-domain approximations and the exact semi-analytical time-domain model. The limitations of the two approximated models are analyzed in terms of the error in the computed settling time as a function of loop parameters, deriving practical conditions under which the different models are reliable for fast-settling PLLs up to fourth order. Besides, a phase noise analysis method based upon the time-domain model is introduced and compared to the results obtained by means of the s-domain model. We compare the three models over the simulation of a fast-switching PLL to be integrated in a frequency synthesizer for WiMedia MB-OFDM UWB systems. In the second part, the theoretical analysis is applied to the design of a 60 mW 3.4-to-9.2 GHz 12-band frequency synthesizer for MB-OFDM UWB based on two wide-band PLLs. The design is presented and discussed down to layout level. A test chip has been implemented in TSMC CMOS 90 nm technology, and measured data is provided. The functionality of the circuit is proved and specifications are met with state-of-the-art area occupation and power consumption. The last part of the thesis deals with the design of a transmission mixer and a power amplifier for MB-OFDM UWB band group 1. The design has been carried out down to layout level in ST Microelectronics 65 nm CMOS technology. The main characteristics of the systems are their wideband behavior (1.6 GHz of bandwidth) and their constant behavior over process parameters, temperature and supply voltage, thanks to the design of dedicated adaptive biasing circuits.
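For orientation, here is a sketch of the continuous-time s-domain approximation applied to a toy type-II charge-pump PLL: natural frequency, damping, and the classic settling-time estimate. The component values are illustrative assumptions, not the actual synthesizer loop design.

```python
# Sketch: second-order s-domain model of a type-II charge-pump PLL (assumed values).
import numpy as np

Kvco = 2 * np.pi * 100e6   # VCO gain, rad/s/V (assumption)
Icp  = 100e-6              # charge-pump current, A (assumption)
N    = 64                  # feedback divider (assumption)
R, C = 5e3, 100e-12        # series loop-filter resistor and capacitor (assumption)

wn   = np.sqrt(Icp * Kvco / (2 * np.pi * N * C))   # natural frequency, rad/s
zeta = wn * R * C / 2                              # damping factor
ts   = 4 / (zeta * wn)                             # classic 2% settling estimate

print(f"fn = {wn/(2*np.pi)/1e3:.0f} kHz, zeta = {zeta:.2f}, ts ~ {ts*1e6:.1f} us")
```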
Abstract:
The ongoing innovation of the microwave transistor technologies used in the implementation of microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology’s potential. After the choice of the technology to be used in the particular application, the circuit designer has few degrees of freedom when carrying out his design; in most cases, due to technological constraints, all the foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, broadband operation etc. For these reasons circuit design is always a “compromise”, a search for the best solution, a trade-off between the desired performances. This approach becomes crucial in the design of microwave systems to be used in satellite applications; the tight space constraints impose reaching the best performance under properly de-rated electrical and thermal conditions, with respect to the maximum ratings provided by the used technology, in order to ensure adequate levels of reliability. In particular this work is about one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and thus the element which weighs most on the space, weight and cost of telecommunication apparatus; it is clear from the above reasons that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications demonstrate different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories on power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control of the dynamic Load Line and its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the Load Line at the intrinsic terminals of the selected active device. The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this subject demonstrates how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design, based on the experimental characterization of the intrinsic Load Line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of developing my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programs requested by space agencies, with the aim of supporting the technology transfer from universities to the industrial world and of promoting science-based entrepreneurship. For these reasons the proposed design methodology will be explained on the basis of many experimental results.
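A minimal sketch of the textbook (Cripps) load-line starting point that dynamic load-line shaping then refines: the optimum intrinsic load resistance and the ideal class-A output power from the device ratings. The numbers are loosely GaN-like placeholders, not the thesis devices.

```python
# Sketch: classic class-A load-line design point (illustrative numbers).
def loadline_optimum(vdd, vknee, imax):
    """Optimum load-line resistance (ohms) and ideal class-A output power (W)
    at the intrinsic device terminals."""
    ropt = 2.0 * (vdd - vknee) / imax
    pout = (vdd - vknee) * imax / 4.0
    return ropt, pout

# Assumed ratings: 28 V supply, 3 V knee voltage, 2 A maximum current.
ropt, pout = loadline_optimum(vdd=28.0, vknee=3.0, imax=2.0)
print(f"Ropt = {ropt:.1f} ohm, ideal Pout = {pout:.1f} W")
```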
Abstract:
In recent years, due to the rapid convergence of multimedia services, the Internet and wireless communications, there has been a growing trend toward heterogeneity (in terms of channel bandwidths, mobility levels of terminals, and end-user quality-of-service (QoS) requirements) in emerging integrated wired/wireless networks. Moreover, in today’s systems, a multitude of users coexists within the same network, each with their own QoS requirements and bandwidth availability. In this framework, embedded source coding, which allows partial decoding at various resolutions, is an appealing technique for multimedia transmissions. This dissertation covers my PhD research, mainly devoted to the study of embedded multimedia bitstreams in heterogeneous networks, developed at the University of Bologna, advised by Prof. O. Andrisano and Prof. A. Conti, and at the University of California, San Diego (UCSD), where I spent eighteen months as a visiting scholar, advised by Prof. L. B. Milstein and Prof. P. C. Cosman. In order to improve multimedia transmission quality over wireless channels, joint source and channel coding optimization is investigated in a 2D time-frequency resource block for an OFDM system. We show that knowing the order of diversity in the time and/or frequency domain can assist image (video) coding in selecting optimal channel code rates (source and channel code rates). Then, adaptive modulation techniques, aimed at maximizing spectral efficiency, are investigated as another possible solution for improving multimedia transmissions. For both slow and fast adaptive modulation, the effects of imperfect channel estimation are evaluated, showing that the fast technique, optimal in ideal systems, might be outperformed by slow adaptive modulation when a real test case is considered. Finally, the effects of co-channel interference and approximated bit error probability (BEP) are evaluated in adaptive modulation techniques, providing new decision-region concepts and showing how the widely used BEP approximations lead to a substantial loss in overall performance.
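As a worked illustration of SNR decision regions for adaptive modulation, the sketch below picks the largest square M-QAM constellation whose approximate bit error probability stays under a target, using the common Gray-coded M-QAM BEP approximation. The thresholds and target are illustrative; they are not the corrected decision regions derived in the dissertation.

```python
# Sketch: SNR decision regions for adaptive M-QAM (approximate BEP, assumed target).
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def mqam_bep(snr_linear, M):
    """Common approximation for Gray-coded square M-QAM over AWGN."""
    k = math.log2(M)
    return (4 / k) * (1 - 1 / math.sqrt(M)) * qfunc(math.sqrt(3 * snr_linear / (M - 1)))

def pick_constellation(snr_db, target_bep=1e-3, orders=(4, 16, 64, 256)):
    snr = 10 ** (snr_db / 10)
    feasible = [M for M in orders if mqam_bep(snr, M) <= target_bep]
    return max(feasible) if feasible else None    # None: defer transmission

for snr_db in (8, 14, 20, 26):
    print(snr_db, "dB ->", pick_constellation(snr_db))
```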
Abstract:
Background. One of the phenomena observed in human aging is the progressive increase of a systemic inflammatory state, a condition referred to as “inflammaging”, negatively correlated with longevity. A prominent mediator of inflammation is the transcription factor NF-kB, which acts as a key transcriptional regulator of many genes coding for pro-inflammatory cytokines. Many different signaling pathways activated by very diverse stimuli converge on NF-kB, resulting in a regulatory network characterized by high complexity. NF-kB signaling has been proposed to be responsible for inflammaging. The scope of this analysis is to provide a wider, systemic picture of such an intricate signaling and interaction network: the NF-kB pathway interactome. Methods. The study has been carried out following a workflow for gathering information from the literature as well as from several pathway and protein-interaction databases, and for integrating and analyzing the existing data and the reconstructed representations using the available computational tools. Substantial manual intervention was necessary to integrate data from multiple sources into mathematically analyzable networks. The reconstruction of the NF-kB interactome pursued with this approach provides a starting point for a general view of the architecture and for a deeper analysis and understanding of this complex regulatory system. Results. A “core” and a “wider” NF-kB pathway interactome, consisting of 140 and 3146 proteins respectively, were reconstructed and analyzed through a mathematical, graph-theoretical approach. Among other interesting features, the topological characterization of the interactomes shows that a relevant number of interacting proteins are in turn products of genes that are controlled and regulated in their expression exactly by NF-kB transcription factors. These “feedback loops”, not always well known, deserve deeper investigation, since they may have a role in tuning the response and the output consequent to NF-kB pathway initiation, in regulating the intensity of the response, or in maintaining its homeostasis and balance in order to make the functioning of such a critical system more robust and reliable. This integrated view sheds light on the functional structure and on some of the crucial nodes of the NF-kB transcription factor interactome. Conclusion. Framing the structure and dynamics of the NF-kB interactome in a wider, systemic picture would be a significant step toward a better understanding of how NF-kB globally regulates diverse gene programs and phenotypes. This study represents a step towards a more complete and integrated view of the NF-kB signaling system.
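A minimal sketch of the graph-theoretical step described above: represent the interactome as a directed graph and enumerate feedback loops through the NF-kB node. The toy edge list is an invented placeholder, not the reconstructed 140- or 3146-protein interactomes.

```python
# Sketch: feedback-loop detection in a toy interactome graph (invented edges).
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("TNF", "IKK"), ("IKK", "IkB"), ("IkB", "NFKB"),
    ("NFKB", "IkB"),      # NF-kB induces its own inhibitor: a feedback loop
    ("NFKB", "TNF"),      # an NF-kB target feeding back upstream
])

loops_via_nfkb = [c for c in nx.simple_cycles(G) if "NFKB" in c]
print("feedback loops through NF-kB:", loops_via_nfkb)
print("NF-kB degree centrality:", nx.degree_centrality(G)["NFKB"])
```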
Abstract:
Nano(bio)science and nano(bio)technology attract tremendous and growing interest on both the academic and industrial fronts. They are undergoing rapid developments in many fields such as genomics, proteomics, systems biology, and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor to the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high-molar-mass proteins and protein complexes, sub-cellular units, viruses, and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separate particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (rh). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation proceeds without surface interaction of the analyte with packing or gel media, and there is no stationary phase able to induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes is made possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, FlFFF coupling with multi-angle light scattering (MALS) detection allows for absolute molecular weight and size determination, and mass spectrometry has made FlFFF enter the field of proteomics. The potential of FlFFF couplings with multi-detection systems is discussed in the first section of this dissertation. The second and third sections are dedicated to new methods that have been developed for the analysis and characterization of different samples of interest in the fields of diagnostics, pharmaceutics, and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates. In particular, it focuses on FlFFF methods developed to give new insights into: a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The discussed results indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.
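For intuition, here is a sketch of the standard flow-FFF retention relation underlying size characterization: retention time grows linearly with hydrodynamic radius through the Stokes-Einstein diffusion coefficient. The channel geometry and flow rates below are illustrative assumptions.

```python
# Sketch: flow-FFF retention time vs hydrodynamic radius (assumed channel parameters).
import math

kB, T, eta = 1.380649e-23, 298.0, 8.9e-4      # J/K, K, Pa*s (water at 25 C)

def retention_time(r_h, w=250e-6, Vc=1.0, Vout=1.0):
    """t_r = (w^2 / 6D) * ln(1 + Vc/Vout); Vc, Vout are cross and outlet flow
    rates (only their ratio matters here). Values are assumptions."""
    D = kB * T / (6 * math.pi * eta * r_h)    # Stokes-Einstein diffusion coefficient
    return (w**2 / (6 * D)) * math.log(1 + Vc / Vout)

for r_nm in (5, 50, 500):
    print(f"r_h = {r_nm:4d} nm -> t_r ~ {retention_time(r_nm * 1e-9) / 60:.1f} min")
```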
Abstract:
Synthetic biology is a young field of applied research aiming to design and build artificial biological devices useful for human applications. How synthetic biology emerged in past years, and how the development of the Registry of Standard Biological Parts aimed to introduce one practical starting solution for applying the basics of engineering to molecular biology, is presented in chapter 1 of the thesis. The same chapter recalls how biological parts can make up a genetic program, the molecular cloning technique useful for this purpose, and an overview of the mathematical modeling adopted to describe gene circuit behavior. Although the design of gene circuits has become feasible, the increasing complexity of gene networks calls for a rational approach to the design of gene circuits. A bottom-up approach was proposed, suggesting that the behavior of a complicated system can be predicted from the features of its parts. The option to use modular parts in large-scale networks will be facilitated by a detailed and shared characterization of their functional properties. Such a prediction requires well-characterized mathematical models of the parts and of how they behave when assembled together. In chapter 2, the feasibility of the bottom-up approach in the design of a synthetic program in Escherichia coli bacterial cells is described. The rational design of gene networks is however far from being established. The synthetic biology approach can use the mathematical formalism to identify biological information not assessable with experimental measurements. In this context, chapter 3 describes the design of a synthetic sensor for identifying molecules of interest inside eukaryotic cells. The Registry of Standard Parts collects standard and modular biological parts. To spread the use of BioBricks, the iGEM competition was started. The ICM Laboratory, where Francesca Ceroni completed her Ph.D., participated with teams of students, and chapter 4 summarizes the projects developed.
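As a taste of the bottom-up part models discussed above, here is a sketch of a repressor-controlled promoter described by a Hill function and integrated as an ODE. The parameter values are invented for illustration, not measured BioBrick characterizations.

```python
# Sketch: Hill-function model of a repressible promoter (assumed parameters).
import numpy as np
from scipy.integrate import odeint

def promoter_output(y, t, alpha, K, n, delta, repressor):
    protein = y[0]
    production = alpha / (1 + (repressor(t) / K) ** n)   # Hill repression
    return [production - delta * protein]                # synthesis minus decay

t = np.linspace(0, 600, 601)                             # minutes
repressor = lambda t: 50.0 if t > 200 else 0.0           # repressor induced at t = 200
y = odeint(promoter_output, [0.0], t, args=(10.0, 20.0, 2.0, 0.05, repressor))
print(f"steady state before repression: {y[200, 0]:.1f} a.u., after: {y[-1, 0]:.1f} a.u.")
```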
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had been considered unfeasible before. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
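Here is a sketch of the divide-and-conquer scheme behind parallel triangular matrix inversion: invert the two diagonal blocks (independently, hence well suited to two GPUs) and combine them with matrix multiplications. Plain NumPy stands in for the GPU kernels, and the base-case size is an arbitrary choice; this is the general technique, not the thesis's dual-GPU implementation or its compressed storage scheme.

```python
# Sketch: recursive blocked inversion of a lower-triangular matrix.
import numpy as np

def invert_lower_triangular(L):
    n = L.shape[0]
    if n <= 64:                                  # small base case: direct solve
        return np.linalg.solve(L, np.eye(n))
    k = n // 2
    A = invert_lower_triangular(L[:k, :k])       # invert diagonal blocks:
    C = invert_lower_triangular(L[k:, k:])       # independent, hence parallelizable
    B = -C @ L[k:, :k] @ A                       # combine the off-diagonal block
    out = np.zeros_like(L)
    out[:k, :k], out[k:, k:], out[k:, :k] = A, C, B
    return out

L = np.tril(np.random.rand(500, 500)) + 500 * np.eye(500)   # well-conditioned test
err = np.linalg.norm(invert_lower_triangular(L) @ L - np.eye(500))
print("inverse residual:", err)
```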
Abstract:
In general, studies of crop suitability consider only pedo-climatic environmental variables. Growing a crop, however, also entails an environmental impact deriving from agronomic practices, and the territory can be more or less sensitive to these impacts depending on its vulnerability. This study develops a methodology to spatially relate the impact of crops to the site-specific characteristics of the territory, so that this aspect is also considered in allocation within suitability studies. LCA was used to quantify several impacts of selected herbaceous food and energy crops, related to vulnerability maps built with GIS, through the computation of allocation risk coefficients for every crop-vulnerable-area combination. Energy crops were considered as an alternative land use for reducing environmental impact. The case study showed that crop allocation can differ according to the type and number of impacts considered. The result is a set of maps reporting the optimal crop distributions for minimizing impacts, relative to maize and wheat, two food crops important in the study area. Crops with the highest impact should be grown in areas of low vulnerability, and vice versa. If environmental risk is the priority, maize, rapeseed, wheat, sunflower and fibre sorghum should be grown only in areas of low or moderate vulnerability, whereas perennial herbaceous energy crops, such as switchgrass, could also be grown in highly vulnerable areas, thus representing an opportunity to increase the sustainability of rural land use. Moreover, the LCA-GIS tool, integrated with maps of current land use, can help assess the land's degree of environmental sustainability.
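A minimal sketch, under invented numbers, of the crop/vulnerability matching idea: combine an LCA impact score per crop with a GIS vulnerability class per area into an allocation risk coefficient and rank crops per area. The study's actual coefficients and impact categories are not reproduced here.

```python
# Sketch: allocation risk coefficients for crop-vulnerability pairs (invented scores).
impacts = {"maize": 0.9, "wheat": 0.7, "sunflower": 0.6, "switchgrass": 0.2}  # LCA proxy
vulnerability = {"low": 0.2, "moderate": 0.5, "high": 0.9}   # from GIS maps (assumed)

def risk_coefficient(crop, area_class):
    # Higher coefficient = worse crop/area match.
    return impacts[crop] * vulnerability[area_class]

# Rank crops per area class, lowest risk first.
for area in vulnerability:
    ranked = sorted(impacts, key=lambda c: risk_coefficient(c, area))
    print(area, "->", ranked)
```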
Abstract:
This Ph.D. dissertation reports on the work performed at the Wireless Communication Laboratory of the University of Bologna and the National Research Council, as well as, for six months, at the Fraunhofer Institute for Integrated Circuits (IIS) in Nürnberg. The work of this thesis is in the area of wireless communications, especially with regard to cooperative communications aspects in narrow-band and ultra-wideband systems, cooperative link characterization, network geometry, power allocation techniques, and synchronization between nodes. The underpinning of this work is devoted to developing a general framework for the design and analysis of wireless cooperative communication systems, which depends on the propagation environment, transmission technique, diversity method, power allocation for various scenarios, and relay positions. The optimal power allocation for minimizing the bit error probability at the destination is derived. In addition, a synchronization algorithm for master-slave communications is proposed, with the aim of jointly compensating the clock drift and offset of the wireless nodes composing the network.
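As a toy version of the power-allocation problem mentioned above: for a two-hop decode-and-forward relay with BPSK, sweep the source/relay power split under a total budget and keep the one minimizing end-to-end bit error probability. The channel gains and noise level are illustrative assumptions, not the framework's general derivation.

```python
# Sketch: numeric power split for a two-hop decode-and-forward relay (assumed gains).
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def end_to_end_bep(p_src, p_rel, g1=1.0, g2=4.0, n0=0.1):
    """BPSK over two hops: an error on exactly one hop corrupts the bit."""
    p1 = qfunc(math.sqrt(2 * g1 * p_src / n0))   # source-to-relay hop
    p2 = qfunc(math.sqrt(2 * g2 * p_rel / n0))   # relay-to-destination hop
    return p1 * (1 - p2) + p2 * (1 - p1)

P_total = 1.0
splits = [i / 100 for i in range(1, 100)]
best = min(splits, key=lambda a: end_to_end_bep(a * P_total, (1 - a) * P_total))
print(f"optimal source share of total power: {best:.2f}")
```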