957 results for On-Chip Multiprocessor (OCM)
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Given their global, distributed nature, it is all the more essential to evaluate the physical implementation of NoCs, to assess their suitability for next-generation designs and their area and power costs.
This dissertation addresses all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
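The point-to-point, packet-switched operation mentioned above can be made concrete with a small sketch. The following is an illustrative toy model, not the ×pipes implementation, of dimension-ordered XY routing, a common deadlock-free routing policy in 2D-mesh NoCs: a packet is routed fully along the X axis first, then along the Y axis.

```python
def xy_route(src, dst):
    """Return the list of router coordinates a packet traverses
    from src to dst in a 2D mesh, X dimension first, then Y."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                    # route along X first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                    # then along Y
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))
# hops: (0,0) -> (1,0) -> (2,0) -> (2,1)
```

Because every packet orders the dimensions the same way, no cyclic channel dependency, and hence no routing deadlock, can arise in the mesh.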
Abstract:
The miniaturization race in the hardware industry, aimed at continuously increasing transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to the problem of memory organization and data structure. Using the example of the MORPHEUS heterogeneous platform, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on the methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which accomplishes its task by separating computation from communication, providing the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices through local storage buffers.
It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms to operate on data in local domains, a particular communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.
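The "separating computation from communication" principle built on local storage buffers can be sketched as classic ping-pong (double) buffering. The sketch below is a simplified sequential Python model of the idea, with invented names, not MORPHEUS code: while the engine computes on one buffer, the next data chunk is fetched into the other.

```python
def process_stream(source, compute):
    """Process a stream of chunks with two alternating local buffers:
    the fetch of chunk k+1 is issued before the compute on chunk k,
    modelling the overlap of communication and computation."""
    it = iter(source)
    buf = [next(it, None), None]      # preload the first chunk
    k, out = 0, []
    while buf[k % 2] is not None:
        buf[(k + 1) % 2] = next(it, None)   # "DMA" into the idle buffer...
        out.append(compute(buf[k % 2]))     # ...while computing on the other
        k += 1
    return out

print(process_stream([1, 2, 3], lambda x: x * 2))
# [2, 4, 6]
```

In hardware the two operations above genuinely overlap in time; the Python model only preserves the ordering, which is what guarantees the engine never waits on an empty buffer once the stream is primed.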
Abstract:
The present PhD project focused on the development of new tools and methods for luminescence-based techniques. In particular, the ultimate goal was to present substantial improvements to the currently available technologies for both research and diagnostics in the fields of biology, proteomics and genomics. Different aspects and problems were investigated, requiring different strategies and approaches. The work was thus divided into separate chapters, each based on the study of one specific aspect of luminescence: chemiluminescence, fluorescence and electrochemiluminescence.
CHAPTER 1, Chemiluminescence: The work on the luminol-enhancer solution led to a new luminol solution formulation with a detection limit for HRP one order of magnitude lower. This technology was patented under the Cyanagen brand and is now sold worldwide for Western Blot and ELISA applications.
CHAPTER 2, Fluorescence: The work on dye-doped silica nanoparticles is marking a new milestone in the development of nanotechnologies for biological applications. While the project is still in progress, preliminary studies on model structures are yielding very promising results. The improved brightness of these nano-sized objects, their simple synthesis and handling, and their low toxicity will, we strongly believe, soon turn them into a new generation of fluorescent labels for many applications.
CHAPTER 3, Electrochemiluminescence: The work on electrochemiluminescence produced interesting results that can potentially turn into great improvements from an analytical point of view. Ru(bpy)3 derivatives were employed both for on-chip microarrays (Chapter 3.1) and for microscopic imaging applications (Chapter 3.2). The development of these new techniques is still under investigation, but the results obtained confirm the possibility of achieving the final goal.
Furthermore, the development of new ECL-active species (Chapters 3.3, 3.4, 3.5) and their use in these applications can significantly improve overall performance, thus helping to spread ECL as a powerful analytical tool for routine techniques. To conclude, the results obtained are of strong value in greatly increasing the sensitivity of luminescence techniques, thus fulfilling the expectations we had at the beginning of this research work.
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer scale. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variations in future technology generations. Furthermore, the exponential relationship between the leakage current and the threshold voltage is limiting threshold and supply voltage scaling, increasing the power density and creating local thermal issues, such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable only at the process level. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as the system and logic levels, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard CAD automated design flow: i) the implementation of new analysis algorithms able to predict the system's thermal behavior and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system;
iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with the future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of the devices by acting on tunable parameters, such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library, which has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates the need to integrate thermal analysis into the first design stages of embedded NoC design. Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability.
Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, together with a good response to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
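To give a flavour of what a statistical analysis of random and systematic variations involves, here is a toy Monte Carlo sketch with illustrative parameters and a textbook alpha-power-law delay model; it is not the thesis tool. A shared die-to-die threshold-voltage shift stands in for the systematic component and an independent per-gate shift for the random component (a full tool would additionally model spatial correlation of the systematic part across the die).

```python
import random
import statistics

def delay(vdd, vth, k=1.0, alpha=1.3):
    """Alpha-power-law gate delay model (arbitrary units)."""
    return k * vdd / (vdd - vth) ** alpha

def monte_carlo(n=10000, vdd=1.0, vth_nom=0.3,
                sigma_sys=0.03, sigma_rand=0.02, seed=0):
    """Sample gate delays under Gaussian Vth variation and
    return (mean, standard deviation) of the delay."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        sys_shift = rng.gauss(0, sigma_sys)                   # die-to-die (systematic)
        vth = vth_nom + sys_shift + rng.gauss(0, sigma_rand)  # + within-die (random)
        samples.append(delay(vdd, vth))
    return statistics.mean(samples), statistics.stdev(samples)

mean_d, sigma_d = monte_carlo()
print(f"mean delay {mean_d:.3f}, sigma {sigma_d:.3f}")
```

A statistical timing tool then asks how far into the delay distribution's tail a design must be signed off, rather than using a single worst-case corner.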
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform's parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for ``pure'' combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem on the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search.
Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times, via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practically sized problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
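As a point of reference for the exact methods above: allocation and scheduling of precedence-connected activities on finite-capacity resources is often compared against a greedy list-scheduling heuristic, which is fast but generally suboptimal. The sketch below uses invented task data and is not one of the thesis algorithms; it starts each ready task as early as possible on a set of identical processors.

```python
def list_schedule(durations, preds, num_procs):
    """Greedy list scheduling: return ({task: start_time}, makespan).
    A task starts once all its predecessors have finished and a
    processor is free (non-preemptive, identical processors)."""
    start, finish = {}, {}
    proc_free = [0] * num_procs        # next instant each processor is free
    remaining = set(durations)
    while remaining:
        # tasks whose predecessors have all been scheduled
        ready = [t for t in remaining if preds.get(t, set()) <= set(finish)]
        def est(t):                    # earliest start imposed by precedence
            return max((finish[p] for p in preds.get(t, set())), default=0)
        t = min(ready, key=est)        # pick the earliest-ready task
        proc = min(range(num_procs), key=proc_free.__getitem__)
        start[t] = max(est(t), proc_free[proc])
        finish[t] = start[t] + durations[t]
        proc_free[proc] = finish[t]
        remaining.remove(t)
    return start, max(finish.values())

durations = {"A": 1, "B": 2, "C": 2, "D": 1}          # diamond task graph
preds = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
print(list_schedule(durations, preds, num_procs=2))    # makespan 4
```

On this diamond graph, two processors let B and C run in parallel after A, for a makespan of 4; exact CP/OR hybrids aim to certify such optima (or beat the heuristic) on much larger instances.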
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart, electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile embedded sensors are deployed into the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular class of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which has the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must however be carefully minimized. Imaging plays an important role in sensing devices for ambient intelligence. Computer vision can for instance be used for recognizing persons and objects and for recognizing behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. Many eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Doing so in real time is a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance.
Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse across different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented. Featuring such intelligence, these nodes are able to cope with tasks such as recognizing unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data.
Multimodal Surveillance: In several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation on standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
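The flavour of such a policy can be sketched as a one-step predictive duty-cycle controller. All parameters below are hypothetical and this is far simpler than a full MPC formulation: each control epoch, the node picks the highest camera duty cycle whose predicted end-of-epoch battery level stays above a safety floor, given a forecast of harvested energy.

```python
def choose_duty_cycle(battery, harvest_forecast, capacity,
                      p_active=0.5, p_sleep=0.005, floor=0.2,
                      levels=(0.0, 0.1, 0.25, 0.5, 1.0)):
    """One-step lookahead: return the largest duty cycle d such that
    battery + harvest - average_drain(d) stays above floor*capacity.
    Energies are in normalized units per control epoch."""
    best = 0.0
    for d in levels:                    # levels are in ascending order
        drain = d * p_active + (1 - d) * p_sleep   # average power this epoch
        predicted = min(capacity, battery + harvest_forecast - drain)
        if predicted >= floor * capacity and d >= best:
            best = d
    return best

print(choose_duty_cycle(battery=0.5, harvest_forecast=0.1, capacity=1.0))
# 0.5: full duty would push the battery below the 20% safety floor
```

A true MPC controller would optimize over a multi-epoch horizon with a solar harvest model, but even this myopic rule already throttles sensing before the battery becomes critical, which is the behaviour that beats naive always-on policies.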
Abstract:
Animal neocentromeres are defined as ectopic centromeres that have formed in non-centromeric locations and lack some of the features, such as satellite DNA sequences, that normally characterize canonical centromeres. Despite this, they are stable, functional centromeres inherited through generations. The very existence of neocentromeres provides convincing evidence that centromere specification is determined by epigenetic rather than sequence-specific mechanisms. For all these reasons, we used them as simplified models to investigate the molecular mechanisms that underlie the formation and maintenance of functional centromeres. We collected human cell lines carrying neocentromeres in different positions. To investigate the regions involved in the process at the DNA sequence level, we applied a recent technology that integrates Chromatin Immuno-Precipitation and DNA microarrays (ChIP-on-chip), using rabbit polyclonal antibodies directed against the human centromeric proteins CENP-A and CENP-C. These DNA-binding proteins are required for kinetochore function and are exclusively targeted to functional centromeres. Thus, immunoprecipitation of the DNA bound by these proteins allows the isolation of centromeric sequences, including those of neocentromeres. Neocentromeres arise even in protein-coding gene regions. We further analyzed whether the increased scaffold attachment sites and the correspondingly tighter chromatin of the region involved in the neocentromerization process remained permissive to the transcription of the genes encoded within it. Centromere repositioning is a phenomenon in which a neocentromere, arisen without altering the gene order and followed by the inactivation of the canonical centromere, becomes fixed in the population. It is a process of chromosome rearrangement fundamental to evolution, at the basis of speciation.
The repeat-free region where the neocentromere initially forms progressively acquires extended arrays of satellite tandem repeats that may contribute to its functional stability. In this light, our attention focused on the repositioned horse ECA11 centromere. ChIP-on-chip analysis was used to define the region involved, and SNP studies mapping within the region involved in neocentromerization were carried out. We were able to describe the structural polymorphism of the chromosome 11 centromeric domain in the Equus caballus population. This polymorphism was observed even between homologous chromosomes of the same cells, a finding never described before. Genomic plasticity has played a fundamental role in evolution. Centromeres are not static, packaged regions of genomes. The key question that fascinates biologists is how this centromere plasticity can be reconciled with the stability and maintenance of centromeric function. Starting from the epigenetic point of view that underlies centromere formation, we decided to analyze the RNA content of centromeric chromatin. RNA, as well as secondary chemical modifications involving both histones and DNA, represents a good candidate to somehow guide centromere formation and maintenance. Many observations suggest that transcription of centromeric DNA or of other non-coding RNAs could affect centromere formation. To date there has been no thorough investigation addressing the identity of chromatin-associated RNAs (CARs) on a global scale. This prompted us to develop techniques to identify CARs in a genome-wide approach using high-throughput genomic platforms. The future goal of this study will be to focus on what happens specifically inside centromeric chromatin.
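The ChIP-on-chip analyses mentioned above ultimately reduce to locating runs of array probes enriched in the immunoprecipitated sample. A minimal sketch with synthetic numbers, not the actual analysis pipeline, would call a CENP-bound domain wherever the log2 IP/input ratio stays above a threshold for several consecutive probes:

```python
import math

def enriched_regions(ip, inp, threshold=1.0, min_probes=3):
    """Return (start, end) probe-index ranges where log2(IP/input)
    stays above `threshold` for at least `min_probes` probes in a row."""
    ratios = [math.log2(a / b) for a, b in zip(ip, inp)]
    regions, start = [], None
    for i, r in enumerate(ratios + [float("-inf")]):   # sentinel flushes last run
        if r > threshold and start is None:
            start = i                                   # run begins
        elif r <= threshold and start is not None:
            if i - start >= min_probes:                 # run long enough?
                regions.append((start, i))
            start = None
    return regions

print(enriched_regions([1, 4, 5, 4, 1, 1, 5, 1], [1] * 8))
# [(1, 4)]: one 3-probe enriched domain; the isolated probe at index 6 is ignored
```

Requiring several consecutive enriched probes is what filters out isolated noisy spots while retaining the contiguous domains expected for CENP-A/CENP-C binding.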
Abstract:
Multi-Processor SoC (MPSoC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. Because of this, delay variations increase and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system together. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failures and variations and keep operating, at some performance, power or latency overhead. This dissertation focuses on techniques at different levels of abstraction to deal with reliability and variability issues in on-chip interconnection networks. By showing the test results of a GALS NoC test chip, this dissertation motivates the need for techniques to detect and work around manufacturing faults and process variations in the interconnection infrastructure of MPSoCs. As a physical design technique, we propose the bundle routing framework as an effective way to route the Network-on-Chip's global links. At the architecture level, two cases are addressed: (i) intra-cluster communication, where we propose a low-latency interconnect robust to variability; (ii) inter-cluster communication, where online functional testing with a reliable NoC configuration is proposed. We also propose dual Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative strategy with respect to the design techniques, since it enforces compensation at the post-silicon stage.
Abstract:
Membrane proteins play an important role in physiological processes such as signal transduction and immune response. They are therefore a focus of pharmacological drug development, and there is great interest in developing membrane-protein-based biosensors suitable, for example, as screening platforms. However, handling membrane proteins is a major challenge because of their amphiphilic structure. Membrane proteins are usually synthesized in cell culture or in bacterial expression systems. These methods, however, often give low yields and allow little control over the expression conditions. An alternative approach is the in vitro synthesis of proteins, which takes place in a cell-free environment. The aim of the present work was to establish a miniaturized analysis system that allows activity measurements on in vitro synthesized ion channels. To this end, a lab-on-chip was developed that enables electrochemical and optical detection methods in parallel approaches. As an amphiphilic environment for the incorporation of membrane proteins, four different biomimetic membrane architectures were examined with respect to their sealing properties and their reproducibility. The methods applied were, in particular, impedance spectroscopy and surface plasmon resonance spectroscopy. Peptide-cushioned bilayer lipid membranes (pcBLM) proved most suitable for studies on membrane proteins. To detect ion channel activity, a new measurement method was established, based on measuring the impedance at a fixed frequency, which among other things allows conclusions about the change in membrane resistance upon activation. Using the nicotinic acetylcholine receptor (nAchR) as an example, it was shown that the activity of ion channels could be detected with the developed chip system.
The specificity of the method was demonstrated by various controls, such as the addition of a non-activating ligand or inhibition of the receptor. Furthermore, the in vitro synthesis of the ion channel a7 nAchR was verified by radioactive labelling. The incorporation of the receptor into the biomimetic membranes was investigated by immunodetection and electrochemical methods. It was found that the functional incorporation of the a7 nAchR depended on which biomimetic membrane architecture was used.
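The fixed-frequency impedance readout described above can be illustrated with a minimal parallel-RC membrane model; the element values below are assumed for illustration only. When ion channels open, the membrane resistance R_m drops, and the impedance magnitude at a suitably low fixed frequency drops with it.

```python
import math

def membrane_impedance(f, r_m, c_m):
    """Complex impedance of a resistor R_m in parallel with a
    capacitor C_m at frequency f (Hz): Z = R_m / (1 + j*w*R_m*C_m)."""
    omega = 2 * math.pi * f
    return r_m / (1 + 1j * omega * r_m * c_m)

# hypothetical values: 1 MOhm closed-channel vs 100 kOhm open-channel
closed = abs(membrane_impedance(1.0, r_m=1e6, c_m=1e-6))
opened = abs(membrane_impedance(1.0, r_m=1e5, c_m=1e-6))
print(closed > opened)  # True: |Z| at 1 Hz falls as channels open
```

Tracking |Z| at one well-chosen frequency, instead of sweeping a full spectrum, is what makes the readout fast enough to follow channel activation in real time.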
Abstract:
The human DMD locus encodes the dystrophin protein. Absent or reduced levels of dystrophin (the DMD or BMD phenotype, respectively) lead to progressive muscle wasting. Little is known about the complex coordination of dystrophin expression, and its transcriptional regulation is a field of intense interest. In this work we found that the DMD locus harbours multiple long non-coding RNAs which orchestrate and control the transcription of muscle dystrophin mRNA isoforms. These lncRNAs are tissue-specific and highly expressed during myogenesis, suggesting a possible role in the tissue-specific expression of DMD gene isoforms. Their forced ectopic expression in human muscle and neuronal cells leads to a specific, negative regulation of the endogenous full-length dystrophin isoforms. An intriguing aspect of the transcription of the DMD locus is its size (2.4 Mb). The mechanism that ensures the complete synthesis of the primary transcript and the coordinated splicing of 79 exons is still completely unknown. By ChIP-on-chip analyses, we discovered novel regions never previously implicated in the transcriptional regulation of the DMD locus. Specifically, we observed enrichment for Pol II, P-Ser2, P-Ser5, Ac-H3 and 2Me-H3K4 in a 3 kb intronic region approximately 21 kb downstream of the end of DMD exon 52, and in a 4 kb region spanning DMD exon 62. Interestingly, this latter region and the TSS of Dp71 are strongly marked by 3Me-H3K36, a histone modification associated with the regulation of the splicing process. Furthermore, we also observed a strong presence of open chromatin marks (Ac-H3 and 2Me-H3K4) around intron 34 and exon 45 without the presence of RNA Pol II. We speculate that these two regions may exert an enhancer-like function on the Dp427m promoter, although further investigations are necessary.
Finally, we investigated the nuclear-cytoplasmic compartmentalization of the muscle dystrophin mRNA and, specifically, verified whether exon-skipping therapy could influence its cellular distribution.
Abstract:
This dissertation deals with the design and characterization of novel reconfigurable silicon-on-insulator (SOI) devices to filter and route optical signals on-chip. The design is carried out through circuit simulations based on basic circuit elements (Building Blocks, BBs), in order to prove the feasibility of an approach that moves the design of Photonic Integrated Circuits (PICs) toward the system level. CMOS compatibility and large-scale integration make SOI one of the most promising materials for realizing PICs. The concepts of the generic foundry and of BB-based circuit simulation are emerging as a solution to reduce costs and increase circuit complexity. To validate the BB-based approach, some of the most important BBs are developed first. A novel tunable coupler is also presented and demonstrated to be a valuable alternative to known solutions. Two novel multi-element PICs are then analysed: a narrow-linewidth single-mode resonator and a passband filter with widely tunable bandwidth. Extensive circuit simulations are carried out to determine their performance, taking fabrication tolerances into account. The first PIC is based on two Grating Assisted Couplers in a ring resonator (RR) configuration. It is shown that a trade-off between performance, resonance bandwidth and device footprint has to be made. The device could be employed to realize reconfigurable add-drop de/multiplexers; sensitivity to fabrication tolerances and spurious effects is, however, observed. The second PIC is based on an unbalanced Mach-Zehnder interferometer loaded with two RRs. Overall good performance and robustness to fabrication tolerances and nonlinear effects have confirmed its applicability to the realization of flexible optical systems. Simulated and measured device behaviour is shown to be in agreement, thus demonstrating the viability of a BB-based approach to the design of complex PICs.
Abstract:
In this thesis, anodic aluminum oxide (AAO) membranes, which provide well-aligned uniform mesoscopic pores with adjustable pore parameters, were fabricated and successfully utilized as templates for the fabrication of functional organic nanowires, nanorods and the respective well-ordered arrays. The template-assisted patterning technique was successfully applied to several objectives. High-density, well-ordered arrays of hole-conducting nanorods composed of cross-linked triphenylamine (TPA) and tetraphenylbenzidine (TPD) derivatives were fabricated on conductive substrates such as ITO/glass. By applying a freeze-drying technique to remove the aqueous medium after the wet-chemical etching of the template, aggregation and collapse of the rods were prevented and macroscopic areas of perfectly freestanding nanorods became feasible. Based on the hole-conducting nanorod arrays and their subsequent embedding into an electron-conducting polymer matrix via spin-coating, a novel route to well-ordered all-organic bulk heterojunctions for organic photovoltaic applications was successfully demonstrated. The increased donor/acceptor interface of the fabricated devices resulted in a remarkable increase of photoluminescence quenching compared to a planar bilayer morphology. Further, the fundamental working principle of the templating approach for the solution-based all-organic photovoltaic device was demonstrated for the first time. Furthermore, in order to broaden the applicability of patterned surfaces attainable via the template-based patterning of functional materials, AAO membranes with hierarchically branched pores were fabricated and utilized as templates.
By pursuing the same templating process, hierarchical polymeric replicas were successfully prepared that show remarkable similarities to interesting biostructures, such as the surface of the lotus leaf and the feet of a gecko. In contrast to the direct infiltration of organic functional materials, a novel route to functional nanowires via post-modification of reactive nanowires was established. To this end, reactive nanowires based on cross-linked pentafluorophenyl esters were fabricated using AAO templates, and their post-modification with fluorescent dyes was demonstrated. Furthermore, the reactive wires were converted into well-dispersed poly(N-isopropylacrylamide) (PNIPAM) hydrogels, which exhibit a reversible thermo-responsive phase transition: in the swollen state, the PNIPAM nanowires were more than 50% longer than in the collapsed state. Last but not least, the shape-anisotropic pores of AAO were utilized to uniformly align the mesogens of a nematic liquid crystalline elastomer. Liquid crystalline nanowires with a narrow size distribution and uniform orientation of the liquid crystalline material were fabricated. It was shown that during the transition from the nematic to the isotropic phase the rods' length shortened by roughly 40 percent. As such, these liquid crystalline elastomeric nanowires may find application as wire-shaped nanoactuators in various fields of research, such as lab-on-chip systems, microfluidics and biomimetics.
Abstract:
Liquid crystalline elastomers (LCEs) are known to perform a reversible change of shape upon the phase transition from the semi-ordered liquid crystalline state to the disordered isotropic state. This unique behavior of these “artificial muscles” arises from the self-organizing properties of liquid crystals (mesogens) in combination with the entropy elasticity of the slightly crosslinked elastomer network. In this work, micrometer-sized LCE actuators are fabricated in a microfluidic setup. The microtubular shear flow provides a uniform orientation of the mesogens during crosslinking, a prerequisite for obtaining actuating LCE samples. The scope of this work was to design different actuator geometries and to broaden the applicability of the microfluidic device to different types of liquid crystalline mesogens, ranging from side-chain to main-chain systems, as well as monomer and polymer precursors. For example, the thiol-ene “click” mechanism was used for the polymerization and crosslinking of main-chain LCE actuators. The main focus was, however, placed on acrylate monomers and polymers with LC side chains. An LC polymer precursor comprising mesogenic and crosslinkable side chains was synthesized. Used in combination with an LC monomer, the polymeric crosslinker promoted a stable LC phase, which allowed the mixture to be handled isothermally in the microfluidic reactor. Processed without the additional LC components, the polymer precursor yielded actuating fibers. A suitable co-flowing continuous phase facilitates the formation of a liquid jet and lowers the tendency for drop formation. By modification of the microfluidic device, it was further possible to prepare core-shell particles consisting of an LCE shell filled with an isotropic liquid. In analogy to the heart, a hollow muscle, the elastomer shell expels its inner liquid core upon contraction. The feasibility of the core-shell particles as micropumps was demonstrated.
In general, the synthesized LCE microactuators may be utilized as active components in micromechanical and lab-on-chip systems.
Abstract:
This thesis makes two important contributions in the field of embedded many-core accelerators. We implemented an OpenMP runtime optimized for managing the tasking model on systems built from tightly coupled processor clusters interconnected through a network-on-chip. We focused on its scalability and on support for fine-grained tasks, as is typical of embedded applications. The second contribution of this thesis is a proposed extension of the OpenMP runtime that attempts to anticipate the occurrence of errors caused by variability phenomena through efficient scheduling of the workload.
Abstract:
This thesis describes work carried out at the INFN-CNAF institute, consisting of the development of a parallel application and its deployment on a low-power architecture, in order to evaluate its behaviour in comparison with high-performance computing architectures. The low-power architecture used is a system-on-chip borrowed from the mobile and embedded world, containing a quad-core ARM CPU and an NVIDIA GPU, while the high-performance architecture is an x86_64 system with a server-class NVIDIA GPU. The application was developed in C++ in two different versions: the first using the OpenMP extension and the second using the CUDA extension. These two versions made it possible to evaluate the behaviour of the low-power architecture from different points of view, using either the CPU or the GPU as the main processing unit.