71 results for VLSI architectures
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CSP concerns the assignment of times and resources to a set of activities that are to be repeated indefinitely, subject to precedence and resource-capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, in which the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow Graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for computing a maximum-throughput mapping of applications specified as SDFGs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a constraint programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
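The modular precedence idea can be illustrated with a minimal feasibility check. This is only a sketch under assumed notation (start times s_i within one period, durations d_i, an integer iteration lag delta, and a period value); the actual constraint and its filtering algorithm in the thesis are far richer.

```python
def modular_precedence_ok(s_i, d_i, s_j, delta, period):
    """Cyclic precedence i -> j under modular arithmetic: occurrence k of
    activity j must start after occurrence k - delta of activity i ends,
    i.e. s_j + delta * period >= s_i + d_i (names are illustrative)."""
    return s_j + delta * period >= s_i + d_i

# With period 10: activity i starts at 8 and lasts 4, so an activity j
# starting at 2 is only feasible if it belongs to the next iteration.
same_iteration = modular_precedence_ok(8, 4, 2, 0, 10)   # infeasible
next_iteration = modular_precedence_ok(8, 4, 2, 1, 10)   # feasible
```

Note how, for a fixed iteration lag, the inequality bounds the period from below: this is the sense in which the period value can be inferred from scheduling decisions instead of being fixed in advance.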
Abstract:
It is common to hear the somewhat strange claim that «random is better than...». Why is randomness a good solution to certain engineering problems? There are many possible answers, each tied to the topic under consideration. In this thesis I discuss two topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are obtained by shaping the second-order statistics of antipodal sequences used in intermediate signal-processing stages. The first topic belongs to the area of analog-to-digital conversion and is known as Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression: it allows a signal to be acquired directly in compressed form. In this thesis, after a thorough description of the CS methodology and its related architectures, I present a new approach that achieves high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option for producing and decoding UWB signals is direct-sequence spreading with code-division multiple access (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple-access interference (MAI) and narrowband interference (NBI) on a simple matched-filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences that minimize MAI only.
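The acquisition model underlying CS can be sketched concretely. In the toy example below (all dimensions are assumed for illustration, and this is not the architecture studied in the thesis), a bank of antipodal (±1) waveforms acquires a sparse length-n signal in only m < n measurements, and a greedy algorithm such as orthogonal matching pursuit recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 5         # signal length, measurements, sparsity

# k-sparse signal: only k nonzero entries
x = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x[idx] = rng.standard_normal(k)

# antipodal (+/-1) sensing waveforms, one per measurement
A = rng.choice([-1.0, 1.0], size=(m, n))

y = A @ x                     # compressed acquisition: m < n samples

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit on the chosen support."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)          # sparse recovery from the m measurements
```

The thesis's contribution concerns shaping the second-order statistics of such waveform sets; the i.i.d. ±1 matrix above is just the baseline case.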
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges for the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical timing constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, showing comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migration, and later to reprogrammable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator that offloads most of the scheduling operations to hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework posed many interesting software challenges, one of which was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH).
The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when applied to the problem of tick-less timekeeping for high-resolution timers.
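The role of an addressable heap in timer management can be illustrated with a simplified, array-backed sketch. The thesis's ABH is pointer-based, so this is not its implementation, only the interface idea: each insertion returns a handle, and an arbitrary pending timer can later be cancelled through its handle in O(log n), which is exactly what tick-less timekeeping needs.

```python
class AddressableHeap:
    """Min-heap whose nodes can be removed via a handle (e.g. timer
    cancellation). Array-backed sketch; each node stores its index."""

    def __init__(self):
        self._a = []                      # nodes: [key, index]

    def push(self, key):
        node = [key, len(self._a)]
        self._a.append(node)
        self._sift_up(len(self._a) - 1)
        return node                       # handle for later cancellation

    def pop_min(self):
        return self._remove(0)

    def remove(self, handle):             # cancel an arbitrary timer
        return self._remove(handle[1])

    def _remove(self, i):
        a = self._a
        self._swap(i, len(a) - 1)
        node = a.pop()
        if i < len(a):                    # restore heap order locally
            self._sift_up(i)
            self._sift_down(i)
        return node[0]

    def _swap(self, i, j):
        a = self._a
        a[i], a[j] = a[j], a[i]
        a[i][1], a[j][1] = i, j           # keep handles up to date

    def _sift_up(self, i):
        a = self._a
        while i > 0 and a[i][0] < a[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        a = self._a
        while True:
            s = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < len(a) and a[c][0] < a[s][0]:
                    s = c
            if s == i:
                return
            self._swap(i, s)
            i = s
```

With an ordinary array heap, cancelling a timer in the middle requires an O(n) search first; the handle removes that cost, at the price of keeping the index (or, in the pointer-based ABH, the node address) reachable from the timer object.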
Abstract:
This thesis investigates context-aware wireless networks, capable of adapting their behavior to the context and the application thanks to their ability to combine communication, sensing, and localization. Problems of signal demodulation, parameter estimation, and localization are addressed by exploiting analytical methods, simulations, and experimentation, for the derivation of fundamental limits, the performance characterization of the proposed schemes, and their experimental validation. Ultrawide-bandwidth (UWB) signals are considered in certain cases, and non-coherent receivers, which allow the exploitation of multipath channel diversity without adopting complex architectures, are investigated. Closed-form expressions for the achievable bit error probability of the novel proposed architectures are derived. The problem of time-delay estimation (TDE), which enables network localization through ranging measurements, is addressed from a theoretical point of view. New fundamental bounds on TDE are derived for the case in which the received signal is partially known or unknown at the receiver side, as often occurs due to propagation or to the adoption of low-complexity estimators. Practical estimators, such as energy-based estimators, are revisited and their performance compared with the new bounds. The localization issue is addressed experimentally for the characterization of cooperative networks. Practical algorithms able to improve accuracy in non-line-of-sight (NLOS) channel conditions are evaluated on measured data. With the purpose of enhancing localization coverage in NLOS conditions, non-regenerative relaying techniques for localization are introduced and ad hoc position estimators are devised. An example of a context-aware network is given through the study of a UWB-RFID system for detecting and locating semi-passive tags.
In particular, an in-depth investigation of low-complexity receivers capable of dealing with multi-tag interference, synchronization mismatches, and clock drift is presented. Finally, theoretical bounds on the localization accuracy of this and other passive localization networks (e.g., radar) are derived, also accounting for different configurations such as monostatic and multistatic networks.
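A minimal sketch of the energy-based TDE idea mentioned above: the received waveform is sliced into integration windows, and the time of arrival is estimated as the start of the first window whose energy crosses a threshold. All numbers (sampling rate, pulse shape, threshold) are assumed for illustration, not taken from the thesis.

```python
import numpy as np

def energy_toa(r, fs, win, threshold):
    """Energy-based time-of-arrival estimate: integrate |r|^2 over
    consecutive windows of length `win` seconds and return the start
    time of the first window whose energy crosses the threshold.
    (If no window crosses it, argmax on the boolean array returns 0.)"""
    n = int(win * fs)                       # samples per window
    blocks = len(r) // n
    e = np.array([np.sum(r[i*n:(i+1)*n] ** 2) for i in range(blocks)])
    first = int(np.argmax(e >= threshold))  # index of first crossing
    return first * win, e

# toy scenario: a pulse arriving 200 ns into a 1 us noisy observation
fs = 2e9                                    # 2 GHz sampling (assumed)
t = np.arange(0, 1e-6, 1 / fs)
rng = np.random.default_rng(1)
r = 0.01 * rng.standard_normal(t.size)      # background noise
delay = 200e-9
r[int(delay * fs):int(delay * fs) + 20] += 1.0   # crude received pulse
toa, energies = energy_toa(r, fs, win=50e-9, threshold=0.5)
```

Such estimators trade resolution (one window width) for complexity: no template of the received pulse is needed, which is precisely the partially-known-signal regime the new bounds address.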
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed, on every desk, in every home, and in every pocket. As a consequence, in recent years software has taken an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and by growing network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, and reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, the perspective shifts from the development of intelligent software systems toward general-purpose software development.
Using the expertise matured during the construction of this background, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development while providing an agent-oriented level of abstraction for the engineering of general-purpose software systems.
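The actor paradigm that serves as the starting point here can be sketched minimally (an illustration of the paradigm itself, not of simpAL or of any agent-oriented construct): each actor owns its state and a mailbox, and a dedicated thread processes one message at a time, so no locking of shared state is ever needed.

```python
import queue
import threading

class Actor:
    """Minimal actor: private state, a mailbox, and a thread that
    handles one message at a time."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):                  # asynchronous, non-blocking
        self._mailbox.put(msg)

    def stop(self):                       # drain mailbox, then join
        self._mailbox.put(None)
        self._thread.join()

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:
                return
            self.receive(msg)             # sequential message handling

    def receive(self, msg):
        raise NotImplementedError

class Counter(Actor):
    def __init__(self):
        self.count = 0                    # private: touched only by
        super().__init__()                # the actor's own thread

    def receive(self, msg):
        self.count += msg

c = Counter()
for _ in range(1000):
    c.send(1)
c.stop()                                  # c.count is now 1000
```

Agents, as argued in the thesis, layer further concepts on top of this message-driven core (goals, plans, an explicit notion of task), rather than replacing it.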
Abstract:
Background: Neisseria meningitidis is a major cause of meningitis and sepsis. The meningococcal regulator NadR was previously shown to repress the expression of the Neisserial Adhesin A (NadA) and to play a major role in its phase variation. NadA is a surface-exposed protein involved in epithelial cell adhesion and colonization, and a major component of 4CMenB, a novel vaccine to prevent serogroup B meningococcal infection. The NadR-mediated repression of NadA is attenuated by 4-HPA, a natural molecule released in human saliva. Results: In this thesis we investigated the global role of NadR during meningococcal infection, identifying the NadR regulon through microarray analysis. Two distinct types of NadR targets were identified, differing in their promoter architectures and 4-HPA-responsive activities: type I targets are induced, while type II targets are co-repressed in response to the same 4-HPA signal. We then investigated the mechanism of regulation of NadR by 4-HPA, generating NadR mutants and identifying classes of residues involved in either NadR DNA binding or 4-HPA-responsive activities. Finally, we studied the impact of the NadR-mediated repression of NadA on the vaccine coverage of 4CMenB. A selected MenB strain is not killed by sera from immunized infants when the strain is grown in vitro; however, in an in vivo passive protection model, the same sera protected infant rats from bacteremia. Moreover, using bioluminescent reporters, nadA expression in the infant rat model was shown to be induced in vivo at 3 h post-infection. Conclusions: Our results suggest that NadR coordinates a broad transcriptional response to signals present in the human host, enabling the meningococcus to adapt to the relevant host niche. During infectious disease, the effect of the same signal on NadR differs between targets. In particular, NadA expression is induced in vivo, leading to efficient killing of the meningococcus by anti-NadA antibodies elicited by the 4CMenB vaccine.
Abstract:
Nanoscience is an emerging and fast-growing field of science aimed at manipulating nanometric objects with dimensions below 100 nm. The top-down approach is currently used to build these types of architectures (e.g., microchips), but the miniaturization process cannot proceed indefinitely, owing to physical and technical limitations. These limits are focusing interest on the bottom-up approach, i.e., the construction of nano-objects starting from "nano-bricks" such as atoms, molecules, or nanocrystals. Unlike atoms, molecules can be "fully programmable" and represent the best choice for building nanostructures. In the past twenty years, many examples of functional nano-devices able to perform simple actions have been reported. Nanocrystals, which are often regarded simply as nanostructured materials, can be an active part in the development of such nano-devices, in combination with functional molecules. The object of this dissertation is the photophysical and photochemical investigation of nano-objects comprising molecules and semiconductor nanocrystals (quantum dots, QDs) as components. The first part focuses on the characterization of a bistable rotaxane. This study, carried out in collaboration with the group of Prof. J. F. Stoddart (Northwestern University, Evanston, Illinois, USA), who synthesized the compounds, shows the ability of this artificial machine to operate as a bistable molecular-level memory under kinetic control. The second part concerns the study of the surface properties of luminescent semiconductor nanocrystals (QDs), in particular the effect of acids and bases on the spectroscopic properties of these nanoparticles. This section also reports the work carried out in the laboratory of Prof. H. Mattoussi (Florida State University, Tallahassee, Florida, USA), where I developed a novel method for the surface decoration of QDs with lipoic acid-based ligands involving the photoreduction of the dithiolane moiety.
Abstract:
Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together toward the same functionality, are an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronic technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means of simulating and analyzing the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out. First, reliability issues, in terms of the security of standard communication protocols in wireless sensor networks, are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process-parameter variations is proposed; the developed scheme can be used to evaluate both jitter and process-parameter variations at low cost. Then, reliability issues in the field of energy-scavenging systems are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations.
Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed through the development of a combined electrical and thermal model.
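A common starting point for the electrical side of such PV modeling is the single-diode cell equation. The sketch below (with illustrative parameter values, not those developed in the thesis) solves it for the cell current at a given terminal voltage by bisection:

```python
import math

def pv_current(v, iph=8.0, i0=1e-9, rs=0.01, rsh=100.0, n=1.2, vt=0.02585):
    """Single-diode PV cell model (illustrative parameters): solve
    iph - i0*(exp((v + i*rs)/(n*vt)) - 1) - (v + i*rs)/rsh - i = 0
    for the cell current i by bisection. iph: photocurrent [A],
    i0: diode saturation current, rs/rsh: series/shunt resistance,
    n: ideality factor, vt: thermal voltage [V]."""
    def f(i):
        return (iph
                - i0 * math.expm1((v + i * rs) / (n * vt))
                - (v + i * rs) / rsh
                - i)
    lo, hi = -iph, 2 * iph        # bracket: f(lo) > 0 > f(hi) here
    for _ in range(100):          # 100 halvings: ample precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

short_circuit = pv_current(0.0)   # close to the photocurrent iph
loaded = pv_current(0.5)          # diode term starts to bite
```

Under hot-spot conditions a shaded cell in a series string is driven into reverse bias and dissipates power; coupling an electrical model like this with a thermal one is what lets that failure mode be simulated.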
Abstract:
Pervasive Sensing is a recent research trend that aims at providing widespread computing and sensing capabilities to enable the creation of smart environments that can sense, process, and act by considering input coming from both people and devices. The capabilities necessary for Pervasive Sensing are nowadays available on a plethora of devices, from embedded devices to PCs and smartphones. The wide availability of new devices and the large amount of data they can access enable a wide range of novel services in different areas, spanning from simple data-collection systems to socially-aware collaborative filtering. However, the strong heterogeneity and unreliability of devices and sensors pose significant challenges. So far, existing works on Pervasive Sensing have focused only on limited portions of the whole stack of available devices and the data they can use, proposing and developing mainly vertical solutions. The push from academia and industry for this kind of service shows that the time is ripe for a more general support framework for Pervasive Sensing solutions, able to reinforce frail architectures, promote a well-balanced usage of resources on different devices, and enable the widest possible access to sensed data, while ensuring minimal energy consumption on battery-operated devices. This thesis analyzes pervasive sensing systems to extract design guidelines that serve as the foundation of a comprehensive reference model for multi-tier Pervasive Sensing applications. The validity of the proposed model is tested in five different scenarios that present peculiar and differing requirements, and different hardware and sensors. The ease of mapping from the proposed logical model to real implementations and the positive results of the performance campaigns prove the quality of the proposed approach and offer a reliable reference model, together with a direction for the design and deployment of future Pervasive Sensing applications.
Abstract:
This research concerns the project for the Foro Bonaparte in Milan drawn up by Giovanni Antonio Antolini following the decree of 23 June 1800 that ordered the demolition of the walls of the Castello Sforzesco. The theme of architecture is addressed with the aim of providing a critical reading of this project, using compositional analysis as a tool capable of establishing the relationships between the city, architecture, and type. Through the study of the urban project, the research confirms the hypothesis that the great totalizing, peremptory, and absolute form is capable of changing the urban structure, offering a new model with which to renew the city. Antolini's ambition to see the work realized was destined to vanish within a few years, but the project for the Foro Bonaparte has continued to evoke its innovative idea up to the present day. Although the work was destined to remain an architecture that exists only in drawings, its various publications continued to circulate, first in the academies and later in the universities, constituting an important heritage of study and research for generations of architects, until its rediscovery by Aldo Rossi and Manfredo Tafuri in the 1960s. From the lessons embodied in the architecture of the past it is possible to advance new hypotheses and to nourish reflections and debates on the role of architecture in the contemporary city. The research deals with the architectural project in order to offer a further contribution to the theme, through a compositional reading supported by a series of diagrams and study drawings needed to complete the text and to verify the concepts presented. After collecting, cataloguing, and analyzing the iconographic material related to the project for the Foro Bonaparte, it was decided to base the redrawing on the collection of drawings held at the Bibliothèque nationale de France.
Abstract:
For Viollet-le-Duc, "style" «is the manifestation of an ideal founded on a principle», where "principle" means the ordering principle of the structure, which in turn must answer directly to the law of "unity" that must always be respected in the conception of the architectural work. Starting from this central node of Viollet-le-Duc's thought, this research set out to explore the links between theory and practice in his work, in which "style" recurs as a constant fil rouge, presenting itself as a possible new key to the interpretation of this leading figure in the history of restoration and of nineteenth-century architecture. The research therefore concentrated on a new reading of both published and unpublished documents, together with a careful bibliographic and documentary survey and the direct study of the buildings themselves. The archival research focused in particular on a systematic analysis of the original design drawings and technical reports of Viollet-le-Duc's works. Starting from this first survey, two case studies considered particularly significant for the chosen theme were selected: the restoration project for the church of the Madeleine at Vézelay (1840-1859) and the project for the Maison Milon in rue Douai in Paris (1857-1860). Through the parallel analysis of the case studies and of Viollet-le-Duc's writings, the research sought to verify the possible correspondences between theory and operational practice, comparing the projects both with the theoretical works and with the concrete evidence of the buildings that were realized.
Abstract:
Hybrid vehicles (HVs), which combine a conventional ICE-based powertrain with a secondary energy source that can also be converted into mechanical power, represent a well-established alternative for substantially reducing both the fuel consumption and the tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g., mechanical, electric, hydraulic, and pneumatic hybrid vehicles. Among these, the electric (HEV) and mechanical (HSF-HV) parallel hybrid configurations are examined throughout this thesis. To fully exploit the potential of HVs, the hybrid components to be installed must be properly chosen and sized, and an effective supervisory control must be adopted to coordinate how the different power sources are managed and how they interact. Real-time controllers can then be derived starting from the optimal benchmark results obtained offline. However, the application of these powerful instruments requires a simplified and yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially as the complexity of the system grows, as in the HSF-HV system assessed in this thesis. The first task of the dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It is shown how the chosen modeling paradigm affects the quality of the solution and the computational effort required, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel HEV, since the emission levels obtained under real-world driving conditions are substantially higher than the usual results obtained in a homologation cycle.
For this reason, an on-line control strategy capable of guaranteeing the desired emission levels, while minimizing fuel consumption and avoiding excessive battery depletion, is the target of the corresponding section of the thesis.
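The Dynamic Programming benchmark idea can be sketched on a toy power-split problem (every number below is illustrative, and the models are deliberately trivial): discretize the battery state of charge, then sweep the drive cycle backwards, choosing at each step the engine/battery split that minimizes fuel plus cost-to-go, with a terminal penalty enforcing the final state of charge.

```python
import numpy as np

P_dem = [20.0, 35.0, 10.0, 30.0]       # demanded power per step [kW]
soc_grid = np.linspace(0.3, 0.8, 51)   # battery state of charge, step 0.01
splits = np.linspace(0.0, 1.0, 21)     # fraction of demand from battery
dt, cap = 1.0, 360.0                   # step [s], battery capacity [kJ]

def fuel(p_engine):                    # convex toy fuel-rate model
    return 0.1 * p_engine + 0.002 * p_engine ** 2

cost = np.zeros(len(soc_grid))         # terminal cost-to-go
cost[soc_grid < 0.5] = 1e6             # penalize final SOC below 0.5

for p in reversed(P_dem):              # backward DP sweep
    new = np.full(len(soc_grid), np.inf)
    for i, soc in enumerate(soc_grid):
        for u in splits:
            nxt = soc - u * p * dt / cap          # SOC dynamics
            j = int(round((nxt - 0.3) / 0.01))    # snap to grid
            if 0 <= j < len(soc_grid):
                new[i] = min(new[i], fuel((1 - u) * p) + cost[j])
    cost = new

start = int(np.argmin(np.abs(soc_grid - 0.6)))    # initial SOC = 0.6
optimal_cost = cost[start]             # benchmark fuel cost from here
```

The curse of dimensionality is visible even here (states x controls x steps), which is why the thesis's first task, choosing the modeling paradigm, directly drives the computational effort of the DP solution.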
Abstract:
Molecular self-assembly takes advantage of supramolecular non-covalent interactions (ionic, hydrophobic, van der Waals, hydrogen, and coordination bonds) for the construction of organized and tunable systems. In this field, lipophilic guanosines can serve as powerful building blocks thanks to their aggregation properties in organic solvents, which can be controlled by the addition or removal of cations. For example, the potassium ion can template the formation of stacked G-quartet structures, while in its absence ribbon-like G aggregates are generated in solution. In this thesis we explored the possibility of using guanosines as scaffolds to direct the construction of ordered, self-assembled architectures, one of the main goals of the bottom-up approach in nanotechnology. In Chapter III we describe Langmuir-Blodgett films obtained from guanosines and other lipophilic nucleosides, revealing the "special" behavior of guanine in comparison with the other nucleobases. In Chapter IV we report the synthesis of several thiophene-functionalized guanosines and the studies toward their possible use in organic electronics: the pre-programmed organization of terthiophene residues in ribbon aggregates could allow charge conduction through π-π stacked oligothiophene functionalities. The construction and behavior of some simple electronic nanodevices based on these organized thiophene-guanosine hybrids have also been explored.
Abstract:
Thermal effects are rapidly gaining importance in nanometer heterogeneous integrated systems. Increased power density, coupled with the spatio-temporal variability of chip workloads, causes lateral and vertical temperature non-uniformities (variations) in the chip structure. The assumption of a uniform temperature for a large circuit leads to inaccurate determination of key design parameters. To improve design quality, we need precise estimation of temperature at a detailed spatial resolution, which is very computationally intensive; consequently, thermal analysis of designs needs to be done at multiple levels of granularity. To further investigate the chip/package thermal-analysis flow, we exploit the Intel Single-Chip Cloud Computer (SCC) and propose a methodology for calibrating the SCC on-die temperature sensors. We also develop an infrastructure for online monitoring of the SCC temperature-sensor readings and of the SCC power consumption. With this thermal simulation tool in hand, we propose MiMAPT, an approach for analyzing delay, power, and temperature in digital integrated circuits. MiMAPT integrates seamlessly into industrial front-end and back-end chip design flows, and accounts for temperature non-uniformities and self-heating while performing its analysis. Furthermore, we extend the temperature-variation-aware analysis of designs to 3D MPSoCs with Wide-I/O DRAM. We reduce the DRAM refresh power by considering the lateral and vertical temperature variations in the 3D structure and adapting the per-DRAM-bank refresh period accordingly. We develop an advanced virtual platform that models in detail the performance, power, and thermal behavior of a 3D-integrated MPSoC with Wide-I/O DRAMs. Moving toward real-world multi-core heterogeneous SoC designs, a reconfigurable heterogeneous platform (ZYNQ) is exploited to further study the performance and energy efficiency of various CPU-accelerator data-sharing methods in heterogeneous hardware architectures.
A complete hardware accelerator featuring clusters of OpenRISC CPUs, with dynamic address-remapping capability, is built and verified on real hardware.
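The per-bank refresh adaptation can be sketched with a simple rule of thumb (an assumed policy for illustration, not the thesis's mechanism): DRAM retention time roughly halves for every ~10 °C of temperature increase, so a bank's refresh period can stay at the nominal value while the bank is cool and be halved per 10 °C above a threshold.

```python
def refresh_period_ms(temp_c, base_ms=64.0, knee_c=85.0):
    """Temperature-adaptive refresh period (illustrative rule): keep
    the nominal 64 ms period up to the knee temperature, then halve it
    for every 10 C above, mirroring the ~halving of DRAM retention."""
    if temp_c <= knee_c:
        return base_ms
    return base_ms / (2.0 ** ((temp_c - knee_c) / 10.0))

# In a 3D stack, banks near hot cores need faster refresh than banks
# over cool regions; refreshing each bank at its own rate saves power.
bank_temps = {"bank0": 70.0, "bank1": 92.0}   # assumed per-bank readings
periods = {b: refresh_period_ms(t) for b, t in bank_temps.items()}
```

The power saving comes from the cool banks: with a single global rate, every bank must be refreshed as often as the hottest one.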
Abstract:
In this work we present several aspects of the use of readily available propargylic alcohols as acyclic precursors to develop new stereoselective [Au(I)]-catalyzed cascade reactions for the synthesis of highly complex indole architectures. The use of indole-based propargylic alcohols of type 1 in a stereoselective [Au(I)]-catalyzed hydroindolination/iminium-trapping reaction sequence opened access to a new class of tetracyclic indolines: dihydropyranyl indolines A and furoindolines B. An enantioselective protocol was further explored in order to synthesize these molecules in high yields and with high ee. The suitability of propargylic alcohols in [Au(I)]-catalyzed cascade reactions was investigated in depth by developing cascade reactions in which it was possible not only to build the indole core but also to achieve a second functionalization. Aniline-based propargylic alcohols 2 were found to be modular acyclic precursors for the synthesis of [1,2-a]azepinoindoles C. In describing this reactivity, we additionally report experimental evidence for an unprecedented NHC-Au(I)-vinyl species which, in a chemoselective fashion, led to the annulation step, forming the N1-C2-connected seven-membered ring. The chemical flexibility of propargylic alcohols was further explored by varying the chemical surroundings through different pre-installed N-alkyl moieties in propargylic alcohols of type 3. In particular, in the case of a primary alcohol, [Au(I)] catalysis proved prominent in the synthesis of a new class of [4,3-a]-oxazinoindoles D, while the use of an allylic alcohol led to the first example of the [Au(I)]-catalyzed synthesis and enantioselective functionalization of this class of molecules (D*). With this work we established propargylic alcohols as excellent acyclic precursors for developing new [Au(I)]-catalyzed cascade reactions, providing new catalytic synthetic tools for the stereoselective synthesis of complex indole/indoline architectures.