19 results for Thread safe parallel run-time
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
This thesis investigates two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multisets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06]; it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, as it is Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics for CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics which also includes the history. The use of a history is a well-known technique in CHR which allows the application of propagation rules to be traced and, consequently, trivial non-termination to be avoided [Abd97, DSGdlBH04]. Naturally, the reference operational semantics of our new compositional semantics also uses a history to avoid trivial non-termination. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach for optimizing a given program, in particular for improving its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. Unfolding is one of the basic operations used by most program transformation systems.
It consists in replacing a procedure call by its definition. In CHR every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on the transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, the maintenance of confluence and termination between the original and transformed programs is shown. This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages, with particular attention to CHR and related languages. Then, Section 1.2 introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics, Section 2.2 presents the compositionality results, and Section 2.3 expounds the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation requires a particular syntax, called annotated syntax, which is introduced in Section 3.1; the related modified operational semantics ω′t is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness.
Then, Section 3.4 discusses the problems related to replacing a rule by its unfolded version, which in turn yields a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.
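The committed-choice rewriting described in the abstract above can be illustrated with a small sketch: guarded rules repeatedly replace a multiset of constraints by simpler ones until no rule applies (exhaustion). The rule set below encodes Euclid's gcd, a classic textbook example in the CHR literature; the Python encoding and names are ours, not the thesis's.

```python
# A minimal sketch of CHR-style committed-choice rewriting over a
# multiset (store) of constraints, here integers standing for gcd(n).
from collections import Counter

def step(store: Counter) -> bool:
    """Try to fire one rule on the constraint store; True if it fired."""
    # Rule 1: gcd(0) <=> true           (remove a zero constraint)
    if store[0] > 0:
        store[0] -= 1
        return True
    # Rule 2: gcd(n), gcd(m) <=> 0 < n <= m | gcd(n), gcd(m - n)
    values = sorted(store.elements())
    for i in range(len(values) - 1):
        n, m = values[i], values[i + 1]
        if 0 < n <= m:
            store[m] -= 1
            store[m - n] += 1
            return True
    return False  # no guard satisfied: exhaustion

def run(store: Counter) -> Counter:
    while step(store):
        pass
    return +store  # drop zero-count entries

# gcd(12), gcd(27), gcd(9) rewrites to the single constraint gcd(3)
result = run(Counter([12, 27, 9]))
```

Each `step` commits to the first applicable rule, mirroring the committed-choice (no backtracking) semantics the abstract describes; termination follows because every rule strictly decreases the sum of the store.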
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons. On one hand, portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems. On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, and thus provide functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at that level there is a lack of information about user application activity, and consequently about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system.
A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high-performance, real-time and, even more importantly, low-power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications on multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, therefore most authors propose simplified models and heuristic approaches to solve it in reasonable time.
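The NP-hard allocation and scheduling problem described above is commonly attacked with heuristics of exactly the kind such theses contrast with complete, optimal methods. The following greedy list scheduler is a minimal, illustrative baseline; the task set, durations and tie-breaking policy are our own assumptions, not the dissertation's algorithm.

```python
# Greedy list scheduling of precedence-constrained tasks: repeatedly
# take a ready task (all predecessors finished) and place it on the
# processor that becomes free earliest. Fast, but generally suboptimal.
import heapq

def list_schedule(tasks, deps, n_procs):
    """tasks: {name: duration}; deps: {name: set of predecessor names}.
    Returns {name: finish time}; the makespan is the max finish time."""
    finish = {}
    procs = [(0.0, p) for p in range(n_procs)]  # (time proc is free, id)
    heapq.heapify(procs)
    while len(finish) < len(tasks):
        # ready tasks, sorted for a deterministic tie-break
        ready = sorted(t for t in tasks if t not in finish
                       and all(d in finish for d in deps.get(t, ())))
        t = ready[0]
        free, p = heapq.heappop(procs)
        # cannot start before the processor is free or a predecessor ends
        start = max([free] + [finish[d] for d in deps.get(t, ())])
        finish[t] = start + tasks[t]
        heapq.heappush(procs, (finish[t], p))
    return finish

# a 4-task fork-join graph on 2 processors: a -> {b, c} -> d
times = list_schedule({"a": 1, "b": 2, "c": 2, "d": 1},
                      {"b": {"a"}, "c": {"a"}, "d": {"b", "c"}}, 2)
# b and c run in parallel after a; d finishes at time 4 (the makespan)
```

A complete method, by contrast, would explore the assignment space exhaustively (e.g. via decomposition and no-good learning, as in Chapter 4) and certify optimality of the makespan.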
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor. Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD sizes to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits, and is typically proportional to the panel area. As a result, its contribution is likely to remain considerable in future mobile appliances.
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while compensating for the luminance reduction, limiting the perceived quality degradation through pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.
Thesis Overview. The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support.
We tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques present in the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
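The backlight autoregulation idea described in this abstract can be sketched numerically: the backlight is dimmed by a factor b and pixel values are scaled up by 1/b, so the perceived luminance (pixel value times backlight level) is preserved except where the brightest pixels saturate. The numbers and the simple linear model below are illustrative assumptions, not the thesis's actual algorithm.

```python
# Backlight scaling with pixel compensation: dimming the backlight to
# a fraction b saves roughly (1 - b) of the backlight power; scaling
# pixels by 1/b keeps perceived luminance (pixel * b) unchanged, at the
# cost of clipping pixels that would exceed the 8-bit range.

def compensate(pixels, b, max_val=255):
    """Scale 8-bit pixel values by 1/b, saturating at max_val."""
    return [min(max_val, round(p / b)) for p in pixels]

backlight = 0.6                       # 40% backlight power reduction
frame = [10, 100, 180, 250]           # a few sample pixel values
adjusted = compensate(frame, backlight)
# perceived luminance (adjusted pixel * backlight) matches the original
# frame except where bright pixels saturated at 255:
perceived = [round(p * backlight) for p in adjusted]
```

The QoS trade-off the abstract mentions is visible here: dark and mid-range pixels are reproduced exactly, while the two brightest pixels clip, which is why adaptive (per-frame) choice of b matters.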
Abstract:
Technology advances in recent years have dramatically changed the way users exploit contents and services available on the Internet, by enforcing pervasive and mobile computing scenarios and enabling access to networked resources almost from everywhere, at any time, and independently of the device in use. In addition, people increasingly want to customize their experience, by exploiting specific device capabilities and limitations, inherent features of the communication channel in use, and interaction paradigms that significantly differ from the traditional request/response one. This so-called Ubiquitous Internet scenario calls for solutions that address many different challenges, such as device mobility, session management, content adaptation, context awareness, and the provisioning of multimodal interfaces. Moreover, new service opportunities demand simple and effective ways to integrate existing resources into new, value-added applications that can also undergo run-time modifications according to ever-changing execution conditions. Although service-oriented architectural models are gaining momentum as a way to tame the increasing complexity of composing and orchestrating distributed and heterogeneous functionalities, existing solutions generally lack a unified approach and only provide support for specific Ubiquitous Internet aspects. Moreover, they usually target rather static scenarios and scarcely support the dynamic nature of pervasive access to Internet resources, which can quickly make existing compositions obsolete or inadequate, and hence in need of reconfiguration. This thesis proposes a novel middleware approach to deal comprehensively with the facets of the Ubiquitous Internet and to assist in establishing innovative application scenarios.
We claim that a truly viable ubiquity support infrastructure must neatly decouple the distributed resources to be integrated, and push any kind of content-related logic outside its core layers, keeping only management and coordination responsibilities. Furthermore, we promote an innovative, open, and dynamic resource composition model that makes it easy to describe and enforce complex scenario requirements, and to react suitably to changes in the execution conditions.
Abstract:
Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology is going to integrate clouds of portable clients and embedded devices exchanging information, through the Internet layer, with processing clusters of servers, data centers and high-performance computing systems. Yet, even though society as a whole is waiting to embrace this revolution, there is a backside to the story. Portable devices require batteries to work far from power plugs, and battery capacity does not scale as fast as power requirements grow. At the other end, processing clusters such as data centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling has, during the last decade, produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Moreover, all the heat removed from the silicon translates into high cooling costs. In addition, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of worldwide carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns, describing a set of new and promising system-level resource management techniques that reduce power consumption and its related issues for two corner cases: mobile devices and high-performance computing.
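One family of system-level resource management techniques in this space is dynamic voltage and frequency scaling (DVFS). As a minimal sketch (the operating points, headroom policy and cubic power model below are our own illustrative assumptions, not taken from the thesis), a governor can pick the lowest frequency that still absorbs the observed load:

```python
# A toy DVFS governor: since dynamic power grows roughly as f * V^2 and
# V is scaled down together with f, running at the lowest frequency that
# meets the demand saves power superlinearly.

FREQS_MHZ = [200, 400, 600, 800, 1000]  # hypothetical operating points

def pick_frequency(utilization, current_mhz):
    """Lowest frequency able to absorb the load seen at current_mhz.
    `utilization` is the busy fraction (0..1) of the CPU."""
    demand = utilization * current_mhz       # work rate actually needed
    for f in FREQS_MHZ:
        if f >= demand / 0.8:                # keep 20% headroom
            return f
    return FREQS_MHZ[-1]

def relative_dynamic_power(f_mhz):
    """Toy model: P scales as f^3 when V scales linearly with f."""
    return (f_mhz / FREQS_MHZ[-1]) ** 3

# at 50% load on a 1000 MHz core, 800 MHz suffices (with headroom),
# cutting modeled dynamic power by almost half
f = pick_frequency(0.5, 1000)
saving = 1 - relative_dynamic_power(f) / relative_dynamic_power(1000)
```

Real governors must also account for the hot-spot and ageing effects the abstract mentions, e.g. by biasing load away from hot cores rather than only minimizing total power.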
Abstract:
A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in the on-board software of future space missions. Our PhD project is placed in the context of that effort. At the outset of our work we gathered the industrial needs of ESA and of all the main European space stakeholders, and consolidated them into a set of technical high-level requirements. The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for fulfilling the high-level requirements. The software reference architecture we set out to build rests on four constituents: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to ensure that the architectural description of the software is statically analyzable; (iii) a programming model, to ensure that the implementation of the design entities conforms to the semantics, the assumptions and the constraints of the computational model; (iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis. The nature, feasibility and fitness of constituents (ii), (iii) and (iv) were already proven by the author in an international project that preceded the commencement of the PhD work. The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), the component model. Our proposed component model is centered on: (i) a rigorous separation of concerns, achieved through support for design views and by careful allocation of concerns to dedicated software entities; (ii) support for the specification and model-based analysis of extra-functional properties; (iii) the inclusion of space-specific concerns.
Abstract:
Promising developments in routine nanofabrication, together with increasing knowledge of the working principles of new classes of highly sensitive, label-free and potentially cost-effective bio-nanosensors for detecting molecules in liquid environments, have rapidly increased the possibility of developing portable sensor devices that could have a great impact on many application fields, such as health care, the environment and food production, thanks to the intrinsic ability of these biosensors to detect, monitor and study events at the nanoscale. Moreover, there is a growing demand for low-cost, compact readout structures able to perform accurate preliminary tests on biosensors and/or routine tests under controlled experimental conditions, without requiring skilled personnel or bulky laboratory instruments. This thesis focuses on analysing, designing and testing novel implementations of bio-nanosensors in layered hybrid systems, where microfluidic devices and microelectronic systems are fused in compact printed circuit board (PCB) technology. In particular, the manuscript presents hybrid systems for two validating cases, using nanopore and nanowire technology, demonstrating new features not covered by state-of-the-art technologies and based on the use of two custom integrated circuits (ICs). As far as the nanopore interface system is concerned, an automatic setup has been developed for the concurrent formation of bilayer lipid membranes, combined with a custom parallel readout electronic system, creating a complete portable platform for nanopore or ion-channel studies. As for the nanowire readout hybrid interface, two systems have been developed that perform parallel, real-time complex impedance measurements based on the lock-in technique, as well as impedance spectroscopy measurements.
This feature makes it possible to experimentally investigate whether the information obtained from the bio-nanosensors can be enriched by concurrently acquiring impedance magnitude and phase, thus probing the capacitive contributions of bioanalytical interactions on the biosensor surface.
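The lock-in technique mentioned in this abstract can be sketched as follows: the measured signal is multiplied by in-phase and quadrature references at the excitation frequency and averaged over whole periods, which recovers the magnitude and phase of the response from which a complex impedance is then derived. The signal parameters below are illustrative assumptions.

```python
# Lock-in demodulation: for s(t) = A*cos(2*pi*f*t + phi), averaging
# s*cos and s*sin over whole reference periods yields A*cos(phi)/2 and
# -A*sin(phi)/2, from which amplitude A and phase phi are recovered.
import math

def lock_in(samples, fs, f_ref):
    """Return (magnitude, phase) of `samples` at frequency f_ref."""
    n = len(samples)
    i_acc = q_acc = 0.0
    for k, s in enumerate(samples):
        t = k / fs
        i_acc += s * math.cos(2 * math.pi * f_ref * t)
        q_acc += s * math.sin(2 * math.pi * f_ref * t)
    i_avg, q_avg = 2 * i_acc / n, 2 * q_acc / n   # A*cos(phi), -A*sin(phi)
    return math.hypot(i_avg, q_avg), math.atan2(-q_avg, i_avg)

# a 1 kHz tone, amplitude 0.5, 30-degree phase lead, sampled at 100 kHz
fs, f = 100_000.0, 1_000.0
sig = [0.5 * math.cos(2 * math.pi * f * k / fs + math.radians(30))
       for k in range(1000)]   # exactly 10 reference periods
mag, ph = lock_in(sig, fs, f)  # recovers amplitude 0.5 and phase 30 deg
```

Averaging over an integer number of periods makes the double-frequency terms cancel exactly, which is why lock-in detection rejects noise and out-of-band interference so effectively in impedance readout.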
Abstract:
Description, theme and objectives of the research. The research aims to study the possible influences that Aldo Rossi's theory has had on design practice in the Iberian Peninsula; it therefore addresses the fundamental characteristics of the theory underlying a design method, and in particular focuses on new buildings when they confront historic cities. Its main object is the study of documents, essays and writings concerning the theme of building within historic cities. Starting from selected texts by Aldo Rossi on the city, the research concentrates on the influence this theory has had on projects in the Iberian Peninsula, studying how it was received and subsequently transmitted through the writings of Spanish authors, and how it was later realized in projects for new buildings within historic cities. The field is narrowed to a precise period and place, Spain and Portugal from the 1970s onwards, through the reading of an important event that made official the Italian architect's contact with the Iberian Peninsula: the Seminar of Santiago de Compostela, held in 1976. Numerous architects took part in the Seminar, working together on a project for the city of Santiago, and internationally renowned figures were invited to give introductory lectures on the topic under debate, the project and the historic city. The Santiago Seminar took place at a crucial moment in the history of the Iberian Peninsula: in 1974 the Salazar regime fell in Portugal and in 1975 the Franco regime fell in Spain, so it is of considerable importance to understand the link between architecture and the new political situation.
From the study of the contributions and projects produced during the Seminar, and of the relationship between this event and the historical period in which it must be contextualized, the research aims to identify the traces of the real presence of this legacy. Methodological premises; research path and tools. The research can thus be articulated into distinct phases, corresponding for the most part to the chapters into which the thesis is divided: a first, mainly historical phase of collecting material in order to define the context in which the events examined in the thesis later developed; a second, theoretical phase, namely bibliographic research into the material and testimonies that establish the real presence of effects arising from the contacts between Rossi and the Iberian Peninsula, so as to construct a legacy; a third phase that enters into the question of composition through the study and verification of the first two parts, by means of graphic analysis applied to a specific selected architectural example; and a fourth phase in which the point of view is reversed, investigating the influence of the places visited, and of the contacts maintained with figures of the Iberian Peninsula, on Rossi's own architecture, searching for its references. The research was conducted through the study of selected events over the years that proved significant for the investigation, owing to the resonance they had in the architectural history of the Peninsula. For this purpose, three tools were mainly used: the study of documents, of the publications and journals produced in Spain, and of Aldo Rossi's writings on the subject; and direct testimony through interviews with key figures. The research produced a text divided into chapters that follows the organization of the work phases.
As a consequence of particular historical and political conditions, studied in the research supporting the thesis, the Iberian Peninsula saw the spread of a need and a desire to look to European architecture, and Italian architecture in particular, as a reference. The period under examination begins in the 1960s, the last years before the fall of the dictatorships and the setting of Aldo Rossi's first journeys to the Iberian Peninsula. These first contacts laid the foundations for intense and significant future relationships. Through the in-depth study of the materials related to the subject of the thesis, an attempt was made to shed light on the cultural context and on the attention and interest in opening a debate on architecture, not only at a national but at a European level. This highlighted the desire to set in motion a mechanism of discussion and exchange of ideas, leveraging the importance of developing and seeking a common theoretical basis that lends coherence to the works produced in the Iberian architectural panorama, even while yielding results that differ from one another. A strong interest emerged in a theoretical discourse on architecture, transmissible and communicable, which becomes the starting point for a design method. This made evident a sharing of intents and the adoption of Aldo Rossi's theory, acquired, disseminated and discussed through the publication of his essays, direct acquaintance with the architect and his architecture, conferences and seminars, as the theoretical basis on which to found one's own architectural knowledge and the methodological design process to be applied, case by case, in concrete interventions.
This led to the identification of specific events that made it possible to go to the heart of the question and to probe the relationship between Rossi and the Iberian Peninsula. The material provided by the study of these episodes, such as the first SIAC, the circulation of the journal "2C. Construccion de la Ciudad", and Gustavo Gili's Coleccion Arquitectura y Critica, then gave impetus to the retrieval of a network of further references. It was thus possible to identify a group of Spanish architects who identify themselves as pupils of the master Rossi, who in those years was moreover engaged in the formation of a School and of a teaching that was received not so much in its forms as in its contents. The points on which the connections between urban analysis and the architectural project are founded centre on two basic themes that take up the theory set out by Rossi in the essay L'architettura della città: the relationship between the study area and the city as a whole, and the relationship between building typology and morphological aspects. As mentioned, the successive phases of the research involved the parallel development of several themes. In addressing each phase it was necessary, from time to time, to review the steps already taken, in order to keep the thread of the argument consistent with the work done and to find, during the course of the research itself, the connecting elements between the various episodes analysed. This operation sometimes brought to light unresolved knots in the research that required further investigation, or sometimes only a revisiting, so as to allow a more fruitful connection with the accumulated network of information.
With regard to the period under analysis, the research followed several parallel paths: texts on the history of Spanish architecture and the situation around the 1970s; the material concerning the first SIAC; interviews with the participants of the first SIAC; Gustavo Gili's translations in the Coleccion Arquitectura y Critica; and the journal "2C. Construccion de la Ciudad". These brought to light a considerable number of themes through which the paths intertwine and coincide, each verifying the truthfulness of the other and reinforcing the value of its claims. Summary of the main contents of the research. Let us now look briefly at the contents of the individual chapters. The first chapter, The 1970s: a period of transition for the Iberian Peninsula, seeks to give a historical context to the events studied subsequently, highlighting the key elements that reveal a predisposition to cultural change. The phase of transition from a condition of closure towards outside influences, which characterized Spain and Portugal in the 1960s, gives way to a gradual abandonment of the isolation that had formed around each country because of its dictatorial regime, leading eventually to openness and interest towards external cultural contributions. It is in this context that the foundations were laid for the first International Seminar of Contemporary Architecture in Santiago de Compostela, in 1976, directed by Aldo Rossi and organized by César Portela and Salvador Tarragó, which is the subject of the second chapter. This is one of the events traced in the history of the relations between Rossi and the Iberian Peninsula, through which it was possible to verify the presence of a cultural exchange and the importation of Aldo Rossi's theories into Spain.
Organized in the immediate aftermath of the fall of Francoism, it retains a formal reminiscence of it. The chapter is organized in three parts. The first deals with the reconstruction of the salient moments of the Seminar Proyecto y ciudad historica, from the lectures of internationally renowned architects, such as Aldo Rossi himself, Carlo Aymonino, James Stirling, Oswald Mathias Ungers and many others, who discussed the theme of historic cities, to the seminar days devoted to the elaboration of a project for five areas identified within Santiago de Compostela, and thus to the application to design practice of the inseparable theoretical basis presented. The second part of the same chapter concerns the selection of interviews with participants of the Seminar. It contains the collection of conversations held with some of the figures who took part in the Seminar; through their words an attempt was made to deepen the subject, in particular by highlighting the cultural environment in which the idea of the Seminar was born, the role it played in spreading Aldo Rossi's theory in Spain, and the repercussions it had on building practice. The various interviews, although addressed to people who today live in distant contexts and who, after this collective experience, took different paths, brought out common aspects; this unanimity lent even greater importance to the value of the testimony offered. The most evident element is the theoretical legacy, largely prevailing over the design legacy, which blended from case to case with the tradition and experience of Aldo Rossi's so-called pupils. In the same years the importance of exchange and debate on architectural themes began to gain ground, and the chapter The critical fortune of Aldo Rossi's theory in the Iberian Peninsula addresses precisely this renewed interest in the theory that was spreading at the time.
Si è portato avanti lo studio delle pubblicazioni di Gustavo Gili nella Coleccion Arquitectura y Critica che, a partire dalla fine degli anni Sessanta, pubblica e traduce in lingua spagnola i più importanti saggi di architettura, tra i quali La arquitectura de la ciudad di Aldo Rossi, nel 1971, e Comlejidad y contradiccion en arquitectura di Robert Venturi nel 1972. Entrambi fondamentali per il modo di affrontare determinate tematiche di cui sempre più in quegli anni si stava interessando la cultura architettonica iberica, diventando così ¬ testi di riferimento anche nelle scuole. Le tracce dell’influenza di Rossi sulla Penisola Iberica si sono poi ricercate nella rivista “2C. Construccion de la Ciudad” individuata come strumento di espressione di una teoria condivisa. Con la nascita nel 1972 a Barcellona di questa rivista viene portato avanti l’impegno di promuovere la Tendenza, facendo riferimento all’opera e alle idee di Rossi ed altri architetti europei, mirando inoltre al recupero di un ruolo privilegiato dell’architettura catalana. A questo proposito sono emersi due fondamentali aspetti che hanno legittimato l’indagine e lo studio di questa fonte: - la diffusione della cultura architettonica, il controllo ideologico e di informazione operato dal lavoro compiuto dalla rivista; - la documentazione circa i criteri di scelta della redazione a proposito del materiale pubblicato. E’ infatti attraverso le pubblicazioni di “2C. Construccion de la Ciudad” che è stato possibile il ritrovamento delle notizie sulla mostra Arquitectura y razionalismo. Aldo Rossi + 21 arquitectos españoles, che accomuna in un’unica esposizione le opere del maestro e di ventuno giovani allievi che hanno recepito e condiviso la teoria espressa ne “L’architettura della città”. Tale mostra viene poi riproposta nella Sezione Internazionale di Architettura della XV Triennale di Milano, la quale dedica un Padiglione col titolo Barcelona, tres epocas tres propuestas. 
Dalla disamina dei progetti presentati è emerso un interessante caso di confronto tra le Viviendas para gitanos di César Portela e la Casa Bay di Borgo Ticino di Aldo Rossi, di cui si è occupato l’ultimo paragrafo di questo capitolo. Nel corso degli studi è poi emerso un interessante risvolto della ricerca che, capovolgendone l’oggetto stesso, ne ha approfondito gli aspetti cercando di scavare più in profondità nell’analisi della reciproca influenza tra la cultura iberica e Aldo Rossi, questa parte, sviscerata nell’ultimo capitolo, La Penisola Iberica nel “magazzino della memoria” di Aldo Rossi, ha preso il posto di quello che inizialmente doveva presentarsi come il risvolto progettuale della tesi. Era previsto infatti, al termine dello studio dell’influenza di Aldo Rossi sulla Penisola Iberica, un capitolo che concentrava l’attenzione sulla produzione progettuale. A seguito dell’emergere di un’influenza di carattere prettamente teorica, che ha sicuramente modificato la pratica dal punto di vista delle scelte architettoniche, senza però rendersi esplicita dal punto di vista formale, si è preferito, anche per la difficoltà di individuare un solo esempio rappresentativo di quanto espresso, sostituire quest’ultima parte con lo studio dell’altra faccia della medaglia, ossia l’importanza che a sua volta ha avuto la cultura iberica nella formazione della collezione dei riferimenti di Aldo Rossi. L’articolarsi della tesi in fasi distinte, strettamente connesse tra loro da un filo conduttore, ha reso necessari successivi aggiustamenti nel percorso intrapreso, dettati dall’emergere durante la ricerca di nuovi elementi di indagine. Si è pertanto resa esplicita la ricercata eredità di Aldo Rossi, configurandosi però prevalentemente come un’influenza teorica che ha preso le sfumature del contesto e dell’esperienza personale di chi se ne è fatto ricevente, diventandone così un continuatore attraverso il proprio percorso autonomo o collettivo intrapreso in seguito. 
Come suggerisce José Charters Monteiro, l’eredità di Rossi può essere letta attraverso tre aspetti su cui si basa la sua lezione: la biografia, la teoria dell’architettura, l’opera. In particolar modo per quanto riguarda la Penisola Iberica si può parlare dell’individuazione di un insegnamento riferito alla seconda categoria, i suoi libri di testo, le sue partecipazioni, le traduzioni. Questo è un lascito che rende possibile la continuazione di un dibattito in merito ai temi della teoria dell’architettura, della sue finalità e delle concrete applicazioni nelle opere, che ha permesso il verificarsi di una apertura mentale che mette in relazione l’architettura con altre discipline umanistiche e scientifiche, dalla politica, alla sociologia, comprendendo l’arte, le città la morfologia, la topografia, mediate e messe in relazione proprio attraverso l’architettura.
Resumo:
The term "Brain Imaging" identifies a set of techniques for analyzing the structure and/or functional behavior of the brain in normal and/or pathological conditions. These techniques are widely used in the study of brain activity. Beyond clinical usage, the analysis of brain activity is gaining popularity in other recent fields, such as Brain Computer Interfaces (BCI) and the study of cognitive processes. In these contexts, classical solutions (e.g. fMRI, PET-CT) may be unfeasible due to their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are the object of research, typically based on simple recording hardware and intensive data processing. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), in which the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are directly generated by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insight on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility which currently severely limits the capabilities of these techniques. Moreover, the processing of recorded data requires computationally intensive regularization techniques, which burdens applications with strict temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing. 
The resulting software is accelerated on GPUs in order to provide solutions in reasonable time and to address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
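As a rough illustration of the kind of regularized inversion the abstract refers to, the sketch below solves a Tikhonov-regularized linear inverse problem on synthetic data. The lead-field matrix, problem sizes and regularization weight are all invented for the example and are not taken from the thesis, whose actual pipeline is far more detailed (anisotropic head models, GPU acceleration).

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_sources = 32, 200                 # illustrative sizes only

# Invented lead-field matrix: maps source amplitudes to scalp potentials.
L = rng.standard_normal((n_electrodes, n_sources))
x_true = np.zeros(n_sources)
x_true[10] = 1.0                                  # a single active source
y = L @ x_true + 0.01 * rng.standard_normal(n_electrodes)  # noisy recording

# Tikhonov regularization stabilizes the ill-posed inversion:
# minimize ||L x - y||^2 + lam * ||x||^2,
# whose normal equations are (L^T L + lam I) x = L^T y.
lam = 1e-2
x_hat = np.linalg.solve(L.T @ L + lam * np.eye(n_sources), L.T @ y)
```

The regularization weight `lam` trades data fit against solution norm; in real BCI pipelines this solve is repeated per sample, which is why the thesis targets GPU acceleration.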
Resumo:
Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through a finite number of states only. Designing a real-time control for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial-intelligence methods are proposed to investigate the on-line computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
Resumo:
The main objective of this thesis is to explore the short- and long-run causality patterns in the finance-growth nexus and the finance-growth-trade nexus before and after the global financial crisis, in the case of Albania. To this end we use quarterly data on real GDP, 13 proxy measures of financial development and the trade openness indicator for the periods 1998Q1-2013Q2 and 1998Q1-2008Q3. Causality patterns are explored in a VAR-VECM framework, proceeding as follows: (i) testing for the integration order of the variables; (ii) cointegration analysis; and (iii) performing Granger causality tests in a VAR-VECM framework. In the finance-growth nexus, the empirical evidence suggests a positive long-run relationship between finance and economic growth, with causality running from financial development to economic growth. The global financial crisis seems not to have affected the causality direction in the finance-growth nexus, thus supporting the finance-led growth hypothesis in the long run in the case of Albania. In the finance-growth-trade openness nexus, we find evidence of a positive long-run relationship among the variables, with the causality direction depending on the proxy used for financial development. When the pre-crisis sample is considered, we find evidence of causality running from financial development and trade openness to economic growth. The global financial crisis seems to have somewhat affected the causality direction in the finance-growth-trade nexus, which has become sensitive to the proxy used for financial development. In the short run, the empirical evidence suggests a clear unidirectional relationship between finance and growth, with causality mostly running from economic growth to financial development. When we consider the pre-crisis sub-sample, results are mixed, depending on the proxy used for financial development. The same results are confirmed when trade openness is taken into account.
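Step (iii), the Granger causality test, can be illustrated with a minimal plain-numpy F-test on synthetic data. The thesis itself works in a full VAR-VECM framework on Albanian quarterly data; the bivariate regressions, lag order and simulated series below are simplifications chosen only to show the mechanics of the test.

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic testing whether p lags of x help predict y beyond
    y's own lags (a bivariate sketch of the Granger causality test)."""
    T = len(y)
    rows = [np.r_[1.0, y[t - p:t][::-1], x[t - p:t][::-1]] for t in range(p, T)]
    X_full = np.array(rows)              # constant + y lags + x lags
    X_restr = X_full[:, :1 + p]          # constant + y lags only
    target = y[p:]

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = target - X @ beta
        return r @ r

    rss_r, rss_u = rss(X_restr), rss(X_full)
    n, k = len(target), X_full.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / (n - k))

# Simulated system in which x clearly Granger-causes y, but not vice versa.
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

f_causal = granger_f(y, x)    # large: past x improves the forecast of y
f_reverse = granger_f(x, y)   # small: past y adds nothing for x
```

In an applied VECM setting the same logic operates on the error-correction form, with the cointegrating term distinguishing long-run from short-run causality.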
Resumo:
Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural exploration and to validate design choices. In this thesis we focus on these aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction-caching architecture and on a hybrid HW/SW synchronization mechanism. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold-voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. 
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Resumo:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet achieving incrementality in the timing behavior is a much harder problem: complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on the deployment of software onto it. We then focus on the role played by the real-time operating system. Initially we consider single-core processors and, becoming progressively less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To demonstrate practical feasibility, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work we added support for limited preemption to ORK+, a first in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, moving away from the over-provisioning of such systems, we consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. 
To corroborate our results we present findings from real-world case studies from the avionics industry.
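As a toy illustration of the notion of schedulable utilization mentioned above (not of RUN or SPRINT themselves, which are far more involved), the classic Liu-Layland result states that a set of independent periodic tasks with implicit deadlines is schedulable under EDF on one processor if and only if its total utilization does not exceed 1. The task parameters below are invented.

```python
def total_utilization(tasks):
    """tasks: list of (wcet, period) pairs; utilization is sum of C_i / T_i."""
    return sum(c / t for c, t in tasks)

def edf_feasible_uniprocessor(tasks):
    """Exact EDF test for independent, implicit-deadline periodic tasks."""
    return total_utilization(tasks) <= 1.0

tasks = [(1, 4), (2, 6), (1, 8)]   # (WCET, period), invented values
u = total_utilization(tasks)       # 1/4 + 2/6 + 1/8
```

Multiprocessor algorithms such as RUN aim to approach this uniprocessor bound on each core with few preemptions and migrations, which is what makes them attractive baselines for extension.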
Resumo:
This dissertation studies the geometric static problem of under-constrained cable-driven parallel robots (CDPRs) supported by n cables, with n ≤ 6. The task consists of determining the overall robot configuration when a set of n variables is assigned. When variables relating to the platform posture are assigned, an inverse geometric static problem (IGP) must be solved; whereas, when cable lengths are given, a direct geometric static problem (DGP) must be considered. Both problems are challenging, as the robot continues to preserve some degrees of freedom even after n variables are assigned, with the final configuration determined by the applied forces. Hence, kinematics and statics are coupled and must be resolved simultaneously. In this dissertation, a general methodology is presented for modelling the aforementioned scenario with a set of algebraic equations. An elimination procedure is provided, aimed at solving the governing equations analytically and obtaining a least-degree univariate polynomial in the corresponding ideal for any value of n. Although an analytical procedure based on elimination is important from a mathematical point of view, providing an upper bound on the number of solutions in the complex field, it is not practical to compute these solutions as it would be very time-consuming. Thus, for the efficient computation of the solution set, a numerical procedure based on homotopy continuation is implemented. A continuation algorithm is also applied to find a set of robot parameters with the maximum number of real assembly modes for a given DGP. Finally, the end-effector pose depends on the applied load and may change due to external disturbances. An investigation into equilibrium stability is therefore performed.
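The homotopy-continuation idea can be sketched on a single univariate polynomial rather than the robot's full algebraic system: the known roots of an easy start polynomial are continued to the roots of the target as the homotopy parameter moves from 0 to 1, with Newton corrections at each step. The random complex constant `gamma` is the standard device for keeping the solution paths non-singular; all values below are illustrative and unrelated to the dissertation's solver.

```python
import numpy as np

def track_roots(target, steps=200, newton_iters=5):
    """Continue the roots of the start system g(z) = z^n - 1 to those of the
    target polynomial by tracking H(z, t) = (1 - t) * gamma * g(z) + t * p(z)
    as t goes from 0 to 1, correcting with Newton's method at each step."""
    target = np.asarray(target, dtype=complex)
    n = len(target) - 1
    start = np.zeros(n + 1, dtype=complex)
    start[0], start[-1] = 1.0, -1.0                 # g(z) = z^n - 1
    gamma = 0.6 + 0.8j                              # generic complex constant
    roots = np.exp(2j * np.pi * np.arange(n) / n)   # known roots of g
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        h = (1 - t) * gamma * start + t * target    # homotopy at this t
        dh = np.polyder(h)
        for _ in range(newton_iters):               # Newton corrector
            roots = roots - np.polyval(h, roots) / np.polyval(dh, roots)
    return roots

# Target p(z) = z^2 - 3z + 2 has roots 1 and 2.
found = np.sort_complex(track_roots([1.0, -3.0, 2.0]))
```

For polynomial systems the same scheme runs one path per solution of the start system, which is why continuation finds the whole solution set far faster than symbolic elimination, while the elimination bound certifies how many paths are needed.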
Resumo:
This thesis tries to further our understanding of why some countries today are more prosperous than others. It establishes that part of today's observed variation in several proxies, such as income or gender inequality, was determined in the distant past. Chapter one shows that 450 years of (Catholic) Portuguese colonisation had a long-lasting impact in India when it comes to education and female emancipation. Furthermore, I use a historical quasi-experiment that took place 250 years ago to show that different outcomes have different degrees of persistence over time. Educational gaps between males and females seemingly wash out a few decades after the public provision of schools. Male-biased sex ratios, on the other hand, stay virtually unchanged despite governmental efforts. This provides evidence that deep-rooted son preferences are much harder to overcome, suggesting that a differential approach is needed to tackle sex-selective abortion and female neglect. The second chapter proposes improvements to the execution of Spatial Regression Discontinuity Designs. These suggestions are accompanied by a full-fledged spatial statistical package written in R. Chapter three introduces a quantitative economic-geography model to study the peculiar evolution of the European urban system on its way to the Industrial Revolution. It can explain the shift of economic gravity from the Mediterranean towards the North Sea (the "little divergence"). The framework provides novel insights on the importance of agricultural trade costs and of the peculiar geography of Europe, with its extended coastline and dense network of navigable rivers.
Resumo:
This thesis focuses on the dynamics of underactuated cable-driven parallel robots (UACDPRs), including various aspects of robotic theory and practice, such as workspace computation, parameter identification, and trajectory planning. After a brief introduction to CDPRs, UACDPR kinematic and dynamic models are analyzed, under the relevant assumption of inextensible cables. The free oscillatory motion of the end-effector (EE), which is a unique feature of underactuated mechanisms, is studied in detail, from both a kinematic and a dynamic perspective. The free (small) oscillations of the EE around equilibria are proved to be harmonic, and the corresponding natural oscillation frequencies are analytically computed. UACDPR workspace computation and analysis are then performed. A new performance index is proposed for the analysis of the influence of actuator errors on cable tensions around equilibrium configurations, and a new type of workspace, called tension-error-insensitive, is defined as the set of poses that a UACDPR EE can statically attain even in the presence of actuation errors, while preserving tensions between assigned (positive) bounds. EE free oscillations are then employed to conceive a novel procedure aimed at identifying the EE inertial parameters. This approach does not require the use of force or torque measurements. Moreover, a self-calibration procedure for the experimental determination of UACDPR initial cable lengths is developed, which enables the robot to automatically infer the EE initial pose at machine start-up. Lastly, trajectory planning of UACDPRs is investigated. Two alternative methods are proposed, which aim at (i) reducing EE oscillations even when model parameters are uncertain, or (ii) eliminating EE oscillations when model parameters are perfectly known. 
EE oscillations are reduced in real-time by dynamically scaling a nominal trajectory and filtering it with an input shaper, whereas they can be eliminated if an off-line trajectory is computed that accounts for the system internal dynamics.
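The input-shaping step can be sketched as follows, assuming a classic two-impulse zero-vibration (ZV) shaper for a single harmonic mode; the mode frequency, damping ratio and sampling step below are invented, and the thesis itself combines shaping with dynamic trajectory scaling.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Impulse amplitudes and times of a two-impulse zero-vibration shaper
    for a mode with natural frequency wn [rad/s] and damping ratio zeta."""
    wd = wn * np.sqrt(1.0 - zeta**2)              # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    A = np.array([1.0, K]) / (1.0 + K)            # amplitudes (sum to 1)
    t = np.array([0.0, np.pi / wd])               # second impulse half a damped period later
    return A, t

def shape(signal, dt, wn, zeta):
    """Convolve a sampled command with the shaper's impulse sequence,
    so the two delayed copies cancel the mode's residual vibration."""
    A, t = zv_shaper(wn, zeta)
    out = np.zeros(len(signal) + int(round(t[1] / dt)))
    for a, ti in zip(A, t):
        k = int(round(ti / dt))
        out[k:k + len(signal)] += a * signal
    return out

A, t = zv_shaper(wn=2 * np.pi, zeta=0.05)         # 1 Hz mode, light damping
shaped = shape(np.ones(100), dt=0.01, wn=2 * np.pi, zeta=0.05)
```

The shaper delays the command by half a damped period, which is the price paid for vibration cancellation; robustness to uncertain parameters is what motivates the thesis's alternative method based on real-time trajectory scaling.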