885 results for Paradigms
Abstract:
There is evidence that the explicit lexical-semantic processing deficits which characterize aphasia may be observed in the absence of implicit semantic impairment. The aim of this article was to critically review the international literature on lexical-semantic processing in aphasia, as tested through the semantic priming paradigm. Specifically, this review focused on aphasia and lexical-semantic processing, the methodological strengths and weaknesses of the semantic priming paradigms used, and recent evidence from neuroimaging studies on lexical-semantic processing. Furthermore, evidence on dissociations between implicit and explicit lexical-semantic processing reported in the literature is discussed and interpreted with reference to functional neuroimaging evidence from healthy populations. There is evidence that semantic priming effects can be found in both fluent and non-fluent aphasias, and that these effects are related to an extensive network which includes the temporal lobe, the prefrontal cortex, the left frontal gyrus, the left temporal gyrus and the cingulate cortex.
Abstract:
Programming software for controlling robotic systems, so as to build working systems that perform adequately according to their design requirements, remains a task that requires a substantial development effort. Currently there are no clear paradigms for programming robotic systems, and the programming techniques in common use today are not adequate to deal with the complexity associated with these systems. The work presented in this document describes a programming tool, specifically a framework, that should be considered a first step towards a tool for dealing with the complexity present in robotic systems. In this framework the software that controls a system is viewed as a dynamic network of units of execution interconnected by means of data paths. Each of these units of execution, called a component, is a port automaton which provides a given functionality, hidden behind an external interface that specifies clearly which data it needs and which data it produces. Components, once defined and built, may be instantiated, integrated and used as many times as needed in other systems. The framework provides the infrastructure necessary to support this component concept and the intercommunication between components by means of data paths (port connections), which can be established and torn down dynamically. Moreover, considering that the more robust the components that make up a system are, the more robust the system is, the framework provides the infrastructure needed to control and monitor the components that make up a system at any given instant of time.
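As an illustration of the component model just described, the following minimal Java sketch (hypothetical names and types; the framework's actual API is not given in this abstract) models a component as an independent unit of execution whose typed input and output ports can be connected and reconnected at runtime:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A data path between components: one writer, one reader.
class Port<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
    void write(T data) { queue.offer(data); }
    T read() throws InterruptedException { return queue.take(); }
}

// A component: a unit of execution whose external interface declares
// the data it needs (input port) and the data it produces (output port).
abstract class Component implements Runnable {
    protected volatile Port<double[]> input;   // may be re-wired at runtime
    protected volatile Port<double[]> output;
    void connectInput(Port<double[]> p)  { this.input = p; }
    void connectOutput(Port<double[]> p) { this.output = p; }
}

// Example component: scales incoming samples and forwards them.
class Scaler extends Component {
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                double[] in = input.read();           // assumes ports are connected
                double[] out = new double[in.length];
                for (int i = 0; i < in.length; i++) out[i] = 2.0 * in[i];
                output.write(out);
            }
        } catch (InterruptedException e) { /* shut down cleanly */ }
    }
}
```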
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs, to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
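As a concrete taste of the packet-switching paradigm on a NoC, the sketch below implements textbook dimension-ordered (XY) routing on a 2D mesh, one of the simplest topology/routing pairs a design flow of this kind might instantiate; it is an illustrative example, not ×pipes or SunFloor code:

```java
// Dimension-ordered (XY) routing on a 2D-mesh NoC: route the packet
// fully along the X axis first, then along Y. Deadlock-free on a mesh.
enum Direction { LOCAL, EAST, WEST, NORTH, SOUTH }

final class XYRouter {
    static Direction nextHop(int curX, int curY, int dstX, int dstY) {
        if (curX < dstX) return Direction.EAST;
        if (curX > dstX) return Direction.WEST;
        if (curY < dstY) return Direction.NORTH;
        if (curY > dstY) return Direction.SOUTH;
        return Direction.LOCAL; // packet has reached its destination switch
    }

    public static void main(String[] args) {
        // A packet at switch (0,2) headed for (3,0) first travels EAST.
        System.out.println(nextHop(0, 2, 3, 0)); // EAST
    }
}
```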
Abstract:
Impairment of postural control is a common consequence of Parkinson's disease (PD) that becomes more and more critical with the progression of the disease, in spite of the available medications. Postural instability is one of the most disabling features of PD: it induces difficulties with postural transitions and initiation of movements, gait disorders and an inability to live independently at home, and is the major cause of falls. Falls are frequent (over 38% of patients fall each year) and may induce adverse consequences like soft tissue injuries, hip fractures, and immobility due to fear of falling. As the disease progresses, both postural instability and fear of falling worsen, which leads patients with PD to become increasingly immobilized. The main aims of this dissertation are to: 1) detect and assess, in a quantitative way, impairments of postural control in PD subjects, and investigate the central mechanisms that control such motor performance and how these mechanisms are affected by levodopa; 2) develop and validate a protocol, using wearable inertial sensors, to measure postural sway and postural transitions prior to step initiation; 3) find quantitative measures sensitive to impairments of postural control in early stages of PD, and quantitative biomarkers of disease progression; and 4) test the feasibility and effects of a recently developed audio-biofeedback system in maintaining balance in subjects with PD. In the first set of studies, we showed how PD reduces the functional limits of stability as well as the magnitude and velocity of postural preparation during voluntary forward and backward leaning while standing. Levodopa improves the limits of stability but not the postural strategies used to achieve the leaning. Further, we found a strong relationship between backward voluntary limits of stability and the size of the automatic postural response to backward perturbations in control subjects and in PD subjects ON medication. Such a relation might suggest that the central nervous system presets postural response parameters based on perceived maximum limits, and that this presetting is absent in PD patients OFF medication but restored with levodopa replacement. Furthermore, we investigated how the size of anticipatory postural adjustments (APAs) prior to step initiation depends on initial stance width. We found that patients with PD did not scale up the size of their APA with stance width as much as control subjects did, so they had much more difficulty initiating a step from a wide stance than from a narrow stance. This result supports the hypothesis that subjects with PD maintain a narrow stance as a compensation for their inability to sufficiently increase the size of their lateral APA to allow speedy step initiation from a wide stance. In the second set of studies, we demonstrated that it is possible to use wearable accelerometers to quantify postural performance during quiet-stance and step-initiation balance tasks in healthy subjects. We used a model to predict center-of-pressure displacements associated with accelerations at the upper and lower back and thigh. This approach allows the measurement of balance control without the use of a force platform, outside the laboratory environment. We used wearable accelerometers on a population of early, untreated PD patients, and found that postural control in stance and postural preparation prior to a step are impaired early in the disease, when the typical balance and gait initiation symptoms are not yet clearly manifested.
These novel results suggest that technological measures of postural control can be more sensitive than clinical measures. Furthermore, we assessed spontaneous sway and step initiation longitudinally across 1 year in patients with early, untreated PD. We found that changes in trunk sway, and especially movement smoothness, measured as jerk, could be used as an objective measure of PD and its progression. In the third set of studies, we studied the feasibility of adapting an existing audio-biofeedback device to improve balance control in patients with PD. Preliminary results showed that PD subjects found the system easy to use and helpful, and that they were able to correctly follow the audio information when available. Audio-biofeedback improved the properties of trunk sway during quiet stance. Our results have many implications for: i) understanding the central mechanisms that control postural motor performance, and how these mechanisms are affected by levodopa; ii) the design of innovative protocols for measuring and remotely monitoring motor performance in the elderly or in subjects with PD; and iii) the development of technologies for improving balance, mobility, and consequently quality of life in patients with balance disorders, such as patients with PD, through augmented biofeedback paradigms.
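As a sketch of how a jerk-based smoothness measure can be derived from wearable sensor data, the Java fragment below estimates mean squared jerk as the numerical time-derivative of trunk acceleration (illustrative code under simple assumptions, not the study's actual pipeline; normalization conventions vary across the literature):

```java
// Estimate mean squared jerk (a smoothness measure) from accelerometer samples.
// accel[i] is trunk acceleration (m/s^2) sampled at rate fs (Hz), length >= 2;
// jerk is approximated by the first difference of acceleration.
final class JerkMetric {
    static double meanSquaredJerk(double[] accel, double fs) {
        double dt = 1.0 / fs;
        double sum = 0.0;
        for (int i = 1; i < accel.length; i++) {
            double j = (accel[i] - accel[i - 1]) / dt; // m/s^3
            sum += j * j;
        }
        return sum / (accel.length - 1); // higher value = less smooth sway
    }
}
```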
Abstract:
Technology advances in recent years have dramatically changed the way users exploit contents and services available on the Internet, by enforcing pervasive and mobile computing scenarios and enabling access to networked resources almost everywhere, at any time, and independently of the device in use. In addition, people increasingly expect to customize their experience, by exploiting specific device capabilities and limitations, inherent features of the communication channel in use, and interaction paradigms that significantly differ from the traditional request/response one. This so-called Ubiquitous Internet scenario calls for solutions that address many different challenges, such as device mobility, session management, content adaptation, context-awareness and the provisioning of multimodal interfaces. Moreover, new service opportunities demand simple and effective ways to integrate existing resources into new, value-added applications that can also undergo run-time modifications according to ever-changing execution conditions. Although service-oriented architectural models are gaining momentum to tame the increasing complexity of composing and orchestrating distributed and heterogeneous functionalities, existing solutions generally lack a unified approach and only provide support for specific Ubiquitous Internet aspects. Moreover, they usually target rather static scenarios and scarcely support the dynamic nature of pervasive access to Internet resources, which can make existing compositions soon become obsolete or inadequate, and hence in need of reconfiguration. This thesis proposes a novel middleware approach to comprehensively deal with Ubiquitous Internet facets and assist in establishing innovative application scenarios. We claim that a truly viable ubiquity support infrastructure must neatly decouple the distributed resources it integrates and push any kind of content-related logic outside its core layers, keeping only management and coordination responsibilities. Furthermore, we promote an innovative, open, and dynamic resource composition model that makes it easy to describe and enforce complex scenario requirements, and to react suitably to changes in the execution conditions.
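To give a flavour of the advocated decoupling, here is a minimal, hypothetical Java sketch of a coordination core that keeps content-related logic out of its own layers: resources register under a name, and the composition can be re-wired at run time as execution conditions change (all names are illustrative, not the middleware's actual API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// A resource is any content-handling function; the core only coordinates.
final class CompositionCore {
    private final Map<String, Function<String, String>> resources = new ConcurrentHashMap<>();
    private volatile String[] pipeline = new String[0]; // the current composition

    void register(String name, Function<String, String> resource) { resources.put(name, resource); }
    void recompose(String... names) { pipeline = names; } // react to context changes

    String invoke(String request) {
        String result = request;
        for (String name : pipeline) result = resources.get(name).apply(result);
        return result;
    }
}
// Usage: core.register("adapt", s -> s.toLowerCase());
//        core.recompose("adapt");  core.invoke("HELLO");  // -> "hello"
```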
Abstract:
The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is widely recognized that engineering these systems requires novel modeling techniques. In particular, many authors argue that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
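As a concrete example of the declarative style, the sketch below checks ConDec's classic response(a, b) constraint, "every occurrence of a must eventually be followed by b", over a finite execution trace; this is illustrative Java, not the CLIMB reasoning machinery:

```java
import java.util.List;

// response(a, b): every occurrence of event a is eventually followed by b.
final class ResponseConstraint {
    static boolean holds(List<String> trace, String a, String b) {
        boolean pending = false; // an 'a' is still awaiting a later 'b'
        for (String event : trace) {
            if (event.equals(a)) pending = true;
            else if (event.equals(b)) pending = false;
        }
        return !pending; // no 'a' left unanswered at the end of the trace
    }

    public static void main(String[] args) {
        System.out.println(holds(List.of("a", "c", "b"), "a", "b")); // true
        System.out.println(holds(List.of("b", "a"), "a", "b"));      // false
    }
}
```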
Abstract:
The focus of this dissertation is the relationship between the necessity for protection and the construction of cultural identities. In particular, by cultural identities I mean the representation and construction of communities: national communities, religious communities or local communities. By protection I mean the need for individuals and groups to be reassured about dangers and risks. From an anthropological point of view, the relationship between the need for protection and the formation and construction of collective identities is driven by the defensive function of culture, as recognized explicitly by Claude Lévi-Strauss and Jurij Lotman. To explore the “protective hypothesis,” it was especially useful to compare the immunitarian paradigm, proposed by Roberto Esposito, with a semiotic approach to the problem. According to Esposito, immunity traces borders, dividing the Community from what should be kept outside: enemies, dangers and chaos, and, in general, whatever is perceived to be a threat to collective and individual life. I recognized two dimensions in the concept of immunity. The first is the logical dimension: every element of a system makes sense because of the network of differential relations in which it is inscribed. The second is the social praxis of division and definition of who We are (or what is inside the border) and who They are (or what is, and must be kept, outside the border). I tested my hypothesis by analyzing two subject areas in particular: first, the security practices in London after 9/11 and 7/7; and, second, the Spiritual Guide of the 9/11 suicide bombers. In both cases, one observes the construction of two entities: We and They. The difference between the two cases is their “model of the world”: in the London case, one finds the political paradigms of security as Sovereignty, Governmentality and Biopolitics. In the Spiritual Guide, one observes a religious model of the Community of God confronting the Community of Evil. From a semiotic point of view, the problem is the origin of the respective values, the origin of the respective moral universes, and the construction of authority. In both cases, I found that emotional dynamics are crucial in the process of forming collective identities and in the process of motivating the involved subjects: specifically, the role of fear and terror is the primary factor, and represents the principal focus of my research.
Abstract:
The main aim of this thesis is strongly interdisciplinary: it involves and presumes knowledge of Neurophysiology, to understand the mechanisms underlying the studied phenomena; knowledge of and experience in Electronics, necessary during the hardware experimental set-up to acquire neuronal data; and knowledge of Informatics and programming, to write the code necessary to control the behaviour of the subjects during experiments and the visual presentation of stimuli. Finally, neuronal and statistical models should be well understood to help in interpreting the data. The project started with an accurate bibliographic search: to date, the mechanisms of perception of heading (or direction of motion) are still poorly known. The main interest is to understand how visual information relative to our motion is integrated with eye position information. To investigate the cortical response to visual stimuli in motion and its integration with eye position, we decided to study an animal model, using optic flow expansion and contraction as visual stimuli. In the first chapter of the thesis, the basic aims of the research project are presented, together with the reasons why it is interesting and important to study the perception of motion. Moreover, this chapter describes the methods my research group considered most adequate to contribute to the scientific community, and underlines my personal contribution to the project. The second chapter presents an overview of the background needed to follow the main part of the thesis: it starts with a brief introduction to the central nervous system and cortical functions, and then presents association areas, which are the main target of our study, in more depth. Furthermore, it explains why studies on animal models are necessary to understand mechanisms at the cellular level that could not be addressed in any other way. In the second part of the chapter, the basics of electrophysiology and cellular communication are presented, together with traditional neuronal data analysis methods. The third chapter is intended to be a helpful resource for future work in the laboratory: it presents the hardware used for experimental sessions, how to control animal behaviour during the experiments by means of C routines and a software application, and how to present visual stimuli on a screen. The fourth chapter is the core of the research project and the thesis. In the methods, the experimental paradigms, visual stimuli and data analysis are presented. In the results, the responses of cells in area PEc to visual stimuli in motion, combined with different eye positions, are shown. In brief, this study led to the identification of different cellular behaviours in relation to the focus of expansion (the direction of motion given by the optic flow pattern) and eye position. The originality and importance of the results are pointed out in the conclusions: this is the first study aimed at investigating the perception of motion in this particular cortical area. In the last paragraph, a neural network model is presented, whose aim is to simulate the pre-saccadic and post-saccadic responses of neurons in area PEc during eye movement tasks. The same data presented in chapter four are further analysed in chapter five. The analysis started from the observation of the neuronal responses during a 1 s time period in which the visual stimulation was the same. It was clear that cell activities showed oscillations in time that had been neglected by the previous analysis based on mean firing frequency.
The results distinguished two cellular behaviours by their response characteristics: some neurons showed oscillations that changed depending on eye and optic flow position, while others kept the same oscillation characteristics independent of the stimulus. The last chapter discusses the results of the research project, comments on the originality and interdisciplinarity of the study, and proposes some future developments.
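As a simplified illustration of the contrast between the two analyses mentioned above (mean firing frequency versus time-resolved activity), the hypothetical Java fragment below bins a spike train into a peri-stimulus time histogram, from which both the mean rate and within-interval oscillations can be read:

```java
// Bin spike timestamps (in seconds) within a window [t0, t1) into a PSTH.
// The mean rate hides the temporal structure that the binned counts reveal.
final class SpikeAnalysis {
    static int[] psth(double[] spikeTimes, double t0, double t1, int nBins) {
        int[] counts = new int[nBins];
        double binWidth = (t1 - t0) / nBins;
        for (double t : spikeTimes)
            if (t >= t0 && t < t1)
                counts[(int) ((t - t0) / binWidth)]++;
        return counts;
    }

    static double meanRateHz(double[] spikeTimes, double t0, double t1) {
        int n = 0;
        for (double t : spikeTimes) if (t >= t0 && t < t1) n++;
        return n / (t1 - t0); // spikes per second, averaged over the window
    }
}
```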
Abstract:
Generic programming is likely to become a new challenge for a critical mass of developers. Therefore, it is crucial to refine the support for generic programming in mainstream Object-Oriented languages — both at the design and at the implementation level — as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to provide a contribution towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques which would allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards; on the other hand, more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised. In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach — first pioneered by the so-called EGO translator — is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator. As for Java-Prolog integration, integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog declarative features from a mainstream Object-Oriented language such as Java. Both solutions however have drawbacks: in the case of hybrid languages featuring both Object-Oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches there is no true language integration, and some “boilerplate code” has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards allows us to define a precise mapping between Object-Oriented and declarative features. PatJ defines a hierarchy of classes where the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.
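The erasure problem that motivates reification can be seen in plain Java: a List<String> and a List<Integer> share a single runtime class, generic instanceof tests are rejected by the compiler, and the standard workaround is to pass an explicit Class token. The sketch below shows this baseline behaviour (standard Java semantics, independent of the EGO-based solution developed in the thesis):

```java
import java.util.ArrayList;
import java.util.List;

final class ErasureDemo {
    // Without reification, the element type must travel as a Class token.
    static <T> T firstOfType(List<?> items, Class<T> type) {
        for (Object o : items)
            if (type.isInstance(o)) return type.cast(o);
        return null;
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>();
        List<Integer> b = new ArrayList<>();
        // Both lists have the same runtime class: type arguments are erased.
        System.out.println(a.getClass() == b.getClass()); // true
        // if (a instanceof List<String>) ...  // rejected by the compiler
        System.out.println(firstOfType(List.of(1, "x"), String.class)); // "x"
    }
}
```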
The Warm Peace (La Pace Calda): The Birth of the Antinuclear Movement in the United States and Great Britain, 1957-1963
Abstract:
The aim of this proposal is to offer an alternative perspective on the study of the Cold War, since insufficient attention is usually paid to those organizations that mobilized against the development and proliferation of nuclear weapons. The antinuclear movement began to mobilize between the 1950s and the 1960s, when it finally gained the attention of public opinion and helped to build a sort of global conscience about nuclear bombs. This was due to the activism of a significant part of the international scientific community, which offered powerful intellectual and political legitimization to the struggle, and to the combined action of scientific and organized protests. This antinuclear conscience is something we usually tend to consider a fait accompli in the contemporary world, but the point is to show its roots, and the way it influenced statesmen and political choices during the period of nuclear confrontation of the early Cold War. To understand what this conscience could be and how it should be defined, we have to look at the very meaning of nuclear weapons, which has deeply modified the sense of war. Nuclear weapons seemed able to destroy human beings everywhere, with no realistic means of controlling the damage they could set off, and they represented the last resort in the wide range of means of mass destruction. Even if we tend to consider this idea fully rational and incontrovertible, it was not immediately born with the birth of nuclear weapons themselves; or, better, not everyone in the world immediately shared it. Owing to the particular climate of Cold War confrontation, deeply influenced by the persistence of realist paradigms in international relations, the British and U.S. governments looked at nuclear weapons simply as «a bullet». From the Trinity Test to the signature of the Limited Test Ban Treaty in 1963, many things happened that helped to shift this view of nuclear weapons. First of all, more than ten years of scientific protests provided a more informed awareness of the consequences of nuclear tests and of the use of nuclear weapons. Many scientists devoted their social activities to informing public opinion and policy-makers about the real significance of the power of the atom and the related danger for human beings. Secondly, some public figures, such as physicists, philosophers, biologists and chemists, appealed directly to the human community to «leave the folly and face reality», publicly sponsoring the antinuclear conscience. Then, several organizations led by political, religious or radical individuals gave these protests a formal structure. The Campaign for Nuclear Disarmament in Great Britain, as well as the National Committee for a Sane Nuclear Policy in the U.S., represented the voice of the masses against the attempts of governments to present nuclear arsenals as a fundamental part of the international equilibrium. The antinuclear conscience could therefore be defined as a feeling of opposition to the development and use of nuclear weapons, able to create a political issue aimed at influencing military and foreign policies. Only by taking into consideration the strength of this pressure does it seem possible to understand not only the beginning of nuclear negotiations, but also the reasons that permitted the Cold War to remain cold.
Abstract:
The ever-increasing demand from users who want high-quality broadband services while on the move is straining the efficiency of current spectrum allocation paradigms, leading to an overall feeling of spectrum scarcity. In order to circumvent this problem, two possible solutions are being investigated: (i) implementing new technologies capable of accessing temporarily/locally unused bands without interfering with licensed services, such as Cognitive Radios; (ii) releasing some spectrum bands, thanks to new services providing higher spectral efficiency, e.g., DVB-T, and allocating them to new wireless systems. These two approaches are promising, but they also pose novel coexistence and interference management challenges. In particular, the deployment of devices such as Cognitive Radios, characterized by the inherently unplanned, irregular and random locations of the network nodes, requires advanced mathematical techniques to explicitly model their spatial distribution. In such a context, system performance and optimization are strongly dependent on this spatial configuration. On the other hand, allocating some released spectrum bands to other wireless services poses severe coexistence issues with all the pre-existing services on the same or adjacent spectrum bands. In this thesis, these methodologies for better spectrum usage are investigated. In particular, using Stochastic Geometry theory, a novel mathematical framework is introduced for cognitive networks, providing a closed-form expression for the coverage probability and a single-integral form for the average downlink rate and the Average Symbol Error Probability. Then, focusing on more regulatory aspects, interference challenges between DVB-T and LTE systems are analysed, proposing a versatile methodology for their proper coexistence. Moreover, the studies performed inside the CEPT SE43 working group on the amount of spectrum potentially available to Cognitive Radios, together with an analysis of the Hidden Node problem, are provided. Finally, a study on the extension of cognitive technologies to Hybrid Satellite-Terrestrial Systems is proposed.
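For reference, a representative closed-form of this kind, taken from the stochastic geometry literature (Andrews, Baccelli and Ganti, 2011) rather than from the thesis itself, is the downlink coverage probability of a Poisson cellular network under Rayleigh fading in the interference-limited regime, with SIR threshold \(\theta\) and path-loss exponent \(\alpha\):

\[ p_c(\theta,\alpha) \;=\; \frac{1}{1+\rho(\theta,\alpha)}, \qquad \rho(\theta,\alpha) \;=\; \theta^{2/\alpha}\int_{\theta^{-2/\alpha}}^{\infty}\frac{\mathrm{d}u}{1+u^{\alpha/2}}, \]

which for \(\alpha = 4\) reduces to \( p_c = \bigl[\,1+\sqrt{\theta}\,\bigl(\pi/2-\arctan(1/\sqrt{\theta})\bigr)\bigr]^{-1} \). The thesis's framework derives expressions of this type for cognitive network geometries.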
Abstract:
This thesis studies local public policies, and in particular social policies, which since 2011 have become exclusively territorial policies. The objective is to verify whether the different political orientation of administrations generates different policies. To test the hypotheses, two municipalities were chosen that are similar in terms of socio-economic variables but governed by councils with different political orientations: the Municipality of Modena, led by the Partito Democratico, and the Municipality of Verona, with a Lega Nord mayor heading a centre-right council. The first part presents and analyses the main paradigms for the study of policies (rational choice, the Marxist paradigm, welfare economics, corporatism and pluralism, neo-institutionalism, and the relational paradigm) and introduces the paradigm used for the policy analysis (the relational paradigm). For the empirical part, in-depth interviews were carried out with the two Councillors for Social Policies and the two municipal Directors of the two municipalities, and with 18 third-sector organizations involved in the construction of the policies, selected through "snowball" sampling. The regulatory provisions on social policy are analysed, for both regional and municipal legislation. The data analysis confirmed the research hypothesis, in the sense that political orientation produces different policies as regards the relationship between the public administration and the third sector. For Modena one can speak of a choice to outsource services, accompanied by a process of internalization of services through the ASPs; in Verona, at least in some policy sectors (disability and the elderly), processes of subsidiarity and governance have been implemented. For the planning phase, political orientation has less influence, and planning shows "top-down" characteristics.
Abstract:
The thesis centres on an analysis of the impact of social networks on the construction of public space, within the sphere of observation constituted by the net and Web 2.0. It observes that the paradigm of civil society has changed, redefining images, imaginaries and forms of self-representation on the new media (Castells, 2010), on the premise that public space "is never a pre-constituted reality" (Innerarity, 2008) but moves within networks that generate and guarantee sociability. The objective is to understand what public space is: civic engagement that is strengthened in symbolic spaces (Sassen, 2008), significant meeting nodes, where citizen-consumers co-responsibly advance their demands in the face of the failure of governments. It is a participatory culture that springs from a new, mediated civic sense expressed in the "virtues" of critical consumption, bringing politics into the market: a self-actualized civic culture in search of solutions to the crises of recent years. It is the power of a communication that reduces the world to a "global village" and connects publics across different spaces and times, giving rise to collective actions such as those of the Indignados, of Occupy Wall Street, or of Rai per una notte. What emerges is a (re)thinking of citizenship according to two paradigms (Bennett, 2008): one oriented to government through parties, the "Dutiful Citizenship" model; the other, the "Self-Actualizing Citizenship" model, in which active publics follow news and events, perceive less obligation toward government, and find voting less meaningful owing to distrust of the media and of politicians. Market and civil society move toward the common good and a new "happiness". Participation takes the form of political consumerism within networks in which individual actions develop through social networking and responsible consumption choices. Starting from digital ethnography, a "4 C" model was defined: Knowledge (Conoscenza) > Co-adhesion (Coadesione) > Co-participation (Co-partecipazione) > Co-responsibility (Corresponsabilità: collective actions) > Cultura-bility.
Abstract:
The evolution of embedded electronics applications forces electronic systems designers to match ever-increasing requirements. This evolution pushes the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, owing to the increasing mobility of such applications. Current approaches to matching these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility, which affects non-recurring engineering costs, time to market, and market volumes too. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the constraints introduced above. This thesis focuses on exploring the design and application spectrum of reconfigurable computing, exploited as application-specific acceleration for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as the implementation platform for application-specific accelerators. In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.