922 results for High-level Design Specification
Abstract:
The research aims to define guidelines for drafting a Plan concerned with quality of life and well-being. The reference to quality and well-being is a positive innovation, since it requires decision-making bodies to tune in to the active subjectivity of citizens and, at the same time, makes evident the need for a broader, cross-cutting approach to the city and for a closer relationship between technicians/experts and the heads of political-administrative bodies. The research investigates the limits of modern urban planning in the face of the complexity of needs and new demands expressed by contemporary urban populations. The demand for services has changed considerably since the 1960s, not only quantitatively but above all qualitatively, because of social changes that have transformed the modern city both structurally and culturally: the intermittence of citizenship, whereby cities are increasingly lived in and enjoyed by citizens of the world (tourists and/or visitors, temporarily present) and by diffuse citizens (suburban, provincial, metropolitan); the radical transformation of family structure, whereby the typical family of a couple with children, once a solid reference for economics and politics, is now a minority; the irregularity and flexibility of the calendars, agendas and rhythms of life of the working population; social mobility, whereby individuals' life trajectories and daily practices are less determined by their social origins than in the past; the rise in the level of education and hence the growing demand for culture; the growth of the elderly population and strong social individualization have generated a demand for the city that is extremely varied and heterogeneous, fragmented and volatile, and in some respects entirely new. Alongside old and consolidated demands (the efficient, functional, productive city accessible to all), new demands, ideals and needs arise concerning beauty, variety, usability, safety, the capacity to surprise and entertain, sustainability, and the search for new identities: demands that express the desire to live in and enjoy the city, to feel well in the city, and that can no longer be satisfied by an idea of welfare based simply on education, health care, pensions and social assistance. The modern city, or rather the modern idea of the city, organized solely around the concepts of order, regularity, cleanliness, equality and good government, has been consigned to history, turning into something quite different that we struggle to represent, describe and recount. The contemporary city can be represented in many ways, from both an urban-planning and a social point of view: recent literature shows the difficulty of defining and enclosing the object "city" within certain limits, and the lack of a strong conviction in the interpretation of the political, economic and social transformations that swept society and the world in the last century.
Beyond administrative boundaries, territorial expansion, urban layouts, infrastructure, technology, functionalism and global markets, the contemporary city is also the place of human relations, a representation of the relationships between individuals and of the urban space in which those relationships unfold. The city is both a physical concentration of people and buildings and a variety of uses and groups, a density of social relations; it is the place where processes of social cohesion or exclusion occur, the place of the cultural norms that regulate behaviour and of the identity that is expressed materially and symbolically in the public space of city life. Studying the contemporary city requires a new approach, made of cross-fertilization and transversal knowledge drawn from other disciplines, such as sociology and the human sciences, which also contribute to building the commonly perceived image of the city and the territory, of the landscape and the environment. The representation of the urban social sphere varies according to the idea of what, in a given historical moment and context, constitutes a condition of well-being for people. Modern urban planning aimed at the maximum well-being of the individual and the community and at modelling itself on the "actual needs of people": old planning manuals include, as an appendix to the master plan, a "services plan" covering the services distributed over the surrounding territory, a sort of "social master plan" intended to avoid neighbourhoods segregated by population group or class. In the contemporary city, globalization, new forms of marginalization and exclusion, the advent of the so-called "new economy", and the redefinition of the urban productive base and labour market express a social complexity that can be defined in terms of transactions and symbolic exchanges rather than the processes of industrialization and modernization towards which the historical, so-called modern city was oriented. All of this constitutes the set of issues currently referred to as the "new welfare", as opposed to a welfare based essentially on education, health care, pensions and social assistance. The research therefore analysed the traditional instruments of territorial planning and programming in their operational and institutional dimensions: the main purpose of such instruments is the classification and arrangement of services and of urban "containers". It is clear, however, that in order to respond to the manifold complexity of demands, needs and desires expressed by contemporary society, the actual endowments for "making the city" must go beyond the concepts of "standards" and "zoning", which prove too rigid and therefore unable to adapt to a growing demand for quality and services, and at the same time inadequate for managing the relationship between domestic and collective space. In this sense, the relationship between housing types and urban morphology, and hence the environment around the home, is relevant, since it establishes the relationship "from the house to the city": it is in this duality that the relationship between private and public space is defined and the themes of the street, the shops, the meeting places and the accesses are contextualized.
After converging from the urban scale to the building scale, one then moves back from the building scale to the urban one, since the criterion of well-being runs through the different scales of habitable space. Moreover, in territorial systems that have achieved widespread well-being and a high level of economic development, the awareness has emerged that the very concept of well-being is no longer tied exclusively to collective and/or individual income: today the quality of life is measured in terms of environmental and social quality. Hence the need for an instrument of knowledge of the contemporary city, to be attached to the Plan, defining the criteria to be observed in designing urban space in order to achieve quality and well-being in the built environment, understood as generalized well-being, in its meaning of "quality of feeling well". Clearly, achieving such a level of quality and well-being requires satisfying, on the one hand, the macroscopic aspects of social functioning and living standards through indicators of income, employment, poverty, crime, housing, education, etc.; and, on the other, primary, elementary and basic needs as well as secondary, cultural and therefore changing needs, moving from the welfare state to personal well-being, to wellness in a holistic sense, all expressions of a desire for mental and physical beauty and of a new relationship between the body and the environment, and thus the concrete manifestation of a need for individual and collective well-being. It is this new and difficult need that creates the widespread sense of the beginning of a new urban season, far more than the physical changes of the city themselves suggest.
Abstract:
The miniaturization race in the hardware industry, aimed at a continuous increase of transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to cope with the problem of memory organization and data structure. Using the example of the MORPHEUS heterogeneous platform, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on the methods used to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which copes with its task by separating computation from communication, providing the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices by means of local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms to operate on data in local domains, a particular communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel advanced technique to accelerate memory access was developed and implemented.
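To make the idea of separating computation from communication through local storage buffers more concrete, the following minimal sketch (Python, purely illustrative; the buffer count, tile contents and the trivial kernel are assumptions, not details of the MORPHEUS design) overlaps the streaming of input tiles with their processing using a two-slot ping-pong buffer:

```python
import threading
import queue

def fetch(tiles, buffers):
    """Communication side: stream input tiles into the local buffers."""
    for tile in tiles:
        buffers.put(tile)          # blocks when both buffer slots are full
    buffers.put(None)              # end-of-stream marker

def compute(buffers, results):
    """Computation side: consume a tile as soon as a buffer slot is ready."""
    while True:
        tile = buffers.get()
        if tile is None:
            break
        results.append(sum(tile))  # stand-in for the accelerator kernel

tiles = [list(range(i, i + 4)) for i in range(0, 16, 4)]
buffers = queue.Queue(maxsize=2)   # two local slots -> ping-pong scheme
results = []

producer = threading.Thread(target=fetch, args=(tiles, buffers))
consumer = threading.Thread(target=compute, args=(buffers, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)                     # [6, 22, 38, 54]
```

With only two slots the producer can run at most two tiles ahead of the consumer, which is the essence of a double-buffering scheme used to hide transfer latency behind computation.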
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of less than 1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and the clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still deserve an interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
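As an illustration of the network view of a contact map used above, the sketch below (Python; the toy coordinates, the 8 Å cut-off and the use of networkx are assumptions for demonstration only) builds a binary contact map from mock C-alpha positions and computes the two graph measures discussed, the characteristic path length and the clustering coefficient:

```python
import numpy as np
import networkx as nx

def contact_map(coords, threshold=8.0):
    """Binary contact map: residues i, j are in contact if their
    distance is below the threshold (8 A is a common choice)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (d < threshold) & ~np.eye(len(coords), dtype=bool)

# Toy coordinates standing in for C-alpha positions of a short chain.
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=3.0, size=(30, 3)), axis=0)

G = nx.from_numpy_array(contact_map(coords).astype(int))
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
print("clustering coefficient:", nx.average_clustering(G))
```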
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences, which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated procedure for transferring molecular functions and structural templates through inheritance. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and on the coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of the structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can serve as templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
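A minimal sketch of the clustering step described above, assuming invented hit tuples and thresholds rather than the actual cut-offs used in the thesis, groups sequences by single-linkage only when both the sequence-identity and the alignment-coverage constraints are satisfied:

```python
def clusters_from_hits(hits, min_identity=90.0, min_coverage=0.9):
    """Group sequences using single-linkage over BLAST-like hits that
    pass both a sequence-identity and an alignment-coverage constraint.
    Thresholds are illustrative, not those used in the thesis."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for query, subject, identity, coverage in hits:
        if identity >= min_identity and coverage >= min_coverage:
            union(query, subject)

    groups = {}
    for seq in parent:
        groups.setdefault(find(seq), set()).add(seq)
    return list(groups.values())

# (query, subject, % identity, fraction of both sequence lengths covered)
hits = [("P1", "P2", 95.0, 0.97), ("P2", "P3", 92.5, 0.95),
        ("P4", "P5", 88.0, 0.99)]           # fails the identity constraint
print(clusters_from_hits(hits))              # [{'P1', 'P2', 'P3'}]
```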
Abstract:
Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes, simple devices that typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenger modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. In order to handle the large amount of data generated by a WSN, several multi-sensor data fusion techniques have been developed. The aim of multi-sensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking. Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure, where simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefit of such an architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended by performing a performance-power trade-off. Smart objects can enhance user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a testbench for developing, testing and comparing different activity recognition techniques.
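The hierarchical fusion idea can be sketched as follows (Python; the labels, confidences and weighted-vote rule are illustrative assumptions, not the meta-classifier actually developed): node-level classifiers report a gesture label with a confidence, and the meta-level fuses however many reports arrive:

```python
from collections import Counter

def fuse(node_outputs):
    """Meta-classifier sketch: fuse whatever gesture labels the surviving
    nodes report (weighted majority vote on per-node confidences).
    Nodes that are offline or faulty simply do not contribute."""
    votes = Counter()
    for label, confidence in node_outputs:
        votes[label] += confidence
    return votes.most_common(1)[0][0] if votes else None

# Three nodes answer, a fourth is silent (e.g. asleep to save power).
print(fuse([("wave", 0.9), ("wave", 0.6), ("circle", 0.7)]))  # wave
print(fuse([]))                                               # None
```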
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, in order to reduce area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching the nanometer range. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variations in future technology generations. Furthermore, the exponential relationship between leakage current and threshold voltage is limiting threshold and supply voltage scaling, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects are no longer addressable only at the process level. Consequently, deep sub-micron devices will require solutions involving several design levels, such as system and logic, and new approaches called Design For Manufacturability (DFM) and Design For Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) the implementation of new analysis algorithms able to predict the system thermal behavior and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of the devices by acting on tunable parameters such as supply voltage or body bias; ii) error detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library that has been integrated in a cycle-accurate NoC simulator and in an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates the need to integrate thermal analysis in the first design stages of embedded NoC design.
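A common way to approximate the temperature response of a tile to its power trace is a first-order lumped RC thermal model; the sketch below (Python, with invented thermal parameters, not the thermal analysis library developed in this work) integrates C dT/dt = P(t) - (T - T_amb)/R with forward Euler:

```python
def thermal_trace(power, r_th=2.0, c_th=0.05, t_amb=45.0, dt=1e-3):
    """Forward-Euler integration of a single-node RC thermal model:
    C dT/dt = P(t) - (T - T_amb) / R.  Units: W, K/W, J/K, s."""
    temps, t = [], t_amb
    for p in power:
        t += dt * (p - (t - t_amb) / r_th) / c_th
        temps.append(t)
    return temps

# A NoC tile dissipating 1.5 W for 200 ms, then idling at 0.2 W.
trace = [1.5] * 200 + [0.2] * 200
temps = thermal_trace(trace)
print(f"peak {max(temps):.1f} C, final {temps[-1]:.1f} C")
```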
Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyze the impact of self-timed asynchronous logic stages in an embedded microprocessor. The results confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, together with a good response to compensation techniques such as ASV and ABB. Hence low-swing signaling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool in the first stages of the design flow.
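The statistical analysis can be illustrated with a small Monte Carlo sketch (Python; the nominal delay, sigmas and the eight-stage path are assumptions chosen for demonstration only) in which a die-level systematic shift is shared by all gates of a sample while random variation is drawn independently per gate:

```python
import random, statistics

def sample_delay(nominal_ps=500.0, sigma_random=0.04, sigma_systematic=0.03,
                 n=10_000, seed=1):
    """Monte Carlo sketch of path-delay spread under process variation:
    a die-wide systematic component is shared by all gates of a sample,
    while the random component is drawn independently per gate."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        systematic = rng.gauss(0.0, sigma_systematic)        # per-die shift
        gates = [rng.gauss(0.0, sigma_random) for _ in range(8)]
        delay = sum(nominal_ps / 8 * (1 + systematic + g) for g in gates)
        samples.append(delay)
    return samples

d = sample_delay()
print(f"mean {statistics.mean(d):.1f} ps, 3-sigma {3 * statistics.stdev(d):.1f} ps")
```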
Abstract:
Supramolecular self-assembly represents a key technology for the spontaneous construction of nanoarchitectures and for the fabrication of materials with enhanced physical and chemical properties. In addition, a significant asset of supramolecular self-assemblies rests on their reversible formation, thanks to the kinetic lability of their non-covalent interactions. This dynamic nature can be exploited for the development of "self-healing" and "smart" materials whose functional properties can be tuned by various external factors. One particularly intriguing objective in the field is to reach a high level of control over the shape and size of the supramolecular architectures, in order to produce well-defined functional nanostructures by rational design. In this direction, many investigations have been pursued toward the construction of self-assembled objects from numerous low-molecular-weight scaffolds, for instance by exploiting multiple directional hydrogen-bonding interactions. In particular, nucleobases have been used as supramolecular synthons as a result of their efficiency in coding for non-covalent interaction motifs. Among nucleobases, guanine represents the most versatile one, because of its different H-bond donor and acceptor sites, which display self-complementary patterns of interactions. Interestingly, and depending on the environmental conditions, guanosine derivatives can form various types of structures. Most of the supramolecular architectures reported in this Thesis from guanosine derivatives require the presence of a cation which stabilizes, via dipole-ion interactions, the macrocyclic G-quartet that can, in turn, stack in columnar G-quadruplex arrangements. In addition, in the absence of cations, guanosine can polymerize via hydrogen bonding to give a variety of supramolecular networks including linear ribbons. This complex supramolecular behavior makes guanine-guanine interactions the most interesting among all the homonucleobases studied. They have been subjected to intense investigations in various areas ranging from structural biology and medicinal chemistry (guanine-rich sequences are abundant in telomeric ends of chromosomes and promoter regions of DNA, and are capable of forming G-quartet based structures) to material science and nanotechnology. This Thesis, organized into five Chapters, mainly describes some recent advances in the form and function provided by self-assembly of guanine-based systems. More generally, Chapter 4 will focus on the construction of supramolecular self-assemblies whose self-assembling process and self-assembled architectures can be controlled by light as an external stimulus. Chapter 1 will describe some of the many recent studies of G-quartets in the general area of nanoscience. Natural G-quadruplexes can be useful motifs to build new structures and biomaterials such as self-assembled nanomachines, biosensors, therapeutic aptamers and catalysts. Chapters 2-4 develop the core concept of this PhD Thesis, i.e. the supramolecular organization of lipophilic guanosine derivatives with photo- or chemical addressability. Chapter 2 will mainly focus on the use of cation-templated guanosine derivatives as a potential scaffold for designing functional materials with tailored physical properties, showing a new way to control the bottom-up realization of well-defined nanoarchitectures.
In section 2.6.7, the self-assembly properties of compound 28a may be considered an example of open-shell moieties ordered by a supramolecular guanosine architecture that exhibits a new (magnetic) property. Chapter 3 will report on ribbon-like structures, supramolecular architectures formed by guanosine derivatives that may be of interest for the fabrication of molecular nanowires within the framework of future molecular electronic applications. In section 3.4 we investigate the supramolecular polymerizations of derivatives dG 1 and G 30 by light scattering techniques and TEM experiments. The obtained data reveal the presence of several levels of organization due to the hierarchical self-assembly of the guanosine units into ribbons, which in turn aggregate into fibrillar or lamellar soft structures. The elucidation of these structures provides an explanation for the physical behaviour of guanosine units displaying organogelator properties. Chapter 4 will describe photoresponsive self-assembling systems. Numerous research examples have demonstrated that the use of photochromic molecules in supramolecular self-assemblies is the most reasonable method to noninvasively manipulate their degree of aggregation and their supramolecular architectures. In section 4.4 we report on the photocontrolled self-assembly of the modified guanosine nucleobase E-42: by introducing a photoactive moiety at C8 it is possible to exert photocontrol over the self-assembly of the molecule, so that the existence of G-quartets can be alternately switched on and off. In section 4.5 we focus on the use of cyclodextrins as photoresponsive host-guest assemblies: the αCD-azobenzene conjugates 47-48 (section 4.5.3) are synthesized in order to obtain a photoresponsive system exhibiting a finely photocontrollable degree of aggregation and self-assembled architecture. Finally, Chapter 5 contains the experimental protocols used for the research described in Chapters 2-4.
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart, electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is deployed into the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenger modules). WSNs promise to revolutionize the interaction between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the sensed physical phenomena and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, that have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must however be carefully minimized. Imaging can and does play an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. Real-time implementations of these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of the sensor network and its sensor nodes is strictly application-dependent, a number of constraints should almost always be considered. Among them: • small form factor, to reduce node intrusiveness; • low power consumption, to reduce battery size and extend node lifetime; • low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve the system performance, outperforming naive power management policies.
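The energy-level-dependent trigger can be illustrated with a deliberately simplified policy (Python; the thresholds are invented and this is a plain rule-based sketch, not the MPC controller used in the thesis): a PIR event wakes the camera only when the node's energy state allows it:

```python
def should_wake_camera(pir_triggered, battery_level, harvest_w,
                       low=0.2, high=0.5):
    """Energy-level-dependent trigger (illustrative policy only):
    the PIR event wakes the camera only if the energy state allows it."""
    if not pir_triggered:
        return False
    if battery_level >= high:
        return True                      # plenty of energy: always serve
    if battery_level >= low and harvest_w > 0.1:
        return True                      # medium energy: only while harvesting
    return False                         # preserve the node

print(should_wake_camera(True, 0.8, 0.0))   # True
print(should_wake_camera(True, 0.3, 0.0))   # False
print(should_wake_camera(True, 0.3, 0.5))   # True
```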
Abstract:
The hepatitis C virus (HCV) is an enveloped virus of the family Flaviviridae. It has a positive-strand RNA genome of about 9600 nucleotides containing a single open reading frame. The genome is flanked at its 5' and 3' ends by non-translated regions (NTRs), which are important for translation and presumably also for replication. The 5' NTR contains an internal ribosome entry site (IRES) that allows cap-independent translation of the viral polyprotein of about 3000 amino acids. The polyprotein is cleaved co- and post-translationally by cellular and viral proteases into 10 functional components. To what extent the 5' NTR is also required for replication of the HCV RNA was not known at the beginning of this work. The 3' NTR has a tripartite structure consisting of a variable region, the polyU/UC tract and the so-called X sequence, a highly conserved 98-nucleotide region that is presumably required for RNA replication and possibly also for translation. The exact role of the 3' NTR in these two processes was likewise unknown at the outset. The aim of the dissertation was therefore a detailed genetic analysis of the NTRs with respect to their importance for RNA translation and replication. The analysis also included RNA structures within the coding region that are highly conserved between different HCV genotypes and that were predicted with several computer-based models. To map the minimal length of the 5' NTR required for RNA replication, a series of chimeras was constructed in which segments of different length of the HCV 5' NTR were fused at their 3' end to the IRES of poliovirus. With this approach we could show that the first 120 nucleotides of the HCV 5' NTR suffice as the minimal domain for replication. Furthermore, a clear correlation emerged between the length of the HCV 5' NTR and replication efficiency: replication efficiency increased with increasing length of the 5' NTR and was maximal when the complete 5' element was fused to the poliovirus IRES. The coupling of translation and replication found here in the HCV 5' NTR may point to a mechanism regulating both functions. It could not yet be clarified, however, which regions within the boundaries of the IRES element are precisely required for RNA replication. Analyses of the 3' NTR showed that the variable region is dispensable for replication, whereas the X sequence is essential. The polyU/UC tract had to be at least 11-30 uridines long, with maximal replication observed from a length of 30-50 uridines. Addition of heterologous sequences to the 3' end of the HCV RNA led to a strong reduction of replication. In the analyses performed here, none of the elements in the 3' NTR had a significant influence on translation. A further cis-acting RNA element had been described in the 3' coding region of the NS5B protein. We found that alterations of this structure by silent point mutations inhibited replication, which could be restored by inserting an intact copy of this RNA element into the variable region of the 3' NTR. This experimental approach allowed a precise analysis of the structural elements critical for replication.
It could thus be shown that both the structure and the primary sequence of the loop regions are essential. In addition, a sequence complementarity was found between the element in the NS5B coding region and an RNA region in the X sequence of the 3' NTR, which can form a so-called "kissing loop" interaction. Using targeted mutations we were able to show that this RNA:RNA interaction takes place at least transiently and is essential for HCV replication.
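The complementarity underlying a kissing-loop interaction can be checked with a few lines of code; the sketch below (Python) uses hypothetical loop sequences, not the actual HCV sequences studied here, and ignores G-U wobble pairs:

```python
def rna_complement(seq):
    """Reverse complement of an RNA sequence (Watson-Crick pairs only)."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(seq))

def can_kiss(loop_a, loop_b):
    """True if the two loop sequences are perfectly complementary and could
    therefore form a kissing-loop interaction (wobble pairs ignored)."""
    return len(loop_a) == len(loop_b) and rna_complement(loop_a) == loop_b

# Hypothetical loop sequences, not the actual HCV loops.
print(can_kiss("GCCAGUU", "AACUGGC"))   # True
print(can_kiss("GCCAGUU", "AACUGGU"))   # False
```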
Abstract:
Coupled-cluster theory provides one of the most successful concepts in electronic-structure theory. This work covers the parallelization of coupled-cluster energies, gradients, and second derivatives and its application to selected large-scale chemical problems, besides more practical aspects such as the publication and support of the quantum-chemistry package ACES II MAB and the design and development of a computational environment optimized for coupled-cluster calculations. The main objective of this thesis was to extend the range of applicability of coupled-cluster models to larger molecular systems and their properties, and therefore to bring large-scale coupled-cluster calculations into the day-to-day routine of computational chemistry. A straightforward strategy for the parallelization of CCSD and CCSD(T) energies, gradients, and second derivatives has been outlined and implemented for closed-shell and open-shell references. Starting from the highly efficient serial implementation of the ACES II MAB computer code, an adaptation for affordable workstation clusters has been obtained by parallelizing the most time-consuming steps of the algorithms. Benchmark calculations for systems with up to 1300 basis functions and the presented applications show that the resulting algorithm for energies, gradients and second derivatives at the CCSD and CCSD(T) levels of theory exhibits good scaling with the number of processors and substantially extends the range of applicability. Within the framework of the 'High accuracy Extrapolated Ab initio Thermochemistry' (HEAT) protocols, the effects of increased basis-set size and higher excitations in the coupled-cluster expansion were investigated. The HEAT scheme was generalized for molecules containing second-row atoms in the case of vinyl chloride. This allowed the different experimentally reported values to be discriminated. In the case of the benzene molecule it was shown that even for molecules of this size chemical accuracy can be achieved. Near-quantitative agreement with experiment (about 2 ppm deviation) for the prediction of fluorine-19 nuclear magnetic shielding constants can be achieved by employing the CCSD(T) model together with large basis sets at accurate equilibrium geometries, if vibrational averaging and temperature corrections via second-order vibrational perturbation theory are considered. Applying a very similar level of theory to the calculation of the carbon-13 NMR chemical shifts of benzene resulted in quantitative agreement with experimental gas-phase data. The NMR chemical shift study of the bridgehead 1-adamantyl cation at the CCSD(T) level resolved earlier discrepancies of lower-level theoretical treatments. The equilibrium structure of diacetylene has been determined based on the combination of experimental rotational constants of thirteen isotopic species and zero-point vibrational corrections calculated at various quantum-chemical levels. These empirical equilibrium structures agree to within 0.1 pm irrespective of the theoretical level employed. High-level quantum-chemical calculations on the hyperfine structure parameters of the cyanopolyynes were found to be in excellent agreement with experiment. Finally, the theoretically most accurate determination of the molecular equilibrium structure of ferrocene to date is presented.
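Composite schemes of the HEAT type combine contributions evaluated in increasingly large basis sets; a commonly used two-point inverse-cubic extrapolation of the correlation energy (quoted here for illustration, not necessarily in the exact form employed in this work) reads:

```latex
% Correlation energies E_X, E_Y obtained with cardinal numbers X and Y = X + 1
% are extrapolated to the basis-set limit assuming E_X = E_\infty + A X^{-3}:
E_{\mathrm{corr}}^{\infty}
  = \frac{X^{3} E_{\mathrm{corr}}^{X} - Y^{3} E_{\mathrm{corr}}^{Y}}{X^{3} - Y^{3}}
```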
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed on every desk, in every home and in every pocket. As a consequence, in recent years software has been undergoing an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and the growing availability of networks. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers have to face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point to provide a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, in the dissertation we first construct the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then we focus on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, we shift the perspective, moving from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and at the same time provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
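For readers unfamiliar with the actor baseline that the thesis takes as its starting point, the following minimal sketch (Python; it illustrates the paradigm only and is unrelated to simpAL's actual syntax or semantics) shows the three ingredients agents inherit from actors: private state, a mailbox, and sequential processing of asynchronous messages:

```python
import queue
import threading

class Actor:
    """Minimal actor: private state, a mailbox, and sequential handling of
    asynchronous messages; agent-oriented programming, as argued above,
    extends this baseline with higher-level, human-inspired concepts."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0                      # private state, never shared directly
        threading.Thread(target=self._run).start()

    def send(self, message):
        self.mailbox.put(message)           # asynchronous, non-blocking for the sender

    def _run(self):
        while True:
            message = self.mailbox.get()    # messages are handled one at a time
            if message == "stop":
                break
            self.count += 1
            print(f"received {message!r} ({self.count} so far)")

a = Actor()
a.send("ping")
a.send("pong")
a.send("stop")
```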
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed which work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budget are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the cost of algorithms for synchronization and data partitioning is analyzed, and these algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders of magnitude of speedup and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with them.
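As an example of the data-partitioning work an OpenMP-style runtime performs, the sketch below (Python; a simplification, not the embedded runtime library developed in the thesis) computes the contiguous iteration ranges that static scheduling would assign to each thread:

```python
def static_chunks(n_iterations, n_threads):
    """Compute the contiguous iteration ranges that an OpenMP-like runtime
    would hand to each thread under static scheduling (sketch only)."""
    base, remainder = divmod(n_iterations, n_threads)
    chunks, start = [], 0
    for tid in range(n_threads):
        size = base + (1 if tid < remainder else 0)
        chunks.append((start, start + size))    # [start, end) for thread tid
        start += size
    return chunks

print(static_chunks(10, 4))   # [(0, 3), (3, 6), (6, 8), (8, 10)]
```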
Abstract:
In the manufacture of solid dosage forms, granulation is a complex sub-process of high relevance for the quality of the pharmaceutical product. Fluidized-bed granulation is a particular granulation technique that combines the sub-processes of mixing, agglomeration and drying in a single apparatus. Precisely because it combines several process stages, this technique places particular demands on a comprehensive understanding of the process. The consistent pursuit of the PAT approach, published as a guideline by the US regulatory authority (FDA) in 2004, laid the foundation for continuous process improvement through increased process understanding, higher quality and cost reduction. The present work dealt with the optimization of the fluidized-bed granulation processes of two process-sensitive drug formulations using PAT. For the enalapril formulation, a low-dose and highly active drug formulation, it was found that finer atomization of the granulation liquid yields considerably larger granules. Increasing the MassRatio reduces the droplet size, which leads to larger granules. If enalapril granules with a desired D50 particle-size distribution between 100 and 140 µm are to be produced, the MassRatio must be set at a high level. If enalapril granules with a D50 value between 80 and 120 µm are desired, the MassRatio must be set at a low level. The investigations showed that the MassRatio is an important parameter and can be used to control the particle size of the enalapril granules, provided that all other process parameters are kept constant. Inspection of the intersection plots makes it possible to determine suitable settings of the process parameters or influencing variables that lead to the desired granule and tablet properties. From the position and size of the intersection region, the limits of the process parameters for producing the enalapril granules can be determined. If these limits, i.e. the "design space" of the process parameters, are respected, a high product quality can be guaranteed. To produce high-quality enalapril tablets with the chosen formulation, the enalapril granulation should be carried out with the following process parameters: a low spray rate, a high MassRatio, an inlet-air temperature of at least 50 °C and an effective inlet-air flow below 180 Nm³/h. If, instead, a spray rate of 45 g/min and a medium MassRatio of 4.54 are set, the effective inlet-air flow must be at least 200 Nm³/h and the inlet-air temperature at least 60 °C in order to obtain a predictably high tablet quality. Quality is built into the medicinal product during manufacturing by keeping the process parameters of the enalapril granulation within the design space. For the metformin formulation, a high-dose but less active drug formulation, it was found that the growth mechanism of the fine fraction of the metformin granules differs from the growth mechanism of the D50 and D90 particle-size distributions.
The growth mechanism of the granules depends on the wetting of the particles by the sprayed liquid droplets and on the size ratio of particle to spray droplet. The influence of the MassRatio on the D10 particle-size distribution of the granules is negligibly small. With the help of disturbance-variable studies, the control efficiency of the process parameters was determined for a low-dose (enalapril) and a high-dose (metformin) drug formulation, enabling extensive automation that reduces sources of error by compensating for disturbances. The result is a self-contained PAT approach for the entire process chain. The process parameters spray rate and inlet-air flow proved to be the most suitable; control via the inlet-air temperature proved sluggish. Furthermore, manufacturing processes for granules and tablets were developed for two process-sensitive active ingredients. The robustness of the manufacturing processes against disturbances was demonstrated, thereby creating the prerequisites for real-time release in the spirit of PAT. Quality control does not take place at the end of the production chain; rather, it is carried out during the process itself and is based on a better understanding of the product and the process. In addition, the consistent pursuit of the PAT approach provided the opportunity for continuous process improvement, quality enhancement and cost reduction, thereby achieving the holistic goal of the PAT concept.
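The design-space idea can be illustrated with a toy membership check (Python); the limits assumed for "low spray rate" and "high MassRatio" below are invented, only the temperature and airflow bounds are taken from the text above, and the real design space is a multidimensional region derived from the intersection plots:

```python
def within_design_space(spray_rate_g_min, mass_ratio, inlet_temp_c, airflow_nm3_h):
    """Illustrative check of the enalapril granulation settings quoted in the
    abstract (low-spray-rate / high-MassRatio branch only)."""
    return (spray_rate_g_min <= 30          # "low spray rate" (assumed limit)
            and mass_ratio >= 6.0           # "high MassRatio" (assumed limit)
            and inlet_temp_c > 50           # from the text
            and airflow_nm3_h < 180)        # from the text

print(within_design_space(25, 7.0, 55, 170))   # True
print(within_design_space(45, 4.54, 55, 170))  # False (needs >= 200 Nm3/h, >= 60 C)
```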
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents in predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and notable computational effort. Methods for cross-domain text categorization have been proposed, making it possible to leverage a set of labeled documents of one domain to classify those of another one. Most methods use advanced statistical techniques, usually involving tuning of parameters. A first contribution presented here is a method based on nearest-centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvements, but models generated from one domain are shown to be effectively reusable in a different one.
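A minimal sketch of iterative nearest-centroid adaptation is given below (Python; the toy bag-of-words vectors, the cosine similarity and the plain re-estimation loop are assumptions and deliberately simpler than the profile-adaptation method of the thesis):

```python
import numpy as np

def cross_domain_centroids(X_src, y_src, X_tgt, n_iter=5):
    """Centroids are built from the labeled source domain, then repeatedly
    re-estimated from the target documents they attract (sketch only)."""
    labels = sorted(set(y_src))
    centroids = np.stack([X_src[np.array(y_src) == c].mean(axis=0) for c in labels])
    for _ in range(n_iter):
        # assign each target document to the closest centroid (cosine similarity)
        sims = (X_tgt @ centroids.T) / (
            np.linalg.norm(X_tgt, axis=1, keepdims=True)
            * np.linalg.norm(centroids, axis=1) + 1e-12)
        assigned = sims.argmax(axis=1)
        # rebuild the centroids from the target documents themselves
        centroids = np.stack([
            X_tgt[assigned == i].mean(axis=0) if np.any(assigned == i) else centroids[i]
            for i in range(len(labels))])
    return labels, centroids, assigned

# Tiny bag-of-words toy data: 2 categories, 3 source and 4 target documents.
X_src = np.array([[3, 0, 1], [2, 1, 0], [0, 2, 3]], dtype=float)
y_src = ["sports", "sports", "politics"]
X_tgt = np.array([[4, 0, 0], [3, 1, 1], [0, 3, 2], [1, 0, 4]], dtype=float)
labels, _, assigned = cross_domain_centroids(X_src, y_src, X_tgt)
print([labels[i] for i in assigned])   # ['sports', 'sports', 'politics', 'politics']
```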
Abstract:
Resource management is of paramount importance in network scenarios and is a long-standing, still open issue. Unfortunately, while technology and innovation continue to evolve, our network infrastructure has been maintained in almost the same shape for decades; this phenomenon is known as "Internet ossification". Software-Defined Networking (SDN) is an emerging paradigm in computer networking that allows a logically centralized software program to control the behavior of an entire network. This is done by decoupling the network control logic from the underlying physical routers and switches that forward traffic to the selected destination. One mechanism that allows the control plane to communicate with the data plane is OpenFlow. Network operators can write high-level control programs that specify the behavior of an entire network. Moreover, the centralized control makes it possible to define more specific and complex tasks that may involve many network functionalities, e.g., security, resource management and control, within a single framework. Nowadays, the explosive growth of real-time applications that require stringent Quality of Service (QoS) guarantees leads network programmers to design network protocols that deliver certain performance guarantees. This thesis exploits the use of SDN in conjunction with OpenFlow to manage differentiated network services with high QoS. Initially, we define a QoS Management and Orchestration architecture that allows us to manage the network in a modular way. Then, we provide a seamless integration between the architecture and the standard SDN paradigm, following the separation between the control and data planes. This work is a first step towards the deployment of our proposal in the University of California, Los Angeles (UCLA) campus network with differentiated services and stringent QoS requirements. We also plan to exploit our solution to manage the handoff between different network technologies, e.g., Wi-Fi and WiMAX. Indeed, the model can be run with different parameters, depending on the communication protocol, and can provide optimal results to be implemented on the campus network.
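The kind of per-class QoS policy a centralized controller can push to the data plane is sketched below (Python; the match fields, queue numbers and traffic classes are invented, and no specific controller or OpenFlow library API is implied):

```python
# Each rule: (match fields) -> (action), evaluated in priority order.
FLOW_TABLE = [
    {"match": {"udp_dst": 5004}, "queue": 0, "note": "real-time video"},
    {"match": {"tcp_dst": 443},  "queue": 1, "note": "interactive web"},
    {"match": {},                "queue": 2, "note": "best effort"},
]

def select_queue(packet):
    """Return the egress queue for a packet according to the first matching
    rule, mimicking how a centralized controller pushes per-class QoS
    policies down to the switches."""
    for rule in FLOW_TABLE:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["queue"], rule["note"]
    return None

print(select_queue({"udp_dst": 5004, "src": "10.0.0.7"}))  # (0, 'real-time video')
print(select_queue({"tcp_dst": 80}))                       # (2, 'best effort')
```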
Abstract:
A computationally efficient procedure for modeling the alkaline hydrolysis of esters is proposed, based on calculations performed on methyl acetate and methyl benzoate systems. Extensive geometry and energy comparisons were performed on the simple ester methyl acetate. The effectiveness of performing high-level single-point ab initio energy calculations on geometries obtained from semiempirical and ab initio methods was determined. The AM1 and PM3 semiempirical methods are evaluated for their ability to model the transition states and intermediates of ester hydrolysis. The Cramer/Truhlar SM3 solvation method was used to determine activation energies. The most computationally efficient way to model the transition states of large esters is to use the PM3 method. The PM3 transition structure can then be used as a template for the design of haptens capable of inducing catalytic antibodies.