821 results for Networking
Abstract:
Translucent WDM optical networks use sparse placement of regenerators to overcome the physical impairments and wavelength contention introduced by fully transparent networks, achieving performance close to that of fully opaque networks at much lower cost. Our previous study proved the feasibility of translucent networks using the sparse regeneration technique, addressing the placement of regenerators with static schemes that allow only a fixed number of regenerators at fixed locations. This paper furthers that study by proposing a suite of dynamic routing schemes. Dynamic allocation, advertisement and discovery of regeneration resources are proposed to support sharing transmitters and receivers between regeneration and access functions. The study follows the current trend in the optical networking industry by using extensions of IP control protocols. Dynamic routing algorithms, aware of the current regeneration resources and link states, are designed to route connection requests intelligently under quality constraints. A hierarchical network model, supported by an MPLS-based control plane, is also proposed to provide scalability. Experiments show that network performance is improved without placing extra regenerators.
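For illustration only, the sketch below (Python) shows one way a regeneration-aware routing decision of the kind described above could be expressed: a Dijkstra-style search that tracks accumulated impairment and spends a shared regenerator at a node whenever the quality budget would otherwise be exceeded. The graph structure, impairment values and regenerator pools are assumptions, not the paper's algorithm.

import heapq

def route_with_regeneration(graph, regenerators, src, dst, max_impairment):
    # graph: node -> {neighbor: link impairment}; regenerators: node -> free regenerator count.
    # Returns (path, regeneration sites) or None.  Illustrative sketch only.
    heap = [(0.0, src, 0.0, (), (src,))]   # (cost, node, impairment since last regen, regen sites, path)
    best = {}
    while heap:
        cost, node, impair, regens, path = heapq.heappop(heap)
        if node == dst:
            return list(path), list(regens)
        if best.get((node, regens), float("inf")) <= impair:
            continue
        best[(node, regens)] = impair
        for nxt, w in graph.get(node, {}).items():
            if impair + w <= max_impairment:
                # signal still within the quality budget: continue transparently
                heapq.heappush(heap, (cost + w, nxt, impair + w, regens, path + (nxt,)))
            elif regens.count(node) < regenerators.get(node, 0) and w <= max_impairment:
                # regenerate at the current node, then traverse the link with a fresh budget
                heapq.heappush(heap, (cost + w, nxt, w, regens + (node,), path + (nxt,)))
    return None

# Example: regenerate at B and C so the A-D connection stays within the budget.
net = {"A": {"B": 0.4}, "B": {"C": 0.5}, "C": {"D": 0.4}, "D": {}}
print(route_with_regeneration(net, {"B": 1, "C": 1}, "A", "D", max_impairment=0.8))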
Abstract:
Graduate Program in Information Science (Pós-graduação em Ciência da Informação) - FFC
Abstract:
In 2008, academic researchers and public service officials created a university extension platform based on online and on-site meetings, named the "Work-Related Accidents Forum: Analysis, Prevention, and Other Relevant Aspects". Its aim was to help public agents and social partners propagate a systemic approach useful for the surveillance and prevention of work-related accidents. This article describes and analyses that platform. Online access is free and is structured to: support the dissemination of updated concepts; support on-site meetings and capacity-building educational activities; and maintain a permanent space for debate among registered participants. The desired result is the propagation of a social-technical-systemic view of work-related accidents to replace the traditional view, which emphasizes human error and results in blaming the victims. The Forum uses an educational approach known as permanent health education, based on the experience and needs of workers, and adopts a problematizing pedagogy that starts from the requirements and experiences of the social actors and stimulates support and discussion among them. The current challenge is to turn the platform into a social networking website in order to broaden its links with society.
Abstract:
The presence of technologies and media is so pervasive today that it is impossible to ignore. Today's freshmen and university students carry mobile phones with digital photos and videos; they use blogs, Twitter and social networking sites, while also reading books and articles and doing their projects and homework. Different technologies and new information continually impact higher education, which demands that students rapidly update their information and media literacy. Despite the progressive impact of digital technology on contemporary academic culture, it is imperative to recover and consolidate a more critical engagement with information, media and technology. We advocate the convergence of information literacy and media literacy in higher education. The CIMES Project (Competência em Informação e Mídia no Ensino Superior - Information and Media Literacy in Higher Education) is at an early stage. This article reviews the theoretical, political and practical issues of education for information and media literacy in higher education, especially in Brazil. The ultimate goal of the CIMES Project is to provide a framework for developing educational programs in Brazil that treat information literacy and media literacy as a transversal component of higher education. The current stage of the project allows only a general analytical picture to be drawn.
Abstract:
Too Big to Ignore (TBTI; www.toobigtoignore.net) is a research network and knowledge mobilization partnership established to elevate the profile of small-scale fisheries (SSF), to argue against their marginalization in national and international policies, and to develop research and governance capacity to address global fisheries challenges. Network participants and partners are conducting global and comparative analyses, as well as in-depth studies of SSF in the context of local complexity and dynamics, along with a thorough examination of governance challenges, to encourage careful consideration of this sector in local, regional and global policy arenas. Comprising 15 partners and 62 researchers from 27 countries, TBTI conducts activities in five regions of the world. In the Latin America and the Caribbean (LAC) region, we are taking a participative approach to investigate and promote stewardship and self-governance in SSF, seeking best practices and success stories that could be replicated elsewhere. The region will also focus on promoting sustainable livelihoods in coastal communities. Key activities include workshops and stakeholder meetings, facilitation of policy dialogue and networking, and assessment of local capacity needs and training. Currently, LAC members are preparing publications that examine key issues concerning SSF in the region and best practices, with an initial focus on ecosystem stewardship. Other planned deliverables include a comparative analysis, a regional profile of the top research issues on SSF, and a synthesis of SSF knowledge in LAC.
Abstract:
A brief introduction to the SIBiUSP context. Integrated projects with units internal to USP: the USP Information Ecosystem (Ecosistema Informação USP). Integrated projects with units external to USP. Social impacts.
Abstract:
Today there is no doubt that the video game industry has earned a more than commanding position in the market. It is also an industry that has evolved at breakneck speed, from the simple games of the 8-bit era to today's massively multiplayer games. Just as the games have changed, so have the development techniques. This project, entitled "Beat Fighters!: Exploración y Desarrollo de diferentes Técnicas de Desarrollo de Videojuegos Multijugador" (Exploration and Development of Different Multiplayer Video Game Development Techniques), is a study and implementation of various techniques used in video game development, with emphasis on two fields in particular: artificial intelligence and networking.
Abstract:
Advances in wireless networking and content delivery systems are enabling new and challenging provisioning scenarios in which a growing number of users access multimedia services, e.g., audio/video streaming, while moving among different points of attachment to the Internet, possibly over different connectivity technologies, e.g., Wi-Fi, Bluetooth, and cellular 3G. This calls for novel middleware capable of dynamically personalizing service provisioning to the characteristics of client environments, in particular to discontinuities in wireless resource availability due to handoffs. This dissertation proposes a novel middleware solution, called MUM, that performs effective and context-aware handoff management to transparently avoid service interruptions during both horizontal and vertical handoffs. To achieve this goal, MUM exploits full visibility of the wireless connections available in client localities and their handoff implementations (handoff awareness), of service quality requirements and handoff-related quality degradations (QoS awareness), and of the network topology and resources available in current/future localities (location awareness). The design and implementation of all the main MUM components, along with extensive field trials of the realized middleware architecture, confirmed the validity of the proposed fully context-aware handoff management approach. In particular, the reported experimental results demonstrate that MUM can effectively maintain service continuity for a wide range of multimedia services by exploiting handoff prediction mechanisms, adaptive buffering and pre-fetching techniques, and proactive re-addressing/re-binding.
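A minimal sketch, assuming nothing about MUM's internals, of the kind of decision that handoff prediction plus adaptive buffering implies: given a predicted connectivity gap and the expected bandwidth of the target network, size the prefetch buffer so playback survives the handoff. The class, parameter names and numbers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class HandoffPrediction:
    seconds_until_handoff: float   # predicted time before the client leaves current coverage
    expected_gap_s: float          # expected service interruption during the handoff
    next_bandwidth_bps: float      # estimated bandwidth in the target network

def prefetch_target_bytes(pred: HandoffPrediction, media_bitrate_bps: float,
                          safety_factor: float = 1.5) -> int:
    # How much media to buffer ahead of a predicted handoff so playback
    # continues through the connectivity gap.  Illustrative sketch only.
    needed = media_bitrate_bps * pred.expected_gap_s * safety_factor
    # If the target network is slower than the stream, also pre-buffer the shortfall
    # for a short adaptation window (assumed 10 s here).
    shortfall = max(0.0, media_bitrate_bps - pred.next_bandwidth_bps) * 10.0
    return int((needed + shortfall) / 8)   # bits -> bytes

# Example: 2 Mbit/s video, 1.2 s expected gap, target Wi-Fi estimated at 1.5 Mbit/s.
pred = HandoffPrediction(seconds_until_handoff=4.0, expected_gap_s=1.2,
                         next_bandwidth_bps=1.5e6)
print(prefetch_target_bytes(pred, media_bitrate_bps=2e6))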
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions: • The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain. • Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance. • NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters; design tools are needed to prune this space and pick the best solutions. • Given their global, distributed nature, it is especially essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs. This dissertation focuses on all of the above points by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
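As a generic illustration of the packet-switching paradigm mentioned above (and not of the ×pipes implementation), the sketch below shows textbook deterministic XY routing on a 2D-mesh NoC: a packet first travels along the X dimension, then along Y, which is simple to implement in hardware and deadlock-free on a mesh.

def xy_route(src, dst):
    # Deterministic XY routing on a 2D-mesh NoC: move along X first, then Y.
    # Generic textbook scheme used only to illustrate the idea; coordinates are (x, y) tile indices.
    x, y = src
    hops = []
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 3)))   # [(1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]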
Abstract:
Experiments on virtualizing the faculty's computer laboratory: by migrating virtual machines, energy could be saved by powering off some of the physical machines.
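A minimal sketch of the consolidation idea behind such savings, assuming a simple first-fit-decreasing heuristic rather than the thesis' actual migration policy: pack VM loads onto as few hosts as possible so the remaining physical machines can be powered off. Names and numbers are invented for illustration.

def consolidate(vms, host_capacity):
    # First-fit-decreasing packing of VM loads onto as few physical hosts as possible;
    # hosts left unused can be powered off.  Loads are in arbitrary CPU units.
    hosts = []                                   # residual capacity per powered-on host
    placement = {}
    for name, load in sorted(vms.items(), key=lambda kv: kv[1], reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] -= load
                placement[name] = i
                break
        else:                                    # no powered-on host fits: power one on
            hosts.append(host_capacity - load)
            placement[name] = len(hosts) - 1
    return placement, len(hosts)

vms = {"web": 30, "db": 55, "build": 20, "mail": 10, "backup": 40}
print(consolidate(vms, host_capacity=100))       # packs onto 2 hosts instead of 5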
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be an aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from allowing services to be configured and offered directly, and they need to be enriched with more user-oriented functionalities. Current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to improve the usability and accessibility of the services provided by the optical network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies facilitates user and application access to abstracted levels of information regarding the offered advanced network services. This thesis faces the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, the work focuses on the use of the SIP protocol as an inter-layer signalling protocol that defines the Session Plane, in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to promote the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. Both technologies promise to provide all-optical burst or packet switching instead of the current circuit switching. However, the electronic domain is still present in the scheduler, forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This open issue is faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
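As a hedged illustration of how the channel-scheduling decision can reduce to a single min/max computation, the sketch below follows the generic latest-available-unused-channel idea from the optical burst switching literature; it is not the thesis' hardware design, and all names and numbers are assumptions.

def schedule_burst(channel_horizons, arrival, duration, guard=0.0):
    # Pick an output wavelength for an incoming burst: among channels that are free
    # by the burst's arrival time, take the one whose "horizon" (time its last burst
    # ends) is the latest, which minimises the voids left on the channel.  The decision
    # reduces to one max over the channels, matching the min/max formulation above.
    feasible = [(h, ch) for ch, h in enumerate(channel_horizons) if h + guard <= arrival]
    if not feasible:
        return None                                  # burst must be dropped (or deflected)
    _, best = max(feasible)                          # latest-available channel
    channel_horizons[best] = arrival + duration      # book the channel
    return best

horizons = [3.0, 7.5, 6.2, 9.9]                      # when each wavelength becomes free
print(schedule_burst(horizons, arrival=8.0, duration=2.0))   # -> channel 1 (horizon 7.5)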
Abstract:
Among its driving factors, this research identified the European Community's determination to establish and support intermediate sub-national territorial units of a regional kind, within which systems of cities could reach the highest technological performance and capture the positive effects of innovation. This European orientation has had to confront a highly varied historical and geographical reality: alongside member states in which the hierarchies among cities are historically rooted and functionally differentiated, with a dominant capital city over subordinate cities in which the culture of territorial control is neither continuous nor hierarchical, there are composite national territories with a capital city of recognized power but also smaller cities that have for centuries exerted a deep-rooted influence on the organization of their territories. The first type includes, for example, the countries of Northern Europe and England, with France as an emblematic case; the second includes the Mediterranean countries, Italy first of all, with Germany as the great exception. Applying the Community's intentions to the national and local reality, this thesis undertook a methodological and procedural study of how a region that is strongly polycentric in its development, and "artificially" reborn as a unit after the events of the nineteenth century, could be organized as a system: Emilia-Romagna. Even in regions historically organized around a plurality of emerging centres, the relationship with the territory is mediated by minor urban centres that govern the cellular fabric of service aggregations of clearly agrarian origin. At the political and institutional level, this state of affairs produces a lively dialectic between territories willed by the institutions and territories legitimated by the consolidation of traditions and confirmed by current use. The growing demand for development governance formulated by local economic actors and supported by the European institutions is confronted with the limited capacity of the current territorial bodies (Regions, Municipalities and Provinces) to reach a level of efficiency sufficient to organize adequate service systems in support of economic growth. The first chapter, after a brief examination of the "Community rhetorical figures" used to describe these ongoing phenomena, such as polycentrism, governance and territorial cohesion, analyses the European programming instruments, above all the S.S.S.E. (the European Spatial Development Perspective), which states: "To ensure balanced regional development in full integration into the world economy, a polycentric development model must be pursued, in order to prevent further excessive concentration of economic strength and population in the central territories of the EU. Only by further developing the relatively decentralized settlement structure is it possible to exploit the economic potential of all European regions." The thesis is set in the historical phase in which an attempt is being made to define the new functional territories and the criteria on which their recognizability is based, in order to adapt and reform the institutional territories accordingly.
Future taxation should refer to the functional territories, which are also the appropriate scale for framing most policies; all of this will also require translation into terms of political representation and accountability on the part of citizens. The new governance advocated by the European Community envisages management through Local Territorial Systems (S.Lo.T.), defined by the combination of a local milieu and networks of actors behaving as a collective actor. The second chapter therefore starts by investigating the concept of the "functional region", defined by the presence of a core and a corresponding area of influence that interacts with other territorial realities through functional relations; it then arrives at the definition of a Local Territorial System, an evolved model of the functional region that can be thought of as a local network of subjects which, through the specific relations they maintain among themselves and with the territorial specificities of the local milieu in which they operate and act, behave as a collective subject. Identifying a territorial system is a necessary but not sufficient condition for defining any form of territorial planning or governance, because one must above all take into account the processes of functional integration and networking generated among the different urban systems, which mirror how the territory is actually used; only a methodological approach capable of blurring and superimposing the different territorial perimeters can therefore define areas over which an action of territorial government can be framed. Since the beginning of 2000 the OECD Territorial Development Service has conducted a survey to understand how different countries empirically identify functional regions. The vast majority of countries adopt a definition of the functional region based on commuting: the boundaries of functional regions are defined by contours determined by local labour markets, in turn identified through indicators of labour mobility. In Italy, the definition of a functional urban area in fact coincides with that of the Local Labour System (Sistema Locale del Lavoro, SLL). Choosing statistical data linked to demographic characteristics is a fundamental element that determines the location of certain services and facilities and provides a map for investment in both the public and the private sector. Among the European programmes aimed at a sustainable and balanced development of a territory made up of interrelated functional areas, one of the most important studies was conducted by ESPON (European Spatial Planning Observation Network); it concerns adapting policies to the characteristics of Europe's territories, creating a permanent system for monitoring the European territory. On the basis of these indicators, rankings of the various FUAs are built, and those with high (average) scores are classified as MEGAs; in this sense, MEGAs are particularly well-performing FUAs/SLLs. In Italy there are six of them overall, one of which is in the Emilia-Romagna region (Bologna). FUAs are spatially interconnected and their areas of influence can overlap.
However, spatial proximity is only one aspect of interaction between cities; the other important aspect is that of networks. To understand how polycentric or monocentric European countries are, the ESPON Project examined three different parameters for each FUA: size, position, and the connections between centres. The analytical phase of the thesis reconstructs the historical evolution of regional planning instruments, analysing the organizational aspects of the intermediate level and highlighting the motivations and criteria adopted in subdividing the territory of Emilia-Romagna (the comprensori, the industrial districts, the local labour systems, and so on). The comprensorio phase and the district phase, although in some respects short-lived, nevertheless had the merit of confirming the need for a strong intermediate body for programming and planning. In 2007 the Emilia-Romagna Region, in interpreting its internal territorial articulations, adapted its analytical and interpretative techniques to the directives contained in the 2001 ESPON Project; this made it possible to identify six types of S.Lo.T. (territorial systems with high urban polarization; metropolitan urban systems; city-territory systems; systems with medium urban polarization; systems with low urban polarization; networks of small urban centres). Another line of work of the doctoral thesis concerned the empirical verification of the actual boundaries of the S.Lo.T. of the 2007 PTR. Methodologically, cluster analysis was used, taking the individual municipality as the starting pole of commuting movements, in order to eliminate the inevitable approximations introduced by the SLL perimeters and, above all, to capture the nuances of the often overlapping administrative boundaries of the various municipalities and provinces. The novelty lies in the fact that until 2001 the region had defined, over the same territory, a plurality of intermediate areas that were not univocally delimited for all functions but were defined according to an analytical-mathematical criterion dependent on the dominant sectoral activity. In parallel with the process of renewal of local politics under way in the main countries of the European Community, a significant institutional evolution is taking shape, which in Italy involves the implementation of Title V of the Constitution. That title outlines a new arrangement of the various institutional levels, taking as reference criteria the simplification of the administrative structure and the rationalization of overall public expenditure. In this perspective the provincial dimension would appear to be the most technically suitable for the minimum level of decentralized territorial planning, but at the same time the province, as an intermediate administrative body, shows strong motivational shortcomings, since the historical reference body for planning is the municipality and the management body delegated by the state is the region: in general, the municipality is too small to carry out development programming, and the region too large to capture the growth impulses of territories and local realities. This consideration must also be reconciled with the small territorial size of the Italian regions when compared with European regions and the German Länder.
The identification of objective (functional rather than formal) criteria for identifying and delimiting functional territories, and for the exchange of services among them, is the necessary condition for overcoming the currently optional nature of inter-institutional cooperation processes (for example among municipalities belonging to the same functional territory). In this respect the experience of associations, and also of unions of municipalities, is very useful. The planning requirements involved in reorganizing the decentralized political-territorial institutions constitute the final point of the research, which confirms the intermediate level as optimal for planning. This level is to be understood as a geographical dimension of reference rather than as a sphere of administrative decisions or governance, and it could usefully be managed through a public-private development agency entrusted with drawing up and managing the plan. For this to happen, the regional plan, formulated by autonomous political bodies coordinated by the state, must have well-defined characteristics and concrete economic feasibility.
Abstract:
This work deals, on the one hand, with the synthesis of new crosslinkable ferroelectric compounds, which should exhibit a higher spontaneous polarization and thus better switching behaviour after crosslinking. For this purpose, the halogens fluorine, chlorine and bromine were successfully incorporated into known systems. In addition, new methods for investigating ferroelectric liquid-crystalline networks were successfully applied and further developed. This made it possible, for example, to gain new insights into the elastic properties of LC elastomers: for the first time, soap bubbles were produced from LC polymers and crosslinked by UV irradiation. By measuring the radius as a function of pressure, it was established that the behaviour of the polymer, initially controlled by surface tension, changed to elastic behaviour after UV irradiation. From the radius-versus-pressure relationship it was possible to obtain data on the elastic properties; the balloons showed typical rubber-elastic behaviour. No influence of the mesophase (i.e. SA or SC phase) on the properties of the balloons could be detected. For the two systems investigated here, the inter- and the intralayer crosslinkable system, the elastic behaviour was found to be very similar, in sharp contrast to the earlier electro-optical investigations; that is, after crosslinking both systems showed the same elastic behaviour to within a factor of 2. Unlike nematic elastomers, which in some cases show large thermoelastic changes at the phase transition, the elastomers investigated here showed no change in their elastic properties at the phase transition, which can be attributed, among other things, to the relatively high crosslinking densities. Furthermore, electrostriction in ferroelectric liquid-crystalline elastomer films was investigated, leading to a new world record for the electrostrictive effect: layer-thickness changes of 4% were measured at an applied field of 1.5 kV. X-ray scattering experiments on spin-coated, crosslinked polymer films also showed that the measured effect can be attributed entirely to the electroclinic effect. Finally, a new route was developed to obtain liquid-crystalline networks with less preparative chemistry, by investigating network formation with organic gelators. In this context it was possible, for the first time, to reversibly stabilize ferroelectric liquid crystals oriented in one state or the other, with the possibility of switching between the stabilized states arbitrarily often.
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile embedded sensors are deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes that can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce interference with the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something "interesting" is detected). The energy cost of image processing must, however, be carefully minimized. Imaging can and does play an important role in sensing devices for ambient intelligence.
Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis: more eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of hardware/software technologies and to the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered, among them: • a small form factor, to reduce node intrusiveness; • low power consumption, to reduce battery size and extend node lifetime; • low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data-processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: low-power video sensor nodes and video processing algorithms, and multimodal surveillance. Low-power video sensor nodes and video processing algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that merely watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes can cope with tasks such as recognizing unattended bags in airports or persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible; for this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low cost, passive (thus low power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller, Model Predictive Control (MPC), to improve system performance, outperforming naive power management policies.
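A toy sketch of the PIR-gated duty-cycling idea described above, using a trivial threshold policy rather than the thesis' MPC controller; all costs, names and parameters are invented for illustration.

import random

def pir_gated_camera(frames, energy, idle_cost=0.1, capture_cost=5.0, pir_cost=0.02):
    # Toy simulation: the PIR sensor runs continuously at negligible cost and the
    # power-hungry camera is woken only when motion is flagged and the remaining
    # energy budget allows a capture.  Illustrative sketch only.
    captures = 0
    for motion in frames:                        # one boolean PIR reading per time step
        energy -= pir_cost + idle_cost
        if motion and energy >= capture_cost:
            energy -= capture_cost               # wake camera, grab and process a frame
            captures += 1
        if energy <= 0:
            break                                # battery exhausted
    return captures, round(energy, 2)

random.seed(0)
readings = [random.random() < 0.1 for _ in range(500)]   # sparse motion events
print(pir_gated_camera(readings, energy=300.0))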
Abstract:
Dendrimers play an outstanding role as structurally well-defined nanoparticles. The aim of this work was to prepare and investigate dendrimers with a high density of photoactive chromophores. To this end, the terminal amino groups of first- and second-generation poly(propylene imine) dendrimers, Astramol DAB-Am-4R and DAB-Am-8R, were linked to stilbenes and styrylstilbenes as chromophores. (E)-Stilbenes were built up by Wittig-Horner and Heck reactions, carrying three propoxy groups on one side to improve solubility and, on the other side, a suitable functionality for attachment to the dendritic core. The linkages tested were the amide (PSDA), the Schiff base (PSDS) and the urea (PSDH); the Schiff bases were additionally reduced to the secondary amine (PSDR and PQDR) to increase hydrolytic stability. Through attachment to the core, the stilbenoid chromophores become strongly photoactivated. This is based on singlet energy transfer (Förster mechanism) from chromophore to chromophore. This process competes with the deactivation processes, prolongs the mean S1 lifetime and thus increases the chances for photochemistry. The styrylstilbene chromophore, moreover, has a considerable part of its UV absorption already within the daylight range and therefore photopolymerizes even in daylight. Especially for the second-generation dendrimers, the question arose of complete, i.e. eightfold, conversion, since the core should be present as a coil with the arms partly folded inwards and thus only partially accessible to the reactant. Under optimized reaction conditions, however, all amino groups could be converted. The complete conversion of the dendrimers was examined by NMR and mass-spectrometric methods. In the absorption spectra of the first-generation dendrimers, the position of the maxima changes depending on how the chromophores are linked to the core. Extending the chromophore by one styryl unit causes a considerable red shift. The positions of the emission maxima differ more strongly than those of the absorption maxima: the urea shows the smallest Stokes shift, followed by the secondary amine and then the Schiff base, which points to differently relaxed S1 geometries. The compounds PSDS1, PSDR1 and PSDH1, composed of 3,4,5-tripropoxystilbene units and the first-generation Astramol core DAB-Am-4, were irradiated at a concentration of 10^-5 mol/L. Complete photodegradation by irradiation in chloroform with a xenon lamp, without any filter, occurred within ten minutes (PSDH1), 20 minutes (PSDR1) and one hour (PSDS1). Common to all three compounds is the appearance of an intermediate new maximum of low intensity, shifted bathochromically by about 100 nm. The urea system additionally shows another intermediate maximum at 614 nm. According to earlier investigations, these maxima can be assigned to quinoid structures formed by oxidation, whose lifetime (in the range of seconds) is too short for NMR characterization. PSDR1 was also irradiated at higher concentrations (10^-4 and 10^-3 mol/L) with a mercury lamp and a Pyrex filter (lambda > 300 nm). As expected, a broadening of the NMR signals is observed. Initially, cis-stilbene is formed. In addition, a signal at 4.3 ppm can be observed, originating from inter- or intramolecularly formed methine protons.
Even though, according to MOPAC and force-field calculations, the double bonds are unfavourably oriented relative to each other for a [pi2s + pi2s] cyclodimerization, a geometry enabling intramolecular head-to-head cyclobutane formation may prevail in the photochemically excited state. The mass-spectrometric investigations of the irradiation products (FD, ESI, MALDI-TOF) show only the monomer as the highest mass. However, this does not allow the conclusion that the reaction is purely intramolecular: progressive statistical C-C coupling can quickly lead to crosslinked nanoparticles that do not fly in the mass spectrometer. The NMR spectra of the particles, which become increasingly insoluble with growing crosslinking, confirm the oligomerization.