863 results for Energy Efficient Routing Protocols
Abstract:
We report a systematic study of the localized surface plasmon resonance effects on the photoluminescence of Er³⁺-doped tellurite glasses containing silver or gold nanoparticles. The silver and gold nanoparticles are obtained by reduction of Ag ions (Ag⁺ → Ag⁰) or Au ions (Au³⁺ → Au⁰) during the melting process, followed by formation of the nanoparticles upon heat treatment of the glasses. Absorption and photoluminescence spectra reveal particular features of the interaction between the metallic nanoparticles and the Er³⁺ ions. The observed photoluminescence enhancement is due to dipole coupling of the silver nanoparticles with the ⁴I₁₃/₂ → ⁴I₁₅/₂ Er³⁺ transition, and of the gold nanoparticles with the ²H₁₁/₂ → ⁴I₁₃/₂ (805 nm) and ⁴S₃/₂ → ⁴I₁₃/₂ (840 nm) Er³⁺ transitions. This process is achieved via an efficient coupling that yields energy transfer from the nanoparticles to the Er³⁺ ions, as confirmed by the theoretical spectra calculated through the decay rate. Crown Copyright (C) 2011 Published by Elsevier B.V. All rights reserved.
Abstract:
Three-party password-authenticated key exchange (3PAKE) protocols allow entities to negotiate a secret session key with the aid of a trusted server with whom they share a human-memorable password. Recently, Lou and Huang proposed a simple 3PAKE protocol based on elliptic curve cryptography, which is claimed to be secure and to provide superior efficiency when compared with similar-purpose solutions. In this paper, however, we show that the solution is vulnerable to key-compromise impersonation and offline password guessing attacks from system insiders or outsiders, which indicates that the empirical approach used to evaluate the scheme's security is flawed. These results highlight the need for employing provable security approaches when designing and analyzing PAKE schemes. Copyright (c) 2011 John Wiley & Sons, Ltd.
Abstract:
In this work, we report a theoretical and experimental investigation of the energy transfer mechanism in two isotypical 2D coordination polymers, ∞[(Tb₁₋ₓEuₓ)(DPA)(HDPA)], where H₂DPA is pyridine-2,6-dicarboxylic acid and x = 0.05 or 0.50. Emission spectra of ∞[(Tb₀.₉₅Eu₀.₀₅)(DPA)(HDPA)] (1) and ∞[(Tb₀.₅Eu₀.₅)(DPA)(HDPA)] (2) show that the strong quenching of the Tb³⁺ emission caused by the Eu³⁺ ion indicates an efficient Tb³⁺ → Eu³⁺ energy transfer (ET). The Tb³⁺ → Eu³⁺ ET rates (k_ET) and the Eu³⁺ rise rates (k_r) as a function of temperature for (1) are of the same order of magnitude, indicating that the sensitization of the Eu³⁺ ⁵D₀ level is largely fed by ET from the ⁵D₄ level of the Tb³⁺ ion. The η_ET and R₀ values vary in the 67-79% and 7.15-7.93 Å ranges. Hence, Tb³⁺ can transfer energy efficiently to Eu³⁺ ions occupying the possible sites at 6.32 and 6.75 Å. For (2), the ET processes occur on average with η_ET and R₀ of 97% and 31 Å, respectively; consequently, the Tb³⁺ ion can transfer energy to Eu³⁺ ions located in different layers. The theoretical model developed by Malta was implemented to provide further insight into the dominant mechanisms involved in the ET between lanthanide ions. Calculated single-step Tb³⁺ → Eu³⁺ ET rates are three orders of magnitude lower than the experimental ones; this can be explained by the fact that the theoretical model does not consider the role of phonon assistance in the Ln³⁺ → Ln³⁺ ET processes. In addition, the Tb³⁺ → Eu³⁺ ET processes are predominantly governed by the dipole-dipole (d-d) and dipole-quadrupole (d-q) mechanisms.
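For context, the energy-transfer efficiency η_ET and critical radius R₀ quoted above are commonly related to the donor decay and the donor-acceptor distance through the standard Förster-type expression for the dipole-dipole contribution (a generic textbook relation, not necessarily the exact procedure used in this work):

\eta_{ET} = 1 - \frac{\tau_{DA}}{\tau_{D}} = \frac{R_0^{6}}{R_0^{6} + R^{6}}

where τ_D and τ_DA are the Tb³⁺ (donor) lifetimes in the absence and presence of the Eu³⁺ (acceptor), and R is the donor-acceptor distance.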
Abstract:
It is a well-established fact that the statistical properties of energy-level spectra are the most efficient tool to characterize nonintegrable quantum systems. The statistical behavior of different systems such as complex atoms, atomic nuclei, two-dimensional Hamiltonians, quantum billiards, and noninteracting many bosons has been studied. The study of statistical properties and spectral fluctuations in interacting many-boson systems has developed interest in this direction. We are especially interested in weakly interacting trapped bosons in the context of Bose-Einstein condensation (BEC), as the energy spectrum shows a transition from a collective nature to a single-particle nature with an increase in the number of levels. However, this has received less attention, as it is believed that the system may exhibit Poisson-like fluctuations due to the existence of an external harmonic trap. Here we compute numerically the energy levels of zero-temperature many-boson systems which interact weakly through the van der Waals potential and are confined in a three-dimensional harmonic potential. We study the nearest-neighbor spacing distribution and the spectral rigidity by unfolding the spectrum. It is found that an increase in the number of energy levels for a repulsive BEC induces a transition in P(s) from a Wigner-like form displaying level repulsion to the Poisson distribution; it does not follow the Gaussian orthogonal ensemble prediction. For repulsive interaction, the lower levels are correlated and manifest level repulsion. For intermediate levels P(s) shows mixed statistics, which clearly signifies the existence of two energy scales: the external trap and the interatomic interaction, whereas for very high levels the trapping potential dominates, generating a Poisson distribution. A comparison with mean-field results for the lower levels is also presented. For an attractive BEC near the critical point we observe a Shnirelman-like peak near s = 0, which signifies the presence of a large number of quasidegenerate states.
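As an illustration of the spectral analysis described in the preceding abstract (a minimal sketch assuming only a sorted array of computed energy levels, not the authors' code), the unfolding of the spectrum and the nearest-neighbor spacings can be obtained as follows and compared with the Wigner and Poisson forms:

import numpy as np

def nearest_neighbor_spacings(levels, poly_degree=7):
    # Unfold the spectrum: fit a smooth curve to the level-counting staircase N(E)
    # so that the unfolded levels have unit mean spacing.
    levels = np.sort(np.asarray(levels, dtype=float))
    staircase = np.arange(1, len(levels) + 1)
    smooth_N = np.polynomial.Polynomial.fit(levels, staircase, poly_degree)
    unfolded = smooth_N(levels)
    return np.diff(unfolded)  # unfolded spacings s, with mean close to 1

# Reference distributions for comparison with a histogram of the spacings:
wigner = lambda s: (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)  # GOE-like, level repulsion
poisson = lambda s: np.exp(-s)  # uncorrelated (Poisson) spectrum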
Abstract:
This study aimed to test different protocols for the extraction of microbial DNA from the coral Mussismilia harttii. Four different commercial kits were tested, three of them based on methods for DNA extraction from soil (FastDNA SPIN Kit for Soil, MP Bio; PowerSoil DNA Isolation Kit, MoBio; and ZR Soil Microbe DNA Kit, Zymo Research) and one kit for DNA extraction from plants (UltraClean Plant DNA Isolation Kit, MoBio). Five polyps of the same colony of M. harttii were macerated and aliquots were submitted to DNA extraction with the different kits. After extraction, the DNA was quantified and PCR-DGGE was used to study the molecular fingerprint of Bacteria and Eukarya. Among the four kits tested, the ZR Soil Microbe DNA Kit was the most efficient with respect to the amount of DNA extracted, yielding about three times more DNA than the other kits. We also observed a higher number and intensity of DGGE bands for both Bacteria and Eukarya with the same kit. Considering these results, we suggest that the ZR Soil Microbe DNA Kit is the best suited for the study of the microbial communities of corals.
Abstract:
In the present work we describe a methodology that applies the many-body expansion to decrease the computational cost of ab initio molecular dynamics while keeping acceptable accuracy. We implemented this methodology in a program we call ManBo. In the many-body expansion approach, the total energy E of the system is partitioned into contributions of one body, two bodies, three bodies, and so on, up to the Nth body [1-3]: E = E1 + E2 + E3 + ... + EN. The E1 term is the sum of the internal energies of the molecules; E2 is the energy due to the interaction between all pairs of molecules; E3 is the energy due to the interaction between all trios of molecules; and so on. In ManBo we chose to truncate the expansion at the two- or three-body contribution, both for the calculation of the energy and for the calculation of the atomic forces. In order to partially include the many-body interactions neglected when the expansion is truncated, we can include an electrostatic embedding in the electronic structure calculations, instead of considering the monomers, pairs and trios as isolated molecules in space. In our simulations we chose to simulate water molecules and used Gaussian 09 as the external program to calculate the atomic forces and the energy of the system, as well as the reference program for assessing the accuracy of the results obtained with ManBo. The results show that the many-body expansion is an interesting approach for reducing the still prohibitive computational cost of ab initio molecular dynamics; the errors introduced in the atomic forces by this methodology are very small. The inclusion of an electrostatic embedding appears to be a good way to improve the results with only a small increase in simulation time. As the level of calculation increases, the simulation time of ManBo tends to decrease markedly relative to a conventional BOMD simulation in Gaussian, owing to the better scalability of the presented methodology. References: [1] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 3, 46 (2007). [2] E. E. Dahlke and D. G. Truhlar, J. Chem. Theory Comput. 4, 1 (2008). [3] R. Rivelino, P. Chaudhuri and S. Canuto, J. Chem. Phys. 118, 10593 (2003).
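As an illustration of the truncation described in the abstract above (a minimal sketch, not the ManBo implementation; monomer_energy and dimer_energy are placeholder callables standing in for calls to an external electronic-structure program such as Gaussian 09, optionally with an electrostatic embedding):

from itertools import combinations

def mbe2_energy(molecules, monomer_energy, dimer_energy):
    # Two-body many-body expansion:
    #   E ~ sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)
    e1 = [monomer_energy(m) for m in molecules]  # one-body terms E_i
    total = sum(e1)
    for i, j in combinations(range(len(molecules)), 2):  # two-body corrections
        e_ij = dimer_energy(molecules[i], molecules[j])
        total += e_ij - e1[i] - e1[j]
    return total

The same pattern extends to the three-body truncation by adding, for every trio, E_ijk minus the already-counted one- and two-body pieces.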
Abstract:
Thanks to the Chandra and XMM-Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2-8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by "peculiar" optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected "elusive" sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of "elusive" sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than what is generally observed from unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high-X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (~10⁻¹⁶ erg cm⁻² s⁻¹), high-X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high-X/O population. The results of spectral analysis clearly indicate that the high-X/O sources represent the most obscured component of the X-ray background. Their spectra are harder (Γ ~ 1) than those of any other class of sources in the deep fields, and also than the XRB spectrum (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R − K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED fitting procedure, the nuclear and host-galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extreme red colors is highly efficient in picking up highly obscured, luminous sources at high redshift.
Although XBONGs do not represent a new source population, interest in the nature of these sources has gained renewed attention after the discovery of several examples in recent Chandra and XMM-Newton surveys. Even though several possibilities were proposed in the recent literature to explain why a relatively luminous (L_X = 10⁴²-10⁴³ erg s⁻¹) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. Good-quality photometric near-infrared data (ISAAC/VLT) of 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus not able to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
Abstract:
The relation between intercepted light and orchard productivity has been considered linear, although this dependence seems to be more subordinate to the planting system than to light intensity. At the whole-plant level an increase in irradiance does not always determine a productivity improvement. One of the reasons can be the plant's intrinsic inefficiency in using energy. Generally, in full light only 5-10% of the total incoming energy is allocated to net photosynthesis. Therefore, preserving or improving this efficiency becomes pivotal for scientists and fruit growers. Even though a conspicuous amount of energy is reflected or transmitted, plants cannot avoid absorbing photons in excess. Chlorophyll over-excitation promotes the production of reactive species, increasing the risk of photoinhibition. The dangerous consequences of photoinhibition have forced plants to evolve a complex and multilevel machinery able to dissipate the energy excess as heat (non-photochemical quenching), by moving electrons (the water-water cycle, cyclic transport around PSI, the glutathione-ascorbate cycle and photorespiration) and by scavenging the generated reactive species. The price plants must pay for this equipment is the use of CO2 and reducing power, with a consequent decrease in photosynthetic efficiency, both because some photons are not used for carboxylation and because an effective loss of CO2 and reducing power occurs. Net photosynthesis increases with light until the saturation point; additional PPFD does not improve carboxylation but raises the contribution of the alternative pathways to energy dissipation, as well as ROS production and the risk of photoinhibition. The wide photo-protective apparatus, however, is not always able to cope with the excessive incoming energy, and photodamage therefore occurs. Each event increasing the photon pressure and/or decreasing the efficiency of the described photo-protective mechanisms (i.e. thermal stress, water and nutritional deficiency) can intensify photoinhibition. In nature only a small amount of damaged photosystems is usually found, because of the effective, efficient and energy-consuming recovery system. Since damaged PSII is quickly repaired at an energetic expense, it would be interesting to investigate how much PSII recovery costs in terms of plant productivity. This PhD dissertation aims to improve the knowledge of the several strategies used to manage the incoming energy and of the implications of excess light for photodamage in peach. The thesis is organized in three scientific units. In the first section a new rapid, non-intrusive, whole-tissue and universal technique for functional PSII determination was implemented and validated on different kinds of plants, including C3 and C4 species, woody and herbaceous plants, wild type and chlorophyll b-less mutants, and monocot and dicot plants. In the second unit, using a "singular" experimental orchard named "Asymmetric orchard", the relation between light environment, photosynthetic performance, water use and photoinhibition was investigated in peach at the whole-plant level; furthermore, the effect of photon pressure variation on energy management was considered at the single-leaf level. In the third section the quenching analysis method suggested by Kornyeyev and Hendrickson (2007) was validated on peach. Afterwards it was applied in the field, where the influence of a moderate reduction of light and water on peach photosynthetic performance, water requirements, energy management and photoinhibition was studied.
Using solar energy as the fuel for life is intrinsically hazardous for plants because of the constant high risk of photodamage. This dissertation tries to highlight the complex relation existing between plants, peach in particular, and light, analysing the principal strategies plants have developed to manage the incoming light so as to derive the maximum possible benefit while minimizing the risks. First, the new method proposed for functional PSII determination, based on P700 redox kinetics, appears to be a valid, non-intrusive, universal and field-applicable technique, also because it probes the whole leaf tissue in depth rather than only the first leaf layers, as fluorescence does. The fluorescence Fv/Fm parameter gives a good estimate of functional PSII, but only when data obtained from the adaxial and abaxial leaf surfaces are averaged. In addition to this method, the energy quenching analysis proposed by Kornyeyev and Hendrickson (2007), combined with the photosynthesis model proposed by von Caemmerer (2000), is a powerful tool to analyse and study, even in the field, the relation between the plant and environmental factors such as water, temperature and, above all, light. The "Asymmetric" training system is a good way to study the relations between light energy, photosynthetic performance and water use in the field. At the whole-plant level net carboxylation increases with PPFD up to a saturation point. Excess light, rather than improving photosynthesis, may exacerbate water and thermal stress, leading to stomatal limitation. Furthermore, too much light does not promote an improvement in net carboxylation but rather PSII damage; in fact, in the most light-exposed plants about 50-60% of the total PSII is inactivated. At the single-leaf level, net carboxylation increases up to the saturation point (1000-1200 μmol m⁻² s⁻¹) and the light excess is dissipated by non-photochemical quenching and by non-net-carboxylative transports. The latter follow a pattern quite similar to the Pn/PPFD curve, reaching saturation at almost the same photon flux density. At middle-low irradiance NPQ seems to be lumen-pH limited, because the incoming photon pressure is not enough to generate the optimum lumen pH for full activation of violaxanthin de-epoxidase (VDE). Peach leaves try to cope with the light excess by increasing the non-net-carboxylative transports. As PPFD rises, the xanthophyll cycle is more and more activated and the rate of non-net-carboxylative transports is reduced. Some of these alternative transports, such as the water-water cycle, the cyclic transport around PSI and the glutathione-ascorbate cycle, are able to generate additional H⁺ in the lumen in order to support VDE activation when light is limiting. Moreover, the alternative transports seem to be involved as an important dissipative route when high temperature and sub-optimal conductance increase the risk of photoinhibition. In peach, a moderate reduction of water and light does not determine a decrease in net carboxylation but, by diminishing the incoming light and the environmental evapo-transpiration demand, stomatal conductance decreases, improving water use efficiency. Therefore, by lowering light intensity to non-limiting levels, water could be saved without compromising net photosynthesis. The quenching analysis is able to partition the absorbed energy into the several utilization, photoprotection and photo-oxidation pathways. When recovery is permitted, only a few PSII remain un-repaired, although more net PSII damage is recorded in plants placed in full light.
Even in this experiment, in over-saturating light the main dissipation pathway is non-photochemical quenching; at middle-low irradiance it seems to be pH limited, and other transports, such as photorespiration and the alternative transports, are used to support photoprotection and to contribute to creating the optimal trans-thylakoidal ΔpH for violaxanthin de-epoxidase. These alternative pathways become the main quenching mechanisms in very low light environments. Another aspect pointed out by this study is the role of NPQ as a dissipative pathway when conductance becomes severely limiting. The evidence that in nature only a small amount of damaged PSII is seen indicates the presence of an effective and efficient recovery mechanism that masks the real photodamage occurring during the day. At the single-leaf level, when repair is not allowed, leaves in full light are twofold more photoinhibited than the shaded ones. Therefore, light in excess of the photosynthetic optimum does not promote net carboxylation but increases water loss and PSII damage. The greater the photoinhibition, the more photosystems must be repaired and, consequently, the more energy and dry matter must be allocated to this essential activity. Since above the saturation point net photosynthesis is constant while photoinhibition increases, it would be interesting to investigate what photodamage costs in terms of tree productivity. Another aspect of pivotal importance to be further explored is the combined influence of light and other environmental parameters, such as water status, temperature and nutrition, on light, water and photosynthate management in peach.
Abstract:
The research described in this thesis, as the title itself suggests, is aimed at reducing fuel consumption for cars with a strongly sporty character and high specific performance. In particular, all the activities described refer to a well-defined vehicle model, the Maserati Quattroporte. The scenario in which this work is set is one of strong pressure to reduce the so-called greenhouse gases, i.e. carbon dioxide, in line with the provisions of the Kyoto Protocol. The need to reduce CO2 emissions into the atmosphere is affecting every sector of society: from the heating of private buildings to that of industrial plants, from power generation to production processes in the broadest sense. Within this picture, car manufacturers are clearly called upon to make a considerable effort, since a considerable share of the carbon dioxide produced every day and released into the atmosphere is attributed to automobiles. To the delicate problem of pollution must be added another, perhaps even more pressing and direct, issue of an economic nature. Fossil fuels, as everyone knows, are a non-renewable energy source, whose availability is tied to deposits located in particular areas of the planet and is not inexhaustible. Moreover, the socio-political situation the Middle East is facing, together with the growing demand from countries whose industrialization has only recently started at a dizzying pace, has literally caused the price of oil to soar. For this reason, a vehicle that is efficient in the broad sense, and therefore has low fuel consumption, is a product feature appreciated from a marketing point of view, even in the highest vehicle segments. In this research, the fuel-consumption problem has been addressed as a consequence of the overall behaviour of the vehicle in terms of efficiency, evaluating the best compromise among the different functional areas making up the vehicle. A substantial part of the work was devoted to the development of a calculation model with which to carry out a series of sensitivity analyses on the influence of the various vehicle parameters on overall fuel consumption. On the basis of these indications, a modification of the gear ratios of the electro-actuated gearbox was proposed, with the aim of optimizing the compromise between fuel consumption and performance without significantly compromising the latter. The proposed solution was actually built and tested on the vehicle, making it possible to verify the results and to carry out a thorough correlation of the fuel-consumption calculation model. The benefit obtained in terms of driving range was decidedly significant with reference to both the European and the US homologation cycles. The repercussions on performance were also analysed, and here too the large amount of data collected made it possible to improve the correlation level of the performance simulation model. The vehicle with the newly proposed gear ratios was then compared with a Maserati Quattroporte prototype equipped with an automatic gearbox and torque converter.
This further activity made it possible to evaluate the different behaviour of the two solutions, both in terms of instantaneous fuel consumption and of overall consumption measured during the main chassis-dynamometer test cycles prescribed by the regulations. The last section of the work was devoted to the evaluation of the energy efficiency of the vehicle system, understood as the resistance to motion encountered while travelling at a given speed. The coast-down curves of the Quattroporte and of some competitors were investigated experimentally, and measures were proposed aimed at reducing the aerodynamic drag coefficient, under the constraint of not altering the vehicle's styling.
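For reference, coast-down measurements of the kind mentioned above are commonly reduced to a road-load polynomial fitted to the measured deceleration (a standard relation, not necessarily the exact formulation used in the thesis):

F_{road}(v) = f_0 + f_1 v + f_2 v^2, \qquad m_{eff}\,\frac{dv}{dt} = -F_{road}(v)

where f_0 mainly captures rolling resistance, f_2 is dominated by the aerodynamic term \tfrac{1}{2}\rho C_d A, and m_eff includes the rotating-inertia contribution.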
Abstract:
Large-scale wireless ad hoc networks of computers, sensors, PDAs etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad hoc networks are sensor networks, which are usually composed of small units able to sense and transmit to a sink elementary data which are subsequently processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction of energy consumption, are rapidly changing the potential of such systems, moving the attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without reliance on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which happens in geographic-based routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid allows multidimensional data management, since the nodes' virtual coordinates can act as a distributed database without requiring any special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored and managed. We show how a location service can be easily implemented so that any search is reduced to a simple query, just as for any other data type. WGrid has then been extended by adopting a replication methodology; we call the resulting algorithm WRGrid. Just like WGrid, WRGrid acts as a distributed database without requiring any special implementation or reorganization, and any kind of data can be distributed, stored and managed. We have evaluated the benefits of replication on data management, finding, from experimental results, that it can halve the average number of hops in the network. The direct consequences of this are a significant improvement in energy consumption and a better workload balance among sensors (number of messages routed by each node). Finally, thanks to the replicas, whose number can be arbitrarily chosen, the resulting sensor network can cope with sensor disconnections/connections due to sensor failures without data loss. Another extension to WGrid is W*Grid, which strongly improves network recovery performance after link and/or device failures that may happen due to crashes or battery exhaustion of devices or to temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each couple of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive number of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and number of coordinates. The performance analysis has been compared with existing algorithms in order to validate the results.
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, and also universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this Thesis we first give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose: • a detailed, simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose Equalized, a new routing algorithm designed to optimize the use of the network resources while also increasing its performance; • a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip; • a detailed analysis of an ST Microelectronics-proprietary transport-level protocol that the author of this Thesis helped to develop; • a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip; • a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and makes it possible to reduce the power and area demands of NoC interconnects while also reducing their buffer needs; • a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual-channel-based routers with multiple, flexible and small multi-plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce. This Thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be the aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology to answer the increasing requests for dynamic bandwidth allocation and to configure multiple topologies over the same physical-layer infrastructure; however, optical networks today are still "far" from being directly accessible for configuring and offering network services, and need to be enriched with more "user-oriented" functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to improve the usability and accessibility of the services provided by the optical network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies facilitates user and application access to abstracted levels of information regarding the offered advanced network services. This thesis addresses the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, this work has focused on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies, Optical Burst Switching and Optical Packet Switching, are emerging and promoting the development of the future optical transport network. The two technologies promise to provide all-optical burst or packet switching, respectively, instead of the current circuit switching. However, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This open issue is addressed in this thesis by proposing a highly efficient implementation of the burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into a simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
Abstract:
Currently, π-conjugated polymers are considered technologically interesting materials to be used as functional building elements for the development of the new generation of optoelectronic devices. More specifically, during the last few years poly-p-phenylene materials have attracted considerable attention for their blue photoluminescence properties. This Thesis deals with the optical properties of the most representative blue-light poly-p-phenylene emitters, such as poly(fluorene), oligo(fluorene), poly(indenofluorene) and ladder-type penta(phenylene) derivatives. In the present work, laser-induced photoluminescence spectroscopy is used as the major tool for the study of the interdependence between the dynamics of the probed photoluminescence, the molecular structures of the prepared polymeric films and the presence of chemical defects. Complementary results obtained by two-dimensional wide-angle X-ray diffraction are reported. These findings show that the different optical properties observed are influenced by the intermolecular solid-state interactions, which in turn are controlled by the pendant groups of the polymer backbone. Significant feedback is provided regarding the positive impact of a new synthetic route for the preparation of a poly(indenofluorene) derivative on the spectral purity of the compound. The energy transfer mechanisms that operate in the studied systems are addressed by doping experiments. After the evaluation of the structure/property interdependence, a new optical excitation pathway is presented: an efficient low-energy photon up-conversion that sensitises the blue emission of poly(fluorene) is demonstrated. The observed phenomenon takes place in poly(fluorene) derivative hosts doped with metallated octaethyl porphyrins, after quasi-CW photoexcitation at intensities on the order of kW/cm². The up-conversion process is parameterised in terms of temperature, excitation wavelength and the central metal cation in the porphyrin ring. Additionally, the observation of the up-conversion is extended to a broad range of poly-p-phenylene blue-light-emitting hosts. The dependence of the detected up-conversion intensity on the excitation intensity and doping concentration is reported. Furthermore, the dynamics of the up-conversion intensity are monitored as a function of the doping concentration. These experimental results strongly suggest the existence of triplet-triplet annihilation events in the porphyrin molecules that are subsequently followed by energy transfer to the host. After confirming the occurrence of the up-conversion in solutions, cyclic voltammetry is used to show that the up-conversion efficiency is partially determined by the energetic alignment between the HOMO levels of the host and the dopant.
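For context, the triplet-triplet annihilation picture invoked above is commonly summarized by a rate equation for the porphyrin triplet density (a generic textbook form, not the specific model fitted in this work):

\frac{dn_T}{dt} = G - \frac{n_T}{\tau_T} - \gamma_{TT}\, n_T^{2}

where G is the triplet generation rate (proportional to the excitation intensity), τ_T the triplet lifetime and γ_TT the annihilation constant; the up-converted emission scales with the annihilation term γ_TT n_T², which typically gives a quadratic dependence on excitation intensity at low intensities and a linear one at high intensities.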
Abstract:
Conjugated polymers have attracted tremendous academic and industrial research interest over the past decades due to the appealing advantages that organic/polymeric materials offer for electronic applications and devices such as organic light emitting diodes (OLED), organic field effect transistors (OFET), organic solar cells (OSC), photodiodes and plastic lasers. The optimization of organic materials for applications in optoelectronic devices requires detailed knowledge of their photophysical properties, for instance the energy levels of excited singlet and triplet states, excited-state decay mechanisms and charge carrier mobilities. In the present work a variety of different conjugated (co)polymers, mainly polyspirobifluorene- and polyfluorene-type materials, was investigated using time-resolved photoluminescence spectroscopy in the picosecond to second time domain to study their elementary photophysical properties and to gain a deeper insight into structure-property relationships. The experiments cover fluorescence spectroscopy using streak camera techniques as well as time-delayed gated detection techniques for the investigation of delayed fluorescence and phosphorescence. All measurements were performed both in the solid state, i.e. on thin polymer films, and in dilute solutions. Starting from the elementary photophysical properties of conjugated polymers, the experiments were extended to studies of singlet and triplet energy transfer processes in polymer blends, polymer-triplet emitter blends and copolymers. The phenomenon of photon-energy upconversion was investigated in blue-light-emitting polymer matrices doped with metallated porphyrin derivatives, assuming a bimolecular annihilation upconversion mechanism, which could be experimentally verified on a series of copolymers. This mechanism allows for more efficient photon-energy upconversion than previously reported for polyfluorene derivatives. In addition to the spectroscopic experiments described above, amplified spontaneous emission (ASE) in thin-film polymer waveguides was studied employing a fully arylated poly(indenofluorene) as the gain medium. It was found that the material exhibits a very low threshold value for amplification of blue light combined with an excellent oxidative stability, which makes it interesting as an active material for organic solid-state lasers. Apart from the spectroscopic experiments, transient photocurrent measurements on conjugated polymers were also performed to elucidate the charge carrier mobility in the solid state, which is an important material parameter for device applications. A modified time-of-flight (TOF) technique using a charge carrier generation layer allowed hole transport to be studied in a series of spirobifluorene copolymers, in order to unravel the structure-mobility relationship by comparison with the homopolymer. Not only could the charge carrier mobility be determined for the series of polymers, but field- and temperature-dependent measurements analyzed in the framework of the Gaussian disorder model also showed that the results coincide very well with the predictions of the model. Thus, the validity of the disorder concept for charge carrier transport in amorphous glassy materials could be verified for the investigated series of copolymers.
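For reference, analyses within the Gaussian disorder model mentioned above are usually based on Bässler's empirical mobility expression (the standard form of the model; the abstract does not state the exact parameterization used):

\mu(E,T) = \mu_{\infty} \exp\!\left[-\left(\frac{2\sigma}{3k_{B}T}\right)^{2}\right] \exp\!\left[C\left(\left(\frac{\sigma}{k_{B}T}\right)^{2} - \Sigma^{2}\right)\sqrt{E}\right]

where σ is the width of the Gaussian density of states (energetic disorder), Σ the positional disorder parameter, E the electric field and C an empirical constant; field- and temperature-dependent TOF mobilities are fitted to extract σ, Σ and μ∞.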
Abstract:
Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, represent an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow introducing electronic systems in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out: first, reliability issues in terms of security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases the network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme can be used to evaluate both jitter and process parameter variations at low cost. Then, reliability issues in the field of "energy scavenging systems" are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations. Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed with the development of an electrical and thermal model.