834 results for distributed heating


Relevance:

20.00%

Publisher:

Abstract:

With business environments no longer confined to geographical borders, the new wave of digital technologies has given organizations an enormous opportunity to bring together their distributed workforce and develop the ability to work together despite being apart (Prasad & Akhilesh, 2002). Presupposing creativity to be a social process, we question how this phenomenon occurs when the configuration of the team is substantially modified. Very little is known about the impact of interpersonal relationships on creativity (Kurtzberg & Amabile, 2001). In order to analyse the ways in which the creative process may develop, we ought to take into consideration the fact that participants are dealing with quite an atypical situation. Firstly, in these cases socialization takes place amongst individuals belonging to a geographically dispersed workplace, where interpersonal relationships are mediated by the computer and where trust must be developed among persons who have never met one another. Participants not only have multiple addresses and locations, but above all different nationalities, cultures, attitudes, thoughts, working patterns, and languages. Therefore, the central research question of this thesis is as follows: "How does the creative process unfold in globally distributed teams?" Adopting a qualitative approach, we conducted a case study of the Business Unit of Volvo 3P, an arm of the Volvo Group. Throughout this research, we interviewed seven teams engaged in the development of a new product in the chassis and cab areas for the Volvo and Renault Trucks brands, teams that were geographically distributed across Brazil, Sweden, France and India. Our research suggests that corporate values, together with intrinsic motivation and the task itself, lay down the necessary foundations for the development of the creative process in globally distributed teams (GDT).

Relevance:

20.00%

Publisher:

Abstract:

Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows the nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and possibly calls for much more expensive multiple-gateway infrastructures. This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
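
The power gain from coherent combining, and the reason a common phase reference matters, can be illustrated with a short simulation. The sketch below implements a generic one-bit-feedback alignment loop (each node perturbs its phase at random and keeps the perturbation only if the receiver reports higher signal strength); it illustrates the general idea only and is not one of the algorithms developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 20                                                 # number of sensor nodes
    channel = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # unknown per-node phase offsets
    phases = np.zeros(N)                                   # phase corrections applied by the nodes

    def rx_power(ph):
        # received power when all nodes transmit the same unit-amplitude symbol
        return np.abs(np.sum(channel * np.exp(1j * ph))) ** 2

    best = rx_power(phases)
    for _ in range(2000):
        trial = phases + rng.normal(0, 0.3, N)             # random perturbation at every node
        p = rx_power(trial)
        if p > best:                                       # single feedback bit: better / not better
            phases, best = trial, p

    print(f"unaligned power ~ N = {N}, aligned power = {best:.1f} (ideal N^2 = {N**2})")

With perfectly aligned phases the received power approaches N^2, i.e. a gain on the order of the network size per unit of per-node power, which is the regime that more energy-efficient alignment schemes, such as those proposed in the thesis, aim to reach with fewer feedback iterations.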

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middleware for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, which reduce the data sent by the network nodes in order to prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, exploiting the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered from real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared against a real data set, and an analysis of the trade-off between reconstruction quality and network lifetime is given.
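
As a minimal illustration of the CS principle discussed here (and not of the specific algorithms evaluated in the thesis), the sketch below recovers a k-sparse signal of length n from m << n random projections using a plain orthogonal matching pursuit written in NumPy; all sizes are arbitrary example values.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 256, 64, 8                              # signal length, measurements, sparsity
    x = np.zeros(n)
    support = rng.choice(n, k, replace=False)
    x[support] = rng.normal(0, 1, k)                  # k-sparse "sensor" signal

    Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))       # random measurement matrix
    y = Phi @ x                                       # m compressed samples sent over the radio

    def omp(Phi, y, k):
        # orthogonal matching pursuit: greedy sparse recovery
        residual, idx = y.copy(), []
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(Phi.T @ residual))))   # most correlated column
            coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
            residual = y - Phi[:, idx] @ coef
        x_hat = np.zeros(Phi.shape[1])
        x_hat[idx] = coef
        return x_hat

    x_hat = omp(Phi, y, k)
    print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))

In a WSN setting the node only computes and transmits y, so the radio sends m values instead of n, while the computationally heavy reconstruction runs at the sink.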

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, the self-assembled functional structure of a broad range of amphiphilic molecular transporters is studied. By employing paramagnetic probe molecules and ions, continuous-wave and pulsed electron paramagnetic resonance spectroscopy reveal information about the local structure of these materials from the perspective of incorporated guest molecules. First, the focus is on the transport function of human serum albumin for fatty acids. As suggested by the crystal structure, the anchor points for the fatty acids are distributed asymmetrically in the protein. In contrast to the crystallographic findings, a remarkably symmetric entry-point distribution of the fatty acid binding channels is found, which may facilitate the uptake and release of the guest molecules. Further, the metal binding of 1,2,3-triazole-modified star-shaped cholic acid oligomers is studied. These biomimetic molecules are able to include and transport molecules in solvents of different polarity. A pre-arrangement of the triazole groups induces strong chelate-like binding and close contact between guest molecule and metal ion. In the absence of such preordering, each triazole moiety acts as a single entity and the binding affinity for metal ions is strongly decreased. Hydrogels based on N-isopropylacrylamide phase separate from water above a certain temperature. The macroscopic thermal collapse of these hydrogels is utilized as a tool for dynamic nuclear polarization. It is shown that a radical-free hyperpolarized solution can be achieved with a spin-labeled gel as a separable matrix. On the nanoscale, these hydrogels form static heterogeneities in both structure and function. Collapsed regions protect the spin probes from chemical decay, while open, water-swollen regions act as catalytic centers. Similarly, thermoresponsive dendronized polymers form structural heterogeneities, which are, however, highly dynamic. At the critical temperature, they trigger the aggregation of the polymer into mesoglobules. The dehydration of these aggregates is a molecularly controlled non-equilibrium process that is facilitated by a hydrophobic dendritic core. Furthermore, a slow heating rate results in a kinetically entrapped non-equilibrium state due to the formation of an impermeable dense polymeric layer at the periphery of the mesoglobule.

Relevance:

20.00%

Publisher:

Abstract:

The wide diffusion of cheap, small, and portable sensors, integrated in an unprecedentedly large variety of devices, and the availability of almost ubiquitous Internet connectivity make it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services that have the potential to improve people's quality of life in a variety of domains such as entertainment, health care, and energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to provisioning differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and by identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
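
As a purely hypothetical sketch of what application-level QoS knowledge could look like at the middleware interface (the names and fields below are illustrative and are not the actual Quasit or LAAR API): each stream declares its delivery guarantee and a tolerated loss fraction, and the middleware derives how many operator replicas it must keep active, which is where weaker guarantees translate into lower cost.

    from dataclasses import dataclass

    @dataclass
    class StreamQoS:
        # illustrative per-stream quality requirements (hypothetical, not the Quasit API)
        max_latency_ms: int        # end-to-end processing deadline
        delivery: str              # "at-least-once" or "best-effort"
        min_completeness: float    # fraction of tuples that must survive operator failures

    def replication_factor(qos: StreamQoS, node_availability: float = 0.95) -> int:
        # weaker guarantees (partial fault tolerance) need fewer active replicas,
        # which is the cost saving that dynamic replication schemes exploit
        if qos.delivery == "best-effort":
            return 1
        replicas, loss = 1, 1 - node_availability
        while loss > 1 - qos.min_completeness:
            replicas += 1
            loss *= 1 - node_availability
        return replicas

    print(replication_factor(StreamQoS(100, "at-least-once", 0.999)))   # -> 3 replicas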

Relevance:

20.00%

Publisher:

Abstract:

Modern software systems, in particular distributed ones, are everywhere around us and are at the basis of our everyday activities. Hence, guaranteeing their correctness, consistency and safety is of paramount importance. Their complexity, however, makes the verification of such properties a very challenging task. It is natural to expect that these systems are reliable and, above all, usable. i) In order to be reliable, compositional models of software systems need to account for consistent dynamic reconfiguration, i.e., changing at runtime the communication patterns of a program. ii) In order to be usable, compositional models of software systems need to account for interaction, which can be seen as communication patterns among components that collaborate to achieve a common task. The aim of this Ph.D. work was to develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems appeared to be an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, such as deadlock or livelock freedom, in a concurrent setting. The main contributions of this dissertation are twofold. i) On the components side: we design types and a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations related to modifications of communication patterns in a program during execution time. ii) On the communication side: we study advanced safety properties related to communication in complex distributed systems, such as deadlock-freedom, livelock-freedom and progress. Most importantly, we exploit an encoding of the types and terms of a typical distributed language, the session π-calculus, into the standard typed π-calculus, in order to understand their expressive power.
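
As a standard textbook illustration of the kind of static description these type systems build on (not an excerpt from the thesis), a binary session type and its dual can be written as

    S            \;=\; \mathord{!}\,\mathsf{Int}.\;\mathord{?}\,\mathsf{Bool}.\;\mathbf{end}
    \overline{S} \;=\; \mathord{?}\,\mathsf{Int}.\;\mathord{!}\,\mathsf{Bool}.\;\mathbf{end}

that is, one endpoint sends an Int, receives a Bool and terminates, while its partner performs the complementary actions. Typing the two endpoints of a channel with dual session types guarantees that every send is matched by a receive of the expected type; progress, deadlock-freedom and livelock-freedom are the stronger properties layered on top of this basic duality.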

Relevance:

20.00%

Publisher:

Abstract:

The distribution pattern of European arctic-alpine disjunct species is of growing interest among biogeographers due to the variety of demographic histories inferred for them. In this thesis I used the co-distributed mayfly Ameletus inopinatus and the stonefly Arcynopteryx compacta as model species to investigate the European Pleistocene and Holocene history of stream-inhabiting arctic-alpine aquatic insects. I used last glacial maximum (LGM) species distribution models (SDMs) to derive hypotheses on glacial survival during the LGM and the recolonization of Fennoscandia: 1) both species potentially survived glacial cycles in periglacial, extra-Mediterranean refugia, and 2) postglacial recolonization of Fennoscandia originated from these refugia. I tested these hypotheses using mitochondrial sequence (mtCOI) and species-specific microsatellite data. Additionally, I used future SDMs to predict the impact of climate-change-induced range shifts and habitat loss on the overall genetic diversity of the endangered mayfly A. inopinatus. I observed old lineages, deep splits, and almost complete lineage sorting of mtCOI sequences between mountain ranges. These results support the hypothesis that both species persisted in multiple periglacial extra-Mediterranean refugia in Central Europe during the LGM. However, the recolonization of Fennoscandia was very different between the two study species. For the mayfly A. inopinatus I found strong differentiation between the Fennoscandian and all other populations in both sequence and microsatellite data, indicating that Fennoscandia was recolonized from an extra-European refugium. High mtCOI genetic structure within Fennoscandia supports a recolonization by multiple lineages from independent refugia. However, this structure was not apparent in the microsatellite data, consistent with secondary contact without sexual incompatibility. In contrast, the stonefly A. compacta exhibited low genetic structure and shared mtCOI haplotypes between Fennoscandia and the Black Forest, suggesting a shared Pleistocene refugium in the periglacial tundra belt. Again, there is incongruence with the microsatellite data, which could be explained by ancestral polymorphism or female-biased dispersal. Future SDMs project major regional habitat loss for the mayfly A. inopinatus, particularly in Central European mountain ranges. By relating these range shifts to my population genetic results, I identified conservation units, primarily in Eastern Europe, that, if preserved, would maintain high levels of the present-day genetic diversity of A. inopinatus and continue to provide long-term suitable habitat under future climate warming scenarios. In this thesis I show that, despite similar present-day distributions, the underlying demographic histories of the study species are vastly different, which might be due to differing dispersal capabilities and niche plasticity. I present genetic, climatic, and ecological data that can be used to prioritize conservation efforts for cold-adapted freshwater insects in light of future climate change. Overall, this thesis provides a next step in filling the knowledge gap regarding molecular studies of the arctic-alpine invertebrate fauna. However, there is a continued need to explore the phenomenon of arctic-alpine disjunctions to help understand the processes of range expansion, regression, and lineage diversification in Europe's high-latitude and high-altitude biota.

Relevance:

20.00%

Publisher:

Abstract:

The generation of electron beams with high intensity (I ≥ 2 mA) and high spin polarization (P ≥ 85%) is indispensable for the experiments at the planned "linac-ring" electron-ion colliders (e.g., eRHIC at Brookhaven National Laboratory), but at the same time poses an enormous challenge. Photoemission from GaAs-based semiconductors, such as the GaAlAs/InGaAlAs quantum superlattices investigated in this work, offers high brilliance, but the low quantum efficiency of only about 1% in the region of maximum polarization requires high laser intensities of several watts per cm², which causes considerable thermal problems. In this work it was first shown that the lifetime of a photocathode decreases exponentially with increasing laser power and temperature. By inserting a DBR mirror between the active zone of the photocathode and its substrate, a large fraction of the unused laser light is reflected back out of the crystal and therefore does not contribute to heating. At the same time, the mirror, together with the interface to the vacuum, forms a resonator structure that encloses the active zone. As a result, constructive interference occurs for certain wavelengths and the absorption in the active zone increases. Both effects were demonstrated by comparative measurements on cathodes with and without a DBR mirror, and the results agree well with the prediction of a model based on the dielectric functions of the individual semiconductor structures. Of particular practical importance is that, for a given photoemission current, the DBR cathode heats up by a factor of at least 3.5 less. This holds over the entire wavelength range in which the cathode can produce high beam polarization (P > 80%), including the resonance region. Time-resolved measurements of the charge distribution and polarization allow conclusions to be drawn both about the transport mechanisms inside a cathode and about the condition of its surface. Within this dissertation, the measurement speed of the apparatus was substantially increased by installing a faster detector and by automating the measurement procedure, and the resulting time resolution, now 1.2 picoseconds, was nearly doubled. The results obtained with these improvements show that electron transport in superlattice structures differs strongly from the transport in the bulk crystals studied so far. The character of the motion does not follow the diffusion model but rather points to localized states close to the conduction band edge that can trap electrons for short times. As a result, the impulse response of a cathode exhibits, in addition to a fast decay of the signal, a larger time constant that still produces a signal on the order of about 5‰ of the maximum intensity even after 30 ps.
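
The constructive-interference effect of the DBR can be illustrated with a generic thin-film transfer-matrix calculation; this is a textbook method, not the thesis's own model (which is based on the measured dielectric functions), and the refractive indices, layer count and wavelengths below are rough, illustrative values rather than the parameters of the actual cathode.

    import numpy as np

    def stack_reflectance(wavelength_nm, layers, n_in=1.0, n_sub=3.6):
        # normal-incidence reflectance of a thin-film stack via the transfer-matrix method;
        # layers is a list of (refractive_index, thickness_nm) pairs, light enters from n_in
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2 * np.pi * n * d / wavelength_nm                 # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        r = (n_in * B - C) / (n_in * B + C)
        return abs(r) ** 2

    # illustrative quarter-wave stack centred at 780 nm with AlGaAs-like index contrast
    lam0, n_hi, n_lo = 780.0, 3.5, 3.0
    dbr = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))] * 20

    for lam in (700, 780, 860):
        print(lam, "nm -> R =", round(stack_reflectance(lam, dbr), 3))

Near the design wavelength the stack reflects most of the otherwise unused pump light back out of the crystal, and enclosing the active zone between such a mirror and the semiconductor-vacuum interface creates the resonance that enhances absorption at particular wavelengths.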

Relevance:

20.00%

Publisher:

Abstract:

To investigate effects occurring during the laser heating of polymers, a temperature measurement setup was developed. The measurement principle is based on evaluating the thermal emission. The setup consists of a high-resolution camera equipped with an image intensifier, interference filters to provide spectral resolution, and a pulsed NIR heating laser. The laser pulse duration is on the order of 10 µs and, with appropriate focusing, the beam diameter is on the order of 10 µm. By fitting Planck's radiation law to the recorded thermal emission, 2D temperature maps were obtained, with a spatial resolution of 1 µm and a temporal resolution of 1 µs. In combination with finite-element simulations, this setup was used to study the laser ablation of various polymers. It was found that polymers with a glass transition between room temperature and the decomposition temperature underwent photomechanical ablation. For these polymers the ablation threshold lay several tens of kelvin above the glass transition, far below the decomposition temperature determined by thermogravimetric experiments with typical heating rates of 10 K/min. At high laser energies, and hence high temperatures, thermal decomposition was observed instead. The transition from photomechanical ablation to ablation by thermal decomposition occurred at temperatures well above the thermogravimetric decomposition temperature of the polymer. This is caused by the short reaction times of the laser experiment, on the order of the pulse duration, and is consistent with the Arrhenius law. Polymers without a glass transition in the heated temperature range, by contrast, showed no photomechanical ablation but exclusively thermal decomposition. Here, too, the ablation threshold lay at higher temperatures, in accordance with the Arrhenius law, and at high laser energies temperatures several hundred kelvin above the decomposition temperature were reached. A drastic overheating of the polymer, as described in the literature, was not observed. Rather, the experimental findings indicate that the hot material consisted of thermal decomposition products, polymer fragments, monomer and decomposition products of the monomer, or that the temperature profile of the decomposition reaction itself was visualized.
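
The temperature-retrieval step, fitting Planck's radiation law to the emission recorded behind a few interference filters, can be sketched as follows; the wavelengths, signal values and the free scaling factor (standing in for emissivity and instrument response) are illustrative and are not data from the thesis. In the actual setup such a fit is performed per pixel to obtain the 2D temperature maps.

    import numpy as np
    from scipy.optimize import curve_fit

    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23      # Planck constant, speed of light, Boltzmann constant

    def planck(lam_m, T, scale):
        # spectral radiance (arbitrary units): Planck's law times an emissivity/instrument factor
        return scale * (2 * h * c**2 / lam_m**5) / np.expm1(h * c / (lam_m * kB * T))

    # illustrative signals of one pixel behind three interference filters (arbitrary units)
    lam = np.array([550e-9, 650e-9, 750e-9])
    signal = planck(lam, 900.0, 1e-12) * (1 + 0.02 * np.random.default_rng(0).normal(size=3))

    (T_fit, scale_fit), _ = curve_fit(planck, lam, signal, p0=(1200.0, 1e-12))
    print(f"fitted temperature: {T_fit:.0f} K")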

Relevance:

20.00%

Publisher:

Abstract:

A Smart City is a high-performance urban context in which citizens live independently and are more aware of the surrounding opportunities, thanks to the forward-looking development of economic policy, governance, mobility and the environment. ICT infrastructures play a key role in this new research field, also serving as a means for society to let new ideas prosper and new, more efficient approaches to be developed. The aim of this work is to research and develop novel solutions, here called smart services, in order to address several emerging problems and known issues in urban areas and, more generally, in modern society. A specific focus is placed on smart governance and on the privacy issues that have arisen in the cellular age.

Relevance:

20.00%

Publisher:

Abstract:

In particle physics, large computing and storage capacity is needed in order to perform data analysis. The LHC Computing Grid is a computing infrastructure on a global scale and, at the same time, a set of services developed by a large community of physicists and computer scientists, distributed across computing centres all over the world. This infrastructure has proved its value in the analysis of the data collected during Run 1 of the LHC, playing a fundamental role in the discovery of the Higgs boson. Today, Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by many scientific communities. Given the technical requirements of LHC Run 2 (and beyond), the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions. The aim of this thesis is to test a Cloud infrastructure and compare its performance with the LHC Computing Grid. Chapter 1 contains a general account of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments operating at it, with particular attention to the CMS experiment. Chapter 3 covers computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my work on the comparative analysis of Grid and Cloud performance.

Relevance:

20.00%

Publisher:

Abstract:

The goal of this thesis is to study the feasibility of analysing the associated production ttH of the Higgs boson with two top quarks in the CMS experiment, and to evaluate the functionality and features of the next generation of the CMS distributed-analysis toolkit (CRAB version 3) for carrying out such an analysis. In the top-quark physics sector, ttH production is particularly interesting, above all because it represents the only opportunity to study the t-H vertex directly without having to make assumptions about possible contributions from physics beyond the Standard Model. Preparing for this analysis is crucial at this moment, before the start of LHC Run 2 in 2015. To be ready for such a study, the technical implications of carrying out a complete analysis in a distributed computing environment such as the Grid should not be underestimated. For this reason, an analysis of the CRAB3 tool itself (now available in a pre-production version) and a direct performance comparison with CRAB2 are presented and discussed. Suggestions and advice for an analysis team that may eventually be involved in this study are also collected and documented. Chapter 1 introduces high-energy physics at the LHC in the CMS experiment. Chapter 2 discusses the CMS computing model and the Grid distributed-analysis system. Chapter 3 briefly presents top-quark and Higgs-boson physics. Chapter 4 is devoted to preparing the analysis from the point of view of the Grid tools (CRAB3 vs CRAB2). Chapter 5 presents and discusses a feasibility study of a ttH-channel analysis in terms of selection efficiency.

Relevance:

20.00%

Publisher:

Abstract:

This work is devoted to the design and implementation of a "smart" distributed system for access control. The project was developed in the context of "SPOT Software", which needs to improve its internal access-control and attendance-management process in order to increase its usability and efficiency. The general topics of the Internet of Things, Smart Buildings, Smart Cities and embedded systems are addressed, with an in-depth look at the NFC and BLE communication technologies at the core of this work. The design of each of the three nodes of the system, namely the web application, the smart device and the smartphone app, is then discussed, motivating the technological and design choices.