818 results for Efficient lighting


Relevance: 20.00%

Abstract:

The ubiquity of time series data across almost all human endeavors has produced great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used.

In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by "suggesting" to the clustering algorithm that subjectively similar but complex objects belong in a sparser and larger-diameter cluster than is truly warranted.

We introduce the first complexity-invariant distance measure for time series and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower-bound the measure and use a modification of the triangle inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
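For concreteness, here is a minimal Python sketch of a complexity-invariant correction along the lines the abstract describes. The complexity estimate (the length of the series' line plot) and the ratio-based correction factor are the commonly cited formulation in the CID literature, not code taken from the paper itself:

```python
import numpy as np

def complexity_estimate(t: np.ndarray) -> float:
    """Complexity proxy: length of the series' line plot
    (root of the sum of squared consecutive differences)."""
    return float(np.sqrt(np.sum(np.diff(t) ** 2)))

def cid_distance(q: np.ndarray, c: np.ndarray) -> float:
    """Euclidean distance scaled by a complexity correction factor >= 1.
    Pairs of series with very different complexities are pushed apart, so
    complex objects are no longer systematically 'closer' to simple ones."""
    ed = float(np.linalg.norm(q - c))
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    # Guard against division by zero for constant series.
    correction = max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-12)
    return ed * correction

# Toy example: a smooth sine versus a noisy sine.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 128)
smooth, noisy = np.sin(t), np.sin(t) + 0.3 * rng.standard_normal(128)
print(cid_distance(smooth, noisy))
```

Because the correction factor is at least 1, plain Euclidean distance is a lower bound of the measure, which is what makes existing indexing schemes applicable.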

Relevance: 20.00%

Abstract:

This work reports on the construction and spectroscopic analysis of optical micro-cavities (OMCs) that emit efficiently at ~1535 nm. The emission wavelength matches the third transmission window of commercial optical fibers, and the OMCs were entirely based on silicon. The sputtering deposition method was adopted in the preparation of the OMCs, which comprised two Bragg reflectors and one spacer layer made of either Er- or ErYb-doped amorphous silicon nitride. The luminescence signal extracted from the OMCs originated from the 4I13/2 → 4I15/2 transition of Er3+ ions, and its intensity proved to be highly dependent on the presence of Yb3+ ions.

According to the results, the Er3+-related light emission was improved by a factor of 48 when Er3+ was combined with Yb3+ ions and inserted in the spacer layer of the OMC. The results also showed the effectiveness of the present experimental approach in producing Si-based light-emitting structures whose main characteristics are: (a) compatibility with the current microelectronics industry, (b) the deposition of optical-quality layers with accurate composition control, and (c) no need for uncommon elements or compounds, nor for extensive thermal treatments. Along with the fundamental characteristics of the OMCs, this work also discusses the impact of the Er3+-Yb3+ ion interaction on the emission intensity, as well as the potential of the present findings.

Relevance: 20.00%

Abstract:

Chemistry can contribute in many different ways to solving the challenges we face in transforming our inefficient, fossil-fuel-based energy system. The present work was motivated by the search for efficient photoactive materials to be employed in the context of the energy problem: materials to be utilized in energy-efficient devices and in the production of renewable electricity and fuels. We presented a new class of copper complexes that could find application in lighting technologies by serving as luminescent materials in LEC, OLED, and WOLED devices; these technologies may provide substantial energy savings in the lighting sector. Moreover, copper complexes have recently been used as light-harvesting compounds in dye-sensitized photoelectrochemical solar cells, which offer a viable alternative to silicon-based photovoltaic technologies. We also presented a few supramolecular systems containing fullerene, e.g. dendrimers, dyads and triads. The most complex among these arrays, which contain porphyrin moieties, are presented in the final chapter. They undergo photoinduced energy- and electron-transfer processes, including some with long-lived charge-separated states, i.e. the fundamental processes that power artificial photosynthetic systems.

Relevance: 20.00%

Abstract:

The Eduardo Mondlane irrigation scheme, situated in Chókwè District, in the southern part of Gaza province and within the Limpopo River Basin, is the largest in the country, covering approximately 30,000 hectares of land. Built by the Portuguese colonial administration in the 1950s to exploit the agricultural potential of the area through cash-cropping, after Independence it became one of Frelimo's flagship projects aiming at the "socialization of the countryside" and at agricultural economic development through the creation of a state farm and of several cooperatives. The failure of Frelimo's economic reforms, several infrastructural constraints, and local farmers' resistance to collective forms of production led the scheme to a state of severe degradation, aggravated by the floods of the year 2000. A project of technical rehabilitation initiated after the floods is currently accompanied by a strong "efficiency" discourse from the managing institution, which strongly opposes the use of irrigated land for subsistence agriculture, historically a major livelihood strategy for small farmers, particularly for women. In fact, the area has been characterized, since the end of the 19th century, by a stable pattern of male migration towards South African mines, which has resulted in a steady increase of women-headed households (both de jure and de facto).

The relationship between land reform, agricultural development, poverty alleviation and gender equality in Southern Africa has long been debated in the academic literature. Within this debate, the role of agricultural activities in irrigation schemes is particularly interesting considering that, in a drought-prone area, having access to water for irrigation means increased possibilities of improving food and livelihood security and income levels. In the case of Chókwè, local government institutions are endorsing the development of commercial agriculture through initiatives such as partnerships with international cooperation agencies or joint ventures with private investors. While these business models can sometimes lead to positive outcomes in terms of poverty alleviation, it is important to recognize that decentralization and neoliberal reforms occur in the context of a financial and political crisis of the State, which lacks the resources to efficiently manage infrastructures such as irrigation systems. This kind of institutional and economic reform risks accelerating processes of social and economic marginalisation, including landlessness, in particular for poor rural women, who mainly use irrigated land for subsistence production.

The study combines an analysis of the historical and geographical context with the study of relevant literature and original fieldwork. Fieldwork was conducted between February and June 2007 (when I mainly collected secondary data, maps and statistics and conducted preliminary visits to Chókwè) and from October 2007 to March 2008. The fieldwork methodology was qualitative and used semi-structured interviews with central and local government officials, technical experts of the irrigation scheme, civil society organisations, international NGOs, rural extensionists, and water users from the irrigation scheme, in particular women small farmers who are members of local farmers' associations. Thanks to collaboration with the Union of Farmers' Associations of Chókwè, I was able to participate in members' meetings and in education and training activities addressed to women farmer members of the Union, and to organize a group discussion.

In the Chókwè irrigation scheme, women account for 32% of the water users of the family sector (comprising plot-holders with less than 5 hectares of land) and for just 5% of the private sector. If one considers the farmers' associations of the family sector (a legacy of Frelimo's cooperatives), women are 84% of total members. However, the security given to them by the land title that they have acquired through occupation is severely endangered by the use that they make of the land, which is considered "non-efficient" by the irrigation scheme authority. Due to reduced access to marketing possibilities and to inputs, training, information and credit, women in actual fact risk seeing their right to access land and water revoked because they are not able to sustain the increasing cost of the water fee. The myth of the "efficient producer" does not take into consideration the inequality and gender discrimination that characterize the neo-liberal market. Expecting small farmers, and in particular women, to be able to compete in the globalized agricultural market seems unrealistic, and can perpetuate unequal gendered access to resources such as land and water.

Relevance: 20.00%

Abstract:

In recent years, thanks to innovative technological advances in supplemental lighting sources and photo-selective filters, the manipulation of light quality (i.e. the spectral composition of sunlight) has demonstrated positive effects on plant performance in ornamental and vegetable crops. However, this aspect has been much less studied in fruit trees, owing to the difficulty of conditioning the light environment of orchards. The aim of the present PhD research was to study the use of different colored nets with selective light transmission in the blue (400-500 nm), red (600-700 nm) and near-infrared (700-1100 nm) wavelengths as a tool for light quality management, and its morphological and physiological effects in field-grown apple trees. Chapter I provides a review of the current status of physiological and technological advances in light quality management in fruit trees. Chapter II shows the main effects of colored nets on morpho-anatomical characteristics (stomatal density, mesophyll structure and leaf mass per area) of apple leaves. Chapter III provides an analysis of the effect of micro-environmental conditions under colored nets on leaf stomatal conductance and leaf photosynthetic capacity. Chapter IV describes a study to evaluate the impact of colored nets on fruit growth potential in apples. Summing up, the results obtained in the present PhD dissertation clearly demonstrate that light quality management through photo-selective colored nets has interesting potential for the manipulation of morphological and physiological traits in apple trees. Covering orchards with colored nets might be an alternative technology to address many of the most important challenges of modern fruit growing, such as the need for efficient use of natural resources (water, soil and nutrients), the reduction of environmental impacts, and the mitigation of possible negative effects of global climate change.

Relevance: 20.00%

Abstract:

Ultrasound imaging is widely used in medical diagnostics, as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures in order to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection that improves the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of both an innovative semi-supervised learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training, which improve system performance while reducing the data collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on real-time 3D echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
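The abstract does not spell out the level-set formulation used; purely as an illustration of the building block named above, here is a minimal NumPy sketch of explicit level-set evolution on a 2D grid (the thesis' GPU framework, 3D data, and image-derived speed terms are assumptions not reproduced here):

```python
import numpy as np

def level_set_step(phi: np.ndarray, speed: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """One explicit evolution step of a level-set function phi:
    phi <- phi + dt * speed * |grad(phi)|, the basic update behind
    level-set segmentation. In a real framework `speed` is derived
    from image features (edges, region statistics)."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-12
    return phi + dt * speed * grad_norm

# Toy example: shrink a circular contour with uniform speed.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 20.0  # signed distance to a circle
for _ in range(50):
    phi = level_set_step(phi, speed=np.full_like(phi, -1.0))
print((phi < 0).sum(), "pixels remain inside the zero level set")
```

The segmented region is wherever phi is negative; iterating the update moves the zero level set, here uniformly inward.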

Relevance: 20.00%

Abstract:

The conventional way of calculating hard scattering processes in perturbation theory using Feynman diagrams is not efficient enough to calculate all necessary processes (for example, for the Large Hadron Collider) to sufficient precision. Two alternatives to order-by-order calculations are studied in this thesis.

In the first part we compare the numerical implementations of four different recursive methods for the efficient computation of Born gluon amplitudes: Berends-Giele recurrence relations and recursive calculations with scalar diagrams, with maximal-helicity-violating vertices, and with shifted momenta. Of the four methods considered, the Berends-Giele method performs best if the number of external partons is eight or more; for fewer than eight external partons, the recursion relation with shifted momenta offers the best performance. When investigating numerical stability and accuracy, we found that all methods give satisfactory results.

In the second part of this thesis we present an implementation of a parton shower algorithm based on the dipole formalism. The formalism treats initial- and final-state partons on the same footing. The shower algorithm can be used for hadron colliders and electron-positron colliders; massive partons in the final state were also included in the shower algorithm. Finally, we studied numerical results for an electron-positron collider, the Tevatron and the Large Hadron Collider.
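The gluon recursions themselves are too involved for a short snippet; as a hedged toy illustrating the shared idea, here is a Berends-Giele-style recursion for color-ordered tree amplitudes in massless scalar phi^3 theory (real gluon currents additionally carry polarization vectors and three- and four-point vertices, all omitted; the kinematics below are toy assumptions):

```python
import math
import random
from functools import lru_cache

def minkowski_sq(p):
    """p^2 = E^2 - px^2 - py^2 - pz^2 (mostly-minus metric)."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def total(momenta):
    """Component-wise sum of a tuple of four-momenta."""
    return tuple(sum(p[mu] for p in momenta) for mu in range(4))

@lru_cache(maxsize=None)
def current(momenta):
    """Off-shell current J(1,...,n) of color-ordered massless phi^3:
    sum over all adjacent splits, times the propagator of the total
    momentum. Memoizing contiguous sub-currents gives polynomial cost
    in n, versus the factorial growth of a sum over Feynman diagrams."""
    n = len(momenta)
    if n == 1:
        return 1.0  # on-shell external leg
    splits = sum(current(momenta[:k]) * current(momenta[k:]) for k in range(1, n))
    return splits / minkowski_sq(total(momenta))

def amplitude(momenta):
    """n-point tree amplitude: the (n-1)-leg current with its final
    propagator amputated (the last leg taken on shell)."""
    legs = momenta[:-1]
    return sum(current(legs[:k]) * current(legs[k:]) for k in range(1, len(legs)))

def random_momentum(rng):
    """A random massless four-momentum (E, px, py, pz)."""
    e = rng.uniform(1.0, 2.0)
    ct, ph = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * math.pi)
    st = math.sqrt(1.0 - ct * ct)
    return (e, e * st * math.cos(ph), e * st * math.sin(ph), e * ct)

# Overall momentum conservation is not enforced in this toy; the point is
# only to show the recursive structure and its cost.
rng = random.Random(1)
print(amplitude(tuple(random_momentum(rng) for _ in range(6))))
```

The cache over contiguous momentum ranges is what turns the naive exponential recursion into a polynomial-cost evaluation, the property that makes Berends-Giele-type methods win at large multiplicity.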

Relevance: 20.00%

Abstract:

The use of linear programming in various areas has increased with the significant improvement of specialized solvers. Linear programs are used as such to model practical problems, or as subroutines in algorithms such as formal proofs or branch-and-cut frameworks. In many situations a certified answer is needed, for example a guarantee that the linear program is feasible or infeasible, or a provably safe bound on its objective value. Most of the available solvers work with floating-point arithmetic and are thus subject to its shortcomings, such as rounding errors or underflow; they can therefore deliver incorrect answers. While adequate for some applications, this is unacceptable for critical applications like flight control or nuclear plant management, due to the potentially catastrophic consequences.

We propose a method that gives a certified answer as to whether a linear program is feasible or infeasible, or returns 'unknown'. The advantage of our method is that it is reasonably fast and rarely answers 'unknown'. It works by computing a safe solution that is in some sense the best possible in the relative interior of the feasible set. To certify the relative interior, we employ exact arithmetic, whose use is nevertheless generally limited to critical places, allowing us to remain computationally efficient. Moreover, when certain conditions are fulfilled, our method is able to deliver a provable bound on the objective value of the linear program. We test our algorithm on typical benchmark sets and obtain higher rates of success compared to previous approaches for this problem, while keeping the running times acceptably small. The computed objective value bounds are in most cases very close to the known exact objective values.

We demonstrate the usability of the method by additionally employing a variant of it in a different scenario, namely to improve the results of a Satisfiability Modulo Theories solver. Our method is used as a black box in the nodes of a branch-and-bound tree to implement conflict learning based on the certificate of infeasibility for linear programs consisting of subsets of linear constraints. The generated conflict clauses are in general small and give good prospects for reducing the search space. Compared to other methods we obtain significant improvements in running time, especially on the large instances.
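The method itself is not spelled out in the abstract; as a minimal sketch of the certification idea it rests on, one can take a candidate point from any floating-point solver and then verify feasibility of Ax <= b in exact rational arithmetic, so that rounding errors cannot produce a false "feasible" verdict (the data below are hypothetical):

```python
from fractions import Fraction

def certify_feasible(A, b, x):
    """Exact check that A @ x <= b, row by row, in rational arithmetic.
    Fraction(float) converts each float losslessly, so the verdict
    carries no rounding error."""
    xq = [Fraction(v) for v in x]
    for row, bi in zip(A, b):
        lhs = sum(Fraction(a) * xv for a, xv in zip(row, xq))
        if lhs > Fraction(bi):
            return False  # this candidate violates a constraint
    return True

# Candidate point from a (hypothetical) floating-point solve of:
# x + y <= 4, x >= 0, y >= 0.
A = [[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
b = [4.0, 0.0, 0.0]
x_candidate = [1.5, 2.0]
print(certify_feasible(A, b, x_candidate))  # True: exact feasibility certificate
```

Aiming the floating-point solve at the relative interior, as the thesis does, makes such a check robust: a strictly interior point stays feasible under the small perturbations that rounding introduces.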

Relevance: 20.00%

Abstract:

Dendritic cells (DCs) are the most potent cell type for the capture, processing, and presentation of antigens. They are able to activate naïve T cells as well as to initiate memory T-cell immune responses. T lymphocytes are key elements in eliciting cellular immunity against bacteria and viruses, as well as in the generation of anti-tumor and anti-leukemia immune responses. Because of their central position in the immunological network, specific manipulation of these cell types offers promising possibilities for novel immunotherapies. Nanoparticles (NP), which have only recently been investigated as carriers of drugs or imaging agents, are well suited for therapeutic applications in vitro and in vivo, since upon surface functionalization they can be addressed to cells with high target specificity. As a first prerequisite, an efficient in vitro labeling of cells with NP has to be established. In this work we developed protocols allowing effective loading of human monocyte-derived DCs and primary antigen-specific T cells with newly designed NP without affecting biological cell functions. Polystyrene NP synthesized by the miniemulsion technique contained perylenemonoimide (PMI) as a fluorochrome, allowing rapid determination of intracellular uptake by flow cytometry. To confirm intracellular localization, NP-loaded cells were analyzed by confocal laser scanning microscopy (cLSM) and transmission electron microscopy (TEM). Functional analyses of NP-loaded cells were performed by IFN-γ ELISPOT, 51Cr-release, and 3H-thymidine proliferation assays.

In the first part of this study, we observed strong labeling of DCs with amino-functionalized NP. Even after 8 days, 95% of DCs had retained nanoparticles, with a median fluorescence intensity of 67% compared to day 1. NP loading influenced neither the expression of cell surface molecules specific for mature DCs (mDCs) nor the immunostimulatory capacity of mDCs. The procedure also did not impair the capability of DCs for the uptake, processing and presentation of viral antigens, which had not previously been shown for NP in DCs. In the second part of this work, the protocol was adapted to the very different conditions of T lymphocytes. We used leukemia-, tumor-, and allo-human leukocyte antigen (HLA) reactive CD8+ or CD4+ T cells as model systems. Our data showed that amino-functionalized NP were also taken up very efficiently by T lymphocytes, which usually have a lower capacity for NP incorporation than other cell types. In contrast to DCs, T cells released 70-90% of the incorporated NP during the first 24 h, which points to the need for the cargo to escape intracellular uptake pathways before export to the outside can occur. Preliminary data with biodegradable nanocapsules (NC) revealed that encapsulated cargo molecules can, in principle, escape from the endolysosomal compartment after loading into T lymphocytes. T-cell function was not influenced by NP load at low to intermediate concentrations of 25 to 150 μg/mL. Overall, our data suggest that NP and NC are promising tools for the delivery of drugs, antigens, and other molecules into DCs and T lymphocytes.

Relevance: 20.00%

Abstract:

The idea of balancing the resources spent in the acquisition and encoding of natural signals against their intrinsic information content has interested nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions of and improvements upon this technique's foundations, by modifying the random sensing matrices onto which the signals of interest are projected in order to achieve different objectives. Firstly, we propose two methods for adapting sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant, form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing in the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results that leave room for improvement in the sensing matrix calibration of the devised imager.
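None of the dissertation's adapted ensembles are reproduced here; as a generic sketch of the compressed sensing pipeline the text builds on (random projection followed by sparse recovery), using scikit-learn's orthogonal matching pursuit:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Toy compressed sensing pipeline: acquire a k-sparse signal with a random
# Gaussian sensing matrix, then recover it by greedy sparse approximation.
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                 # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # k-sparse signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                      # m << n linear measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

The dissertation's contribution sits in how Phi is drawn: shaping its ensemble to the signals' second-order statistics (and perturbing it for the encryption application) rather than using the plain i.i.d. matrix shown here.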

Relevance: 20.00%

Abstract:

This thesis focuses on energy efficiency in wireless networks from both the transmission and the information diffusion points of view. In particular, on one hand, communication efficiency is investigated, attempting to reduce consumption during transmissions, while on the other hand the energy efficiency of the procedures required to distribute information among wireless nodes in complex networks is taken into account. As far as energy-efficient communications are concerned, an innovative transmission scheme reusing "source of opportunity" signals is introduced; this kind of signal had not previously been studied in the literature for communication purposes. The aim is to provide a way of transmitting information with energy consumption close to zero. On the theoretical side, starting from a general communication channel model subject to a limited input amplitude, the theme of low-power transmission signals is tackled from the perspective of stating sufficient conditions for the capacity-achieving input distribution to be discrete. Finally, the focus is shifted towards the design of energy-efficient algorithms for the diffusion of information. In particular, the effort is aimed at solving an estimation problem distributed over a wireless sensor network. The proposed solutions are analyzed in depth, both to ensure their energy efficiency and to guarantee their robustness against losses during the diffusion of information (and, more generally, against truncation of the information diffusion).
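The abstract does not give the estimation algorithms themselves; as an illustration of one standard building block for distributed estimation over a wireless sensor network, here is a hedged sketch of average consensus, where each node repeatedly mixes its estimate with its neighbors' until all nodes agree on the network-wide average of the initial measurements (topology and step size are toy assumptions):

```python
import numpy as np

def average_consensus(x0, neighbors, steps=100, eps=0.1):
    """Distributed averaging: node i updates
        x_i <- x_i + eps * sum_{j in N(i)} (x_j - x_i).
    With a small enough eps on a connected graph, every node converges
    to the average of the initial measurements using only local exchanges."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# 5-node ring network; each node holds one noisy measurement of a scalar.
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
measurements = [4.8, 5.1, 5.0, 4.9, 5.2]
print(average_consensus(measurements, neighbors))  # all entries close to 5.0
```

Robustness to lossy or truncated diffusion, the thesis' concern, amounts to asking how far the nodes' states are from the target when some of these local exchanges fail or stop early.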

Relevance: 20.00%

Abstract:

In many areas of industrial manufacturing, such as the automotive industry, digital mock-ups are used to support the development of complex machines with computer systems as effectively as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. Over the last decades, sampling-based methods have proven particularly successful. They generate a large number of random poses for the object to be installed or removed and use a collision detection mechanism to check the validity of the individual poses. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. A difficulty for this class of planners are so-called "narrow passages", which occur wherever the freedom of movement of the objects to be planned is strongly restricted. In such places it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be necessary to achieve good performance.

This thesis is divided into two parts. In the first part we investigate parallel collision detection algorithms. Since we target an application in sampling-based motion planners, we choose a problem setting in which we always test the same two objects for collision, but in a large number of different poses. We implement and compare several methods that rely on bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All described methods were parallelized across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work across the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a set of approximate collision tests based on the described methods; when a lower accuracy of the tests is tolerable, this yields a further performance improvement.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple narrow passages. The method works in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in Phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with an operation that pushes the object free of small collisions, increasing efficiency in regions with restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests; this further lowers the accuracy of the first planning phase, but also leads to a further performance gain. The motion paths resulting from Phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighborhood around the existing path.

We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit several narrow passages. To the best of our knowledge, a collection of comparably complex benchmarks is not publicly available, nor did we find a description of comparably complex benchmarks in the motion planning literature.
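The thesis' BVH and grid structures are not reproduced in the abstract; as a minimal sketch of the many-poses setting described above, here is a conservative axis-aligned bounding box (AABB) broad-phase test, the cheap primitive that BVH traversals apply at every node (all geometry and poses below are toy assumptions):

```python
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """Axis-aligned boxes overlap iff they overlap on every axis; touching
    counts as overlap here, keeping the test conservative."""
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

def broad_phase(points_static, points_moving, poses):
    """For each candidate pose (rotation R, translation t) of the moving
    part, run the conservative AABB pre-check used before any exact
    narrow-phase collision test."""
    s_min, s_max = points_static.min(0), points_static.max(0)
    hits = []
    for R, t in poses:
        moved = points_moving @ R.T + t
        hits.append(aabb_overlap(s_min, s_max, moved.min(0), moved.max(0)))
    return hits

# Toy example: a unit cube tested against translated copies of itself.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
eye = np.eye(3)
poses = [(eye, np.array([0.5, 0.0, 0.0])),   # overlapping
         (eye, np.array([2.0, 0.0, 0.0])),   # separated
         (eye, np.array([1.0, 1.0, 1.0]))]   # touching at a corner
print(broad_phase(cube, cube, poses))        # [True, False, True]
```

Because the same two objects are tested in many poses, only the moving object's box must be recomputed per pose, which is also what makes the workload easy to spread across CPU cores or GPU threads.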

Relevance: 20.00%

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data have increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas, such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation, implemented as part of Insight.

Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining data from different sources at runtime, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support have led to additional applications of the software, of which two examples are presented: the use of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations. The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction in the size of the considered data. This enables practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and results of segmenting upper-tropospheric jet streams and cyclones as full 3D objects.

Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
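The thesis' segmentation algorithm, with its tracking and event detection, is not reproduced in the abstract; as a hedged illustration of the basic step such tools build on, here is thresholding of a 3D field and labeling of its connected components with SciPy (the field itself is synthetic):

```python
import numpy as np
from scipy import ndimage

# Toy 3D field standing in for a gridded atmospheric variable (e.g. wind
# speed); real input would come from model output or reanalysis files.
rng = np.random.default_rng(0)
field = ndimage.gaussian_filter(rng.standard_normal((40, 60, 60)), sigma=4)

# Segmentation step: threshold the field and group the surviving grid
# points into connected 3D features, the objects later tracked through time.
mask = field > field.mean() + 1.5 * field.std()
labels, n_features = ndimage.label(mask)
sizes = np.bincount(labels.ravel())[1:]  # voxels per feature (label 0 = background)
print(f"{n_features} features; largest spans {sizes.max()} grid points")
```

Replacing the full field by a handful of labeled features is what produces the drastic data reduction the abstract mentions; genesis, lysis, merging and splitting then correspond to how these labels appear, vanish, fuse or divide between time steps.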

Relevance: 20.00%

Abstract:

Rapid development in the field of lighting and illumination has enabled low energy consumption and rapid growth in the use and development of solid-state sources. As the efficiency of these devices increases and their cost decreases, it is predicted that they will become the dominant source for general illumination in the short term. The objective of this thesis is to study, through extensive simulations in realistic scenarios, the feasibility and exploitation of visible light communication (VLC) for vehicular ad hoc network (VANET) applications. A brief introduction presents the new scenario of smart cities, in which visible light communication will become a fundamental enabling technology for future communication systems. Specifically, this thesis focuses on the acquisition of several, frequent, and small data packets from vehicles, exploited as sensors of the environment. The use of vehicles as sensors is a new paradigm enabling efficient environment monitoring and improved traffic management. In most cases, the sensed information must be collected at a remote control centre, and one of the most challenging aspects is the uplink acquisition of data from vehicles. This thesis discusses the opportunity to take advantage of short-range vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications to offload the cellular networks. More specifically, it discusses the system design and assesses the obtainable savings in cellular resources, considering the impact of the percentage of vehicles equipped with short-range communication devices, of the number of deployed roadside units, and of the adopted routing protocol. Where short-range communications are concerned, WAVE/IEEE 802.11p is considered as the standard for VANETs; its use together with VLC is considered in urban vehicular scenarios to let vehicles communicate without involving the cellular network. The study is conducted by simulation, considering both SHINE (simulation platform for heterogeneous interworking networks), developed within the Wireless Communication Laboratory (WiLab) of the University of Bologna and CNR, and the network simulator ns-3, trying to realistically represent all aspects of wireless network communication. Specifically, the simulation of the vehicular system was implemented in ns-3 by creating a new module for the simulator; this module will help to study VLC applications in VANETs. The final observations should encourage further research in the area and help optimize the performance of future VLC system applications.
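The thesis' simulation models are not given in the abstract; as a small illustration of the physical layer such VLC studies rest on, here is the standard line-of-sight DC channel gain of a Lambertian LED link from the VLC literature, with all link parameters below chosen as hypothetical V2V values:

```python
import numpy as np

def vlc_dc_gain(d, phi, psi, half_power_angle, area, fov):
    """Line-of-sight DC channel gain of a Lambertian LED link:
        H(0) = (m+1) * A / (2*pi*d^2) * cos(phi)^m * cos(psi),  psi <= FOV,
    where m is the Lambertian order of the source, phi the irradiance
    angle at the LED, and psi the incidence angle at the photodiode."""
    if psi > fov:
        return 0.0  # receiver outside its field of view
    m = -np.log(2) / np.log(np.cos(half_power_angle))  # Lambertian order
    return (m + 1) * area / (2 * np.pi * d**2) * np.cos(phi)**m * np.cos(psi)

# Hypothetical headlamp-to-photodiode link: 20 m ahead, slight misalignment.
gain = vlc_dc_gain(d=20.0, phi=np.radians(10), psi=np.radians(15),
                   half_power_angle=np.radians(30),
                   area=1e-4,                 # 1 cm^2 photodiode
                   fov=np.radians(60))
print(f"received power fraction: {gain:.2e}")
```

The strong 1/d^2 decay and the hard field-of-view cutoff are what confine VLC to short-range, directional V2V and V2R links, exactly the regime in which it can complement IEEE 802.11p and offload the cellular uplink.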

Relevance: 20.00%

Abstract:

In this thesis we present techniques that can be used to speed up the calculation of perturbative matrix elements for observables with many legs ($n = 3, 4, 5, 6, 7, \ldots$). We investigate several ways to achieve this, including the use of Monte Carlo methods, the leading-color approximation, numerically less precise but faster operations, and SSE vectorization. An important idea is the use of "random polarizations", for which we derive subtraction terms for the real corrections in next-to-leading-order calculations. We demonstrate the effectiveness of all these methods in the context of electron-positron scattering to $n$ jets, with $n$ ranging from two to seven.
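The subtraction terms and the random-polarization construction are beyond a short snippet; as a hedged toy of the underlying Monte Carlo speed-up, here is an estimate of an average over all $2^n$ helicity assignments by sampling instead of enumeration (the "amplitude" below is an arbitrary smooth stand-in, not a real matrix element):

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_hel_amp2(helicities: np.ndarray) -> float:
    """Stand-in for |M|^2 at a fixed helicity assignment (0/1 per leg);
    a real matrix element would be evaluated by recursion."""
    n = len(helicities)
    return float(np.cos(np.sum(helicities * np.arange(1, n + 1)))**2)

n = 12                                  # number of external legs
# Exact helicity average: 2^n terms, exponential cost in n.
exact = sum(toy_hel_amp2(np.array(h))
            for h in np.ndindex(*([2] * n))) / 2**n

# Monte Carlo over helicities: fixed sample count, independent of n.
samples = 2000
mc = np.mean([toy_hel_amp2(rng.integers(0, 2, size=n)) for _ in range(samples)])
print(f"exact average {exact:.4f}, MC estimate {mc:.4f}")
```

Folding the helicity (or polarization) sum into the phase-space Monte Carlo in this way trades the exponential sum for statistical error that shrinks with the sample size, which is what makes high-multiplicity calculations tractable.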