990 results for Dark objects method


Relevance:

30.00%

Publisher:

Abstract:

The last decade has witnessed the establishment of a Standard Cosmological Model, which is based on two fundamental assumptions: the first is the existence of a new non-relativistic kind of particle, i.e. Dark Matter (DM), which provides the potential wells in which structures form, while the second is the presence of Dark Energy (DE), the simplest form of which is represented by the Cosmological Constant Λ, which sources the accelerated expansion of our Universe. These two features are summarized by the acronym ΛCDM, the abbreviation used to refer to the present Standard Cosmological Model. Although the Standard Cosmological Model shows remarkably successful agreement with most of the available observations, it presents some longstanding unsolved problems. A possible way to solve these problems is the introduction of a dynamical Dark Energy, in the form of a scalar field ϕ. In coupled DE models, the scalar field ϕ features a direct interaction with matter in different regimes. Cosmic voids are large under-dense regions of the Universe almost devoid of matter. Being nearly empty, their dynamics is expected to be dominated by DE, to whose nature the properties of cosmic voids should therefore be very sensitive. This thesis work is devoted to the statistical and geometrical analysis of cosmic voids in large N-body simulations of structure formation in the context of alternative competing cosmological models. In particular, we used the ZOBOV code (see ref. Neyrinck 2008), a publicly available void finder algorithm, to identify voids in the halo catalogues extracted from the CoDECS simulations (see ref. Baldi 2012), the largest N-body simulations of interacting Dark Energy (DE) models to date. We identify suitable criteria to produce void catalogues with the aim of comparing the properties of these objects in interacting DE scenarios to the standard ΛCDM model at different redshifts. This thesis work is organized as follows: in chapter 1, the Standard Cosmological Model as well as the main properties of cosmic voids are introduced. In chapter 2, we present the scalar field scenario. In chapter 3, the tools, methods and criteria by which a void catalogue is created are described, while in chapter 4 we discuss the statistical properties of the cosmic voids included in our catalogues. In chapter 5, the geometrical properties of the catalogued cosmic voids are presented by means of their stacked profiles. In chapter 6, we summarize our results and propose further developments of this work.
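
The model comparison described above ultimately reduces to contrasting statistics of the void catalogues, such as the abundance of voids above a given effective radius, between the coupled DE runs and ΛCDM. The following is a minimal sketch of such a comparison, assuming each catalogue has already been reduced to an array of effective void radii; the radii below are synthetic placeholders, not actual ZOBOV output from the CoDECS halo catalogues.

```python
import numpy as np

def cumulative_vsf(radii, box_size, r_grid):
    """Cumulative void size function n(>R): comoving number density of
    voids with effective radius larger than each R in r_grid."""
    counts = np.array([(radii > r).sum() for r in r_grid])
    return counts / box_size**3

# Placeholder catalogues: in the thesis these would be the effective void
# radii returned by ZOBOV for each CoDECS halo catalogue, at one redshift.
rng = np.random.default_rng(0)
radii_lcdm = rng.lognormal(mean=2.3, sigma=0.4, size=5000)   # Mpc/h
radii_cde = rng.lognormal(mean=2.4, sigma=0.4, size=5000)    # Mpc/h

r_grid = np.logspace(0.5, 1.8, 15)            # ~3 to ~60 Mpc/h
n_lcdm = cumulative_vsf(radii_lcdm, 1000.0, r_grid)
n_cde = cumulative_vsf(radii_cde, 1000.0, r_grid)
print("n_cDE(>R) / n_LCDM(>R):", n_cde / np.maximum(n_lcdm, 1e-12))
```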

Relevance:

30.00%

Publisher:

Abstract:

Plasmons are electromagnetic modes of metallic structures in which the quasi-free electrons of the metal oscillate collectively. Over the last decade, the field of plasmonics has developed rapidly, driven by continuing advances in nanostructuring and spectroscopic techniques that have made systematic single-object studies of well-defined nanostructures possible. Besides a radiative enhancement of the optical scattering intensity in the far field, the excitation of plasmons results in a non-radiative enhancement of the field strength in the immediate vicinity of the metallic structure (the near field), caused by the coherent accumulation of charge at the metal surface. The optical near field is therefore a key quantity both for a fundamental understanding of the action and interaction of plasmons and for the optimization of plasmon-based applications. The great challenge lies in the difficulty of accessing the near field experimentally, which has so far prevented a fundamental understanding of it from being developed.
In this work, photoemission electron microscopy (PEEM) and microspectroscopy were used to determine the properties of near-field-induced electron emission with spatial resolution. The electrodynamic properties of the investigated systems were additionally determined by numerical calculations based on the finite integration technique and compared with the experimental results.
Ag disks with a diameter of 1 µm and a height of 50 nm were excited with fs laser radiation at a wavelength of 400 nm under different polarization states. The lateral distribution of the electrons emitted via a 2PPE process was recorded with the PEEM. Comparison with the numerical calculations shows that the near field develops differently at different locations on the metallic structure. In particular, a near field with a finite z-component is induced at the rim of the disk even under s-polarized excitation (vanishing vertical component of the electric field), whereas at the center of the disk the near field is always proportional to the incident electric field.
Furthermore, the near field of optically excited, strongly coupled plasmons was investigated spectrally (750-850 nm) for the first time and compared with the corresponding far-field spectra of identical nano-objects, obtained by measuring the spectral scattering characteristics of the single objects with a dark-field confocal microscope. Au nanoparticles at sub-nanometer distance from an Au film (nanoparticle on plane, NPOP) served as a model system for strongly coupled plasmons. With this combination of complementary methods, the spectral separation of radiative and non-radiative modes of strongly coupled plasmons was demonstrated for the first time. This is of particular relevance for applications, since pure near-field modes have a long lifetime owing to their suppressed radiative decay, so that their field enhancement can be exploited for a particularly long time. The origins of the differences between the spectral behaviour of far and near field were identified by numerical calculations: the near field of non-spherical NPOPs is strongly position-dependent due to the complex oscillatory motion of the electrons within the gap between particle and film. Moreover, the near field of strongly coupled plasmons is considerably more sensitive to structural defects of the resonator than the far-field response. Finally, the electron emission mechanism was identified as an optical field emission process. To describe this process, the Fowler-Nordheim theory of static field emission was modified for the case of harmonically oscillating fields.
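
For reference, the static Fowler-Nordheim current density that the final step modifies has the standard form below. The extension to a harmonically oscillating near field is sketched here only as an illustrative quasi-static cycle average, an assumption on our part rather than the thesis' actual derivation.

```latex
% Static Fowler-Nordheim field emission: current density J as a function
% of the surface field E and the work function \phi; the constants A and
% B collect fundamental constants (electron charge and mass, \hbar).
J_{\mathrm{FN}}(E) = \frac{A\,E^{2}}{\phi}\,
    \exp\!\left(-\frac{B\,\phi^{3/2}}{E}\right)

% Illustrative quasi-static extension (an assumption, not the thesis'
% derivation): drive with E(t) = E_0 \cos(\omega t) and average the
% emitted current over one optical cycle, with emission only for E(t) > 0.
\langle J \rangle = \frac{\omega}{2\pi}\int_{0}^{2\pi/\omega}
    J_{\mathrm{FN}}\big(\max\{E_{0}\cos(\omega t),\,0\}\big)\,\mathrm{d}t
```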

Relevance:

30.00%

Publisher:

Abstract:

The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics and another searching for phenomena beyond the Standard Model.
The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of sqrt(s) = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb⁻¹. The differential cross-section of pp -> Z/gamma + X -> e+e- + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and different levels of final state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. A good agreement within the uncertainties between measured cross-sections and Standard Model predictions is observed.
Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena several extensions of the Standard Model are proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to search for these phenomena. The dataset collected at a center-of-mass energy of sqrt(s) = 8 TeV, corresponding to an integrated luminosity of 20.3 fb⁻¹, is used. The events are separated into different categories using the object multiplicity. The data-driven background method already used for the cross-section measurement was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties the comparison between data and Standard Model predictions shows no significant deviations.
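
The core of such a binned measurement is the conversion of event counts into a differential cross-section, dsigma/dm = (N_obs - N_bkg) / (L * efficiency * bin width). A minimal sketch of that arithmetic follows; all numbers are illustrative placeholders, not the actual ATLAS inputs or results.

```python
import numpy as np

# Minimal sketch of a binned differential cross-section extraction.
# All numbers below are invented placeholders for illustration only.
lumi = 4.7e3                                  # integrated luminosity, pb^-1
bin_edges = np.array([116., 150., 200., 300., 500., 1500.])   # GeV
n_obs = np.array([52000., 14000., 3100., 410., 21.])    # observed events
n_bkg = np.array([1500., 600., 160., 30., 3.])          # estimated background
eff = np.array([0.62, 0.64, 0.66, 0.68, 0.70])          # selection efficiency

dm = np.diff(bin_edges)                                 # bin widths, GeV
dsigma_dm = (n_obs - n_bkg) / (lumi * eff * dm)         # pb / GeV
stat_err = np.sqrt(n_obs) / (lumi * eff * dm)           # Poisson uncertainty
for lo, hi, v, e in zip(bin_edges[:-1], bin_edges[1:], dsigma_dm, stat_err):
    print(f"{lo:6.0f}-{hi:4.0f} GeV: {v:.3e} +/- {e:.1e} pb/GeV")
```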

Relevance:

30.00%

Publisher:

Abstract:

Today we know that ordinary matter accounts for only a small fraction of the total mass content of the Universe. The hypothesis of the existence of Dark Matter, a new type of matter that interacts only gravitationally and, possibly, through the weak force, has been supported by numerous pieces of evidence on both galactic and cosmological scales. Efforts devoted to the search for so-called WIMPs (Weakly Interacting Massive Particles), the generic name given to Dark Matter particles, have multiplied over recent years. The XENON1T experiment, currently under construction at the Laboratori Nazionali del Gran Sasso (LNGS) and expected to be taking data by the end of 2015, will mark a significant step forward in the direct search for Dark Matter, which is based on the detection of elastic collisions on target nuclei. XENON1T represents the current phase of the XENON project, which has already built the XENON10 (2005) and XENON100 (2008, still in operation) experiments and also foresees a further upgrade, called XENONnT. The XENON1T detector uses about 3 tonnes of liquid xenon (LXe) and is based on a double-phase Time Projection Chamber (TPC). Detailed Monte Carlo simulations of the detector geometry, together with dedicated measurements of the radioactivity of the materials and estimates of the purity of the xenon used, have made it possible to accurately predict the expected background. In this thesis, we present the study of the expected sensitivity of XENON1T carried out with the statistical method called the Profile Likelihood (PL) Ratio, which, within a frequentist approach, allows a proper treatment of the systematic uncertainties. As a first step, the sensitivity was estimated using the simplified Likelihood Ratio method, which does not account for any systematics; this allowed us to evaluate the impact of the main systematic uncertainty for XENON1T, namely that on the scintillation light emission of xenon for low-energy nuclear recoils. The final results obtained with the PL method indicate that XENON1T will be able to significantly improve the current WIMP exclusion limits: the best sensitivity reaches a cross-section σ = 1.2·10⁻⁴⁷ cm² for a WIMP mass of 50 GeV/c² and a nominal exposure of 2 tonne·years. These results are in line with XENON1T's ambitious goal of lowering the current limits on the WIMP cross-section σ by two orders of magnitude. With such performance, and considering 1 tonne of LXe as fiducial mass, XENON1T will be able to surpass the current limits (LUX experiment, 2013) after only 5 days of data taking.
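
The profile likelihood ratio construction used for the sensitivity study can be illustrated on a toy single-bin counting experiment, where the background expectation acts as a constrained nuisance parameter that is profiled out at each fixed signal strength. This is only a schematic sketch of the statistical machinery; all numbers are invented, and the real XENON1T likelihood is far richer (energy spectra, multiple backgrounds, the light-yield uncertainty discussed above).

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson

# Toy profile likelihood ratio for a single-bin counting experiment:
# n ~ Poisson(mu*s + b), with the background b constrained by an
# auxiliary Gaussian measurement. Numbers are illustrative placeholders.
n_obs, s_exp = 3, 10.0          # observed counts; expected signal at mu = 1
b_nom, b_sig = 2.0, 0.5         # nominal background and its uncertainty

def nll(mu, b):
    """Negative log-likelihood: Poisson term plus Gaussian constraint."""
    return -(poisson.logpmf(n_obs, mu * s_exp + b) + norm.logpdf(b, b_nom, b_sig))

def profiled_nll(mu):
    """Minimize over the nuisance parameter b at fixed signal strength mu."""
    res = minimize_scalar(lambda b: nll(mu, b), bounds=(1e-6, 10.0), method="bounded")
    return res.fun

# Global minimum over mu >= 0, then the test statistic q(mu) for exclusion.
res = minimize_scalar(profiled_nll, bounds=(0.0, 5.0), method="bounded")
mu_hat, nll_hat = res.x, res.fun

def q(mu):
    # Asymptotically chi^2-distributed with 1 dof (Wilks' theorem).
    return 2.0 * (profiled_nll(mu) - nll_hat)

print(f"mu_hat = {mu_hat:.2f}, q(mu=1) = {q(1.0):.2f}")
```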

Relevance:

30.00%

Publisher:

Abstract:

Most of today's dynamic analysis approaches are based on method traces. However, for object-oriented systems, understanding program execution by analyzing method traces is complicated, because the behavior of a program depends on the sharing and transfer of object references (aliasing). We argue that trace-based dynamic analysis is at too low a level of abstraction for object-oriented systems. We propose a new approach that captures the life cycle of objects by explicitly taking into account object aliasing and how aliases propagate during the execution of the program. In this paper, we present our new meta-model in detail and discuss the future tracks it opens.
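
To make the shift in perspective concrete, here is a minimal sketch of an alias-centric event model in Python: instead of a sequence of method calls, each object carries the history of where references to it were created and propagated. All names here are hypothetical; the paper's actual meta-model is richer.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AliasEvent:
    """One step in the life cycle of an object, seen through its aliases."""
    kind: str        # 'creation', 'argument', 'return', 'field-write', ...
    holder: str      # where the reference now lives (local, parameter, field)
    step: int        # logical execution time

@dataclass
class ObjectLifeCycle:
    object_id: int
    events: List[AliasEvent] = field(default_factory=list)

    def record(self, kind: str, holder: str, step: int) -> None:
        self.events.append(AliasEvent(kind, holder, step))

    def aliases_at(self, step: int) -> List[str]:
        """Holders that have received a reference up to the given step."""
        return [e.holder for e in self.events if e.step <= step]

# Usage: an object is created, passed as an argument, stored in a field.
lc = ObjectLifeCycle(object_id=42)
lc.record("creation", "local:main.order", step=1)
lc.record("argument", "param:Cart.add.item", step=2)
lc.record("field-write", "field:Cart.items", step=3)
print(lc.aliases_at(3))
```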

Relevance:

30.00%

Publisher:

Abstract:

The demands of developing modern, highly dynamic applications have led to an increasing interest in dynamic programming languages and mechanisms. Not only must applications evolve over time, but the object models themselves may need to be adapted to the requirements of different run-time contexts. Class-based models and prototype-based models, for example, may need to co-exist to meet the demands of dynamically evolving applications. Multi-dimensional dispatch, fine-grained and dynamic software composition, and run-time evolution of behaviour are further examples of diverse mechanisms which may need to co-exist in a dynamically evolving run-time environment. How can we model the semantics of these highly dynamic features, yet still offer some reasonable safety guarantees? To this end we present an original calculus in which objects can adapt their behaviour at run-time to changing contexts. Both objects and environments are represented by first-class mappings between variables and values. Message sends are dynamically resolved to method calls. Variables may be dynamically bound, making it possible to model a variety of dynamic mechanisms within the same calculus. Despite the highly dynamic nature of the calculus, safety properties are assured by a type assignment system.
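
The flavor of the calculus can be approximated in a few lines of Python, treating objects as first-class mappings from names to values and resolving message sends dynamically against whatever the mapping currently contains. This is an informal illustration of the idea, not the calculus itself, and it carries none of the type-assignment guarantees mentioned above.

```python
# Objects as first-class mappings; message sends resolved dynamically.
def send(obj, message, *args):
    """Resolve a message to a method by looking it up in the object itself."""
    method = obj[message]            # dynamic resolution; may change over time
    return method(obj, *args)

# A prototype-style object: behaviour lives in the mapping and can be adapted.
point = {
    "x": 1, "y": 2,
    "magnitude": lambda self: (self["x"] ** 2 + self["y"] ** 2) ** 0.5,
}
print(send(point, "magnitude"))      # 2.236... (Euclidean norm)

# Run-time adaptation: rebind behaviour for a new context.
point["magnitude"] = lambda self: abs(self["x"]) + abs(self["y"])  # L1 norm
print(send(point, "magnitude"))      # 3
```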

Relevance:

30.00%

Publisher:

Abstract:

The demands of developing modern, highly dynamic applications have led to an increasing interest in dynamic programming languages and mechanisms. Not only must applications evolve over time, but the object models themselves may need to be adapted to the requirements of different run-time contexts. Class-based models and prototype-based models, for example, may need to co-exist to meet the demands of dynamically evolving applications. Multi-dimensional dispatch, fine-grained and dynamic software composition, and run-time evolution of behaviour are further examples of diverse mechanisms which may need to co-exist in a dynamically evolving run-time environment. How can we model the semantics of these highly dynamic features, yet still offer some reasonable safety guarantees? To this end we present an original calculus in which objects can adapt their behaviour at run-time. Both objects and environments are represented by first-class mappings between variables and values. Message sends are dynamically resolved to method calls. Variables may be dynamically bound, making it possible to model a variety of dynamic mechanisms within the same calculus. Despite the highly dynamic nature of the calculus, safety properties are assured by a type assignment system.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Higher visual functions can be defined as cognitive processes responsible for object recognition, color and shape perception, and motion detection. People with impaired higher visual functions after unilateral brain lesion are often tested with paper-and-pencil tests, but such tests do not assess the degree of interaction between the healthy brain hemisphere and the impaired one. Hence, visual functions are not tested separately in the contralesional and ipsilesional visual hemifields. METHODS: A new measurement setup, involving real-time comparisons of shape and size of objects, orientation of lines, and speed and direction of moving patterns in the right or left visual hemifield, has been developed. The setup was implemented in an immersive hemispherical environment to take into account the effects of peripheral and central vision, and possible visual field losses. Due to the non-flat screen of the hemisphere, a distortion algorithm was needed to adapt the projected images to the surface. Several approaches were studied and, based on a comparison between projected and original images, the best one was used for the implementation of the test. Fifty-seven healthy volunteers were then tested in a pilot study. A Satisfaction Questionnaire was used to assess the usability of the new measurement setup. RESULTS: The results of the distortion algorithm showed a structural similarity between the warped images and the original ones higher than 97%. The results of the pilot study showed an accuracy in comparing images between the two visual hemifields of 0.18 visual degrees for size and 0.19 visual degrees for shape discrimination, 2.56° for line orientation, 0.33 visual degrees/s for speed perception, and 7.41° for recognition of motion direction. The outcome of the Satisfaction Questionnaire showed a high acceptance of the battery by the participants. CONCLUSIONS: A new method to measure higher visual functions in an immersive environment was presented. The study focused on the usability of the developed battery rather than on performance at the visual tasks. A battery of five subtasks to study the perception of size, shape, orientation, speed and motion direction was developed. The test setup is now ready to be tested in neurological patients.
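
The 97% figure refers to the structural similarity (SSIM) between each warped image and its original. A minimal sketch of that validation check might look as follows; scikit-image's structural_similarity is assumed as the SSIM implementation, and synthetic images stand in for the real stimuli, since the paper's exact metric pipeline is not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Compare a warped image, as it would appear on the hemispherical screen,
# against the original, and require SSIM > 0.97 (placeholder data).
rng = np.random.default_rng(0)
original = rng.random((480, 640))
warped = original + rng.normal(scale=0.01, size=original.shape)  # mild residue
warped = np.clip(warped, 0.0, 1.0)

score = structural_similarity(original, warped, data_range=1.0)
print(f"SSIM = {score:.3f}, passes 0.97 threshold: {score > 0.97}")
```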

Relevance:

30.00%

Publisher:

Abstract:

Measurement association and initial orbit determination is a fundamental task when building up a database of space objects. This paper proposes an efficient and robust method to determine the orbit using the available information of two tracklets, i.e. their lines of sight and the corresponding derivatives. The approach works with a boundary-value formulation to represent hypothesized orbital states and uses an optimization scheme to find the best-fitting orbits. The method is assessed and compared to an initial-value formulation using a measurement set taken by the Zimmerwald Small Aperture Robotic Telescope of the Astronomical Institute at the University of Bern. False associations of closely spaced objects on similar orbits cannot be completely eliminated due to the short duration of the measurement arcs. However, the presented approach uses the available information optimally, and the overall association performance and robustness are very promising. The boundary-value optimization takes only around 2% of the computational time of optimization approaches using an initial-value formulation. The full potential of the method in terms of run-time is additionally illustrated by comparing it to other published association methods.
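
At the heart of a boundary-value formulation is the two-point problem: given hypothesized boundary positions at the two tracklet epochs, find the Keplerian arc connecting them, which can then be scored against the measured line-of-sight derivatives. The sketch below solves only that inner connection step, by shooting on the initial velocity with a generic optimizer; the boundary positions, units and tolerances are illustrative, and the paper's actual parametrization and cost function are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def two_body(t, y):
    """Two-body equations of motion: state y = [r (km), v (km/s)]."""
    r = y[:3]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate([y[3:], a])

def terminal_miss(v1, r1, r2, dt):
    """Distance between the propagated endpoint and the target position r2."""
    sol = solve_ivp(two_body, (0.0, dt), np.concatenate([r1, v1]),
                    rtol=1e-10, atol=1e-10)
    return sol.y[:3, -1] - r2

# Illustrative boundary values: two points at ~7000 km radius, 10 min apart.
# In the paper these would come from the tracklets' lines of sight and
# hypothesized topocentric ranges.
r1 = np.array([7000.0, 0.0, 0.0])
r2 = np.array([5590.0, 4212.0, 100.0])
dt = 600.0
v0 = (r2 - r1) / dt                      # crude initial guess: chord velocity
fit = least_squares(terminal_miss, v0, args=(r1, r2, dt))
print("connecting velocity v1 [km/s]:", fit.x,
      "| endpoint miss [km]:", np.linalg.norm(fit.fun))
```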

Relevance:

30.00%

Publisher:

Abstract:

Cataloging geocentric objects can be put in the framework of Multiple Target Tracking (MTT). Current work tends to focus on the S = 2 MTT problem because of its favorable computational complexity of O(n²). The MTT problem becomes NP-hard for dimensions S ≥ 3. The challenge is to find an approximation to the solution within a reasonable computation time. To efficiently approximate this solution, a Genetic Algorithm is used. The algorithm is applied to a simulated test case. These results represent the first steps towards a method that can treat the S ≥ 3 problem efficiently and with minimal manual intervention.
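
To make the approach concrete, here is a toy genetic algorithm for the S = 3 assignment problem behind MTT: a chromosome is a pair of permutations matching the observations of fences 2 and 3 against fence 1, and fitness is the negated total association cost. The operators and parameters are illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
cost = rng.random((N, N, N))                 # cost[i, j, k]: associating i-j-k

def fitness(chrom):
    p, q = chrom
    return -cost[np.arange(N), p, q].sum()   # higher is better

def crossover(a, b):
    """Order crossover on a permutation, keeping the child valid."""
    cut = rng.integers(1, N - 1)
    head = a[:cut]
    return np.concatenate([head, b[~np.isin(b, head)]])

pop = [(rng.permutation(N), rng.permutation(N)) for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]                         # truncation selection
    children = []
    while len(children) < 40:
        (p1, q1), (p2, q2) = [elite[rng.integers(20)] for _ in range(2)]
        child = (crossover(p1, p2), crossover(q1, q2))
        if rng.random() < 0.2:               # mutation: swap two positions
            i, j = rng.integers(N, size=2)
            child[0][[i, j]] = child[0][[j, i]]
        children.append(child)
    pop = elite + children
print("best association cost:", -fitness(pop[0]))
```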

Relevance:

30.00%

Publisher:

Abstract:

Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). In this context, both the correct associations among the observations and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S, which corresponds to the number of fences involved in the problem. Each fence consists of a set of observations where each observation belongs to a different object. The S ≥ 3 MTT problem is an NP-hard combinatorial optimization problem. There are two general ways to solve it. One is to seek the optimal solution, which can be achieved by applying a branch-and-bound algorithm; when using such algorithms the problem has to be greatly simplified to keep the computational cost at a reasonable level. The other option is to approximate the solution using meta-heuristic methods, which aim to explore the different possible combinations efficiently so that a reasonable result can be obtained with a reasonable computational effort. To this end several population-based meta-heuristic methods are implemented and tested on simulated optical measurements. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
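
The need for meta-heuristics follows from how fast the feasible association set grows with the dimension S. Under the simplifying assumption of complete fences with n observations each, a standard counting argument (stated here for illustration, not taken from the paper) gives:

```latex
% Fixing the order of the first fence, every further fence contributes
% one free permutation of its n observations:
N_{\text{assoc}}(n, S) = (n!)^{\,S-1},
\qquad \text{e.g.}\quad N_{\text{assoc}}(10, 3) = (10!)^{2} \approx 1.3\times10^{13}.
```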

Relevance:

30.00%

Publisher:

Abstract:

Tiled projector displays are a common choice for training simulators, where a high-resolution output image is required. They are cheap for the resolution they can reach and can be configured in many different ways. Nevertheless, such displays require geometric and color correction so that the composite image looks seamless. Display correction is an even bigger challenge when the projected images include dark scenes combined with brighter scenes. This is usually a problem for railway simulators, when the train is positioned inside a tunnel and the black offset effect becomes noticeable. In this paper, a method for fast photometric and geometric correction of tiled display systems where dark and bright scenes are combined is presented. The image correction is carried out in two steps. First, geometric alignment and attenuation of the overlapping areas is applied for brighter scenes. Second, in the event of being inside a tunnel, the brightness of the scene is increased in certain areas using light sources in order to create the impression of darkness while minimizing the effect of the black offset.
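
The overlap attenuation in the first step is typically a smooth per-projector ramp chosen so that the summed light of adjacent projectors stays constant across the overlap. A minimal sketch of such a blend mask follows; the cosine ramp, the gamma handling, and all dimensions are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def blend_mask(width, overlap, gamma=2.2, left_edge=False):
    """Per-column attenuation for one projector: cosine ramp over the
    overlap, computed in linear light, then encoded for the projector."""
    mask = np.ones(width)
    ramp = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, overlap)))  # 1 -> 0
    if left_edge:
        mask[:overlap] = ramp[::-1]    # fade in on the left side
    else:
        mask[-overlap:] = ramp         # fade out on the right side
    return mask ** (1.0 / gamma)       # pre-compensate the projector gamma

right_of_proj1 = blend_mask(1920, overlap=200)                 # fades out
left_of_proj2 = blend_mask(1920, overlap=200, left_edge=True)  # fades in
# In the overlap, the linear-light contributions of the two projectors
# should sum to ~1 at every column:
s = right_of_proj1[-200:] ** 2.2 + left_of_proj2[:200] ** 2.2
print("overlap sum (linear light): min %.3f max %.3f" % (s.min(), s.max()))
```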