981 results for dark matter simulations


Relevance: 80.00%

Publisher:

Abstract:

We present the first results of searches for axions and axionlike particles with the XENON100 experiment. The axion-electron coupling constant, g_Ae, has been probed by exploiting the axioelectric effect in liquid xenon. A profile likelihood analysis of a 224.6 live days × 34 kg exposure has shown no evidence for a signal. By rejecting g_Ae larger than 7.7×10⁻¹² (90% C.L.) in the solar axion search, we set the best limit to date on this coupling. In the frame of the DFSZ and KSVZ models, we exclude QCD axions heavier than 0.3 and 80 eV/c², respectively. For axionlike particles, under the assumption that they constitute the whole abundance of dark matter in our galaxy, we constrain g_Ae to be lower than 1×10⁻¹² (90% C.L.) for masses between 5 and 10 keV/c².
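For orientation (an aside, not part of the record above): solar-axion analyses of this type typically tie the axioelectric cross section to xenon's photoelectric cross section. A commonly quoted approximation, stated here as an assumption rather than as this paper's formula, is

\[
\sigma_{Ae}(E_A) \simeq \sigma_{pe}(E_A)\,\frac{g_{Ae}^{2}}{\beta_A}\,\frac{3E_A^{2}}{16\pi\,\alpha\,m_e^{2}c^{4}}\left(1-\frac{\beta_A^{2/3}}{3}\right),
\]

where \(\sigma_{pe}\) is the photoelectric cross section, \(E_A\) and \(\beta_A\) are the axion energy and velocity, \(\alpha\) is the fine-structure constant, and \(m_e\) is the electron mass. The quadratic dependence on \(g_{Ae}\) is what lets a null result translate directly into the quoted upper limits on the coupling.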

Relevance: 80.00%

Publisher:

Abstract:

The XENON100 dark matter experiment uses liquid xenon in a time projection chamber (TPC) to measure xenon nuclear recoils resulting from the scattering of dark matter weakly interacting massive particles (WIMPs). In this paper, we report the observation of single-electron charge signals that are not related to WIMP interactions. These signals, which demonstrate the detector's excellent sensitivity to small charge signals, are attributed to the photoionization of impurities in the liquid xenon and of metal components inside the TPC. They are used as a unique calibration source to characterize the detector. We explain how we can infer crucial parameters for the XENON100 experiment: the secondary-scintillation gain, the extraction yield from the liquid to the gas phase, and the electron drift velocity.
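To make the calibration idea concrete, here is a minimal sketch of how a secondary-scintillation gain can be read off a single-electron S2 spectrum by fitting its one-electron peak. Function names, the fit range, and the toy numbers are illustrative assumptions, not the collaboration's analysis code:

```python
# Minimal sketch: estimate the secondary-scintillation gain from a
# single-electron S2 spectrum by fitting a Gaussian to the 1e peak.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, mean, sigma):
    return amplitude * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def se_gain(s2_areas_pe, fit_range=(5.0, 35.0), bins=60):
    """Fit the single-electron peak; the Gaussian mean is the gain in
    photoelectrons (PE) per extracted electron."""
    lo, hi = fit_range
    sel = s2_areas_pe[(s2_areas_pe > lo) & (s2_areas_pe < hi)]
    counts, edges = np.histogram(sel, bins=bins, range=fit_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = (counts.max(), centers[np.argmax(counts)], 5.0)
    (amp, mean, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
    return mean, abs(sigma)

# Toy data standing in for measured small-S2 areas (values assumed):
rng = np.random.default_rng(0)
toy = rng.normal(loc=19.7, scale=6.9, size=5000)
gain, width = se_gain(toy)
print(f"secondary-scintillation gain ~ {gain:.1f} PE/electron")
```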

Relevance: 80.00%

Publisher:

Abstract:

Quarks were introduced 50 years ago, opening the road toward our understanding of the elementary constituents of matter and their fundamental interactions. Since then, spectacular progress has been made, with important discoveries that led to the establishment of the Standard Theory, which accurately describes the basic constituents of observable matter, namely quarks and leptons, interacting through the exchange of three fundamental forces: the weak, electromagnetic, and strong forces. Particle physics is now entering a new era driven by the quest to understand the composition of our Universe, including the unobservable (dark) matter, the hierarchy of masses and forces, the unification of all fundamental interactions with gravity in a consistent quantum framework, and several other important questions. A candidate theory providing answers to many of these questions is string theory, which replaces the notion of point particles by extended objects, such as closed and open strings. In this short note, I will give a brief overview of string unification, describe in particular how quarks and leptons can emerge, and discuss possible predictions for particle physics and cosmology that could test these ideas.

Relevance: 80.00%

Publisher:

Abstract:

In the framework of the MSSM, we examine several simplified models where only a few superpartners are light. This allows us to study WIMP-nucleus scattering in terms of a handful of MSSM parameters and thereby scrutinize their impact on dark matter direct-detection experiments. Focusing on spin-independent WIMP-nucleon scattering, we derive simplified, analytic expressions for the Wilson coefficients associated with Higgs and squark exchange. We utilize these results to study the complementarity of constraints due to direct-detection, flavor, and collider experiments. We also identify parameter configurations that produce (almost) vanishing cross sections. In the proximity of these so-called blind spots, we find that the amount of isospin violation may be much larger than typically expected in the MSSM. This feature is a generic property of parameter regions where cross sections are suppressed, and highlights the importance of a careful analysis of the nucleon matrix elements and the associated hadronic uncertainties. This becomes especially relevant once the increased sensitivity of future direct-detection experiments corners the MSSM into these regions of parameter space.
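As background for the blind-spot discussion (supplied here for orientation, not drawn from the paper itself): the spin-independent cross section off a nucleus with \(Z\) protons and \(A-Z\) neutrons is conventionally written

\[
\sigma_{\mathrm{SI}} \;=\; \frac{4\mu^{2}}{\pi}\,\bigl[\,Z f_p + (A-Z)\,f_n\,\bigr]^{2},
\]

with \(\mu\) the WIMP-nucleus reduced mass and \(f_{p,n}\) the effective WIMP-nucleon couplings. A blind spot is a parameter point where the couplings nearly cancel; near such a cancellation the ratio \(f_n/f_p\) can depart strongly from unity, which is the enhanced isospin violation the abstract refers to.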

Relevance: 80.00%

Publisher:

Abstract:

We are currently setting up a facility for low-background gamma-ray spectrometry based on an HPGe detector. It is dedicated to material screening for the XENON and DARWIN dark matter projects, as well as to the characterization of meteorites. The detector will be installed in a medium-depth (∼620 m.w.e.) underground laboratory in Switzerland with several layers of shielding and an active muon veto. The GeMSE facility will be operational by fall 2015, with an expected background rate of ∼250 counts/day in the 100–2700 keV range.

Relevance: 80.00%

Publisher:

Abstract:

Rare event search experiments using liquid xenon as target and detection medium require ultra-low background levels to fully exploit their physics potential. Cosmic-ray-induced activation of the detector components and, even more importantly, of the xenon itself during production, transportation, and storage at the Earth's surface might result in the production of radioactive isotopes with long half-lives, with a possible impact on the expected background. We present the first dedicated study of the cosmogenic activation of xenon after 345 days of exposure to cosmic rays at the Jungfraujoch research station, 3470 m above sea level, complemented by a study of copper activated simultaneously. We have directly observed the production of ⁷Be, ¹⁰¹Rh, ¹²⁵Sb, ¹²⁶I, and ¹²⁷Xe in xenon, of which only ¹²⁵Sb could potentially lead to background for a multi-ton-scale dark matter search. The production rates for five of the eight studied radioactive isotopes in copper are in agreement with the only existing dedicated activation measurement, while we observe lower rates for the remaining ones. The specific saturation activities for both samples are also compared to predictions obtained with commonly used software packages, where we observe some underpredictions, especially for xenon activation.
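For reference, the standard relation used in such activation studies (a textbook formula, not specific to this paper) connects the measured activity to the saturation activity \(A_{\mathrm{sat}}\) for an isotope with decay constant \(\lambda = \ln 2 / T_{1/2}\), exposure time \(t_{\mathrm{exp}}\), and cool-down time \(t_{\mathrm{cool}}\):

\[
A(t_{\mathrm{exp}}, t_{\mathrm{cool}}) \;=\; A_{\mathrm{sat}}\,\bigl(1 - e^{-\lambda t_{\mathrm{exp}}}\bigr)\,e^{-\lambda t_{\mathrm{cool}}}.
\]

Inverting this relation is what turns an activity measured after a known surface exposure into a production rate.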

Relevance: 80.00%

Publisher:

Abstract:

We have searched for periodic variations of the electronic recoil event rate in the (2–6) keV energy range recorded between February 2011 and March 2012 with the XENON100 detector, adding up to 224.6 live days in total. Following a detailed study establishing the stability of the detector and its background contributions during this run, we performed an unbinned profile likelihood analysis to identify any periodicity up to 500 days. We find a global significance of less than 1σ for all periods, suggesting no statistically significant modulation in the data. While the local significance for an annual modulation is 2.8σ, the analysis of a multiple-scatter control sample and the phase of the modulation disfavor a dark matter interpretation. The DAMA/LIBRA annual modulation, interpreted as a dark matter signature with axial-vector coupling of WIMPs to electrons, is excluded at 4.8σ.
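The rate model underlying such periodicity searches is conventionally (stated here as the generic form, not quoted from the paper)

\[
R(t) \;=\; C \;+\; S_m \cos\!\left(\frac{2\pi\,(t - t_0)}{T}\right),
\]

where \(C\) is the constant rate, \(S_m\) the modulation amplitude, \(t_0\) the phase, and \(T\) the period; the profile likelihood scans \(T\) (here up to 500 days) and compares the best-fit amplitude against \(S_m = 0\).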

Relevance: 80.00%

Publisher:

Abstract:

γ-ray astronomy studies the most energetic particles arriving at the Earth from outer space. These γ-rays are not generated by thermal processes in ordinary stars, but by particle-acceleration mechanisms in astronomical objects such as active galactic nuclei, pulsars, and supernovae, or possibly by dark matter annihilation processes. The γ-rays coming from these objects and their characteristics provide valuable information with which scientists try to understand the underlying physics and to develop theoretical models that describe them accurately. The problem with observing γ-rays is that they are absorbed in the upper layers of the atmosphere and do not reach the Earth's surface (otherwise the planet would be uninhabitable). There are therefore only two ways to observe them: with detectors on board satellites, or by observing the secondary effects that γ-rays produce in the atmosphere. When a γ-ray reaches the atmosphere, it interacts with particles in the air and generates a highly energetic electron-positron pair. These secondary particles in turn generate more particles, each time with less energy. While these particles are still energetic enough to travel faster than the speed of light in air, they emit a bluish radiation known as Cherenkov light for a few nanoseconds. From the Earth's surface, special telescopes known as Cherenkov telescopes or IACTs (Imaging Atmospheric Cherenkov Telescopes) can detect this Cherenkov light and even image the shape of the Cherenkov shower. From these images it is possible to recover the main parameters of the original γ-ray, and with enough γ-rays one can deduce important characteristics of the emitting object, hundreds of light-years away.

However, detecting Cherenkov showers produced by γ-rays is not a simple task. Showers generated by low-energy γ-rays contain few photons and last only a few nanoseconds, while those from high-energy γ-rays, although they produce more photons and last longer, become increasingly rare with energy. This leads to two clearly differentiated development lines for IACTs: to observe low-energy showers, large reflectors are required to collect as many as possible of the few photons these showers produce; high-energy showers, on the contrary, can be detected with small telescopes, but a large area on the ground should be covered with them to increase the number of detected events. The CTA (Cherenkov Telescope Array) project was created with the aim of improving the sensitivity of current Cherenkov telescopes in the high (> 10 TeV), medium (100 GeV - 10 TeV), and low (10 GeV - 100 GeV) energy ranges. The project, with more than 27 participating countries, intends to build an observatory in each hemisphere, each equipped with 4 large-sized telescopes (LSTs), around 30 medium-sized telescopes (MSTs), and up to 70 small-sized telescopes (SSTs). Such an array achieves two goals. First, the drastic increase in collection area with respect to current IACTs leads to the detection of more γ-rays in all energy ranges. Second, when the same Cherenkov shower is observed by several telescopes at once, it can be analyzed much more accurately thanks to stereoscopic techniques.

This thesis gathers several technical developments contributed to the trigger system of the medium and large telescopes of CTA. Because Cherenkov showers are so short, the systems that digitize and read out the data of each pixel must work at very high frequencies (≈1 GHz), which makes continuous readout unfeasible: the amount of stored data would be unmanageable. Instead, the analog signals are sampled, and the samples are stored in a circular buffer a few µs deep. While the signals remain in the buffer, the trigger system performs a fast analysis of the incoming signals and decides whether the image in the buffer corresponds to a Cherenkov shower and deserves to be stored, or whether it can be ignored, allowing the buffer to be overwritten. The decision is based on the fact that Cherenkov showers produce photon detections in nearby pixels at very close times, in contrast to NSB (night-sky background) photons, which arrive randomly. To detect large showers it is enough to check that more than a certain number of pixels in a region have each detected more than a certain number of photons within a time window of a few nanoseconds. To detect small showers, however, it is more convenient to also take into account how many photons have been detected in each pixel (a technique known as sum-trigger).

The trigger system developed in this thesis is intended to optimize the sensitivity at low energies: it analogically sums the signals received by the pixels in a trigger region and compares the result with a threshold directly expressible in detected photons (photoelectrons). The system allows trigger regions of 14, 21, or 28 pixels (2, 3, or 4 clusters of 7 pixels each), with a high degree of overlap between them, so that any light excess in a compact region of 14, 21, or 28 pixels is detected and generates a trigger pulse. In the most basic version of the trigger system, this pulse is distributed across the whole camera through a delicate distribution system, so that all clusters are read out at the same time regardless of their position in the camera. The trigger thus saves a complete camera image whenever the photoelectron threshold is exceeded in a trigger region. This way of operating has two main drawbacks. First, the shower almost always occupies only a small part of the camera, so many pixels without any information are stored; with the many telescopes of CTA, the amount of useless information stored for this reason can be considerable. Second, each trigger stores only a few nanoseconds around the trigger instant, whereas large showers can last considerably longer, so part of the information is lost to temporal truncation. To solve both problems, a trigger and readout scheme based on two thresholds has been proposed: the high threshold decides whether there is an event in the camera and, if so, only the trigger regions exceeding the low threshold are read out, for a longer time. In this way no information from empty pixels is stored, and the fixed shower images become small "videos" representing the temporal development of the shower. This new scheme is named COLIBRI (Concept for an Optimized Local Image Building and Readout Infrastructure) and is described in detail in chapter 5.

An important problem affecting sum-trigger schemes like the one presented in this thesis is that, to sum the per-pixel signals properly, they must all take the same time to reach the adder. The photomultipliers used in each pixel introduce different delays that must be compensated; the effect of these delays has been studied, and a compensation system has been developed. Finally, the next trigger level for effectively distinguishing Cherenkov showers from NSB consists of looking for simultaneous (or nearly simultaneous) triggers in neighboring telescopes. This function, together with other inter-system interfacing tasks, has been implemented in a system named the Trigger Interface Board (TIB). The TIB consists of a module mounted on the camera of each LST or MST and connected by optical fibers to the neighboring telescopes: when a telescope produces a local trigger, it is sent to all connected neighbors and vice versa, so every telescope knows whether its neighbors have triggered. Once the delay differences due to propagation in the optical fibers, and of the Cherenkov photons themselves in the air depending on the pointing direction, are compensated, coincidences are sought; if the trigger condition is fulfilled, the camera is read out, synchronized with the local trigger. Although the whole trigger system is the result of a collaboration among several groups, chiefly IFAE, CIEMAT, ICC-UB, and UCM in Spain with help from French and Japanese groups, the core of this thesis is the Level 1 trigger and the Trigger Interface Board, the two systems for which the author was the lead engineer; for this reason, abundant technical information on these systems has been included. Important future development lines exist for both the camera trigger (implementation in ASICs) and the inter-telescope trigger (topological trigger), which will yield interesting improvements on the current designs over the coming years and will hopefully benefit the whole scientific community participating in CTA.
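As an illustration of the sum-trigger decision described above, here is a sketch in digital form. The thesis implements this analogically in the camera electronics; the region sizes follow the text, but the sample counts, NSB rate, and threshold are assumptions for the example:

```python
# Illustrative digital sketch of a sum-trigger decision: sum the
# sampled signals of all pixels in each (overlapping) trigger region
# and fire when any region's sum crosses a photoelectron threshold.
import numpy as np

def sum_trigger(samples_pe, regions, threshold_pe):
    """samples_pe: (n_pixels, n_samples) array in photoelectrons;
    regions: list of pixel-index lists (14/21/28-pixel regions);
    returns indices of regions whose sum crosses the threshold."""
    fired = []
    for i, region in enumerate(regions):
        region_sum = samples_pe[region].sum(axis=0)  # sum over pixels
        if region_sum.max() > threshold_pe:
            fired.append(i)
    return fired

rng = np.random.default_rng(1)
n_pix, n_samp = 28, 16
nsb = rng.poisson(0.1, size=(n_pix, n_samp)).astype(float)  # random NSB
nsb[0:14, 7] += 2.0  # compact light excess across one 14-pixel region
regions = [list(range(0, 14)), list(range(7, 21)), list(range(14, 28))]
print(sum_trigger(nsb, regions, threshold_pe=20.0))
# -> typically [0]: only the region containing the excess fires,
#    while pure-NSB regions stay well below threshold.
```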

Relevance: 80.00%

Publisher:

Abstract:

Until the mid-1990s a person could not point to any celestial object and say with assurance that “here is a brown dwarf.” Now dozens are known, and the study of brown dwarfs has come of age, touching upon major issues in astrophysics, including the nature of dark matter, the properties of substellar objects, and the origin of binary stars and planetary systems.

Relevance: 80.00%

Publisher:

Abstract:

A long-standing goal of theorists has been to constrain cosmological parameters that define the structure formation theory from cosmic microwave background (CMB) anisotropy experiments and large-scale structure (LSS) observations. The status and future promise of this enterprise are described. Current band-powers in ℓ-space are consistent with a ΔT flat in frequency and broadly follow inflation-based expectations. That the levels are ∼(10⁻⁵)² provides strong support for the gravitational instability theory, while the Far Infrared Absolute Spectrophotometer (FIRAS) constraints on energy injection rule out cosmic explosions as a dominant source of LSS. Band-powers at ℓ ≳ 100 suggest that the universe could not have reionized too early. That the LSS of Cosmic Background Explorer (COBE)-normalized fluctuations comes out right provides encouraging support that the initial fluctuation spectrum was not far off the scale-invariant form that inflation models prefer: e.g., for tilted Λ cold dark matter sequences of fixed 13-Gyr age (with the Hubble constant H0 marginalized), ns = 1.17 ± 0.3 for Differential Microwave Radiometer (DMR) only; 1.15 ± 0.08 for DMR plus the SK95 experiment; 1.00 ± 0.04 for DMR plus all smaller-angle experiments; 1.00 ± 0.05 when LSS constraints are included as well. The CMB alone currently gives weak constraints on Λ and moderate constraints on Ωtot, but theoretical forecasts of future long-duration balloon and satellite experiments are shown which predict percent-level accuracy among a large fraction of the 10+ parameters characterizing the cosmic structure formation theory, at least if it is an inflation variant.
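For readers decoding the "levels are ∼(10⁻⁵)²" statement: band-powers are conventionally defined from the anisotropy power spectrum \(C_\ell\) as

\[
\left(\frac{\Delta T_\ell}{T}\right)^{2} \;\equiv\; \frac{\ell(\ell+1)\,C_\ell}{2\pi},
\]

so a flat band-power spectrum corresponds to equal fluctuation power per logarithmic interval in \(\ell\).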

Relevance: 80.00%

Publisher:

Abstract:

It is argued that within the standard Big Bang cosmological model the bulk of the mass of the luminous parts of the large galaxies likely had been assembled by redshift z ∼ 10. Galaxy assembly this early would be difficult to fit in the widely discussed adiabatic cold dark matter model for structure formation, but it could agree with an isocurvature version in which the cold dark matter is the remnant of a massive scalar field frozen (or squeezed) from quantum fluctuations during inflation. The squeezed field fluctuations would be Gaussian with zero mean, and the distribution of the field mass therefore would be the square of a random Gaussian process. This offers a possibly interesting new direction for the numerical exploration of models for cosmic structure formation.
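As a pointer toward the "numerical exploration" the abstract suggests, here is a minimal toy sketch (grid size and spectral slope are illustrative assumptions): draw a zero-mean Gaussian random field with a power-law spectrum and square it, yielding a non-negative, χ²-distributed density field of the kind described.

```python
# Toy "squared Gaussian" density model: a zero-mean Gaussian random
# field with power spectrum P(k) ~ k^slope, squared pointwise.
import numpy as np

def squared_gaussian_field(n=256, slope=-1.0, seed=0):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                    # suppress the zero mode
    amplitude = k ** (slope / 2.0)      # sqrt of the power spectrum
    phases = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    field = np.fft.ifft2(amplitude * phases).real
    field /= field.std()                # unit-variance Gaussian field
    return field ** 2                   # density: square of the field

rho = squared_gaussian_field()
print(rho.mean(), rho.min())  # mean ~1, strictly non-negative
```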

Relevance: 80.00%

Publisher:

Abstract:

Filaments of galaxies are known to stretch between galaxy clusters at all redshifts in a complex manner. In this Letter, we present an analysis of the frequency and distribution of intercluster galaxy filaments selected from the 2dF Galaxy Redshift Survey. Out of 805 cluster-cluster pairs, we find at least 40 per cent have bona fide filaments. We introduce a filament classification scheme and divide the filaments into several types according to their visual morphology: straight (lying on the cluster-cluster axis; 37 per cent), warped or curved (lying off the cluster-cluster axis; 33 per cent), sheets (planar configurations of galaxies; 3 per cent), uniform (1 per cent), and irregular (26 per cent). We find that straight filaments are more likely to reside between close cluster pairs, and that they become more curved with increasing cluster separation; in general, the curvature is toward a larger mass concentration. We also show that the more massive a cluster is, the more likely it is to host a larger number of filaments. Our results are consistent with a cold dark matter cosmology.

Relevance: 80.00%

Publisher:

Abstract:

Filaments of galaxies are the dominant feature of modern large-scale redshift surveys. They can account for up to perhaps half of the baryonic mass budget of the Universe, and their distribution and abundance can help constrain cosmological models. However, there is still no single, definitive way to detect and describe filaments, or to define their extent. This work examines a number of physically motivated, as well as statistical, methods that can be used to define filaments, and assesses their relative merits.

Relevance: 80.00%

Publisher:

Abstract:

We present a new algorithm for detecting intercluster galaxy filaments based upon the assumption that the orientations of constituent galaxies along such filaments are non-isotropic. We apply the algorithm to the 2dF Galaxy Redshift Survey catalogue and find that it readily detects many straight filaments between close cluster pairs. At large intercluster separations (> 15 h⁻¹ Mpc), we find that the detection efficiency falls quickly, as it also does with more complex filament morphologies. We explore the underlying assumptions and suggest that it is only in the case of close cluster pairs that we can expect galaxy orientations to be significantly correlated with filament direction.
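A minimal sketch of the kind of statistic such an orientation-based detector can rest on (illustrative only; the paper's actual estimator may differ): under isotropy, the mean of cos 2θ between galaxy position angles and the cluster-cluster axis is zero, so a significantly positive mean signals alignment.

```python
# Alignment statistic for galaxy orientations relative to a
# cluster-cluster axis: <cos 2*theta> = 0 under isotropy.
import numpy as np

def alignment_statistic(position_angles_rad, axis_angle_rad):
    """Angles in the plane of the sky; returns the mean alignment
    signal and its ~1/sqrt(2N) Gaussian error estimate."""
    theta = position_angles_rad - axis_angle_rad
    signal = np.mean(np.cos(2.0 * theta))
    sigma = 1.0 / np.sqrt(2.0 * len(position_angles_rad))
    return signal, sigma

rng = np.random.default_rng(2)
iso = rng.uniform(0, np.pi, 500)          # isotropic orientations
aligned = rng.normal(0.3, 0.4, 500)       # orientations near the axis
print(alignment_statistic(iso, 0.3))      # consistent with zero
print(alignment_statistic(aligned, 0.3))  # clearly positive
```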