990 results for "Dark objects method"


Relevance: 30.00%

Abstract:

The abundance and distribution of collapsed objects such as galaxy clusters will become an important tool for investigating the nature of dark energy and dark matter. Number counts of very massive objects are sensitive not only to the equation of state of dark energy, which parametrizes the smooth component of its pressure, but also to the sound speed of dark energy, which determines the amount of pressure in inhomogeneous and collapsed structures. Since the evolution of these structures must be followed well into the nonlinear regime, and a fully relativistic framework for this regime does not yet exist, we compare two approximate schemes: the widely used spherical collapse model and the pseudo-Newtonian approach. We show that both approximation schemes yield identical equations for the density contrast when the pressure perturbation of dark energy is parametrized in terms of an effective sound speed. We also compare these approximate approaches with general relativity in the linearized regime, which lends some support to the approximations.
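To illustrate the linearized regime referred to above, here is a short numerical sketch (our own, not the paper's code) of the linear growth of the matter density contrast in a flat universe with matter plus a smooth dark energy fluid of constant equation of state w; parameter values are illustrative:

```python
# Minimal sketch, assuming a flat universe with matter plus SMOOTH dark energy
# of constant w (effective sound speed -> 1, so dark energy does not cluster).
# The linear density contrast obeys
#   delta'' + (3/a + dlnE/da) delta' = (3/2) Om0 delta / (a^5 E^2).
import numpy as np
from scipy.integrate import solve_ivp

def growth(a_grid, om0=0.3, w=-1.0):
    """Integrate the linear growth equation for delta(a)."""
    ol0 = 1.0 - om0
    def E2(a):
        return om0 * a**-3 + ol0 * a**(-3.0 * (1.0 + w))
    def dlnE_da(a):
        return (-3 * om0 * a**-4
                - 3 * (1 + w) * ol0 * a**(-3 * (1 + w) - 1)) / (2 * E2(a))
    def rhs(a, y):
        d, dp = y
        return [dp, -(3.0 / a + dlnE_da(a)) * dp + 1.5 * om0 * d / (a**5 * E2(a))]
    a0 = a_grid[0]
    # matter-dominated initial condition: delta grows as a
    sol = solve_ivp(rhs, (a0, a_grid[-1]), [a0, 1.0], t_eval=a_grid, rtol=1e-8)
    return sol.y[0]

a = np.linspace(1e-3, 1.0, 500)
d_lcdm = growth(a, om0=0.3, w=-1.0)
d_eds = growth(a, om0=1.0)   # Einstein-de Sitter check: delta = a exactly
```

For Ωm,0 = 1 the equation reduces to the Einstein-de Sitter case, where δ grows exactly as a (a quick sanity check); with dark energy the late-time growth is visibly suppressed.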

Relevance: 30.00%

Abstract:

A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
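As a rough illustration of the Rayleigh-integral building block of the matrix method, the sketch below discretizes a baffled circular piston and sums elementary spherical waves; the transducer-reflector multiple reflections of the full matrix method are not included, and all names and parameter values are our own (only the 37.9 kHz frequency is taken from the abstract):

```python
# Hedged sketch: on-axis pressure of a baffled circular piston via a
# discretized Rayleigh integral, validated against the closed-form on-axis
# solution. NOT the paper's matrix method (no reflector, no multiple
# reflections); geometry and drive amplitude are illustrative.
import numpy as np

RHO0, C0 = 1.2, 343.0          # air density (kg/m^3) and sound speed (m/s)

def piston_pressure(z, radius=0.01, freq=37.9e3, u0=1.0, n=200):
    """Complex on-axis pressure at distance z from the piston face."""
    k = 2 * np.pi * freq / C0
    # midpoint polar grid over the piston face
    r = (np.arange(n) + 0.5) * radius / n
    th = (np.arange(n) + 0.5) * 2 * np.pi / n
    R, _ = np.meshgrid(r, th)
    dS = R * (radius / n) * (2 * np.pi / n)        # area elements
    dist = np.sqrt(R**2 + z**2)                    # element-to-field distance
    # Rayleigh integral: p = (j*w*rho0 / 2*pi) * sum( u0 exp(-j k d)/d dS )
    return 1j * 2 * np.pi * freq * RHO0 / (2 * np.pi) * np.sum(
        u0 * np.exp(-1j * k * dist) / dist * dS)

def piston_pressure_exact(z, radius=0.01, freq=37.9e3, u0=1.0):
    """Closed-form on-axis magnitude of a baffled piston, for validation."""
    k = 2 * np.pi * freq / C0
    return 2 * RHO0 * C0 * u0 * abs(np.sin(0.5 * k * (np.hypot(z, radius) - z)))
```

Comparing `abs(piston_pressure(z))` with `piston_pressure_exact(z)` at, say, z = 5 cm shows percent-level agreement, mirroring the validation-by-comparison strategy of the abstract (which validates against the finite element method instead).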

Relevance: 30.00%

Abstract:

We numerically investigate the dynamical evolution of non-nucleated dwarf elliptical/spiral galaxies (dEs) and nucleated ones (dE,Ns) in clusters of galaxies in order to understand the origin of intracluster stellar objects, such as intracluster stars (ICSs), intracluster globular clusters (ICGCs), and the ultracompact dwarfs (UCDs) recently discovered by an all-object spectroscopic survey centred on the Fornax cluster of galaxies. We find that the outer stellar components of a nucleated dwarf are removed by the strong tidal field of the cluster, whereas the nucleus manages to survive as a result of its initially compact nature. The resulting naked nucleus is found to have physical properties (e.g., size and mass) similar to those observed for UCDs. We also find that the UCD formation process depends on the radial density profile of the dark halo, in the sense that UCDs are less likely to be formed from dwarfs embedded in dark matter halos with central 'cuspy' density profiles. Our simulations also suggest that very massive and compact stellar systems can be rapidly and efficiently formed in the central regions of dwarfs through the merging of smaller globular clusters (GCs). GCs initially in the outer parts of dEs and dE,Ns are found to be stripped to form ICGCs.

Relevance: 30.00%

Abstract:

Synthetic aperture radar (SAR) images of resonant buried objects are modelled in the presence of ground-surface clutter. The method of moments (MoM) is used to model the fields scattered from a resonant buried conductor, and the clutter is modelled as a bivariate Gaussian distribution. A diffraction-stack SAR imaging technique is applied to the ultra-wideband waveforms to give a bipolar signal image. A number of examples have been computed to illustrate the combined effects of SAR processing with resonant targets and clutter. SAR images of different targets show differences that may facilitate target identification. To maximise the peak signal-to-clutter ratio, an image correlation technique is applied and the results are shown.
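The diffraction-stack imaging step can be sketched as follows: each image pixel accumulates the recorded waveforms at the two-way travel time predicted for a scatterer at that pixel. The geometry, the free-space propagation speed (no lossy ground), and the Ricker-like pulse are our illustrative assumptions, not the paper's MoM model of buried resonant conductors:

```python
# Minimal diffraction-stack (backprojection) sketch for a single point
# scatterer in free space; all parameters are illustrative.
import numpy as np

c = 0.3                              # propagation speed, m/ns (free space)
xs = np.linspace(-1.0, 1.0, 41)      # antenna positions along the track (m)
target = (0.2, 0.8)                  # (x, depth) of the point scatterer (m)
t = np.arange(0, 30, 0.05)           # time axis (ns)

def pulse(tau):
    """Simple Ricker-like ultra-wideband wavelet."""
    s = tau / 0.5
    return (1 - 2 * s**2) * np.exp(-s**2)

# synthesize received waveforms: one echo at the two-way travel time
traces = np.empty((xs.size, t.size))
for i, x in enumerate(xs):
    d = np.hypot(x - target[0], target[1])
    traces[i] = pulse(t - 2 * d / c)

# diffraction stack: for each pixel, sum traces at the predicted delay
gx = np.linspace(-1.0, 1.0, 81)
gz = np.linspace(0.1, 1.5, 71)
image = np.zeros((gz.size, gx.size))
for i, x in enumerate(xs):
    d = np.hypot(x - gx[None, :], gz[:, None])
    idx = np.clip(np.round(2 * d / c / 0.05).astype(int), 0, t.size - 1)
    image += traces[i][idx]

iz, ix = np.unravel_index(np.argmax(image), image.shape)   # image peak
```

The stacked image peaks where the predicted delays match the recorded echoes for every antenna position, i.e. at the scatterer.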

Relevance: 30.00%

Abstract:

A scheme is presented to incorporate a mixed-potential integral equation (MPIE), using Michalski's formulation C, with the method of moments (MoM) for analyzing the scattering of a plane wave from conducting planar objects buried in a dielectric half-space. The robust complex image method with a two-level approximation is used to calculate the Green's functions for the half-space. To further speed up the computation, an interpolation technique is employed to fill the matrix. While the induced current distributions on the object's surface are obtained in the frequency domain, the corresponding time-domain responses are calculated via the inverse fast Fourier transform (FFT). The complex natural resonances of targets are then extracted from the late-time response using the generalized pencil-of-function (GPOF) method. We investigate the pole trajectories as we vary the distance between strips and the depth and orientation of single buried strips. The variation from the pole position of a single strip in a homogeneous dielectric medium was only a few percent for most of these parameter variations.
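The pole-extraction step can be illustrated with a plain matrix pencil, a close relative of GPOF (which adds an SVD-based noise-filtering stage omitted here). The damped-sinusoid poles below are invented for illustration; in the noiseless case the pencil recovers them essentially exactly:

```python
# Hedged sketch: extract complex natural resonances s_k from a sampled
# late-time response y[n] = sum_k exp(s_k * n * dt) using a matrix pencil.
# Pole values are illustrative, not taken from the paper.
import numpy as np

dt = 0.1
n = np.arange(60)
poles_true = np.array([-0.2 + 2.0j, -0.2 - 2.0j,   # two conjugate pole pairs
                       -0.5 + 5.0j, -0.5 - 5.0j])
y = np.sum(np.exp(np.outer(poles_true, n * dt)), axis=0).real

L = 4   # pencil parameter = model order (exact in the noiseless case)
# Hankel data matrices: Y0[i, j] = y[i+j], Y1[i, j] = y[i+j+1]
Y = np.array([[y[i + j] for j in range(L + 1)] for i in range(len(n) - L)])
Y0, Y1 = Y[:, :L], Y[:, 1:]
# eigenvalues of pinv(Y0) @ Y1 are z_k = exp(s_k * dt)
z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
s_est = np.log(z) / dt
```

With noise present, a practical GPOF implementation first truncates the SVD of the Hankel matrix to the estimated model order before forming the pencil.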

Relevance: 30.00%

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several gigabytes per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is therefore one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented-Lagrangian-based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
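For orientation, the linear mixing model that SISAL inverts can be sketched in a few lines. The sketch sidesteps SISAL proper (and its GPU kernels): it assumes the endmember spectra are already known and estimates abundances by nonnegative least squares with a soft sum-to-one constraint; all matrices are synthetic:

```python
# Hedged sketch of the linear mixing model: each pixel is a nonnegative,
# sum-to-one combination of endmember spectra. This is NOT SISAL (which
# estimates the endmembers themselves); names and sizes are illustrative.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, n_end = 50, 3
E = rng.uniform(0.1, 1.0, (bands, n_end))      # endmember spectra (columns)
a_true = np.array([0.6, 0.3, 0.1])             # true abundances of one pixel
pixel = E @ a_true                             # noiseless mixed pixel

# enforce sum-to-one softly by appending a heavily weighted ones-row
delta = 10.0
E_aug = np.vstack([E, delta * np.ones(n_end)])
y_aug = np.append(pixel, delta)
a_est, _ = nnls(E_aug, y_aug)                  # nonnegative least squares
```

In the noiseless case the true abundance vector is recovered exactly; the hard part SISAL addresses is finding `E` itself when no pure pixels exist.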

Relevance: 30.00%

Abstract:

Dissertation submitted for the degree of Doctor in Engineering Physics.

Relevance: 30.00%

Abstract:

We consider the allocation of a finite number of indivisible objects to the same number of agents according to an exogenously given queue. We assume that the agents collaborate in order to achieve an efficient outcome for society. We allow for side-payments and provide a method for obtaining stable outcomes.
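A toy illustration of why side payments matter, under our own illustrative valuations (the paper's stable payment construction is not reproduced): a collaborating society can implement the assignment that maximizes total value rather than the generally inefficient queue-order outcome, and then redistribute the surplus through side payments:

```python
# Hedged sketch: efficient assignment (Hungarian algorithm) vs. naive
# queue-order serial choice. Valuations are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

# values[i, j] = value agent i places on object j
values = np.array([[5.0, 4.9, 1.0],
                   [5.0, 1.0, 1.0],
                   [2.0, 2.0, 2.0]])

# efficient outcome: maximize total value over all assignments
rows, cols = linear_sum_assignment(values, maximize=True)
efficient_total = values[rows, cols].sum()

# queue-order outcome: each agent in turn picks their favourite remaining object
remaining = set(range(3))
serial_total = 0.0
for agent in range(3):
    best = max(remaining, key=lambda j: values[agent, j])
    serial_total += values[agent, best]
    remaining.remove(best)
```

Here the first agent in the queue grabs object 0, yet total value is higher when that agent takes object 1 instead; side payments let the coalition compensate the queue leader for stepping aside, which is the intuition behind stable outcomes with transfers.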

Relevance: 30.00%

Abstract:

Study carried out during a stay at the Royal Veterinary and Agricultural University of Denmark between March and June 2006. The effect of modified atmosphere packaging (MAP), as well as of marination with red wine, on the evolution of bacterial contamination of dark, firm and dry (DFD) meat was investigated. DFD meat is found in carcasses of animals that were exposed to prolonged muscular activity or stress before slaughter. DFD meat entails significant economic losses due to bacterial contamination and to technological problems related to its high water-holding capacity. Moreover, it is critical for the industry to investigate the diversity of the bacterial contamination, to identify the bacterial species and to control them. This is difficult, however, owing to the inability to detect some bacteria on known culture media, the interactions among bacteria, and the complexity of the contamination sources, such as water, soil, faeces and the environment. Polymerase chain reaction - denaturing gradient gel electrophoresis (PCR-DGGE) can overcome these problems by reflecting the microbial diversity and the bacterial species present. The results indicated that the bacterial diversity of the meat increased with days of packaging regardless of the packaging method, but decreased significantly with the red-wine marination treatment. DGGE showed differences in the species found, indicating changes in the bacterial contamination and its characteristics in DFD meat under the different treatments. Although marination is a good alternative and a solution for the commercialisation of DFD meat, sequencing studies are needed to identify the different types of bacteria.

Relevance: 30.00%

Abstract:

Report for the scientific sojourn at the Swiss Federal Institute of Technology Zurich, Switzerland, between September and December 2007. In order to make robots useful assistants in our everyday life, the ability to learn and recognize objects is of essential importance. However, object recognition in real scenes is one of the most challenging problems in computer vision, as many difficulties must be dealt with. Furthermore, mobile robotics adds a new challenge to the list: computational complexity. In a dynamic world, information about the objects in the scene can become obsolete before it is ready to be used if the detection algorithm is not fast enough. Two recent object recognition techniques have achieved notable results: the constellation approach proposed by Lowe and the bag-of-words approach proposed by Nistér and Stewénius. The Lowe constellation approach is the one currently being used in the robot localization project of COGNIRON. This report is divided into two main sections. The first section briefly reviews the currently used object recognition system, the Lowe approach, brings to light the drawbacks found for object recognition in the context of indoor mobile robot navigation, and describes the proposed improvements to the algorithm. The second section reviews the alternative bag-of-words method, together with several experiments conducted to evaluate its performance on our own object databases, and proposes some modifications to the original algorithm to make it suitable for object detection in unsegmented images.
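The bag-of-words pipeline reviewed in the second section can be sketched as follows. Real systems quantize SIFT-like descriptors against large (often hierarchical) vocabularies; here synthetic 2-D "descriptors" and a tiny flat k-means vocabulary stand in for both, so everything below is illustrative:

```python
# Hedged sketch of bag-of-words image representation: quantize local
# descriptors against a visual vocabulary and compare word histograms.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means to build the visual vocabulary."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_histogram(descriptors, vocab):
    """Assign each descriptor to its nearest visual word; normalize counts."""
    words = np.argmin(((descriptors[:, None] - vocab[None]) ** 2).sum(-1), axis=1)
    h = np.bincount(words, minlength=len(vocab)).astype(float)
    return h / h.sum()

# two "object classes" with descriptors drawn from well-separated clusters
train_a = rng.normal([0, 0], 0.3, (200, 2))
train_b = rng.normal([3, 3], 0.3, (200, 2))
vocab = kmeans(np.vstack([train_a, train_b]), k=4)
h_a, h_b = bow_histogram(train_a, vocab), bow_histogram(train_b, vocab)

query = rng.normal([0, 0], 0.3, (50, 2))       # a new view of class A
h_q = bow_histogram(query, vocab)
pred = "A" if np.abs(h_q - h_a).sum() < np.abs(h_q - h_b).sum() else "B"
```

The histogram comparison is what makes the approach fast enough for robot navigation: matching costs depend on vocabulary size, not on the number of raw descriptors.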

Relevance: 30.00%

Abstract:

One of the unresolved questions of modern physics is the nature of Dark Matter. Strong experimental evidence suggests that the presence of this elusive component in the energy budget of the Universe is quite significant, without, however, providing conclusive information about its nature. The most plausible scenario is that of weakly interacting massive particles (WIMPs), which includes a large class of non-baryonic Dark Matter candidates with a mass typically between a few tens of GeV and a few TeV, and a cross section of the order of that of the weak interaction. The search for Dark Matter particles using very-high-energy gamma-ray Cherenkov telescopes is based on the model that WIMPs can self-annihilate, leading to the production of detectable species such as photons. These photons are very energetic and, since they are undeflected by the Universe's magnetic fields, they can be traced straight back to the source of their creation. The downside of the approach is the great amount of background radiation from conventional astrophysical objects, which usually hides clear signals of Dark Matter particle interactions. That is why a good choice of observational candidates is the crucial factor in the search for Dark Matter. With MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov Telescopes), a two-telescope ground-based system located on La Palma, Canary Islands, we choose objects such as dwarf spheroidal satellite galaxies of the Milky Way and galaxy clusters for our search. Our idea is to increase the chances of WIMP detection by pointing at objects that are relatively close, with a great amount of Dark Matter and with as little pollution from stars as possible. At the moment, several observation projects are ongoing and analyses are being performed.

Relevance: 30.00%

Abstract:

RATIONALE AND OBJECTIVE: The information assessment method (IAM) permits health professionals to systematically document the relevance, cognitive impact, use and health outcomes of information objects delivered by or retrieved from electronic knowledge resources. The companion review paper (Part 1) critically examined the literature and proposed a 'Push-Pull-Acquisition-Cognition-Application' evaluation framework, which is operationalized by IAM. The purpose of the present paper (Part 2) is to examine the content validity of the IAM cognitive checklist when linked to email alerts. METHODS: A qualitative component of a mixed-methods study was conducted with 46 doctors reading and rating research-based synopses sent by email. The unit of analysis was a doctor's explanation of a rating of one item regarding one synopsis. Interviews with participants provided 253 units that were analysed to assess concordance with item definitions. RESULTS AND CONCLUSION: The content relevance of seven items was supported. For three items, revisions were needed. Interviews suggested one new item. This study has yielded a 2008 version of IAM.

Relevance: 30.00%

Abstract:

BACKGROUND: Straylight gives the appearance of a veil of light thrown over a person's retinal image when a strong light source is present. We examined the reproducibility of straylight measurements by the C-Quant and assessed their correlation with characteristics of the eye and subjects' age. PARTICIPANTS AND METHODS: Five repeated straylight measurements were taken using the dominant eye of 45 healthy subjects (age 21-59) with a BCVA of 20/20: 14 emmetropic, 16 myopic, eight hyperopic and seven with astigmatism. We assessed the extent of reproducibility of the straylight measures using the intraclass correlation coefficient. RESULTS: The mean straylight value of all measurements was 1.01 (SD 0.23, median 0.97, interquartile range 0.85-1.1). Per 10 years of age, straylight increased on average by 0.10 (95%CI 0.04 to 0.16, p < 0.01). We found no independent association of refraction (range -5.25 dpt to +2 dpt) with straylight values (0.001; 95%CI -0.022 to 0.024, p = 0.92). Compared to emmetropic subjects, myopia reduced straylight (-0.011; -0.024 to 0.02, p = 0.11), whereas higher straylight values (0.09; -0.01 to 0.20, p = 0.09) were observed in subjects with blue irises as compared to dark-colored irises when correcting for age. The intraclass correlation coefficient (ICC) of repeated measurements was 0.83 (95%CI 0.76 to 0.90). CONCLUSIONS: Our study showed that straylight measurements with the C-Quant have high reproducibility, i.e. a lack of large intra-observer variability, making the instrument appropriate for long-term follow-up studies assessing the effect of surgical procedures on the quality of vision.
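As a sketch of the reproducibility analysis, a one-way random-effects intraclass correlation, ICC(1,1) = (MSB - MSW) / (MSB + (k-1)·MSW), can be computed from a subjects-by-repeats table. The data below are simulated, with spreads loosely matching the reported mean and SD; the study may well have used a different ICC variant:

```python
# Hedged sketch: one-way random-effects ICC from 45 subjects x 5 repeats.
# Simulated data, NOT the study's measurements; variance components chosen
# so that between-subject spread dominates measurement noise.
import numpy as np

rng = np.random.default_rng(3)
n_subj, k = 45, 5
subject_effect = rng.normal(1.0, 0.23, (n_subj, 1))        # true value per eye
data = subject_effect + rng.normal(0, 0.05, (n_subj, k))   # 5 repeats each

grand = data.mean()
subj_means = data.mean(axis=1, keepdims=True)
# between-subject and within-subject mean squares (one-way ANOVA)
msb = k * ((subj_means - grand) ** 2).sum() / (n_subj - 1)
msw = ((data - subj_means) ** 2).sum() / (n_subj * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
```

With within-subject noise much smaller than the between-subject spread, the ICC lands close to 1, which is the pattern behind the reported 0.83.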

Relevance: 30.00%

Abstract:

Diffuse flow velocimetry (DFV) is introduced as a new, noninvasive, optical technique for measuring the velocity of diffuse hydrothermal flow. The technique uses images of a motionless, random medium (e.g., rocks) obtained through the lens of a moving refraction-index anomaly (e.g., a hot upwelling). The method works in two stages. First, the changes in apparent background deformation are calculated using particle image velocimetry (PIV); the deformation vectors are determined by a cross-correlation of pixel intensities across consecutive images. Second, the 2-D velocity field is calculated by cross-correlating the deformation vectors between consecutive PIV calculations. The accuracy of the method is tested with laboratory and numerical experiments of a laminar, axisymmetric plume in fluids with both constant and temperature-dependent viscosity. Results show that average RMS errors are ∼5%–7% and that the method is most accurate in regions of pervasive apparent background deformation, which are commonly encountered in regions of diffuse hydrothermal flow. The method is applied to a 25 s video sequence of diffuse flow from a small fracture captured during the Bathyluck'09 cruise to the Lucky Strike hydrothermal field (September 2009). The velocities of the ∼10°C–15°C effluent reach ∼5.5 cm/s, in strong agreement with previous measurements of diffuse flow. DFV is found to be most accurate for approximately 2-D flows where background objects have a small spatial scale, such as sand or gravel.
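The first, PIV stage can be illustrated with the standard FFT-based cross-correlation of interrogation windows: the displacement is read off from the correlation peak. The frames below are synthetic random texture with a known circular shift, not real seafloor imagery:

```python
# Hedged sketch of the PIV building block used by DFV: displacement of a
# window between two frames = peak of their cross-correlation (via FFT).
import numpy as np

rng = np.random.default_rng(2)
frame1 = rng.random((64, 64))
shift = (3, -2)                              # known (row, col) displacement
frame2 = np.roll(frame1, shift, axis=(0, 1)) # second frame: shifted texture

def piv_displacement(win1, win2):
    """Signed displacement from the circular cross-correlation peak."""
    a = win1 - win1.mean()                   # remove DC so the peak is sharp
    b = win2 - win2.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map circular peak indices to signed displacements in [-s/2, s/2)
    return tuple((p + s // 2) % s - s // 2 for p, s in zip(peak, corr.shape))

est = piv_displacement(frame1, frame2)
```

In DFV this estimate is applied per interrogation window to build the apparent-deformation field; a second pass of the same correlation, applied to consecutive deformation fields, then yields the flow velocity.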

Relevance: 30.00%

Abstract:

Monitoring thunderstorm activity is an essential part of operational weather surveillance given the potential hazards involved, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: first, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; second, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground (CG) lightning data and intra-cloud (IC) lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the IC and CG flash activity is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase slightly more CG flashes (10% to 15%) are observed. Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of thunderstorm parameters normalized with respect to total thunderstorm duration and the maximum value of the variables considered.
Among other findings, the study indicates that the normalized duration of the three stages of thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).