972 results for: Telescope space debris satellite spectroscopy tracking photometry NASA ASI


Relevance: 30.00%

Abstract:

We evaluated the performance of an optical-camera-based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2×2×3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution, comparing PMC on versus PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value per image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no-motion conditions and improved the time-series temporal signal-to-noise ratio (tSNR) by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p < 0.001, uncorrected) in both task-based experiments were comparable for the no-motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow-motion cases. The PMC system is a robust solution that decreases the motion sensitivity of multi-shot 3D EPI sequences and thereby overcomes one of the main roadblocks to their widespread use in fMRI studies.
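The 30–40% gain quoted above refers to the per-voxel temporal signal-to-noise ratio: the temporal mean of a voxel's time series divided by its temporal standard deviation. A minimal sketch with made-up voxel values (not data from the study):

```python
from statistics import mean, stdev

def tsnr(timeseries):
    """Temporal signal-to-noise ratio of one voxel's time series:
    temporal mean divided by temporal standard deviation."""
    return mean(timeseries) / stdev(timeseries)

# Hypothetical voxel time series (arbitrary units): one stable, one with
# motion-induced fluctuations that inflate the temporal standard deviation.
stable    = [100.2, 99.8, 100.1, 99.9, 100.0, 100.3, 99.7, 100.0]
corrupted = [100.2, 97.5, 102.8, 99.9, 96.0, 103.3, 99.7, 101.0]

print(tsnr(stable) > tsnr(corrupted))  # prospective correction aims to keep tSNR high
```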

Relevance: 30.00%

Abstract:

Context. White dwarfs can be used to study the structure and evolution of the Galaxy by analysing their luminosity function and initial mass function. Among them, the very cool white dwarfs provide information on the early ages of each population. Because white dwarfs are intrinsically faint, only the nearby (~20 pc) sample is reasonably complete. The Gaia space mission will drastically increase the sample of known white dwarfs through its 5-6 year survey of the whole sky down to magnitude V = 20-25. Aims. We provide a characterisation of Gaia photometry for white dwarfs to better prepare for the analysis of the scientific output of the mission. Transformations between some of the most common photometric systems and the Gaia passbands are derived. We also estimate the number of white dwarfs of the different galactic populations that will be observed. Methods. Using synthetic spectral energy distributions and the most recent Gaia transmission curves, we computed colours of three different types of white dwarfs (pure hydrogen, pure helium, and mixed composition with H/He = 0.1). From these colours we derived transformations to other common photometric systems (Johnson-Cousins, Sloan Digital Sky Survey, and 2MASS). We also present the numbers of white dwarfs predicted to be observed by Gaia. Results. We provide relationships and colour-colour diagrams among the different photometric systems to allow the prediction and/or study of Gaia white dwarf colours. We also include estimates of the number of sources expected in every galactic population and with a maximum parallax error. Gaia will increase the sample of known white dwarfs tenfold, to about 200 000, and will observe thousands of very cool white dwarfs for the first time, greatly improving our understanding of these stars and of the early phases of star formation in our Galaxy.
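Photometric-system transformations of the kind derived in the paper are typically low-order polynomials in a colour index, e.g. G − V as a function of V − I. A generic sketch of applying such a fit — the coefficients below are placeholders for illustration, not the published values, which the paper derives separately for pure-H, pure-He, and mixed-composition white dwarfs:

```python
def gaia_G_from_johnson(V, V_minus_I, coeffs=(-0.02, -0.12, -0.20)):
    """Hypothetical polynomial transformation G - V = a + b*(V-I) + c*(V-I)**2.
    The coefficients are illustrative placeholders, NOT values from the paper."""
    a, b, c = coeffs
    return V + a + b * V_minus_I + c * V_minus_I ** 2

# Predict a Gaia G magnitude from Johnson-Cousins photometry (made-up star).
g = gaia_G_from_johnson(15.0, 0.5)
```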

Relevance: 30.00%

Abstract:

Aims: We searched for very high energy (VHE) γ-ray emission from the supernova remnant Cassiopeia A. Methods: The shell-type supernova remnant Cassiopeia A was observed with the 17 m MAGIC telescope between July 2006 and January 2007 for a total of 47 h. Results: The source was detected above an energy of 250 GeV with a significance of 5.2σ and a photon flux above 1 TeV of (7.3 ± 0.7_stat ± 2.2_sys) × 10⁻¹³ cm⁻² s⁻¹. The photon spectrum is compatible with a power law dN/dE ∝ E⁻Γ with a photon index Γ = 2.3 ± 0.2_stat ± 0.2_sys. The source is point-like within the angular resolution of the telescope.
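An integral flux "above 1 TeV" for a power-law spectrum dN/dE = k·E^−Γ follows from the closed form F(>E₀) = k·E₀^(1−Γ)/(Γ − 1) for Γ > 1. A sketch that cross-checks the closed form numerically (arbitrary normalization, not the MAGIC calibration):

```python
def integral_flux(k, gamma, e0):
    """Integral flux above e0 for a power-law spectrum dN/dE = k * E**-gamma
    (gamma > 1): F(>E0) = k * E0**(1 - gamma) / (gamma - 1)."""
    return k * e0 ** (1.0 - gamma) / (gamma - 1.0)

def numeric_flux(k, gamma, e0, e_max=1e6, n=20000):
    """Trapezoidal cross-check of the closed form on a logarithmic grid."""
    total, prev_e, prev_f = 0.0, e0, k * e0 ** -gamma
    for i in range(1, n + 1):
        e = e0 * (e_max / e0) ** (i / n)
        f = k * e ** -gamma
        total += 0.5 * (prev_f + f) * (e - prev_e)
        prev_e, prev_f = e, f
    return total
```

With Γ = 2.3 and unit normalization the two agree to well under a percent, the residual being the truncation of the upper integration limit and the trapezoid error.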

Relevance: 30.00%

Abstract:

We report optical spectroscopic observations of a sample of 6 low-galactic-latitude microquasar candidates selected by cross-identification of X-ray and radio point-source catalogs for |b| < 5 degrees. Two objects turned out to be clearly extragalactic in origin, as an obvious cosmological redshift was measured from their emission lines. Of the remainder, none exhibits the clear stellar-like spectrum that would be expected for a genuine Galactic microquasar. Their featureless spectra are consistent with an extragalactic origin, although two of them could also be highly reddened stars. The apparent non-confirmation of our candidates suggests that the population of persistent microquasar systems in the Galaxy is rarer than previously believed. If none of them is galactic, the upper limit to the space density of new Cygnus X-3-like microquasars within 15 kpc would be 1.1 × 10⁻¹² per cubic pc. A similar upper limit for new LS 5039-like systems within 4 kpc is estimated to be 5.6 × 10⁻¹¹ per cubic pc.

Relevance: 30.00%

Abstract:

The Cherenkov light flashes produced by extensive air showers are very short in time. A high-bandwidth, fast-digitizing readout can therefore minimize the influence of the background from the light of the night sky and improve the performance of Cherenkov telescopes. The time structure of the Cherenkov image can further be used in single-dish Cherenkov telescopes as an additional parameter to reduce the background from unwanted hadronic showers. We present an analysis method that makes use of this timing information and the resulting improvement in the performance of the MAGIC telescope, especially after the upgrade with an ultra-fast 2 GSamples/s digitization system in February 2007. The use of timing information in the analysis of the new MAGIC data reduces the background by a factor of two, which in turn results in an enhancement of about a factor of 1.4 in the flux sensitivity to point-like sources, as tested on observations of the Crab Nebula.
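The factor of ~1.4 follows directly from background-limited statistics: the significance of a point source scales as S/√B, so halving B at fixed signal improves the minimum detectable flux by √2 ≈ 1.41. A one-line check:

```python
import math

def sensitivity_gain(background_reduction_factor):
    """In the background-limited regime significance scales as S / sqrt(B),
    so reducing B by a factor f improves flux sensitivity by sqrt(f)."""
    return math.sqrt(background_reduction_factor)

print(round(sensitivity_gain(2.0), 2))  # 1.41
```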

Relevance: 30.00%

Abstract:

We analyze the timing of photons observed by the MAGIC telescope during a flare of the active galactic nucleus Mkn 501 for a possible correlation with energy, as suggested by some models of quantum gravity (QG), which predict a vacuum refractive index n ≃ 1 + (E/M_QGn)ⁿ, n = 1, 2. Parametrizing the delay between γ-rays of different energies as Δt = ±τ₁E or Δt = ±τ_q E², we find τ₁ = (0.030 ± 0.012) s/GeV at the 2.5σ level, and τ_q = (3.71 ± 2.57) × 10⁻⁶ s/GeV², respectively. We use these results to establish lower limits M_QG1 > 0.21 × 10¹⁸ GeV and M_QG2 > 0.26 × 10¹¹ GeV at the 95% C.L. Monte Carlo studies confirm the MAGIC sensitivity to propagation effects at these levels. Thermal plasma effects in the source are negligible, but we cannot exclude the importance of some other source effect.
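The linear parametrization corresponds to an energy-dependent delay of order Δt ≈ (E/M_QG1)·(D/c) over propagation distance D. A rough order-of-magnitude sketch, assuming a light-travel distance of ~150 Mpc for Mkn 501 (an illustrative assumption, not a figure from the paper):

```python
# Order-of-magnitude sketch of the linear quantum-gravity delay
# Delta_t ~ (E / M_QG1) * (D / c).  The distance to Mkn 501 used here
# (~150 Mpc) is an illustrative assumption, not a value from the paper.
MPC_IN_M = 3.086e22   # metres per megaparsec
C = 2.998e8           # speed of light, m/s
D = 150.0 * MPC_IN_M  # assumed light-travel distance to Mkn 501, m

# Delay per unit energy implied by the quoted limit M_QG1 = 0.21e18 GeV
tau1 = (D / C) / 0.21e18          # s/GeV
delay_at_1_tev = tau1 * 1000.0    # seconds of lag for a 1 TeV photon
```

On these assumptions a 1 TeV photon lags a low-energy one by tens of seconds, comparable to the minute-scale flare variability that makes the measurement possible.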

Relevance: 30.00%

Abstract:

The complex permittivity of films of polyether ether ketone (PEEK) has been investigated over a wide range of frequencies. There is no relaxation peak in the range 1 Hz to 10⁵ Hz, but on the low-frequency side (10⁻⁴ Hz) there is evidence of a peak that can also be observed by thermally stimulated discharge current measurements. That peak is related to the glass transition temperature (Tg) of the polymer. The activation energy of the relaxation was found to be 0.44 eV, similar to that of several synthetic polymers. Space charges are important in the conduction mechanism, as shown by the discharge transient.
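An activation energy like the 0.44 eV quoted above is conventionally extracted from an Arrhenius plot: the relaxation peak frequency obeys f = f₀·exp(−Ea/(k_B·T)), so Ea is −k_B times the slope of ln f versus 1/T. A sketch that recovers Ea from synthetic data (not the PEEK measurements themselves):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_K, peak_freqs_Hz):
    """Arrhenius estimate: least-squares fit of ln f = ln f0 - Ea/(kB*T)
    on x = 1/T, y = ln f; returns Ea = -kB * slope, in eV."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(f) for f in peak_freqs_Hz]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -K_B_EV * slope

# Synthetic check: generate peak frequencies with Ea = 0.44 eV and recover it.
Ea_true, f0 = 0.44, 1e9
temps = [300.0, 320.0, 340.0, 360.0]
freqs = [f0 * math.exp(-Ea_true / (K_B_EV * t)) for t in temps]
```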

Relevance: 30.00%

Abstract:

A control law was designed for a satellite launcher (rocket) using eigenstructure assignment so that the vehicle tracks a reference attitude, decouples the yaw response from roll and pitch manoeuvres, and decouples the pitch response from roll and yaw manoeuvres. The design was based on a complete linear coupled model obtained from the full nonlinear vehicle model by linearization at each trajectory point. Finally, the design was assessed against the time-varying nonlinear vehicle model, showing good performance and robustness. The design method is explained, and a case study for the Brazilian satellite launcher (VLS rocket) is reported.
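Eigenstructure assignment generalizes pole placement: state feedback u = −Kx is chosen so the closed-loop matrix A − BK has prescribed eigenvalues (and, where the input structure allows, eigenvectors that decouple the roll/pitch/yaw channels). A toy single-axis sketch — a double integrator standing in for one attitude channel, not the linearized VLS model:

```python
import math

# Toy pole placement on a double integrator (one attitude axis):
# x' = A x + B u, u = -K x, desired closed-loop poles at -1 and -2.
A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [0.0, 1.0]

# Desired characteristic polynomial (s+1)(s+2) = s^2 + 3s + 2;
# for this controllable canonical form K = [2, 3] places the poles.
K = [2.0, 3.0]

# Closed-loop matrix A - B K
Acl = [[A[0][0] - B[0] * K[0], A[0][1] - B[0] * K[1]],
       [A[1][0] - B[1] * K[0], A[1][1] - B[1] * K[1]]]

# Eigenvalues of the 2x2 closed-loop matrix via the quadratic formula
tr = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
poles = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])
print(poles)  # [-2.0, -1.0]
```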

Relevance: 30.00%

Abstract:

This doctoral dissertation presents studies of the formation and evolution of galaxies through observations and simulations of galactic halos. The halo is the component of galaxies that hosts some of the oldest objects we know of in the cosmos; it is where clues to the history of galaxies are found, for example in how the chemical structure is related to the dynamics of objects in the halo. The dynamical and chemical structure of halos, both in the Milky Way's own halo and in two elliptical galaxies, is the underlying theme of the research. I focus on the density falloff and chemistry of the two external halos, and on the dynamics, density falloff, and chemistry of the Milky Way halo. I first study galactic halos via computer simulations, to test the long-term stability of an anomalous feature recently found in the kinematics of the Milky Way's metal-poor stellar halo. I find that the feature is transient, making its origin unclear. I use a second set of simulations to test whether an initially strong relation between the dynamics and chemistry of halo globular clusters in a Milky Way-type galaxy is affected by a merging satellite galaxy, and find that the relation remains strong despite a merger in which the satellite is a third of the mass of the host galaxy. From simulations, I move to observing halos in nearby galaxies, a challenging procedure as most of the light from galaxies comes from the disk and bulge components as opposed to the halo. I use Hubble Space Telescope observations of the halo of the galaxy M87 and, comparing to similar observations of NGC 5128, find that the chemical structure of the inner halo is similar for both of these giant elliptical galaxies. I use Very Large Telescope observations of the outer halo of NGC 5128 (Centaurus A) and, because of the difficulty of resolving dim extragalactic stellar halo populations, I introduce a new technique to subtract the contaminating background galaxies.
A transition from a metal-rich stellar halo to a metal-poor one has previously been discovered in two different types of galaxies, the disk galaxy M31 and the classic elliptical NGC 3379. Unexpectedly, I discover in a third type of galaxy, the merger remnant NGC 5128, that the density of metal-rich and metal-poor halo stars falls off at the same rate within galactocentric radii of 8-65 kpc, the limit of our observations. This thesis presents new results which open opportunities for future investigations.

Relevance: 30.00%

Abstract:

This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form; hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss-Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian; for this kind of proposal, the covariance matrix must be well tuned, which can be done with adaptive MCMC methods. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
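For a linear-Gaussian state space model the filtering recursion referred to above is available in closed form. A minimal scalar Kalman filter — a random-walk state observed in noise, with toy noise variances rather than one of the thesis models:

```python
def kalman_1d(ys, q=0.1, r=1.0, m0=0.0, p0=1.0):
    """Scalar Kalman filter for the model
        x_k = x_{k-1} + w_k,  w_k ~ N(0, q)   (random-walk state)
        y_k = x_k + v_k,      v_k ~ N(0, r)   (noisy observation)
    Returns the sequence of filtered means."""
    m, p, means = m0, p0, []
    for y in ys:
        # Predict: the random walk only inflates the variance
        p = p + q
        # Update with the new measurement
        k = p / (p + r)          # Kalman gain
        m = m + k * (y - m)
        p = (1.0 - k) * p
        means.append(m)
    return means

# The filtered mean tracks a constant observed level from a wrong prior.
means = kalman_1d([1.0] * 20)
```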

Relevance: 30.00%

Abstract:

The present study analyzes the ectopic development of rat skeletal muscle originated from transplanted satellite cells. Satellite cells (10⁶ cells) obtained from hindlimb muscles of newborn female 2BAW Wistar rats were injected subcutaneously into the dorsal area of adult male rats. After 3, 7, and 14 days, the transplanted tissues (N = 4-5) were processed for histochemical analysis of peripheral nerves, inactive X-chromosome, and acetylcholinesterase. Nicotinic acetylcholine receptors (nAChRs) were also labeled with tetramethylrhodamine-labeled alpha-bungarotoxin. The development of ectopic muscles was successful in 86% of the implantation sites. By day 3, the transplanted cells were organized as multinucleated fibers containing multiple clusters of nAChRs (N = 2-4), resembling those of non-innervated cultured skeletal muscle fibers. After 7 days, the transplanted cells appeared as a highly vascularized tissue formed by bundles of fibers containing peripheral nuclei. The presence of the X chromatin body indicated that the subcutaneously developed fibers originated from the female donor satellite cells. Unlike in the extensor digitorum longus muscle of the adult male rat (87.9 ± 1.0 µm; N = 213), the diameter of the ectopic fibers (59.1 µm; N = 213) did not follow a Gaussian distribution and had a higher coefficient of variation. After 7 and 14 days, the organization of the nAChR clusters was similar to that of clusters from adult innervated extensor digitorum longus muscle. These findings indicate the histocompatibility of rats from the 2BAW colony and that satellite cells transplanted into the subcutaneous space of adult animals are able to develop and fuse to form differentiated skeletal muscle fibers.

Relevance: 30.00%

Abstract:

The cognitive demands on the pilot of the F/A-18 multirole fighter are high. The level of cognitive load affects the fighter pilot's performance and subjective feelings. According to the Yerkes-Dodson principle, a very low or very high level of load decreases performance; the optimal level of load and performance is reached somewhere between the extremes. The fighter pilot's level of cognitive load is affected by the mental effort required to perform the flight task. The required level of effort depends on the difficulty and number of the tasks, the time available for them, and individual characteristics. In this study, the level of cognitive load was measured with two subjective rating instruments: NASA-TLX (National Aeronautics and Space Administration Task Load Index) and MCH (Modified Cooper-Harper). The study examined how the instruments' ratings changed as the level of cognitive load changed, and assessed their sensitivity and consistency. The measurements involved 35 F/A-18 multirole fighter pilots in active service in the Finnish Air Force. The subjects' mean flight time on the F/A-18 was 598 hours, with a standard deviation of 445 hours. The subjects' task was to fly 11 ILS (Instrument Landing System) instrument approaches in an F/A-18 virtual simulator from different starting distances from the runway threshold. During the cognitively demanding instrument approach task, the level of load was raised with additional tasks and by reducing the time available for the tasks. The subjects were asked to exert as much effort as possible while performing the tasks in order to maintain a good level of performance. Based on the results, the instruments' ratings changed as the level of cognitive load changed. The effect of the available time on the level of cognitive load was statistically highly significant. The instruments were sensitive to changes in the level of cognitive load and produced consistent ratings.
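NASA-TLX combines six subscale ratings (mental, physical, and temporal demand, performance, effort, frustration) into one workload score; in the classic weighted form each subscale's weight comes from 15 pairwise comparisons, so the weights sum to 15. A generic sketch of that computation — the numbers are illustrative, not data from this study:

```python
def nasa_tlx(ratings, weights):
    """Weighted NASA-TLX workload: sum(rating * weight) / 15, where the six
    weights come from 15 pairwise comparisons and sum to 15.
    Ratings are on the usual 0-100 scale."""
    assert len(ratings) == len(weights) == 6
    assert sum(weights) == 15
    return sum(r * w for r, w in zip(ratings, weights)) / 15.0

# Illustrative ratings for the six subscales: mental, physical, temporal,
# performance, effort, frustration (not data from the study above).
score = nasa_tlx([70, 20, 80, 40, 60, 30], [5, 0, 4, 2, 3, 1])
print(score)  # 64.0
```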

Relevance: 30.00%

Abstract:

Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there are cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time; this type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are known as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. The comparison also examines each algorithm's niching behaviour and analyzes the range of, and trade-off between, scalability and accuracy when tuning each algorithm's parameters.
These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
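The canonical PSO update underlying the niching variants compared here is v ← ωv + c₁r₁(pbest − x) + c₂r₂(gbest − x). A minimal global-best (i.e. deliberately non-niching) 1-D sketch, with conventional but arbitrary parameter values:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO on a 1-D objective (no niching):
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                        # each particle's best position
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]      # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) \
                              + c2 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to bounds
            y = f(xs[i])
            if y < pval[i]:
                pbest[i], pval[i] = xs[i], y
                if y < gval:
                    gbest, gval = xs[i], y
    return gbest, gval

best_x, best_y = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

A niching variant would replace the single `gbest` with per-subswarm or neighbourhood bests so that several optima can be held simultaneously.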

Relevance: 30.00%

Abstract:

The DEBRIS survey is carried out with the Herschel space telescope. It samples debris disks around stars in the solar neighbourhood. In the first part of this thesis, a polarimetric survey of 108 DEBRIS candidate stars is presented. Using the polarimeter of the Observatoire du Mont-Mégantic, observations were carried out to detect the polarization due to the presence of debris disks. Because of a low detection rate of polarized stars, a statistical analysis was performed to compare the polarization of stars with an infrared excess to that of stars without one. Using Mie scattering theory, a model was built to predict the polarization due to a debris disk; the model results are consistent with the observations. The second part of this thesis presents optical tests of the POL-2 polarimeter, built at the Université de Montréal. The imager of the James Clerk Maxwell Telescope is moving from the SCUBA instrument to SCUBA-2, which will be at least one hundred times faster than its predecessor. The polarimeter follows this upgrade, and a new polarimeter, POL-2, was installed on SCUBA-2 in July 2010. To verify the optical performance of POL-2, tests were carried out in the sub-millimetre laboratories of the University of Western Ontario in June 2009 and the University of Lethbridge in September 2009. These tests and their implications for future observations are discussed.

Relevance: 30.00%

Abstract:

This thesis addresses the ability to detect faint companions in the presence of speckle noise in the context of high-dynamic-range imaging for space astronomy. It focuses in particular on spectral differential imaging (SDI) obtained using a Fabry-Pérot etalon as a tunable filter. The performance of such a tunable filter is presented in the context of the Tunable Filter Imager (TFI), an instrument designed for the James Webb Space Telescope (JWST). The etalon's ability to suppress speckles with SDI is demonstrated experimentally with a prototype etalon installed on a laboratory bench. The contrast improvements vary with separation, ranging from a factor of 10 for separations greater than 11 λ/D up to a factor of 60 at 5 λ/D. These results are consistent with a theoretical study that uses a model based on Fresnel propagation to show that the speckle-suppression performance is limited by the optical bench and not by the etalon. Moreover, it is shown that a tunable filter is an attractive option for high-dynamic-range imaging combined with the SDI technique. A second study, based on Fresnel propagation through the TFI instrument and the telescope, quantifies the performance of SDI combined with an etalon for space astronomy. The results predict a contrast improvement of about 7 up to 100, depending on the instrument configuration. A comparison between SDI and roll subtraction was also simulated. Finally, the last part of this chapter addresses the performance of SDI for the Near-Infrared Imager and Slitless Spectrograph (NIRISS), designed to replace TFI as the science module aboard the Fine Guidance Sensor of JWST.
One hundred and four objects located toward the central region of the Orion Nebula were characterized with a multi-object, low-resolution, multi-band (0.85-2.4 µm) spectrograph. This study identified 7 new brown dwarfs and 4 new planetary-mass candidates. These objects are useful for determining the substellar initial mass function and for evaluating future atmospheric and evolutionary models of young stellar and substellar objects. Combining the measured H-band magnitudes and extinction values, the classified objects are used to build a Hertzsprung-Russell diagram of this stellar cluster. In agreement with previous studies, our results show that there is a single star-formation epoch that began about 1 million years ago. The derived initial mass function agrees with previous studies of other young clusters and of the galactic disk.