972 results for Telescope space debris satellite spectroscopy tracking photometry NASA ASI
Abstract:
Evaluating the performance of ocean-colour retrievals of total chlorophyll-a concentration requires direct comparison with concomitant and co-located in situ data. For global comparisons, these in situ match-ups should ideally be representative of the distribution of total chlorophyll-a concentration in the global ocean. The oligotrophic gyres constitute the majority of oceanic water, yet are under-sampled due to their inaccessibility and under-represented in global in situ databases. The Atlantic Meridional Transect (AMT) is one of only a few programmes that consistently sample oligotrophic waters. In this paper, we used a spectrophotometer on two AMT cruises (AMT19 and AMT22) to continuously measure absorption by particles in the water of the ship's flow-through system. From these optical data, continuous total chlorophyll-a concentrations were estimated with high precision and accuracy along each cruise and used to evaluate the performance of ocean-colour algorithms. We conducted the evaluation using level 3 binned ocean-colour products, and used the high spatial and temporal resolution of the underway system to maximise the number of match-ups on each cruise. Statistical comparisons show a significant improvement in the performance of satellite chlorophyll algorithms over previous studies, with root mean square errors on average less than half (~0.16 in log10 space) of those reported previously using global datasets (~0.34 in log10 space). This improved performance is likely due to the use of continuous absorption-based chlorophyll estimates, which are highly accurate, sample spatial scales more comparable with satellite pixels, and minimise human error. Previous comparisons might have reported higher errors due to regional biases in datasets and methodological inconsistencies between investigators. Furthermore, our comparison showed an underestimate in satellite chlorophyll at low concentrations in 2012 (AMT22), likely due to a small bias in satellite remote-sensing reflectance data. Our results highlight the benefits of using underway spectrophotometric systems for evaluating satellite ocean-colour data and underline the importance of maintaining in situ observatories that sample the oligotrophic gyres.
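As an illustration of the match-up statistic quoted above, the short Python sketch below computes a root mean square error in log10 space from paired satellite and in situ chlorophyll-a values. The function name and example numbers are invented for illustration and are not data from the AMT cruises.

    import numpy as np

    def log10_rmse(chl_satellite, chl_in_situ):
        """Root mean square error in log10 space between matched satellite
        and in situ total chlorophyll-a concentrations (mg m^-3)."""
        sat = np.asarray(chl_satellite, dtype=float)
        obs = np.asarray(chl_in_situ, dtype=float)
        # Keep only valid, positive match-ups (log10 is undefined otherwise).
        ok = np.isfinite(sat) & np.isfinite(obs) & (sat > 0) & (obs > 0)
        diff = np.log10(sat[ok]) - np.log10(obs[ok])
        return np.sqrt(np.mean(diff ** 2))

    # Hypothetical match-ups, for illustration only.
    satellite = [0.08, 0.11, 0.25, 1.4]
    in_situ = [0.10, 0.09, 0.30, 1.1]
    print(f"RMSE (log10 space): {log10_rmse(satellite, in_situ):.2f}")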
Abstract:
We present optical photometry and spectroscopy of the optical transient SN 2011A. Our data span 140 days after discovery, including BVRI u′g′r′i′z′ photometry and 11 epochs of optical spectroscopy. Originally classified as a type IIn supernova (SN IIn) due to the presence of narrow Hα emission, this object shows exceptional characteristics. First, the light curve shows a double plateau, a property only observed before in the impostor SN 1997bs. Second, SN 2011A has a very low luminosity (M_V = -15.72), placing it between normal luminous SNe IIn and SN impostors. Third, SN 2011A shows low-velocity, high-equivalent-width absorption close to the sodium doublet, which increases with time and is most likely of circumstellar origin. This evolution is also accompanied by a change in line profile; when the absorption becomes stronger, a P Cygni profile appears. We discuss SN 2011A in the context of interacting SNe IIn and SN impostors, which appears to confirm the uniqueness of this transient. While we favor an impostor origin for SN 2011A, we highlight the difficulty in differentiating between terminal and non-terminal interacting transients.
Abstract:
We present a photometric and spectroscopic study of the reddened type Ic supernova (SN) 2005at. We report our results based on the available data of SN 2005at, including late-time observations from the Spitzer Space Telescope and the Hubble Space Telescope. In particular, late-time mid-infrared observations are rare for type Ib/c SNe. In our study we find SN 2005at to be very similar photometrically and spectroscopically to another nearby type Ic SN 2007gr, underlining the prototypical nature of this well-followed type Ic event. The spectroscopy of both events shows similar narrow spectral line features. The radio observations of SN 2005at are consistent with fast evolution and low luminosity at radio wavelengths. The late-time Spitzer data suggest the presence of an unresolved light echo from interstellar dust and dust formation in the ejecta, both of which are unique observations for a type Ic SN. The late-time Hubble observations reveal a faint point source coincident with SN 2005at, which is very likely either a declining light echo of the SN or a compact cluster. For completeness we studied ground-based pre-explosion archival images of the explosion site of SN 2005at; however, this only yielded very shallow upper limits for the SN progenitor star. We derive a host galaxy extinction of A_V ∼ 1.9 mag for SN 2005at, which is relatively high for a SN in a normal spiral galaxy not viewed edge-on.
Abstract:
We report the discovery of the B[e] star VFTS 822 in the 30 Doradus star-forming region of the Large Magellanic Cloud, classified by optical spectroscopy from the VLT-FLAMES Tarantula Survey and complementary infrared photometry. VFTS 822 is a relatively low-luminosity (log(L/L⊙) = 4.04 ± 0.25) B8[e] star. In this Letter, we evaluate the evolutionary status of VFTS 822 and discuss its candidacy as a Herbig B[e] star. If the object is indeed in the pre-main sequence phase, it would present an exciting opportunity to spectroscopically measure mass accretion rates at low metallicity, to probe the effect of metallicity on accretion rates.
Abstract:
Ellerman Bombs (EBs) are often found to be co-spatial with bipolar photospheric magnetic fields. We use Hα imaging spectroscopy along with Fe I 6302.5 Å spectropolarimetry from the Swedish 1 m Solar Telescope (SST), combined with data from the Solar Dynamics Observatory, to study EBs and the evolution of the local magnetic fields at EB locations. EBs are found via an EB detection and tracking algorithm. Using NICOLE inversions of the spectropolarimetric data, we find that, on average, (3.43 ± 0.49) × 10^24 erg of stored magnetic energy disappears from the bipolar region during EB burning. The inversions also show flux cancellation rates of 10^14–10^15 Mx s^-1 and temperature enhancements of 200 K at the detection footpoints. We investigate the near-simultaneous flaring of EBs due to co-temporal flux emergence from a sunspot, which shows a decrease in transverse velocity when interacting with an existing, stationary area of opposite-polarity magnetic flux, resulting in the formation of the EBs. We also show that these EBs can be fueled further by additional, faster moving, negative magnetic flux regions.
Abstract:
We report the discovery, tracking, and detection circumstances for 85 trans-Neptunian objects (TNOs) from the first 42 deg² of the Outer Solar System Origins Survey. This ongoing r-band solar system survey uses the 0.9 deg² field of view MegaPrime camera on the 3.6 m Canada–France–Hawaii Telescope. Our orbital elements for these TNOs are precise to a fractional semimajor axis uncertainty < 0.1%. We achieve this precision in just two oppositions, as compared to the normal three to five oppositions, via a dense observing cadence and innovative astrometric technique. These discoveries are free of ephemeris bias, a first for large trans-Neptunian surveys. We also provide the necessary information to enable models of TNO orbital distributions to be tested against our TNO sample. We confirm the existence of a cold "kernel" of objects within the main cold classical Kuiper Belt and infer the existence of an extension of the "stirred" cold classical Kuiper Belt to at least several au beyond the 2:1 mean motion resonance with Neptune. We find that the population model of Petit et al. remains a plausible representation of the Kuiper Belt. The full survey, to be completed in 2017, will provide an exquisitely characterized sample of important resonant TNO populations, ideal for testing models of giant planet migration during the early history of the solar system.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
In the last thirty years, the emergence and progression of biologging technology has led to great advances in marine predator ecology. Large databases of location and dive observations from biologging devices have been compiled for an increasing number of diving predator species (such as pinnipeds, sea turtles, seabirds and cetaceans), enabling complex questions about animal activity budgets and habitat use to be addressed. Central to answering these questions is our ability to correctly identify and quantify the frequency of essential behaviours, such as foraging. Despite technological advances that have increased the quality and resolution of location and dive data, accurately interpreting behaviour from such data remains a challenge, and analytical methods are only beginning to unlock the full potential of existing datasets. This review evaluates both traditional and emerging methods and presents a starting platform of options for future studies of marine predator foraging ecology, particularly from location and two-dimensional (time-depth) dive data. We outline the different devices and data types available, discuss the limitations and advantages of commonly-used analytical techniques, and highlight key areas for future research. We focus our review on pinnipeds - one of the most studied taxa of marine predators - but offer insights that will be applicable to other air-breathing marine predator tracking studies. We highlight that traditionally-used methods for inferring foraging from location and dive data, such as first-passage time and dive shape analysis, have important caveats and limitations depending on the nature of the data and the research question. We suggest that more holistic statistical techniques, such as state-space models, which can synthesise multiple track, dive and environmental metrics whilst simultaneously accounting for measurement error, offer more robust alternatives. Finally, we identify a need for more research to elucidate the role of physical oceanography, device effects, study animal selection, and developmental stages in predator behaviour and data interpretation.
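As a concrete example of the traditional location-based metrics mentioned above, the sketch below computes a simplified, forward-only first-passage time along a track (the time taken to first move beyond a chosen radius of each location). The track, units and 5 km radius are invented for illustration; published first-passage-time analyses typically also count the time spent inside the circle before arriving at each point.

    import numpy as np

    def first_passage_times(times, x, y, radius):
        """Forward-only first-passage time at each track location: the time
        taken for the animal to first move beyond `radius` of that location,
        a common index of area-restricted search in tracking studies."""
        t = np.asarray(times, dtype=float)
        px = np.asarray(x, dtype=float)
        py = np.asarray(y, dtype=float)
        fpt = np.full(t.size, np.nan)
        for i in range(t.size):
            dist = np.hypot(px[i:] - px[i], py[i:] - py[i])
            outside = np.nonzero(dist > radius)[0]
            if outside.size:  # otherwise the track ends inside the circle
                fpt[i] = t[i + outside[0]] - t[i]
        return fpt

    # Hypothetical hourly track (positions in km), with a 5 km radius.
    hours = np.arange(10.0)
    x = np.array([0.0, 4.0, 8.0, 9.0, 9.5, 10.0, 10.2, 14.0, 18.0, 22.0])
    y = np.zeros(10)
    print(first_passage_times(hours, x, y, radius=5.0))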
Abstract:
A supernova (SN) is an explosion of a star at the end of its lifetime. SNe are classified into two types, namely type I and type II, based on their optical spectra. They have also been categorised, based on their explosion mechanism, into core-collapse supernovae (CCSNe) and thermonuclear supernovae. The CCSN group, which includes types IIP, IIn, IIL, IIb, Ib, and Ic, is produced when a massive star with an initial mass of more than 8 M⊙ explodes due to the collapse of its iron core. On the other hand, thermonuclear SNe originate from white dwarfs (WDs) made of carbon and oxygen in a binary system. Infrared astronomy covers observations of astronomical objects in infrared radiation. The infrared sky is not completely dark and it is variable. Observations of SNe in the infrared give different information than optical observations. Data reduction is required to correct the raw data for, for example, unusable pixels and the sky background. In this project, the NOTCam package in IRAF was used for the data reduction. For measuring magnitudes of the SNe, the aperture photometry method with the Gaia program was used. In this Master’s thesis, near-infrared (NIR) observations of three supernovae of type IIn (namely LSQ13zm, SN 2009ip and SN 2011jb), one of type IIb (SN 2012ey), one of type Ic (SN 2012ej) and one of type IIP (SN 2013gd) are studied with emphasis on luminosity and colour evolution. All observations were done with the Nordic Optical Telescope (NOT). Here, we used the classification by Mattila & Meikle (2001) [76], in which the SNe are differentiated by their infrared light curves into two groups, namely ’ordinary’ and ’slowly declining’. The light curves and colour evolution of these supernovae were obtained in the J, H and Ks bands. In this study, our data, combined with other observations, provide evidence to categorise LSQ13zm, SN 2012ej and SN 2012ey as being part of the ordinary type. We found interesting NIR behaviour of SN 2011jb, which led it to be classified as a slowly declining type.
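The thesis performs its photometry with the Gaia program; purely to illustrate the aperture photometry idea it refers to (sum the counts in a circular aperture, subtract a sky level estimated in a surrounding annulus, apply a zero point), here is a small NumPy sketch. The frame, source position and zero point are invented, and this is not the reduction pipeline used in the thesis.

    import numpy as np

    def aperture_magnitude(image, x0, y0, r_ap, r_in, r_out, zeropoint):
        """Simple circular-aperture photometry: sum the counts within r_ap
        pixels of (x0, y0), subtract the median sky estimated in the annulus
        r_in < r < r_out, and convert the result to a magnitude."""
        ny, nx = image.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        r = np.hypot(xx - x0, yy - y0)
        sky = np.median(image[(r > r_in) & (r < r_out)])  # per-pixel sky level
        flux = np.sum(image[r <= r_ap] - sky)             # sky-subtracted counts
        return zeropoint - 2.5 * np.log10(flux)

    # Toy frame: flat sky of ~100 counts plus a faint artificial "star".
    rng = np.random.default_rng(0)
    frame = rng.normal(100.0, 2.0, size=(50, 50))
    frame[23:28, 23:28] += 20.0
    print(aperture_magnitude(frame, 25, 25, r_ap=5, r_in=8, r_out=12, zeropoint=25.0))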
Abstract:
The aim of this thesis was threefold: firstly, to compare current player tracking technology in a single game of soccer; secondly, to investigate the running requirements of elite women’s soccer, in particular the use and application of athlete tracking devices; and finally, to examine how game style can be quantified and defined. Study One compared four different match analysis systems commonly used in both research and applied settings: video-based time-motion analysis, a semi-automated multiple camera based system, and two commercially available Global Positioning System (GPS) based player tracking systems at 1 Hertz (Hz) and 5 Hz respectively. A comparison was made between each of the systems when recording the same game. Total distance covered during the match for the four systems ranged from 10 830 ± 770 m (semi-automated multiple camera based system) to 9 510 ± 740 m (video-based time-motion analysis). At running speeds categorised as high-intensity running (> 15 km⋅h⁻¹), the semi-automated multiple camera based system reported the highest distance of 2 650 ± 530 m, with video-based time-motion analysis reporting the least distance covered with 1 610 ± 370 m. At speeds considered to be sprinting (> 20 km⋅h⁻¹), the video-based time-motion analysis reported the highest value (420 ± 170 m) and the 1 Hz GPS units the lowest value (230 ± 160 m). These results demonstrate that there are differences in the determination of absolute distances, and that comparison of results between match analysis systems should be made with caution. Currently, there is no criterion measure for these match analysis methods and as such it was not possible to determine whether one system was more accurate than another. Study Two provided an opportunity to apply player-tracking technology (GPS) to measure activity profiles and determine the physical demands of Australian international-level women soccer players. In four international women’s soccer games, data were collected on a total of 15 Australian women soccer players using a 5 Hz GPS based athlete tracking device. Results indicated that Australian women soccer players covered 9 140 ± 1 030 m during 90 min of play. The total distance covered by Australian women was less than the 10 300 m reportedly covered by female soccer players in the Danish First Division. However, there was no apparent difference in the estimated maximal aerobic capacity, as measured by multi-stage shuttle tests, between these studies. This study suggests that contextual information, including the “game style” of both the team and the opposition, may influence physical performance in games. Study Three examined the effect the level of the opposition had on the physical output of Australian women soccer players. In total, 58 game files from 5 Hz athlete-tracking devices from 13 international matches were collected. These files were analysed to examine relationships between physical demands, represented by total distance covered, high-intensity running (HIR) and distances covered sprinting, and the level of the opposition, as represented by the Fédération Internationale de Football Association (FIFA) ranking at the time of the match. Higher-ranking opponents elicited less high-speed running and greater low-speed activity compared to playing teams of similar or lower ranking. The results are important to coaches and practitioners in the preparation of players for international competition, and showed that the differing physical demands required were dependent on the level of the opponents.
The results also highlighted the need for continued research in the area of integrating contextual information in team sports and demonstrated that soccer can be described as having dynamic and interactive systems. The influence of playing strategy, tactics and, subsequently, the overall game style was highlighted as playing a significant part in the physical demands of the players. Study Four explored the concept of game style in field sports such as soccer. The aim of this study was to provide an applied framework with suggested metrics for use by coaches, media, practitioners and sports scientists. Based on the findings of Studies 1–3 and a systematic review of the relevant literature, a theoretical framework was developed to better understand how a team’s game style could be quantified. Soccer games can be broken into key moments of play, and for each of these moments we categorised metrics that provide insight into success or otherwise, to help quantify and measure different playing styles. This study highlights that, to date, there has been no clear definition of game style in team sports, and as such a novel definition of game style is proposed that can be used by coaches, sport scientists, performance analysts, the media and the general public. Studies 1–3 outline four common methods of measuring the physical demands of soccer: video-based time-motion analysis, GPS at 1 Hz and at 5 Hz, and semi-automated multiple camera based systems. As there are no semi-automated multiple camera based systems available in Australia, primarily due to cost and logistical reasons, GPS is widely accepted for use in team sports in tracking player movements in training and competition environments. This research identified that, although there are some limitations, GPS player-tracking technology may be a valuable tool in assessing running demands in soccer players and subsequently contribute to our understanding of game style. The results of the research undertaken also reinforce the differences between methods used to analyse player movement patterns in field sports such as soccer and demonstrate that the results from different systems, such as GPS based athlete tracking devices and semi-automated multiple camera based systems, cannot be used interchangeably. Indeed, the magnitude of measurement differences between methods suggests that significant measurement error is evident. This was apparent even when the same technology was used at different sampling rates, such as GPS systems measuring at either 1 Hz or 5 Hz. It was also recognised that other factors influence how team sport athletes behave within an interactive system. These factors included the strength of the opposition and their style of play. In turn, these can impact the physical demands of players, which change from game to game, and even within games, depending on these contextual features. Finally, the concept of game style and how it might be measured was examined. Game style was defined as "the characteristic playing pattern demonstrated by a team during games. It will be regularly repeated in specific situational contexts such that measurement of variables reflecting game style will be relatively stable. Variables of importance are player and ball movements, interaction of players, and will generally involve elements of speed, time and space (location)".
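As a sketch of how distances in the speed zones used above (> 15 and > 20 km⋅h⁻¹) can be derived from a GPS device's instantaneous speed trace, the Python fragment below bins per-sample distances by speed. The 5 Hz sampling rate matches the devices described in the thesis, but the speed trace itself is invented.

    import numpy as np

    def distance_by_speed_zone(speed_kmh, hz=5, hir_threshold=15.0, sprint_threshold=20.0):
        """Total, high-intensity-running (HIR) and sprint distances in metres
        from instantaneous GPS speeds (km/h) sampled at `hz` Hz."""
        v_kmh = np.asarray(speed_kmh, dtype=float)
        step = (v_kmh / 3.6) / hz            # metres covered in each sample interval
        total = step.sum()
        hir = step[v_kmh > hir_threshold].sum()
        sprint = step[v_kmh > sprint_threshold].sum()
        return total, hir, sprint

    # Hypothetical 5 Hz speed trace (km/h) covering two seconds of play.
    trace = [6, 8, 12, 16, 18, 21, 23, 19, 14, 9]
    print(distance_by_speed_zone(trace))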
Abstract:
Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the chip's heat dissipation could lead to severe reliability issues and error-prone chip behavior (e.g. timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS), clock gating, etc. However, most of these techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system on a given design that could provide accurate real-time temperature feedback; (2) what statistical techniques could be used to estimate the full-chip thermal profile based on very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes could impact the accuracy of the thermal estimation. The thermal tracking methodology proposed in this work is enabled by on-chip sensors which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then we introduce an accurate thermal model for the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that could utilize such correlation to estimate accurate chip-level thermal profiles in real time. Such estimation is performed based on limited sensor information because sensors are usually resource-constrained and noise-corrupted. We also took a further step and extended the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system is switching among workloads that have very distinct characteristics. Through experiments, our approaches have demonstrated promising results with much higher accuracy compared to existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
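The abstract builds on Kalman-filter-based estimation; as a generic illustration only (not the extended, leakage-aware filter described above), the sketch below runs one textbook linear Kalman filter step per sensor sample to estimate three hypothetical module temperatures from a single noisy on-chip sensor. All matrices and readings are invented.

    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/update step of a standard linear Kalman filter.
        x, P : state estimate (module temperatures) and its covariance
        z    : noisy on-chip sensor readings for this control interval
        A, H : state-transition and sensor-selection (observation) matrices
        Q, R : process-noise and sensor-noise covariances"""
        # Predict how the module temperatures evolve over one interval.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Correct the prediction with the limited, noisy sensor observations.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Hypothetical 3-module chip observed by a single sensor on module 0.
    A = np.array([[0.90, 0.05, 0.00],
                  [0.05, 0.90, 0.05],
                  [0.00, 0.05, 0.90]])
    H = np.array([[1.0, 0.0, 0.0]])
    Q = 0.1 * np.eye(3)
    R = np.array([[0.5]])
    x, P = np.full(3, 60.0), np.eye(3)        # initial temperature estimates (deg C)
    for z in ([61.2], [62.0], [63.1]):
        x, P = kalman_step(x, P, np.array(z), A, H, Q, R)
    print(x)                                   # estimated module temperatures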