984 results for Annular Aperture Array


Relevance:

20.00%

Publisher:

Abstract:

The Askar'yan Radio Array (ARA), a neutrino detector to be situated at the South Pole next to the IceCube detector, will be sensitive to ultrahigh-energy cosmic neutrinos above 0.1 EeV and will have its greatest sensitivity in the favored energy range from 0.1 EeV up to 10 EeV. Neutrinos of this energy are guaranteed by current observations of the GZK cutoff by the HiRes and Pierre Auger observatories. The detection method is based on Cherenkov emission from a neutrino-induced cascade in the ice, coherent at radio wavelengths, which was predicted by Askar'yan in 1962 and verified in beam tests at SLAC in 2006. The detector is planned to consist of 37 stations with 16 antennas each, deployed at depths of up to 200 m below the ice surface. During the last two polar seasons (2010-2011, 2011-2012), a prototype station and a first detector station were successfully deployed and are taking data. These data are being analyzed to study the ambient noise background and the radio-frequency properties of the South Pole ice sheet. A worldwide collaboration is working on the planning, construction, and data analysis of the detector array. This article gives a short report on the status of the ARA detector and shows recent results from the recorded data. © 2013 AIP Publishing LLC.

Abstract:

Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the DACs can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to the existence of an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. 
This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all DAC output bit widths. This suggests that a nearly N-fold gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs, in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels at all frequencies tested. This means there is no benefit to using an array of DDSs for mitigating in-band quantizer nonlinearities; alternate methods of harmonic spur management must be employed.
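
The central array argument above — coherent signals add in amplitude while uncorrelated noise adds in power, giving an SNR gain of N (10·log10 N dB) — can be checked with a short simulation. The channel count, tone frequency, and noise level below are illustrative only, not parameters of the 14-channel testbed.

```python
import math
import random

random.seed(1)

N_CHANNELS = 8       # hypothetical channel count, not the 14-channel testbed
N_SAMPLES = 50000
NOISE_RMS = 0.5      # per-channel additive noise, uncorrelated by assumption

def power(x):
    return sum(v * v for v in x) / len(x)

# The same tone appears on every channel (fully correlated "signal").
tone = [math.sin(2 * math.pi * 0.01 * k) for k in range(N_SAMPLES)]

# Single channel: SNR against one independent noise realization.
noise_1 = [random.gauss(0.0, NOISE_RMS) for _ in range(N_SAMPLES)]
snr_single = power(tone) / power(noise_1)

# N-channel combiner: tones add in amplitude (power scales as N^2),
# independent noises add in power (scales as N).
combined_tone = [N_CHANNELS * v for v in tone]
combined_noise = [sum(random.gauss(0.0, NOISE_RMS) for _ in range(N_CHANNELS))
                  for _ in range(N_SAMPLES)]
snr_array = power(combined_tone) / power(combined_noise)

gain_db = 10 * math.log10(snr_array / snr_single)
print(f"SNR gain for {N_CHANNELS} channels: {gain_db:.1f} dB")  # ideal: 10*log10(8) ≈ 9.0
```

As the abstract notes, any correlation between channels (as measured for the harmonic spurs) erodes this ideal N-fold gain.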

Abstract:

The protein lysate array is an emerging technology for quantifying protein concentration ratios in multiple biological samples. It is gaining popularity and has the potential to answer questions about post-translational modifications and protein pathway relationships. Statistical inference for a parametric quantification procedure has been inadequately addressed in the literature, mainly due to two challenges: the increasing dimension of the parameter space and the need to account for dependence in the data. Each chapter of this thesis addresses one of these issues. Chapter 1 introduces protein lysate array quantification and presents the motivations and goals for this thesis work. In Chapter 2, we develop a multi-step procedure for sigmoidal models, ensuring consistent estimation of the concentration level with full asymptotic efficiency. The results obtained in this chapter justify inferential procedures based on large-sample approximations. Simulation studies and real data analysis illustrate the finite-sample performance of the proposed method. The multi-step procedure is simpler in both theory and computation than the single-step least squares method used in current practice. In Chapter 3, we introduce a new model that accounts for the dependence structure of the errors through a nonlinear mixed effects model, and we consider a method to approximate the maximum likelihood estimator of all the parameters. Using simulation studies on various error structures, we show that for data with non-i.i.d. errors the proposed method yields more accurate estimates and better confidence intervals than the existing single-step least squares method.
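
As a toy illustration of the kind of sigmoidal quantification studied here: a sample's dilution series is observed through a sigmoidal response curve, and its concentration is recovered by least squares. The curve parameters and the grid-search estimator below are invented for the sketch; they are not the thesis's multi-step procedure.

```python
import math

# Illustrative sigmoidal response: intensity = A + B / (1 + exp(-(x - C))),
# where x is log2 concentration. A, B, C are invented values, not thesis estimates.
A, B, C = 0.5, 3.0, 2.0

def response(x):
    return A + B / (1 + math.exp(-(x - C)))

# A dilution series: each step halves the concentration (x drops by 1).
TRUE_X0 = 4.0                              # "unknown" log2 concentration
dilution_steps = [0, -1, -2, -3, -4]
observed = [response(TRUE_X0 + d) for d in dilution_steps]   # noise-free sketch

def sse(x0):
    """Sum of squared errors of the dilution series against a candidate x0."""
    return sum((response(x0 + d) - y) ** 2
               for d, y in zip(dilution_steps, observed))

# Grid-search least squares, a simple stand-in for the estimators in the thesis.
grid = [i / 100 for i in range(0, 801)]    # candidate x0 values in [0, 8]
x0_hat = min(grid, key=sse)
print(f"estimated log2 concentration: {x0_hat:.2f}")
```

With noisy, dependent errors — the setting of Chapter 3 — this naive least squares step is exactly what the mixed-effects formulation improves upon.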

Abstract:

The purpose of this paper is to survey and assess the state of the art in automatic target recognition for synthetic aperture radar imagery (SAR-ATR). The aim is not to develop an exhaustive survey of the voluminous literature, but rather to capture in one place the various approaches for implementing a SAR-ATR system. This paper is meant to be as self-contained as possible, and it approaches the SAR-ATR problem from a holistic, end-to-end perspective. A brief overview of the breadth of the SAR-ATR challenges is given, couched in terms of single-channel SAR and extendable to multi-channel SAR systems. The stages of the basic SAR-ATR system structure are defined, and the motivations behind the requirements and constraints on the system constituents are addressed. For each stage in the SAR-ATR processing chain, a taxonomy for surveying the numerous methods published in the open literature is proposed, and carefully selected works from the literature are presented under the proposed taxa. Novel comparisons, discussions, and comments are provided throughout the paper. A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed and applied to the works surveyed here. Finally, various interrelated issues, such as standard operating conditions, extended operating conditions, and target-model design, are discussed. This paper is a contribution toward fulfilling the objective of end-to-end SAR-ATR system design.

Abstract:

This thesis presents the achievements and scientific work conducted using a previously designed and fabricated 64 x 64-pixel ion camera implemented in a 0.35 μm CMOS technology. We used an array of ion-sensitive field-effect transistors (ISFETs) to monitor and measure chemical and biochemical reactions in real time. The silicon chip measured 4.2 x 4.3 mm, while the ISFET array itself covered an area of 715.8 x 715.8 μm, consisting of 4096 ISFET pixels in total with a 1 μm separation between them. The ion-sensitive layer, where all reactions took place, was a silicon nitride layer, the final top layer of the austriamicrosystems 0.35 μm CMOS process used. Our measurements showed an average sensitivity of 30 mV/pH. With the addition of extra layers, we monitored a 65 mV voltage difference in our experiments with glucose and hexokinase, compared with a difference of 85 mV reported in the literature for a similar glucose reaction. We also measured a 55 mV voltage difference in photosynthesis experiments with a biofilm made from cyanobacteria, compared with a 33.7 mV difference reported in the literature for a similar cyanobacterial species using voltammetric detection methods. The experiments were monitored with PXIe-6358 measurement cards controlled by LabVIEW software. The chip was packaged and encapsulated using a PGA-100 chip carrier and a two-component commercial epoxy. A printed circuit board (PCB) had also been previously designed to interface the chip with the measurement cards.
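
At the reported average sensitivity of 30 mV/pH, the measured voltage shifts translate directly into pH changes. A trivial sketch of that conversion (valid only within the sensor's linear range, which is an assumption here):

```python
SENSITIVITY_MV_PER_PH = 30.0   # average sensitivity reported above

def delta_ph(delta_mv):
    """Convert an ISFET output-voltage shift (mV) into a pH change,
    assuming operation within the sensor's linear range."""
    return delta_mv / SENSITIVITY_MV_PER_PH

# The 65 mV shift from the glucose/hexokinase experiment maps to ~2.17 pH units.
print(f"{delta_ph(65):.2f} pH units")
```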

Abstract:

The goal of this study is to better simulate microscopic and voxel-based dynamic contrast enhancement in magnetic resonance imaging. Specifically, errors imposed by the traditional two-compartment model are reduced by introducing a novel Krogh cylinder network. The two-compartment model was developed for macroscopic pharmacokinetic analysis of dynamic contrast enhancement; generalizing it to voxel dimensions imposes physiologically unrealistic assumptions because of the significant decrease in scale. In this project, a system of microscopic exchange between plasma and the extravascular-extracellular space is built while numerically simulating the local contrast agent flow between and inside image elements. To do this, tissue parameter maps were created, contrast agent was introduced to the tissue via a flow lattice, and various data sets were simulated. The effects of sources, tissue heterogeneity, and the contribution of individual tissue parameters to an image are modeled. Further, the study demonstrates the effect of a priori flow maps on image contrast, indicating that flow data are as important as permeability data when analyzing tumor contrast enhancement. In addition, the simulations indicate that it may be possible to obtain tumor-type diagnostic information by acquiring both flow and permeability data.
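
For reference, the traditional two-compartment (Tofts-type) model that this work refines reduces to a single ODE per voxel, dCt/dt = Ktrans·Cp(t) − kep·Ct(t). The sketch below integrates it with forward Euler; the rate constants and arterial input function are illustrative stand-ins, not values from the study.

```python
import math

# Illustrative rate constants (1/min), not values from the study.
KTRANS = 0.25   # transfer constant, plasma -> extravascular-extracellular space
KEP = 0.5       # rate constant, extravascular-extracellular space -> plasma
DT = 0.01       # time step, minutes

def cp(t):
    """Toy arterial input function: fast wash-in, exponential wash-out."""
    return 5.0 * t * math.exp(-t / 1.5)

# Forward-Euler integration of dCt/dt = KTRANS*Cp(t) - KEP*Ct(t).
ct, t, trace = 0.0, 0.0, []
while t < 10.0:
    ct += DT * (KTRANS * cp(t) - KEP * ct)
    trace.append(ct)
    t += DT

peak = max(trace)
print(f"peak tissue concentration: {peak:.2f} (arbitrary units)")
```

The voxel-scale criticism above is that Cp is not a single well-mixed input at these dimensions — the Krogh cylinder network replaces it with locally simulated flow between image elements.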

Abstract:

Ocean bottom pressure records from eight stations of the Cascadia array are used to investigate the properties of short surface gravity waves with frequencies ranging from 0.2 to 5 Hz. It is found that the pressure spectrum at all sites is a well-defined function of the wind speed U10 and frequency f, with only a minor shift of a few dB from one site to another that can be attributed to variations in bottom properties. This observation can be combined with the theoretical prediction that the ocean bottom pressure spectrum is proportional to the surface gravity wave spectrum E(f) squared, times an overlap integral I(f) that is determined by the directional wave spectrum at each frequency. This combination, using E(f) estimated from modeled or parametric spectra, yields an overlap integral I(f) that is a function of the local wave age, expressed through the ratio f/fPM. This function is maximum for f/fPM = 8 and decreases by 10 dB toward f/fPM = 2 and f/fPM = 30. This shape of I(f) can be interpreted as a maximum width of the directional wave spectrum at f/fPM = 8, possibly equivalent to an isotropic directional spectrum, with a narrower directional distribution toward both the dominant low frequencies and the higher capillary-gravity wave frequencies.
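
The stated proportionality — bottom pressure spectrum ∝ E(f)² · I(f) — can be sketched with toy spectra. Every functional form and constant below is invented purely to mimic the shapes described (I(f) peaking near f/fPM = 8 and dropping roughly 10 dB toward f/fPM = 2 and 30); none is taken from the paper.

```python
import math

FPM = 0.1   # hypothetical Pierson-Moskowitz-like peak frequency, Hz

def overlap_integral(f):
    """Toy I(f): equal to 1 at f/FPM = 8 and roughly 10 dB lower toward
    f/FPM = 2 and f/FPM = 30, mimicking the shape described above."""
    x = math.log10(f / (8 * FPM))
    return 10 ** (-2.5 * x * x)

def wave_spectrum(f):
    """Toy E(f) with an f^-4 tail above the peak."""
    return (f / FPM) ** -4 if f >= FPM else 0.0

def bottom_pressure_spectrum(f, const=1.0):
    # The stated proportionality: pressure spectrum ~ E(f)^2 * I(f).
    return const * wave_spectrum(f) ** 2 * overlap_integral(f)

for f in (0.2, 0.8, 3.0):
    print(f"f = {f} Hz: I = {overlap_integral(f):.3f}, "
          f"Fp ~ {bottom_pressure_spectrum(f):.3e}")
```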

Abstract:

This paper aims to provide aperture corrections for emission lines in a sample of spiral galaxies from the Calar Alto Legacy Integral Field Area Survey (CALIFA) database. In particular, we explore the behavior of the log10{([O III] λ5007/Hβ)/([N II] λ6583/Hα)} (O3N2) and log10([N II] λ6583/Hα) (N2) flux ratios, since they are closely connected to different empirical calibrations of the oxygen abundance in star-forming galaxies. We compute the median growth curves of Hα, Hα/Hβ, O3N2, and N2 up to 2.5 R_50 and 1.5 disk R_eff. These distances cover most of the optical spatial extent of the CALIFA galaxies. The growth curves simulate the effect of observing galaxies through apertures of varying radii. We split these growth curves by morphological type and stellar mass to check for any dependence on these properties. The median growth curve of the Hα flux increases monotonically with radius, with no strong dependence on galaxy inclination, morphological type, or stellar mass. The median growth curve of the Hα/Hβ ratio decreases monotonically from the center toward larger radii, showing for small apertures a maximum value ≈10% larger than the integrated one; it shows no dependence on inclination, morphological type, or stellar mass. The median growth curve of N2 behaves similarly, decreasing from the center toward larger radii, with no strong dependence on inclination, morphological type, or stellar mass. Finally, the median growth curve of O3N2 increases monotonically with radius and shows no dependence on inclination. However, at small radii it shows systematically higher values for galaxies of earlier morphological types and for high-stellar-mass galaxies.
Applying our aperture corrections to a sample of galaxies from the SDSS survey at 0.02 ≤ z ≤ 0.3 shows that the average difference between fiber-based and aperture-corrected oxygen abundances, for different galaxy stellar mass and redshift ranges, is typically ≈11%, depending on the abundance calibration used. This average difference is systematically biased, though still within the typical uncertainties of oxygen abundances derived from empirical calibrations. Caution must be exercised when using observations of galaxies at small radii (e.g., below 0.5 R_eff), given the high dispersion around the median growth curves. Thus, applying these median aperture corrections to derive abundances for individual galaxies is not recommended when their fluxes come from radii much smaller than either R_50 or R_eff.
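
The two line indices used throughout are simple flux-ratio logarithms. A minimal sketch follows; the line fluxes are made-up numbers, and the abundance relation quoted in the comment is the widely used Pettini & Pagel (2004) O3N2 calibration, cited only as an example of the empirical calibrations the paper refers to.

```python
import math

def n2(f_nii, f_halpha):
    """N2 index: log10([N II] lambda6583 / Halpha)."""
    return math.log10(f_nii / f_halpha)

def o3n2(f_oiii, f_hbeta, f_nii, f_halpha):
    """O3N2 index: log10(([O III] lambda5007 / Hbeta) / ([N II] lambda6583 / Halpha))."""
    return math.log10((f_oiii / f_hbeta) / (f_nii / f_halpha))

# Hypothetical line fluxes (arbitrary units), for illustration only.
f_oiii, f_hbeta, f_nii, f_halpha = 1.2, 1.0, 0.4, 2.9
print("N2   =", round(n2(f_nii, f_halpha), 3))                      # -0.86
print("O3N2 =", round(o3n2(f_oiii, f_hbeta, f_nii, f_halpha), 3))   # 0.94

# Example empirical calibration (Pettini & Pagel 2004):
# 12 + log10(O/H) ≈ 8.73 - 0.32 * O3N2
oh = 8.73 - 0.32 * o3n2(f_oiii, f_hbeta, f_nii, f_halpha)
print("12 + log(O/H) ≈", round(oh, 2))
```

An aperture correction changes the measured flux ratios, and hence the index values fed into such calibrations — which is why the ≈11% abundance differences above matter.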

Abstract:

Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded, international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes.
Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia, and harmful algal bloom studies, as well as the management of living marine resources. In preparation for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Sub-polar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects motivated a planning meeting to discuss the requirements for and applications of a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array. Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the resolution needed to greatly improve our understanding of biogeochemical processes and to enable significant improvements in ecosystem models.
With an endurance of four years per Biogeochemical-Argo float, maintaining a 1000-float array would require the procurement and deployment of 250 new floats per year. The lifetime cost of a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000, so a global Biogeochemical-Argo system would cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, with ~$6,250,000 each for the EU and for Austral/Asia and Canada. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and starting point for such a system.
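
The maintenance arithmetic quoted above is easy to verify; the split of the non-US half into two equal shares (EU, and Austral/Asia plus Canada) is inferred from the abstract's figures.

```python
ARRAY_SIZE = 1000          # target number of biogeochemical floats
ENDURANCE_YEARS = 4        # per-float endurance
LIFETIME_COST = 100_000    # USD per float: capital, calibration, data handling

floats_per_year = ARRAY_SIZE / ENDURANCE_YEARS     # replacements needed: 250
annual_cost = floats_per_year * LIFETIME_COST      # global cost: $25,000,000/yr
us_share = annual_cost / 2                         # US provides half: $12,500,000
other_share = (annual_cost / 2) / 2                # each remaining share: $6,250,000

print(floats_per_year, annual_cost, us_share, other_share)
```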

Abstract:

Nowadays, photovoltaic (PV) technology is consolidated as a source of renewable energy, and maximizing the energy efficiency of PV plants is a major research challenge. The main requirement for this purpose is to know, in real time, the performance of each of the PV modules that make up the PV field. To this end, a PLC-communications-based Smart Monitoring and Communications Module, able to monitor the operating parameters at the PV-module level, has been developed at the University of Malaga. With this device, one can check whether any of the panels is underperforming due to a malfunction or partial shadowing of its surface. Since such fluctuations in the electricity production of a single panel affect the overall output of all panels that make up a string, it is necessary to isolate the problem and reroute the energy through alternative paths in a PV panel array configuration.

Abstract:

One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "super-Earths", planets with sizes intermediate between those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; and even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions through two instrumentation projects.

The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
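
The detectability trade-offs above rest on the standard radial-velocity semi-amplitude formula, K = (2πG/P)^(1/3) · m_p sin i / ((M_* + m_p)^(2/3) √(1 − e²)). The sketch below evaluates it, sanity-checked against Jupiter's reflex motion of the Sun; it is a generic textbook relation, not the thesis's survey-optimization code.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg
M_EARTH = 5.972e24   # kg
YEAR = 3.156e7       # s

def rv_semi_amplitude(m_planet, m_star, period_s, inc=math.pi / 2, ecc=0.0):
    """K in m/s: (2*pi*G/P)^(1/3) * m_p*sin(i) / ((M_*+m_p)^(2/3) * sqrt(1-e^2))."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * math.sin(inc)
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - ecc ** 2)))

k_jup = rv_semi_amplitude(M_JUP, M_SUN, 11.86 * YEAR)    # ~12.5 m/s
k_earth = rv_semi_amplitude(M_EARTH, M_SUN, 1.0 * YEAR)  # ~0.09 m/s
print(f"Jupiter: {k_jup:.2f} m/s, Earth: {k_earth:.3f} m/s")
```

The sub-0.1 m/s Earth signal is what makes the instrumental noise floors discussed above so important for habitable-zone surveys.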

We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system etendue, as well as keeping costs and time to deployment down. We present calculations of the expected planet yield, and data showing the system performance from our testing and development of the system at Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at Mt. Hopkins observatory in Arizona.

The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study their atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high performance adaptive optics system to unblur the point-spread function of the parent star through the atmosphere, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common path "speckle" aberrations that can overwhelm any planetary companions.

To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200-inch Hale telescope. It has two focal planes and two pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.

A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.

One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.

Abstract:

This research investigated annular field reversed configuration (AFRC) devices for high-power electric propulsion by demonstrating the acceleration of these plasmoids in an experimental prototype and measuring the plasmoid's velocity, impulse, and energy efficiency. The AFRC plasmoid translation experiment was designed and constructed with the aid of a dynamic circuit model. Two versions of the experiment were built, using underdamped RLC circuits at 10 kHz and 20 kHz. Input energies were varied from 100 J/pulse to 1000 J/pulse for the 10 kHz bank and were 100 J/pulse for the 20 kHz bank. The plasmoids were formed in a static argon gas fill at pressures from 1 mTorr to 50 mTorr. Translation of the plasmoid was to be accomplished by incorporating a small taper into the outer coil, with a half angle of 2°. Magnetic field diagnostics, plasma probes, and single-frame imaging were used to measure the plasmoid's velocity and diagnose its behavior. Full details of the device design, construction, and diagnostics are provided in this dissertation. The results demonstrated that a repeatable AFRC plasmoid was produced between the coils, yet it failed to translate under all tested conditions. The data revealed that the plasmoid's lifetime was limited to only a few (4-10) μs, too short for translation at low energy. A global stability study showed that the plasma suffered a radial collapse onto the inner wall early in its lifecycle. The radial collapse was traced to a magnetic pressure imbalance. A correction made to the circuit succeeded in restoring an equilibrium pressure balance and prolonging radial stability by an additional 2.5 μs. The equilibrium state was sufficient to confirm that the plasmoid current in an AFRC reaches a steady state prior to the peak of the coil currents. This implies that the plasmoid will always be driven to the inner wall unless it translates out of the coils prior to the peak coil currents.
However, ejecting the plasmoid before the peak coil currents results in severe efficiency losses. These results demonstrate the difficulty of designing an AFRC experiment for translation, as the requirements for stability, pressure balance, and efficient translation can conflict.
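
The 10 kHz and 20 kHz banks mentioned above are characterized by the underdamped series-RLC ringing frequency, f = √(1/LC − (R/2L)²) / 2π. The component values below are invented to land near the 10 kHz bank; the experiment's actual L, R, and C are not given in the abstract.

```python
import math

# Invented component values that land near the 10 kHz bank; the actual
# circuit parameters of the experiment are not given in the abstract.
L = 2.0e-6    # inductance, H
C = 127e-6    # capacitance, F
R = 0.02      # resistance, ohm

alpha = R / (2 * L)            # damping rate, 1/s
w0 = 1 / math.sqrt(L * C)      # undamped resonance, rad/s
assert alpha < w0, "underdamped operation requires alpha < w0"

# Ringing frequency of an underdamped series RLC discharge.
f_ring = math.sqrt(w0 ** 2 - alpha ** 2) / (2 * math.pi)
print(f"ringing frequency: {f_ring / 1e3:.1f} kHz")   # ~10.0 kHz
```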