857 results for serrated aperture


Relevance: 10.00%

Abstract:

A new 2-D hydrophone array for ultrasound therapy monitoring is presented, along with a novel algorithm for passive acoustic mapping using a sparse weighted aperture. The array is constructed using existing polyvinylidene fluoride (PVDF) ultrasound sensor technology, and is utilized for its broadband characteristics and its high receive sensitivity. For most 2-D arrays, high-resolution imagery is desired, which requires a large aperture at the cost of a large number of elements. The proposed array's geometry is sparse, with elements only on the boundary of the rectangular aperture. The missing information from the interior is filled in using linear imaging techniques. After receiving acoustic emissions during ultrasound therapy, this algorithm applies an apodization to the sparse aperture to limit side lobes and then reconstructs acoustic activity with high spatiotemporal resolution. Experiments show verification of the theoretical point spread function, and cavitation maps in agar phantoms correspond closely to predicted areas, showing the validity of the array and methodology.
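The mapping step described above, back-projecting received traces onto a grid of candidate source points with per-element apodization weights, can be sketched as a plain delay-and-sum loop. This is a minimal illustration under assumed sound speed and sampling values, not the authors' algorithm; `boundary_elements` simply mimics the sparse, boundary-only geometry.

```python
import numpy as np

C = 1500.0   # assumed speed of sound [m/s]
FS = 10e6    # assumed sampling rate [Hz]

def boundary_elements(nx, ny, pitch):
    """Element x-y positions on the boundary of an nx-by-ny rectangular aperture."""
    return np.array([(i * pitch, j * pitch)
                     for i in range(nx) for j in range(ny)
                     if i in (0, nx - 1) or j in (0, ny - 1)])

def passive_map(rf, elems, grid, apod=None):
    """Delay-and-sum back-projection of RF traces onto candidate source points.

    rf    : (n_elem, n_samp) received traces
    elems : (n_elem, 2) element positions in the z = 0 plane [m]
    grid  : (n_pix, 3) candidate source positions [m]
    apod  : (n_elem,) apodization weights used to limit side lobes
    """
    n_elem, n_samp = rf.shape
    apod = np.ones(n_elem) if apod is None else apod
    energy = np.zeros(len(grid))
    for p, (x, y, z) in enumerate(grid):
        d = np.sqrt((elems[:, 0] - x) ** 2 + (elems[:, 1] - y) ** 2 + z ** 2)
        lags = np.round(d / C * FS).astype(int)
        n = n_samp - lags.max()          # usable aligned trace length
        summed = np.zeros(n)
        for e in range(n_elem):          # align each trace by its time of flight
            summed += apod[e] * rf[e, lags[e]:lags[e] + n]
        energy[p] = np.sum(summed ** 2)  # coherent energy at this pixel
    return energy
```

An emission from a grid point arrives with exactly the modeled delays, so its aligned traces sum coherently there and the energy map peaks at the true source.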

Relevance: 10.00%

Abstract:

Solar research is primarily conducted in regions with consistent sunlight, severely limiting research opportunities elsewhere. Unfortunately, the unreliable weather in Lewisburg, PA, makes such testing difficult to conduct. As such, a solar simulator was developed for educational purposes for the Mechanical Engineering department at Bucknell University. The objective of this work was first to develop a geometric model to evaluate a one-sun solar simulator, providing a simplified model that could be used without expensive software. This model was originally intended to be validated experimentally, but was instead validated against a proven ray-tracing program, TracePro. Analyses with the geometric model and TracePro demonstrated the influence the geometric properties, specifically the reflector (aperture) diameter and the rim angle, had on the results. The two approaches were consistent with one another for aperture diameters of 0.5 m and larger and for rim angles larger than 45°. The constructed prototype, which is currently untested, was designed from information provided by the geometric model; it includes a metal halide lamp with a 9.5 mm arc diameter and a parabolic reflector with an aperture diameter of 0.631 m. The maximum angular divergence predicted by the geometric model was 30 mrad. The average angular divergence of the system in TracePro was 19.5 mrad, compared to the sun's divergence of 9.2 mrad. Flux mapping in TracePro showed an intensity of 1000 W/m2 over the target plane located 40 m from the lamp. The error between the spectrum of the metal halide lamp and the solar spectrum, found by comparing their respective Planck radiation distributions, was 10.9%. The project did not satisfy the original goal of matching the angular divergence of sunlight, although the system can still be used for optical testing. The geometric model indicated that performance in this area could be improved by increasing the diameter of the reflector and decreasing the source diameter. Although ray-tracing software provides more information for analyzing the simulator system, the geometric model provides enough information to design one.
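The two geometric drivers identified above, reflector (aperture) diameter and rim angle, enter through the paraboloid's focal length, f = D / (4 tan(θ_rim/2)). The following small-angle sketch is illustrative only and is not the thesis's full geometric model:

```python
import math

def focal_length(aperture_diameter_m, rim_angle_deg):
    """Focal length of a paraboloid: f = D / (4 * tan(theta_rim / 2))."""
    return aperture_diameter_m / (4.0 * math.tan(math.radians(rim_angle_deg) / 2.0))

def divergence_mrad(source_diameter_m, aperture_diameter_m, rim_angle_deg):
    """First-order beam divergence: a source of finite diameter s placed at the
    focus subtends roughly s / f as seen from the reflector vertex."""
    return 1e3 * source_diameter_m / focal_length(aperture_diameter_m, rim_angle_deg)
```

With the prototype's values (9.5 mm arc, 0.631 m aperture, 45° rim angle) this estimate gives roughly 25 mrad, between the TracePro average (19.5 mrad) and the geometric model's maximum (30 mrad), and it shows directly why a larger reflector or a smaller source reduces divergence.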

Relevance: 10.00%

Abstract:

Currently, observations of space debris are primarily performed with ground-based sensors. These sensors have a detection limit of a few centimetres diameter for objects in Low Earth Orbit (LEO) and of about two decimetres diameter for objects in Geostationary Orbit (GEO). The few space-based debris observations stem mainly from in-situ measurements and from the analysis of returned spacecraft surfaces. Both provide information about mostly sub-millimetre-sized debris particles. As a consequence, the population of centimetre- and millimetre-sized debris objects remains poorly understood. The development, validation and improvement of debris reference models drive the need for measurements covering the whole diameter range. In 2003 the European Space Agency (ESA) initiated a study entitled “Space-Based Optical Observation of Space Debris”. The first tasks of the study were to define user requirements and to develop an observation strategy for a space-based instrument capable of observing uncatalogued millimetre-sized debris objects. Only passive optical observations were considered, focussing on mission concepts for the LEO and GEO regions, respectively. Starting from the requirements and the observation strategy, an instrument system architecture and an associated operations concept have been elaborated. The instrument system architecture covers the telescope, camera and onboard processing electronics. The proposed telescope is a folded Schmidt design, characterised by a 20 cm aperture and a large field of view of 6°. The camera design is based on the use of either a frame-transfer charge-coupled device (CCD) or a cooled hybrid sensor with fast read-out. A four megapixel sensor is foreseen. For the onboard processing, a scalable architecture has been selected. Performance simulations have been executed for the system as designed, focussing on the orbit determination of observed debris particles and on the analysis of the object detection algorithms.
In this paper we present some of the main results of the study. A short overview of the user requirements and observation strategy is given. The architectural design of the instrument is discussed, and the main tradeoffs are outlined. An insight into the results of the performance simulations is provided.
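The sampling trade implied by these numbers can be checked quickly: a 6° field over an assumed 2048 × 2048 layout of the four-megapixel sensor, versus the diffraction limit of a 20 cm aperture. This is a hedged back-of-envelope sketch; the square sensor layout and the visible wavelength are assumptions, not values from the study.

```python
import math

def pixel_scale_arcsec(fov_deg, n_pixels_side):
    """Angular size of one pixel if the field of view spans the sensor side."""
    return fov_deg * 3600.0 / n_pixels_side

def diffraction_limit_arcsec(aperture_m, wavelength_m=550e-9):
    """Rayleigh criterion: 1.22 * lambda / D, converted to arcseconds."""
    return math.degrees(1.22 * wavelength_m / aperture_m) * 3600.0
```

Under these assumptions the ~10.5 arcsec pixels dwarf the ~0.7 arcsec diffraction limit, so the design is detector-sampled rather than optics-limited, which fits the wide-field survey intent.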

Relevance: 10.00%

Abstract:

For the past sixty years, waveguide slot radiator arrays have played a critical role in microwave radar and communication systems. They feature a well-characterized antenna element capable of direct integration into a low-loss feed structure with highly developed and inexpensive manufacturing processes. Waveguide slot radiators comprise some of the highest-performance antenna arrays ever constructed, in terms of side-lobe level, efficiency, and related metrics. A wealth of information is available in the open literature regarding design procedures for linearly polarized waveguide slots. By contrast, despite their presence in some of the earliest published reports, little has been presented to date on array designs for circularly polarized (CP) waveguide slots. Moreover, the designs that have been presented feature a classic traveling-wave, efficiency-reducing beam tilt. This work proposes a unique CP waveguide slot architecture that mitigates these problems, together with a thorough design procedure employing widely available, modern computational tools. The proposed array topology features simultaneous dual-CP operation with grating-lobe-free, broadside radiation, high aperture efficiency, and good return loss. A traditional X-slot CP element is employed with the inclusion of a slow-wave-structure passive phase shifter to ensure broadside radiation without the need for performance-limiting dielectric loading. It is anticipated that this technology will be advantageous for upcoming polarimetric radar and Ka-band SatCom systems. The presented design methodology represents a philosophical shift away from traditional waveguide slot radiator design practices. Rather than providing design curves and/or analytical expressions for equivalent circuit models, simple first-order design rules, generated via parametric studies, are presented with the understanding that device optimization and design will be carried out computationally.
A unit-cell, S-parameter based approach provides a sufficient reduction of complexity to permit efficient, accurate device design with attention to realistic, application-specific mechanical tolerances. A transparent, start-to-finish example of the design procedure for a linear sub-array at X-band is presented. Both unit-cell and array performance are calculated via finite element method simulations. Results are confirmed by good agreement with finite-difference, time-domain calculations. Array performance exhibiting grating-lobe-free, broadside-scanned, dual-CP radiation with better than 20 dB return loss and over 75% aperture efficiency is presented.
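The unit-cell, S-parameter approach rests on a standard operation: convert each cell's two-port S-matrix to an ABCD (transmission) matrix, multiply the chain, and convert back. A minimal sketch of that bookkeeping follows, assuming a 50 Ω reference impedance; this illustrates the cascade step only, not the authors' tooling.

```python
import numpy as np

Z0 = 50.0  # assumed reference impedance [ohms]

def s_to_abcd(S):
    """Two-port S-parameters -> ABCD (transmission) matrix."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    A = ((1 + S11) * (1 - S22) + S12 * S21) / (2 * S21)
    B = Z0 * ((1 + S11) * (1 + S22) - S12 * S21) / (2 * S21)
    C = ((1 - S11) * (1 - S22) - S12 * S21) / (Z0 * 2 * S21)
    D = ((1 - S11) * (1 + S22) + S12 * S21) / (2 * S21)
    return np.array([[A, B], [C, D]])

def abcd_to_s(M):
    """ABCD matrix -> two-port S-parameters."""
    A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    den = A + B / Z0 + C * Z0 + D
    return np.array([[(A + B / Z0 - C * Z0 - D) / den, 2 * (A * D - B * C) / den],
                     [2 / den, (-A + B / Z0 - C * Z0 + D) / den]])

def cascade(s_matrices):
    """Overall S-parameters of unit cells connected in series."""
    M = np.eye(2, dtype=complex)
    for S in s_matrices:
        M = M @ s_to_abcd(S)
    return abcd_to_s(M)
```

As a sanity check, cascading two matched cells that each impose a phase delay θ yields a matched network with phase delay 2θ.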

Relevance: 10.00%

Abstract:

A free-space optical (FSO) laser communication system with perfect fast tracking experiences random power fading due to atmospheric turbulence. For an FSO communication system without fast tracking, or with imperfect fast tracking, the fading probability density function (pdf) is also affected by the pointing error. In this thesis, the overall fading pdfs of an FSO communication system with pointing errors are calculated using an analytical method based on the fast-tracked on-axis and off-axis fading pdfs and the fast-tracked beam profile of a turbulence channel. The overall fading pdf is first studied for an FSO communication system with a collimated laser beam. Large-scale numerical wave-optics simulations are performed to verify the analytically calculated fading pdf with a collimated beam under various turbulence channels and pointing errors. The calculated overall fading pdfs are almost identical to the directly simulated fading pdfs. The calculated overall fading pdfs are also compared with the gamma-gamma (GG) and log-normal (LN) fading pdf models; they fit the simulations better than both the GG and LN models under different receiver aperture sizes in all the studied cases. The analytical method is then extended to an FSO communication system with a beam diverging angle. It is shown that the gamma pdf model remains valid for the fast-tracked on-axis and off-axis fading pdfs with a point-like receiver aperture when the laser beam propagates with a diverging angle. Large-scale numerical wave-optics simulations prove that the analytically calculated fading pdfs fit the overall fading pdfs perfectly for both focused and diverged beam cases. The influence of the fast-tracked on-axis and off-axis fading pdfs, the fast-tracked beam profile, and the pointing error on the overall fading pdf is also discussed. Finally, the analytical method is compared with the heuristic fading pdf models proposed since the 1970s. Although some of the previously proposed fading pdf models fit experimental and simulation data closely, these close fits hold only under particular conditions. Only the analytical method fits the directly simulated fading pdfs accurately under different turbulence strengths, propagation distances, receiver aperture sizes, and pointing errors.
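The gamma-gamma benchmark compared against above has a closed form. Below is a sketch of the unit-mean GG pdf and the scintillation index it implies; these are standard textbook expressions (using SciPy's modified Bessel function of the second kind), and the parameter values in the check are illustrative.

```python
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance pdf with unit mean.

    alpha, beta: effective numbers of large- and small-scale turbulence cells.
    """
    I = np.asarray(I, dtype=float)
    norm = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (gamma(alpha) * gamma(beta))
    return norm * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta,
                                                         2.0 * np.sqrt(alpha * beta * I))

def scintillation_index(alpha, beta):
    """Scintillation index implied by the GG model: 1/a + 1/b + 1/(a*b)."""
    return 1.0 / alpha + 1.0 / beta + 1.0 / (alpha * beta)
```

Numerically integrating the pdf recovers unit normalization and unit mean, which is the first consistency check one applies before fitting it to simulated fading data.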

Relevance: 10.00%

Abstract:

Turbulence affects traditional free-space optical communication by causing speckle to appear in the received beam profile. This occurs because fluctuations in temperature and pressure change the refractive index of the atmosphere, producing an inhomogeneous medium. The Gaussian-Schell model of partial coherence has been suggested as a means of mitigating these atmospheric inhomogeneities on the transmission side. This dissertation analyzed the Gaussian-Schell model of partial coherence by verifying the model in the far field, investigating the number of independent phase control screens necessary to approach the ideal Gaussian-Schell model, and showing experimentally that the Gaussian-Schell model of partial coherence is achievable in the far field using a liquid crystal spatial light modulator. A method for optimizing the statistical properties of the Gaussian-Schell model was developed to maximize the coherence of the field while ensuring that it does not exhibit the same statistics as a fully coherent source. Finally, a technique was developed to estimate the minimum spatial resolution a spatial light modulator needs in order to propagate the Gaussian-Schell model effectively through a range of atmospheric turbulence strengths. This work showed that regardless of turbulence strength or receiver aperture, transmitting the Gaussian-Schell model of partial coherence instead of a fully coherent source yields a reduction in the intensity fluctuations of the received field. By measuring the variance of the intensity fluctuations and the received mean, it is shown through the scintillation index that the Gaussian-Schell model of partial coherence is a simple and straightforward alternative to traditional adaptive optics for mitigating atmospheric turbulence in free-space optical communications.
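The defining property being verified in the far field can be stated compactly: a Gaussian-Schell model source factorizes into a Gaussian intensity profile and a Gaussian degree of coherence that depends only on the separation between two points. A small 1-D sketch of that cross-spectral density (illustrative parameterization, not the dissertation's experimental configuration):

```python
import numpy as np

def gsm_csd(x1, x2, sigma_I, delta):
    """Cross-spectral density of a 1-D Gaussian-Schell model source:
    Gaussian intensity profile of width sigma_I times a Gaussian
    degree-of-coherence factor with coherence width delta."""
    return (np.exp(-(x1 ** 2 + x2 ** 2) / (4.0 * sigma_I ** 2))
            * np.exp(-(x1 - x2) ** 2 / (2.0 * delta ** 2)))

def degree_of_coherence(x1, x2, sigma_I, delta):
    """Normalized spectral degree of coherence |mu(x1, x2)|.
    For a Schell-model source this depends only on x1 - x2."""
    w12 = gsm_csd(x1, x2, sigma_I, delta)
    w11 = gsm_csd(x1, x1, sigma_I, delta)
    w22 = gsm_csd(x2, x2, sigma_I, delta)
    return w12 / np.sqrt(w11 * w22)
```

The normalization cancels the intensity envelope exactly, leaving the pure Gaussian coherence factor; letting delta grow without bound recovers the fully coherent limit the optimization deliberately avoids.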

Relevance: 10.00%

Abstract:

Free-space optical (FSO) communication links can experience extreme signal degradation due to atmospheric-turbulence-induced spatial and temporal irradiance fluctuations (scintillation) in the laser wavefront. In addition, turbulence can cause the laser beam centroid to wander, resulting in power fading and sometimes complete loss of the signal. Beam spreading and jitter are also artifacts of atmospheric turbulence. To accurately predict the signal fading that occurs in a laser communication system, and to get a true picture of how it affects crucial performance parameters such as bit error rate (BER), it is important to analyze the probability density function (PDF) of the integrated irradiance fluctuations at the receiver. It is also desirable to find a theoretical distribution that accurately models these fluctuations under all propagation conditions. The PDF of integrated irradiance fluctuations is calculated from numerical wave-optics simulations of a laser propagated through atmospheric turbulence, in order to investigate how the distribution evolves as the aperture diameter is increased. The simulated distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes, from weak to very strong. Our results show that the gamma-gamma PDF provides a good fit to the simulated distribution for all aperture sizes studied, from weak through moderate scintillation. In strong scintillation, the gamma-gamma PDF is a better fit for point-like apertures, while the lognormal PDF is a better fit for apertures the size of the atmospheric spatial coherence radius ρ0 or larger. In addition, the PDF of received power from a Gaussian laser beam that has been adaptively compensated at the transmitter before propagation to the receiver of an FSO link in the moderate scintillation regime is investigated.
The complexity of the adaptive optics (AO) system is increased in order to investigate the changes in the distribution of the received power and how this affects the BER. For the 10 km link, the optimal beam to transmit is unknown because the propagation path is non-reciprocal. These results show that, for non-reciprocal paths, a low-order AO system provides a better estimate of the optimal beam to transmit than a higher-order one. For the 20 km link distance it was found that, although the improvement was minimal, all AO complexity levels provided an equivalent improvement in BER, and that no AO complexity level provided the correction needed for the optimal beam to transmit. Finally, the temporal power spectral density of received power from an FSO communication link is investigated. Simulated and experimental results for the coherence time calculated from the temporal correlation function are presented. Both simulation and experimental data show that the coherence time increases as the receiving aperture diameter increases. For finite apertures, the coherence time also increases with link distance; we conjecture that this is due to the increasing speckle size within the pupil plane of the receiving aperture at longer link distances.
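Two of the quantities tracked throughout, the scintillation index of received power and the coherence time from the temporal correlation function, reduce to simple estimators on a power time series. A hedged sketch follows; the 1/e crossing used to define coherence time here is an assumption, and the thesis may use a related but different definition.

```python
import numpy as np

def scintillation_index(power):
    """sigma_I^2 = <P^2> / <P>^2 - 1 for a received-power time series."""
    p = np.asarray(power, dtype=float)
    return float(np.mean(p ** 2) / np.mean(p) ** 2 - 1.0)

def coherence_time(power, dt):
    """Lag at which the normalized autocovariance of the received power
    first drops below 1/e (dt is the sample spacing in seconds)."""
    p = np.asarray(power, dtype=float)
    p = p - p.mean()
    ac = np.correlate(p, p, mode='full')[len(p) - 1:]  # one-sided autocovariance
    ac = ac / ac[0]
    below = np.nonzero(ac < 1.0 / np.e)[0]
    return float((below[0] if len(below) else len(p)) * dt)
```

A deterministic modulated signal makes both estimators easy to verify before applying them to simulated or experimental fading records.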

Relevance: 10.00%

Abstract:

Focusing optical beams on a target through random propagation media is very important in many applications, such as free-space optical communications and laser weapons. Random-media effects such as beam spread and scintillation can severely degrade an optical system's performance, so compensation schemes are needed in these applications. In this research, we investigated the optimal beams for two different optimization criteria: one maximizing the concentrated received intensity and the other minimizing the scintillation index at the target plane. In the study of the optimal beam that maximizes the weighted integrated intensity, we derive a similarity relationship between the pupil-plane phase screen and extended Huygens-Fresnel models, and demonstrate the limited utility of maximizing the average integrated intensity. In the study of the optimal beam that minimizes the scintillation index, we derive the first- and second-order moments of the integrated intensity of multiple coherent modes. Hermite-Gaussian and Laguerre-Gaussian modes are used as the coherent modes to synthesize an optimal partially coherent beam. The optimal beams demonstrate an evident reduction of the scintillation index and prove to be insensitive to the aperture-averaging effect.
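The moment bookkeeping behind synthesizing a partially coherent beam from coherent modes reduces, for the scintillation index, to a weighted second-moment sum. The sketch below assumes mutually uncorrelated mode intensities; it is only that independent-mode limit, not the exact first- and second-order moments derived in the work.

```python
import numpy as np

def combined_scintillation(weights, mode_si):
    """Scintillation index of an incoherent sum of modes.

    weights : mean power carried by each coherent mode
    mode_si : scintillation index of each mode on its own
    With uncorrelated mode intensities, Var(I) = sum(w_m^2 * si_m), so
    SI = sum(w_m^2 * si_m) / (sum(w_m))^2.
    """
    w = np.asarray(weights, dtype=float)
    s = np.asarray(mode_si, dtype=float)
    return float(np.sum(w ** 2 * s) / np.sum(w) ** 2)
```

Spreading equal power over N equally scintillating modes divides the index by N, while concentrating all power in one mode leaves its index unchanged, which is the basic reason a multi-mode partially coherent beam can outperform a single coherent beam.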

Relevance: 10.00%

Abstract:

Measuring shallow seismic sources provides a way to reveal processes that cannot be directly observed, but the correct interpretation and value of these signals depend on the ability to distinguish source effects from propagation effects. Furthermore, seismic signals produced by a resonating source can look almost identical to those produced by impulsive sources but modified along the path. These two phenomena can be distinguished by examining the wavefield with small-aperture arrays or, when possible, by recording seismicity near the source. We examine source and path effects in two different environments: Bering Glacier, Alaska, and Villarrica Volcano, Chile. Using three 3-element seismic arrays near the terminus of the Bering Glacier, we identified and located both terminus calving and iceberg breakup events. We show that automated array analysis provides a robust way to locate icequake events using P waves. This analysis also showed that arrivals within the long-period codas were incoherent across the small-aperture arrays, demonstrating that these codas, previously attributed to crack resonance, were in fact the result of a complicated path rather than a source effect. At Villarrica Volcano, seismometers deployed from near the vent to ~10 km away revealed that a several-cycle long-period source signal recorded at the vent appeared elongated in the far field. We used data collected from the stations nearest the vent to invert for the repetitive seismic source, and found it corresponded to a shallow force within the lava lake oriented N75°E and dipping 7° from horizontal. We also used this repetitive signal to search the data for additional seismic and infrasonic properties, including seismic-acoustic delay times, volcano acoustic-seismic ratios and energies, event frequency, and real-time seismic amplitude measurements.
These calculations revealed lava lake level and activity fluctuations consistent with lava lake level changes inferred from the persistent infrasonic tremor.
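The automated array analysis mentioned above rests on two standard steps for a small-aperture array: measuring inter-station P-wave delays by cross-correlation, then fitting a plane wave to those delays to recover back-azimuth and apparent velocity. This is a generic sketch of that processing, not the authors' implementation.

```python
import numpy as np

def pairwise_delay(a, b):
    """Lag (in samples) by which trace b is delayed relative to trace a."""
    xc = np.correlate(b, a, mode='full')
    return int(np.argmax(xc)) - (len(a) - 1)

def plane_wave_fit(positions, delays, fs):
    """Least-squares plane-wave fit across a small-aperture array.

    positions : (n, 2) station offsets (east, north) [m]
    delays    : (n,) arrival delays [samples] relative to a reference station
    fs        : sampling rate [Hz]
    Returns (back_azimuth_deg, apparent_velocity_m_s).
    """
    r = np.asarray(positions, dtype=float)
    t = np.asarray(delays, dtype=float) / fs
    # arrival times of a plane wave: t_i = s . r_i, s = horizontal slowness
    s, *_ = np.linalg.lstsq(r, t, rcond=None)
    baz = np.degrees(np.arctan2(-s[0], -s[1])) % 360.0  # direction wave comes from
    return float(baz), float(1.0 / np.linalg.norm(s))
```

The same coherence measure that makes this fit work is what failed for the long-period codas: arrivals that are incoherent across the array cannot be aligned by cross-correlation, pointing to path complexity rather than a resonating source.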

Relevance: 10.00%

Abstract:

Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high resolution, up-to-date Digital Elevation Models (DEMs). This is especially obvious in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high resolution DEMs on Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high DEM noise level. The issues with short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory to that observed. Numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines in areas that have since been covered by fresh lava flows. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
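The routing core of a DOWNFLOW-style simulation, repeatedly stepping to the lowest neighbouring DEM cell, can be sketched in a few lines. The real model adds stochastic perturbations to build a full flow field, and FLOWGO adds thermo-rheological physics; this minimal version only shows why DEM noise and outdated topography redirect or truncate model flow lines.

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=10000):
    """Trace a flow line down a DEM by stepping to the lowest 8-connected
    neighbour; stops at a pit or flat, which is how DEM noise truncates
    model flow lengths."""
    path = [start]
    i, j = start
    for _ in range(max_steps):
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < dem.shape[0] and 0 <= j + dj < dem.shape[1]]
        ni, nj = min(nbrs, key=lambda p: dem[p])
        if dem[ni, nj] >= dem[i, j]:  # local minimum or flat: flow line ends
            break
        i, j = ni, nj
        path.append((i, j))
    return path
```

On a smooth, tilted surface the path runs unbroken to the low edge; adding pixel-scale noise pits to the same surface is exactly what produces the short model flow lengths that the filtering step in the study resolves.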

Relevance: 10.00%

Abstract:

Lastarria volcano (Chile) is located at the northwest margin of the 'Lazufre' ground inflation signal (37 × 45 km), which has been uplifting constantly at a rate of ~2.5 cm/year since 1996 (Pritchard and Simons 2002; Froger et al. 2007). Lastarria is doubly interesting: it is superimposed on a second, smaller-scale inflation signal, and it is the only degassing area within the Lazufre signal. In this project, we compared daily SO2 burdens recorded by the OMI instrument aboard the Aura satellite for 2005-2010 with ground surface displacements (GSD) calculated from Advanced Synthetic Aperture Radar (ASAR) images for 2003-2010. We found a constant maximum displacement rate of 2.44 cm/year for the period 2003-2007 and 0.80-0.95 cm/year for the period 2007-2010. Total SO2 emitted over 2005-2010 was 67.0 kT, but detection of weak SO2 degassing signals in the Andes remains challenging owing to increased noise in the South Atlantic Anomaly region.

Relevance: 10.00%

Abstract:

Question: Is stomatal regulation specific to climate and tree species, and does it reveal species-specific responses to drought? Is there a link to vegetation dynamics? Location: Dry inner-alpine valley, Switzerland. Methods: Stomatal aperture (θE) of Pinus sylvestris, Quercus pubescens, Juniperus communis and Picea abies was continuously estimated as the ratio of measured branch sap flow rates to potential transpiration rates (adapted Penman-Monteith single-leaf approach) at 10-min intervals over four seasons. Results: θE proved to be specific to climate and species and revealed distinctly different drought responses: Pinus stomata close disproportionately more than those of neighbouring species under dry conditions, but Pinus has a higher θE than the other species when weather is relatively wet and cool. Quercus keeps its stomata more open under drought stress but has a lower θE under humid conditions. Juniperus was the most drought-tolerant, whereas Picea stomata close almost completely during summer. Conclusions: The distinct microclimatic preferences of the four tree species in terms of θE strongly suggest that climate (change) is altering tree physiological performance and thus species-specific competitiveness. Picea and Pinus currently live at the physiological limit of their ability to withstand increasing temperature and drought intensities at the sites investigated, whereas Quercus and Juniperus perform distinctly better. This corresponds, at least partially, with regional vegetation dynamics: Pinus has strongly declined, whereas Quercus has significantly increased in abundance over the past 30 years. We conclude that θE provides an indication of a species' ability to cope with current and predicted climate.
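The θE estimator, measured branch sap flow expressed as a fraction of Penman-Monteith potential transpiration, can be sketched as below. The coefficients are generic textbook values, and the formulation drops stomatal resistance entirely to obtain the 'potential' rate; this is a simplified illustration, not the paper's calibrated single-leaf model.

```python
import math

def potential_transpiration(rn, t_air, vpd, ga):
    """Penman-Monteith latent-heat flux with no stomatal limitation.

    rn: net radiation [W m-2], t_air: air temperature [degC],
    vpd: vapour pressure deficit [kPa], ga: aerodynamic conductance [m s-1].
    Returns potential transpiration [kg m-2 s-1]. Constants are textbook values.
    """
    lam = 2.45e6      # latent heat of vaporization [J kg-1]
    rho_cp = 1200.0   # volumetric heat capacity of air [J m-3 K-1]
    gamma = 0.066     # psychrometric constant [kPa K-1]
    # slope of the saturation vapour pressure curve (Tetens form) [kPa K-1]
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
    delta = 4098.0 * es / (t_air + 237.3) ** 2
    le = (delta * rn + rho_cp * vpd * ga) / (delta + gamma)
    return le / lam

def stomatal_aperture(sap_flow, rn, t_air, vpd, ga):
    """theta_E: measured sap flow as a fraction of potential transpiration."""
    return sap_flow / potential_transpiration(rn, t_air, vpd, ga)
```

By construction θE approaches 1 when stomata impose no limit on transpiration and falls toward 0 as they close, which is what makes it comparable across species and weather conditions.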

Relevance: 10.00%

Abstract:

Background: Approximately 20% of all colorectal cancers are hypothesized to arise from the "serrated pathway" characterized by mutation in BRAF, high-level CpG Island Methylator Phenotype, and microsatellite instability/mismatch repair (MMR) deficiency. MMR-deficient cancers show frequent losses of Cdx2, a homeodomain transcription factor. Here, we determine the predictive value of Cdx2 expression for MMR-deficiency and investigate changes in expression between primary cancers and matched lymph node metastases. Methods: Immunohistochemistry for Cdx2, Mlh1, Msh2, Msh6, and Pms2 was performed on whole tissue sections from 201 patients with primary colorectal cancer and 59 cases of matched lymph node metastases. Receiver operating characteristic curve analysis and area under the curve (AUC) were investigated; the association of Cdx2 with clinicopathological features and patient survival was also examined. Results: Loss of Cdx2 expression was associated with higher tumor grade (p = 0.0002), advanced pT (p = 0.0166), and perineural invasion (p = 0.0228). Cdx2 loss was an unfavorable prognostic factor in univariate (p = 0.0145) and multivariate [p = 0.0427; HR (95% CI): 0.58 (0.34-0.98)] analysis. The accuracy (AUC) for discriminating MMR-proficient and MMR-deficient cancers was 87% [OR (95% CI): 0.96 (0.95-0.98); p < 0.0001]. Specificity and negative predictive value for MMR-deficiency were 99.1% and 96.3%, respectively. One hundred and seventy-four patients had MMR-proficient cancers, of which 60 (34.5%) showed Cdx2 loss. Cdx2 loss in metastases was related to MMR-deficiency (p < 0.0001). There was no difference in expression between primary tumors and matched metastases. Conclusion: Loss of Cdx2 is a sensitive and specific predictor of MMR-deficiency, but is not limited to these tumors, suggesting that events "upstream" of the development of microsatellite instability may impact Cdx2 expression.
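The AUC reported here has a direct probabilistic reading: the chance that a randomly drawn MMR-deficient cancer scores lower on Cdx2 than a randomly drawn MMR-proficient one. A sketch of that Mann-Whitney formulation follows; it is illustrative only (the study used dedicated ROC analysis, and the scores below are made up).

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) score pairs ranked concordantly,
    counting ties as one half."""
    concordant = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                concordant += 1.0
            elif sp == sn:
                concordant += 0.5
    return concordant / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.87 therefore means that in 87% of such random pairs the marker orders the two cases correctly, which is the sense in which Cdx2 loss "discriminates" the two MMR groups.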

Relevance: 10.00%

Abstract:

Purpose: To evaluate normal tissue dose reduction in step-and-shoot intensity-modulated radiation therapy (IMRT) on the Varian 2100 platform by tracking the multileaf collimator (MLC) apertures with the accelerator jaws. Methods: Clinical radiation treatment plans for 10 thoracic, 3 pediatric and 3 head and neck patients were converted to plans with the jaws tracking each segment’s MLC apertures. Each segment was then renormalized to account for the change in collimator scatter to obtain target coverage within 1% of that in the original plan. The new plans were compared to the original plans in a commercial radiation treatment planning system (TPS). Reduction in normal tissue dose was evaluated in the new plan by using the parameters V5, V10, and V20 in the cumulative dose-volume histogram for the following structures: total lung minus GTV (gross target volume), heart, esophagus, spinal cord, liver, parotids, and brainstem. In order to validate the accuracy of our beam model, MLC transmission measurements were made and compared to those predicted by the TPS. Results: The greatest change between the original plan and new plan occurred at lower dose levels. The reduction in V20 was never more than 6.3% and was typically less than 1% for all patients. The reduction in V5 was 16.7% maximum and was typically less than 3% for all patients. The variation in normal tissue dose reduction was not predictable, and we found no clear parameters that indicated which patients would benefit most from jaw tracking. Our TPS model of MLC transmission agreed with measurements with absolute transmission differences of less than 0.1 % and thus uncertainties in the model did not contribute significantly to the uncertainty in the dose determination. Conclusion: The amount of dose reduction achieved by collimating the jaws around each MLC aperture in step-and-shoot IMRT does not appear to be clinically significant.
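The geometric step described above, collimating the jaws to the bounding box of each segment's MLC aperture before renormalizing, can be sketched as follows. The leaf geometry here is hypothetical (leaf 0's edge placed at y = 0, uniform leaf width); actual Varian coordinate conventions differ.

```python
def track_jaws(leaf_pairs, leaf_width):
    """Fit the jaws to one MLC segment.

    leaf_pairs : list of (x_left, x_right) per leaf pair; a pair with
                 x_right <= x_left is closed and carries no open field.
    leaf_width : leaf width [cm] projected to isocenter (assumed uniform).
    Returns (x1, x2, y1, y2): the tightest jaw settings covering all
    open leaf apertures.
    """
    open_idx = [i for i, (xl, xr) in enumerate(leaf_pairs) if xr > xl]
    if not open_idx:
        raise ValueError("segment has no open leaf pairs")
    x1 = min(leaf_pairs[i][0] for i in open_idx)   # X1 jaw at leftmost leaf tip
    x2 = max(leaf_pairs[i][1] for i in open_idx)   # X2 jaw at rightmost leaf tip
    y1 = min(open_idx) * leaf_width                # Y jaws at span of open leaves
    y2 = (max(open_idx) + 1) * leaf_width
    return x1, x2, y1, y2
```

Because the jaws then shield the closed-leaf regions that otherwise see only MLC transmission, the dosimetric gain concentrates at low dose levels (V5), which is consistent with the pattern reported above.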

Relevance: 10.00%

Abstract:

BMC Clin Pathol. 2014 May 1;14:19. doi: 10.1186/1472-6890-14-19. eCollection 2014. A case of EDTA-dependent pseudothrombocytopenia: simple recognition of an underdiagnosed and misleading phenomenon. Nagler M, Keller P, Siegrist D, Alberio L. Department of Hematology and Central Hematology Laboratory, Inselspital University Hospital and University of Berne, CH-3010 Berne, Switzerland. BACKGROUND: EDTA-dependent pseudothrombocytopenia (EDTA-PTCP) is a common laboratory phenomenon, with a prevalence ranging from 0.1-2% in hospitalized patients to 15-17% in outpatients evaluated for isolated thrombocytopenia. Despite its harmlessness, EDTA-PTCP frequently leads to time-consuming, costly and even invasive diagnostic investigations. EDTA-PTCP is often overlooked because blood smears are not evaluated visually in routine practice and because the histograms and warning flags of hematology analyzers are not interpreted correctly. Nonetheless, EDTA-PTCP may be diagnosed easily, even by general practitioners without any experience in blood film examination. This is the first report illustrating the typical platelet (PLT) and white blood cell (WBC) histogram patterns of hematology analyzers. CASE PRESENTATION: A 37-year-old female patient of Caucasian origin was referred with suspected acute leukemia, and the emergency unit team arranged extensive work-up investigations. However, examination of the EDTA blood sample revealed atypical lymphocytes and an isolated thrombocytopenia together with typical WBC and PLT histogram patterns: a serrated curve of the platelet histogram and a peculiar peak on the left side of the WBC histogram. EDTA-PTCP was confirmed by a normal platelet count when examining citrated blood. CONCLUSION: Awareness of these typical PLT and WBC patterns may alert clinicians to the presence of EDTA-PTCP in routine laboratory practice, helping to avoid unnecessary investigations and over-treatment. PMCID: PMC4012027 PMID: 24808761 [PubMed]