Abstract:
Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability and low optical absorption, and require only common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach uses the beam propagation method to generate modal field distributions for varied waveguide structures, which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB, and it is shown that, for an input/output pair of vias, the average loss per via may be lower than the targeted 3 dB.
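The angular spectrum technique referenced in this abstract is, at its core, an FFT-based free-space propagator: transform the field, multiply by a phase kernel, transform back. A minimal Python sketch of that kernel follows; the grid size, wavelength, and propagation distance are illustrative assumptions, not values from the study.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a complex field u0 a distance z via the angular spectrum method."""
    n = u0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)           # spatial frequencies (cycles/m)
    fx, fy = np.meshgrid(fx, fx)
    kz_sq = k**2 - (2 * np.pi * fx)**2 - (2 * np.pi * fy)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    transfer = np.exp(1j * kz * z) * (kz_sq > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(u0) * transfer)

# Illustrative example (hypothetical numbers): a 5 µm-waist Gaussian mode
# propagated 100 µm at 1310 nm on a 0.5 µm grid.
n, dx, wl = 256, 0.5e-6, 1.31e-6
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
u0 = np.exp(-(xx**2 + yy**2) / (5e-6)**2)
u1 = angular_spectrum_propagate(u0, wl, dx, 100e-6)
print("power ratio:", (np.abs(u1)**2).sum() / (np.abs(u0)**2).sum())
```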
Abstract:
The need for stronger and more durable building materials is becoming more important as the structural engineering field expands and challenges the behavioral limits of current materials. One demand for stronger materials is rooted in the effects of dynamic loading on a structure. High strain rates on the order of 10¹ s⁻¹ to 10³ s⁻¹, though only a small part of the overall range of loading rates (anywhere between 10⁻⁸ s⁻¹ and 10⁴ s⁻¹) that can occur at any point in a structure's life, have very important effects when considering dynamic loading on a structure. Strain rates this high can cause the material and structure to behave differently than at slower rates, which necessitates testing materials under such loading to understand their behavior. Ultra-high performance concrete (UHPC), a relatively new material in the U.S. construction industry, exhibits many enhanced strength and durability properties compared to standard normal-strength concrete. However, the use of this material in high strain rate applications requires an understanding of UHPC's dynamic properties under corresponding loads. One such dynamic property is the increase in compressive strength under high strain rate load conditions, quantified as the dynamic increase factor (DIF). This factor allows a designer to relate the dynamic compressive strength back to the static compressive strength, which generally is a well-established property. Previous research establishes the relationships for the concept of DIF in design. The generally accepted methodology for obtaining high strain rates to study the enhanced behavior of compressive material strength is the split Hopkinson pressure bar (SHPB). In this research, 83 Cor-Tuf UHPC specimens were tested in dynamic compression using an SHPB at Michigan Technological University. The specimens were separated into two categories, ambient cured and thermally treated, with aspect ratios of 0.5:1, 1:1, and 2:1 within each category. There was no statistically significant difference in mean DIF for the aspect ratios and cure regimes considered in this study; DIFs ranged from 1.85 to 2.09. Failure modes were observed to be mostly Type 2, Type 4, or combinations thereof for all specimen aspect ratios when classified according to ASTM C39 fracture pattern guidelines. The Comite Euro-International du Beton (CEB) model for DIF versus strain rate does not accurately predict the DIF for the UHPC data gathered in this study. Additionally, a measurement system analysis was conducted to assess variance within the measurement system, and a general linear model analysis was performed to examine the interaction and main effects of aspect ratio, cannon pressure, and cure method on the maximum dynamic stress.
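For reference, the DIF is simply the ratio of dynamic to quasi-static compressive strength, and the CEB model mentioned above is an empirical curve in strain rate. The Python sketch below uses one widely cited statement of the CEB-FIP Model Code 1990 compression curve; the constants should be verified against the code itself, and all numbers in the example are illustrative, not data from this study.

```python
import numpy as np

def ceb_dif_compression(strain_rate, fc_static_mpa):
    """CEB-FIP Model Code 1990 dynamic increase factor for concrete in
    compression (common published form; verify against the code itself)."""
    eps_s = 30e-6                                   # quasi-static reference rate, 1/s
    alpha = 1.0 / (5.0 + 9.0 * fc_static_mpa / 10.0)
    gamma = 10.0 ** (6.156 * alpha - 2.0)
    rate_ratio = strain_rate / eps_s
    if strain_rate <= 30.0:
        return rate_ratio ** (1.026 * alpha)
    return gamma * rate_ratio ** (1.0 / 3.0)

# Hypothetical comparison: a UHPC-like static strength of 150 MPa tested
# at an SHPB-range strain rate of 200 1/s.
measured_dynamic_stress = 280.0                     # MPa, made-up SHPB peak
fc_static = 150.0                                   # MPa, made-up static strength
dif_measured = measured_dynamic_stress / fc_static
dif_ceb = ceb_dif_compression(200.0, fc_static)
print(f"measured DIF = {dif_measured:.2f}, CEB prediction = {dif_ceb:.2f}")
```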
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for subspace decomposition and polynomial rooting, steps that are traditionally implemented with sequential algorithms. The proposed algorithms address the emerging need for real-time localization across a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms grows, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional ones, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottlenecks to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), all of which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, bringing performance closer to real time. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation, and the system was developed with the objective of achieving high throughput. Various modern cores available in FPGAs were used to maximize performance, and these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the complex dynamics of the root-MUSIC polynomial were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time the complex dynamics of the root-MUSIC polynomial have been analyzed to propose such an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
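As a point of reference for the polynomial-rooting step: Newton's method refines each root estimate independently, which is what makes it attractive for parallel hardware, since every candidate root can iterate on its own processing element for a fixed number of steps. A minimal NumPy sketch of batched Newton iteration follows (a generic polynomial, not the thesis's root-MUSIC formulation):

```python
import numpy as np

def newton_roots(coeffs, guesses, iters=20):
    """Refine a batch of complex root guesses of a polynomial in parallel.
    coeffs: highest-degree-first coefficients; guesses: initial estimates."""
    dcoeffs = np.polyder(coeffs)
    z = np.asarray(guesses, dtype=complex)
    for _ in range(iters):                 # fixed iteration count, as in hardware
        z = z - np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
    return z

# Illustrative example: roots of z^3 - 1 from guesses near the unit circle
# (the root-MUSIC roots of interest also cluster near the unit circle).
coeffs = np.array([1.0, 0.0, 0.0, -1.0])
guesses = np.exp(2j * np.pi * (np.arange(3) + 0.1) / 3)
print(np.round(newton_roots(coeffs, guesses), 6))
```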
Abstract:
Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high-resolution, up-to-date Digital Elevation Models (DEMs). This is especially apparent in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high-resolution DEMs of Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower-resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high noise level of the DEM. The issue of short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory from that observed: numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines into areas that have since been covered by fresh lava. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
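DOWNFLOW-style path modeling reduces, at its core, to following steepest descent over the DEM (with stochastic perturbations of the DEM to explore the wider flow field). The toy Python sketch below shows only the deterministic steepest-descent core, on a synthetic cone standing in for a TDX-derived grid; it is not the Favalli et al. implementation.

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=10000):
    """Trace a flow line down a DEM grid by always moving to the lowest of
    the 8 neighbors; stops in a local pit or at the grid edge."""
    rows, cols = dem.shape
    path = [start]
    r, c = start
    for _ in range(max_steps):
        window = dem[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        dr, dc = np.unravel_index(np.argmin(window), window.shape)
        nr, nc = max(r - 1, 0) + dr, max(c - 1, 0) + dc
        if dem[nr, nc] >= dem[r, c]:       # local minimum: flow stops
            break
        r, c = nr, nc
        path.append((r, c))
        if r in (0, rows - 1) or c in (0, cols - 1):
            break
    return path

# Synthetic volcano-like cone with mild noise (made-up terrain).
y, x = np.mgrid[0:200, 0:200]
dem = 200.0 - np.hypot(x - 100, y - 100)
dem += np.random.default_rng(0).normal(0.0, 0.1, dem.shape)
print(len(steepest_descent_path(dem, (100, 90))), "cells from vent to grid edge")
```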
Abstract:
Within Yellowstone National Park, Wyoming, the silicic Yellowstone volcanic field is one of the most active volcanic systems in the world. Although the last rhyolite eruption occurred around 70,000 years ago, Yellowstone is still believed to be volcanically active, owing to its high hydrothermal and seismic activity. The earthquake data used in this study cover the period between 1988 and 2010. Earthquake relocations and a set of 369 well-constrained, double-couple focal mechanism solutions were computed. Events were grouped by location and time to investigate trends in faulting. The majority of the events have oblique normal-faulting solutions. The overall direction of extension throughout the 0.64 Ma Yellowstone caldera is nearly ENE, consistent with the alignments of volcanic vents within the caldera, but detailed study revealed spatial and temporal variations. Stress-field solutions for different areas and time periods were calculated by inversion of the earthquake focal mechanisms. A well-resolved rotation of σ3 was found, from NNE-SSW near the Hebgen Lake fault zone to ENE-WSW near Norris Junction. In particular, the σ3 direction in the Norris Junction area changed over the years, from ENE-WSW, as calculated in the study by Waite and Smith (2004), to NNE-SSW, while the other σ3 directions remained largely unchanged over time. The Yellowstone caldera has undergone periods of net uplift and subsidence over the past century, attributed in previous studies to expanding or contracting sills at different depths. Based on the models used to explain these deformation periods, we investigated the relationship between variability in aseismic deformation and seismic activity and faulting styles. Focal mechanisms and P and T axes were divided into temporal and depth intervals in order to identify spatial or temporal trends in deformation. The presence of “chocolate tablet” structures, with composite dilational faults, was identified during many stages of the deformation history, both in the Norris Geyser Basin area and inside the caldera. A strike-slip component of movement was found in a depth interval below a contracting sill, indicating the movement of magma towards the caldera.
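For orientation, the P and T axes of a double-couple mechanism follow directly from the fault normal n and slip vector u: T = (n + u)/√2 and P = (n − u)/√2. The Python sketch below implements this standard double-couple geometry (Aki & Richards convention); it illustrates the relation only and is not the study's inversion code.

```python
import numpy as np

def pt_axes(strike_deg, dip_deg, rake_deg):
    """P and T axes (unit vectors, north-east-down coordinates) of a
    double couple, from strike/dip/rake via fault normal n and slip u."""
    s, d, r = np.radians([strike_deg, dip_deg, rake_deg])
    n = np.array([-np.sin(d) * np.sin(s), np.sin(d) * np.cos(s), -np.cos(d)])
    u = np.array([np.cos(r) * np.cos(s) + np.cos(d) * np.sin(r) * np.sin(s),
                  np.cos(r) * np.sin(s) - np.cos(d) * np.sin(r) * np.cos(s),
                  -np.sin(d) * np.sin(r)])
    t_axis = (n + u) / np.sqrt(2.0)
    p_axis = (n - u) / np.sqrt(2.0)
    return p_axis, t_axis

# Illustrative example: a pure normal fault striking north, dipping 60 deg.
p, t = pt_axes(0.0, 60.0, -90.0)
print("P axis (NED):", np.round(p, 3))   # near-vertical, as expected
print("T axis (NED):", np.round(t, 3))   # sub-horizontal extension direction
```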
Abstract:
Edited by one of the leading experts in the field, this book fills the need for a volume presenting the most important methods for high-throughput screening and functional characterization of enzymes. It adopts an interdisciplinary approach, making it indispensable for all those involved in this expanding field, and reflects the major advances made over the past few years. For biochemists; analytical, organic, and catalytic chemists; and biotechnologists.
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results in electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but mainly increases the accuracy with which those processes are represented. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space because electrodes are correlated due to volume conduction, and in time because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and treated as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented in the resulting statistics. In addition, the assumptions made (e.g., in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural “atomic” analysis entity of EEG and ERP data is the scalp electric field.
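The redundancy argument can be made concrete: mixing a handful of sources into many electrodes leaves the effective rank of the data (and hence its degrees of freedom) near the number of sources, no matter how many channels are recorded. A hypothetical NumPy illustration with synthetic data follows; it is a demonstration of the principle, not an analysis recipe from this chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_channels, n_times = 5, 128, 1000

# A few underlying neural sources, smeared over many electrodes by a
# random "volume conduction" mixing matrix (all synthetic).
sources = rng.normal(size=(n_sources, n_times))
mixing = rng.normal(size=(n_channels, n_sources))
eeg = mixing @ sources + 0.01 * rng.normal(size=(n_channels, n_times))

# Singular values reveal ~n_sources dominant components despite 128 channels.
sv = np.linalg.svd(eeg, compute_uv=False)
effective_rank = np.sum(sv > 0.01 * sv[0])
print(f"channels: {n_channels}, effective rank: {effective_rank}")
```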
Abstract:
Testis cancer is the most frequent solid malignancy in young men. The majority of patients present with clinical stage I disease, and about 50% of these are nonseminomatous germ cell tumors. Within this initial stage of disease there is a high-risk subgroup of patients with a likelihood of relapse greater than 50%. Treatment options for these patients include retroperitoneal lymph node dissection (RPLND), although 6-10% of patients will relapse outside the field of RPLND; active surveillance, with even higher relapse rates; and adjuvant chemotherapy. As most of these patients have the chance to become long-term survivors, avoidance of long-term side effects is of utmost importance. This review provides information on the potential of chemotherapy to achieve a higher chance of cure for patients with high-risk clinical stage I disease than its therapeutic alternatives, and addresses toxicity and dose dependency.
Abstract:
Over the past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality renders interacting with an environment more natural and intuitive, the development cycle of such an application is often long and expensive. In our overall field of research, we investigate how model-based design can facilitate the development process by designing environments through the use of high-level diagrams. In this scope, we present ‘NiMMiT’, a graphical notation for expressing and evaluating multimodal user interaction; we elaborate on the NiMMiT primitives and demonstrate its use by means of a comprehensive example.
Abstract:
We consider a flux formulation of Double Field Theory in which the fluxes are dynamical and field-dependent. Gauge consistency imposes a set of quadratic constraints on the dynamical fluxes, which can be solved by truly double configurations. The constraints are related to generalized Bianchi identities for (non-)geometric fluxes in the double space, sourced by (exotic) branes. Following previous constructions, we then obtain generalized connections, torsion and curvatures compatible with the consistency conditions. The strong constraint-violating terms needed to make contact with gauged supergravities containing duality orbits of non-geometric fluxes arise systematically in this formulation.
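For orientation: in the flux formulation, the usual strong constraint of Double Field Theory, η^{MN} ∂_M A ∂_N B = 0 for all fields A and B, is relaxed to a weaker set of quadratic constraints on the fluxes, which admit truly double solutions. Schematically, these take a generalized Bianchi-identity form; the precise coefficients and index conventions below follow the flux-formulation literature and are our paraphrase, not a quotation from the paper.

```latex
% Strong (section) constraint, acting on any pair of fields A, B:
\eta^{MN}\,\partial_M A\,\partial_N B = 0 ,
% relaxed in the flux formulation to quadratic constraints on the
% dynamical fluxes F_{ABC}, of generalized Bianchi-identity type:
\mathcal{Z}_{ABCD} \;\equiv\; \partial_{[A} F_{BCD]}
  \;-\; \tfrac{3}{4}\, F_{[AB}{}^{E} F_{CD]E} \;=\; 0 ,
% with source terms on the right-hand side when (exotic) branes are present.
```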
Abstract:
Semi-natural grasslands are widely recognized for their high ecological value. They often count among the most species-rich habitats, especially in traditional cultural landscapes. Maintaining and/or restoring them is a top priority, but nevertheless represents a real conservation challenge, especially regarding their invertebrate assemblages. The main goal of this study was to experimentally investigate the influence of four different mowing regimes on orthopteran communities and populations: (1) control meadow (C-meadow): mowing regime according to the Swiss regulations for extensively managed meadows declared as ecological compensation areas, i.e. first cut not before 15 June; (2) first cut not before 15 July (delayed treatment, D-meadow); (3) first cut not before 15 June and second cut not earlier than 8 weeks after the first cut (8W-meadow); (4) refuges left uncut on 10–20% of the meadow area (R-meadow). Data were collected two years after the introduction of these mowing treatments. Orthopteran densities from spring to early summer were five times higher in D-meadows than in C-meadows. In R-meadows, densities were on average twice as high as in C-meadows, while mean species richness was 23% higher in R-meadows than in C-meadows. Provided that farmers are given appropriate financial incentives, the D- and R-meadow regimes would be relatively easy to implement within agri-environment schemes. Such meadows could deliver substantial benefits for functional biodiversity, including sustenance for the many secondary consumers that depend on field invertebrates as a staple food.
Abstract:
The Gravity field and steady-state Ocean Circulation Explorer (GOCE) has now been in orbit for more than four years. This is longer than the originally planned lifetime of the satellite, and after three years at the same altitude the satellite has been lowered to 235 km in several steps. Within the GOCE High-level Processing Facility (HPF), the Astronomical Institute of the University of Bern (AIUB) is responsible for the determination of the official Precise Science Orbit (PSO) product. Kinematic GOCE orbits are part of this product and are used by several institutions inside and outside the HPF for determining the low degrees of the Earth's gravity field. AIUB GOCE GPS-only gravity field solutions using the Celestial Mechanics Approach, covering the Release 4 period as well as a more recent interval at the lower orbit altitude, are shown and discussed. Special attention is paid to the impact of systematic deficiencies in the kinematic orbits on the resulting gravity fields, e.g., those related to the geomagnetic equator, and to possibilities for removing them.
Abstract:
In the event of a termination of the Gravity Recovery and Climate Experiment (GRACE) mission before the launch of GRACE Follow-On (due for launch in 2017), high-low satellite-to-satellite tracking (hl-SST) will be the only dedicated observing system with global coverage available to measure the time-variable gravity field (TVG) on a monthly or even shorter time scale. Until recently, hl-SST TVG observations were of poor quality and hardly improved on the performance of Satellite Laser Ranging observations; to date, they have been of only very limited use for geophysical or environmental investigations. In this paper, we apply a thorough reprocessing strategy and a dedicated Kalman filter to Challenging Minisatellite Payload (CHAMP) data to demonstrate that it is possible to derive the very long-wavelength TVG features, down to spatial scales of approximately 2000 km, at the annual frequency and for multi-year trends. The results are validated against GRACE data and surface height changes from long-term GPS ground stations in Greenland. We find that the quality of the CHAMP solutions is sufficient to derive long-term trends and annual amplitudes of mass change over Greenland. We conclude that hl-SST is a viable source of information on TVG and can serve, to some extent, to bridge a possible gap between the end of life of GRACE and the availability of GRACE Follow-On.
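For context, a dedicated Kalman filter of this kind is, at heart, the standard predict/update recursion applied with a process model that constrains the temporal evolution of the estimated quantities. The generic scalar-state Python sketch below is illustrative only; the paper's state vector, process noise, and observation model are far richer, and all numbers here are made up.

```python
import numpy as np

def kalman_filter(obs, x0, P0, F, Q, H, R):
    """Standard linear Kalman filter (predict/update) over a 1-D observation
    series; all model matrices are scalars here for clarity."""
    x, P, estimates = x0, P0, []
    for z in obs:
        # Predict: propagate state and its variance with the process model.
        x, P = F * x, F * P * F + Q
        # Update: blend the prediction with the new observation.
        K = P * H / (H * P * H + R)        # Kalman gain
        x = x + K * (z - H * x)
        P = (1 - K * H) * P
        estimates.append(x)
    return np.array(estimates)

# Hypothetical example: recover a slow mass-change trend from noisy
# monthly "observations" (units and values are invented).
rng = np.random.default_rng(2)
truth = np.linspace(0.0, 5.0, 120)                 # 10 years of linear trend
obs = truth + rng.normal(0.0, 1.0, truth.size)     # noisy hl-SST-like series
est = kalman_filter(obs, x0=0.0, P0=1.0, F=1.0, Q=0.01, H=1.0, R=1.0)
print("final estimate vs truth:", est[-1].round(2), truth[-1])
```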
Abstract:
The Greenland NEEM (North Greenland Eemian Ice Drilling) operation in 2010 provided the first opportunity to combine trace-gas measurements by laser spectroscopic instruments and continuous-flow analysis along a freshly drilled ice core in a field-based setting. We present the resulting atmospheric methane (CH4) record covering the time period from 107.7 to 9.5 ka b2k (thousand years before 2000 AD). Companion discrete CH4 measurements are required to transfer the laser spectroscopic data from a relative to an absolute scale. However, even on a relative scale, the high-resolution CH4 data set significantly improves our knowledge of past atmospheric methane concentration changes. Significant new sub-millennial-scale features appear during interstadials and stadials, generally associated with similar changes in the water isotopic ratios of the ice, a proxy for local temperature. In addition to the midpoint of Dansgaard–Oeschger (D/O) CH4 transitions usually used for cross-dating, sharp definition of the start and end of these events provides precise depth markers (with ±20 cm uncertainty) for further cross-dating with other palaeo- or ice-core records, e.g. speleothems. The method also provides an estimate of CH4 rates of change. The onsets of D/O events in the methane signal show a more rapid rate of change than their endings. The rate of CH4 increase associated with the onsets of D/O events progressively declines from 1.7 to 0.6 ppbv yr⁻¹ in the course of marine isotope stage 3. The largest observed rate of increase takes place at the onset of D/O event #21 and reaches 2.5 ppbv yr⁻¹.
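Rates of change of this kind can be estimated, in principle, by a simple linear fit over the rise of a transition once its start and end ages are picked. A hypothetical Python sketch follows, on a synthetic D/O-like CH4 series; the numbers are illustrative, not NEEM data.

```python
import numpy as np

def rise_rate(age_yr, ch4_ppbv, t_start, t_end):
    """Least-squares rate of CH4 change (ppbv/yr) between two picked ages."""
    mask = (age_yr >= t_start) & (age_yr <= t_end)
    slope, _ = np.polyfit(age_yr[mask], ch4_ppbv[mask], 1)
    return slope

# Synthetic transition: a 100 ppbv sigmoidal rise over ~50 years plus noise.
age = np.arange(0.0, 300.0)                        # years, arbitrary origin
ch4 = 450.0 + 100.0 / (1.0 + np.exp(-(age - 150.0) / 10.0))
ch4 += np.random.default_rng(3).normal(0.0, 2.0, age.size)
print(f"onset rate ~ {rise_rate(age, ch4, 130.0, 170.0):.2f} ppbv/yr")
```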