928 results for powerful owl


Relevance: 10.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled as approximately sparse, and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools for approaching problems arising in machine learning, system identification, and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically: (i) focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model; (ii) we show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously; (iii) finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For (i) and (ii), we aim to provide a general geometric framework in which the results on sparse and low-rank estimation can be obtained as special cases. For (i) and (iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
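For concreteness, the lasso referenced in (i) is the standard ℓ1-regularized least-squares program, shown here for orientation (the thesis's exact variant, e.g. constrained versus penalized, may differ):

$$\hat{x} = \arg\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\,\|y - Ax\|_2^2 + \lambda\,\|x\|_1,$$

where $y = Ax_0 + z$ collects the noisy linear observations of the sparse signal $x_0$, $A \in \mathbb{R}^{m \times n}$ is the measurement matrix, and $\lambda > 0$ trades data fidelity against the sparsity-promoting $\ell_1$ penalty.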

Relevance: 10.00%

Publisher:

Abstract:

Nearly all young stars are variable, with the variability traditionally divided into two classes: periodic variables and aperiodic or "irregular" variables. Periodic variables have been studied extensively, typically using periodograms, while aperiodic variables have received much less attention due to a lack of standard statistical tools. However, aperiodic variability can serve as a powerful probe of young star accretion physics and inner circumstellar disk structure. For my dissertation, I analyzed data from a large-scale, long-term survey of the nearby North America Nebula complex, using Palomar Transient Factory photometric time series collected on a nightly or every-few-nights cadence over several years. This survey is the most thorough exploration of variability in a sample of thousands of young stars over time baselines of days to years, revealing a rich array of lightcurve shapes, amplitudes, and timescales.

I have constrained the timescale distribution of all young variables, periodic and aperiodic, on timescales from less than a day to ~100 days. I have shown that the distribution of timescales for aperiodic variables peaks at a few days, with relatively few (~15%) sources dominated by variability on timescales of tens of days or longer. My constraints on aperiodic timescale distributions are based on two new tools for describing aperiodic lightcurves, magnitude- vs. time-difference (Δm-Δt) plots and peak-finding plots; this thesis provides simulations of their performance and presents recommendations on how to apply them to aperiodic signals in other time series data sets. In addition, I have measured the error introduced into colors or SEDs by combining photometry of variable sources taken at different epochs. These are the first quantitative results on the distributions in amplitude and timescale for young aperiodic variables, particularly those varying on timescales of weeks to months.
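A minimal sketch of how Δm-Δt pairs might be assembled from a photometric time series; the function name and the closing binning suggestion are illustrative assumptions, not the thesis's conventions:

```python
import numpy as np

def delta_m_delta_t(times, mags):
    """Collect all pairwise (dt, dm) values from one light curve.

    times : 1-D array of observation epochs (days), sorted ascending
    mags  : 1-D array of magnitudes at those epochs
    Returns the time separations and absolute magnitude differences for
    every unique pair of observations, the raw material for a dm-dt plot.
    """
    i, j = np.triu_indices(len(times), k=1)  # indices of all unique pairs
    dt = times[j] - times[i]
    dm = np.abs(mags[j] - mags[i])
    return dt, dm

# Example: a random-walk-like light curve sampled nightly for 100 days
rng = np.random.default_rng(0)
t = np.arange(100.0)
m = 14.0 + np.cumsum(rng.normal(0.0, 0.02, t.size))
dt, dm = delta_m_delta_t(t, m)
# Binning dm in log-spaced dt bins and tracking, e.g., the median |dm| per
# bin shows the timescale at which the variability amplitude saturates.
```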

Relevance: 10.00%

Publisher:

Abstract:

In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the H → ZZ(*) → 4l channel, also referred to as the golden channel. We use analytic expressions of the full decay differential cross sections for the H → VV' → 4l process, and for the dominant irreducible standard model qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector responses as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings and includes systematics. All potential anomalous couplings of HVV', where V = Z, γ, are considered, allowing for general CP-even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level HZZ coupling and higher-order CP-odd couplings to be compatible with zero and in the range [−0.40, 0.43], and the mixing between the HZZ tree-level coupling and higher-order CP-even couplings to be in the ranges [−0.66, −0.57] ∪ [−0.15, 1.00]; both are compatible with a standard model Higgs. We discuss the expected precision in determining the various HVV' couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that with 100-400 fb⁻¹, the golden channel will be able to start probing the couplings of the Higgs boson to diphotons in the 4l channel. We discuss the implications and further optimization of the methods for the next LHC runs.
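For orientation, anomalous HVV' couplings in the golden channel are commonly parameterized through an amplitude of the following schematic form (a widely used convention in the literature, not necessarily the thesis's exact notation):

$$\mathcal{A}(H \to VV') \;\propto\; a_1\, m_V^2\, \epsilon_1^* \cdot \epsilon_2^* \;+\; a_2\, f^{*(1)}_{\mu\nu} f^{*(2),\mu\nu} \;+\; a_3\, f^{*(1)}_{\mu\nu} \tilde{f}^{*(2),\mu\nu},$$

where $f^{(i)}_{\mu\nu} = \epsilon_i^{\mu} q_i^{\nu} - \epsilon_i^{\nu} q_i^{\mu}$ is built from the polarization and momentum of gauge boson $i$, $a_1$ is the tree-level (standard model) coupling, $a_2$ a higher-order CP-even coupling, and $a_3$ the CP-odd coupling whose admixture with $a_1$ is constrained above.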

Relevance: 10.00%

Publisher:

Abstract:

This article outlines the outcome of work that set out to provide one of the specified integral contributions to the overarching objectives of the EU-sponsored LIFE98 project described in this volume. Among others, these included a requirement to marry automatic monitoring and dynamic modelling approaches in the interests of securing better management of water quality in lakes and reservoirs. The particular task given to us was to devise the elements of an active management strategy for the Queen Elizabeth II Reservoir. This is one of the larger reservoirs supplying the population of the London area: after purification and disinfection, its water goes directly to the distribution network and to the consumers. The quality of the water in the reservoir is of primary concern, for the greater the content of biogenic materials, including phytoplankton, the more prolonged the purification and the more expensive the treatment. Whatever good phytoplankton may do by way of oxygenation and oxidative purification, it is eventually relegated to an impurity that has to be removed from the final product. Indeed, it has been estimated that the cost of removing algae and microorganisms from water represents about one quarter of its price at the tap. In chemically fertile waters, such as those typifying the resources of the Thames Valley, there is thus a powerful and ongoing incentive to be able to minimise plankton growth in storage reservoirs. Indeed, the Thames Water company and its predecessor undertakings have a long and impressive history of confronting and quantifying the fundamentals of phytoplankton growth in their reservoirs and of developing strategies for operation and design to combat them. The work described here follows in this tradition. However, the use of the model PROTECH-D to investigate present phytoplankton growth patterns in the Queen Elizabeth II Reservoir called into question the interpretation of some of the recent observations. On the other hand, it has reinforced the theories underpinning the original design of this and those Thames Valley storage reservoirs constructed subsequently. The authors recount these experiences as an example of how simulation models can hone the theoretical base and its application to the practical problems of supplying water of good quality at economic cost, before the engineering is initiated.

Relevance: 10.00%

Publisher:

Abstract:

While synoptic surveys in the optical and at high energies have revealed a rich discovery phase space of slow transients, a similar yield is still awaited in the radio. The majority of past blind surveys, carried out with radio interferometers, have suffered from a low yield of slow transients, ambiguous transient classifications, and contamination by false positives. The newly refurbished Karl G. Jansky Very Large Array (Jansky VLA) offers wider bandwidths for accurate RFI excision as well as substantially improved sensitivity and survey speed compared with the old VLA. The Jansky VLA thus eliminates the pitfalls of interferometric transient searches by facilitating sensitive, wide-field, and near-real-time radio surveys, enabling a systematic exploration of the dynamic radio sky. This thesis carries out blind Jansky VLA surveys to characterize the radio variable and transient sources at frequencies of a few GHz and on timescales between days and years. Through joint radio and optical surveys, the thesis addresses outstanding questions pertaining to the rates of slow radio transients (e.g. radio supernovae, tidal disruption events, binary neutron star mergers, stellar flares, etc.), the false-positive foreground relevant for radio and optical counterpart searches of gravitational wave sources, and the beaming factor of gamma-ray bursts. The need for rapid processing of the Jansky VLA data and near-real-time transient searches has driven the development of state-of-the-art software infrastructure. This thesis has successfully demonstrated the Jansky VLA as a powerful transient search instrument, and it serves as a pathfinder for the transient surveys planned for the SKA-mid pathfinder facilities, viz. ASKAP, MeerKAT, and WSRT/Apertif.

Relevance: 10.00%

Publisher:

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first, we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that direction, we study the network codes constructed from finite groups, and show in particular that linear network codes are embedded in the group network codes constructed from these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system, we study the pairwise error probabilities of the input sequences.
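For reference, the Ingleton inequality can be written in information-theoretic form as follows (a standard statement; the dissertation's notation may differ):

$$I(X_1;X_2) \;\le\; I(X_1;X_2 \mid X_3) + I(X_1;X_2 \mid X_4) + I(X_3;X_4).$$

Entropy vectors induced by linear codes always satisfy it, so a finite group whose induced entropy vector violates it certifies that group network codes built from it are strictly more expressive than linear ones.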

Relevance: 10.00%

Publisher:

Abstract:

An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, for which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow otherwise linear optical circuits with quantum functionality and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping and interfacing atoms with guided photons, and have observed a collective effect, superradiance, mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.

Relevance: 10.00%

Publisher:

Abstract:

This thesis analyzes the processes and mechanisms of social-control participation in the management of health policy in the Municipality of Rio de Janeiro, through a study of the Rio de Janeiro Municipal Health Council. The research objectives were to identify the form of control and oversight exercised by the Rio de Janeiro Municipal Health Council during the César Maia administration, to determine whether important decisions on municipal health policy pass through the Municipal Council, and to examine the main tensions within this institutionalized space of socio-political participation, which reproduces broader social struggles. We carried out qualitative, empirical research guided by the dialectical method: a case study of the Rio de Janeiro Municipal Health Council during the 2005-2008 administration. The thesis is structured in four chapters. It presents the tensions and social processes of social-control participation in health management in the Municipality of Rio de Janeiro during César Maia's third term. It was possible to observe the potential of social control in the city of Rio de Janeiro; however, several limits became evident, such as the failure to implement the agenda proposed in the guidelines of the municipal conferences, the lack of strategies for jointly drawing up the Municipal Health Plan of the Municipality of Rio de Janeiro, and indications of the need for technical and political advice through the professional practice of social workers, along the lines of the UERJ public health policies project.

Relevance: 10.00%

Publisher:

Abstract:

Fuzzy-reasoning theory is widely used in industrial control. Mathematical morphology is a powerful tool for image processing. We apply fuzzy-reasoning theory to morphology and propose a fuzzy-reasoning morphology scheme, including fuzzy-reasoning dilation and erosion functions. These functions retain more fine detail than the corresponding conventional morphological operators with the same structuring element. An optical implementation has been developed with area-coding and thresholding methods. (C) 1997 Optical Society of America.
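As a concrete illustration, here is a minimal sketch of grayscale fuzzy dilation and erosion under the common min t-norm and its Gödel implication; this is generic fuzzy morphology under those assumptions, not the authors' specific fuzzy-reasoning functions or their optical implementation:

```python
import numpy as np

def fuzzy_dilate(img, se):
    """Fuzzy dilation: sup over the window of min(img, se).
    img, se: arrays of membership values in [0, 1]; se odd-sized."""
    kh, kw = se.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), constant_values=0.0)
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = pad[y:y + kh, x:x + kw]
            out[y, x] = np.max(np.minimum(window, se))  # sup of min t-norm
    return out

def fuzzy_erode(img, se):
    """Fuzzy erosion: inf over the window of the Goedel implication I(se, img)."""
    kh, kw = se.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), constant_values=1.0)
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = pad[y:y + kh, x:x + kw]
            # Goedel implication: 1 where se <= window, else the image value
            out[y, x] = np.min(np.where(se <= window, 1.0, window))
    return out
```

With crisp (0/1) images and structuring elements, these reduce to ordinary binary dilation and erosion; intermediate membership values are what let fine, low-contrast detail survive the operators.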

Relevance: 10.00%

Publisher:

Abstract:

A more powerful tool for binary image processing, logic-operated mathematical morphology (LOMM), is proposed. With LOMM, the image and the structuring element (SE) are treated as binary logical variables, and the multiply between the image and the SE in the correlation is replaced with the 16 two-input logical operations. A total of 12 LOMM operations are obtained. The optical implementation of LOMM is described. The application of LOMM and its experimental results are also presented. (C) 1999 Optical Society of America.
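A sketch of the idea, assuming the generalization works by swapping the AND (multiply) inside the morphological correlation for an arbitrary two-input Boolean function and then thresholding the count; the paper's 12 specific operations and its optical implementation are not reproduced here:

```python
import numpy as np

def lomm_correlate(img, se, op, thresh):
    """Logic-operated morphological correlation on binary arrays.

    img, se : boolean arrays (se assumed odd-sized)
    op      : a two-input Boolean ufunc, e.g. np.logical_and, np.logical_xor
    thresh  : minimum count of True results in the window to set the output
    With op=np.logical_and and thresh=se.sum() this reduces to erosion;
    with op=np.logical_and and thresh=1 it reduces to dilation (reflected SE).
    """
    kh, kw = se.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), constant_values=False)
    out = np.zeros_like(img, dtype=bool)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = pad[y:y + kh, x:x + kw]
            out[y, x] = op(window, se).sum() >= thresh
    return out
```

Substituting other Boolean functions (XOR, NOR, implication, etc.) for the AND is what yields the family of new operations beyond conventional dilation and erosion.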

Relevance: 10.00%

Publisher:

Abstract:

Magnetic resonance techniques have given us a powerful means of investigating dynamical processes in gases, liquids, and solids. Dynamical effects manifest themselves in both resonance line shifts and linewidths and, accordingly, require detailed analyses to extract the desired information. The success of a magnetic resonance experiment depends critically on relaxation mechanisms to maintain thermal equilibrium between spin states. Consequently, there must be an interaction between the excited spin states and their immediate molecular environment which promotes changes in spin orientation while excess magnetic energy is coupled into other degrees of freedom by non-radiative processes. This is well known as spin-lattice relaxation. Certain dynamical processes cause fluctuations in the spin state energy levels, leading to spin-spin relaxation; here again, the environment at the molecular level plays a significant role in the magnitude of the interaction. Relatively few electron spin relaxation studies of solutions have been conducted, and the present work extends our knowledge in this area by retrieving dynamical information from line shape analyses on a time scale comparable to diffusion-controlled phenomena.

Specifically, the electron spin relaxation of three Mn+2 3d5 complexes, Mn(CH3CN)6+2, MnCl4-2, and MnBr4-2, in acetonitrile has been studied in considerable detail. The effective spin Hamiltonian constants were carefully evaluated under a wide range of experimental conditions. Resonance widths of these Mn+2 complexes were studied in the presence of various excess ligand ions and as a function of concentration, viscosity, temperature, and frequency (X-band, ~9.5 GHz, and K-band, ~35 GHz).

A number of interesting conclusions were drawn from these studies. For the Et4NCl-MnCl4-2 system, several relaxation mechanisms leading to resonance broadening were observed. One source appears to arise through spin-orbit interactions caused by modulation of the ligand field, resulting from transient distortions of the complex imparted by solvent fluctuations in the immediate surroundings of the paramagnetic ion. An additional spin relaxation mechanism was assigned to the formation of ion pairs [Et4N+…MnCl4-2], and it was possible to estimate the dissociation constant for this species in acetonitrile.

The Bu4NBr-MnBr4-2 study was considerably more interesting. As in the former case, solvent fluctuations and ion pairing of the paramagnetic complex [Bu4N+…MnBr4-2] provide significant relaxation for the electronic spin system. Most interesting, without doubt, is the onset of a new relaxation mechanism leading to resonance broadening, which is best interpreted as chemical exchange. Thus, assuming that resonance widths were simply governed by electron spin state lifetimes, we were able to extract dynamical information from an interaction in which the initial and final states are the same:

MnBr4-2 + Br- = MnBr4-2 + Br-.

The bimolecular rate constants were obtained at six different temperatures, and their magnitudes suggested that the exchange is probably diffusion controlled with essentially zero activation energy. The most important source of spin relaxation in this system stems directly from dipolar interactions between the manganese 3d5 electrons. Moreover, the dipolar broadening is strongly frequency dependent, indicating a deviation between the transverse and longitudinal relaxation times. We are led to the conclusion that the 3d5 spin states of ion-paired MnBr4-2 are significantly correlated, so that dynamical processes also enter the picture. It was possible to estimate the correlation time, τd, characterizing this dynamical process.
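For orientation, the rate extraction rests on the generic lifetime-broadening argument (schematic only; the thesis's full line shape treatment is more detailed): if exchange limits the electron spin state lifetime $\tau$, the excess width is

$$\Delta\omega_{\mathrm{ex}} \simeq \frac{1}{\tau}, \qquad \frac{1}{\tau} = k\,[\mathrm{Br^-}],$$

so the exchange contribution to the linewidth grows linearly with the free bromide concentration, and its slope yields the bimolecular rate constant $k$ at each temperature.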

In Part II we study nuclear magnetic relaxation of bromine ions in the MnBr4-2-Bu4NBr-acetonitrile system. Essentially, we monitor the 79Br and 81Br linewidths in response to the [MnBr4-2]/[Br-] ratio, with the express purpose of supporting our contention that exchange is occurring between "free" bromine ions in the solvent and bromine in the first coordination sphere of the paramagnetic anion. The complexity of the system elicited a two-part study: (1) the linewidth behavior of Bu4NBr in anhydrous CH3CN in the absence of MnBr4-2, and (2) in the presence of MnBr4-2. It was concluded in study (1) that dynamical association, Bu4NBr ⇌ Bu4N+ + Br-, was modulating field-gradient interactions at frequencies high enough to provide an estimate of the unimolecular dissociation rate constant, k1. A comparison of the two isotopic bromine linewidth-mole fraction results led to the conclusion that quadrupole interactions provided the dominant relaxation mechanism. In study (2), the "residual" bromine linewidths for both 79Br and 81Br are clearly controlled by quadrupole interactions, which appear to be modulated by very rapid dynamical processes other than molecular reorientation. We conclude that the "residual" linewidth has its origin in chemical exchange, and that bromine nuclei exchange rapidly between a "free" solvated ion and the paramagnetic complex, MnBr4-2.

Relevance: 10.00%

Publisher:

Abstract:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models: those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of ~0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves, the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 × 10^-3 both above and below the low-velocity zone, implying that the temperature gradient is < 4.8 °C/km if the regions are chemically homogeneous.

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The usual ill-conditioned nature of least-squares techniques is avoided by a scheme that minimizes both the travel time residuals and the model perturbations.
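A minimal sketch of one damped least-squares iteration of the kind TTINV is described as performing (illustrative only; the actual program, its model parameterization, and its damping choice are not reproduced here):

```python
import numpy as np

def damped_lsq_step(G, r, damping):
    """One model update minimizing |G dm - r|^2 + damping * |dm|^2.

    G       : (n_obs, n_params) partial derivatives of travel time with
              respect to the model parameters, at the current model
    r       : (n_obs,) travel time residuals (observed minus predicted)
    damping : regularization weight penalizing large model perturbations
    Returns the perturbation dm to add to the current model.
    """
    n = G.shape[1]
    # Normal equations of the damped problem: (G^T G + damping*I) dm = G^T r
    dm = np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ r)
    return dm

# Iterating: recompute predicted times and partials from the perturbed
# model, then repeat until the residuals stop improving.
```

The damping term is what tames the ill-conditioning: it shrinks the components of the update along poorly constrained directions instead of letting them blow up.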

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.

Relevance: 10.00%

Publisher:

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in Perform, a program more capable of conducting highly nonlinear analysis. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.

Following this, a final study was conducted on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. Such problems could, however, be alleviated by choosing a simpler material model.

Relevance: 10.00%

Publisher:

Abstract:

The frame of a laser diode transmitter for intersatellite communication is concisely introduced. A simple, novel, and visual method for measuring the diffraction-limited wavefront of the transmitter with a Jamin double-shearing interferometer is proposed. To verify the validity of the measurement, the far-field divergence of the beam is additionally analysed rigorously in terms of Fraunhofer diffraction. The measurement, the necessary analyses, and a discussion are given in detail. By directly measuring the fringe widths and quantitatively interpreting the interference fringes, the minimum detectable wavefront height (DWH) is only 0.2λ (the distance between the perfect plane wavefront and the actual wavefront at the transmitting aperture), and the corresponding divergence is only 65.84 μrad. This indicates that the wavefront approaches the diffraction-limited condition. The results show that this interferometer is a powerful tool for testing a semiconductor laser beam's wavefront, especially a diffraction-limited wavefront.
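For a rough sense of scale, the far-field divergence of a uniformly illuminated circular aperture in the Fraunhofer regime follows the textbook relation (the transmitter's actual aperture size and illumination profile are not given here):

$$\theta \approx 1.22\,\frac{\lambda}{D},$$

so a near-infrared diode wavelength over a centimeter-scale transmitting aperture lands in the tens of microradians, the regime of the 65.84 μrad figure reported above.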

Relevance: 10.00%

Publisher:

Abstract:

In the field of mechanics, it is a long-standing goal to measure quantum behavior in ever larger and more massive objects. It may now seem like an obvious conclusion, but until recently it was not clear whether a macroscopic mechanical resonator -- built up from nearly 10^13 atoms -- could be fully described as an ideal quantum harmonic oscillator. With recent advances in the fields of opto- and electro-mechanics, such systems offer a unique advantage in probing the quantum noise properties of macroscopic electrical and mechanical devices, properties that ultimately stem from Heisenberg's uncertainty relations. Given the rapid progress in device capabilities, landmark results of quantum optics are now being extended into the regime of macroscopic mechanics.

The purpose of this dissertation is to describe three experiments -- motional sideband asymmetry, back-action evasion (BAE) detection, and mechanical squeezing -- that are directly related to the topic of measuring quantum noise with mechanical detection. These measurements all share three pertinent features: each probes quantum noise properties, does so in a macroscopic electromechanical device, and drives that device with a minimum of two microwave tones; hence the title of this work: "Quantum electromechanics with two tone drive".

In the following, we will first introduce a quantum input-output framework that we use to model the electromechanical interaction and capture subtleties related to interpreting different microwave noise detection techniques. Next, we will discuss the fabrication and measurement details that we use to cool and probe these devices with coherent and incoherent microwave drive signals. Having developed our tools for signal modeling and detection, we explore the three-wave mixing interaction between the microwave and mechanical modes, whereby mechanical motion generates motional sidebands corresponding to up- and down-conversion of microwave photons. Because of quantum vacuum noise, the rates of these processes are expected to be unequal. We will discuss the measurement and interpretation of this asymmetric motional noise in an electromechanical device cooled near its motional ground state.
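The expected imbalance follows the standard sideband thermometry relations (a textbook result, not the thesis's specific calibration): with mean phonon occupation $\bar{n}$,

$$\Gamma_{\text{anti-Stokes}} \propto \bar{n}, \qquad \Gamma_{\text{Stokes}} \propto \bar{n} + 1,$$

so the ratio of the two motional sidebands measures $\bar{n}$ absolutely, and the asymmetry becomes pronounced as the oscillator approaches its ground state.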

Next, we consider an overlapped two tone pump configuration that produces a time-modulated electromechanical interaction. By careful control of this drive field, we report a quantum non-demolition (QND) measurement of a single motional quadrature. Incorporating a second pair of drive tones, we directly measure the measurement back-action associated with both classical and quantum noise of the microwave cavity. Lastly, we slightly modify our drive scheme to generate quantum squeezing in a macroscopic mechanical resonator. Here, we will focus on the data analysis techniques that we use to estimate the quadrature occupations. We employ Bayesian spectrum fitting and parameter estimation, which serve as powerful tools for folding in the many known sources of measurement and fit error that are unavoidable in such work.