944 results for "Space analysis"

Relevance: 30.00%

Abstract:

Seven strong earthquakes with M >= 6.5 occurred in southern California during the period from 1980 to 2005. In this paper, these earthquakes were studied with the LURR (Load/Unload Response Ratio) method and the State Vector method to detect whether anomalies appeared before them. The results show that LURR anomalies appeared before 6 of the 7 earthquakes, and State Vector anomalies appeared before all 7. For the LURR method, the interval between the maximum LURR value and the forthcoming earthquake is 1 to 19 months, with a dominant mean interval of about 10.7 months. For the State Vector method, the interval between the maximum modulus of the increment State Vector and the forthcoming earthquake is 3 to 27 months, but the dominant mean interval between the occurrence time of the maximum State Vector anomaly and the forthcoming earthquake is about 4.7 months. The results also show that the minimum valid spatial window for the LURR and State Vector methods is a circle with a radius of 100 km and a 3° × 3° square, respectively. These results imply that the State Vector method is more effective for short-term earthquake prediction than the LURR method, whereas the LURR method is more effective for location prediction.

Abstract:

The effect of subgrid-scale (SGS) modeling on velocity (space-) time correlations is investigated in decaying isotropic turbulence. The performance of several SGS models is evaluated, which shows superiority of the dynamic Smagorinsky model used in conjunction with the multiscale large-eddy simulation (LES) procedure. Compared to the results of direct numerical simulation, LES is shown to underpredict the (un-normalized) correlation magnitude and slightly overpredict the decorrelation time scales. This can lead to inaccurate solutions in applications such as aeroacoustics. The underprediction of correlation functions is particularly severe for higher wavenumber modes which are swept by the most energetic modes. The classic sweeping hypothesis for stationary turbulence is generalized for decaying turbulence and used to analyze the observed discrepancies. Based on this analysis, the time correlations are determined by the wavenumber energy spectra and the sweeping velocity, which is the square root of the total energy. Hence, an accurate prediction of the instantaneous energy spectra is most critical to the accurate computation of time correlations. (C) 2004 American Institute of Physics.
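The sweeping estimate described in this abstract can be illustrated numerically. Below is a minimal sketch assuming a toy Kolmogorov-like model spectrum with unit constants (not the paper's LES data): the total energy is the integral of E(k), the sweeping velocity is its square root, and the decorrelation time of a mode k scales as 1/(k · v_sweep).

```python
import math

def model_spectrum(k):
    """Toy Kolmogorov-like spectrum E(k) ~ k^(-5/3) with a low-k cutoff."""
    return k ** (-5.0 / 3.0) if k >= 1.0 else 0.0

def total_energy(k_min=1.0, k_max=256.0, n=4096):
    """Integrate E(k) with the trapezoidal rule."""
    dk = (k_max - k_min) / n
    ks = [k_min + i * dk for i in range(n + 1)]
    es = [model_spectrum(k) for k in ks]
    return dk * (0.5 * es[0] + sum(es[1:-1]) + 0.5 * es[-1])

e_tot = total_energy()
v_sweep = math.sqrt(e_tot)  # sweeping velocity = sqrt(total energy)
tau = {k: 1.0 / (k * v_sweep) for k in (4, 16, 64)}  # decorrelation times

print(f"v_sweep = {v_sweep:.3f}")
for k, t in tau.items():
    print(f"k = {k:3d}: tau ~ {t:.4f}")
```

Consistent with the abstract, higher-wavenumber modes decorrelate faster, so errors in the predicted energy spectrum feed directly into the predicted time correlations.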

Abstract:

Two different spatial levels are involved in damage accumulation to eventual failure: the macroscopic sample scale L and the mesoscopic scale characterized by the nucleation and growth rates of microdamage, n_N* and V*. It is found that the trans-scale length ratio c*/L does not directly affect the process. Instead, two independent dimensionless numbers play the key role in damage accumulation to failure: the trans-scale one De* = ac*/(LV*) and the intrinsic one D* = n_N*c*^5/V*, the latter involving mesoscopic parameters only. This implies that three time scales govern the process: the macroscopically imposed time scale t_im = L/a and two mesoscopic time scales for nucleation and growth of damage, t_N = 1/(n_N*c*^4) and t_V = c*/V*. The dimensionless number De* = t_V/t_im is the ratio of the microdamage growth time scale to the macroscopically imposed time scale, analogous to the Deborah number in rheology, defined as the ratio of the relaxation time to the external one. We therefore call De* the imposed Deborah number; it represents the competition and coupling between microdamage growth and the macroscopically imposed wave loading. In stress-wave-induced tensile failure (spallation), De* < 1, which means that microdamage has enough time to grow during the macroscopic wave loading; thus, microdamage growth appears to be the predominant mechanism governing failure. Moreover, the dimensionless number D* = t_V/t_N characterizes the ratio of the two intrinsic mesoscopic time scales, growth over nucleation; we call it the intrinsic Deborah number, since both time scales reflect intrinsic relaxation rather than imposed loading. Furthermore, the intrinsic Deborah number D* implies a certain characteristic damage. In particular, it is derived that D* is a proper indicator of the macroscopic critical damage to damage localization, with D* ~ 10^-3 to 10^-2 in spallation. More importantly, we found that this small intrinsic Deborah number D* indicates the energy partition of microdamage dissipation relative to bulk plastic work. This explains why spallation cannot be formulated by a macroscopic energy criterion and must be treated by multi-scale analysis.
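The two Deborah numbers can be computed directly from the time scales discussed above. In the following sketch, every parameter value is a hypothetical order-of-magnitude choice for a spallation-like setting, not data from the paper; only the defining ratios De* = t_V/t_im and D* = t_V/t_N come from the abstract.

```python
# Hypothetical order-of-magnitude parameters for a spallation-like setting.
L_macro = 1.0e-2   # macroscopic sample size L (m)
a_wave  = 5.0e3    # stress-wave speed a (m/s)
c_star  = 1.0e-6   # characteristic microdamage size c* (m)
V_star  = 1.0e2    # microdamage growth rate V* (m/s)
nN_star = 1.0e29   # nucleation rate n_N* (1/(m^4 s)), hypothetical

t_im = L_macro / a_wave              # imposed (wave-loading) time scale
t_V  = c_star / V_star               # microdamage growth time scale
t_N  = 1.0 / (nN_star * c_star**4)   # microdamage nucleation time scale

De_star = t_V / t_im   # imposed Deborah number  = a c*/(L V*)
D_star  = t_V / t_N    # intrinsic Deborah number = n_N* c*^5 / V*

print(f"De* = {De_star:.1e}, D* = {D_star:.1e}")
```

With these illustrative values, De* = 5e-3 < 1 (growth has time to proceed during the wave loading) and D* = 1e-3, within the 10^-3 to 10^-2 band the abstract quotes for spallation.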

Abstract:

The emergence of cooperation is analyzed in heterogeneous populations where individuals can be classified into two groups according to their phenotypic appearance. Phenotype recognition is assumed for all individuals: individuals are able to identify the type of every other individual but fail to recognize their own type, and thus behave under partial-information conditions. The interactions between individuals are described by 2 × 2 symmetric games in which individuals can either cooperate or defect. The evolution of such populations is studied in the framework of evolutionary game theory by means of the replicator dynamics. Overlapping generations are considered, so the replicator equations are formulated in discrete-time form. The well-posedness conditions of the system are derived. Depending on the parameters of the game, a restriction may exist on the generation length. The stability analysis of the dynamical system is carried out, and a detailed description is given of the behavior of trajectories starting from the interior of the state space. We find that, provided the well-posedness conditions are satisfied, the linear stability of monomorphic states in the discrete-time replicator coincides with that of the continuous-time case. Specific to the discrete-time case, a relaxed restriction on the generation length is derived, for which larger time steps can be used without compromising the well-posedness of the replicator system.
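The discrete-time replicator update for a symmetric 2 × 2 game can be sketched as follows. This toy example omits the paper's two phenotypic groups and partial-information structure; the Prisoner's Dilemma payoffs and generation length delta are illustrative choices, with delta small enough that the share stays in [0, 1] (the well-posedness restriction).

```python
def replicator_step(x, payoff, delta):
    """One discrete-time replicator update for the cooperator share x."""
    f_c = payoff[0][0] * x + payoff[0][1] * (1 - x)  # cooperator fitness
    f_d = payoff[1][0] * x + payoff[1][1] * (1 - x)  # defector fitness
    f_bar = x * f_c + (1 - x) * f_d                  # mean population fitness
    return x + delta * x * (f_c - f_bar)

# Prisoner's Dilemma payoffs for the row player: rows = (C, D), cols = (C, D).
pd = [[3.0, 0.0],
      [5.0, 1.0]]

x = 0.9  # initial cooperator share
for _ in range(200):
    x = replicator_step(x, pd, delta=0.1)
print(f"cooperator share after 200 steps: {x:.6f}")
```

As expected for the Prisoner's Dilemma, defection takes over: the cooperator share decays toward the monomorphic all-defect state, which is linearly stable here just as in the continuous-time replicator.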

Abstract:

ENGLISH: We analyzed catches per unit of effort (CPUE) from the Japanese longline fishery for bigeye tuna (Thunnus obesus) in the central and eastern Pacific Ocean (EPO) with regression tree methods. Regression trees have not previously been used to estimate time series of abundance indices from CPUE data. The "optimally sized" tree had 139 parameters; year, month, latitude, and longitude interacted to affect bigeye CPUE. The trend in tree-based abundance indices for the EPO was similar to trends estimated from a generalized linear model and from an empirical model that combines oceanographic data with information on the distribution of fish relative to environmental conditions. The regression tree was more parsimonious and would be easier to implement than the other two models, but the tree provided no information about the mechanisms that caused bigeye CPUEs to vary in time and space. Bigeye CPUEs increased sharply during the mid-1980's and were more variable at the northern and southern edges of the fishing grounds. Both of these results can be explained by changes in actual abundance and changes in catchability. Results from a regression tree that was fitted to a subset of the data indicated that, in the EPO, bigeye are about equally catchable with regular and deep longlines. This is not consistent with observations that bigeye are more abundant at depth and indicates that classification by gear type (regular or deep longline) may not provide a good measure of capture depth. A simulated annealing algorithm was used to summarize the tree-based results by partitioning the fishing grounds into regions where trends in bigeye CPUE were similar. Simulated annealing can be useful for designing spatial strata in future sampling programs. SPANISH: Analizamos la captura por unidad de esfuerzo (CPUE) de la pesquería palangrera japonesa de atún patudo (Thunnus obesus) en el Océano Pacífico oriental (OPO) y central con métodos de árbol de regresión.
Hasta ahora no se han usado árboles de regresión para estimar series de tiempo de índices de abundancia a partir de datos de CPUE. El árbol de "tamaño óptimo" tuvo 139 parámetros; año, mes, latitud, y longitud interactuaron para afectar la CPUE de patudo. La tendencia en los índices de abundancia basados en árboles para el OPO fue similar a las tendencias estimadas con un modelo lineal generalizado y con un modelo empírico que combina datos oceanográficos con información sobre la distribución de los peces en relación con las condiciones ambientales. El árbol de regresión fue más parsimonioso y sería más fácil de utilizar que los dos otros modelos, pero no proporcionó información sobre los mecanismos que causaron que las CPUE de patudo variaran en el tiempo y en el espacio. Las CPUE de patudo aumentaron notablemente a mediados de los años 80 y fueron más variables en los extremos norte y sur de la zona de pesca. Estos dos resultados pueden ser explicados por cambios en la abundancia real y cambios en la capturabilidad. Los resultados de un árbol de regresión ajustado a un subconjunto de los datos indican que, en el OPO, el patudo es igualmente capturable con palangres regulares y profundos. Esto no es consistente con observaciones de que el patudo abunda más a profundidad e indica que clasificación por tipo de arte (palangre regular o profundo) podría no ser una buena medida de la profundidad de captura. Se usó un algoritmo de templado simulado para resumir los resultados basados en el árbol clasificando las zonas de pesca en zonas con tendencias similares en la CPUE de patudo. El templado simulado podría ser útil para diseñar estratos espaciales en programas futuros de muestreo. (PDF contains 45 pages.)
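The regression-tree idea can be illustrated with a toy greedy tree on a single predictor: recursively choose the split that most reduces the squared error of a piecewise-constant CPUE fit. The "latitude" data below are synthetic and hypothetical (a mid-latitude band of high catch rates); the actual analysis split on year, month, latitude, and longitude jointly.

```python
def sse(ys):
    """Sum of squared errors around the mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def fit_tree(xs, ys, min_leaf=2, depth=0, max_depth=3):
    """Greedy binary regression tree on a single predictor."""
    best = None
    for t in sorted(set(xs))[1:]:                 # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue
        cost = sse(left) + sse(right)
        if best is None or cost < best[0]:
            best = (cost, t)
    if best is None or depth >= max_depth:
        return sum(ys) / len(ys)                  # leaf: mean CPUE
    _, t = best
    lx, ly = zip(*[(x, y) for x, y in zip(xs, ys) if x < t])
    rx, ry = zip(*[(x, y) for x, y in zip(xs, ys) if x >= t])
    return (t, fit_tree(list(lx), list(ly), min_leaf, depth + 1, max_depth),
               fit_tree(list(rx), list(ry), min_leaf, depth + 1, max_depth))

def predict(tree, x):
    while isinstance(tree, tuple):
        t, left, right = tree
        tree = left if x < t else right
    return tree

# Synthetic CPUE: higher catch rates in a mid-latitude band (hypothetical).
lats = [-20, -15, -10, -5, 0, 5, 10, 15, 20, 25]
cpue = [0.5, 0.6, 2.1, 2.3, 2.2, 2.4, 0.7, 0.6, 0.5, 0.4]
tree = fit_tree(lats, cpue)
print(predict(tree, 0), predict(tree, 20))
```

The fitted tree recovers the band structure: predictions are high near the equator and low at the edges, mirroring how the paper's tree let location interact with time to shape CPUE.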

Abstract:

This thesis examines four distinct facets and methods for understanding political ideology, and so it includes four distinct chapters with only moderate connections between them. Chapter 2 examines how reactions to emotional stimuli vary with political opinion, and how the stimuli can produce changes in an individual's political preferences. Chapter 3 examines the connection between self-reported fear and item nonresponse on surveys. Chapter 4 examines the connection between political and moral consistency with low-dimensional ideology, and Chapter 5 develops a technique for estimating ideal points and salience in a low-dimensional ideological space.

Abstract:

Signal processing techniques play important roles in the design of digital communication systems. These include information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementing technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting various mathematical tools such as analysis, probability theory, matrix theory, optimization theory, and many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communications channels. Using the elegant matrix-vector notations, many MIMO transceiver (including the precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, the researchers showed that the majorization theory and matrix decompositions, such as singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many of the point-to-point MIMO transceiver design problems.

In this thesis, we consider the transceiver design problems for linear time invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. Additionally, the channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the uses of the matrix decompositions and majorization theory toward the practical transmit-receive scheme designs for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design with LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix as a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, generalized geometric mean decomposition (GGMD), is always less than or equal to that of geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under the total transmit power constraint. A novel iterative detection algorithm for this receiver is also proposed. For the application to cyclic prefix (CP) systems in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K) times complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal to interference plus noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate at subchannels.

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels with the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, by using the proposed ST-GTD, we develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderately high SNR region. Moreover, the ST-GMD DFE transceiver designed under an MMSE criterion maximizes Gaussian mutual information over the equivalent channel seen by each ST-block. In general, the newly proposed transceivers perform better than the GGMD-based systems since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality of service (QoS) transceiver design problems for flat MIMO broadcast channels. The first one is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second problem is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum power JT broadcast DFE transceiver (MPJT) and maximum rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on QR decomposition are proposed. They are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) with LTV scalar channels. For both the case of known LTV channels and that of unknown wide sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. Also, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator for the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipaths is greater than or equal to the number of physical pilots minus one.
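The difference co-array construction mentioned in the last paragraph can be sketched directly: M physical pilot tones at positions p_i yield up to M^2 pairwise differences p_i - p_j, which act as "co-pilots" for delay estimation. The pilot positions below are an illustrative sparse placement, not the paper's alternating placement.

```python
def difference_coarray(pilots):
    """All distinct pairwise differences p_i - p_j of the pilot positions."""
    return sorted({pi - pj for pi in pilots for pj in pilots})

pilots = [0, 1, 4, 9, 11]   # M = 5 physical pilot tones (hypothetical layout)
lags = difference_coarray(pilots)
print(f"M = {len(pilots)} pilots -> {len(lags)} distinct co-array lags")
```

With these 5 pilots the co-array spans 21 distinct lags, close to the M^2 = 25 upper bound; a subspace method such as MUSIC applied on the co-array can then resolve more delays than the number of physical pilots would suggest.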

Abstract:

The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten down time, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation—all necessary first steps towards building better systems—and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped-out in 40-nm silicon) by calculating the minimum energy operating point and quantifying the chip’s robustness in the face of both timing and functional failures.
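The minimum-energy operating point mentioned above is conventionally found by balancing dynamic switching energy against leakage energy accumulated over the (voltage-dependent) cycle time. The sketch below uses a standard first-order model with purely illustrative constants; none of these values are measurements from the chip described in the abstract.

```python
# Hypothetical first-order constants (not measurements from the chip above).
C_eff   = 1e-12   # effective switched capacitance per operation (F)
I_leak0 = 1e-6    # leakage current scale (A)
V_t     = 0.3     # threshold voltage (V)
k_delay = 1e-9    # delay constant (s*V)

def energy_per_op(vdd):
    """Dynamic + leakage energy for one operation at supply voltage vdd."""
    e_dyn = C_eff * vdd ** 2                    # CV^2 switching energy
    t_cycle = k_delay * vdd / (vdd - V_t) ** 2  # delay grows near threshold
    e_leak = I_leak0 * vdd * t_cycle            # leakage integrated over cycle
    return e_dyn + e_leak

# Brute-force scan for the minimum-energy supply voltage.
vs = [0.35 + i * 0.001 for i in range(900)]     # 0.35 V .. 1.249 V
v_min = min(vs, key=energy_per_op)
print(f"minimum-energy point ~ {v_min:.3f} V, "
      f"E = {energy_per_op(v_min) * 1e15:.2f} fJ/op")
```

Lowering the supply shrinks CV^2 energy but stretches the cycle time, so leakage energy grows; the minimum-energy point sits where the two trends balance, here slightly above the threshold voltage.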

Abstract:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ back azimuth and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km depth that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Abstract:

This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to provide a tight integration of the active materials into the mirror structure, avoiding actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built and tested to measure their shape-correction performance and verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize the residual stresses that would cause the optical figure to deviate from the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate could include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape control applications.

The immediate application for these mirrors is for the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.

Abstract:

We present an experimental scheme for a cold atom space clock with a movable cavity. By using a single microwave cavity, we find that the clock has a significant advantage: the longitudinal cavity phase shift is eliminated. A theoretical analysis has been carried out of the relation between the atomic transition probability and the velocity of the moving cavity, taking into account the velocity distribution of the cold atoms. The requirements on the microwave power and its stability for atomic π/2 excitation at different moving velocities of the cavity lead to the determination of the proper working parameters for a rubidium clock with a frequency accuracy of 10^-17. Finally, the mechanical stability of the scheme is analysed and ways of mitigating possible mechanical instability of the device are proposed.

Abstract:

The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.

The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run thanks to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.

In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.

This thesis describes results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.

The first part of this thesis is devoted to the description of methods for bringing the interferometer to the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.

Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was performed to understand and eliminate the instrument's noise sources.

The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
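The adaptive feedforward cancellation idea can be sketched with a scalar least-mean-squares (LMS) filter. In this toy example, a witness sensor sees a disturbance that couples into the target channel through a short FIR path; the adaptive filter learns the path and subtracts its contribution. The coupling coefficients, noise levels, and filter settings below are all hypothetical, and the aLIGO implementation is multi-channel and far more carefully designed.

```python
import math, random

random.seed(0)
N, taps, mu = 5000, 4, 0.05

witness = [random.gauss(0, 1) for _ in range(N)]
# Hypothetical true coupling path from witness to target, plus small
# unrelated sensor noise.
coupling = [0.8, -0.3, 0.1, 0.05]
target = [sum(c * witness[n - k] for k, c in enumerate(coupling) if n - k >= 0)
          + random.gauss(0, 0.01) for n in range(N)]

w = [0.0] * taps            # adaptive FIR weights
residual = []
for n in range(taps, N):
    x = witness[n - taps + 1:n + 1][::-1]        # most recent sample first
    y = sum(wi * xi for wi, xi in zip(w, x))     # predicted coupled noise
    e = target[n] - y                            # residual after subtraction
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
    residual.append(e)

rms_before = math.sqrt(sum(t * t for t in target[-1000:]) / 1000)
rms_after = math.sqrt(sum(e * e for e in residual[-1000:]) / 1000)
print(f"RMS before: {rms_before:.3f}, after: {rms_after:.3f}")
```

Once the weights converge toward the coupling coefficients, the residual drops to roughly the uncorrelated sensor-noise floor, which is the essence of witness-based feedforward noise subtraction.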

Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last 3-4 months. This run will be followed by a set of small instrument upgrades installed on a time scale of a few months. The second science run will start in spring 2016 and last about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.