310 results for Locking
Abstract:
In recent years, modern numerical methods have been employed in the design of Wave Energy Converters (WECs); however, the high computational cost associated with their use makes it prohibitive to undertake simulations involving statistically relevant numbers of wave cycles. Experimental tests in wave tanks could also be performed more efficiently and economically if short time traces, consisting of only a few wave cycles, could be used to evaluate the hydrodynamic characteristics of a particular device or design modification. Ideally, accurate estimates of device performance could be made using results obtained from investigations with a relatively small number of wave cycles. The difficulty is that many WECs, such as the Oscillating Wave Surge Converter (OWSC), exhibit significant non-linearity in their response, so it is challenging to make accurate predictions of annual energy yield for a given spectral sea state using short-duration realisations of that sea: the non-linear device response to particular phase couplings of the sinusoidal components within those time traces may bias the estimate of mean power capture. As a result, it is generally accepted that the most appropriate estimate of mean power capture for a sea state is obtained over many hundreds (or thousands) of wave cycles, which ensures that the potential influence of phase locking is negligible compared with the prediction made. In this paper, potential methods of providing reasonable estimates of relative variations in device performance using short-duration sea states are introduced. The aim of the work is to establish the shortest sea-state duration required to provide statistically significant estimates of the mean power capture of a particular type of Wave Energy Converter. The results show that carefully selected wave traces can be used to reliably assess variations in power output due to changes in the hydrodynamic design or wave climate.
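As a rough illustration of the phase sensitivity described above, a short random-phase realisation of a spectral sea state can be synthesised and its short-trace statistics compared across realisations. The sketch below is a minimal example, assuming a Pierson-Moskowitz-like spectrum and using mean-square surface elevation as a crude stand-in for device power capture; neither is the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed one-sided wave spectrum S(f); a Pierson-Moskowitz-like shape,
# used purely for illustration and normalised to unit variance.
f = np.linspace(0.05, 0.5, 200)            # frequency bins [Hz]
fp = 0.1                                   # peak frequency [Hz]
S = f**-5 * np.exp(-1.25 * (fp / f)**4)
S /= np.trapz(S, f)

df = f[1] - f[0]
amp = np.sqrt(2 * S * df)                  # component amplitudes

def short_realisation(duration=60.0, dt=0.1):
    """One random-phase realisation (only a few wave cycles long)."""
    t = np.arange(0.0, duration, dt)
    phases = rng.uniform(0.0, 2 * np.pi, f.size)
    components = amp[:, None] * np.cos(2 * np.pi * f[:, None] * t + phases[:, None])
    return components.sum(axis=0)

# Mean-square elevation as a crude proxy for mean power capture: the
# scatter across realisations shows how phase couplings bias short traces.
estimates = [np.mean(short_realisation()**2) for _ in range(500)]
print(f"mean: {np.mean(estimates):.3f}  std: {np.std(estimates):.3f}")
```

The spread of the 500 short-trace estimates around their mean is exactly the phase-locking scatter that long records (or carefully selected traces) are meant to suppress.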
Abstract:
In Brazil, the soybean crop occupies a prominent position in both cultivated area and production volume, and it is grown largely under the no-tillage system. Owing to the intense traffic of machines and implements on the soil surface, this system has caused soil compaction problems, which in turn reduce crop yield. To minimise this effect, seeder-drills open the furrow with either a shank or a double-disc mechanism. The shank has become commonplace because it disrupts the compacted surface layer; however, it demands more energy and may cause excessive tillage in areas without high levels of compaction. This study therefore evaluated the effects of furrow-opening mechanisms and soil compaction levels on the traction required by a seeder-drill and on the growth and yield of soybean in a clayey Oxisol over two growing seasons. The experimental design was randomised blocks with split plots: the main plots comprised four levels of soil compaction (N0 – no-tillage without additional compaction; N1, N2 and N3 – no-tillage subjected to compaction by two, four and six tractor passes, respectively), corresponding to bulk densities of 1.16, 1.20, 1.22 and 1.26 g cm-3, and the subplots comprised the two furrow-opening mechanisms (shank and double disc), with four replicates. The average, maximum and specific traction forces demanded by the seeder-drill were measured with a load cell (capacity 50 kN, sensitivity 2 mV V-1) coupled between the tractor and the seeder-drill, with data stored in a Campbell Scientific CR800 datalogger. Bulk density, soil mechanical resistance to penetration, sowing depth, furrow depth and width, mobilised soil area, emergence speed index, emergence, final plant stand, stem diameter, plant height, average number of seeds per pod, 1,000-seed weight, number of pods per plant and crop yield were also evaluated. Data were subjected to analysis of variance; the means for the furrow openers were compared by Tukey's test (p≤0.05), while polynomial regression analysis was adopted for the soil-compaction factor, with models selected by the criterion of highest R2 and significance (p≤0.05) of the equation parameters. Regardless of the season, penetration resistance increased with soil compaction level down to around 0.20 m depth, and bulk density influenced the sowing-quality parameters but did not affect crop yield. In the first season, yield was higher with the shank opener. In the second season, the shank demanded a greater energy requirement as bulk density increased, while the opposite occurred with the double disc. Locking of the sowing lines allows the shank to perform better in breaking the compacted layer.
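The model-selection rule described above (highest R2 among significant polynomial fits) can be sketched in a few lines. The bulk densities below match those reported, but the response values are invented purely for illustration, and the significance test of the coefficients is omitted:

```python
import numpy as np

# Hypothetical data: four compaction levels (bulk density, g/cm^3) and a
# response such as mean draught force (kN); values invented for illustration.
density = np.array([1.16, 1.20, 1.22, 1.26])
draught = np.array([8.4, 9.1, 9.8, 11.0])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Fit linear and quadratic models; keep the one with the larger R^2
# (testing significance of the equation parameters is omitted for brevity).
for degree in (1, 2):
    coeffs = np.polyfit(density, draught, degree)
    y_hat = np.polyval(coeffs, density)
    print(f"degree {degree}: R^2 = {r_squared(draught, y_hat):.4f}")
```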
Abstract:
The study of quantum degenerate gases has many applications in topics such as condensed-matter dynamics, precision measurement and quantum phase transitions. We built an apparatus to create 87Rb Bose-Einstein condensates (BECs) and generated, via optical and magnetic interactions, novel quantum systems in which we studied the phase transitions they contain. In our first experiment we quenched multi-spin-component BECs from a miscible to a dynamically unstable immiscible state. The transition rapidly amplifies any spin fluctuations, with a coherent growth process driving the formation of numerous spin-polarized domains. At much longer times these domains coarsen as the system approaches equilibrium. In our second experiment we explored the magnetic phases present in a spin-1 spin-orbit-coupled BEC and the quantum phase transitions among them. We observed ferromagnetic and unpolarized phases, which are stabilized by the spin-orbit coupling's explicit locking between spin and motion. These two phases are separated by a critical curve containing both first-order and second-order transitions joined at a critical point; the narrow first-order transition gives rise to long-lived metastable states. In our third experiment we prepared independent BECs in a double-well potential, with an artificial magnetic field between the BECs. We transitioned to a single BEC by lowering the barrier while expanding the region of artificial field to cover the resulting single BEC. Comparing the vortex distributions nucleated via conventional dynamics to those produced by our procedure shows that our dynamical process populates vortices much more rapidly and in larger numbers than conventional nucleation.
Abstract:
The Neolithic was marked by a transition from small, relatively egalitarian groups to much larger groups with increased stratification, but the dynamics of this transition remain poorly understood. It is hard to see how despotism can arise without coercion, yet coercion could not easily have occurred in an egalitarian setting. Using a quantitative model of evolution in a patch-structured population, we demonstrate that the interaction between demographic and ecological factors can overcome this conundrum. We model the co-evolution of individual preferences for hierarchy alongside the degree of despotism of leaders and the dispersal preferences of followers. We show that voluntary leadership without coercion can evolve in small groups when leaders help to solve coordination problems related to resource production, such as coordinating construction of an irrigation system. Our model predicts that the transition to larger despotic groups then occurs when: (1) surplus resources lead to demographic expansion of groups, removing the viability of an acephalous niche in the same area and so locking individuals into hierarchy; and (2) high dispersal costs limit followers' ability to escape a despot. Empirical evidence suggests that these conditions were likely met for the first time during the subsistence intensification of the Neolithic.
Abstract:
Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.
To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
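A common form of the Bayesian likelihood for such inversions, with forward-model (prediction) uncertainty entering alongside the observational noise, is the following (the notation is assumed here for illustration, not quoted from the thesis):

$$p(\mathbf{m}\mid\mathbf{d}) \;\propto\; p(\mathbf{m})\,\exp\!\Big[-\tfrac{1}{2}\big(\mathbf{d}-G(\mathbf{m})\big)^{\mathsf T}\big(\mathbf{C}_d+\mathbf{C}_p\big)^{-1}\big(\mathbf{d}-G(\mathbf{m})\big)\Big],$$

where $G$ is the forward model (e.g., tsunami Green's functions), $\mathbf{C}_d$ the data covariance and $\mathbf{C}_p$ the covariance of the forward-modeling error; posterior sampling over $\mathbf{m}$ then yields the displacement and slip profiles together with their uncertainties.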
To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.
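For reference, a standard form of the rate-and-state friction law used in such simulations, with the aging law for the state variable (the thesis's exact parameterisation may differ), is:

$$\tau = \bar{\sigma}\left[\mu_0 + a\ln\frac{V}{V_0} + b\ln\frac{V_0\,\theta}{D_{RS}}\right], \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_{RS}},$$

where $V$ is slip rate and $\theta$ the state variable; $a-b<0$ gives the velocity-weakening behavior of the seismogenic zone, while $a-b>0$ gives the velocity-strengthening creep of its deeper extension.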
Abstract:
Integrated circuit scaling has enabled a huge growth in processing capability, which necessitates a corresponding increase in inter-chip communication bandwidth. As bandwidth requirements for chip-to-chip interconnection scale, the deficiencies of electrical channels become more apparent. Optical links present a viable alternative due to their low frequency-dependent loss and higher bandwidth density in the form of wavelength-division multiplexing. As integrated photonics and bonding technologies mature, commercialization of hybrid-integrated optical links is becoming a reality. Increasing silicon integration leads to better performance in optical links but necessitates a corresponding co-design strategy in both electronics and photonics. In this light, holistic design of high-speed optical links with an in-depth understanding of photonics and state-of-the-art electronics brings their performance to unprecedented levels. This thesis presents developments in high-speed optical links by co-designing and co-integrating the primary elements of an optical link: receiver, transmitter, and clocking.
In the first part of this thesis a 3D-integrated CMOS/silicon-photonic receiver will be presented. The electronic chip features a novel design that employs a low-bandwidth TIA front-end, double sampling, and equalization through dynamic offset modulation. Measured results show -14.9dBm sensitivity and 170fJ/b energy efficiency at 25Gb/s. The same receiver front-end is also used to implement a source-synchronous 4-channel WDM-based parallel optical receiver. Quadrature ILO-based clocking is employed for synchronization, and a novel frequency-tracking method that exploits the dynamics of injection locking in a quadrature ring oscillator increases the effective locking range. An adaptive body-biasing circuit maintains constant per-bit energy consumption across a wide range of data rates. The prototype measurements indicate a record-low power consumption of 153fJ/b at 32Gb/s, with a receiver sensitivity of -8.8dBm at 32Gb/s.
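For context on the locking-range figure of merit this design extends, Adler's classic estimate of the one-sided locking range of an injection-locked oscillator under weak injection is:

$$\Delta\omega_L \approx \frac{\omega_0}{2Q}\cdot\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}},$$

so a larger injection strength or a lower tank $Q$ widens the range over which the ILO tracks the incoming clock; the frequency-tracking method described above extends tracking beyond this intrinsic range.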
Next, on the optical transmitter side, three new techniques will be presented. The first is a differential ring modulator that breaks the optical bandwidth/quality-factor trade-off known to limit the speed of high-Q ring modulators. This structure maintains a constant energy in the ring to avoid pattern-dependent power droop. As a first proof of concept, a prototype has been fabricated and measured up to 10Gb/s. The second technique is thermal stabilization of micro-ring resonator modulators through direct measurement of temperature using a monolithic PTAT temperature sensor. The measured temperature is used in a feedback loop to adjust the thermal tuner of the ring. A prototype has been fabricated, and the closed-loop feedback system is demonstrated to operate at 20Gb/s in the presence of temperature fluctuations. The third technique is a switched-capacitor-based pre-emphasis technique designed to extend the inherently low bandwidth of carrier-injection micro-ring modulators. A measured prototype of the optical transmitter achieves an energy efficiency of 342fJ/bit at 10Gb/s, and the wavelength-stabilization circuit based on the monolithic PTAT sensor consumes 0.29mW.
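The bandwidth/quality-factor trade-off the differential ring modulator targets follows from the resonator linewidth; in the standard Lorentzian-resonator picture (illustrative numbers below, not measurements from this work):

$$\Delta f_{3\mathrm{dB}} \approx \frac{f_{\mathrm{res}}}{Q}, \qquad \tau_{\mathrm{ph}} = \frac{Q}{2\pi f_{\mathrm{res}}},$$

so at $\lambda \approx 1550\,\mathrm{nm}$ ($f_{\mathrm{res}} \approx 193\,\mathrm{THz}$) a ring with $Q = 10^4$ has only about $19\,\mathrm{GHz}$ of optical bandwidth, and raising $Q$ for modulation efficiency shrinks it further.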
Lastly, a first-order frequency synthesizer suitable for high-speed on-chip clock generation will be discussed. The proposed design features an architecture combining an LC quadrature VCO, two sample-and-hold stages, a phase interpolator (PI), digital coarse tuning, and rotational frequency detection for fine tuning. In addition to an electrical reference clock, the prototype chip is capable of receiving a low-jitter optical reference clock generated by a high-repetition-rate mode-locked laser. The output clock at 8GHz has an integrated RMS jitter of 490fs, peak-to-peak periodic jitter of 2.06ps, and total RMS jitter of 680fs. The reference spurs are measured to be -64.3dB below the carrier. At 8GHz the system consumes 2.49mW from a 1V supply.
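Integrated-jitter figures such as those quoted are conventionally obtained from the measured single-sideband phase-noise profile $\mathcal{L}(f)$ (in dBc/Hz) as:

$$\sigma_{t,\mathrm{RMS}} = \frac{1}{2\pi f_0}\sqrt{2\int_{f_1}^{f_2} 10^{\mathcal{L}(f)/10}\,df},$$

with $f_0 = 8\,\mathrm{GHz}$ the carrier and $[f_1, f_2]$ the integration band; the band assumed for the 490fs figure is not stated here.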
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time-delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises; this is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were manually generated, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motion. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal-component approach presented by Romano and Woan is another way of handling these noises, and it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, takes advantage of the correlations they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues, distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors likewise produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time-delay interferometry observables, since they produce the same outcome: data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and analysis using principal components should therefore give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noise are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix and, in our toy-model investigations, did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that exploit this structure. Separating the two sets of data for the analysis proved unnecessary, because the laser frequency noises are very large compared to the photodetector noises, so the data containing them are strongly suppressed after the matrix inversion. In the frequency domain the power spectral density matrices are block diagonal, which simplifies the computation of the eigenvalues by allowing it to be done separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
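A toy sketch of the eigendecomposition step follows. This is not a LISA simulation: a single huge common noise stands in for laser frequency noise across a handful of channels, and all numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: n "phasemeter" channels sharing one dominant common noise
# source (standing in for laser frequency noise) plus small independent
# noise. Schematic of the Romano-Woan idea only.
n, big, small = 6, 1e6, 1.0
u = np.ones((n, 1))                       # common-noise coupling vector
C = big * (u @ u.T) + small * np.eye(n)   # data covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)      # eigendecomposition

# The eigenvalues split into one laser-noise-dominated value and a set
# free of the common noise, mirroring the two distinct sets described above.
clean = eigvals < 10 * small
print("eigenvalues:", np.round(eigvals, 2))

# Projecting raw data onto the clean eigenvectors removes the common noise,
# mimicking what TDI combinations achieve by time-shifted differencing.
data = big**0.5 * rng.standard_normal(1) * u.ravel() + rng.standard_normal(n)
clean_data = eigvecs[:, clean].T @ data
print("clean-channel residuals:", np.round(clean_data, 2))
```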
Abstract:
It is well known that self-generated stimuli are processed differently from externally generated stimuli; for example, many people have noticed since childhood that it is very difficult to tickle oneself. In the auditory domain, self-generated sounds elicit smaller brain responses than externally generated sounds, known as the sensory attenuation (SA) effect. SA is manifested in reduced amplitudes of evoked responses as measured through M/EEG, decreased firing rates of neurons, and a lower level of perceived loudness for self-generated sounds. The predominant explanation for SA is based on the idea that self-generated stimuli are predicted (e.g., the forward-model account); it is the nature of their predictability that is crucial for SA. By contrast, the sensory-gating account emphasizes a general suppressive effect of actions on sensory processing, regardless of the predictability of the stimuli. Both accounts have received empirical support, which suggests that both mechanisms may exist. In chapter 2, three behavioural studies concerning the influence of motor activation on auditory perception are presented. Study 1 compared the effects of SA and attention in an auditory detection task and showed that SA was present even when substantial attention was paid to unpredictable stimuli. Study 2 compared the perceived loudness of tones generated by another person between Chinese and British participants: compared to externally generated tones, a decrease in perceived loudness for other-generated tones was found among Chinese but not among British participants. In study 3, partial evidence was found that auditory detection performance was impaired even when merely reading words related to action. In chapter 3, the classic SA effect of M100 suppression was replicated with MEG in study 4. Time-frequency analysis revealed a potential neural information-processing sequence in auditory cortex: prior to the onset of self-generated tones, there was an increase of oscillatory power in the alpha band; after stimulus onset, reduced gamma power and alpha/beta phase locking were found. The three temporally segregated oscillatory events correlated with each other and with the SA effect, and may constitute the underlying neural implementation of SA. In chapter 4, a TMS-MEG study is presented investigating the role of the cerebellum in adapting to delayed presentation of self-generated tones (study 5). It demonstrated that in the sham-stimulation condition the brain can adapt to the delay (about 100 ms) within 300 trials of learning, shown by a significant increase of the SA effect in the suppression of the M100, but not the M200, component. After stimulating the cerebellum with a suppressive TMS protocol, the adaptation in M100 suppression disappeared and the pattern of M200 suppression reversed to M200 enhancement. These data support the idea that the suppressive effect of actions on auditory processing is a consequence of both motor-driven sensory predictions and general sensory gating. The results also demonstrate the importance of neural oscillations in implementing the SA effect and the critical role of the cerebellum in learning sensory predictions under sensory perturbation.
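The alpha/beta phase locking reported is typically quantified with an inter-trial phase-locking value (PLV). A minimal sketch on synthetic data follows; the band-pass filtering and baseline correction that a real M/EEG pipeline would require are omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)

# Synthetic data: 100 trials of a ~10 Hz signal with partially consistent
# phase across trials, plus noise (values chosen only for illustration).
fs, n_trials, n_samp = 250, 100, 500
t = np.arange(n_samp) / fs
trials = np.array([
    np.cos(2 * np.pi * 10 * t + rng.normal(0, 0.5))   # jittered phase
    + 0.5 * rng.standard_normal(n_samp)
    for _ in range(n_trials)
])

phase = np.angle(hilbert(trials, axis=1))            # instantaneous phase
plv = np.abs(np.mean(np.exp(1j * phase), axis=0))    # PLV across trials
print(f"mean PLV: {plv.mean():.2f} (1 = perfect locking, ~0 = random phase)")
```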
Abstract:
A finite-strain solid–shell element is proposed. It is based on least-squares in-plane assumed strains and assumed natural transverse shear and normal strains. The singular value decomposition (SVD) is used to define local (integration-point) orthogonal frames-of-reference solely from the Jacobian matrix. The complete finite-strain formulation is derived and tested. Assumed strains obtained from least-squares fitting are an alternative to enhanced-assumed-strain (EAS) formulations and, in contrast with these, the result is an element satisfying the Patch test. There are no additional degrees-of-freedom, as there are in the enhanced-assumed-strain case, not even ones hidden by static condensation. Least-squares fitting produces invariant finite-strain elements which are shear-locking free and amenable to incorporation in large-scale codes. With that goal, we use automatically generated code produced by AceGen and Mathematica. All benchmarks show excellent results, similar to the best available shell and hybrid solid elements, with significantly lower computational cost.
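A minimal sketch of extracting an orthonormal integration-point frame from the Jacobian via the SVD, as the abstract describes; taking the orthogonal polar factor below is one possible convention, not necessarily the paper's.

```python
import numpy as np

def local_frame_from_jacobian(J):
    """Orthonormal local frame at an integration point from the 3x3
    Jacobian, via SVD; the orthogonal polar factor is one possible
    convention, shown here for illustration."""
    U, _, Vt = np.linalg.svd(J)
    R = U @ Vt                      # closest rotation/reflection to J
    if np.linalg.det(R) < 0:        # enforce a right-handed frame
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Example: a skewed element Jacobian; columns of R give the frame axes.
J = np.array([[1.0, 0.2, 0.0],
              [0.1, 0.9, 0.1],
              [0.0, 0.1, 1.1]])
R = local_frame_from_jacobian(J)
print(np.round(R.T @ R, 6))         # identity: R is orthonormal
```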