796 results for Perturbed time-delay systems
Abstract:
This thesis details the design and applications of a terahertz (THz) frequency comb spectrometer. The spectrometer employs two offset locked Ti:Sapphire femtosecond oscillators with repetition rates of approximately 80 MHz, offset locked at 100 Hz to continuously sample a time delay of 12.5 ns at a maximum time delay resolution of 15.6 fs. These oscillators emit continuous pulse trains, allowing the generation of a THz pulse train by the master, or pump, oscillator and the sampling of this THz pulse train by the slave, or probe, oscillator via the electro-optic effect. Collecting a train of 16 consecutive THz pulses and taking the Fourier transform of this pulse train produces a decade-spanning frequency comb, from 0.25 to 2.5 THz, with a comb tooth width of 5 MHz and a comb tooth spacing of ~80 MHz. This frequency comb is suitable for Doppler-limited rotational spectroscopy of small molecules. Here, the data from 68 individual scans at slightly different pump oscillator repetition rates were combined, producing an interleaved THz frequency comb spectrum, with a maximum interval between comb teeth of 1.4 MHz, enabling THz frequency comb spectroscopy.
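The comb parameters quoted above follow directly from the pulse-train picture: the Fourier transform of a train of N pulses has teeth spaced by the repetition rate and a tooth width of roughly the repetition rate divided by N. The sketch below reproduces this numerically with an idealized Gaussian pulse train; the sampling rate and pulse shape are assumptions chosen for illustration, not the spectrometer's parameters.

```python
import numpy as np

# Idealized pulse train: 16 Gaussian bursts at an 80 MHz repetition rate.
# The sampling rate and pulse width below are assumptions chosen so the
# example runs quickly; they are not the instrument's actual parameters.
rep_rate = 80e6            # Hz
n_pulses = 16
fs = 2.56e10               # assumed sampling rate, Hz

n_samples = int(round(fs * n_pulses / rep_rate))
t = np.arange(n_samples) / fs
signal = np.zeros(n_samples)
for k in range(n_pulses):
    signal += np.exp(-((t - k / rep_rate) / 2e-9) ** 2)   # one burst per period

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n_samples, 1 / fs)

# Teeth appear at multiples of rep_rate; the 200 ns record length gives a
# 5 MHz frequency grid, matching the 5 MHz tooth width quoted above.
tooth = freqs[np.argmax(spectrum[1:]) + 1]     # strongest non-DC component
print(f"first comb tooth near {tooth / 1e6:.0f} MHz")
```

Interleaving scans taken at slightly shifted repetition rates, as described above, fills in the ~80 MHz gaps between these teeth.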
The accuracy of the THz frequency comb spectrometer was tested, achieving a root mean square (RMS) error of 92 kHz when measuring selected absorption center frequencies of water vapor at 10 mTorr, and an RMS error of 150 kHz in measurements of a K-stack of acetonitrile. This accuracy is sufficient for fitting measured transitions to a model Hamiltonian to generate a predicted spectrum for molecules of interest in astronomy and physical chemistry. As such, the rotational spectra of methanol and methanol-OD were acquired by the spectrometer. Absorptions from 1.3 THz to 2.0 THz were compared to JPL catalog data for methanol, and the spectrometer achieved an RMS error of 402 kHz, improving to 303 kHz when low signal-to-noise absorptions were excluded. This level of accuracy compares favorably with the ~100 kHz accuracy achieved by JPL frequency-multiplier submillimeter spectrometers. Additionally, the relative intensity response of the THz frequency comb spectrometer is linear across the entire decade-spanning bandwidth, making it the preferred instrument for recovering lineshapes and taking absolute intensity measurements in the THz region. The methanol-OD data acquired by the spectrometer are of comparable accuracy to the methanol data and may be used to refine the fit parameters for the predicted spectrum of methanol-OD.
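The accuracy figures quoted above are root-mean-square errors between measured line centers and reference (e.g. JPL catalog) frequencies. A minimal sketch of that computation, with invented numbers rather than the thesis data:

```python
import math

# Hypothetical line centers in kHz-scale units; these values are invented
# purely to illustrate the RMS-error computation.
measured = [1_300_100.0, 1_450_050.0, 1_799_900.0]
catalog  = [1_300_000.0, 1_450_000.0, 1_800_000.0]

residuals = [m - c for m, c in zip(measured, catalog)]
rms_error = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(f"RMS error: {rms_error:.1f} (same units as input)")
```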
Abstract:
To tackle challenges in circuit-level and system-level VLSI and embedded system design, this dissertation proposes several novel algorithms for finding efficient solutions. At the circuit level, a new reliability-driven minimum-cost Steiner routing and layer assignment scheme is proposed, along with the first transceiver insertion algorithmic framework for optical interconnects. At the system level, a reliability-driven task scheduling scheme for multiprocessor real-time embedded systems is proposed, which optimizes system energy consumption under stochastic fault occurrences. Embedded system design is also widely used in the smart home area for improving health, wellbeing and quality of life. The proposed scheduling scheme for multiprocessor embedded systems is therefore extended to handle energy consumption scheduling for smart homes. The extended scheme schedules household appliances for operation so as to minimize a customer's monetary expense under a time-varying pricing model.
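The core idea of the smart-home extension can be sketched in miniature: choose operating slots for a deferrable appliance so its cost under a time-varying price is minimized. The prices and the single-appliance, contiguous-window model below are deliberate simplifications invented for illustration; the dissertation's scheduler additionally handles deadlines, multiple appliances and stochastic effects.

```python
# Assumed per-slot electricity prices ($/kWh); values are invented.
prices = [0.30, 0.25, 0.10, 0.12, 0.28]

def cheapest_window(duration, prices):
    """Return (start, cost) of the cheapest contiguous run of `duration` slots."""
    start = min(range(len(prices) - duration + 1),
                key=lambda s: sum(prices[s:s + duration]))
    return start, sum(prices[start:start + duration])

start, cost = cheapest_window(2, prices)  # e.g. a two-slot appliance run
print(f"run appliance in slots {start}-{start + 1} at total price {cost:.2f}")
```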
Abstract:
This thesis investigates the rotational behavior of abstracted small-wind-turbine rotors exposed to a sudden increase in oncoming flow velocity, i.e. a gust. These rotors consisted of blades with aspect ratios characteristic of samara seeds, which are known for their ability to maintain autorotation in unsteady wind. The models were tested in a towing tank using a custom-built experimental rig. The setup was designed and constructed to allow for the measurement of the instantaneous angular velocity of a rotor model towed at a prescribed kinematic profile along the tank. The conclusions presented in this thesis are based on the observed trends in effective angle-of-attack distribution, tip speed ratio, angular velocity, and time delay in the rotational response for each of the rotors over the prescribed gust cases. It was found that the blades with the higher aspect ratio had higher tip speed ratios and responded faster than the blades with a lower aspect ratio. The decrease in instantaneous tip speed ratio during the onset of a prescribed gust correlated with the time delay in each rotor model's rotational response. The time delays were found to increase nonlinearly with decreasing durations over which the simulated gusts occurred.
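Two of the quantities tracked above can be written down compactly. All numbers (angular velocity, radius, tow speed, pitch) in this sketch are invented for illustration and are not the experimental values.

```python
import math

def tip_speed_ratio(omega, radius, flow_speed):
    """TSR = blade tip speed / oncoming flow speed."""
    return omega * radius / flow_speed

def effective_aoa_deg(pitch_deg, omega, r, flow_speed):
    """Inflow angle at radius r (axial vs. rotational velocity) minus pitch."""
    phi = math.degrees(math.atan2(flow_speed, omega * r))
    return phi - pitch_deg

tsr = tip_speed_ratio(omega=40.0, radius=0.1, flow_speed=1.0)   # rad/s, m, m/s
aoa = effective_aoa_deg(pitch_deg=10.0, omega=40.0, r=0.05, flow_speed=1.0)
print(f"tip speed ratio: {tsr:.1f}, effective AoA at mid-span: {aoa:.1f} deg")
```

With these definitions, a sudden jump in `flow_speed` immediately lowers the instantaneous tip speed ratio (and raises the effective angle of attack) until the rotor spins up, which is the transient correlated with the response time delay discussed above.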
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because the same noises occur multiple times in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. Transforming the raw data using the corresponding eigenvectors also produces data free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome, that is, data free from laser frequency noise.
The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10×10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables. This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances.
Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix and, in our toy model investigations, did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. Separating the two sets of data for the analysis was not necessary, because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain, the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be carried out separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and the non-stationarity do not show up, because of the summation in the Fourier transform.
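The eigendecomposition argument above can be illustrated with a toy analogue (this is not the actual LISA covariance): several readings share one very large common noise plus small independent noises, and the covariance eigendecomposition splits into one large eigenvalue (the common-noise direction) and small eigenvalues whose eigenvectors combine the readings so the common noise cancels.

```python
import numpy as np

# Three synthetic readings sharing one large common "laser" noise plus
# small independent noises; all variances are invented for illustration.
rng = np.random.default_rng(0)
n = 100_000
laser = 100.0 * rng.standard_normal(n)                 # large common noise
data = np.vstack([laser + rng.standard_normal(n) for _ in range(3)])

cov = np.cov(data)
eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending order

print(np.round(eigvals, 1))        # two small (~1) eigenvalues, one huge

# Projecting onto a small-eigenvalue eigenvector cancels the common noise:
combo = eigvecs[:, 0] @ data
print(f"std of noise-free combination: {np.std(combo):.2f}")   # ~1, not ~100
```

The small-eigenvalue eigenvectors sum to zero across the readings, which is what removes the shared noise, mirroring how the laser-noise-free principal components act like TDI-style combinations.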
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
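For reference, Lyapunov-Krasovskii functionals of the general kind invoked here typically combine a quadratic term with single and double integrals over the delay intervals; the template below is the standard textbook form, not necessarily the exact functional constructed in the paper:

```latex
V(x_t) = x^\top(t) P\, x(t)
       + \int_{t-h(t)}^{t} x^\top(s)\, Q\, x(s)\, ds
       + \int_{-\bar h}^{0}\!\int_{t+\theta}^{t} \dot x^\top(s)\, R\, \dot x(s)\, ds\, d\theta,
\qquad P,\; Q,\; R \succ 0.
```

In this family of methods, requiring $\dot V < 0$ along system trajectories, after the descriptor transformation and Leibniz-Newton substitutions, is what yields the delay-dependent LMI conditions.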
Abstract:
This correspondence paper addresses the problem of output feedback stabilization of control systems in networked environments with quality-of-service (QoS) constraints. The problem is investigated in discrete-time state space using Lyapunov's stability theory and the linear matrix inequality (LMI) technique. A new discrete-time modeling approach is developed to describe a networked control system (NCS) with parameter uncertainties and nonideal network QoS. It integrates network-induced delay, packet dropout, and other network behaviors into a unified framework. With this modeling, an improved stability condition, which depends on the lower and upper bounds of the equivalent network-induced delay, is established for the NCS with norm-bounded parameter uncertainties. It is further extended to the output feedback stabilization of the NCS with nonideal QoS. Numerical examples are given to demonstrate the main results of the theoretical development.
Abstract:
Deploying networked control systems (NCSs) over wireless networks is becoming more and more popular. However, the widely used transport layer protocols, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), are not designed for real-time applications. They may therefore be unsuitable for many NCS application scenarios because of their limitations in reliability and/or delay performance, both of which are critical concerns for real-time control systems. Considering a typical type of NCS with periodic and sporadic real-time traffic, this paper proposes a highly reliable transport layer protocol featuring a packet-loss-sensitive retransmission mechanism and a prioritized transmission mechanism. The packet-loss-sensitive retransmission mechanism is designed to improve the reliability of all traffic flows, while the prioritized transmission mechanism offers differentiated services for periodic and sporadic flows. Simulation results show that the proposed protocol achieves better reliability than UDP and better delay performance than TCP over wireless networks, particularly when channel errors and congestion occur.
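The intent of the two mechanisms can be sketched as follows. The decision rule and priority scheme below are invented simplifications for illustration, not the paper's actual protocol logic.

```python
import heapq

def should_retransmit(now, deadline, rtt_estimate):
    """Loss-sensitive rule: retransmit only if a resend can still arrive in time."""
    return now + rtt_estimate <= deadline

# Priority queue: lower number = higher priority, so sporadic
# (alarm-type) traffic is served ahead of periodic samples.
PRIORITY = {"sporadic": 0, "periodic": 1}
queue = []
heapq.heappush(queue, (PRIORITY["periodic"], "sensor-sample-17"))
heapq.heappush(queue, (PRIORITY["sporadic"], "alarm-3"))

first = heapq.heappop(queue)[1]
print(first)                       # the sporadic packet is transmitted first
```

Dropping retransmissions that can no longer meet their deadline is one way a protocol can spend its reliability effort only where it still improves real-time delivery.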
Abstract:
In this paper, we consider low-complexity turbo equalization for multiple-input multiple-output (MIMO) cyclic prefixed single carrier (CPSC) systems in MIMO inter-symbol interference (ISI) channels characterized by large delay spreads. A low-complexity graph-based equalization is carried out in the frequency domain. Because frequency-domain processing reduces the correlation among the noise samples for large frame sizes and delay spreads, it is shown to achieve improved performance compared to time-domain processing. This improved performance is attractive for equalization in severely delay-spread ISI channels such as ultrawideband channels and underwater acoustic channels.
Abstract:
In this paper, we consider a single discrete-time queue with an infinite buffer. The channel may experience fading. The transmission rate is a linear function of the power used for transmission. In this scenario, we explicitly obtain power control policies which minimize mean power and/or mean delay. A peak power constraint may also be imposed.
Abstract:
Systems with some of their parameters perturbed are investigated. These systems might exist in nature or be obtained by perturbation or truncation theory. Chaos might be suppressed or induced. Some of these dynamical systems exhibit extraordinarily long transients, which makes the temporal structure appear sensitively dependent on initial conditions over a finite observation time interval.
Abstract:
This paper presents a two-weighted neural network approach to determine the delay time for a heating, ventilating and air-conditioning (HVAC) plant to respond to control actions. The two-weighted neural network is a fully connected four-layer network. An acceleration technique was used to improve the general delta rule for the learning process. Experimental data for heating and cooling modes were used with both the two-weighted neural network and a traditional mathematical method to determine the delay time. The results show that two-weighted neural networks can be used effectively to determine the delay time for HVAC systems.
Abstract:
This paper presents a multi-weight neuron approach to determine the delay time for a heating, ventilating and air-conditioning (HVAC) plant to respond to control actions. The multi-weight neuron network is a fully connected four-layer network. An acceleration technique was used to improve the general delta rule for the learning process. Experimental data for heating and cooling modes were used with both the multi-weight neuron network and a traditional mathematical method to determine the delay time. The results show that multi-weight neurons can be used effectively to determine the delay time for HVAC systems.
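A generic sketch of a delta-rule (gradient) update accelerated with a momentum term, one common way to speed up the learning process. The network, data, and all numbers below are invented for illustration; the four-layer network described in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((3, 1))      # a single linear layer, for brevity
velocity = np.zeros_like(W)
lr, momentum = 0.1, 0.9

X = rng.standard_normal((8, 3))            # invented input features
y = X @ np.array([[0.5], [-0.2], [0.1]])   # invented target mapping

for _ in range(200):
    err = X @ W - y
    grad = X.T @ err / len(X)              # mean-squared-error gradient
    velocity = momentum * velocity - lr * grad
    W += velocity                          # momentum-accelerated delta rule

mse = float(np.mean((X @ W - y) ** 2))
print(f"training MSE after 200 epochs: {mse:.2e}")
```

The momentum term reuses the previous update direction, which damps oscillations and accelerates convergence relative to the plain delta rule.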
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within process control applications. They therefore constitute the foundation upon which real-time distributed computer-controlled systems can be implemented. P-NET is a fieldbus communication standard which uses a virtual token-passing medium-access-control mechanism. In this paper, pre-run-time schedulability conditions for supporting real-time traffic on P-NET networks are established. Essentially, formulae are provided to evaluate the upper bound of the end-to-end communication delay of P-NET messages. Using this upper bound, a feasibility test is then provided to check the timing requirements for accessing remote process variables. The paper also shows how P-NET network segmentation can significantly reduce the end-to-end communication delays for messages with stringent timing requirements.
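A pre-run-time feasibility check of the kind described can be sketched in miniature: under token passing, a pessimistic upper bound on a station's wait is the sum of every master's longest message time plus token-passing overhead, and feasibility means this bound fits within the deadline. The bound below is a deliberate simplification with invented numbers, not the paper's actual P-NET formulae.

```python
# Assumed worst-case message time per master (ms) and per-station
# token-passing overhead (ms); all values are invented for illustration.
longest_msg_ms = [2.0, 1.5, 3.0]
token_pass_ms = 0.1

def delay_upper_bound(longest_msg_ms, token_pass_ms):
    """Pessimistic bound: every other master transmits once before we do."""
    return sum(longest_msg_ms) + len(longest_msg_ms) * token_pass_ms

def feasible(deadline_ms):
    """Feasibility test: worst-case end-to-end delay within the deadline."""
    return delay_upper_bound(longest_msg_ms, token_pass_ms) <= deadline_ms

bound = delay_upper_bound(longest_msg_ms, token_pass_ms)
print(f"upper bound: {bound:.1f} ms, feasible at 10 ms deadline: {feasible(10.0)}")
```

Segmenting the network shrinks the list of masters sharing a token, which is why segmentation reduces the bound for deadline-critical messages.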