894 results for Clock and Data Recovery
Abstract:
A computer-controlled laser writing system for optical integrated circuits and data storage is described. The system is characterized by holographic (649F) and high-resolution plates. A minimum linewidth of 2.5 μm is obtained by controlling the system parameters. We show that this system can also be used for data storage applications.
Abstract:
Resistivity imaging of a reconfigurable phantom with circular inhomogeneities is studied with a simple instrumentation and data acquisition system for Electrical Impedance Tomography. The reconfigurable phantom is developed with stainless steel electrodes, and a sinusoidal current of constant amplitude is injected into the phantom boundary using the opposite current injection protocol. Nylon and polypropylene cylinders with different cross-sectional areas are kept inside the phantom and the boundary potential data are collected. The instrumentation and the data acquisition system with a DIP switch-based multiplexer board are used to inject a constant current of desired amplitude and frequency. Voltage data for the first eight current patterns (128 voltage data) are found to be sufficient to reconstruct the inhomogeneities and hence the acquisition time is reduced. Resistivity images are reconstructed from the boundary data for different inhomogeneity positions using EIDORS-2D. The results show that the shape and resistivity of the inhomogeneity as well as the background resistivity are successfully reconstructed from the potential data for single- or double-inhomogeneity phantoms. The resistivity images obtained from the single- and double-inhomogeneity phantoms clearly indicate the inhomogeneity as the highly resistive material. Contrast to noise ratio (CNR) and contrast recovery (CR) of the reconstructed images are found to be high for the inhomogeneities near all the electrodes arbitrarily chosen for the entire study. (C) 2010 Elsevier Ltd. All rights reserved.
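As a rough illustration of the data-collection arithmetic in this abstract, the sketch below enumerates opposite (polar) current-injection pairs for an assumed 16-electrode phantom; the electrode count is not stated explicitly but is consistent with 8 patterns × 16 boundary voltages = 128 data points. Function and constant names are mine, not the authors'.

```python
# Sketch of the opposite current-injection protocol; the electrode count is assumed.
N_ELECTRODES = 16  # assumed: 8 patterns x 16 voltages per pattern = 128 data points

def opposite_injection_pairs(n=N_ELECTRODES):
    """Return (source, sink) electrode index pairs for opposite current injection."""
    return [(k, (k + n // 2) % n) for k in range(n)]

pairs = opposite_injection_pairs()
first_eight = pairs[:8]                      # only the first eight patterns are used
voltage_data = len(first_eight) * N_ELECTRODES
print(first_eight)                           # [(0, 8), (1, 9), ..., (7, 15)]
print(voltage_data)                          # 128 boundary voltage readings
```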
Abstract:
The chemical potentials of carbon associated with two three-phase fields in the system U-Mo-C were measured using the methane-hydrogen gas equilibration technique in the temperature range 973 to 1173 K. The technique was validated by measuring the standard Gibbs energy of formation of Mo2C. From the experimentally measured values of the chemical potential of carbon in the ternary phase fields UC+Mo+UMoC1.7 and UC+UMoC1.7+UMoC2, and data for UC from the literature, the Gibbs energies of formation of the two ternary carbides were derived:
Abstract:
The removal of noise and outliers from measurement signals is a major problem in jet engine health monitoring. Typical measurement signals found in most jet engines include low rotor speed, high rotor speed, fuel flow and exhaust gas temperature. Deviations in these measurements from a baseline 'good' engine are often called measurement deltas and are the health signals used for fault detection, isolation, trending and data mining. Linear filters such as the FIR moving average filter and IIR exponential average filter are used in the industry to remove noise and outliers from the jet engine measurement deltas. However, the use of linear filters can lead to loss of critical features in the signal that can contain information about maintenance and repair events that could be used by fault isolation algorithms to determine engine condition or by data mining algorithms to learn valuable patterns in the data. Non-linear filters such as the median and weighted median hybrid filters offer the opportunity to remove noise and gross outliers from signals while preserving features. In this study, a comparison of traditional linear filters popular in the jet engine industry is made with the median filter and the subfilter weighted FIR median hybrid (SWFMH) filter. Results using simulated data with implanted faults show that the SWFMH filter results in a noise reduction of over 60 per cent compared to only 20 per cent for FIR filters and 30 per cent for IIR filters. Preprocessing jet engine health signals using the SWFMH filter would greatly improve the accuracy of diagnostic systems. (C) 2002 Published by Elsevier Science Ltd.
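A minimal sketch (not the authors' implementation) contrasting a linear FIR moving-average filter with a simple FIR median hybrid filter of the kind discussed above: the hybrid takes the median of FIR sub-filter outputs, which suppresses gross outliers while preserving step-like features such as maintenance events. The toy signal and filter lengths are assumptions.

```python
import numpy as np

def fir_moving_average(x, window=5):
    """Linear FIR moving-average filter (smears outliers into their neighbours)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def fir_median_hybrid(x, half=3):
    """Median of {backward average, current sample, forward average} at each point."""
    y = x.copy()
    for i in range(half, len(x) - half):
        backward = x[i - half:i].mean()
        forward = x[i + 1:i + 1 + half].mean()
        y[i] = np.median([backward, x[i], forward])
    return y

# Toy measurement-delta signal: a maintenance "step" plus noise and one gross outlier.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), 2.0 * np.ones(50)]) + 0.2 * rng.standard_normal(100)
signal[30] += 5.0  # gross outlier
print(fir_moving_average(signal)[28:33])  # outlier leaks into neighbouring samples
print(fir_median_hybrid(signal)[28:33])   # outlier largely rejected, step preserved
```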
Abstract:
The enthalpy increments and the standard molar Gibbs energies of formation of DyFeO3(s) and Dy3Fe5O12(s) have been measured using a Calvet micro-calorimeter and a solid oxide galvanic cell, respectively. A co-operative phase transition, related to the antiferromagnetic-to-paramagnetic transformation, is apparent from the heat capacity data for DyFeO3 at ~648 K. A similar type of phase transition, related to the ferrimagnetic-to-paramagnetic transformation, has been observed for Dy3Fe5O12 at ~560 K. Enthalpy increment data for DyFeO3(s) and Dy3Fe5O12(s), except in the vicinity of the second-order transition, can be represented by the following polynomial expressions: $\{H^0_m(T) - H^0_m(298.15\,\mathrm{K})\}/\mathrm{J\,mol^{-1}}\ (\pm 1.1\%) = -52754 + 142.9\,(T/\mathrm{K}) + 2.48\times10^{-3}\,(T/\mathrm{K})^2 + 2.951\times10^{6}\,(T/\mathrm{K})^{-1}$ for $298.15 \le T/\mathrm{K} \le 1000$, for DyFeO3(s), and $\{H^0_m(T) - H^0_m(298.15\,\mathrm{K})\}/\mathrm{J\,mol^{-1}}\ (\pm 1.2\%) = -191048 + 545.0\,(T/\mathrm{K}) + 2.0\times10^{-5}\,(T/\mathrm{K})^2 + 8.513\times10^{6}\,(T/\mathrm{K})^{-1}$ for $298.15 \le T/\mathrm{K} \le 1000$, for Dy3Fe5O12(s). The reversible emfs of the solid-state electrochemical cells (-)Pt / {DyFeO3(s) + Dy2O3(s) + Fe(s)} / YDT / CSZ // {Fe(s) + Fe0.95O(s)} / Pt(+) and (-)Pt / {Fe(s) + Fe0.95O(s)} // CSZ // {DyFeO3(s) + Dy3Fe5O12(s) + Fe3O4(s)} / Pt(+) were measured in the temperature ranges 1021-1250 K and 1035-1250 K, respectively. The standard Gibbs energies of formation of solid DyFeO3 and Dy3Fe5O12, calculated by least-squares regression analysis of the data obtained in the present study together with data for Fe0.95O and Dy2O3 from the literature, are given by $\Delta_f G^0_m(\mathrm{DyFeO_3},\mathrm{s})/\mathrm{kJ\,mol^{-1}}\ (\pm 3.2) = -1339.9 + 0.2473\,(T/\mathrm{K})$ for $1021 \le T/\mathrm{K} \le 1548$, and $\Delta_f G^0_m(\mathrm{Dy_3Fe_5O_{12}},\mathrm{s})/\mathrm{kJ\,mol^{-1}}\ (\pm 3.5) = -4850.4 + 0.9846\,(T/\mathrm{K})$ for $1035 \le T/\mathrm{K} \le 1250$. The uncertainty estimates for $\Delta_f G^0_m$ include the standard deviation in the emf and the uncertainty in the data taken from the literature. Based on this thermodynamic information, an oxygen potential diagram and chemical potential diagrams for the system Dy-Fe-O were developed at 1250 K. (C) 2002 Editions scientifiques et medicales Elsevier SAS. All rights reserved.
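A quick numerical check (my own arithmetic, not taken from the abstract) evaluating the two fitted Gibbs-energy expressions quoted above at 1250 K, the temperature at which the Dy-Fe-O potential diagrams were constructed.

```python
def dG_DyFeO3(T):      # kJ/mol, valid for 1021 K <= T <= 1548 K
    return -1339.9 + 0.2473 * T

def dG_Dy3Fe5O12(T):   # kJ/mol, valid for 1035 K <= T <= 1250 K
    return -4850.4 + 0.9846 * T

T = 1250.0
print(f"dG(DyFeO3, {T} K)    = {dG_DyFeO3(T):.1f} kJ/mol")     # roughly -1031 kJ/mol
print(f"dG(Dy3Fe5O12, {T} K) = {dG_Dy3Fe5O12(T):.1f} kJ/mol")  # roughly -3620 kJ/mol
```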
Abstract:
The enthalpy increments and the standard molar Gibbs energies of formation of SmFeO3(s) and Sm3Fe5O12(s) have been measured using a Calvet micro-calorimeter and a solid oxide galvanic cell, respectively. A λ-type transition, related to the magnetic order-disorder transformation (antiferromagnetic to paramagnetic), is apparent from the heat capacity data at ~673 K for SmFeO3(s) and at ~560 K for Sm3Fe5O12(s). Enthalpy increment data for SmFeO3(s) and Sm3Fe5O12(s), except in the vicinity of the λ-transition, can be represented by the following polynomial expressions:
$\{H^0_m(T) - H^0_m(298.15\,\mathrm{K})\}/\mathrm{J\,mol^{-1}}\ (\pm 1.2\%) = -54532.8 + 147.4\,(T/\mathrm{K}) + 1.2\times10^{-4}\,(T/\mathrm{K})^2 + 3.154\times10^{6}\,(T/\mathrm{K})^{-1}$; $(298.15 \le T/\mathrm{K} \le 1000)$
for SmFeO3(s), and
$\{H^0_m(T) - H^0_m(298.15\,\mathrm{K})\}/\mathrm{J\,mol^{-1}}\ (\pm 1.4\%) = -192763 + 554.7\,(T/\mathrm{K}) + 2.0\times10^{-6}\,(T/\mathrm{K})^2 + 8.161\times10^{6}\,(T/\mathrm{K})^{-1}$; $(298.15 \le T/\mathrm{K} \le 1000)$ for Sm3Fe5O12(s).
The reversible emfs of the solid-state electrochemical cells (-)Pt / {SmFeO3(s) + Sm2O3(s) + Fe(s)} // YDT / CSZ // {Fe(s) + Fe0.95O(s)} / Pt(+) and (-)Pt / {Fe(s) + Fe0.95O(s)} // CSZ // {SmFeO3(s) + Sm3Fe5O12(s) + Fe3O4(s)} / Pt(+) were measured in the temperature ranges of 1005-1259 K and 1030-1252 K, respectively. The standard molar Gibbs energies of formation of solid SmFeO3 and Sm3Fe5O12, calculated by least-squares regression analysis of the data obtained in the current study and data for Fe0.95O and Sm2O3 from the literature, are given by:
$\Delta_f G^0_m(\mathrm{SmFeO_3},\mathrm{s})/\mathrm{kJ\,mol^{-1}}\ (\pm 2.0) = -1355.2 + 0.2643\,(T/\mathrm{K})$
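A quick numerical check (my own arithmetic, not from the abstract) evaluating the SmFeO3(s) enthalpy-increment polynomial quoted above at the upper limit of its stated validity range, T = 1000 K.

```python
def H_increment_SmFeO3(T):
    """H0_m(T) - H0_m(298.15 K) in J/mol, valid for 298.15 K <= T <= 1000 K."""
    return -54532.8 + 147.4 * T + 1.2e-4 * T**2 + 3.154e6 / T

T = 1000.0
print(f"H(T) - H(298.15 K) at {T} K = {H_increment_SmFeO3(T):.0f} J/mol")  # ~96141 J/mol, i.e. about 96.1 kJ/mol
```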
Abstract:
This paper describes a dynamic voltage and frequency management (DVFM) control scheme for a 256 × 64 SRAM block for reducing the energy in active mode and stand-by mode. The DVFM control system monitors the external clock and changes the supply voltage and the body bias so as to achieve a significant reduction in energy. The behavioral model of the proposed DVFM control system algorithm is described and simulated in HDL using delay and energy parameters obtained through SPICE simulation. The frequency range dictated by an external controller is 100 MHz to 1 GHz. The supply voltage of the complete memory system is varied in steps of 50 mV over the range of 500 mV to 1 V. The threshold voltage range of operation is ±100 mV around the nominal value, achieving 83.4% energy reduction in the active mode and 86.7% in the stand-by mode. This paper also proposes an energy replica that is used in the energy monitor subsystem of the DVFM system.
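An illustrative sketch only (the paper's actual DVFM controller is modelled in HDL, not Python): pick a supply voltage, in 50 mV steps between 0.5 V and 1.0 V, that is just high enough for the requested clock frequency. The frequency-versus-voltage table below is a made-up placeholder, not data from the paper.

```python
VDD_STEPS = [0.5 + 0.05 * i for i in range(11)]          # 0.50 V .. 1.00 V in 50 mV steps

# Hypothetical maximum operating frequency (MHz) at each supply voltage.
FMAX_AT_VDD = {round(v, 2): 100 + 900 * (v - 0.5) / 0.5 for v in VDD_STEPS}

def select_vdd(f_clk_mhz):
    """Return the lowest supply voltage whose (assumed) Fmax covers f_clk_mhz."""
    for vdd in VDD_STEPS:
        if FMAX_AT_VDD[round(vdd, 2)] >= f_clk_mhz:
            return round(vdd, 2)
    return VDD_STEPS[-1]                                  # clamp at the maximum supply

for f in (100, 400, 1000):                                # external clock requests (MHz)
    print(f"{f} MHz -> VDD = {select_vdd(f)} V")
```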
Abstract:
Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or correlation between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation by providing valuable insight into identifying time-sensitive interactions as well as permit studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications, beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
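A toy illustration (not NETGEM's actual model or inference procedure): a single protein-protein interaction strength is treated as a Markov chain over a few discrete levels, which is the kind of "interaction dynamics" the abstract refers to. The states and transition probabilities below are assumptions chosen purely for illustration.

```python
import numpy as np

STATES = [-1, 0, 1]                       # assumed levels: inhibitory / absent / active
P = np.array([[0.80, 0.15, 0.05],         # assumed transition probabilities
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

rng = np.random.default_rng(42)

def simulate_edge(T=10, start=1):
    """Sample an interaction-strength trajectory of length T under Markov dynamics."""
    idx = STATES.index(start)
    traj = [STATES[idx]]
    for _ in range(T - 1):
        idx = rng.choice(len(STATES), p=P[idx])
        traj.append(STATES[idx])
    return traj

print(simulate_edge())                    # e.g. an edge that stays active, then decays
```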
Abstract:
The wireless LAN (WLAN) market consists of IEEE 802.11 MAC standard conformant devices (e.g., access points (APs), client adapters) from multiple vendors. Certain third-party certifications, such as those specified by the Wi-Fi Alliance, have been widely used by vendors to ensure basic conformance to the 802.11 standard, thus leading to the expectation that the available devices exhibit identical MAC level behavior. In this paper, however, we present what we believe to be the first ever set of experimental results that highlight the fact that WLAN devices from different vendors in the market can have heterogeneous MAC level behavior. Specifically, we demonstrate with examples and data that in certain cases, devices may not be conformant with the 802.11 standard, while in other cases, they may differ in significant details that are not a part of the mandatory specifications of the standard. We argue that heterogeneous MAC implementations can adversely impact WLAN operations, leading to unfair bandwidth allocation, potential breakdown of related MAC functionality and difficulties in provisioning the capacity of a WLAN. However, on the positive side, MAC level heterogeneity can be useful in applications such as vendor/model level device fingerprinting.
Abstract:
We analyze the performance of an SIR-based admission control strategy in cellular CDMA systems with both voice and data traffic. Most studies in the current literature that estimate CDMA system capacity with both voice and data traffic do not take signal-to-interference ratio (SIR) based admission control into account. In this paper, we present an analytical approach to evaluate the outage probability for voice traffic, the average system throughput and the mean delay for data traffic for a voice/data CDMA system which employs SIR-based admission control. We show that for a data-only system, an improvement of about 25% in both the Erlang capacity as well as the mean delay performance is achieved with SIR-based admission control as compared to code-availability-based admission control. For a mixed voice/data system with 10 Erlangs of voice traffic, the improvement in the mean delay performance for data is about 40%. Also, for a mean delay of 50 ms with 10 Erlangs of voice traffic, the data Erlang capacity improves by about 9%.
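A minimal sketch of the idea of SIR-based admission control (the interference model, processing gain and SIR target are my assumptions, not the paper's analytical model): a new call is admitted only if the post-admission uplink SIR stays above a target threshold; otherwise it is blocked (voice) or queued (data).

```python
import math

def sir_after_admission(n_users, processing_gain=128.0, noise_to_signal=0.1):
    """Per-user uplink SIR in a single cell with perfect power control, where each of
    the other (n_users - 1) users contributes equal-power interference."""
    return processing_gain / ((n_users - 1) + noise_to_signal)

def admit_new_call(current_users, sir_target_db=7.0):
    """Check whether admitting one more call keeps the SIR above the target."""
    sir_db = 10.0 * math.log10(sir_after_admission(current_users + 1))
    return sir_db >= sir_target_db

n = 0
while admit_new_call(n):
    n += 1
print(f"Calls admitted before the SIR target would be violated: {n}")
```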
Abstract:
The synthesis of cobalt-doped ZnO nanowires is achieved using a simple, metal salt decomposition growth technique. A sequence of drop casting on a quartz substrate held at 100 degrees C and annealing results in the growth of nanowires of average (modal) length ~200 nm and diameter of 15 +/- 4 nm, and consequently an aspect ratio of ~13. A variation in the synthesis process, where the solution of mixed salts is deposited on the substrate at 25 degrees C, yields a grainy film structure which constitutes a useful comparator case. X-ray diffraction shows a preferred [0001] growth direction for the nanowires, while a small unit cell volume contraction for Co-doped samples and data from Raman spectroscopy indicate incorporation of the Co dopant into the lattice; neither technique shows explicit evidence of cobalt oxides. The nanowire samples also display excellent optical transmission across the entire visible range, as well as strong photoluminescence (exciton emission) in the near UV, centered at 3.25 eV. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
We have developed an efficient fully three-dimensional (3D) reconstruction algorithm for diffuse optical tomography (DOT). The 3D DOT, a severely ill-posed problem, is tackled through a pseudodynamic (PD) approach wherein an ordinary differential equation representing the evolution of the solution in pseudotime is integrated, which bypasses an explicit inversion of the associated, ill-conditioned system matrix. One of the most computationally expensive parts of the iterative DOT algorithm, the reevaluation of the Jacobian in each of the iterations, is avoided by using the adjoint-Broyden update formula to provide low-rank updates to the Jacobian. In addition, wherever feasible, we have also made the algorithm efficient by integrating along the quadratic path provided by the perturbation equation containing the Hessian. These algorithms are then proven by reconstruction, using simulated and experimental data, and by verifying the PD results against those from the popular Gauss-Newton scheme. The major findings of this work are as follows: (i) the PD reconstructions are comparatively artifact free, providing superior absorption coefficient maps in terms of quantitative accuracy and contrast recovery; (ii) the scaling of computation time with the dimension of the measurement set is much less steep with the Jacobian update formula in place than without it; and (iii) an increase in the data dimension, even though it renders the reconstruction problem less ill conditioned and thus provides relatively artifact-free reconstructions, does not necessarily provide better contrast property recovery. For the latter, one should also take care to uniformly distribute the measurement points, avoiding regions close to the source, so that the relative strength of the derivatives for measurements away from the source does not become insignificant. (c) 2012 Optical Society of America
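A simplified illustration of the low-rank Jacobian updates mentioned above. It uses the classic Broyden rank-one secant update (not the adjoint-Broyden variant used in the paper) with a toy two-dimensional forward model standing in for the DOT forward projector; all names and values are assumptions.

```python
import numpy as np

def broyden_update(J, dx, df):
    """Rank-one secant update: the returned Jacobian satisfies J_new @ dx = df
    while differing from J as little as possible (in Frobenius norm)."""
    dx = dx.reshape(-1, 1)
    df = df.reshape(-1, 1)
    return J + (df - J @ dx) @ dx.T / (dx.T @ dx).item()

# Toy forward model F(x) standing in for the DOT forward projector.
def F(x):
    return np.array([x[0] ** 2 + x[1], np.sin(x[0]) + 3.0 * x[1]])

x0 = np.array([1.0, 2.0])
x1 = np.array([1.1, 1.9])
J0 = np.array([[2.0 * x0[0], 1.0], [np.cos(x0[0]), 3.0]])   # exact Jacobian at x0
J1 = broyden_update(J0, x1 - x0, F(x1) - F(x0))             # cheap update instead of recomputing
print(np.allclose(J1 @ (x1 - x0), F(x1) - F(x0)))           # secant condition holds -> True
```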
Abstract:
In this paper, we develop a game theoretic approach for clustering features in a learning problem. Feature clustering can serve as an important preprocessing step in many problems such as feature selection, dimensionality reduction, etc. In this approach, we view features as rational players of a coalitional game where they form coalitions (or clusters) among themselves in order to maximize their individual payoffs. We show how the Nash Stable Partition (NSP), a well known concept in coalitional game theory, provides a natural way of clustering features. Through this approach, one can obtain some desirable properties of the clusters by choosing appropriate payoff functions. For a small number of features, the NSP based clustering can be found by solving an integer linear program (ILP). However, for a large number of features, the ILP based approach does not scale well and hence we propose a hierarchical approach. Interestingly, a key result that we prove on the equivalence between a k-size NSP of a coalitional game and a minimum k-cut of an appropriately constructed graph comes in handy for large scale problems. In this paper, we use the feature selection problem (in a classification setting) as a running example to illustrate our approach. We conduct experiments to illustrate the efficacy of our approach.
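A hedged sketch (the payoff function and data are made up, not the paper's) of checking Nash stability of a feature partition: a partition is Nash stable if no single feature can strictly increase its own payoff by unilaterally moving to another coalition or to a singleton coalition of its own.

```python
import numpy as np

def payoff(feature, coalition, corr):
    """Toy payoff: average absolute correlation of `feature` with its coalition mates."""
    others = [g for g in coalition if g != feature]
    if not others:
        return 0.0
    return float(np.mean([abs(corr[feature, g]) for g in others]))

def is_nash_stable(partition, corr):
    """True if no feature benefits from deviating to any other (or empty) coalition."""
    for i, coalition in enumerate(partition):
        for f in coalition:
            current = payoff(f, coalition, corr)
            alternatives = [payoff(f, other | {f}, corr)
                            for j, other in enumerate(partition) if j != i]
            alternatives.append(0.0)  # deviating to a singleton coalition
            if max(alternatives) > current:
                return False
    return True

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
X[:, 1] = X[:, 0] + 0.1 * rng.standard_normal(200)   # features 0 and 1 strongly correlated
X[:, 3] = X[:, 2] + 0.1 * rng.standard_normal(200)   # features 2 and 3 strongly correlated
corr = np.corrcoef(X, rowvar=False)
print(is_nash_stable([{0, 1}, {2, 3}], corr))        # True: no feature wants to move
print(is_nash_stable([{0, 2}, {1, 3}], corr))        # False: 0 and 1 would rather be together
```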
Abstract:
Given the significant gains that relay-based cooperation promises, the practical problems of acquisition of channel state information (CSI) and the characterization and optimization of performance with imperfect CSI are receiving increasing attention. We develop novel and accurate expressions for the symbol error probability (SEP) for fixed-gain amplify-and-forward relaying when the destination acquires CSI using the time-efficient cascaded channel estimation (CCE) protocol. The CCE protocol saves time by making the destination directly estimate the product of the source-relay and relay-destination channel gains. For a single relay system, we first develop a novel SEP expression and a tight SEP upper bound. We then similarly analyze an opportunistic multi-relay system, in which both selection and coherent demodulation use imperfect estimates. A distinctive aspect of our approach is the use of as few simplifying approximations as possible, which results in new results that are accurate at signal-to-noise-ratios as low as 1 dB for single and multi-relay systems. Using insights gleaned from an asymptotic analysis, we also present a simple, closed-form, nearly-optimal solution for allocation of energy between pilot and data symbols at the source and relay(s).
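A minimal sketch of the cascaded-estimation idea described above, not the paper's exact CCE protocol or estimator: with fixed-gain amplify-and-forward, pilots sent by the source and amplified by the relay let the destination directly estimate the cascaded channel h_sr * h_rd, which is then used for coherent demodulation. The gain, channel values, pilot count and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
G = 1.5                                     # fixed relay amplification gain (assumed)
h_sr = 0.8 * np.exp(1j * 0.4)               # example source-relay channel (assumed)
h_rd = 0.9 * np.exp(-1j * 1.1)              # example relay-destination channel (assumed)

def received_at_destination(symbol, noise_std=0.05):
    """Source -> relay -> destination with fixed-gain AF and AWGN added at both hops."""
    n_r = noise_std * (rng.standard_normal() + 1j * rng.standard_normal())
    n_d = noise_std * (rng.standard_normal() + 1j * rng.standard_normal())
    return h_rd * G * (h_sr * symbol + n_r) + n_d

# Pilot phase: the destination estimates the cascaded gain h_sr * h_rd directly.
pilot, n_pilots = 1.0 + 0.0j, 8
y_pilots = np.array([received_at_destination(pilot) for _ in range(n_pilots)])
h_casc_hat = np.mean(y_pilots) / (G * pilot)

# Data phase: coherent BPSK demodulation using the (imperfect) cascaded estimate.
y = received_at_destination(-1.0 + 0.0j)
decision = np.sign((y / (G * h_casc_hat)).real)
print(abs(h_casc_hat - h_sr * h_rd))        # small cascaded-channel estimation error
print(decision)                             # -1.0, the transmitted BPSK symbol
```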