949 results for k-Error linear complexity
Abstract:
The main focus of this research is to design and develop a high-performance linear actuator based on a four-bar mechanism. The present work includes the detailed analysis (kinematics and dynamics), design, implementation and experimental validation of the newly designed actuator. High performance is characterized by the acceleration of the actuator end effector. The newly designed actuator is based on a four-bar rhombus configuration (in which some bars are extended to form an X shape) to attain high acceleration. Firstly, a detailed kinematic analysis of the actuator is presented and its kinematic performance is evaluated through MATLAB simulations. The dynamic equation of the actuator is derived using the Lagrangian formulation, and a SIMULINK control model of the actuator is developed from this equation. In addition, the Bond Graph methodology is used for dynamic simulation; the Bond Graph model comprises individual component models of the actuator along with the control, and the required torque was simulated with it. Results indicate that high acceleration (around 20g) can be achieved with a modest (3 N-m or less) torque input. A practical prototype of the actuator was designed in SOLIDWORKS and then fabricated as a proof of concept. The design goal was to achieve a peak acceleration of more than 10g at the middle of the travel as the end effector traverses the stroke length (around 1 m). The actuator is primarily designed to operate in standalone condition and later to be used in a 3RPR parallel robot. A DC motor drives the actuator, and a quadrature encoder attached to the motor is used to control the end effector. The associated control scheme of the actuator is analyzed and integrated with the physical prototype. In standalone experiments, the end effector reached around 17g acceleration (stroke from 0.2 m to 0.78 m), and the results of the developed dynamic model are in good agreement with these measurements. Finally, a Design of Experiments (DOE) based statistical approach is introduced to identify the parametric combination that yields the greatest performance, using data collected from the Bond Graph model. This approach helps in designing the actuator without much complexity.
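For orientation, a minimal kinematic sketch of the rhombus configuration, assuming equal link lengths and a constant-speed crank (the link length, crank speed and angle range below are hypothetical, not the thesis values): for a rhombus of side L with half vertex angle alpha, the end effector on one diagonal sits at s = 2L·cos(alpha), and differentiating twice gives the acceleration profile used to check a multi-g requirement.

```python
import numpy as np

# Hypothetical parameters for illustration only (not the thesis values)
L = 0.25                  # link length [m]
omega = 15.0              # constant crank angular speed [rad/s]
t = np.linspace(0.0, (np.pi - 0.2) / omega, 1000)

# Rhombus four-bar: half vertex angle alpha driven at constant speed
alpha = 0.2 + omega * t
alpha_dot = np.full_like(t, omega)
alpha_ddot = np.zeros_like(t)         # constant-speed crank

# End-effector displacement along the long diagonal of the rhombus: s = 2 L cos(alpha)
s = 2.0 * L * np.cos(alpha)
v = -2.0 * L * np.sin(alpha) * alpha_dot
acc = -2.0 * L * (np.cos(alpha) * alpha_dot ** 2 + np.sin(alpha) * alpha_ddot)

print(f"stroke covered : {s.max() - s.min():.2f} m")
print(f"peak |accel|   : {np.abs(acc).max() / 9.81:.1f} g")
```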
Abstract:
Several works have reported that haematite shows non-linear initial susceptibility at room temperature, like pyrrhotite or titanomagnetite, but there is as yet no explanation for the observed behaviour. This study sets out to determine which physical properties (grain size, foreign cation content and domain-wall displacements) control the initial susceptibility. The measurements performed include microprobe analysis to detect magnetic phases other than haematite; initial susceptibility (300 K); hysteresis loops, SIRM and backfield curves at 77 and 300 K to calculate magnetic parameters; and minor loops at 77 K to analyse initial susceptibility and magnetization behaviour below the Morin transition. The low-temperature magnetic moment study is completed with zero-field-cooled/field-cooled and AC susceptibility measurements from 5 to 300 K. The minor loops show that the non-linearity of the initial susceptibility is closely related to Barkhausen jumps. Because the initial magnetic susceptibility is controlled by domain structure, it is difficult to establish a mathematical model to separate magnetic subfabrics in haematite-bearing rocks.
Abstract:
Acknowledgements: We acknowledge with thanks the contributions of the following people, who co-designed Boot Camp: Angus JM Watson (Highland Surgical Research Unit, NHSH & UoS), Morag E Hogg (NHSH Raigmore Hospital) and Ailsa Armstrong (NHSH). We also thank Angus JM Watson and Morag E Hogg for helping with the preparation of the funding application which supported this work. Funding: Our thanks to the Clinical Skills Managed Educational Network (CSMEN) of Scotland for funding this research.
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
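As a toy illustration of estimating an unknown constant phase from a small set of efficient observables (not the encoding-optimization protocol of the paper; the state, shot count and phase value are assumptions): for the single-qubit state (|0> + e^{i phi}|1>)/sqrt(2), measuring only <X> and <Y> suffices to recover phi without full tomography.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_expectation(phi_true, basis, shots=2000):
    """Simulate a shot-noise-limited measurement of <X> or <Y> for the state
    (|0> + e^{i phi}|1>)/sqrt(2).  Ideal values: <X> = cos(phi), <Y> = sin(phi)."""
    ideal = np.cos(phi_true) if basis == "X" else np.sin(phi_true)
    p_plus = (1.0 + ideal) / 2.0               # probability of the +1 outcome
    plus_counts = rng.binomial(shots, p_plus)
    return 2.0 * plus_counts / shots - 1.0

phi_true = 0.73                                 # unknown constant phase (hypothetical)
x_hat = measure_expectation(phi_true, "X")
y_hat = measure_expectation(phi_true, "Y")

# Efficient estimator: only two observables are needed, no full state tomography
phi_est = np.arctan2(y_hat, x_hat)
print(f"true phase {phi_true:.3f} rad, estimated {phi_est:.3f} rad")
```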
Abstract:
The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The block interleavers are specifically optimized for differential quadrature phase shift keying modulation. We propose a method for selecting BCH codes that, together with the interleavers, achieve a target post-FEC bit error rate (BER). This combination of interleavers and BCH codes has very low implementation complexity. In addition, our approach is straightforward, requiring only short pre-FEC simulations to parameterize a model, based on which we select codes analytically. We aim to correct a pre-FEC BER of around (Formula presented.). We evaluate the accuracy of our approach using numerical simulations. For a target post-FEC BER of (Formula presented.), codes selected using our method result in BERs around 3(Formula presented.) target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
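A hedged sketch of the analytic code-selection step, using the standard bounded-distance-decoding estimate for hard-decision BCH decoding (not necessarily the exact model parameterized in the paper); the candidate (n, k, t) triples, pre-FEC BER and target below are illustrative placeholders.

```python
import numpy as np
from scipy.stats import binom

def post_fec_ber(n, t, p):
    """Approximate output BER of a t-error-correcting, length-n BCH code under
    bounded-distance hard-decision decoding with i.i.d. input bit error rate p.
    A block with i > t channel errors is assumed to retain about i erroneous bits."""
    i = np.arange(t + 1, n + 1)
    return float(np.sum(i * binom.pmf(i, n, p)) / n)

# Illustrative candidate codes (n, k, t) and operating point; these numbers are
# examples only, not the codes or BER values used in the paper
candidates = [(1023, 923, 10), (1023, 863, 16), (4095, 3867, 19)]
pre_fec_ber = 1e-3
target = 1e-15

for n, k, t in candidates:
    ber_out = post_fec_ber(n, t, pre_fec_ber)
    verdict = "meets" if ber_out <= target else "misses"
    print(f"BCH({n},{k}), t={t}: rate {k/n:.3f}, post-FEC BER ~ {ber_out:.2e} ({verdict} {target:.0e})")
```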
Abstract:
We experimentally demonstrate a 7-dB reduction of the nonlinearity penalty in 40-Gb/s CO-OFDM over 2000 km using support vector machine regression-based equalization. Simulations of WDM CO-OFDM show up to a 12-dB enhancement in Q-factor compared to linear equalization.
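A minimal sketch of SVM-regression equalization on a synthetic constellation, assuming a toy amplitude-dependent phase rotation as a stand-in for fibre nonlinearity (the 16-QAM constellation, rotation strength, noise level and SVR hyperparameters are assumptions, not the experimental CO-OFDM setup); scikit-learn's SVR is fit separately on the in-phase and quadrature components.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Toy 16-QAM symbols with an amplitude-dependent phase rotation standing in for
# fibre nonlinearity; all parameters here are illustrative
n = 4000
tx = (rng.choice([-3, -1, 1, 3], n) + 1j * rng.choice([-3, -1, 1, 3], n)) / np.sqrt(10)
rx = tx * np.exp(1j * 0.3 * np.abs(tx) ** 2)
rx = rx + 0.08 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Train one support-vector regressor per quadrature on the first 3000 symbols
X = np.column_stack([rx.real, rx.imag])
svr_i = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:3000], tx.real[:3000])
svr_q = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:3000], tx.imag[:3000])

eq = svr_i.predict(X[3000:]) + 1j * svr_q.predict(X[3000:])
mse_raw = np.mean(np.abs(rx[3000:] - tx[3000:]) ** 2)
mse_svr = np.mean(np.abs(eq - tx[3000:]) ** 2)
print(f"MSE before equalization: {mse_raw:.4f}, after SVR equalization: {mse_svr:.4f}")
```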
Abstract:
Laser trackers have been widely used in many industries to meet increasingly high accuracy requirements. In laser tracker measurement, it is complex and difficult to perform an accurate error analysis and uncertainty evaluation. This paper first reviews the working principle of single-beam laser trackers and the state of the art of the key technologies from both industrial and academic efforts, followed by a comprehensive analysis of uncertainty sources. A generic laser tracker modelling method is formulated and the framework of a virtual laser tracking system (VLS) is proposed. The VLS can be used for measurement planning, measurement accuracy optimization and uncertainty evaluation. The completed virtual laser tracking system should take all the uncertainty sources affecting coordinate measurement into consideration and establish an uncertainty model which behaves in an identical way to the real system.
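A minimal Monte Carlo sketch of propagating tracker uncertainties into Cartesian coordinates, in the spirit of a virtual instrument (the spherical measurement model, the ranging and angular standard uncertainties, and the nominal point are assumptions, not values from the paper).

```python
import numpy as np

rng = np.random.default_rng(2)

def tracker_to_xyz(r, az, el):
    """Spherical (range, azimuth, elevation) -> Cartesian coordinates."""
    return np.stack([r * np.cos(el) * np.cos(az),
                     r * np.cos(el) * np.sin(az),
                     r * np.sin(el)], axis=-1)

# Nominal measurement and assumed (illustrative) standard uncertainties
r0, az0, el0 = 5.0, np.deg2rad(30.0), np.deg2rad(10.0)   # target 5 m away
u_r = 5e-6 + 0.3e-6 * r0                                 # ranging uncertainty [m] (assumed)
u_angle = np.deg2rad(0.5 / 3600)                          # ~0.5 arcsec per axis (assumed)

N = 100_000
samples = tracker_to_xyz(r0 + u_r * rng.standard_normal(N),
                         az0 + u_angle * rng.standard_normal(N),
                         el0 + u_angle * rng.standard_normal(N))

print("mean xyz [m]        :", np.round(samples.mean(axis=0), 6))
print("1-sigma per axis [um]:", np.round(samples.std(axis=0) * 1e6, 2))
```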
Investigating optical complexity of the phase transition in the intensity of fibre laser radiation
Abstract:
Fibre lasers have been shown to manifest a laminar-to-turbulent transition as their pump power is increased. In order to study the dynamical complexity of this transition we use advanced statistical tools of time-series analysis. We apply ordinal analysis and the horizontal visibility graph to the experimentally measured laser output intensity. This reveals the presence of temporal correlations during the transition from the laminar to the turbulent lasing regime. Both methods allow us to unveil coherent structures with well-defined time scales and strong correlations, both in the timing of the laser pulses and in their peak intensities.
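A minimal sketch of the horizontal visibility graph construction (samples i and j are linked when every intermediate sample lies below min(x_i, x_j)) applied to a toy pulse-train series; the synthetic "intensity" signal is an assumption and this is not the paper's full analysis pipeline.

```python
import numpy as np

def horizontal_visibility_degrees(x):
    """Degree of each node in the horizontal visibility graph of the series x.
    Nodes i < j are linked when all samples strictly between them are below min(x[i], x[j])."""
    n = len(x)
    degree = np.zeros(n, dtype=int)
    for i in range(n):
        blocker = -np.inf                      # highest sample seen between i and the current j
        for j in range(i + 1, n):
            if blocker < min(x[i], x[j]):
                degree[i] += 1
                degree[j] += 1
            blocker = max(blocker, x[j])
            if blocker >= x[i]:                # nothing further to the right can see i
                break
    return degree

# Toy 'laser intensity' series: noisy quasi-periodic pulses (illustrative only)
rng = np.random.default_rng(3)
t = np.arange(2000)
intensity = 1.0 + 0.6 * np.sin(0.21 * t) ** 2 + 0.2 * rng.standard_normal(t.size)

deg = horizontal_visibility_degrees(intensity)
# For an uncorrelated series the HVG degree distribution follows P(k) = (1/3)(2/3)**(k-2);
# deviations from this exponential form indicate temporal correlations.
print("degree histogram (k = 2..9):", np.bincount(deg)[2:10])
```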
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
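A small sketch of the quantities involved: principal angles between two subspaces computed from the singular values of the product of orthonormal bases, together with the product-of-sines and sum-of-squared-sines terms mentioned above (the subspace dimensions and random bases are illustrative).

```python
import numpy as np

rng = np.random.default_rng(4)

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)                   # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)                   # orthonormal basis of span(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)                 # cosines of the principal angles
    return np.arccos(s)

# Two illustrative 5-dimensional subspaces of R^50
A = rng.standard_normal((50, 5))
B = rng.standard_normal((50, 5))
theta = principal_angles(A, B)

print("principal angles (deg):", np.round(np.degrees(theta), 1))
# Quantities that govern the misclassification behaviour discussed above
print("product of sines      :", float(np.prod(np.sin(theta))))
print("sum of squared sines  :", float(np.sum(np.sin(theta) ** 2)))
```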
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets are presented, showing a clear advantage of the proposed approaches when the training set is small.
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, where the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
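A stripped-down sketch of the deviation-from-model statistic: refit a low-rank subspace on a trailing window (plain batch PCA here, standing in for the streaming multiscale tree described above, which is not reproduced) and monitor the residual energy of each new datum; window size, rank, threshold and the synthetic changepoint are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

d, r, n = 20, 3, 1500
U_true = np.linalg.qr(rng.standard_normal((d, r)))[0]
data = (U_true @ rng.standard_normal((r, n))).T + 0.05 * rng.standard_normal((n, d))
data[1000:] += 0.8 * rng.standard_normal((n - 1000, d))       # abrupt change at t = 1000

window = 200
residuals = []
for t in range(window, n):
    # Re-fit a rank-r subspace on the trailing window and score the new datum
    X = data[t - window:t]
    mu = X.mean(axis=0)
    V = np.linalg.svd(X - mu, full_matrices=False)[2][:r].T   # top-r principal directions
    x = data[t] - mu
    residuals.append(float(np.sum((x - V @ (V.T @ x)) ** 2))) # energy outside the subspace

residuals = np.array(residuals)
baseline = residuals[:700]                                     # statistics before the change
threshold = baseline.mean() + 5.0 * baseline.std()
alarm = int(np.argmax(residuals > threshold)) + window
print(f"first alarm at sample {alarm} (true changepoint at 1000)")
```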
Abstract:
To provide biological insights into transcriptional regulation, a couple of groups have recently presented models relating the transcription factors (TFs) bound at the promoter DNA to the downstream gene's mean transcript level or transcript production rate over time. However, transcript production is dynamic in response to changes in TF concentrations over time. Also, TFs are not the only factors binding to promoters; other DNA binding factors (DBFs) bind as well, especially nucleosomes, resulting in competition between DBFs for binding at the same genomic location. Additionally, not only TFs but also other elements regulate transcription. Within the core promoter, various regulatory elements influence RNAPII recruitment, PIC formation, RNAPII searching for the TSS, and RNAPII initiating transcription. Moreover, it has been proposed that, downstream of the TSS, nucleosomes resist RNAPII elongation.
Here, we provide a machine learning framework to predict transcript production rates from DNA sequences. We applied this framework to the yeast S. cerevisiae for two scenarios: a) to predict the dynamic transcript production rate during the cell cycle for native promoters; b) to predict the mean transcript production rate over time for synthetic promoters. As far as we know, our framework is the first successful attempt at a model that can predict dynamic transcript production rates from DNA sequences alone: on the cell cycle data set, we obtained a Pearson correlation coefficient Cp = 0.751 and a coefficient of determination r2 = 0.564 on the test set for predicting the dynamic transcript production rate over time. Also, for the DREAM6 Gene Promoter Expression Prediction challenge, our fitted model outperformed all participating teams, the best of all teams, and a model combining the best team's k-mer based sequence features with another paper's biologically mechanistic features, in terms of all scoring metrics.
Moreover, our framework shows its capability of identifying generalizable features by interpreting the highly predictive models, and thereby provides support for associated hypothesized mechanisms of transcriptional regulation. With the learned sparse linear models, we obtained results supporting the following biological insights: a) TFs govern the probability of RNAPII recruitment and initiation, possibly through interactions with PIC components and transcription cofactors; b) the core promoter amplifies transcript production, probably by influencing PIC formation, RNAPII recruitment, DNA melting, RNAPII searching for and selecting the TSS, releasing RNAPII from general transcription factors, and thereby initiation; c) there is strong transcriptional synergy between TFs and core promoter elements; d) the regulatory elements within the core promoter region are more than the TATA box and the nucleosome-free region, suggesting the existence of still unidentified TAF-dependent and cofactor-dependent core promoter elements in the yeast S. cerevisiae; e) nucleosome occupancy is helpful for representing the regulatory roles of the +1 and -1 nucleosomes on transcription.
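A hedged sketch of the general recipe described above, reduced to its simplest form: k-mer count features from promoter sequences fed to a sparse linear model (Lasso); the sequences, the "production rates" and the two predictive k-mers are random placeholders, and the feature set is far simpler than the models in this work.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)

K = 3
kmers = ["".join(p) for p in product("ACGT", repeat=K)]
index = {km: i for i, km in enumerate(kmers)}

def kmer_counts(seq, k=K):
    """Count occurrences of each k-mer in a promoter sequence."""
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1.0
    return v

# Placeholder data: random 200-bp 'promoters' with a synthetic production rate that
# depends on two arbitrarily chosen k-mers (TAT and GCG are illustrative choices)
n = 300
seqs = ["".join(rng.choice(list("ACGT"), 200)) for _ in range(n)]
X = np.array([kmer_counts(s) for s in seqs])
true_w = np.zeros(len(kmers))
true_w[index["TAT"]], true_w[index["GCG"]] = 1.5, -0.8
y = X @ true_w + 0.5 * rng.standard_normal(n)

model = LassoCV(cv=5).fit(X, y)
selected = [(kmers[i], round(c, 2)) for i, c in enumerate(model.coef_) if abs(c) > 1e-3]
print("k-mers retained by the sparse linear model:", selected)
print("R^2 on the training data:", round(model.score(X, y), 3))
```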
Abstract:
Basalt formation waters collected from Hole 504B at sub-basement depths of 194, 201, 365, and 440 meters show inverse linear relationships between 87Sr/86Sr and Ca, 87Sr/86Sr and Sr, and K and Ca. If the Ca content of a fully reacted formation water end-member is assumed to be 1340 ppm, the K, Sr, and 87Sr/86Sr values for the end-member are 334 ppm, 7.67 ppm, and 0.70836, respectively. With respect to contemporary seawater at Hole 504B, K is depleted by 13%, Sr is enriched by 2.7%, and 87Sr/86Sr is depleted by 0.8%. The Sr/Ca ratio of the formation water (0.0057) is much lower than that of seawater (0.018) but is similar to the submarine hot spring waters from the Galapagos Rift and East Pacific Rise and to geothermal brines from Iceland. At the intermediate temperatures represented by the Hole 504B formation waters (70°-105°C), the interaction between seawater and the ocean crust produces large solution enrichments in Ca, the addition of a significant basalt Sr isotope component accompanied by only a minor elemental Sr component, and the removal from solution of seawater K. The Rb, Cs, and Ba contents of the formation waters appear to be affected by contamination, possibly from drilling muds.
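A small sketch of the end-member extrapolation step: fit the inverse linear trend of 87Sr/86Sr against Ca and evaluate it at the assumed fully reacted Ca content of 1340 ppm; the sample values in the arrays are placeholders, not the Hole 504B analyses.

```python
import numpy as np

# Placeholder formation-water analyses (Ca in ppm, 87Sr/86Sr); NOT the Hole 504B values
ca = np.array([450.0, 620.0, 830.0, 1010.0])
sr_ratio = np.array([0.70890, 0.70875, 0.70862, 0.70851])

# Inverse linear relationship: fit 87Sr/86Sr as a linear function of Ca
slope, intercept = np.polyfit(ca, sr_ratio, 1)

# Evaluate at the assumed fully reacted end-member Ca content of 1340 ppm
ca_end = 1340.0
sr_end = slope * ca_end + intercept
print(f"extrapolated end-member 87Sr/86Sr at Ca = {ca_end:.0f} ppm: {sr_end:.5f}")
```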
Abstract:
Conventional K-Ar, 40Ar/39Ar total fusion, and 40Ar/39Ar incremental heating data on hawaiite and tholeiitic basalt samples from Ojin (Site 430), alkalic basalt samples from Nintoku (Site 432), and alkalic and tholeiitic basalt samples from Suiko (Site 433) seamounts in the Emperor Seamount chain give the following best ages for these volcanoes: Ojin = 55.2 ± 0.7 m.y., Nintoku = 56.2 ± 0.6 m.y., and Suiko = 64.7 ± 1.1 m.y. These new data bring to 27 the number of dated volcanoes in the Hawaiian-Emperor volcanic chain. The new dates prove that the age progression from Kilauea Volcano on Hawaii (0 m.y.) through the Hawaiian-Emperor bend (~43 m.y.) to Koko Seamount (48.1 m.y.) in the southernmost Emperor Seamounts continues more than halfway up the Emperor chain to Suiko Seamount. The age versus distance data for the Hawaiian-Emperor chain are consistent with the kinematic hot-spot hypothesis, which predicts that the volcanoes are progressively older west and north away from the active volcanoes of Kilauea and Mauna Loa. The data are consistent with an average volcanic propagation velocity of either 8 cm/year from Suiko to Kilauea or of 6 cm/year from Suiko to Midway followed by a velocity of 9 cm/year from Midway to Kilauea, but it appears that the change in direction that formed the Hawaiian-Emperor bend probably was not accompanied by a major change in velocity.
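A small sketch of the velocity estimate: regress distance along the chain against radiometric age and read the slope as the average propagation velocity; the ages are those quoted above, but the along-chain distances are rough placeholders, so the printed value is only illustrative.

```python
import numpy as np

# Ages (m.y.) quoted in the abstract; distances along the chain (km) are
# rough placeholder values, not taken from the paper
age = np.array([0.0, 48.1, 55.2, 56.2, 64.7])        # Kilauea, Koko, Ojin, Nintoku, Suiko
distance = np.array([0.0, 3500.0, 4100.0, 4450.0, 4800.0])

# Slope of distance vs. age = average volcanic propagation velocity
velocity_km_per_my, _ = np.polyfit(age, distance, 1)
print(f"average propagation velocity ~ {velocity_km_per_my / 10:.1f} cm/yr")  # 1 km/m.y. = 0.1 cm/yr
```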
Abstract:
Continental margin sediments of SE South America originate from various terrestrial sources, each conveying specific magnetic and element signatures. Here, we aim to identify the sources and transport characteristics of shelf and slope sediments deposited between East Brazil and Patagonia (20°-48°S) using enviromagnetic, major element, and grain-size data. A set of five source-indicative parameters (i.e., chi-fd%, ARM/IRM, S0.3T, SIRM/Fe and Fe/K) of 25 surface samples (16-1805 m water depth) was analyzed by fuzzy c-means clustering and non-linear mapping to depict and unmix sediment-province characteristics. This multivariate approach yields three regionally coherent sediment provinces with petrologically and climatically distinct source regions. The southernmost province is entirely restricted to the slope off the Argentinean Pampas and has been identified as relict Andean-sourced sands with coarse unaltered magnetite. The direct transport to the slope was enabled by Rio Colorado and Rio Negro meltwaters during glacial and deglacial phases of low sea level. The adjacent shelf province consists of coastal loessoidal sands (highest hematite and goethite proportions) delivered from the Argentinean Pampas by wave erosion and westerly winds. The northernmost province includes the Plata mudbelt and Rio Grande Cone. It contains tropically weathered clayey silts from the La Plata Drainage Basin with pronounced proportions of fine magnetite, which were distributed up to ~24° S by the Brazilian Coastal Current and admixed to coarser relict sediments of Pampean loessoidal origin. Grain-size analyses of all samples showed that sediment fractionation during transport and deposition had little impact on magnetic and element source characteristics. This study corroborates the high potential of the chosen approach to access sediment origin in regions with contrasting sediment sources, complex transport dynamics, and large grain-size variability.
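A compact fuzzy c-means sketch (plain implementation, fuzziness exponent m = 2 assumed) on synthetic stand-ins for the standardized five-parameter vectors; the three synthetic clusters are illustrative, not the 25 surface samples of the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    """Plain fuzzy c-means: returns cluster centres and the membership matrix U (n x c)."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)                    # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return centres, U

# Synthetic stand-in for standardized (chi-fd%, ARM/IRM, S0.3T, SIRM/Fe, Fe/K) vectors
X = np.vstack([rng.normal(mu, 0.3, size=(25, 5)) for mu in (-2.0, 0.0, 2.0)])
centres, U = fuzzy_cmeans(X, c=3)
labels = U.argmax(axis=1)
print("samples per fuzzy province        :", np.bincount(labels))
print("max membership (first 5 samples)  :", np.round(U.max(axis=1)[:5], 2))
```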
Abstract:
Spectral unmixing (SU) is a technique to characterize mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present at each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem becomes sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model through two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known and the main problem is to find the minimum number of endmembers under a reasonably small approximation error. Mathematically, the corresponding problem is the $\ell_0$-norm problem, which is NP-hard. The main study of the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods via such approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers in comparison with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages such as accommodating nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentrated energy of SSoM in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations over synthetic and real hyperspectral data sets. The results illustrate considerable enhancements in estimating the spectral library of materials and their fractional abundances, such as smaller values of spectral angle distance (SAD) and abundance angle distance (AAD).
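A hedged sketch of the dictionary-aided (semiblind) scenario: per-pixel sparse unmixing posed as nonnegative, l1-regularized least squares and solved by proximal gradient, standing in for the l0-norm approximations developed in the thesis; the library, abundances and noise level are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)

def sparse_unmix(A, y, lam=0.05, iters=3000):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 subject to x >= 0 by proximal gradient,
    an l1 stand-in for the l0 approximations studied in the thesis."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = np.maximum(0.0, x - step * (grad + lam))  # nonnegative soft-thresholding
    return x

# Synthetic library and pixel (sizes, abundances and noise level are illustrative)
bands, library_size, active = 100, 40, 3
A = np.abs(rng.standard_normal((bands, library_size)))   # nonnegative 'spectral library'
x_true = np.zeros(library_size)
support = rng.choice(library_size, active, replace=False)
x_true[support] = [0.5, 0.3, 0.2]                         # three active endmembers
y = A @ x_true + 0.01 * rng.standard_normal(bands)        # one noisy mixed pixel

x_hat = sparse_unmix(A, y)
print("true endmembers      :", sorted(support.tolist()))
print("largest estimated    :", sorted(np.argsort(x_hat)[-active:].tolist()))
print("abundance error (l2) :", round(float(np.linalg.norm(x_hat - x_true)), 4))
```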
Abstract:
This paper is on the use and performance of M-path polyphase Infinite Impulse Response (IIR) filters for channelisation, where conventionally Finite Impulse Response (FIR) filters are preferred. The paper specifically focuses on Discrete Fourier Transform (DFT) modulated filter banks, which are known to be an efficient choice for channelisation in communication systems. Here, the low-pass prototype filter for the DFT filter bank is implemented using an M-path polyphase IIR filter, and we show that the spikes present in the stopband can be avoided by making use of the guardbands between narrowband channels. It is shown that the channelisation performance is not affected when polyphase IIR filters are employed instead of their counterparts derived from FIR prototype filters. A detailed complexity and performance analysis of the proposed approach is given in this article.
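A minimal sketch of the DFT-modulated channelizer idea in its direct (non-polyphase) form: a single low-pass prototype (an elliptic IIR design here, standing in for the M-path polyphase IIR prototype of the paper), with each channel shifted to baseband, filtered and decimated; the sample rate, channel count and filter specifications are illustrative.

```python
import numpy as np
from scipy.signal import ellip, lfilter

fs, M = 8000.0, 8                     # sample rate [Hz] and number of channels (illustrative)
# Low-pass IIR prototype with its passband edge at half the channel spacing
b, a = ellip(5, 0.1, 60, (fs / M / 2) / (fs / 2))

def channelize(x, k):
    """Channel k of a DFT-modulated bank: shift to baseband, low-pass filter, decimate by M."""
    n = np.arange(len(x))
    baseband = x * np.exp(-2j * np.pi * k * n / M)
    return lfilter(b, a, baseband)[::M]

# Test signal: complex tones centred on channels 2 and 5
t = np.arange(4096) / fs
x = np.exp(2j * np.pi * (2 * fs / M) * t) + 0.5 * np.exp(2j * np.pi * (5 * fs / M) * t)

powers = [float(np.mean(np.abs(channelize(x, k)) ** 2)) for k in range(M)]
print("per-channel power:", np.round(powers, 3))   # energy concentrates in channels 2 and 5
```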