960 results for vector network analyzer
Abstract:
The traditional Time Division Multiple Access (TDMA) protocol provides deterministic, periodic, collision-free data transmissions. However, TDMA lacks flexibility and exhibits low efficiency in dynamic environments such as wireless LANs. On the other hand, contention-based MAC protocols such as the IEEE 802.11 DCF are adaptive to network dynamics but are generally inefficient in heavily loaded or large networks. To take advantage of both types of protocol, a D-CVDMA protocol is proposed. It is based on the k-round elimination contention (k-EC) scheme, which provides fast contention resolution for wireless LANs. D-CVDMA uses a contention mechanism to achieve TDMA-like collision-free data transmissions without needing to reserve time slots for forthcoming transmissions. These features make D-CVDMA robust and adaptive to network dynamics such as nodes leaving and joining and changes in packet size and arrival rate, which in turn makes it suitable for delivering hybrid traffic, including multimedia and data content. Analyses and simulations demonstrate that D-CVDMA outperforms the IEEE 802.11 DCF and k-EC in terms of network throughput, delay, jitter, and fairness.
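To make the contention step concrete, here is a minimal Python sketch of a k-round elimination contention, assuming a simple variant in which every contender draws a random slot each round and only those with the earliest draw survive; the slot count, k, and tie handling are illustrative assumptions rather than details taken from the paper.

```python
import random

def k_ec_round(contenders, num_slots=8):
    """One elimination round: each contender draws a random slot and
    only those that drew the earliest (smallest) slot survive."""
    draws = {node: random.randrange(num_slots) for node in contenders}
    earliest = min(draws.values())
    return [node for node, slot in draws.items() if slot == earliest]

def k_ec(contenders, k=3, num_slots=8):
    """Run k elimination rounds; a single survivor wins the channel,
    while multiple survivors correspond to a residual collision."""
    survivors = list(contenders)
    for _ in range(k):
        if len(survivors) <= 1:
            break
        survivors = k_ec_round(survivors, num_slots)
    return survivors

if __name__ == "__main__":
    print("surviving contenders:", k_ec(range(20), k=3, num_slots=8))
```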
Abstract:
Nonlinear principal component analysis (PCA) based on neural networks has drawn significant attention as a monitoring tool for complex nonlinear processes, but there remains a difficulty in determining the optimal network topology. This paper exploits the advantages of the Fast Recursive Algorithm, whereby the number of nodes, the locations of centres, and the weights between the hidden layer and the output layer can be identified simultaneously for radial basis function (RBF) networks. The topology problem for neural-network-based nonlinear PCA can thus be solved. Another problem with nonlinear PCA is that the derived nonlinear scores may not be statistically independent or follow a simple parametric distribution. This hinders its application in process monitoring, since the simplicity of applying predetermined probability distribution functions is lost. This paper proposes the use of a support vector data description and shows that transforming the nonlinear principal components into a feature space allows simple statistical inference. Results from both simulated and industrial data confirm the efficacy of the proposed method for solving nonlinear principal component problems, compared with linear PCA and kernel PCA.
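As a minimal sketch of the monitoring idea, the snippet below uses scikit-learn's KernelPCA as a stand-in for the RBF-network nonlinear PCA identified by the Fast Recursive Algorithm, and a one-class SVM as an approximation to the support vector data description; the data and hyperparameters are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Normal operating data: a noisy nonlinear (quadratic) manifold.
x = rng.uniform(-1, 1, size=(500, 1))
X_train = np.hstack([x, x**2 + 0.05 * rng.standard_normal((500, 1))])

# Step 1: extract nonlinear scores (kernel PCA stands in for the
# RBF-network PCA described in the abstract).
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0)
scores_train = kpca.fit_transform(X_train)

# Step 2: enclose the scores with a data description; nu bounds the
# fraction of training points treated as outliers.
svdd = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
svdd.fit(scores_train)

# Monitoring: a sample off the manifold should be flagged as -1.
X_new = np.array([[0.0, 1.0], [0.2, 0.05]])
print(svdd.predict(kpca.transform(X_new)))
```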
Abstract:
Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach that maps algorithm tasks to processors, together with a new low-complexity list scheduling technique that generates the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches, and produce results comparable to handcrafted implementations.
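A generic low-complexity list scheduler in the same spirit is sketched below: ready tasks are greedily placed on the earliest-free processor. The priority rule (longest task first) and the toy task graph are illustrative assumptions, not the paper's algorithm.

```python
import heapq

def list_schedule(tasks, deps, durations, num_procs=4):
    """Greedy list scheduler: repeatedly place the highest-priority
    ready task on the earliest-free processor. 'deps' maps a task to
    the set of its predecessors."""
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    finish = {}
    proc_free = [(0.0, p) for p in range(num_procs)]   # (free_at, proc)
    heapq.heapify(proc_free)
    ready = [t for t in tasks if indegree[t] == 0]
    schedule = []
    while ready:
        ready.sort(key=lambda t: -durations[t])        # crude priority
        task = ready.pop(0)
        free_at, proc = heapq.heappop(proc_free)
        start = max(free_at,
                    max((finish[d] for d in deps.get(task, ())), default=0.0))
        finish[task] = start + durations[task]
        schedule.append((task, proc, start, finish[task]))
        heapq.heappush(proc_free, (finish[task], proc))
        for t in tasks:                                # release successors
            if task in deps.get(t, ()):
                indegree[t] -= 1
                if indegree[t] == 0:
                    ready.append(t)
    return schedule

durations = {"a": 3, "b": 2, "c": 2, "d": 1}
deps = {"c": {"a"}, "d": {"a", "b"}}
for row in list_schedule(["a", "b", "c", "d"], deps, durations, num_procs=2):
    print(row)
```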
Abstract:
Besides making contact with an approaching ball at the proper place and time, hitting requires control of the effector velocity at contact. A dynamical neural network for the planning of hitting movements was derived in order to account for both these requirements. The model in question implements continuous required velocity control by extending the Vector Integration To Endpoint model while providing explicit control of effector velocity at interception. It was shown that the planned movement trajectories generated by the model agreed qualitatively with the kinematics of hitting movements as observed in two recent experiments. Outstanding features of this comparison concerned the timing and amplitude of the empirical backswing movements, which were largely consistent with the predictions from the model. Several theoretical implications as well as the informational basis and possible neural underpinnings of the model were discussed.
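For readers unfamiliar with the Vector Integration To Endpoint dynamics that the model extends, here is a minimal sketch of the classic VITE equations (not the paper's extended required-velocity version); the gains and go signal are illustrative values.

```python
import numpy as np

def vite(target, p0, gamma=25.0, go=4.0, dt=0.001, t_end=1.0):
    """Euler-integrate the classic VITE equations:
        dV/dt = gamma * (-V + T - P)   (difference-vector cell)
        dP/dt = GO * [V]+              (present-position integrator)
    [V]+ is half-wave rectification; the GO signal gates movement."""
    P = np.array(p0, dtype=float)
    V = np.zeros_like(P)
    traj = []
    for _ in range(int(t_end / dt)):
        V += dt * gamma * (-V + target - P)
        P += dt * go * np.maximum(V, 0.0)
        traj.append(P.copy())
    return np.array(traj)

traj = vite(target=np.array([0.3, 0.1]), p0=[0.0, 0.0])
print("final position:", traj[-1])
```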
Abstract:
Research has been undertaken to investigate the use of artificial neural network (ANN) techniques to improve the performance of a low bit-rate vector transform coder. Considerable improvements in the perceptual quality of the coded speech have been obtained. New ANN-based methods for vector quantiser (VQ) design and for the adaptive updating of VQ codebooks are introduced for use in speech coding applications.
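One classic ANN route to VQ codebook design is competitive learning, sketched below; the frame dimension, codebook size, and learning schedule are illustrative assumptions, not the methods introduced in the paper.

```python
import numpy as np

def train_vq_codebook(vectors, codebook_size=16, epochs=5, lr0=0.2, seed=0):
    """Competitive-learning VQ design: for each training vector, move
    the nearest codeword toward it (an online, neural view of k-means)."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size,
                                  replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)      # decaying learning rate
        for v in vectors[rng.permutation(len(vectors))]:
            winner = np.argmin(np.linalg.norm(codebook - v, axis=1))
            codebook[winner] += lr * (v - codebook[winner])
    return codebook

# Toy stand-in for speech features: random 8-dimensional frames.
frames = np.random.default_rng(1).standard_normal((1000, 8))
print("codebook shape:", train_vq_codebook(frames).shape)
```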
Abstract:
Introduction: Amplicon deep-sequencing using second-generation sequencing technology is an innovative molecular diagnostic technique that enables highly sensitive detection of mutations. As an international consortium, we previously investigated the robustness, precision, and reproducibility of 454 amplicon next-generation sequencing (NGS) across 10 laboratories from 8 countries (Leukemia, 2011;25:1840-8).
Aims: In Phase II of the study, we established distinct working groups for various hematological malignancies, i.e. acute myeloid leukemia (AML), acute lymphoblastic leukemia (ALL), chronic lymphocytic leukemia (CLL), chronic myelogenous leukemia (CML), myelodysplastic syndromes (MDS), myeloproliferative neoplasms (MPN), and multiple myeloma. Currently, 27 laboratories from 13 countries are part of this research consortium. In total, 74 gene targets were selected by the working groups and amplicons were developed for an NGS deep-sequencing assay (454 Life Sciences, Branford, CT). A data analysis pipeline was developed to standardize mutation interpretation, covering both raw data access (Roche Amplicon Variant Analyzer, 454 Life Sciences) and variant interpretation (Sequence Pilot, JSI Medical Systems, Kippenheim, Germany).
Results: We will report on the design, standardization, quality control aspects, landscape of mutations, as well as the prognostic and predictive utility of this assay in a cohort of 8,867 cases. Overall, 1,146 primer sequences were designed and tested. In detail, for example in AML, 924 cases were screened for CEBPA mutations. RUNX1 mutations were analyzed in 1,888 cases, applying the deep-sequencing read counts to study the stability of such mutations at relapse and their utility as a biomarker to detect residual disease. Analyses of DNMT3A (n=1,041) focused on landscape investigations and on addressing prognostic relevance. Additionally, this working group is focusing on TET2, ASXL1, and TP53 analyses. A novel prognostic model is being developed allowing stratification of AML into prognostic subgroups based on molecular markers only. In ALL, 1,124 pediatric and adult cases have been screened, including 763 assays for TP53 mutations both at diagnosis and relapse of ALL. Pediatric and adult leukemia expert labs developed additional content to study the mutation incidence of other B and T lineage markers such as IKZF1, JAK2, IL7R, PAX5, EP300, LEF1, CRLF2, PHF6, WT1, JAK1, PTEN, AKT1, NOTCH1, CREBBP, or FBXW7. Further, the molecular landscape of CLL is changing rapidly. As such, a separate working group focused on analyses including NOTCH1, SF3B1, MYD88, XPO1, FBXW7, and BIRC3. So far, 922 cases have been screened to investigate the range of mutational burden of NOTCH1 mutations and their prognostic relevance. In MDS, RUNX1 mutation analyses were performed in 977 cases. The prognostic relevance of TP53 mutations in MDS was assessed in an additional 327 cases, including cases with isolated deletions of chromosome 5q. Next, content was developed targeting genes of the cellular splicing machinery, e.g. SF3B1, SRSF2, U2AF1, and ZRSR2. In BCR-ABL1-negative MPN, nine genes of interest (JAK2, MPL, TET2, CBL, KRAS, EZH2, IDH1, IDH2, ASXL1) were analyzed in a cohort of 155 primary myelofibrosis cases, searching for novel somatic mutations and addressing their relevance for disease progression and leukemic transformation. Moreover, an assay was developed and applied to CMML cases allowing the simultaneous analysis of 25 leukemia-associated target genes in a single sequencing run using just 20 ng of starting DNA. Finally, nine laboratories are studying CML, applying ultra-deep sequencing of the BCR-ABL1 tyrosine kinase domain. Analyses were performed on 615 cases, investigating the dynamics of expansion of mutated clones under various tyrosine kinase inhibitor therapies.
Conclusion: Molecular characterization of hematological malignancies today requires high diagnostic sensitivity and specificity. As part of the IRON-II study, a network of laboratories analyzed a variety of disease entities applying amplicon-based NGS assays. Importantly, the consortium not only standardized assay design for disease-specific panels, but also achieved consensus on a common data analysis pipeline for mutation interpretation. Distinct working groups have been forged to address scientific tasks, and in total 8,867 cases have been analyzed thus far.
Abstract:
Directional Modulation (DM) is a recently proposed technique for securing wireless communication. In this paper we point out that modulation directionality is a consequence of varying the beamforming network, either in baseband or in the RF stage, at the information rate. In order to formalize and extend previous analysis and synthesis methods, a new theoretical treatment using vector representations of DM systems is introduced and used to obtain the necessary and sufficient conditions for achieving directional modulation.
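A toy numerical illustration of the directionality mechanism is sketched below: per-symbol weight perturbations confined to the null space of the intended steering vector leave the constellation intact at the intended angle but scramble it elsewhere. The array size, angles, and perturbation scale are assumptions for illustration, not the paper's synthesis method.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                        # array elements, half-wavelength spacing
theta0 = np.deg2rad(90)      # intended direction (boresight)

def steering(theta):
    return np.exp(1j * np.pi * np.arange(N) * np.cos(theta))

symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 200)))  # QPSK
h0 = steering(theta0)

for theta_deg in (90, 60):
    h = steering(np.deg2rad(theta_deg))
    out = []
    for s in symbols:
        w = np.conj(h0) / N * s            # static beamformer carrying s
        r = rng.standard_normal(N) + 1j * rng.standard_normal(N)
        r -= h0 * (np.conj(h0) @ r) / N    # cancel component seen at theta0
        out.append(h @ (w + 0.5 * r))      # received sample at this angle
    err = np.mean(np.abs(np.array(out) - symbols))
    print(f"angle {theta_deg} deg: mean constellation error {err:.3f}")
```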
Abstract:
This experimental study focuses on a detection system at the seismic station level, intended to play a role similar to that of detection algorithms based on the STA/LTA ratio. We tested two types of classifier, Multi-Layer Perceptrons and Support Vector Machines, trained in supervised mode. The universe of data consisted of 2903 patterns extracted from records of the PVAQ station of the seismographic network of the Institute of Meteorology of Portugal. The spectral characteristics of the records and their variation in time were reflected in the input patterns, which consist of a set of power spectral density values at selected frequencies, extracted from a spectrogram computed over a record segment of predetermined duration. The universe of data was divided, with about 60% used for training and the remainder reserved for testing and validation. To ensure that all patterns in the universe of data were within the range of variation of the training set, we used an algorithm to partition the universe of data by hyper-convex polyhedra, determining in this manner a set of patterns that form a mandatory part of the training set. Additionally, an active learning strategy was pursued by iteratively incorporating poorly classified cases into the training set. The best results, in terms of sensitivity and selectivity over the whole data set, ranged between 98% and 100%, comparing very favorably with the roughly 50% obtained by the existing detection system.
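The pipeline described above (band-limited PSD features from a spectrogram, fed to a supervised classifier) can be sketched as follows; the synthetic traces, sampling rate, and frequency bands are invented stand-ins for the PVAQ records.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

fs = 100.0                   # assumed sampling rate, Hz
rng = np.random.default_rng(0)

def psd_features(trace, bands=((1, 5), (5, 10), (10, 20))):
    """Mean power spectral density in selected frequency bands,
    mirroring features drawn from a spectrogram of the record."""
    f, t, Sxx = spectrogram(trace, fs=fs, nperseg=128)
    return np.array([Sxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands])

def make_trace(event):
    """Toy record: noise, plus an 8 Hz burst when an event is present."""
    x = rng.standard_normal(1024)
    if event:
        t = np.arange(1024) / fs
        x += 3.0 * np.sin(2 * np.pi * 8.0 * t) * np.exp(-((t - 5.0) ** 2))
    return x

X = np.array([psd_features(make_trace(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = SVC(kernel="rbf", gamma="scale").fit(X[:120], y[:120])
print("test accuracy:", clf.score(X[120:], y[120:]))
```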
Abstract:
Wind speed forecasting has become an important field of research to support the electricity industry, mainly due to the increasing use of distributed energy sources, largely based on renewables. This type of electricity generation is highly dependent on weather variability, particularly that of wind speed. Therefore, accurate wind power forecasting models are required for the operation and planning of wind plants and power systems. A Support Vector Machine (SVM) model for short-term wind speed forecasting is proposed, and its performance is evaluated and compared with several artificial neural network (ANN) based approaches. A case study based on a real database covering three years of data, predicting wind speed at 5-minute intervals, is presented.
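A minimal sketch of SVM-based short-term forecasting with lagged inputs follows, using a synthetic series in place of the real database; the lag count, kernel, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic 5-minute wind-speed series: a daily oscillation plus noise.
t = np.arange(2000)
wind = 8 + 2 * np.sin(2 * np.pi * t / 288) + 0.5 * rng.standard_normal(2000)

# Lagged inputs: predict the next value from the last 6 samples
# (30 minutes of history at 5-minute resolution).
lags = 6
X = np.array([wind[i - lags:i] for i in range(lags, len(wind))])
y = wind[lags:]

split = 1600
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("one-step-ahead RMSE:",
      np.sqrt(np.mean((pred - y[split:]) ** 2)))
```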
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles, and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines technique (Vapnik, 1982), and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines, and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which concern regression, and the different problem of pattern classification.
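The dense-versus-sparse contrast at the heart of the paper can be illustrated numerically: a kernel ridge fit (a regularization network using every data point) against an epsilon-SVR fit that keeps only a subset of points as support vectors. The RBF kernel below merely stands in for the generalized correlation kernel; data and parameters are invented.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, 80))[:, None]
y = np.sinc(X.ravel()) + 0.05 * rng.standard_normal(80)

# Regularization network: kernel ridge regression, one kernel per datum.
dense = KernelRidge(kernel="rbf", gamma=1.0, alpha=1e-2).fit(X, y)

# SVM sparsification: epsilon-insensitive regression keeps only a
# subset of the data as support vectors (the "sparse locations").
sparse = SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=0.05).fit(X, y)

print("training points:", len(X), "| support vectors:", len(sparse.support_))
print("dense MSE: ", np.mean((dense.predict(X) - y) ** 2))
print("sparse MSE:", np.mean((sparse.predict(X) - y) ** 2))
```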
Abstract:
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, in which an RBF neural network represents the transformed system output. Initially, a fixed and moderate-sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing a matrix block decomposition inversion lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example, in comparison with support vector machine regression.
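A rough sketch of the overall recipe (Box-Cox transform the output, then fit a linear-in-the-parameters RBF model) is given below, with scipy's maximum-likelihood lambda and ridge estimation standing in for the paper's Gauss-Newton and D-optimality-based forward regression; the data and basis-function settings are invented.

```python
import numpy as np
from scipy.stats import boxcox
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 2))
y = np.exp(1.5 * X[:, 0] + X[:, 1]) + 0.05 * rng.standard_normal(200)

# Box-Cox transform of the (positive, skewed) output; scipy picks
# lambda by maximum likelihood.
y_bc, lam = boxcox(y)
print(f"estimated Box-Cox lambda: {lam:.3f}")

# RBF model on the transformed output: Gaussian basis functions at a
# fixed set of centres, fitted by ridge regression.
centres = X[rng.choice(len(X), 20, replace=False)]
Phi = rbf_kernel(X, centres, gamma=5.0)
model = Ridge(alpha=1e-3).fit(Phi, y_bc)

# Invert the Box-Cox transform to predict in original units.
y_hat_bc = model.predict(Phi)
y_hat = (np.power(np.maximum(lam * y_hat_bc + 1.0, 1e-9), 1.0 / lam)
         if lam != 0 else np.exp(y_hat_bc))
print("RMSE in original units:",
      np.sqrt(np.mean((y_hat - y) ** 2)))
```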
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
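As a rough illustration of the identifiability measure described above, the sketch below scores two hypothetical fuzzy-rule weighting matrices by the A-optimality criterion, the trace of the inverse information matrix; the memberships and data are invented for illustration.

```python
import numpy as np

def a_optimality(Phi):
    """A-optimality score of a weighted regression matrix: trace of the
    inverse information matrix. Smaller means the rule's parameters are
    estimated with lower average variance, i.e. the rule is more
    identifiable from the data."""
    return np.trace(np.linalg.inv(Phi.T @ Phi))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))            # input regression matrix

# Two rules with different membership weightings over the data: one
# well excited by the data, one barely excited.
mu_broad = np.exp(-0.5 * X[:, 0] ** 2)
mu_narrow = np.exp(-0.5 * (X[:, 0] - 3.0) ** 2)

for name, mu in [("broad rule", mu_broad), ("narrow rule", mu_narrow)]:
    Phi = np.diag(mu) @ X                    # weighted regression matrix
    print(name, "A-optimality:", a_optimality(Phi))
```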
Abstract:
A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and/or other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. These two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy for some difficult data-based modelling problems.
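The parallel-training idea can be sketched with a toy stand-in: fixed Gaussian gating over the input and one local linear expert per rule, where each expert solves its own weighted least-squares problem independently (and so could be trained in parallel). All model choices here are illustrative, not the paper's construction algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (300, 1))
y = np.sin(2 * X.ravel()) + 0.1 * rng.standard_normal(300)

# Gating: fixed Gaussian memberships partition the input space.
centres = np.array([-1.5, 0.0, 1.5])
G = np.exp(-2.0 * (X - centres) ** 2)        # (300, 3) memberships
G /= G.sum(axis=1, keepdims=True)

# Each expert is a local linear model fitted by weighted least squares;
# the loop bodies are independent, hence parallelizable.
Phi = np.hstack([X, np.ones_like(X)])
experts = []
for j in range(len(centres)):
    W = np.diag(G[:, j])
    theta = np.linalg.lstsq(W @ Phi, W @ y, rcond=None)[0]
    experts.append(theta)

y_hat = sum(G[:, j] * (Phi @ experts[j]) for j in range(len(centres)))
print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))
```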
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via fuzzification, using fuzzy membership functions usually based on B-splines, together with algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline-based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots serving as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as a Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates, which form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples that demonstrate the effectiveness of this new data-based modelling approach.
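A small sketch of the barycentric mapping and Bernstein basis that the network builds on follows, assuming a single triangle and a degree-2 basis; the paper's inverse de Casteljau procedure and Delaunay partitioning are not reproduced here.

```python
import numpy as np
from math import factorial

def barycentric(p, tri):
    """Barycentric coordinates of point p with respect to triangle
    'tri' (3x2 vertex array), solved from the affine system."""
    A = np.vstack([tri.T, np.ones(3)])       # rows: x, y, 1
    return np.linalg.solve(A, np.append(p, 1.0))

def bernstein_basis(bary, degree=2):
    """Bivariate Bernstein basis over a triangle:
    B_ijk(u,v,w) = d!/(i! j! k!) u^i v^j w^k with i+j+k = d.
    The basis functions are non-negative and sum to one."""
    u, v, w = bary
    d = degree
    return {(i, j, d - i - j):
            factorial(d) // (factorial(i) * factorial(j) * factorial(d - i - j))
            * u**i * v**j * w**(d - i - j)
            for i in range(d + 1) for j in range(d + 1 - i)}

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = barycentric(np.array([0.25, 0.25]), tri)
print("barycentric coords:", b, "sum:", b.sum())
print("basis sums to:", sum(bernstein_basis(b).values()))
```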
Abstract:
Deep Brain Stimulation has been used to study and treat Parkinson’s Disease (PD) tremor symptoms since the 1980s. In the research reported here, we carried out a comparative analysis to classify tremor onset based on intraoperative microelectrode recordings of a PD patient’s brain Local Field Potential (LFP) signals. In particular, we compared the performance of a Support Vector Machine (SVM) with two well-known artificial neural network classifiers, namely a Multiple Layer Perceptron (MLP) and a Radial Basis Function Network (RBN). The results show that, on this specifically PD data, the SVM provided the best overall classification rate, achieving an accuracy of 81%.
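A minimal sketch of such a classifier comparison on synthetic band-power features is shown below; the features and data are invented, and only the comparison methodology loosely mirrors the study (scikit-learn has no RBF-network classifier, so the sketch compares the SVM with an MLP only).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy stand-in for LFP features: band-power vectors for rest vs tremor,
# with tremor trials showing elevated power in one low-frequency band.
n = 200
X = rng.standard_normal((n, 5))
y = rng.integers(0, 2, n)
X[y == 1, 0] += 1.5                     # "tremor band" power shift

for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(20,),
                                        max_iter=2000, random_state=0))]:
    print(f"{name}: mean CV accuracy = "
          f"{cross_val_score(clf, X, y, cv=5).mean():.2f}")
```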