965 results for Classical orthogonal polynomials
Abstract:
Orthogonal neighborhood-preserving projection (ONPP) is a recently developed orthogonal linear algorithm for overcoming the out-of-sample problem of the well-known manifold learning algorithm locally linear embedding. ONPP has been shown to be a strong analyzer of high-dimensional data. However, when applied to classification problems in a supervised setting, ONPP focuses only on intraclass geometrical information and ignores the interaction of samples from different classes. To enhance the classification performance of ONPP, a new algorithm termed discriminative ONPP (DONPP) is proposed in this paper. DONPP 1) takes into account both intraclass and interclass geometries; 2) considers the neighborhood information of interclass relationships; and 3) retains the orthogonality property of ONPP. Furthermore, DONPP is extended to the semisupervised case as semisupervised DONPP (SDONPP), which uses unlabeled samples to improve the classification accuracy of the original DONPP. Empirical studies demonstrate the effectiveness of both DONPP and SDONPP.
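As a rough illustration of the ONPP half of this pipeline (not the proposed DONPP, whose interclass terms are not specified here), the following numpy sketch computes LLE-style reconstruction weights and then an orthogonal projection basis from an eigenproblem; the neighborhood size, regularization constant, and random data are illustrative assumptions.

```python
import numpy as np

def onpp(X, n_neighbors=5, n_components=2):
    """Minimal ONPP sketch for X of shape (n_samples, n_features):
    1) LLE-style reconstruction weights per neighborhood,
    2) orthogonal basis from the smallest eigenvectors of X^T M X."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        idx = np.argsort(d)[1:n_neighbors + 1]      # k nearest neighbors (skip self)
        Z = X[idx] - X[i]                           # centered neighborhood
        G = Z @ Z.T + 1e-3 * np.eye(len(idx))       # regularized Gram matrix
        w = np.linalg.solve(G, np.ones(len(idx)))
        W[i, idx] = w / w.sum()                     # reconstruction weights sum to 1
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    evals, evecs = np.linalg.eigh(X.T @ M @ X)      # ascending eigenvalues
    V = evecs[:, :n_components]                     # orthonormal columns
    return X @ V                                    # out-of-sample: just X_new @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
Y = onpp(X)
print(Y.shape)  # (60, 2)
```

Because the map is an explicit orthonormal matrix V, new samples are embedded by a single matrix product, which is exactly how ONPP sidesteps LLE's out-of-sample problem.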
Abstract:
The reaction mechanism of the Beckmann rearrangement over B2O3/gamma-Al2O3 and TS-1 in the gas phase has been investigated by an isotope-labeling approach, with the labeled products measured by mass spectrometry. By exchanging oxygen with (H2O)-O-18 in the rearrangement step, it was found that the exchange reaction between cyclohexanone oxime and (H2O)-O-18 over B2O3/gamma-Al2O3 and TS-1 proceeds only to some extent. This suggests that the dissociation of the nitrilium intermediate over solid acids is not completely free, as the classical mechanism assumes. A dissociation degree (alpha), defined as the ratio of dissociated nitrilium intermediate to total nitrilium intermediate, is proposed. By fitting the experimental values with the calculation equation for the isotopically labeled products, the alpha values obtained for B2O3/gamma-Al2O3 and TS-1 under the reaction conditions are 0.199 and 0.806, respectively.
Abstract:
In chemistry, variable selection is a key step in the chemical analysis of a multi-component sample and in quantitative structure-activity/property relationship (QSAR/QSPR) studies. In this study, several methods were compared: three classical methods (forward selection, backward elimination, and stepwise regression), orthogonal descriptors, leaps-and-bounds regression, and a genetic algorithm. Thirty-five nitrobenzenes were taken as the data set. From these structures, quantum chemical parameters, topological indices, and an indicator variable were extracted as the descriptors for the comparison of variable selection methods. Interesting results were obtained. (C) 2001 Elsevier Science B.V. All rights reserved.
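Of the classical methods compared above, forward selection is the simplest to state: greedily add, one at a time, the descriptor that most improves an ordinary least-squares fit. A minimal sketch, with a synthetic data set standing in for the nitrobenzene descriptors (the column indices and noise level are illustrative assumptions):

```python
import numpy as np

def forward_selection(X, y, n_select):
    """Greedy forward selection: at each step add the descriptor (column
    of X) that most reduces the residual sum of squares of an OLS fit."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        best_rss, best_j = np.inf, None
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])  # with intercept
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(35, 8))            # e.g. 35 compounds, 8 descriptors
y = 2 * X[:, 3] - X[:, 5] + 0.1 * rng.normal(size=35)
print(forward_selection(X, y, 2))       # expected to recover columns 3 and 5
```

Backward elimination and stepwise regression are the obvious variants: start from the full model and drop terms, or alternate add and drop steps.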
Abstract:
In this paper, orthogonal descriptors and leaps-and-bounds regression analysis are compared. For the data set of nitrobenzenes used in this study, the results obtained with orthogonal descriptors are better than those obtained with leaps-and-bounds regression. Leaps-and-bounds regression can be used effectively for variable selection in quantitative structure-activity/property relationship (QSAR/QSPR) studies. Consequently, orthogonalisation of descriptors is also a good variable-selection method for QSAR/QSPR studies.
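The orthogonal-descriptor idea can be sketched as a sequential (Gram-Schmidt style) orthogonalisation: each descriptor is replaced by its residual after regressing out the descriptors that precede it, so the transformed descriptors carry non-overlapping information. The ordering and data below are illustrative assumptions, not the paper's actual descriptors.

```python
import numpy as np

def orthogonalize_descriptors(X):
    """Sequential orthogonalisation sketch: descriptor j is replaced by
    its residual after projecting out the already-orthogonalised
    descriptors 0..j-1 (classical Gram-Schmidt without normalisation)."""
    Xo = X.astype(float).copy()
    for j in range(1, X.shape[1]):
        for k in range(j):
            b = Xo[:, j] @ Xo[:, k] / (Xo[:, k] @ Xo[:, k])
            Xo[:, j] -= b * Xo[:, k]
    return Xo

rng = np.random.default_rng(2)
X = rng.normal(size=(35, 4))            # 35 compounds, 4 correlated descriptors
Xo = orthogonalize_descriptors(X)
C = Xo.T @ Xo
print(np.allclose(C - np.diag(np.diag(C)), 0, atol=1e-8))  # True: columns orthogonal
```

Because the orthogonalised descriptors are uncorrelated, each regression coefficient is unaffected by which other descriptors enter the model, which is what makes the method usable for variable selection.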
Abstract:
The present work is the first to report the hemolytic activity of venom from the jellyfish Rhopilema esculentum Kishinouye extracted with different phosphate buffer solutions and incubated at different temperatures, according to the orthogonal test L6(1) x 3(6). Of the seven controllable independent variables, incubation temperature and phenylmethylsulfonyl fluoride (PMSF) had the strongest effect on the hemolytic activity. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
This thesis investigates what knowledge is necessary to solve mechanics problems. A program, NEWTON, is described which understands and solves problems in a mechanics mini-world of objects moving on surfaces. Facts and equations such as those given in a mechanics text need to be represented, but this is far from sufficient for solving problems. Human problem solvers rely on "common sense" and "qualitative" knowledge which the physics text tacitly assumes to be present, and a mechanics problem solver must embody such knowledge. The quantitative knowledge given by equations and the more qualitative common-sense knowledge are the major research points expounded in this thesis. The central issue in solving problems is planning. Planning involves tentatively outlining a possible path to the solution without actually solving the problem; such a plan must be constructed and debugged in the process of solving the problem. Envisionment, or qualitative simulation of the event, plays a central role in this planning process.
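A toy sketch of envisionment in this spirit: before any equations are touched, qualitatively simulate an object moving over a track and enumerate every event sequence the geometry allows. The segment types and their qualitative outcomes below are illustrative assumptions, not NEWTON's actual representation.

```python
from collections import deque

# Qualitative outcomes per segment type (illustrative assumptions):
# a flat stretch keeps the object sliding, an incline may also stall it,
# and a cliff edge sends it ballistic.
TRANSITIONS = {
    "flat":    ["slide"],
    "incline": ["slide", "stop"],
    "cliff":   ["fly"],
}

def envision(track):
    """Return every qualitative event sequence consistent with the track."""
    histories = []
    queue = deque([(0, [])])            # (segment index, events so far)
    while queue:
        i, events = queue.popleft()
        if i == len(track):
            histories.append(events + ["finish"])
            continue
        for outcome in TRANSITIONS[track[i]]:
            if outcome in ("stop", "fly"):          # terminal events
                histories.append(events + [outcome])
            else:
                queue.append((i + 1, events + ["slide"]))
    return histories

hist = envision(["flat", "incline", "cliff"])
print(hist)
```

Each history is a candidate plan skeleton: only once an envisioned outcome is selected would the quantitative equations for that event sequence be set up and solved.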
Abstract:
Williams, Mike, 'Why ideas matter in International Relations: Hans Morgenthau, Classical Realism, and the Moral Construction of Power Politics', International Organization (2004) 58(4) pp.633-665 RAE2008
Abstract:
Mavron, Vassili; Jungnickel, D.; McDonough, T.P., (2001) 'The Geometry of Frequency Squares', Journal of Combinatorial Theory, Series A 96, pp.376-387 RAE2008
Abstract:
Iantchenko, A.; Sjöstrand, J.; Zworski, M., (2002) 'Birkhoff normal forms in semi-classical inverse problems', Mathematical Research Letters 9(3) pp.337-362 RAE2008
Abstract:
We consider a fault model of Boolean gates, both classical and quantum, in which some of the inputs may not be connected to the actual gate hardware. This model is somewhat similar to the stuck-at model, a very popular model in testing Boolean circuits. We consider the problem of detecting such faults; the detection algorithm can query the faulty gate, and its complexity is the number of such queries. This problem is related to determining the sensitivity of Boolean functions. We show how quantum parallelism can be used to detect such faults. Specifically, we show that a quantum algorithm can detect such faults more efficiently than a classical algorithm for a Parity gate and an AND gate. We give explicit constructions of quantum detector algorithms and show lower bounds for classical algorithms. We show that the model for detecting such faults is similar to algebraic decision trees, and we extend some known results from quantum query complexity to prove some of our results.
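The classical side of the Parity case is easy to make concrete: flipping input i flips a parity gate's output if and only if input i is actually wired in, so a classical detector can probe each input in turn with n+1 queries (the quantum speed-up claimed above is not reproduced here). The simulation below is a hedged sketch of that fault model, not the paper's construction.

```python
def make_parity_gate(n, disconnected=None):
    """Simulate an n-input Parity gate in the 'disconnected input' fault
    model: a faulty gate silently ignores one of its inputs."""
    def gate(bits):
        return sum(b for i, b in enumerate(bits) if i != disconnected) % 2
    return gate

def classical_detect(gate, n):
    """Classical detector, n+1 queries: compare gate(all zeros) against
    gate with each input flipped.  For parity, an unchanged output means
    that input is not connected."""
    base = gate([0] * n)
    for i in range(n):
        probe = [0] * n
        probe[i] = 1
        if gate(probe) == base:        # output did not flip -> input i ignored
            return i
    return None                        # no disconnected input found

n = 6
assert classical_detect(make_parity_gate(n), n) is None
assert classical_detect(make_parity_gate(n, disconnected=2), n) == 2
print("classical parity fault detection uses n + 1 =", n + 1, "queries")
```

The link to sensitivity is visible here: the detector works precisely because every input of a fault-free parity gate is sensitive at every point.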
Abstract:
For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes.

The quadratic discriminant function is studied, its distribution reviewed, and the evaluation of its probabilities of misclassification discussed. For known covariance matrices, the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function in this case is also considered.

Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive, and kernel methods are compared by evaluating their biases and mean square errors, and some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable; the source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product Normal kernel density estimates are used to give a kernel estimator of the true log-odds, and the effect of correlation among the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimension sizes, the product kernel method is a good estimator of the true log-odds.
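For the equal-covariance case, the estimative method referred to above is the plug-in rule: substitute sample means and the pooled sample covariance into the true log density ratio, log f1(x)/f2(x) = (x - (mu1 + mu2)/2)' Sigma^{-1} (mu1 - mu2). A minimal numpy sketch (the data and unequal sample sizes are illustrative; the thesis's bias/MSE comparisons and the predictive and kernel estimators are not reproduced):

```python
import numpy as np

def estimative_log_odds(X1, X2, x):
    """Estimative (plug-in) log-odds for two multinormal populations with
    a common covariance matrix, using the pooled sample covariance."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    S = ((n1 - 1) * np.cov(X1, rowvar=False) +
         (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)   # pooled estimate
    return (x - (m1 + m2) / 2) @ np.linalg.solve(S, m1 - m2)

rng = np.random.default_rng(3)
X1 = rng.normal(loc=0.0, size=(40, 3))   # n1 = 40
X2 = rng.normal(loc=1.5, size=(25, 3))   # n2 = 25: unequal sample sizes
lo = estimative_log_odds(X1, X2, np.zeros(3))
print(lo > 0)                            # x sits near population 1
```

A positive value allocates x to population 1; thresholding this quantity at 0 is exactly the sample linear discriminant rule discussed in the abstract.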
Abstract:
Error-correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels; they are used ubiquitously in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword; in the late 1950s, however, researchers proposed a relaxed error-correction model for potentially large error rates known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes.

A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel, and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding; the proposed architecture is shown to outperform Kötter's decoder for high-rate codes.

The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating into the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) in multihop wireless sensor network (WSN) applications.
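The evaluation view of RS encoding mentioned above is compact enough to sketch: a codeword is simply the message polynomial evaluated at n distinct field points. Practical RS codecs work over GF(2^m); the prime field, field size, and evaluation points below are illustrative assumptions chosen so the arithmetic stays plain modular integers.

```python
# Evaluation-based Reed-Solomon encoding sketch over the prime field GF(p).
P = 31                                   # illustrative field size

def rs_encode(msg, n, p=P):
    """Encode k message symbols as the evaluations of the message
    polynomial m(x) = msg[0] + msg[1]*x + ... at x = 1..n."""
    assert len(msg) <= n <= p - 1        # need n distinct nonzero points
    def poly_eval(coeffs, x):
        acc = 0
        for c in reversed(coeffs):       # Horner's rule, all mod p
            acc = (acc * x + c) % p
        return acc
    return [poly_eval(msg, x) for x in range(1, n + 1)]

msg = [3, 0, 7, 11]                      # k = 4 message symbols
code = rs_encode(msg, n=10)              # n = 10: corrects up to (n-k)//2 = 3 errors
print(code)
```

Decoding in this view is polynomial interpolation from any k error-free positions, which is the structural fact the interpolation-based and list decoders in the thesis build on.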
Abstract:
Carbon nanotubes (CNTs) have attracted attention for their remarkable electrical properties and have been explored as one of the best building blocks in nano-electronics. A key challenge in realizing this potential is control of the nanotube growth directions. Even though both vertical growth and controlled horizontal growth of carbon nanotubes have been realized before, the growth of complex nanotube structures with both vertical and horizontal orientation control on the same substrate had never been achieved. Here, we report a method to grow three-dimensional (3D) complex nanotube structures made of vertical nanotube forests and horizontal nanotube arrays on a single substrate, and from the same catalyst pattern, by an orthogonally directed growth method using chemical vapor deposition (CVD). This capability represents a major advance in the controlled growth of carbon nanotubes: it enables researchers to control the growth directions of nanotubes simply by changing the reaction conditions. The high degree of control demonstrated in these experiments should make the fabrication of complex nanotube devices a real possibility.