902 results for Desire-filled machines
Abstract:
The mechanical properties of filled natural rubber latex vulcanizates were found to be improved by the addition of polyethylene glycols of various molecular weights and of glycerol. There is a slight reduction in the optimum cure times of the compounds containing PEG/glycerol. Morphology studies show that the filler distribution is more uniform in the compounds containing PEG/glycerol.
Abstract:
Amine Terminated Liquid Natural Rubber (ATNR) was used as a plasticiser in filled NR and NBR compounds, replacing oil/DOP. The scorch time and cure time were found to be lowered when ATNR was used as the plasticiser. ATNR was found to improve mechanical properties such as tensile strength, tear strength and modulus of the vulcanizates. The ageing resistance of the vulcanizates containing ATNR was superior to that of the vulcanizates containing oil/DOP.
Abstract:
Filled and gum compounds of isobutylene-isoprene rubber were extruded through a laboratory extruder at various feed rates, temperatures and screw speeds. The extruded compounds were vulcanized to their optimum cure times and the mechanical properties of the vulcanizates were determined. The properties suggest that there is a particular feed rate in the starved-fed region which yields maximum mechanical properties. The study shows that running the extruder under slightly starved conditions is an attractive means of improving the physical properties.
Abstract:
A new approach, the multipole theory (MT) method, is presented for the computation of cutoff wavenumbers of waveguides partially filled with dielectric. The MT formulation of the eigenvalue problem of an inhomogeneous waveguide is derived. Representative computational examples, including dielectric-rod-loaded rectangular and double-ridged waveguides, are given to validate the theory and to demonstrate its efficiency.
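For orientation, the eigenvalue problem at cutoff has the following standard form (general waveguide theory, sketched here; not the MT formulation itself): with the axial dependence removed, the fields in each homogeneous region satisfy a two-dimensional Helmholtz equation,

\[ \left(\nabla_t^{2} + \omega_c^{2}\,\mu_0\varepsilon_0\,\varepsilon_{r,i}\right)\psi = 0 , \]

with tangential-field continuity enforced across the dielectric interfaces; the cutoff wavenumbers $k_c = \omega_c\sqrt{\mu_0\varepsilon_0}$ are the values for which a nontrivial $\psi$ exists.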
Abstract:
Precipitated silica is the most promising alternative to carbon black in tyre tread compounds due to its improved performance in terms of rolling resistance and wet grip. But its poor processability is a serious limitation to its commercial application. This thesis suggests a novel route for the incorporation of silica in rubbers, i.e., precipitation of silica in rubber latex followed by coagulation of the latex to obtain a rubber-silica masterbatch. Composites with in situ precipitated silica showed improved processability and mechanical properties when compared to conventional silica composites.
Abstract:
In this letter, we report flexible, noncorrosive and lightweight nickel nanoparticle@multi-walled carbon nanotube-polystyrene (Ni@MWCNT/PS) composite films as microwave absorbing material in the S band (2-4 GHz). The dielectric permittivity and magnetic permeability of composites with 0.5 and 1.5 wt. % filler loading were measured using the cavity perturbation technique. Reflection loss maxima of 33 dB (at 2.7 GHz) and 24 dB (at 2.7 GHz) were achieved for the 0.5 and 1.5 wt. % Ni@MWCNT/PS composite films of 6 and 4 mm thickness, respectively, suggesting that low filler concentrations provide significant electromagnetic interference shielding.
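The letter does not spell out how the reflection-loss figures follow from the measured permittivity and permeability, but the standard transmission-line model for a metal-backed absorber layer of thickness $d$ relates them as

\[ Z_{\mathrm{in}} = Z_0\sqrt{\mu_r/\varepsilon_r}\,\tanh\!\left(j\,\frac{2\pi f d}{c}\sqrt{\mu_r\varepsilon_r}\right), \qquad RL\,(\mathrm{dB}) = 20\log_{10}\!\left|\frac{Z_{\mathrm{in}}-Z_0}{Z_{\mathrm{in}}+Z_0}\right| , \]

so a 33 dB maximum corresponds to less than about 0.1% of the incident power being reflected.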
Abstract:
Expanded polystyrene (EPS) constitutes a considerable part, by volume, of thermoplastic waste in the environment. In this study, this waste material has been utilized for blending with silica-reinforced natural rubber (NR). The NR/EPS (35/5) blends were prepared by melt mixing in a Brabender Plasticorder. Since NR and EPS are incompatible and immiscible, a method has been devised to improve compatibility. For this, EPS and NR were initially grafted with maleic anhydride (MA) using dicumyl peroxide (DCP) to give a graft copolymer. Grafting was confirmed by Fourier transform infrared (FTIR) spectroscopy. This grafted blend was subsequently blended with more NR during mill compounding. Morphological studies using scanning electron microscopy (SEM) showed better dispersion of EPS in the compatibilized blend compared to the noncompatibilized blend. By this technique, the tensile strength, elongation at break, modulus, tear strength, compression set and hardness of the blend were found to be on par with or better than those of the virgin silica-filled NR compound. The thermal properties of the blends are also equivalent to those of virgin NR. The study establishes the potential of this method for utilising waste EPS.
Abstract:
Fine-grained parallel machines have the potential for very high speed computation. To program massively concurrent MIMD machines, programmers need tools for managing complexity, and these tools should not restrict program concurrency. Concurrent Aggregates (CA) provides multiple-access data abstraction tools, Aggregates, which can be used to implement abstractions with virtually unlimited potential for concurrency. Such tools allow programmers to modularize programs without reducing concurrency. I describe the design, motivation, implementation and evaluation of Concurrent Aggregates. CA has been used to construct a number of application programs, and multi-access data abstractions are found to be useful in constructing highly concurrent programs.
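As a rough illustration of the multi-access idea (a hypothetical Python sketch, not CA syntax): a conventional encapsulated counter serializes every update behind one lock, whereas an aggregate spreads its state over many representatives so that clients can update it concurrently.

import random
import threading

class AggregateCounter:
    # Counter state is spread over n_reps representatives, so concurrent
    # increments rarely contend on the same lock (the multi-access idea).
    def __init__(self, n_reps=8):
        self._locks = [threading.Lock() for _ in range(n_reps)]
        self._counts = [0] * n_reps

    def increment(self):
        # Each caller picks a representative at random; contention drops
        # roughly by a factor of n_reps versus a single-lock counter.
        i = random.randrange(len(self._counts))
        with self._locks[i]:
            self._counts[i] += 1

    def value(self):
        # A read combines the per-representative partial counts.
        return sum(self._counts)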
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded but also superior in a practical application.
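A present-day re-creation of this comparison is easy to sketch with scikit-learn (a hypothetical stand-in for the paper's setup: the dataset, the hyperparameters and the logistic-regression readout replacing error backpropagation are all assumptions):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# SV machine with Gaussian kernel: centers (the SVs) chosen automatically.
svm = SVC(kernel="rbf", gamma=0.001, C=10).fit(Xtr, ytr)

# Classical RBF machine: centers by k-means, weights by a linear fit
# (logistic regression here stands in for error backpropagation).
centers = KMeans(n_clusters=100, n_init=10, random_state=0).fit(Xtr).cluster_centers_
Ptr = rbf_kernel(Xtr, centers, gamma=0.001)
Pte = rbf_kernel(Xte, centers, gamma=0.001)
rbf_net = LogisticRegression(max_iter=1000).fit(Ptr, ytr)

print("SV machine test accuracy: ", svm.score(Xte, yte))
print("k-means RBF test accuracy:", rbf_net.score(Pte, yte))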
Abstract:
Support Vector Machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed Support Vectors (SVs). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a quadratic programming problem that depends on a regularization parameter. In this paper we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (which are SVs lying on the margin), the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of a feature space of finite dimension m, we also show that there are at most m+1 margin vectors and observe that m+1 SVs are usually sufficient to fully determine the decision surface. For relatively small m this latter result leads to a considerable reduction in the number of SVs.
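In standard notation (a sketch of the objects involved, not the paper's exact decomposition), the decision surface is the zero set of

\[ f(x) = \sum_{i \in \mathrm{SV}} \alpha_i\, y_i\, K(x, x_i) + b , \]

and the margin vectors are the SVs whose multipliers satisfy $0 < \alpha_i < C$; the paper's result splits $f$ into an orthogonal pair of terms, one built from the margin vectors alone and one proportional to the regularization parameter.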
Abstract:
We derive a new representation for a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results for the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machines (Vapnik, 1982) technique, and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task, using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which are about regression, and the different problem of pattern classification.
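In generic notation (a sketch; the symbols are not the paper's), the dual representation in question expands the regression function over correlation kernels,

\[ f(x) = \sum_{i=1}^{\ell} c_i\, R(x, x_i) , \]

and running SV regression with an $\epsilon$-insensitive loss on this dictionary drives most of the coefficients $c_i$ to zero, leaving kernels only at a sparse set of optimal locations.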
Abstract:
We study the relation between support vector machines (SVMs) for regression (SVMR) and SVMs for classification (SVMC). We show that for a given SVMC solution there exists an SVMR solution which is equivalent for a certain choice of the parameters. In particular, our result is that for $\epsilon$ sufficiently close to one, the optimal hyperplane and threshold for the SVMC problem with regularization parameter $C_c$ are equal to $(1-\epsilon)^{-1}$ times the optimal hyperplane and threshold for SVMR with regularization parameter $C_r = (1-\epsilon)C_c$. A direct consequence of this result is that SVMC can be seen as a special case of SVMR.
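Written out with explicit symbols (the $(w, b)$ notation is introduced here for readability; the relation itself is the abstract's own result):

\[ (w_c, b_c) = (1-\epsilon)^{-1}\,(w_r, b_r) \qquad \text{whenever } C_r = (1-\epsilon)\,C_c , \]

where $(w_c, b_c)$ is the optimal hyperplane and threshold of the classification problem and $(w_r, b_r)$ that of the regression problem.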
Abstract:
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples, in particular the regression problem of approximating a multivariate function from sparse data. We present both formulations in a unified framework, namely in the context of Vapnik's theory of statistical learning, which provides a general foundation for the learning problem, combining functional analysis and statistics.
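The unified framework usually takes the following form in this literature (stated as a sketch; the loss names below are standard, not quoted from the paper):

\[ \min_{f \in \mathcal{H}} \; \frac{1}{\ell}\sum_{i=1}^{\ell} V\bigl(y_i, f(x_i)\bigr) + \lambda\, \|f\|_K^{2} , \]

where $\mathcal{H}$ is a reproducing kernel Hilbert space with kernel $K$: the square loss $V(y, f(x)) = (y - f(x))^2$ gives a Regularization Network, while Vapnik's $\epsilon$-insensitive loss (or the hinge loss, for classification) gives a Support Vector Machine.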
Abstract:
In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined in Chen, Donoho and Saunders (1995) and Olshausen and Field (1996). Then we focus on two specific (approximate) implementations of SRM and Sparse Approximation which have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and they require the solution of the same quadratic programming problem.
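For reference, the two objectives being compared have the following standard forms (the notation is generic, not quoted from the paper):

\[ \text{BPDN:}\;\; \min_{c} \, \Bigl\| y - \sum_{i} c_i\,\varphi_i \Bigr\|_2^2 + \lambda \sum_i |c_i| , \qquad \text{SV regression:}\;\; \min_{f} \, \sum_{i} \bigl| y_i - f(x_i) \bigr|_{\epsilon} + \lambda\, \|f\|_K^2 , \]

where $|\cdot|_{\epsilon}$ is Vapnik's $\epsilon$-insensitive loss; the paper's equivalence says that, under suitable conditions, both reduce to the same quadratic program.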
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for polynomial, radial basis function and multi-layer perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand the problem is very challenging, because the quadratic form is completely dense and the memory needed to store the problem grows with the square of the number of data points. Training problems arising in some real applications with large data sets are therefore impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems; we also establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVM to the problem of detecting frontal human faces in real images.
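The quadratic program in question is the standard SVM dual (given here for concreteness; this is textbook material rather than a detail specific to the paper):

\[ \max_{\alpha}\; \sum_{i=1}^{\ell} \alpha_i - \frac{1}{2}\sum_{i,j=1}^{\ell} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j) \quad \text{s.t.} \quad 0 \le \alpha_i \le C, \;\; \sum_{i=1}^{\ell} \alpha_i y_i = 0 , \]

whose dense $\ell \times \ell$ kernel matrix is what makes memory grow with the square of the number of data points; decomposition methods optimize over a small working set of the $\alpha_i$ at a time while keeping the remaining variables fixed.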