88 results for Rough Kernels
Abstract:
Artisanal miners have tended to be portrayed in the literature and media as people who work hard and play hard, not infrequently depicted as ‘rough diamonds’ likely to cross the boundaries of appropriate behaviour through pursuit of wealth and flamboyant living, often at the cost of local environmental damage. A popular alternative image is that of marginalised labourers, driven by poverty to toil in harsh conditions and pursuing mining livelihoods in the face of national governments and large-scale mining companies’ subversion of their land and mineral rights. Both views reflect partial realities, but are inclined to exaggerate the position of miners as mischief-making rogues or victims. Through documentation of the multi-faceted nature of Tanzanian artisanal miners’ work and home lives during the country’s on-going economic mineralisation, we endeavour to convey a balanced rendering of their aspirations, occupational identity and social ties. Our emphasis is on their working lives as artisans, how they organise themselves and contend with the risks of their occupation, including their engagement with government policy and large-scale mining interests.
Abstract:
A new sparse kernel density estimator is introduced. Our main contribution is a recursive algorithm that selects significant kernels one at a time using the minimum integrated square error (MISE) criterion. The proposed approach is simple to implement and its associated computational cost is very low. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
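The one-at-a-time selection described above can be sketched as a greedy loop: each step adds the candidate kernel that most reduces the squared error against a full Parzen-window reference density. This is a minimal illustration, not the paper's algorithm — the grid-based integrated square error below stands in for the analytic MISE criterion, and the bandwidth is assumed fixed.

```python
import numpy as np

def parzen(x, centers, h):
    """Gaussian Parzen-window density estimate at points x."""
    d = (x[:, None] - centers[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(centers) * h * np.sqrt(2 * np.pi))

def greedy_sparse_kde(data, h, k):
    """Greedily pick k kernel centers, one at a time, minimizing a
    grid-approximated integrated square error against the full
    Parzen-window estimate (a stand-in for the analytic MISE criterion)."""
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, 400)
    dx = grid[1] - grid[0]
    target = parzen(grid, data, h)          # reference density on the grid
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for c in data:                      # try each data point as a center
            est = parzen(grid, np.array(chosen + [c]), h)
            err = ((est - target) ** 2).sum() * dx
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
    return np.array(chosen)
```

The sparse estimator then uses only the selected centers, so evaluation cost drops from the full sample size to k kernels.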
Abstract:
This contribution proposes a novel probability density function (PDF) estimation-based over-sampling (PDFOS) approach for two-class imbalanced classification problems. The classical Parzen-window kernel function is adopted to estimate the PDF of the positive class. Then, according to the estimated PDF, synthetic instances are generated as additional training data. The essential concept is to re-balance the class distribution of the original imbalanced data set under the principle that the synthetic data samples follow the same statistical properties. Based on the over-sampled training data, the radial basis function (RBF) classifier is constructed by applying the orthogonal forward selection procedure, in which the classifier’s structure and the parameters of the RBF kernels are determined using a particle swarm optimisation algorithm based on the criterion of minimising the leave-one-out misclassification rate. The effectiveness of the proposed PDFOS approach is demonstrated by an empirical study on several imbalanced data sets.
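Sampling synthetic instances from a Gaussian Parzen-window density amounts to perturbing randomly chosen minority points with kernel-shaped noise. The sketch below assumes a Silverman rule-of-thumb bandwidth purely for illustration — the paper determines kernel widths differently (via its optimisation procedure):

```python
import numpy as np

def pdfos_oversample(minority, n_new, rng=None):
    """Draw n_new synthetic samples from a Gaussian Parzen-window
    density fitted to the minority class: pick a random minority point,
    then add Gaussian noise with the kernel covariance."""
    rng = np.random.default_rng(rng)
    n, d = minority.shape
    # Silverman's rule-of-thumb bandwidth (an assumption for this sketch)
    h = (4.0 / (d + 2)) ** (1.0 / (d + 4)) * n ** (-1.0 / (d + 4))
    cov = np.atleast_2d(np.cov(minority, rowvar=False)) * h**2
    idx = rng.integers(0, n, size=n_new)    # random kernel centers
    noise = rng.multivariate_normal(np.zeros(d), cov, size=n_new)
    return minority[idx] + noise
```

The returned samples are appended to the training set so the two classes are balanced before the RBF classifier is trained.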
Abstract:
This thesis describes a form of non-contact measurement using two-dimensional Hall effect sensing to resolve the location of a moving magnet which is part of a ‘magnetic spring’ type suspension system. This work was inspired by the field of Space Robotics, which currently relies on solid-link suspension techniques for rover stability. The thesis details the design, development and testing of a novel magnetic suspension system with possible applications in space- and terrestrial-based robotics, especially when the robot needs to traverse rough terrain. A number of algorithms were developed, utilizing experimental data from testing, that can approximate the separation between magnets in the suspension module through observation of the magnetic fields. Experimental hardware was also developed to demonstrate how two-dimensional Hall effect sensor arrays could provide accurate feedback with respect to the magnetic suspension module’s operation, so that future work can include the sensor array in a real-time control system to produce dynamic ride control for space robots. The research performed has proven that two-dimensional Hall effect sensing with respect to magnetic suspension is accurate, effective and suitable for future testing.
Abstract:
We study the solutions of the Smoluchowski coagulation equation with a regularization term which removes clusters from the system when their mass exceeds a specified cutoff size, M. We focus primarily on collision kernels which would exhibit an instantaneous gelation transition in the absence of any regularization. Numerical simulations demonstrate that for such kernels with monodisperse initial data, the regularized gelation time decreases as M increases, consistent with the expectation that the gelation time is zero in the unregularized system. This decrease appears to be a logarithmically slow function of M, indicating that instantaneously gelling kernels may still be justifiable as physical models despite the fact that they are highly singular in the absence of a cutoff. We also study the case when a source of monomers is introduced in the regularized system. In this case a stationary state is reached. We present a complete analytic description of this regularized stationary state for the model kernel, K(m1,m2) = max{m1,m2}^ν, which gels instantaneously when M→∞ if ν>1. The stationary cluster size distribution decays as a stretched exponential for small cluster sizes and crosses over to a power law decay with exponent ν for large cluster sizes. The total particle density in the stationary state slowly vanishes as [(ν−1)log M]^(−1/2) when M→∞. The approach to the stationary state is nontrivial: oscillations about the stationary state emerge from the interplay between the monomer injection and the cutoff, M, which decay very slowly when M is large. A quantitative analysis of these oscillations is provided for the addition model, which describes the situation in which clusters can only grow by absorbing monomers.
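The regularization described above — merging clusters under the kernel K(m1,m2) = max{m1,m2}^ν and discarding any cluster whose mass exceeds M — can be illustrated with a toy Monte Carlo coagulation step. This is a minimal sketch for intuition, not the paper's numerical scheme:

```python
import numpy as np

def coagulate(masses, M, nu, steps, rng=None):
    """Toy Monte Carlo for Smoluchowski coagulation with the model kernel
    K(m1,m2) = max(m1,m2)**nu: merge randomly chosen pairs, accepted with
    probability proportional to K, and remove any cluster whose mass
    exceeds the cutoff M (the regularization)."""
    rng = np.random.default_rng(rng)
    masses = list(masses)
    for _ in range(steps):
        if len(masses) < 2:
            break
        i, j = rng.choice(len(masses), size=2, replace=False)
        # accept-reject against the current maximum kernel value
        k = max(masses[i], masses[j]) ** nu
        kmax = max(masses) ** nu
        if rng.random() < k / kmax:
            m = masses[i] + masses[j]
            for idx in sorted((i, j), reverse=True):
                masses.pop(idx)
            if m <= M:          # clusters above the cutoff leave the system
                masses.append(m)
    return masses
```

Running this from monodisperse initial data shows mass steadily leaking through the cutoff, the mechanism behind the vanishing particle density in the stationary state.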
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. Firstly, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples which, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
Abstract:
Transport of pollution and heat out of streets into the boundary layer above is not currently understood and so fluxes cannot be quantified. Scalar concentration within the street is determined by the flux out of it, and so quantifying fluxes for turbulent flow over a rough urban surface is essential. We have developed a naphthalene sublimation technique to measure transfer from a two-dimensional street canyon in a wind tunnel for the case of flow perpendicular to the street. The street was coated with naphthalene, which sublimes at room temperature, so that the vapour represented the scalar source. The transfer velocity wT relates the flux out of the canyon to the concentration within it and is shown to be linearly related to wind speed above the street. The dimensionless transfer coefficient wT/Uδ represents the ventilation efficiency of the canyon (here, wT is a transfer velocity and Uδ is the wind speed at the boundary-layer top). Observed values are between 1.5 and 2.7 ×10^−3 and, for the case where H/W→0 (ratio of building height to street width), values are in the same range as estimates of transfer from a flat plate, giving confidence that the technique yields accurate values for street canyon scalar transfer. wT/Uδ varies with aspect ratio (H/W), reaching a maximum in the wake interference regime (0.3 < H/W < 0.65). However, when upstream roughness is increased, the maximum in wT/Uδ reduces, suggesting that street ventilation is less sensitive to H/W when the flow is in equilibrium with the urban surface. The results suggest that using naphthalene sublimation with wind-tunnel models of urban surfaces can provide a direct measure of area-averaged scalar fluxes.
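The quantities above are related by F = wT · C, with wT/Uδ the dimensionless ventilation efficiency. A few lines of arithmetic make the relation concrete; the wind speed and concentration values below are illustrative assumptions, not measurements from the study:

```python
# Transfer-velocity relation from the abstract: flux F out of the canyon
# equals wT times the in-canyon concentration C, and wT/U_delta measures
# ventilation efficiency. All numbers here are illustrative assumptions.
U_delta = 5.0             # wind speed at the boundary-layer top, m/s
w_T = 2.0e-3 * U_delta    # transfer velocity chosen so wT/U_delta = 2.0e-3
C = 40.0                  # scalar concentration in the canyon, units/m^3
F = w_T * C               # area-averaged scalar flux out of the canyon

# The chosen ratio sits inside the observed 1.5-2.7 x 10^-3 range.
print(w_T / U_delta)
```

Doubling Uδ doubles wT (the linear relation reported above), leaving the dimensionless coefficient unchanged.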
Abstract:
Linear theory, model ion-density profiles and MSIS neutral thermospheric predictions are used to investigate the stability of the auroral, topside ionosphere to oxygen cyclotron waves: variations of the critical height, above which the plasma is unstable, with field-aligned current, thermal ion density and exospheric temperature are considered. In addition, probabilities are assessed that interactions with neutral atomic gases prevent O+ ions from escaping into the magnetosphere after they have been transversely accelerated by these waves. The two studies are combined to give a rough estimate of the total O+ escape flux as a function of the field-aligned current density for an assumed rise in the perpendicular ion temperature. Charge exchange with neutral oxygen, not hydrogen, is shown to be the principal limitation to the escape of O+ ions, which occurs when the waves are driven unstable down to low altitudes. It is found that the largest observed field-aligned current densities can heat a maximum of about 5×10^14 O+ ions m^−2 to a threshold above which they are subsequently able to escape into the magnetosphere in the following 500 s. Averaged over this period, this would constitute a flux of 10^12 m^−2 s^−1, and in steady state the peak outflow would then be limited to about 10^13 m^−2 s^−1 by frictional drag on thermal O+ at lower altitudes. Maximum escape is at low plasma density unless the O+ scale height is very large. The outflow decreases with decreasing field-aligned current density and, to a lesser extent, with increasing exospheric temperature. Upward flowing ion events are evaluated as a source of O+ ions for the magnetosphere and as an explanation of the observed solar cycle variation of ring current O+ abundance.
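The averaged flux quoted above follows directly from dividing the heated column density by the escape interval:

```python
# Back-of-envelope check of the flux quoted in the abstract:
# 5e14 O+ ions per m^2 energised above the escape threshold,
# escaping over the following 500 s.
N = 5e14        # heated column density, ions m^-2
t = 500.0       # escape interval, s
flux = N / t    # -> 1e12 ions m^-2 s^-1, as stated
print(flux)
```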
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM), suitable for complex-valued multiple-input–multiple-output processing is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the smallest-training-error and smallest-output-weight-norm criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest to real-time applications.
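The ELM training principle underlying the above — random hidden weights followed by a least-squares output layer — carries over to complex-valued data. The sketch below is a minimal complex-valued ELM for intuition only: it uses a pseudo-inverse in place of the paper's KKT dual solution and an assumed complex tanh activation, not the induced-RKHS formulation:

```python
import numpy as np

def celm_train(X, T, n_hidden, rng=None):
    """Minimal complex-valued ELM sketch: random complex hidden weights
    and biases, a complex activation, and minimum-norm least-squares
    output weights via the pseudo-inverse."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden)) + 1j * rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # tanh extends holomorphically to complex input
    beta = np.linalg.pinv(H) @ T    # minimum-norm least-squares output weights
    return W, b, beta

def celm_predict(X, W, b, beta):
    """Evaluate the trained network on complex-valued inputs X."""
    return np.tanh(X @ W + b) @ beta
```

Because only the output layer is solved for, training reduces to a single linear least-squares problem — the source of the low computational complexity mentioned above.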
Abstract:
An efficient data-based modeling algorithm for nonlinear system identification is introduced for radial basis function (RBF) neural networks with the aim of maximizing generalization capability based on the concept of leave-one-out (LOO) cross validation. Each of the RBF kernels has its own kernel width parameter, and the basic idea is to optimize the multiple pairs of regularization parameters and kernel widths, each of which is associated with a kernel, one at a time within the orthogonal forward regression (OFR) procedure. Thus, each OFR step consists of one model term selection based on the LOO mean square error (LOOMSE), followed by the optimization of the associated kernel width and regularization parameter, also based on the LOOMSE. Since the same LOOMSE is adopted for model selection as in our previous state-of-the-art local regularization assisted orthogonal least squares (LROLS) algorithm, our proposed new OFR algorithm is also capable of producing a very sparse RBF model with excellent generalization performance. Unlike our previous LROLS algorithm, which requires an additional iterative loop to optimize the regularization parameters as well as an additional procedure to optimize the kernel width, the proposed new OFR algorithm optimizes both the kernel widths and regularization parameters within the single OFR procedure, and consequently the required computational complexity is dramatically reduced. Nonlinear system identification examples are included to demonstrate the effectiveness of this new approach in comparison to the well-known approaches of support vector machine and least absolute shrinkage and selection operator as well as the LROLS algorithm.
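What makes LOOMSE-based criteria like the one above cheap is that, for a linear-in-the-parameters model, the leave-one-out error has a closed form via the hat matrix — no n refits are needed. A minimal sketch of that standard identity (illustrative, not the paper's orthogonalized recursion):

```python
import numpy as np

def loo_mse(Phi, y):
    """Leave-one-out mean square error for a linear-in-the-parameters
    model y ~ Phi @ w, computed in closed form: the i-th LOO residual
    equals the ordinary residual divided by (1 - h_ii), where h_ii is
    the i-th diagonal entry of the hat matrix."""
    H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)   # hat matrix
    e = y - H @ y                                   # ordinary residuals
    loo = e / (1.0 - np.diag(H))                    # LOO residuals
    return np.mean(loo**2)
```

The OFR procedure evaluates this criterion for each candidate kernel, so both term selection and hyperparameter tuning can use the same score.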
Abstract:
We establish a general framework for a class of multidimensional stochastic processes over [0,1] under which with probability one, the signature (the collection of iterated path integrals in the sense of rough paths) is well-defined and determines the sample paths of the process up to reparametrization. In particular, by using the Malliavin calculus we show that our method applies to a class of Gaussian processes including fractional Brownian motion with Hurst parameter H>1/4, the Ornstein–Uhlenbeck process and the Brownian bridge.
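For a piecewise-linear sample path, the first two levels of the signature mentioned above (the increment and the matrix of second-order iterated integrals) can be computed exactly by accumulating one linear segment at a time via Chen's identity. A minimal numerical sketch:

```python
import numpy as np

def signature_level2(path):
    """First two signature levels of a piecewise-linear path
    (rows = time points, cols = dimensions). Level 1 is the total
    increment; level 2 is the matrix of iterated integrals
    int dX^i dX^j, built segment by segment with Chen's identity
    (a linear segment with increment dx contributes 0.5 * dx dx^T)."""
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for k in range(len(path) - 1):
        dx = path[k + 1] - path[k]
        S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
        S1 += dx
    return S1, S2
```

The shuffle identity S2 + S2ᵀ = S1 S1ᵀ holds exactly, while the antisymmetric part of S2 (the Lévy area) captures ordering information lost in the increments alone — the data the uniqueness result above is about.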
Abstract:
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion, combined with local component analysis for the finite mixture model. We start with a Parzen window estimator whose Gaussian kernels share a common covariance matrix; local component analysis is initially applied to find this covariance matrix using the expectation–maximization algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.