Abstract:
The paper describes the technical and operational features of a composite instrument for the simultaneous measurement of five important parameters pertaining to the craft, the gear and the environment: warp load, boat speed, water temperature, water salinity and air temperature. The equipment is designed for continuous measurement aboard small and medium craft without disturbing routine fishing operations. The system operates on a 9 V supply and is suitable for portable use, easily moved from one vessel to another. A compact electronic meter kept in the wheel-house displays the data one by one in engineering units.
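The display behaviour described above can be illustrated with a minimal sketch: each sensor channel is read as a raw count, converted to engineering units by a linear calibration, and shown one at a time. The channel order matches the abstract, but the calibration slopes, offsets and raw counts below are purely illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: cycle through five sensor channels and display each
# reading in engineering units. Calibration values and raw counts are
# illustrative assumptions only.

CHANNELS = [
    # (name, unit, slope, offset) -- linear calibration: value = slope*raw + offset
    ("warp load",         "kgf",  0.50, 0.0),
    ("boat speed",        "kn",   0.01, 0.0),
    ("water temperature", "degC", 0.10, -10.0),
    ("water salinity",    "ppt",  0.05, 0.0),
    ("air temperature",   "degC", 0.10, -10.0),
]

def to_engineering(raw, slope, offset):
    """Convert a raw sensor count to engineering units via linear calibration."""
    return slope * raw + offset

def display_cycle(raw_readings):
    """Return the lines shown one by one on the wheel-house meter."""
    lines = []
    for (name, unit, slope, offset), raw in zip(CHANNELS, raw_readings):
        lines.append(f"{name}: {to_engineering(raw, slope, offset):.2f} {unit}")
    return lines

for line in display_cycle([820, 450, 312, 700, 285]):
    print(line)
```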
Abstract:
Activities of the project included: preparation of awareness materials on data reporting; organization of stakeholder awareness programmes; setting up and maintaining an electronic database; and collection of inputs and recommendations from participants.
Abstract:
Large margin criteria and discriminative models are two effective improvements for HMM-based speech recognition. This paper proposes a large-margin-trained log-linear model with kernels for continuous speech recognition (CSR). To avoid explicit computation in the high-dimensional feature space and to achieve nonlinear decision boundaries, a kernel-based training and decoding framework is proposed. To make the system robust to noise, a kernel adaptation scheme is also presented. Previous work in this area is extended in two directions. First, most kernels for CSR focus on measuring the similarity between two observation sequences; the proposed joint kernels define a similarity between two observation-label sequence pairs at the sentence level. Second, this paper addresses how to efficiently employ kernels in large-margin training and decoding with lattices. To the best of our knowledge, this is the first attempt to use large-margin kernel-based log-linear models for CSR. The model is evaluated on a noise-corrupted continuous digit task, AURORA 2.0. © 2013 IEEE.
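The joint-kernel idea above can be sketched on a toy single-frame problem: the score of an (observation, label) pair is a kernel expansion over training pairs, and decoding picks the label with the highest score. This is only a hedged illustration, not the paper's lattice-based CSR system; the RBF observation kernel, the label-agreement gating, the toy data and the uniform weights are all illustrative assumptions.

```python
import math

# Hedged toy sketch of a kernelised log-linear classifier with a joint kernel
# over (observation, label) pairs. Not the paper's system; all choices below
# are illustrative assumptions.

def joint_kernel(x1, y1, x2, y2, gamma=1.0):
    """Joint kernel: RBF similarity of observations, gated by label agreement."""
    if y1 != y2:
        return 0.0
    sq_dist = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-gamma * sq_dist)

def score(x, y, support, alphas, gamma=1.0):
    """Kernelised linear score: sum_i alpha_i * K((x_i, y_i), (x, y))."""
    return sum(a * joint_kernel(xi, yi, x, y, gamma)
               for (xi, yi), a in zip(support, alphas))

def decode(x, labels, support, alphas):
    """Predict the label with the highest kernelised score."""
    return max(labels, key=lambda y: score(x, y, support, alphas))

# Toy training pairs: two 'frames' per class, uniform weights
support = [((0.0, 0.0), 0), ((0.1, 0.0), 0),
           ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
alphas = [1.0] * 4

print(decode((0.05, 0.0), [0, 1], support, alphas))  # -> 0
```

In the paper's setting the observations are whole sentences and the inner maximisation runs over lattices rather than a small label set; this sketch only shows the kernel-expansion form of the score.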
Abstract:
McCullagh and Yang (2006) suggest a family of classification algorithms based on Cox processes. We further investigate the log Gaussian variant which has a number of appealing properties. Conditioned on the covariates, the distribution over labels is given by a type of conditional Markov random field. In the supervised case, computation of the predictive probability of a single test point scales linearly with the number of training points and the multiclass generalization is straightforward. We show new links between the supervised method and classical nonparametric methods. We give a detailed analysis of the pairwise graph representable Markov random field, which we use to extend the model to semi-supervised learning problems, and propose an inference method based on graph min-cuts. We give the first experimental analysis on supervised and semi-supervised datasets and show good empirical performance.
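The graph min-cut inference mentioned above can be illustrated on a tiny pairwise binary MRF: unary costs become source/sink edge capacities, pairwise disagreement weights become edges between nodes, and a max-flow computation yields the minimising labelling. The flow solver below is a plain Edmonds-Karp; the unary and pairwise costs are illustrative assumptions, not the paper's model.

```python
from collections import deque

# Hedged sketch: binary pairwise-MRF labelling by graph min-cut.
# Convention: cutting s->i (capacity c0) pays the cost of label 0;
# cutting i->t (capacity c1) pays the cost of label 1.

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts capacity graph (mutated)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            cap[u][v] -= b
            cap[v].setdefault(u, 0)
            cap[v][u] += b                   # residual edge
        flow += b

def mrf_mincut_labels(unary, edges):
    """unary[i] = (cost_label0, cost_label1); edges = [(i, j, weight)].
    Returns one 0/1 label per node minimising unary + disagreement cost."""
    s, t = "s", "t"
    cap = {s: {}, t: {}}
    for i, (c0, c1) in enumerate(unary):
        cap[i] = {}
        cap[s][i] = c0   # paid if i ends on the sink side (label 0)
        cap[i][t] = c1   # paid if i ends on the source side (label 1)
    for i, j, w in edges:
        cap[i][j] = cap[i].get(j, 0) + w
        cap[j][i] = cap[j].get(i, 0) + w
    max_flow(cap, s, t)
    # nodes reachable from s in the residual graph form the source side: label 1
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in seen:
                seen.add(v)
                q.append(v)
    return [1 if i in seen else 0 for i in range(len(unary))]

# Node 0 prefers label 0, node 1 prefers label 1, node 2 is neutral but
# strongly tied to node 1 by the pairwise term, so it follows node 1.
print(mrf_mincut_labels([(0, 10), (10, 0), (5, 5)], [(1, 2, 6)]))  # -> [0, 1, 1]
```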
Abstract:
Log-polar image architectures, motivated by the structure of the human visual field, have long been investigated in computer vision for estimating motion parameters from an optical flow vector field. Practical problems with this approach have been: (i) dependence on an assumed alignment of the visual and motion axes; (ii) sensitivity to occlusion from moving and stationary objects in the central visual field, where much of the numerical sensitivity is concentrated; and (iii) inaccuracy of the log-polar architecture (which is an approximation to the central 20°) for wide-field biological vision. In the present paper, we show that an algorithm based on a generalization of the log-polar architecture, termed the log-dipolar sensor, provides a large improvement in performance relative to the usual log-polar sampling. Specifically, our algorithm: (i) is tolerant of large misalignment of the optical and motion axes; (ii) is insensitive to significant occlusion by objects of unknown motion; and (iii) represents a more correct analogy to the wide-field structure of human vision. Using the Helmholtz-Hodge decomposition to estimate the optical flow vector field on a log-dipolar sensor, we demonstrate these advantages on synthetic optical flow maps as well as natural image sequences.
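The plain log-polar sampling that the abstract contrasts with can be sketched directly: sample positions lie on rings whose radii grow exponentially, with equally spaced angular wedges on each ring. The log-dipolar generalization of the paper is not reproduced here; the base radius, growth rate and grid sizes below are illustrative assumptions.

```python
import math

# Hedged sketch of standard log-polar sampling: ring radii grow as
# r = r0 * exp(k * u), so successive rings differ by the constant factor exp(k).
# r0, k, n_rings and n_wedges are illustrative choices, not the paper's values.

def log_polar_grid(n_rings=4, n_wedges=8, r0=1.0, k=0.5):
    """Return (x, y) sample positions of a log-polar sensor centred at the origin."""
    pts = []
    for u in range(n_rings):
        r = r0 * math.exp(k * u)                 # exponentially spaced ring radius
        for v in range(n_wedges):
            theta = 2.0 * math.pi * v / n_wedges  # equally spaced wedge angle
            pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

grid = log_polar_grid()
# Successive ring radii differ by the constant factor exp(k):
r_inner = math.hypot(*grid[0])   # first sample of ring 0
r_next = math.hypot(*grid[8])    # first sample of ring 1
print(round(r_next / r_inner, 3))  # -> 1.649 (= exp(0.5))
```

The exponential ring spacing is what concentrates samples, and hence numerical sensitivity, near the centre of the visual field, which is the occlusion weakness the abstract points out.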
Abstract:
For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed, and the evaluation of its probabilities of misclassification discussed. For known covariance matrices the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive and kernel methods are compared by evaluating their biases and mean square errors, and some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable; the source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of the true log-odds, and the effect of correlation among the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions, the product kernel method is a good estimator of the true log-odds.
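The "estimative" approach compared above simply plugs sample estimates into the true log-odds formula. A minimal sketch for the univariate equal-variance case, the simplest instance of the estimators the abstract compares: the log-odds log{f1(x)/f2(x)} reduces to (x - (m1+m2)/2)(m1-m2)/s2 with a pooled variance estimate. The data below are illustrative, not from the thesis.

```python
# Hedged sketch: plug-in ('estimative') log-odds for two univariate normal
# populations with equal variances. Sample data are illustrative assumptions.

def estimative_log_odds(x, sample1, sample2):
    """Plug-in estimate of log{f1(x)/f2(x)} assuming equal variances."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # pooled (unbiased) variance estimate
    ss = (sum((v - m1) ** 2 for v in sample1) +
          sum((v - m2) ** 2 for v in sample2))
    s2 = ss / (n1 + n2 - 2)
    # equal-variance normal log-density ratio: quadratic terms cancel
    return (x - (m1 + m2) / 2.0) * (m1 - m2) / s2

# A point midway between the two sample means has log-odds ~0 (equal odds):
s1 = [1.0, 1.2, 0.8, 1.1]
s2 = [3.0, 2.9, 3.2, 2.8]
mid = (sum(s1) / 4 + sum(s2) / 4) / 2
print(abs(estimative_log_odds(mid, s1, s2)) < 1e-9)  # -> True
```

The predictive and kernel estimators the abstract prefers in various settings replace this plug-in density ratio with a posterior-averaged ratio and a product-kernel density ratio respectively; only the plug-in form is sketched here.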
Abstract:
The stratigraphy of the sedimentary interval corresponding to the Keuper (Triassic) and the basal Lias has historically attracted much less interest than the adjacent Muschelkalk and Buntsandstein sediments and those of the Lower and Middle Lias. A correlation of 12 deep boreholes in the La Mancha area has been compiled to show the facies distribution of these formations, as well as their arrangement into sedimentary sequences. The well data show the existence of complex, multi-episodic evaporitic sequences in the lower unit K1, of more saline character, and in the upper unit K4-K5, which is relatively more anhydritic. The clastic episode corresponding to units K2-K3 extends across the whole area, with several preferential areas of sand accumulation. Unit K6, the uppermost of the Keuper, constitutes an excellent marker level. Sedimentation continues into the basal Lias with another thick evaporitic unit, the Anhydrite Zone, which shows a very significant saline episode in the central part of the correlation. The good correlation between outcrops and boreholes for these formations is noteworthy.