957 results for K-Fold Accuracy
Abstract:
We present a new wrapper feature selection algorithm for human detection. The algorithm is a hybrid feature selection approach combining the benefits of filter and wrapper methods. It allows the selection of an optimal feature vector that well represents the shapes of the subjects in the images. In detail, the proposed feature selection algorithm adopts k-fold subsampling and a sequential backward elimination approach, while a standard linear support vector machine (SVM) is used as the classifier for human detection. We apply the proposed algorithm to the publicly accessible INRIA and ETH pedestrian full-image datasets with the PASCAL VOC evaluation criteria. Compared to other state-of-the-art algorithms, our feature-selection-based approach can improve the detection speed of the SVM classifier by over 50% with up to 2% better detection accuracy. Our algorithm also outperforms the equivalent systems introduced in the deformable part model approach, with around 9% improvement in detection accuracy.
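The abstract does not include an implementation, but the loop it describes (sequential backward elimination scored by k-fold cross-validation around a linear SVM) can be sketched as follows; the dataset, feature dimensions, and stopping rule here are hypothetical placeholders, not the paper's.

```python
# Hedged sketch: sequential backward elimination with k-fold scoring
# around a linear SVM, in the spirit of the wrapper approach described
# above. Data and dimensions are placeholders, not the paper's datasets.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))        # placeholder shape-descriptor features
y = rng.integers(0, 2, size=500)      # placeholder human / non-human labels

def kfold_score(cols, k=5):
    """Mean k-fold accuracy of a linear SVM restricted to feature columns `cols`."""
    cv = KFold(n_splits=k, shuffle=True, random_state=0)
    return cross_val_score(LinearSVC(dual=False), X[:, cols], y, cv=cv).mean()

selected = list(range(X.shape[1]))
best = kfold_score(selected)
improved = True
while improved and len(selected) > 1:
    improved = False
    for f in list(selected):
        trial = [c for c in selected if c != f]
        score = kfold_score(trial)
        if score >= best:             # drop the feature if accuracy does not suffer
            best, selected, improved = score, trial, True
            break
print(f"kept {len(selected)} of {X.shape[1]} features, CV accuracy {best:.3f}")
```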
Abstract:
The C++ class library C-XSC for scientific computing has been extended in version 2.3.0 with the possibility to compute scalar products with selectable accuracy. In previous versions, scalar products were always computed exactly with the help of the so-called long accumulator. Additionally, optimized floating-point computation of matrix and vector operations using BLAS routines has been added in C-XSC version 2.4.0. In this article the algorithms used and their implementations, as well as some potential pitfalls in compilation, are described in more detail. Additionally, the theoretical background of the employed DotK algorithm and the necessary modifications to the concrete implementation in C-XSC are briefly explained. Run-time tests and numerical examples are presented as well.
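C-XSC itself implements these operations in C++ on top of BLAS, but the core of the DotK idea can be illustrated in a few lines. The sketch below shows the K = 2 case (Dot2) using the standard error-free transformations TwoSum and Dekker's TwoProduct; it assumes IEEE 754 double precision and is an illustration, not the library's implementation.

```python
# Illustration of the DotK idea for K = 2 (Dot2): capture the exact
# rounding error of every product and partial sum, then fold the errors
# in once at the end, giving a result as if computed in twofold working
# precision. Assumes IEEE 754 doubles; not C-XSC's actual C++ code.

def two_sum(a, b):
    """Error-free transformation: a + b = s + e exactly."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def split(a):
    """Dekker's splitting of a double into high and low parts."""
    c = 134217729.0 * a          # 2**27 + 1
    high = c - (c - a)
    return high, a - high

def two_product(a, b):
    """Error-free transformation: a * b = p + e exactly."""
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

def dot2(x, y):
    """Dot product evaluated as if in doubled working precision."""
    p, s = two_product(x[0], y[0])
    for xi, yi in zip(x[1:], y[1:]):
        h, r = two_product(xi, yi)
        p, q = two_sum(p, h)
        s += q + r
    return p + s

print(dot2([1e16, 1.0, -1e16], [1.0, 1.0, 1.0]))   # 1.0; naive evaluation gives 0.0
```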
Abstract:
A major percentage of the heat emitted from electronic packages can be extracted by air cooling, whether by natural or forced convection. The flow of air throughout an electronic system, and the heat extracted, is highly dependent on the nature of the turbulence present in the flow field. This paper discusses results from an investigation into the accuracy of turbulence models in predicting air cooling for electronic packages and systems.
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires the choice of an initial sample number (N0), a number of replicates (M), and a number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H^2, is a useful measure of the accuracy of the Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H^2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, follows a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, the product M × N0 determines the accuracy of the MC prediction; if b > 1, the optimal strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers. AIChE J., 61: 2394–2402, 2015.
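For reference (this is not the paper's code), the standard definition of the squared Hellinger distance between a binned MC estimate p and a reference distribution q is H^2 = (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2; a minimal sketch with hypothetical bin counts:

```python
# Minimal sketch: squared Hellinger distance between two binned
# distributions, as a measure of MC reconstruction accuracy.
# The histograms below are hypothetical, not the paper's data.
import numpy as np

def squared_hellinger(p, q):
    """H^2 = 0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2 for discrete distributions."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

mc_counts  = [120, 340, 280, 160, 100]   # MC-reconstructed histogram (n = 5 bins)
ref_counts = [100, 350, 300, 150, 100]   # reference distribution
print(f"H^2 = {squared_hellinger(mc_counts, ref_counts):.2e}")
```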
Abstract:
Existing crowd counting algorithms rely on holistic, local or histogram-based features to capture crowd properties. Regression is then employed to estimate the crowd size. Insufficient testing across multiple datasets has made it difficult to compare and contrast different methodologies. This paper presents an evaluation across multiple datasets to compare holistic, local and histogram-based methods, and to compare various image features and regression models. A K-fold cross-validation protocol is followed to evaluate performance across five public datasets: UCSD, PETS 2009, Fudan, Mall and Grand Central. Image features are categorised into five types: size, shape, edges, keypoints and textures. The regression models evaluated are: Gaussian process regression (GPR), linear regression, K nearest neighbours (KNN) and neural networks (NN). The results demonstrate that local features outperform equivalent holistic and histogram-based features; that optimal performance is observed using all image features except textures; and that GPR outperforms linear, KNN and NN regression.
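The evaluation protocol described above can be sketched in a few lines; the features and counts below are synthetic placeholders, not the cited datasets, and the model settings are assumptions.

```python
# Hedged sketch of the protocol: K-fold cross-validation comparing the
# four regression model families named in the abstract on synthetic data.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                                  # placeholder frame features
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=300)   # placeholder crowd counts

models = {
    "GPR":    GaussianProcessRegressor(),
    "linear": LinearRegression(),
    "KNN":    KNeighborsRegressor(n_neighbors=5),
    "NN":     MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
}
cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=cv,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:6s} MAE = {mae:.3f}")
```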
Abstract:
In this paper, we present a machine learning approach to measure the visual quality of JPEG-coded images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is obtained without referring to their original images (a 'no-reference' metric). Here, the problem of quality estimation is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and the bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with imbalance in the number of samples per quality class depends critically on the input weights and the bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and the bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The experimental results show that the visual quality estimated by the proposed RCGA-ELM emulates the mean opinion score very well. The experimental results are compared with an existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
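The base ELM algorithm the abstract builds on is compact enough to sketch: random input weights and biases, hidden activations, and output weights solved analytically via a pseudo-inverse. The sketch below shows only this base algorithm on placeholder data; the paper's KS-ELM and RCGA-ELM schemes for selecting the random weights are not shown.

```python
# Minimal sketch of a basic extreme learning machine (ELM) classifier:
# random input weights/biases, output weights from a pseudo-inverse.
import numpy as np

class ELM:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        T = np.eye(y.max() + 1)[y]                    # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # random biases
        H = np.tanh(X @ self.W + self.b)              # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ T             # analytic output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # placeholder HVS feature vectors
y = rng.integers(0, 5, size=200)         # placeholder quality classes
print("train accuracy:", (ELM().fit(X, y).predict(X) == y).mean())
```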
Abstract:
This thesis consists of two parts. In the first part we performed a single-molecule force-extension measurement with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the found DNA had a persistence length between 30 and 60 nm; the correct value should be within 40 to 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drifting of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced drifting of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
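The worm-like chain interpolation formula referred to here is usually the Marko-Siggia form, F = (kBT/Lp)(1/(4(1 - x/Lc)^2) - 1/4 + x/Lc); a minimal fitting sketch under that assumption, with synthetic data in place of the thesis's measurements:

```python
# Hedged sketch: fitting the Marko-Siggia WLC interpolation formula to
# force-extension data to extract contour length Lc and persistence
# length Lp. The "measured" data below are synthetic, not the thesis's.
import numpy as np
from scipy.optimize import curve_fit

kBT = 4.11e-21  # thermal energy at ~298 K, in J

def wlc_force(x, Lc, Lp):
    """F = (kBT/Lp) * (1/(4(1 - x/Lc)^2) - 1/4 + x/Lc)."""
    r = x / Lc
    return (kBT / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# synthetic extension (m) and force (N), roughly lambda-DNA-like
x_data = np.linspace(0.5e-6, 3.1e-6, 40)
f_data = wlc_force(x_data, Lc=3.38e-6, Lp=50e-9)
f_data *= 1 + 0.02 * np.random.default_rng(0).normal(size=x_data.size)

(Lc_fit, Lp_fit), _ = curve_fit(wlc_force, x_data, f_data, p0=(3.5e-6, 40e-9))
print(f"Lc = {Lc_fit * 1e6:.2f} um, Lp = {Lp_fit * 1e9:.1f} nm")
```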
Abstract:
Starting in 2011, events of great significance for the city of Rio de Janeiro took place or were scheduled, such as the United Nations Rio+20 conference and sporting events of worldwide importance (the FIFA World Cup, the Olympic and Paralympic Games). These events attract financial resources to the city, as well as generating jobs, improving infrastructure, and raising real estate values, both of land and of buildings. When choosing a residential property in a given neighbourhood, one evaluates not only the property itself but also the urban amenities available in the area. In this context, it was possible to define a qualitative linguistic interpretation of the neighbourhoods of the city of Rio de Janeiro by integrating three computational intelligence techniques for the evaluation of benefits: fuzzy logic, support vector machines, and genetic algorithms. The database was built with information from the web and from government institutes, capturing the cost of residential properties and the benefits and weaknesses of the city's neighbourhoods. Fuzzy logic was first implemented as an unsupervised clustering model through ellipsoidal rules via the extension principle, using the Mahalanobis distance, inferentially configuring the linguistic designation groups (Good, Fair, and Poor) according to twelve urban characteristics. From this discrimination, it became feasible to use a support vector machine integrated with genetic algorithms as a supervised method, in order to search for and select the smallest subset of the variables present in the clustering that best classifies the neighbourhoods (principle of parsimony). Analysis of the error rates allowed the choice of the best classification model with a reduced variable space, resulting in a subset containing information on: HDI, number of bus lines, educational institutions, average price per square metre, open-air spaces, entertainment venues, and crime. The model combining the three computational intelligence techniques ranked the neighbourhoods of Rio de Janeiro with acceptable error rates, supporting decision-making in the purchase and sale of residential properties. Regarding public transport in the city in question, it was apparent that the road network still takes priority.
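The SVM-plus-genetic-algorithm step described above amounts to searching over feature-subset bitmasks with cross-validated classification accuracy (penalized for subset size) as fitness. A minimal, hypothetical sketch of that idea follows; the data, GA settings, and penalty weight are all placeholders, not the thesis's implementation.

```python
# Hedged sketch: genetic-algorithm search for the smallest feature subset
# that preserves SVM cross-validated accuracy (principle of parsimony).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 12))       # 12 urban characteristics (placeholder)
y = rng.integers(0, 3, size=160)     # Good / Fair / Poor labels (placeholder)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    acc = cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum()   # penalize larger subsets

pop = rng.integers(0, 2, size=(20, X.shape[1]))       # random bitmask population
for _ in range(30):                                   # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]           # truncation selection
    cuts = rng.integers(1, X.shape[1], size=10)
    children = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cuts)])  # one-point crossover
    flip = rng.random(children.shape) < 0.05
    children = np.where(flip, 1 - children, children)   # bit-flip mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```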
Abstract:
The ratios R_k1 of k-fold to single ionization of the target atom with simultaneous one-electron capture by the projectile have been measured for 15–480 keV/u (v_p = 0.8–4.4 a.u.) collisions of C^q+ and O^q+ (q = 1–4) with Ar, using time-of-flight techniques which allowed the simultaneous identification of the final charge state of both the low-velocity recoil ion and the high-velocity projectile for each collision event. The present ratios are similar to those for He^+ and He^2+ ion impact. The energy dependence of R_k1 shows a maximum at a certain energy, E_max, which approximately conforms to the q^(1/2)-dependence scaling. For a fixed projectile state, the ratios R_k1 also vary strongly with the outgoing reaction channels. The general behavior of the measured data can be qualitatively analyzed by a simple impact-parameter, independent-electron model. © 2009 Elsevier B.V. All rights reserved.
Abstract:
The L-shell ionization processes of a Ne gas target associated with single-electron capture under bombardment by C^q+ and O^q+ (q = 2, 3) are investigated using the projectile-recoil-ion coincidence method in the energy range from 80 to 400 keV/u (v_p = 1.8–4 a.u.). The cross-section ratios R_k1 of k-fold ionization to single capture are compared with the results for He^2+-Ne collisions by Dubois [Phys. Rev. A 36, 2585 (1987)]. All the velocity dependences are quite similar. The ratios increase as the projectile energy increases in the lower-energy region, reach their maxima for projectile energies around E_max = 160·q^(1/2) keV/u, and then decrease at higher energies. These results qualitatively agree with our calculations in terms of the Bohr-Lindhard model within the independent-electron approximation.
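As an illustration of the independent-electron picture invoked in both of the abstracts above (not the authors' Bohr-Lindhard calculation), multiple ionization at a given impact parameter b is commonly modeled as binomial in a single-electron ionization probability p(b); a toy sketch with an assumed Gaussian p(b):

```python
# Toy sketch of the impact-parameter independent-electron model: given a
# single-electron ionization probability p(b), k-fold ionization of an
# n-electron shell follows binomial statistics, and cross sections come
# from integrating 2*pi*b*P_k(b) over impact parameter. The Gaussian form
# and parameters of p(b) are assumptions for illustration only.
import numpy as np
from math import comb

def p_single(b, p0=0.3, b0=1.0):
    """Assumed single-electron ionization probability vs impact parameter (a.u.)."""
    return p0 * np.exp(-(b / b0) ** 2)

def sigma_k(k, n=8, b_max=10.0, nb=4000):
    """Cross section for exactly k-fold ionization of an n-electron shell (a.u.)."""
    b = np.linspace(0.0, b_max, nb)
    p = p_single(b)
    P_k = comb(n, k) * p**k * (1 - p) ** (n - k)          # binomial k-fold probability
    return np.sum(2.0 * np.pi * b * P_k) * (b[1] - b[0])  # rectangle-rule integral

sigma_1 = sigma_k(1)
for k in (2, 3, 4):
    print(f"R_{k}1 = sigma_{k} / sigma_1 = {sigma_k(k) / sigma_1:.3f}")
```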
Abstract:
Effective collision strengths for forbidden transitions among the five energetically lowest fine-structure levels of O II are calculated in the Breit-Pauli approximation using the R-matrix method. Results are presented for the electron temperature range 100–100 000 K. The accuracy of the calculations is evaluated via the use of different types of radial orbital sets and a different configuration expansion basis for the target wavefunctions. A detailed assessment of previously available data is given, and erroneous results are highlighted. Our results reconfirm the validity of the original Seaton and Osterbrock scaling for the optical O II ratio, a matter of some recent controversy. Finally, we present plasma diagnostic diagrams using the best collision strengths and transition probabilities.
Abstract:
Aeronautical firefighters are responsible for responding to all emergencies at airports and in their surroundings. These emergencies include in-flight and ground emergencies, hazardous-materials incidents, and fires, among others. Their work is characterized by long periods of low-intensity activity and short periods of high-intensity activity. Given these characteristics, aeronautical firefighters need to be in good physical condition. Maximal oxygen uptake (VO2 max), as an indicator of aerobic capacity, is essential for understanding firefighters' performance on the job. The objective of this study is to determine the aerobic capacity of aeronautical firefighters and its determining factors. A descriptive cross-sectional study was therefore conducted on a sample of 23 male aeronautical firefighters. Socio-demographic information was collected; VO2 max and the ventilatory threshold were determined by expired-gas analysis during a maximal treadmill exercise protocol; body composition was assessed by skinfold measurement; and physical activity level was determined with the International Physical Activity Questionnaire (IPAQ). The sample had an age of 32.6 ± 4.8 years, a weight of 78.4 ± 9.8 kg, a body fat percentage of 14.8 ± 3.8%, a body mass index of 25.7 ± 2.7, and a VO2 max of 44.6 ± 6. No significant changes in VO2 max were found with age, but significant associations were found with physical activity, body fat percentage, and body mass index. It is suggested that aeronautical firefighters' training during their working day consist of high-intensity intervals, and that their physical activity level and body composition be monitored.