980 results for kernel method


Relevance:

20.00%

Publisher:

Abstract:

We introduce a new parameter to investigate replica symmetry breaking transitions using finite-size scaling methods. Based on exact equalities initially derived by F. Guerra, this parameter is a direct check of the self-averaging character of the spin-glass order parameter. The new parameter can be used to study models with time-reversal symmetry, but its greatest interest lies in models where this symmetry is absent. We apply the method to long-range and short-range Ising spin glasses, with and without a magnetic field, as well as to short-range multispin-interaction spin glasses.
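
The abstract does not reproduce the parameter's definition. As a minimal numerical sketch, assuming a Guerra-type self-averaging ratio built from the sample-to-sample fluctuations of the thermal average ⟨q²⟩ (the exact normalization and all names below are our assumptions, not the paper's):

```python
import numpy as np

def self_averaging_ratio(q2, q4):
    """Finite-size self-averaging check for the spin-glass order parameter.

    q2, q4: 1-D arrays holding the thermal averages <q^2> and <q^4>,
    one entry per disorder realization, at a fixed system size N.
    The ratio should vanish with N if <q^2> self-averages and tend to
    a finite value if replica symmetry is broken.
    """
    var_q2 = np.mean(q2 ** 2) - np.mean(q2) ** 2   # disorder fluctuations of <q^2>
    denom = np.mean(q4) - np.mean(q2) ** 2         # normalization (Guerra-type identity)
    return var_q2 / denom
```

Plotting this ratio against N for several system sizes is the finite-size-scaling step the abstract refers to.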

Relevance:

20.00%

Publisher:

Abstract:

Studies testing the high-energy moisture characteristic (HEMC) technique in tropical soils are still incipient. With this method, the effects of different management systems can be evaluated. This study investigated the aggregation state of an Oxisol under coffee, with Brachiaria between crop rows and surface-applied gypsum rates, using HEMC. Soil from an experimental area in the Upper São Francisco region, Minas Gerais, was sampled at depths of 0.05 and 0.20 m in the coffee rows. The treatments consisted of agricultural gypsum at rates of 0, 7, and 28 Mg ha-1 distributed on the soil surface of the coffee rows, between which Brachiaria was grown and periodically cut, and were compared with a treatment without Brachiaria between the coffee rows and with no gypsum application. To determine the aggregation state by the HEMC method, soil aggregates were placed in a Büchner funnel (500 mL) and wetted with a peristaltic pump fitted with a volumetric syringe. Wetting was applied at two preset rates, slow (2 mm h-1) and fast (100 mm h-1). Once saturated, the aggregates were exposed to a gradually increasing tension, applied by displacement of a water column (from 0 to 30 cm), to obtain the moisture retention curve [M = f(Ψ)] underlying the calculation of the stability parameters: modal suction, volume of drainable pores (VDP), stability index (slow and fast), VDP ratio, and stability ratio. The HEMC method was sensitive in quantifying the aggregate stability parameters and, regardless of gypsum use, the soil managed with Brachiaria between the coffee rows, with regular cuts discharged in the crop row direction, exhibited decreased susceptibility to disaggregation.
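
The stability parameters are derived from the measured retention curve. A minimal post-processing sketch, assuming the usual HEMC workflow (specific water capacity dM/dΨ, modal suction at the capacity peak, VDP as the area under the capacity curve); the function names and the exact form of the stability ratio are our assumptions:

```python
import numpy as np

def hemc_parameters(psi, m):
    """Stability parameters from one HEMC retention curve.

    psi: applied tensions (cm of water column, increasing, 0-30).
    m:   corresponding aggregate moisture at each tension.
    """
    c = -np.gradient(m, psi)                              # specific water capacity dM/dpsi
    modal_suction = psi[np.argmax(c)]                     # tension at the capacity peak
    vdp = np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(psi))   # volume of drainable pores
    return modal_suction, vdp

def stability_ratio(psi, m_slow, m_fast):
    """Fast-wetting structural index relative to the slow-wetting one;
    values near 1 indicate aggregates resistant to fast wetting."""
    ms_s, vdp_s = hemc_parameters(psi, m_slow)
    ms_f, vdp_f = hemc_parameters(psi, m_fast)
    return (vdp_f / ms_f) / (vdp_s / ms_s)
```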

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the quantitative antibiogram as an epidemiological tool for the prospective typing of methicillin-resistant Staphylococcus aureus (MRSA) and to compare it with ribotyping. METHODS: The method is based on multivariate analysis of the inhibition zone diameters of antibiotics in disk diffusion tests. Five antibiotics were used (erythromycin, clindamycin, cotrimoxazole, gentamicin, and ciprofloxacin). Ribotyping was performed using seven restriction enzymes (EcoRV, HindIII, KpnI, PstI, EcoRI, SfuI, and BamHI). SETTING: A 1,000-bed tertiary university medical center. RESULTS: During a 1-year period, 31 patients were found to be infected or colonized with MRSA. Cluster analysis of the antibiogram data showed nine distinct antibiotypes. Four antibiotypes were each isolated from multiple patients (2, 4, 7, and 13 patients, respectively); five additional antibiotypes were isolated from the remaining five patients. When analyzed with respect to the epidemiological data, the method was found to be equivalent to ribotyping. Among 206 staff members who were screened, six were carriers of MRSA. Both typing methods identified concordant MRSA types in staff members and in the patients under their care. CONCLUSIONS: The quantitative antibiogram was found to be equivalent to ribotyping as an epidemiological tool for typing MRSA in our setting. This simple, rapid, and readily available method therefore appears suitable for the prospective surveillance and control of MRSA in hospitals that do not have molecular typing facilities and in which MRSA isolates are not uniformly resistant or susceptible to the antibiotics tested.
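
A minimal sketch of the clustering step, assuming standard hierarchical clustering of the zone-diameter profiles; the diameters, the distance cutoff, and all names are illustrative, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical zone-diameter matrix: rows = MRSA isolates, columns = the
# five disks (erythromycin, clindamycin, cotrimoxazole, gentamicin,
# ciprofloxacin), values in mm.
zones = np.array([
    [6, 6, 24, 18, 6],
    [6, 6, 25, 19, 6],
    [12, 10, 6, 6, 20],
    [11, 10, 6, 6, 21],
])

# Cluster isolates on Euclidean distance between diameter profiles;
# profiles closer than the cutoff are assigned the same antibiotype.
z = linkage(pdist(zones), method="average")
antibiotypes = fcluster(z, t=5.0, criterion="distance")
print(antibiotypes)  # e.g. [1 1 2 2]: two antibiotypes among four isolates
```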

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we develop a data-driven methodology to characterize the likelihood of orographic precipitation enhancement using sequences of weather radar images and a digital elevation model (DEM). Geographical locations whose topographic characteristics favor repeatable and persistent orographic precipitation, such as stationary cells, upslope rainfall enhancement, and repeated convective initiation, are detected by analyzing the spatial distribution of a set of precipitation cells extracted from radar imagery. Topographic features, such as terrain convexity and gradients computed from the DEM at multiple spatial scales, as well as velocity fields estimated from sequences of weather radar images, are used as explanatory factors to describe the occurrence of localized precipitation enhancement. The latter is represented as a binary process by defining a threshold on the number of cell occurrences at particular locations. Both two-class and one-class support vector machine classifiers are tested to separate the presumed orographic cells from the nonorographic ones in the space of contributing topographic and flow features. Site-based validation is carried out to estimate realistic generalization skills of the obtained spatial prediction models. Owing to the high class separability, the decision function of the classifiers can be interpreted as a likelihood or susceptibility of orographic precipitation enhancement. The developed approach can serve as a basis for refining radar-based quantitative precipitation estimates and short-term forecasts, or for generating stochastic precipitation ensembles conditioned on the local topography.
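
A minimal sketch of the two classification settings, assuming scikit-learn; the synthetic features stand in for the terrain and flow descriptors, and every name here is ours:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, OneClassSVM

# Hypothetical feature matrix: one row per grid cell, columns such as
# multi-scale terrain convexity, terrain gradient, and radar-derived flow
# components; y flags cells exceeding the cell-occurrence threshold.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 1).astype(int)

# Two-class setting: orographic vs. non-orographic cells; the decision
# function serves as a susceptibility score.
two_class = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
two_class.fit(X, y)
susceptibility = two_class.decision_function(X)

# One-class setting: learn the support of the orographic cells only.
one_class = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.1))
one_class.fit(X[y == 1])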

Relevance:

20.00%

Publisher:

Abstract:

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. Even as these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e., the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that would not be adopted by users. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed with the need for such a synergy in mind. Four models are proposed. First, an adaptive model that learns the relevant image features addresses the problem of high dimensionality and collinearity of the image features; it automatically provides an accurate classifier together with a ranking of the relevance of the individual features (the spectral bands). The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs, not previously considered in remote sensing, is addressed in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
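
Of the four models, the active learning contribution lends itself to a compact illustration. A minimal margin-sampling loop in that spirit, for a binary problem, assuming scikit-learn; `oracle` stands for the user who labels the queried pixels, and all names are ours:

```python
import numpy as np
from sklearn.svm import SVC

def active_learning(X_lab, y_lab, X_unlab, oracle, rounds=10, batch=5):
    """Iteratively query the user for labels of the most ambiguous pixels."""
    for _ in range(rounds):
        clf = SVC(kernel="rbf").fit(X_lab, y_lab)
        margin = np.abs(clf.decision_function(X_unlab))  # distance to boundary
        pick = np.argsort(margin)[:batch]                # most ambiguous pixels
        X_lab = np.vstack([X_lab, X_unlab[pick]])
        y_lab = np.concatenate([y_lab, oracle(X_unlab[pick])])
        X_unlab = np.delete(X_unlab, pick, axis=0)
    return SVC(kernel="rbf").fit(X_lab, y_lab)
```

Each round spends the user's labeling effort where the current classifier is least certain, which is what makes the training set grow in quality rather than just in size.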

Relevance:

20.00%

Publisher:

Abstract:

Despite the considerable environmental importance of mercury (Hg), given its high toxicity and its ability to contaminate large areas via atmospheric deposition, little is known about its behavior in soils, especially tropical soils, compared with other heavy metals. This lack of information arises because analytical methods for the determination of Hg are more laborious and expensive than those for other heavy metals. The situation is even more precarious for the speciation of Hg in soils, since sequential extraction methods are also inefficient for this metal. The aim of this paper is to present thermal desorption associated with atomic absorption spectrometry (TDAAS) as an efficient tool for the quantitative determination of Hg in soils. The method consists of the release of Hg by heating, followed by its quantification by atomic absorption spectrometry. It was developed by constructing calibration curves in different soil samples spiked with increasing volumes of standard Hg2+ solutions. Performance, accuracy, precision, and the limits of detection and quantification were evaluated. No matrix interference was detected. Certified reference samples and comparison with a Direct Mercury Analyzer (DMA), another well-established technique, were used to validate the method, which proved to be accurate and precise.
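
The calibration step can be sketched numerically. A minimal standard-additions example with IUPAC-style detection and quantification limits; all values below are illustrative, not the study's data:

```python
import numpy as np

# Spike soil subsamples with increasing amounts of a Hg2+ standard and
# regress the TDAAS signal on the added Hg mass (values are made up).
added_ng = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # Hg added (ng)
absorbance = np.array([0.012, 0.031, 0.049, 0.088, 0.164])

slope, intercept = np.polyfit(added_ng, absorbance, 1)
native_hg = intercept / slope        # Hg already present in the subsample (ng)

# Detection/quantification limits from the residual standard deviation.
residuals = absorbance - (slope * added_ng + intercept)
s = residuals.std(ddof=2)            # two fitted parameters
lod = 3.3 * s / slope                # limit of detection (ng)
loq = 10.0 * s / slope               # limit of quantification (ng)
```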

Relevance:

20.00%

Publisher:

Abstract:

ABSTRACT The high cost and long time required to determine a retention curve by the conventional Richards chamber and Haines funnel methods limit their use, so alternative methods that simplify this routine are needed. The filter paper method for determining the soil water retention curve was evaluated and compared with the conventional method. Undisturbed samples were collected from five different soils. Using a Haines funnel and a Richards chamber, moisture content was obtained at tensions of 2, 4, 6, 8, 10, 33, 100, 300, 700, and 1,500 kPa. In the filter paper test, the soil matric potential was obtained from the filter paper calibration equation, and the moisture content was subsequently determined from the gravimetric difference. The van Genuchten model was fitted to the observed data of soil matric potential versus moisture. Moisture values from the conventional and filter paper methods, estimated by the van Genuchten model, were then compared. The filter paper method, with an R2 of 0.99, can be used to determine the water retention curve of agricultural soils as an alternative to the conventional method.
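
A minimal fitting sketch, assuming SciPy and the van Genuchten form with the Mualem restriction m = 1 - 1/n; the paired tension-moisture values are illustrative, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """Water retention theta(psi) with the Mualem restriction m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

# Illustrative paired data: tension (kPa) and gravimetric moisture from a
# filter-paper test (matric potential via the calibration equation).
psi = np.array([2, 4, 6, 8, 10, 33, 100, 300, 700, 1500], dtype=float)
theta = np.array([0.42, 0.40, 0.38, 0.37, 0.35, 0.30, 0.26, 0.23, 0.21, 0.19])

popt, _ = curve_fit(van_genuchten, psi, theta,
                    p0=[0.1, 0.45, 0.05, 1.5], maxfev=10000)
theta_r, theta_s, alpha, n = popt   # fitted retention parameters
```

Comparing moisture values predicted by the two fitted curves at the same tensions is the comparison the abstract describes.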

Relevance:

20.00%

Publisher:

Abstract:

ABSTRACT Particle density, gravimetric and volumetric water content, and porosity are basic concepts for characterizing porous systems such as soils. This paper proposes an experimental method to measure these physical properties, applicable in experimental physics classes, using porous media composed of spheres of a single diameter (monodisperse medium) or of different diameters (polydisperse medium). Soil samples are not used, given the difficulty of handling this porous medium in laboratories dedicated to teaching basic experimental physics. The paper describes the procedure and reports two case studies, one in a monodisperse medium and the other in a polydisperse medium. The particle density results were very close to theoretical values, with a relative deviation (RD) of -2.9 % for the lead spheres and +0.1 % for the iron spheres. The RD of porosity was also low, -3.6 % for the lead spheres and -1.2 % for the iron spheres, when comparing the two procedures (one based on particle and porous-medium densities, the other on the saturated volumetric water content) in both monodisperse and polydisperse media.
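
The quantities involved are linked by standard definitions, which also explain the two procedures compared above (in our notation: m_s and V_s are the mass and volume of solids, V_t the total volume, m_w and V_w the mass and volume of water, rho_w the water density):

```latex
\[
\rho_p = \frac{m_s}{V_s}, \qquad
\rho_b = \frac{m_s}{V_t}, \qquad
\phi = 1 - \frac{\rho_b}{\rho_p}
\]
\[
u = \frac{m_w}{m_s}, \qquad
\theta = \frac{V_w}{V_t} = u\,\frac{\rho_b}{\rho_w},
\qquad \theta_{\text{sat}} = \phi
\]
```

The last identity (volumetric water content at saturation equals porosity) is what allows the saturated-moisture procedure to serve as an independent check on the density-based one.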

Relevance:

20.00%

Publisher:

Abstract:

Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems, which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique that is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, its accuracy deteriorates for unstable flow, and an iterative scheme is required to control the localization error. To avoid the large computational overhead of the iterative scheme, we suggest several adaptive strategies for both flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled, and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV that can be used as a downscaling method (DMsFV). Compared with other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later. This gives great flexibility to employ different physical descriptions, and even different equations, in different regions, offering an excellent framework for constructing hybrid methods.
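
The adaptive criterion is easy to sketch. A minimal version, assuming a 2-D concentration field on a regular grid; the relative threshold and all names are our choices, not the paper's:

```python
import numpy as np

def front_region(conc, dx, dy, rel_threshold=0.1):
    """Flag cells belonging to the front region of an unstable flow.

    conc: 2-D array of concentration values on a regular grid with
    spacings dy (rows) and dx (columns). Cells whose concentration
    gradient exceeds a fraction of the maximum gradient get the
    iteratively improved fine-scale solve; the rest stay coarse.
    """
    gy, gx = np.gradient(conc, dy, dx)          # gradient along each axis
    gnorm = np.hypot(gx, gy)                    # gradient magnitude
    return gnorm > rel_threshold * gnorm.max()  # boolean mask of front cells
```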

Relevance:

20.00%

Publisher:

Abstract:

We present a heuristic method for learning error-correcting output code (ECOC) matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The hierarchical partition set is created using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated on the UCI database and applied to a real problem, the classification of traffic sign images.
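
A minimal sketch of the tree-structured construction, using class-centroid separation as a stand-in for the discriminative criterion (which the abstract leaves abstract); every name below is ours:

```python
import numpy as np
from itertools import combinations

def build_ecoc(classes, means, columns):
    """Recursively split `classes` in two and emit one ECOC column
    (+1 / -1 for the two groups, 0 for classes outside this node)."""
    if len(classes) < 2:
        return
    splits = ((list(g), [c for c in classes if c not in g])
              for r in range(1, len(classes))
              for g in combinations(classes, r))
    left, right = max(splits, key=lambda s: np.linalg.norm(
        means[s[0]].mean(axis=0) - means[s[1]].mean(axis=0)))
    column = np.zeros(len(means))
    column[left], column[right] = 1.0, -1.0
    columns.append(column)
    build_ecoc(left, means, columns)    # recurse into each group
    build_ecoc(right, means, columns)

means = np.array([[0., 0.], [0., 1.], [5., 0.], [5., 1.]])  # toy class centroids
columns = []
build_ecoc(list(range(len(means))), means, columns)
M = np.column_stack(columns)   # compact coding matrix: N classes, N-1 columns
```

The binary tree yields exactly N - 1 columns for N classes, which is what makes the resulting matrix compact.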

Relevance:

20.00%

Publisher:

Abstract:

We develop an abstract extrapolation theory for the real interpolation method that covers and improves the most recent versions of the celebrated theorems of Yano and Zygmund. As a consequence of our method, we give new endpoint estimates for the Sobolev embedding theorem on an arbitrary domain Ω.
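
For orientation, the classical statements being extrapolated can be written as follows (on a finite measure space, T sublinear; this is the textbook form, not the paper's refined version):

```latex
% Yano: a blow-up rate at p = 1 yields an L log L endpoint.
\[
\|Tf\|_{L^{p}} \le \frac{C}{(p-1)^{\alpha}}\,\|f\|_{L^{p}}
\quad (1 < p \le p_{0})
\;\Longrightarrow\;
T : L(\log L)^{\alpha} \longrightarrow L^{1}.
\]
% Zygmund: a growth rate as p -> infinity yields exponential integrability.
\[
\|Tf\|_{L^{p}} \le C\,p^{\alpha}\,\|f\|_{L^{p}}
\quad (p_{0} \le p < \infty)
\;\Longrightarrow\;
T : L^{\infty} \longrightarrow \mathrm{Exp}\,L^{1/\alpha}.
\]
```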

Relevance:

20.00%

Publisher:

Abstract:

In this paper we show that, in the class of assignment games with a dominant diagonal (Solymosi and Raghavan, 2001), the Thompson allocation (which coincides with the tau value) is the unique core point that is maximal with respect to the Lorenz dominance relation; moreover, it coincides with the solution of Dutta and Ray (1989), also known as the egalitarian solution. Second, by means of a condition stronger than diagonal dominance, we introduce a new class of assignment games in which each agent obtains with his optimal partner at least twice as much as with any other partner. For these assignment games with a 2-dominant diagonal, the Thompson allocation is the unique point of the kernel and is therefore the nucleolus.
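
Written out, and assuming the optimal matching is placed on the diagonal of the assignment matrix A = (a_{ij}) of mixed-pair values (our notation), the two conditions read:

```latex
% Dominant diagonal (Solymosi and Raghavan, 2001):
\[
a_{ii} \ge a_{ij}
\quad\text{and}\quad
a_{ii} \ge a_{ji}
\qquad\text{for all } i, j.
\]
% 2-dominant diagonal (introduced here): each agent obtains at least
% twice as much with the optimal partner as with any other partner.
\[
a_{ii} \ge 2\,a_{ij}
\quad\text{and}\quad
a_{ii} \ge 2\,a_{ji}
\qquad\text{for all } i \ne j.
\]
```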

Relevance:

20.00%

Publisher:

Abstract:

The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
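
The profile-estimation step can be sketched in miniature. Assuming, as a simplification, a plain position weight matrix in place of the generalized profile trained with hidden Markov model algorithms, and a handful of hypothetical ligand sequences (loosely patterned on the palindromic TTGGC...GCCAA NFI consensus):

```python
import numpy as np

# Hypothetical in vitro-selected ligands, all of equal length.
ligands = ["TTGGCTAATGCCAA", "TTGGCACTTGCCAA", "CTGGCTTTAGCCAA"]
BASES = "ACGT"

# Position-specific counts with a +1 pseudocount per base.
counts = np.ones((len(ligands[0]), len(BASES)))
for seq in ligands:
    for pos, base in enumerate(seq):
        counts[pos, BASES.index(base)] += 1

# Log-odds weights against a uniform 0.25 background.
pwm = np.log2(counts / counts.sum(axis=1, keepdims=True) / 0.25)

def site_score(site):
    """Additive log-odds score of a candidate binding site."""
    return sum(pwm[pos, BASES.index(base)] for pos, base in enumerate(site))

print(site_score("TTGGCTAATGCCAA"))  # high score: matches the selected ligands
```

With the several thousand sequences the abstract calls for, the same counting gives the low- and medium-affinity positions enough statistical weight to resolve the non-independent base preferences the covariance analysis revealed.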