996 results for standard batch algorithms


Relevance: 80.00%

Abstract:

Master's dissertation in Applied Biochemistry (area of specialization in Biotechnology)

Relevance: 80.00%

Abstract:

There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth. Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
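As an illustration of the spectral balancing family discussed above, here is a minimal single-trace sketch in Python; the band limits, smoothing length and blueing exponent are illustrative assumptions, not values from the study.

```python
import numpy as np

def spectral_balance(trace, dt, f_lo=25e6, f_hi=250e6, alpha=0.0, eps=1e-3):
    """Band-limited spectral balancing of one georadar trace.

    alpha = 0 gives spectral whitening (flat target spectrum);
    alpha > 0 gives spectral blueing (target amplitude ~ f**alpha).
    Band limits, smoothing and alpha are illustrative choices.
    """
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, dt)

    # Smooth the amplitude spectrum to estimate the wavelet's colour.
    amp = np.abs(spec)
    kernel = np.ones(11) / 11.0
    smooth = np.convolve(amp, kernel, mode="same")

    # Desired output spectrum: flat (white) or rising with frequency
    # (blue), restricted to the signal band to avoid boosting noise.
    band = (freqs >= f_lo) & (freqs <= f_hi)
    target = np.where(band, freqs**alpha, 0.0)
    gain = np.where(band, target / (smooth + eps * smooth.max()), 0.0)

    return np.fft.irfft(spec * gain, n)
```

Applying the same routine with alpha > 0 versus alpha = 0 lets one compare blueing against whitening on the same trace, in the spirit of the comparison reported above.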

Relevance: 80.00%

Abstract:

We consider an agent who has to repeatedly make choices in an uncertain and changing environment, who has full information of the past, who discounts future payoffs, but who has no prior. We provide a learning algorithm that performs almost as well as the best of a given finite number of experts or benchmark strategies and does so at any point in time, provided the agent is sufficiently patient. The key is to find the appropriate degree of forgetting the distant past. Standard learning algorithms that treat the recent and distant past equally do not have the sequential epsilon-optimality property.
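A minimal sketch of the idea of down-weighting the distant past in an experts framework; the exponential-weights scheme and the constants below are illustrative assumptions, not the paper's algorithm or tuning.

```python
import numpy as np

def discounted_exp_weights(payoffs, eta=0.5, beta=0.95, rng=None):
    """Exponential-weights forecaster over a finite set of experts,
    with geometric discounting of past payoffs (forgetting factor beta).

    payoffs: array of shape (T, K); payoffs[t, k] is expert k's payoff
    at time t. Setting beta = 1 recovers a standard algorithm that
    treats recent and distant past equally.
    """
    rng = np.random.default_rng(rng)
    T, K = payoffs.shape
    scores = np.zeros(K)              # discounted cumulative payoffs
    choices = np.empty(T, dtype=int)
    for t in range(T):
        w = np.exp(eta * (scores - scores.max()))  # stabilised weights
        p = w / w.sum()
        choices[t] = rng.choice(K, p=p)            # randomised choice
        scores = beta * scores + payoffs[t]        # forget distant past
    return choices
```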

Relevance: 80.00%

Abstract:

Markov chain Monte Carlo (MCMC) methods are techniques for sampling from probability distributions. They rely on simulating Markov chains whose stationary distributions are the distributions to be sampled. Given their ease of application, they are among the most widely used approaches in the statistical community, particularly in Bayesian analysis, and are very popular tools for sampling from complex and/or high-dimensional probability distributions. Since the appearance of the first MCMC method in 1953 (the Metropolis method, see [10]), interest in these methods, as well as the range of available algorithms, has kept growing from one year to the next. Although the Metropolis-Hastings algorithm (see [8]) can be regarded as one of the most general Markov chain Monte Carlo algorithms, it is also one of the simplest to understand and explain, which makes it an ideal starting point. It has been further developed by several researchers. The Multiple-Try Metropolis (MTM) algorithm, introduced into the statistical literature by [9], is considered an interesting development in this area, but unfortunately its implementation is very costly in terms of computing time. Recently, a new algorithm was developed by [1]: the revisited Multiple-Try Metropolis (revisited MTM), which recasts the standard MTM method mentioned above as a Metropolis-Hastings algorithm on an extended space. The goal of this work is, first, to present MCMC methods, and then to study and analyze the Metropolis-Hastings algorithm and the standard MTM so as to give readers a better understanding of how these methods are implemented. A second objective is to study the strengths and drawbacks of the revisited MTM algorithm, in order to see whether it meets the expectations of the statistical community. Finally, we attempt to combat the sedentariness problem of the revisited MTM algorithm, which gives rise to an entirely new algorithm. This new algorithm performs well when the number of candidates generated at each iteration is small, but its performance degrades as that number of candidates grows.
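To make the starting point concrete, here is a minimal sketch of a random-walk Metropolis-Hastings sampler; the Gaussian proposal and step size are illustrative choices, not the thesis's settings.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples=10_000, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings sampler.

    log_target: function returning the log-density of the target
                (up to an additive constant).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    samples = np.empty((n_samples,) + x.shape)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(x.shape)
        logp_prop = log_target(proposal)
        # Accept with probability min(1, target(prop) / target(x));
        # the symmetric proposal makes the Hastings ratio cancel.
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        samples[i] = x
    return samples
```

For example, `metropolis_hastings(lambda x: -0.5 * np.sum(x**2), np.zeros(2))` draws (correlated) samples from a standard bivariate normal.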

Relevance: 80.00%

Abstract:

Free-word order languages have long posed significant problems for standard parsing algorithms. This thesis presents an implemented parser, based on Government-Binding (GB) theory, for a particular free-word order language, Warlpiri, an aboriginal language of central Australia. The words in a sentence of a free-word order language may swap about relatively freely with little effect on meaning: the permutations of a sentence mean essentially the same thing. It is assumed that this similarity in meaning is directly reflected in the syntax. The parser presented here properly processes free word order because it assigns the same syntactic structure to the permutations of a single sentence. The parser also handles fixed word order, as well as other phenomena. On the view presented here, there is no such thing as a "configurational" or "non-configurational" language. Rather, there is a spectrum of languages that are more or less ordered. The operation of this parsing system is quite different in character from that of more traditional rule-based parsing systems, e.g., context-free parsers. In this system, parsing is carried out via the construction of two different structures, one encoding precedence information and one encoding hierarchical information. This bipartite representation is the key to handling both free- and fixed-order phenomena. This thesis first presents an overview of the portion of Warlpiri that can be parsed. Following this is a description of the linguistic theory on which the parser is based. The chapter after that describes the representations and algorithms of the parser. In conclusion, the parser is compared to related work. The appendix contains a substantial list of test cases, both grammatical and ungrammatical, that the parser has actually processed.
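The toy sketch below illustrates only the bipartite idea: a hierarchical structure recovered from case marking is identical across permutations, while precedence is kept as a separate record. The suffix-to-role table is invented for illustration and bears no relation to actual Warlpiri morphology or to the parser's GB-based machinery.

```python
from itertools import permutations

def parse(words):
    """Toy bipartite parse: hierarchy from (invented) case suffixes,
    precedence from surface order."""
    roles = {"-erg": "SUBJ", "-abs": "OBJ"}   # hypothetical case markers
    hierarchy = {}
    for w in words:
        for suffix, role in roles.items():
            if w.endswith(suffix):
                hierarchy[role] = w[: -len(suffix)]
                break
        else:
            hierarchy["VERB"] = w             # unmarked word: the verb
    precedence = list(words)                  # surface order, kept apart
    return hierarchy, precedence

sentence = ["ngarrka-erg", "wawirri-abs", "panti-rni"]
base, _ = parse(sentence)
# Every permutation yields the same hierarchical structure.
assert all(parse(list(p))[0] == base for p in permutations(sentence))
```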

Relevance: 80.00%

Abstract:

The growth of databases containing ever more difficult images with an ever larger number of categories is forcing the development of image representation techniques that are discriminative when working with multiple classes, and of algorithms that are efficient in learning and classification. This thesis explores the problem of classifying images according to the object they contain when a large number of categories is involved. We first investigate how a hybrid system composed of a generative model and a discriminative model can benefit image classification when the level of human annotation is minimal. For this task we introduce a new vocabulary using a dense representation of color-SIFT descriptors, and we then investigate how the different parameters affect the final classification. We next propose a method for incorporating spatial information into the hybrid system, showing that context information is of great help for image classification. We then introduce a new shape descriptor that represents the image in terms of its local and spatial shape, together with a kernel that incorporates this spatial information in a pyramidal form. Shape is represented by a compact vector, yielding a descriptor well suited to kernel-based learning algorithms. The experiments show that this shape information achieves results similar to (and sometimes better than) appearance-based descriptors. We also investigate how different features can be combined for image classification, and show that the proposed shape descriptor together with an appearance descriptor substantially improves classification. Finally, we describe an algorithm that detects regions of interest automatically during training and classification. This provides a way to suppress the image background and adds invariance to the position of objects within images. We show that using shape and appearance over this region of interest, together with random forest classifiers, improves both classification and computation time. We compare our results with results from the literature, using the same databases and the same training and classification protocols as the original authors, and show that all the innovations introduced increase the final image classification performance.
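For illustration, here is a generic sketch of a pyramidal spatial histogram and an intersection kernel of the kind described; the cell weighting and parameters are conventional choices for a sketch, not the thesis's exact descriptor or kernel.

```python
import numpy as np

def pyramid_histogram(points, labels, n_words, levels=2):
    """Spatial-pyramid bag-of-words histogram for one image.

    points: (N, 2) descriptor locations, normalised to [0, 1).
    labels: (N,) visual-word index of each descriptor.
    """
    hists = []
    for level in range(levels + 1):
        cells = 2 ** level
        for i in range(cells):
            for j in range(cells):
                in_cell = ((points[:, 0] * cells).astype(int) == i) & \
                          ((points[:, 1] * cells).astype(int) == j)
                h = np.bincount(labels[in_cell], minlength=n_words)
                # Down-weight coarser levels so fine spatial matches
                # count more (pyramid-match style weighting).
                weight = 1.0 / 2 ** (levels - level) if level else 1.0 / 2 ** levels
                hists.append(weight * h)
    return np.concatenate(hists)

def intersection_kernel(h1, h2):
    """Histogram intersection kernel between two pyramid histograms."""
    return np.minimum(h1, h2).sum()
```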

Relevance: 80.00%

Abstract:

In this paper, the statistical properties of tropical ice clouds (ice water content, visible extinction, effective radius, and total number concentration) derived from 3 yr of ground-based radar–lidar retrievals from the U.S. Department of Energy Atmospheric Radiation Measurement Climate Research Facility in Darwin, Australia, are compared with the same properties derived using the official CloudSat microphysical retrieval methods and from a simpler statistical method using radar reflectivity and air temperature. It is shown that the two official CloudSat microphysical products (2B-CWC-RO and 2B-CWC-RVOD) are statistically virtually identical. The comparison with the ground-based radar–lidar retrievals shows that all satellite methods produce ice water contents and extinctions in a much narrower range than the ground-based method and overestimate the mean vertical profiles of microphysical parameters below 10-km height by over a factor of 2. Better agreements are obtained above 10-km height. Ways to improve these estimates are suggested in this study. Effective radii retrievals from the standard CloudSat algorithms are characterized by a large positive bias of 8–12 μm. A sensitivity test shows that in response to such a bias the cloud longwave forcing is increased from 44.6 to 46.9 W m−2 (implying an error of about 5%), whereas the negative cloud shortwave forcing is increased from −81.6 to −82.8 W m−2. Further analysis reveals that these modest effects (although not insignificant) can be much larger for optically thick clouds. The statistical method using CloudSat reflectivities and air temperature was found to produce inaccurate mean vertical profiles and probability distribution functions of effective radius. This study also shows that the retrieval of the total number concentration needs to be improved in the official CloudSat microphysical methods prior to a quantitative use for the characterization of tropical ice clouds. Finally, the statistical relationship used to produce ice water content from extinction and air temperature obtained by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite is evaluated for tropical ice clouds. It is suggested that the CALIPSO ice water content retrieval is robust for tropical ice clouds, but that the temperature dependence of the statistical relationship used should be slightly refined to better reproduce the radar–lidar retrievals.

Relevance: 80.00%

Abstract:

In order to gain insights into events and issues that may cause errors and outages in parts of IP networks, intelligent methods that capture and express causal relationships online (in real time) are needed. Whereas generalised rule induction has been explored for non-streaming data applications, its application and adaptation to streaming data is mostly undeveloped or based on periodic and ad hoc training with batch algorithms. Some association rule mining approaches for streaming data do exist; however, they can only express binary causal relationships. This paper presents ongoing work on Online Generalised Rule Induction (OGRI), which aims to create expressive and adaptive rule sets in real time that can be applied to a broad range of applications, including network telemetry data streams.
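As a hedged illustration of rule induction over a stream (not the OGRI algorithm itself, which targets richer, generalised rules rather than only the binary relationships shown here), the sketch below maintains counts over a sliding window so that old events are gradually forgotten.

```python
from collections import Counter, deque
from itertools import combinations

class StreamingRuleInducer:
    """Induce 'A -> B' rules from a data stream over a sliding window.

    A minimal sketch of the streaming aspect only: thresholds and the
    window length are illustrative parameters.
    """
    def __init__(self, window=1000, min_support=20, min_confidence=0.8):
        self.window = deque(maxlen=window)   # recent transactions only
        self.min_support = min_support
        self.min_confidence = min_confidence

    def update(self, transaction):
        """Add one transaction (a set of events); return current rules."""
        self.window.append(frozenset(transaction))
        item_counts, pair_counts = Counter(), Counter()
        for t in self.window:
            item_counts.update(t)
            pair_counts.update(combinations(sorted(t), 2))
        rules = []
        for (a, b), n_ab in pair_counts.items():
            if n_ab < self.min_support:
                continue
            for lhs, rhs in ((a, b), (b, a)):
                confidence = n_ab / item_counts[lhs]
                if confidence >= self.min_confidence:
                    rules.append((lhs, rhs, confidence))
        return rules
```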

Relevance: 80.00%

Abstract:

Robotic mapping is the process of automatically constructing an environment representation using mobile robots. We address the problem of semantic mapping, which consists of using mobile robots to create maps that represent not only metric occupancy but also other properties of the environment. Specifically, we develop techniques to build maps that represent activity and navigability of the environment. Our approach to semantic mapping is to combine machine learning techniques with standard mapping algorithms. Supervised learning methods are used to automatically associate properties of space to the desired classification patterns. We present two methods, the first based on hidden Markov models and the second on support vector machines. Both approaches have been tested and experimentally validated in two problem domains: terrain mapping and activity-based mapping.
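A minimal sketch of the support-vector-machine variant, assuming scikit-learn and invented two-dimensional per-cell features; in the paper the features and labels come from robot sensor data rather than the synthetic placeholders used here.

```python
import numpy as np
from sklearn.svm import SVC

# Classify each map cell from local features (e.g. mean height and
# roughness for terrain mapping). Data here are random placeholders.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 2))                 # one row per labelled cell
labels = (features[:, 0] + features[:, 1] > 0).astype(int)  # e.g. navigable?

clf = SVC(kernel="rbf", C=1.0).fit(features, labels)

# Cells of a new map can then be labelled to form the semantic layer
# on top of the metric occupancy map.
new_cells = rng.normal(size=(10, 2))
semantic_layer = clf.predict(new_cells)
```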

Relevance: 80.00%

Abstract:

The central interest of this thesis is to understand how public action drives the formation and transformation of tourist destinations. The research was based on the premise that public actions result from a process of mediation among state and non-state actors regarded as important in a sector, who interact so that their interests and world views prevail over those of others. The case of Porto de Galinhas beach, in Pernambuco, the locus of investigation of this thesis, allowed the analysis of a multiplicity of actors in the formation and implementation of local actions toward the development of tourism between 1970 and 2010, and made it possible to understand how the frame of reference for the interventions was constructed. This thesis, of a qualitative nature, draws its theoretical support from the cognitive approach to public policy analysis developed in France, whose main exponents are Bruno Jobert and Pierre Muller. This choice was made because of its emphasis on the cognitive and normative factors of policy, aspects little explored in studies of public policy in Brazil. For data collection, documentary, bibliographic and field research was used to (re)constitute the formation and transformation of the site concerned. The analysis techniques applied were content analysis and documentary analysis. To trace the frame of reference of public action, we began by characterizing the boundaries of the tourism sector and the creation of images by its main international body, the World Tourism Organization, whose meeting minutes revealed guidelines for member countries, including Brazil, which make up the global-sectoral frame of reference of the sector. The analysis of the evolution of tourism in the country showed that public policies in Brazil underwent organizational transformations over the years, indicating changes in the frame of reference that guided the interventions. These guidelines and transformations were identified in the construction of the tourist destination of Porto de Galinhas, for which the data were systematized and presented in four historical periods, discussing the values, standards, algorithms, images and important mediators of each. It emerged that the State played different roles in local tourism across the decades analyzed. From the 1990s onward, however, new actors became involved in formulating and implementing the policies developed, especially local hoteliers. Through their association, these hoteliers established a leadership position in the local tourism sector and were thus able to assert their hegemony and advance their own interests. The leadership acquired by one group of actors in the case of Porto de Galinhas does not mean that disputes within the sector were neutralized, but rather that there is a cognitive framework within which the actors involved confront one another. Despite the advances achieved through the work of the mediators in recent decades, which expanded and diversified activity in the area and consolidated the beach as a tourist destination of national prominence, the position of the place remains unstable in terms of competitiveness, given its situation of social and environmental unsustainability.

Relevance: 80.00%

Abstract:

A reformulation of the bounded mixed complementarity problem is introduced. It is proved that the level sets of the objective function are bounded and, under reasonable assumptions, stationary points coincide with solutions of the original variational inequality problem. Therefore, standard minimization algorithms applied to the new reformulation must succeed. This result is applied to the compactification of unbounded mixed complementarity problems.
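For orientation, and without reproducing the paper's specific reformulation, the bounded (box-constrained) mixed complementarity problem can be stated as a variational inequality, and one standard merit-function reformulation of this kind then hands the problem to an unconstrained minimizer:

```latex
% Bounded mixed complementarity as a variational inequality VI(F,[l,u]):
%   find x in [l,u] such that F(x)^T (y - x) >= 0 for all y in [l,u].
% One standard merit-function reformulation (an illustration, not
% necessarily the paper's): minimise the squared projection residual
\[
  f(x) \;=\; \bigl\|\, x - \operatorname{mid}\!\bigl(l,\ u,\ x - F(x)\bigr) \bigr\|^{2},
\]
% where mid(l,u,z) is the componentwise median, i.e. the projection of
% z onto the box [l,u]. Then f >= 0 everywhere and f(x) = 0 exactly when
% x solves the VI, so a minimization algorithm whose iterates remain in
% a bounded level set can be applied to f.
```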

Relevance: 80.00%

Abstract:

This thesis offers practical and theoretical evaluations of gossip-epidemic algorithms, comparing those most common in the literature with newly proposed algorithms and analyzing their behavior. Tests were executed on one hundred randomly generated graphs using the Large Unstructured NEtwork Simulator (LUNES), a simulation tool provided by the Parallel and Distributed Simulation Research Group (PADS) of the Department of Computer Science, Università di Bologna, and simulated with the Advanced RTI System (ARTÌS), based on the High Level Architecture standard. Algorithms from the literature were analyzed and taken as the basis for the new algorithms.
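As a baseline illustration of the algorithm family under study, here is a generic push gossip protocol (not one of the thesis's specific variants): each round, every informed node forwards the message to a few random neighbours.

```python
import random

def push_gossip(adjacency, source, fanout=2, rounds=20, seed=0):
    """Simulate basic push gossip on a graph.

    adjacency: dict mapping node -> list of neighbour nodes.
    Returns the set of informed nodes after the given number of rounds.
    fanout and rounds are illustrative parameters.
    """
    rng = random.Random(seed)
    informed = {source}
    for _ in range(rounds):
        new = set()
        for node in informed:
            neighbours = adjacency[node]
            for peer in rng.sample(neighbours, min(fanout, len(neighbours))):
                new.add(peer)
        informed |= new
    return informed

# Example: gossip on a small ring; coverage grows with rounds and fanout.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(len(push_gossip(ring, source=0)))
```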

Relevance: 80.00%

Abstract:

The primary Mg/Ca ratio of foraminiferal shells is a potentially valuable paleoproxy for sea surface temperature (SST) reconstructions. However, the reliable extraction of this ratio from sedimentary calcite assumes that we can overcome artifacts related to foraminiferal ecology and partial dissolution, as well as contamination by secondary calcite and clay. The standard batch method for Mg/Ca analysis involves cracking, sonicating, and rinsing the tests to remove clay, followed by chemical cleaning, and finally acid digestion and single-point measurement. This laborious procedure often results in substantial loss of sample (typically 30-60%). We find that even the earliest steps of this procedure can fractionate Mg from Ca, thus biasing the result toward a more variable and often anomalously low Mg/Ca ratio. Moreover, the more rigorous the cleaning, the more calcite is lost, and the more likely it becomes that any residual clay that has not been removed by physical cleaning will increase the ratio. These potentially significant sources of error can be overcome with a flow-through (FT) sequential leaching method that makes time- and labor-intensive pretreatments unnecessary. When combined with time-resolved analysis (FT-TRA), flow-through performed with a gradually increasing and tightly regulated acid strength produces continuous records of Mg, Sr, Al, and Ca concentrations in the leachate, sorted by the dissolution susceptibility of the reacting material. FT-TRA reliably separates secondary calcite (which is not representative of original life habitats) from the more resistant biogenic calcite (the desired signal) and clay (a high-Mg/Ca contaminant that also contains Al), and further resolves the biogenic component into primary and more resistant fractions that may reflect habitat or other changes during ontogeny. We find that the most susceptible fraction of biogenic calcite in surface-dwelling foraminifera gives the most accurate value for SST and therefore best represents primary calcite. Sequential dissolution curves can be used to correct the primary Mg/Ca ratio for clay, if necessary; however, the temporal separation of calcite from clay in FT-TRA is so complete that this correction is typically ≤2%, even in clay-rich sediments. Unlike hands-on batch methods, which are difficult to reproduce exactly, flow-through lends itself to automation, providing precise replication of treatment for every sample. Our automated flow-through system can process 22 samples, two system blanks, and 48 mixed standards in under 12 hours of unattended operation. FT-TRA thus represents a faster, cheaper, and better way to determine Mg/Ca ratios in foraminiferal calcite.

Relevance: 80.00%

Abstract:

A system of cluster analysis for genome-wide expression data from DNA microarray hybridization is described that uses standard statistical algorithms to arrange genes according to similarity in pattern of gene expression. The output is displayed graphically, conveying the clustering and the underlying expression data simultaneously in a form intuitive for biologists. We have found in the budding yeast Saccharomyces cerevisiae that clustering gene expression data efficiently groups together genes of known similar function, and we find a similar tendency in human data. Thus patterns seen in genome-wide expression experiments can be interpreted as indications of the status of cellular processes. Also, coexpression of genes of known function with poorly characterized or novel genes may provide a simple means of gaining leads to the functions of many genes for which information is not currently available.
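A minimal sketch of the described pipeline using SciPy, with random placeholder data standing in for the microarray log-ratios:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# Hierarchically cluster genes by similarity of their expression
# patterns across conditions, using a correlation-based distance.
rng = np.random.default_rng(0)
expression = rng.normal(size=(50, 12))   # 50 genes x 12 conditions

# 1 - Pearson correlation between gene expression profiles.
distances = pdist(expression, metric="correlation")
tree = linkage(distances, method="average")

# The tree orders genes so co-expressed groups sit together; the
# reordered matrix can be rendered as a colour-coded display next to
# the dendrogram for visual inspection, as described above.
order = dendrogram(tree, no_plot=True)["leaves"]
clustered = expression[order]
```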

Relevance: 80.00%

Abstract:

Given a territory composed of basic geographical units, the delineation of local labour market areas (LLMAs) can be seen as a problem in which those units are grouped subject to multiple constraints. In previous research, standard genetic algorithms were not able to find valid solutions, and a specific evolutionary algorithm was developed. The inclusion of multiple ad hoc operators allowed the algorithm to find better solutions than those of a widely used greedy method. However, the percentage of invalid solutions was still very high. In this paper we improve that evolutionary algorithm through the inclusion of (i) a repair process that allows every invalid individual to fulfil the constraints and contribute to the evolution, and (ii) a hill-climbing optimisation procedure applied to each generated individual by means of an appropriate reassignment of some of its constituent units. We compare the results of both techniques against the previous results and those of the greedy method.
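The skeleton below sketches where the two proposed components slot into the evolutionary loop. The operators are abstract parameters here (supplied by the caller); the concrete versions in the paper reassign basic geographical units between candidate LLMAs, and the tournament size and replacement scheme are illustrative choices.

```python
import random

def evolve(pop, fitness, mutate, repair, hill_climb, generations=100):
    """Steady-state evolutionary loop with repair and local search.

    Every offspring is first repaired so it satisfies the constraints
    (and can contribute to the evolution), then locally optimised by
    hill-climbing before entering the population.
    """
    for _ in range(generations):
        parent = max(random.sample(pop, 3), key=fitness)  # tournament
        child = mutate(parent)
        child = repair(child)        # make the individual valid ...
        child = hill_climb(child)    # ... then improve it locally
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        pop[worst] = child           # replace the worst individual
    return max(pop, key=fitness)
```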