988 results for Monte Carlo algorithms
Abstract:
The majority of research in Operations Research uses methods and algorithms to optimize the pick-up and delivery problem. Most studies aim to solve the vehicle routing problem, accommodating optimal delivery orders, vehicles, etc. This paper focuses on a green logistics approach, in which a city's existing public transport infrastructure is used for the delivery of small and medium-sized packaged goods, thereby helping to reduce urban congestion and greenhouse gas emissions. A study was carried out to investigate the feasibility of the proposed multi-agent-based simulation model in terms of cost, time and energy efficiency. A multimodal Dijkstra shortest-path algorithm and Nested Monte Carlo Search are employed in a two-phase algorithmic approach that generates a time-based cost matrix. The quality of the tour depends on the efficiency of the search algorithm implemented for plan generation and route planning. The results reveal a definite advantage of using public transportation over existing delivery approaches in terms of energy efficiency.
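The first phase described above, building a time-based cost matrix with a shortest-path search over the transport network, can be sketched as follows. This is a minimal illustration, not the paper's actual model: the graph structure and edge weights are invented, and a real multimodal network would mix walking, bus and tram edges with timetable constraints.

```python
import heapq

def dijkstra_times(graph, source):
    """Earliest-arrival (time-based) cost from `source` to every stop.

    `graph` maps a stop to a list of (neighbour, travel_time) edges.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def cost_matrix(graph, stops):
    # One Dijkstra sweep per stop yields the full time-based cost matrix.
    return {s: dijkstra_times(graph, s) for s in stops}
```

A tour-search layer (such as Nested Monte Carlo Search) would then query this matrix instead of re-running the shortest-path computation for every candidate tour.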
Abstract:
Scientific curiosity, exploration of georesources and environmental concerns are pushing the geoscientific research community toward subsurface investigations of ever-increasing complexity. This review explores various approaches to formulate and solve inverse problems in ways that effectively integrate geological concepts with geophysical and hydrogeological data. Modern geostatistical simulation algorithms can produce multiple subsurface realizations that are in agreement with conceptual geological models and statistical rock physics can be used to map these realizations into physical properties that are sensed by the geophysical or hydrogeological data. The inverse problem consists of finding one or an ensemble of such subsurface realizations that are in agreement with the data. The most general inversion frameworks are presently often computationally intractable when applied to large-scale problems and it is necessary to better understand the implications of simplifying (1) the conceptual geological model (e.g., using model compression); (2) the physical forward problem (e.g., using proxy models); and (3) the algorithm used to solve the inverse problem (e.g., Markov chain Monte Carlo or local optimization methods) to reach practical and robust solutions given today's computer resources and knowledge. We also highlight the need to not only use geophysical and hydrogeological data for parameter estimation purposes, but also to use them to falsify or corroborate alternative geological scenarios.
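The third simplification mentioned above, solving the inverse problem with Markov chain Monte Carlo, can be illustrated on a one-parameter toy problem. Everything below is an assumption for illustration: the forward model g(m) = m**2, the observed datum and the noise level stand in for a real geophysical forward solver.

```python
import math, random

def metropolis(log_post, m0, steps=5000, prop_sd=0.2, seed=1):
    """Random-walk Metropolis: draws a chain of samples whose
    long-run distribution is proportional to exp(log_post)."""
    random.seed(seed)
    m, lp = m0, log_post(m0)
    chain = []
    for _ in range(steps):
        cand = m + random.gauss(0.0, prop_sd)
        lp_cand = log_post(cand)
        # Metropolis acceptance test on the log scale
        if random.random() < math.exp(min(0.0, lp_cand - lp)):
            m, lp = cand, lp_cand
        chain.append(m)
    return chain

# Toy inverse problem: scalar parameter m, forward model g(m) = m**2,
# one noisy observation d = 4.0, Gaussian likelihood, flat prior.
def log_post(m, d=4.0, sigma=0.5):
    return -0.5 * ((d - m * m) / sigma) ** 2

chain = metropolis(log_post, m0=1.0)
```

In a realistic subsurface setting the single scalar becomes a high-dimensional model vector, which is precisely why proxy forward models and model compression become attractive.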
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial advance over the state of the art, making it a valuable component for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework is then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios.
Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and demonstrate its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
Abstract:
The study of random probability measures is a lively research topic that has attracted interest from different fields in recent years. In this thesis, we consider random probability measures in the context of Bayesian nonparametrics, where the law of a random probability measure is used as a prior distribution, and in the context of distributional data analysis, where the goal is to perform inference given a sample from the law of a random probability measure. The contributions of this thesis can be subdivided into three topics: (i) the use of almost surely discrete repulsive random measures (i.e., whose support points are well separated) for Bayesian model-based clustering, (ii) the proposal of new laws for collections of random probability measures for Bayesian density estimation of partially exchangeable data subdivided into different groups, and (iii) the study of principal component analysis and regression models for probability distributions seen as elements of the 2-Wasserstein space. Specifically, for point (i) we propose an efficient Markov chain Monte Carlo algorithm for posterior inference, which sidesteps the need for the split-merge reversible jump moves typically associated with poor performance; we propose a model for clustering high-dimensional data by introducing a novel class of anisotropic determinantal point processes; and we study the distributional properties of the repulsive measures, shedding light on important theoretical results that enable more principled prior elicitation and more efficient posterior simulation algorithms. For point (ii), we consider several models suitable for clustering homogeneous populations, inducing spatial dependence across groups of data, and extracting the characteristic traits common to all data groups, and we propose a novel vector autoregressive model to study the growth curves of Singaporean children.
Finally, for point (iii), we propose a novel class of projected statistical methods for distributional data analysis for measures on the real line and on the unit circle.
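For distributions on the real line, as in point (iii), the 2-Wasserstein distance has a simple empirical form: with two samples of equal size it reduces to matching sorted values, which is the quantile-function formulation of W2 in one dimension. A minimal sketch (the sample values used for checking are arbitrary):

```python
import math

def wasserstein2(xs, ys):
    """Empirical 2-Wasserstein distance between two equal-size samples
    on the real line, via the sorted-value (quantile) coupling."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs))
```

A useful sanity check: shifting every observation by a constant c moves the distribution by exactly |c| in the 2-Wasserstein metric.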
Abstract:
The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years, Artificial Intelligence has become increasingly relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will certainly be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project on an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been updated with new features in the data preprocessing phase, allowing the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
Abstract:
This work assessed the environmental impacts of the production and use of 1 MJ of hydrous ethanol (E100) in Brazil in prospective scenarios (2020-2030), considering the deployment of technologies currently under development and better agricultural practices. The life cycle assessment technique was employed, using the CML method for the life cycle impact assessment and the Monte Carlo method for the uncertainty analysis. Abiotic depletion, global warming, human toxicity, ecotoxicity, photochemical oxidation, acidification, and eutrophication were the environmental impact categories analyzed. Results indicate that the proposed improvements (especially no-till farming, scenarios s2 and s4) would lead to environmental benefits in prospective scenarios compared to current ethanol production (scenario s0). Combined first- and second-generation ethanol production (scenarios s3 and s4) would require less agricultural land but would not perform better than the projected first-generation ethanol, although the uncertainties are relatively high. The best use of 1 ha of sugar cane was also assessed, considering the displacement of conventional products by ethanol and electricity. No-till practices combined with the production of first-generation ethanol and electricity (scenario s2) would lead to the largest mitigation effects for global warming and abiotic depletion. For the remaining categories, emissions would not be mitigated by the utilization of the sugar cane products. However, this conclusion is sensitive to the displaced electricity sources.
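The Monte Carlo uncertainty analysis used above amounts to repeatedly sampling uncertain inventory data and propagating each draw through the impact calculation. A minimal sketch, assuming a linear impact model (flow times characterisation factor) with normally distributed flows; the numbers are made up and do not come from the CML inventory:

```python
import random, statistics

def mc_uncertainty(flows, factors, n=20000, seed=7):
    """Monte Carlo uncertainty propagation for a life-cycle impact score.

    `flows` is a list of (mean, sd) pairs for the inventory flows;
    `factors` are the matching characterisation factors.  Each trial
    samples every flow and sums flow * factor into one impact score.
    """
    random.seed(seed)
    scores = []
    for _ in range(n):
        score = sum(random.gauss(mu, sd) * f
                    for (mu, sd), f in zip(flows, factors))
        scores.append(score)
    return statistics.mean(scores), statistics.stdev(scores)
```

The resulting score distribution is what allows statements such as "scenario s3 does not perform better than s0, although the uncertainties are relatively high" to be made quantitatively.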
Abstract:
Often in biomedical research, we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the closed interval [0, 1]. Regression on a variety of parametric densities with support in (0, 1), such as beta regression, can assess important covariate effects; however, these densities are inappropriate in the presence of zeros and/or ones. To circumvent this, we introduce a class of general proportion densities, and further augment the probabilities of zero and one to this general proportion density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and application to a real dataset from a clinical periodontology study.
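A zero-and-one-augmented density of the kind described can be written down directly. The sketch below uses a beta density as the continuous component, which is a familiar special case standing in for the authors' general proportion density, and the parameter values in the check are arbitrary:

```python
import math

def zoab_pdf(y, p0, p1, a, b):
    """Zero-and-one-augmented beta density: point masses p0 at 0 and
    p1 at 1, with the remaining mass (1 - p0 - p1) spread as a
    Beta(a, b) density over the open interval (0, 1)."""
    if y == 0.0:
        return p0
    if y == 1.0:
        return p1
    beta_const = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return (1 - p0 - p1) * y ** (a - 1) * (1 - y) ** (b - 1) / beta_const
```

Regression then links covariates to (p0, p1) and to the beta parameters, typically through logit and log links.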
Abstract:
A combination of the variational principle, expectation value and Quantum Monte Carlo method is used to solve the Schrödinger equation for some simple systems. The results are accurate and the simplicity of this version of the Variational Quantum Monte Carlo method provides a powerful tool to teach alternative procedures and fundamental concepts in quantum chemistry courses. Some numerical procedures are described in order to control accuracy and computational efficiency. The method was applied to the ground state energies and a first attempt to obtain excited states is described.
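A minimal version of such a Variational Quantum Monte Carlo calculation for the hydrogen ground state can fit in a few lines (atomic units throughout). The trial wavefunction psi = exp(-alpha*r) is the standard textbook choice, and at alpha = 1 the local energy is exactly -1/2 Hartree, which makes a convenient classroom check:

```python
import math, random

def vmc_hydrogen(alpha, n_steps=4000, step=0.5, seed=0):
    """Variational Monte Carlo for hydrogen: Metropolis sampling of the
    radial density r**2 * exp(-2*alpha*r), averaging the local energy
    E_L(r) = -alpha**2 / 2 + (alpha - 1) / r."""
    random.seed(seed)
    r = 1.0
    energies = []
    for _ in range(n_steps):
        r_new = abs(r + random.uniform(-step, step))
        # |psi|^2 ratio, including the r^2 radial volume element
        ratio = (r_new / r) ** 2 * math.exp(-2 * alpha * (r_new - r))
        if random.random() < min(1.0, ratio):
            r = r_new
        energies.append(-alpha ** 2 / 2 + (alpha - 1) / r)
    return sum(energies) / len(energies)
```

Scanning alpha and minimizing the average energy is the variational principle in action; the minimum sits at alpha = 1 with zero statistical variance, since the trial function is then exact.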
Abstract:
In this article we present a Bayesian analysis of the stochastic volatility (SV) model and a generalized form of it, with the aim of estimating the volatility of financial time series. Considering some special cases of SV models, we use Markov chain Monte Carlo algorithms and the WinBugs software to obtain posterior summaries for the different forms of SV models. We introduce some Bayesian discrimination techniques for choosing the best model for estimating volatilities and forecasting financial series. An empirical application of the methodology is presented using the IBOVESPA financial series.
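The canonical SV model being estimated can be simulated in a few lines, which is also how prior predictive checks for such models are often set up. The parameter values below are illustrative, not estimates from the IBOVESPA series:

```python
import math, random

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=42):
    """Simulate the canonical stochastic volatility model:
        h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t
        y_t = exp(h_t / 2) * eps_t,   with eta_t, eps_t ~ N(0, 1).
    Returns the returns y and the latent log-volatilities h."""
    random.seed(seed)
    h = mu
    ys, hs = [], []
    for _ in range(n):
        h = mu + phi * (h - mu) + sigma_eta * random.gauss(0.0, 1.0)
        ys.append(math.exp(h / 2) * random.gauss(0.0, 1.0))
        hs.append(h)
    return ys, hs
```

MCMC estimation then treats the h_t as latent variables to be sampled jointly with (mu, phi, sigma_eta), which is what WinBugs automates from a model specification.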
Abstract:
The interplay between the biocolloidal characteristics (especially size and charge), pH, salt concentration and thermal energy results in a unique collection of mesoscopic forces of importance to molecular organization and function in biological systems. By means of Monte Carlo simulations and semi-quantitative analysis in terms of perturbation theory, we describe a general electrostatic mechanism that produces attraction at low electrolyte concentrations. This charge regulation mechanism, due to titrating amino acid residues, is discussed in a purely electrostatic framework. The complexation data reported here for the interaction between a polyelectrolyte chain and the proteins albumin, goat and bovine alpha-lactalbumin, beta-lactoglobulin, insulin, k-casein, lysozyme and pectin methylesterase illustrate the importance of the charge regulation mechanism. Special attention is given to pH values close to pI, where ion-dipole and charge regulation interactions can overcome the repulsive ion-ion interaction. By means of protein mutations, we confirm the importance of the charge regulation mechanism and quantify when the complexation is dominated either by charge regulation or by the ion-dipole term.
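The charge-regulation idea, titratable residues whose protonation state fluctuates with pH, can be illustrated with a stripped-down constant-pH Monte Carlo move. This sketch keeps only the ideal titration term ln(10)*(pH - pKa) and deliberately omits the site-site electrostatics that the paper's simulations include, so it reproduces the Henderson-Hasselbalch curve rather than charge-regulation attraction:

```python
import math, random

def titrate(sites, pH, n_sweeps=2000, seed=3):
    """Constant-pH Monte Carlo on independent titratable sites.

    `sites` lists the pKa of each site; each sweep attempts to flip
    every site's protonation state with Metropolis acceptance based on
    the ideal free-energy cost ln(10) * (pH - pKa), in kT units.
    Returns the average protonated fraction per site."""
    random.seed(seed)
    protonated = [True] * len(sites)
    counts = [0] * len(sites)
    for _ in range(n_sweeps):
        for i, pKa in enumerate(sites):
            dE = math.log(10) * (pH - pKa)  # cost of protonating site i
            if protonated[i]:
                dE = -dE                     # the move would deprotonate
            if random.random() < min(1.0, math.exp(-dE)):
                protonated[i] = not protonated[i]
            counts[i] += protonated[i]
    return [c / n_sweeps for c in counts]
```

Adding the Coulomb energy between sites (and between protein and polyelectrolyte) to dE is what couples protonation to the approach of a charged object, which is the regulation mechanism the paper quantifies.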
Abstract:
Large-conductance Ca(2+)-activated K(+) channels (BK) play a fundamental role in modulating membrane potential in many cell types. The gating of BK channels and its modulation by Ca(2+) and voltage have been the subject of intensive research over almost three decades, yielding several of the most complicated kinetic mechanisms ever proposed. These mechanisms are characterized by a large number of open and closed states arranged, respectively, in two planes, named tiers. Transitions between states in the same plane are cooperative and modulated by Ca(2+); transitions across planes are highly concerted and voltage-dependent. Here we reexamine the validity of the two-tiered hypothesis by restricting attention to the modulation by Ca(2+). Large single-channel data sets at five Ca(2+) concentrations were simultaneously analyzed from a Bayesian perspective using hidden Markov models and Markov chain Monte Carlo stochastic integration techniques. Our results support a dramatic reduction in model complexity, favoring a simple mechanism derived from the Monod-Wyman-Changeux allosteric model for homotetramers that is able to explain the Ca(2+) modulation of the gating process. This model differs from the standard Monod-Wyman-Changeux scheme in that it distinguishes whether two Ca(2+) ions are bound to adjacent or diagonal subunits of the tetramer.
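The standard Monod-Wyman-Changeux model that serves as the starting point has a simple closed-form open probability, which may help make the "dramatic reduction in model complexity" concrete. The parameter values below are illustrative placeholders, not the fitted values, and the sketch does not include the paper's adjacent/diagonal-subunit refinement:

```python
def mwc_popen(ca, L0=1000.0, Kc=10.0, Ko=1.0, n=4):
    """Open probability of a standard MWC homotetramer: the channel is
    open or closed as a whole, each of the n subunits binds Ca2+ with
    dissociation constant Ko (open conformation) or Kc (closed), and
    L0 is the closed/open equilibrium constant with no Ca2+ bound."""
    open_weight = (1 + ca / Ko) ** n
    closed_weight = L0 * (1 + ca / Kc) ** n
    return open_weight / (open_weight + closed_weight)
```

Because Ko < Kc, raising the Ca(2+) concentration shifts the equilibrium toward the open conformation, which is the allosteric activation the data support.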
Abstract:
Background: Hepatitis C virus (HCV) is an important human pathogen affecting around 3% of the human population. In Brazil, it is estimated that there are approximately 2 to 3 million chronic HCV carriers. There are few reports of HCV prevalence in Rondonia State (RO), but it was estimated at 9.7% from 1999 to 2005. The aim of this study was to characterize HCV genotypes in 58 chronically HCV-infected patients from Porto Velho, Rondonia (RO), Brazil. Methods: A fragment of 380 bp of the NS5B region was amplified by nested PCR for genotyping analysis. Viral sequences were characterized by phylogenetic analysis using reference sequences obtained from GenBank (n = 173). Sequences were aligned using the Muscle software and edited in the Se-Al software. Phylogenetic analyses were conducted using Bayesian Markov chain Monte Carlo (MCMC) simulation to obtain the MCC tree, using BEAST v.1.5.3. Results: Of the 58 anti-HCV-positive samples, 22 were positive for the NS5B fragment and successfully sequenced. Genotype 1b was the most prevalent in this population (50%), followed by 1a (27.2%), 2b (13.6%) and 3a (9.0%). Conclusions: This study is the first report of HCV genotypes from Rondonia State, and subtype 1b was found to be the most prevalent. This subtype is mostly found among people with a previous history of blood transfusion, but more detailed studies with a larger number of patients are necessary to understand HCV dynamics in the population of Rondonia State, Brazil.
Abstract:
Background: Hepatitis B virus (HBV) can be classified into nine genotypes (A-I), defined by a sequence divergence of more than 8% over the complete genome. This study aims to identify the genotypic distribution of HBV in 40 HBsAg-positive patients from Rondonia, Brazil. A fragment of 1306 bp, partially comprising the overlapping surface and polymerase genes, was amplified by PCR. Amplified DNA was purified and sequenced on an ABI PRISM (R) 377 Automatic Sequencer (Applied Biosystems, Foster City, CA, USA). The obtained sequences were aligned with reference sequences from GenBank using the Clustal X software and then edited with the Se-Al software. Phylogenetic analyses were conducted by the Markov chain Monte Carlo (MCMC) approach using BEAST v.1.5.3. Results: The subgenotype distribution was A1 (37.1%), D3 (22.8%), F2a (20.0%), D4 (17.1%) and D2 (2.8%). Conclusions: These results, from the first HBV genotypic characterization in Rondonia State, are consistent with other studies in Brazil, showing the presence of several HBV genotypes, which reflects the mixed origin of the population, involving descendants of Native Americans, Europeans, and Africans.
Abstract:
Background: The Brazilian population mainly descends from European colonizers, Africans and Native Americans. Some Afro-descendants have lived in small isolated communities since the slavery period. The epidemiological status of HBV infection in Quilombo communities in the northeast of Brazil remains unknown. The aim of this study was to characterize the HBV genotypes circulating in an isolated Quilombo community in Maranhao State, Brazil. Methods: Seventy-two samples from the Frechal Quilombo community in Maranhao were collected. All serum samples were screened by enzyme-linked immunosorbent assays for the presence of hepatitis B surface antigen (HBsAg). HBsAg-positive samples were submitted to DNA extraction, and a fragment of 1306 bp partially comprising the HBsAg and polymerase coding regions (S/POL) was amplified by nested PCR and its nucleotide sequence determined. Viral isolates were genotyped by phylogenetic analysis using reference sequences from each genotype obtained from GenBank (n = 320). Sequences were aligned using the Muscle software and edited in the Se-Al software. Bayesian phylogenetic analyses were conducted using the Markov chain Monte Carlo (MCMC) method to obtain the MCC tree, using BEAST v.1.5.3. Results: Of the 72 individuals, 9 (12.5%) were HBsAg-positive, and 4 of them were successfully sequenced for the 1306 bp fragment. All these samples were genotype A1 and grouped together with other sequences reported from Brazil. Conclusions: This study is the first report on the HBV genotypes of this community in Maranhao State, Brazil: we found a high frequency of HBV infection and the exclusive presence of subgenotype A1 in this Afro-descendant community.