992 results for Monte Carlo Method


Relevance:

80.00%

Publisher:

Abstract:

In this paper, we propose a new nonlocal density functional theory characterization procedure, the finite wall thickness model, for nanoporous carbons, whereby heterogeneity of pore size and pore walls in the carbon is probed simultaneously. We determine the pore size distributions and pore wall thickness distributions of several commercial activated carbons and coal chars, with good correspondence with X-ray diffraction. It is shown that the conventional infinite wall thickness approach slightly overestimates the pore size. Pore-pore correlation has been shown to have a negligible effect on the prediction of pore size and pore wall thickness distributions for small molecules such as argon used in characterization. By utilizing the structural parameters (pore size and pore wall thickness distributions) in the generalized adsorption isotherm (GAI), we are able to predict the adsorption uptake of supercritical gases in BPL and Norit R1 Extra carbons, in excellent agreement with experimental adsorption uptake data up to 60 MPa. The method offers a useful technique for probing features of the solid skeleton, hitherto studied by crystallographic methods.
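
As a rough illustration of the generalized adsorption isotherm referred to above, the Python sketch below numerically integrates a pore size distribution against single-pore isotherms to obtain total uptake; the distribution and the local isotherm used here are hypothetical placeholders, not the paper's NLDFT kernel or fitted PSD.

import numpy as np

# Hypothetical pore widths (nm) and a placeholder pore size distribution f(H)
H = np.linspace(0.4, 4.0, 200)                      # pore width grid
f_H = np.exp(-(H - 1.2) ** 2 / (2 * 0.3 ** 2))      # assumed Gaussian-like PSD (illustrative)
f_H /= np.trapz(f_H, H)                             # normalise so the PSD integrates to 1

def local_isotherm(P, H):
    """Placeholder single-pore isotherm rho(P, H) in mmol/g; stands in for the
    NLDFT kernel that would be computed for each pore width."""
    b = 0.5 * H                                     # toy affinity parameter growing with pore width
    return 10.0 * b * P / (1.0 + b * P)

def gai_uptake(P, H, f_H):
    """Generalized adsorption isotherm: N(P) = integral of f(H) * rho(P, H) dH."""
    return np.trapz(f_H * local_isotherm(P, H), H)

pressures = np.linspace(0.1, 60.0, 25)              # MPa
uptake = [gai_uptake(P, H, f_H) for P in pressures]
print(list(zip(pressures.round(1), np.round(uptake, 2))))

In the paper's approach, the local isotherms would instead come from NLDFT calculations for pores with finite wall thickness.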

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments. (C) 2003 Elsevier B.V. All rights reserved.
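
To make the Laplace-approximation step concrete, the sketch below approximates the log marginal likelihood of a toy one-parameter model from its posterior mode and curvature; the model, prior and data are invented and the calculation is generic, not the paper's error correction model derivation. Normalizing such approximate evidences across candidate cointegrating ranks would give posterior rank probabilities.

import numpy as np
from scipy.optimize import minimize

# Toy data: a normal likelihood with unknown mean mu and a N(0, tau^2) prior (illustrative only)
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
tau = 10.0

def neg_log_posterior(mu):
    loglik = -0.5 * np.sum((y - mu) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)
    logprior = -0.5 * (mu / tau) ** 2 - 0.5 * np.log(2 * np.pi * tau ** 2)
    return -(loglik + logprior)

# Posterior mode and curvature (second derivative) at the mode
res = minimize(lambda m: neg_log_posterior(m[0]), x0=[0.0])
mode = res.x[0]
eps = 1e-4
hess = (neg_log_posterior(mode + eps) - 2 * neg_log_posterior(mode)
        + neg_log_posterior(mode - eps)) / eps ** 2

# Laplace approximation (one parameter): log p(y) ~= log p(y, mode) + 0.5*log(2*pi) - 0.5*log(hess)
log_evidence = -neg_log_posterior(mode) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(hess)
print("Laplace log-evidence:", log_evidence)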

Relevance:

80.00%

Publisher:

Abstract:

In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimation of the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically by their application to three real data sets and by a small-scale Monte Carlo experiment.
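
A minimal sketch of the bootstrap method for the standard errors of component means, using scikit-learn's GaussianMixture (an EM implementation) on simulated data rather than the paper's data sets; component labels are aligned by sorting the means to sidestep label switching.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated univariate two-component data (placeholder for a real data set)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(3, 1, 100)]).reshape(-1, 1)

def fitted_means(data):
    gm = GaussianMixture(n_components=2, n_init=5, random_state=0).fit(data)
    return np.sort(gm.means_.ravel())      # sort to handle label switching

point_estimates = fitted_means(x)

# Nonparametric bootstrap: refit the mixture on resampled data sets
B = 200
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, len(x), len(x))
    boot[b] = fitted_means(x[idx])

print("component means:", point_estimates)
print("bootstrap standard errors:", boot.std(axis=0, ddof=1))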

Relevance:

80.00%

Publisher:

Abstract:

SETTING: Chronic obstructive pulmonary disease (COPD) is the third leading cause of death among adults in Brazil. OBJECTIVE: To evaluate mortality and hospitalisation trends due to COPD in Brazil during the period 1996-2008. DESIGN: We used the official health statistics system to obtain data on mortality (1996-2008) and morbidity (1998-2008) due to COPD and all respiratory diseases (tuberculosis: codes A15-A16; lung cancer: code C34; and all diseases coded J40-J47 in the 10th Revision of the International Classification of Diseases) as the underlying cause, in persons aged 45-74 years. We used the Joinpoint Regression Program, which fits log-linear Poisson regression models and applies a Monte Carlo permutation test, to identify points where trend lines change significantly in magnitude or direction. RESULTS: The annual percent change in age-adjusted death rates due to COPD declined by 2.7% in men (95%CI -3.6 to -1.8) and by 2.0% in women (95%CI -2.9 to -1.0); for all respiratory causes it declined by 1.7% in men (95%CI -2.4 to -1.0) and by 1.1% in women (95%CI -1.8 to -0.3). Although hospitalisation rates for COPD are declining, the hospital admission fatality rate increased in both sexes. CONCLUSION: COPD is still a leading cause of mortality in Brazil despite the observed decline in mortality and hospitalisation rates for both sexes.
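
For readers unfamiliar with how an annual percent change (APC) is derived from a log-linear Poisson model of rates, the sketch below fits such a trend to invented counts and populations; it does not reproduce the Joinpoint software or its Monte Carlo permutation test for locating change-points.

import numpy as np
import statsmodels.api as sm

# Invented yearly death counts and population at risk (persons aged 45-74)
years = np.arange(1996, 2009)
deaths = np.array([5200, 5150, 5100, 5080, 5000, 4950, 4900,
                   4870, 4800, 4760, 4700, 4680, 4650])
population = np.full_like(years, 30_000_000)

# Log-linear trend: log(rate) = a + b*year, fitted as Poisson with a log-population offset
X = sm.add_constant(years - years[0])
model = sm.GLM(deaths, X, family=sm.families.Poisson(),
               offset=np.log(population)).fit()

slope = model.params[1]
apc = 100 * (np.exp(slope) - 1)                 # annual percent change implied by the slope
lo, hi = 100 * (np.exp(model.conf_int()[1]) - 1)
print(f"APC = {apc:.1f}% (95% CI {lo:.1f} to {hi:.1f})")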

Relevance:

80.00%

Publisher:

Abstract:

Radiation dose calculations in nuclear medicine depend on quantification of activity via planar and/or tomographic imaging methods. However, both methods have inherent limitations, and the accuracy of activity estimates varies with object size, background levels, and other variables. The goal of this study was to evaluate the limitations of quantitative imaging with planar and single photon emission computed tomography (SPECT) approaches, with a focus on activity quantification for use in calculating absorbed dose estimates for normal organs and tumors. To do this we studied a series of phantoms of varying geometric complexity, with three radionuclides whose decay schemes varied from simple to complex. Four aqueous concentrations of 99mTc, 131I, and 111In (74, 185, 370, and 740 kBq mL^-1) were placed in spheres of four different sizes in a water-filled phantom, with three different levels of activity in the surrounding water. Planar and SPECT images of the phantoms were obtained on a modern SPECT/computed tomography (CT) system. These radionuclide and concentration/background studies were repeated using a cardiac phantom and a modified torso phantom with liver and "tumor" regions containing the radionuclide concentrations and with the same varying background levels. Planar quantification was performed using the geometric mean approach, with attenuation correction (AC), and with and without scatter correction (SC and NSC). SPECT images were reconstructed using attenuation maps (AM) for AC; scatter windows were used to perform SC during image reconstruction. For spherical sources with corrected data, good accuracy was observed (generally within +/-10% of known values) for the largest sphere (11.5 mL) with both planar and SPECT methods for 99mTc and 131I; accuracy was poorest, deviating most from known values, for smaller objects, most notably for 111In. SPECT quantification was affected by the partial volume effect in smaller objects and generally showed larger errors than the planar results in these cases for all radionuclides. For the cardiac phantom, results were the most accurate of all of the experiments for all radionuclides. Background subtraction was an important factor influencing these results. The contribution of scattered photons was important in quantification with 131I; if scatter was not accounted for, activity tended to be overestimated using planar quantification methods. For the torso phantom experiments, results show a clear underestimation of activity compared to the previous experiments with spherical sources for all radionuclides. Despite some variations that were observed as the level of background increased, the SPECT results were more consistent across different activity concentrations. Planar or SPECT quantification on state-of-the-art gamma cameras with appropriate quantitative processing can provide accuracies of better than 10% for large objects and modest target-to-background concentrations; however, when smaller objects are used, in the presence of higher background, and for nuclides with more complex decay schemes, SPECT quantification methods generally produce better results. Health Phys. 99(5):688-701; 2010.
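
As background for the planar (geometric mean) quantification mentioned above, the sketch below implements the standard conjugate-view formula with attenuation correction; the counts, attenuation coefficient and calibration factor are placeholders, and scatter correction is not included.

import numpy as np

def conjugate_view_activity(counts_ant, counts_post, mu, thickness,
                            source_depth_term, cal_factor):
    """Estimate activity (MBq) from opposed planar views using the geometric mean.

    counts_ant, counts_post : background-subtracted counts in the anterior/posterior ROIs
    mu                      : effective linear attenuation coefficient (1/cm)
    thickness               : phantom/patient thickness along the view axis (cm)
    source_depth_term       : source self-attenuation correction f for a source of
                              thickness d, f = (mu*d/2)/sinh(mu*d/2) (~1 for thin sources)
    cal_factor              : system sensitivity (counts per MBq for the acquisition time)
    """
    geometric_mean = np.sqrt(counts_ant * counts_post)
    transmission = np.exp(-mu * thickness)          # attenuation across the full thickness
    return geometric_mean * source_depth_term / (np.sqrt(transmission) * cal_factor)

# Example with made-up numbers for a small 99mTc source in a 20 cm water phantom
print(conjugate_view_activity(counts_ant=48000, counts_post=41000,
                              mu=0.11, thickness=20.0,
                              source_depth_term=1.0, cal_factor=9000.0))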

Relevance:

80.00%

Publisher:

Abstract:

The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method is to employ multiple eigen-time series in each ROI to avoid the temporal information loss, inherent in averaging (e.g., in yielding a single "representative" time series per ROI), during identification of Granger causality. Such information loss may, in turn, lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. Using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal component estimation from ROIs). The usefulness of the CGA approach with real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
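
For context, the sketch below implements the conventional pairwise Granger causality test (restricted vs. full autoregression compared with an F-test) on synthetic "representative" time series; this is the baseline analysis that CGA generalizes to multiple eigen-time series per ROI, not the CGA algorithm itself.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
T, p = 300, 2                     # number of time points and autoregressive order
x = rng.normal(size=T)            # "source" ROI time series (e.g. an averaged BOLD signal)
y = np.zeros(T)
for t in range(p, T):             # y is driven by past x, so x should Granger-cause y
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.normal(scale=0.5)

def lagged_design(series_list, p):
    n = len(series_list[0])
    cols = [np.ones(n - p)]
    for s in series_list:
        cols += [s[p - k - 1:n - k - 1] for k in range(p)]   # lags 1..p of each series
    return np.column_stack(cols)

target = y[p:]
X_full = lagged_design([y, x], p)          # past of y and past of x
X_restr = lagged_design([y], p)            # past of y only

rss_full = np.sum((target - X_full @ np.linalg.lstsq(X_full, target, rcond=None)[0]) ** 2)
rss_restr = np.sum((target - X_restr @ np.linalg.lstsq(X_restr, target, rcond=None)[0]) ** 2)

df1, df2 = p, len(target) - X_full.shape[1]
F = ((rss_restr - rss_full) / df1) / (rss_full / df2)
print("F =", F, "p-value =", 1 - stats.f.cdf(F, df1, df2))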

Relevance:

80.00%

Publisher:

Abstract:

Hepatitis B is a worldwide health problem affecting about 2 billion people, of whom more than 350 million are chronic carriers of the virus. Nine HBV genotypes (A to I) have been described. The geographical distribution of HBV genotypes is not completely understood due to the limited number of samples from some parts of the world. One such example is Colombia, where few studies have described the HBV genotypes. In this study, we characterized HBV genotypes in 143 HBsAg-positive volunteer blood donors from Colombia. A fragment of 1306 bp partially comprising the HBsAg and DNA polymerase coding regions (S/POL) was amplified and sequenced. Bayesian phylogenetic analyses were conducted using the Markov chain Monte Carlo (MCMC) approach to obtain the maximum clade credibility (MCC) tree using BEAST v.1.5.3. Of all samples, 68 were positive and 52 were successfully sequenced. Genotype F was the most prevalent in this population (77%) - subgenotypes F3 (75%) and F1b (2%). Genotype G (7.7%) and subgenotype A2 (15.3%) were also found. Genotype G sequence analysis suggests distinct introductions of this genotype into the country. Furthermore, we estimated the time of the most recent common ancestor (TMRCA) for each HBV/F subgenotype and also for the Colombian F3 sequences using two different datasets: (i) 77 sequences comprising 1306 bp of the S/POL region and (ii) 283 sequences comprising 681 bp of the S/POL region. We also used two previously estimated evolutionary rates: (i) 2.60 x 10^-4 and (ii) 1.5 x 10^-5 substitutions/site/year. Here we report the HBV genotypes circulating in Colombia and estimate the TMRCA for the four different subgenotypes of genotype F. (C) 2010 Elsevier B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

Hepatitis C virus (HCV) is a frequent cause of acute and chronic hepatitis and a leading cause of cirrhosis of the liver and hepatocellular carcinoma. HCV is classified into six major genotypes and more than 70 subtypes. In Colombian blood banks, serum samples were tested for anti-HCV antibodies using a third-generation ELISA. The aim of this study was to characterize the viral sequences in plasma of 184 volunteer blood donors who attended the "Banco Nacional de Sangre de la Cruz Roja Colombiana", Bogota, Colombia. Three different HCV genomic regions were amplified by nested PCR. The first was a segment of 180 bp of the 5'UTR, used to confirm the previous diagnosis by ELISA. From the samples positive for the 5'UTR, two further segments were amplified for genotyping and subtyping by phylogenetic analysis: a segment of 380 bp from the NS5B region and a segment of 391 bp from the E1 region. The distribution of HCV subtypes was: 1b (82.8%), 1a (5.7%), 2a (5.7%), 2b (2.8%), and 3a (2.8%). By applying Bayesian Markov chain Monte Carlo simulation, it was estimated that HCV-1b was introduced into Bogota around 1950. This subtype then spread at an exponential rate from about 1970 to about 1990, after which transmission of HCV was reduced by anti-HCV testing of this population. Among Colombian blood donors, HCV genotype 1b is the most frequent genotype, especially in large urban conglomerates such as Bogota, as is the case in other South American countries. J. Med. Virol. 82: 1889-1898, 2010. (C) 2010 Wiley-Liss, Inc.

Relevance:

80.00%

Publisher:

Abstract:

Molecular epidemiological data on hepatitis B virus (HBV) in Chile remain incomplete. Since HBV genotype F is the most prevalent in the country, the goal of this study was to obtain full-length HBV genome sequences from chronically infected patients in order to determine their subgenotypes and the occurrence of resistance-associated mutations. Twenty-one serum samples from antiviral drug-naive patients with chronic hepatitis B were subjected to full-length PCR amplification, and both strands of the whole genomes were fully sequenced. Phylogenetic analyses were performed along with reference sequences available from GenBank (n = 290). The sequences were aligned using Clustal X and edited in the SE-AL software. Bayesian phylogenetic analyses were conducted by Markov chain Monte Carlo (MCMC) simulations for 10 million generations in order to obtain the substitution tree using BEAST. The sequences were also analyzed for the presence of primary drug resistance mutations using CodonCode Aligner software. The phylogenetic analyses indicated that all sequences belonged to HBV subgenotype F1b, clustered into four different groups, suggesting that diverse lineages of this subgenotype may be circulating within this population of Chilean patients. J. Med. Virol. 83: 1530-1536, 2011. (C) 2011 Wiley-Liss, Inc.

Relevance:

80.00%

Publisher:

Abstract:

A significant loss in electron probe current can occur before the electron beam enters the specimen chamber of an environmental scanning electron microscope (ESEM). This loss results from electron scattering in a gaseous jet formed inside and downstream (above) the pressure-limiting aperture (PLA), which separates the high-pressure and high-vacuum regions of the microscope. The electron beam loss above the PLA has been calculated for three different ESEMs, each with a different PLA geometry: an ElectroScan E3, a Philips XL30 ESEM, and a prototype instrument. The mass thickness of gas above the PLA in each case has been determined by Monte Carlo simulation of the gas density variation in the gas jet. It has been found that the PLA configurations used in the commercial instruments produce considerable loss in the electron probe current that dramatically degrades their performance at high chamber pressure and low accelerating voltage. These detrimental effects are minimized in the prototype instrument, which has an optimized thin-foil PLA design.
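
A back-of-the-envelope way to relate the simulated mass thickness of gas above the PLA to beam loss is a Beer-Lambert estimate of the unscattered fraction, sketched below; the cross-section and mass-thickness values are placeholders, and the calculation does not reproduce the paper's Monte Carlo treatment of the jet density profile.

import numpy as np

AVOGADRO = 6.022e23   # molecules per mole

def unscattered_fraction(mass_thickness_g_cm2, molar_mass_g_mol, total_cross_section_cm2):
    """Beer-Lambert estimate of the fraction of beam electrons that traverse the
    gas column above the pressure-limiting aperture without scattering.

    mass_thickness_g_cm2    : integrated gas density along the beam path (g/cm^2),
                              e.g. from a Monte Carlo simulation of the jet density
    molar_mass_g_mol        : molar mass of the gas (e.g. 18 for water vapour)
    total_cross_section_cm2 : assumed total scattering cross-section per molecule
                              at the chosen accelerating voltage
    """
    molecules_per_area = mass_thickness_g_cm2 * AVOGADRO / molar_mass_g_mol
    return np.exp(-total_cross_section_cm2 * molecules_per_area)

# Illustrative numbers only: a 1e-6 g/cm^2 water-vapour column and a 1e-17 cm^2 cross-section
print(unscattered_fraction(1e-6, 18.0, 1e-17))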

Relevance:

80.00%

Publisher:

Abstract:

A new addition to the family of single-molecule magnets is reported: an Fete cage stabilized with benzoate and pyridonate ligands. Monte Carlo methods have been used to derive exchange parameters within the cage, and hence model susceptibility behavior.
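
As a generic illustration of how Monte Carlo methods can link an assumed exchange parameter to susceptibility, the sketch below runs a classical-spin Metropolis simulation on a small Heisenberg ring; the spin topology, coupling and temperatures are invented and do not represent the cage geometry or the model used in the paper.

import numpy as np

rng = np.random.default_rng(3)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def ring_energy(spins, J):
    """Classical Heisenberg energy E = -J * sum_i S_i . S_{i+1} on a periodic ring."""
    return -J * np.sum(spins * np.roll(spins, -1, axis=0))

def susceptibility(J, T, n_spins=8, sweeps=4000, burn_in=1000):
    """Metropolis estimate of chi = (<M^2> - <M>^2) / (N * T) per spin (k_B = 1)."""
    spins = random_unit_vectors(n_spins)
    energy = ring_energy(spins, J)
    mags = []
    for sweep in range(sweeps):
        for i in range(n_spins):
            trial = spins.copy()
            trial[i] = random_unit_vectors(1)[0]           # propose a new direction
            d_e = ring_energy(trial, J) - energy
            if d_e <= 0 or rng.random() < np.exp(-d_e / T):
                spins, energy = trial, energy + d_e        # Metropolis acceptance
        if sweep >= burn_in:
            mags.append(spins.sum(axis=0))
    mags = np.array(mags)
    m2 = np.mean(np.sum(mags ** 2, axis=1))
    m_mean = np.mean(mags, axis=0)
    return (m2 - np.dot(m_mean, m_mean)) / (n_spins * T)

# chi(T) for an assumed ferromagnetic coupling; fitting the exchange parameter would
# wrap this forward calculation in an optimisation against measured susceptibility data.
for T in (2.0, 5.0, 10.0):
    print(T, susceptibility(J=1.0, T=T))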

Relevance:

80.00%

Publisher:

Abstract:

Quantum dynamics simulations can be improved using novel quasiprobability distributions based on non-orthogonal Hermitian kernel operators. This introduces arbitrary functions (gauges) into the stochastic equations, which can be used to tailor them for improved calculations. A possible application to full quantum dynamics simulations of BECs is presented. (C) 2001 Elsevier Science B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

1. There are a variety of methods that could be used to increase the efficiency of the design of experiments. However, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are becoming increasingly used as efficient methods for designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process that specifically allows for sequential development (i.e. where the results of early phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpins both the data-dependent and -independent methods with the use of illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.
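
A minimal example of the data-dependent (simulation) approach mentioned above: the power of a hypothetical two-arm parallel-group design is estimated by repeatedly simulating and analysing trials; the effect size, variability, sample sizes and test are all invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def simulate_trial(n_per_arm, true_effect, sd):
    """Simulate one two-arm trial and return the t-test p-value."""
    placebo = rng.normal(0.0, sd, n_per_arm)
    active = rng.normal(true_effect, sd, n_per_arm)
    return stats.ttest_ind(active, placebo).pvalue

def simulated_power(n_per_arm, true_effect=5.0, sd=12.0, n_sims=2000, alpha=0.05):
    """Data-dependent design evaluation: fraction of simulated trials that reject H0."""
    rejections = sum(simulate_trial(n_per_arm, true_effect, sd) < alpha
                     for _ in range(n_sims))
    return rejections / n_sims

for n in (40, 80, 120):
    print(n, simulated_power(n))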

Relevance:

80.00%

Publisher:

Abstract:

The present paper addresses two major concerns that were identified when developing neural network-based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models do not appear to be readily available to a corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described below for any problem at hand, given that sufficient experimental data exist. This applies even in cases when the understanding of the underlying processes is poor. The second problem arises in cases when all the required inputs for a model are not known or can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
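
The sketch below illustrates the Monte Carlo idea described above: inputs known only as ranges are sampled repeatedly and propagated through a model, and the spread of the outputs indicates the sensitivity to input uncertainty; the corrosion-rate function is a made-up surrogate standing in for a trained neural network, and the input ranges are arbitrary.

import numpy as np

rng = np.random.default_rng(5)

def corrosion_rate(temperature_c, ph, co2_pressure_bar):
    """Hypothetical surrogate for a trained corrosion model (mm/year).
    In practice this would be a call to the neural network's prediction routine."""
    return 0.1 * co2_pressure_bar * np.exp(0.05 * temperature_c) * (7.0 - ph)

n_samples = 10_000
# Inputs given as ranges rather than single values (uniform sampling assumed)
temperature = rng.uniform(40.0, 60.0, n_samples)
ph = rng.uniform(4.5, 5.5, n_samples)
co2 = rng.uniform(1.0, 3.0, n_samples)

rates = corrosion_rate(temperature, ph, co2)
print("mean rate      :", rates.mean())
print("5th-95th perc. :", np.percentile(rates, [5, 95]))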

Relevance:

80.00%

Publisher:

Abstract:

Recent progress in the production, purification, and experimental and theoretical investigation of carbon nanotubes for hydrogen storage is reviewed. From the industrial point of view, the chemical vapor deposition process has shown advantages over laser ablation and electric-arc-discharge methods. The ultimate goal in nanotube synthesis should be to gain control over the geometrical aspects of nanotubes, such as location and orientation, and the atomic structure of nanotubes, including helicity and diameter. There is currently no effective and simple purification procedure that fulfills all requirements for processing carbon nanotubes. Purification is still the bottleneck for technical applications, especially where large amounts of material are required. Although alkali-metal-doped carbon nanotubes showed high H2 weight uptake, further investigations indicated that some of this uptake was due to water rather than hydrogen. This discovery indicates a potential source of error in evaluating the storage capacity of doped carbon nanotubes. Nevertheless, currently available single-wall nanotubes yield a hydrogen uptake value near 4 wt% under moderate pressure and room temperature. A further 50% increase is needed to meet U.S. Department of Energy targets for commercial exploitation. Meeting this target will require combining experimental and theoretical efforts to achieve a full understanding of the adsorption process, so that the uptake can be rationally optimized to commercially attractive levels. Large-scale production and purification of carbon nanotubes and substantial improvement of H2 storage capacity in carbon nanotubes represent significant technological and theoretical challenges in the years to come.