50 results for Data modeling
Abstract:
Modeling volatile organic compound (VOC) adsorption onto cup-stacked carbon nanotubes (CSCNT) using the linear driving force model. Volatile organic compounds (VOCs) are an important category of air pollutants, and adsorption has been employed in the treatment (or simply the concentration) of these compounds. The current study used an ordinary analytical methodology to evaluate the properties of a cup-stacked carbon nanotube (CSCNT), a stacking morphology of truncated conical graphene with large amounts of open edges on the outer surface and empty central channels. This work used a Carbotrap bearing a cup-stacked structure (composite); for comparison, Carbotrap was used as a reference (without the nanotube). The retention and saturation capacities of both adsorbents at each concentration used (1, 5, 20 and 35 ppm of toluene and phenol) were evaluated. The composite performance was greater than that of Carbotrap; the saturation capacity of the composite was, on average, 67% higher than that of Carbotrap. The Langmuir isotherm model was used to fit equilibrium data for both adsorbents, and a linear driving force (LDF) model was used to quantify intraparticle adsorption kinetics. The LDF model was suitable for describing the curves.
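A minimal numerical sketch of the linear driving force (LDF) formulation combined with a Langmuir equilibrium isotherm, as described above. All parameter values (q_max, b, k_ldf, C) are hypothetical placeholders, not the fitted values reported in the study.

    import numpy as np

    # Hypothetical parameters for illustration only (not the study's fitted values)
    q_max, b = 120.0, 0.35   # Langmuir capacity (mg/g) and affinity constant (1/ppm)
    k_ldf = 0.02             # LDF intraparticle mass-transfer coefficient (1/s)
    C = 20.0                 # bulk toluene concentration (ppm)

    def q_eq(c):
        """Langmuir isotherm: equilibrium loading at concentration c."""
        return q_max * b * c / (1.0 + b * c)

    # Integrate the LDF equation dq/dt = k_ldf * (q_eq(C) - q) with explicit Euler steps
    dt, t_end = 1.0, 600.0
    t = np.arange(0.0, t_end + dt, dt)
    q = np.zeros_like(t)
    for i in range(1, len(t)):
        q[i] = q[i - 1] + dt * k_ldf * (q_eq(C) - q[i - 1])

    print(f"loading after {t_end:.0f} s: {q[-1]:.1f} mg/g (equilibrium {q_eq(C):.1f} mg/g)")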
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of interest of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method is to employ multiple eigen-time series in each ROI to avoid temporal information loss during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single "representative" time series per ROI). This, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. Using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal component estimation from ROIs). The usefulness of the CGA approach on real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
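The sketch below illustrates the underlying idea in a much-simplified form: keep several eigen-time series per ROI (via SVD-based PCA) and compare lagged regressions of one ROI's components with and without the other ROI's components. It is not the authors' CGA implementation, which combines PCA with partial canonical correlation and bootstrap significance testing; the simulated data and the number of components kept are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    T, n_vox, n_pc = 200, 30, 2          # time points, voxels per ROI, components kept

    # Simulated example: ROI1 drives ROI2 with a one-sample lag (illustrative only)
    source = rng.standard_normal(T)
    roi1 = source[:, None] + 0.5 * rng.standard_normal((T, n_vox))
    driven = np.roll(source, 1)
    roi2 = 0.8 * driven[:, None] + 0.5 * rng.standard_normal((T, n_vox))

    def principal_components(X, k):
        """First k eigen-time-series of a (time x voxel) matrix."""
        Xc = X - X.mean(axis=0)
        U, S, _ = np.linalg.svd(Xc, full_matrices=False)
        return U[:, :k] * S[:k]

    pc1, pc2 = principal_components(roi1, n_pc), principal_components(roi2, n_pc)

    def rss(y, X):
        """Residual sum of squares of the least-squares regression of y on X."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)

    # Predict ROI2 components at time t from lagged ROI2 alone vs. lagged ROI2 + ROI1
    y = pc2[1:]
    restricted = np.hstack([np.ones((T - 1, 1)), pc2[:-1]])
    full = np.hstack([restricted, pc1[:-1]])
    gain = sum(rss(y[:, j], restricted) - rss(y[:, j], full) for j in range(n_pc))
    print(f"reduction in residual variance when lagged ROI1 components are added: {gain:.1f}")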
Abstract:
Functional magnetic resonance imaging (fMRI) is currently one of the most widely used methods for studying human brain function in vivo. Although many different approaches to fMRI analysis are available, the most widely used methods employ so-called "mass-univariate" modeling of responses in a voxel-by-voxel fashion to construct activation maps. However, it is well known that many brain processes involve networks of interacting regions, and for this reason multivariate analyses might seem to be attractive alternatives to univariate approaches. The current paper focuses on one multivariate application of statistical learning theory, statistical discrimination maps (SDM) based on support vector machines, and seeks to establish some possible interpretations when the results differ from those of univariate approaches. In fact, when there are changes not only in the activation level of two conditions but also in functional connectivity, SDM seems more informative. We addressed this question using both simulations and applications to real data. We have shown that the combined use of univariate approaches and SDM yields significant new insights into brain activations not available using univariate methods alone. In an application to visual working memory fMRI data, we demonstrated that the interaction among brain regions plays a role in SDM's power to detect discriminative voxels. (C) 2008 Elsevier B.V. All rights reserved.
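A toy sketch of a linear support-vector discrimination map on simulated multi-voxel patterns; scikit-learn is assumed here as the classifier library (it is not named in the abstract), and the dataset sizes and effect magnitude are arbitrary.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_trials, n_vox = 80, 500

    # Simulated example: two conditions differ in a small subset of "informative" voxels
    X = rng.standard_normal((2 * n_trials, n_vox))
    y = np.repeat([0, 1], n_trials)
    informative = rng.choice(n_vox, size=20, replace=False)
    X[n_trials:, informative] += 0.8      # condition-1 trials get a mean shift

    # Linear SVM: the weight vector has one entry per voxel and can be mapped back
    # onto the brain as a discrimination map
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    weight_map = np.abs(clf.coef_.ravel())
    top_voxels = np.argsort(weight_map)[-20:]
    recovered = np.intersect1d(top_voxels, informative).size
    print(f"informative voxels among the 20 largest weights: {recovered}/20")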
Abstract:
Historically, the cure rate model has been used for modeling time-to-event data in which a significant proportion of patients are assumed to be cured of illnesses, including breast cancer, non-Hodgkin lymphoma, leukemia, prostate cancer, melanoma, and head and neck cancer. Perhaps the most popular type of cure rate model is the mixture model introduced by Berkson and Gage [1]. In this model, it is assumed that a certain proportion of the patients are cured, in the sense that they do not present the event of interest over a long period of time and can be considered immune to the cause of failure under study. In this paper, we propose a general hazard model which accommodates comprehensive families of cure rate models as particular cases, including the model proposed by Berkson and Gage. The maximum likelihood estimation procedure is discussed. A simulation study analyzes the coverage probabilities of the asymptotic confidence intervals for the parameters. A real data set on children exposed to HIV by vertical transmission illustrates the methodology.
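A small sketch of the Berkson and Gage mixture cure model, whose survival function is S(t) = pi + (1 - pi) * S0(t); here S0 is taken as exponential purely for illustration, and the maximum-likelihood fit uses scipy on simulated right-censored data. None of the values correspond to the HIV data set mentioned above.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Simulated right-censored data from a Berkson-Gage mixture cure model:
    # a fraction pi is cured (never fails); the rest fail with exponential rate lam
    pi_true, lam_true, n = 0.3, 0.1, 500
    cured = rng.random(n) < pi_true
    event_time = np.where(cured, np.inf, rng.exponential(1.0 / lam_true, n))
    censor_time = rng.uniform(0, 40, n)
    time = np.minimum(event_time, censor_time)
    status = (event_time <= censor_time).astype(float)   # 1 = observed failure

    def neg_loglik(params):
        """Negative log-likelihood; S(t) = pi + (1 - pi) * exp(-lam * t)."""
        pi = 1 / (1 + np.exp(-params[0]))   # logit scale keeps pi in (0, 1)
        lam = np.exp(params[1])             # log scale keeps lam positive
        surv = pi + (1 - pi) * np.exp(-lam * time)
        dens = (1 - pi) * lam * np.exp(-lam * time)
        return -np.sum(status * np.log(dens) + (1 - status) * np.log(surv))

    fit = minimize(neg_loglik, x0=[0.0, np.log(0.05)], method="Nelder-Mead")
    pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
    print(f"estimated cure fraction {pi_hat:.2f}, failure rate {lam_hat:.3f}")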
Abstract:
The aim was to test a mathematical model for measuring blinking kinematics. Spontaneous and reflex blinks of 23 healthy subjects were recorded with two different temporal resolutions. A magnetic search coil was used to record 77 blinks sampled at 200 Hz and 2 kHz in 13 subjects. A video system with low temporal resolution (30 Hz) was employed to record 60 blinks of 10 other subjects. The experimental data points were fitted with a model that assumes that the upper eyelid movement can be divided into two parts: an impulsive accelerated motion followed by a damped harmonic oscillation. All spontaneous and reflex blinks, including those recorded with low resolution, were well fitted by the model, with a median coefficient of determination of 0.990. No significant difference was observed when the parameters of the blinks were estimated with the under-damped or critically damped solutions of the harmonic oscillator. On the other hand, the over-damped solution was not applicable to fit any movement. There was good agreement between the model and numerical estimation of the amplitude but not of the maximum velocity. Spontaneous and reflex blinks can be mathematically described as consisting of two different phases. The down-phase is mainly an accelerated movement followed by a short period that represents the initial part of the damped harmonic oscillation. The latter is entirely responsible for the up-phase of the movement. Depending on the instantaneous characteristics of each movement, the under-damped or critically damped oscillation is better suited to describe the second phase of the blink. (C) 2010 Elsevier B.V. All rights reserved.
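A brief sketch of fitting the second (oscillatory) component with an under-damped harmonic oscillation using scipy's curve_fit; the synthetic signal, sampling rate, starting values and bounds are illustrative, and the paper's full two-phase parameterization is not reproduced.

    import numpy as np
    from scipy.optimize import curve_fit

    def damped_oscillation(t, A, zeta, omega, phi, offset):
        """Under-damped harmonic oscillation (eyelid up-phase sketch)."""
        return offset + A * np.exp(-zeta * omega * t) * np.cos(omega * np.sqrt(1 - zeta**2) * t + phi)

    # Synthetic 'blink up-phase' sampled at 200 Hz (illustrative values only)
    t = np.arange(0, 0.5, 1 / 200)
    true_signal = damped_oscillation(t, 8.0, 0.6, 25.0, np.pi, 2.0)
    data = true_signal + 0.1 * np.random.default_rng(3).standard_normal(t.size)

    popt, _ = curve_fit(
        damped_oscillation, t, data,
        p0=[5.0, 0.5, 20.0, 3.0, 1.0],
        bounds=([0, 0, 1, -2 * np.pi, -10], [20, 0.99, 100, 2 * np.pi, 10]),
    )
    residuals = data - damped_oscillation(t, *popt)
    r2 = 1 - residuals.var() / data.var()
    print(f"fitted damping ratio {popt[1]:.2f}, coefficient of determination {r2:.3f}")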
Abstract:
In this analysis, using available hourly and daily radiometric data measured at Botucatu, Brazil, several empirical models relating the ultraviolet (UV), photosynthetically active (PAR) and near-infrared (NIR) solar global components to solar global radiation (G) are established. These models are developed and discussed in terms of the clearness index K(T) (the ratio of global to extraterrestrial solar radiation). The results obtained reveal that the proposed empirical models predict hourly and daily values accurately. Finally, the overall analysis carried out demonstrates that sky conditions are more important when developing correlation models between the UV component and global solar radiation. The linear regression models derived to estimate the PAR and NIR components may be obtained without considering sky conditions, within a maximum variation of 8%. In the case of UV, not taking the sky condition into consideration may cause a discrepancy of up to 18% for hourly values and 15% for daily values. (C) 2008 Elsevier Ltd. All rights reserved.
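A minimal sketch of fitting one such linear relation, the UV fraction of global radiation as a function of the clearness index K(T), with a least-squares line; the synthetic data and coefficients are placeholders, not the Botucatu measurements or the models reported above.

    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic hourly data: clearness index K_T and a UV fraction that decreases
    # slightly with K_T (coefficients are illustrative only)
    k_t = rng.uniform(0.2, 0.8, 300)
    uv_fraction = 0.048 - 0.012 * k_t + 0.002 * rng.standard_normal(k_t.size)

    # Least-squares straight line: UV/G = intercept + slope * K_T
    slope, intercept = np.polyfit(k_t, uv_fraction, 1)
    print(f"UV/G ~ {intercept:.4f} + ({slope:.4f}) * K_T")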
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. These images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
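A compact sketch of the cross-entropy optimization idea used above: draw candidate parameter vectors from a proposal distribution, score them by the squared difference between the model and observed images, and refit the proposal to the best-scoring (elite) samples. For brevity a single circular Gaussian source with four parameters is fitted instead of the six-parameter elliptical components of the paper, and all settings (population size, elite fraction, iterations) are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)
    ny, nx = 64, 64
    yy, xx = np.mgrid[0:ny, 0:nx]

    def model_image(params):
        """Single circular Gaussian source: (x0, y0, peak, width)."""
        x0, y0, peak, width = params
        return peak * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * width**2))

    observed = model_image([40.0, 25.0, 3.0, 4.0]) + 0.05 * rng.standard_normal((ny, nx))

    # Cross-entropy loop: sample candidates, keep the elite, refit mean and spread
    mean = np.array([32.0, 32.0, 1.0, 6.0])
    std = np.array([10.0, 10.0, 2.0, 3.0])
    for _ in range(30):
        samples = mean + std * rng.standard_normal((200, 4))
        scores = np.array([np.sum((model_image(p) - observed) ** 2) for p in samples])
        elite = samples[np.argsort(scores)[:20]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6

    print("recovered (x0, y0, peak, width):", np.round(mean, 2))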
Abstract:
Krameria plants are found in arid regions of the Americas and present a floral system that attracts oil-collecting bees. Niche modeling and multivariate tools were applied to examine ecological and geographical aspects of the 18 species of this genus, using occurrence data obtained from herbaria and the literature. Niche modeling showed the potential areas of occurrence for each species, and the analysis of climatic variables suggested that North American species occur mostly in desert or xeric ecoregions with monthly precipitation below 140 mm and large temperature ranges. South American species are mainly found in desert ecoregions and subtropical savannas where monthly precipitation often exceeds 150 mm and temperature ranges are smaller. Principal Component Analysis (PCA) performed on values of temperature and precipitation showed that the distribution limits of Krameria species are primarily associated with maximum and minimum temperatures. Modeling of Krameria species proved to be a useful tool for analyzing the influence of ecological niche variables on the geographical distribution of species, providing new information to guide future investigations. (C) 2011 Elsevier Ltd. All rights reserved.
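A short sketch of a PCA over standardized climatic variables at occurrence points, of the kind described above, using plain numpy SVD; the climate table is simulated and the variable set (maximum temperature, minimum temperature, monthly precipitation) is only illustrative.

    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic climate table: rows = occurrence records, columns = climatic variables
    n = 120
    t_max = rng.normal(32, 4, n)
    t_min = t_max - rng.normal(15, 3, n)
    precip = rng.gamma(2.0, 60.0, n)
    climate = np.column_stack([t_max, t_min, precip])

    # Standardize the variables and take principal components via SVD
    z = (climate - climate.mean(axis=0)) / climate.std(axis=0)
    U, S, Vt = np.linalg.svd(z, full_matrices=False)
    explained = S**2 / np.sum(S**2)
    print("variance explained by each component:", np.round(explained, 2))
    print("loadings of the first component (t_max, t_min, precip):", np.round(Vt[0], 2))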
Abstract:
Phylogenetic analyses of chloroplast DNA sequences, morphology, and combined data have provided consistent support for many of the major branches within the angiosperm clade Dipsacales. Here we use sequences from three mitochondrial loci to test the existing broad-scale phylogeny and to attempt to resolve several relationships that have remained uncertain. Parsimony, maximum likelihood, and Bayesian analyses of a combined mitochondrial data set recover trees broadly consistent with previous studies, although resolution and support are lower than in the largest chloroplast analyses. Combining chloroplast and mitochondrial data results in a generally well-resolved and very strongly supported topology, but the previously recognized problem areas remain. To investigate why these relationships have been difficult to resolve, we conducted a series of experiments using different data partitions and heterogeneous substitution models. More complex modeling schemes are usually favored regardless of the partitions recognized, but model choice had little effect on topology or support values. In contrast, there are consistent but weakly supported differences in the topologies recovered from coding and non-coding matrices. These conflicts correspond directly to relationships that were poorly resolved in analyses of the full combined chloroplast-mitochondrial data set. We suggest that incongruent signal has contributed to our inability to confidently resolve these problem areas. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
In this paper, we present different "frailty" models to analyze longitudinal data in the presence of covariates. These models incorporate the extra-Poisson variability and the possible correlation among the repeated count data for each individual. Considering a CD4 count data set from HIV-infected patients, we develop a hierarchical Bayesian analysis of the different proposed models using Markov chain Monte Carlo methods. We also discuss some Bayesian discrimination aspects for the choice of the best model.
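A small simulation sketch of why a per-individual gamma frailty produces the extra-Poisson variability and within-subject correlation mentioned above; it is not the hierarchical Bayesian MCMC analysis of the paper, and the mean rate, frailty variance and sample sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(9)

    # Repeated count data with a per-individual gamma frailty:
    # y_ij ~ Poisson(z_i * mu), z_i ~ Gamma(shape=a, scale=1/a) so E[z_i] = 1
    n_subjects, n_visits, mu, a = 200, 5, 20.0, 2.0
    frailty = rng.gamma(a, 1.0 / a, n_subjects)
    counts = rng.poisson(frailty[:, None] * mu, size=(n_subjects, n_visits))

    # The shared frailty inflates the variance beyond the Poisson mean and
    # correlates counts within the same individual
    print(f"mean {counts.mean():.1f}, variance {counts.var():.1f} (a pure Poisson model would give ~{mu:.0f})")
    within = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
    print(f"correlation between repeated counts of the same subject: {within:.2f}")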
Abstract:
In this paper, the laminar flow of Newtonian and non-Newtonian aqueous solutions in a tubular membrane is studied numerically. The mathematical formulation, with the associated initial and boundary conditions in cylindrical coordinates, comprises the mass conservation, momentum conservation and mass transfer equations. These equations are discretized using the finite-difference technique on a staggered grid system. Comparisons of the three upwind schemes for the discretization of the non-linear (convective) terms are presented. The effects of several physical parameters on the concentration profile are investigated. The numerical results compare favorably with experimental data and the analytical solutions. (C) 2011 Elsevier Inc. All rights reserved.
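A one-dimensional sketch of a first-order upwind treatment of the convective term in an advection-diffusion equation, illustrating the kind of upwind discretization compared above; the full cylindrical, staggered-grid membrane formulation of the paper is not reproduced, and all grid and transport parameters are placeholders.

    import numpy as np

    # 1D advection-diffusion dc/dt + u dc/dx = D d2c/dx2 with first-order upwind convection
    nx, L = 200, 1.0
    dx = L / (nx - 1)
    u, D = 1.0, 1e-3
    dt = 0.4 * min(dx / abs(u), dx**2 / (2 * D))   # stability-limited time step

    x = np.linspace(0, L, nx)
    c = np.exp(-((x - 0.2) ** 2) / 0.002)          # initial concentration pulse

    for _ in range(300):
        # Upwind difference for convection (backward when u > 0), central for diffusion
        conv = u * (c[1:-1] - c[:-2]) / dx if u > 0 else u * (c[2:] - c[1:-1]) / dx
        diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] = c[1:-1] + dt * (diff - conv)
        c[0], c[-1] = 0.0, c[-2]                   # inlet / outflow boundary conditions

    print(f"peak concentration after transport: {c.max():.3f} at x = {x[np.argmax(c)]:.2f}")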
Abstract:
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data appear very frequently. In this paper, we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family of models that includes other regression models that are broadly used in lifetime data analysis. Assuming the use of interval-censored data, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis for the parameters of the proposed model. We derive the appropriate matrices for assessing local influences on the parameter estimates under different perturbation schemes and present some ways to assess global influences. Furthermore, for different parameter settings, sample sizes and censoring percentages, various simulations are performed; in addition, the empirical distributions of some modified residuals are displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data. (C) 2009 Elsevier B.V. All rights reserved.
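A sketch of the interval-censored likelihood idea: each observation contributes F(R) - F(L), here with an exponentiated Weibull CDF F(t) = (1 - exp(-(t/lambda)^k))^gamma and no covariates, maximized with scipy. The paper's location-scale regression form and its data are not reproduced; all values below are simulated.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)

    def ew_cdf(t, lam, k, gamma):
        """Exponentiated Weibull cumulative distribution function."""
        return (1.0 - np.exp(-(t / lam) ** k)) ** gamma

    # Simulated interval-censored data: each event time is only known to lie in (L, R]
    true = (10.0, 1.5, 2.0)                          # lambda, k, gamma (illustrative)
    u = rng.random(1000)
    t_event = true[0] * (-np.log(1 - u ** (1 / true[2]))) ** (1 / true[1])  # inverse CDF
    left = np.floor(t_event / 2.0) * 2.0             # inspection every 2 time units
    right = left + 2.0

    def neg_loglik(log_params):
        lam, k, gamma = np.exp(log_params)
        prob = ew_cdf(right, lam, k, gamma) - ew_cdf(left, lam, k, gamma)
        return -np.sum(np.log(np.clip(prob, 1e-12, None)))

    fit = minimize(neg_loglik, x0=np.log([5.0, 1.0, 1.0]), method="Nelder-Mead")
    print("estimated (lambda, k, gamma):", np.round(np.exp(fit.x), 2))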
Abstract:
In survival analysis applications, the failure rate function may frequently present a unimodal shape. In such cases, the log-normal or log-logistic distributions are used. In this paper, we shall be concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed to compare the performance of the log-logistic and log-Burr XII regression models. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis is used to check the assumptions of the model. Finally, we analyze a real data set under log-Burr XII regression models. (C) 2008 Published by Elsevier B.V.
Abstract:
Conventional procedures employed in the modeling of viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced here constitutes a simulation-based computational optimization technique based on a non-deterministic search method arising from the field of evolutionary computation. Instead of comparing numerical results, the purpose of this paper is to highlight some subtle differences between the two strategies and to focus on which properties of the exploited technique emerge as new possibilities for the field. To illustrate this, the cases examined show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with far fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
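As a rough stand-in for the evolutionary-computation search described above, the sketch below fits a discrete relaxation spectrum (a two-mode Prony series G(t) = sum of g_i * exp(-t / tau_i)) to synthetic relaxation data with scipy's differential evolution; the number of modes, bounds and data are arbitrary, and this is not the paper's algorithm.

    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(8)

    # Synthetic stress-relaxation data from a two-mode Prony series (illustrative only)
    t = np.logspace(-2, 2, 60)
    g_true = np.array([1.0, 0.4])
    tau_true = np.array([0.1, 10.0])
    G_data = (g_true[:, None] * np.exp(-t / tau_true[:, None])).sum(axis=0)
    G_data *= 1 + 0.01 * rng.standard_normal(t.size)

    def misfit(params):
        """Sum of squared differences between the Prony-series model and the data."""
        g = params[:2]
        tau = 10.0 ** params[2:]                  # search relaxation times in log space
        G_model = (g[:, None] * np.exp(-t / tau[:, None])).sum(axis=0)
        return np.sum((G_model - G_data) ** 2)

    bounds = [(0.0, 2.0), (0.0, 2.0), (-3.0, 3.0), (-3.0, 3.0)]
    result = differential_evolution(misfit, bounds, seed=8)
    print("moduli:", np.round(result.x[:2], 2), "relaxation times:", np.round(10.0 ** result.x[2:], 2))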
Abstract:
The adsorption kinetics curves of poly(xylylidene tetrahydrothiophenium chloride) (PTHT), a poly-p-phenylenevinylene (PPV) precursor, and the sodium salt of dodecylbenzene sulfonic acid (DBS), onto (PTHT/DBS)(n) layer-by-layer (LBL) films were characterized by means of UV-vis spectroscopy. The amount of PTHT/DBS and PTHT adsorbed on each layer was shown to be practically independent of adsorption time. A Langmuir-type metastable equilibrium model was used to fit the adsorption isotherm data and to estimate adsorption/desorption coefficient ratios, k = k(ads)/k(des), with values of 2 x 10(5) and 4 x 10(6) for PTHT and PTHT/DBS layers, respectively. The desorption coefficient was estimated, using literature values for the poly(o-methoxyaniline) desorption coefficient, to be in the range of 10(-9) to 10(-6) s(-1), indicating that quasi-equilibrium is rapidly attained.
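A minimal sketch of the Langmuir-type kinetics behind the k = k(ads)/k(des) ratio quoted above, integrating d(theta)/dt = k_ads*c*(1 - theta) - k_des*theta for the fractional surface coverage theta; the rate constants and concentration are hypothetical, not the values estimated for the PTHT/DBS films.

    import numpy as np

    # Hypothetical rate constants (not the fitted values reported for PTHT/DBS)
    k_ads, k_des = 1.0e3, 1.0e-3     # adsorption (1/(M s)) and desorption (1/s) rates
    c = 1.0e-4                       # solution concentration (M)

    # Explicit Euler integration of the Langmuir kinetics for the coverage theta
    dt, n_steps = 0.1, 2000
    theta = 0.0
    for _ in range(n_steps):
        theta += dt * (k_ads * c * (1 - theta) - k_des * theta)

    theta_eq = k_ads * c / (k_ads * c + k_des)   # analytical equilibrium coverage
    print(f"coverage after {dt * n_steps:.0f} s: {theta:.3f} (equilibrium {theta_eq:.3f})")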