16 results for Data Modeling
Abstract:
Specific choices about how to represent complex networks can have a substantial impact on the execution time required for the construction and analysis of those structures. In this work we report a comparison of the effects of representing complex networks statically by adjacency matrices or dynamically by adjacency lists. Three theoretical models of complex networks are considered: two types of Erdos-Renyi models as well as the Barabasi-Albert model. We investigated the effect of the different representations on the construction and measurement of several topological properties (i.e., degree, clustering coefficient, shortest path length, and betweenness centrality). We found that the different forms of representation generally have a substantial effect on the execution time, with the sparse (adjacency list) representation frequently resulting in remarkably superior performance. (C) 2011 Elsevier B.V. All rights reserved.
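A minimal sketch of the contrast discussed above, assuming an Erdos-Renyi G(n, p) graph; the function names, graph size, and edge probability are illustrative, not taken from the paper:

```python
import numpy as np
from collections import defaultdict

def erdos_renyi_edges(n, p, rng):
    """Sample the edge list of an Erdos-Renyi G(n, p) graph."""
    rows, cols = np.triu_indices(n, k=1)
    mask = rng.random(rows.size) < p
    return list(zip(rows[mask], cols[mask]))

def as_matrix(n, edges):
    """Static (dense) representation: O(n^2) memory, O(1) edge tests."""
    A = np.zeros((n, n), dtype=np.uint8)
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

def as_lists(edges):
    """Dynamic (sparse) representation: O(n + m) memory, degree-time scans."""
    adj = defaultdict(list)
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    return adj

rng = np.random.default_rng(0)
edges = erdos_renyi_edges(1000, 0.01, rng)
A, adj = as_matrix(1000, edges), as_lists(edges)
# Degree from the matrix scans n cells per node; the list form scans only
# actual neighbors, which is why sparse representations often win here.
deg_dense = A.sum(axis=1)
deg_sparse = np.array([len(adj[v]) for v in range(1000)])
```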
Abstract:
In this analysis, using hourly and daily radiometric data measured at Botucatu, Brazil, several empirical models relating the ultraviolet (UV), photosynthetically active (PAR) and near-infrared (NIR) components of solar radiation to the global solar radiation (G) are established. These models are developed and discussed through the clearness index K_T (the ratio of global to extraterrestrial solar radiation). The results reveal that the proposed empirical models predict hourly and daily values accurately. Finally, the overall analysis demonstrates that sky conditions are most important when developing correlation models between the UV component and the global solar radiation. The linear regression models derived to estimate the PAR and NIR components may be obtained without considering sky conditions, within a maximum variation of 8%. In the case of UV, not taking the sky condition into consideration may cause a discrepancy of up to 18% for hourly values and 15% for daily values. (C) 2008 Elsevier Ltd. All rights reserved.
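A hedged sketch of the kind of clearness-index regression described above; the near-constant PAR/G fraction and every number below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def clearness_index(G, G0):
    """K_T: measured global radiation over its extraterrestrial counterpart."""
    return G / G0

def fit_component_fraction(K_T, fraction):
    """Least-squares line fraction = a + b * K_T (e.g. fraction = PAR / G)."""
    b, a = np.polyfit(K_T, fraction, deg=1)
    return a, b

# Synthetic stand-in data: a PAR/G fraction that barely depends on K_T,
# mimicking the finding that PAR models need no sky-condition split.
rng = np.random.default_rng(1)
K_T = rng.uniform(0.2, 0.8, 500)
par_fraction = 0.48 + 0.02 * rng.standard_normal(500)
a, b = fit_component_fraction(K_T, par_fraction)
```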
Abstract:
We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
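A minimal sketch of the performance function described above: the model image is a sum of elliptical Gaussians compared pixel-by-pixel with the observed image. The exact six-parameter packing used by the authors is not given in the abstract, so the (x0, y0, peak, a, b, theta) convention here is an assumption:

```python
import numpy as np

def elliptical_gaussian(X, Y, x0, y0, peak, a, b, theta):
    """Elliptical Gaussian with semi-axes a, b, major axis rotated by theta."""
    ct, st = np.cos(theta), np.sin(theta)
    xr = (X - x0) * ct + (Y - y0) * st
    yr = -(X - x0) * st + (Y - y0) * ct
    return peak * np.exp(-0.5 * ((xr / a) ** 2 + (yr / b) ** 2))

def performance(sources, X, Y, observed):
    """Sum of squared model-minus-observed differences over all pixels."""
    model = np.zeros_like(observed, dtype=float)
    for p in sources:                      # one 6-tuple per Gaussian source
        model += elliptical_gaussian(X, Y, *p)
    return np.sum((model - observed) ** 2)

# Cross-entropy optimization would repeatedly sample candidate parameter
# sets, rank them by performance(), and refit the sampling distribution to
# the elite fraction until it concentrates on the best-fitting jet model.
```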
Abstract:
Krameria plants are found in arid regions of the Americas and present a floral system that attracts oil-collecting bees. Niche modeling and multivariate tools were applied to examine ecological and geographical aspects of the 18 species of this genus, using occurrence data obtained from herbaria and the literature. Niche modeling showed the potential areas of occurrence for each species, and the analysis of climatic variables suggested that North American species occur mostly in desert or xeric ecoregions with monthly precipitation below 140 mm and large temperature ranges. South American species are mainly found in desert ecoregions and subtropical savannas where monthly precipitation often exceeds 150 mm and temperature ranges are smaller. Principal Component Analysis (PCA) performed on temperature and precipitation values showed that the distribution limits of Krameria species are primarily associated with maximum and minimum temperatures. Modeling of Krameria species proved to be a useful tool for analyzing the influence of ecological niche variables on the geographical distribution of species, providing new information to guide future investigations. (C) 2011 Elsevier Ltd. All rights reserved.
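For illustration, a small sketch of the PCA step on a climate-variable matrix (rows as occurrence records, columns as variables such as maximum temperature, minimum temperature, and precipitation); the data and column choices are placeholders, not the study's:

```python
import numpy as np

def pca(X, n_components=2):
    """Principal components via SVD of the column-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # record coordinates on PCs
    explained = S ** 2 / np.sum(S ** 2)        # variance fractions
    return scores, Vt[:n_components], explained[:n_components]

# Placeholder matrix standing in for (max temp, min temp, precipitation, ...)
X = np.random.default_rng(2).normal(size=(200, 4))
scores, loadings, explained = pca(X)
# Large loadings of the leading components on the temperature columns would
# correspond to the finding that temperatures limit the distributions.
```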
Abstract:
Phylogenetic analyses of chloroplast DNA sequences, morphology, and combined data have provided consistent support for many of the major branches within the angiosperm clade Dipsacales. Here we use sequences from three mitochondrial loci to test the existing broad-scale phylogeny and to attempt to resolve several relationships that have remained uncertain. Parsimony, maximum likelihood, and Bayesian analyses of a combined mitochondrial data set recover trees broadly consistent with previous studies, although resolution and support are lower than in the largest chloroplast analyses. Combining chloroplast and mitochondrial data results in a generally well-resolved and very strongly supported topology, but the previously recognized problem areas remain. To investigate why these relationships have been difficult to resolve, we conducted a series of experiments using different data partitions and heterogeneous substitution models. More complex modeling schemes are usually favored regardless of the partitions recognized, but model choice had little effect on topology or support values. In contrast, there are consistent but weakly supported differences between the topologies recovered from the coding and non-coding matrices. These conflicts correspond directly to relationships that were poorly resolved in analyses of the full combined chloroplast-mitochondrial data set. We suggest that incongruent signal has contributed to our inability to confidently resolve these problem areas. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
In this paper, we present different "frailty" models to analyze longitudinal data in the presence of covariates. These models incorporate the extra-Poisson variability and the possible correlation among the repeated count data for each individual. Using a CD4 count data set from HIV-infected patients, we develop a hierarchical Bayesian analysis considering the different proposed models and using Markov Chain Monte Carlo methods. We also discuss some Bayesian discrimination aspects for the choice of the best model.
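One common way to write such a model is a Poisson regression with a subject-level random effect (frailty) on the log scale. The sketch below, assuming a log-normal frailty and normal priors, illustrates the log-posterior an MCMC sampler would target; it is not the paper's exact specification:

```python
import numpy as np
from scipy import stats

def log_posterior(beta, frailty, sigma, y, X, subject):
    """y[i] ~ Poisson(exp(X[i] @ beta + frailty[subject[i]]))."""
    eta = X @ beta + frailty[subject]
    loglik = np.sum(stats.poisson.logpmf(y, np.exp(eta)))
    log_prior = (np.sum(stats.norm.logpdf(frailty, 0.0, sigma))   # frailties
                 + np.sum(stats.norm.logpdf(beta, 0.0, 10.0)))    # vague prior
    return loglik + log_prior

# A Metropolis-within-Gibbs sampler would alternate updates of beta, the
# per-patient frailties, and sigma against this target; the shared frailty
# is what induces correlation among each patient's repeated CD4 counts.
```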
Abstract:
In this paper, the laminar flow of Newtonian and non-Newtonian aqueous solutions in a tubular membrane is studied numerically. The mathematical formulation, with the associated initial and boundary conditions in cylindrical coordinates, comprises the mass conservation, momentum conservation and mass transfer equations. These equations are discretized using the finite-difference technique on a staggered grid system. Comparisons of three upwinding schemes for the discretization of the non-linear (convective) terms are presented. The effects of several physical parameters on the concentration profile are investigated. The numerical results compare favorably with experimental data and analytical solutions. (C) 2011 Elsevier Inc. All rights reserved.
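As a hedged illustration of the discretization idea, here is a first-order upwind treatment of a convective term u * dc/dx on a uniform 1-D grid; the paper's three schemes and its cylindrical staggered-grid details are not specified in the abstract, so this is only the simplest member of that family:

```python
import numpy as np

def upwind_convection(c, u, dx):
    """First-order upwind approximation of u * dc/dx at interior nodes."""
    dcdx = np.zeros_like(c)
    dcdx[1:-1] = np.where(
        u[1:-1] > 0.0,
        (c[1:-1] - c[:-2]) / dx,   # backward difference when flow is in +x
        (c[2:] - c[1:-1]) / dx,    # forward difference when flow is in -x
    )
    return u * dcdx                # boundary nodes left to boundary conditions

# Biasing the stencil toward the upstream side keeps the convective term
# stable where centered differences would oscillate at high Peclet numbers.
```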
Abstract:
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data appear very frequently. In this paper, we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family of models that includes other regression models broadly used in lifetime data analysis. For interval-censored data, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis for the parameters of the proposed model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Furthermore, for different parameter settings, sample sizes and censoring percentages, various simulations are performed; in addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data. (C) 2009 Elsevier B.V. All rights reserved.
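The core of any likelihood for interval-censored data is that each observation contributes F(t_R) - F(t_L), the probability that the event fell within its interval. A minimal sketch under an exponentiated Weibull CDF, F(t) = (1 - exp(-(t/scale)^shape))^power, with the parameterization assumed rather than taken from the paper:

```python
import numpy as np

def exp_weibull_cdf(t, shape, scale, power):
    """Exponentiated Weibull CDF: Weibull CDF raised to a positive power."""
    return (1.0 - np.exp(-(t / scale) ** shape)) ** power

def interval_censored_loglik(params, left, right):
    """Sum of log[F(right) - F(left)] over all censoring intervals."""
    shape, scale, power = params
    prob = (exp_weibull_cdf(right, shape, scale, power)
            - exp_weibull_cdf(left, shape, scale, power))
    return np.sum(np.log(np.clip(prob, 1e-300, None)))

# Maximizing this log-likelihood (e.g. with scipy.optimize.minimize on its
# negative) gives the frequentist fit; regression enters by letting the
# scale depend on covariates, as in location-scale models.
```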
Abstract:
In survival analysis applications, the failure rate function may frequently present a unimodal shape. In such cases, the log-normal or log-logistic distributions are used. In this paper, we shall be concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and compared to the performance of the log-logistic and log-Burr XII regression models. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis is used to check the assumptions of the model. Finally, we analyze a real data set under log-Burr XII regression models. (C) 2008 Published by Elsevier B.V.
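For reference, a small sketch of the Burr XII building blocks behind such a model: the survival function is S(t) = (1 + (t/scale)^c)^(-k), and the hazard below is unimodal for suitable shape values, which is the motivation cited above. The parameter names are conventional, not the paper's:

```python
import numpy as np

def burr_xii_sf(t, c, k, scale=1.0):
    """Burr XII survival function S(t) = (1 + (t/scale)^c)^(-k)."""
    return (1.0 + (t / scale) ** c) ** (-k)

def burr_xii_hazard(t, c, k, scale=1.0):
    """Hazard h(t) = f(t) / S(t); unimodal in t for c > 1."""
    z = (t / scale) ** c
    pdf = c * k * (z / t) * (1.0 + z) ** (-k - 1.0)
    return pdf / burr_xii_sf(t, c, k, scale)

t = np.linspace(0.01, 5.0, 200)
h = burr_xii_hazard(t, c=3.0, k=1.0)   # rises then falls: unimodal shape
```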
Abstract:
Conventional procedures employed in the modeling of the viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced herein constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Rather than comparing numerical results, the purpose of this paper is to highlight some subtle differences between both strategies and to focus on the properties of the exploited technique that emerge as new possibilities for the field. To illustrate this, the cases examined show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with much fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
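As an illustration of the class of method described (not the paper's algorithm), the sketch below fits a discrete relaxation spectrum, i.e. a Prony series G(t) = sum_i g_i * exp(-t / tau_i), to relaxation data with a simple evolution strategy; the population size, mutation scale, and log parameterization are all arbitrary choices:

```python
import numpy as np

def prony(t, g, tau):
    """Discrete relaxation spectrum: G(t) = sum_i g[i] * exp(-t / tau[i])."""
    return np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

def evolve_spectrum(t, G_data, n_modes=3, pop=60, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(pop, 2 * n_modes))     # [log g | log tau] per member
    best = X[0]
    for _ in range(gens):
        g, tau = np.exp(X[:, :n_modes]), np.exp(X[:, n_modes:])
        err = np.array([np.sum((prony(t, g[i], tau[i]) - G_data) ** 2)
                        for i in range(pop)])
        order = np.argsort(err)
        best = X[order[0]]
        elite = X[order[: pop // 5]]            # keep the best 20%
        X = (elite[rng.integers(0, len(elite), pop)]
             + rng.normal(0.0, 0.1, size=X.shape))  # mutated offspring
    return np.exp(best[:n_modes]), np.exp(best[n_modes:])
```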
Abstract:
The adsorption kinetics curves of poly(xylylidene tetrahydrothiophenium chloride) (PTHT), a poly-p-phenylenevinylene (PPV) precursor, and the sodium salt of dodecylbenzene sulfonic acid (DBS), onto (PTHT/DBS)_n layer-by-layer (LBL) films were characterized by means of UV-vis spectroscopy. The amount of PTHT/DBS and PTHT adsorbed on each layer was shown to be practically independent of adsorption time. A Langmuir-type metastable equilibrium model was used to fit the adsorption isotherm data and to estimate adsorption/desorption coefficient ratios, k = k_ads/k_des, yielding values of 2 x 10^5 and 4 x 10^6 for PTHT and PTHT/DBS layers, respectively. The desorption coefficient was estimated, using literature values for the poly(o-methoxyaniline) desorption coefficient, and was found to be in the range of 10^-9 to 10^-6 s^-1, indicating that quasi-equilibrium is rapidly attained.
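A minimal sketch of the Langmuir-type picture invoked above: surface coverage relaxes toward the equilibrium isotherm set by K = k_ads / k_des. The closed-form solution and all numbers are textbook illustrations, not fits to the paper's data:

```python
import numpy as np

def coverage_kinetics(c, k_ads, k_des, t):
    """theta(t) solving d(theta)/dt = k_ads*c*(1 - theta) - k_des*theta."""
    r = k_ads * c + k_des
    return (k_ads * c / r) * (1.0 - np.exp(-r * t))

def langmuir_isotherm(c, K):
    """Equilibrium coverage theta_eq = K*c / (1 + K*c), K = k_ads/k_des."""
    return K * c / (1.0 + K * c)

# With K as large as ~1e5-1e6 (the ratios reported above), theta_eq sits
# near saturation even at low concentration, consistent with adsorbed
# amounts that look independent of adsorption time.
```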
Abstract:
Eukaryotic translation initiation factor 5A (eIF5A) is a highly conserved protein that is essential for cell viability, and it is the only protein known to contain the unique and essential amino acid residue hypusine. This work focused on the structural and functional characterization of Saccharomyces cerevisiae eIF5A. The tertiary structure of yeast eIF5A was modeled based on the structure of its Leishmania mexicana homologue, and this model was used to predict the structural localization of new site-directed and randomly generated mutations. Most of the 40 new mutants exhibited phenotypes that resulted from eIF5A protein-folding defects. Our data provided evidence that the C-terminal alpha-helix present in yeast eIF5A is an essential structural element, whereas the eIF5A N-terminal 10-amino-acid extension, which is not present in archaeal eIF5A homologs, is not. Moreover, the mutants containing substitutions at or in the vicinity of the hypusine modification site displayed nonviable or temperature-sensitive phenotypes and were defective in hypusine modification. Interestingly, two of the temperature-sensitive strains produced stable mutant eIF5A proteins, eIF5A(K56A) and eIF5A(Q22H,L93F), and showed defects in protein synthesis at the restrictive temperature. Our data revealed important structural features of eIF5A that are required for its vital role in cell viability and underscored an essential function of eIF5A in the translation step of gene expression.
Abstract:
This work presents a Bayesian semiparametric approach for dealing with regression models where the covariate is measured with error. Given that (1) the error normality assumption is very restrictive and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only symmetry of the errors (admitting unknown kurtosis). In this sense, the main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models by using Dirichlet process priors in both dependent and independent situations. Conditional posterior distributions are implemented, allowing the use of Markov chain Monte Carlo (MCMC) methods to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. Furthermore, an analysis of a real data set is reported to illustrate the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically through the posterior densities of the coefficients. (C) 2009 Elsevier Inc. All rights reserved.
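For context, the Dirichlet process ingredient can be sketched through its truncated stick-breaking construction, one standard way such priors are implemented in MCMC; the truncation level and standard-normal base measure below are arbitrary illustrative choices:

```python
import numpy as np

def stick_breaking(alpha, base_sampler, K, rng):
    """Truncated DP draw: stick-breaking weights plus atoms from the base."""
    v = rng.beta(1.0, alpha, size=K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return w, base_sampler(K, rng)

rng = np.random.default_rng(3)
weights, atoms = stick_breaking(
    alpha=2.0,
    base_sampler=lambda K, r: r.normal(0.0, 1.0, K),  # base measure G0
    K=50,
    rng=rng,
)
# Mixing a symmetric kernel over these atoms yields a flexible error
# distribution constrained only by symmetry, in the spirit described above.
```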
Abstract:
Birnbaum and Saunders (1969a) introduced a probability distribution which is commonly used in reliability studies. Based on this distribution, the so-called beta-Birnbaum-Saunders distribution is proposed here for the first time for fatigue life modeling. Various properties of the new model, including expansions for the moments, the moment generating function, the mean deviations, and the density function of the order statistics and their moments, are derived. We discuss maximum likelihood estimation of the model's parameters. The superiority of the new model is illustrated by means of three real failure data sets. (C) 2010 Elsevier B.V. All rights reserved.
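The name suggests the standard beta-G construction, in which a baseline CDF G is composed with a beta density: f(t) = g(t) G(t)^(a-1) (1 - G(t))^(b-1) / B(a, b). A hedged sketch with G taken as the Birnbaum-Saunders CDF; the parameterization is the conventional one, assumed rather than quoted from the paper:

```python
import numpy as np
from scipy import stats, special

def bs_cdf(t, alpha, beta):
    """Birnbaum-Saunders CDF: Phi((sqrt(t/beta) - sqrt(beta/t)) / alpha)."""
    return stats.norm.cdf((np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha)

def bs_pdf(t, alpha, beta):
    """Birnbaum-Saunders density via the chain rule on the CDF argument."""
    z = (np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha
    dz = (1.0 / np.sqrt(t * beta) + np.sqrt(beta) / t ** 1.5) / (2.0 * alpha)
    return stats.norm.pdf(z) * dz

def beta_bs_pdf(t, a, b, alpha, beta):
    """beta-G density with G = Birnbaum-Saunders; reduces to BS at a=b=1."""
    G = bs_cdf(t, alpha, beta)
    return (bs_pdf(t, alpha, beta) * G ** (a - 1.0) * (1.0 - G) ** (b - 1.0)
            / special.beta(a, b))
```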
Abstract:
The interaction of 4-nerolidylcatechol (4-NRC), a potent antioxidant agent, with 2-hydroxypropyl-beta-cyclodextrin (HP-beta-CD) was investigated by the solubility method, using Fourier transform infrared (FTIR) spectroscopy in addition to UV-vis and 1H nuclear magnetic resonance (NMR) spectroscopy and molecular modeling. The inclusion complexes were prepared using grinding, kneading and freeze-drying methods. Phase solubility studies in water yielded a B_S-type diagram, indicating a 2:1 (drug:host) complexation stoichiometry and a stability constant of 6494 +/- 837 M^-1. The stoichiometry was established spectrophotometrically (UV) using Job's plot method and was also confirmed by molecular modeling. Data from 1H-NMR and FTIR experiments also provided evidence of the formation of an inclusion complex between 4-NRC and HP-beta-CD. Indeed, 4-NRC complexation led to higher drug solubility and stability, which could be useful for improving its biological properties and making it available for oral administration and topical formulations.
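A tiny sketch of the continuous-variation (Job's plot) logic used for the stoichiometry: at fixed total concentration, the complex signal is proportional to x^m (1-x)^n for an m:n complex and peaks at x = m/(m+n), i.e. near 2/3 for the 2:1 drug:host case reported above. The data here are synthetic:

```python
import numpy as np

x = np.linspace(0.05, 0.95, 181)   # drug mole fraction at fixed total conc.
signal = x ** 2 * (1.0 - x)        # ~ [drug]^2 [host] for a 2:1 complex
x_max = x[np.argmax(signal)]       # ~ 0.667, identifying 2:1 stoichiometry
```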