945 results for Data modeling
Abstract:
Conventional procedures employed in the modeling of the viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation that describes the measured spectra. Taking a different approach, the procedure introduced here constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Rather than comparing numerical results, the purpose of this paper is to highlight some subtle differences between the two strategies and to focus on the properties of the exploited technique that emerge as new possibilities for the field. To illustrate this, the essayed cases show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with far fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
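As an illustration of the kind of evolutionary search this abstract refers to, here is a minimal sketch that fits a two-mode Prony series to synthetic relaxation data with a simple elitist evolution strategy. The population size, mutation scale, number of modes and data are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: fitting a two-mode Prony series
#   G(t) = g1*exp(-t/tau1) + g2*exp(-t/tau2)
# to relaxation data with a simple elitist evolution strategy.
# All settings below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
t = np.logspace(-2, 2, 50)                       # time grid [s]

def model(p, t):
    g1, tau1, g2, tau2 = p                       # two-mode Prony series
    return g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

p_true = np.array([3.0, 1.0, 0.5, 0.2])          # synthetic "measured" spectrum
G_data = model(p_true, t)

def fitness(p):
    return np.sum((model(p, t) - G_data) ** 2)   # sum of squared residuals

pop = rng.uniform(0.01, 5.0, size=(40, 4))       # random initial population
for gen in range(300):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]        # keep the 10 fittest
    children = parents.repeat(4, axis=0) * rng.lognormal(0.0, 0.1, (40, 4))
    pop = np.vstack([parents, children[:30]])     # elitist replacement

best = min(pop, key=fitness)
print("best [g1, tau1, g2, tau2]:", np.round(best, 3))
```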
Abstract:
The adsorption kinetics curves of poly(xylylidene tetrahydrothiophenium chloride) (PTHT), a poly-p-phenylenevinylene (PPV) precursor, and the sodium salt of dodecylbenzene sulfonic acid (DBS) onto (PTHT/DBS)_n layer-by-layer (LBL) films were characterized by means of UV-vis spectroscopy. The amount of PTHT/DBS and PTHT adsorbed on each layer was shown to be practically independent of adsorption time. A Langmuir-type metastable equilibrium model was used to fit the adsorption isotherm data and to estimate adsorption/desorption coefficient ratios, k = k_ads/k_des, with values of 2 x 10^5 and 4 x 10^6 for PTHT and PTHT/DBS layers, respectively. The desorption coefficient was estimated, using literature values for the poly(o-methoxyaniline) desorption coefficient, and found to be in the range of 10^-9 to 10^-6 s^-1, indicating that quasi-equilibrium is rapidly attained.
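For illustration, a minimal sketch of how such an adsorption/desorption ratio K = k_ads/k_des can be estimated by fitting a Langmuir isotherm to coverage data; the data points below are synthetic placeholders, not the paper's measurements.

```python
# Minimal sketch: estimating K = k_ads/k_des by fitting a Langmuir
# isotherm  theta(c) = K*c / (1 + K*c)  to fractional-coverage data.
# The data values are synthetic placeholders, not the paper's.
import numpy as np
from scipy.optimize import curve_fit

c = np.array([1e-7, 5e-7, 1e-6, 5e-6, 1e-5, 5e-5])      # concentration [M]
theta = np.array([0.02, 0.09, 0.17, 0.50, 0.67, 0.91])  # fractional coverage

def langmuir(c, K):
    return K * c / (1.0 + K * c)

(K_fit,), cov = curve_fit(langmuir, c, theta, p0=[1e5])
print(f"K = k_ads/k_des ~ {K_fit:.3g} M^-1")
```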
Abstract:
Eukaryotic translation initiation factor 5A (eIF5A) is a highly conserved protein that is essential for cell viability. This factor is the only protein known to contain the unique and essential amino acid residue hypusine. This work focused on the structural and functional characterization of Saccharomyces cerevisiae eIF5A. The tertiary structure of yeast eIF5A was modeled based on the structure of its Leishmania mexicana homologue, and this model was used to predict the structural localization of new site-directed and randomly generated mutations. Most of the 40 new mutants exhibited phenotypes that resulted from eIF5A protein-folding defects. Our data provided evidence that the C-terminal alpha-helix present in yeast eIF5A is an essential structural element, whereas the eIF5A N-terminal 10-amino-acid extension, which is not present in archaeal eIF5A homologs, is not. Moreover, the mutants containing substitutions at or in the vicinity of the hypusine modification site displayed nonviable or temperature-sensitive phenotypes and were defective in hypusine modification. Interestingly, two of the temperature-sensitive strains produced stable mutant eIF5A proteins - eIF5A(K56A) and eIF5A(Q22H,L93F) - and showed defects in protein synthesis at the restrictive temperature. Our data revealed important structural features of eIF5A that are required for its vital role in cell viability and underscored an essential function of eIF5A in the translation step of gene expression.
Abstract:
This work presents a Bayesian semiparametric approach for dealing with regression models where the covariate is measured with error. Given that (1) the assumption of normal errors is very restrictive and (2) assuming a specific elliptical distribution for the errors (Student-t, for example) may be somewhat presumptuous, there is a need for more flexible methods that assume only the symmetry of the errors (admitting unknown kurtosis). In this sense, the main advantage of this extended Bayesian approach is the possibility of considering generalizations of the elliptical family of models by using Dirichlet process priors in both dependent and independent situations. Conditional posterior distributions are implemented, allowing the use of Markov chain Monte Carlo (MCMC) to generate the posterior distributions. An interesting result is that the Dirichlet process prior is not updated in the case of the dependent elliptical model. Furthermore, an analysis of a real data set is reported to illustrate the usefulness of our approach in dealing with outliers. Finally, the proposed semiparametric models and the parametric normal model are compared graphically through the posterior densities of the coefficients. (C) 2009 Elsevier Inc. All rights reserved.
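As a pointer to the key ingredient here, the following minimal sketch draws one random distribution from a Dirichlet process DP(alpha, G0) via the truncated stick-breaking construction, the kind of prior the abstract places on the error distribution. The truncation level, concentration parameter and base measure are illustrative choices.

```python
# Minimal sketch: a truncated stick-breaking draw from DP(alpha, G0)
# with a standard-normal base measure G0. Truncation level and
# concentration parameter are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
alpha, K = 1.0, 100                        # concentration, truncation level

v = rng.beta(1.0, alpha, size=K)           # stick-breaking proportions
w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))  # mixture weights
atoms = rng.normal(0.0, 1.0, size=K)       # atom locations drawn from G0

# The random measure sum_k w_k * delta(atom_k) approximates one DP draw;
# sampling from it means picking an atom with probability w_k.
samples = rng.choice(atoms, size=1000, p=w / w.sum())
print("largest weights:", np.round(np.sort(w)[::-1][:5], 3))
```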
Abstract:
Birnbaum and Saunders (1969a) introduced a probability distribution which is commonly used in reliability studies. For the first time, based on this distribution, the so-called beta-Birnbaum-Saunders distribution is proposed for fatigue life modeling. Various properties of the new model, including expansions for the moments, the moment generating function, mean deviations, and the density function of the order statistics and their moments, are derived. We discuss maximum likelihood estimation of the model's parameters. The superiority of the new model is illustrated by means of three real failure data sets. (C) 2010 Elsevier B.V. All rights reserved.
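For orientation, a minimal sketch of the beta-G construction behind the proposed model: the beta-Birnbaum-Saunders density obtained by inserting the Birnbaum-Saunders CDF (scipy's fatiguelife distribution) into the beta generator. The parameter values are arbitrary illustrations, not the paper's fitted estimates.

```python
# Minimal sketch: the beta-G density
#   f(x) = g(x) * G(x)**(a-1) * (1-G(x))**(b-1) / B(a, b)
# with G the Birnbaum-Saunders CDF (scipy 'fatiguelife': shape=alpha,
# scale=beta). Parameter values are arbitrary illustrations.
import numpy as np
from scipy.stats import fatiguelife
from scipy.special import beta as beta_fn
from scipy.integrate import trapezoid

def beta_bs_pdf(x, a, b, alpha, beta):
    G = fatiguelife.cdf(x, alpha, scale=beta)
    g = fatiguelife.pdf(x, alpha, scale=beta)
    return g * G**(a - 1.0) * (1.0 - G)**(b - 1.0) / beta_fn(a, b)

x = np.linspace(0.01, 5.0, 500)
pdf = beta_bs_pdf(x, a=2.0, b=3.0, alpha=0.5, beta=1.0)
print("integrates to ~1:", trapezoid(pdf, x))
```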
Abstract:
The interaction of 4-nerolidylcatechol (4-NRC), a potent antioxidant agent, with 2-hydroxypropyl-beta-cyclodextrin (HP-beta-CD) was investigated by the solubility method using Fourier transform infrared (FTIR) spectroscopy in addition to UV-vis, ^1H nuclear magnetic resonance (NMR) spectroscopy and molecular modeling. The inclusion complexes were prepared using grinding, kneading and freeze-drying methods. Phase solubility studies in water yielded a B_S-type diagram, indicating a complexation stoichiometry of 2:1 (drug:host) and a stability constant of 6494 +/- 837 M^-1. The stoichiometry was established by UV spectrophotometry using Job's plot method and was also confirmed by molecular modeling. Data from ^1H-NMR and FTIR experiments also provided evidence of the formation of an inclusion complex between 4-NRC and HP-beta-CD. Complexation indeed led to higher drug solubility and stability, which could be useful for improving the biological properties of 4-NRC and making it available for oral administration and topical formulations.
Abstract:
Mathematical modeling has been extensively applied to the study and development of fuel cells. In this work, the objective is to characterize a mechanistic model for the anode of a direct ethanol fuel cell and to perform appropriate simulations. The software Comsol Multiphysics® (with the Chemical Engineering Module) was used: it is an interactive environment for modeling scientific and engineering applications using partial differential equations (PDEs). Based on the finite element method, it provides speed and accuracy for several applications. The mechanistic model developed here can supply details of the physical system, such as the concentration profiles of the components within the anode and the coverage of the adsorbed species on the electrode surface. The anode overpotential-current relationship can also be obtained. To validate the anode model presented in this paper, experimental data obtained with a single fuel cell operating with an ethanol solution at the anode were used. (C) 2008 Elsevier B.V. All rights reserved.
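As a schematic stand-in for the kind of concentration profile such a mechanistic anode model produces (the paper itself uses the finite element method in Comsol Multiphysics®), here is a minimal finite-difference sketch of steady one-dimensional diffusion with first-order consumption; D, k, c0 and L are assumed values, not the paper's.

```python
# Minimal sketch: steady 1-D diffusion with first-order consumption,
#   D * c'' = k * c,  c(0) = c0,  dc/dx(L) = 0,
# solved by finite differences. A schematic stand-in only; D, k, c0, L
# are assumed values, and the paper used FEM, not this scheme.
import numpy as np

D, k, c0, L, n = 1e-9, 5e-3, 100.0, 50e-6, 101   # illustrative values (SI)
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0], b[0] = 1.0, c0                          # fixed concentration at x = 0
for i in range(1, n - 1):                        # interior nodes
    A[i, i - 1] = A[i, i + 1] = D / h**2
    A[i, i] = -2.0 * D / h**2 - k
A[-1, -1], A[-1, -2] = 1.0, -1.0                 # zero-flux (mirror) at x = L

c = np.linalg.solve(A, b)
print("surface c:", c[0], " depth-averaged c:", c.mean())
```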
Abstract:
Continuous casting is a casting process that produces steel slabs in a continuous manner, with steel being poured at the top of the caster and a steel strand emerging from the mould below. Molten steel is transferred from the AOD converter to the caster using a ladle. The ladle is designed to be strong and insulated, but complete insulation is never achieved: some of the heat is lost to the refractories by convection and conduction, and heat losses by radiation also occur. It is important to know the temperature of the melt during the process. For this reason, an online model was previously developed to simulate the steel and ladle wall temperatures during the ladle cycle, implemented as an ODE-based model using a grey-box modeling technique. The model's performance was acceptable, but its results needed to be presented in a user-friendly way. The aim of this thesis work was to design a GUI that presents the steel and ladle wall temperatures calculated by the model and allows the user to make adjustments to the model. This thesis also discusses a sensitivity analysis of the parameters involved and their effects on the different temperature estimates.
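A minimal sketch of a grey-box, ODE-based temperature model of the type described, with lumped melt and wall temperatures coupled by convection/conduction plus a radiative loss from the melt surface; all coefficients are illustrative assumptions, not the thesis's identified parameters.

```python
# Minimal sketch: a grey-box ODE pair for melt (Ts) and wall (Tw) temperatures,
#   m_s*cp_s * dTs/dt = -hA*(Ts - Tw) - q_rad
#   m_w*cp_w * dTw/dt =  hA*(Ts - Tw) - k_loss*(Tw - T_amb)
# All coefficients are illustrative assumptions, not identified parameters.
import numpy as np
from scipy.integrate import solve_ivp

m_s, cp_s = 100e3, 750.0        # steel mass [kg], heat capacity [J/kg/K]
m_w, cp_w = 30e3, 1000.0        # refractory wall mass and heat capacity
hA, k_loss, T_amb = 2.0e3, 5.0e3, 300.0
sigma, eps, A_top = 5.67e-8, 0.3, 7.0   # radiation from the melt surface

def rhs(t, y):
    Ts, Tw = y
    q_rad = sigma * eps * A_top * (Ts**4 - T_amb**4)
    dTs = (-hA * (Ts - Tw) - q_rad) / (m_s * cp_s)
    dTw = (hA * (Ts - Tw) - k_loss * (Tw - T_amb)) / (m_w * cp_w)
    return [dTs, dTw]

sol = solve_ivp(rhs, (0.0, 3600.0), [1873.0, 1400.0])
print("melt temperature after 1 h:", round(sol.y[0, -1], 1), "K")
```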
Abstract:
Using a physically based model, the microstructural evolution of Nb-microalloyed steels during rolling in SSAB Tunnplåt's hot strip mill was modeled. The model describes the evolution of dislocation density, the creation and diffusion of vacancies, dynamic and static recovery through climb and glide, subgrain formation and growth, dynamic and static recrystallization, and grain growth. The model also describes the dissolution and precipitation of particles, and the impeding effect of solute drag and particles on grain growth and recrystallization is accounted for. During hot strip rolling of Nb steels, Nb in solid solution retards recrystallization through solute drag, and at lower temperatures strain-induced precipitation of Nb(C,N) may occur, which effectively retards recrystallization. The flow stress behavior during hot rolling was calculated, with the mean flow stress values computed using both the model and measured mill data. The model showed that solute drag has an essential effect on recrystallization during hot rolling of Nb steels.
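For illustration, a minimal sketch of a Kocks-Mecking-type dislocation density evolution with a Taylor-equation flow stress, the basic mechanism underlying such physically based models; the constants are generic illustrative values, not the fitted ones used for SSAB's mill.

```python
# Minimal sketch: Kocks-Mecking-type dislocation density evolution,
#   d(rho)/d(eps) = k1*sqrt(rho) - k2*rho,
# with flow stress  sigma = sigma0 + alpha*M*G*b*sqrt(rho).
# Constants are generic illustrative values, not fitted mill parameters.
import numpy as np

k1, k2 = 1.5e8, 5.0              # storage [m^-1] and recovery coefficients
alpha, M, G, b = 0.3, 3.06, 8.1e10, 2.5e-10   # Taylor constants (Pa, m)
sigma0 = 50e6                    # friction stress [Pa]

eps = np.linspace(0.0, 0.5, 501)
rho = np.empty_like(eps)
rho[0] = 1e12                    # initial dislocation density [m^-2]
d_eps = eps[1] - eps[0]
for i in range(len(eps) - 1):    # explicit Euler integration in strain
    rho[i + 1] = rho[i] + d_eps * (k1 * np.sqrt(rho[i]) - k2 * rho[i])

sigma = sigma0 + alpha * M * G * b * np.sqrt(rho)
print(f"flow stress at eps=0.5: {sigma[-1] / 1e6:.0f} MPa")
```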
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association studies (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to correctly adjust the bias in genetic variance component estimation and to gain power, in terms of precision, in QTL mapping. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole-genome models were shown to perform well in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of the mean, which validated the idea of variance-controlling genes. The work in the thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
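As an illustration of the ridge-regression machinery behind packages such as bigRR, a minimal sketch of SNP effect estimation with many more markers than individuals, solved through the equivalent n x n system; the data are simulated and the penalty is assumed rather than tuned.

```python
# Minimal sketch: ridge regression for genomic prediction with p >> n,
# using the identity  beta_hat = X^T (X X^T + lam*I)^-1 y,
# which needs only an n x n solve. Simulated data; penalty not tuned.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 5000                               # individuals, SNP markers
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # genotype codes
X -= X.mean(axis=0)                            # center markers
beta_true = np.zeros(p)
beta_true[rng.choice(p, 20, replace=False)] = rng.normal(0, 0.5, 20)
y = X @ beta_true + rng.normal(0, 1.0, n)

lam = 100.0                                    # ridge penalty (assumed)
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
beta_hat = X.T @ alpha                         # SNP effect estimates

print("cor(beta_true, beta_hat):", np.corrcoef(beta_true, beta_hat)[0, 1])
```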
Abstract:
The gradual changes in world development have brought energy issues back into high profile. An ongoing challenge for countries around the world is to balance development gains against their effects on the environment. Energy management is a key factor in any sustainable development program, and all aspects of development in agriculture, power generation, social welfare and industry in Iran are crucially related to energy and its revenue. Forecasting end-use natural gas consumption is an important factor for efficient system operation and a basis for planning decisions. In this thesis, particle swarm optimization (PSO) is used to forecast long-run natural gas consumption in Iran. Gas consumption data in Iran for the previous 34 years are used to predict consumption in the coming years. Four linear and nonlinear models are proposed, and six factors, namely Gross Domestic Product (GDP), population, National Income (NI), temperature, Consumer Price Index (CPI) and yearly natural gas (NG) demand, are investigated.
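A minimal sketch of the PSO approach described: a swarm minimizes the squared forecast error of a linear demand model in the six factors. The swarm constants are textbook defaults and the data are random placeholders, not Iran's historical series.

```python
# Minimal sketch: PSO fitting a linear demand model
#   NG_hat = w0 + w1*GDP + ... + w6*CPI
# by minimizing squared error. Swarm constants are textbook defaults
# (inertia 0.7, c1 = c2 = 1.5); data are random placeholders.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(34, 6))             # 34 years x 6 factors (synthetic)
w_true = rng.normal(size=7)
y = w_true[0] + X @ w_true[1:]           # synthetic NG demand

def sse(w):                              # vectorized over the swarm
    pred = w[:, 0] + X @ w[:, 1:].T      # shape (34, n_particles)
    return np.sum((pred - y[:, None]) ** 2, axis=0)

n_part, dim = 30, 7
pos = rng.normal(size=(n_part, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sse(pos)
for it in range(500):
    gbest = pbest[pbest_val.argmin()]
    r1, r2 = rng.random((2, n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    val = sse(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]

print("best SSE:", pbest_val.min())
```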
Abstract:
By modeling the spectral energy distribution (SED) of the W3 IRS5 high-mass star formation region and matching this model to observed data, we can constrain the physical parameters of the basic system geometry and cloud mass distribution. From these parameters, we hope to add to the understanding of high-mass star formation processes. In particular, we hope to determine whether the geometries associated with low-mass star formation carry over into the high-mass regime.
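For orientation, a minimal sketch of the simplest building block of such an SED model, a single-temperature greybody; the temperature, emissivity index and normalizations are assumed values, not the W3 IRS5 fit.

```python
# Minimal sketch: a single-temperature greybody,
#   F_nu ~ B_nu(T) * (1 - exp(-tau_nu)),  tau_nu = (nu/nu0)**beta.
# T, beta, nu0 and the solid angle are assumed, not W3 IRS5 fits.
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 3.0e8      # SI constants

def planck(nu, T):
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def greybody(nu, T=40.0, beta=1.8, nu0=1.0e12, omega=1e-9):
    tau = (nu / nu0) ** beta               # optical depth law
    return omega * planck(nu, T) * (1.0 - np.exp(-tau))

nu = np.logspace(11, 13.5, 200)            # ~0.1-30 THz
sed = greybody(nu)
print("SED peaks near", round(nu[sed.argmax()] / 1e12, 2), "THz")
```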
Abstract:
This project used data from the National Park Service, the SRTM data set, and recorded weather conditions to predict snow deposition and snow and ice melt in Grand Canyon National Park. The model, a simplified version of those used in previous research, shows the location of persistent ice and snow on the Canyon slopes in March.
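A minimal sketch of the temperature-index (degree-day) mass balance that snow deposition and melt models of this kind reduce to; the degree-day factor, thresholds and weather series below are illustrative, not the project's calibrated inputs.

```python
# Minimal sketch: a degree-day snow mass balance -- accumulate precipitation
# as snow below a temperature threshold, melt at DDF * max(T - T_melt, 0).
# DDF, thresholds and the weather series are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
days = 120
T = 5.0 + 12.0 * np.sin(np.linspace(-np.pi / 2, np.pi, days)) \
    + rng.normal(0, 3, days)              # synthetic daily mean temp [C]
P = rng.gamma(0.5, 4.0, days)             # synthetic daily precip [mm]

DDF, T_snow, T_melt = 4.0, 0.0, 0.0       # mm/(C*day), thresholds [C]
swe = np.zeros(days)                      # snow water equivalent [mm]
for d in range(1, days):
    accum = P[d] if T[d] < T_snow else 0.0
    melt = DDF * max(T[d] - T_melt, 0.0)
    swe[d] = max(swe[d - 1] + accum - melt, 0.0)

print("max SWE:", round(swe.max(), 1), "mm; days with snow cover:",
      int((swe > 0).sum()))
```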
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
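As an illustration of the Resource concept described, a minimal sketch of a resource object that separates system metadata (elements common to all resources) from type-specific science metadata; the field names are illustrative, not HydroShare's actual schema.

```python
# Minimal sketch: a Resource separating system metadata (common to all
# resources) from type-specific science metadata. Field names are
# illustrative assumptions, not HydroShare's actual schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SystemMetadata:                      # elements common to all resources
    identifier: str
    owner: str
    created: str
    sharing_status: str = "private"

@dataclass
class Resource:
    system: SystemMetadata
    resource_type: str                     # e.g. time series, model, workflow
    science_metadata: Dict[str, str] = field(default_factory=dict)
    files: List[str] = field(default_factory=list)

model_run = Resource(
    system=SystemMetadata("hs.0001", "alice", "2014-03-01"),
    resource_type="ModelInstance",
    science_metadata={"model_program": "SWAT", "spatial_coverage": "basin X"},
)
print(model_run.resource_type, model_run.system.sharing_status)
```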
Abstract:
In this research, the 3DVAR data assimilation scheme is implemented in the numerical model DIVAST in order to optimize the performance of the numerical model by selecting an appropriate turbulence scheme and tuning its parameters. Two turbulence closure schemes, the Prandtl mixing length model and the two-equation k-ε model, were incorporated into DIVAST and examined with respect to their universality of application, complexity of solutions, computational efficiency and numerical stability. A square harbour with one symmetrical entrance subject to tide-induced flows was selected to investigate the structure of turbulent flows. The experimental part of the research was conducted in a tidal basin; a significant advantage of such a laboratory experiment is a fully controlled environment in which the domain setup and forcing are user-defined. The research shows that the Prandtl mixing length model and the two-equation k-ε model, with default parameterization predefined according to literature recommendations, overestimate eddy viscosity, which in turn results in a significant underestimation of velocity magnitudes in the harbour. Assimilating the model-predicted velocities with the laboratory observations significantly improves the predictions of both turbulence models by adjusting the modelled flows in the harbour to match the de-errored observations. 3DVAR also makes it possible to identify and quantify shortcomings of the numerical model. Such comprehensive analysis gives an optimal solution from which the numerical model parameters can be estimated. The process of turbulence model optimization by reparameterization and tuning towards the optimal state led to new constants that may potentially be applied to complex turbulent flows, such as rapidly developing or recirculating flows.
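For reference, a minimal sketch of the 3DVAR analysis step: minimizing J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx), here via its closed-form minimizer on a toy velocity field; B, R, H and all values are assumptions for illustration, not DIVAST's configuration.

```python
# Minimal sketch: the 3DVAR analysis step. The cost function
#   J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx)
# has the closed-form minimizer  xa = xb + K (y - H xb),
#   K = B H^T (H B H^T + R)^-1.
# Toy state and diagonal B, R are illustrative assumptions.
import numpy as np

n, m = 10, 4                              # state size, observation count
xb = np.full(n, 0.20)                     # background velocities [m/s]
H = np.zeros((m, n))
H[np.arange(m), [1, 3, 5, 7]] = 1.0       # observe four grid points
y = np.array([0.35, 0.30, 0.38, 0.33])    # laboratory observations [m/s]

B = 0.05**2 * np.eye(n)                   # background-error covariance
R = 0.01**2 * np.eye(m)                   # observation-error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
xa = xb + K @ (y - H @ xb)                     # analysis state
print("analysis at observed points:", np.round(xa[[1, 3, 5, 7]], 3))
```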