969 results for computational models


Relevance:

30.00%

Publisher:

Abstract:

One of the first questions to consider when designing a new roll forming line is the number of forming steps required to produce a profile. The number depends on the material properties, the cross-section geometry and the tolerance requirements, but the tool designer also wants to minimize the number of forming steps in order to reduce the investment costs for the customer. There are several computer-aided engineering systems on the market that can assist the tool design process. These include relatively simple formulas to predict deformation during forming as well as the number of forming steps. In recent years it has also become possible to use finite element analysis for the design of roll forming processes. The objective of the work presented in this thesis was to answer the following question: How should the roll forming process be designed for complex geometries and/or high strength steels? The work approach included literature studies as well as experimental and modelling work. The experimental part gave direct insight into the process and was also used to develop and validate models of the process. Starting with simple geometries and standard steels, the work progressed to more complex profiles of variable depth and width, made of high strength steels. The results obtained are published in seven papers appended to this thesis. In the first study (see paper 1) a finite element model for investigating the roll forming of a U-profile was built. It was used to investigate the effect on the longitudinal peak membrane strain and the deformation length when the yield strength increases, see papers 2 and 3. The simulations showed that the peak strain decreases whereas the deformation length increases when the yield strength increases. The studies described in papers 4 and 5 measured roll load, roll torque, springback and strain history during the U-profile forming process. The measurement results were used to validate the finite element model of paper 1. The results presented in paper 6 show that the formability of stainless steel (e.g. AISI 301), which in the cold rolled condition has a large martensite fraction, can be substantially increased by heating the bending zone. The heated area then becomes austenitic and ductile before roll forming. Thanks to the phenomenon of strain-induced martensite formation, the steel regains its martensite content and its strength during the subsequent plastic straining. Finally, a new tooling concept for profiles with variable cross-sections is presented in paper 7. The overall conclusions of the present work are that it is possible today to successfully develop profiles of complex geometries (3D roll forming) in high strength steels, and that finite element simulation can be a useful tool in the design of the roll forming process.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a two-step pseudo-likelihood estimation technique for generalized linear mixed models with random effects that are correlated between groups. The core idea is to handle the intractable integrals in the likelihood function by a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo-likelihood algorithms in terms of computational time.
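
As a rough illustration of how such intractable integrals can be approximated, the sketch below fits a random-intercept logistic model by maximizing a Laplace-approximated marginal likelihood on simulated data. It is only a minimal sketch of the general idea: the paper's two-step scheme, its multivariate Taylor expansion, and the Swedish credit data are not reproduced, and the simulated data set, model structure and optimizer settings are assumptions.

    # Minimal sketch: Laplace-approximated marginal likelihood for a
    # random-intercept logistic GLMM (illustrative; not the paper's algorithm).
    import numpy as np
    from scipy.optimize import minimize, minimize_scalar

    rng = np.random.default_rng(0)

    # Simulated data: 30 groups, 20 observations each, one covariate.
    n_groups, n_obs = 30, 20
    x = rng.normal(size=(n_groups, n_obs))
    u_true = rng.normal(scale=0.8, size=n_groups)            # random intercepts
    eta = -0.5 + 1.2 * x + u_true[:, None]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

    def neg_marginal_loglik(params):
        """Approximate -log L(beta, sigma) by a Laplace approximation of the
        integral over each group's random intercept."""
        b0, b1, log_sigma = params
        sigma = np.exp(log_sigma)
        total = 0.0
        for i in range(n_groups):
            xi, yi = x[i], y[i]

            def neg_joint(u):
                # -[log p(y_i | u) + log p(u)] up to an additive constant
                lin = b0 + b1 * xi + u
                ll = np.sum(yi * lin - np.log1p(np.exp(lin)))
                return -(ll - 0.5 * u ** 2 / sigma ** 2)

            res = minimize_scalar(neg_joint)                  # mode of the integrand
            u_hat = res.x
            p = 1.0 / (1.0 + np.exp(-(b0 + b1 * xi + u_hat)))
            hess = np.sum(p * (1 - p)) + 1.0 / sigma ** 2     # curvature at the mode
            total += -res.fun - np.log(sigma) - 0.5 * np.log(hess)
        return -total

    fit = minimize(neg_marginal_loglik, x0=np.array([0.0, 1.0, 0.0]),
                   method="Nelder-Mead")
    print("estimated (b0, b1, sigma):", fit.x[0], fit.x[1], np.exp(fit.x[2]))

A two-step scheme of the kind described above would additionally separate the estimation of the fixed effects from that of the (correlated) variance components; the sketch only shows the shape of the approximated integral.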

Relevance:

30.00%

Publisher:

Abstract:

This study presents an approach to combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. Such machine-learning-based uncertainty prediction is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach the hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output of the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to the new input data. We used three machine learning models, namely artificial neural networks, model trees and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
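
A minimal sketch of the committee idea is given below (illustrative only, not the MLUE implementation of [1]): three regressors are trained to map antecedent precipitation and streamflow to the 95% quantile of synthetic Monte Carlo realizations, and their predictions are merged with weights inversely proportional to their validation error. The synthetic data, the feature choice and the use of k-nearest-neighbour regression as a stand-in for locally weighted regression are assumptions.

    # Committee of uncertainty models (illustrative sketch): three regressors
    # predict a quantile of the Monte Carlo output distribution and are merged
    # with weights inversely proportional to their validation error.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.neighbors import KNeighborsRegressor  # stand-in for locally weighted regression
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(1)

    # Synthetic example: inputs = antecedent precipitation and streamflow,
    # target = 95% quantile of 100 Monte Carlo streamflow realizations.
    n = 500
    precip = rng.gamma(2.0, 5.0, size=n)
    flow_prev = 0.6 * precip + rng.normal(0, 2, size=n)
    X = np.column_stack([precip, flow_prev])
    mc_runs = flow_prev[:, None] + rng.normal(0, 1 + 0.1 * precip[:, None], size=(n, 100))
    q95 = np.quantile(mc_runs, 0.95, axis=1)

    train, valid = slice(0, 350), slice(350, None)
    models = [
        MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
        DecisionTreeRegressor(max_depth=6, random_state=0),
        KNeighborsRegressor(n_neighbors=15, weights="distance"),
    ]
    for m in models:
        m.fit(X[train], q95[train])

    # Committee weights: inverse validation MSE, normalized to sum to one.
    errors = np.array([mean_squared_error(q95[valid], m.predict(X[valid])) for m in models])
    weights = (1.0 / errors) / np.sum(1.0 / errors)

    def committee_predict(x_new):
        """Weighted merge of the three models' quantile predictions."""
        preds = np.array([m.predict(x_new) for m in models])
        return weights @ preds

    print("merged 95% quantile for three validation inputs:",
          np.round(committee_predict(X[valid][:3]), 2))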

Relevance:

30.00%

Publisher:

Abstract:

In this research the 3DVAR data assimilation scheme is implemented in the numerical model DIVAST in order to optimize the performance of the numerical model by selecting an appropriate turbulence scheme and tuning its parameters. Two turbulence closure schemes, the Prandtl mixing-length model and the two-equation k-ε model, were incorporated into DIVAST and examined with respect to their universality of application, complexity of solution, computational efficiency and numerical stability. A square harbour with one symmetrical entrance subject to tide-induced flows was selected to investigate the structure of turbulent flows. The experimental part of the research was conducted in a tidal basin. A significant advantage of such a laboratory experiment is a fully controlled environment in which the domain setup and forcing are user-defined. The research shows that the Prandtl mixing-length model and the two-equation k-ε model, with default parameterization predefined according to literature recommendations, overestimate eddy viscosity, which in turn results in a significant underestimation of velocity magnitudes in the harbour. Assimilating the model-predicted velocities and the laboratory observations significantly improves the predictions of both turbulence models by adjusting the modelled flows in the harbour to match the de-errored observations. 3DVAR also makes it possible to identify and quantify shortcomings of the numerical model. Such a comprehensive analysis gives an optimal solution from which the numerical model parameters can be estimated. The process of turbulence model optimization by reparameterization and tuning towards the optimal state led to new constants that may potentially be applied to complex turbulent flows, such as rapidly developing or recirculating flows.
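
For reference, the analysis step of a generic 3DVAR scheme can be written as the minimization of a cost function that penalizes departures from the model background and from the observations. The sketch below is a toy example with an assumed observation operator and assumed covariances; it is not the DIVAST implementation.

    # Generic 3DVAR sketch (not the DIVAST implementation): minimize
    #   J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx)
    import numpy as np
    from scipy.optimize import minimize

    n_state, n_obs = 6, 3
    rng = np.random.default_rng(2)

    xb = rng.normal(size=n_state)                     # background state (model prediction)
    H = np.zeros((n_obs, n_state))                    # observation operator (assumed)
    H[0, 1] = H[1, 3] = H[2, 5] = 1.0
    x_true = xb + rng.normal(0, 0.5, size=n_state)
    y = H @ x_true + rng.normal(0, 0.1, size=n_obs)   # observations

    B = 0.25 * np.eye(n_state)                        # background error covariance (assumed)
    R = 0.01 * np.eye(n_obs)                          # observation error covariance (assumed)
    B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

    def cost(x):
        db, do = x - xb, y - H @ x
        return db @ B_inv @ db + do @ R_inv @ do

    def grad(x):
        return 2 * B_inv @ (x - xb) - 2 * H.T @ R_inv @ (y - H @ x)

    xa = minimize(cost, xb, jac=grad, method="L-BFGS-B").x   # analysis state
    print("background:", np.round(xb, 3))
    print("analysis:  ", np.round(xa, 3))

In the study above, the background corresponds to the velocity field predicted by DIVAST and the observations to the de-errored laboratory measurements; the analysis is the adjusted flow field used to assess and retune the turbulence model.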

Relevance:

30.00%

Publisher:

Abstract:

Point pattern matching in Euclidean spaces is one of the fundamental problems in pattern recognition, with applications ranging from computer vision to computational chemistry. Whenever two complex patterns are encoded by two sets of points identifying their key features, their comparison can be seen as a point pattern matching problem. This work proposes a single approach to both exact and inexact point set matching in Euclidean spaces of arbitrary dimension. In the case of exact matching, the method is guaranteed to find an optimal solution. For inexact matching (when noise is involved), experimental results confirm the validity of the approach. We start by regarding point pattern matching as a weighted graph matching problem. We then formulate the weighted graph matching problem as one of Bayesian inference in a probabilistic graphical model. By exploiting the fundamental constraints of patterns embedded in Euclidean spaces, we prove that for exact point set matching a simple graphical model is equivalent to the full model. Exact probabilistic inference in this simple model has polynomial time complexity with respect to the number of elements in the patterns to be matched. This gives rise to a technique that, for exact matching, provably finds a global optimum in polynomial time for any dimensionality of the underlying Euclidean space. Computational experiments comparing this technique with well-known probabilistic relaxation labeling show significant performance improvement for inexact matching. The proposed approach is significantly more robust under augmentation of the sizes of the involved patterns. In the absence of noise, the results are always perfect.
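
The exact-matching setting can be illustrated with a far simpler device than the graphical-model formulation above: under a rigid motion, corresponding points preserve all pairwise distances, so a correspondence can be verified by comparing distance matrices. The brute-force sketch below only illustrates that invariance on a toy example; it is not the paper's polynomial-time inference algorithm and does not handle noise.

    # Illustration of the distance-preservation constraint behind exact point
    # matching (brute force over permutations; NOT the polynomial-time method).
    import numpy as np
    from itertools import permutations
    from scipy.spatial.distance import cdist

    def exact_match(P, Q, tol=1e-8):
        """Return a tuple pi with Q[pi] congruent to P, or None.

        P, Q: (n, d) arrays of points. Two point sets match exactly iff some
        relabelling of Q reproduces all pairwise distances of P, because
        rigid motions preserve Euclidean distances."""
        dP, dQ = cdist(P, P), cdist(Q, Q)
        for pi in permutations(range(len(Q))):
            if np.allclose(dP, dQ[np.ix_(pi, pi)], atol=tol):
                return pi
        return None

    # Small example: Q is a rotated, translated and shuffled copy of P.
    rng = np.random.default_rng(3)
    P = rng.normal(size=(5, 2))
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    perm = rng.permutation(5)
    Q = (P @ R.T + np.array([2.0, -1.0]))[perm]

    print("recovered correspondence:", exact_match(P, Q))
    print("ground-truth permutation:", tuple(np.argsort(perm)))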

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

This work presents an analysis of the wavelet-Galerkin method for one-dimensional elastoplastic-damage problems. A time-stepping algorithm for non-linear dynamics is presented. The numerical treatment of the constitutive models is developed by means of a return-mapping algorithm. For spatial discretization, the wavelet-Galerkin method can be used instead of the standard finite element method. This approach makes it possible to locate singularities. The discrete formulation developed can be applied to the simulation of one-dimensional problems for elastic-plastic-damage models. (C) 2007 Elsevier B.V. All rights reserved.
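
The return-mapping update for a one-dimensional material point can be sketched as below. This is a generic elastic-predictor/plastic-corrector step with linear isotropic hardening and a simple exponential damage law; the material constants and the damage evolution law are illustrative assumptions, not the constitutive model analysed in the paper.

    # Minimal 1D return-mapping sketch: elastic predictor / plastic corrector
    # with linear isotropic hardening and an assumed exponential damage law.
    import numpy as np

    E, H, sigma_y = 200e3, 10e3, 250.0   # Young's modulus, hardening modulus, yield stress [MPa]
    a_damage = 50.0                      # damage law parameter (assumed)

    def return_mapping(eps, eps_p, alpha):
        """Update stress and internal variables for total strain eps, given the
        previous plastic strain eps_p and hardening variable alpha."""
        sigma_trial = E * (eps - eps_p)                       # elastic predictor
        f_trial = abs(sigma_trial) - (sigma_y + H * alpha)    # yield function
        if f_trial > 0.0:                                     # plastic corrector
            dgamma = f_trial / (E + H)                        # plastic multiplier
            eps_p += dgamma * np.sign(sigma_trial)
            alpha += dgamma
        sigma_eff = E * (eps - eps_p)                         # effective stress
        damage = 1.0 - np.exp(-a_damage * alpha)              # scalar damage (assumed law)
        return (1.0 - damage) * sigma_eff, eps_p, alpha       # nominal stress

    # Drive the material point through a monotonic strain history.
    eps_p, alpha = 0.0, 0.0
    for eps in np.linspace(0.0, 0.01, 11):
        sigma, eps_p, alpha = return_mapping(eps, eps_p, alpha)
        print(f"eps={eps:.4f}  sigma={sigma:8.2f}  eps_p={eps_p:.5f}")

In a Galerkin setting such an update is typically carried out at the quadrature points of the spatial discretization inside each time step of the non-linear dynamic algorithm.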

Relevance:

30.00%

Publisher:

Abstract:

A comparative study of aggregation error bounds for the generalized transportation problem is presented. A priori and a posteriori error bounds were derived, and a computational study was performed to (a) test the correlation between the a priori, the a posteriori and the actual error and (b) quantify how far the error bounds deviate from the actual error. Based on the results, we conclude that calculating the a priori error bound is a useful strategy for selecting the appropriate aggregation level. The a posteriori error bound provides a good quantitative measure of the actual error.
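
The kind of experiment behind such a computational study can be reproduced in miniature: solve the original problem and a demand-aggregated version, and measure the actual aggregation error as the difference between the two optimal values. The sketch below uses the classical transportation problem with randomly generated illustrative data; the paper's specific a priori and a posteriori bound formulas are not reproduced.

    # Miniature aggregation experiment for a transportation problem
    # (random illustrative data; the paper's bound formulas are not reproduced).
    import numpy as np
    from scipy.optimize import linprog

    def solve_transport(cost, supply, demand):
        """Minimize sum c_ij x_ij subject to row sums <= supply, column sums = demand."""
        m, n = cost.shape
        A_ub = np.zeros((m, m * n))              # supply constraints
        for i in range(m):
            A_ub[i, i * n:(i + 1) * n] = 1.0
        A_eq = np.zeros((n, m * n))              # demand constraints
        for j in range(n):
            A_eq[j, j::n] = 1.0
        res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply,
                      A_eq=A_eq, b_eq=demand, bounds=(0, None), method="highs")
        return res.fun

    rng = np.random.default_rng(4)
    m, n = 3, 8                                  # 3 sources, 8 demand points
    cost = rng.uniform(1, 10, size=(m, n))
    demand = rng.uniform(5, 15, size=n)
    supply = np.full(m, demand.sum() / m + 5)

    # Aggregate demand points pairwise; costs become demand-weighted averages.
    groups = [list(range(k, k + 2)) for k in range(0, n, 2)]
    agg_demand = np.array([demand[g].sum() for g in groups])
    agg_cost = np.column_stack([(cost[:, g] * demand[g]).sum(axis=1) / demand[g].sum()
                                for g in groups])

    z_full = solve_transport(cost, supply, demand)
    z_agg = solve_transport(agg_cost, supply, agg_demand)
    print(f"original optimum: {z_full:.2f}  aggregated optimum: {z_agg:.2f}  "
          f"actual error: {abs(z_full - z_agg):.2f}")

An a priori bound is computed from the problem data alone, before the aggregated problem is solved, whereas an a posteriori bound uses the aggregated solution; the correlation of both with an actual error of the kind computed above is what the study quantifies.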

Relevance:

30.00%

Publisher:

Abstract:

This article introduces an efficient method to generate structural models for medium-sized silicon clusters. Geometrical information obtained from previous investigations of small clusters is first sorted and then introduced into our predictor algorithm in order to generate structural models for larger clusters. The method predicts geometries whose binding energies are close (about 95%) to the corresponding ground-state value, at very low computational cost. These predictions can be used as a very good initial guess for any global optimization algorithm. As a test case, information from clusters of up to 14 atoms was used to predict good models for silicon clusters of up to 20 atoms. We believe that the new algorithm may enhance the performance of most optimization methods whenever some previous information is available. (C) 2003 Wiley Periodicals, Inc.
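
The flavour of such a predictor can be conveyed with a toy sketch: a candidate N-atom geometry is grown from a known (N-1)-atom geometry by attaching one atom at the lowest-energy trial position, and the result serves as a cheap starting structure for a subsequent global optimization. The sketch is purely illustrative; it uses a generic Lennard-Jones pair potential rather than a realistic silicon potential, and it is not the sorting-based predictor algorithm of the article.

    # Toy "seed" generator: grow an N-atom candidate from a known (N-1)-atom
    # geometry by placing the new atom at the lowest-energy random trial position
    # under a Lennard-Jones pair potential (illustrative only).
    import numpy as np

    def pair_energy(positions, eps=1.0, sigma=2.35):
        """Total Lennard-Jones energy of a set of atomic positions."""
        e, n = 0.0, len(positions)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(positions[i] - positions[j])
                e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
        return e

    def grow_cluster(parent, n_trials=2000, seed=0):
        """Attach one atom to `parent` at the best of n_trials random surface positions."""
        rng = np.random.default_rng(seed)
        center = parent.mean(axis=0)
        radius = np.max(np.linalg.norm(parent - center, axis=1)) + 2.35
        best_pos, best_e = None, np.inf
        for _ in range(n_trials):
            d = rng.normal(size=3)
            candidate = center + radius * d / np.linalg.norm(d)
            e = pair_energy(np.vstack([parent, candidate]))
            if e < best_e:
                best_pos, best_e = candidate, e
        return np.vstack([parent, best_pos]), best_e

    # Example: grow a 5-atom seed from a 4-atom tetrahedron-like geometry.
    parent = np.array([[0.0, 0.0, 0.0], [2.35, 0.0, 0.0],
                       [1.17, 2.0, 0.0], [1.17, 0.7, 1.9]])
    seed_geometry, energy = grow_cluster(parent)
    print("candidate geometry:\n", np.round(seed_geometry, 2))
    print("pair-potential energy of the seed:", round(energy, 3))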

Relevance:

30.00%

Publisher:

Abstract:

A minimalist representation of protein structures using a Go-like potential for interactions is implemented to investigate the mechanisms of the domain swapping of p13suc1, a protein that exists in two native conformations: a monomer and a domain-swapped dimer formed by the exchange of a beta-strand. Inspired by experimental studies which showed a similarity between the transition states for folding of the monomer and the dimer, in this study we justify this similarity in molecular terms. When intermediates are populated in the simulations, the formation of a domain-swapped dimer initiates from the ensemble of unfolded monomers, as indicated by the fact that dimer formation occurs at the folding/unfolding temperature of the monomer (T_f). It is also shown that the transitions leading to a dimer involve two intermediates, one dimeric and the other monomeric; the latter is much more populated than the former. However, at temperatures lower than T_f the population of intermediates decreases. It is argued that the two folded forms may coexist in the absence of intermediates at a temperature much lower than T_f. The computational simulations enable us to identify a mechanism, "lock-and-dock", for the domain swapping of p13suc1. Along the route toward dimer formation, the folding of unstructured monomers must be retarded by first locking one of the free ends of each chain. The other free termini can then follow and dock at particular regions where most intrachain contacts are formed, and these define the transition states of the dimer. The simulations also showed that a decrease in the maximum distance between monomers increased their stability, which is explained by confinement arguments. Although the simulations are based on models extracted from the native structures of the monomer and the dimer of p13suc1, the mechanism of the domain-swapping process could be general, and not restricted to p13suc1.
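
The non-bonded part of a Go-like potential of this kind can be sketched compactly: native contacts (pairs close in the native structure) interact through attractive 12-10 wells centred at their native distances, while all other pairs are purely repulsive. The cutoff, energy scale and the toy helical geometry below are illustrative assumptions, not the exact parameterization used for p13suc1.

    # Sketch of the non-bonded terms of a Go-like C-alpha potential:
    # 12-10 wells for native contacts, repulsive cores for everything else.
    import numpy as np

    def native_contacts(native_xyz, cutoff=8.0, min_sep=3):
        """Pairs (i, j, r0) within `cutoff` angstroms in the native structure,
        separated by at least `min_sep` residues along the chain."""
        n, contacts = len(native_xyz), []
        for i in range(n):
            for j in range(i + min_sep, n):
                r0 = np.linalg.norm(native_xyz[i] - native_xyz[j])
                if r0 < cutoff:
                    contacts.append((i, j, r0))
        return contacts

    def go_energy(xyz, contacts, eps=1.0, rep_sigma=4.0):
        """Non-bonded Go-like energy of configuration `xyz` (each native
        contact contributes -eps at its native distance r0)."""
        n = len(xyz)
        native_pairs = {(i, j) for i, j, _ in contacts}
        e = 0.0
        for i, j, r0 in contacts:                        # 12-10 native-contact wells
            r = np.linalg.norm(xyz[i] - xyz[j])
            e += eps * (5 * (r0 / r) ** 12 - 6 * (r0 / r) ** 10)
        for i in range(n):                               # repulsion for non-native pairs
            for j in range(i + 3, n):
                if (i, j) not in native_pairs:
                    r = np.linalg.norm(xyz[i] - xyz[j])
                    e += eps * (rep_sigma / r) ** 12
        return e

    # Tiny example: an idealized 10-residue helix evaluated at its own geometry.
    t = np.arange(10)
    native = np.column_stack([2.3 * np.cos(np.radians(100 * t)),
                              2.3 * np.sin(np.radians(100 * t)),
                              1.5 * t])
    contacts = native_contacts(native)
    print("native contacts:", len(contacts))
    print("energy at the native geometry:", round(go_energy(native, contacts), 3))

For dimer simulations of the kind described above, the same machinery would be applied to two chains, with interchain native contacts drawn from the domain-swapped dimer structure.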

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a methodology to incorporate voltage/reactive representation into Short Term Generation Scheduling (STGS) models, based on the active/reactive decoupling characteristics of power systems. In this approach STGS is decoupled into Active (AGS) and Reactive (RGS) Generation Scheduling models. The AGS model establishes an initial active generation schedule through a traditional dispatch model. The schedule proposed by the AGS model is then evaluated from the voltage/reactive point of view through the proposed RGS model. RGS is formulated as a sequence of T nonlinear OPF problems, solved separately but taking into account load tracking between consecutive time intervals. This approach considerably reduces the computational effort needed to perform the reactive analysis of the RGS problem as a whole. When necessary, the RGS model can propose active generation redispatches so that critical reactive problems (those in which the reactive control variables alone are insufficient) can be overcome. The formulation and solution methodology proposed are evaluated on the IEEE 30-bus system in two case studies. These studies show that the methodology is robust enough to incorporate reactive aspects into the STGS problem.
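
The decoupled structure can be illustrated with the sketch below: the AGS step is reduced to a simple linear economic dispatch per interval, and the RGS evaluation of each interval is represented by check_reactive_feasibility, a hypothetical placeholder standing in for the nonlinear OPF of the proposed model. All generator data and the redispatch rule are assumptions.

    # Structural sketch of the decoupled AGS/RGS loop (illustrative only).
    # check_reactive_feasibility is a hypothetical placeholder for the
    # per-interval nonlinear OPF of the RGS model.
    import numpy as np
    from scipy.optimize import linprog

    gen_cost = np.array([20.0, 35.0, 50.0])        # $/MWh for three generators (assumed)
    gen_max = np.array([120.0, 80.0, 60.0])        # MW limits (assumed)
    load = np.array([150.0, 180.0, 210.0, 170.0])  # MW demand over T = 4 intervals (assumed)

    def ags_dispatch(demand):
        """Active Generation Scheduling step: minimum-cost dispatch meeting demand."""
        res = linprog(gen_cost, A_eq=np.ones((1, 3)), b_eq=[demand],
                      bounds=list(zip(np.zeros(3), gen_max)), method="highs")
        return res.x

    def check_reactive_feasibility(p_gen, demand):
        """Hypothetical stand-in for the RGS nonlinear OPF of one interval.
        Returns (feasible, dispatch), possibly with an active redispatch."""
        # Placeholder rule: flag intervals where one unit carries > 70% of the load.
        if np.max(p_gen) > 0.7 * demand:
            redispatch = p_gen.copy()
            shift = np.max(p_gen) - 0.7 * demand
            redispatch[np.argmax(p_gen)] -= shift                  # unload the heaviest unit
            redispatch[np.argmax(gen_max - redispatch)] += shift   # move it to the unit with most headroom
            return False, redispatch
        return True, p_gen

    # Decoupled loop: AGS proposes a schedule, RGS evaluates it interval by interval.
    for t, demand in enumerate(load):
        p = ags_dispatch(demand)
        ok, p_final = check_reactive_feasibility(p, demand)
        status = "accepted" if ok else "redispatched for reactive support"
        print(f"t={t}: dispatch = {np.round(p_final, 1)} MW ({status})")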

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

This paper considers the importance of using a top-down methodology and suitable CAD tools in the development of electronic circuits. The paper presents an evaluation of the methodology used in a computational tool created to support the synthesis of digital-to-analog converter models by translating between different tools used in a wide variety of applications. This tool is named MS 2SV and works directly with the following two commercial tools: MATLAB/Simulink and SystemVision. Model translation of an electronic circuit is achieved by translating a mixed-signal block diagram developed in Simulink into a lower level of abstraction in VHDL-AMS, together with the simulation project support structure in SystemVision. The method was validated by analyzing the power spectrum, obtained via the discrete Fourier transform, of the signal produced by a digital-to-analog converter simulation model. © 2011 IEEE.
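
The spectral validation step, computing the power spectrum of a converter's output via the discrete Fourier transform, can be illustrated with the short sketch below. It models a generic ideal N-bit DAC reproducing a sine wave; the resolution, sample rate and windowing are illustrative assumptions, not the settings used with MS 2SV.

    # Power spectrum (via the DFT) of the output of an ideal N-bit DAC driven by
    # a quantized sine wave, with a rough SNR estimate (illustrative settings).
    import numpy as np

    n_bits, fs, f0, n_samples = 10, 1.0e6, 9.7e3, 4096
    t = np.arange(n_samples) / fs

    # Ideal quantizer/DAC pair: quantize a full-scale sine to 2**n_bits levels.
    levels = 2 ** n_bits
    codes = np.round((np.sin(2 * np.pi * f0 * t) * 0.5 + 0.5) * (levels - 1))
    dac_out = codes / (levels - 1) - 0.5            # reconstructed analog value

    # Windowed DFT and single-sided power spectrum.
    window = np.hanning(n_samples)
    spectrum = np.fft.rfft(dac_out * window)
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(n_samples, d=1 / fs)

    # Rough SNR estimate: carrier bin (and its neighbours) versus everything else.
    carrier = np.argmax(power[1:]) + 1              # skip the DC bin
    signal_bins = slice(max(carrier - 3, 1), carrier + 4)
    p_signal = power[signal_bins].sum()
    p_noise = power[1:].sum() - p_signal
    print(f"carrier at {freqs[carrier] / 1e3:.1f} kHz, "
          f"estimated SNR = {10 * np.log10(p_signal / p_noise):.1f} dB")

A spectrum of this kind, computed from the simulated converter output, provides the quantitative basis for such a validation.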