943 results for Computational Mechanics, Numerical Analysis, Meshfree Method, Meshless Method, Time Dependent, MEMS
Abstract:
In this paper, we present a framework for Bayesian inference in continuous-time diffusion processes. The new method is directly related to the recently proposed variational Gaussian process approximation (VGPA) approach to Bayesian smoothing of partially observed diffusions. By adopting a basis function expansion (BF-VGPA), both the time-dependent control parameters of the approximate GP process and its moment equations are projected onto a lower-dimensional subspace. This allows us both to reduce the computational complexity and to eliminate the time discretisation used in the previous algorithm. The new algorithm is tested on an Ornstein-Uhlenbeck process. Our preliminary results show that the BF-VGPA algorithm provides reasonably accurate state estimation using a small number of basis functions.
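The basis-function projection at the heart of BF-VGPA can be illustrated in miniature: a time-dependent parameter sampled on a dense grid is compressed into a handful of basis coefficients by least squares. The Gaussian radial basis, the example function, and all sizes below are illustrative choices, not those of the paper.

```python
import numpy as np

def rbf_basis(t, centers, width):
    """Gaussian radial basis functions evaluated at times t."""
    return np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)

# Dense time grid standing in for a continuous-time control parameter.
t = np.linspace(0.0, 10.0, 500)
a_true = np.sin(t) + 0.1 * t            # illustrative time-dependent parameter

# Project onto a small number of basis functions (least squares).
centers = np.linspace(0.0, 10.0, 12)
Phi = rbf_basis(t, centers, width=0.8)   # (500, 12) design matrix
coeffs, *_ = np.linalg.lstsq(Phi, a_true, rcond=None)
a_approx = Phi @ coeffs

# 12 coefficients now represent the 500-point trajectory.
err = np.max(np.abs(a_approx - a_true))
print(coeffs.shape, float(err))
```

The moment equations of the approximate process are projected onto the same subspace in the paper, so the optimisation runs over a few coefficients rather than a discretised time grid.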
Abstract:
Issues of wear and tribology are increasingly important in computer hard drives as slider flying heights become lower and disk protective coatings thinner to minimise spacing loss and allow higher areal density. Friction, stiction and wear between the slider and disk in a hard drive were studied using Accelerated Friction Test (AFT) apparatus. Contact Start Stop (CSS) and constant-speed drag tests were performed using commercial rigid disks and two different air bearing slider types. Friction and stiction were captured during testing by a set of strain gauges. System parameters were varied to investigate their effect on tribology at the head/disk interface. The chosen parameters were disk spinning velocity, slider fly height, temperature, humidity and intercycle pause. The effect of different disk texturing methods was also studied. Models were proposed to explain the influence of these parameters on tribology. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) were used to study head and disk topography at various test stages and to provide physical parameters to verify the models. X-ray Photoelectron Spectroscopy (XPS) was employed to identify surface composition and determine whether any chemical changes had occurred as a result of testing. The parameters most likely to influence the interface were identified for both CSS and drag testing. Neural network modelling was used to substantiate the results. Topographical AFM scans of disk and slider were exported numerically to file and explored extensively. Techniques were developed that improved line and area analysis. A method for detecting surface contacts was also deduced, whose results supported and explained the observed AFT behaviour. Finally, surfaces were computer-generated to simulate real disk scans; this allowed contact analysis of many types of surface to be performed. Conclusions were drawn about which disk characteristics most affected contacts and hence friction, stiction and wear.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a SINE drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
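The proposal mechanism described above can be sketched in one dimension: a fixed Gaussian (standing in for the variational approximation) drives an independence sampler, mixed with a random-walk move, targeting a double-well density. The densities, scales and mixture probability here are illustrative stand-ins, not the paper's actual path-space construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalised double-well density, standing in for the path posterior."""
    return -(x**2 - 1.0) ** 2 / 0.5

# Fixed Gaussian approximation used as the independence proposal
# (the role played by the variational approximation in the paper).
q_mean, q_std = 0.0, 1.2
def log_q(x):
    return -0.5 * ((x - q_mean) / q_std) ** 2

def step(x, p_indep=0.5, rw_scale=0.5):
    """One Metropolis-Hastings step of the mixture kernel."""
    if rng.random() < p_indep:               # independence move
        x_new = rng.normal(q_mean, q_std)
        log_alpha = (log_target(x_new) - log_target(x)) + (log_q(x) - log_q(x_new))
    else:                                    # random-walk move
        x_new = x + rw_scale * rng.normal()
        log_alpha = log_target(x_new) - log_target(x)
    return x_new if np.log(rng.random()) < log_alpha else x

x, chain = 0.0, []
for _ in range(20000):
    x = step(x)
    chain.append(x)
chain = np.asarray(chain)

# The mixed kernel should visit both wells (around +1 and -1).
print(np.mean(chain > 0), np.mean(chain < 0))
```

The independence component lets the chain jump between modes (something a pure random walk struggles with when the posterior is multi-modal), while the random-walk component refines locally.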
Abstract:
Grafting of antioxidants and other modifiers onto polymers by reactive extrusion has been performed successfully by the Polymer Processing and Performance Group at Aston University. Traditionally, the optimum conditions for the grafting process have been established within a Brabender internal mixer. Transfer of this batch process to a continuous processor, such as an extruder, has typically been empirical. To have more confidence in the success of direct transfer of the process requires knowledge of, and comparison between, residence times, mixing intensities, shear rates and flow regimes in the internal mixer and in the continuous processor. The continuous processor chosen for the current work is the closely intermeshing, co-rotating twin-screw extruder (CICo-TSE). CICo-TSEs contain screw elements that convey material with a self-wiping action and are widely used for polymer compounding and blending. Of the different mixing modules contained within the CICo-TSE, the trilobal elements, which impose intensive mixing, and the mixing discs, which impose extensive mixing, are of importance when establishing the intensity of mixing. In this thesis, the flow patterns within the various regions of the single-flighted conveying screw elements and within both the trilobal element and mixing disc zones of a Betol BTS40 CICo-TSE have been modelled using the computational fluid dynamics package Polyflow. A major obstacle encountered when solving the flow problem within all of these sets of elements arises from both the complex geometry and the time-dependent flow boundaries as the elements rotate about their fixed axes. Simulation of the time-dependent boundaries was overcome by selecting a number of sequential 2D and 3D geometries, used to represent partial mixing cycles.
The flow fields were simulated using the ideal rheological properties of polypropylene and characterised in terms of velocity vectors, shear stresses generated and a parameter known as the mixing efficiency. The majority of the large 3D simulations were performed on the Cray J90 supercomputer situated at the Rutherford-Appleton laboratories, with pre- and postprocessing operations achieved via a Silicon Graphics Indy workstation. A mechanical model was constructed consisting of various CICo-TSE elements rotating within a transparent outer barrel. A technique has been developed using coloured viscous clays whereby the flow patterns and mixing characteristics within the CICo-TSE may be visualised. In order to test and verify the simulated predictions, the patterns observed within the mechanical model were compared with the flow patterns predicted by the computational model. The flow patterns within the single-flighted conveying screw elements in particular, showed good agreement between the experimental and simulated results.
Abstract:
Many automated negotiation models have been developed to resolve conflicts in distributed computational systems. However, the problem of finding win-win outcomes in multiattribute negotiation has not been tackled well. To address this issue, based on an evolutionary method of multiobjective optimization, this paper presents a negotiation model that can find win-win solutions over multiple attributes without revealing the negotiating agents' private utility functions to their opponents or to a third-party mediator. Moreover, we equip our agents with a general type of utility function over interdependent multiattributes, which captures human intuitions well. In addition, we develop a novel time-dependent concession strategy model, which helps both sides reach a final agreement from among a set of win-win ones. Finally, extensive experiments confirm that our negotiation model outperforms recently developed existing models. The experiments also show that our model is stable and efficient in finding fair win-win outcomes, which existing models seldom achieve. © 2012 Wiley Periodicals, Inc.
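The paper's own concession model is not specified in the abstract, but the standard time-dependent tactic family (polynomial concession toward a deadline) conveys the idea of a demanded utility that falls as time runs out. The utility bounds and β values below are hypothetical.

```python
def concession_target(t, t_max, u_max=1.0, u_min=0.4, beta=2.0):
    """Utility an agent demands at time t under a time-dependent tactic.

    beta > 1 gives a conceder (yields ground early);
    beta < 1 gives a boulware agent (holds firm until near the deadline).
    """
    frac = min(1.0, t / t_max)
    return u_max - (u_max - u_min) * frac ** (1.0 / beta)

# Demanded utility falls from u_max toward u_min as the deadline nears.
demands = [round(concession_target(t, t_max=10), 3) for t in range(0, 11, 2)]
print(demands)
```

An agreement is reached once one side's offer meets or exceeds the other's current target, which is how a concession schedule can steer both sides toward one of the win-win candidates.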
Abstract:
Internally heated fluids are found across the nuclear fuel cycle. In certain situations the motion of the fluid is driven by the decay heat (e.g. corium melt pools in severe accidents, the shutdown of liquid metal reactors, molten salt and the passive control of light water reactors) as well as by normal operation (e.g. intermediate waste storage and generation IV reactor designs). This can, in the long term, affect reactor vessel integrity or lead to localized hot spots and accumulation of solid wastes that may prompt local increases in activity. Two approaches to the modeling of internally heated convection are presented here. These are based on numerical analysis using codes developed in-house and on simulations using widely available computational fluid dynamics solvers. Open and closed fluid layers at around the transition between conduction and convection, of various aspect ratios, are considered. We determine the optimum domain aspect ratios (1:7:7 up to 1:24:24 for open systems and 5:5:1, 1:10:10 and 1:20:20 for closed systems), mesh resolutions and turbulence models required to accurately and efficiently capture the convection structures that evolve when perturbing the conductive state of the fluid layer. Note that the open and closed fluid layers we study here are bounded by a conducting surface over an insulating surface. Conclusions will be drawn on the influence of the periodic boundary conditions on the flow patterns observed. We have also examined the stability of the nonlinear solutions that we found, with the aim of identifying the bifurcation sequence of these solutions en route to turbulence.
Abstract:
Purpose – To propose and investigate a stable numerical procedure for the reconstruction of the velocity of a viscous incompressible fluid flow in linear hydrodynamics from knowledge of the velocity and fluid stress force given on a part of the boundary of a bounded domain. Design/methodology/approach – Earlier works have addressed the similar problem in the stationary case (time-independent fluid flow). Extending these ideas, a procedure is proposed and investigated for the time-dependent case as well. Findings – The paper finds a novel variational method for the Cauchy problem. It proves convergence and also proposes a new boundary element method. Research limitations/implications – The fluid flow domain is limited to annular domains; this restriction can be removed by undertaking analyses in appropriate weighted spaces to incorporate singularities that can occur on general bounded domains. Future work involves numerical investigations and consideration of Oseen-type flow. A challenging problem is to consider the non-linear Navier-Stokes equations. Practical implications – Fluid flow problems where data are known only on a part of the boundary occur in a range of engineering situations, such as colloidal suspension and the swimming of microorganisms. For example, the solution domain can be the region between two spheres where only the outer sphere is accessible for measurements. Originality/value – A novel variational method for the Cauchy problem is proposed which preserves the unsteady Stokes operator; convergence is proved and, using recent results for the fundamental solution of the unsteady Stokes system, a new boundary element method for this system is also proposed.
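In hedged notation (the symbols are assumed here, not taken from the paper), the data-completion problem described above reads: find the velocity $u$ and pressure $p$ satisfying the unsteady Stokes system with both velocity and stress data prescribed on the accessible part $\Gamma_a$ of the boundary,

```latex
\begin{aligned}
\partial_t u - \nu \Delta u + \nabla p &= 0, \qquad \nabla\!\cdot u = 0,
  && \text{in } \Omega \times (0,T),\\
u &= f, \qquad \sigma(u,p)\,n = g, && \text{on } \Gamma_a \times (0,T),
\end{aligned}
```

with no data given on the remaining, inaccessible part of the boundary; this over-determination on $\Gamma_a$ compensating for missing data elsewhere is what makes it a Cauchy (ill-posed) problem rather than a standard boundary value problem.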
Abstract:
The computational mechanics approach has been applied to the orientational behavior of water molecules in a molecular dynamics simulated water–Na+ system. The distinctively different statistical complexity of water molecules in the bulk and in the first solvation shell of the ion is demonstrated. It is shown that the molecules undergo more complex orientational motion when surrounded by other water molecules compared to those constrained by the electric field of the ion. However, the spatial coordinates of the oxygen atom show the opposite complexity behavior, in that complexity is higher for the solvation-shell molecules. New information about the dynamics of water molecules in the solvation shell is provided that is additional to that given by traditional methods of analysis.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a SINE drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
Abstract:
This paper presents a new interpretation of the Superpave IDT strength test based on a viscoelastic-damage framework. The framework is based on continuum damage mechanics and the thermodynamics of irreversible processes with an anisotropic damage representation. The new approach introduces considerations of the viscoelastic effects and the damage accumulation that accompany the fracture process into the interpretation of the Superpave IDT strength test for the identification of the Dissipated Creep Strain Energy (DCSE) limit from the test result. The viscoelastic model is implemented in a Finite Element Method (FEM) program for the simulation of the Superpave IDT strength test. The DCSE values obtained using the new approach are compared with the values obtained using the conventional approach to evaluate the validity of the assumptions made in the conventional interpretation of the test results. The results show that the conventional approach over-estimates the DCSE value, with increasing estimation error at higher deformation rates.
Abstract:
The reactivity of chemically isolated lignocellulosic blocks, namely α-cellulose, holocellulose, and lignin, has been rationalized on the basis of the dependence of the effective activation energy (Eα) on conversion (α), determined via the popular isoconversional kinetic analysis known as Friedman’s method. First, a detailed procedure for thermogravimetric data preparation, kinetic calculation, and uncertainty estimation was implemented. The resulting Eα dependencies obtained for the slow pyrolysis of the extractive-free Eucalyptus grandis isolated α-cellulose and holocellulose remained constant for 0.05 < α < 0.80 and equal to 173 ± 10, 208 ± 11, and 197 ± 118 kJ/mol, thus confirming the single-step nature of their pyrolysis. On the other hand, large and significant variations in Eα with α, from 174 ± 10 to 322 ± 11 kJ/mol over the conversion region between 0.05 and 0.79, were obtained for the Klason lignin and are reported for the first time. The non-monotonic nature of the weight loss at low and high conversions had a direct consequence on the confidence levels of Eα. The new experimental and calculation guidelines applied led to more accurate estimates of Eα than those reported earlier. The increasing Eα trend confirms that lignin is converted into a thermally more stable carbonaceous material.
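Friedman's method evaluates, at each fixed conversion α, the slope of ln(dα/dt) against 1/T across experiments (e.g. different heating rates); that slope equals −Eα/R. A minimal sketch of this single isoconversional step, using synthetic, exactly-Arrhenius rates (the lnA, temperatures and target Eα are made up for the check):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def friedman_Ea(temps_K, rates):
    """One Friedman step at a fixed conversion alpha:
    regress ln(dalpha/dt) on 1/T; the slope is -Ea/R."""
    x = 1.0 / np.asarray(temps_K)
    y = np.log(np.asarray(rates))
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R

# Synthetic check: generate rates from a known Ea (200 kJ/mol, hypothetical
# pre-exponential factor) at the temperatures where the chosen alpha is
# reached in three runs, then recover Ea from the regression.
Ea_true, lnA = 200e3, 25.0
temps = np.array([600.0, 615.0, 630.0])          # K, illustrative
rates = np.exp(lnA - Ea_true / (R * temps))      # dalpha/dt at fixed alpha
print(round(friedman_Ea(temps, rates) / 1e3, 1))  # kJ/mol
```

Repeating this regression over a grid of conversions produces the Eα(α) dependency discussed above; a flat dependency indicates single-step kinetics, while a strong variation (as found for the Klason lignin) indicates a multi-step process.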
Abstract:
This dissertation derived hypotheses from the theories of Piaget, Bruner and Dienes regarding the effects of using Algebra Tiles and other manipulative materials to teach remedial algebra to community college students. The dependent variables measured were achievement and attitude towards mathematics. The Piagetian cognitive level of the students in the study was measured and used as a concomitant factor in the study. The population for the study comprised remedial algebra students at a large urban community college. The sample for the study consisted of 253 students enrolled in 10 sections of remedial algebra at three of the six campuses of the college. Pretests included administration of an achievement pre-measure, Aiken's Mathematics Attitude Inventory (MAI), and the Group Assessment of Logical Thinking (GALT). Posttest measures included a course final exam and a second administration of the MAI. The results of the GALT test revealed that 161 students (63.6%) were concrete operational, 65 (25.7%) were transitional, and 27 (10.7%) were formal operational. For the purpose of analyzing the data, the transitional and formal operational students were grouped together. Univariate factorial analyses of covariance (α = .05) were performed on the posttest of achievement (covariate = achievement pretest) and the MAI posttest (covariate = MAI pretest). The factors used in the analysis were method of teaching (manipulative vs. traditional) and cognitive level (concrete operational vs. transitional/formal operational). The analyses for achievement revealed a significant difference in favor of the manipulatives groups in the analyses by campus. Significant differences were not noted in the analyses by individual instructor. The results for attitude towards mathematics showed a significant difference in favor of the manipulatives groups in the college-wide analysis and for one campus. The analysis by individual instructor was not significant.
In addition, the college-wide analysis was significant in favor of the transitional/formal operational stage of cognitive development. However, support for this conclusion was not obtained in the analyses by campus or individual instructor.
Abstract:
In the last 16 years, a segment of independent producers focused on onshore basins and shallow waters has emerged in Brazil. Among the challenges these companies face is the development of fields whose projects have a low net present value (NPV). The objective of this work was to study, using reservoir simulation, the best technical-economic option to develop an oil field in the Brazilian Northeast. Real geology, reservoir and production data were used to build the geological and simulation models. Because no PVT analysis was available, distillation test data known as true boiling point (TBP) data were used to create a fluid model and generate the PVT data. After the history match, four development scenarios were simulated: the extrapolation of production without new investments, the conversion of a producing well to immiscible gas injection, the drilling of a vertical well and the drilling of a horizontal well. As a result, from the financial point of view, gas injection is the alternative with the lowest added value, but it may be viable if there are environmental or regulatory restrictions on flaring or venting the gas produced from this field or from neighboring accumulations. The recovery factor achieved by drilling vertical and horizontal wells is similar, but the horizontal well is a production acceleration project; therefore, the present value of its incremental cumulative production, discounted at the company's minimum attractiveness rate, is higher. Depending on the Brent crude oil price and the drilling cost, this option can be technically and financially viable.
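The scenario ranking above turns on discounted cash flow: an acceleration project moves the same barrels earlier, which raises NPV at a positive discount rate. A minimal sketch with entirely hypothetical costs, revenues and discount rate (the thesis's actual figures are not given in the abstract; equal drilling costs are assumed purely for illustration):

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1.0 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical scenarios (MM$): same drilling cost and same total revenue,
# but the horizontal well accelerates production into the early years.
vertical   = [-8.0, 2.5, 2.5, 2.5, 2.5, 2.5]
horizontal = [-8.0, 5.5, 4.0, 1.5, 1.0, 0.5]
rate = 0.10  # assumed minimum attractiveness rate

print(round(npv(vertical, rate), 2), round(npv(horizontal, rate), 2))
```

With identical totals, the front-loaded profile comes out ahead, which is the sense in which the abstract calls the horizontal well an acceleration project with higher present incremental value.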
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
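The reformulation of network choice as dynamic programming can be sketched with the logsumexp Bellman recursion used in recursive-logit-style models: the value of a node is the logsum over outgoing arcs, and arc choice probabilities are logit probabilities built from those values. The tiny acyclic network and arc utilities below are invented for illustration.

```python
import numpy as np

# Tiny acyclic network: arcs[node] = [(next_node, utility), ...].
# Node 3 is the destination, with value fixed at 0.
arcs = {0: [(1, -1.0), (2, -1.5)], 1: [(3, -1.0)], 2: [(3, -0.5)]}
V = {3: 0.0}

# Backward induction: V(s) = log sum_a exp(u(s,a) + V(next(a)))
# (the logsumexp form of the Bellman equation for a logit choice at each node).
for s in [2, 1, 0]:
    V[s] = np.logaddexp.reduce([u + V[nxt] for nxt, u in arcs[s]])

# Logit probability of each outgoing arc at the origin.
probs = {nxt: np.exp(u + V[nxt] - V[0]) for nxt, u in arcs[0]}
print({k: round(float(v), 3) for k, v in probs.items()}, round(float(V[0]), 3))
```

Here both routes to the destination have total utility -2, so the model splits flow evenly between them; in estimation, one value function solve per destination replaces enumerating (possibly infinitely many) paths as explicit alternatives, which is what makes the approach scale to large networks.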