924 results for Computational methods
Abstract:
Recent technological developments have made it possible to design various microdevices in which fluid flow and heat transfer are involved. For the proper design of such systems, the governing physics needs to be investigated. Because complex geometries at micro scales are difficult to study with experimental techniques, computational tools have been developed to analyze and simulate flow and heat transfer in microgeometries. However, conventional numerical methods based on the Navier-Stokes equations fail to predict some aspects of microflows, such as the nonlinear pressure distribution, increased mass flow rate, slip flow, and temperature jump at the solid boundaries. This necessitates the development of new computational methods, grounded in kinetic theory, that are both accurate and computationally efficient. In this study, the lattice Boltzmann method (LBM) was used to investigate flow and heat transfer in micro-sized geometries. The LBM is based on the Boltzmann equation, which is valid over the whole range of rarefaction regimes observed in microflows. Results were obtained for isothermal channel flows at Knudsen numbers higher than 0.01 at different pressure ratios. LBM solutions for micro-Couette and micro-Poiseuille flow were found to be in good agreement, for pressure distribution and velocity field, with the analytical solutions valid in the slip flow regime (0.01 < Kn < 0.1) and with direct simulation Monte Carlo solutions valid in the transition regime (0.1 < Kn < 10). The isothermal LBM was further extended to simulate flows involving heat transfer. The method was first validated for continuum channel flows with and without constrictions by comparing the thermal LBM results against accurate solutions obtained from analytical equations and the finite element method. Finally, the capability of the thermal LBM was improved by adding the effect of rarefaction, and the method was used to analyze the behavior of gas flow in microchannels. The major finding of this research is that the newly developed particle-based method described here can be used as an alternative numerical tool to study non-continuum effects observed in micro-electro-mechanical systems (MEMS).
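The lattice Boltzmann method referred to above advances particle distribution functions through alternating collision and streaming steps. The sketch below is a minimal, hypothetical D2Q9 BGK collision-and-streaming update on a periodic domain, not the thesis's specific thermal or rarefied-flow implementation; the relaxation time `tau`, grid size, and initial perturbation are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Equilibrium distributions f_eq for density rho (ny, nx) and velocity u (ny, nx, 2)."""
    cu = np.einsum('qd,yxd->yxq', c, u)                 # c_i . u
    usq = np.einsum('yxd,yxd->yx', u, u)[..., None]
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    """One BGK collision + streaming step on a periodic domain."""
    rho = f.sum(axis=-1)
    u = np.einsum('yxq,qd->yxd', f, c) / rho[..., None]
    f_post = f - (f - equilibrium(rho, u)) / tau         # BGK collision
    # streaming: shift each population along its lattice velocity
    for q, (cx, cy) in enumerate(c):
        f_post[..., q] = np.roll(f_post[..., q], shift=(cy, cx), axis=(0, 1))
    return f_post

# illustrative setup: uniform density with a small streamwise velocity
ny, nx, tau = 32, 64, 0.8
rho0 = np.ones((ny, nx))
u0 = np.zeros((ny, nx, 2)); u0[..., 0] = 0.01
f = equilibrium(rho0, u0)
for _ in range(100):
    f = lbm_step(f, tau)
```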
Abstract:
An important problem faced by the oil industry is distributing multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually, there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the electricity cost. Since industrial electricity tariffs vary over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures named MOTA/D, NSTA and SPETA are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results obtained with the algorithms, Pareto-compliant quality indicators are used and the significance of the results is evaluated with non-parametric statistical tests.
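Comparing solutions under three simultaneous minimization objectives rests on Pareto dominance, which is also what Pareto-compliant quality indicators build on. The sketch below is a minimal, generic dominance check and non-dominated filter for minimization problems; it is illustrative only and not the MOTA/D, NSTA or SPETA implementations, and the objective vectors are made up.

```python
from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front: list) -> list:
    """Keep only the objective vectors not dominated by any other vector."""
    return [a for a in front if not any(dominates(b, a) for b in front if b is not a)]

# illustrative objective vectors: (delivery time, interface losses, electricity cost)
solutions = [(10.0, 3.2, 120.0), (9.0, 3.5, 110.0), (11.0, 3.0, 150.0), (12.0, 4.0, 160.0)]
print(non_dominated(solutions))   # the last vector is dominated and is filtered out
```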
Abstract:
The structure, energetics and reactions of ions in the gas phase can be revealed by mass spectrometry techniques coupled to ion activation methods. Ions can gain enough energy for dissociation by absorbing IR photons introduced into the mass spectrometer by an IR laser. Collisions with a neutral molecule can also increase the internal energy of ions and provide the dissociation threshold energy. Infrared multiple photon dissociation (IRMPD) and sustained off-resonance irradiation collision-induced dissociation (SORI-CID) methods are combined with Fourier Transform Ion Cyclotron Resonance (FT-ICR) mass spectrometers, in which ions can be held at low pressures for a long time. The outcome of ion activation techniques, especially when compared with the results of computational methods, is of great importance since it provides useful information about the structure, thermochemistry and reactivity of the ions of interest. In this work, the structure, energetics and reactivity of metal cation complexes with dipeptides are investigated. The effect of metal cation size and charge, as well as of microsolvation, on the structure of these complexes has been studied. Structures of bare and hydrated Na and Ca complexes with the isomeric dipeptides AlaGly and GlyAla are characterized by means of IRMPD spectroscopy and computational methods. In the second step, unimolecular dissociation reactions of singly and doubly charged multimetallic complexes of alkaline earth metal cations with GlyGly are examined by the CID method. The structural features of these complexes are also revealed by comparing their IRMPD spectra with calculated IR spectra of possible structures. Finally, the unimolecular dissociation reactions of Mn complexes are studied. IRMPD spectroscopy along with computational methods is also employed for structural elucidation of the Mn complexes. In addition, the ion-molecule reactions of Mn complexes with CO and water are explored at the low pressures attained in the ICR cell.
Abstract:
The goal of this work is to present an efficient CAD-based adjoint process chain for calculating parametric sensitivities (derivatives of the objective function with respect to the CAD parameters) in timescales acceptable for industrial design processes. The idea is based on linking parametric design velocities (geometric sensitivities computed from the CAD model) with adjoint surface sensitivities. A CAD-based design velocity computation method has been implemented based on distances between discrete representations of perturbed geometries. This approach differs from other methods in that it works with existing commercial CAD packages (unlike most analytical approaches) and can cope with changes in CAD model topology and face labeling. The proposed method allows computation of parametric sensitivities using adjoint data at a computational cost which scales with the number of objective functions being considered, while being essentially independent of the number of design variables. The gradient computation is demonstrated on test cases for a Nozzle Guide Vane (NGV) model and a Turbine Rotor Blade model. The results are validated against finite difference values, and good agreement is shown. This gradient information can be passed to an optimization algorithm, which will use it to update the CAD model parameters.
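The chain rule behind linking adjoint surface sensitivities with design velocities can be summarized as dJ/dα ≈ Σ over surface points of (∂J/∂x_n) · V_n · A, where V_n is the normal design velocity recovered from perturbed geometries. The sketch below is a hypothetical discrete version of that product; the finite-difference design-velocity step, the point data, and the parameter step are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def design_velocity(base_pts, perturbed_pts, normals, d_alpha):
    """Approximate normal design velocity V_n at discrete surface points:
    the geometric perturbation projected onto the surface normal,
    divided by the CAD parameter step d_alpha."""
    return np.einsum('ij,ij->i', perturbed_pts - base_pts, normals) / d_alpha

def parametric_sensitivity(adjoint_surf_sens, v_n, face_areas):
    """Discrete surface integral: dJ/d(alpha) ~ sum_i (dJ/dx_n)_i * V_n,i * A_i."""
    return np.sum(adjoint_surf_sens * v_n * face_areas)

# illustrative data: four surface points of a face moved by a CAD parameter
base = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
pert = base + np.array([0., 0., 1e-4])            # parameter shifts the face along z
normals = np.tile([0., 0., 1.], (4, 1))
v_n = design_velocity(base, pert, normals, d_alpha=1e-3)
dJ_dalpha = parametric_sensitivity(np.array([2., 2., 2., 2.]), v_n, np.full(4, 0.25))
```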
Abstract:
Integral membrane proteins play an indispensable role in cell survival, and 20 to 30% of open reading frames code for this class of proteins. The majority of membrane proteins found in the Protein Data Bank do not have a known orientation and insertion. The orientation, insertion and conformation that membrane proteins adopt when they interact with a lipid bilayer are important for understanding their function, but these characteristics are difficult to obtain by experimental methods. Computational methods can reduce the time and cost of identifying the characteristics of membrane proteins. In this master's project, we propose a new computational method that predicts the orientation and insertion of a protein in a membrane. The method is based on potentials of mean force for the membrane insertion of amino acid side chains in a model membrane composed of dioleoylphosphatidylcholine.
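A common way to score an orientation with side-chain insertion potentials of mean force is to sum, over residues, a depth-dependent transfer energy and then search over insertion depth and tilt angle. The sketch below is a generic, hypothetical version of that scoring: the Gaussian-like PMF profile, the per-residue values, and the grid-search ranges are illustrative assumptions only, not the thesis's parameterization.

```python
import numpy as np

PMF_DEPTH_SCALE = 15.0   # assumed half-thickness scale of the bilayer (Angstroms)

def side_chain_pmf(residue: str, z: float) -> float:
    """Hypothetical depth-dependent transfer energy (kcal/mol): favourable near
    the bilayer centre for hydrophobic side chains, unfavourable for charged ones."""
    core = np.exp(-(z / PMF_DEPTH_SCALE) ** 2)        # 1 at centre, -> 0 in water
    favourable = {"LEU": -2.0, "ILE": -2.2, "PHE": -2.5}
    unfavourable = {"LYS": 3.0, "ASP": 3.5, "GLU": 3.2}
    return favourable.get(residue, unfavourable.get(residue, 0.0)) * core

def insertion_energy(residues, z_coords, depth, tilt_deg):
    """Total transfer energy for a rigid-body placement: axial coordinates are
    tilted and shifted, then each residue is scored against the PMF."""
    z_membrane = z_coords * np.cos(np.radians(tilt_deg)) + depth
    return sum(side_chain_pmf(r, z) for r, z in zip(residues, z_membrane))

def best_orientation(residues, z_coords):
    """Brute-force search over insertion depth and tilt angle."""
    grid = [(d, t) for d in np.arange(-20, 21, 1.0) for t in np.arange(0, 91, 5.0)]
    return min(grid, key=lambda dt: insertion_energy(residues, z_coords, *dt))

# example: a short hydrophobic stretch with one charged residue
res = ["LEU", "ILE", "PHE", "LEU", "LYS"]
z = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])
depth, tilt = best_orientation(res, z)
```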
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-07
Abstract:
This dissertation covers two separate topics in statistical physics. The first part of the dissertation focuses on computational methods of obtaining the free energies (or partition functions) of crystalline solids. We describe a method to compute the Helmholtz free energy of a crystalline solid by direct evaluation of the partition function. In the many-dimensional conformation space of all possible arrangements of N particles inside a periodic box, the energy landscape consists of localized islands corresponding to different solid phases. Calculating the partition function for a specific phase involves integrating over the corresponding island. Introducing a natural order parameter that quantifies the net displacement of particles from lattice sites, we write the partition function in terms of a one-dimensional integral along the order parameter, and evaluate this integral using umbrella sampling. We validate the method by computing free energies of both face-centered cubic (FCC) and hexagonal close-packed (HCP) hard sphere crystals with a precision of $10^{-5}k_BT$ per particle. In developing the numerical method, we find several scaling properties of crystalline solids in the thermodynamic limit. Using these scaling properties, we derive an explicit asymptotic formula for the free energy per particle in the thermodynamic limit. In addition, we describe several changes of coordinates that can be used to separate internal degrees of freedom from external, translational degrees of freedom. The second part of the dissertation focuses on engineering idealized physical devices that work as Maxwell's demons. We describe two autonomous mechanical devices that extract energy from a single heat bath and convert it into work, while writing information onto memory registers. Additionally, both devices can operate as Landauer's eraser, namely they can erase information from a memory register, while energy is dissipated into the heat bath. The phase diagrams and the efficiencies of the two models are solved and analyzed. These two models provide concrete physical illustrations of the thermodynamic consequences of information processing.
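The reduction described above writes the phase's partition function as a one-dimensional integral over the order parameter Q, Z = ∫ exp(-F(Q)/kT) dQ, with F(Q) reconstructed from umbrella sampling. The sketch below only illustrates that final quadrature step for a tabulated, hypothetical F(Q); it is not the full umbrella-sampling workflow, and the harmonic profile is an assumption made for illustration.

```python
import numpy as np

def free_energy_from_profile(q, f_of_q, kT=1.0):
    """Given a tabulated free-energy profile F(Q) along the order parameter Q
    (e.g. reconstructed from umbrella-sampling windows), evaluate
    Z = integral of exp(-F(Q)/kT) dQ by the trapezoidal rule and return -kT ln Z."""
    boltzmann = np.exp(-(f_of_q - f_of_q.min()) / kT)          # shift for stability
    z = np.sum(0.5 * (boltzmann[1:] + boltzmann[:-1]) * np.diff(q))
    return -kT * np.log(z) + f_of_q.min()

# illustrative profile: a single basin around Q = 0, i.e. one "island" in phase space
q = np.linspace(0.0, 2.0, 400)
f_q = 50.0 * q**2                      # hypothetical F(Q) in units of kT
F_phase = free_energy_from_profile(q, f_q)
```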
Abstract:
While news stories are an important traditional medium to broadcast and consume news, microblogging has recently emerged as a place where people can discuss, disseminate, collect or report information about news. However, the massive amount of information in the microblogosphere makes it hard for readers to keep up with these real-time updates. This is especially a problem when it comes to breaking news, where people are more eager to know “what is happening”. Therefore, this dissertation is intended as an exploratory effort to investigate computational methods to augment human effort when monitoring the development of breaking news on a given topic from a microblog stream by extractively summarizing the updates in a timely manner. More specifically, given an interest in a topic, either entered as a query or presented as an initial news report, a microblog temporal summarization system is proposed to filter microblog posts from a stream with three primary concerns: topical relevance, novelty, and salience. Considering the relatively high arrival rate of microblog streams, a cascade framework consisting of three stages is proposed to progressively reduce the quantity of posts. For each step in the cascade, this dissertation studies methods that improve over current baselines. In the relevance filtering stage, query and document expansion techniques are applied to mitigate sparsity and vocabulary mismatch issues. The use of word embeddings as a basis for filtering is also explored, using unsupervised and supervised modeling to characterize lexical and semantic similarity. In the novelty filtering stage, several statistical ways of characterizing novelty are investigated and ensemble learning techniques are used to integrate results from these diverse techniques. These results are compared with a baseline clustering approach using both standard and delay-discounted measures. In the salience filtering stage, because of the real-time prediction requirement, a method of learning verb phrase usage from past relevant news reports is used in conjunction with some standard measures for characterizing writing quality. Following a Cranfield-like evaluation paradigm, this dissertation includes a series of experiments to evaluate the proposed methods for each step, and for the end-to-end system. New microblog novelty and salience judgments are created, building on existing relevance judgments from the TREC Microblog track. The results point to future research directions at the intersection of social media, computational journalism, information retrieval, automatic summarization, and machine learning.
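The cascade described above reduces a post stream stage by stage: relevance, then novelty, then salience. The sketch below is a deliberately simplified, hypothetical version using cosine similarity over bag-of-words vectors and fixed thresholds; the actual system relies on query and document expansion, word embeddings, ensemble novelty detection, and learned salience features, none of which are reproduced here.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v*v for v in a.values())) * sqrt(sum(v*v for v in b.values()))
    return dot / norm if norm else 0.0

def summarize_stream(posts, query, rel_t=0.2, nov_t=0.6, min_terms=10):
    """Cascade: keep posts relevant to the query, novel w.r.t. already-selected
    updates, and 'salient' by a crude length-based placeholder criterion."""
    q_vec = Counter(query.lower().split())
    selected, summary = [], []
    for post in posts:
        vec = Counter(post.lower().split())
        if cosine(vec, q_vec) < rel_t:                       # relevance stage
            continue
        if any(cosine(vec, s) >= nov_t for s in selected):   # novelty stage
            continue
        if len(vec) < min_terms:                             # salience stage (placeholder)
            continue
        selected.append(vec)
        summary.append(post)
    return summary
```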
Abstract:
Game theory models strategies among agents (players), who receive payoffs at the end of the game according to their actions. The best pair of strategies for the players constitutes an equilibrium solution. However, the data of the problem cannot always be estimated. For this reason, the uncertain parameters present in game models are formalized by fuzzy theory. Fuzzy theory thus supports game theory, giving rise to fuzzy games, in which parameters such as the payoffs become fuzzy numbers. Moreover, when there is uncertainty in the representation of these fuzzy numbers, interval fuzzy numbers are used. In this work, interval fuzzy game models are analyzed and computational methods are developed for solving these games. Finally, linear programming simulations are carried out to better observe the application of the theories studied and to evaluate the proposal.
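Solving a two-player zero-sum matrix game by linear programming is the classical building block that such simulations rest on. The sketch below is a generic LP formulation for the row player's optimal mixed strategy on a crisp payoff matrix; it is illustrative only, and the idea of handling interval payoffs by solving the lower- and upper-bound matrices separately is an assumption, not the thesis's method.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Row player's optimal mixed strategy and game value for a zero-sum game.
    Variables: x (mixed strategy) and v (value); maximize v subject to
    payoff^T x >= v, sum(x) = 1, x >= 0."""
    m, n = payoff.shape
    c = np.zeros(m + 1); c[-1] = -1.0                     # linprog minimizes, so use -v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])        # v - payoff^T x <= 0 (per column)
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))]) # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# interval payoffs could be explored by solving the lower- and upper-bound matrices
lower = np.array([[3.0, -1.0], [-2.0, 4.0]])
strategy, value = solve_zero_sum(lower)
```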
Abstract:
The production of artistic prints in the sixteenth- and seventeenth-century Netherlands was an inherently social process. Turning out prints at any reasonable scale depended on the fluid coordination between designers, plate cutters, and publishers; roles that, by the sixteenth century, were considered distinguished enough to merit distinct credits engraved on the plates themselves: invenit, fecit/sculpsit, and excudit. While any one designer, plate cutter, or publisher could potentially exercise a great deal of influence over the production of a single print, their individual decisions (Whom to select as an engraver? What subjects to create for a print design? What market to sell to?) would have been variously constrained or encouraged by their position in this larger network (Whom do they already know? And who, in turn, do their contacts know?). This dissertation addresses the impact of these constraints and affordances through the novel application of computational social network analysis to major databases of surviving prints from this period. This approach is used to evaluate several questions about trends in early modern print production practices that have not been satisfactorily addressed by traditional literature based on case studies alone: Did the social capital demanded by print production result in centralized or distributed production of prints? When, and to what extent, did printmakers and publishers in the Low Countries favor international versus domestic collaborators? And were printmakers under the same pressure as painters to specialize in particular artistic genres? This dissertation ultimately suggests how simple professional incentives endemic to the practice of printmaking may, at large scales, have resulted in quite complex patterns of collaboration and production. The framework of network analysis surfaces the role of certain printmakers who tend to be neglected in aesthetically-focused histories of art. This approach also highlights important issues concerning art historians’ balancing of individual influence against the impact of longue durée trends. Finally, this dissertation raises questions about the current limitations and future possibilities of combining computational methods with cultural heritage datasets in the pursuit of historical research.
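Computational social network analysis of a print database typically starts by turning the credits on each plate into a collaboration graph and measuring centrality. The sketch below is a generic, hypothetical example using networkx with made-up designer, engraver, and publisher labels; it is not the dissertation's dataset, graph construction, or analysis pipeline.

```python
import networkx as nx

# hypothetical credits (designer, engraver, publisher) read from a print database
prints = [
    ("Designer A", "Engraver X", "Publisher P"),
    ("Designer A", "Engraver Y", "Publisher P"),
    ("Designer B", "Engraver X", "Publisher Q"),
]

G = nx.Graph()
for designer, engraver, publisher in prints:
    # each credited collaboration on a plate adds to (or strengthens) an edge
    for u, v in [(designer, engraver), (engraver, publisher), (designer, publisher)]:
        weight = G.get_edge_data(u, v, default={}).get("weight", 0) + 1
        G.add_edge(u, v, weight=weight)

# which producers broker between otherwise separate parts of the trade?
betweenness = nx.betweenness_centrality(G)
most_central = sorted(betweenness.items(), key=lambda kv: -kv[1])[:3]
```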
Abstract:
The evaluation of the mesh opening stiffness of fishing nets is an important issue in assessing the selectivity of trawls. It appears that a larger bending rigidity of the twines decreases the mesh opening and could reduce the escapement of fish. Nevertheless, the netting structure is complex. A netting is made up of braided twines of polyethylene or polyamide. These twines are tied with non-symmetrical knots, and these assemblies develop contact-friction interactions. Moreover, the netting can be subject to large deformation. In this study, we investigate the responses of netting samples to different types of solicitations. Samples are loaded and unloaded, with creep and relaxation stages, under different boundary conditions. Two models have then been developed: an analytical model and a finite element model. The latter was used to estimate, with an inverse identification algorithm, the bending stiffness of the twines. In this paper, experimental results and a model for netting structures made up of braided twines are presented. During dry forming of a composite, for example, the matrix is not present or not active, and relative sliding can occur between constitutive fibres, so accurate modelling of the mechanical behaviour of fibrous materials is necessary. This study provides experimental data that could help improve current models of contact-friction interactions [4], validate models for the large-deformation analysis of fibrous materials [1] on a new experimental case, and thereby improve the evaluation of the mesh opening stiffness of a fishing net.
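Inverse identification of a twine bending stiffness typically wraps the forward model in a least-squares fit against measured force-displacement data. The sketch below is a generic version with a deliberately simple analytical stand-in for the forward model (a linear cantilever-like relation); the study itself uses a finite element model of the netting sample, so both the model and the synthetic data here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(EI, displacements, length=0.05):
    """Stand-in forward model: predicted reaction force for imposed tip
    displacements of a twine of bending stiffness EI (cantilever-like relation).
    In the study this role is played by a full finite element simulation."""
    return 3.0 * EI * displacements / length**3

def identify_bending_stiffness(displacements, measured_forces):
    """Least-squares inverse identification of EI."""
    residual = lambda EI: forward_model(EI[0], displacements) - measured_forces
    result = least_squares(residual, x0=[1e-4], bounds=(0.0, np.inf))
    return result.x[0]

# synthetic "experiment": EI = 2e-4 N.m^2 plus measurement noise
rng = np.random.default_rng(0)
disp = np.linspace(0.0, 0.01, 20)
forces = forward_model(2e-4, disp) + rng.normal(0.0, 1e-3, disp.size)
EI_hat = identify_bending_stiffness(disp, forces)
```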
Abstract:
An intraneural ganglion cyst is a disorder associated with nerve injury; its propagation in the human body is still poorly understood and very difficult to predict, so it is often regarded as unsolved. The treatment for this disorder is to remove the cystic substance from the nerve surgically. However, this treatment may result in neuropathic pain and recurrence of the cyst. The articular theory proposed by Spinner et al. (Spinner et al. 2003) considers the neurological deficit in the Common Peroneal Nerve (CPN) branch of the sciatic nerve and adds that, in addition to the treatment, ligation of the articular branch results in reliable eradication of the deficit. Mechanical modeling of the affected nerve cross section will reinforce the articular theory (Spinner et al. 2003). As the cyst propagates, it compresses the neighboring fascicles and the nerve cross section appears like a signet ring. Hence, in order to mechanically model the affected nerve cross section, computational methods capable of modeling excessively large deformations are required. Traditional FEM produces distorted elements while modeling such deformations, resulting in inaccuracies and premature termination of the analysis. The methods described in this research report have the capability to simulate large deformations. The results obtained from this research show significantly larger deformation than that observed in conventional finite element models. The report elaborates on the neurological deficit, followed by a detailed explanation of the Smoothed Particle Hydrodynamics (SPH) approach. Finally, the results show the large deformation in stages and demonstrate the successful implementation of the SPH method for the large deformation of a biological structure such as the intraneural ganglion cyst.
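SPH avoids mesh distortion by evaluating field quantities as kernel-weighted sums over neighbouring particles instead of over a fixed mesh. The sketch below is a minimal, textbook SPH density summation with a standard cubic spline kernel in 2D; it illustrates the general method only, not the specific solid-mechanics SPH formulation used in this report, and the particle cloud is synthetic.

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 2D cubic spline smoothing kernel W(r, h)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
         np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Density at each particle: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_w(r, h)).sum(axis=1)

# illustrative particle cloud standing in for a discretized nerve cross section
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))
rho = sph_density(pts, masses=np.full(200, 1.0 / 200), h=0.1)
```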
Abstract:
Thin plate spline finite element methods are used to fit a surface to an irregularly scattered dataset [S. Roberts, M. Hegland, and I. Altas. Approximation of a Thin Plate Spline Smoother using Continuous Piecewise Polynomial Functions. SIAM, 1:208--234, 2003]. The computational bottleneck for this algorithm is the solution of large, ill-conditioned systems of linear equations at each step of a generalised cross validation algorithm. Preconditioning techniques are investigated to accelerate the convergence of the solution of these systems using Krylov subspace methods. The preconditioners under consideration are block diagonal, block triangular and constraint preconditioners [M. Benzi, G. H. Golub, and J. Liesen. Numerical solution of saddle point problems. Acta Numer., 14:1--137, 2005]. The effectiveness of each of these preconditioners is examined on a sample dataset taken from a known surface. From our numerical investigation, constraint preconditioners appear to provide improved convergence for this surface fitting problem compared to block preconditioners.
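For saddle-point systems of the kind that arise here, a block preconditioner is usually applied inside a Krylov solver. The sketch below is a generic example only: a block-diagonal preconditioner wrapped as a LinearOperator and passed to MINRES from scipy, applied to a tiny synthetic saddle-point system rather than the thin plate spline discretization; the Schur-complement block is crudely approximated by the identity.

```python
import numpy as np
from scipy.sparse import bmat, identity, random as sprandom
from scipy.sparse.linalg import LinearOperator, minres, splu

# synthetic saddle-point system  K = [[A, B^T], [B, 0]]
n, m = 80, 20
rng = np.random.default_rng(1)
A0 = sprandom(n, n, density=0.05, random_state=rng)
A = (A0 @ A0.T + 10 * identity(n)).tocsc()            # symmetric positive definite block
B = sprandom(m, n, density=0.1, random_state=rng).tocsc()
K = bmat([[A, B.T], [B, None]]).tocsc()
b = np.ones(n + m)

# block-diagonal preconditioner: P = diag(A, S), with S approximated by the identity
A_lu = splu(A)
def apply_precond(v):
    out = np.empty_like(v)
    out[:n] = A_lu.solve(v[:n])    # exact solve with the A block
    out[n:] = v[n:]                # identity on the constraint block
    return out

M = LinearOperator((n + m, n + m), matvec=apply_precond)
x, info = minres(K, b, M=M)        # info == 0 indicates convergence
```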