983 results for Methods: numerical
Abstract:
Several problems arise when measuring the mode II interlaminar fracture toughness using a Transverse Crack Tension (TCT) specimen; in particular, the fracture toughness depends on the geometry of the specimen and therefore cannot be considered a material parameter. A preliminary experimental campaign was conducted on TCT specimens of different sizes, but no fracture toughness could be measured because the specimens failed in an unacceptable way, invalidating the tests. A comprehensive numerical and experimental investigation is conducted to identify the main causes of this behaviour, and a modification of the specimen geometry is proposed. It is believed that the results obtained represent a significant contribution to the understanding of the TCT test as a mode II characterization procedure and, at the same time, provide new guidelines for characterizing mode II crack propagation under tensile loads.
Abstract:
This Licentiate Thesis is devoted to the presentation and discussion of some new contributions in applied mathematics directed towards scientific computing in sports engineering. It considers inverse problems of biomechanical simulations with rigid-body musculoskeletal systems, especially in cross-country skiing. This contrasts with the main body of research on cross-country skiing biomechanics, which is based largely on experimental testing alone. The thesis consists of an introduction and five papers. The introduction motivates the context of the papers and puts them into a more general framework. Two papers (D and E) address real questions in cross-country skiing, which are modelled and simulated. The results give some interesting indications concerning these challenging questions, which can be used as a basis for further research; however, the measurements are not accurate enough to give final answers. Paper C is a simulation study that is more extensive than papers D and E and is compared to electromyography measurements from the literature. Validation in biomechanical simulations is difficult, and reducing mathematical errors is one way of coming closer to realistic results. Paper A examines well-posedness for forward dynamics with full muscle dynamics. Paper B is a technical report that describes the problem formulation, mathematical models and simulation from paper A in more detail. Our new modelling, together with the simulations, enables new possibilities. As with simulations in other engineering fields, it must be handled with care in order to achieve reliable results. The results in this thesis indicate that mathematical modelling and numerical simulation can be very useful for describing cross-country skiing biomechanics. Hence, this thesis contributes to the possibility of beginning to use and develop such modelling and simulation techniques in this context as well.
Abstract:
Hybrid simulation is a technique that combines experimental and numerical testing and has been used for the last decades in the fields of aerospace, civil and mechanical engineering. During this time, most of the research has focused on developing algorithms and the necessary technology, including but not limited to error minimisation techniques, phase lag compensation and faster hydraulic cylinders. However, one of the main shortcomings of hybrid simulation that has prevented its widespread use is the size of the numerical models and the effect that higher frequencies may have on the stability and accuracy of the simulation. The first chapter of this document provides an overview of the hybrid simulation method and of the hybrid simulation schemes, and the corresponding time integration algorithms, that are most commonly used in this field. The scope of this thesis is presented in more detail in chapter 2: a substructure algorithm, the Substep Force Feedback (Subfeed), is adapted in order to fulfil the necessary requirements in terms of speed. The effects of more complex models on the Subfeed are also studied in detail, and the improvements made are validated experimentally. Chapters 3 and 4 detail the methodologies used to accomplish these objectives, listing the different case studies and describing the hardware and software used to validate them experimentally. The third chapter contains a brief introduction to a project, the DFG Subshake, whose data have been used as a starting point for the developments shown later in this thesis. The results obtained are presented in chapters 5 and 6; the first of these focuses on purely numerical simulations, while the second is oriented towards practical application, including experimental real-time hybrid simulation tests with large numerical models. Chapter 7 lists the hardware and software requirements that must be met in order to apply the methods described in this document. The last chapter, chapter 8, focuses on the conclusions and achievements drawn from the results, namely: the adaptation of the hybrid simulation algorithm Subfeed for use with large numerical models, the study of the effect of high frequencies on the substructure algorithm, and experimental real-time hybrid simulation tests with vibrating subsystems using large numerical models and shake tables. A brief discussion of possible future research activities can be found in the concluding chapter.
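To make the coupling described above concrete, the following is a minimal sketch of a single-degree-of-freedom real-time hybrid simulation loop: a numerical substructure is advanced in time while the restoring force of the tested substructure is fed back at every step. It is a conceptual illustration only; the parameters are made up, the physical specimen is emulated by a simple spring so the script runs stand-alone, and the integrator is a plain explicit scheme rather than the Subfeed algorithm developed in the thesis.

# Conceptual sketch of a hybrid-simulation stepping loop (not the Subfeed algorithm).
# The numerical substructure is a mass-spring-damper; the "experimental" force is
# emulated by a linear spring so the example is self-contained.
import numpy as np

def measured_force(displacement):
    """Stand-in for the restoring force fed back from the physical substructure."""
    k_physical = 5.0e4                      # assumed specimen stiffness [N/m]
    return k_physical * displacement

m, c, k = 100.0, 50.0, 1.0e5                # numerical substructure (illustrative)
dt, n_steps = 1.0e-3, 2000
u = np.zeros(n_steps + 1)                   # displacement history
v = 0.0
for n in range(n_steps):
    p = 1.0e3 * np.sin(2.0 * np.pi * 2.0 * n * dt)   # external excitation [N]
    r_exp = measured_force(u[n])                      # force feedback from the test
    a = (p - r_exp - c * v - k * u[n]) / m            # equation of motion
    u[n + 1] = u[n] + v * dt + 0.5 * a * dt * dt      # explicit displacement update
    v = v + a * dt                                    # explicit velocity update
print("peak displacement [m]:", np.abs(u).max())

In an actual test the emulated spring would be replaced by the measured actuator force, and the stability concerns discussed in the thesis (higher frequencies, model size) determine how small the substep must be.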
Abstract:
When transporting wood from the forest to the mills, many unforeseen events can occur that disrupt the planned trips (for example, because of weather conditions, forest fires, the arrival of new loads, etc.). When such events only become known during a trip, the truck carrying out that trip must be diverted to an alternative route. Without information about such a route, the driver is likely to choose an alternative route that is unnecessarily long or, worse, one that is itself "closed" because of an unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggestions of alternative routes when a planned road turns out to be impassable. The recourse options available when disruptions occur depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transport management policy. We present three articles dealing with different application contexts, together with models and solution methods adapted to each context. In the first article, the truck drivers have the entire weekly plan for the current week. In this context, every effort must be made to minimize the changes made to the initial plan. Although the truck fleet is homogeneous, there is a priority order among drivers: those with the highest priority receive the largest workloads, and minimizing the changes to their plans is also a priority. Since the consequences of unforeseen events on the transport plan are essentially cancellations and/or delays of certain trips, the proposed approach first handles the cancellation and delay of a single trip and is then generalized to handle more complex events. In this approach, we try to reschedule the affected trips within the same week so that a loader is free when the truck arrives at both the forest site and the mill. In this way, the trips of the other trucks are not modified. This approach provides dispatchers with alternative plans within seconds. Better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan. In the second article, we consider a context where only one trip at a time is communicated to the drivers. The dispatcher waits until the driver finishes a trip before revealing the next one. This context is more flexible and offers more recourse options in case of disruptions. Moreover, the weekly problem can be split into daily problems, since demand is daily and the mills are open for limited periods during the day. We use a mathematical programming model based on a space-time network to react to disruptions. Although disruptions can have different effects on the initial transport plan, a key feature of the proposed model is that it remains valid for handling all unforeseen events, whatever their nature: their impact is captured in the space-time network and in the input parameters rather than in the model itself. The model is solved for the current day each time an unforeseen event is revealed.
In the last article, the truck fleet is heterogeneous and includes trucks with on-board loaders. The route configuration of these trucks differs from that of regular trucks, since they do not have to be synchronized with the loaders. We use a mathematical model in which the columns can be easily and naturally interpreted as truck routes. We solve this model using column generation. First, we relax the integrality of the decision variables and consider only a subset of the feasible routes. Routes with the potential to improve the current solution are added to the model iteratively. A space-time network is used both to represent the impacts of unforeseen events and to generate these routes. The solution obtained is usually fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forestry industry, and numerical results are presented for the three contexts.
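The column-generation scheme summarized above can be sketched in a few lines. The fragment below is a hypothetical illustration, not the paper's model: a restricted master LP over a small set of candidate truck routes is solved with scipy's HiGHS interface, the dual prices of the demand constraints are read off, and a pricing callback (the placeholder price_new_routes, standing in for the space-time-network subproblem) is assumed to return new routes with negative reduced cost until none remain.

# Minimal column-generation sketch for a route-based master problem (illustrative only).
import numpy as np
from scipy.optimize import linprog

def solve_restricted_master(costs, columns, demand):
    """LP relaxation of the master: route usage x >= 0 covering each demand row."""
    A_eq = np.column_stack(columns)             # one column per candidate route
    res = linprog(c=costs, A_eq=A_eq, b_eq=demand,
                  bounds=[(0, None)] * len(costs), method="highs")
    return res.x, res.fun, res.eqlin.marginals  # primal solution, cost, dual prices

def column_generation(initial_routes, initial_costs, demand, price_new_routes,
                      max_iters=50):
    routes, costs = list(initial_routes), list(initial_costs)
    for _ in range(max_iters):
        x, obj, duals = solve_restricted_master(np.array(costs), routes, demand)
        # Pricing step (assumed): return routes with negative reduced cost,
        # or nothing once the LP relaxation is optimal.
        new_routes, new_costs = price_new_routes(duals)
        if not new_routes:
            break
        routes.extend(new_routes)
        costs.extend(new_costs)
    return x, obj, routes

In a full branch-and-price implementation this loop is embedded in a branch-and-bound search that branches on fractional route variables, as is done in the article.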
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Phase change problems arise in many practical applications such as air-conditioning and refrigeration, thermal energy storage systems and thermal management of electronic devices. The physical phenomena in such applications are complex and often difficult to study in detail with experimental techniques alone. Efforts to improve computational techniques for analyzing two-phase flow problems with phase change are therefore gaining momentum. The development of numerical methods for multiphase flow has generally been motivated by the need to account more accurately for (a) large topological changes such as phase breakup and merging, (b) sharp representation of the interface and its discontinuous properties and (c) accurate and mass-conserving motion of the interface. In addition to these considerations, numerical simulation of multiphase flow with phase change introduces additional challenges related to discontinuities in the velocity and temperature fields. Moreover, the velocity field is no longer divergence free. For phase change problems, the focus of developmental efforts has thus been on numerically attaining proper conservation of energy across the interface, in addition to the accurate treatment of mass and momentum fluxes and the associated interface advection. Among the initial efforts related to the simulation of bubble growth in film boiling applications, the work in \cite{Welch1995} was based on an interface tracking method using a moving unstructured mesh; that study considered moderate interfacial deformations. A similar problem was subsequently studied using moving, boundary-fitted grids \cite{Son1997}, again for regimes of relatively small topological changes. A hybrid interface tracking method with a moving interface grid overlapping a static Eulerian grid was developed in \cite{Juric1998} for the computation of a range of phase change problems, including three-dimensional film boiling \cite{esmaeeli2004computations}, multimode two-dimensional pool boiling \cite{Esmaeeli2004} and film boiling on horizontal cylinders \cite{Esmaeeli2004a}. The handling of interface merging and pinch-off, however, remains a challenge for methods that explicitly track the interface. As large topological changes are crucial for phase change problems, attention has turned in recent years to front capturing methods utilizing implicit interfaces, which are more effective in treating complex interface deformations. The VOF (Volume of Fluid) method was adopted in \cite{Welch2000} to simulate the one-dimensional Stefan problem and the two-dimensional film boiling problem. The approach employed a specific model for mass transfer across the interface involving a mass source term within cells containing the interface. This VOF-based approach was further coupled with the level set method in \cite{Son1998}, employing a smeared-out Heaviside function to avoid the numerical instability related to the source term. The coupled level set and volume of fluid method and the diffused interface approach were used for film boiling with water and R134a at near-critical pressure conditions \cite{Tomar2005}; the effect of superheat and saturation pressure on the frequency of bubble formation was analyzed with this approach. The work in \cite{Gibou2007} used the ghost fluid and level set methods for phase change simulations.
A similar approach was adopted in \cite{Son2008} to study various boiling problems, including three-dimensional film boiling on a horizontal cylinder, nucleate boiling in a microcavity \cite{lee2010numerical} and flow boiling in a finned microchannel \cite{lee2012direct}. The work in \cite{tanguy2007level} also used the ghost fluid method and proposed an improved algorithm based on enforcing continuity and a divergence-free condition for the extended velocity field. The work in \cite{sato2013sharp} employed a multiphase model based on volume fraction with an interface sharpening scheme and derived a phase change model based on the local interface area and mass flux. Among the front capturing methods, sharp interface methods have been found to be particularly effective both for implementing sharp jumps and for resolving the interfacial velocity field. However, sharp velocity jumps render the solution susceptible to erroneous oscillations in pressure and also lead to spurious interface velocities. To implement phase change, the work in \cite{Hardt2008} employed point mass source terms derived from a physical basis for the evaporating mass flux. To avoid numerical instability, the authors smeared the mass source by solving a pseudo-time-step diffusion equation. This measure, however, led to mass conservation issues due to non-symmetric integration over the distributed mass source region. The problem of spurious pressure oscillations related to point mass sources was also investigated in \cite{Schlottke2008}. Although their method is based on the VOF, the large pressure peaks associated with a sharp mass source were observed to be similar to those for the interface tracking method. Such spurious fluctuations in pressure are particularly undesirable because their effect is globally transmitted in incompressible flow. Hence, the pressure field arising from phase change needs to be computed with greater accuracy than is reported in the current literature. The accuracy of interface advection in the presence of an interfacial mass flux (mass flux conservation) has been discussed in \cite{tanguy2007level,tanguy2014benchmarks}. The authors found that the method of extending one phase velocity to the entire domain, suggested by Nguyen et al. in \cite{nguyen2001boundary}, suffers from a lack of mass flux conservation when the density difference is high. To improve the solution, the authors impose a divergence-free condition on the extended velocity field by solving a constant-coefficient Poisson equation. The approach has shown good results for an enclosed bubble or droplet but is not general enough for more complex flows and requires the additional solution of a linear system of equations. In the current thesis, an improved approach that addresses both the numerical oscillation of pressure and the spurious interface velocity field is presented, featuring (i) continuous velocity and density fields within a thin interfacial region and (ii) temporal velocity correction steps to avoid an unphysical pressure source term. I also propose a general (iii) mass flux projection correction for improved mass flux conservation. The pressure and temperature gradient jump conditions are treated sharply. A series of one-dimensional and two-dimensional problems are solved to verify the performance of the new algorithm. Two-dimensional and cylindrical film boiling problems are also demonstrated and show good qualitative agreement with experimental observations and heat transfer correlations.
Finally, a study of Taylor bubble flow with heat transfer and phase change in a small vertical tube in axisymmetric coordinates is carried out using the new multiphase phase-change method.
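Since the one-dimensional Stefan problem is the standard verification case for such phase change algorithms, the similarity solution against which numerical results are typically checked can be written down compactly. The sketch below is the generic textbook solution with placeholder vapor-like properties; it is not the thesis's algorithm.

# Similarity solution of the classical one-phase, one-dimensional Stefan problem:
# s(t) = 2*lam*sqrt(alpha*t), with lam solving lam*exp(lam^2)*erf(lam) = Ste/sqrt(pi).
# Textbook reference solution only; material parameters are illustrative.
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

def stefan_constant(stefan_number):
    """Solve the transcendental equation for the similarity constant lam."""
    f = lambda lam: lam * np.exp(lam**2) * erf(lam) - stefan_number / np.sqrt(np.pi)
    return brentq(f, 1e-9, 10.0)

def interface_position(t, alpha, lam):
    """Interface location s(t) = 2 * lam * sqrt(alpha * t)."""
    return 2.0 * lam * np.sqrt(alpha * t)

cp, h_fg, alpha = 2030.0, 2.26e6, 2.3e-5     # placeholder vapor properties (SI units)
superheat = 10.0                             # wall superheat [K]
ste = cp * superheat / h_fg                  # Stefan number
lam = stefan_constant(ste)
t = np.array([0.25, 0.5, 1.0])
print("lambda =", lam)
print("s(t) [m] =", interface_position(t, alpha, lam))

A new phase change solver of the kind described above is typically required to reproduce this interface trajectory before being applied to film boiling or Taylor bubble configurations.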
Abstract:
In this article we consider the development of discontinuous Galerkin finite element methods for the numerical approximation of the compressible Navier-Stokes equations. For the discretization of the leading order terms, we propose employing the generalization of the symmetric version of the interior penalty method, originally developed for the numerical approximation of linear self-adjoint second-order elliptic partial differential equations. In order to solve the resulting system of nonlinear equations, we exploit a (damped) Newton-GMRES algorithm. Numerical experiments demonstrating the practical performance of the proposed discontinuous Galerkin method with higher-order polynomials are presented.
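The (damped) Newton-GMRES strategy mentioned above can be sketched generically. The fragment below is an illustrative, matrix-free variant with a simple backtracking damping rule; the residual and Jacobian-vector callbacks stand in for the DG discretization and are not the authors' implementation.

# Generic damped Newton-GMRES iteration for a nonlinear system R(u) = 0.
# Illustrative only; `residual` and `jac_vec` are assumed user-supplied callbacks.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(residual, jac_vec, u0, tol=1e-8, max_newton=30):
    u = u0.copy()
    for _ in range(max_newton):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        # Linearized system J(u) du = -r, solved inexactly with GMRES.
        J = LinearOperator((u.size, u.size), matvec=lambda v: jac_vec(u, v), dtype=float)
        du, _ = gmres(J, -r)
        # Damping: halve the step until the residual norm decreases.
        alpha = 1.0
        while alpha > 1e-4 and np.linalg.norm(residual(u + alpha * du)) >= np.linalg.norm(r):
            alpha *= 0.5
        u = u + alpha * du
    return u

# Tiny demo: R(u) = [u0^2 + u1 - 3, u0 + u1^2 - 5], with a finite-difference
# Jacobian-vector product standing in for an analytic linearization.
R = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
Jv = lambda u, v: (R(u + 1e-7 * v) - R(u)) / 1e-7
print(newton_gmres(R, Jv, np.array([1.0, 1.0])))       # converges to (1, 2)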
Abstract:
In this article we consider the application of the generalization of the symmetric version of the interior penalty discontinuous Galerkin finite element method to the numerical approximation of the compressible Navier--Stokes equations. In particular, we consider the a posteriori error analysis and adaptive mesh design for the underlying discretization method. Indeed, by employing a duality argument, (weighted) Type I a posteriori bounds are derived for the estimation of the error measured in terms of general target functionals of the solution; these error estimates involve the product of the finite element residuals with local weighting terms involving the solution of a certain dual problem that must be numerically approximated. This general approach leads to the design of economical finite element meshes specifically tailored to the computation of the target functional of interest, as well as providing efficient error estimation. Numerical experiments demonstrating the performance of the proposed approach are presented.
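The residual-times-weight structure of these Type I bounds can be illustrated abstractly. The sketch below uses made-up arrays for a ten-element mesh; it only shows how elementwise residuals and dual weights combine into indicators and a global estimate of the error in the target functional, and is not the authors' estimator.

# Generic dual-weighted-residual indicator: eta_K = |r_K * omega_K|, with the
# error in the target functional estimated by the sum over elements.
import numpy as np

def dwr_indicators(element_residuals, dual_weights):
    """Elementwise indicators and the resulting global error estimate."""
    eta = np.abs(np.asarray(element_residuals) * np.asarray(dual_weights))
    return eta, eta.sum()

def mark_fixed_fraction(eta, fraction=0.2):
    """Refine the elements carrying the largest indicators."""
    n_mark = max(1, int(fraction * eta.size))
    return np.argsort(eta)[-n_mark:]

rng = np.random.default_rng(0)
eta, estimate = dwr_indicators(rng.random(10) * 1e-3, rng.random(10))
print("estimated functional error:", estimate)
print("elements marked for refinement:", mark_fixed_fraction(eta))

In practice the weights typically come from a numerically approximated dual solution, which is what tailors the resulting meshes to the chosen target functional.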
Study of white spot disease in four native species in the Persian Gulf by histopathology and PCR methods
Abstract:
Following the serious disease outbreaks caused by a new virus (WSV) among cultured penaeid shrimps in Asian countries such as China since 1993, and later in Latin American countries, a rapid and severe mortality occurred during June and July 2002 in cultured Penaeus indicus in the Abadan region in the south of Iran, with the typical signs and symptoms of White Spot Syndrome Virus; the diagnosis was confirmed by histopathology, PCR, TEM and virology studies. This study was conducted in 2005 to determine the prevalence (rate of infection, ROI) and to grade the severity of infection (SOI) of WSD in five species, using 150 samples of captured shrimps and 90 samples of cultured ones: Penaeus indicus, P. semisulcatus, P. merguiensis, Parapenaopsis styliferus and Metapenaeus affinis. Of the 240 samples, 136 showed clinical and macroscopic signs and symptoms, including white spots on the carapace (0.5-2 mm), easy removal of the cuticle, fragility of the hepatopancreas and red coloration of the motile limbs. Histopathological changes such as specific intranuclear inclusion bodies (Cowdry type A) were observed in all target tissues (gill, epidermis, haemolymph and midgut) but not in the hepatopancreas, among shrimps collected from various farms in the south and among shrimps captured from the Persian Gulf, even those without clinical signs. The ROI for each species was estimated using the Natividad & Lightner (1992b) formula, and the SOI was graded using the generalized scheme provided by Lightner (1996) for assigning a numerical qualitative value to the severity grade of infection, based on histopathology and counts of specific inclusion bodies at different stages (as modified by B. Gholamhoseini). Samples with clinical signs showed grades above 2. Most of the P. semisulcatus and M. affinis samples showed a grade of 3, whereas in most P. styliferus samples a grade of 4 was observed, which suggests different sensitivities of the different species. All samples were tested by nested PCR with the IQ™ 2000 WSSV kit; 183 of the 240 samples were positive, and the three levels of infection indicated by this PCR confirmed our SOI grades, though the PCR results were more specific.
Abstract:
In this paper, we present a new numerical method to solve fractional differential equations (FDEs). Given a fractional derivative of arbitrary real order, we present an approximation formula for the fractional operator that involves integer-order derivatives only. With this, we can rewrite an FDE as a classical differential equation and then apply any known technique. With some examples, we show the accuracy of the method.
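For comparison, a standard way of solving such equations numerically is the explicit Grünwald-Letnikov scheme below. It is a generic textbook discretization given purely for illustration; it is not the integer-order-derivative approximation formula proposed in the paper.

# Explicit Grunwald-Letnikov scheme for a Caputo-type FDE
#   D^alpha y(t) = f(t, y),  y(0) = y0,  0 < alpha < 1.   (textbook illustration)
import numpy as np

def gl_weights(alpha, n):
    """w_j = (-1)^j * binom(alpha, j), built recursively for j = 0..n."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def solve_fde(f, y0, alpha, t_end, n_steps):
    h = t_end / n_steps
    t = np.linspace(0.0, t_end, n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    w = gl_weights(alpha, n_steps)
    for n in range(1, n_steps + 1):
        memory = np.dot(w[1:n + 1], y[n - 1::-1] - y0)    # fractional history term
        y[n] = y0 + h**alpha * f(t[n - 1], y[n - 1]) - memory
    return t, y

# D^0.5 y = -y, y(0) = 1; the exact solution is the Mittag-Leffler function E_0.5(-t^0.5).
t, y = solve_fde(lambda t, y: -y, 1.0, 0.5, 1.0, 200)
print(y[-1])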
Abstract:
With the continued miniaturization and increasing performance of electronic devices, new technical challenges have arisen. One such issue is delamination occurring at critical interfaces inside the device. This major reliability issue can occur during the manufacturing process or during normal use of the device. Proper evaluation of the adhesion strength of critical interfaces early in the product development cycle can help reduce reliability issues and the time-to-market of the product. However, conventional adhesion strength testing is inherently limited in the face of package miniaturization, which brings further technical challenges in quantifying design integrity and reliability. Although there are many different interfaces in today's advanced electronic packages, they can be generalized into two main categories: 1) rigid-to-rigid connections with a thin flexible polymeric layer in between, or 2) a thin film membrane on a rigid structure. Since every technique has its own advantages and disadvantages, multiple testing methods must be enhanced and developed to accommodate all the interfaces encountered in emerging electronic packaging technologies. For evaluating the adhesion strength of strongly bonded interfaces in thin multilayer structures, a novel adhesion test configuration called the “single cantilever adhesion test” (SCAT) is proposed and implemented for an epoxy molding compound (EMC) and photo solder resist (PSR) interface. The test method is then shown to be capable of comparing and selecting the stronger of two potential EMC/PSR material sets. Additionally, a theoretical approach for establishing the applicable testing domain of a four-point bending test method is presented. For evaluating polymeric films on rigid substrates, the major testing challenges are reducing testing scatter and factoring in the potentially degrading effect of environmental conditioning on the material properties of the film. An advanced blister test with a predefined test area, which considers an elasto-plastic analytical solution, was developed and implemented for a conformal coating used to prevent tin whisker growth. This test method was then extended by employing a numerical method for evaluating the adhesion strength when the polymer film's properties are unknown.
Abstract:
We develop the a-posteriori error analysis of hp-version interior-penalty discontinuous Galerkin finite element methods for a class of second-order quasilinear elliptic partial differential equations. Computable upper and lower bounds on the error are derived in terms of a natural (mesh-dependent) energy norm. The bounds are explicit in the local mesh size and the local degree of the approximating polynomial. The performance of the proposed estimators within an automatic hp-adaptive refinement procedure is studied through numerical experiments.
Abstract:
In this paper we consider the a posteriori and a priori error analysis of discontinuous Galerkin interior penalty methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes. In particular, we discuss the question of error estimation for linear target functionals, such as the outflow flux and the local average of the solution. Based on our a posteriori error bound we design and implement the corresponding adaptive algorithm to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement. The theoretical results are illustrated by a series of numerical experiments.
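The adaptive strategy outlined above follows the usual solve-estimate-mark-refine cycle, terminated once the estimated error in the target functional falls below the prescribed tolerance. The sketch below shows only that control structure; the four callbacks are placeholders for the actual DG solver, the a posteriori estimator and the isotropic/anisotropic refinement routines, none of which are reproduced here.

# Generic adaptive loop driven by an error estimate for a target functional.
# The callbacks are assumed placeholders, not the paper's implementation.
def adaptive_loop(mesh, solve, estimate, mark, refine, tol, max_steps=20):
    for _ in range(max_steps):
        u_h = solve(mesh)                               # discrete solution on current mesh
        eta_elements, total_estimate = estimate(mesh, u_h)
        if abs(total_estimate) <= tol:                  # functional error controlled to tol
            break
        marked = mark(eta_elements)                     # e.g. fixed-fraction marking
        mesh = refine(mesh, marked)                     # isotropic or anisotropic splits
    return mesh, u_h, total_estimate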
Abstract:
We consider the a priori error analysis of hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form under weak assumptions on the mesh design and the local finite element spaces employed. In particular, we prove a priori hp-error bounds for linear target functionals of the solution, on (possibly) anisotropic computational meshes with anisotropic tensor-product polynomial basis functions. The theoretical results are illustrated by a numerical experiment.
Abstract:
This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are addressed in the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed, and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Overall, this work contributes to the literature on transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take the survey design into account. All methods are rigorously validated on both real and simulated data.
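The multidimensional normal integrals referred to above, handled in the dissertation with the Genz algorithm, can be illustrated with an off-the-shelf routine. The sketch below evaluates one rectangle probability with scipy's multivariate normal CDF (itself based on Genz-style numerical integration) using a made-up three-alternative covariance; it shows only the kind of probability involved, not the dissertation's estimation code.

# Evaluating a trivariate normal probability of the kind that appears in
# unordered (probit) discrete-continuous likelihoods. Covariance and bounds
# below are illustrative placeholders.
import numpy as np
from scipy.stats import multivariate_normal

cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
upper = np.array([0.2, -0.1, 0.5])        # hypothetical utility-difference thresholds

# P(X1 <= 0.2, X2 <= -0.1, X3 <= 0.5) for X ~ N(0, cov)
prob = multivariate_normal(mean=np.zeros(3), cov=cov).cdf(upper)
print(prob)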