900 results for Numerical Schemes


Relevance:

20.00%

Publisher:

Abstract:

The results of a numerical study of the ignition of premixed hydrogen-air flows by an oblique shock wave (OSW) stabilized by a wedge are presented, for initial and boundary conditions under which a transition between the initial OSW and an oblique detonation wave (ODW) is observed. More precisely, the objectives of the paper are: (i) to identify the different possible structures of the transition region between the initial OSW and the resulting ODW, and (ii) to demonstrate the effect on the ODW of an abrupt decrease of the wedge angle such that the final part of the wedge surface becomes parallel to the initial flow. For this geometrical configuration and for the initial and boundary conditions considered, the overdriven detonation supported by the initial wedge angle is found to relax towards a Chapman-Jouguet detonation in the region where the wedge surface is parallel to the initial flow. Computations are performed using an adaptive, unstructured-grid, finite volume computer code previously developed for computing high-speed, compressible flows of reactive gas mixtures. Physico-chemical properties are functions of the local mixture composition, temperature and pressure, and are computed using the CHEMKIN-II subroutines.
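The relation between the wedge angle and the resulting shock angle, which controls whether the initial oblique shock can attach at all, is the classical theta-beta-M relation of non-reactive gas dynamics. The sketch below (not part of the paper's code; the Mach number, wedge angle and bisection inversion are illustrative assumptions for a calorically perfect gas) computes the weak-branch wave angle for a given deflection.

```python
import math

def deflection_angle(beta, M1, gamma=1.4):
    """Flow deflection angle theta (rad) produced by an oblique shock
    at wave angle beta (rad) for upstream Mach number M1
    (classical theta-beta-M relation, calorically perfect gas)."""
    num = 2.0 / math.tan(beta) * (M1**2 * math.sin(beta)**2 - 1.0)
    den = M1**2 * (gamma + math.cos(2.0 * beta)) + 2.0
    return math.atan(num / den)

def wave_angle(theta, M1, gamma=1.4, weak=True):
    """Invert the theta-beta-M relation by bisection (weak-shock branch)."""
    lo = math.asin(1.0 / M1) + 1e-9   # Mach angle: attached-shock lower bound
    hi = math.pi / 2.0 - 1e-9
    # locate the wave angle of maximum deflection with a coarse scan
    betas = [lo + (hi - lo) * i / 2000 for i in range(2001)]
    b_max = max(betas, key=lambda b: deflection_angle(b, M1, gamma))
    a, b = (lo, b_max) if weak else (b_max, hi)
    for _ in range(80):
        mid = 0.5 * (a + b)
        if (deflection_angle(mid, M1, gamma) - theta) * (1 if weak else -1) < 0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)

# Example: Mach 7 inflow over a 25 degree wedge (illustrative numbers only)
beta = wave_angle(math.radians(25.0), 7.0)
print(round(math.degrees(beta), 1))
```

For these illustrative numbers the weak-shock wave angle comes out near 33 degrees; the reactive case studied in the paper adds heat release on top of this purely gas-dynamic balance.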

Abstract:

Recently, due to increasing total construction and transportation costs and the difficulties associated with handling massive structural components or assemblies, there has been growing financial pressure to reduce structural weight. Furthermore, advances in material technology, coupled with continuing advances in design tools and techniques, have encouraged engineers to vary and combine materials, offering new opportunities to reduce the weight of mechanical structures. These new lower-mass systems, however, are more susceptible to inherent imbalances, a weakness that can result in higher shock and harmonic resonances and hence poor structural dynamic performance. The objective of this thesis is the modeling of layered sheet steel elements to accurately predict their dynamic performance. During the development of the layered sheet steel model, a numerical modeling approach, Finite Element Analysis and Experimental Modal Analysis are applied in building a modal model of the layered sheet steel elements. Furthermore, to gain a better understanding of the dynamic behavior of layered sheet steel, several binding methods have been studied to understand and demonstrate how a binding method affects the dynamic behavior of layered sheet steel elements compared to a single homogeneous steel plate. Based on the developed layered sheet steel model, the dynamic behavior of a lightweight wheel structure, to be used as the stator structure of an outer-rotor Direct-Drive Permanent Magnet Synchronous Generator designed for high-power wind turbines, is studied.
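The modal model at the heart of the thesis reduces, in its simplest form, to an undamped eigenvalue problem det(K - w^2 M) = 0. The sketch below (not the thesis model; the two-degree-of-freedom chain and all numbers are hypothetical) solves that problem in closed form for a 2-DOF system to show what "natural frequencies of a modal model" means concretely.

```python
import math

def natural_frequencies_2dof(m1, m2, k1, k2):
    """Undamped natural frequencies (Hz) of a 2-DOF chain:
    ground -k1- m1 -k2- m2. Solves det(K - w^2 M) = 0, which for
    this diagonal-mass system is a quadratic in w^2."""
    # K = [[k1 + k2, -k2], [-k2, k2]], M = diag(m1, m2)
    # det: m1*m2*w^4 - (m1*k2 + m2*(k1 + k2))*w^2 + k1*k2 = 0
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4 * a * c)
    w2 = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
    return [math.sqrt(w) / (2 * math.pi) for w in w2]

# Hypothetical plate-stack parameters (kg, N/m), for illustration only
f1, f2 = natural_frequencies_2dof(m1=2.0, m2=2.0, k1=8.0e5, k2=8.0e5)
print(round(f1, 1), round(f2, 1))
```

A real layered-sheet-steel model has many more degrees of freedom and damping from the interfaces between sheets, but the eigenstructure extracted by FEA and compared against Experimental Modal Analysis is of exactly this kind.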

Abstract:

Innovative gas cooled reactors, such as the pebble bed reactor (PBR) and the gas cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.
A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air cooled smooth and rib roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to the specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib roughened geometry, heat transfer was severely underpredicted by the realisable k-epsilon turbulence model used. An additional calculation with a v2-f turbulence model showed significant improvement in the heat transfer results, which is most likely due to the better performance of the model in separated flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib roughened GFR fuel rod geometries.
It is suggested that the viewpoints of numerical modelling be included in the planning of experiments, to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, the multi-physical aspects of experiments should also be considered and documented in reasonable detail.
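The porosity-mapping step described above assigns a void fraction to each CFD cell from the DEM sphere positions. The following Python sketch (not the thesis' MATLAB code; the Monte Carlo point-sampling shortcut and all numbers are our own assumptions) illustrates the idea for a single axis-aligned cell.

```python
import random

def cell_porosity(cell_min, cell_max, spheres, n_samples=20000, seed=1):
    """Estimate the porosity (void fraction) of an axis-aligned box cell
    partially occupied by spheres, by Monte Carlo point sampling.
    spheres: list of ((x, y, z), radius)."""
    rng = random.Random(seed)
    solid = 0
    for _ in range(n_samples):
        # draw a uniform random point inside the cell
        p = [rng.uniform(lo, hi) for lo, hi in zip(cell_min, cell_max)]
        if any(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) <= r * r
               for c, r in spheres):
            solid += 1
    return 1.0 - solid / n_samples

# Single pebble of radius 0.5 centred in a unit cell (illustrative):
# the exact porosity is 1 - (4/3)*pi*0.5^3 ~ 0.476
eps = cell_porosity((0, 0, 0), (1, 1, 1), [((0.5, 0.5, 0.5), 0.5)])
print(round(eps, 2))
```

A production implementation would use exact sphere-box intersection volumes or a finer deterministic quadrature, but the sampling version already shows how local packing-density variations propagate into the continuum thermal-hydraulics model.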

Abstract:

Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied for improving the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first component shock in the case of a small feed pulse.
It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable to high performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects. The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach to the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works better the higher the column efficiency and the lower the purity constraints are.
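The design equations above rest on the competitive Langmuir adsorption isotherm, q_i = a_i c_i / (1 + sum_j b_j c_j), in which each component suppresses the adsorption of the others through the shared denominator. A minimal sketch (coefficient values are hypothetical, not taken from the thesis):

```python
def competitive_langmuir(c, a, b):
    """Competitive Langmuir isotherm for an n-component mixture:
    q_i = a_i * c_i / (1 + sum_j b_j * c_j).
    c: fluid-phase concentrations; a, b: isotherm coefficients."""
    denom = 1.0 + sum(bj * cj for bj, cj in zip(b, c))
    return [ai * ci / denom for ai, ci in zip(a, c)]

# Illustrative binary system (all coefficients hypothetical):
q1, q2 = competitive_langmuir(c=[1.0, 2.0], a=[2.0, 3.0], b=[0.1, 0.2])
print(round(q1, 3), round(q2, 3))  # → 1.333 4.0
```

The nonlinearity of this isotherm is what produces the shock and rarefaction waves whose closed-form description the thesis completes.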

Abstract:

The effects of tip clearance and vaneless diffuser width on the stage performance and flow fields of a centrifugal compressor were studied numerically, and the results were compared to experimental measurements. The diffuser width was changed by moving the shroud side of the diffuser axially, and six tip clearance sizes from 0.5 to 3 mm were studied. Moreover, the effects of rotor-stator interaction on the diffuser and impeller flow fields and performance were studied. Transient simulations were also carried out in order to investigate the influence of the interaction on the impeller and diffuser performance parameters. It was seen that the pinch could improve performance, helping to produce a more uniform flow at the diffuser exit and less backflow from the diffuser to the impeller.

Abstract:

The standard cosmological view is based on the assumptions of homogeneity, isotropy and general relativistic gravitational interaction. These alone are not sufficient to describe the current cosmological observations of the accelerated expansion of space. Although general relativity has been tested extremely accurately in describing local gravitational phenomena, there is a strong demand to modify either the energy content of the universe or the gravitational interaction itself to account for the accelerated expansion. By adding a non-luminous matter component and a constant energy component with negative pressure, the observations can be explained with general relativity. Gravitation, cosmological models and their observational phenomenology are discussed in this thesis. Several classes of dark energy models motivated by theories outside the standard formulation of physics were studied, with emphasis on observational interpretation. All cosmological models that seek to explain the cosmological observations must also conform to local phenomena, which poses stringent conditions on physically viable cosmological models. Predictions from a supergravity quintessence model were compared to Type Ia supernova data, and several metric gravity models were tested against local experimental results. Polytropic stellar configurations of solar-type, white dwarf and neutron stars were studied numerically with modified gravity models, the main interest being the spacetime around the stars. The results shed light on the viability of the studied cosmological models.
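In ordinary general relativity the polytropic configurations mentioned above reduce to the Lane-Emden equation; modified-gravity studies typically start from this Newtonian baseline. The sketch below (a generic illustration, not the thesis code; the integration scheme and step size are our own choices) integrates the equation and recovers the known surface radius for polytropic index n = 1.

```python
import math

def lane_emden_first_zero(n, h=1e-4):
    """Integrate the Lane-Emden equation
        theta'' + (2/xi) theta' + theta^n = 0,  theta(0)=1, theta'(0)=0,
    with Heun's method and return the first zero xi_1
    (the dimensionless surface radius of a polytrope of index n)."""
    # start slightly off-centre using the series theta ~ 1 - xi^2/6
    xi, theta, dtheta = h, 1.0 - h * h / 6.0, -h / 3.0

    def f(xi, th, dth):
        return dth, -2.0 / xi * dth - max(th, 0.0) ** n

    while theta > 0.0:
        k1t, k1d = f(xi, theta, dtheta)
        k2t, k2d = f(xi + h, theta + h * k1t, dtheta + h * k1d)
        theta += h * 0.5 * (k1t + k2t)
        dtheta += h * 0.5 * (k1d + k2d)
        xi += h
    return xi

# The n = 1 polytrope has the exact solution theta = sin(xi)/xi, so xi_1 = pi
print(round(lane_emden_first_zero(1.0), 3))
```

Replacing the Newtonian structure equations by their modified-gravity counterparts changes the right-hand side but not the overall shooting-to-the-surface procedure.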

Abstract:

Numerical simulation of plasma sources is very important. Such models allow different plasma parameters to be varied with a high degree of accuracy. Moreover, they allow measurements to be made without disturbing the balance of the system. Recently, scientific and practical interest has grown in so-called two-chamber plasma sources. In one chamber (the small, or discharge, chamber) an external power source is embedded, and there the plasma forms. In the other (the large, or diffusion, chamber) plasma exists due to the transport of particles and energy across the boundary between the chambers. In this work, models of two-chamber plasma sources with argon and oxygen as active media were constructed. These models give interesting results for the electric field profiles and, as a consequence, for the density profiles of the charged particles.

Abstract:

All-electron partitioning of wave functions into products Ψ_core·Ψ_val of core and valence parts in orbital space results in the loss of core-valence antisymmetry, the uncorrelation of the motion of core and valence electrons, and core-valence overlap. These effects are studied with the variational Monte Carlo method using appropriately designed wave functions for the first-row atoms and positive ions. It is shown that the loss of antisymmetry with respect to the interchange of core and valence electrons is a dominant effect which increases rapidly through the row, while the effect of core-valence uncorrelation is generally smaller. Orthogonality of the core and valence parts partially substitutes for the exclusion principle and is absolutely necessary for meaningful calculations with partitioned wave functions. Core-valence overlap may lead to nonsensical values of the total energy. It has been found that even relatively crude core-valence partitioned wave functions can generally estimate ionization potentials with better accuracy than traditional, non-partitioned ones, provided that they achieve maximum separation (independence) of the core and valence shells accompanied by high internal flexibility of Ψ_core and Ψ_val. Our best core-valence partitioned wave function of this kind estimates the IPs with an accuracy comparable to the most accurate theoretical determinations in the literature.
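Variational Monte Carlo, the method used throughout this study, samples |Ψ|² with the Metropolis algorithm and averages the local energy E_L = HΨ/Ψ. A minimal sketch for the hydrogen atom (a textbook illustration, not the partitioned wave functions of the thesis; step size, sample counts and the trial form exp(-alpha r) are our own assumptions):

```python
import math, random

def vmc_hydrogen(alpha=1.0, n_steps=20000, step=0.5, seed=7):
    """Variational Monte Carlo for the hydrogen atom (atomic units)
    with trial wave function psi = exp(-alpha * r).
    Local energy: E_L = -alpha^2/2 + (alpha - 1)/r.
    Metropolis sampling of |psi|^2; returns the mean local energy."""
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    r = math.sqrt(sum(x * x for x in pos))
    acc_energy, count = 0.0, 0
    for i in range(n_steps):
        trial = [x + step * (rng.random() - 0.5) for x in pos]
        r_t = math.sqrt(sum(x * x for x in trial))
        # Metropolis acceptance with ratio |psi(trial)/psi(pos)|^2
        if rng.random() < math.exp(-2.0 * alpha * (r_t - r)):
            pos, r = trial, r_t
        if i > 1000:  # discard equilibration steps
            acc_energy += -0.5 * alpha**2 + (alpha - 1.0) / r
            count += 1
    return acc_energy / count

# With alpha = 1 the trial function is exact: E_L = -0.5 hartree everywhere
print(vmc_hydrogen(alpha=1.0))  # → -0.5
```

For any alpha != 1 the estimate rises above -0.5 hartree, in line with the variational principle; the partitioned wave functions of the study are sampled in exactly this way, only with far more elaborate Ψ.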

Abstract:

The purpose of this study was to determine the effect that calculators have on the attitudes and numerical problem-solving skills of primary students. The sample used for this research was one of convenience: two grade 3 classes within the York Region District School Board. The students in the experimental group used calculators for the problem-solving unit, while the students in the control group completed the same numerical problem-solving unit without calculators. The pretest-posttest control group design was used for this study. All students involved completed a computational pretest and an attitude pretest, and at the end of the study a computational posttest. Five students from the experimental group and five students from the control group received their posttests in the form of a taped interview. At the end of the unit, all students once again completed the attitude scale that they had received before the numerical problem-solving unit. Data for qualitative analysis included anecdotal observations, journal entries, and transcribed interviews. The constant comparative method was used to analyze the qualitative data. A t test was also performed on the data to determine whether there were changes in test and attitude scores between the control and experimental groups. Overall, the findings of this study support the hypothesis that calculators improve the attitudes of primary students toward mathematics. Also, there is some evidence to suggest that calculators improve the computational skills of grade 3 students.
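The group comparison described above rests on a two-sample t test. A stdlib-only sketch of Welch's version (the scores below are hypothetical illustrations, not the study's data):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent
    samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2a, se2b = va / na, vb / nb
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2a + se2b) ** 2 / (se2a**2 / (na - 1) + se2b**2 / (nb - 1))
    return t, df

# Hypothetical posttest scores (out of 20) for the two groups:
calculator = [15, 17, 14, 18, 16, 15, 17, 16]
control    = [13, 14, 12, 15, 14, 13, 12, 14]
t, df = welch_t(calculator, control)
print(round(t, 2), round(df, 1))
```

The t value is then compared against the t distribution with df degrees of freedom to decide whether the group difference is significant.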

Abstract:

In Part I, theoretical derivations for Variational Monte Carlo calculations are compared with results from a numerical calculation on He; both indicate that minimization of the ratio estimate of E_var, denoted E_MC, provides different optimal variational parameters than does minimization of the variance of E_MC. Similar derivations for Diffusion Monte Carlo calculations provide a theoretical justification for empirical observations made by other workers. In Part II, importance sampling in prolate spheroidal coordinates allows Monte Carlo calculations of E_var to be made for the van der Waals molecule He2, using a simplifying partitioning of the Hamiltonian and both an HF-SCF and an explicitly correlated wavefunction. Improvements are suggested which would permit the extension of the computational precision to the point where an estimate of the interaction energy could be made.

Abstract:

Second-rank tensor interactions, such as quadrupolar interactions between the spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of the molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free parameters.
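The three-term truncated expansion mentioned above has the form R(theta) = a0 + a2 P2(cos theta) + a4 P4(cos theta). The sketch below (a generic illustration, not the thesis' inversion code; coefficients and sampling angles are hypothetical) evaluates the expansion and recovers its coefficients by a stdlib-only linear least-squares fit.

```python
import math

def p2(x): return 0.5 * (3 * x * x - 1)
def p4(x): return 0.125 * (35 * x**4 - 30 * x * x + 3)

def relaxation_rate(theta, a0, a2, a4):
    """Orientation-dependent relaxation rate truncated at rank 4,
    as appropriate for a second-rank tensor interaction:
    R(theta) = a0 + a2 P2(cos theta) + a4 P4(cos theta)."""
    c = math.cos(theta)
    return a0 + a2 * p2(c) + a4 * p4(c)

def fit_three_term(thetas, rates):
    """Least-squares fit of (a0, a2, a4): build the 3x3 normal equations
    and solve them by Gaussian elimination (no external libraries)."""
    rows = [[1.0, p2(math.cos(t)), p4(math.cos(t))] for t in thetas]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    y = [sum(r[i] * v for r, v in zip(rows, rates)) for i in range(3)]
    for i in range(3):               # forward elimination
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            y[k] -= f * y[i]
    x = [0.0] * 3
    for i in (2, 1, 0):              # back substitution
        x[i] = (y[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

# Recover hypothetical coefficients from synthetic noiseless "data"
thetas = [math.radians(d) for d in range(0, 91, 5)]
rates = [relaxation_rate(t, 2.0, 0.8, -0.3) for t in thetas]
print([round(a, 3) for a in fit_three_term(thetas, rates)])  # → [2.0, 0.8, -0.3]
```

With noisy data the same fit stays well conditioned precisely because only three orientation terms are retained, which is the stabilizing role of the truncation in the thesis.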

Abstract:

It has been argued that in the construction and simulation process of computable general equilibrium (CGE) models, the choice of the proper macroclosure remains a fundamental problem. In this study, with a standard CGE model, we simulate disturbances stemming from the supply or demand side of the economy, under alternative macroclosures. According to our results, the choice of a particular closure rule, for a given disturbance, may have different quantitative and qualitative impacts. This seems to confirm the importance of simulating CGE models under alternative closure rules and eventually choosing the closure which best applies to the economy under study.

Abstract:

Network survivability is a very interesting area of technical study as well as a critical concern in network design. Given that more and more data is carried over communication networks, a single failure can interrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using that available capacity. This thesis deals with the design of survivable optical networks that use protection schemes based on p-cycles. More precisely, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on the placement of p-cycle protection structures, assuming that the working paths for the set of requests are defined a priori. Most existing work relies on heuristics or on solution methods that have difficulty solving large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods capable of tackling larger problems than those already presented in the literature. On the other hand, thanks to new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on the column generation technique, which is well suited to solving large-scale linear programming problems. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles.
We first propose formulations for the master and pricing problems, as well as a first column generation algorithm, for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, within a reasonable time, than those obtained by existing methods. A more compact formulation is then proposed for the pricing problem. In addition, we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. As for integer solutions, we propose two heuristic methods that find good solutions. We also undertake a systematic comparison between p-cycles and classical shared protection schemes, using unified, column generation based formulations to obtain results of good quality. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as for path protection, under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for the design of networks with p-cycles in the presence of availability requirements, and we obtain first lower bounds for this problem.
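Column generation avoids explicitly enumerating every candidate cycle, but on a tiny network the full enumeration that it implicitly replaces is easy to write down. The sketch below (a toy illustration of the search space, not the thesis' algorithm; the graph is hypothetical) lists every simple cycle of a small undirected network, i.e. every candidate p-cycle.

```python
def simple_cycles(adj):
    """Enumerate the simple cycles of an undirected graph given as an
    adjacency dict {node: set(neighbours)}. Each cycle is returned once,
    as a tuple starting at its smallest node."""
    cycles = set()

    def dfs(start, node, path, visited):
        for nxt in adj[node]:
            if nxt == start and len(path) >= 3:
                if path[1] < path[-1]:  # canonical orientation: keep one direction
                    cycles.add(tuple(path))
            elif nxt > start and nxt not in visited:
                dfs(start, nxt, path + [nxt], visited | {nxt})

    for s in sorted(adj):  # each cycle is rooted at its smallest node
        dfs(s, s, [s], {s})
    return sorted(cycles)

# Toy 4-node ring with one chord (a tiny hypothetical "network"):
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
for cyc in simple_cycles(adj):
    print(cyc)
```

The number of simple cycles grows exponentially with network size, which is exactly why the thesis prices out only promising cycles inside a column generation loop instead of enumerating them all.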

Abstract:

This thesis examines the impacts on the morphology of tributaries of the St. Lawrence River of the changes in their discharge and base level brought about by the climate change projected for the period 2010–2099. The selected tributaries (the Batiscan, Richelieu, Saint-Maurice, Saint-François and Yamachiche rivers) were chosen because of their differences in size, discharge and morphological context. Not only do these tributaries experience a modified hydrological regime because of climate change, but their base level (the water level of the St. Lawrence River) will also be affected. The one-dimensional (1D) morphodynamic model SEDROUT, originally developed for gravel-bed rivers in an aggradation mode, was adapted to the specific context of the St. Lawrence Lowlands tributaries in order to simulate sand-bed rivers with variable daily discharge and downstream water-level fluctuations. A module to simulate the partitioning of sediment around islands was also added to the model. The improved model (SEDROUT4-M), which was tested with small-scale simulations and with the current flow and sediment transport conditions in four tributaries of the St. Lawrence River, can now simulate a range of river morphodynamics problems. The changes in bed elevation and in sediment delivery to the St. Lawrence River over the period 2010–2099 were simulated with SEDROUT4-M for the Batiscan, Richelieu and Saint-François rivers for all combinations of seven hydrological regimes (current conditions and those predicted by three global climate models (GCMs) under two greenhouse gas scenarios) and three scenarios of change in the base level of the St. Lawrence River (no change, gradual decline, abrupt decline).
Les impacts sur l’apport de sédiments et l’élévation du lit diffèrent entre les MCG et semblent reliés au statut des cours d’eau (selon qu’ils soient en état d’aggradation, de dégradation ou d’équilibre), ce qui illustre l’importance d’examiner plusieurs rivières avec différents modèles climatiques afin d’établir des tendances dans les effets des changements climatiques. Malgré le fait que le débit journalier moyen et le débit annuel moyen demeurent près de leur valeur actuelle dans les trois scénarios de MCG, des changements importants dans les taux de transport de sédiments simulés pour chaque tributaire sont observés. Ceci est dû à l’impact important de fortes crues plus fréquentes dans un climat futur de même qu’à l’arrivée plus hâtive de la crue printanière, ce qui résulte en une variabilité accrue dans les taux de transport en charge de fond. Certaines complications avec l’approche de modélisation en 1D pour représenter la géométrie complexe des rivières Saint-Maurice et Saint-François suggèrent qu’une approche bi-dimensionnelle (2D) devrait être sérieusement considérée afin de simuler de façon plus exacte la répartition des débits aux bifurcations autour des îles. La rivière Saint-François est utilisée comme étude de cas pour le modèle 2D H2D2, qui performe bien d’un point de vue hydraulique, mais qui requiert des ajustements pour être en mesure de pleinement simuler les ajustements morphologiques des cours d’eau.

Abstract:

The pricing problem of interest here consists of maximizing the revenue generated by the users of a transportation network. To reach their destinations, users choose routes and travel on arcs on which we impose tolls. Each route is characterized (in the user's eyes) by its "disutility", a generalized length measure accounting for both the tolls and the other costs associated with its use. This problem has mostly been addressed under a deterministic demand model in which only routes of minimal disutility are assigned a positive flow. The deterministic model lends itself well to global resolution but lacks realism. Here we consider a probabilistic extension of this model, in which the users of a network are allocated to routes according to a logit discrete choice model. Although the resulting pricing problem is nonlinear and nonconvex, it nevertheless retains a strong combinatorial component that we exploit for algorithmic purposes. Our contribution is divided into three articles. In the first, we address the problem from a theoretical point of view for the case of a single origin-destination pair. We develop a first-order analysis that exploits the analytical properties of the logit assignment and prove the validity of network topology simplification rules that reduce the dimension of the problem without changing its solution. We then establish the unimodality of the problem for a broad range of topologies and generalize some of our results to the product-line pricing problem. In the second article, we address the problem from a numerical point of view for the case with several origin-destination pairs. We develop algorithms that exploit local information and the kinship between the probabilistic and deterministic formulations.
One result of our analysis is the derivation of bounds on the error made by the combinatorial models in approximating the logit revenue. Our numerical experiments show that a rudimentary combinatorial approximation often suffices to identify near-optimal solutions. In the third article, we consider the extension of the problem to heterogeneous demand. Demand assignment is then given by a mixed logit discrete choice model in which a user's price sensitivity is random. Under this model, the revenue expression is not analytic and cannot be evaluated exactly. However, we show that the use of nonlinear and combinatorial approximations makes it possible to identify near-optimal solutions. Finally, we take the opportunity to illustrate the richness of the model by means of an economic interpretation, and we examine in particular the revenue contribution of the different user groups.
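The logit revenue objective discussed throughout can be sketched in a few lines: each route's share of demand is proportional to exp(-theta * disutility), and revenue is the toll-weighted sum of those shares. The example below (a toy two-route instance with hypothetical costs, not from the thesis) scans the toll on one route and exhibits the unimodal revenue curve established in the first article.

```python
import math

def logit_revenue(tolls, base_costs, theta=1.0, demand=100.0):
    """Expected toll revenue under a logit route-choice model.
    Route r has disutility base_costs[r] + tolls[r]; the share of the
    demand taking route r is exp(-theta*d_r) / sum_s exp(-theta*d_s).
    Revenue = demand * sum_r P(r) * tolls[r]."""
    d = [c + t for c, t in zip(base_costs, tolls)]
    w = [math.exp(-theta * di) for di in d]
    z = sum(w)
    return demand * sum((wi / z) * t for wi, t in zip(w, tolls))

# Two parallel routes, one free and one tolled (all numbers hypothetical):
# scan the toll on route 1 to locate the revenue-maximizing value
best = max((logit_revenue([t / 10.0, 0.0], [1.0, 2.0]), t / 10.0)
           for t in range(0, 51))
print(round(best[1], 1), round(best[0], 1))
```

Raising the toll increases the revenue per user but drives users onto the free route through the logit shares; the maximum sits where these two effects balance, here at a toll of about 1.6 for roughly 56.7 units of revenue.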