Abstract:
The absolute nodal coordinate formulation was originally developed for the analysis of structures undergoing large rotations and deformations. This dissertation proposes several enhancements to finite beam and plate elements based on the absolute nodal coordinate formulation. The main scientific contribution of this thesis lies in the development of elements based on the absolute nodal coordinate formulation that do not suffer from the commonly known numerical locking phenomena. These elements can be used in a number of future practical applications, for example, the analysis of biomechanical soft tissues. This study presents several higher-order Euler–Bernoulli beam elements, a simple method to alleviate Poisson's and transverse shear locking in gradient deficient plate elements, and a nearly locking free gradient deficient plate element. The gradient deficient plate elements developed in this dissertation exhibit most of the common numerical locking phenomena encountered when the elastic energy is formulated using a continuum mechanics based description. Thus, with these straightforwardly formulated elements, which comprise only position and transverse direction gradient degrees of freedom, the pathologies of and remedies for the numerical locking phenomena are presented in a clear and understandable manner. The analysis of the Euler–Bernoulli beam elements developed in this study shows that choosing higher gradient degrees of freedom as nodal degrees of freedom leads to a smoother strain field, which improves the rate of convergence.
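As a rough illustration of the kinematics underlying such elements, and not code from the dissertation itself, the following Python sketch interpolates the global position of a two-node planar ANCF beam element from nodal positions and axial position gradients, using the standard cubic shape functions quoted in the ANCF literature (the element length and nodal values are made-up example data):

```python
import numpy as np

def ancf_shape_functions(xi, L):
    """Cubic shape functions of a two-node planar ANCF beam element.

    xi : dimensionless axial coordinate in [0, 1]
    L  : undeformed element length
    """
    s1 = 1 - 3 * xi**2 + 2 * xi**3
    s2 = L * (xi - 2 * xi**2 + xi**3)
    s3 = 3 * xi**2 - 2 * xi**3
    s4 = L * (-xi**2 + xi**3)
    return s1, s2, s3, s4

def position(xi, L, e):
    """Global position r(xi) interpolated from the nodal vector e.

    e packs, per node, the global position and the axial position
    gradient: e = [r1, dr1/dx, r2, dr2/dx], each entry a 2-vector.
    """
    s = ancf_shape_functions(xi, L)
    r1, r1x, r2, r2x = e
    return s[0] * r1 + s[1] * r1x + s[2] * r2 + s[3] * r2x

# A straight horizontal element of length 2: the midpoint is (1, 0).
L = 2.0
e = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
     np.array([2.0, 0.0]), np.array([1.0, 0.0])]
print(position(0.5, L, e))  # -> [1. 0.]
```

Because both positions and position gradients are nodal unknowns, large rigid-body rotations require no incremental rotation variables, which is the formulation's original motivation.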
Abstract:
The effects of the tip clearance and vaneless diffuser width on the stage performance and flow fields of a centrifugal compressor were studied numerically, and the results were compared with experimental measurements. The diffuser width was changed by moving the shroud side of the diffuser axially, and six tip clearance sizes from 0.5 to 3 mm were studied. Moreover, the effects of rotor-stator interaction on the diffuser and impeller flow fields and performance were studied. Transient simulations were also carried out in order to investigate the influence of the interaction on the impeller and diffuser performance parameters. It was seen that the pinch could improve the performance, helping to produce a more uniform flow at the exit and less backflow from the diffuser into the impeller.
Abstract:
The cosmological standard view is based on the assumptions of homogeneity, isotropy and general relativistic gravitational interaction. These alone are not sufficient to describe the current cosmological observations of the accelerated expansion of space. Although general relativity has been tested to great accuracy in describing local gravitational phenomena, there is a strong demand for modifying either the energy content of the universe or the gravitational interaction itself to account for the accelerated expansion. By adding a non-luminous matter component and a constant energy component with negative pressure, the observations can be explained within general relativity. This thesis discusses gravitation, cosmological models and their observational phenomenology. Several classes of dark energy models motivated by theories beyond the standard formulation of physics were studied, with emphasis on their observational interpretation. Any cosmological model that seeks to explain the cosmological observations must also conform to the local phenomena, which poses stringent conditions on physically viable cosmological models. Predictions from a supergravity quintessence model were compared with type Ia supernova data, and several metric gravity models were tested against local experimental results. Polytropic stellar configurations of sun-like stars, white dwarfs and neutron stars were studied numerically with modified gravity models, the main interest being the spacetime around the stars. The results shed light on the viability of the studied cosmological models.
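The thesis studies polytropes in modified gravity; purely as a minimal Newtonian baseline (an illustrative assumption, not the thesis's field equations), the Python sketch below integrates the classical Lane-Emden equation for a polytrope of index n and recovers the dimensionless stellar radius xi_1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(n, xi_max=20.0):
    """Integrate theta'' + (2/xi) theta' = -theta^n outward from the
    centre and return the first zero of theta, i.e. the dimensionless
    stellar radius xi_1 of a Newtonian polytrope of index n."""
    def rhs(xi, y):
        theta, dtheta = y
        # max(theta, 0) guards theta**n against small negative overshoot
        return [dtheta, -max(theta, 0.0)**n - 2.0 * dtheta / xi]

    surface = lambda xi, y: y[0]        # event: theta crosses zero
    surface.terminal = True
    surface.direction = -1

    # start slightly off-centre to avoid the 2/xi singularity at xi = 0
    sol = solve_ivp(rhs, (1e-6, xi_max), [1.0, 0.0],
                    events=surface, rtol=1e-9, atol=1e-9)
    return sol.t_events[0][0]

# n = 1 has the analytic solution theta = sin(xi)/xi, so xi_1 = pi.
print(lane_emden(1.0))   # ~3.14159
print(lane_emden(1.5))   # ~3.6538
```

In a modified gravity model the right-hand side acquires extra terms, but the numerical strategy of integrating outward to the stellar surface is the same.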
Abstract:
Numerical simulation of plasma sources is very important: such models allow different plasma parameters to be varied with a high degree of accuracy, and they allow measurements to be conducted without disturbing the balance of the system. Recently, scientific and practical interest has grown in so-called two-chamber plasma sources. In one chamber (the small, or discharge, chamber) an external power source is embedded, and the plasma forms there. In the other (the large, or diffusion, chamber) the plasma is sustained by the transport of particles and energy through the boundary between the chambers. In this work, models of two-chamber plasma sources with argon and oxygen as the active media were constructed. These models give interesting results for the electric field profiles and, as a consequence, for the density profiles of the charged particles.
Abstract:
The liberalisation of the wholesale electricity markets has been considered an efficient way to organise the markets. In Europe, the target is to liberalise and integrate the common European electricity markets. However, insufficient transmission capacity between the market areas hampers the integration, and therefore, new investments are required. Yet massive transmission capacity investments are usually not easy to carry through. This doctoral dissertation aims at elaborating on the critical determinants required to deliver the necessary transmission capacity investments, using the Nordic electricity market as an illustrative example. This study suggests that changes in the governance structure have affected the delivery of Nordic cross-border investments. In addition, the impacts of investments that are not fully delivered are studied. An insufficient transmission network can degrade market uniformity and may create a need to split the market into smaller submarkets. This may have financial impacts on market actors when the targeted efficient sharing of resources is not achieved, and it may even encourage gaming. The research methods applied in this doctoral dissertation are mainly empirical, ranging from a Delphi study to case studies and numerical calculations.
Abstract:
This licentiate thesis deals with the possibilities of applying mixed elements within the absolute nodal coordinate formulation. The absolute nodal coordinate formulation is a new type of approach for defining the coordinates of finite elements, and one of its goals is to improve the computational efficiency of elements undergoing large displacements or rotations. In this work, the absolute nodal coordinate formulation is presented in outline, and examples are given of some of the most typical elements expressed in terms of these coordinates. Mixed elements are element types in which there is always more than one set of unknown variables. Mixed elements are distinguished from irreducible elements by the inclusion of the displacement field among the variable sets, and from hybrid elements by the identical dimensions of the variables. Mixed elements are used, for example, in the structural analysis of incompressible materials, to lower the continuity requirements imposed on an element, or to model phenomena in which the physical properties are, for some reason, strongly interdependent. For this licentiate thesis, research was carried out on the potential of mixed elements to function within the absolute nodal coordinate formulation. As a result, two mixed element types with rather limited capabilities are presented in this work; their displacement fields are defined with respect to global coordinates and also include orientation terms. The research topic nevertheless requires considerable further work before mixed element types can be applied throughout structural analyses carried out with the absolute nodal coordinate formulation.
Abstract:
The purpose of this study was to determine the effect that calculators have on the attitudes and numerical problem-solving skills of primary students. The sample used for this research was one of convenience, consisting of two grade 3 classes within the York Region District School Board. The students in the experimental group used calculators for this problem-solving unit, while the students in the control group completed the same numerical problem-solving unit without calculators. The pretest-posttest control group design was used for this study. All students involved completed a computational pretest and an attitude pretest. At the end of the study, the students completed a computational posttest; five students from the experimental group and five from the control group received their posttests in the form of a taped interview. At the end of the unit, all students once again completed the attitude scale they had been given before the numerical problem-solving unit. Data for qualitative analysis included anecdotal observations, journal entries, and transcribed interviews, and the constant comparative method was used to analyze these data. A t test was also performed to determine whether test and attitude scores differed between the control and experimental groups. Overall, the findings of this study support the hypothesis that calculators improve the attitudes of primary students toward mathematics. There is also some evidence to suggest that calculators improve the computational skills of grade 3 students.
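The group comparison described above is the setting for a two-sample t test; purely as an illustration with invented numbers (the study's data are not reproduced here), a minimal Python sketch:

```python
from scipy import stats

# Hypothetical score gains (posttest - pretest); not the study's data.
experimental = [4, 6, 3, 5, 7, 2, 5, 4]   # calculator group
control      = [2, 3, 1, 4, 2, 3, 1, 2]   # non-calculator group

t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # small p suggests a group difference
```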
Abstract:
In Part I, theoretical derivations for Variational Monte Carlo calculations are compared with results from a numerical calculation on He; both indicate that minimization of the ratio estimate of E_var, denoted E_MC, provides different optimal variational parameters than does minimization of the variance of E_MC. Similar derivations for Diffusion Monte Carlo calculations provide a theoretical justification for empirical observations made by other workers. In Part II, importance sampling in prolate spheroidal coordinates allows Monte Carlo calculations to be made of E_var for the van der Waals molecule He2, using a simplifying partitioning of the Hamiltonian and both an HF-SCF and an explicitly correlated wavefunction. Improvements are suggested which would permit the extension of the computational precision to the point where an estimate of the interaction energy could be made.
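To make the contrast between energy and variance minimization concrete, here is a minimal Variational Monte Carlo sketch in Python, using the hydrogen atom rather than He for brevity (an illustrative assumption, not the thesis's calculation). Metropolis sampling of |psi|^2 for the trial function psi = exp(-alpha r) yields both E_MC and its variance as functions of the variational parameter alpha:

```python
import numpy as np

rng = np.random.default_rng(0)

def vmc_hydrogen(alpha, n_steps=200_000, step=0.5):
    """Sample |psi|^2 for psi = exp(-alpha*r) (hydrogen, atomic units)
    and return the energy estimate E_MC = <E_L> and its variance,
    where the local energy is E_L = -alpha^2/2 + (alpha - 1)/r."""
    r_vec = np.array([1.0, 0.0, 0.0])
    local_energies = []
    for _ in range(n_steps):
        trial = r_vec + step * rng.uniform(-1, 1, 3)
        # Metropolis acceptance with ratio |psi(trial)/psi(old)|^2
        if rng.random() < np.exp(-2 * alpha * (np.linalg.norm(trial)
                                               - np.linalg.norm(r_vec))):
            r_vec = trial
        r = np.linalg.norm(r_vec)
        local_energies.append(-alpha**2 / 2 + (alpha - 1) / r)
    e = np.array(local_energies)
    return e.mean(), e.var()

for alpha in (0.8, 1.0, 1.2):
    E, var = vmc_hydrogen(alpha)
    print(f"alpha={alpha}: E_MC={E:.4f}, var={var:.4f}")
# At alpha = 1 the trial function is exact, so E_MC = -0.5 and the
# variance vanishes; energy and variance minima coincide only in this
# special case, which is why the two optimization criteria can give
# different optimal parameters for approximate trial functions.
```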
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility is retained in the inversion algorithm to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free parameters.
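Assuming the three retained terms are the even Legendre polynomials P0, P2 and P4, as is conventional for a second-rank interaction (the thesis's exact parametrization may differ), fitting the orientation-dependent rates is an ordinary linear least-squares problem, sketched below in Python:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def fit_rates(theta, rates):
    """Linear least-squares fit of orientation-dependent relaxation
    rates to R(theta) = a0 + a2*P2(cos theta) + a4*P4(cos theta).
    Because the model is linear in (a0, a2, a4), no iterative and
    potentially noise-unstable nonlinear fitting is needed."""
    x = np.cos(theta)
    # design matrix with columns P0, P2, P4 evaluated at cos(theta)
    A = np.column_stack([leg.legval(x, [1]),
                         leg.legval(x, [0, 0, 1]),
                         leg.legval(x, [0, 0, 0, 0, 1])])
    coeffs, *_ = np.linalg.lstsq(A, rates, rcond=None)
    return coeffs  # (a0, a2, a4)

# Synthetic check: noisy rates generated from known coefficients.
theta = np.linspace(0, np.pi / 2, 50)
x = np.cos(theta)
R = 2.0 + 0.8 * leg.legval(x, [0, 0, 1]) - 0.3 * leg.legval(x, [0, 0, 0, 0, 1])
R += 0.02 * np.random.default_rng(1).normal(size=R.size)
print(fit_rates(theta, R))  # ~ [2.0, 0.8, -0.3]
```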
Abstract:
Qualitative spatial reasoning (QSR) is an important field of AI that deals with qualitative aspects of spatial entities. Regions and their relationships are described in qualitative terms instead of numerical values. This approach models human reasoning about such entities more closely than other approaches. The relationships between regions that we encounter in daily life are normally formulated in natural language; for example, one can outline one's room plan to an expert by indicating which rooms should be connected to each other. Mereotopology, as an area of QSR, combines mereology, topology and algebraic methods. As mereotopology plays an important role in region based theories of space, our focus is on one of the most widely referenced formalisms for QSR, the region connection calculus (RCC). RCC is a first order theory based on a primitive connectedness relation, a binary symmetric relation satisfying some additional properties. Using this relation we can define a set of basic binary relations which have the property of being jointly exhaustive and pairwise disjoint (JEPD), which means that between any two spatial entities exactly one of the basic relations holds. Basic reasoning can then be done by using the composition operation on relations, whose results are stored in a composition table. Relation algebras (RAs) have become a main tool for spatial reasoning in the area of QSR. These algebras are based on equational reasoning, which can be used to derive further relations between regions in a given situation, and each of them describes the relations between regions up to a certain degree of detail. In this thesis we use the method of splitting atoms in an RA in order to reproduce known algebras such as RCC15 and RCC25 systematically and to generate new algebras, and hence a more detailed description of regions, beyond RCC25.
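As a minimal Python sketch of composition-table reasoning (only a few hand-picked entries of the full 8x8 RCC8 table are filled in; relation names follow the usual RCC8 convention):

```python
# Base relations of RCC8: exactly one holds between any two regions (JEPD).
RCC8 = ["DC", "EC", "PO", "TPP", "NTPP", "TPPi", "NTPPi", "EQ"]

# Excerpt of the RCC8 composition table: COMP[(R, S)] is the set of
# relations T such that x R y and y S z admit x T z.
COMP = {
    ("NTPP", "NTPP"): {"NTPP"},
    ("TPP",  "TPP"):  {"TPP", "NTPP"},
    ("NTPP", "DC"):   {"DC"},
    ("DC",   "NTPP"): {"DC", "EC", "PO", "TPP", "NTPP"},
}
# EQ is the identity of composition in both arguments.
for r in RCC8:
    COMP[("EQ", r)] = {r}
    COMP[(r, "EQ")] = {r}

def compose(r, s):
    """Possible relations between x and z given x r y and y s z."""
    return COMP.get((r, s), set(RCC8))  # unfilled pairs: no information

# If the kitchen is strictly inside the flat and the flat is strictly
# inside the building, the kitchen is strictly inside the building:
print(compose("NTPP", "NTPP"))  # {'NTPP'}
```

Splitting an atom of the algebra refines one such base relation into several finer ones, which is how the coarser calculus is extended toward RCC15, RCC25 and beyond.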
Abstract:
It has been argued that in the construction and simulation of computable general equilibrium (CGE) models, the choice of the proper macroclosure remains a fundamental problem. In this study, with a standard CGE model, we simulate disturbances stemming from the supply or demand side of the economy under alternative macroclosures. According to our results, the choice of a particular closure rule, for a given disturbance, may have different quantitative and qualitative impacts. This seems to confirm the importance of simulating CGE models under alternative closure rules and eventually choosing the closure which best applies to the economy under study.
Abstract:
This thesis concerns the evaluation of the coherence of the conceptual network demonstrated by college-level students enrolled in natural science programs. The evaluation of this coherence was based on the analysis of Burt tables derived from answers to multiple-choice questionnaires, on the detailed study of specific discrimination indices, which are described more fully in the body of the work, and on the analysis of video recordings of students carrying out an experiment in a real context. Over the course of this project, four main research questions were explored. 1) What conceptual coherence is demonstrated in Newtonian physics? 2) Is mastery of uncertainty calculation correlated with the development of logical thinking or with mastery of mathematics? 3) What conceptual coherence is demonstrated in the quantification of experimental uncertainty? 4) What procedures do students actually put in place to quantify experimental uncertainty in a semi-directed laboratory context? The main conclusions for each question can be stated as follows. 1) The most widespread misconceptions are not firmly anchored in a rigid conceptual network. For example, a student who succeeds on one question about Newton's third law (the least well-answered topic of the Force Concept Inventory) shows only a slightly higher probability than the other participants of succeeding on another question on the same topic. Many pairs of questions show a negative specific discrimination index, indicating weak conceptual coherence on the pretest and slightly improved conceptual coherence on the post-test. 2) While a small proportion of the students showed marked deficiencies on the questions related to the control of variables and on those dealing with the relation between the graphical form of experimental data and a mathematical model, the majority of the students can be considered to master these two subjects adequately. However, almost all the students demonstrated a lack of mastery of the principles underlying the quantification of experimental uncertainty and the propagation of uncertainties (hereafter called metrology). No statistically significant correlation was observed between these three domains, suggesting that they are largely independent cognitive skills. The Burt table revealed greater conceptual coherence among the control-of-variables questions than the matrix of Pearson correlation coefficients would have suggested. In metrology, equivalent questions did not reveal any clearly demonstrated conceptual coherence. 3) The analysis of a questionnaire entirely devoted to metrology suggests misconceptions arising from learning in previous courses (didactic obstacles), misconceptions based on intuitive models, and an absence of global understanding of metrological concepts, although some concepts appear to be in the process of being acquired. 4) When the students were left to themselves, the same difficulties identified by the questionnaire analysis in point 3) reappeared, corroborating those results. However, we were able to observe other behaviours related to measurement in the laboratory that could not have been assessed with the multiple-choice questionnaire.
Explicitation interviews held immediately after each session allowed the participants to detail certain aspects of their metrological methodology, notably their use of repeated experimental measurements, their strategies for quantifying uncertainty, and the reasons underlying their numerical estimates of reading uncertainties. The use of uncertainty propagation algorithms was adequate overall. Many misconceptions in metrology seem to resist learning strongly, among them assigning the resolution of a digital measuring instrument as the value of the uncertainty, and the absence of stacking procedures to reduce uncertainty. The conception that the precision of a numerical value cannot be lower than the tolerance of an instrument appears firmly anchored.
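The Burt tables mentioned above cross-tabulate every pair of response options; as a minimal illustration, not the thesis's analysis code, the Python sketch below builds one as Z^T Z from an indicator coding of hypothetical multiple-choice answers:

```python
import numpy as np
import pandas as pd

# Hypothetical answers of 6 students to 2 multiple-choice questions
# (letters are the chosen options); not data from the thesis.
answers = pd.DataFrame({"Q1": list("AABBAB"), "Q2": list("CCDCDD")})

# Complete disjunctive (indicator) coding: one 0/1 column per option.
Z = pd.get_dummies(answers).astype(int)

# The Burt table cross-tabulates every pair of options: B = Z^T Z.
# Diagonal blocks hold option frequencies; off-diagonal blocks are the
# contingency tables between questions, from which the coherence of
# response patterns across related questions can be examined.
B = Z.T @ Z
print(B)
```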