929 results for non-uniform scale perturbation finite difference scheme


Relevance:

100.00%

Publisher:

Abstract:

In this work, a graphical user interface (GUI) is implemented using Nokia's Qt toolkit (version 3.0). The interface aims to simplify the creation of scenarios for running parallel simulations with the Local Nonorthogonal Finite-Difference Time-Domain (LN-FDTD) numerical technique, applied to solve Maxwell's equations. The simulator was developed in the C programming language and parallelized with threads, using the pthread library. The 3D visualization of the scenario to be simulated (and of its mesh) is performed by a purpose-built program based on the OpenGL library. To improve development and meet the goals of the computational project, Software Engineering concepts such as the prototyping software process model were employed. By keeping the user from interacting directly with the simulation source code, the probability of human error during scenario construction is minimized. To demonstrate the developed tool, a study was carried out on the effect of sag in low-voltage lines on the transient voltages induced in them by lightning discharges. The voltages induced at the building's power outlets are also studied.
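
For orientation, the core of any FDTD solver driven by such a tool is a leapfrog update of the electric and magnetic fields on a staggered grid. The sketch below is a minimal one-dimensional Cartesian Yee update in Python; it is not the Local Nonorthogonal FDTD formulation or the C/pthread code of this work, and the grid sizes, pulse source and constants are illustrative placeholders.

```python
import numpy as np

# Minimal 1D Yee-grid FDTD sketch (illustrative only; the thesis uses the
# Local Nonorthogonal FDTD formulation in C with pthreads, not reproduced here).
eps0 = 8.854e-12          # vacuum permittivity (F/m)
mu0 = 4e-7 * np.pi        # vacuum permeability (H/m)
c0 = 1.0 / np.sqrt(eps0 * mu0)
nz, nt = 200, 400         # number of cells and time steps (placeholders)
dz = 1e-3                 # spatial step (m)
dt = 0.5 * dz / c0        # time step satisfying the 1D Courant condition

ez = np.zeros(nz)         # electric field at integer nodes
hy = np.zeros(nz - 1)     # magnetic field at half-integer nodes

for n in range(nt):
    hy += (dt / (mu0 * dz)) * (ez[1:] - ez[:-1])         # update H from curl E
    ez[1:-1] += (dt / (eps0 * dz)) * (hy[1:] - hy[:-1])  # update E from curl H
    ez[nz // 2] += np.exp(-((n - 60) / 15.0) ** 2)       # soft Gaussian source
```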

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate gasometric differences in severe trauma patients requiring intubation in prehospital care. METHODS: Patients requiring airway management had arterial blood samples collected at the beginning of prehospital care and upon arrival at the emergency room. We analyzed: Glasgow Coma Scale, respiratory rate, arterial pH, arterial partial pressure of CO2 (PaCO2), arterial partial pressure of O2 (PaO2), base excess (BE), hemoglobin O2 saturation (SpO2) and the ratio of PaO2 to the fraction of inspired O2 (PaO2/FiO2). RESULTS: The mean differences between the data collected at the accident site and at ER admission were statistically significant for respiratory rate (p = 0.0181), Glasgow Coma Scale (p = 0.0084), PaO2 (p < 0.0001) and SpO2 (p = 0.0018). CONCLUSION: Tracheal intubation changes the parameters PaO2 and SpO2. There was no difference in metabolic parameters (pH, bicarbonate and base excess). In the comparison of blood gas parameters between survivors and non-survivors, there were statistically significant differences in PaO2, hemoglobin oxygen saturation and base excess.

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Matemática Aplicada e Computacional - FCT

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Beef quality control, particularly of its sensory characteristics, is an important factor for producers and retailers in order to satisfy consumers' choices. Sensory analysis with trained panels is an important tool to evaluate attributes that cannot be measured by readily available instrumental techniques, such as texture (tenderness and juiciness), for which human perception is more complete. The aim of this study was to evaluate the use of a beef sensory analysis protocol in three different laboratories. Six commercial samples of different brands of aged beef and 14 samples from crossbred animals (7 Bonsmara × Nelore and 7 Canchim × Nelore), aged for 14 days, were analyzed. The samples were distributed to each participating laboratory, where 7 to 12 panelists were trained. A sheet containing a 9 cm unstructured scale with 14 attributes was used. The attributes were brown colour (CMAR); aponevrosis (PNAP); hydration degree (GH); characteristic beef aroma (SCCB); salty taste (SS); liver flavour (SF); fat flavour (SG); metallic flavour (SM); tenderness (MZ); juiciness (SL); fibrosity (FBS) and liver texture (SF). The data obtained were analyzed using analysis of variance and principal component analysis (PCA). The results showed no interaction between samples and laboratories, indicating that all laboratories responded in a similar manner to the samples, except for the PNAP attribute, which was expected since meat is normally highly non-uniform. Samples were well differentiated in all laboratories, as can be observed in the PCA plots. With proper training it is possible to use a standard protocol for beef sensory analysis.
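
As an illustration of the kind of multivariate analysis described above (not the authors' actual scripts or data), a principal component analysis of a sample-by-attribute score matrix can be sketched as below; the 20 × 14 matrix of random placeholder values (matching the study's sample and attribute counts) and the use of scikit-learn are assumptions of the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative PCA sketch for sensory panel data (placeholder values, not the
# study's data): rows are samples, columns are the 14 attributes scored on the
# 9 cm unstructured scale, averaged over panelists.
rng = np.random.default_rng(42)
scores = rng.uniform(0.0, 9.0, size=(20, 14))

pca = PCA(n_components=2)
coords = pca.fit_transform(scores)       # sample coordinates on PC1 and PC2
print(pca.explained_variance_ratio_)     # share of variance captured by each PC
```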

Relevance:

100.00%

Publisher:

Abstract:

This work presents numerical simulations of two fluid flow problems involving moving free surfaces: the impacting drop and fluid jet buckling. The viscoelastic model used in these simulations is the eXtended Pom-Pom (XPP) model. To validate the code, numerical predictions of the drop impact problem for Newtonian and Oldroyd-B fluids are presented and compared with other methods. In particular, a benchmark on numerical simulations for a XPP drop impacting on a rigid plate is performed for a wide range of the relevant parameters. Finally, to provide an additional application of free surface flows of XPP fluids, the viscous jet buckling problem is simulated and discussed. (C) 2011 Elsevier B.V. All rights reserved.
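
For reference, the Oldroyd-B model mentioned in the validation runs is conventionally written in terms of the polymeric stress tensor as (sign conventions for the velocity gradient vary between references):

\[
\boldsymbol{\tau}_p + \lambda\,\overset{\nabla}{\boldsymbol{\tau}}_p = 2\eta_p \mathbf{D},
\qquad
\overset{\nabla}{\boldsymbol{\tau}}_p = \frac{\partial \boldsymbol{\tau}_p}{\partial t}
 + (\mathbf{u}\cdot\nabla)\boldsymbol{\tau}_p
 - (\nabla\mathbf{u})\,\boldsymbol{\tau}_p
 - \boldsymbol{\tau}_p(\nabla\mathbf{u})^{T},
\]

where \(\lambda\) is the relaxation time, \(\eta_p\) the polymeric viscosity and \(\mathbf{D}\) the rate-of-strain tensor. The XPP model extends this class of equation with additional backbone stretch and orientation dynamics, whose precise form is given in the paper's references.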

Relevance:

100.00%

Publisher:

Abstract:

The numerical simulation of flows of highly elastic fluids has been the subject of intense research over the past decades, with important industrial applications. Therefore, many efforts have been made to improve the convergence capabilities of the numerical methods employed to simulate viscoelastic fluid flows. An important contribution to the solution of the High Weissenberg Number Problem was presented by Fattal and Kupferman [J. Non-Newton. Fluid Mech. 123 (2004) 281-285], who developed the matrix-logarithm of the conformation tensor technique, henceforth called the log-conformation tensor. Its advantage is a better approximation of the large growth of the stress tensor that occurs in some regions of the flow, and it is doubly beneficial in that it ensures physically correct stress fields, allowing converged computations of high Weissenberg number flows. In this work we investigate the application of the log-conformation tensor to three-dimensional unsteady free surface flows. The log-conformation tensor formulation was applied to solve the Upper-Convected Maxwell (UCM) constitutive equation, while the momentum equation was solved using a finite difference Marker-and-Cell type method. The resulting code is validated by comparing the log-conformation results with the analytic solution for fully developed pipe flows. To illustrate the stability of the log-conformation tensor approach in solving three-dimensional free surface flows, results from the simulation of the extrudate swell and jet buckling phenomena of UCM fluids at high Weissenberg numbers are presented. (C) 2012 Elsevier B.V. All rights reserved.
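
In conformation-tensor form, which is the setting in which the log-conformation change of variable operates, the UCM model reads (standard form, with \(\boldsymbol{\tau} = (\eta_p/\lambda)(\mathbf{A} - \mathbf{I})\)):

\[
\frac{\partial \mathbf{A}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{A}
 = (\nabla\mathbf{u})\,\mathbf{A} + \mathbf{A}\,(\nabla\mathbf{u})^{T}
 - \frac{1}{\lambda}\left(\mathbf{A} - \mathbf{I}\right).
\]

The Fattal-Kupferman idea is to evolve \(\boldsymbol{\Psi} = \log \mathbf{A}\) instead of \(\mathbf{A}\) itself: since \(\mathbf{A}\) is symmetric positive definite, its matrix logarithm is well defined, the near-exponential stress growth becomes approximately linear in \(\boldsymbol{\Psi}\), and \(\mathbf{A} = e^{\boldsymbol{\Psi}}\), recovered from the numerical solution, is positive definite by construction.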

Relevance:

100.00%

Publisher:

Abstract:

This paper reports experiments on the use of a recently introduced advection bounded upwinding scheme, namely TOPUS (Computers & Fluids 57 (2012) 208-224), for flows of practical interest. The numerical results are compared against analytical, numerical and experimental data and show good agreement with them. It is concluded that the TOPUS scheme is a competent, powerful and generic scheme for complex flow phenomena.
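
The TOPUS limiter itself is defined in the cited paper (Computers & Fluids 57 (2012) 208-224) and is not reproduced here; purely as orientation, the sketch below shows the plain first-order upwind advection update that bounded high-resolution schemes of this kind are designed to improve upon. All parameter values are illustrative placeholders.

```python
import numpy as np

# First-order upwind advection of a profile u with constant speed a > 0
# (generic illustration only; TOPUS replaces this flux with a bounded,
# higher-order polynomial upwind flux, see the cited paper).
nx, nt = 101, 200
dx = 1.0 / (nx - 1)
a, dt = 1.0, 0.004                       # Courant number a*dt/dx = 0.4
x = np.linspace(0.0, 1.0, nx)
u = np.exp(-((x - 0.3) / 0.05) ** 2)     # initial Gaussian profile

for n in range(nt):
    u[1:] -= a * dt / dx * (u[1:] - u[:-1])   # upwind difference for a > 0
```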

Relevance:

100.00%

Publisher:

Abstract:

Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data centers and high performance computing systems. Even though society as a whole is waiting to embrace this revolution, there is another side to the story. Portable devices need batteries to work away from power outlets, and battery capacity does not scale as fast as their increasing power requirements. At the other end, processing clusters such as data centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling over the last decade has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Moreover, all the heat removed from the silicon translates into high cooling costs. Trends in the ICT carbon footprint also show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of worldwide carbon emissions. This thesis addresses the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and related issues for two corner cases: mobile devices and high performance computing.

Relevance:

100.00%

Publisher:

Abstract:

In this work, quantum hydrodynamic (QHD) models are considered, which are used in particular for the modeling of semiconductor devices. The QHD model consists of the conservation equations for the particle density, the momentum and the energy density, including the quantum corrections through the Bohm potential. First, an overview of the known results on QHD models neglecting collision effects is given; these models can be derived from a mixed-state Schrödinger system or from the Wigner equation. After reformulating the one-dimensional QHD equations with linear potential as a stationary Schrödinger equation, semianalytical versions of the QHD equations for the current-voltage curve are considered. Furthermore, viscous stabilizations of the QHD model are taken into account, and the numerical viscosity proposed by Gardner for the upwind finite difference scheme is computed. Next, the viscous QHD model is derived from the Wigner equation with a Fokker-Planck collision operator. This model contains the physical viscosity introduced by the collision operator. The existence of solutions (with strictly positive particle density) of the isothermal, stationary, one-dimensional, viscous model is shown for general data and inhomogeneous boundary conditions. The estimates required for this depend on the viscosity and therefore do not allow passing to the inviscid limit. Numerical simulations of the resonant tunneling diode, modeled with the non-isothermal, stationary, one-dimensional, viscous QHD model, show the influence of the viscosity on the solution. Using the quantum entropy minimization method developed by Degond and Ringhofer, the general QHD equations are derived from the Wigner-Boltzmann equation with the BGK collision operator. The derivation is based on a careful expansion of the quantum Maxwellian in powers of the scaled Planck constant. The resulting model also contains vortex terms and dispersive velocity terms. As a result, the current-voltage curve for the resonant tunneling diode is retained numerically when the general QHD model is used in one dimension. The results show that the dispersive velocity term stabilizes the solution of the system.
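
For reference, the quantum correction referred to above is the Bohm potential, which in unscaled form reads (scaling conventions differ between references):

\[
Q(n) = -\frac{\hbar^{2}}{2m}\,\frac{\Delta\sqrt{n}}{\sqrt{n}},
\]

where \(n\) is the particle density, \(m\) the effective mass and \(\hbar\) the reduced Planck constant; it enters the momentum balance through its gradient, contributing third-order, dispersive derivative terms to the QHD system.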

Relevance:

100.00%

Publisher:

Abstract:

In this work, a new dynamical core is developed and integrated into the existing numerical weather prediction system COSMO. Discontinuous Galerkin (DG) methods are used for the spatial discretization and Runge-Kutta methods for the time discretization. This makes a high-order method easy to realize and provides local conservation properties for the prognostic variables. The dynamical core developed here uses terrain-following coordinates in conservation form for the orography modeling and couples the DG method with a Kessler scheme for warm rain. The fall velocity of the rain is not discretized implicitly within the Kessler scheme, as is usual, but explicitly within the dynamical core. As a result, the time steps of the parameterization for the phase changes of water and of the dynamics are fully decoupled, so that very large time steps can be used for the parameterization. The coupling is implemented both for operator splitting and for process splitting.

The convergence and global conservation properties of the newly developed dynamical core are validated using idealized test cases. Mass is conserved globally up to machine precision. The orography modeling is validated by means of flow over mountains. The combination of DG methods and terrain-following coordinates used here allows steeper mountains to be handled than is possible with the finite-difference-based dynamical core of COSMO. It is shown when the full tensor-product basis and when the minimal basis is advantageous. The influence of the order of the method, the parameterization time step and the splitting strategy on the simulation result is investigated. Finally, it is shown that, for the same time step, the DG methods are competitive in run time with finite difference methods due to their better scalability.
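
The decoupling of the dynamics and parameterization time steps described above can be sketched in a generic operator-splitting form that is not specific to the COSMO/DG implementation; `dynamics_step` and `param_step` below are hypothetical placeholders standing in for the DG dynamical core and a Kessler-type microphysics step.

```python
# Generic (first-order) operator-splitting sketch: the dynamics is sub-cycled
# with a small step inside one large parameterization step, so the two time
# step sizes are fully decoupled.  Not the COSMO/DG implementation.
def advance(state, t_end, dt_dyn, dt_param, dynamics_step, param_step):
    t = 0.0
    while t < t_end:
        t_sub = 0.0
        while t_sub < dt_param:              # sub-cycle the dynamical core
            state = dynamics_step(state, dt_dyn)
            t_sub += dt_dyn
        state = param_step(state, dt_param)  # one large microphysics step
        t += dt_param
    return state
```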

Relevance:

100.00%

Publisher:

Abstract:

Liquids and gases form a vital part of nature. Many of these are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless the force law is approximated. The Peterlin approximation replaces the length of the spring by the length of the average spring. Consequently, the macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
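
For reference, the Peterlin closure mentioned above can be stated as follows (standard FENE-P form; constants and non-dimensionalization vary between references). For a finitely extensible spring with force law

\[
\mathbf{F}(\mathbf{R}) = \frac{H\,\mathbf{R}}{1 - |\mathbf{R}|^{2}/R_{0}^{2}},
\]

the Peterlin approximation replaces \(|\mathbf{R}|^{2}\) in the denominator by its ensemble average \(\langle |\mathbf{R}|^{2} \rangle = \operatorname{tr}\mathbf{C}\), where \(\mathbf{C} = \langle \mathbf{R}\otimes\mathbf{R} \rangle\) is the conformation tensor (up to normalization); this closes the evolution equation for \(\mathbf{C}\) at the macroscopic level.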

Relevance:

100.00%

Publisher:

Abstract:

The optical properties of a match-like plasmonic nanostructure are numerically investigated using full-wave finite-difference time-domain analysis in conjunction with dispersive material models. This work is mainly motivated by the developed technique enabling reproducible fabrication of nanomatch structures, as well as by the growing number of applications that exploit the localized field enhancement in plasmonic nanostructures. Our research reveals that, owing to the pronounced field enhancement and larger resonance tunability, some nanomatch topologies show potential for applications in, e.g., sensing, as well as for a novel scheme for highly reproducible tips in scanning near-field optical microscopy, among others. Despite the additional degrees of freedom offered by the composite nature of the proposed nanomatch topology, the paper also reflects on a fundamental complication intrinsic to material interfaces, especially at the nanoscale: stoichiometric mixing. We conclude that specificity in material modeling will become a significant issue in future research on functionalized composite nanostructures.
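
The abstract does not specify which dispersive material models were used; as background, FDTD simulations of plasmonic metals in the optical range commonly employ a Drude (or Drude-Lorentz) permittivity of the form

\[
\varepsilon(\omega) = \varepsilon_{\infty} - \frac{\omega_{p}^{2}}{\omega^{2} + i\gamma\omega},
\]

with plasma frequency \(\omega_p\) and damping rate \(\gamma\) (the sign of the damping term depends on the chosen time-harmonic convention), implemented in the time domain via auxiliary differential equations or recursive convolution.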