937 results for Finite-difference Time-domain (FDTD)


Relevance: 100.00%

Publisher:

Abstract:

INTRODUCTION: In alpine skiing, chronometry is currently the most common tool used to assess performance. It is widely used to rank competitors during races, as well as to manage athletes' training and to evaluate equipment. Usually, this measurement is realized accurately using timing cells. Nevertheless, these devices are too complex and expensive to allow chronometry of every gate crossing. Alternatively, differential GPS can be used to measure gate crossing times (Waegli et al.). However, this approach is complex (e.g. the gate positions must be recorded with GPS) and mainly used in research applications. The aim of this study was to propose a wearable system for timing gate crossings during alpine skiing slalom (SL) that is suitable for routine use. METHODS: The proposed system was composed of a 3D accelerometer (ADXL320®, Analog Devices, USA) placed at the sacrum of the athlete, a matrix of force sensors (Flexiforce®, Tekscan, USA) fixed on the right shin guard, and a data logger (Physilog®, BioAGM, Switzerland). The sensors were sampled at 500 Hz. The crossing times were calculated in two phases. First, the accelerometer was used to detect the curves by considering the maximum of the mediolateral peak acceleration. Then, the force sensors were used to detect the impacts with the gates by considering the maximum force variation. When no impact occurred, detection was based on the acceleration and on features measured at the other gates. To assess the efficiency of the system, two different SL courses were each monitored twice for two World Cup level skiers, a male SL expert and a female downhill expert. RESULTS AND DISCUSSION: The combination of the accelerometer and force sensors allowed the gate crossing times to be clearly identified. When comparing the runs of the SL expert and the downhill expert, we noticed that the SL expert was faster. For example, for the first SL course, the overall difference between the best run of each athlete was 5.47 s. At each gate, the SL expert's advantage grew more slowly at the beginning (0.27 s/gate) than at the end (0.34 s/gate). Furthermore, when comparing the runs of the SL expert, a maximum time difference of 20 ms at each gate was noticed, demonstrating the high repeatability of the SL expert. In contrast, the downhill expert, with a maximum time difference of 1 s at each gate, was clearly less repeatable. Neither skier was disturbed by the system. CONCLUSION: This study proposed a new wearable system combining force sensors and an accelerometer to automatically time gate crossings during alpine skiing slalom. The system was evaluated with two professional World Cup skiers and showed high potential. The system could be extended to time other parameters. REFERENCES: Waegli A, Skaloud J (2007). Inside GNSS, Spring, 24-34.
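The curve-detection step described in the abstract, locating the maxima of the mediolateral acceleration, can be sketched as a simple peak detector. This is a minimal illustration, not the authors' implementation: the threshold heuristic and the minimum gate interval are assumed values.

```python
import numpy as np

def detect_gate_crossings(acc_ml, fs=500, min_gate_interval=0.6, threshold=None):
    """Detect gate crossings as maxima of the mediolateral acceleration.

    acc_ml: 1-D mediolateral acceleration signal (arbitrary units).
    fs: sampling rate in Hz (the paper's sensors were sampled at 500 Hz).
    min_gate_interval: minimum time between successive gates, in seconds
        (an assumed value; tune to the actual course).
    Returns the sample indices of detected peaks.
    """
    if threshold is None:
        threshold = acc_ml.mean() + acc_ml.std()  # assumed heuristic
    min_dist = int(min_gate_interval * fs)
    peaks = []
    last = -min_dist
    for i in range(1, len(acc_ml) - 1):
        if (acc_ml[i] > threshold
                and acc_ml[i] >= acc_ml[i - 1]
                and acc_ml[i] > acc_ml[i + 1]
                and i - last >= min_dist):
            peaks.append(i)
            last = i
    return np.array(peaks)

# Synthetic signal: three "curves" 1 s apart on top of a low-level oscillation.
fs = 500
t = np.arange(0, 3.5, 1 / fs)
sig = 0.1 * np.sin(2 * np.pi * 5 * t)
for tc in (0.5, 1.5, 2.5):
    sig += 3.0 * np.exp(-((t - tc) ** 2) / (2 * 0.02 ** 2))
crossings = detect_gate_crossings(sig, fs)
```

On this synthetic signal the detector recovers the three curve times; the real system additionally fuses the shin-guard force sensors to pin down the actual gate contacts.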

Relevance: 100.00%

Publisher:

Abstract:

In this study, a model for the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, and it has a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant, and it will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means the transition from the steady initial state towards another steady state that corresponds to the changed process conditions. The chosen solution method was to find, using the finite difference method, a pressure of the process fluid such that the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties was used, because most of the calculation time is spent evaluating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and tube wall, and the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that allows a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes, and the calculation of these nodal values in the dynamic state. The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of the oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not occur in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the results was that there was no possibility to compare them with measurements. The only option was therefore to determine whether the obtained results were intuitively reasonable and whether they changed logically when the boundary conditions were changed. Numerical stability was checked in a test run with no change in the input values; the differences from the initial values were so small that the effects of numerical oscillations were negligible. The heat source side tests showed that the model gives results that are logical in the directions of the changes, and the order of magnitude of the timescale of the changes is also as expected. The tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes that cause very great alterations. The test runs showed that the dynamic model has no problems calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
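The core of the solution method, finding the process-fluid pressure at which the mass held in the boiler matches the inventory implied by the inlet and outlet flows over a time step, can be sketched as a root-finding problem. The linear mass-content model below is a toy assumption for illustration; the thesis evaluates it from the fluid properties.

```python
def update_pressure(p_prev, m_prev, mdot_in, mdot_out, dt,
                    mass_of, p_lo, p_hi, tol=1e-9):
    """One time step of the pressure solution described above: find the
    pressure p such that the fluid mass held in the boiler, mass_of(p),
    equals the inventory implied by the in/out flows over the step.
    mass_of is a monotone model of mass content vs. pressure (a toy
    stand-in here).  Solved by bisection on [p_lo, p_hi]."""
    target = m_prev + (mdot_in - mdot_out) * dt   # mass balance
    lo, hi = p_lo, p_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mass_of(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi), target

# Toy property model: mass content grows linearly with pressure
# (assumed units: kg and bar).
mass_of = lambda p: 2.0 + 0.5 * (p - 1.0)
p, m = update_pressure(p_prev=1.0, m_prev=2.0, mdot_in=1.2, mdot_out=1.0,
                       dt=1.0, mass_of=mass_of, p_lo=0.5, p_hi=5.0)
```

With an inflow surplus of 0.2 kg over the step, the inventory target rises to 2.2 kg and the toy model yields p = 1.4 bar; the real model repeats this search each step with property-based mass evaluation.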

Relevance: 100.00%

Publisher:

Abstract:

There are several filtration applications in the pulp and paper industry where the capacity and cost-effectiveness of processes are important. Ultrafiltration is used to clean process water. Ultrafiltration is a membrane process that separates a certain component or compound from a liquid stream: the pressure difference across the membrane sieves macromolecules smaller than 0.001-0.02 μm through the membrane. When optimizing the capacity of the filtration process, online information about the condition of the membrane is needed. Fouling and compaction of the membrane both affect the capacity of the filtration process. In fouling, a "cake" layer builds up on the surface of the membrane. This layer blocks molecules from passing through the membrane, thereby decreasing the yield of the process. In compaction, the structure of the membrane is flattened because of the high pressure applied. Higher pressure increases the capacity but may damage the structure of the membrane permanently, so information about compaction is needed to operate the filters effectively. The objective of this study was to develop an accurate system for online monitoring of the condition of the membrane using ultrasound reflectometry. Measurements of ultrafiltration membrane compaction were made successfully using ultrasound. The results were confirmed by the permeate flux decline, by measurements of compaction with a micrometer, by mechanical compaction using a hydraulic piston, and by scanning electron microscopy (SEM). The scientific contribution of this thesis is the introduction of a secondary ultrasound transducer to determine the speed of sound in the fluid used. The speed of sound depends strongly on the temperature and pressure in the filters. When the exact speed of sound is obtained from the reference transducer, the effects of temperature and pressure are eliminated, and this speed is used to calculate the distances with higher accuracy. As the accuracy, or resolution, of the ultrasound measurement is increased, the method can be applied to a wider range of applications, especially processes where the fouling layers are thinner because of smaller macromolecules. With the help of the reference transducer, membrane compaction of 13 μm was measured at a pressure of 5 bar. The result was verified by the permeate flux decline, which indicated that compaction had taken place. Measurements of compaction with a micrometer showed a compaction of 23-26 μm; the results are in the same range and confirm the compaction. Mechanical compaction measurements were made using a hydraulic piston, and the result was the same 13 μm obtained by applying ultrasound time domain reflectometry (UTDR). A scanning electron microscope (SEM) was used to study the structure of the samples before and after the compaction.
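The role of the reference transducer can be illustrated with the basic time-of-flight arithmetic: the reference echo over a known path calibrates the speed of sound, which then converts the membrane echo times into distances. The numbers below are illustrative assumptions, not measurements from the thesis.

```python
def speed_of_sound(d_ref, t_ref):
    """Calibrate c from the reference transducer: the echo travels to a
    reflector at the known distance d_ref and back, so c = 2*d_ref/t_ref.
    Because c is measured in situ, temperature/pressure effects cancel."""
    return 2.0 * d_ref / t_ref

def echo_distance(t_flight, c):
    """Convert a round-trip time-of-flight into a one-way distance."""
    return c * t_flight / 2.0

# Assumed illustrative numbers: water-like fluid, 10 mm reference path,
# membrane echo times before and after compaction.
c = speed_of_sound(10e-3, 13.3333e-6)          # ~1500 m/s
d_before = echo_distance(20.0000e-6, c)
d_after  = echo_distance(19.9827e-6, c)
compaction_um = (d_before - d_after) * 1e6     # ~13 um, the order reported
```

A 17 ns shift in echo arrival time thus corresponds to roughly 13 μm of compaction, which is why an accurate in-situ value of c matters.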

Relevance: 100.00%

Publisher:

Abstract:

This thesis examines corneal anatomy after three graft techniques: traditional full-thickness (penetrating) keratoplasty (GTT) and posterior lamellar keratoplasty (GLP) techniques, namely deep lamellar endothelial keratoplasty (DLEK) and endothelium/Descemet membrane grafting (EDMG), for the treatment of endothelial diseases such as Fuchs' dystrophy and aphakic and pseudophakic keratopathy. In this context, the thesis also helps demonstrate the usefulness of optical coherence tomography (OCT) for studying the anatomy of post-transplantation corneal surgical wounds. We studied the anatomy of DLEK before surgery and 1, 6, 12 and 24 months afterwards, using the Stratus OCT (Version 3, Carl Zeiss Meditec Inc.) to document wound anatomy. Image acquisition and processing on the Stratus OCT, an instrument originally designed for studying the retina and optic nerve, were adapted to the analysis of the anterior segment of the eye. Vertical and horizontal central corneal images were obtained, together with 4 radial scans perpendicular to the wound at the 12, 3, 6 and 9 o'clock positions. The following parameters were studied: (1) gaps between the edges of the donor disc and those of the recipient, (2) posterior surface steps between the edges of the donor disc and those of the recipient, (3) tissue compression, (4) graft detachment, (5) elevations of the anterior corneal surface, and (6) central corneal pachymetry. Total corneal thickness measurements were compared and correlated with those obtained with an ultrasound pachymeter. Visual acuity, manifest refraction and topography measurements were also acquired to evaluate functional outcomes. Finally, we compared the DLEK data with those obtained for EDMG and GTT, in order to characterise the wounds and identify the relative advantages and disadvantages of each surgical technique. Our anatomical results showed important differences between the three techniques. Some of the parameters studied, such as the step and the gap, were more pronounced in GTT than in DLEK and completely absent in EDMG. Others, such as tissue compression and graft detachment, were observed only in DLEK. This suggests that wound distortion varies in proportion to the depth of the recipient's stromal cut, measured from the posterior face of the cornea: the less the cut advances towards the anterior face (as in EDMG), the less it affects the anatomical integrity of the cornea, the worst case being the full-thickness cut of GTT. However, all the parameters of suboptimal posterior apposition and of anterior surface elevation (the latter observed only in GTT) eventually diminish with time, evolving to varying degrees towards a topographic profile closer to that of a normal cornea. This process appears longer and less complete in GTT because of the type of wound, the presence of sutures and the duration of healing. Mean central thickness values normalised after surgery. Moreover, these mean values obtained by OCT were strongly correlated with those obtained by ultrasound pachymetry, and we found no significant difference between the mean values of the two measurement techniques. OCT proved to be a useful tool for studying the microscopic anatomy of surgical wounds. The visual acuity, refraction and topography results for the GLP techniques showed rapid visual recovery without significant changes in astigmatism, unlike GTT with or without sutures. GLP allowed better preservation of corneal morphology and consequently better functional results than full-thickness grafting. This allows us to propose GLP as the surgical technique of choice for the treatment of corneal endothelial diseases.

Relevance: 100.00%

Publisher:

Abstract:

Atrial fibrillation (AF) is the most common form of arrhythmia and accounts for about one third of hospitalisations attributable to cardiac rhythm disorders. The mechanisms that initiate and maintain AF are complex and multiple. Among them, a contribution of the autonomic nervous system has been identified, but its exact role remains poorly understood. This work studies the modulation induced by acetylcholine (ACh) on the initiation and maintenance of AF, using a two-dimensional tissue model. The propagation of the electrical impulse across this tissue is described by a nonlinear reaction-diffusion equation solved on a rectangular mesh with a finite difference method, while the ACh kinetics follow a predefined time course corresponding to activation of the parasympathetic system. More than 4400 simulations were run, covering 4 arrhythmia episodes, 5 sizes of the ACh-modulated region, 10 ACh concentrations and 22 time constants of ACh release and degradation. The complexity of the reentry dynamics is described as a function of the time constant representing the rate of variation of ACh. The results suggest that vagal stimulation can lead either to more complex reentry dynamics or to termination of AF, depending on the four parameters studied. They show that a rapid vagal discharge, represented by small time constants combined with a sufficiently large amount of ACh, has a high probability of breaking the primary reentry and triggering fibrillatory activity. This activity is characterised by the creation of multiple wavelets from a primary rotor under the effect of the heterogeneous repolarisation gradient caused by the autonomic activity.
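A minimal sketch of the kind of simulation described above: an explicit finite difference reaction-diffusion step on a rectangular mesh, with a predefined ACh time course governed by release and degradation time constants. FitzHugh-Nagumo kinetics and the form of the ACh effect are stand-in assumptions; the actual ionic model and ACh dependence used in the study are not specified here.

```python
import numpy as np

def ach_time_course(t, ach_max, tau_rel, tau_deg, t_on):
    """Predefined ACh kinetics: exponential release toward ach_max after
    vagal activation at t_on, then degradation, with the release and
    degradation time constants tau_rel and tau_deg (the two constants
    varied across the simulations)."""
    if t < t_on:
        return 0.0
    dt = t - t_on
    return ach_max * (1.0 - np.exp(-dt / tau_rel)) * np.exp(-dt / tau_deg)

def step(v, w, ach, dt=0.05, dx=1.0, D=1.0, a=0.1, eps=0.01, patch=None):
    """One explicit finite-difference step of FitzHugh-Nagumo reaction-
    diffusion tissue (a stand-in for the actual ionic model).  The ACh
    level is taken to speed repolarisation, optionally only inside a
    modulated patch; periodic boundaries via np.roll for brevity."""
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v) / dx**2
    g = eps * (1.0 + (ach if patch is None else ach * patch))  # assumed form
    v_new = v + dt * (D * lap + v * (1 - v) * (v - a) - w)
    w_new = w + dt * g * (v - w)
    return v_new, w_new

n = 32
v = np.zeros((n, n)); v[:, :3] = 1.0     # plane-wave stimulus at one edge
w = np.zeros((n, n))
for k in range(200):
    ach = ach_time_course(k * 0.05, ach_max=2.0, tau_rel=1.0,
                          tau_deg=50.0, t_on=2.0)
    v, w = step(v, w, ach)
```

Sweeping `tau_rel`, `tau_deg`, `ach_max` and the patch size in such a loop mirrors, in spirit, the four-parameter study of the abstract.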

Relevance: 100.00%

Publisher:

Abstract:

A discrete-time mathematical model of malaria propagation is developed to determine the influence that a population shift from rural to urban areas would have on the persistence or decline of malaria incidence. The model, a system of fourteen finite difference equations, is then compared with an analogous continuous-time model formulated as ordinary differential equations. A comparative study with the recent literature identifies the strengths and weaknesses of our model.

Relevance: 100.00%

Publisher:

Abstract:

This thesis covers various aspects of the modelling and analysis of finite mean time series with symmetric stable distributed innovations. Time series analysis based on Box-Jenkins methods is the most popular approach, where the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel with the classical set-up. In the present thesis we mainly study the estimation and prediction of signal-plus-noise models, where the signal and noise are assumed to follow models with symmetric stable innovations. The thesis starts with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite variance models are extensively discussed in the second chapter, which also surveys the existing theory and methods for infinite variance models. In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance. Parameter estimation for autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we use higher-order Yule-Walker type estimation based on the auto-covariation function, and we illustrate the methods by simulation and by an application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least-squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed, and a modified version of the generalized Yule-Walker method is derived using singular value decomposition. In the fifth chapter we introduce the partial covariation function as a tool for stable time series analysis where the covariance or partial covariance is ill defined. Asymptotic results for the partial auto-covariation are studied, and its application to model identification for stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite variance models in terms of the partial auto-covariation function, and introduce a new information criterion for consistent order estimation of stable autoregressive models. In chapter six we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation of a sinusoidal signal observed in a symmetric stable noisy environment is discussed in this context. We introduce a parametric spectrum analysis and a frequency estimate using the power transfer function, whose estimate is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
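The extended Yule-Walker idea of the fourth chapter, using more equations than parameters and solving them by ordinary least squares, can be sketched as follows. Note that this sketch uses the ordinary autocovariance on a finite-variance AR(1); the thesis replaces it with the auto-covariation function to handle symmetric stable innovations.

```python
import numpy as np

def acvf(x, lag):
    """Sample autocovariance at a given non-negative lag."""
    x = x - x.mean()
    n = len(x)
    return (x[:n - lag] * x[lag:]).sum() / n

def extended_yule_walker(x, p, m):
    """Least-squares solution of m >= p Yule-Walker-type equations
    gamma(k) = sum_j phi_j * gamma(k - j), k = 1..m, for an AR(p) model.
    Here the ordinary autocovariance stands in for the auto-covariation
    used with stable innovations."""
    gam = np.array([acvf(x, k) for k in range(m + p)])
    A = np.array([[gam[abs(i + 1 - j)] for j in range(1, p + 1)]
                  for i in range(m)])
    b = gam[1:m + 1]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

# Simulate an AR(1) with phi = 0.6 and recover it from 5 equations.
rng = np.random.default_rng(0)
phi_true = 0.6
e = rng.standard_normal(20000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = phi_true * x[t - 1] + e[t]
phi_hat = extended_yule_walker(x, p=1, m=5)[0]
```

Using more equations than unknowns and least squares is what makes the estimate robust when individual (auto-)covariation estimates are noisy or the system is near-singular.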

Relevance: 100.00%

Publisher:

Abstract:

This thesis, entitled "Reliability Modelling and Analysis in Discrete Time", presents some concepts and models useful in the analysis of discrete lifetime data. The study consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities in the component distributions, are investigated, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models to single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference for the parameters of a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, the least-squares function, etc., and the resulting computations, very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially for discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are equally efficient and convenient compared with many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation, which could be the subject of future work in this area.
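The Chapter II theme, expressing reliability characteristics of a two-component mixture through the component quantities, can be illustrated with the discrete failure rate of a geometric mixture, for which the survival function is available in closed form. The parameter values are arbitrary illustrations.

```python
def geometric_pmf(k, q):
    """P(X = k) = (1 - q) * q**k for k = 0, 1, 2, ... (support from 0)."""
    return (1 - q) * q**k

def mixture_failure_rate(k, pi, q1, q2):
    """Discrete failure rate h(k) = P(X = k) / P(X >= k) of the mixture
    pi * Geom(q1) + (1 - pi) * Geom(q2).  For a geometric component
    P(X >= k) = q**k, so the mixture survival is in closed form and h(k)
    is built directly from the component quantities."""
    pmf = pi * geometric_pmf(k, q1) + (1 - pi) * geometric_pmf(k, q2)
    surv = pi * q1**k + (1 - pi) * q2**k          # mixture P(X >= k)
    return pmf / surv

h0 = mixture_failure_rate(0, 0.5, 0.9, 0.5)       # mixture of component rates
h_inf = mixture_failure_rate(200, 0.5, 0.9, 0.5)  # dominated by longer-lived part
```

At k = 0 the rate is the mixture of the component rates (0.3 here); for large k it approaches the smaller component rate (0.1), illustrating how a mixture's failure rate is driven by its strongest component even though each geometric component alone has a constant rate.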

Relevance: 100.00%

Publisher:

Abstract:

The finite element method (FEM) is now developed to solve the two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar as well as the diatomic molecules LiH, BH, N_2 and CO as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used for the solution of the one-dimensional HF equations for atoms, with the traditional LCAO quantum chemical methods for diatomic molecules, and with the newly developed finite difference method at the HF level. In general, the accuracy increases from the LCAO method to the finite difference method to the finite element method.

Relevance: 100.00%

Publisher:

Abstract:

We report on the self-consistent field solution of the Hartree-Fock-Slater equations using the finite element method for the three small diatomic molecules N_2, BH and CO as examples. The quality of the results is not only two orders of magnitude better than that of the fully numerical finite difference method of Laaksonen et al., but the method also requires a smaller number of grid points.

Relevance: 100.00%

Publisher:

Abstract:

We present an immersed interface method for the incompressible Navier-Stokes equations capable of handling rigid immersed boundaries. The immersed boundary is represented by a set of Lagrangian control points. In order to guarantee that the no-slip condition on the boundary is satisfied, singular forces are applied on the fluid at the immersed boundary. The forces are related to the jumps in pressure and the jumps in the derivatives of both pressure and velocity, and are interpolated using cubic splines. The strengths of the singular forces are determined by solving a small system of equations at each time step. The Navier-Stokes equations are discretized on a staggered Cartesian grid by a second-order accurate projection method for pressure and velocity.
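The cubic-spline interpolation of the jump quantities along the boundary can be sketched as follows; the natural end conditions and the sample values are assumptions for illustration (the paper's end conditions are not specified here).

```python
import numpy as np

def natural_cubic_spline(t, f):
    """Return a callable natural cubic spline through the points (t_i, f_i).
    In the method above, jump strengths known at the Lagrangian control
    points would be interpolated along the boundary parameter like this."""
    n = len(t) - 1
    h = np.diff(t)
    # Tridiagonal system for the second derivatives M_i (natural: M_0 = M_n = 0).
    A = np.zeros((n + 1, n + 1)); rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6 * ((f[i + 1] - f[i]) / h[i] - (f[i] - f[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def eval_at(x):
        i = int(np.clip(np.searchsorted(t, x) - 1, 0, n - 1))
        dt1, dt2 = x - t[i], t[i + 1] - x
        return (M[i] * dt2**3 + M[i + 1] * dt1**3) / (6 * h[i]) \
             + (f[i] / h[i] - M[i] * h[i] / 6) * dt2 \
             + (f[i + 1] / h[i] - M[i + 1] * h[i] / 6) * dt1
    return eval_at

# Jump strengths sampled at 5 control points along the boundary parameter s;
# a linear test profile, which the spline must reproduce exactly.
s = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
jump = 2 * s + 1
spline = natural_cubic_spline(s, jump)
```

Evaluating the spline between control points supplies the jump values needed at the grid points where the finite difference stencils cross the interface.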

Relevance: 100.00%

Publisher:

Abstract:

Introduction: Glaucoma is the third leading cause of blindness worldwide, and timely diagnosis requires evaluating the optic nerve cup, which is related to the optic disc area. Some reports suggest that large disc areas (macrodiscs) may be protective, while others associate them with susceptibility to glaucoma. Objective: To establish whether there is an association between macrodiscs and glaucoma in individuals examined with optical coherence tomography (OCT) at the Fundación Oftalmológica Nacional. Methods: Cross-sectional association study including 25 eyes with primary open-angle glaucoma and 74 healthy eyes. Each individual underwent an ophthalmological examination, computerised visual field testing and optic nerve OCT. Optic disc areas and numbers of macrodiscs were compared between groups, with macrodiscs defined according to Jonas as an area greater than the mean plus two standard deviations, and according to Adabache, who evaluated a Mexican population, as an area ≥ 3.03 mm². Results: The mean optic disc area was 2.78 mm² in the glaucoma group vs. 2.80 mm² in the healthy group. By the Jonas criterion, one macrodisc was observed, in the healthy group; by the Adabache criterion, eight macrodiscs were found in the glaucoma group vs. twenty-five in the healthy group (OR = 0.92, 95% CI = 0.35-2.43). Discussion: There was no significant difference (P = 0.870) in disc area between the two groups, and the percentage of macrodiscs was similar in both; however, the low number of macrodiscs did not allow statistically firm conclusions about the association between macrodiscs and glaucoma.

Relevance: 100.00%

Publisher:

Abstract:

QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. The model uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
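The leapfrog/Robert-filter time stepping mentioned above can be sketched on a scalar test equation; the filter coefficient used here is an assumed illustrative value, not the one used in QUAGMIRE.

```python
import numpy as np

def leapfrog_robert(rhs, q0, dt, nsteps, alpha=0.1):
    """Leapfrog time stepping with a Robert (Robert-Asselin) filter.
    rhs(q) returns dq/dt.  Plain leapfrog leaves an undamped computational
    mode; the filter nudges the centre time level towards the average of
    its neighbours, weakly damping that mode.  alpha is illustrative."""
    q_prev = q0
    q_curr = q0 + dt * rhs(q0)            # start-up step: forward Euler
    for _ in range(nsteps - 1):
        q_next = q_prev + 2 * dt * rhs(q_curr)       # leapfrog
        q_curr = q_curr + alpha * (q_next - 2 * q_curr + q_prev)  # filter
        q_prev, q_curr = q_curr, q_next
    return q_curr

# Linear oscillator dq/dt = i*omega*q, exact solution exp(i*omega*t):
# a standard test for phase and amplitude errors of the scheme.
omega, dt, n = 1.0, 0.01, 1000
q = leapfrog_robert(lambda q: 1j * omega * q, 1.0 + 0j, dt, n)
```

On this oscillatory test the filtered scheme tracks the exact solution closely while slightly damping the amplitude, which is the intended trade-off.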

Relevance: 100.00%

Publisher:

Abstract:

A finite difference scheme based on flux difference splitting is presented for the solution of the two-dimensional shallow water equations of ideal fluid flow. A linearised problem, analogous to that of Riemann for gas dynamics, is defined, and a scheme based on numerical characteristic decomposition is presented for obtaining approximate solutions to the linearised problem; it incorporates the technique of operator splitting. An average of the flow variables across the interface between cells is required, and this average is chosen to be the arithmetic mean for computational efficiency. This is in contrast to the usual 'square root' averages found in this type of Riemann solver, where the computational expense can be prohibitive. The method of upwind differencing is used for the resulting scalar problems, together with a flux limiter for obtaining a second-order scheme which avoids nonphysical, spurious oscillations. An extension to the two-dimensional equations with source terms is included. The scheme is applied to the one-dimensional problems of a breaking dam and the reflection of a bore, and in each case the approximate solution is compared with the exact solution for ideal fluid flow. The scheme is also applied to a problem of stationary bore generation in a channel of variable cross-section. Finally, the scheme is applied to two other dam-break problems, this time in two dimensions, one having cylindrical symmetry. Each approximate solution compares well with those given by other authors.
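The building block of such schemes, upwind differencing of a scalar problem with a flux limiter to suppress spurious oscillations, can be sketched for linear advection with a minmod limiter. This is a generic illustration, not the paper's shallow-water solver.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, zero otherwise.
    This is what prevents the nonphysical oscillations mentioned above."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, c, nsteps):
    """Second-order upwind (MUSCL-type) scheme with a minmod-limited slope
    for scalar advection u_t + a*u_x = 0 on a periodic grid; c = a*dt/dx
    is the Courant number (a > 0 and 0 < c <= 1 assumed)."""
    for _ in range(nsteps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
        u_face = u + 0.5 * (1 - c) * du     # upwind value at face i+1/2
        flux = c * u_face
        u = u - (flux - np.roll(flux, 1))   # conservative update
    return u

n = 100
x = np.arange(n) / n
u0 = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)   # square pulse
u1 = advect(u0.copy(), c=0.5, nsteps=100)         # advects by 50 cells
```

The update is in conservative flux-difference form, so the total is preserved exactly, and the limited slopes keep the solution within its initial bounds while staying second-order in smooth regions.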

Relevance: 100.00%

Publisher:

Abstract:

A second-order accurate, characteristic-based finite difference scheme is developed for scalar conservation laws with source terms. The scheme is an extension of well-known second-order scalar schemes for homogeneous conservation laws. Such schemes have proved immensely powerful when applied to homogeneous systems of conservation laws using flux difference splitting. Many application areas, however, involve inhomogeneous systems of conservation laws with source terms, and the scheme presented here is applied to such systems in a subsequent paper.