943 results for Independent-particle shell model


Relevance: 30.00%

Abstract:

One of the main problems related to the transport and manipulation of multiphase fluids concerns the existence of characteristic flow patterns and their strong influence on important operation parameters. A good example of this occurs in gas-liquid chemical reactors, in which maximum efficiencies can be achieved by maintaining a finely dispersed bubbly flow that maximizes the total interfacial area. Thus, the ability to automatically detect flow patterns is of crucial importance, especially for the adequate operation of multiphase systems. This work describes the application of a neural model to process the signals delivered by a direct imaging probe and produce a diagnosis of the corresponding flow pattern. The neural model consists of six independent neural modules, each of which is trained to detect one of the main horizontal flow patterns, and a final winner-take-all layer responsible for resolving cases in which two or more patterns are detected simultaneously. Experimental signals representing different bubbly, intermittent, annular and stratified flow patterns were used to validate the neural model.
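As a rough illustration of the architecture described above (six pattern-specific detectors resolved by a winner-take-all stage), the sketch below shows only the decision logic; the pattern list, the detector scores and all values are placeholders, not the authors' trained networks.

    # Hypothetical sketch of the winner-take-all stage: each of six independently
    # trained modules returns a confidence for "its" horizontal flow pattern, and
    # the final layer keeps the strongest response when several modules fire.
    import numpy as np

    PATTERNS = ["dispersed bubble", "elongated bubble", "slug",
                "stratified smooth", "stratified wavy", "annular"]  # assumed set of six patterns

    def winner_take_all(scores):
        return PATTERNS[int(np.argmax(scores))]

    # Illustrative confidences for one probe signal (values made up).
    module_scores = np.array([0.12, 0.81, 0.77, 0.05, 0.03, 0.10])
    print(winner_take_all(module_scores))  # -> "elongated bubble"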

Relevance: 30.00%

Abstract:

This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid action, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools. Techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as a sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core-wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory scale unit and a 1.7 MWth pilot scale unit, and was used to design a conceptual 250 MWth industrial scale unit. Valuable information was gathered on the behaviour of the small scale laboratory device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial sized unit, the selection of particle size and operability in different load scenarios.
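For orientation only, the core carbon balance that such a model has to close can be sketched with a zero-dimensional carbonator, far simpler than the one-dimensional model of the thesis; all symbols, rate expressions and values below are assumptions for illustration.

    # Toy carbonator CO2 balance (0-D sketch, not the thesis's 1-D model).
    from math import exp

    def capture_efficiency(F_CO2, F_Ca, X_max, k, tau):
        """F_CO2: CO2 molar flow in flue gas [mol/s]; F_Ca: CaO circulation [mol/s];
        X_max: maximum carbonation conversion [-]; k: apparent rate constant [1/s];
        tau: solids residence time in the carbonator [s]."""
        X_ave = X_max * (1.0 - exp(-k * tau))   # conversion reached in one pass (assumed kinetics)
        return min(F_Ca * X_ave / F_CO2, 1.0)   # captured CO2 cannot exceed the feed

    print(capture_efficiency(F_CO2=100.0, F_Ca=200.0, X_max=0.3, k=0.05, tau=60.0))  # ~0.57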

Relevance: 30.00%

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is, in the context of design space exploration, optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
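The execution model sketched above, in which nodes fire only when their input queues hold enough tokens and the firing order can largely be fixed in advance, can be illustrated with a toy two-actor pipeline; this is a plain Python sketch of the general idea, not RVC-CAL or the thesis's compiler infrastructure.

    # Toy dataflow network: actors communicate only through FIFO queues and fire
    # once their firing rule (enough input tokens) is satisfied.
    from collections import deque

    class Actor:
        def __init__(self, name, consume, fn):
            self.name, self.consume, self.fn = name, consume, fn

        def can_fire(self, inq):
            return len(inq) >= self.consume

        def fire(self, inq, outq):
            tokens = [inq.popleft() for _ in range(self.consume)]
            for t in self.fn(tokens):
                outq.append(t)

    # Two-actor pipeline: 'split' emits two tokens per input, 'merge' sums pairs.
    split = Actor("split", consume=1, fn=lambda ts: [ts[0], ts[0] + 1])
    merge = Actor("merge", consume=2, fn=lambda ts: [sum(ts)])

    q_in, q_mid, q_out = deque([1, 2, 3]), deque(), deque()

    # A quasi-static schedule precomputes the repeating firing sequence
    # (split, merge) and only checks token availability at run time.
    static_schedule = [(split, q_in, q_mid), (merge, q_mid, q_out)]
    while any(a.can_fire(i) for a, i, _ in static_schedule):
        for actor, inq, outq in static_schedule:
            if actor.can_fire(inq):
                actor.fire(inq, outq)

    print(list(q_out))  # [3, 5, 7]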

Relevance: 30.00%

Abstract:

A decade of studies on long-term habituation (LTH) in the crab Chasmagnathus is reviewed. Upon sudden presentation of a passing object overhead, the crab reacts with an escape response that habituates promptly and remains habituated for at least five days. LTH proved to be an instance of associative memory and showed context, stimulus frequency and circadian phase specificity. A strong training protocol (STP) (≥15 trials, intertrial interval (ITI) of 171 s) invariably yielded LTH, while a weak training protocol (WTP) (≤10 trials, ITI = 171 s) invariably failed. The STP was used in combination with presumably amnestic agents and the WTP with presumably hypermnestic agents. Remarkably, systemic administration of low doses was effective, which is likely due to the lack of an endothelial blood-brain barrier. LTH was blocked by inhibitors of protein and RNA synthesis, enhanced by protein kinase A (PKA) activators and reduced by PKA inhibitors, facilitated by angiotensins II and IV and disrupted by saralasin. The presence of angiotensins and related compounds in the crab brain was demonstrated. Diverse results suggest that LTH includes two components: an initial memory produced by spaced training and mainly expressed at an initial phase of testing, and a retraining memory produced by massed training and expressed at a later phase of testing (retraining). The initial memory would be associative, context specific and sensitive to cycloheximide, while the retraining memory would be nonassociative, context independent and insensitive to cycloheximide.

Relevance: 30.00%

Abstract:

A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.
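As an illustration of the independently adjustable pulse parameters mentioned above, the sketch below generates one cycle of a piston flow waveform; the half-sine ejection profile and all numbers are assumptions for illustration, not the apparatus control code.

    # Sketch: one cycle of a pulsatile flow waveform with independently adjustable
    # stroke volume, pulse rate and ejection (pulse) duration.
    import numpy as np

    def piston_flow(stroke_volume_ml, rate_bpm, ejection_fraction_of_cycle, n=1000):
        """Return time [s] and flow [ml/s]: half-sine ejection, then zero flow."""
        period = 60.0 / rate_bpm
        t = np.linspace(0.0, period, n)
        t_ej = ejection_fraction_of_cycle * period
        flow = np.where(t < t_ej, np.sin(np.pi * t / t_ej), 0.0)
        # Scale so the waveform integrates to the requested stroke volume.
        dt_step = t[1] - t[0]
        flow *= stroke_volume_ml / (flow.sum() * dt_step)
        return t, flow

    t, q = piston_flow(stroke_volume_ml=70.0, rate_bpm=72, ejection_fraction_of_cycle=0.35)
    print(round((q * (t[1] - t[0])).sum(), 1))  # ~70.0 ml per beat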

Relevance: 30.00%

Abstract:

Simultaneous EEG-functional magnetic resonance imaging (fMRI) measurements combine the high temporal resolution of EEG with the distinctive spatial resolution of fMRI. The purpose of this EEG-fMRI study was to search for hemodynamic responses (blood oxygen level-dependent, BOLD, responses) associated with interictal activity in a case of right mesial temporal lobe epilepsy before and after a successful selective amygdalohippocampectomy. The study thus localized the epileptogenic source with this noninvasive imaging technique and compared the results obtained after removal of the atrophied hippocampus. Additionally, the present study investigated the effectiveness of two different ways of localizing epileptiform spike sources, i.e., BOLD contrast and an independent component analysis dipole model, by comparing their respective outcomes with the resected epileptogenic region. Our findings suggested that the right hippocampus induced the large interictal activity observed in the left hemisphere. Although almost a quarter of the dipoles were found near the right hippocampal region, dipole modeling resulted in a widespread distribution, making EEG analysis alone too weak to precisely determine the source localization, even with a sophisticated method of analysis such as independent component analysis. On the other hand, the combined EEG-fMRI technique made it possible to highlight the epileptogenic foci quite efficiently.
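For readers unfamiliar with the ICA step mentioned above, the following sketch shows how independent components could be extracted from a multichannel EEG array with scikit-learn; the data are synthetic and the subsequent dipole fitting is not shown, so this is only a schematic of the analysis idea, not the study's pipeline.

    # Schematic ICA decomposition on a synthetic (samples x channels) EEG array.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((5000, 32))      # placeholder data: 5000 samples, 32 channels

    ica = FastICA(n_components=20, random_state=0, max_iter=500)
    sources = ica.fit_transform(eeg)           # component time courses
    topographies = ica.mixing_                 # per-channel weights of each component

    # A spike-related component would be selected by its time course around
    # interictal events and its topography passed to a dipole-fitting tool.
    print(sources.shape, topographies.shape)   # (5000, 20) (32, 20)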

Relevance: 30.00%

Abstract:

Angiotensin II is a key player in the pathogenesis of renovascular hypertension, a condition associated with endothelial dysfunction. We investigated the effects of aliskiren (ALSK) and L-arginine treatment, alone and in combination, on blood pressure (BP) and on vascular reactivity in aortic rings. Hypertension was induced in 40 male Wistar rats by clipping the left renal artery. Animals were divided into Sham, 2-kidney, 1-clip (2K1C) hypertension, 2K1C+ALSK (ALSK), 2K1C+L-arginine (L-arg), and 2K1C+ALSK+L-arginine (ALSK+L-arg) treatment groups. BP was monitored for 4 weeks, and endothelium-dependent and -independent vasoconstriction and relaxation were assessed in aortic rings. ALSK+L-arg reduced BP and the contractile response to phenylephrine and improved acetylcholine relaxation. Endothelium removal and incubation with N-nitro-L-arginine methyl ester (L-NAME) increased the response to phenylephrine in all groups, but the effect was greater in the ALSK+L-arg group. Losartan reduced the contractile response in all groups, apocynin reduced the contractile response in the 2K1C, ALSK and ALSK+L-arg groups, and incubation with superoxide dismutase reduced the phenylephrine response in the 2K1C and ALSK groups. eNOS expression increased in the 2K1C and L-arg groups, and iNOS was increased significantly only in the 2K1C group compared with the other groups. AT1 expression increased in the 2K1C group compared with the Sham, ALSK and ALSK+L-arg groups, AT2 expression increased in the ALSK+L-arg group compared with the Sham and L-arg groups, and gp91phox decreased in the ALSK+L-arg group compared with the 2K1C and ALSK groups. In conclusion, combined ALSK+L-arg was effective in reducing BP and preventing endothelial dysfunction in aortic rings of 2K1C hypertensive rats. The responsible mechanisms appear to be related to the modulation of the local renin-angiotensin system, which is associated with a reduction in endothelial oxidative stress.

Relevance: 30.00%

Abstract:

Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is not only applicable to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor tool support or the lack of it. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
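The core idea of deriving test cases from a behavioural model can be illustrated with a deliberately small example: a plain state machine (not UML, and not the tool-chain of the thesis) whose depth-bounded paths become abstract test sequences, each carrying requirement tags for traceability. Everything below is hypothetical.

    # Sketch: depth-bounded path enumeration over a toy state-machine model,
    # where every path is one abstract test case and requirement tags are kept
    # for traceability.
    from collections import namedtuple

    Transition = namedtuple("Transition", "src event dst requirement")

    MODEL = [
        Transition("Idle", "insert_coin", "Ready", "REQ-1"),
        Transition("Ready", "press_start", "Running", "REQ-2"),
        Transition("Ready", "refund", "Idle", "REQ-3"),
        Transition("Running", "finish", "Idle", "REQ-4"),
    ]

    def generate_tests(state, depth, path=()):
        if depth == 0:
            yield path
            return
        outgoing = [t for t in MODEL if t.src == state]
        if not outgoing:
            yield path
            return
        for t in outgoing:
            yield from generate_tests(t.dst, depth - 1, path + (t,))

    for test in generate_tests("Idle", depth=3):
        events = [t.event for t in test]
        reqs = sorted({t.requirement for t in test})
        print(events, "covers", reqs)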

Relevance: 30.00%

Abstract:

The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets, in which particles represent market participants. A discontinuity is included in the model through an interaction kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modeling some commodity markets, and this is confirmed by the ability of the model to reproduce price spikes when their effects persist over a sufficiently long period of time.
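A sketch of the kind of particle system described above, in which traders' prices drift through a Heaviside interaction kernel and diffuse with Brownian noise, discretised with an Euler-Maruyama scheme, is given below; the drift form, parameters and initial price distribution are illustrative assumptions, not calibrated to any market.

    # Interacting particle sketch: dX_i = (1/N) * sum_j H(X_i - X_j) dt + sigma dW_i.
    import numpy as np

    rng = np.random.default_rng(1)
    N, T, dt, sigma = 500, 1.0, 0.01, 0.5
    X = rng.normal(loc=100.0, scale=5.0, size=N)        # initial "prices" (made up)

    heaviside = lambda z: (z > 0).astype(float)         # discontinuous interaction kernel

    for _ in range(int(T / dt)):
        # Drift of each particle: fraction of the other traders currently below it.
        drift = heaviside(X[:, None] - X[None, :]).mean(axis=1)
        X += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

    # For large N the empirical distribution of X is expected to follow a
    # Burgers-type PDE, in line with the law-of-large-numbers result above.
    print(round(X.mean(), 2), round(X.std(), 2))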

Relevance: 30.00%

Abstract:

This thesis tested a path model of the relationships of reasons for drinking and reasons for limiting drinking with consumption of alcohol and drinking problems. It was hypothesized that reasons for drinking would be composed of positively and negatively reinforcing reasons, and that reasons for limiting drinking would be composed of personal and social reasons. Problem drinking was operationalized as consisting of two factors, consumption and drinking problems, with a positive relationship between the two. It was predicted that positively and negatively reinforcing reasons for drinking would be associated with heavier consumption and, in turn, with more drinking problems, through level of consumption. Negatively reinforcing reasons were also predicted to be associated with drinking problems directly, independent of level of consumption. It was hypothesized that reasons for limiting drinking would be associated with lower levels of consumption and would be related to fewer drinking problems, through level of consumption. Finally, among women, reasons for limiting drinking were expected to be associated with drinking problems directly, independent of level of consumption. The sample was taken from the second phase of the Niagara Young Adult Health Study, a community sample of young adult men and women. Measurement models of reasons for drinking, reasons for limiting drinking, and problem drinking were tested using Confirmatory Factor Analysis. After adequate fit of each measurement model was obtained, the complete structural model, with all hypothesized paths, was tested for goodness of fit. Cross-group equality constraints were imposed on all models to test for gender differences. The results provided evidence supporting the hypothesized structure of reasons for drinking and problem drinking. A single-factor model of reasons for limiting drinking was used in the analyses because a two-factor model was inadequate. Support was obtained for the structural model. For example, the results revealed independent influences of Positively Reinforcing Reasons for Drinking, Negatively Reinforcing Reasons for Drinking, and Reasons for Limiting Drinking on consumption. In addition, Negatively Reinforcing Reasons helped to account for Drinking Problems independent of the amount of alcohol consumed. Although an additional path from Reasons for Limiting Drinking to Drinking Problems was hypothesized for women, it was of marginal significance and did not improve the model's fit. As a result, no sex differences in the model were found. This may be a result of the convergence of drinking patterns for men and women. Furthermore, it is suggested that gender differences may only be found in clinical samples of problem drinkers, where the relative level of consumption for women and men is similar.
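In standard structural-equation notation, the hypothesized structural part of the model can be written roughly as follows (a sketch consistent with the abstract, with generic coefficients and disturbances rather than the thesis's estimates):

    % Structural part of the hypothesized path model (sketch only).
    \begin{align*}
    \text{Consumption} &= \beta_{1}\,\text{PosReasons} + \beta_{2}\,\text{NegReasons}
                          + \beta_{3}\,\text{LimitReasons} + \zeta_{1},\\
    \text{Problems}    &= \gamma_{1}\,\text{Consumption} + \gamma_{2}\,\text{NegReasons} + \zeta_{2},
    \end{align*}

with beta_1, beta_2 > 0 and beta_3 < 0 expected, and gamma_2 carrying the hypothesized direct, consumption-independent effect of negatively reinforcing reasons on drinking problems.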

Relevance: 30.00%

Abstract:

The optical conductivity of the Anderson impurity model has been calculated by employing the slave boson technique and an expansion in powers of 1/N, where N is the degeneracy of the f electron level. This method has been used to find the effective mass of the conduction electrons for temperatures above and below the Kondo temperature. For low temperatures, the mass enhancement is found to be large, while at high temperatures, the mass enhancement is small. The conductivity is found to be Drude-like, with a frequency-dependent effective mass and scattering time at low temperatures and a frequency-independent effective mass and scattering time at high temperatures. The behavior of both the effective mass and the conductivity is in qualitative agreement with experimental results.
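The Drude-like form referred to here is usually written as an extended Drude expression with a frequency-dependent mass and scattering time; the standard formula is quoted below for orientation and is not reproduced from the thesis:

    \sigma(\omega) \;=\; \frac{n e^{2}/m}{\,\tau^{-1}(\omega) \;-\; i\,\omega\, m^{*}(\omega)/m\,}

With a large, frequency-dependent m*(omega) and tau(omega) this corresponds to the low-temperature behaviour described above, while frequency-independent values with a small mass enhancement recover the ordinary Drude form at high temperatures.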

Relevance: 30.00%

Abstract:

The frequency dependence of the electron-spin fluctuation spectrum, P(Ω), is calculated in the finite bandwidth model. We find that for Pd, which has a nearly full d-band, the magnitude, the range, and the peak frequency of P(Ω) are greatly reduced from those in the standard spin fluctuation theory. The electron self-energy due to spin fluctuations is calculated within the finite bandwidth model. Vertex corrections are examined, and we find that Migdal's theorem is valid for spin fluctuations in the nearly full band. The conductance of a normal metal-insulator-normal metal tunnel junction is examined when spin fluctuations are present in one electrode. We find that for the nearly full band, the momentum-independent self-energy due to spin fluctuations enters the expression for the tunneling conductance with approximately the same weight as the self-energy due to phonons. The effect of spin fluctuations on the tunneling conductance is slight within the finite bandwidth model for Pd. The effect of spin fluctuations on the tunneling conductance of a metal with a less full d-band than Pd may be more pronounced. However, in this case the tunneling conductance is not simply proportional to the self-energy.
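For context, in Eliashberg-type treatments a momentum-independent self-energy generated by a bosonic spectrum such as P(Ω) has the standard form below (quoted for orientation; the finite bandwidth expression of the thesis differs in its details):

    \Sigma(\omega) \;=\; \int_{0}^{\infty}\! d\Omega\, P(\Omega)
    \int_{-\infty}^{\infty}\! d\varepsilon
    \left[ \frac{f(\varepsilon) + n(\Omega)}{\omega - \varepsilon + \Omega + i0^{+}}
         + \frac{1 - f(\varepsilon) + n(\Omega)}{\omega - \varepsilon - \Omega + i0^{+}} \right]

where f and n are the Fermi and Bose functions; replacing P(Ω) by the phonon spectrum α²F(Ω) gives the corresponding phonon self-energy, which is why the two contributions can enter the tunneling conductance with comparable weight.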

Relevance: 30.00%

Abstract:

Transcription termination of messenger RNA (mRNA) is normally achieved by polyadenylation followed by Rat1p-dependent 5'-3' exoribonucleolytic degradation of the downstream transcript. Here we show that the yeast ortholog of the dsRNA-specific ribonuclease III (Rnt1p) may trigger Rat1p-dependent termination of RNA transcripts that fail to terminate near polyadenylation signals. Rnt1p cleavage sites were found downstream of several genes, and the deletion of RNT1 resulted in transcription readthrough. Inactivation of Rat1p impaired Rnt1p-dependent termination and resulted in the accumulation of 3' end cleavage products. These results support a model for transcription termination in which cotranscriptional cleavage by Rnt1p provides access for exoribonucleases in the absence of polyadenylation signals.

Relevance: 30.00%

Abstract:

Stimulation of the (pro)renin receptor [(P)RR], a recently discovered member of the renin-angiotensin system (RAS), increases RAS activity and angiotensin II-independent signalling pathways. To study the potential impact of the (P)RR on the development of obesity, we hypothesized that mice deficient in the (P)RR only in adipose tissue (KO) would have a lower body weight, through effects on adipose tissue metabolism, locomotor activity and/or food intake. KO mice were therefore generated using Cre/Lox technology. Weight gain and food intake were assessed weekly in male and female KO and wild-type (WT) mice for 4 weeks while they were maintained on a normal diet. In addition, a group of females was placed for 6 weeks on a high-fat/high-carbohydrate (HF/HC) diet. Body composition and ambulatory activity were assessed by EchoMRI and with Physioscan cages, respectively. Adipose tissues were collected and weighed, and the perigonadal fat pads were used for microarray analysis. Finally, (P)RR mRNA expression levels were evaluated. Since the (P)RR gene is located on the X chromosome, the males were complete KOs and the females were partial KOs. KO mice had a significantly lower body weight than WT mice, with the differences being more pronounced in males. Moreover, KO females were resistant to obesity when placed on the HF/HC diet and therefore had significantly less fat mass than WT females. Histological analysis of the perigonadal fat of the KOs revealed a reduced number of adipocytes, but of larger size. Although there was no change in food intake, a nearly 3-fold increase in ambulatory activity was detected in the males. In addition, we observed that their tibias were of reduced length, strongly suggesting that their development was affected. The perigonadal fat of KO mice showed reduced expression of ABLIM2 (actin binding LIM protein family, member 2), which is associated with type II diabetes in humans. Taken together, these data strongly suggest that the (P)RR is involved in the regulation of body weight.

Relevance: 30.00%

Abstract:

In this thesis, we present a new smoothed particle hydrodynamics (SPH) method for solving the incompressible Navier-Stokes equations, even in the presence of singular forces. The singular source terms are treated in a manner similar to that found in the Immersed Boundary (IB) method of Peskin (2002) or the method of regularized Stokeslets (Cortez, 2001). In our numerical scheme, we implement a second-order, pressure-free projection method inspired by Kim and Moin (1985). This scheme completely avoids the difficulties that can be encountered with the prescription of Neumann boundary conditions on the pressure. We present two variants of this approach: one, Lagrangian, which is commonly used, and the other, Eulerian, in which we simply consider the SPH particles as quadrature points at which the fluid properties are computed, so that these points can be kept fixed in time. Our SPH method is first tested on the two-dimensional Poiseuille flow problem between two infinite plates, and we carry out a detailed error analysis of the computations. For this problem, the results are similar whether the SPH particles are free to move or kept fixed. We also treat, with our SPH method, the problem of the dynamics of a membrane immersed in a viscous, incompressible fluid. The membrane is represented by a cubic spline along which the tension in the membrane is computed and transmitted to the surrounding fluid. The Navier-Stokes equations, with a singular force arising from the membrane, are then solved to determine the velocity of the fluid in which the membrane is immersed. The fluid velocity thus obtained is interpolated onto the interface in order to determine its displacement. We discuss the advantages of keeping the SPH particles fixed instead of letting them move. We then apply our SPH method to the simulation of confined flows of non-dilute polymer solutions with hydrodynamic interaction and excluded-volume forces. The starting point of the algorithm is the system of coupled Langevin equations for the polymer and the solvent (CLEPS) (see, for example, Oono and Freed (1981) and Öttinger and Rabin (1989)), describing, in the present case, the microscopic dynamics of a flowing polymer solution with a bead-spring representation of the macromolecules. Numerical tests on some two-dimensional channel flows reveal that the use of the second-order projection method coupled with fixed SPH quadrature points leads to second-order convergence of the velocity and to convergence of order approximately two for the pressure, provided that the solution is sufficiently smooth. For large-scale computations with dumbbells and bead-spring chains, an appropriate choice of the number of SPH particles as a function of the number of beads N shows that, in the absence of excluded-volume forces, the cost of our algorithm is of order O(N). Finally, we initiate three-dimensional computations with our SPH model.
With this in mind, we solve the problem of three-dimensional Poiseuille flow between two infinite parallel plates and the problem of Poiseuille flow in an infinitely long rectangular duct. In addition, we simulate, in three dimensions, flows of non-dilute polymer solutions confined between two infinite plates, with hydrodynamic interaction and excluded-volume forces.
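To make the basic SPH ingredients referred to above concrete, the sketch below shows a standard 2-D cubic-spline kernel and the SPH density summation evaluated on fixed (Eulerian) quadrature points; it is a minimal illustration under simplifying assumptions, not the thesis's second-order projection scheme.

    # Minimal 2-D SPH building blocks: cubic-spline kernel and density summation
    # on fixed quadrature points (a sketch only).
    import numpy as np

    def cubic_spline_W(r, h):
        """Standard 2-D cubic-spline kernel with smoothing length h (support 2h)."""
        q = r / h
        sigma = 10.0 / (7.0 * np.pi * h**2)          # 2-D normalisation constant
        return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

    def density_summation(positions, masses, h):
        """rho_i = sum_j m_j W(|x_i - x_j|, h)."""
        diff = positions[:, None, :] - positions[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        return (masses[None, :] * cubic_spline_W(r, h)).sum(axis=1)

    # Fixed (Eulerian) particles on a regular lattice in the unit square,
    # as in the variant where SPH points are treated as stationary quadrature points.
    nx = ny = 20
    dx = 1.0 / nx
    xs, ys = np.meshgrid((np.arange(nx) + 0.5) * dx, (np.arange(ny) + 0.5) * dx)
    pos = np.column_stack([xs.ravel(), ys.ravel()])
    m = np.full(len(pos), 1.0 * dx * dx)             # unit reference density

    rho = density_summation(pos, m, h=1.3 * dx)
    centre = np.argmin(np.linalg.norm(pos - 0.5, axis=1))
    print(round(rho[centre], 3))                     # close to 1 for an interior point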