831 results for MATHEMATICAL SIMULATIONS


Relevance: 20.00%

Abstract:

Production efficiency plays an ever greater role in industry, which is why packaging lines face increasingly demanding requirements. Cutting and part-transfer applications often use linear screw drives, which under certain conditions could be replaced with cheaper and, in part, higher-performing toothed-belt-driven linear guides. A position-controlled work cell typically consists of linear guides mounted along two or three coordinate axes. The positioning accuracy of such a work cell is affected by, among other things, the control structure used, the delays of the motor control chain, and the various nonlinearities of the hardware, such as friction. This thesis presents a mathematical model describing the dynamic behaviour of a linear toothed-belt servo drive, and a simulation model of the device is constructed on its basis. The validity of the model is verified with practical identification tests. In addition, the work investigates the performance a linear toothed-belt servo drive can achieve if the cascade or PID structure typically used in industry for position control is replaced with a more advanced model-based state-space controller. The performance of the control is assessed through simulations and measurements carried out on a test setup.
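
As a rough illustration of the model-based state-control idea above (not the thesis's actual controller or parameters), the sketch below builds a two-mass state-space model of a toothed-belt axis, with belt elasticity coupling the motor to the carriage, and computes an LQR state-feedback gain; all numerical values are assumed.

```python
# Minimal sketch: LQR state feedback for a two-mass toothed-belt drive model.
# All parameter values are illustrative, not from the thesis.
import numpy as np
from scipy.linalg import solve_continuous_are

J_m = 1e-3   # motor-side inertia [kg m^2] (assumed)
m_l = 20.0   # load-carriage mass [kg] (assumed)
r   = 0.02   # pulley radius [m]
k_b = 2e5    # belt stiffness [N/m]
c_b = 50.0   # belt damping [N s/m]

# States: [motor angle, motor speed, carriage position, carriage speed]
A = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [-k_b*r**2/J_m, -c_b*r**2/J_m, k_b*r/J_m, c_b*r/J_m],
    [0.0, 0.0, 0.0, 1.0],
    [k_b*r/m_l, c_b*r/m_l, -k_b/m_l, -c_b/m_l],
])
B = np.array([[0.0], [1.0/J_m], [0.0], [0.0]])  # motor torque input

# LQR weights: penalize carriage-position error most heavily.
Q = np.diag([1.0, 0.1, 1e4, 1.0])
R = np.array([[0.01]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # state-feedback gain, u = -K x

# Closed-loop poles indicate how strongly the belt resonance is damped.
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```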

Relevance: 20.00%

Abstract:

The aim of this study is to obtain a mathematical description of an alternative variant for controlling a hydraulic circuit with an electrical drive. The electrical and hydraulic systems are described by basic mathematical equations. The flexibilities of the load and boom are modeled with the assumed mode method. The model is implemented and verified with simulations. The controller is constructed and shown to decrease oscillations and improve the dynamic response of the system.
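
The assumed mode method mentioned above expands the flexible deflection in a finite set of admissible shape functions. The sketch below, a minimal version with invented beam data rather than the paper's boom model, assembles modal mass and stiffness matrices for a clamped (cantilever) boom from polynomial trial functions and extracts approximate natural frequencies.

```python
# Minimal assumed-modes sketch for a cantilever boom; all data illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

E, I = 210e9, 4e-6    # Young's modulus [Pa], second moment of area [m^4]
rhoA = 60.0           # mass per unit length [kg/m]
L, n = 6.0, 4         # boom length [m], number of assumed modes

# Admissible shapes phi_k(x) = (x/L)^k, k >= 2: they satisfy the clamped-end
# conditions phi(0) = phi'(0) = 0 at the boom root.
ks = range(2, n + 2)
phi = lambda k, x: (x / L) ** k
ddphi = lambda k, x: k * (k - 1) * x ** (k - 2) / L ** k

M = np.zeros((n, n))
K = np.zeros((n, n))
for a, ka in enumerate(ks):
    for b, kb in enumerate(ks):
        M[a, b] = quad(lambda x: rhoA * phi(ka, x) * phi(kb, x), 0, L)[0]
        K[a, b] = quad(lambda x: E * I * ddphi(ka, x) * ddphi(kb, x), 0, L)[0]

# Generalized eigenproblem K v = w^2 M v gives approximate natural frequencies.
w2, _ = eigh(K, M)
print("assumed-modes frequencies [Hz]:", np.sqrt(w2) / (2 * np.pi))
# Exact first cantilever frequency for comparison: (1.8751^2/L^2)*sqrt(EI/rhoA)
print("exact first frequency [Hz]:",
      1.8751 ** 2 / L ** 2 * np.sqrt(E * I / rhoA) / (2 * np.pi))
```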

Relevance: 20.00%

Abstract:

Technical developments have made it possible to analyze very low amounts of DNA. This has many advantages, but the drawback of this technological progress is that interpretation of the results becomes increasingly complex: the number of mixed DNA profiles has increased relative to single-source DNA profiles, and stochastic effects in the DNA profile, such as drop-in and drop-out, are more frequently observed. Moreover, the relevance of low-template DNA material to the alleged activities is not as straightforward as it was a few years ago, when, for example, large quantities of blood were recovered. The possibility of secondary and tertiary transfer is now becoming an issue. The purpose of this research is twofold: first, to study the transfer of DNA from the handler, and second, to observe whether handlers transfer DNA from persons closely connected to them. We chose to mimic cases where the offender attacks a person with a knife. As a first approach, we assumed that the defense would not give an alternative explanation for the origin of the DNA. In our transfer experiments (4 donors, 16 experiments each, 64 traces), 3% of the traces were single DNA profiles. Most of the time, the DNA profile of the person handling the knife was present as the major profile: in 83% of the traces the major contributor profile corresponded to the stabber's DNA profile (in single stains and mixtures). Mixtures with no clear major/minor fraction (12%) were also observed. 5% of the traces were considered of insufficient quality (more than 3 contributors, presence of a few minor peaks); in those cases, we considered the stabber's DNA to be absent. In our experiments, no trace allowed the stabber to be excluded; however, it must be noted that precautions were taken to minimize background DNA, as knives were cleaned before the experiments. DNA profiles of the stabbers' colleagues were not observed. We hope that this study will allow a better understanding of the transfer mechanism and of how to assess and describe results given activity-level propositions. In this preliminary research, we have focused on the transfer of DNA to the hands of the person. More research is needed to assign the probability of the results given an alternative activity proposed by the defense, for instance when the source of the DNA is not contested but the activities are.
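
The closing question, assigning the probability of the results given activity-level propositions, is commonly framed as a likelihood ratio. The toy calculation below reuses the proportion reported above under the prosecution's proposition and pairs it with a purely invented probability for a hypothetical alternative activity; it illustrates the arithmetic only, not validated casework figures.

```python
# Toy likelihood-ratio arithmetic for activity-level propositions.
# P(E | Hp) is taken loosely from the proportion reported in this study;
# P(E | Hd) is an invented placeholder for a hypothetical alternative activity.
p_major_given_stabbing = 0.83   # major profile matches the handler (reported)
p_major_given_alt      = 0.05   # hypothetical: DNA deposited some other way

LR = p_major_given_stabbing / p_major_given_alt
print(f"LR = {LR:.1f}")  # support for Hp over Hd given a matching major profile
```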

Relevance: 20.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it came into use in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure a choice of pertinent image-quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling how to characterise image quality in musculoskeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern when reducing patient dose in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, thus taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: the standard metrics used in the field remain important for assessing a unit's compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
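
One widely used anthropomorphic model observer of the kind described is the channelized Hotelling observer; the abstract does not state which variants were used here, so the sketch below is a generic illustration on synthetic images: a low-contrast disc in white noise, Laguerre-Gauss channels, and a detectability index d' from the channel-space Hotelling template. All image and channel parameters are assumed.

```python
# Sketch: channelized Hotelling observer (CHO) on synthetic images.
# White-noise background plus a low-contrast disc; illustrative only.
import numpy as np
from scipy.special import eval_laguerre

rng = np.random.default_rng(0)
N, n_img, a = 64, 200, 14.0          # image size, images per class, channel width
y, x = np.mgrid[:N, :N] - N // 2
r2 = x**2 + y**2

disc = 0.3 * (r2 <= 6**2)            # low-contrast signal (amplitude assumed)

# Laguerre-Gauss channels: a common rotationally symmetric channel set.
g = 2 * np.pi * r2 / a**2
channels = np.stack([np.sqrt(2) / a * np.exp(-g / 2) * eval_laguerre(j, g)
                     for j in range(5)])
U = channels.reshape(5, -1)          # channel matrix, 5 x N^2

def channel_outputs(signal_present):
    imgs = rng.normal(0, 1, (n_img, N, N)) + (disc if signal_present else 0)
    return U @ imgs.reshape(n_img, -1).T      # 5 x n_img channel outputs

v1, v0 = channel_outputs(True), channel_outputs(False)
S = 0.5 * (np.cov(v1) + np.cov(v0))           # pooled channel covariance
dmean = v1.mean(axis=1) - v0.mean(axis=1)
w = np.linalg.solve(S, dmean)                 # Hotelling template (channel space)
d_prime = np.sqrt(dmean @ w)                  # detectability index
print(f"d' = {d_prime:.2f}")
```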

Relevance: 20.00%

Abstract:

We use two coupled equations to analyze the space-time dynamics of two interacting languages. Firstly, we introduce a cohabitation model, which is more appropriate for human populations than classical (non-cohabitation) models. Secondly, using numerical simulations we find the front speed of a new language spreading into a region where another language was previously used. Thirdly, for a special case we derive an analytical formula that makes it possible to check the validity of our numerical simulations. Finally, as an example, we find that the observed front speed for the spread of the English language into Wales in the period 1961-1981 is consistent with the model predictions. We also find that the effects of linguistic parameters are much more important than those of parameters related to population dispersal and reproduction. If the initial population densities of both languages are similar, they have no effect on the front speed. We outline the potential of the new model to analyze relationships between language replacement and genetic replacement.
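
As a hedged illustration of how such front speeds are measured numerically (using a generic, non-cohabitation competition model rather than the authors' cohabitation equations), the sketch below integrates two coupled 1-D reaction-diffusion equations and estimates the invading language's front speed from the displacement of its 0.5 level set; all parameters are invented.

```python
# Sketch: front speed of a language invading a region held by another, using
# a simple (non-cohabitation) reaction-diffusion-competition analogue; all
# parameter values are illustrative, not the fitted English/Welsh values.
import numpy as np

D, a, c = 1.0, 1.0, 0.5        # diffusion, logistic growth, language-shift rate
Lx, nx, dt, T = 400.0, 2000, 0.01, 60.0
dx = Lx / nx
x = np.linspace(0.0, Lx, nx)

u = (x < 40.0).astype(float)   # incoming language occupies the left edge
v = 1.0 - u                    # resident language elsewhere

def lap(f):                    # 1-D Laplacian with zero-flux boundaries
    fp = np.concatenate(([f[0]], f, [f[-1]]))
    return (fp[:-2] - 2.0 * f + fp[2:]) / dx**2

front = []
for step in range(int(T / dt)):
    shift = c * u * v          # speakers switching from language v to u
    u, v = (u + dt * (D * lap(u) + a * u * (1 - u - v) + shift),
            v + dt * (D * lap(v) + a * v * (1 - u - v) - shift))
    if step % 500 == 0:
        front.append((step * dt, x[np.argmin(np.abs(u - 0.5))]))

(t0, x0), (t1, x1) = front[1], front[-1]
print(f"measured front speed: {(x1 - x0) / (t1 - t0):.2f}")
# Linearizing about v = 1 suggests a pulled-front speed of 2*sqrt(D*c) ~ 1.41.
```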

Relevance: 20.00%

Abstract:

We report a lattice Boltzmann scheme that accounts for adsorption and desorption in the calculation of mesoscale dynamical properties of tracers in media of arbitrary complexity. Lattice Boltzmann simulations have made it possible to solve numerically the coupled Navier-Stokes equations of fluid dynamics and Nernst-Planck equations of electrokinetics in complex, heterogeneous media. With the moment propagation scheme, it became possible to extract the effective diffusion and dispersion coefficients of tracers, or solutes, of any charge, e.g., in porous media. Nevertheless, the dynamical properties of tracers depend on the tracer-surface affinity, which is not purely electrostatic and also includes a species-specific contribution. In order to capture this important feature, we introduce specific adsorption and desorption processes into a lattice Boltzmann scheme through a modified moment propagation algorithm, in which tracers may adsorb to and desorb from surfaces with kinetic reaction rates. The method is validated against exact results for pure diffusion and diffusion-advection in Poiseuille flows in a simple geometry. We finally illustrate, for a more complex porous medium, the importance of taking such processes into account when computing the time-dependent diffusion coefficient.
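
The modified moment-propagation idea can be illustrated in a drastically reduced setting. The sketch below is a 1-D toy with assumed rates, not the paper's lattice Boltzmann scheme: a diffusing tracer exchanges with an immobile adsorbed state through kinetic rates, which shapes the time-dependent diffusion coefficient and drives it toward D*kd/(ka+kd), the fraction of time spent mobile.

```python
# Drastically simplified 1-D analogue of the modified moment-propagation idea:
# a diffusing tracer exchanges with an immobile adsorbed population at kinetic
# rates k_a (adsorption) and k_d (desorption). Illustrative values only.
import numpy as np

nx, dx, dt = 2001, 1.0, 0.1
D, ka, kd = 1.0, 0.2, 0.1
p_free = np.zeros(nx)
p_ads = np.zeros(nx)
p_free[nx // 2] = 1.0                      # tracer released at the centre
x = (np.arange(nx) - nx // 2) * dx

Dt = []
for step in range(1, 20001):
    lap = (np.roll(p_free, 1) - 2 * p_free + np.roll(p_free, -1)) / dx**2
    exch = ka * p_free - kd * p_ads        # adsorption minus desorption
    p_free += dt * (D * lap - exch)        # only the mobile fraction diffuses
    p_ads += dt * exch                     # adsorbed fraction is immobile
    if step % 2000 == 0:
        m2 = np.sum(x**2 * (p_free + p_ads))   # second moment of all tracer
        Dt.append(m2 / (2 * step * dt))        # D(t) = <x^2> / 2t

print("time-dependent D(t):", np.round(Dt, 3))
print("long-time prediction D*kd/(ka+kd):", D * kd / (ka + kd))
```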

Relevance: 20.00%

Abstract:

There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the costs involved. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the flow equations for an object exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process, since the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were performed with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a venule via a capillary showed that the model based on the volume-of-fluid (VOF) method can successfully predict the deformation and flow of red blood cells (RBCs) in an arteriole; furthermore, the result corresponds to the experimental observation that the RBC is deformed during the movement. The concluding remarks provide a methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one, and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared with cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the largest and smallest contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
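
For the porous-media part, the comparison with Macdonald et al. (1979) can be sketched with the Ergun-type form of that correlation, using the commonly cited smooth-particle constants A = 180 and B = 1.8; the fluid and bed values below are illustrative, not those of the study.

```python
# Sketch: packed-bed pressure gradient via the Ergun-type correlation of
# Macdonald et al. (1979), smooth-particle constants A = 180, B = 1.8.
# Fluid and bed values are illustrative.
rho, mu = 1000.0, 1e-3          # water density [kg/m^3], viscosity [Pa s]
d_p, eps = 1e-3, 0.4            # particle diameter [m], bed porosity

def macdonald_grad(u):
    """Pressure gradient [Pa/m] for superficial velocity u [m/s]."""
    viscous = 180 * mu * (1 - eps)**2 * u / (eps**3 * d_p**2)
    inertial = 1.8 * (1 - eps) * rho * u**2 / (eps**3 * d_p)
    return viscous + inertial

for u in (1e-4, 1e-3, 1e-2):
    Re = rho * u * d_p / (mu * (1 - eps))   # particle Reynolds number
    print(f"Re = {Re:7.2f}   dp/dL = {macdonald_grad(u):10.1f} Pa/m")
```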

Relevance: 20.00%

Abstract:

In this Master's thesis, steam and gas turbine models were implemented in the Balas process simulation program. Balas is a simulation program developed by VTT Technical Research Centre of Finland, intended in particular for the static simulation of pulp and paper industry processes. The goal of the work is to develop simulation models for steam and gas turbines, and to study their validity by comparing simulations with measurement and design data. Mathematical models were formulated for the steam turbine, for its control stage, and for the off-design calculation of the steam turbine. For the gas turbine, performance curves were constructed, with which its behaviour in off-design conditions is examined. In the thesis phase, the components were modelled in the Matlab environment, from which they are transferred to Balas in a separate work phase. Particular attention was paid to the ease of use and versatility of the models. The steam turbine models were tested by simulating the back-pressure turbine, including its control stage, of a power plant operating in connection with a paper mill, and by comparing the simulation results with the mill's measurement data. The gas turbine model was tested by comparing the design data of a GE Power MS 7001 gas turbine with a case simulated using corresponding parameters.
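
A standard building block for the steam-turbine off-design calculation described above is Stodola's ellipse law relating mass flow to inlet and outlet pressures; the thesis's exact formulation may differ, and the design-point numbers below are invented.

```python
# Sketch: steam-turbine off-design mass flow via Stodola's ellipse law, one
# standard approach (the thesis's exact model may differ). Values invented.
import math

# Design point: mass flow [kg/s], inlet/outlet pressures [Pa], inlet temp [K]
m_d, p1_d, p2_d, T1_d = 30.0, 80e5, 4e5, 783.0

def stodola_flow(p1, p2, T1):
    """Off-design mass flow from the ellipse law, scaled to the design point."""
    return (m_d * math.sqrt((p1**2 - p2**2) / (p1_d**2 - p2_d**2))
                * math.sqrt(T1_d / T1))

# Example: throttled inlet, slightly higher backpressure, same inlet temperature
print(f"off-design flow: {stodola_flow(60e5, 4.5e5, 783.0):.1f} kg/s")
```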

Relevance: 20.00%

Abstract:

The Large Hadron Collider (LHC) is the main particle accelerator at CERN. The LHC was built with the main goal of searching for elementary particles and helping science investigate our universe. Radiation in the LHC is caused by the circular acceleration of charged particles; therefore, detectors tracking particles under the severe conditions present during the experiments must be radiation tolerant. Moreover, a further luminosity upgrade (up to 10³⁵ cm⁻²s⁻¹) requires further development of the particle detectors' structure. This work presents a new type of 3D stripixel detector with significant structural improvements. This new type of radiation-hard detector has a three-dimensional (3D) array of p+ and n+ electrodes that penetrate into the detector bulk; the electrons and holes are then collected at oppositely biased electrodes. The proposed 3D stripixel detector demonstrates a full depletion voltage lower than that of planar detectors. A low depletion voltage is one of the main advantages, because only the depleted part of the device is the active area. Because of the small spacing between electrodes, charge collection distances are shorter, which results in a fast detector response. This work also briefly discusses dual-column detectors, i.e. detectors containing both n+ and p+ columnar electrodes in their structure; dual-column detectors were shown to have a better electric field distribution than single-sided radiation detectors, so that the dead space, in other words the low-electric-field region, is significantly suppressed. Simulations were carried out using the Atlas device simulation software, and the electric field distributions under different bias voltages are presented as simulation results.
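
The claimed advantage of the 3D layout, a full depletion voltage set by the small inter-electrode spacing rather than the wafer thickness, can be illustrated with the parallel-plate depletion estimate below; real 3D column geometries add a geometric correction, and the doping value is assumed.

```python
# Rough scale comparison of full depletion voltage: planar wafer thickness vs
# the much smaller inter-electrode spacing of a 3D detector. Parallel-plate
# estimate only (real 3D columns add a geometric correction); values assumed.
q = 1.602e-19               # elementary charge [C]
eps = 11.9 * 8.854e-12      # permittivity of silicon [F/m]
Neff = 1e18                 # effective doping after irradiation [m^-3] (assumed)

def v_fd(spacing_m):
    """Parallel-plate full-depletion estimate: V = q*Neff*d^2 / (2*eps)."""
    return q * Neff * spacing_m**2 / (2 * eps)

print(f"planar, d = 300 um:  {v_fd(300e-6):6.1f} V")
print(f"3D, spacing = 50 um: {v_fd(50e-6):6.1f} V")  # ~36x lower (d^2 scaling)
```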

Relevance: 20.00%

Abstract:

The use of two-dimensional spectral analysis of terrain heights to determine characteristic terrain spatial scales, and its subsequent use for the objective definition of the grid size required to resolve terrain forcing, are presented in this paper. In order to illustrate the influence of grid size, atmospheric flow in a complex-terrain area of the Spanish east coast is simulated with the Regional Atmospheric Modeling System (RAMS) mesoscale numerical model using different horizontal grid resolutions. In this area, a grid size of 2 km is required to account for 95% of the terrain variance. Comparison among the results of the different simulations shows that, although the main wind behavior does not change dramatically, some small-scale features appear when using a resolution of 2 km or finer. Horizontal flow pattern differences are significant both at night, when terrain forcing is more relevant, and during the day, when thermal forcing is dominant. Vertical structures are also investigated, and the results show that vertical advection is strongly influenced by the horizontal grid size during the daytime. The turbulent kinetic energy and potential temperature vertical cross sections show substantial differences in the structure of the planetary boundary layer for each model configuration.
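
The grid-size criterion above can be made concrete: take the 2-D power spectrum of the terrain heights, accumulate variance from the longest wavelengths downward, and find the shortest wavelength needed to reach 95% of the variance; the grid spacing must then resolve that wavelength. The sketch below applies this recipe to synthetic terrain (the paper's own analysis of its domain yielded 2 km).

```python
# Sketch: choose a grid size that captures 95% of terrain height variance
# using a 2-D power spectrum (synthetic terrain; the paper found 2 km).
import numpy as np

rng = np.random.default_rng(1)
n, dx = 256, 0.5                       # grid points, spacing [km]

# Synthetic terrain with a red (power-law) spectrum, for illustration only.
kx = np.fft.fftfreq(n, d=dx)
ky = np.fft.fftfreq(n, d=dx)
k = np.hypot(*np.meshgrid(kx, ky))     # radial wavenumber [1/km]
amp = np.zeros_like(k)
amp[k > 0] = k[k > 0] ** -1.5
phase = np.exp(2j * np.pi * rng.random((n, n)))
h = np.fft.ifft2(amp * phase).real
h -= h.mean()

# Power spectrum; accumulate variance from long to short wavelengths.
P = np.abs(np.fft.fft2(h)) ** 2
ks, Ps = k.ravel(), P.ravel()
order = np.argsort(ks)                 # ascending wavenumber = descending wavelength
cum = np.cumsum(Ps[order]) / Ps.sum()
k95 = ks[order][np.searchsorted(cum, 0.95)]
lam95 = 1.0 / k95                      # shortest wavelength still needed [km]
print(f"95% of variance above wavelength {lam95:.2f} km "
      f"-> grid size <= {lam95 / 2:.2f} km")   # two grid points per wavelength
```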