870 results for Agent-Based Models
Abstract:
Evidence is accumulating that biodiversity is facing the effects of global change. The most influential drivers of change in ecosystems are land-use change, alien species invasions and climate change impacts. Accurate projections of species' responses to these changes are needed to propose mitigation measures to slow down the ongoing erosion of biodiversity. Niche-based models (NBM) currently represent one of the only tools for such projections. However, their application in the context of global change relies on restrictive assumptions, calling for cautious interpretation. In this thesis I aim to assess the effectiveness and shortcomings of niche-based models for the study of global change impacts on biodiversity by investigating specific, unsolved limitations and suggesting new approaches. Two studies investigating threats to rare and endangered plants are presented. I review the ecogeographic characteristics of 118 endangered plants with high conservation priority in Switzerland. The prevalence of rarity types among plant species is analyzed in relation to IUCN extinction risks. The review underlines the importance of regional vs. global conservation and shows that a global assessment of rarity might be misleading for some species because it can fail to account for different degrees of rarity at a variety of spatial scales. The second study tests a modeling framework including iterative steps of modeling and field surveys to improve the sampling of rare species. The approach is illustrated with a rare alpine plant, Eryngium alpinum, and shows promise for complementing conservation practices and reducing sampling costs. Two studies illustrate the impacts of climate change on African taxa. The first assesses the sensitivity of 277 African mammals to climate change by 2050 in terms of species richness and turnover. It shows that a substantial number of species could be critically endangered in the future, and that national parks situated in xeric ecosystems are not expected to meet their mandate of protecting current species diversity in the future. The second study models the distribution in 2050 of 975 endemic plant species in southern Africa. The study proposes the inclusion of new methodological insights improving the accuracy and ecological realism of predictions in global change studies. It also investigates the possibility of estimating a priori the sensitivity of a species to climate change from its geographical distribution and ecological properties. Three studies illustrate the application of NBM to the study of biological invasions. The first investigates the northward expansion of Lactuca serriola L. in Europe during the last 250 years in relation to climate change. In the last two decades, the species could not track climate change owing to non-climatic influences.
A second study analyses the potential invasion extent of spotted knapweed (Centaurea maculosa), a European weed first introduced into North America in the 1890s. The study provides some of the first empirical evidence that an invasive species can occupy climatically distinct niche space following its introduction into a new area. Models calibrated on the native range fail to predict the current full extent of the invasion, but correctly predict the areas of introduction. An alternative approach, involving the calibration of models with pooled data from both ranges, is proposed to improve predictions of the extent of invasion over models based solely on the native range. I finally present a review of the dynamic nature of ecological niches in space and time. It synthesizes recent theoretical developments on niche conservatism and proposes solutions to improve confidence in NBM predictions of the impacts of climate change and species invasions on species distributions.
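The thesis's own model implementations are not described in enough detail above to reproduce. The sketch below is only a minimal illustration of the general niche-based modeling workflow, calibrating a presence/absence model on current climate predictors and projecting it onto a future scenario; the data, predictors and the logistic-regression choice are all hypothetical.

```python
# Minimal niche-based model (NBM) sketch: calibrate a presence/absence model on
# current climate and project it onto a future climate scenario.
# Hypothetical data; real studies use curated occurrence records and GCM layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: one row per site, columns = climate predictors
# (e.g. annual mean temperature, annual precipitation).
X_current = rng.normal(size=(500, 2))
# Hypothetical presences: the species favours warm, wet sites in this toy example.
y = (X_current @ np.array([1.5, 1.0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

nbm = LogisticRegression().fit(X_current, y)          # calibration on the current range

# Hypothetical future climate for the same sites (e.g. uniform warming).
X_future = X_current + np.array([2.0, 0.0])

p_now = nbm.predict_proba(X_current)[:, 1]            # current habitat suitability
p_2050 = nbm.predict_proba(X_future)[:, 1]            # projected suitability in 2050

# Simple range-change summary: fraction of sites losing or gaining suitability.
threshold = 0.5
lost = np.mean((p_now >= threshold) & (p_2050 < threshold))
gained = np.mean((p_now < threshold) & (p_2050 >= threshold))
print(f"suitable sites lost: {lost:.1%}, gained: {gained:.1%}")
```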
Abstract:
This paper presents a statistical model for the quantification of the weight of fingerprint evidence. In contrast to previous (generative and score-based) models, our model estimates the probability distributions of the spatial relationships, directions and types of minutiae observed on fingerprints for any given fingermark. Our model relies on an AFIS algorithm provided by 3M Cogent and on a dataset of more than 4,000,000 fingerprints to represent a sample from a relevant population of potential sources. The performance of our model was tested using several hundred minutiae configurations observed on a set of 565 fingermarks. In particular, the effects of various sub-populations of fingers (i.e., finger number, finger general pattern) on the expected evidential value of our test configurations were investigated. The performance of our model indicates that the spatial relationship between minutiae carries more evidential weight than their type or direction. Our results also indicate that the AFIS component of our model directly enables us to assign weight to fingerprint evidence without the need for the additional layer of complex statistical modeling involved in estimating the probability distributions of fingerprint features. In fact, the AFIS component appears to be more sensitive to sub-population effects than the other components of the model. Overall, the data generated during this research project support the idea that fingerprint evidence is a valuable forensic tool for the identification of individuals.
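The paper's feature-based model is not reproduced here. As a generic illustration of how a weight of evidence can be expressed as a likelihood ratio from two estimated probability distributions, the sketch below uses hypothetical AFIS-style comparison scores; note that the model described above works on minutiae features rather than on scores, so this is only a conceptual example.

```python
# Generic weight-of-evidence sketch: likelihood ratio of an observed comparison
# score under the same-source vs. different-source propositions.
# Hypothetical score distributions; the paper's model works on minutiae features,
# not on scores, so this only illustrates the likelihood-ratio concept.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

same_source_scores = rng.normal(loc=8.0, scale=1.5, size=2000)   # hypothetical
diff_source_scores = rng.normal(loc=3.0, scale=1.0, size=2000)   # hypothetical

f_same = gaussian_kde(same_source_scores)   # density under the same-source proposition
f_diff = gaussian_kde(diff_source_scores)   # density under the different-source proposition

def likelihood_ratio(score: float) -> float:
    """LR > 1 supports the same-source proposition, LR < 1 the alternative."""
    return (f_same(score) / f_diff(score)).item()

print(likelihood_ratio(7.0))   # strongly supports same source in this toy setup
print(likelihood_ratio(3.5))   # closer to the different-source population
```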
Abstract:
There is as yet no definitive theory for the mechanism by which the pattern of epidermal ridges on fingers, palms and soles, the friction ridge skin (FRS) pattern, is created. Growth forces in the embryonic epidermis have long been believed to be involved in FRS formation, and more recent evidence suggests that Merkel cells also play an important part in this process. Here we suggest a model for the formation of FRS patterns that links Merkel cells to the epidermal stress distribution. The Merkel cells are modeled as agents in an agent-based model that move anisotropically, with the anisotropy created by the epidermal stress tensor. The resulting ridge patterns contain pattern defects of the kind that occur in real FRS patterns. As a consequence, we suggest why the topology of FRS patterns is indeed unique: the arrangement of pattern defects is sensitive to the initial configuration of Merkel cells.
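A minimal sketch of the mechanism described above, assuming a hypothetical smooth stress field: point agents standing in for Merkel cells take random steps whose covariance follows the local 2x2 stress tensor, so their motion is anisotropic along the principal stress direction. The parameters and the stress field are illustrative only, not those of the paper.

```python
# Minimal agent-based sketch: agents ("Merkel cells") take random steps whose
# covariance is elongated along the local principal direction of a prescribed
# 2x2 stress tensor, i.e. their motion is anisotropic.
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_steps = 200, 100
pos = rng.uniform(0.0, 1.0, size=(n_agents, 2))   # initial agent positions

def stress_tensor(x: float, y: float) -> np.ndarray:
    """Hypothetical smooth stress field; the principal direction rotates with x."""
    theta = np.pi * x
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    principal = np.diag([1.0, 0.2])                # strong/weak principal stresses
    return R @ principal @ R.T

for _ in range(n_steps):
    for i, (x, y) in enumerate(pos):
        sigma = stress_tensor(x, y)
        # Step covariance proportional to the stress tensor -> anisotropic motion.
        step = rng.multivariate_normal(mean=[0.0, 0.0], cov=0.0005 * sigma)
        pos[i] = np.clip(pos[i] + step, 0.0, 1.0)

# After many steps, agent trajectories align with the local principal stress
# direction; in the full model this alignment is what organizes ridge patterns.
print(pos[:5])
```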
Abstract:
There is a wide range of evidence to suggest that permeability can be constrained through induced polarization measurements. For clean sands and sandstones, current mechanistic models of induced polarization predict a relationship between the low-frequency time constant inferred from induced polarization measurements and the grain diameter. A number of observations, however, disagree with this and indicate that the observed relaxation behavior is instead governed by the so-called dynamic pore radius Λ. To test this hypothesis, we have developed a set of new scaling relationships, which allow the relaxation time to be computed from the pore size and the permeability to be computed from both the Cole-Cole time constant and the formation factor. Moreover, these new scaling relationships can also be used to predict the dependence of the Cole-Cole time constant on water saturation under unsaturated conditions. Comparative tests of the proposed relationships against various published experimental results for saturated clean sands and sandstones, as well as for partially saturated clean sandstones, do indeed confirm that the dynamic pore radius Λ is a much more reliable indicator of the observed relaxation behavior than grain-size-based models.
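As an illustration of the kind of scaling relationships discussed above, the sketch below assumes the commonly cited forms k ≈ Λ²/(8F) for permeability and τ ≈ Λ²/(2D) for the relaxation time; the exact coefficients, and the saturation dependence derived in the paper, may well differ from these.

```python
# Illustrative scaling relations, assuming the commonly cited forms
# k ~ Lambda^2 / (8 F) (permeability from the dynamic pore radius Lambda and the
# formation factor F) and tau ~ Lambda^2 / (2 D) (relaxation time from Lambda and
# an effective ion diffusion coefficient D). The paper's coefficients may differ.
def permeability_from_pore_radius(Lambda_m: float, F: float) -> float:
    """Permeability in m^2 from dynamic pore radius Lambda (m) and formation factor F."""
    return Lambda_m**2 / (8.0 * F)

def relaxation_time(Lambda_m: float, D_m2_per_s: float) -> float:
    """Cole-Cole-type relaxation time in s from Lambda (m) and diffusion coefficient D."""
    return Lambda_m**2 / (2.0 * D_m2_per_s)

# Example: a clean sand with Lambda = 20 micrometres, F = 4, D = 1.3e-9 m^2/s.
Lambda = 20e-6
print(permeability_from_pore_radius(Lambda, F=4.0))   # ~1.3e-11 m^2 (~13 darcy)
print(relaxation_time(Lambda, D_m2_per_s=1.3e-9))     # ~0.15 s
```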
Abstract:
Statistical properties of binary complex networks are well understood, and recently many attempts have been made to extend this knowledge to weighted networks. There are, however, subtle yet important considerations to be made regarding the nature of the weights used in this generalization. Weights can be either continuous or discrete magnitudes, and in the latter case they can additionally be of undistinguishable or distinguishable nature. This fact has not been addressed in the literature thus far and has deep implications for the network statistics. In this work we address this problem by introducing multiedge networks: graphs in which multiple (distinguishable) connections between nodes are considered. We develop a statistical mechanics framework in which information about the most relevant observables can be obtained given a large spectrum of linear and nonlinear constraints, including those depending on both the number of multiedges per link and their binary projection. The latter case is particularly interesting, as we show that binary projections can be understood from multiedge processes. The implications of these results are important, as many real agent-based problems mapped onto graphs require this treatment for a proper characterization of their collective behavior.
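The statistical-mechanics framework itself is analytical; as a minimal numeric illustration of the distinction drawn above, the sketch below generates a multiedge network with integer (distinguishable) edge multiplicities, drawn from a hypothetical Poisson choice, and compares it with its binary projection.

```python
# Minimal illustration of a multiedge network vs. its binary projection.
# Each node pair carries an integer number of distinguishable edges (here drawn
# from a Poisson distribution, a hypothetical choice); the binary projection
# only records whether at least one edge is present.
import numpy as np

rng = np.random.default_rng(7)
n = 6
mean_multiplicity = 0.8                      # hypothetical expected edges per pair

# Symmetric matrix of edge multiplicities (multiedge adjacency), no self-loops.
W = np.triu(rng.poisson(mean_multiplicity, size=(n, n)), k=1)
W = W + W.T

A = (W > 0).astype(int)                      # binary projection of the multiedges

print("total multiedges:", W.sum() // 2)
print("binary links:    ", A.sum() // 2)     # always <= total multiedges
print("strength of node 0:", W[0].sum(), "degree of node 0:", A[0].sum())
```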
Abstract:
The depositional stratigraphy of within-channel deposits in sandy braided rivers is dominated by a variety of barforms (both singular 'unit' bars and complex 'compound' bars), as well as the infill of individual channels (herein termed 'channel fills'). The deposits of bars and channel fills define the key components of facies models for braided rivers and their within-channel heterogeneity, knowledge of which is important for reservoir characterization. However, few studies have sought to address the question of whether the deposits of bars and channel fills can be readily differentiated from each other. This paper presents the first quantitative study to achieve this aim, using aerial images of an evolving modern sandy braided river and geophysical imaging of its subsurface deposits. Aerial photographs taken between 2000 and 2004 document the abandonment and fill of a 1.3 km long, 80 m wide anabranch channel in the sandy braided South Saskatchewan River, Canada. Upstream river regulation traps the majority of very fine sediment and there is little clay (<1%) in the bed sediments. Channel abandonment was initiated by a series of unit bars that stalled and progressively blocked the anabranch entrance, together with dune deposition and stacking at the anabranch entrance and exit. Complete channel abandonment and subsequent fill of up to 3 m of sediment took approximately two years. Thirteen kilometres of ground-penetrating radar surveys, coupled with 18 cores, were obtained over the channel fill and an adjacent 750 m long, 400 m wide compound bar, enabling a quantitative analysis of the channel and bar deposits. Results show that, in terms of grain-size trends, facies proportions and scale of deposits, there are only subtle differences between the channel fill and bar deposits, which therefore renders them indistinguishable. Thus, it may be inappropriate to assign different geometric and sedimentological attributes to channel fill and bar facies in object-based models of sandy braided river alluvial architecture.
Abstract:
Ignition and the propagation of combustion in a particle bed are studied in order to improve fire safety and to understand and develop the operation of combustion devices that use solid fuels. The aim of this study is to compile experimental and theoretical results on ignition and flame front propagation that support the development and design of fixed-bed combustion and gasification equipment. The work is a preliminary study for the experimental and theoretical parts that follow, and it focuses in particular on wood-based fuels. Targets for reducing carbon dioxide emissions, together with the increasing use of solid waste for energy and the reduction of landfill disposal, will increase fixed-bed combustion in the near future. Because transport distances must be optimized, fairly small combustion plants have to be built, and for these fixed-bed combustion technology is the most economical option. According to Semenov's definition, the ignition point is the state and moment at which the net energy produced per unit time by the reactions between fuel and oxygen equals the net energy flux transferred to the surroundings. Spontaneous ignition means ignition caused by an increase in the ambient temperature or pressure. Forced (piloted) ignition occurs when, for example, a flame or a glowing solid object near the ignition point causes local ignition and the spread of an ignition front to the rest of the fuel. Experimental research has identified the most important factors affecting ignition and ignition front propagation as the moisture content, volatile matter content and heating value of the fuel, the porosity of the particle bed, the size and shape of the particles, the radiative heat flux density on the fuel surface, the gas flow velocity in the bed, the oxygen fraction in the surroundings, and the preheating of the combustion air. Increasing moisture raises the ignition energy and temperature and lengthens the ignition time. The more volatile matter the fuel contains, the lower the temperature at which it ignites. Ignition and ignition front propagation become faster as the heating value of the fuel increases. Increasing bed porosity has been observed to increase the propagation rate of combustion. Small particles generally ignite faster and at lower temperatures than large ones, and ignition front propagation accelerates as the surface-area-to-volume ratio of the particles increases. In many combustion applications the radiative heat flux density is the most significant heat transfer factor, and its increase naturally accelerates ignition. The flow velocity of air and combustion gases in the bed affects convective heat transfer and the oxygen concentration in the ignition zone; an air flow can cool the bed, while a flow of hot gas heats it. An increasing oxygen fraction accelerates ignition and flame front propagation until a state is reached beyond which higher flows of air cool and dilute the reaction zone. Preheating the combustion air accelerates the propagation of the ignition front. Ignition and flame front propagation are usually described with empirical models or with models based on conservation equations. Empirical models are based on correlations derived from measurement data and on certain known physical laws. In conservation-equation-based models, conservation equations for mass, energy, momentum and the chemical elements are written for the system, and the transfer equations describing their rates are formed using relations obtained from theoretical and experimental research. These modeling classes partly overlap.
The ignition of surfaces is often described with conservation-equation-based models, whereas the modeling of particle beds relies mostly on empirical equations. Among the models describing particle beds, Xie and Liang's study of the ignition of a coal particle bed and Gort's study of reaction front propagation in the combustion of wood and waste come closest to conservation-equation-based modeling. In all models, however, the real case has to be simplified, for example by reducing the number of dimensions, reactions and compounds and by eliminating the less significant transfer mechanisms. Few studies on ignition and combustion propagation directly serve fixed-bed combustion and gasification; in studies carried out for other purposes, the fuels, beds and ambient conditions usually differ clearly from the corresponding conditions in combustion devices. The co-combustion of fuel particles of different sizes and of fuels with different properties has hardly been studied at all. There is little research on the effect of fuel particle shape, and no studies were found on the effects of air channeling.
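A minimal numeric sketch of Semenov's ignition criterion as defined in the abstract above: ignition corresponds to the state where the heat released by the fuel-oxygen reactions can no longer be balanced by the heat lost to the surroundings. All parameter values below are hypothetical and chosen only to make the balance illustrative.

```python
# Minimal numeric sketch of the Semenov ignition criterion: ignition occurs when
# Arrhenius-type heat generation can no longer be balanced by heat loss to the
# surroundings. All parameter values are hypothetical.
import numpy as np

R = 8.314      # J/(mol K), gas constant
E = 1.1e5      # J/mol, hypothetical activation energy
A = 5.0e9      # W, hypothetical pre-exponential heat-release factor
h_S = 2.0      # W/K, hypothetical heat-loss conductance to the surroundings

def self_ignites(T_ambient: float) -> bool:
    """True if heat generation exceeds heat loss at every temperature above
    ambient, i.e. no stable balance point exists and the fuel bed runs away
    thermally (Semenov's criterion)."""
    T = np.linspace(T_ambient, T_ambient + 500.0, 2001)
    q_gen = A * np.exp(-E / (R * T))          # heat generation rate, W
    q_loss = h_S * (T - T_ambient)            # heat loss rate, W
    return bool(np.all(q_gen > q_loss))

# Lowest ambient temperature at which this hypothetical bed self-ignites.
T_crit = next(T_a for T_a in np.arange(300.0, 900.0, 1.0) if self_ignites(T_a))
print(f"critical ambient temperature for self-ignition: {T_crit:.0f} K")
```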
Abstract:
Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent the endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. The patient-based simulation consisted of a left ventricle with a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and of left ventricular contraction in myocardial perfusion SPECT and gated-SPECT studies.
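A minimal sketch of the surface representation described above, under a hypothetical parameterization: a bicubic spline is fitted to a grid of control points giving the endocardial radius as a function of circumferential angle and long-axis position, then evaluated on a finer grid from which an activity template could be built. The control values are illustrative, not the paper's geometry.

```python
# Bicubic spline fitted to control points describing an endocardial-like surface.
# Hypothetical control radii: roughly ellipsoidal cavity, narrowing at the apex.
import numpy as np
from scipy.interpolate import RectBivariateSpline

phi = np.linspace(0.0, 2.0 * np.pi, 9)          # circumferential angle of control points
z = np.linspace(0.0, 8.0, 9)                    # long-axis position (cm), apex to base

radius_ctrl = 2.5 * np.sqrt(np.clip(1.0 - ((z - 8.0) / 8.0) ** 2, 0.05, 1.0))
R_ctrl = np.tile(radius_ctrl, (phi.size, 1))    # same radius at every angle here

endo = RectBivariateSpline(phi, z, R_ctrl, kx=3, ky=3)   # bicubic surface

# Evaluate the smooth surface on a finer grid, e.g. to build the activity template.
phi_f = np.linspace(0.0, 2.0 * np.pi, 64)
z_f = np.linspace(0.0, 8.0, 64)
R_f = endo(phi_f, z_f)                          # (64, 64) array of radii
print(R_f.shape, R_f.min(), R_f.max())
```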
Abstract:
One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. We have developed an index of the aggregation of moving individuals in a flock and have provided an example of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock.
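The paper's specific aggregation index is not reproduced here. As a hedged illustration of how flock emergence can be quantified from agent states, the sketch below computes two common measures: the polarization order parameter (alignment of headings) and the mean nearest-neighbour distance (spatial aggregation).

```python
# Two common measures of flock emergence computed from agent positions/velocities.
# These are standard illustrative quantities, not the index developed in the paper.
import numpy as np

def polarization(velocities: np.ndarray) -> float:
    """1.0 when all agents move in the same direction, ~0 when headings are random."""
    headings = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    return float(np.linalg.norm(headings.mean(axis=0)))

def mean_nearest_neighbour_distance(positions: np.ndarray) -> float:
    """Average distance from each agent to its closest neighbour."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(d.min(axis=1).mean())

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 10.0, size=(50, 2))
vel_random = rng.normal(size=(50, 2))                                   # disordered group
vel_aligned = np.tile([1.0, 0.2], (50, 1)) + 0.05 * rng.normal(size=(50, 2))

print(polarization(vel_random))    # low: no flock yet
print(polarization(vel_aligned))   # close to 1: a flock has emerged
print(mean_nearest_neighbour_distance(pos))
```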
Abstract:
This thesis focuses on the social-psychological factors that help coping with structural disadvantage, and specifically on the role of cohesive ingroups and the sense of connectedness and efficacy they entail in this process. It aims to complement existing group-based models of coping that are grounded in a categorization perspective on groups and consequently focus exclusively on the large-scale categories made salient in intergroup contexts of comparison. The dissertation accomplishes this aim through a reconsideration of between-persons relational interdependence as a sufficient and independent antecedent of a sense of groupness, and of the benefits that a sense of group connectedness in one's direct environment, regardless of the categorical or relational basis of groupness, might have in the everyday struggles of disadvantaged group members. The three empirical papers aim to validate this approach, outlined in the theoretical introduction, by testing derived hypotheses. They are based on data collected with youth populations (aged 15-30) from three institutions in French-speaking Switzerland within the context of a larger project on youth transitions. Data were collected through paper-and-pencil questionnaires and in-depth interviews with a selected sub-sample of participants. The key argument of the first paper is that members of socially disadvantaged categories face higher barriers to their life projects and that a general sense of connectedness, whether based on categorical identities or on other proximal groups and relations, mitigates the feeling of powerlessness associated with this experience. The second paper develops and tests a model that defines individual need satisfaction as an antecedent of self-group bonds, and the efficacy beliefs derived from these intragroup bonds as the mechanism underlying the role of ingroups in coping. The third paper highlights the complexities that may be associated with the construction of a sense of groupness directly from intergroup comparisons and categorization-based disadvantage, and points toward a more nuanced understanding of the processes underlying the emergence of groupness out of situations of structural disadvantage. Overall, the findings confirm the central role of ingroups in coping with structural disadvantage and the importance of an understanding of groupness, and of its role, that goes beyond the dominant focus on intergroup contexts and categorization processes.
Abstract:
Recent studies have demonstrated that the use of paramagnetic hepatobiliary contrast agents in the acquisition of magnetic resonance images remarkably improves the detection and differentiation of focal liver lesions, as compared with extracellular contrast agents. Paramagnetic hepatobiliary contrast agents initially show the perfusion of the lesions, as extracellular agents do, but delayed contrast-enhanced images can demonstrate contrast uptake by functional hepatocytes, providing further information for a better characterization of the lesions. Additionally, this intrinsic characteristic increases the accuracy of the detection of hepatocellular carcinomas and metastases, particularly small ones. Recently, a hepatobiliary contrast agent called gadolinium ethoxybenzyl dimeglumine, known simply as gadoxetic acid, was approved by the National Health Surveillance Agency for use in humans. The authors present a literature review and a practical approach to magnetic resonance imaging using gadoxetic acid as a contrast agent, based on patient images acquired during their initial experience.
Abstract:
A user interface is the boundary between the user and the functions offered by a system, and its quality affects how those functions are carried out, either positively or negatively. It is therefore worthwhile, during the design phase of an application, to evaluate the quality of the user interface and its functions and to test ideas by building prototypes. Prototyping makes it possible to identify and correct potential problems while still at the drawing board. This master's thesis deals with the prototyping of a user interface and its functions carried out during the development of a web application. User interfaces can be modeled with various methods, which the thesis reviews from a technological perspective, that is, how prototyping methods can be applied at different stages of a project. A brief overview of tools used to support prototyping is given, presenting at a general level a few applications from different software categories, and the use of design patterns is also discussed. The work shows that general prototyping methods and principles can be applied to the prototyping of web applications. It is useful to start prototyping with sketches and to move at an early stage to HTML mock-ups, which come close to the implementation technologies and allow the character, look, feel and interaction of the application to be modeled. HTML prototypes can be refined into mixed-fidelity models, and they serve as a basis for the implementation. In further development, ideas can be presented with techniques of varying fidelity.
Abstract:
This work presents the most widely used production philosophies. Production philosophy is a very broad concept, and some of the methods presented are therefore very far apart from one another. The work consists of a theory section, in which each production philosophy is presented, and a concluding section that discusses how the methods relate to one another. The philosophies covered are JIT/JOT production, Lean production, Monozukuri, modularization, standardization, strategy work, Six Sigma, TQM, TPM, QFD, MFD, simulation, digital manufacturing, DFX and the so-called new production philosophies. Source material on the different methods is abundant, so only the main features of each method could be presented. Production philosophies can be used to achieve many different goals: some of the methods were created to make production more efficient and simpler, some raise the quality level of production or of the whole company, and some ease product design work. Many of the philosophies presented do not fit exclusively into one of these categories but cover broader areas, in some cases encompassing all three of the outcomes mentioned. In addition, the work briefly presents new production philosophies, which are somewhat separate entities compared with the other philosophies presented. The purpose of the work is to help the reader grasp the large whole that the production philosophies form. It is important to recognize how the philosophies depend on one another and that adopting one production philosophy may also mean taking many other matters into account. This viewpoint is clarified in the conclusions.
Abstract:
The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initiation and localization of atherosclerotic plaque. Blood flow dynamics were numerically simulated in three idealized and two realistic models of the human thoracic aorta. The idealized models were reconstructed from measurements available in the literature, and the realistic models were constructed by processing computed tomographic (CT) images made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation and 3D surface rendering, and additional design operations were performed to make the aorta models compatible with the numerical computer codes. The image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity profiles were applied as inlet boundary conditions, and the blood was assumed to be a homogeneous, incompressible, Newtonian fluid. The simulations with the idealized models were carried out with a Finite Element Method based computer code, while the simulations with the realistic models were carried out with a Finite Volume Method based computer code. Simulations were carried out for four cardiac cycles, and the distributions of flow, pressure and Wall Shear Stress (WSS) observed during the fourth cycle were extensively analyzed. The aim of the simulations with the idealized models was to obtain an estimate of the flow dynamics in a realistic aorta model, and three models with distinct features were chosen in order to understand the dependence of flow dynamics on aortic anatomy. A highly disturbed and nonuniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid and left subclavian arteries. The WSS profiles at the roots of the branches showed significant differences as the geometry of the aorta and its branches varied, and the comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the model with curved branches. Significant differences were also observed in the spatial and temporal profiles of WSS, flow and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under the effects of hypertension and hypotension: one of the idealized aorta models was modified, along with the boundary conditions, to mimic these conditions. The simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch, and the flow developed secondary flow patterns as it moved downstream towards the arch. Unlike in the idealized models, the flow distribution was nonplanar and heavily guided by the arterial anatomy. Flow cavitation was observed in the aorta model whose imaging included longer branches; it could not be properly observed in the model whose imaging captured only a shorter length of the aortic branches.
Flow recirculation was also observed at the inner wall of the aortic arch. During diastole, however, the flow profiles were almost flat and regular due to the acceleration of flow at the inlet, and they were weakly turbulent during flow reversal. The complex flow patterns caused a non-uniform distribution of WSS: high WSS occurred at the junctions of the branches and the aortic arch, low WSS at the proximal part of the junctions, and intermediate WSS at the distal part of the junctions. The pulsatile nature of the inflow caused oscillating WSS at the branch entry regions and the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry region and the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery and the aortic arch. The aim of this part of the study was to examine the effect of stenosis on the flow and WSS distributions, to understand the effect of the shape of the atherosclerotic plaque, and to investigate the effect of the severity of lumen blockage. The results revealed that the WSS distribution is significantly affected by plaque causing a mere 50% stenosis, and that an asymmetric stenosis causes higher WSS in the branching arteries than a symmetric plaque. The flow dynamics within the thoracic aorta models have thus been extensively studied and reported here, and the effects of pressure and arterial anatomy on the flow dynamics were investigated. The distribution of complex flow and WSS correlates with the localization of atherosclerosis. From the available results we conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis, and that flow dynamics and arterial anatomy play a role in this localization. Patient-specific, image-based models can be used to identify locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
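The simulations above were run in FEM/FVM solvers and are not reproduced here. The sketch below only illustrates, with hypothetical numbers, two ingredients that recur throughout the study: a pulsatile inlet velocity waveform over one cardiac cycle and the Newtonian wall shear stress estimated from a near-wall velocity gradient.

```python
# Illustrative pulsatile inlet waveform and Newtonian wall shear stress estimate.
# The waveform shape and all numbers are hypothetical; the actual study used
# FEM/FVM solvers with patient- or literature-derived boundary conditions.
import numpy as np

mu = 3.5e-3                     # Pa*s, dynamic viscosity of blood (Newtonian assumption)
T_cycle = 0.8                   # s, duration of one cardiac cycle (75 bpm)

def inlet_velocity(t: float) -> float:
    """Hypothetical pulsatile inlet velocity (m/s): a systolic peak followed by
    near-zero diastolic flow."""
    phase = (t % T_cycle) / T_cycle
    systole = np.exp(-((phase - 0.15) / 0.08) ** 2)    # Gaussian systolic pulse
    return 1.2 * systole + 0.05

def wall_shear_stress(u_near_wall: float, wall_distance: float) -> float:
    """WSS (Pa) for a Newtonian fluid: tau_w = mu * du/dy, with du/dy
    approximated by the near-wall velocity divided by the wall distance."""
    return mu * u_near_wall / wall_distance

t = np.linspace(0.0, T_cycle, 9)
for ti in t:
    u = inlet_velocity(ti)
    print(f"t = {ti:.2f} s, inlet u = {u:.2f} m/s, "
          f"WSS ~ {wall_shear_stress(0.3 * u, 5e-4):.2f} Pa")
```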