45 results for flow-based
Abstract:
In this master's thesis, the aim was to model the operation of a pulping drum using the Fluent flow modelling software. Previous knowledge and development work have been based on experience and practical trials. In the early stages of the development work, a few calculations concerning the drum's throughput were carried out, but since then its operation has not been described computationally. The first part of the thesis covers, in general terms, equipment and methods related to the processing of recovered paper. Operating principles are described at a general level, and the FibreFlow® drum is then treated in more detail than the other equipment. The second part of the thesis contains laboratory tests for determining the viscosities and densities of samples obtained from local mills. The tests were performed at the Department of Chemical Technology with a Brookfield viscometer. Some preliminary throughput calculations were carried out on the basis of earlier data; since the drum has been manufactured since 1976, a great deal of data has accumulated over the years. For the calculations, only a single hole in the screen was chosen as the modelled region, and the mass flow through it was calculated. The grades used were OCC and DIP. Different drum sizes were also taken into account to some extent.
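The abstract does not state how the single-hole mass flow was estimated; as a minimal sketch only, assuming ordinary incompressible orifice flow through one screen hole, the relation would be

\[ \dot{m} = C_d \, A_{hole} \, \sqrt{2 \rho \, \Delta p} \]

where \( \dot{m} \) is the mass flow through the hole, \( C_d \) an assumed discharge coefficient, \( A_{hole} \) the open area of the hole, \( \rho \) the suspension density (measured in the laboratory part) and \( \Delta p \) the pressure difference across the screen plate.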
Abstract:
Demand for new paper machines has declined, and the importance of after-sales services, such as maintenance and spare part sales, in the paper machine business has recently grown further. New types of services are continuously being developed to increase competitive advantage. An example of such a service is a contract-based warehousing service, in which parts remain in the seller's stock until the customer takes them into use. The objective of this master's thesis is to build a model for warehousing cost accounting and to use it to calculate the costs of the warehousing service. According to the current view, a traditional supply chain with its many warehouses is no longer cost-effective. An increasing number of companies in trade and industry have started to apply VMI (Vendor Managed Inventory) principles in their supply chains. Inventories are then centralised, information flows quickly between the tiers of the supply chain, and demand can be met with a shorter delay because its predictability improves. The result of the thesis is a cost accounting model based on activity-based costing, which can also be used when making pricing decisions. The thesis presents the application of the model to different cases and proposes further measures.
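The abstract does not disclose the structure of the actual costing model, but a minimal sketch of the activity-based costing idea (activity cost accumulated as driver volume times driver rate) might look as follows; the activity names, volumes and rates below are hypothetical:

```python
# Illustrative sketch of activity-based costing for a warehousing service.
# The activities, driver volumes and rates are hypothetical examples; the
# thesis's actual cost model is not disclosed in the abstract.

def abc_cost(activities):
    """Sum (driver volume x cost per driver unit) over all activities."""
    return sum(volume * rate for volume, rate in activities.values())

# activity: (annual driver volume, cost per driver unit in EUR)
warehousing = {
    "receiving (order lines)":   (1200, 4.0),
    "storage (pallet-days)":     (45000, 0.15),
    "picking (order lines)":     (900, 5.5),
    "capital tied up (EUR)":     (250000, 0.08),  # holding rate on stock value
}

print(f"Annual warehousing service cost: {abc_cost(warehousing):.0f} EUR")
```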
Abstract:
Previous studies have demonstrated that clinical pulpal pain can induce the expression of pro-inflammatory neuropeptides in the adjacent gingival crevice fluid (GCF). Vasoactive agents such as substance P (SP) are known to contribute to the inflammatory type of pain and are associated with increased blood flow. More recent animal studies have shown that application of capsaicin on alveolar mucosa provokes pain and neurogenic vasodilatation in the adjacent gingiva. Pain-associated inflammatory reactions may initiate expression of several pro- and anti-inflammatory mediators. Collagenase-2 (MMP-8) has been considered to be the major destructive protease, especially in periodontitis-affected GCF. MMP-8 originates mostly from neutrophil leukocytes, the first line of defence cells that exist abundantly in GCF, especially in inflammation. With this background, we wished to clarify the spatial extensions and differences between tooth-pain stimulation and capsaicin-induced neurogenic vasodilatation in human gingiva. Experiments were carried out to study whether tooth stimulation and capsaicin stimulation of alveolar mucosa would induce changes in GCF MMP-8 levels and whether tooth stimulation would release neuropeptide SP in GCF. The experiments were carried out on healthy human volunteers. During the experiments, moderate- and high-intensity painful tooth stimulation was performed by a constant current tooth stimulator. Moderate tooth stimulation activates A-delta fibres, while high-intensity stimulation also activates C-fibres. Painful stimulation of the gingiva was achieved by topical application of capsaicin-moistened filter paper on the mucosal surface. Capsaicin is known to selectively activate nociceptive C-fibres of the stimulated tissue. Pain-evoked vasoactive changes in gingivomucosal tissues were mapped by laser Doppler imaging (LDI), which is a sophisticated and non-invasive method for studying, e.g., spatial and temporal characteristics of pain- and inflammation-evoked blood flow changes in gingivomucosal tissues. Pain-evoked release of MMP-8 in GCF samples was studied by immunofluorometric assay (IFMA) and Western immunoblotting. The SP levels in GCF were analysed by enzyme immunoassay (EIA). During the experiments, subjective stimulus-evoked pain responses were determined by a visual analogue pain scale. Unilateral stimulation of alveolar mucosa and attached gingiva by capsaicin evoked a distinct neurogenic vasodilatation in the ipsilateral gingiva, which attenuated rapidly at the midline. Capsaicin stimulation of alveolar mucosa provoked clear inflammatory reactions. In contrast to capsaicin stimuli, tooth stimulation produced symmetrical vasodilatations bilaterally in the gingiva. The ipsilateral responses were significantly smaller during tooth stimulation than during capsaicin stimuli. The current finding – that tooth stimulation evokes bilateral vasodilatation while capsaicin stimulation of the gingiva mainly produces unilateral vasodilatation – emphasises the usefulness of LDI in clarifying spatial features of neurogenic vasoactive changes in the intra-oral tissues. Capsaicin stimulation of the alveolar mucosa induced significant elevations in MMP-8 levels and activation in GCF of the adjacent teeth. During the experiments, no marked changes occurred in MMP-8 levels in the GCF of distantly located teeth. Painful stimulation of the upper incisor provoked elevations in GCF MMP-8 and SP levels of the stimulated tooth.
The GCF MMP-8 and SP levels of the non-stimulated teeth were not changed. These results suggest that capsaicin-induced inflammatory reactions in gingivomucosal tissues do not cross the midline in the anterior maxilla. The enhanced reaction found during stimulation of alveolar mucosa indicates that alveolar mucosa is more sensitive to chemical irritants than the attached gingiva. Analysis of these data suggests that capsaicin-evoked neurogenic inflammation in the gingiva can trigger the expression and activation of MMP-8 in GCF of the adjacent teeth. In this study, it is concluded that experimental tooth pain at C-fibre intensity can induce local elevations in MMP-8 and SP levels in GCF. Depending on the role of MMP-8 in inflammation, elevated MMP-8 in GCF may, in addition to serving as a surrogate of tissue destruction, also reflect accelerated local defensive and anti-inflammatory reactions.
Abstract:
This thesis is focused on process intensification. Several significant problems and applications of this theme are covered. Process intensification is nowadays one of the most popular trends in chemical engineering and attempts have been made to develop a general, systematic methodology for intensification. This seems, however, to be very difficult, because intensified processes are often based on creativity and novel ideas. Monolith reactors and microreactors are successful examples of process intensification. They are usually multichannel devices in which a proper feed technique is important for creating even fluid distribution into the channels. Two different feed techniques were tested for monoliths. In the first technique a shower method was implemented by means of perforated plates. The second technique was a dispersion method using static mixers. Both techniques offered stable operation and uniform fluid distribution. The dispersion method enabled a wider operational range in terms of liquid superficial velocity. Using the dispersion method, a volumetric gas-liquid mass transfer coefficient of 2 s⁻¹ was reached. Flow patterns play a significant role in terms of the mixing performance of micromixers. Although the geometry of a T-mixer is simple, channel configurations and dimensions had a clear effect on mixing efficiency. The flow in the microchannel was laminar, but the formation of vortices promoted mixing in micro T-mixers. The generation of vortices was dependent on the channel dimensions, configurations and flow rate. Microreactors offer a high ratio of surface area to volume. Surface forces and interactions between fluids and surfaces are, therefore, often dominant factors. In certain cases, the interactions can be effectively utilised. Different wetting properties of solid materials (PTFE and stainless steel) were applied in the separation of immiscible liquid phases. A micro-scale plate coalescer with hydrophilic and hydrophobic surfaces was used for the continuous separation of organic and aqueous phases. Complete phase separation occurred in less than 20 seconds, whereas the separation time by settling exceeded 30 minutes. Fluid flows can also be intensified under suitable conditions. By adding certain additives into turbulent fluid flow, it was possible to reduce friction (drag) by 40%. Drag reduction decreases the frictional pressure drop in pipelines, which leads to remarkable energy savings and decreases the size or number of pumping facilities required, e.g., in oil transport pipes. Process intensification often enables operation under more favourable conditions. The consequent cost savings from reduced use of raw materials and reduced waste lead to greater economic benefits in processing.
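The abstract does not define how the 40% figure was computed, but drag reduction is conventionally reported, at a fixed flow rate, as the relative decrease in frictional pressure drop:

\[ DR\,(\%) = \frac{\Delta p_{0} - \Delta p_{additive}}{\Delta p_{0}} \times 100 \]

where \( \Delta p_{0} \) is the pressure drop of the pure solvent and \( \Delta p_{additive} \) that of the additive-laden flow over the same pipe section; this is a standard definition and not necessarily the exact measure used in the thesis.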
Abstract:
There is an increasing reliance on computers to solve complex engineering problems. This is because computers, in addition to supporting the development and implementation of adequate and clear models, can substantially reduce the financial resources required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of the fluid dynamics problem makes it difficult or impossible to solve the equations of an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process. In this case the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were carried out with the commercial software Fluent, and user-defined functions were added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility for developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole to a venule via a capillary showed that the model based on VOF can successfully predict the deformation and flow of RBCs in an arteriole. Furthermore, the result corresponds to the experimental observation that the RBC is deformed during the movement. The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even higher than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase exert the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The obtained results agreed well with the correlation of Macdonald et al. (1979) for the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
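For orientation only, the Macdonald et al. (1979) correlation mentioned above is an Ergun-type relation; a minimal sketch, assuming the commonly cited smooth-particle constants A = 180 and B = 1.8 in a friction-factor form f = A/Re + B (the thesis's exact definitions based on pore permeability and interstitial velocity are not given in the abstract), is:

```python
# Illustrative sketch: dimensionless pressure drop (friction factor) in a
# packed bed from an Ergun-type correlation. A = 180 and B = 1.8 are the
# smooth-particle constants commonly attributed to Macdonald et al. (1979);
# treat both the form and the constants as assumptions for this example.

def dimensionless_pressure_drop(Re, A=180.0, B=1.8):
    """Friction-factor style dimensionless pressure drop f(Re) = A/Re + B."""
    return A / Re + B

for Re in (0.1, 1, 10, 100):
    print(f"Re = {Re:6.1f}  ->  f = {dimensionless_pressure_drop(Re):8.2f}")
```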
Abstract:
This thesis concentrates on developing a practical local approach methodology based on micromechanical models for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant, and the initial void volume fraction and/or void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built into ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted using the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
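For reference, the Gurson-Tvergaard yield function that underlies the constitutive integration discussed above is commonly written as

\[ \Phi = \frac{\sigma_{eq}^{2}}{\sigma_{y}^{2}} + 2 q_{1} f \cosh\!\left(\frac{3 q_{2} \sigma_{m}}{2 \sigma_{y}}\right) - \left(1 + q_{3} f^{2}\right) = 0, \]

where \( \sigma_{eq} \) is the macroscopic von Mises stress, \( \sigma_{m} \) the hydrostatic (mean) stress, \( \sigma_{y} \) the matrix flow stress, \( f \) the void volume fraction and \( q_{1}, q_{2}, q_{3} \) Tvergaard's parameters; the thesis's own notation may differ in detail.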
Abstract:
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and inabilities to utilize it create risks to patient safety and cost-effective hospital administration. Methods for automated processing of clinical text are emerging. The aim in this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether five machine learning applications for three practical cases are described: The first two applications are binary classification and regression related to the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding. It is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method contributes not only to processing time but also to evaluation diversity and quality. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence the development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
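As an illustration of the kind of binary topic-labeling application mentioned above (not the dissertation's actual pipeline or data; the texts, labels and model choice below are invented for this sketch):

```python
# Illustrative sketch of binary topic labeling for clinical free text,
# assuming a simple bag-of-words pipeline. The example texts and labels are
# invented placeholders, not patient data from the dissertation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "patient ventilated overnight, no complications",
    "blood pressure dropped, noradrenaline started",
    "family visited in the afternoon",
    "sedation reduced, patient responsive",
]
labels = [1, 1, 0, 1]  # 1 = relevant to the chosen topic, 0 = not relevant

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["patient remains on ventilator"]))
```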
Abstract:
Virtually every cell and organ in the human body is dependent on a proper oxygen supply. This is taken care of by the cardiovascular system, which supplies tissues with oxygen precisely according to their metabolic needs. Physical exercise is one of the most demanding challenges the human circulatory system can face. During exercise, skeletal muscle blood flow can easily increase some 20-fold, and its proper distribution to and within muscles is important for optimal oxygen delivery. The local regulation of skeletal muscle blood flow during exercise remains little understood, but adenosine and nitric oxide may take part in this process. In addition to acute exercise, long-term vigorous physical conditioning also induces changes in the cardiovasculature, which leads to improved maximal physical performance. The changes are largely central, such as structural and functional changes in the heart. The function and reserve of the heart's own vasculature can be studied by adenosine infusion, which according to animal studies evokes vasodilation via its A2A receptors. This has, however, never been addressed in humans in vivo, and studies in endurance athletes have shown inconsistent results regarding the effects of sport training on myocardial blood flow. This study was performed on healthy young adults and endurance athletes, and local skeletal and cardiac muscle blood flow was measured by positron emission tomography. In the heart, myocardial blood flow reserve and adenosine A2A receptor density were measured, and in skeletal muscle, oxygen extraction and consumption were also measured. The role of adenosine in the control of skeletal muscle blood flow during exercise, and its vasodilator effects, were addressed by infusing competitive inhibitors and adenosine into the femoral artery. The formation of skeletal muscle nitric oxide was also inhibited by a drug, with and without prostanoid blockade. As a result and conclusion, it can be said that skeletal muscle blood flow heterogeneity decreases with increasing exercise intensity, most likely due to increased vascular unit recruitment, but exercise hyperemia is a very complex phenomenon that cannot be mimicked by pharmacological infusions, and no single regulator factor (e.g. adenosine or nitric oxide) accounts for a significant part of exercise-induced muscle hyperemia. However, in the present study it was observed for the first time in humans that nitric oxide is an important regulator not only of the basal level of muscle blood flow but also of oxygen consumption, and that together with prostanoids it affects muscle blood flow and oxygen consumption during exercise. Finally, even vigorous endurance training does not seem to lead to a supranormal myocardial blood flow reserve, and receptors other than A2A also mediate the vasodilator effects of adenosine. With respect to cardiac work, the athlete's heart seems to be luxuriously perfused at rest, which may result from reduced oxygen extraction or impaired efficiency due to the pronouncedly enhanced myocardial mass developed to excel in strenuous exercise.
Abstract:
The aim of the thesis is to analyze traffic flows to China and Russia, and their development, from the point of view of North European companies, using data from a logistics questionnaire. The selected North European companies are large Finnish and Swedish companies. The questionnaire was sent via email to the target group. The study is based on the answers received from the respondent companies in 2006, 2009 and 2010. In the thesis, the Finnish Talouselämä and the Swedish Affärsdata were used as databases to find the target companies for the survey. The respondents were most often logistics managers in the companies. The beginning of the thesis presents concepts of transportation logistics, including container types, trade terms, and axle loads on roads and railways. Information about warehousing types and terminals is also given. After that, general information on Chinese and Russian transportation logistics is presented. Chinese and Russian issues are discussed in two sections. In both of them, economic development, freight transport and the trade balance are analyzed. Some practical examples of factory openings that Finnish and Swedish companies have completed in China and Russia are presented. In the freight transport sections, different transportation modes, logistics outsourcing and problems of transportation logistics are discussed. The results of the thesis show that transportation flows between Europe and China are changing. Freight traffic from China to European countries will strengthen even more from the current base. When it comes to Russia and Europe, traffic flows seem to be shifting from eastbound to westbound traffic, which means that more freight traffic from Russia to Europe is expected in the future. Probable reasons for this are recent factory establishments in Russia, and the company interviews also support this observation. The effects of the economic recession are mainly seen in the lower transportation volumes in 2009.
Abstract:
The purpose of this study is to view credit risk from the financier's point of view in a theoretical framework. Results and aspects of previous studies on measuring credit risk with accounting-based scoring models are also examined. The theoretical framework and previous studies are then used to support the empirical analysis, which aims to develop a credit risk measure for a bank's internal use or a risk management tool for a company to indicate its credit risk to the financier. The study covers a sample of Finnish companies from 12 different industries and four different company categories and employs their accounting information from 2004 to 2008. The empirical analysis consists of a six-stage methodology that uses measures of profitability, liquidity, capital structure and cash flow to determine the financier's credit risk, define five significant risk classes, and produce a risk classification model. The study is confidential until 15.10.2012.
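The actual six-stage model is confidential, but a minimal sketch of the general idea, combining profitability, liquidity, capital structure and cash flow measures into a score that is mapped to five risk classes, could look like this; all weights and thresholds below are hypothetical:

```python
# Illustrative sketch only: a ratio-based credit score binned into five risk
# classes. The ratios, weights and thresholds are invented for the example;
# the thesis's actual methodology and classification are confidential.

def credit_score(profitability, liquidity, equity_ratio, cash_flow_ratio):
    """Weighted sum of standardized accounting ratios (weights are made up)."""
    return (0.35 * profitability + 0.25 * liquidity
            + 0.25 * equity_ratio + 0.15 * cash_flow_ratio)

def risk_class(score):
    """Map a score in [0, 1] to five ordered risk classes (1 = lowest risk)."""
    for cls, threshold in enumerate([0.8, 0.6, 0.4, 0.2], start=1):
        if score >= threshold:
            return cls
    return 5

s = credit_score(profitability=0.7, liquidity=0.6, equity_ratio=0.5, cash_flow_ratio=0.4)
print(f"score = {s:.2f}, risk class = {risk_class(s)}")
```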
Abstract:
This thesis presents a three-dimensional, semi-empirical, steady state model for simulating the combustion, gasification, and formation of emissions in circulating fluidized bed (CFB) processes. In a large-scale CFB furnace, the local feeding of fuel, air, and other input materials, as well as the limited mixing rate of different reactants produce inhomogeneous process conditions. To simulate the real conditions, the furnace should be modelled three-dimensionally or the three-dimensional effects should be taken into account. The only available methods for simulating the large CFB furnaces three-dimensionally are semi-empirical models, which apply a relatively coarse calculation mesh and a combination of fundamental conservation equations, theoretical models and empirical correlations. The number of such models is extremely small. The main objective of this work was to achieve a model which can be applied to calculating industrial scale CFB boilers and which can simulate all the essential sub-phenomena: fluid dynamics, reactions, the attrition of particles, and heat transfer. The core of the work was to develop the model frame and the required sub-models for determining the combustion and sorbent reactions. The objective was reached, and the developed model was successfully used for studying various industrial scale CFB boilers combusting different types of fuel. The model for sorbent reactions, which includes the main reactions for calcitic limestones, was applied for studying the new possible phenomena occurring in the oxygen-fired combustion. The presented combustion and sorbent models and principles can be utilized in other model approaches as well, including other empirical and semi-empirical model approaches, and CFD based simulations. The main achievement is the overall model frame which can be utilized for the further development and testing of new sub-models and theories, and for concentrating the knowledge gathered from the experimental work carried out at bench scale, pilot scale and industrial scale apparatus, and from the computational work performed by other modelling methods.
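For reference, the main calcitic limestone (sorbent) reactions referred to above are conventionally written as

\[ \mathrm{CaCO_3 \rightarrow CaO + CO_2} \quad \text{(calcination)} \]
\[ \mathrm{CaO + SO_2 + \tfrac{1}{2} O_2 \rightarrow CaSO_4} \quad \text{(sulfation)} \]

In oxygen-fired conditions with a high CO2 partial pressure, phenomena such as direct sulfation of the carbonate and recarbonation of CaO are examples of the additional effects a sorbent model may need to cover; the abstract does not specify which reactions beyond the main ones are included in the thesis's model.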
Abstract:
The aim of this study was to simulate blood flow in the human thoracic aorta and to understand the role of flow dynamics in the initialization and localization of atherosclerotic plaque in the human thoracic aorta. Blood flow dynamics were numerically simulated in three idealized and two realistic models of the human thoracic aorta. The idealized models were reconstructed with measurements available from the literature, and the realistic models were constructed by processing Computed Tomographic (CT) images. The CT images were made available by South Karelia Central Hospital in Lappeenranta. The reconstruction of the thoracic aorta consisted of operations such as contrast adjustment, image segmentation, and 3D surface rendering. Additional design operations were performed to make the aorta model compatible with the numerical computer code. The image processing and design operations were performed with specialized medical image processing software. Pulsatile pressure and velocity profiles were deployed as inlet boundary conditions. The blood flow was assumed homogeneous and incompressible, and the blood was assumed to be a Newtonian fluid. The simulations with the idealized models were carried out with a Finite Element Method based computer code, while the simulations with the realistic models were carried out with a Finite Volume Method based computer code. Simulations were carried out for four cardiac cycles. The distributions of flow, pressure and Wall Shear Stress (WSS) observed during the fourth cardiac cycle were extensively analyzed. The aim of carrying out the simulations with the idealized models was to get an estimate of the flow dynamics in a realistic aorta model. The motive behind the choice of three aorta models with distinct features was to understand the dependence of flow dynamics on aorta anatomy. A highly disturbed and nonuniform distribution of velocity and WSS was observed in the aortic arch, near the brachiocephalic, left common carotid, and left subclavian arteries. On the other hand, the WSS profiles at the roots of the branches show significant differences with the geometry variation of the aorta and branches. The comparison of instantaneous WSS profiles revealed that the model with straight branching arteries had relatively lower WSS than the aorta model with curved branches. In addition, significant differences were observed in the spatial and temporal profiles of WSS, flow, and pressure. The study with the idealized models was extended to blood flow in the thoracic aorta under the effects of hypertension and hypotension. One of the idealized aorta models was modified, along with the boundary conditions, to mimic the thoracic aorta under these conditions. The results of the simulations with the realistic models extracted from CT scans demonstrated more realistic flow dynamics than those in the idealized models. During systole, the velocity in the ascending aorta was skewed towards the outer wall of the aortic arch. The flow develops secondary flow patterns as it moves downstream towards the aortic arch. Unlike in the idealized models, the distribution of flow was nonplanar and heavily guided by the artery anatomy. Flow cavitation was observed in the aorta model whose imaging included longer branches; this could not be properly observed in the model whose imaging contained shorter aortic branches.
Flow recirculation was also observed along the inner wall of the aortic arch. However, during diastole, the flow profiles were almost flat and regular due to the acceleration of flow at the inlet. The flow profiles were weakly turbulent during the flow reversal. The complex flow patterns caused a non-uniform distribution of WSS. High WSS was distributed at the junctions of the branches and the aortic arch. Low WSS was distributed at the proximal part of the junction, while intermediate WSS was distributed in the distal part of the junction. The pulsatile nature of the inflow caused oscillating WSS at the branch entry region and the inner curvature of the aortic arch. Based on the WSS distribution in the realistic model, one of the aorta models was altered to introduce artificial atherosclerotic plaque at the branch entry region and the inner curvature of the aortic arch. Atherosclerotic plaque causing 50% blockage of the lumen was introduced in the brachiocephalic artery, the common carotid artery, the left subclavian artery, and the aortic arch. The aim of this part of the study was first to study the effect of stenosis on flow and WSS distribution, then to understand the effect of the shape of the atherosclerotic plaque on flow and WSS distribution, and finally to investigate the effect of lumen blockage severity on flow and WSS distributions. The results revealed that the distribution of WSS is significantly affected by a plaque causing a mere 50% stenosis. An asymmetric shape of stenosis causes higher WSS in the branching arteries than in the cases with symmetric plaque. The flow dynamics within the thoracic aorta models have been extensively studied and reported here. The effects of pressure and arterial anatomy on the flow dynamics were investigated. The distribution of complex flow and WSS is correlated with the localization of atherosclerosis. With the available results we can conclude that the thoracic aorta, with its complex anatomy, is the artery most vulnerable to the localization and development of atherosclerosis. The flow dynamics and arterial anatomy play a role in the localization of atherosclerosis. Patient-specific image-based models can be used to diagnose the locations in the aorta vulnerable to the development of arterial diseases such as atherosclerosis.
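For reference, with the Newtonian blood assumption stated above, the wall shear stress extracted from such simulations is defined as

\[ \tau_w = \mu \left. \frac{\partial u_t}{\partial n} \right|_{wall}, \]

where \( \mu \) is the dynamic viscosity of blood and \( \partial u_t / \partial n \) is the gradient of the tangential velocity in the wall-normal direction; this is the standard definition rather than a formula quoted from the thesis.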
Abstract:
Case-based reasoning (CBR) is a recent approach to problem solving and learning that has received a lot of attention over the last years. In this work, the CBR methodology is used to reduce the time and amount of resources spent on carrying out experiments to determine the viscosity of a new slurry. The aims of this work are to develop a CBR system to support decision making about the type of slurry behaviour, to collect a sufficient volume of qualitative data for the case base, and to calculate the viscosity of Newtonian slurries. First, a literature review of the types of fluid flow and of Newtonian and non-Newtonian slurries is presented. Some physical properties of suspensions are also considered. The second part of the literature review provides an overview of the case-based reasoning field. Different models and stages of the CBR cycle, and the benefits and disadvantages of this methodology, are considered subsequently. A brief review of CBR tools is also given in this work. Finally, some results of the work and opportunities for further development of the system are presented. To develop a decision support system for slurry viscosity determination, the MS Office Excel software application was used. The designed system consists of three parts: the workspace, the case base, and a section for calculating the viscosity of Newtonian slurries. The first and second sections are intended to work with Newtonian and Bingham fluids. In the last section, the apparent viscosity can be calculated for Newtonian slurries.
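As a sketch of the viscosity relations behind the calculation section described above (standard Newtonian and Bingham-plastic formulas; the parameter values are hypothetical and the thesis's Excel implementation is not reproduced here):

```python
# Illustrative sketch of the standard viscosity relations for the two fluid
# types handled by the described system. Example parameter values are
# hypothetical.

def newtonian_apparent_viscosity(shear_stress, shear_rate):
    """Newtonian fluid: viscosity is the ratio of shear stress to shear rate."""
    return shear_stress / shear_rate

def bingham_apparent_viscosity(yield_stress, plastic_viscosity, shear_rate):
    """Bingham plastic (flowing): mu_app = tau_y / gamma_dot + mu_p."""
    return yield_stress / shear_rate + plastic_viscosity

print(newtonian_apparent_viscosity(shear_stress=2.0, shear_rate=50.0))                  # Pa.s
print(bingham_apparent_viscosity(yield_stress=5.0, plastic_viscosity=0.02, shear_rate=50.0))
```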
Abstract:
This Master's thesis illustrates how growing a business ties up the company's working capital and what the cost of the committed capital is. In order to manage a company's working capital in a rapid business growth phase, the thesis suggests that by monitoring and managing the operating and cash conversion cycles of customer projects, a company can find ways to secure the required amount of capital. The research method of this thesis was based on literature reviews and case study research. The theoretical review presents the concepts of working capital and provides the background for understanding how to improve working capital management. The case company is a global small and medium-sized enterprise that manufactures pumps and valves for demanding process conditions. The company is expanding, which creates many challenges. This thesis concentrates on the company's working capital management and its efficiency from the supply chain and value chain perspective. The main elements of working capital management are inventory management, accounts receivable management and accounts payable management. Prepayments also play a significant role, particularly in project-based businesses. Developing a company's working capital management requires knowledge of different key operations in the company, such as purchasing, production, sales, logistics and financing. The perspective for developing and describing working capital management is operational. After the literature reviews, the thesis presents pilot projects that formed the basis of a model for monitoring working capital in the case company. Based on the analysis and pilot projects, the thesis introduces a rough model for monitoring capital commitments over a short time period. With the model, the company can monitor and manage its customer projects more efficiently.
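As an illustration of the operating and cash conversion cycle monitoring mentioned above (standard textbook definitions; the figures are hypothetical and the case company's actual model is not reproduced here):

```python
# Illustrative sketch of the operating cycle and cash conversion cycle (CCC),
# the standard components behind working capital monitoring. Figures are
# hypothetical examples, not data from the case company.

def days(balance, annual_flow):
    """Convert a balance-sheet item into days outstanding against its annual flow."""
    return 365.0 * balance / annual_flow

def cash_conversion_cycle(inventory, receivables, payables, cogs, sales, purchases):
    dio = days(inventory, cogs)        # days inventory outstanding
    dso = days(receivables, sales)     # days sales outstanding
    dpo = days(payables, purchases)    # days payables outstanding
    operating_cycle = dio + dso
    return operating_cycle, operating_cycle - dpo

oc, ccc = cash_conversion_cycle(inventory=1.2e6, receivables=0.9e6, payables=0.7e6,
                                cogs=6.0e6, sales=8.0e6, purchases=5.0e6)
print(f"operating cycle = {oc:.0f} days, cash conversion cycle = {ccc:.0f} days")
```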
Abstract:
Formal methods provide a means of reasoning about computer programs in order to prove correctness criteria. One subtype of formal methods is based on the weakest precondition predicate transformer semantics and uses guarded commands as the basic modelling construct. Examples of such formalisms are Action Systems and Event-B. Guarded commands can intuitively be understood as actions that may be triggered when an associated guard condition holds. Guarded commands whose guards hold are nondeterministically chosen for execution, but no further control flow is present by default. Such a modelling approach is convenient for proving correctness, and the Refinement Calculus allows for a stepwise development method. It also has a parallel interpretation facilitating development of concurrent software, and it is suitable for describing event-driven scenarios. However, for many application areas, the execution paradigm traditionally used comprises more explicit control flow, which constitutes an obstacle for using the above mentioned formal methods. In this thesis, we study how guarded command based modelling approaches can be conveniently and efficiently scheduled in different scenarios. We first focus on the modelling of trust for transactions in a social networking setting. Due to the event-based nature of the scenario, the use of guarded commands turns out to be relatively straightforward. We continue by studying modelling of concurrent software, with particular focus on compute-intensive scenarios. We go from theoretical considerations to the feasibility of implementation by evaluating the performance and scalability of executing a case study model in parallel using automatic scheduling performed by a dedicated scheduler. Finally, we propose a more explicit and non-centralised approach in which the flow of each task is controlled by a schedule of its own. The schedules are expressed in a dedicated scheduling language, and patterns assist the developer in proving correctness of the scheduled model with respect to the original one.
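As a toy illustration of the guarded-command execution model described above (a Python interpretation written for this summary, not Event-B or Action Systems notation): actions whose guards hold are chosen nondeterministically, and execution stops when no guard holds.

```python
# Toy interpretation of guarded-command execution: repeatedly pick, at random,
# one enabled action (an action whose guard holds) and apply its effect; stop
# when no guard holds. The state and actions are invented for the example.
import random

state = {"x": 0, "y": 5}

actions = [
    # (name, guard, effect)
    ("inc_x", lambda s: s["x"] < s["y"], lambda s: s.update(x=s["x"] + 1)),
    ("dec_y", lambda s: s["y"] > 0,      lambda s: s.update(y=s["y"] - 1)),
]

while True:
    enabled = [a for a in actions if a[1](state)]
    if not enabled:                               # no guard holds: terminate
        break
    name, _, effect = random.choice(enabled)      # nondeterministic choice
    effect(state)

print(state)
```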