14 results for Independent-particle shell model
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The objective of this thesis is to study wavelets and their role in turbulence applications. Under scrutiny in the thesis is the intermittency in turbulence models. Wavelets are used as a mathematical tool to study the intermittent activities that turbulence models produce. The first section generally introduces wavelets and wavelet transforms as a mathematical tool. Moreover, the basic properties of turbulence are discussed and classical methods for modeling turbulent flows are explained. Wavelets are implemented to model the turbulence as well as to analyze turbulent signals. The model studied here is the GOY (Gledzer 1973, Ohkitani & Yamada 1989) shell model of turbulence, which is a popular model for explaining intermittency based on the cascade of kinetic energy. The goal is to introduce a better quantification method for the intermittency observed in a shell model. Wavelets are localized in both space (time) and scale; therefore, they are suitable candidates for the study of the singular bursts that interrupt the calm periods of the energy flow through various scales. The study concerns two questions, namely the frequency of occurrence and the intensity of the singular bursts at various Reynolds numbers. The results indicate that singularities become more local as the Reynolds number increases. The singularities also become more local when the shell number is increased at a fixed Reynolds number. The study revealed that the singular bursts are more frequent at Re ~ 10^7 than in the other cases with lower Re. The intermittency of bursts for the cases with Re ~ 10^6 and Re ~ 10^5 was similar, but in the case with Re ~ 10^4 the bursts occurred after long waiting times in a different fashion, so that the behaviour could not be scaled to higher Re.
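For orientation, a minimal sketch of the GOY shell-model equations in one common parameterization is given below; the shell count, forcing shell, viscosity and the simple RK4 integrator are illustrative assumptions, not the exact setup of the thesis.

```python
# Minimal sketch of the GOY shell model (one common energy-conserving parameterization).
# Shell count, forcing, viscosity and the RK4 stepper are illustrative choices only.
import numpy as np

N   = 22                      # number of shells
k0  = 0.05                    # k_n = k0 * 2**n
nu  = 1e-7                    # viscosity (controls the Reynolds number)
eps = 0.5
f   = 5e-3 * (1.0 + 1.0j)     # constant forcing applied to shell n_f
n_f = 3
k = k0 * 2.0 ** np.arange(1, N + 1)

def rhs(u):
    """Right-hand side for the complex shell velocities u_n."""
    du = np.zeros(N, dtype=complex)
    for n in range(N):
        up1 = u[n + 1] if n + 1 < N else 0.0
        up2 = u[n + 2] if n + 2 < N else 0.0
        um1 = u[n - 1] if n >= 1 else 0.0
        um2 = u[n - 2] if n >= 2 else 0.0
        km1 = k[n - 1] if n >= 1 else 0.0
        km2 = k[n - 2] if n >= 2 else 0.0
        nl = (k[n] * up1 * up2
              - eps * km1 * um1 * up1
              - (1.0 - eps) * km2 * um1 * um2)
        du[n] = 1j * np.conj(nl) - nu * k[n] ** 2 * u[n]
    du[n_f] += f
    return du

# simple RK4 time stepping
rng = np.random.default_rng(1)
u   = 1e-3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
dt  = 1e-4
for _ in range(20000):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# An intermittent signal to analyse with wavelets is, e.g., the shell energy
# E_n(t) = |u_n(t)|**2, recorded over time for a chosen shell n.
```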
Abstract:
Particulate nanostructures are increasingly used for analytical purposes. Such particles are often generated by chemical synthesis from non-renewable raw materials. Generation of uniform nanoscale particles is challenging, and particle surfaces must be modified to make the particles biocompatible and water-soluble. Usually nanoparticles are functionalized with binding molecules (e.g., antibodies or their fragments) and, if needed, a label substance. Overall, producing nanoparticles for use in bioaffinity assays is a multistep process requiring several manufacturing and purification steps. This study describes a biological method of generating functionalized protein-based nanoparticles with specific binding activity on the particle surface and label activity inside the particles. Traditional chemical bioconjugation of the particle and specific binding molecules is replaced with genetic fusion of the binding molecule gene and the particle backbone gene. The entity of the particle shell and binding moieties is synthesized from generic raw materials by bacteria, and fermentation is combined with a simple purification method based on inclusion bodies. The label activity is introduced during the purification. The process results in particles that are ready to use as reagents in bioaffinity assays. Apoferritin was used as the particle body, and the system was demonstrated with three different binding moieties: a small protein, a peptide and a single-chain Fv antibody fragment, which represents a complex protein containing a disulfide bridge. When needed, Eu3+ was used as the label substance. The results showed that the production system yielded pure protein preparations, and the particles were of homogeneous size when visualized with transmission electron microscopy. The passively introduced label was stably associated with the particles, and the binding molecules genetically fused to the particle specifically bound their target molecules. The functionality of the particles in bioaffinity assays was successfully demonstrated with two types of assays: as labels and in a particle-enhanced agglutination assay. This biological production procedure has many advantages that make the process especially suited for applications with frequent and recurring requirements for homogeneous functional particles. The production process of ready, functional and water-soluble particles follows the principles of “green chemistry”, and is scalable, fast and cost-effective.
Abstract:
Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have examined the use of limestone in applications that enable the removal of carbon dioxide from the combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model. The model describes a single limestone particle in the process as a function of time: the progress of the reactions and the mass and energy transfer in the particle. The model-based results were compared with experimental laboratory-scale BFB results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination, but the sulfate conversion was no longer improved. A higher sulfur dioxide concentration accelerated the sulfation reaction, and based on the modeling, the sulfation is first order with respect to SO2. The reaction order with respect to O2 appears to approach zero at high oxygen concentrations.
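As an illustration only, a rate expression consistent with the reported orders (first order in SO2, apparent O2 order tending to zero at high concentrations) could be written in a Langmuir–Hinshelwood-like form; k and K below are hypothetical parameters, not values fitted in this work.

```latex
% Illustrative sulfation rate form consistent with the reported reaction orders;
% k and K are hypothetical parameters, not values from this work.
r_{\mathrm{sulf}} \;=\; k \, C_{\mathrm{SO_2}} \, \frac{C_{\mathrm{O_2}}}{K + C_{\mathrm{O_2}}}
\;\;\xrightarrow{\;C_{\mathrm{O_2}} \,\gg\, K\;}\;\;
k \, C_{\mathrm{SO_2}}
```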
Abstract:
This work is devoted to the development of a numerical method for convection-dominated convection–diffusion problems with a reaction term, covering both non-stiff and stiff chemical reactions. The technique is based on unifying Eulerian–Lagrangian schemes (the particle transport method) within the framework of the operator splitting method. In the computational domain, a particle set is assigned to solve the convection–reaction subproblem along the characteristic curves created by the convective velocity. At each time step, the convection, diffusion and reaction terms are solved separately, assuming that each phenomenon occurs sequentially. Moreover, adaptivity and projection techniques are used, respectively, to add particles in regions of high gradients (steep fronts) and discontinuities, and to transfer the solution from the particle set onto the grid points. The numerical results show that the particle transport method improves the solutions of CDR problems. Nevertheless, the method is time-consuming compared with classical techniques such as the method of lines. Apart from this drawback, the particle transport method can be used to simulate problems that involve moving steep or smooth fronts, such as the separation of two or more components in a system.
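A minimal one-dimensional sketch of the sequential splitting idea described above (convection and reaction solved on Lagrangian particles along the characteristics, diffusion on the Eulerian grid, with projection and interpolation between the two) might look as follows; the velocity, diffusivity, rate constant and the nearest-node projection are illustrative assumptions.

```python
# Sketch of sequential (Lie) operator splitting for a 1D convection-diffusion-reaction
# problem: convection + first-order reaction on Lagrangian particles, diffusion on the
# Eulerian grid. Velocity, diffusivity, rate constant and projection are illustrative.
import numpy as np

nx, L   = 200, 1.0
dx      = L / nx
x_grid  = np.arange(nx) * dx       # periodic grid on [0, L)
v, D, r = 1.0, 1e-3, 5.0           # convective velocity, diffusivity, reaction rate
dt, nt  = 0.4 * dx / v, 300

# initial condition: a steep front carried by the particles
x_p = x_grid.copy()
c_p = np.where(x_p < 0.2, 1.0, 0.0)

for _ in range(nt):
    # 1) convection-reaction along characteristics: dX/dt = v, dc/dt = -r c
    x_p = (x_p + v * dt) % L
    c_p = c_p * np.exp(-r * dt)

    # 2) projection: transfer the particle solution onto the grid (nearest node)
    idx    = np.round(x_p / dx).astype(int) % nx
    c_grid = np.zeros(nx)
    counts = np.zeros(nx)
    np.add.at(c_grid, idx, c_p)
    np.add.at(counts, idx, 1.0)
    c_grid = np.where(counts > 0, c_grid / np.maximum(counts, 1.0), 0.0)

    # 3) diffusion on the grid (explicit finite differences, periodic domain)
    c_grid += D * dt / dx**2 * (np.roll(c_grid, 1) - 2 * c_grid + np.roll(c_grid, -1))

    # 4) transfer the diffused field back to the particles (periodic interpolation)
    c_p = np.interp(x_p, x_grid, c_grid, period=L)
```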
Abstract:
The development of carbon capture and storage (CCS) has raised interest in novel fluidised bed (FB) energy applications. In these applications, limestone can be utilized for SO2 and/or CO2 capture. The conditions in the new applications differ from the traditional atmospheric and pressurised circulating fluidised bed (CFB) combustion conditions in which limestone is successfully used for SO2 capture. In this work, a detailed physical single-particle model for limestone, with a description of the mass and energy transfer inside the particle, was developed. The novelty of this model is that it takes into account the simultaneous reactions, the changing conditions, and the effect of advection. In particular, the capability to study the cyclic behaviour of limestone on both sides of the calcination–carbonation equilibrium curve is important in the novel conditions. The significance of including advection, as opposed to assuming diffusion control, was studied for calcination. In particular, the effect of advection on the calcination reaction in the novel combustion atmosphere was shown. The model was tested against experimental data; sulphur capture was studied in a laboratory reactor in different fluidised bed conditions. Different conversion levels and sulphation patterns were examined in different atmospheres for one limestone type. The conversion curves were well predicted by the model, and the mechanisms leading to the conversion patterns were explained with the model simulations. In this work, it was also evaluated whether the transient environment affects the limestone behaviour compared with averaged conditions, and in which conditions the effect is largest. The difference between the averaged and transient conditions was notable only in conditions close to the calcination–carbonation equilibrium curve. The results of this study suggest that the development of a simplified particle model requires a proper understanding of the physical and chemical processes taking place in the particle during the reactions. The results of the study will be needed when analysing complex limestone reaction phenomena or when developing the description of limestone behaviour in comprehensive 3D process models. In order to transfer the experimental observations to furnace conditions, the relevant mechanisms need to be understood before the important ones can be selected for the 3D process model. This study revealed the sulphur capture behaviour under transient oxy-fuel conditions, which is important for the development of the oxy-fuel CFB process and its process model.
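The calcination–carbonation equilibrium curve referred to above is commonly represented by an Arrhenius-type fit of the equilibrium CO2 partial pressure (T in kelvin); the coefficients below are often-quoted literature values, shown only to illustrate the form and not as the correlation used in this work.

```latex
% CaCO3 <-> CaO + CO2: an often-quoted Arrhenius-type fit of the equilibrium CO2
% partial pressure (illustrative literature coefficients, T in kelvin).
p_{\mathrm{CO_2,eq}}(T) \;\approx\; 4.137 \times 10^{7}
   \exp\!\left(-\frac{20474}{T}\right)\ \mathrm{atm}

% calcination proceeds when p_CO2 < p_eq; carbonation when p_CO2 > p_eq.
```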
Abstract:
The purpose of this study was to investigate some important features of granular flows and suspension flows by computational simulation methods. Granular materials have been considered an independent state of matter because of their complex behaviour. They sometimes behave like a solid, sometimes like a fluid, and sometimes contain both phases in equilibrium. Computer simulation of dense shear granular flows of monodisperse, spherical particles shows that the collisional contact model yields the coexistence of solid and fluid phases, while the frictional model represents a uniform flow of the fluid phase. However, a comparison between the stress signals from the simulations and experiments revealed that the collisional model gives a proper match with the experimental evidence. Although the effect of gravity was found to be important in the sedimentation of the solid part, the stick-slip behaviour associated with the collisional model resembles that of the experiments more closely. Mathematical formulations based on the kinetic theory have been derived for moderate solid volume fractions under the assumption of a homogeneous flow. In order to perform simulations that provide such an ideal flow, unbounded granular shear flows were simulated; in this way, homogeneous flow properties could be achieved at moderate solid volume fractions. A new algorithm, namely the nonequilibrium approach, was introduced to show the features of self-diffusion in granular flows. Using this algorithm, a one-way flow can be extracted from the entire flow, which not only provides a straightforward calculation of the self-diffusion coefficient but can also qualitatively determine the deviation of self-diffusion from the linear law in regions near the wall in bounded flows. The average lateral self-diffusion coefficient calculated by this method showed a desirable agreement with the predictions of the kinetic theory formulation. In continuation of the computer simulation of shear granular flows, numerical and theoretical investigations were carried out on mass transfer and particle interactions in particulate flows. In this context, the boundary element method and its combination with the spectral method, using the special capabilities of wavelets, were introduced as efficient numerical methods to solve the governing equations of mass transfer in particulate flows. A theoretical formulation of fluid dispersivity in suspension flows revealed that the fluid dispersivity depends upon the fluid properties and particle parameters as well as the fluid–particle and particle–particle interactions.
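The lateral self-diffusion coefficient mentioned above is typically extracted from the mean-square displacement of particle trajectories; a minimal post-processing sketch (assuming trajectories with the mean shear motion removed are available from the simulation) is shown below. The synthetic random-walk data at the end stands in for simulation output.

```python
# Sketch: lateral self-diffusion coefficient from particle trajectories via the
# mean-square displacement, MSD(t) = 2 D t in the long-time (diffusive) limit.
# The trajectory array and time step are assumed to come from the granular-flow simulation.
import numpy as np

def lateral_self_diffusion(y, dt):
    """y: array (n_steps, n_particles) of lateral positions, mean flow removed.
    Returns D estimated from the late-time slope of the MSD."""
    n_steps = y.shape[0]
    lags = np.arange(1, n_steps // 2, 5)
    msd = np.array([np.mean((y[lag:] - y[:-lag]) ** 2) for lag in lags])
    t = lags * dt
    half = len(lags) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]   # fit MSD = 2 D t, late lags only
    return slope / 2.0

# usage with synthetic random-walk data (stand-in for simulation output);
# expected result is close to (0.01**2) / (2 * 1e-3) = 0.05
rng = np.random.default_rng(0)
y = np.cumsum(0.01 * rng.standard_normal((2000, 100)), axis=0)
print(lateral_self_diffusion(y, dt=1e-3))
```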
Abstract:
A rotating machine usually consists of a rotor and the bearings that support it. Non-idealities in these components may excite vibration of the rotating system. Uncontrolled vibrations may lead to excessive wear of the components of the rotating machine or reduce the process quality. Vibrations may be harmful even when their amplitudes are seemingly low, as is usually the case in superharmonic vibration, which takes place below the first critical speed of the rotating machine. Superharmonic vibration is excited when the rotational velocity of the machine is a fraction of the natural frequency of the system. In such a situation, part of the machine's rotational energy is transformed into vibration energy. The amount of vibration energy should be minimised in the design of rotating machines. The superharmonic vibration phenomena can be studied by analysing the coupled rotor-bearing system with a multibody simulation approach. This research focuses on the modelling of hydrodynamic journal bearings and of rotor-bearing systems supported by journal bearings. In particular, the non-idealities affecting the rotor-bearing system and their effect on the superharmonic vibration of the rotating system are analysed. A comparison of computationally efficient journal bearing models is carried out in order to validate one model for further development. The selected bearing model is improved to take the waviness of the shaft journal into account. The improved model is implemented and analysed in a multibody simulation code. A rotor-bearing system consisting of a flexible tube roll, two journal bearings and a supporting structure is analysed with the multibody simulation technique. The modelled non-idealities are the shell thickness variation of the tube roll and the waviness of the shaft journal in the bearing assembly. Both modelled non-idealities may cause subharmonic resonance in the system. In multibody simulation, the coupled effect of the non-idealities can be captured in the analysis. Additionally, one non-ideality is presented that does not itself excite vibrations but affects the response of the rotor-bearing system, namely the waviness of the bearing bushing, which is the non-rotating part of the bearing assembly. The modelled system is verified with measurements performed on a test rig. In the measurements, the waviness of the bearing bushing was not measured, and therefore its effect on the response was not verified. In conclusion, the selected modelling approach is an appropriate method for analysing the response of a rotor-bearing system. When comparing the simulated results with the measured ones, the overall agreement between the results is good.
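For illustration, the waviness of the shaft journal (or of the bearing bushing) can be described as a Fourier series superimposed on the nominal radius, and the resonance condition is stated here only in its generic form; the amplitudes and phases are case-specific measured quantities.

```latex
% Illustrative journal waviness profile and the generic resonance condition.
R(\theta) \;=\; R_0 \;+\; \sum_{k=2}^{K} A_k \cos\!\left(k\theta + \varphi_k\right),
\qquad
\Omega \;\approx\; \frac{\omega_n}{k}, \quad k = 2, 3, \dots
```

Here Ω denotes the rotation speed, ω_n a natural frequency of the rotor-bearing system, and k the waviness order.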
Abstract:
A multibody simulation model of a roller test rig is presented in this work. The roller test rig consists of a paper machine's tube roll supported by a hard-bearing type balancing machine. The simulation model includes non-idealities that were measured from the physical structure: the shell thickness variation of the roll and the roundness errors of the shafts of the roll. These kinds of non-idealities are harmful, since they can cause subharmonic resonances of the rotor system, in which a natural vibration mode of the rotor is excited when the rotation speed is a fraction of the natural frequency of the system. With the simulation model, the half-critical resonance is studied in detail, and a sensitivity analysis is performed by running several analyses with slightly different input parameters. The model is verified by comparing the simulation results with those obtained by measuring the real structure. The comparison shows that good accuracy is achieved, since equivalent responses are obtained within the error limits of the input parameters.
Abstract:
Particle Image Velocimetry (PIV) is an optical measuring technique for obtaining velocity information about a flow of interest. With PIV it is possible to obtain two- or three-dimensional velocity vector fields from a measurement area instead of a single point in the flow. The measured flow can be either a liquid or a gas. PIV is nowadays widely applied to flow field studies. In this work, PIV is needed to obtain validation data for the Computational Fluid Dynamics programs that have been used to model blowdown experiments in the PPOOLEX test facility at Lappeenranta University of Technology. In this thesis, PIV and its theoretical background are presented. All the subsystems that can be considered part of a PIV system are also presented in detail. Emphasis is also placed on the mathematics behind the image evaluation. The work also included the selection and successful testing of a PIV system, as well as the planning of its installation in the PPOOLEX facility. Already in the preliminary testing, PIV was found to be a good addition to the measuring equipment of the Nuclear Safety Research Unit of LUT. The installation in the PPOOLEX facility was successful even though it was subject to many restrictions. All parts of the PIV system worked and were found to be appropriate for the planned use. The results and observations presented in this thesis provide a good basis for further PIV use.
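The core of the image evaluation mentioned above is the cross-correlation of interrogation windows between two consecutive frames; a minimal FFT-based sketch for a single window is shown below, where the window size, peak search, and pixel-to-velocity scaling are illustrative assumptions.

```python
# Sketch: displacement of one PIV interrogation window via FFT-based cross-correlation.
# Window extraction, sub-pixel refinement, and pixel/time scaling are left out;
# this only finds the integer-pixel correlation peak.
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a (same square shape)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)                       # zero lag at the window centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    dy, dx = np.array(peak) - centre
    return dx, dy

# usage: for two 32x32 windows from consecutive frames separated by dt, with
# 'scale' metres per pixel, the velocity components are
#   u = dx * scale / dt   and   v = dy * scale / dt
```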
Abstract:
The Travel and Tourism field is undergoing changes due to the rapid development of information technology and digital services. Online travel has profoundly changed the way travel and tourism organizations interact with their customers. Mobile technology such as mobile services for pocket devices (e.g. mobile phones) has the potential to take this development even further. Nevertheless, many issues have been highlighted since the early days of mobile services development (e.g. the lack of relevance, ease of use of many services). However, the wide adoption of smartphones and the mobile Internet in many countries as well as the formation of so-called ecosystems between vendors of mobile technology indicate that many of these issues have been overcome. Also when looking at the numbers of downloaded applications related to travel in application stores like Google Play, it seems obvious that mobile travel and tourism services are adopted and used by many individuals. However, as business is expected to start booming in the mobile era, many issues have a tendency to be overlooked. Travelers are generally on the go and thus services that work effectively in mobile settings (e.g. during a trip) are essential. Hence, the individuals’ perceived drivers and barriers to use mobile travel and tourism services in on-site or during trip settings seem particularly valuable to understand; thus this is one primary aim of the thesis. We are, however, also interested in understanding different types of mobile travel service users. Individuals may indeed be very different in their propensity to adopt and use technology based innovations (services). Research is also switching more from investigating issues of mobile service development to understanding individuals’ usage patterns of mobile services. But designing new mobile services may be a complex matter from a service provider perspective. Hence, our secondary aim is to provide insights into drivers and barriers of mobile travel and tourism service development from a holistic business model perspective. To accomplish the research objectives seven different studies have been conducted over a time period from 2002 – 2013. The studies are founded on and contribute to theories within diffusion of innovations, technology acceptance, value creation, user experience and business model development. Several different research methods are utilized: surveys, field and laboratory experiments and action research. The findings suggest that a successful mobile travel and tourism service is a service which supports one or several mobile motives (needs) of individuals such as spontaneous needs, time-critical arrangements, efficiency ambitions, mobility related needs (location features) and entertainment needs. The service could be customized to support travelers’ style of traveling (e.g. organized travel or independent travel) and should be easy to use, especially easy to take into use (access, install and learn) during a trip, without causing security concerns and/or financial risks for the user. In fact, the findings suggest that the most prominent barrier to the use of mobile travel and tourism services during a trip is an individual’s perceived financial cost (entry costs and usage costs). It should, however, be noted that regulations are put in place in the EU regarding data roaming prices between European countries and national telecom operators are starting to see ‘international data subscriptions’ as a sales advantage (e.g. 
Finnish Sonera provides a data subscription in the Baltic and Nordic region at the same price as in Finland), which will enhance the adoption of mobile travel and tourism services also in international contexts. In order to speed up the adoption rate, travel service providers could consider, for example, more local free Wi-Fi initiatives, development of services that can be used, at least to some extent, in an offline mode (i.e. that do not require costly network access during a trip) and cooperation with telecom operators (e.g. lower usage costs for travelers who use specific mobile services or travel with specific vendors). Furthermore, based on a developed framework for user experience of mobile trip arrangements, the results show that a well-designed mobile site and/or native application, which preferably supports integration with other mobile services, is a must for true mobile presence. In fact, travel service providers who want to build a relationship with their customers need to consider a downloadable native application, but in order to be found through the mobile channel and make contact with potential new customers, a mobile website should be available. Moreover, we have made a first attempt with cluster analysis to identify user categories of mobile services in a travel and tourism context. The following four categories were identified: info-seekers, checkers, bookers and all-rounders. For example, the “all-rounders”, represented by individuals who use their pocket device for almost any of the investigated mobile travel services, consisted primarily of 23- to 50-year-old males with high travel frequency and great online experience. The results also indicate that travel service providers will increasingly become multi-channel providers. To manage multiple online channels, closely integrated and hybrid online platforms for different devices, supporting all steps in the traveler's process, should be considered. It could be useful for travel service providers to focus more on developing browser-based mobile services (HTML5 solutions) than native applications that work only with specific operating systems and for specific devices. Based on an action research study and utilizing a holistic business model framework called STOF, we found that HTML5 as an emerging platform, at least for now, has some limitations regarding the development of the user experience and monetizing the application. In fact, a native application store (e.g. Google Play) may be a key mediator in the adoption of mobile travel and tourism services both from a traveler and a service provider perspective. Moreover, it must be remembered that many device and mobile operating system developers want service providers to specifically create services for their platforms and see native applications as a strategic advantage to sell more devices of a certain kind. The mobile telecom industry has moved into a battle of ecosystems where device makers, developers of operating systems and service developers are to some extent forced to choose their development platforms.
Abstract:
This thesis presents a one-dimensional, semi-empirical dynamic model for the simulation and analysis of a calcium looping process for post-combustion CO2 capture. Reducing greenhouse gas emissions from fossil fuel power production requires rapid actions, including the development of efficient carbon capture and sequestration technologies. The development of new carbon capture technologies can be expedited by using modelling tools. Techno-economic evaluation of new capture processes can be done quickly and cost-effectively with computational models before building expensive pilot plants. Post-combustion calcium looping is a developing carbon capture process which utilizes fluidized bed technology with lime as the sorbent. The main objective of this work was to analyse the technological feasibility of the calcium looping process at different scales with a computational model. A one-dimensional dynamic model was applied to the calcium looping process, simulating the behaviour of the interconnected circulating fluidized bed reactors. The model couples fundamental mass and energy balance solvers with semi-empirical models describing solid behaviour in a circulating fluidized bed and the chemical reactions occurring in the calcium loop. In addition, fluidized bed combustion, heat transfer and core–wall layer effects were modelled. The calcium looping model framework was successfully applied to a 30 kWth laboratory-scale unit and a 1.7 MWth pilot-scale unit, and it was used to design a conceptual 250 MWth industrial-scale unit. Valuable information was gathered on the behaviour of the small laboratory-scale device. In addition, the interconnected behaviour of the pilot plant reactors and the effect of solid fluidization on the thermal and carbon dioxide balances of the system were analysed. The scale-up study provided practical information on the thermal design of an industrial-sized unit, the selection of particle size, and the operability in different load scenarios.
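For orientation, the chemistry at the heart of the calcium loop and the simplest steady-state carbonator CO2 balance can be written as follows; the symbols are generic (F denotes molar flows, X the CaO conversions leaving the carbonator and the calciner, E the capture efficiency), and the form is illustrative rather than the balance actually solved in the model.

```latex
% Carbonation/calcination chemistry and a simple steady-state carbonator CO2 balance
% (illustrative form; F = molar flow, X = CaO conversion, E = capture efficiency).
\mathrm{CaO} + \mathrm{CO_2} \;\rightleftharpoons\; \mathrm{CaCO_3},
\qquad \Delta H^{0}_{\mathrm{carb}} \approx -178\ \mathrm{kJ\,mol^{-1}}

E_{\mathrm{CO_2}} \, F_{\mathrm{CO_2,in}} \;=\;
F_{\mathrm{Ca}} \left( X_{\mathrm{carb}} - X_{\mathrm{calc}} \right)
```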
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
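To make the firing-rule idea concrete, a toy dataflow node with FIFO input queues that fires only when sufficient tokens are available can be sketched as follows; this is a generic Python illustration, not RVC-CAL syntax or the scheduling machinery developed in the thesis.

```python
# Toy dataflow node: FIFO input queues, a firing rule (enough tokens on every input),
# and an action that consumes inputs and produces an output. Generic illustration only,
# not RVC-CAL syntax or the quasi-static scheduling described in the thesis.
from collections import deque

class Node:
    def __init__(self, n_inputs, consumption, action):
        self.inputs = [deque() for _ in range(n_inputs)]
        self.consumption = consumption          # tokens needed per input to fire
        self.action = action                    # function: list of token lists -> output

    def can_fire(self):
        return all(len(q) >= c for q, c in zip(self.inputs, self.consumption))

    def fire(self):
        tokens = [[q.popleft() for _ in range(c)]
                  for q, c in zip(self.inputs, self.consumption)]
        return self.action(tokens)

# a 2-input adder that fires once it has one token on each input
adder = Node(2, [1, 1], lambda toks: toks[0][0] + toks[1][0])
adder.inputs[0].extend([1, 2, 3])
adder.inputs[1].extend([10, 20])

outputs = []
while adder.can_fire():                         # a trivial dynamic scheduler loop
    outputs.append(adder.fire())
print(outputs)                                  # [11, 22]
```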
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software is of high quality, reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to demonstrate only that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, and usability, also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability, rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
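As a small illustration of test generation from a behavioural model (not the UML-based tool chain of the thesis), consider deriving test sequences from a finite-state model so that every transition is covered; the model, events, and coverage criterion below are toy assumptions.

```python
# Toy illustration of model-based test generation: derive event sequences from a
# finite-state model so that every transition is exercised. The model and coverage
# criterion are illustrative; the thesis works with UML models and industrial tools.
from collections import deque

model = {                                     # state -> {event: next_state}
    "Idle":    {"start": "Running"},
    "Running": {"pause": "Paused", "stop": "Idle"},
    "Paused":  {"resume": "Running", "stop": "Idle"},
}

def shortest_path(model, src, dst):
    """Shortest event sequence from src to dst (BFS over the state graph)."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        state, seq = queue.popleft()
        if state == dst:
            return seq
        for event, nxt in model[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [event]))
    return None

def transition_covering_tests(model, initial="Idle"):
    """One test per transition: reach the source state, then take the transition."""
    tests = []
    for state, nexts in model.items():
        for event in nexts:
            prefix = shortest_path(model, initial, state)
            tests.append(prefix + [event])
    return tests

print(transition_covering_tests(model))
```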
Stochastic particle models: mean reversion and Burgers dynamics. An application to commodity markets
Abstract:
The aim of this study is to propose a stochastic model for commodity markets linked with the Burgers equation from fluid dynamics. We construct a stochastic particle method for commodity markets, in which particles represent market participants. A discontinuity is included in the model through an interaction kernel equal to the Heaviside function, and its link with the Burgers equation is given. The Burgers equation and the connection of this model with stochastic differential equations are also studied. Further, based on the law of large numbers, we prove the convergence, for large N, of a system of stochastic differential equations describing the evolution of the prices of N traders to a deterministic partial differential equation of Burgers type. Numerical experiments highlight the success of the new proposal in modeling some commodity markets; in particular, the model is able to reproduce price spikes when their effects occur over a sufficiently long period of time.
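A minimal simulation of the kind of interacting particle system described above, with a Heaviside interaction kernel and an Euler–Maruyama discretisation, could look as follows; the drift scaling, volatility, time step, and particle count are illustrative choices, not the calibrated model of the study.

```python
# Sketch: N interacting particles (market participants) whose drift is the empirical
# average of a Heaviside kernel, discretised with Euler-Maruyama. Parameters (N, sigma,
# time step, initial prices) are illustrative, not the calibrated model of the study.
import numpy as np

rng   = np.random.default_rng(42)
N     = 2000                      # number of traders / particles
sigma = 0.2                       # volatility
dt, T = 1e-3, 1.0
steps = int(T / dt)

x = rng.normal(loc=100.0, scale=5.0, size=N)   # initial prices

def heaviside_drift(x):
    """Drift of particle i: (1/N) * sum_j H(x_i - x_j), i.e. the empirical CDF at x_i."""
    ranks = np.empty(x.size)
    ranks[np.argsort(x)] = np.arange(x.size)   # rank of each particle among all prices
    return ranks / x.size

for _ in range(steps):
    x += heaviside_drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)

# In the large-N limit the empirical density of x is expected to follow a
# Burgers-type partial differential equation.
```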