50 results for force-field analysis
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse-mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. Gas-solid flows in fluidized beds are heterogeneous and are usually simulated with an Eulerian description of the phases. Such a description requires fine meshes and small time steps for proper prediction of the hydrodynamics. This constraint on mesh and time-step size results in a large number of control volumes and computational times that are unaffordable for simulations of large-scale fluidized beds. Without proper closure models, coarse-mesh simulations of fluidized beds do not give reasonable results: they fail to resolve the mesoscale structures and predict uniform solids concentration profiles. For a circulating fluidized bed riser, such profiles lead to an overestimated drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. There is therefore a need for closure correlations that can accurately predict the hydrodynamics on coarse meshes. This thesis uses the space-averaging modeling approach to formulate closure models for coarse-mesh simulations of gas-solid flow in fluidized beds with Geldart group B particles. In formulating the closure correlation for the space-averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse-mesh simulations of the riser, which verified the modeling approach. Coarse-mesh simulations using the corrected drag model yielded lower values of solids mass flux. This approach is a promising tool for formulating appropriate closure models for coarse-mesh simulations of large-scale fluidized beds.
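Since the abstract names the averaging size, solid volume fraction and distance from the wall as the key closure parameters, a minimal sketch may help illustrate how such a correction is typically applied in a coarse-mesh two-fluid simulation. The functional form of drag_correction below is purely hypothetical (it is not the correlation derived in the thesis); only the structure, a factor H multiplying a microscopic drag coefficient, follows common practice.

import numpy as np

def drag_correction(filter_size, solids_fraction, wall_distance):
    # Hypothetical correction factor H in (0, 1]; a real closure would be
    # fitted to fine-mesh simulation data. Here H reduces the microscopic
    # drag most strongly at large averaging sizes and intermediate solids
    # fractions, and relaxes toward 1 close to the wall.
    bulk = 1.0 - 0.6 * (1.0 - np.exp(-filter_size / 0.05)) * \
           4.0 * solids_fraction * (0.6 - solids_fraction) / 0.36
    near_wall = np.exp(-wall_distance / 0.02)
    return np.clip(bulk * (1.0 - near_wall) + near_wall, 0.0, 1.0)

# coarse-mesh drag: beta_filtered = H * beta_microscopic (e.g. a Wen-Yu type law)
print(drag_correction(filter_size=0.04, solids_fraction=0.3, wall_distance=0.1))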
Abstract:
Large-amplitude aeolian vibration of bus bars may lead to post insulator damage. Different damping applications are used to decrease the risk of large-amplitude aeolian vibration. In this paper, the post insulator load caused by bus bar aeolian vibration and the effect of damping methods are evaluated. The effects of three types of bus bar connectors and three types of primary structures are studied, together with two commercial damping devices (the RIBE and SBI vibration dampers), a damping cable, and their combinations. The post insulator loads are measured with strain-gauge-based, custom-made force sensors installed on both ends of the post insulator and with a displacement sensor installed at the midpoint of the bus bar. The post insulator loads are calculated from the strain values, and the damping properties are determined from the displacement history. The bus bar is deflected with a hanging weight, which is then released so that the bus bar undergoes free damped vibration. Both commercial bus bar vibration dampers, RIBE and SBI, were very effective against aeolian vibration. Combining a vibration damper with a damping cable increases the damping ratio, but the gain may not justify the extra effort. Neither the bus bar connector type nor the primary structure has an effect on the vertical load. The bending moment at a post insulator with a double-sided bus bar connector is significantly higher than at a post insulator with a single-sided connector. No reliable conclusions about the effect of the connector type can be drawn, but a roller-bearing or central-bearing connector may reduce the bending moment. An RHS steel frame as the primary structure may increase the peak bending moment values, since it is the least rigid primary structure type and may start to vibrate in response to the excitation force of the vibrating bus bar.
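As an aside, the damping properties extracted from a free-decay displacement history like the one described above are usually obtained with the logarithmic decrement method; a minimal sketch follows, with made-up peak amplitudes for illustration only.

import numpy as np

def damping_ratio_from_peaks(peaks):
    # Logarithmic decrement over n cycles: delta = ln(x_0 / x_n) / n,
    # then zeta = delta / sqrt(4*pi^2 + delta^2) for viscous damping.
    peaks = np.asarray(peaks, dtype=float)
    n = len(peaks) - 1
    delta = np.log(peaks[0] / peaks[-1]) / n
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# example with hypothetical midpoint peak amplitudes in millimetres
print(damping_ratio_from_peaks([12.0, 9.5, 7.6, 6.1, 4.9]))  # ~0.036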
Abstract:
Inorganic-organic sol-gel hybrid coatings can be used for improving and modifying the properties of wood-based materials. By selecting a proper precursor, wood can be made water-repellent, decay-, moisture- or UV-resistant. However, to control the barrier properties of sol-gel coatings on wood substrates against moisture uptake and weathering, an understanding of the surface morphology and chemistry of the deposited sol-gel coatings is needed. Mechanical pulp is used in the production of wood-containing printing papers. The physical and chemical fiber surface characteristics, as created in the chosen mechanical pulp manufacturing process, play a key role in controlling the properties of the end-use product. A detailed understanding of how process parameters influence fiber surfaces can help improve the cost-effectiveness of pulp and paper production. The current work focuses on the physico-chemical characterization of modified wood-based materials with surface-sensitive analytical tools. The overall objectives were, through advanced microscopy and chemical analysis techniques, (i) to collect versatile information about the surface structures of Norway spruce thermomechanical pulp fiber walls and to understand how they are influenced by the selected chemical treatments, and (ii) to clarify the effect of various sol-gel coatings on the surface structural and chemical properties of wood-based substrates. Special emphasis was placed on understanding the effect of sol-gel coatings on the water repellency of modified wood and paper surfaces. In the first part of the work, the effects of chemical treatment on the micro- and nano-scale surface structure of first-stage TMP latewood fibers from Norway spruce were investigated. The chemicals applied were buffered sodium oxalate and hydrochloric acid. The outer and inner fiber wall layers of the untreated and chemically treated fibers were analyzed separately by light microscopy, atomic force microscopy and field-emission scanning electron microscopy. The selected characterization methods enabled the effect of the different treatments on the fiber surface structure to be demonstrated both visually and quantitatively. The outer fiber wall areas appeared as intact bands surrounding the fiber and were clearly rougher than the areas of exposed inner fiber wall. The roughness of the outer fiber wall areas increased most in the sodium oxalate treatment. The results indicated the formation of more surface pores on the exposed inner fiber wall areas than on the corresponding outer fiber wall areas as a result of the chemical treatments. The hydrochloric acid treatment seemed to increase the surface porosity of the inner wall areas. In the second part of the work, three silane-based sol-gel hybrid coatings were selected in order to improve the moisture resistance of wood and paper substrates. The coatings differed from each other in terms of the alkyl (CH3–, CH3-(CH2)7–) and fluorocarbon (CF3–) chains attached to the trialkoxysilane sol-gel precursor. The sol-gel coatings were deposited by a wet coating method, i.e. spraying or spreading by brush. The effect of the sol-gel coatings on the surface structural and chemical properties of wood-based substrates was studied using advanced surface-analysis tools: atomic force microscopy, X-ray photoelectron spectroscopy and time-of-flight secondary ion mass spectrometry.
The results show that the applied sol-gel coatings, deposited as thin films or particulate coatings, have different effects on surface characteristics of wood and wood-based materials. The coating which has a long hydrocarbon chain (CH3-(CH2)7–) attached to the silane backbone (octyltriethoxysilane) produced the highest hydrophobicity for wood and wood-based materials.
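The AFM roughness comparisons mentioned above rest on a simple quantity, the root-mean-square (Rq) roughness of the height image. A short sketch of the standard definition is given below, using mean-plane removal only; the actual analysis pipeline of the study may include additional flattening steps.

import numpy as np

def rms_roughness(height_map):
    # Rq = sqrt(mean((z - <z>)^2)) over the imaged area; height_map is a
    # 2-D array of AFM heights (e.g. in nanometres).
    z = np.asarray(height_map, dtype=float)
    return np.sqrt(np.mean((z - z.mean())**2))

# example with a small synthetic height map (standard deviation ~5 nm)
print(rms_roughness(np.random.default_rng(0).normal(0.0, 5.0, size=(256, 256))))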
Abstract:
Nanotubes are among the most promising materials in modern nanotechnology, which makes this investigation highly relevant. In this work, the magnetic properties of multi-walled nanotubes on a polystyrene substrate are investigated using a SQUID quantum magnetometer. The main purpose was to obtain the magnetic field and temperature dependences of the magnetization and to compare them with existing theoretical models of magnetism in carbon-based structures. During data analysis, a mathematical algorithm for filtering the measured data was developed, because measurements with a quantum magnetometer produce large volumes of numerical data containing random errors; these errors arise from drift of the SQUID signal and from imperfections in different parts of the measurement station. The nanotube samples on the polystyrene substrate were also studied with an atomic force microscope. Traces of nanotube contours oriented in the horizontal plane were found on the surface; this feature was caused by the rolling method used to prepare the samples. A detailed comparison of the obtained dependences with the results of other studies on this topic allows some conclusions to be drawn about the nature of magnetism in the samples and underlines the relevance of this work.
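The abstract mentions a filtering algorithm for the SQUID data without detailing it. The sketch below shows one generic way such cleaning is often done (linear background removal used only to flag outliers, followed by running-median rejection); it is an illustration under those assumptions, not the algorithm developed in the thesis.

import numpy as np

def clean_squid_curve(field, moment, window=7, k=4.0):
    # Fit a linear trend and use the residuals to flag outliers: points
    # deviating from a local median by more than k median absolute
    # deviations are rejected; the surviving raw points are returned.
    field = np.asarray(field, float)
    moment = np.asarray(moment, float)
    resid = moment - np.polyval(np.polyfit(field, moment, 1), field)
    keep = np.ones(resid.size, dtype=bool)
    half = window // 2
    for i in range(resid.size):
        local = resid[max(0, i - half):i + half + 1]
        med = np.median(local)
        mad = np.median(np.abs(local - med)) + 1e-12
        keep[i] = abs(resid[i] - med) <= k * mad
    return field[keep], moment[keep]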
Abstract:
This thesis aims to find an effective way of conducting a target audience analysis (TAA) in the cyber domain. Two main focal points are addressed: the nature of the cyber domain and the method of the TAA. For the cyber domain, the objective is to identify the opportunities, restrictions and caveats that result from its digital and temporal nature; this is the environment in which the TAA method is examined in this study. As the TAA is an important step of any psychological operation and critical to its success, the method used must cover all the main aspects affecting the choice of a proper target audience. The first part of the research was done by sending an open-ended questionnaire to operators in the field of information warfare, both in Finland and abroad. As the results were inconclusive, the research was completed by assessing the applicability of the United States Army field manual FM 3-05.301 in the cyber domain via a theory-based content analysis. FM 3-05.301 was chosen because it presents a complete method for the TAA process. The findings were tested against the results of the questionnaire and recent scientific research in the field of psychology. The cyber domain was found to be "fast and vast", volatile and uncontrollable. Although governed by laws to some extent, the cyber domain is unpredictable by nature and cannot be controlled to any reasonable degree. The anonymity and lack of verification often present in digital channels mean that anyone can have an opinion, and any message sent may change or even become counterproductive to its original purpose. The TAA method of FM 3-05.301 is applicable in the cyber domain, although some parts of the method are outdated and are therefore suggested to be updated if used in that environment. The target audience categories of step two of the process were replaced by new groups that exist in the digital environment. The accessibility assessment (step eight) was also redefined, as in digital media the mere existence of a written text is typically not enough to convey the intended message to the target audience. The scientific studies made in computer science, psychology and sociology on the behavior of people in social media (and in the cyber domain overall) call for a more extensive remake of the TAA process. This falls, however, outside the scope of this work. It is thus suggested that further research be carried out on computer-assisted methods and a more thorough TAA process, utilizing the latest findings on human behavior.
Abstract:
Emerging markets have come to play a significant role in the world, not only due to their strong economic growth but because they have been able to foster an increasing number of innovative, high-technology-oriented firms. However, as the markets continue to change and develop, many companies in emerging markets still struggle with their competitiveness and innovativeness. To improve competitive capabilities, many scholars have come to favor interfirm cooperation, which is perceived to help companies access new knowledge and complementary resources and, by so doing, to enable them to catch up quickly with Western competitors. Despite numerous attempts by strategic management scholars, the research field remains very fragmented and lacks understanding of how and when interfirm cooperation contributes to firm performance and competitiveness in emerging markets. Furthermore, the reasons why interfirm R&D sometimes succeeds and fails at other times frequently remain unidentified. This thesis combines the extant literature on competitive and cooperative strategy, dynamic capabilities, and R&D cooperation while studying interfirm R&D relationships in and between Russian manufacturing companies. Employing primary survey data, the thesis presents numerous novel findings regarding the effect of R&D cooperation and of different types of R&D partner on firms' exploration and exploitation performance. Utilizing a competitive strategy framework enables these effects to be explained in more detail, especially why interfirm cooperation, regardless of its potential, has had only a modest effect on the general competitiveness of emerging market firms. This thesis contributes especially to the strategic management literature and presents a more holistic perspective on the usefulness of cooperative strategy in emerging markets. It provides a framework through which it is possible to assess the potential impacts of different R&D cooperation partners and to clarify the causal relationships between cooperation, performance, and long-term competitiveness.
Abstract:
This study applied a qualitative case study method to examine what benefits salespeople and their customers perceived to gain when sales representatives used a specific sales force automation tool that defined customer values and identified the segment that best fit each customer. The data, consisting of four interviews, were collected using semi-structured individual interviews and analyzed with a thematic analysis technique. The analysis revealed five salesperson-perceived benefits and four customer-perceived benefits. The salesperson-perceived benefits were improvements in customer knowledge, guidance of sales operations, salesperson-customer relationship building, time management and improved performance. The customer-perceived benefits were information transmission, improved customer service, customer-salesperson relationship building and development of operations, the last of which was identified as a new, previously unrecognized customer benefit.
Abstract:
The objective of this thesis was to study the effect of a pulsed electric field on the preparation of TiO2 nanoparticles via the sol-gel method. The literature part deals with the properties of different TiO2 crystal forms, the principles of photocatalysis, the sol-gel method and pulsed electric field processing. It was expected that the pulsed electric field would have an influence on the crystallite size, specific surface area, polymorphism and photocatalytic activity of the produced particles. TiO2 samples were prepared using different frequencies and treatment times of the pulsed electric field. The properties of the produced TiO2 particles were examined by X-ray diffraction (XRD), Raman spectroscopy and BET surface area analysis. The photocatalytic activities of the produced TiO2 particles were determined by using them as photocatalysts for the degradation of formic acid under UVA light. The photocatalytic activities of the samples produced with the sol-gel method were also compared with the commercial TiO2 powder Aeroxide® (Evonik Degussa GmbH). The pulsed electric field did not have an effect on the morphology of the particles. Results from the XRD and Raman analyses showed that all produced TiO2 samples were pure anatase. However, the pulsed electric field did have an effect on the crystallite size, specific surface area and photocatalytic activity of the TiO2 particles. Generally, the crystallite sizes were smaller, the specific surface areas larger and the initial formic acid degradation rates higher for samples produced by applying the pulsed electric field. The higher photocatalytic activities were attributed to the larger surface areas and smaller crystallite sizes. However, for all of the TiO2 samples produced by the sol-gel method, the initial formic acid degradation rates were significantly slower than with the commercial TiO2 powder.
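The crystallite sizes referred to above are, in XRD practice, commonly estimated from peak broadening with the Scherrer equation; a small sketch follows. The 25.3° anatase (101) reflection and the 0.9° FWHM used in the example are illustrative values, not results from the thesis.

import numpy as np

def scherrer_crystallite_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    # Scherrer equation D = K * lambda / (beta * cos(theta)); beta is the
    # peak FWHM in radians (instrumental broadening assumed already
    # subtracted), lambda defaults to Cu K-alpha.
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# e.g. an anatase (101) reflection near 2-theta = 25.3 deg with 0.9 deg FWHM
print(scherrer_crystallite_size(25.3, 0.9))  # ~9 nm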
Abstract:
The absolute nodal coordinate formulation was originally developed for the analysis of structures undergoing large rotations and deformations. This dissertation proposes several enhancements to finite beam and plate elements based on the absolute nodal coordinate formulation. The main scientific contribution of this thesis lies in the development of elements based on the absolute nodal coordinate formulation that do not suffer from commonly known numerical locking phenomena. These elements can be used in the future in a number of practical applications, for example, the analysis of biomechanical soft tissues. This study presents several higher-order Euler–Bernoulli beam elements, a simple method to alleviate Poisson's and transverse shear locking in gradient-deficient plate elements, and a nearly locking-free gradient-deficient plate element. The gradient-deficient plate elements based on the absolute nodal coordinate formulation developed in this dissertation exhibit most of the common numerical locking phenomena encountered when a continuum-mechanics-based description of the elastic energy is used. Thus, with these fairly straightforwardly formulated elements, comprising only position and transverse direction gradient degrees of freedom, the pathologies of and remedies for the numerical locking phenomena are presented in a clear and understandable manner. The analysis of the Euler–Bernoulli beam elements developed in this study shows that the choice of higher-order gradients as nodal degrees of freedom leads to a smoother strain field, which improves the rate of convergence.
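For readers unfamiliar with the formulation, the kinematic idea behind an absolute nodal coordinate beam element can be summarized by the standard interpolation below (textbook form with generic notation; the elements of the thesis add or omit gradient degrees of freedom relative to this):

\mathbf{r}(x) = \mathbf{S}(x)\,\mathbf{e}, \qquad
\mathbf{e} = \begin{bmatrix}
\mathbf{r}_1^{\mathsf{T}} & \left(\tfrac{\partial \mathbf{r}_1}{\partial x}\right)^{\mathsf{T}} &
\mathbf{r}_2^{\mathsf{T}} & \left(\tfrac{\partial \mathbf{r}_2}{\partial x}\right)^{\mathsf{T}}
\end{bmatrix}^{\mathsf{T}},

where \mathbf{r}(x) is the global position of a point on the element, \mathbf{S}(x) contains the shape functions, and the nodal vector \mathbf{e} holds the absolute positions \mathbf{r}_i and position gradients of the two nodes; higher-order elements append higher derivatives to \mathbf{e}.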
Abstract:
This thesis models the squeezing of the tube and computes the fluid motion in a peristaltic pump. The simulations have been conducted using the COMSOL Multiphysics FSI module. The model is set up as axisymmetric, with several simulation cases to provide a clear understanding of the results. The model captures the total displacement of the tube, the velocity magnitude, and the average pressure fluctuation of the fluid motion. The relevant mathematical and physical concepts are also reviewed and discussed together with their practical applications. In order to solve the problem within the available computational resources, the mass balance and momentum equations, finite element concepts, the arbitrary Lagrangian-Eulerian method, one-way and two-way coupling methods, and the COMSOL Multiphysics simulation setup are reviewed and briefly described.
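The arbitrary Lagrangian-Eulerian (ALE) description mentioned above amounts to writing the incompressible flow equations with the convective velocity measured relative to the moving mesh; in standard form (not quoted from the thesis):

\rho \left.\frac{\partial \mathbf{u}}{\partial t}\right|_{\chi}
+ \rho \left[(\mathbf{u} - \mathbf{u}_{\mathrm{mesh}}) \cdot \nabla\right]\mathbf{u}
= -\nabla p + \mu \nabla^{2}\mathbf{u} + \mathbf{f}, \qquad
\nabla \cdot \mathbf{u} = 0,

where \mathbf{u}_{\mathrm{mesh}} is the mesh velocity imposed by the deforming tube wall; in two-way coupling the fluid traction is in turn applied as a boundary load on the structure.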
Abstract:
A small break loss-of-coolant accident (SBLOCA) is one of the accident scenarios investigated for nuclear power plant (NPP) operation. Such an accident can be analyzed using an experimental facility and the TRACE thermal-hydraulic system code. A series of SBLOCA experiments was carried out on the Parallel Channel Test Loop (PACTEL) facility, operated jointly by the Technical Research Centre of Finland (VTT Energy) and Lappeenranta University of Technology (LUT), in order to investigate two-phase phenomena related to a VVER-type reactor. The experiments and a TRACE model of the PACTEL facility are described in this work, together with a description of the TRACE code and its main field equations. Calculations of the SBLOCA series are performed, after which the thesis discusses the validation of TRACE and concludes with an assessment of the usefulness and accuracy of the code in calculating small breaks.
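As a reminder of what the main field equations of such a system code look like, the generic two-fluid mass balance for phase k (gas or liquid) is shown below in its textbook form (the full TRACE equation set adds the corresponding momentum and energy equations with interfacial transfer terms; this is not a quotation of the TRACE documentation):

\frac{\partial (\alpha_k \rho_k)}{\partial t} + \nabla \cdot (\alpha_k \rho_k \mathbf{v}_k) = \Gamma_k,

where \alpha_k is the phase volume fraction, \rho_k the phase density, \mathbf{v}_k the phase velocity and \Gamma_k the interphase mass transfer rate due to evaporation or condensation.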
Abstract:
Gravitational phase separation is a common unit operation found in most large-scale chemical processes. The need for phase separation can arise e.g. from product purification or protection of downstream equipment. In gravitational phase separation, the phases separate without the application of an external force. This is achieved in vessels where the flow velocity is lowered substantially compared to pipe flow. If the velocity is low enough, the denser phase settles towards the bottom of the vessel while the lighter phase rises. To find optimal configurations for gravitational phase separator vessels, several different geometrical and internal design features were evaluated based on simulations using the OpenFOAM computational fluid dynamics (CFD) software. The studied features included inlet distributors, vessel dimensions, demister configurations and gas phase outlet configurations. The simulations were conducted as single-phase steady-state calculations. For comparison, additional simulations were performed as dynamic single- and two-phase calculations. The steady-state single-phase calculations provided indications of the preferred configurations for most of the features mentioned above. The results of the dynamic simulations supported the use of the computationally faster steady-state model as a practical engineering tool. However, the two-phase model provides more realistic results, especially for flows whose characteristics are not determined by a single phase.
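A quick way to see why lowering the velocity in the vessel enables separation is the Stokes settling estimate often used in preliminary separator sizing; the sketch below uses illustrative fluid properties, not values from the simulated cases.

def stokes_settling_velocity(d, rho_d, rho_c, mu_c, g=9.81):
    # Terminal velocity of a small droplet/particle in creeping flow:
    # v = g * d^2 * (rho_d - rho_c) / (18 * mu_c), valid for Re_p below ~1.
    return g * d**2 * (rho_d - rho_c) / (18.0 * mu_c)

# e.g. a 20-micron water droplet settling through a dense gas phase
v = stokes_settling_velocity(d=20e-6, rho_d=998.0, rho_c=5.0, mu_c=1.2e-5)
print(v)  # ~0.018 m/s; the upward gas velocity in the vessel must stay below this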
Abstract:
Studying the testis is complex, because the tissue has a very heterogeneous cell composition and its structure changes dynamically during development. In the reproductive field, the cell composition is traditionally studied by morphometric methods such as immunohistochemistry and immunofluorescence. These techniques provide accurate quantitative information about cell composition, cell-cell associations and the localization of the cells of interest. However, the sample preparation, processing, staining and data analysis are laborious and may take several working days. Flow cytometry protocols coupled with DNA stains have played an important role in providing quantitative information on testicular cell populations in ex vivo and in vitro studies. Nevertheless, the addition of specific cell markers such as intracellular antibodies would allow more specific identification of cells of crucial interest during spermatogenesis. In this study, adult Sprague-Dawley rats were used for optimization of the flow cytometry protocol. Specific steps within the protocol were optimized to obtain a single-cell suspension representative of the cell composition of the starting material. The fixation and permeabilization procedures were optimized to be compatible with DNA stains and fluorescent intracellular antibodies. Optimization was achieved by quantitative analysis of specific parameters such as the recovery of meiotic cells, the amount of debris and comparison of the proportions of the various cell populations with already published data. As a result, a new and fast flow cytometry method coupled with DNA staining and intracellular antigen detection was developed. This new technique is suitable for analysis of population behavior and of specific cells during postnatal testis development and spermatogenesis in rodents. The rapid protocol recapitulated the known vimentin and γH2AX protein expression patterns during rodent testis ontogenesis. Moreover, the assay was applicable to phenotype characterization of the SCRbKO and E2F1KO mouse models.
Abstract:
This master's thesis evaluates the competitors in the welding quality management software market. The competitive field is new, and there is no precise information about what kinds of competitors are on the market. Welding quality management software helps companies guarantee high quality. The software ensures high quality by verifying that the welder is qualified and follows the welding procedure specifications and the given parameters. In addition, the software collects all data from the welding process and generates the required documents from it. The theoretical part of the thesis consists of a literature review of solution business, competitor analysis and competitive forces theory, and welding quality management. The empirical part is a qualitative study in which competing welding quality management software products are examined and their users are interviewed. As a result, the thesis produces a new competitor analysis model for welding quality management software. With the model, the software products can be rated on the basis of the primary and secondary features they offer. Secondly, the thesis analyzes the current competitive situation by applying the newly developed competitor analysis model.
Abstract:
An exchange traded fund (ETF) is a financial instrument that tracks some predetermined index. Since their initial establishment in 1993, ETFs have grown in importance in the field of passive investing. The main reason for the growth of the ETF industry is that ETFs combine the benefits of stock investing and mutual fund investing. Although ETFs resemble mutual funds in many ways, there are also many differences, and ETFs differ not only from mutual funds but also among each other. ETFs can be divided into two categories, i.e. market capitalisation ETFs and fundamental (or strategic) ETFs, and further into subcategories depending on their fundament basis. ETFs are a useful tool for diversification, especially for a long-term investor. Although the economic importance of ETFs has risen drastically during the past 25 years, the differences and risk-return characteristics of fundamental ETFs have so far remained a rather unstudied area. In effect, no previous research comparing market capitalisation and fundamental ETFs was found during the research process. For its part, this thesis seeks to fill this research gap. The studied data consist of 50 market capitalisation ETFs and 50 fundamental ETFs. The fundaments on which the indices tracked by the fundamental ETFs are based were neither limited nor segregated into subcategories; the two types of ETF were studied at an aggregate level as two research groups. The dataset ranges from June 2006 to December 2014, with 103 monthly observations, and was gathered using the Bloomberg Terminal. The analysis was conducted as an econometric performance analysis. In addition to other econometric measures, the methods used in the performance analysis included modified Value-at-Risk, the modified Sharpe ratio and the Treynor ratio. The results supported the hypothesis that passive market capitalisation ETFs outperform active fundamental ETFs in terms of risk-adjusted returns, though the difference is rather small. Nevertheless, when taking into account the higher overall trading costs of the fundamental ETFs, the underperformance gap widens. According to the research results, market capitalisation ETFs are a recommendable diversification instrument for a long-term investor. In addition to better risk-adjusted returns, passive ETFs are more transparent and the bases of their underlying indices are simpler than those of fundamental ETFs. ETFs are still a young financial innovation and hence data are scarcely available. In future research, it would be valuable to study the differences in risk-adjusted returns also between the subcategories of fundamental ETFs.
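The performance measures named above can be written compactly. The sketch below follows the common Cornish-Fisher definition of modified Value-at-Risk and the usual Treynor ratio; the confidence level, risk-free rate and any data fed to it are illustrative, not those of the study.

import numpy as np
from scipy import stats

def modified_var(returns, alpha=0.05):
    # Cornish-Fisher (modified) VaR of a return series, as a positive loss.
    r = np.asarray(returns, float)
    mu, sigma = r.mean(), r.std(ddof=1)
    s, k = stats.skew(r), stats.kurtosis(r)   # k = excess kurtosis
    z = stats.norm.ppf(alpha)
    z_cf = (z + (z**2 - 1) * s / 6
              + (z**3 - 3 * z) * k / 24
              - (2 * z**3 - 5 * z) * s**2 / 36)
    return -(mu + z_cf * sigma)

def modified_sharpe(returns, rf=0.0, alpha=0.05):
    # Mean excess return per unit of modified VaR.
    r = np.asarray(returns, float)
    return (r.mean() - rf) / modified_var(r, alpha)

def treynor(returns, market_returns, rf=0.0):
    # Mean excess return per unit of market beta.
    r, m = np.asarray(returns, float), np.asarray(market_returns, float)
    beta = np.cov(r, m, ddof=1)[0, 1] / np.var(m, ddof=1)
    return (r.mean() - rf) / beta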