881 results for Interaction modeling. Model-based development. Interaction evaluation.
Abstract:
Managing the thermal conditions of rooms is an important part of building services engineering design. Room thermal conditions are usually modelled with methods in which the thermal dynamics are computed at a single calculation point in the room air and wall by wall in the structures, and typically only the room air temperature is examined. The goal of this Master's thesis was to develop a simulation model for room thermal conditions in which the thermal dynamics of the structures are computed non-stationarily with an energy analysis calculation, while the air flow field in the room is modelled stationarily at a selected moment in time with computational fluid dynamics. This yields distributions of the quantities essential for design, typically air temperature and velocity. The simulation results were compared with measurements made in test rooms and proved accurate enough for building services design. Two rooms requiring more detailed modelling than usual were simulated with the model. Comparative calculations were made with different turbulence models, discretization accuracies and grid densities. To illustrate the simulation results, a customer report presenting the essentials for design was drawn up. The simulation model provided additional information especially on thermal stratification, which has typically been estimated based on experience. As background for the model development, the indoor climate of buildings, thermal conditions, calculation methods and commercial programs suitable for the modelling were reviewed. The simulation model provides more accurate and detailed information for the design of thermal condition management. Remaining problems in using the model are the long computation time of the flow calculation, turbulence modelling, the exact definition of the boundary conditions of supply air devices, and the convergence of the calculation. The developed simulation model offers a good basis for developing and combining CFD and energy analysis programs into a user-friendly building services design tool.
Abstract:
The purpose of this study is to examine what dynamic organizational communication is, what kind of worldview it springs from, and what points of contact it has with the three-dimensional organization model of knowledge management. The study is a theoretical synthesis that also aims to find practical methods for supporting dynamic organizational communication, organizational dynamism and self-renewal. Its theoretical starting points are Pekka Aula's theory of dynamic organizational communication and the three-dimensional organization model created mainly by Pirjo Ståhle, together with the associated theory proposal on self-renewing systems. In the practice-oriented part of the study, Finnish textbooks on organizational communication are analysed in light of the theory of dynamic organizational communication. Finally, a model is formed that supports dynamic organizational communication and a dynamic organization. The study shows that, in the view of several theorists, organizations should abandon their outdated organizational structures and develop their working environments by making human resources the first priority and by concentrating on interaction between employees and on the exchange of knowledge and information.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it came into use in the 1970s. Today it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even though CT brings a direct benefit to patient healthcare, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies requiring several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation owing to their faster metabolism.
In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced with those algorithms. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in continuous, close collaboration with radiologists. The work began by tackling how to characterise image quality in musculo-skeletal examinations. We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern when reducing patient dose in abdominal investigations. Knowing that alternatives to the classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use.
Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, thus taking advantage of these models' incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality with a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work has clarified the role of medical physicists in CT imaging: the standard metrics of the field remain important for assessing a unit's compliance with legal requirements, but model observers are the way to go when optimising imaging protocols.
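As a concrete illustration of the task-based approach described above, the following sketch computes a detectability index d' for a simple non-prewhitening (NPW) model observer on synthetic images. The disc signal, noise level and image size are hypothetical choices for illustration, not the data, observer variants or code used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-contrast disc signal on a 32x32 background
n = 32
y, x = np.mgrid[:n, :n]
signal = ((x - n // 2) ** 2 + (y - n // 2) ** 2 <= 5 ** 2).astype(float) * 0.5

def npw_statistic(image, template):
    """Non-prewhitening observer: correlate the image with the expected signal."""
    return float(np.sum(image * template))

def detectability(n_trials=2000, noise_sigma=1.0):
    """Estimate d' from the test-statistic distributions of
    signal-present and signal-absent images with white noise."""
    t_present, t_absent = [], []
    for _ in range(n_trials):
        noise = rng.normal(0.0, noise_sigma, (n, n))
        t_present.append(npw_statistic(signal + noise, signal))
        t_absent.append(npw_statistic(noise, signal))
    t_p, t_a = np.array(t_present), np.array(t_absent)
    return (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))

d_prime = detectability()
```

Lowering the dose in a real study raises the noise level, which shrinks d' for the same signal; that is the quantity a task-based comparison of reconstruction algorithms tracks.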
Abstract:
The chemistry of gold dissolution in alkaline cyanide solution has continually received attention, and new rate equations for gold leaching are still being developed. The effect of leaching parameters on gold cyanidation is studied in this work in order to optimize the leaching process. A gold leaching model, based on the well-known shrinking-core model, is presented. It is proposed that the reaction takes place at the surface of the reacting particle, which shrinks continuously as the reaction proceeds. The model parameters are estimated by comparing experimental data and simulations. The experimental data used in this work were obtained from Ling et al. (1996) and de Andrade Lima and Hodouin (2005). Two different rate equations, one of which accounts for the unreacted amount of gold, are investigated. It is shown that the surface reaction is the rate-controlling step, since there is no internal diffusion limitation. The model considering the effect of non-reacting gold shows that the reaction orders are consistent with the experimental observations reported by Ling et al. (1996) and de Andrade Lima and Hodouin (2005). However, it should be noted that the model obtained in this work assumes no side reactions, no solid-liquid mass transfer resistance, and no temperature effects.
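The shrinking-core idea with surface-reaction control can be sketched in a few lines. With the reagent concentrations held constant, the particle radius shrinks linearly and the converted fraction of a sphere follows from the remaining volume; the rate constant and initial radius below are arbitrary illustrative values, not the parameters estimated in this work.

```python
import numpy as np

def shrinking_core_conversion(t, k_s=1e-3, r0=1.0):
    """Surface-reaction-controlled shrinking sphere (illustrative sketch):
    dr/dt = -k_s  =>  r(t) = r0 - k_s * t, clipped at full consumption,
    and the converted fraction is X = 1 - (r / r0)**3."""
    r = np.clip(r0 - k_s * np.asarray(t, dtype=float), 0.0, None)
    return 1.0 - (r / r0) ** 3
```

Fitting such a model amounts to choosing k_s (itself a function of cyanide and oxygen concentrations in the rate equations investigated) so that X(t) matches the measured leaching curves.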
Abstract:
In modern-day organizations there is an increasing number of IT devices such as computers, mobile phones and printers. These devices can be located and maintained using specialized IT management applications. The costs related to a single device accumulate from various sources and are normally categorized as direct costs, such as hardware costs, and indirect costs, such as labor costs. These costs can be stored in a configuration management database and presented to users with web-based development tools such as ASP.NET. The overall cost of an IT device during its lifecycle can be ten times the purchase price of the product, and the ability to define and reduce these costs can save organizations a noticeable amount of money. This Master's thesis introduces the research field of IT management and defines a custom framework model, based on Information Technology Infrastructure Library (ITIL) best practices, designed to be implemented as part of an existing IT management application for defining and presenting IT costs.
Abstract:
There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the financial outlay required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations for an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process; the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were done with the commercial software Fluent, and user-defined functions were added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a capillary to a venule showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole. Furthermore, the result corresponds to the experimental observation that the RBC deforms during its movement.
The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching vessels. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, heat transfer increases further compared with cases where the field is parallel to the gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that temperature and the initial concentration of the magnetic phase make the maximum and minimum contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
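Correlations of the kind referred to for porous media express a dimensionless pressure drop as an Ergun-type friction factor with a viscous and an inertial term. The sketch below uses that generic functional form; the constants are common textbook-style values inserted for illustration, not the ones fitted by Macdonald et al. (1979) or used in this work.

```python
def friction_factor(re_pore, a=180.0, b=1.8):
    """Ergun-type dimensionless pressure drop (illustrative sketch):
    f = a / Re + b, where a/Re captures viscous losses at low Reynolds
    number and b the inertial losses that dominate at high Reynolds number.
    The constants a and b here are hypothetical placeholder values."""
    return a / re_pore + b
```

Plotting f against Re from a simulation and comparing it with such a correlation is the standard validation step the abstract describes.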
Abstract:
The article describes some concrete problems that were encountered when writing a two-level model of Mari morphology. Mari is an agglutinative Finno-Ugric language spoken in Russia by about 600 000 people. The work was begun in the 1980s on the basis of K. Koskenniemi's Two-Level Morphology (1983), but in the latest stage R. Beesley's and L. Karttunen's Finite State Morphology (2003) was used. Many of the problems described in the article concern the inexplicitness of the rules in Mari grammars and the lack of information about the exact distribution of some suffixes, e.g. enclitics. The Mari grammars usually give complete paradigms for a few unproblematic verb stems, whereas the difficult or unclear forms of certain verbs are only superficially discussed. Another example of phenomena that are poorly described in grammars is the way suffixes with an initial sibilant combine with stems ending in a sibilant. The help of informants and searches in electronic corpora were used to overcome such difficulties in the development of the two-level model of Mari. Variation in the order of plural markers, case suffixes and possessive suffixes is a typical feature of Mari. The morphotactic rules constructed for Mari declensional forms tend to be recursive, and their productivity must be limited by some technical device, such as filters. In the present model, certain plural markers were treated like nouns. The positional and functional versatility of the possessive suffixes can be regarded as the most challenging phenomenon in attempts to formalize Mari morphology. The Cyrillic orthography used in the model also caused problems. For instance, a Cyrillic letter may represent a sequence of two sounds, the first being part of the word stem while the other belongs to a suffix. In some cases, letters for voiced consonants are also generalized to represent voiceless consonants.
Such orthographical conventions distance a morphological model based on orthography from the actual (morpho)phonological processes in the language.
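A two-level-style realization rule of the kind discussed above can be sketched as a tiny string rewriting function over a stem-suffix boundary. The sibilant inventory, the epenthetic vowel and the example forms below are invented purely for illustration; they are not actual Mari data or rules from the model.

```python
# Hypothetical sibilant set; real two-level rules operate over an
# alphabet of lexical/surface symbol pairs compiled into a transducer.
SIBILANTS = set("szš")

def join(stem, suffix, epenthetic="e"):
    """Toy surface-realization rule (invented, not actual Mari):
    insert an epenthetic vowel when a suffix-initial sibilant
    would directly follow a stem-final sibilant."""
    if stem and suffix and stem[-1] in SIBILANTS and suffix[0] in SIBILANTS:
        return stem + epenthetic + suffix
    return stem + suffix
```

In a real two-level implementation the same constraint would be stated declaratively as a rule and compiled, together with the morphotactics, into a single finite-state transducer rather than applied procedurally like this.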
Abstract:
A new damage model based on a micromechanical analysis of cracked [±θ/90n]s laminates subjected to multiaxial loads is proposed. The model predicts the onset and accumulation of transverse matrix cracks in uniformly stressed laminates, the effect of matrix cracks on the stiffness of the laminate, and the ultimate failure of the laminate. The model also accounts for the effect of ply thickness on ply strength. Predictions relating the elastic properties of several laminates to multiaxial loads are presented.
Abstract:
The goal of this thesis is to develop for Basware Finland a method for evaluating the success of software projects, with which the company can find out whether the systems it has delivered have truly met the goals the customer set for the project. The starting point is a comprehensive review of the literature on project success and, on that basis, a model of the factors affecting the measurement of project success. The model shows which factors essentially affect how project success is monitored and how an organization carrying out projects should begin developing a system for evaluating project success. The main result of the thesis is a method with which Basware can monitor the success of the information system projects it delivers from the customer's point of view. In addition, the project feedback survey Basware currently uses was analysed, and improvements to it were proposed on this basis.
Abstract:
The objective of the work has been to study why systems thinking should be used in combination with TQM, what the main benefits of the integration are, and how it could best be done. The work analyzes the development of systems thinking and TQM over time and the main differences between them. It defines prerequisites for adopting a systems approach and the organizational factors that embody the development of an efficient learning organization. The work proposes a model, based on a combination of an interactive management model and redesign, to be used for applying the systems approach with TQM in practice. The results indicate that there are clear differences between systems thinking and TQM which justify their combination. The systems approach provides an additional, complementary perspective to quality management. TQM focuses on optimizing operations at the operational level, while interactive management and redesign of the organization focus on optimizing operations at the conceptual level, providing a holistic system for value generation. The empirical study demonstrates the applicability of the proposed model in one case study company, but its application is tenable and possible beyond this particular company as well. System dynamics modeling and other systems-based techniques, such as cognitive mapping, are useful methods for increasing understanding of and learning about the behavior of systems. The empirical study also emphasizes the importance of using a proper early warning system.
Abstract:
The main goal of the study is to produce a model for developing the operations of Kaartin Jääkärirykmentti, one of the units of the Finnish Army. The sub-goals of the study are to create a general model of the areas of operational development in a unit, to construct on the basis of this model a questionnaire for identifying development targets, and to use the questionnaire to determine the unit's operational development needs. In the study, operational development is understood as the development of functions or ways of working, and it is examined from the perspective of quality management. The study is a qualitative empirical study aiming at the creation of a new model. The data collection methods were a literature review, semi-structured interviews and a survey. The literature review consisted of a study of quality management theory, which was complemented with two expert interviews, one with an expert in quality management theory and one with a practitioner. In addition, the study examined the Finnish Defence Forces' instructions on quality management and operational development, represented by guidance documents of the Defence Command and the Army Command as well as the instructions of one example unit. Based on quality management theory and the Defence Forces' instructions, a general model of the areas of operational development in a unit was formed; this was also the first sub-goal of the study. According to the model, operational development consists of standardization of operations, evaluation of operations, continuous improvement of operations, and innovativeness. Based on these four areas, a questionnaire was created in such a way that the same questionnaire could be used both to identify possible development needs in the current operations of Kaartin Jääkärirykmentti and, at a later stage, to measure the development of operations. Constructing the questionnaire was the second sub-goal of the study.
Using the questionnaire, a survey was carried out to determine the development needs; this was the third sub-goal of the study. Based on the survey results, continuous improvement of operations had the most room for development. On the basis of the results, an operational development model for Kaartin Jääkärirykmentti was formed, tied to the annual planning rhythm and based on self-assessment and on one continuous improvement model.
Abstract:
Traditionally, limestone has been used for flue gas desulfurization in fluidized bed combustion. Recently, several studies have examined the use of limestone in applications that enable the removal of carbon dioxide from combustion gases, such as calcium looping technology and oxy-fuel combustion. In these processes interlinked limestone reactions occur, but the reaction mechanisms and kinetics are not yet fully understood. To examine these phenomena, analytical and numerical models have been created. In this work, the limestone reactions were studied with the aid of a one-dimensional numerical particle model. The model describes a single limestone particle in the process as a function of time: the progress of the reactions and the mass and energy transfer within the particle. The model-based results were compared with experimental laboratory-scale BFB results. It was observed that increasing the temperature from 850 °C to 950 °C enhanced the calcination but no longer improved the sulfate conversion. A higher sulfur dioxide concentration accelerated the sulfation reaction, and based on the modeling, the sulfation is first order with respect to SO2, while the reaction order of O2 appears to approach zero at high oxygen concentrations.
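The reported reaction orders can be captured by a simple saturation-type rate form: first order in SO2, with an apparent O2 order that falls from about one at low oxygen concentration to about zero at high concentration. The rate constant and saturation parameter below are hypothetical, and this is only one possible functional form consistent with such observations, not the rate law of the particle model in the work.

```python
def sulfation_rate(c_so2, c_o2, k=1.0, K=0.05):
    """Illustrative sulfation rate (hypothetical parameters):
    r = k * C_SO2 * C_O2 / (K + C_O2).
    First order in SO2; the O2 dependence saturates, so the apparent
    O2 reaction order tends to zero when C_O2 >> K."""
    return k * c_so2 * c_o2 / (K + c_o2)
```

At high oxygen concentration the rate reduces to r ≈ k * C_SO2, reproducing the observed zero apparent order in O2.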
Abstract:
Prerequisites and effects of proactive and preventive psycho-social student welfare activities in Finnish preschool and elementary school were of interest in the present thesis. So far, Finnish student welfare work has mainly focused on interventions and individuals, and the considerable possibilities to enhance the well-being of all students as a part of everyday school work have not been fully exploited. Consequently, three goals were set in this thesis: (1) to present concrete examples of proactive and preventive psycho-social student welfare activities in Finnish basic education; (2) to investigate measurable positive effects of proactive and preventive activities; and (3) to investigate the implementation of proactive and preventive activities in ecological contexts. Two prominent phenomena of the preschool and elementary school years—transition to formal schooling and school bullying—were chosen as examples of critical situations that are appropriate targets for proactive and preventive psycho-social student welfare activities. Until lately, the procedures concerning both school transitions and school bullying have been rather problem-focused and reactive in nature. Theoretically, we lean on Bronfenbrenner and Morris's bioecological model of development, with its concentric micro-, meso-, exo- and macrosystems. Data were drawn from two large-scale research projects, the longitudinal First Steps Study: Interactive Learning in the Child–Parent–Teacher Triangle, and the Evaluation Study of the National Antibullying Program KiVa. In Study I, we found that the academic skills of children from preschool–elementary school pairs that implemented several supportive activities during the preschool year developed more quickly from preschool to Grade 1 than the skills of children from pairs that used fewer practices.
In Study II, we focused on the possible effects of proactive and preventive actions on teachers and found that participation in the KiVa antibullying program influenced teachers' self-evaluated competence to tackle bullying. In Studies III and IV, we investigated factors that affect the implementation rate of these proactive and preventive actions. In Study III, we found that the principal's commitment and support for antibullying work has a clear-cut positive effect on implementation adherence of the student lessons of the KiVa antibullying program: the more teachers experience support for and commitment to antibullying work from their principal, the more they report having covered KiVa student lessons and topics. In Study IV, we wanted to find out why some schools implement several useful and inexpensive transition practices, whereas other schools use only a few of them. We were interested in broadening the scope and looking at local-level (exosystem) qualities, and, in fact, local-level activities and guidelines, along with the teacher-reported importance of the transition practices, were the only factors significantly associated with the implementation rate of transition practices between elementary schools and partner preschools. The teacher- and school-level factors available in this study turned out to be mostly non-significant. To summarize, the results confirm that school-based promotion and prevention activities may have beneficial effects not only on students but also on teachers. Second, various top-down processes, such as engagement at the level of elementary school principals or local administration, may enhance the implementation of these beneficial activities. The main message is that when aiming to support the lives of children, the primary focus should be on adults. In the future, the promotion of psychosocial well-being and the intrinsic value of inter- and intrapersonal skills need to be strengthened in the Finnish educational systems.
Future research efforts in student welfare and school psychology, as well as focused training for psychologists in educational contexts, should be encouraged in the departments of psychology and education in Finnish universities. Moreover, a specific research centre for school health and well-being should be established.
Abstract:
The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in currents of kiloampere magnitude. An alternative for increasing the power level without raising the voltage level is provided by multiphase machines. Multiphase machines are used for instance in ship propulsion systems, aerospace applications, electric vehicles, and other high-power applications, including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases, which degrade the drive performance unless they are properly considered. Certain multiphase machines also suffer from high current harmonics, which are easily generated because of the small impedance of the current paths of the harmonic components. However, multiphase machines provide special characteristics compared with their three-phase counterparts: they have better fault tolerance and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased, and the harmonic frequency of the torque ripple increased, by an appropriate multiphase configuration. Increasing the number of phases also makes it possible to obtain more torque per RMS ampere for the same volume and thus to increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on diagonalization of the inductance matrix. The double-star machine is a special type of multiphase machine: its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is treated as a parameter.
The diagonalization of the inductance matrix results in a simplified model structure in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame in which they can be easily controlled. The work also presents methods for determining the machine inductances by finite-element analysis and on-site with voltage-source inverters. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine whose winding sets are displaced by 30 electrical degrees. The derived transformation and, consequently, the decoupled d–q machine model are shown to model the behavior of an actual machine with acceptable accuracy. Thus, the proposed model is suitable for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
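The decoupling step rests on a standard linear-algebra fact: a symmetric inductance matrix is diagonalized by an orthogonal transformation, which removes the mutual (off-diagonal) coupling terms. The toy sketch below shows this with a generic eigendecomposition on an invented 2×2 per-unit matrix; the actual machine model involves larger matrices and a specific, physically motivated transformation, so this is only the mathematical idea, not the thesis's transformation.

```python
import numpy as np

# Hypothetical per-unit inductances of two magnetically coupled winding sets
L_self, L_mut = 1.0, 0.3
L = np.array([[L_self, L_mut],
              [L_mut,  L_self]])

# For a symmetric matrix, the orthogonal eigenvector matrix T diagonalizes it:
# T.T @ L @ T is diagonal, i.e. the transformed "windings" are decoupled.
eigvals, T = np.linalg.eigh(L)
L_diag = T.T @ L @ T
```

In the transformed coordinates each equation involves only its own inductance (here 0.7 and 1.3 per unit), which is what makes an independent d–q control design per reference frame possible.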
Abstract:
Fluid particle breakup and coalescence are important phenomena in a number of industrial flow systems. This study deals with gas-liquid bubbly flow in a wastewater cleaning application. A three-dimensional geometric model of the dispersion water system was created in the ANSYS CFD meshing software, and an unsteady numerical study of the system was then carried out in the ANSYS FLUENT CFD software. A single-phase water flow case was set up to calculate the entire flow field using the RNG k-epsilon turbulence model based on the Reynolds-averaged Navier-Stokes (RANS) equations. The bubbly flow case was based on a coupled computational fluid dynamics - population balance model (CFD-PBM) approach, in which bubble breakup and coalescence were considered to determine the evolution of the bubble size distribution. The obtained results are regarded as steps toward optimization of the cleaning process and will be analyzed in order to make the process more efficient.
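The breakup-coalescence bookkeeping at the heart of a population balance model can be sketched with just two bubble size classes, where one large bubble breaks into two small ones and two small bubbles coalesce into one large one. The rate constants and the binary breakup/coalescence rules below are invented for illustration and are far simpler than the kernels used in a CFD-PBM solver.

```python
def pbm_step(n_small, n_large, dt, kb=0.5, kc=0.1):
    """One explicit Euler step of a two-class population balance toy model
    (illustrative rates): breakup turns one large bubble into two small ones;
    coalescence turns two small bubbles into one large one. Taking the large
    bubble volume as twice the small one, total gas volume is conserved."""
    breakup = kb * n_large          # breakup events per unit time
    coalesce = kc * n_small ** 2    # coalescence events per unit time
    n_small += dt * (2.0 * breakup - 2.0 * coalesce)
    n_large += dt * (coalesce - breakup)
    return n_small, n_large
```

Iterating such a step drives the two class populations toward a balance between breakup and coalescence, which is the discrete analogue of the bubble size distribution evolving toward equilibrium in the full model.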