919 results for Model-based Categorical Sequence Clustering
Abstract:
The rapid development of new technologies such as digital medical imaging has led to an expansion of brain function studies. One of the key methodological issues in these studies is comparing neuronal activation between individuals. In this context, the great variability of brain size and shape is a major problem. Current methods allow inter-individual comparisons by normalising subjects' brains in relation to a standard brain; the most widely used standard brains are the proportional grid of Talairach and Tournoux and the Montreal Neurological Institute (MNI) standard brain (SPM99). However, more precise methods are lacking for the superposition of the more variable portions of the cerebral cortex (e.g., the neocortex and the perisylvian zone) and of brain regions that are highly asymmetric between the two cerebral hemispheres (e.g., the planum temporale). The aim of this thesis is to evaluate a new image processing technique based on non-linear, model-based registration.
Unlike intensity-based registration, model-based registration uses spatial rather than intensity information to fit one image to another. We extract identifiable anatomical features (point landmarks) in both the deforming and the target images, and from their correspondence we determine the appropriate deformation in 3D. As landmarks, we use six control points, placed bilaterally: one on Heschl's gyrus, one on the motor hand area and one on the sylvian fissure. The evaluation of this model-based approach is performed on the MRI and fMRI images of nine of the eighteen subjects who participated in the earlier study by Maeder et al. Results on the anatomical (MRI) images show the movement of the deforming brain's control points to the locations of the reference brain's control points, and the distance from the deforming brain to the reference brain is smaller after registration than before. Registration of the functional (fMRI) images shows no significant variation: the small number of registration landmarks (six control points) is clearly not sufficient to modify the statistical maps. This thesis opens the way to a new computational technique for cortex registration, whose main direction will be to improve the registration algorithm by using not a single point as a landmark but many points representing a particular sulcus.
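As a rough illustration of the correspondence step described above, the sketch below fits a 3D affine transformation to six paired control points by least squares. The coordinates, the fit_affine_3d helper and the affine (rather than truly non-rigid) form are all assumptions made for demonstration, not the thesis's actual deformation model.

```python
# Illustrative sketch (not the thesis code): estimating a 3D affine
# transformation from corresponding anatomical landmarks by least squares.
# Landmark coordinates below are made up for demonstration.
import numpy as np

def fit_affine_3d(src, dst):
    """Fit T (3x4) so that dst ~= T @ [src; 1] in the least-squares sense."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords, n x 4
    T, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # 4 x 3 solution
    return T.T                                       # 3 x 4

# Six control points per brain (hypothetical values): Heschl's gyrus,
# motor hand area and sylvian fissure, bilaterally.
deforming = np.array([[42., 18., 10.], [-40., 20., 11.], [38., -8., 52.],
                      [-36., -6., 50.], [50., 2., 14.], [-48., 4., 15.]])
reference = deforming + np.random.default_rng(0).normal(0, 2, deforming.shape)

T = fit_affine_3d(deforming, reference)
mapped = (T @ np.hstack([deforming, np.ones((6, 1))]).T).T
print("residual landmark distances:", np.linalg.norm(mapped - reference, axis=1))
```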
Abstract:
The dynamical properties of shaken granular materials are important in many industrial applications where shaking is used to mix, segregate and transport them. In this work a systematic, large-scale simulation study has been performed to investigate the rheology of dense granular media, in the presence of gas, in a three-dimensional vertical cylinder filled with glass balls. The base wall of the cylinder is subjected to sinusoidal oscillation in the vertical direction. The viscoelastic behaviour of the glass balls during a collision was studied experimentally using a modified Newton's cradle device. By analysing the measurement results with a numerical model based on the finite element method, the viscous damping coefficient was determined for the glass balls. To obtain detailed information about interparticle interactions in a shaker, a simplified model for collisions between particles of a granular material was proposed. In order to simulate the flow of the surrounding gas, a formulation of the equations for fluid flow in a porous medium, including particle forces, was proposed. These equations are solved with the Large Eddy Simulation (LES) technique using a subgrid model originally proposed for compressible turbulent flows. For a pentagonal prism-shaped container under vertical vibration, the results show that oscillon-type structures were formed. Oscillons are highly localized particle-like excitations of the granular layer. This self-sustaining state was named by analogy with its closest large-scale counterpart, the soliton, first documented by J.S. Russell in 1834. The results reported by Bordbar and Zamankhan (2005b) also show that a slightly revised fluctuation-dissipation theorem might apply to shaken sand, which appears to be a system far from equilibrium and can exhibit strong spatial and temporal variations in quantities such as density and local particle velocity. In this light, hydrodynamic-type continuum equations were presented for describing the deformation and flow of dense gas-particle mixtures. The constitutive equation used for the stress tensor provides an effective viscosity with a liquid-like character at low shear rates and gas-like behaviour at high shear rates. Numerical solutions of the aforementioned hydrodynamic equations were obtained to predict the flow dynamics of dense mixtures of gas and particles in vertical cylindrical containers. For a heptagonal prism-shaped container under vertical vibration, the model was found to predict bubbling behaviour analogous to that observed experimentally. This bubbling behaviour may be explained by the unusual gas pressure distribution found in the bed. In addition, oscillon-type structures were found to form in a vertically vibrated, pentagonal prism-shaped container, in agreement with the computer simulation results. These observations suggest that the pressure distribution plays a key role in the deformation and flow of dense mixtures of gas and particles under vertical vibration. The present models provide greater insight toward explaining poorly understood hydrodynamic phenomena in the field of granular flows and dense gas-particle mixtures. The models can be generalized to investigate granular material-container wall interactions, an issue of high interest in industrial applications. Following this approach, ideal processing conditions and powder transport can be created in industrial systems.
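As a minimal sketch of the kind of simplified interparticle collision model mentioned above, the snippet below integrates a linear spring-dashpot (viscoelastic) contact in time; the stiffness, damping and mass values are illustrative placeholders, not the coefficients measured with the Newton's cradle device.

```python
# A minimal sketch, assuming a linear spring-dashpot contact law often used
# for grain-grain collisions. All parameter values are illustrative.
k, c, m = 1.0e5, 1.0, 1.0e-4   # stiffness [N/m], damping [N s/m], mass [kg]
dt = 1.0e-7                    # time step [s]
x, v = 0.0, -0.5               # penetration [m] and impact velocity [m/s]

t = 0.0
while True:
    f = -k * x - c * v         # repulsive spring force + viscous damping
    v += f / m * dt            # explicit time integration
    x += v * dt
    t += dt
    if x >= 0.0:               # penetration ends: particles separate
        break

print(f"contact time {t:.2e} s, restitution coefficient {abs(v / 0.5):.3f}")
```

The damping term dissipates kinetic energy during contact, so the restitution coefficient printed at separation is below one, mimicking the measured viscoelastic loss.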
Abstract:
The cash management literature is largely normative, or consists of case studies examining individual organizations and particular sub-areas of their cash management. By contrast, only a few studies have examined cash management across a large set of organizations from the perspective of strategy and system choices. This exploratory study of the Finnish municipal field describes the structure, strategy and system choices that municipalities emphasized in their cash management during 2000-2002. The methodological framework of the study, a configurational systems model based on the contingency approach, enabled quantitative analysis of differences in strategy and system practices across a large set of subjects. Cluster analysis of the data yielded four groups of municipalities that differ in their strategy and system emphases, and the results showed that municipal cash management practices closely resemble corresponding private-sector practices; cost efficiency is emphasized in public-sector cash management as well. Alongside the cost-efficiency strategy, the responding municipalities emphasized investment, debt management and risk management strategies, as well as structure and system choices that support the implementation of these strategies. Smaller municipalities were also found to rely on the same strategy and system emphases as larger ones, although differences related to municipality size may appear, for example, in the practical information management solutions of the systems. In addition, the flexibility strategy carried considerable weight among municipal cash management strategies. This is consistent, since unanticipated changes in cash positions require rapid decision-making. Through cost-efficiency thinking, an understanding of the cash management function as a whole, and the selective use of new cash management techniques and financial instruments, it is possible to influence the net costs of municipal treasury operations.
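As a hedged sketch of the cluster analysis step, the snippet below groups municipalities into four clusters by strategy emphasis scores with k-means; the feature set and scores are invented for illustration and do not reproduce the study's configurational model.

```python
# A sketch of the grouping idea only: the strategy dimensions, the 1-5
# emphasis scores and the sample of 60 municipalities are all synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# rows: municipalities; columns: emphasis scores for cost efficiency,
# investment, debt management, risk management and flexibility strategies
scores = rng.integers(1, 6, size=(60, 5)).astype(float)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
for g in range(4):
    members = scores[km.labels_ == g]
    print(f"group {g}: {len(members)} municipalities, "
          f"mean emphases {members.mean(axis=0).round(2)}")
```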
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands on the prediction of corrosion potential. Models that depend on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network, by contrast, is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be placed in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime are more demanding. In order to provide a laboratory tool for the analysis and prediction, a new approach was chosen. During this study, the following tools were generated:
· A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system allows the implementation of user-defined algorithms, which enables the development of an artificially intelligent system for the task. According to the results of the analyses, several new rules were developed for determining the degree and type of corrosion.
By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
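The snippet below is an illustrative sketch of the neural-network half of such a prediction model: a small regressor mapping fuel-analysis features to a corrosion rate. The feature names, synthetic data and network size are assumptions; the fuzzy-logic component of the actual model is omitted.

```python
# A sketch under stated assumptions, not the study's model: features,
# data and the synthetic target relationship are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# hypothetical inputs: Cl, S, K+Na content [wt-%] and material temperature [C]
X = rng.uniform([0.0, 0.0, 0.0, 450.0], [1.0, 2.0, 1.5, 600.0], size=(200, 4))
# synthetic target: corrosion rate rising with chlorine and temperature
y = 5.0 * X[:, 0] + 0.02 * (X[:, 3] - 450.0) + rng.normal(0, 0.2, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X[:150], y[:150])                       # train on 150 samples
print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
```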
Abstract:
Background: Development of three classification trees (CT) based on the CART (Classification and Regression Trees), CHAID (Chi-Square Automatic Interaction Detection) and C4.5 methodologies for the calculation of the probability of hospital mortality; comparison of the results with the APACHE II, SAPS II and MPM II-24 scores, and with a model based on multiple logistic regression (LR). Methods: Retrospective study of 2864 patients. Random partition (70:30) into a Development Set (DS), n = 1808, and a Validation Set (VS), n = 808. Discrimination is compared with the ROC curve (AUC, 95% CI) and the percentage of correct classification (PCC, 95% CI); calibration with the calibration curve and the Standardized Mortality Ratio (SMR, 95% CI). Results: The CTs are produced with different selections of variables and decision rules: CART (5 variables and 8 decision rules), CHAID (7 variables and 15 rules) and C4.5 (6 variables and 10 rules). The common variables were: inotropic therapy, Glasgow, age, (A-a)O2 gradient and antecedent of chronic illness. In the VS, all the models achieved acceptable discrimination, with AUC above 0.7. CT: CART (0.75 (0.71-0.81)), CHAID (0.76 (0.72-0.79)) and C4.5 (0.76 (0.73-0.80)). PCC: CART (72 (69-75)), CHAID (72 (69-75)) and C4.5 (76 (73-79)). Calibration (SMR) was better in the CTs: CART (1.04 (0.95-1.31)), CHAID (1.06 (0.97-1.15)) and C4.5 (1.08 (0.98-1.16)). Conclusion: With different CT methodologies, trees are generated with different selections of variables and decision rules. The CTs are easy to interpret and stratify the risk of hospital mortality. CTs should be taken into account for classifying the prognosis of critically ill patients.
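The snippet below sketches the develop/validate workflow on simulated data, using scikit-learn's CART-style decision tree and the AUC as the discrimination measure; the predictors and outcome are synthetic stand-ins for the study's patient records.

```python
# A minimal sketch, assuming simulated data: fit on a 70:30 split and
# assess discrimination with the AUC, as in the workflow described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2864, 5))          # stand-ins for Glasgow, age, etc.
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 1.5)))
y = rng.random(2864) < p                # simulated hospital mortality

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.2f}")     # the study reports AUCs around 0.75
```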
Abstract:
Recent advances in mobile computing have changed the way we do our daily work, even enabling us to perform collaborative activities. However, current groupware approaches do not offer an integrated, efficient solution that jointly tackles the flexibility and heterogeneity inherent in mobility as well as the awareness aspects intrinsic to collaborative environments. Issues related to the diversity of contexts of use are collected under the term plasticity. A great number of tools have emerged to address some of these issues, although always focused on single-user scenarios. We are working on reusing and specializing some existing plasticity tools for groupware design. The aim is to offer the benefits of plasticity and awareness jointly, working towards real collaboration and a deeper understanding of multi-environment groupware scenarios. In particular, this paper presents a conceptual framework intended as a reference for generating plastic user interfaces for collaborative environments in a systematic and comprehensive way. Starting from a previous conceptual framework for single-user environments, inspired by the model-based approach, we introduce specific components and considerations related to groupware.
Abstract:
Background: The G1-to-S transition of the cell cycle in the yeast Saccharomyces cerevisiae involves an extensive transcriptional program driven by the transcription factors SBF (Swi4-Swi6) and MBF (Mbp1-Swi6). Activation of these factors ultimately depends on the G1 cyclin Cln3. Results: To determine the transcriptional targets of Cln3 and their dependence on SBF or MBF, we first used DNA microarrays to interrogate gene expression upon Cln3 overexpression in synchronized cultures of strains lacking components of SBF and/or MBF. Second, we integrated this expression dataset together with other heterogeneous data sources into a single probabilistic model based on Bayesian statistics. Our analysis produced more than 200 transcription factor-target assignments, validated by ChIP assays and by functional enrichment. Our predictions show higher internal coherence and predictive power than previous classifications. Our results support a model whereby SBF and MBF may be differentially activated by Cln3. Conclusions: Integration of heterogeneous genome-wide datasets is key to building accurate transcriptional networks. By such integration, we provide here a reliable transcriptional network at the G1-to-S transition in the budding yeast cell cycle. Our results suggest that, to improve the reliability of predictions, we need to feed our models with more informative experimental data.
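As a toy sketch of the integration idea, the snippet below combines several independent evidence sources for a single transcription factor-target link under a naive Bayes assumption; the prior and likelihood ratios are invented for illustration, not the study's fitted values.

```python
# A sketch of evidence combination under a naive (independence) assumption;
# all numeric values are hypothetical.
import numpy as np

def posterior_target(prior, likelihood_ratios):
    """Posterior probability that a gene is a true target, given per-dataset
    likelihood ratios P(evidence | target) / P(evidence | non-target)."""
    odds = prior / (1 - prior) * np.prod(likelihood_ratios)
    return odds / (1 + odds)

# evidence for one candidate SBF target: expression response to Cln3
# overexpression, ChIP binding signal, presence of a promoter motif
print(posterior_target(0.05, np.array([4.0, 6.0, 3.0])))  # ~0.79
```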
Abstract:
Three models of flow resistance (a Keulegan-type logarithmic law and two models developed for large-scale roughness conditions: the full logarithmic law and a model based on an inflectional velocity profile) were calibrated, validated and compared using an extensive database (N = 1,533) from rivers and flumes, representative of a wide hydraulic and geomorphologic range in the field of gravel-bed and mountain channels. It is preferable to apply the model based on an inflectional velocity profile in the relative submergence (y/d90) interval between 0.5 and 15, while the full logarithmic law is preferable for values below 0.5. For high relative submergence, above 15, either the logarithmic law or the full logarithmic law can be applied. The models fitted to the coarser percentiles are preferable to those fitted to the median diameter, owing to the higher explanatory power achieved for a given model, the smaller difference in goodness-of-fit between the different models and the lower influence of the origin of the data (river or flume).
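For concreteness, the snippet below evaluates a Keulegan-type logarithmic resistance law of the general form U/u* = a + (1/κ) ln(y/d90); the coefficient values are illustrative placeholders, not the ones fitted to the 1,533-point database.

```python
# A sketch assuming placeholder coefficients; the study fits its own values.
import numpy as np

def mean_velocity_log_law(u_star, y, d90, a=6.25, kappa=0.41):
    """U/u* = a + (1/kappa) * ln(y/d90); intended for high relative
    submergence y/d90, per the discussion above."""
    return u_star * (a + np.log(y / d90) / kappa)

u_star = 0.12                      # shear velocity [m/s]
for y in (0.3, 1.0, 3.0):          # flow depth [m]
    U = mean_velocity_log_law(u_star, y, d90=0.15)
    print(f"y/d90 = {y / 0.15:5.1f}  ->  U = {U:.2f} m/s")
```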
Abstract:
Three flow resistance models for granular boundaries were calibrated, validated and compared: a power law model and two models developed for high relative roughness conditions (one based on a modification of the Prandtl-von Karman logarithmic law and the other founded on a velocity profile configured in two zones: a uniform zone near the roughness elements and an upper zone following a logarithmic distribution). A large dataset of 1,533 measurements taken in rivers and laboratory flumes was used, representative of a wide hydraulic and geomorphologic range in the field of gravel-bed and mountain rivers. The equations fitted with the coarser grain-size percentiles (d90 or d84) proved preferable to those fitted with the median diameter (d50), owing to the greater explanatory power achieved for a given model, the smaller difference in goodness-of-fit between the different models, and the lower influence of the origin of the data (river or laboratory flume). The equations fitted according to the models that account for high relative roughness conditions yield similar predictions, except in the macro-rough interval (y/d90 < 1), where the one corresponding to the model founded on the two-zone velocity profile is preferable. It is recommended to restrict the application of the equation fitted according to the power law to the y/d90 interval between one and twenty, since outside this interval it tends to substantially underestimate flow resistance.
Abstract:
The goal of this work is to create a statistical model, based only on easily computable parameters of the CSP instance, to predict the runtime behaviour of the solving algorithms and let us choose the best algorithm for the problem. Although it seems that the obvious choice should be MAC, the experimental results obtained so far show that, for large numbers of variables, other algorithms perform much better, especially for hard problems in the transition phase.
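A hedged sketch of the idea: regress (log) solver runtime on cheap instance descriptors and pick the algorithm with the lowest prediction. The descriptors, the candidate-algorithm list (MAC, FC, BT) and all timings below are synthetic placeholders, not the paper's experimental data.

```python
# A sketch under stated assumptions: per-algorithm regression of log runtime
# on instance descriptors, with entirely synthetic training data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# columns: n_variables, domain_size, constraint_density, tightness
X = np.column_stack([rng.integers(20, 200, 500), rng.integers(2, 30, 500),
                     rng.uniform(0.1, 0.9, 500), rng.uniform(0.1, 0.9, 500)])
models = {}
for alg in ("MAC", "FC", "BT"):            # one runtime model per algorithm
    y = rng.lognormal(mean=X[:, 0] / 100.0, sigma=0.3)   # fake timings [s]
    models[alg] = LinearRegression().fit(X, np.log(y))

instance = np.array([[150, 10, 0.5, 0.4]])
best = min(models, key=lambda a: models[a].predict(instance)[0])
print("predicted fastest algorithm:", best)
```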
Abstract:
To compare the cost and effectiveness of the levonorgestrel-releasing intrauterine system (LNG-IUS) versus combined oral contraception (COC) and progestogens (PROG) in the first-line treatment of dysfunctional uterine bleeding (DUB) in Spain. STUDY DESIGN: A cost-effectiveness and cost-utility analysis of LNG-IUS, COC and PROG was carried out using a Markov model based on clinical data from the literature and expert opinion. The population studied comprised women with a previous diagnosis of idiopathic heavy menstrual bleeding. The analysis was performed from the National Health System perspective, discounting both costs and future effects at 3%. In addition, a sensitivity analysis (univariate and probabilistic) was conducted. RESULTS: The results show that the greater efficacy of LNG-IUS translates into a gain of 1.92 and 3.89 symptom-free months (SFM) after six months of treatment versus COC and PROG, respectively (an increase of 33% and 60% in symptom-free time). Regarding costs, LNG-IUS produces savings of 174.2-309.95 and 230.54-577.61 versus COC and PROG, respectively, over 6 months to 5 years. Apart from cost savings and gains in SFM, quality-adjusted life months (QALM) are also favourable to LNG-IUS in all scenarios, with gains of between 1 and 2 QALM compared to COC and PROG. CONCLUSIONS: The results indicate that first-line use of the LNG-IUS is the dominant therapeutic option (less costly and more effective) in comparison with first-line use of COC or PROG for the treatment of DUB in Spain. LNG-IUS as first line is also the option that provides the greatest health-related quality of life to patients.
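The snippet below is a minimal Markov-cohort sketch of this kind of comparison: monthly cycles over a symptom-free/bleeding state pair, with costs and effects discounted at 3% per year. The transition probabilities and per-cycle costs are invented numbers, not the study's inputs.

```python
# A sketch under stated assumptions; all clinical and cost inputs are made up.
import numpy as np

def run_markov(p_stay_free, cost_per_cycle, cycles=60, annual_disc=0.03):
    state = np.array([0.0, 1.0])               # [symptom-free, bleeding]
    P = np.array([[p_stay_free, 1 - p_stay_free],
                  [0.40, 0.60]])               # rows: from-state probabilities
    disc = (1 + annual_disc) ** (-np.arange(cycles) / 12.0)
    sfm = cost = 0.0
    for k in range(cycles):
        sfm += state[0] * disc[k]              # discounted symptom-free months
        cost += cost_per_cycle * disc[k]       # discounted treatment cost
        state = state @ P                      # advance the cohort one month
    return sfm, cost

for name, p, c in (("LNG-IUS", 0.95, 12.0), ("COC", 0.85, 15.0)):
    sfm, cost = run_markov(p, c)
    print(f"{name}: {sfm:.1f} symptom-free months, cost {cost:.0f}")
```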
Abstract:
Over the last 60 years, planting densities for apple have increased as improved management systems have been developed. Dwarfing rootstocks have been the key to the dramatic changes in tree size, spacing and early production. The Malling series of dwarfing rootstocks (M.9 and M.26) have been the most important dwarfing rootstocks in the world, but they are poorly adapted to some areas of the world and are susceptible to the bacterial disease fire blight and to the soil disease complex apple replant disease, which limits their use in some areas. Rootstock breeding programs in several parts of the world are developing improved rootstocks with resistance to fire blight and replant disease, and with improved cold hardiness and yield efficiency. A second important trend has been the increasing importance of new cultivars. New cultivars have provided opportunities for higher prices until they are over-produced. A new trend is the "variety club", in which variety owners manage the production and marketing of a new unique cultivar to bring higher prices to the growers and variety owners. This has left many fruit growers unable to plant or grow some new cultivars. Important rootstock and cultivar genes have been mapped and can be used in marker-assisted selection of future rootstock and cultivar selections. Other important improvements in apple culture include the development of pre-formed trees, minimal pruning strategies and limb angle bending, which have also contributed to the dramatic changes in early production in the 2nd-5th years after planting. Studies on light interception and distribution have led to improved tree forms with better fruit quality. Simple pruning strategies and labor-positioning platform machines have resulted in partial mechanization of pruning, which has reduced management costs. Improved plant growth regulators for thinning and the development of a thinning prediction model based on tree carbohydrate balance have improved the ability to produce the optimum fruit size and crop load. Other new plant growth regulators have also allowed control of shoot growth, of preharvest fruit drop and of fruit softening in storage after harvest. As we look to the future, there will be continued incremental improvement in our understanding of plant physiology that will lead to continued incremental improvements in orchard management, but dramatic changes in orchard production systems are likely to come through genomics research and genetic engineering. A greater understanding of the genetic control of dwarfing, precocity, rooting, vegetative growth, flowering, fruit growth and disease resistance will lead to new varieties and rootstocks that are less expensive to grow and manage.
Abstract:
The theoretical part of this work examines environmental costs and their determination in support of environmental reporting. First, the determination of environmental costs and the underlying basic questions of environmental accounting are examined. Next, environmental reporting and the requirements it places on the determination of environmental costs are studied. After this, the use of environmental costs in support of environmental reporting is examined. The main objective of the work is to create, on the basis of the theoretical review, an accounting model for the Stora Enso Fine Paper pulp mill in Varkaus. The model is built to determine the pulp mill's environmental costs and environmental liability for its pulp products in support of environmental reporting. The environmental liability is the environmental cost the company must devote to environmental protection in order to reach a desired future level. Environmental costs and the environmental liability are calculated and allocated using activity-based costing. In determining the environmental liability, the model of Tamminen and Kurki is used to value emissions. The accounting model is used to establish whether a model based on activity-based costing and on the Tamminen and Kurki model is suitable for determining a pulp mill's environmental costs and environmental liability for its pulp products. In addition, an answer is sought to whether the concrete results of the model are suitable for use in support of environmental reporting. On the basis of the model, activity-based costing proves well suited to determining a pulp mill's environmental costs and environmental liability for its pulp products. The Tamminen and Kurki valuation model contains several problem areas and, on the evidence of the accounting model, is not suited to determining the pulp mill's environmental liability for pulp products. The environmental costs obtained as the model's output can be used mainly for comparing environmental performance between sites and for comparing a site's own operations against its past performance. The environmental liability should primarily be seen as supporting the internal review of a site's environmental performance.
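As a hedged sketch of the activity-based costing mechanics, the snippet below allocates two environmental cost pools to pulp products via cost drivers; the pools, drivers and amounts are illustrative, not the Varkaus mill's figures, and the Tamminen and Kurki valuation step is not modelled.

```python
# A sketch under stated assumptions: every pool, driver and quantity below
# is a made-up example of driver-based allocation.
cost_pools = {                         # annual environmental cost per activity
    "effluent treatment": 400_000.0,   # EUR; driver: m3 of effluent
    "air emission control": 250_000.0, # EUR; driver: tonnes of emissions
}
driver_totals = {"effluent treatment": 8_000_000.0,
                 "air emission control": 5_000.0}
product_driver_use = {                 # driver consumption per pulp product
    "softwood pulp": {"effluent treatment": 5_000_000.0,
                      "air emission control": 3_200.0},
    "hardwood pulp": {"effluent treatment": 3_000_000.0,
                      "air emission control": 1_800.0},
}

for product, use in product_driver_use.items():
    cost = sum(cost_pools[a] * use[a] / driver_totals[a] for a in cost_pools)
    print(f"{product}: allocated environmental cost {cost:,.0f} EUR")
```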
Abstract:
Objective: Imipenem is a broad-spectrum antibiotic used to treat severe infections in critically ill patients. Imipenem pharmacokinetics (PK) was evaluated in a cohort of neonates treated in the Neonatal Intensive Care Unit of the Lausanne University Hospital. The objective of our study was to identify key demographic and clinical factors influencing imipenem exposure in this population. Method: PK data from neonates and infants with at least one imipenem concentration measured between 2002 and 2013 were analyzed applying population PK modeling methods. Measurements of plasma concentrations were performed at the physician's decision within the frame of a therapeutic drug monitoring (TDM) programme. Effects of demographic factors (sex, body weight, gestational age, postnatal age) and clinical factors (serum creatinine as a measure of kidney function; co-administration of furosemide, spironolactone, hydrochlorothiazide, vancomycin, metronidazole and erythromycin) on imipenem PK were explored. Model-based simulations were performed (with a median creatinine value of 46 μmol/l) to compare various dosing regimens with respect to their ability to maintain drug levels above predefined minimum inhibitory concentrations (MIC) for at least 40% of the dosing interval. Results: A total of 144 plasma samples were collected from 68 neonates and infants, predominantly preterm newborns, with a median gestational age of 27 weeks (24-41 weeks) and postnatal age of 21 days (2-153 days). A two-compartment model best characterized imipenem disposition. Actual body weight exhibited the greatest impact on the PK parameters, followed by age (gestational and postnatal age) and serum creatinine on clearance; they explain 19%, 9%, 14% and 9% of the interindividual variability in clearance, respectively. Model-based simulations suggested that 15 mg/kg every 12 hours maintains drug concentrations above an MIC of 2 mg/l for at least 40% of the dosing interval during the first days of life, whereas neonates older than 14 days require a dose of 20 mg/kg every 12 hours. Conclusion: Dosing strategies based on body weight and postnatal age are recommended for imipenem in all critically ill neonates and infants. Most current guidelines seem adequate for newborns, and TDM should be restricted to particular clinical situations.
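The snippet below illustrates the dosing-regimen check with a deliberately simplified one-compartment IV-bolus model (the study itself fitted a two-compartment model): it simulates 15 mg/kg every 12 h and reports the fraction of the final dosing interval spent above an MIC of 2 mg/l. The clearance and volume values are placeholders, not the population estimates.

```python
# A sketch under stated assumptions: one compartment, bolus dosing,
# placeholder CL and V; not the study's two-compartment model.
import numpy as np

def fraction_above_mic(dose_mg_kg, weight_kg, cl_l_h, v_l, tau_h=12.0,
                       mic=2.0, n_doses=6):
    t = np.linspace(0, n_doses * tau_h, 2000)
    conc = np.zeros_like(t)
    ke = cl_l_h / v_l                          # elimination rate constant
    for d in range(n_doses):                   # superpose repeated doses
        td = t - d * tau_h
        conc += np.where(td >= 0,
                         dose_mg_kg * weight_kg / v_l * np.exp(-ke * td), 0.0)
    last = t >= (n_doses - 1) * tau_h          # near-steady final interval
    return np.mean(conc[last] > mic)

print(f"fT>MIC: {fraction_above_mic(15.0, 1.0, cl_l_h=0.1, v_l=0.5):.0%}")
```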
Abstract:
Managing the thermal conditions of rooms is an important part of building services design. Room thermal conditions are usually modelled with methods in which the thermal dynamics of the room air are computed at a single calculation point and those of the structures wall by wall; typically, only the room air temperature is examined. The aim of this Master's thesis was to develop a simulation model for room thermal conditions in which the thermal dynamics of the structures are computed transiently with an energy analysis calculation, and the room air flow field is modelled at a chosen instant as a steady state with computational fluid dynamics. This yields distributions, over the flow field, of the quantities essential for design, typically air temperature and velocity. The computational results of the simulation model were compared against measurements made in test rooms, and the results proved sufficiently accurate for building services design. Two rooms requiring more detailed modelling than usual were simulated with the model. Comparison calculations were carried out with different turbulence models, discretization accuracies and grid densities. To illustrate the simulation results, a customer report presenting the matters essential for design was developed. The simulation model provided additional information especially on temperature stratification, which has typically been estimated on the basis of experience. As background for the development of the simulation model, indoor climate, thermal conditions and calculation methods for buildings, as well as commercial software suitable for the modelling, were reviewed. The simulation model provides more accurate and detailed information for designing the management of thermal conditions. Remaining problems in using the model are the long computation time of the flow calculation, turbulence modelling, the accurate specification of boundary conditions at supply air devices, and convergence of the computation. The developed simulation model offers a good foundation for developing and combining CFD and energy analysis programs into a user-friendly design tool for building services.
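As a compact sketch of the transient structural side of such a coupled model, the snippet below marches explicit 1D finite-difference conduction through a wall layer, the kind of energy-analysis time stepping that precedes a steady CFD snapshot of the room air; material properties and boundary temperatures are illustrative.

```python
# A sketch under stated assumptions: 1D wall conduction only, with
# illustrative material data and fixed surface temperatures.
import numpy as np

alpha = 7e-7                      # thermal diffusivity of concrete [m2/s]
L, n = 0.2, 21                    # wall thickness [m], grid nodes
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha          # satisfies the explicit stability limit
T = np.full(n, 20.0)              # initial wall temperature [C]
T_out, T_in = -10.0, 21.0         # fixed surface temperatures [C]

for _ in range(5000):             # march the wall response in time
    T[0], T[-1] = T_out, T_in
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print("wall temperature profile [C]:", np.round(T, 1))
```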