15 results for Data Streams Distribution
in Helda - Digital Repository of University of Helsinki
Abstract:
The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper spends monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia.
The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier model was chosen for the task. The data was divided into two parts, and 5,074 measurements from 37 cows were used to train the model. The model was evaluated on its ability to detect lameness in the validation dataset, which had 4,868 measurements from 36 cows. The model was able to classify 96% of the measurements correctly as sound or lame cows, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
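A probabilistic neural network is, in essence, a Parzen-window kernel density classifier: each class score is a Gaussian-kernel average over that class's training patterns. The abstract does not give the network's architecture or features, so the sketch below is only a minimal illustration; the four-element feature vectors, the class labels and the smoothing parameter `sigma` are assumptions, not values from the thesis.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Parzen-window PNN: return the class with the largest kernel density at x."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)  # squared distances to class-c patterns
        scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
    return classes[int(np.argmax(scores))]
```

With leg-load fractions as (hypothetical) features, a measurement close to the "sound" training cluster is assigned to that class, and one with an unevenly loaded limb to "lame".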
Abstract:
The possible temporal periodicity in the age distribution of Earth's impact craters has been widely debated since the phenomenon was first reported in a number of prestigious scientific articles in 1984. Although in light of current knowledge it is questionable whether the observed periodicity reflects a real physical phenomenon, it is nevertheless possible that the periodicity truly exists and could be detected with a larger and more accurate impact crater dataset. In this study, simulated temporal density and cumulative distribution functions of craters were created for cases where craters are produced either by a fully periodic or by a fully random process. In addition to these two extreme cases, distributions were also created for two combinations of them. These models also make it possible to take into account various uncertainties in crater age determination. From these distributions, simulated time series of crater ages of different lengths were created. Finally, Rayleigh's method was used to search the simulated time series for the periodicity embedded in the distribution. Based on our study, detecting temporal periodicity in crater time series is nearly impossible if only one third of the craters are produced by a periodic phenomenon, even if a dataset larger and more accurate than the current crater record were to become available in the future. If two thirds of meteorite impacts are periodic, detection is possible but requires a considerably more comprehensive crater dataset than is currently available. Based on this study, there is reason to suspect that the observed temporal periodicity of craters is not a real phenomenon.
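Rayleigh's method scores how strongly a set of event times clusters in phase at a trial period. A minimal sketch (the period grid, the actual crater-age data and any significance thresholds used in the study are not reproduced here):

```python
import numpy as np

def rayleigh_z(ages, period):
    """Rayleigh statistic z = n * R**2 at a trial period; large z suggests periodicity."""
    phases = 2.0 * np.pi * np.asarray(ages, dtype=float) / period
    n = len(phases)
    C, S = np.cos(phases).sum(), np.sin(phases).sum()
    return (C ** 2 + S ** 2) / n  # equals n * R**2 with R the mean resultant length
```

For perfectly periodic ages z ≈ n, while for random ages z stays of order 1, which is why a weak periodic component diluted by random impacts is hard to detect in small crater samples, as the abstract argues.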
Abstract:
Contamination of urban streams is a rising topic worldwide, but the assessment and investigation of stormwater-induced contamination is limited by the large amount of water quality data needed to obtain reliable results. In this study, stream bed sediments were studied to determine their degree of contamination and their applicability in monitoring aquatic metal contamination in urban areas. The interpretation of sedimentary metal concentrations is, however, not straightforward, since the concentrations commonly show spatial and temporal variations as a response to natural processes. The variations of and controls on metal concentrations were examined at different scales to increase the understanding of the usefulness of sediment metal concentrations in detecting anthropogenic metal contamination patterns. The acid-extractable concentrations of Zn, Cu, Pb and Cd were determined from the surface sediments and water of small streams in the Helsinki Metropolitan region, southern Finland. The data consist of two datasets: sediment samples from 53 sites located in the catchment of the Stream Gräsanoja, and sediment and water samples from 67 independent catchments scattered around the metropolitan region. Moreover, the sediment samples were analyzed for their physical and chemical composition (e.g. total organic carbon, clay-%, Al, Li, Fe, Mn) and the speciation of metals (in the dataset of the Stream Gräsanoja). The metal concentrations revealed that the stream sediments were moderately contaminated and posed no immediate threat to the biota. However, at some sites the sediments appeared to be polluted with Cu or Zn. The metal concentrations increased with increasing intensity of urbanization, but site-specific factors, such as point sources, were responsible for the occurrence of the highest metal concentrations. The sediment analyses thus revealed a need for more detailed studies on the processes and factors that cause the hot spot metal concentrations.
The sediment composition and metal speciation analyses indicated that organic matter is a very strong indirect control on metal concentrations, and it should be accounted for when studying anthropogenic metal contamination patterns. The fine-scale spatial and temporal variations of metal concentrations were low enough to allow meaningful interpretation of substantial metal concentration differences between sites. Furthermore, the metal concentrations in the stream bed sediments correlated better with the urbanization of the catchment than did the total metal concentrations in the water phase. These results suggest that stream sediments show true potential for wider use in detecting the spatial differences in metal contamination of urban streams. Consequently, using the sediment approach, regional estimates of stormwater-related metal contamination could be obtained fairly cost-effectively, and the stability and reliability of the results would be higher compared to analyses of single water samples. Nevertheless, water samples are essential for analysing dissolved metal concentrations, and momentary discharges from point sources in particular.
Abstract:
This thesis presents novel modelling applications for environmental geospatial data using remote sensing, GIS and statistical modelling techniques. The work falls into four main themes: (i) developing advanced geospatial databases. Paper (I) demonstrates the creation of a geospatial database for the Glanville fritillary butterfly (Melitaea cinxia) in the Åland Islands, south-western Finland; (ii) analysing species diversity and distribution using GIS techniques. Paper (II) presents a diversity and geographical distribution analysis for Scopulini moths at a worldwide scale; (iii) studying spatiotemporal forest cover change. Paper (III) presents a study of exotic and indigenous tree cover change detection in the Taita Hills, Kenya, using airborne imagery and GIS analysis techniques; (iv) exploring predictive modelling techniques using geospatial data. In Paper (IV) human population occurrence and abundance in the Taita Hills highlands was predicted using the generalized additive modelling (GAM) technique. Paper (V) presents techniques to enhance fire prediction and burned area estimation at a regional scale in East Caprivi, Namibia. Paper (VI) compares eight state-of-the-art predictive modelling methods to improve fire prediction, burned area estimation and fire risk mapping in East Caprivi, Namibia. The results in Paper (I) showed that geospatial data can be managed effectively using advanced relational database management systems. Metapopulation data for the Melitaea cinxia butterfly was successfully combined with GPS-delimited habitat patch information and climatic data. Using the geospatial database, spatial analyses were successfully conducted at the habitat patch level or at coarser analysis scales. Moreover, this study showed that, at a large scale, spatially correlated weather conditions are one of the primary causes of spatially correlated changes in Melitaea cinxia population sizes.
In Paper (II) the spatiotemporal characteristics of Scopulini moth description, diversity and distribution were analysed at a worldwide scale, and for the first time GIS techniques were used for Scopulini moth geographical distribution analysis. This study revealed that Scopulini moths have a cosmopolitan distribution. The majority of the species have been described from the low latitudes, with sub-Saharan Africa being the hot spot of species diversity. However, the taxonomical effort has been uneven among biogeographical regions. Paper (III) showed that forest cover change can be analysed in great detail using modern airborne imagery techniques and historical aerial photographs. However, when spatiotemporal forest cover change is studied, care has to be taken in co-registration and image interpretation when historical black-and-white aerial photography is used. In Paper (IV) human population distribution and abundance could be modelled with fairly good results using geospatial predictors and non-Gaussian predictive modelling techniques. Moreover, a land cover layer is not necessarily needed as a predictor, because first- and second-order image texture measurements derived from satellite imagery had more power to explain the variation in dwelling unit occurrence and abundance. Paper (V) showed that the generalized linear model (GLM) is a suitable technique for fire occurrence prediction and for burned area estimation. GLM-based burned area estimations were found to be superior to the existing MODIS burned area product (MCD45A1). However, spatial autocorrelation of fires has to be taken into account when using the GLM technique for fire occurrence prediction. Paper (VI) showed that novel statistical predictive modelling techniques can be used to improve fire prediction, burned area estimation and fire risk mapping at a regional scale. However, some noticeable variation existed between the different predictive modelling techniques for fire occurrence prediction and burned area estimation.
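A GLM for binary fire occurrence is, in its simplest form, a logistic regression fitted by Newton-Raphson (equivalently, iteratively reweighted least squares). The sketch below is a generic illustration, not the thesis's actual model: the predictor and data are hypothetical, and the spatial autocorrelation that the paper stresses is not handled here.

```python
import numpy as np

def fit_logistic_glm(X, y, iters=25):
    """Fit a binomial GLM (logit link) by Newton-Raphson / IRLS."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))     # fitted fire probabilities
        W = p * (1.0 - p)                        # IRLS weights
        H = X1.T @ (X1 * W[:, None])             # Fisher information X'WX
        beta = beta + np.linalg.solve(H, X1.T @ (y - p))
    return beta

def predict_fire_prob(beta, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X1 @ beta))
```

Summing the predicted probabilities over grid cells then gives a burned-area estimate of the kind the GLM approach produces.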
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
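The thesis formulates a Bayesian monotonic regression, for which no details are given in the abstract. As a simpler, non-Bayesian point of reference, the classical least-squares monotone fit for a single covariate is computed by the pool-adjacent-violators algorithm; this sketch only illustrates the monotonicity constraint itself, not the thesis's Bayesian estimation procedure.

```python
def pava(y):
    """Pool-adjacent-violators: least-squares monotone (non-decreasing) fit to y."""
    blocks = []  # each block is [sum, count]; block mean = sum / count
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while a block mean exceeds the following block mean
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # expand each block back to its fitted values
    return out
```

Violations of monotonicity are averaged away block by block, which is the same constraint a monotone regression posterior must respect.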
Abstract:
This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that have values on the real line. In the multivariate context asymmetries can be observed in the marginal distributions as well as in the relationships of the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients.
The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are found in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results obtained is established when the information content of the derived volatility forecasts is examined.
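The GARCH-type and CARR-type models discussed above all share an autoregressive recursion for the conditional scale. As a hedged baseline illustration, the plain GARCH(1,1) conditional variance recursion can be sketched as follows; this is the textbook model that the thesis's GH and mixture specifications extend, not the thesis's own specification.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta, sigma2_0=None):
    """Conditional variance recursion: sigma2[t] = omega + alpha*r[t-1]**2 + beta*sigma2[t-1]."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = np.var(r) if sigma2_0 is None else sigma2_0  # common initialisation choice
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

The alpha term lets a large return shock raise next-day variance (volatility clustering), while beta carries variance persistence forward.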
Abstract:
The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
Abstract:
In this study we used electrospray ionization mass spectrometry to determine phospholipid class and molecular species compositions in bacteriophages PM2, PRD1, Bam35 and phi6 as well as their hosts. To obtain compositional data on the individual leaflets, the phospholipid transbilayer distribution in the viral membranes was studied. We found that 1) the membranes of all studied bacteriophages are enriched in phosphatidylglycerol (PG) as compared to the host membranes, 2) the molecular species compositions in the phage and host membranes are similar, and 3) phospholipids in the viral membranes are distributed asymmetrically, with PG enriched in the outer leaflet and phosphatidylethanolamine in the inner one (except in Bam35). Alternative models for the selective incorporation of phospholipids into phages and for the origins of the asymmetric phospholipid transbilayer distribution are discussed. Notably, the present data are also useful for constructing high-resolution structural models of bacteriophages, since diffraction methods cannot provide a detailed structure of the membrane due to the high mobility of the lipids and the lack of symmetric organization of the membrane proteins.
Abstract:
Climate change contributes directly or indirectly to changes in species distributions, and there is very high confidence that recent climate warming is already affecting ecosystems. The Arctic has already experienced the greatest regional warming in recent decades, and the trend is continuing. However, studies on northern ecosystems are scarce compared to more southerly regions. A better understanding of past and present environmental change is needed to be able to forecast the future. Multivariate methods were used to explore the distributional patterns of chironomids in relation to 24 environmental variables in 50 shallow (≤ 10 m) lakes in northern Fennoscandia, in the ecotonal area from the boreal forest in the south to the orohemiarctic zone in the north. The highest taxon richness was noted at middle elevations, around 400 m a.s.l. Significantly lower values were observed in cold lakes situated in the tundra zone. Lake water alkalinity had the strongest positive correlation with taxon richness. Many taxa had a preference for lakes either in the tundra area or in the forested area. The variation in the chironomid abundance data was best correlated with sediment organic content (LOI), lake water total organic carbon content, pH and air temperature, with LOI being the strongest variable. Three major lake groups were separated on the basis of their chironomid assemblages: (i) small and shallow organic-rich lakes, (ii) large and base-rich lakes, and (iii) cold and clear oligotrophic tundra lakes. The environmental variables best discriminating the lake groups were LOI, taxon richness, and Mg. When repeated, this kind of approach could be useful and efficient in monitoring the effects of global change on species ranges. Many species of fast-spreading insects, including chironomids, show a remarkable ability to track environmental changes. Based on this ability, past environmental conditions have been reconstructed using their chitinous remains in lake sediment profiles.
In order to study the Holocene environmental history of subarctic aquatic systems, and to quantitatively reconstruct past temperatures at or near the treeline, long sediment cores covering the last 10,000 years (the Holocene) were collected from three lakes. Lower temperature values than expected, based on the presence of pine in the catchment during the mid-Holocene, were reconstructed from a lake with great water volume and depth. The lake provided a thermal refuge for profundal, cold-adapted taxa during the warm period. In a shallow lake, the decrease in the reconstructed temperatures during the late Holocene may reflect the indirect response of the midges to climate change through, e.g., pH change. The results from the three lakes indicated that the response of chironomids to climate has been more or less indirect. However, concurrent shifts in the assemblages of chironomids and vegetation in two lakes during the Holocene indicated that the midges together with the terrestrial vegetation had responded to the same ultimate cause, which most likely was Holocene climate change. This was also supported by the similarity in the long-term trends in faunal succession for the chironomid assemblages in several lakes in the area. In northern Finnish Lapland the distribution of chironomids was significantly correlated with physical and limnological factors that are most likely to change as a result of future climate change. The indirect and individualistic response of aquatic systems to past climate change, as reconstructed using the chironomid assemblages, suggests that in the future the lake ecosystems in the north will not respond in one predictable way to global climate change. Lakes in the north may respond to global climate change in various ways that depend on the initial characteristics of the catchment area and the lake.
Abstract:
In order to predict the current state and future development of Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing on the present-day climate are related to the direct and indirect effects of aerosol. In this work aerosol properties were studied at Pallas and Utö in Finland, and at Mount Waliguan in Western China. Approximately two years of data from each site were analyzed. In addition to this, data from two intensive measurement campaigns at Pallas were used. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentrations of aerosol particles at Mount Waliguan were much higher than those measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site, rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during the two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplets. Several parameters important in cloud droplet activation were found to depend strongly on the air mass history. The effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using data sets with long time series.
The ratio was found to vary more than in earlier studies, but less than either aerosol particle number concentration or volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for the partitioning between cloud droplets and cloud interstitial particles was developed. The parameterization uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, although its accuracy is slightly lower. The new parameterization was also compared to directly observed cloud droplet number concentration data, and good agreement was found.
Abstract:
We present a measurement of the transverse momentum with respect to the jet axis (kt) of particles in jets produced in pp̅ collisions at √s=1.96 TeV. Results are obtained for charged particles in a cone of 0.5 radians around the jet axis in events with dijet invariant masses between 66 and 737 GeV/c². The experimental data are compared to theoretical predictions obtained for fragmentation partons within the framework of resummed perturbative QCD using the modified leading log and next-to-modified leading log approximations. The comparison shows that trends in data are successfully described by the theoretical predictions, indicating that the perturbative QCD stage of jet fragmentation is dominant in shaping basic jet characteristics.
Abstract:
The aim of the current study is to examine the influence of the channel external environment on power, and the effect of power on the distribution network structure, within the People's Republic of China. Throughout the study a dual research process was applied. The theory was constructed by elaborating the main theoretical premises of the study (the channel power theories, the political economy framework and the distribution network structure), and these marketing channel concepts were expanded with perspectives from other disciplines. The main method applied was a survey conducted among 164 Chinese retailers, complemented by interviews, photographs, observations and census data from the field. This multi-method approach made it possible not only to validate and triangulate the quantitative results, but also to uncover serendipitous findings. The theoretical contribution of the current study to marketing channel power theory is the different view it takes on power. First, earlier power studies have taken the producer perspective, whereas the current study also brings a distributor perspective to the discussion. Second, many power studies have dealt with strongly dependent relationships, whereas the current study examines loosely dependent relationships. Power depends on an unequal distribution of resources rather than on high dependency. The benefit of this view is in realising that power resources and power strategies are separate concepts. The empirical material of the current study confirmed that at least some resources were significantly related to power strategies. The study showed that the resource dimension composed of technology, know-how and knowledge, managerial freedom and reputation was significantly related to non-coercive power. Third, the notion of different outcomes of power is a contribution of this study to channel power theory, even though it was not confirmed by the empirical results.
Fourth, it was proposed that elements of the channel external environment other than resources would also contribute to channel power. These propositions were partially supported, thus providing only a partial contribution to channel power theory. Finally, power was equally distributed among the different types of actors. The findings from the qualitative data suggest that different types of retailers can be classified according to the meaning the actors attach to their business. Some are more business oriented; for others retailing is the only way to earn a living. The findings also suggest that some actors combine retailing and wholesaling functions, and this has implications for the marketing channel structure.
Abstract:
The blood-brain barrier (BBB) is a unique barrier that strictly regulates the entry of endogenous substrates and xenobiotics into the brain. This is due to its tight junctions and the array of transporters and metabolic enzymes that are expressed there. The determination of brain concentrations in vivo is difficult, laborious and expensive, which means that there is interest in developing predictive tools of brain distribution. Predicting brain concentrations is important even in early drug development, to ensure the efficacy of central nervous system (CNS) targeted drugs and the safety of non-CNS drugs. The literature review covers the most common current in vitro, in vivo and in silico methods of studying transport into the brain, concentrating on transporter effects. The consequences of efflux mediated by P-glycoprotein, the most widely characterized transporter expressed at the BBB, are also discussed. The aim of the experimental study was to build a pharmacokinetic (PK) model to describe P-glycoprotein substrate drug concentrations in the brain using commonly measured in vivo parameters of brain distribution. The possibility of replacing in vivo parameter values with their in vitro counterparts was also studied. All data for the study were taken from the literature. A simple 2-compartment PK model was built using the Stella™ software. Brain concentrations of morphine, loperamide and quinidine were simulated and compared with published studies. The correlation of in vitro measured efflux ratios (ER) between different studies was evaluated, in addition to studying the correlation between in vitro and in vivo measured ER. A Stella™ model was also constructed to simulate an in vitro transcellular monolayer experiment, to study the sensitivity of the measured ER to changes in passive permeability and Michaelis-Menten kinetic parameter values. Interspecies differences between rats and mice were investigated with regard to brain permeability and drug binding in brain tissue.
Although the PK brain model was able to capture the concentration-time profiles for all 3 compounds in both brain and plasma, and performed fairly well for morphine, it underestimated brain concentrations for quinidine and overestimated them for loperamide. Because the ratio of concentrations in brain and blood depends on the ER, it is suggested that the variable values cited for this parameter, and its inaccuracy, could be one explanation for the failure of the predictions. Validation of the model with more compounds is needed to draw further conclusions. In vitro ER showed variable correlation between studies, indicating variability due to experimental factors such as test concentration, but overall the differences were small. The good correlation between in vitro and in vivo ER at low concentrations supports the possibility of using in vitro ER in the PK model. The in vitro simulation illustrated that, in the simulation setting, efflux is significant only with low passive permeability, which highlights the fact that the cell model used to measure ER must have low enough paracellular permeability to correctly mimic the in vivo situation.
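The thesis's Stella™ model itself is not reproduced in the abstract, so the following is only a generic two-compartment (plasma/brain) sketch in which the efflux ratio ER scales brain-to-plasma transport; the dose and rate constants are illustrative assumptions, not values from the study.

```python
import numpy as np

def simulate_brain_pk(dose, k10, k1b, kb1, er, t_end=24.0, dt=0.01):
    """Euler-integrated two-compartment model: plasma (Ap) and brain (Ab) amounts.

    dAp/dt = -k10*Ap - k1b*Ap + er*kb1*Ab   # elimination, brain influx, scaled efflux back
    dAb/dt =  k1b*Ap - er*kb1*Ab            # efflux ratio er scales brain-to-plasma transport
    """
    n = int(t_end / dt) + 1
    t = np.linspace(0.0, t_end, n)
    Ap, Ab = np.empty(n), np.empty(n)
    Ap[0], Ab[0] = dose, 0.0  # intravenous bolus into plasma
    for i in range(1, n):
        dAp = -k10 * Ap[i - 1] - k1b * Ap[i - 1] + er * kb1 * Ab[i - 1]
        dAb = k1b * Ap[i - 1] - er * kb1 * Ab[i - 1]
        Ap[i] = Ap[i - 1] + dt * dAp
        Ab[i] = Ab[i - 1] + dt * dAb
    return t, Ap, Ab
```

Raising `er` lowers the brain exposure, which is the qualitative behaviour the PK model uses to link a measured efflux ratio to brain concentrations.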