899 results for Simulation analysis
Abstract:
The study assessed the economic efficiency of different strategies for the control of post-weaning multi-systemic wasting syndrome (PMWS) and porcine circovirus type 2 subclinical infection (PCV2SI), which have a major economic impact on the pig farming industry worldwide. The control strategies investigated consisted of combinations of up to 5 different control measures. The control measures considered were: (1) PCV2 vaccination of piglets (vac); (2) ensuring an age-adjusted diet for growers (diets); (3) reduction of stocking density (stock); (4) improvement of biosecurity measures (bios); and (5) total depopulation and repopulation of the farm for the elimination of other major pathogens (DPRP). A model was developed to simulate 5 years of production of a pig farm with a 3-weekly batch system and 100 sows. A PMWS/PCV2SI disease and economic model, based on PMWS severity scores, was linked to the production model in order to assess disease losses. The PMWS severity score depends on the combination of post-weaning mortality, PMWS morbidity in younger pigs, and the proportion of PCV2-infected pigs observed on farms. The economic analysis investigated eleven different farm scenarios, depending on the number of risk factors present before the intervention. For each strategy, an investment appraisal assessed the extra costs and benefits of reducing a given PMWS severity score to the average score of a slightly affected farm. The net present value obtained for each strategy was then multiplied by the corresponding probability of success to obtain an expected value. A stochastic simulation was performed to account for uncertainty and variability. For moderately affected farms, PCV2 vaccination alone was the most cost-efficient strategy; for highly affected farms it was either PCV2 vaccination alone or in combination with biosecurity measures, with the marginal profitability between 'vac' and 'vac+bios' being small.
Other strategies such as 'diets', 'vac+diets' and 'bios+diets' were frequently identified as the second or third best strategy. The mean expected values of the best strategy for a moderately and a highly affected farm were £14,739 and £57,648 after 5 years, respectively. This is the first study to compare economic efficiency of control strategies for PMWS and PCV2SI. The results demonstrate the economic value of PCV2 vaccination, and highlight that on highly affected farms biosecurity measures are required to achieve optimal profitability. The model developed has potential as a farm-level decision support tool for the control of this economically important syndrome.
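The expected-value calculation described above (a strategy's net present value weighted by its probability of success) can be sketched as follows; the cash flows, discount rate, and success probability below are illustrative placeholders, not figures from the study:

```python
def npv(cash_flows, discount_rate):
    """Net present value of yearly net benefits (years 1..n)."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

def expected_value(cash_flows, discount_rate, p_success):
    """Expected value: NPV weighted by the probability the strategy succeeds."""
    return p_success * npv(cash_flows, discount_rate)

# Hypothetical 5-year net benefits (GBP) of a 'vac' strategy on a
# moderately affected 100-sow farm; all figures are illustrative only.
flows = [4000, 4500, 4500, 4500, 4500]
ev = expected_value(flows, discount_rate=0.05, p_success=0.85)
```

In the study, a stochastic simulation would draw the uncertain costs and benefits from distributions and repeat this calculation many times; the sketch shows only the deterministic core.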
Abstract:
The considerable search for synergistic agents in cancer research is motivated by the therapeutic benefits achieved by combining anti-cancer agents. Synergistic agents make it possible to reduce dosage while maintaining or enhancing a desired effect. Other favorable outcomes of synergistic agents include reduction in toxicity and minimizing or delaying drug resistance. Dose-response assessment and drug-drug interaction analysis play an important part in the drug discovery process; however, these analyses are often poorly done. This dissertation is an effort to notably improve dose-response assessment and drug-drug interaction analysis. The method most commonly used in published analyses is the Median-Effect Principle/Combination Index method (Chou and Talalay, 1984). This method leads to inefficiency by ignoring important sources of variation inherent in dose-response data and by discarding data points that do not fit the Median-Effect Principle. Previous work has shown that the conventional method yields a high rate of false positives (Boik, Boik, Newman, 2008; Hennessey, Rosner, Bast, Chen, 2010) and, in some cases, low power to detect synergy. There is a great need for improving the current methodology. We developed a Bayesian framework for dose-response modeling and drug-drug interaction analysis. First, we developed a hierarchical meta-regression dose-response model that accounts for various sources of variation and uncertainty and allows one to incorporate knowledge from prior studies into the current analysis, thus offering more efficient and reliable inference. Second, for cases in which parametric dose-response models do not fit the data, we developed a practical and flexible nonparametric regression method for meta-analysis of independently repeated dose-response experiments.
Third, and lastly, we developed a method, based on Loewe additivity, that allows one to quantitatively assess the interaction between two agents combined at a fixed dose ratio. The proposed method provides a comprehensive and honest accounting of uncertainty in drug interaction assessment. Extensive simulation studies show that the novel methodology improves the screening process for effective/synergistic agents and reduces the incidence of type I error. We consider an ovarian cancer cell line study that investigates the combined effect of DNA methylation inhibitors and histone deacetylation inhibitors in human ovarian cancer cell lines. The hypothesis is that the combination of DNA methylation inhibitors and histone deacetylation inhibitors will enhance antiproliferative activity in human ovarian cancer cell lines compared to treatment with each inhibitor alone. By applying the proposed Bayesian methodology, in vitro synergy was declared for the DNA methylation inhibitor 5-AZA-2'-deoxycytidine combined with either of the histone deacetylation inhibitors suberoylanilide hydroxamic acid or trichostatin A, in the cell lines HEY and SKOV3. This suggests potential new epigenetic therapies for growth inhibition of ovarian cancer cells.
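For context, the conventional Chou-Talalay approach that the dissertation improves upon reduces to a point estimate: solve the median-effect equation for the dose at which each single agent alone produces the observed effect, then sum the dose ratios. A minimal sketch (all parameter values hypothetical):

```python
def dose_for_effect(fa, dm, m):
    """Dose giving fraction affected fa under the median-effect equation
    D = Dm * (fa / (1 - fa)) ** (1 / m), where Dm is the median-effect
    dose and m the slope."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, dm1, m1, dm2, m2):
    """Loewe-additivity combination index at effect level fa:
    CI < 1 suggests synergy, CI = 1 additivity, CI > 1 antagonism.
    d1, d2 are the doses actually used in combination."""
    D1 = dose_for_effect(fa, dm1, m1)
    D2 = dose_for_effect(fa, dm2, m2)
    return d1 / D1 + d2 / D2
```

Note this point estimate carries no measure of uncertainty, which is exactly the deficiency the Bayesian framework described above addresses.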
Abstract:
Despite major advances in the study of glioma, the quantitative links between intra-tumor molecular/cellular properties, clinically observable properties such as morphology, and critical tumor behaviors such as growth and invasiveness remain unclear, hampering more effective coupling of tumor physical characteristics with implications for prognosis and therapy. Although molecular biology, histopathology, and radiological imaging are employed in this endeavor, studies are severely challenged by the multitude of different physical scales involved in tumor growth, i.e., from molecular nanoscale to cell microscale and finally to tissue centimeter scale. Consequently, it is often difficult to determine the underlying dynamics across dimensions. New techniques are needed to tackle these issues. Here, we address this multi-scalar problem by employing a novel predictive three-dimensional mathematical and computational model based on first-principle equations (conservation laws of physics) that describe mathematically the diffusion of cell substrates and other processes determining tumor mass growth and invasion. The model uses conserved variables to represent known determinants of glioma behavior, e.g., cell density and oxygen concentration, as well as biological functional relationships and parameters linking phenomena at different scales whose specific forms and values are hypothesized and calculated based on in vitro and in vivo experiments and from histopathology of tissue specimens from human gliomas. This model enables correlation of glioma morphology to tumor growth by quantifying interdependence of tumor mass on the microenvironment (e.g., hypoxia, tissue disruption) and on the cellular phenotypes (e.g., mitosis and apoptosis rates, cell adhesion strength). 
Once functional relationships between variables and associated parameter values have been informed, e.g., from histopathology or intra-operative analysis, this model can be used for disease diagnosis/prognosis, hypothesis testing, and to guide surgery and therapy. In particular, this tool identifies and quantifies the effects of vascularization and other cell-scale glioma morphological characteristics as predictors of tumor-scale growth and invasion.
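The substrate-diffusion component that such models build on (a conservation law for, e.g., oxygen released by vasculature and consumed by tumor cells) can be illustrated with a minimal one-dimensional finite-difference sketch; the equation form is generic and the parameters are illustrative, not taken from the paper:

```python
def diffuse_oxygen(n=50, dx=0.01, dt=0.001, D=0.001, lam=0.5, steps=5000):
    """Explicit finite-difference solution of a 1-D diffusion-consumption
    equation dc/dt = D * d2c/dx2 - lam * c, with c = 1 held at the left
    boundary (a vessel) and a no-flux right boundary. A toy sketch of a
    substrate conservation law; parameters are hypothetical."""
    c = [0.0] * n
    for _ in range(steps):
        c[0] = 1.0                      # oxygen source (vessel)
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + dt * (D * (c[i-1] - 2*c[i] + c[i+1]) / dx**2
                                  - lam * c[i])
        new[-1] = new[-2]               # no-flux boundary
        c = new
    return c
```

The steady profile decays away from the vessel, reproducing the oxygen gradients that drive hypoxia-dependent behavior in the full three-dimensional model.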
Abstract:
Withdrawal reflexes of the mollusk Aplysia exhibit sensitization, a simple form of long-term memory (LTM). Sensitization is due, in part, to long-term facilitation (LTF) of sensorimotor neuron synapses. LTF is induced by the modulatory actions of serotonin (5-HT). Pettigrew et al. developed a computational model of the nonlinear intracellular signaling and gene network that underlies the induction of 5-HT-induced LTF. The model simulated empirical observations that repeated applications of 5-HT induce persistent activation of protein kinase A (PKA) and that this persistent activation requires a suprathreshold exposure of 5-HT. This study extends the analysis of the Pettigrew model by applying bifurcation analysis, singularity theory, and numerical simulation. Using singularity theory, classification diagrams of parameter space were constructed, identifying regions with qualitatively different steady-state behaviors. The graphical representation of these regions illustrates the robustness of these regions to changes in model parameters. Because persistent PKA activity correlates with Aplysia LTM, the analysis focuses on a positive feedback loop in the model that tends to maintain PKA activity. In this loop, PKA phosphorylates a transcription factor (TF-1), thereby increasing the expression of a ubiquitin hydrolase (Ap-Uch). Ap-Uch then acts to increase PKA activity, closing the loop. This positive feedback loop manifests multiple, coexisting steady states, or multiplicity, which provides a mechanism for a bistable switch in PKA activity. After the removal of 5-HT, the PKA activity either returns to its basal level (reversible switch) or remains at a high level (irreversible switch). Such an irreversible switch might be a mechanism that contributes to the persistence of LTM. The classification diagrams also identify parameters and processes that might be manipulated, perhaps pharmacologically, to enhance the induction of memory.
Rational drug design, to affect complex processes such as memory formation, can benefit from this type of analysis.
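The bistable switch described above can be illustrated with a minimal one-variable caricature (not the Pettigrew model itself): PKA activity P feeds back on its own production through a steep Hill term, giving coexisting low and high steady states. All parameters are hypothetical, chosen only to exhibit bistability:

```python
def simulate_pka(p0, stim, steps=20000, dt=0.01):
    """Euler integration of a toy PKA positive-feedback loop:
    dP/dt = basal + stim + vmax * P**n / (K**n + P**n) - kdeg * P.
    'stim' stands in for 5-HT-driven input; parameters are illustrative."""
    basal, vmax, K, n, kdeg = 0.05, 1.0, 0.5, 4, 1.0
    p = p0
    for _ in range(steps):
        p += dt * (basal + stim + vmax * p**n / (K**n + p**n) - kdeg * p)
    return p

# A transient "5-HT" stimulus drives PKA to the high state...
p_during = simulate_pka(0.0, stim=1.0)
# ...and activity persists after the stimulus is removed (irreversible switch).
p_after = simulate_pka(p_during, stim=0.0)
```

Starting from a low initial condition with no stimulus, the system instead settles near its basal level, the reversible branch of the switch.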
Abstract:
Natural soil profiles may be interpreted as an arrangement of parts characterized by properties such as hydraulic conductivity and the water retention function. These parts form a complicated structure. Characterizing the soil structure is fundamental in subsurface hydrology because it has a crucial influence on flow and transport and defines the patterns of many ecological processes. We applied an image analysis method for recognition and classification of visual soil attributes in order to model flow and transport through a man-made soil profile. Modeled and measured saturation-dependent effective parameters were compared. We found that characterizing and describing conductivity patterns in soils with sharp conductivity contrasts is feasible. By contrast, solving flow and transport on the basis of these conductivity maps is difficult and, in general, requires special care in the representation of small-scale processes.
Abstract:
The discovery of grid cells in the medial entorhinal cortex (MEC) permits the characterization of hippocampal computation in much greater detail than previously possible. The present study addresses how an integrate-and-fire unit driven by grid-cell spike trains may transform the multipeaked, spatial firing pattern of grid cells into the single-peaked activity that is typical of hippocampal place cells. Previous studies have shown that in the absence of network interactions, this transformation can succeed only if the place cell receives inputs from grids with overlapping vertices at the location of the place cell's firing field. In our simulations, the selection of these inputs was accomplished by fast Hebbian plasticity alone. The resulting nonlinear process was acutely sensitive to small input variations. Simulations differing only in the exact spike timing of grid cells produced different field locations for the same place cells. Place fields became concentrated in areas that correlated with the initial trajectory of the animal; the introduction of feedback inhibitory cells reduced this bias. These results suggest distinct roles for plasticity of the perforant path synapses and for competition via feedback inhibition in the formation of place fields in a novel environment. Furthermore, they imply that variability in MEC spiking patterns or in the rat's trajectory is sufficient for generating a distinct population code in a novel environment and suggest that recalling this code in a familiar environment involves additional inputs and/or a different mode of operation of the network.
Abstract:
Calmodulin (CaM) is a ubiquitous Ca(2+) buffer and second messenger that affects cellular functions as diverse as cardiac excitability, synaptic plasticity, and gene transcription. In CA1 pyramidal neurons, CaM regulates two opposing Ca(2+)-dependent processes that underlie memory formation: long-term potentiation (LTP) and long-term depression (LTD). Induction of LTP and LTD requires activation of Ca(2+)-CaM-dependent enzymes: Ca(2+)/CaM-dependent kinase II (CaMKII) and calcineurin, respectively. Yet it remains unclear how Ca(2+) and CaM produce these two opposing effects, LTP and LTD. CaM binds 4 Ca(2+) ions: two in its N-terminal lobe and two in its C-terminal lobe. Experimental studies have shown that the N- and C-terminal lobes of CaM have different binding kinetics toward Ca(2+) and its downstream targets. This may suggest that each lobe of CaM differentially responds to Ca(2+) signal patterns. Here, we use a novel event-driven particle-based Monte Carlo simulation and statistical point pattern analysis to explore the spatial and temporal dynamics of lobe-specific Ca(2+)-CaM interaction at the single molecule level. We show that the N-lobe of CaM, but not the C-lobe, exhibits a nano-scale domain of activation that is highly sensitive to the location of Ca(2+) channels, and to the microscopic injection rate of Ca(2+) ions. We also demonstrate that Ca(2+) saturation takes place via two different pathways depending on the Ca(2+) injection rate, one dominated by the N-terminal lobe, and the other by the C-terminal lobe. Taken together, these results suggest that the two lobes of CaM function as distinct Ca(2+) sensors that can differentially transduce Ca(2+) influx to downstream targets. We discuss a possible role of the N-terminal lobe-specific Ca(2+)-CaM nano-domain in the CaMKII activation required for the induction of synaptic plasticity.
Abstract:
Besides its primary role in producing food and fiber, agriculture also has relevant effects on several other functions, such as management of renewable natural resources. Climate change (CC) may lead to new trade-offs between agricultural functions or aggravate existing ones, but suitable agricultural management may maintain or even improve the ability of agroecosystems to supply these functions. Hence, it is necessary to identify relevant drivers (e.g., cropping practices, local conditions) and their interactions, and how they affect agricultural functions in a changing climate. The goal of this study was to use a modeling framework to analyze the sensitivity of indicators of three important agricultural functions, namely crop yield (food and fiber production function), soil erosion (soil conservation function), and nutrient leaching (clean water provision function), to a wide range of agricultural practices for current and future climate conditions. In a two-step approach, cropping practices that explain high proportions of variance of the different indicators were first identified by an analysis-of-variance-based sensitivity analysis. Then, the most suitable combinations of practices to achieve the best performance with respect to each indicator were extracted, and trade-offs were analyzed. The procedure was applied to a region in western Switzerland, considering two different soil types to test the importance of local environmental constraints. Results show that the sensitivity of crop yield and soil erosion to management is high, while nutrient leaching mostly depends on soil type. We found that the influence of most agricultural practices does not change significantly with CC; only irrigation becomes more relevant as a consequence of decreasing summer rainfall. Trade-offs were identified when focusing on the best performance of each indicator separately, and these were amplified under CC.
For adaptation to CC in the selected study region, conservation soil management and the use of cropped grasslands appear to be the most suitable options to avoid trade-offs.
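The first step of the two-step approach, attributing indicator variance to individual practices, rests on the ANOVA variance decomposition Var(E[Y|X]) / Var(Y). A minimal sketch for a single discrete factor (a generic illustration, not the authors' code):

```python
from collections import defaultdict

def first_order_index(levels, outputs):
    """ANOVA-based first-order sensitivity index Var(E[Y|X]) / Var(Y):
    the share of output variance explained by a discrete factor X.
    levels[i] is the factor level of run i, outputs[i] its indicator value."""
    n = len(outputs)
    grand = sum(outputs) / n
    var_y = sum((y - grand) ** 2 for y in outputs) / n
    groups = defaultdict(list)
    for x, y in zip(levels, outputs):
        groups[x].append(y)
    # Variance of the group-conditional means, weighted by group size.
    var_cond = sum(len(g) / n * (sum(g) / len(g) - grand) ** 2
                   for g in groups.values())
    return var_cond / var_y
```

An index near 1 marks a practice (e.g., irrigation) that dominates the indicator; an index near 0 marks one whose variation hardly matters.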
Abstract:
Any functionally important mutation is embedded in an evolutionary matrix of other mutations. Cladistic analysis, based on this, is a method of investigating gene effects using a haplotype phylogeny to define a set of tests which localize causal mutations to branches of the phylogeny. Previous implementations of cladistic analysis have not addressed the issue of analyzing data from related individuals, though in human studies, family data are usually needed to obtain unambiguous haplotypes. In this study, a method of cladistic analysis is described in which haplotype effects are parameterized in a linear model which accounts for familial correlations. The method was used to study the effect of apolipoprotein (Apo) B gene variation on total-, LDL-, and HDL-cholesterol, triglyceride, and Apo B levels in 121 French families. Five polymorphisms defined Apo B haplotypes: the signal peptide insertion/deletion, Bsp1286I, XbaI, MspI, and EcoRI. Eleven haplotypes were found, and a haplotype phylogeny was constructed and used to define a set of tests of haplotype effects on lipid and Apo B levels. This new method of cladistic analysis, the parametric method, found significant effects for single haplotypes for all variables. For HDL-cholesterol, 3 clusters of evolutionarily related haplotypes affecting levels were found. Haplotype effects accounted for about 10% of the genetic variance of triglyceride and HDL-cholesterol levels. The results of the parametric method were compared to those of a method of cladistic analysis based on permutational testing. The permutational method detected fewer haplotype effects, even when modified to account for correlations within families. Simulation studies exploring these differences found evidence of systematic errors in the permutational method due to the process by which haplotype groups were selected for testing. The applicability of cladistic analysis to human data was shown.
The parametric method is suggested as an improvement over the permutational method. This study has identified candidate haplotypes for sequence comparisons in order to locate the functional mutations in the Apo B gene which may influence plasma lipid levels.
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to be able to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency were found to increase the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased the power to detect disequilibrium. Discordant sampling was used for several of the tests. It was found that the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests maximized to less than one.
For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, resulting in the marker being uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (transmission disequilibrium test)-based tests were not liable to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations), linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
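The TDT referred to above tests whether heterozygous parents transmit a candidate marker allele to affected offspring more often than the 50% expected under the null hypothesis. A minimal sketch of the McNemar-type statistic is:

```python
def tdt_statistic(transmitted, untransmitted):
    """Transmission disequilibrium test statistic (b - c)**2 / (b + c),
    where b and c count how often heterozygous parents transmit vs. do
    not transmit the candidate marker allele; approximately chi-square
    with 1 df under the null of no linkage/disequilibrium."""
    b, c = transmitted, untransmitted
    return (b - c) ** 2 / (b + c)
```

Because each parent serves as an internal control for its own transmission, the statistic is robust to population admixture, which is why the TDT-based tests above show no inflated error rates.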
Abstract:
The aim of this study was to explore potential causes and mechanisms for the sequence and temporal pattern of tree taxa, specifically for the shift from shrub-tundra to birch–juniper woodland during and after the transition from the Oldest Dryas to the Bølling–Allerød in the region surrounding the lake Gerzensee in southern Central Europe. We tested the influence of climate, forest dynamics, and community dynamics against other possible causes of delays. For this aim, temperature reconstructed from a δ18O record was used as input driving the multi-species forest-landscape model TreeMig. In a stepwise scenario analysis, population dynamics along with pollen production and transport were simulated and compared with pollen-influx data, according to scenarios with different δ18O/temperature sensitivities, different precipitation levels, with/without inter-specific competition, and with/without prescribed arrival of species. In the best-fitting scenarios, the effects on competitive relationships, pollen production, spatial forest structure, albedo, and surface roughness were examined in more detail. The appearance of most taxa in the data could only be explained by the coldest temperature scenario, with a sensitivity of 0.3‰/°C, corresponding to an anomaly of −15 °C. Once the taxa were present, their temporal pattern was shaped by competition. The later arrival of Pinus could not be explained even by the coldest temperatures, and its timing had to be prescribed from first observations in the pollen record. After its arrival in the simulation area, the expansion of Pinus was further influenced by competitors and minor climate oscillations. The rapid change in the simulated species composition went along with a drastic change in forest structure, leaf area, albedo, and surface roughness. Pollen increased only shortly after biomass.
Based on our simulations, two alternative potential scenarios for the pollen pattern can be given: either very cold climate suppressed most species in the Oldest Dryas, or they were delayed by soil formation or migration. One taxon, Pinus, was delayed by migration and then additionally hindered by competition. Community dynamics affected the pattern in two ways: potentially by facilitation, i.e., by nitrogen-fixing pioneer species at the onset, whereas the later pattern was clearly shaped by competition. The simulated structural changes illustrate how vegetation on a larger scale could feed back to the climate system. For a better understanding, a more integrated simulation approach covering also the immigration from refugia would be necessary, as this combines climate-driven population dynamics, migration, individual pollen production and transport, soil dynamics, and the physiology of pollen production.
Abstract:
I modeled the cumulative impact of hydroelectric projects, with and without commercial fishing weirs and water-control dams, on the production, survival to the sea, and potential fecundity of migrating female silver-phase American eels, Anguilla rostrata, in the Kennebec River basin, Maine. This river basin has 22 hydroelectric projects, 73 water-control dams, and 15 commercial fishing weir sites. The modeled area included an 8,324 km(2) segment of the drainage area between Merrymeeting Bay and the upper limit of American eel distribution in the basin. One set of inputs (assumed or real values) concerned population structure (i.e., population density and sex ratio changes throughout the basin, female length-class distribution, and drainage area between dams). Another set concerned factors influencing survival and potential fecundity of migrating American eels (i.e., pathway sequences through projects, survival rate per project by length-class, and the length-fecundity relationship). Under baseline conditions, about 402,400 simulated silver female American eels would be produced annually; reductions in their numbers due to dams and weirs would reduce the realized fecundity (i.e., the number of eggs produced by all females that survived the migration). Without weirs or water-control dams, about 63% of the simulated silver-phase American eels survived their freshwater spawning migration to the sea when the survival rate at each hydroelectric dam was 90%, 40% survived at 80% survival per dam, and 18% survived at 60% survival per dam. Removing the lowermost hydroelectric dam on the Kennebec River increased survival by 6.0-7.6% for the basin. The efficient commercial weirs reduced survival to the sea to 69-76% of what it would have been without weirs, regardless of survival rates at hydroelectric dams. Water-control dams had little impact on production in this basin because most were located in the upper reaches of tributaries.
Sensitivity analysis led to the conclusion that small changes in population density and female length distribution had greater effects on survival and realized fecundity than similar changes in turbine survival rate. The latter became more important as turbine survival rate decreased. Therefore, it might be more fruitful to determine population distribution in basins of interest than to determine mortality rate at each hydroelectric project.
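The basin-wide survival figures above follow from multiplying per-dam survival along each eel's migration pathway. A toy sketch, using a hypothetical distribution of dam counts per pathway and assuming independent survival at each dam:

```python
def basin_survival(pathway_counts, per_dam_survival):
    """Basin-wide survival to the sea. Eels are grouped by the number of
    dams on their migration pathway; survival at each dam is assumed
    independent, so pathway survival is per_dam_survival ** n_dams.
    pathway_counts: {n_dams_on_pathway: n_eels}; the distribution here
    is illustrative, not from the study."""
    total = sum(pathway_counts.values())
    survivors = sum(n * per_dam_survival ** dams
                    for dams, n in pathway_counts.items())
    return survivors / total
```

Because survival compounds multiplicatively, eels from upper tributaries that must pass many projects dominate the basin-wide losses, which is why removing the lowermost dam (passed by nearly all eels) has an outsized effect.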
Abstract:
A reliable and robust routing service for Flying Ad-Hoc Networks (FANETs) must be able to adapt to topology changes, and also to recover the quality level of multiple delivered video flows under dynamic network topologies. The user experience of watching live videos must remain satisfactory even in scenarios with network congestion, buffer overflow, and high packet loss ratios, as experienced in many FANET multimedia applications. In this paper, we perform a comparative simulation study to assess the robustness, reliability, and quality level of videos transmitted via well-known beaconless opportunistic routing protocols. Simulation results show that our developed protocol, XLinGO, achieves multimedia dissemination with Quality of Experience (QoE) support and robustness in multi-hop, multi-flow, and mobile networks, as required in many multimedia FANET scenarios.
Abstract:
A rain-on-snow flood occurred in the Bernese Alps, Switzerland, on 10 October 2011, and caused significant damage. As the flood peak was unpredicted by the flood forecast system, questions were raised concerning the causes and the predictability of the event. Here, we aimed to reconstruct the anatomy of this rain-on-snow flood in the Lötschen Valley (160 km2) by analyzing meteorological data from the synoptic to the local scale and by reproducing the flood peak with the hydrological model WaSiM-ETH (Water Flow and Balance Simulation Model), in order to gain process understanding and to evaluate predictability. The atmospheric drivers of this rain-on-snow flood were (i) sustained snowfall followed by (ii) the passage of an atmospheric river bringing warm and moist air towards the Alps. As a result, intensive rainfall (average of 100 mm day-1) was accompanied by a temperature increase that shifted the 0 °C line from 1500 to 3200 m a.s.l. (meters above sea level) in 24 h, with a maximum increase of 9 K in 9 h. The south-facing slope of the valley received significantly more precipitation than the north-facing slope, leading to flooding only in tributaries along the south-facing slope. We hypothesized that the reason for this very local rainfall distribution was a cavity circulation combined with a seeder-feeder cloud system enhancing local rainfall and snowmelt along the south-facing slope. By applying and considerably recalibrating the standard hydrological model setup, we showed that both latent and sensible heat fluxes were needed to reconstruct the snow cover dynamics, and that locally high precipitation sums (160 mm in 12 h) were required to produce the estimated flood peak. However, to reproduce the rapid runoff responses during the event, we had to conceptually represent likely lateral flow dynamics within the snow cover, causing the model to react "oversensitively" to meltwater.
Driving the optimized model with COSMO (Consortium for Small-scale Modeling)-2 forecast data, we still failed to simulate the flood because the COSMO-2 forecast data underestimated both the local precipitation peak and the temperature increase. We thus conclude that this rain-on-snow flood was, in general, predictable, but that it required a special hydrological model setup and extensive, locally precise meteorological input data. Although this data quality may not be achievable with forecast data, an additional model with a specific rain-on-snow configuration can provide useful information when rain-on-snow events are likely to occur.