145 results for Stochastic Translog Cost Frontier


Relevance: 20.00%

Publisher:

Abstract:

Many species are able to learn to associate behaviours with rewards as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals to use information about payoffs associated with nontried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoff. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask what is the evolutionarily stable learning rule under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. We analyse through stochastic approximation theory and simulations the learning dynamics on the behavioural timescale, and derive conditions where trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also obtain where trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
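The difference between the two learning rules can be sketched in code. This is a minimal, illustrative Roth-Erev-style propensity model, not the paper's actual formulation; the Prisoner's Dilemma payoffs, initial propensities, and update rules are all assumptions made for illustration:

```python
import random

# Row player's payoffs in a Prisoner's Dilemma (assumed for illustration).
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

class TrialAndErrorLearner:
    """Reinforces only the action actually played (realized payoff)."""
    def __init__(self):
        self.propensity = {"C": 1.0, "D": 1.0}

    def choose(self):
        total = sum(self.propensity.values())
        return "C" if random.random() < self.propensity["C"] / total else "D"

    def update(self, own, other):
        self.propensity[own] += PAYOFF[(own, other)]

class HypotheticalLearner(TrialAndErrorLearner):
    """Also reinforces the action NOT played, using its hypothetical payoff."""
    def update(self, own, other):
        for action in ("C", "D"):
            self.propensity[action] += PAYOFF[(action, other)]

def play(a, b, rounds=1000):
    for _ in range(rounds):
        x, y = a.choose(), b.choose()
        a.update(x, y)
        b.update(y, x)
    return a, b

random.seed(0)
a, b = play(TrialAndErrorLearner(), TrialAndErrorLearner())
print({k: round(v) for k, v in a.propensity.items()})
```

Because the hypothetical learner always credits the dominant action with its (higher) foregone payoff, a pair of such learners is driven toward defection, whereas trial-and-error learners can lock into mutual cooperation under repeated interactions, which is the intuition behind the result above.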


Recent research has highlighted the existence of a social bias in the extent to which children have access to childcare. In general, children living in higher income households are more likely to be cared for in childcare centres. While the existence of a social bias in access to childcare services has been clearly demonstrated, we currently lack a clear explanation as to why this is the case. This paper uses a unique dataset based on survey data collected specifically to study patterns of childcare use in the Swiss canton of Vaud (N = 875). The paper exploits the variation in the way childcare is organised within the canton. Childcare is a municipal policy, as a result of which there are twenty-nine different systems in operation. Fees are progressive everywhere, but variation is substantial. Availability is also very different. This peculiar institutional setup provides an ideal situation to examine the determinants of childcare use by different income groups. Our findings suggest that differences in the fees charged to low-income households, as well as the degree of progressivity of the fee structure, are significant predictors of use, while availability seems to matter less.


Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Numerous problems then arise, ranging from the prospection of new resources to the remediation of polluted aquifers. Regardless of the hydrogeological problem considered, the main challenge remains the characterization of the subsurface properties. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. Here we meet the main limitation of these approaches: the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows a subset of realizations representing the variability of the initial ensemble to be identified. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the information available. To this end, the subset of approximate and exact responses is used to build an error model, which then serves to correct the remaining approximate responses and to predict the response of the complex model. This method makes the most of the available information without any perceptible increase in computation time, and the uncertainty propagation becomes more accurate and more robust.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the novelty of the work presented lies in the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also makes it possible to diagnose the quality of the error model in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant not only for uncertainty propagation but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the most commonly used algorithms for generating geostatistical realizations consistent with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, "two-stage MCMC", has been introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC.
We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set, and how to identify the realizations that optimize the construction of the error model. This calls for an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
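The pipeline just described, dimensionality reduction of the response curves followed by a regression between proxy and exact scores, can be sketched as follows. This is a hedged illustration under synthetic assumptions (toy exponential curves, PCA rank 3, 30 training runs), with plain PCA on discretized curves standing in for full FPCA; it is not the thesis code:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)             # time axis of the response curves
n, n_train = 200, 30                      # ensemble size / exact-model runs

amp = rng.uniform(0.5, 1.5, n)[:, None]
exact = amp * np.exp(-3.0 * t)            # "exact" responses (expensive model)
proxy = 0.8 * exact + 0.05 * np.sin(6.0 * t)   # cheap, biased approximation

def fit_pca(curves, rank):
    """Mean curve and the first `rank` principal components."""
    mean = curves.mean(axis=0)
    _, _, vt = np.linalg.svd(curves - mean, full_matrices=False)
    return mean, vt[:rank]

def scores(curves, mean, comps):
    return (curves - mean) @ comps.T

# 1) FPCA-style dimensionality reduction of both response families.
pm, pc = fit_pca(proxy, rank=3)
em, ec = fit_pca(exact[:n_train], rank=3)

# 2) Linear regression (with intercept) between proxy and exact scores,
#    fitted on the training subset where both responses are known.
X = np.hstack([scores(proxy[:n_train], pm, pc), np.ones((n_train, 1))])
Y = scores(exact[:n_train], em, ec)
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3) Correct every proxy curve: predicted exact scores -> curves.
X_all = np.hstack([scores(proxy, pm, pc), np.ones((n, 1))])
pred = X_all @ coef @ ec + em

err_proxy = np.abs(proxy - exact).mean()
err_model = np.abs(pred - exact).mean()
print(f"mean abs error: proxy {err_proxy:.4f}, corrected {err_model:.4f}")
```

In this toy setting the proxy-exact relation happens to be exactly linear, so the corrected curves are near-perfect; with a real flow proxy the residual error is what the FPCA-space diagnostics are meant to quantify.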
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
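The two-stage (delayed-acceptance) screening idea can be sketched as follows. This is an illustrative 1D toy, not the thesis implementation; the Gaussian "exact" and proxy densities and the proposal step size are assumptions:

```python
import math, random

def exact_logp(x):        # stand-in for the expensive exact model
    return -0.5 * x * x

def proxy_logp(x):        # cheap, slightly biased approximation
    return -0.5 * (x * 1.1) ** 2

def two_stage_mcmc(n_steps=5000, step=1.0, seed=0):
    rng = random.Random(seed)
    x, chain, exact_calls = 0.0, [], 0
    for _ in range(n_steps):
        y = x + rng.gauss(0, step)
        # Stage 1: accept/reject the proposal with the proxy only.
        if math.log(rng.random()) < proxy_logp(y) - proxy_logp(x):
            # Stage 2: evaluate the exact model only for survivors;
            # the modified ratio keeps the chain targeting the exact
            # posterior despite the proxy pre-screening.
            exact_calls += 1
            log_ratio = (exact_logp(y) - exact_logp(x)
                         + proxy_logp(x) - proxy_logp(y))
            if math.log(rng.random()) < log_ratio:
                x = y
        chain.append(x)
    return chain, exact_calls

chain, calls = two_stage_mcmc()
print(f"exact-model calls: {calls} of 5000 proposals")
```

Every proposal rejected at stage 1 saves one exact-model run, which is where the computational gain comes from when the proxy (here, proxy plus error model) is a good screen.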


AIM: The study aimed to compare the rate of success and the cost of anal fistula plug (AFP) insertion and endorectal advancement flap (ERAF) for anal fistula. METHOD: Patients receiving an AFP or ERAF for a complex single fistula tract, defined as involving more than a third of the longitudinal length of the anal sphincter, were registered in a prospective database. A regression analysis was performed of factors predicting recurrence and contributing to cost. RESULTS: Seventy-one patients (AFP 31, ERAF 40) were analysed. Twelve (39%) recurrences occurred in the AFP group and 17 (43%) in the ERAF group (P = 1.00). The median length of stay was 1.23 and 2.0 days (P < 0.001), respectively, and the mean cost of treatment was €5439 ± €2629 and €7957 ± €5905 (P = 0.021), respectively. On multivariable analysis, postoperative complications, underlying inflammatory bowel disease and fistula recurring after previous treatment were independent predictors of de novo recurrence. The analysis also showed a length of hospital stay of ≤ 1 day to be the most significant independent contributor to lower cost (P = 0.023). CONCLUSION: Anal fistula plug and ERAF were equally effective in treating fistula-in-ano, but AFP carried a mean cost saving of €2518 per procedure compared with ERAF. The higher cost of ERAF is due to a longer median length of stay.


Background: The public health burden of coronary artery disease (CAD) is important. Perfusion cardiac magnetic resonance (CMR) is generally accepted to detect and monitor CAD. Few studies have so far addressed its costs and cost-effectiveness. Objectives: To compare, in a large CMR registry, the costs of a CMR-guided strategy vs two hypothetical invasive strategies for the diagnosis and treatment of patients with suspected CAD. Methods: In 3'647 patients with suspected CAD included prospectively in the EuroCMR Registry (59 centers; 18 countries), costs were calculated for diagnostic examinations, revascularizations, and complication management over a 1-year follow-up. Patients with ischemia-positive CMR underwent invasive X-ray coronary angiography (CXA) and revascularization at the discretion of the treating physician (=CMR+CXA strategy). Ischemia was found in 20.9% of patients, and 17.4% of them were revascularized. In patients with ischemia-negative CMR, cardiac death and non-fatal myocardial infarction occurred in 0.38%/y. In a hypothetical invasive arm, the costs were calculated for an initial CXA followed by FFR testing in vessels with ≥50% diameter stenoses (=CXA+FFR strategy). To model this hypothetical arm, the same proportion of ischemic patients and the same outcomes were assumed as for the CMR+CXA strategy. The coronary stenosis-FFR relationship reported in the literature was used to derive the proportion of patients with ≥50% diameter stenoses (Psten) in the study cohort. The costs of a CXA-only strategy were also calculated. Calculations were performed from a third-payer perspective for the German, UK, Swiss, and US healthcare systems.
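The per-patient expected-cost comparison behind such a model can be sketched as follows. The ischemia (20.9%) and revascularization (17.4%) rates are the ones quoted above; all unit costs and the stenosis proportion are hypothetical placeholders for illustration, not registry figures:

```python
# Proportions quoted in the abstract above.
P_ISCHEMIA = 0.209   # ischemia-positive CMR
P_REVASC = 0.174     # revascularized among ischemic patients

def cost_cmr_cxa(c_cmr, c_cxa, c_revasc):
    """CMR for everyone; CXA (and possibly revascularization)
    only in ischemia-positive patients."""
    return c_cmr + P_ISCHEMIA * (c_cxa + P_REVASC * c_revasc)

def cost_cxa_ffr(c_cxa, c_ffr, p_sten, c_revasc):
    """CXA for everyone; FFR where a >=50% stenosis is found
    (p_sten is the assumed proportion of such patients); same
    revascularization rate as the CMR+CXA arm."""
    return c_cxa + p_sten * c_ffr + P_ISCHEMIA * P_REVASC * c_revasc

# Hypothetical unit costs (arbitrary currency units, placeholders only):
cmr, cxa, ffr, revasc = 600, 1500, 300, 6000
print(f"CMR+CXA per patient: {cost_cmr_cxa(cmr, cxa, revasc):.0f}")
print(f"CXA+FFR per patient: {cost_cxa_ffr(cxa, ffr, 0.4, revasc):.0f}")
```

With any unit costs where CXA is markedly more expensive than CMR, gatekeeping the invasive test behind a positive CMR drives the per-patient difference, which is the structure of the comparison in the registry study.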


Recent theory predicts that harsh and stochastic conditions generally promote the evolution of cooperation. Here, we test experimentally whether stochasticity in economic losses also affects the value of reputation in indirect reciprocity, a type of cooperation that is very typical of humans. We used a repeated helping game with observers. One subject (the "Unlucky") lost some money; another one (the "Passer-by") could reduce this loss by accepting a cost to herself, thereby building up a reputation that could be used by others in later interactions. The losses were either stable or stochastic, but the average loss over time and the average efficiency gains of helping were kept constant across both treatments. We found that players with a reputation for being generous were generally more likely to receive help from others, such that investing in a good reputation generated long-term benefits that compensated for the immediate costs of helping. Helping frequencies were similar in both treatments, but players with a reputation for being selfish lost more resources under stochastic conditions. Hence, returns on investment were steeper when losses varied than when they did not. We conclude that this type of stochasticity increases the value of reputation in indirect reciprocity.


The costs related to the treatment of infected total joint arthroplasties represent an ever growing burden to society. Different patient-adapted therapeutic options, such as débridement and retention or 1- or 2-stage exchange, can be used. If a 2-stage exchange is used, we have to consider a short (2-4 weeks) or a long (>4-6 weeks) interval treatment. The Swiss DRG (Diagnosis Related Groups) system determines the reimbursement the hospital receives for the treatment of an infected total arthroplasty. The review assesses the cost-effectiveness of hospitalisation practices linked to surgical treatment in the two-stage exchange of a prosthetic joint infection. The aim of this retrospective study is to compare the economic impact of a short (2 to 4 weeks) versus a long (6 weeks and above) interval during a two-stage procedure. Retrospective study of the patients with a two-stage procedure for a hip or knee prosthetic joint infection at CHUV hospital, Lausanne (Switzerland), between 2012 and 2013. The review analyses the correlation between the interval length and the length of the hospital stay, as well as with the costs and revenues per hospital stay. On average, there is a loss of €40,000 per hospitalisation for the treatment of a prosthetic joint infection. Revenues never cover all the costs, even with a short-interval procedure. This economic loss increases with the length of the hospital stay if a long interval is chosen. The review explores potential for improvement in reimbursement and hospitalisation practices in the current Swiss healthcare setting. There should be alternative setups to decrease the burden of medical costs, either by a) increasing the reimbursement for the treatment of infected total joints, or by b) splitting the hospital stay between partner hospitals (rapid transfer after the first operation from the centre hospital to a level 2 hospital, and retransfer to the centre for the second operation) in order to increase revenues.