Abstract:
Background: Preclinical data indicate activity of mammalian target of rapamycin (mTOR) inhibitors and synergistic activity together with radiotherapy in glioblastoma. The aim of this trial was to assess the therapeutic activity of temsirolimus (CCI-779), an intravenous mTOR inhibitor, in patients with newly diagnosed glioblastoma with an unmethylated O6-methylguanine-DNA-methyltransferase (MGMT) promoter. Methods: Patients (n=257) with newly diagnosed glioblastoma after open surgical biopsy or resection fulfilling basic eligibility criteria underwent central MGMT promoter analysis using quantitative methylation-specific PCR. Patients with glioblastoma harboring an unmethylated MGMT promoter (n=111) were randomized 1:1 between radiotherapy (60 Gy; 5 times 2 Gy per week) plus concomitant temozolomide and six cycles of maintenance temozolomide, or radiotherapy plus weekly temsirolimus at a 25 mg flat dose continued until progression or undue toxicity. The primary endpoint was overall survival at 12 months (OS12). The sample size of the investigational treatment arm required 54 patients to assess the adequacy of temsirolimus activity, with the target set at 80%. More than 38 patients alive at 12 months in the per-protocol population was considered a positive signal. A control arm of 54 patients treated with the standard of care was implemented to evaluate the assumptions on OS12. Results: Between December 2009 and October 2012, 111 patients in 14 centers were randomized and treated. Median age was 55 and 58 years in the temsirolimus and standard arm, respectively. Most patients (95.5%) had a WHO performance status of 0 or 1. Both therapies were properly administered, with a median of 13 cycles of maintenance temsirolimus. In the per-protocol population, exactly 38 patients treated with temsirolimus (out of 54 eligible) reached OS12. In the intention-to-treat population, OS12 was 72.2% [95% CI (58.2, 82.2)] in the temozolomide arm and 69.6% [95% CI (55.8, 79.9)] in the temsirolimus arm [HR = 1.16, 95% CI (0.77, 1.76), p = 0.47]. Conclusions: The therapeutic activity of temsirolimus in patients with newly diagnosed glioblastoma with an unmethylated MGMT promoter is too low.
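For orientation, a minimal Python sketch of how the OS12 figures above relate to the design threshold: 38 of 54 eligible patients is roughly 70%, below the 80% activity target. The Clopper-Pearson interval used here is an assumption for illustration; the abstract does not state which interval method the trial used.

# Hedged illustration: relating the 38-of-54 per-protocol result to the 80% target.
# The interval method is an assumption; scipy's exact (Clopper-Pearson) CI is used.
from scipy.stats import binomtest

n_eligible = 54      # per-protocol sample size of the temsirolimus arm
n_alive_12m = 38     # patients alive at 12 months (the ">38" signal was not exceeded)

result = binomtest(n_alive_12m, n_eligible)
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"OS12 estimate: {n_alive_12m / n_eligible:.1%}")            # about 70.4%
print(f"95% CI: ({ci.low:.1%}, {ci.high:.1%})")
print(f"80% activity target inside CI: {ci.low <= 0.80 <= ci.high}")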
Abstract:
In this paper we prove that the Mas-Colell bargaining set coincides with the core for three-player balanced and superadditive cooperative games. This is no longer true without the superadditivity condition or for games with more than three players. Furthermore, under the same assumptions, the coincidence between the Mas-Colell and the individual rational bargaining set (Vohra (1991)) is revealed. Keywords: Cooperative game, Mas-Colell bargaining set, balancedness, individual rational bargaining set. JEL classification: C71, D63, D71.
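For readers unfamiliar with the notation, a reference sketch in LaTeX of two standard objects named above, for a TU game (N, v); the paper's own formulation of the Mas-Colell bargaining set is not reproduced here.

% Standard definitions only (a reference sketch, not the paper's statements).
\[
  C(v) \;=\; \Bigl\{\, x \in \mathbb{R}^{N} \;:\; \textstyle\sum_{i \in N} x_i = v(N),
  \;\; \textstyle\sum_{i \in S} x_i \ge v(S) \ \ \forall\, S \subseteq N \,\Bigr\}
  \qquad \text{(the core)}
\]
\[
  \text{superadditivity:}\quad v(S \cup T) \ \ge\ v(S) + v(T)
  \quad \text{for all disjoint } S, T \subseteq N .
\]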
Abstract:
A haplotype is an m-long binary vector. The XOR-genotype of two haplotypes is the m-vector of their coordinate-wise XOR. We study the following problem: Given a set of XOR-genotypes, reconstruct their haplotypes so that the set of resulting haplotypes can be mapped onto a perfect phylogeny (PP) tree. The question is motivated by studying population evolution in human genetics and is a variant of the PP haplotyping problem that has received intensive attention recently. Unlike the latter problem, in which the input is "full" genotypes, here we assume less informative input, which may be more economical to obtain experimentally. Building on ideas of Gusfield, we show how to solve the problem in polynomial time by a reduction to the graph realization problem. The actual haplotypes are not uniquely determined by the tree they map onto, and the tree itself may or may not be unique. We show that tree uniqueness implies uniquely determined haplotypes, up to inherent degrees of freedom, and give a sufficient condition for the uniqueness. To actually determine the haplotypes given the tree, additional information is necessary. We show that two or three full genotypes suffice to reconstruct all the haplotypes and present a linear-time algorithm for identifying those genotypes.
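As an illustration of the XOR-genotype operation described above (not the authors' algorithm), a minimal Python sketch: the XOR of two haplotypes marks the heterozygous sites, and knowing one haplotype of a pair recovers the other.

# Illustrative sketch only: the XOR-genotype of two haplotypes, and recovering the
# second haplotype of a pair once the first is known (e.g. from a full genotype).
def xor_genotype(h1, h2):
    """Coordinate-wise XOR of two m-long binary haplotypes."""
    assert len(h1) == len(h2)
    return [a ^ b for a, b in zip(h1, h2)]

def resolve_pair(xor_g, known_haplotype):
    """Given an XOR-genotype and one of the two haplotypes, return the other."""
    return [g ^ a for g, a in zip(xor_g, known_haplotype)]

h1 = [0, 1, 1, 0, 1]
h2 = [0, 0, 1, 1, 1]
g = xor_genotype(h1, h2)          # [0, 1, 0, 1, 0]: 1 marks heterozygous sites
print(g)
print(resolve_pair(g, h1) == h2)  # True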
Abstract:
Background and objectives: The PREDyCES® study had two main objectives: first, to analyse the prevalence of hospital malnutrition (HM) in Spain both at admission and at discharge, and second, to estimate its associated costs. Methods: National, cross-sectional, observational, multicentre study under routine clinical practice conditions that assessed the presence of hospital malnutrition at admission and at discharge using the NRS-2002®. An extension of the study analysed the incidence of malnutrition-associated complications, the excess length of hospital stay and the healthcare costs associated with HM. Results: The prevalence of malnutrition observed according to the NRS-2002® was 23.7%. Multivariate analysis showed that age, gender, the presence of oncological disease, diabetes mellitus, dysphagia and polypharmacy were the main factors associated with the presence of malnutrition. HM was associated with an increased length of hospital stay, especially in patients who were admitted without malnutrition and were malnourished at discharge (15.2 vs 8.0 days; p < 0.001), with an associated additional cost of €5,829 per patient. Conclusions: One in four patients in Spanish hospitals is malnourished. This condition is associated with an excess length of stay and associated costs, especially in patients who become malnourished during their hospital stay. Systematic nutritional screening should be generalized with the aim of implementing nutritional interventions of known efficacy.
Local re-inversion coronary MR angiography: arterial spin-labeling without the need for subtraction.
Abstract:
PURPOSE: To implement a double-inversion bright-blood coronary MR angiography sequence using a cylindrical re-inversion prepulse for selective visualization of the coronary arteries. MATERIALS AND METHODS: Local re-inversion bright-blood magnetization preparation was implemented using a nonselective inversion followed by a cylindrical aortic re-inversion prepulse. After an inversion delay that allows for in-flow of the labeled blood pool into the coronary arteries, three-dimensional radial steady-state free-precession (SSFP) imaging (repetition/echo time, 7.2/3.6 ms; flip angle, 120 degrees; 16 profiles per RR interval; field of view, 360 mm; matrix, 512; twelve 3-mm slices) is performed. Coronary MR angiography was performed in three healthy volunteers and in one patient on a commercial 1.5 Tesla whole-body MR system. RESULTS: In all subjects, the coronary arteries were selectively visualized with positive contrast. In addition, a middle-grade stenosis of the proximal right coronary artery was seen in the patient. CONCLUSION: A novel T1 contrast-enhancement strategy is presented for selective visualization of the coronary arteries without extrinsic contrast medium application. In comparison to former arterial spin-labeling schemes, the proposed magnetization preparation obviates the need for a second data set and subtraction.
Abstract:
For the development and evaluation of cardiac magnetic resonance (MR) imaging sequences and methodologies, the availability of a periodically moving phantom to model respiratory and cardiac motion would be of substantial benefit. Given the specific physical boundary conditions in an MR environment, the choice of materials and of the power source for such phantoms is heavily restricted. Sophisticated commercial solutions are available; however, they are often relatively costly, and user-specific modifications may not easily be implemented. We therefore sought to construct a low-cost MR-compatible motion phantom that could be easily reproduced and offered design flexibility. A commercially available K'NEX construction set (Hyper Space Training Tower, K'NEX Industries, Inc., Hatfield, PA) was used to construct a periodically moving phantom head. The phantom head performs a translation with a superimposed rotation, driven by a motor over a 2-m rigid rod. To synchronize the MR data acquisition with the phantom motion (without introducing radiofrequency-related image artifacts), a fiberoptic control unit generates periodic trigger pulses synchronized to the phantom motion. Total material costs of the phantom are less than US$200, and a total of 80 man-hours were required to design and construct the original phantom. With the schematics of the present solution, reproduction of the phantom may be achieved in approximately 15 man-hours. The presented MR-compatible periodically moving phantom can easily be reproduced, and user-specific modifications may be implemented. Such an approach allows a detailed investigation of motion-related phenomena in MR images.
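As a rough illustration of the motion pattern described above, a hypothetical Python sketch of a periodic translation with a superimposed rotation and per-period trigger markers; the amplitudes and period are placeholders, not the phantom's actual values.

# Hypothetical sketch of the kind of periodic trajectory such a phantom produces:
# a translation with a superimposed rotation, both locked to one motor period.
import numpy as np

period_s = 4.0                 # one motor revolution (illustrative)
transl_amplitude_mm = 20.0     # illustrative translation amplitude
rot_amplitude_deg = 15.0       # illustrative rotation amplitude

t = np.linspace(0.0, 2 * period_s, 200)
translation_mm = transl_amplitude_mm * np.sin(2 * np.pi * t / period_s)
rotation_deg = rot_amplitude_deg * np.sin(2 * np.pi * t / period_s)

# One trigger marker per period start (cf. the fiberoptic control unit).
trigger_times = np.arange(0.0, t[-1], period_s)
print(translation_mm[:5], rotation_deg[:5], trigger_times)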
Abstract:
Background: Medical and pharmacological direct costs of cigarette smoking cessation programmes are not covered by health insurance in several countries despite documented cost-effectiveness. Design: Prospective cost identification study of a 9-week programme in Switzerland. Methods: A total of 481 smokers were followed up for 9 weeks. Socio-demographic characteristics, number of outpatient visits, dosage and frequency of use of nicotine replacement therapy (NRT), as well as the date of relapse were prospectively collected. Individual cost of care until relapse or programme end, as well as cost per week of follow-up, were computed. Comparisons were carried out between the groups with or without relapse at the end of the programme. Results: Of the 209 men and 272 women included, 347 patients (72%) finished the programme. Among them, 240 patients (70%) succeeded in quitting and 107 patients (30%) relapsed. Compared with the group relapsing by the end of the programme, the group succeeding in quitting was more often living in a couple (68% vs. 55%, p = 0.029). Their mean weekly costs of visits were higher (CHF 81.2 ± 6.1 vs. 78.4 ± 7.6, p = 0.001), while their mean weekly costs for NRT were similar (CHF 24.2 ± 12.6 vs. 25.4 ± 15.9, p = 0.711). Mean total costs per week were similar (CHF 105.4 ± 15.4 vs. 103.8 ± 19.4, p = 0.252). More intensive NRT at week 4 increased the probability of not relapsing at the end of the programme. Conclusions: Over 9 weeks, the medical and pharmacological costs of stopping smoking are low. Good medical and social support, as well as adequate NRT, seem to play a role in successful quitting.
Abstract:
We propose a multivariate approach to the study of geographic species distribution which does not require absence data. Building on Hutchinson's concept of the ecological niche, this factor analysis compares, in the multidimensional space of ecological variables, the distribution of the localities where the focal species was observed to a reference set describing the whole study area. The first factor extracted maximizes the marginality of the focal species, defined as the ecological distance between the species optimum and the mean habitat within the reference area. The other factors maximize the specialization of this focal species, defined as the ratio of the ecological variance in mean habitat to that observed for the focal species. Eigenvectors and eigenvalues are readily interpreted and can be used to build habitat-suitability maps. This approach is recommended in situations where absence data are not available (many data banks), unreliable (most cryptic or rare species), or meaningless (invaders). We provide an illustration and validation of the method for the alpine ibex, a species reintroduced in Switzerland which presumably has not yet recolonized its entire range.
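To make the two quantities concrete, a hedged Python sketch computing, per ecological variable, a marginality-like and a specialization-like statistic from synthetic presence and reference data; this mirrors the definitions above only loosely and is not the paper's factor-extraction procedure.

# Hedged sketch (synthetic data): marginality as the distance between the species'
# mean and the global mean in units of global spread, and specialization as the
# ratio of global to species variance, computed per variable.
import numpy as np

rng = np.random.default_rng(0)
global_env = rng.normal(loc=[1000.0, 10.0], scale=[300.0, 5.0], size=(5000, 2))  # study area
species_env = rng.normal(loc=[1600.0, 8.0], scale=[150.0, 4.0], size=(300, 2))   # presence localities

marginality = np.abs(species_env.mean(0) - global_env.mean(0)) / global_env.std(0)
specialization = global_env.var(0) / species_env.var(0)

print("marginality per variable:   ", np.round(marginality, 2))
print("specialization per variable:", np.round(specialization, 2))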
Abstract:
The implementation of an enterprise resource planning (ERP) system and the changes it brings to product cost accounting pose challenges for a company. The same challenges have been observed in a company operating in the metal industry while implementing the SAP R/3 ERP system and its product costing functionality. The SAP R/3 product costing logic requires information from outside the system, and neglecting this directly affects calculation accuracy. This thesis develops both a standardized process and a calculation system with which the required activity costs for the various load points of a steel service centre, as well as the cost roll-up values, can be calculated. The calculated values form the required elements of the SAP R/3 product costing master data. The objective is to promote the formation of transparent cost information. The thesis is based on the so-called waterfall model (SDLC). First, the boundary conditions of the environment in which product costing is carried out are identified. These impose inflexible components on the calculation system being developed. Flexible components, in turn, give the calculation system freedom. By combining the inflexible and flexible components, a system is achieved that can compensate for the deficiencies of SAP R/3 product costing.
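As a hypothetical illustration of the kind of values such a calculation system produces for SAP R/3 product costing master data (activity rates per load point and an overhead surcharge used as a cost roll-up value), a short Python sketch; all names and figures are placeholders, not the thesis' data.

# Hypothetical sketch: activity rate = period cost pool / planned activity quantity,
# and an overhead surcharge percentage over a direct-cost base. Illustrative only.
load_points = {
    # load point: (period cost pool in EUR, planned machine hours)
    "slitting_line": (480_000.0, 6_000.0),
    "cut_to_length": (360_000.0, 4_500.0),
}

overhead_costs = 250_000.0        # indirect costs to be rolled up (illustrative)
direct_cost_base = 2_000_000.0    # allocation base, e.g. total direct costs (illustrative)

activity_rates = {lp: cost / hours for lp, (cost, hours) in load_points.items()}
surcharge_pct = 100.0 * overhead_costs / direct_cost_base

for lp, rate in activity_rates.items():
    print(f"{lp}: {rate:.2f} EUR per machine hour")
print(f"overhead surcharge: {surcharge_pct:.1f} %")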
Abstract:
1 Summary This dissertation deals with two major aspects of corporate governance that grew in importance during the last years: the internal audit function and financial accounting education. In three essays, I contribute to research on these topics, which are embedded in the broader corporate governance literature. The first two essays consist of experimental investigations of internal auditors' judgments. They deal with two research issues for which accounting research lacks evidence: the effectiveness of internal controls and the potentially conflicting role of the internal audit function between management and the audit committee. The findings of the first two essays contribute to the literature on internal auditors' judgment and the role of the internal audit function as a major cornerstone of corporate governance. The third essay theoretically examines a broader issue but also relates to the overall research question of this dissertation: What contributes to effective corporate governance? This last essay takes the perspective that the root of quality corporate governance is appropriate financial accounting education. I develop a public interest approach to accounting education that contributes to the literature on adequate accounting education with respect to corporate governance and accounting harmonization. The increasing importance of both the internal audit function and accounting education for corporate governance can be explained by the same recent fundamental changes that still affect accounting research and practice. First, the Sarbanes-Oxley Act of 2002 (SOX, 2002) and the 8th EU Directive (EU, 2006) have led to a bigger role for the internal audit function in corporate governance. Their implications regarding the implementation of audit committees and their oversight over internal controls are extensive. As a consequence, the internal audit function has become increasingly important for corporate governance and serves a new master (i.e. the audit committee) within the company in addition to management. Second, SOX (2002) and the 8th EU Directive introduced additional internal control mechanisms that are expected to contribute to the reliability of financial information. As a consequence, the internal audit function is expected to contribute to a greater extent to the reliability of financial statements. Therefore, effective internal control mechanisms that strengthen objective judgments and independence become important. This is especially true when external auditors rely on the work of internal auditors in the context of the International Standard on Auditing (ISA) 610 and the equivalent US Statement on Auditing Standards (SAS) 65 (see IFAC, 2009 and AICPA, 1990). Third, the harmonization of international reporting standards is increasingly promoted by means of a principles-based approach. It has been the leading approach since an SEC study (2003), required by SOX (2002) in section 108(d), came out in its favor. As a result, the Financial Accounting Standards Board (FASB) and the International Accounting Standards Board (IASB) have committed themselves to the development of compatible accounting standards based on a principles-based approach. Moreover, since the Norwalk Agreement of 2002, the two standard setters have developed exposure drafts for a common conceptual framework that will be the basis for accounting harmonization. The new framework will favor fair value measurement and accounting for real-world economic phenomena.
These changes in terms of standard setting lead to a trend towards more professional judgment in the accounting process. They affect internal and external auditors, accountants, and managers in general. As a consequence, a new competency set for preparers and users of financial statements is required. The basis for this new competency set is adequate accounting education (Schipper, 2003). These three issues which affect corporate governance are the initial point of this dissertation and constitute its motivation. Two broad questions motivated a scientific examination in three essays: 1) What are the major aspects to be examined regarding the new role of the internal audit function? 2) How should major changes in standard setting affect financial accounting education? The first question became apparent due to two published literature reviews by Gramling et al. (2004) and Cohen, Krishnamoorthy & Wright (2004). These studies raise various questions for future research that are still relevant and which motivate the first two essays of my dissertation. In the first essay, I focus on the role of the internal audit function as one cornerstone of corporate governance and its potentially conflicting role of serving both management and the audit committee (IIA, 2003). In an experimental study, I provide evidence on the challenges for internal auditors in their role as servant of two masters - the audit committee and management - and how this influences internal auditors' judgment (Gramling et al. 2004; Cohen, Krishnamoorthy & Wright, 2004). I ask whether there is an expectation gap between what internal auditors should provide for corporate governance in theory and what internal auditors are able to provide in practice. In particular, I focus on the effect of serving two masters on the internal auditor's independence. I argue that independence is hardly achievable if the internal audit function serves two masters with conflicting priorities. The second essay provides evidence on the effectiveness of accountability as an internal control mechanism. In general, internal control mechanisms based on accountability were enforced by SOX (2002) and the 8th EU Directive. Subsequently, many companies introduced sub-certification processes that should contribute to an objective judgment process. Thus, these mechanisms are important to strengthen the reliability of financial statements. Based on the need for evidence on the effectiveness of internal control mechanisms (Brennan & Solomon, 2008; Gramling et al. 2004; Cohen, Krishnamoorthy & Wright, 2004; Solomon & Trotman, 2003), I designed an experiment to examine the joint effect of accountability and obedience pressure in an internal audit setting. I argue that obedience pressure can potentially have a negative influence on accountants' objectivity (e.g. DeZoort & Lord, 1997), whereas accountability can mitigate this negative effect. My second main research question - How should major changes in standard setting affect financial accounting education? - is investigated in the third essay. It is motivated by the observation during my PhD that many conferences deal with the topic of accounting education but very little is published about what needs to be done. Moreover, the findings in the first two essays of this thesis and their literature review suggest that financial accounting education can contribute significantly to quality corporate governance, as argued elsewhere (Schipper, 2003; Boyce, 2004; Ghoshal, 2005).
In the third essay of this thesis, I therefore focus on approaches to financial accounting education that account for the changes in standard setting and also contribute to corporate governance and accounting harmonization. I argue that the competency set that is required in practice changes due to major changes in standard setting. As the major contribution of the third article, I develop a public interest approach for financial accounting education. The major findings of this dissertation can be summarized as follows. The first essay provides evidence on an important research question raised by Gramling et al. (2004, p. 240): "If the audit committee and management have different visions for the corporate governance role of the IAF, which vision will dominate?" According to the results of the first essay, internal auditors do follow the priorities of either management or the audit committee based on the guidance provided by the Chief Audit Executive. The study's results question whether the independence of the internal audit function is actually achievable. My findings contribute to research on internal auditors' judgment and the internal audit function's independence in the broader frame of corporate governance. The results are also important for practice because independence is a major justification for a positive contribution of the internal audit function to corporate governance. The major findings of the second essay indicate that the duty to sign work results - a means of holding people accountable - mitigates the negative effect of obedience pressure on reliability. Hence, I found evidence that control mechanisms relying on certifications may enhance the reliability of financial information. These findings contribute to the literature on the effectiveness of internal control mechanisms. They are also important in the light of the sub-certification processes that resulted from the Sarbanes-Oxley Act and the 8th EU Directive. The third essay contributes to the literature by developing a measurement framework that accounts for the consequences of major trends in standard setting. Moreover, it shows how these trends affect the required competency set of people dealing with accounting issues. Based on this work, my main contribution is the development of a public interest approach for the design of adequate financial accounting curricula. 2 Serving two masters: Experimental evidence on the independence of internal auditors Abstract Twenty-nine internal auditors participated in a study that examines the independence of internal auditors in their potentially competing roles of serving two masters: the audit committee and management. Our main hypothesis suggests that internal auditors' independence is not achievable in an institutional setting in which internal auditors are accountable to two different parties with potentially differing priorities. We test our hypothesis in an experiment in which the treatment consisted of two different instructions of the Chief Audit Executive: one stressing the priority of management (cost reduction) and one stressing the priority of the audit committee (effectiveness). Internal auditors had to evaluate internal controls and their inherent costs for different processes which varied in their degree of task complexity. Our main results indicate that internal auditors' evaluation of the processes is significantly different when task complexity is high.
Our findings suggest that internal auditors do follow the priorities of either management or the audit committee depending on the instructions of a superior internal auditor. The study's results question whether the independence of the internal audit function is actually achievable. With our findings, we contribute to research on internal auditors' judgment and the internal audit function's independence in the frame of corporate governance.
Abstract:
Percutaneous cricothyroidotomy may be a lifesaving procedure for airway obstruction that cannot be relieved by endotracheal intubation, and it can be performed with specially designed instruments. A new device, the "Quicktrach", was evaluated by an anatomical preparation, flow and resistance measurements, and puncture of the cricothyroid membrane in 55 corpses. The size of the parts of the instrument (needle, plastic cannula, depth gauge) in relation to the size of the larynx is adequate, so there is little likelihood of perforation of the posterior wall of the larynx. The resistance of the plastic cannula is sufficiently low to allow adequate ventilation. The time until the cannula is properly positioned in the trachea is significantly shorter when an incision is made prior to the puncture (83 +/- 88 seconds without incision versus 35 +/- 41 seconds with incision; mean +/- SD). The "Quicktrach" is easy to apply, even by inexperienced persons. The incidence of damage to the larynx (lesions including fractures of the thyroid, cricoid and first tracheal cartilage in 18%; soft tissue injury in 9%) is relatively high; however, considering the life-saving character of the procedure, these numbers appear acceptable. Technical problems which occur with the use of the device are discussed and suggestions for improvement are made.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
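A hedged sketch of the error-model idea in Python: on a learning set run with both solvers, the proxy and exact curves are reduced to a few component scores and a regression maps proxy scores to exact scores, so that new realizations only require the proxy run. Ordinary PCA on discretized curves stands in for FPCA, and the data are synthetic; this is not the paper's implementation.

# Hedged sketch: learn a mapping from proxy-curve scores to exact-curve scores on a
# learning set, then predict the exact response of new realizations from the proxy only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)            # time axis of the response curves
n_learn, n_new = 40, 200

def proxy_curve(a, b):  return a * np.exp(-b * t)
def exact_curve(a, b):  return a * np.exp(-b * t) * (1.0 + 0.15 * np.sin(6.0 * t))

params = rng.uniform([0.5, 1.0], [2.0, 5.0], size=(n_learn + n_new, 2))
proxy = np.array([proxy_curve(a, b) for a, b in params])
exact = np.array([exact_curve(a, b) for a, b in params])

pca_p, pca_e = PCA(n_components=3), PCA(n_components=3)
zp = pca_p.fit_transform(proxy[:n_learn])  # proxy scores (learning set)
ze = pca_e.fit_transform(exact[:n_learn])  # exact scores (learning set)
reg = LinearRegression().fit(zp, ze)

# Predict the exact response of unseen realizations from their proxy response alone.
zp_new = pca_p.transform(proxy[n_learn:])
exact_pred = pca_e.inverse_transform(reg.predict(zp_new))
print("mean absolute error:", np.abs(exact_pred - exact[n_learn:]).mean())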
Abstract:
Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Many problems then arise, ranging from the prospection of new resources to the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterization of the subsurface properties. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. We then encounter the main limitation of these approaches, which is the computational cost of simulating the complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these exact responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to construct an error model, which then serves to correct the remaining approximate responses and to predict the response of the complex model. This method maximizes the use of the available information without a perceptible increase in computation time. Uncertainty propagation then becomes more accurate and more robust. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, it is necessary to reduce its dimensionality. In this perspective, the innovation of the presented work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also makes it possible to diagnose the quality of the error model in this functional space. The proposed methodology is applied to a non-aqueous phase liquid pollution problem, and the results show that the error model allows a strong reduction of the computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the exact response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the most commonly used algorithms to generate geostatistical realizations in agreement with the observations.
However, these methods suffer from a very low acceptance rate for high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, "two-stage MCMC", has been introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of each proposed realization. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC. We demonstrate an increase of the acceptance rate by a factor of 1.5 to 3 in comparison with a classical MCMC implementation. One question remains open: how to choose the size of the training set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saltwater intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations to optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
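A hedged toy sketch of the two-stage MCMC idea referred to above, in Python: a cheap proxy posterior screens each proposal, and the second-stage ratio corrects for the proxy so the exact posterior remains the stationary distribution. The densities are toy stand-ins, not the thesis' flow models.

# Hedged toy sketch of two-stage (delayed-acceptance) MCMC. The "exact" posterior
# here is a cheap function standing in for an expensive flow run.
import numpy as np

rng = np.random.default_rng(2)

def log_post_exact(x):  return -0.5 * (x - 1.0) ** 2          # expensive model (toy stand-in)
def log_post_proxy(x):  return -0.5 * (x - 1.2) ** 2 / 1.1    # cheap, slightly biased proxy

x, lp_exact_x, lp_proxy_x = 0.0, log_post_exact(0.0), log_post_proxy(0.0)
samples, exact_calls = [], 0
for _ in range(5000):
    y = x + rng.normal(scale=0.8)                              # random-walk proposal
    lp_proxy_y = log_post_proxy(y)
    if np.log(rng.uniform()) < lp_proxy_y - lp_proxy_x:        # stage 1: proxy screen
        lp_exact_y = log_post_exact(y); exact_calls += 1
        ratio = (lp_exact_y - lp_exact_x) + (lp_proxy_x - lp_proxy_y)
        if np.log(rng.uniform()) < ratio:                      # stage 2: exact correction
            x, lp_exact_x, lp_proxy_x = y, lp_exact_y, lp_proxy_y
    samples.append(x)

print("posterior mean:", np.mean(samples), "| exact-model calls:", exact_calls)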
Abstract:
The economic competitiveness of various power plant alternatives is compared. The comparison comprises only electricity-producing power plants. Combined heat and power (CHP) production will cover part of the future power deficit in Finland, but condensing power plants for base load production will also be needed. The following types of power plants are studied: nuclear power plant, combined cycle gas turbine plant, coal-fired condensing power plant, peat-fired condensing power plant, wood-fired condensing power plant and wind power plant. The calculations are carried out by using the annuity method with a real interest rate of 5% per annum and with a fixed price level as of January 2008. With an annual peak load utilization time of 8000 hours (corresponding to a load factor of 91.3%), the production costs would be 35.0 €/MWh for nuclear electricity, 59.2 €/MWh for gas-based electricity and 64.4 €/MWh for coal-based electricity, when using a price of 23 €/ton CO2 for carbon dioxide emission trading. Without emission trading, the production cost of gas electricity is 51.2 €/MWh and that of coal electricity 45.7 €/MWh, while nuclear remains the same (35.0 €/MWh). In order to study the impact of changes in the input data, a sensitivity analysis has been carried out. It reveals that the advantage of nuclear power is quite clear. For example, nuclear electricity is rather insensitive to changes in the nuclear fuel price, whereas for the natural gas alternative the rising trend of the gas price poses the greatest risk. Furthermore, an increase of the emission trading price improves the competitiveness of the nuclear alternative. The competitiveness and payback of the nuclear power investment are also studied in their own right by using various electricity market prices to determine the revenues generated by the investment. The profitability of the investment is excellent if the market price of electricity is 50 €/MWh or more.
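For illustration, a small Python sketch of the annuity method used in the comparison: the investment is converted into an equal annual payment at a 5% real interest rate and spread over the output of 8000 full-load hours, then fuel and CO2 costs are added. The plant input values in the example call are placeholders, not the study's data.

# Hedged illustration of the annuity method; all plant input values are placeholders.
def production_cost_eur_per_mwh(overnight_cost_eur_per_kw, lifetime_years,
                                fixed_om_eur_per_kw_year, fuel_eur_per_mwh,
                                co2_t_per_mwh, co2_price_eur_per_t,
                                real_interest=0.05, full_load_hours=8000.0):
    r, n = real_interest, lifetime_years
    annuity_factor = r * (1 + r) ** n / ((1 + r) ** n - 1)   # annual payment per unit of capital
    mwh_per_kw = full_load_hours / 1000.0                    # annual output per kW of capacity
    capital = overnight_cost_eur_per_kw * annuity_factor / mwh_per_kw
    fixed_om = fixed_om_eur_per_kw_year / mwh_per_kw
    return capital + fixed_om + fuel_eur_per_mwh + co2_t_per_mwh * co2_price_eur_per_t

# Illustrative call with placeholder plant data:
print(round(production_cost_eur_per_mwh(2500.0, 40, 50.0, 6.0, 0.0, 23.0), 1), "€/MWh")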
Abstract:
Energy consumption and energy efficiency have become an issue. Energy consumption is rising all over the world, and because of that, and because of climate change, energy is becoming more and more expensive. Buildings are major consumers of energy, and inside buildings the major consumers are heating, ventilation and air-conditioning (HVAC) systems. They usually run at constant speed without efficient control. In most cases HVAC equipment is also oversized: traditionally, heating, ventilation and air-conditioning systems have been sized to meet conditions that rarely occur. The theory part of this thesis presents the basics of life cycle costs and calculations for the whole life cycle of a system. It also presents HVAC systems, equipment, system controls and ways to save energy in these systems. The empirical part of this thesis presents life cycle cost calculations for HVAC systems. With these calculations it is possible to compute the costs over the whole life cycle for the chosen variables. Life cycle costs make it possible to compare which variable causes most of the costs from a whole-life point of view. Life cycle costs were studied through two real-life cases focused on two different kinds of HVAC systems. In both cases the renovations had already been made, so that the comparison between the old system and the new, now existing system would be easier. The study indicates that energy can be saved in HVAC systems by using variable speed drives as a control method.
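As a hedged illustration of the kind of life cycle cost comparison described, a short Python sketch: investment plus the present value of annual energy costs over the system lifetime, for a constant-speed fan versus one retrofitted with a variable speed drive. All input values are illustrative, not the thesis' case data.

# Hedged sketch: life cycle cost = investment + present value of annual energy costs.
# All figures below are illustrative placeholders.
def life_cycle_cost(investment_eur, annual_energy_kwh, energy_price_eur_per_kwh,
                    lifetime_years=15, real_interest=0.05):
    # Present-value factor for a constant annual cost over the lifetime.
    pv_factor = (1 - (1 + real_interest) ** -lifetime_years) / real_interest
    return investment_eur + annual_energy_kwh * energy_price_eur_per_kwh * pv_factor

constant_speed = life_cycle_cost(investment_eur=5_000.0,  annual_energy_kwh=60_000.0,
                                 energy_price_eur_per_kwh=0.12)
with_vsd       = life_cycle_cost(investment_eur=12_000.0, annual_energy_kwh=35_000.0,
                                 energy_price_eur_per_kwh=0.12)
print(f"constant speed: {constant_speed:,.0f} EUR  |  with VSD: {with_vsd:,.0f} EUR")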