948 results for Optimal Stochastic Control
Abstract:
The objectives of this study were to evaluate baby corn yield, green corn yield, and grain yield in corn cultivar BM 3061, with weed control achieved via a combination of hoeing and intercropping with gliricidia, and to determine how sample size influences the accuracy of weed growth evaluation. A randomized block design with ten replicates was used. The cultivar was submitted to the following treatments: A = hoeings at 20 and 40 days after corn sowing (DACS), B = hoeing at 20 DACS + gliricidia sowing after hoeing, C = gliricidia sowing together with corn sowing + hoeing at 40 DACS, D = gliricidia sowing together with corn sowing, and E = no hoeing. Gliricidia was sown at a density of 30 viable seeds m⁻². After harvesting the mature ears, the area of each plot was divided into eight sampling units measuring 1.2 m² each to evaluate weed growth (above-ground dry biomass). Treatment A provided the highest baby corn, green corn, and grain yields. Treatment B did not differ from treatment A with respect to the yield values for the three products, and was equivalent to treatment C for green corn yield, but was superior to C with regard to baby corn weight and grain yield. Treatments D and E provided similar yields and were inferior to the other treatments. Treatment B is therefore a promising option. The relation between the coefficient of experimental variation (CV) and sample size (S) for evaluating growth of the above-ground part of the weeds was given by the equation CV = 37.57 S^(-0.15), i.e., CV decreased as S increased. The optimal sample size indicated by this equation was 4.3 m².
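The reported power-law relation can be checked directly; a minimal Python sketch, with the coefficient and exponent taken from the abstract (the function name is ours):

```python
# CV = 37.57 * S^(-0.15): coefficient of experimental variation (%) as a
# function of sample size S in m^2, as reported in the abstract.
def cv(sample_size_m2: float) -> float:
    return 37.57 * sample_size_m2 ** -0.15

# CV shrinks as the sampling unit grows:
for s in (1.2, 4.3, 9.6):
    print(f"S = {s} m^2 -> CV = {cv(s):.1f}%")
```

The decreasing curve flattens quickly, which is why the study settles on a finite optimal sampling unit rather than "as large as possible".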
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task whenever mathematical modeling is applied, in application areas as diverse as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to the computation of the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form; hence, approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods that approximate the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo sampling. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem involves inferring the model parameters from measurements. For high-dimensional complex models, parameter estimation can be carried out by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, in which case the covariance matrix must be well tuned; adaptive MCMC methods can be used for this. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
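To make the filtering machinery concrete, here is a minimal scalar Kalman filter with the familiar predict/update steps; the model parameters (`a`, `h`, `q`, `r`) and function name are illustrative defaults, not values or code from the thesis:

```python
# Scalar linear-Gaussian state space model:
#   x_k = a * x_{k-1} + w_k,  w_k ~ N(0, q)   (state transition)
#   y_k = h * x_k + v_k,      v_k ~ N(0, r)   (measurement)
def kalman_filter(ys, a=1.0, h=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0):
    m, p = m0, p0
    means = []
    for y in ys:
        # predict step: propagate mean and variance through the dynamics
        m_pred = a * m
        p_pred = a * p * a + q
        # update step: fold in the new measurement
        s = h * p_pred * h + r      # innovation variance
        k = p_pred * h / s          # Kalman gain
        m = m_pred + k * (y - h * m_pred)
        p = (1.0 - k * h) * p_pred
        means.append(m)
    return means

# Feeding constant observations drives the estimate toward that value:
estimates = kalman_filter([1.0] * 50)
```

The extended and Gauss–Hermite filters mentioned above replace the exact linear predict/update with linearized or quadrature-based approximations of the same two steps.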
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated, and initially evaluated. The differential evolution classifier is a nearest-prototype-vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values of all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters associated with the selected distance measure. Specifically, a pool of alternative distance measures is first created, and the differential evolution algorithm is then applied to select the optimal distance measure that yields the highest classification accuracy on the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest-prototype-vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance measures, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors in obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that selects the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their optimal parameters have been found for a particular data set, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
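The global optimizer at the heart of the classifier can be sketched as the standard DE/rand/1/bin scheme; all names and control-parameter defaults below are illustrative, not the thesis's settings:

```python
import random

# Standard DE/rand/1/bin: mutate three random distinct members, crossover
# with the target vector, keep the trial if it is no worse.
def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)       # at least one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            trial_score = objective(trial)
            if trial_score <= scores[i]:      # greedy one-to-one selection
                pop[i], scores[i] = trial, trial_score
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]
```

In the classifier described above, the decision vector would encode the prototype vectors, the distance-measure choice, and its control parameters, and the objective would be (negated) training accuracy; here a simple sphere function suffices to exercise the loop.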
Abstract:
Investment has always been considered an essential backbone, a so-called 'locomotive', of competitive economies. However, in various countries, the state has been placed under tight budget constraints for investments in capital-intensive projects. In response to this situation, cooperation between the public and private sectors has grown, based on the public-private mechanism. Promoting favorable arrangements for collaboration between the public and private sectors in the provision of policies, services, and infrastructure in Russia can help address problems of dry port development that neither municipalities nor the private sector can solve alone. The stimulation of public-private collaboration is especially significant under exposure to externalities that affect the magnitude of the risks during all phases of project realization. In these circumstances, project risk is increasingly becoming a subject of joint research and risk management practice, which is viewed as a key approach that aims to take active action on existing global and specific factors of uncertainty. Meanwhile, relatively little progress has been made on including resilience aspects in the planning process of dry port construction, which would instruct the capacity planner on how to mitigate disruptions that may lead to millions of dollars in losses due to deviation of future cash flows from the expected financial flows of the project. Current experience shows that the existing methodological base has been developed in a fragmentary way within the separate steps of supply chain risk management (SCRM) processes: the risk identification, risk evaluation, risk mitigation, and risk monitoring and control phases. The lack of a systematic approach hinders the solution of the risk management problem in dry port implementation.
Therefore, the management of various risks during the investment phases of dry port projects still presents a considerable challenge from both practical and theoretical points of view. In this regard, the present research is a logical continuation of fundamental research embodied in existing financial models and theories (e.g., the capital asset pricing model and real option theory), and it also complements portfolio theory. The goal of the current study is to design methods and models to facilitate dry port implementation through the mechanism of public-private partnership on the national market, which implies the necessity of mitigating, first and foremost, the shortage of investment and the consequences of risks. The research problem was formulated on the basis of the identified contradictions. They arose from the trade-off between the opportunities that investors can gain from the development of terminal business in Russia (i.e., dry port implementation) and the risks involved. As a rule, the higher the investment risk, the greater the expected return should be. However, investors differ in their tolerance for risk, which is why it is advisable to find an optimal investment. In the present study, the optimum relates to the search for the efficient portfolio that can satisfy the investor, depending on his or her degree of risk aversion. There are many theories and methods in finance concerning investment choices. Nevertheless, the appropriateness and effectiveness of particular methods should be considered with allowance for the specifics of the investment projects. For example, investments in dry ports imply not only a lump sum of financial inflows but also long payback periods.
As a result, the capital intensity and longevity of dry port construction make it necessary for investors to ensure the return on investment (profitability) along with the rapid return on investment (liquidity), without overlooking the fact that the stochastic nature of the project environment is hardly described by a formula-based approach. The current theoretical base for the economic appraisal of dry port projects more often perceives net present value (NPV) as a technique superior to other decision-making criteria. For example, portfolio theory, which considers the different risk preferences of investors and structures of utility, regards net present value as a better criterion of project appraisal than the discounted payback period (DPP); in business practice, however, the DPP is more popular. Because the NPV rests on assumptions of certainty about project life, it cannot alone be an accurate appraisal approach for deciding whether or not a project should be approved in an environment that is not free of uncertainties. In order to reflect the period of the project's useful life that is exposed to risks due to changes in political, operational, and financial factors, the second capital budgeting criterion, the discounted payback period, is profoundly important, particularly in the Russian environment. These statements represent contradictions that exist in the theory and practice of the applied science. Therefore, it is desirable to relax the assumptions of portfolio theory and regard the DPP as a no less relevant appraisal approach for the assessment of investment and as a risk measure. At the same time, the rationality of using both project performance criteria depends on the methods and models with which these appraisal approaches are calculated in feasibility studies.
Deterministic methods cannot ensure the required precision of the results, whereas stochastic models guarantee a sufficient level of accuracy and reliability of the obtained results, provided that the risks are properly identified, evaluated, and mitigated. Otherwise, the project performance indicators may not be confirmed during the project realization phase. For instance, economic and political instability can undo hard-earned gains, leading to the need to attract additional financing for the project. The sources of alternative investments, as well as supportive mitigation strategies, can be studied during the initial phases of project development. During this period, the effectiveness of investment undertakings can also be improved by including various investors, e.g., Russian Railways' enterprises and other private companies, in the dry port projects. However, the evaluation of the effectiveness of the participation of different investors in the project lacks methods and models that would permit a suitable feasibility study, foreseeing the quantitative characteristics of risks and the mitigation strategies that can match investors' risk tolerance. For this reason, the research proposes a combination of the Monte Carlo method, the discounted cash flow technique, real option theory, and portfolio theory via a system dynamics simulation approach. This methodology allows the risk management process of dry port development to cover all aspects of the risk identification, risk evaluation, risk mitigation, and risk monitoring and control phases. The designed system dynamics model can be recommended to decision-makers on dry port projects that are financed via public-private partnerships.
It permits investors to make a decision appraisal based on the random variables of net present value and discounted payback period, depending on different risk factors, e.g., revenue risks, land acquisition risks, traffic volume risks, construction hazards, and political risks. In this case, the statistical mean is used to express the expected values of the DPP and NPV; the standard deviation is proposed as a characteristic of risk, while the elasticity coefficient is applied for the rating of risks. Additionally, the risk of failure of project investments and the guaranteed recoupment of capital investment can be considered with the help of the model. On the whole, the application of these modern simulation methods creates preconditions for controlling the process of dry port development, i.e., making managerial changes and identifying the most stable parameters that contribute to the optimal alternative scenarios of project realization in an uncertain environment. The system dynamics model makes it possible to analyze the interactions in the complex mechanism of the risk management process of dry port development and to make proposals for improving the effectiveness of the investments via an estimation of different risk management strategies. For the comparison and ranking of these alternatives in their order of preference to the investor, the proposed indicators of investment efficiency, concerning the NPV, DPP, and coefficient of variation, can be used. Thus, rational investors, who are averse to taking increased risks unless they are compensated by a commensurate increase in the expected utility of a risky prospect of dry port development, can be guided by the deduced marginal utility of investments, which is computed on the basis of the results from the system dynamics model.
In conclusion, the outlined theoretical and practical implications for the management of risks, which are key characteristics of public-private partnerships, can help analysts and planning managers in budget decision-making, substantially alleviating the effects of various risks and avoiding unnecessary cost overruns in dry port projects.
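The proposed combination of Monte Carlo simulation with the NPV and DPP criteria can be illustrated in miniature; all cash-flow figures and distributions below are hypothetical stand-ins, not data from the study:

```python
import random
import statistics

# Toy Monte Carlo appraisal: simulate uncertain yearly cash flows, compute
# NPV and discounted payback period (DPP) per run, and summarize the mean
# (expected value) and standard deviation (risk characteristic), as the
# abstract proposes. All parameters are illustrative.
def simulate(n_runs=10_000, invest=100.0, years=10, rate=0.10,
             mean_cf=20.0, sd_cf=5.0, seed=1):
    rng = random.Random(seed)
    npvs, dpps = [], []
    for _ in range(n_runs):
        cum, dpp = -invest, None
        for t in range(1, years + 1):
            dcf = rng.gauss(mean_cf, sd_cf) / (1 + rate) ** t  # discounted CF
            cum += dcf
            if dpp is None and cum >= 0:
                dpp = t                 # first year cumulative DCF turns positive
        npvs.append(cum)
        dpps.append(dpp if dpp is not None else float("inf"))
    return statistics.mean(npvs), statistics.stdev(npvs), dpps

mean_npv, sd_npv, dpps = simulate(n_runs=2000)
```

A fuller model along the lines described would embed these draws in a system dynamics simulation and add correlated risk factors; the point here is only how the two criteria become random variables with a mean and a standard deviation.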
Abstract:
The aim of this thesis is to price options on equity index futures, with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset, and it captures both the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula and are accurate to four digits. The American option premiums show a similar level of accuracy compared to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for a deterministic, seasonally varying dividend yield. In pricing futures options, we find that what matters is the sum of the dividend yields over the life of the futures contract, not their distribution over time.
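For reference, one of the benchmark methods mentioned above, a Cox–Ross–Rubinstein binomial tree, can be sketched for an American put on a futures price (a martingale under the risk-neutral measure, so the up-probability comes from zero drift); this is the comparison baseline, not the thesis's dynamic programming model, and it ignores the dividend-yield seasonality the thesis handles:

```python
import math

# CRR binomial tree for an American put on a futures contract.
# Futures have zero risk-neutral drift: p*u + (1-p)*d = 1.
def american_put_on_futures(F0, K, r, sigma, T, steps=500):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (1.0 - d) / (u - d)           # martingale (zero-drift) probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the leaves
    values = [max(K - F0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    # backward induction with an early-exercise check at every node
    for n in range(steps - 1, -1, -1):
        for j in range(n + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - F0 * u**j * d**(n - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]
```

With many steps this converges to the same premiums the stochastic dynamic programming approach produces, which is exactly the comparison reported in the abstract.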
Abstract:
The curse of dimensionality is a major problem in the fields of machine learning, data mining, and knowledge discovery. Exhaustive search for the optimal subset of relevant features in a high-dimensional dataset is NP-hard. Sub-optimal population-based stochastic algorithms such as GP and GA are good choices for searching through large search spaces and are usually more feasible than exhaustive and deterministic search algorithms. On the other hand, population-based stochastic algorithms often suffer from premature convergence on mediocre sub-optimal solutions. The Age Layered Population Structure (ALPS) is a novel metaheuristic for overcoming the problem of premature convergence in evolutionary algorithms and for improving search in the fitness landscape. The ALPS paradigm uses an age measure to control breeding and competition between individuals in the population. This thesis uses a modification of the ALPS GP strategy called Feature Selection ALPS (FSALPS) for feature subset selection and classification in varied supervised learning tasks. FSALPS uses a novel frequency count system to rank features in the GP population based on evolved feature frequencies. The ranked features are translated into probabilities, which are used to control evolutionary processes such as terminal-symbol selection for the construction of GP trees and sub-trees. The FSALPS metaheuristic continuously refines the feature subset selection process while simultaneously evolving efficient classifiers through a non-converging evolutionary process that favors selection of features with high discrimination of class labels. We investigated and compared the performance of canonical GP, ALPS, and FSALPS on high-dimensional benchmark classification datasets, including a hyperspectral image. Using Tukey's HSD ANOVA test at a 95% confidence level, ALPS and FSALPS dominated canonical GP in evolving smaller but efficient trees with fewer bloated expressions.
FSALPS significantly outperformed canonical GP, ALPS, and some feature selection strategies reported in the related literature on dimensionality reduction.
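The frequency-to-probability idea behind the FSALPS terminal-selection bias can be sketched in a few lines; the feature names and counts below are purely illustrative:

```python
from collections import Counter

# Count how often each feature terminal appears across the evolved GP
# population, then normalize the counts into selection probabilities that
# can bias terminal-symbol choice when building new trees/sub-trees.
def feature_probabilities(population_terminals):
    counts = Counter(population_terminals)
    total = sum(counts.values())
    return {feature: c / total for feature, c in counts.items()}

# Hypothetical terminals gathered from a population of GP trees:
terminals = ["x1", "x3", "x1", "x7", "x1", "x3"]
probs = feature_probabilities(terminals)  # x1 appeared 3 of 6 times -> 0.5
```

Sampling terminals from `probs` (rather than uniformly) is what lets frequently useful features dominate later generations while rarely used ones fade out.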
Abstract:
This thesis investigated the modulation of dynamic contractile function and the energetics of work by posttetanic potentiation (PTP). Mechanical experiments were conducted in vitro using software-controlled protocols to stimulate muscles and determine contractile function during ramp shortening, and muscles were frozen during parallel incubations for biochemical analysis. The central feature of this research was the comparison of fast hindlimb muscles from wildtype mice and from skeletal myosin light chain kinase knockout (skMLCK-/-) mice, which do not express the primary mechanism for PTP: myosin regulatory light chain (RLC) phosphorylation. In contrast to smooth and cardiac muscle, where RLC phosphorylation is indispensable, its precise physiological role in skeletal muscle is unclear. It was initially determined that tetanic potentiation was dependent on shortening speed, and this sensitivity of the PTP mechanism to muscle shortening extended the stimulation frequency domain over which PTP was manifest. Thus, the physiological utility of RLC phosphorylation in augmenting contractile function in vivo may be more extensive than previously considered. Subsequent experiments studied the contraction-type dependence of PTP and demonstrated that the enhancement of contractile function was dependent on force level. Surprisingly, in the absence of RLC phosphorylation, skMLCK-/- muscles exhibited significant concentric PTP; consequently, up to ~50% of the dynamic PTP response in wildtype muscle may be attributed to an alternate mechanism. When the interaction of PTP and the catchlike property (CLP) was examined, we determined that, unlike the acute augmentation of peak force by the CLP, RLC phosphorylation produced a longer-lasting enhancement of force and work in the potentiated state. Nevertheless, despite the apparent interference between these mechanisms, both offer physiological utility and may be complementary in achieving optimal contractile function in vivo.
Finally, when the energetic implications of PTP were explored, we determined that during a brief period of repetitive concentric activation, total work performed was ~60% greater in wildtype than in skMLCK-/- muscles, but there was no genotype difference in high-energy phosphate consumption (HEPC) or economy (i.e., HEPC : work). In summary, this thesis provides novel insight into the modulatory effects of PTP and RLC phosphorylation and, through the observation of alternative mechanisms for PTP, further develops our understanding of the history dependence of fast skeletal muscle function.
Abstract:
Background. Case-control studies are very frequently used by epidemiologists to assess the impact of certain exposures on a particular disease. These exposures may be represented by several time-dependent variables, and new methods are needed to estimate their effects accurately. Indeed, logistic regression, the conventional method for analyzing case-control data, does not directly account for changes in covariate values over time. By contrast, survival analysis methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individual exposure histories. However, this requires careful handling of the risk sets because of the oversampling of cases, relative to controls, in case-control studies. As shown in a previous simulation study, the optimal definition of risk sets for the analysis of case-control data remains to be elucidated, and to be studied in the case of time-dependent variables. Objective: The general objective is to propose and study new versions of the Cox model for estimating the impact of time-varying exposures in case-control studies, and to apply them to real case-control data on lung cancer and smoking. Methods. I identified new, potentially optimal risk-set definitions (the Weighted Cox model and the Simple weighted Cox model), in which different weights were assigned to cases and controls in order to reflect the proportions of cases and non-cases in the source population. The properties of the exposure-effect estimators were studied by simulation.
Different aspects of exposure were generated (intensity, duration, cumulative exposure). The generated case-control data were then analyzed with different versions of the Cox model, including the existing and new risk-set definitions, as well as with conventional logistic regression for comparison. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of different smoking variables obtained with the different methods were compared with one another and with the simulation results. Results. The simulation results show that the estimates from the proposed new weighted Cox models, especially those of the Weighted Cox model, are much less biased than the estimates from existing Cox models that simply include or exclude the future cases from each risk set. Moreover, the estimates of the Weighted Cox model were slightly, but systematically, less biased than those of logistic regression. The application to the real data shows larger differences between the estimates of logistic regression and of the weighted Cox models for some time-dependent smoking variables. Conclusions. The results suggest that the proposed new Weighted Cox model could be an interesting alternative to logistic regression for estimating the effects of time-dependent exposures in case-control studies.
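The weighting idea can be illustrated with a toy weighted Cox partial log-likelihood for a single covariate; the uniform weights and data below are a simplified stand-in, not the exact estimator or weighting scheme studied in the thesis:

```python
import math

# Weighted Cox partial log-likelihood (Breslow form, no ties) for one
# covariate: each subject enters the risk sets, and each event term, with a
# weight meant to reflect case/non-case proportions in the source population.
def weighted_cox_loglik(beta, times, events, x, weights):
    ll = 0.0
    for i, (t_i, d_i) in enumerate(zip(times, events)):
        if not d_i:
            continue                       # only events contribute a term
        # weighted risk set: everyone still at risk at the event time t_i
        denom = sum(w * math.exp(beta * xj)
                    for t, xj, w in zip(times, x, weights) if t >= t_i)
        ll += weights[i] * (beta * x[i] - math.log(denom))
    return ll
```

Maximizing this function over `beta` (e.g., by a grid or Newton step) gives the weighted estimate; setting all weights to 1 recovers the ordinary partial likelihood, which is the baseline the proposed models are compared against.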
Abstract:
Controlled choice over public schools is a common policy of school boards in the United States. It attempts to give parents choice while maintaining racial and ethnic balance at schools. This paper provides a foundation for controlled school choice programs. We develop a natural notion of fairness and show that assignments that are fair for same-type students and constrained non-wasteful always exist in controlled choice problems; a "controlled" version of the student-proposing deferred acceptance algorithm (CDAA) always finds such an assignment, which is also weakly Pareto-optimal. CDAA provides a practical solution for controlled school choice programs.
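For context, the plain student-proposing deferred acceptance loop that CDAA builds on can be sketched as follows; the controlled version adds type-specific balance constraints on top of this acceptance rule, and all identifiers below are illustrative:

```python
# Student-proposing deferred acceptance (Gale-Shapley) with school quotas.
# Each free student proposes to the next school on their list; a school
# tentatively holds the best-ranked proposers up to its quota and rejects
# the rest, who become free again.
def deferred_acceptance(student_prefs, school_prefs, quotas):
    rank = {s: {st: i for i, st in enumerate(pref)}
            for s, pref in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}
    assigned = {s: [] for s in school_prefs}
    free = list(student_prefs)
    while free:
        st = free.pop()
        prefs = student_prefs[st]
        if next_choice[st] >= len(prefs):
            continue                      # list exhausted: stays unassigned
        school = prefs[next_choice[st]]
        next_choice[st] += 1
        assigned[school].append(st)
        assigned[school].sort(key=lambda a: rank[school][a])
        if len(assigned[school]) > quotas[school]:
            free.append(assigned[school].pop())   # reject worst-ranked
    return {st: s for s, sts in assigned.items() for st in sts}
```

In the controlled variant described in the paper, a school's acceptance decision also respects type-specific floors and ceilings rather than ranking all proposers in a single list.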
Abstract:
Introduction: Global efforts to control tuberculosis are currently constrained by the growing prevalence of HIV/AIDS. Although outbreaks of multidrug-resistant tuberculosis (MDR-TB) are frequently reported among populations with AIDS, the link between HIV/AIDS and the development of drug resistance remains unclear. Objectives: This research aimed to: (1) develop a knowledge base on the factors associated with MDR-TB outbreaks among patients with HIV/AIDS; (2) use this framework to strengthen preliminary measures for better control of pulmonary tuberculosis in patients with HIV/AIDS; and (3) refine existing bacteriological techniques for Mycobacterium tuberculosis in order to improve the application of these measures. Methods: Four studies were carried out: (1) a longitudinal study to identify the factors associated with an MDR-TB outbreak among AIDS patients who received directly observed treatment, short-course (DOTS) for pulmonary tuberculosis in Lima, Peru, between 1999 and 2005; (2) a cross-sectional study to describe the different stages of the natural history of tuberculosis and the prevalence of, and factors associated with, mycobacteria found in the stools of AIDS patients; (3) a pilot project to develop screening strategies for pulmonary tuberculosis among hospitalized AIDS patients using the Microscopic Observation Drug Susceptibility (MODS) assay; and (4) a laboratory study to identify the best critical concentrations for detecting MDR strains of M. tuberculosis with the MODS assay. Results: Study 1 shows that an MDR-TB epidemic among AIDS patients who received DOTS for pulmonary tuberculosis was caused by superinfection with a clone of M. tuberculosis rather than by the development of secondary resistance. Although this clone was more common within the cohort of AIDS patients, there was no difference in the risk of superinfection between patients with and without AIDS. These results suggest that another factor, possibly associated with diarrhea, may contribute to the elevated prevalence of this clone in AIDS patients. Study 2 suggests that mycobacteria were found in the stools of most AIDS patients in the terminal phase of pulmonary tuberculosis; however, AIDS patients who had been hospitalized during the previous two years for another medical condition were less likely to have mycobacteria in their stools. Study 3 confirms that pulmonary tuberculosis was common among hospitalized AIDS patients but was incorrectly diagnosed using the currently recommended clinical criteria for tuberculosis; the MODS assay, by contrast, detected most of these cases. Moreover, MODS was equally effective when targeted at patients suspected of having tuberculosis because of their symptoms. Study 4 demonstrates the difficulty of detecting M. tuberculosis strains with low-level resistance to ethambutol and streptomycin when the MODS assay is used at the drug concentrations currently recommended for culture media. However, the diagnostic utility of MODS can be improved by modifying the critical concentrations and by using two plates rather than one in routine testing. Conclusion: Our studies highlight the need to improve the diagnosis and treatment of tuberculosis among AIDS patients, in particular those living in resource-limited regions.
Furthermore, our results underline the indirect effects that health care has on HIV-infected patients, and the effects it may have on the development of tuberculosis.
Resumo:
We consider diffusion processes defined by stochastic differential equations, and we then study first-passage problems for the discrete-time Markov chains corresponding to these diffusion processes. As is known in the literature, these chains converge in distribution to the solution of the stochastic differential equations considered. Our contribution consists in finding explicit formulas for the first-passage probability and the duration of the game for these discrete-time Markov chains. We also show that the results obtained converge, in the Euclidean metric (i.e., the Euclidean topology), to the corresponding quantities for the diffusion processes. Finally, we study an optimal control problem for discrete-time Markov chains. The objective is to find the value that minimizes the expected value of a certain cost function. Unlike the continuous case, there is no explicit formula for this optimal value in the discrete case. We have therefore studied in this thesis some particular cases for which we were able to find this optimal value.
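To illustrate the kind of convergence the abstract describes (this sketch is not from the thesis): a symmetric random walk is a discrete-time Markov chain converging in law to Brownian motion, and for Brownian motion on (a, b) started at x the probability of hitting b before a is (x − a)/(b − a). For the symmetric walk on a uniform grid the same formula holds exactly, so a Monte Carlo estimate should agree with it.

```python
import random

# Symmetric random walk on the integer sites 0..n (a grid over [a, b]).
# First-passage probability: chance of reaching site n before site 0.
# For this chain the exact answer is k0 / n, matching the diffusion
# formula (x - a) / (b - a) on the corresponding continuous interval.

def first_passage_prob(k0, n, n_paths, rng):
    """Fraction of walks from site k0 that hit site n before site 0."""
    hits = 0
    for _ in range(n_paths):
        k = k0
        while 0 < k < n:
            k += 1 if rng.random() < 0.5 else -1
        hits += (k == n)
    return hits / n_paths

rng = random.Random(0)
n, k0 = 20, 6                     # grid of 20 steps, start at x = 0.3
est = first_passage_prob(k0, n, 20_000, rng)
exact = k0 / n                    # = (x - a) / (b - a) for the diffusion
print(f"Monte Carlo estimate {est:.3f}  vs  exact {exact:.3f}")
```

The estimate's standard error here is about 0.003, so the simulated walk lands very close to the closed-form diffusion value, which is the discrete-to-continuous agreement the thesis makes precise.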
Resumo:
Energy balance (energy expenditure versus caloric intake) is central to the control of body mass. Physical activity can reduce appetite and caloric intake, a phenomenon known as the anorexigenic effect of physical activity. However, the orexigenic (appetite-stimulating) hormone decreases during exercise only to rebound rapidly after the effort. The aim of this thesis was to determine whether caloric intake is reduced when exercise immediately precedes the meal, compared with a condition in which there is a pause between exercise and the meal. Twelve non-obese boys (15-20 years old) took part in the study. Each participant was evaluated individually under the following two conditions, in random order: 1) Ex = 30 minutes of exercise (70% VO2max) followed immediately by an ad libitum buffet at noon; 2) Expause = 30 minutes of exercise (70% VO2max) followed by a 135-minute pause and then an ad libitum buffet at noon. Each visit was preceded by a standardized breakfast and completed with an ad libitum afternoon snack and an ad libitum buffet-style dinner. While the results showed that hunger was similar at all times, caloric intake at lunch was lower in the Ex condition than in the Expause condition (5,072 vs 5,718 kJ; p < 0.05). No significant difference was noted for the afternoon snack or for dinner. Interestingly, lipid intake at lunch was also lower: 1,604 kJ in the Ex condition versus 2,085 kJ in the Expause condition (p < 0.05). This study is the first to investigate the optimal timing of physical activity for reducing caloric intake, and it shows that being physically active just before a meal reduces caloric intake independently of appetite sensations.
The absence of compensation during the rest of the day further suggests that a negative energy balance, including a reduced lipid intake, can be achieved more easily by placing physical activity just before a meal.
Resumo:
Department of Statistics, Cochin University of Science and Technology
Resumo:
Optimal control theory is a powerful tool for solving control problems in quantum mechanics, ranging from the control of chemical reactions to the implementation of gates in a quantum computer. Gradient-based optimization methods are able to find high-fidelity controls, but require considerable numerical effort and often yield highly complex solutions. We propose here to employ a two-stage optimization scheme to significantly speed up convergence and achieve simpler controls. The control is initially parametrized using only a few free parameters, such that optimization in this pruned search space can be performed with a simplex method. The result, considered now simply as an arbitrary function on a time grid, is the starting point for further optimization with a gradient-based method that can quickly converge to high fidelities. We illustrate the success of this hybrid technique by optimizing a geometric phase gate for two superconducting transmon qubits coupled via a shared transmission line resonator, showing that a combination of Nelder-Mead simplex and Krotov's method yields considerably better results than either method alone.
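The two-stage structure described in the abstract can be sketched on a toy problem. In this hypothetical example (not the authors' quantum-gate setup), stage 1 optimizes a two-parameter pulse ansatz with a crude pattern search standing in for Nelder-Mead, and stage 2 takes the resulting discretized pulse and refines it point-by-point with plain gradient descent standing in for Krotov's method; the quadratic "infidelity" is purely illustrative.

```python
import numpy as np

# Toy two-stage pulse optimization.  Stage 1: few-parameter ansatz,
# derivative-free search.  Stage 2: gradient refinement of the full
# discretized pulse.  All names and the cost function are illustrative.

t = np.linspace(0.0, 1.0, 101)
u_target = 0.8 * np.sin(np.pi * t)        # the (unknown) optimal control

def cost(u):
    """Illustrative infidelity: mean squared deviation from the target."""
    return np.mean((u - u_target) ** 2)

def pulse(params):
    """Two-parameter ansatz a * sin(pi * b * t) for the pruned search."""
    a, b = params
    return a * np.sin(np.pi * b * t)

# --- Stage 1: derivative-free pattern search in the 2-parameter space.
p = np.array([0.5, 0.5])
step = 0.25
for _ in range(60):
    improved = False
    for i in range(2):
        for s in (+step, -step):
            q = p.copy()
            q[i] += s
            if cost(pulse(q)) < cost(pulse(p)):
                p, improved = q, True
    if not improved:
        step *= 0.5                        # shrink when no move helps

u = pulse(p)                               # starting point for stage 2

# --- Stage 2: gradient descent on the full time grid.  The gradient of
# the quadratic cost is proportional to (u - u_target); the 1/N factor
# is absorbed into the step size.
for _ in range(50):
    u -= 0.4 * 2.0 * (u - u_target)

print(f"stage-1 cost {cost(pulse(p)):.2e}, final cost {cost(u):.2e}")
```

The pattern of results mirrors the abstract's point: the pruned search alone gets close but is limited by the ansatz, while the gradient stage, seeded by it, drives the cost essentially to zero on the full grid.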