893 results for Two-stage MCMC method
Abstract:
Drug development is not only complex, but the returns on investment are not always those desired or anticipated. Many drugs still fail in Phase III despite the technological advances achieved in several aspects of drug development, which translates into a decreasing number of drugs reaching the market. The traditional drug development process must therefore be improved in order to make new products available to the patients who need them. The goal of this research was to explore and propose changes to the drug development process using the principles of advanced modelling and clinical trial simulation. In the first part of this research, new algorithms available in the ADAPT 5® software were compared with other existing algorithms to determine their strengths and weaknesses. The two new algorithms evaluated were the iterative two-stage (ITS) and the maximum likelihood expectation maximization (MLEM) methods. Our results showed that MLEM was superior to ITS. The MLEM method was comparable to the first-order conditional estimation (FOCE) algorithm available in the NONMEM® software, with fewer shrinkage problems for the variance estimates. These new algorithms were therefore used for the research presented in this thesis. During the drug development process, for noncompartmentally calculated pharmacokinetic parameters to be adequate, the terminal half-life must be well established.
Well-designed and well-analyzed pharmacokinetic studies are essential during drug development, especially for submissions of generic and supergeneric products (a formulation whose active ingredient is the same as that of the brand-name drug but whose drug-release profile differs from it), because they are often the only pivotal studies needed to decide whether a product can be marketed. The second part of the research therefore aimed to evaluate whether parameters calculated from a half-life obtained over a sampling duration deemed too short for a given individual could affect the conclusions of a bioequivalence study, and whether they should be removed from the statistical analyses. The results showed that parameters calculated from a half-life obtained over a sampling duration deemed too short negatively influenced the results when they were retained in the analysis of variance. The area-under-the-curve-to-infinity parameter for these subjects should therefore be removed from the statistical analysis, and a priori guidelines to that effect are needed. The pivotal pharmacokinetic studies required during drug development should follow this recommendation so that the right decisions are made about a product. This information was used in the clinical trial simulations carried out during the research presented in this thesis to ensure that the most probable conclusions were obtained. In the final part of this thesis, clinical trial simulations improved the clinical drug development process. The results of a pilot clinical study of a supergeneric under development appeared very encouraging.
However, some questions were raised about these results, and it had to be determined whether the test and reference products would be equivalent in the pivotal fasting and fed studies, after both single and repeated doses. Clinical trial simulations were undertaken to address some of the questions raised by the pilot study, and these simulations suggested that the new formulation would not meet the equivalence criteria in the pivotal studies. The simulations also helped determine which modifications to the new formulation were needed to improve the chances of meeting the equivalence criteria. This research provided solutions to improve different aspects of the drug development process. In particular, the clinical trial simulations reduced the number of studies needed to develop the supergeneric, the number of subjects needlessly exposed to the drug, and the development costs. Finally, they allowed us to establish new exclusion criteria for bioequivalence statistical analyses. The research presented in this thesis suggests improvements to the drug development process by evaluating new algorithms for compartmental analyses, establishing exclusion criteria for pharmacokinetic (PK) parameters in certain analyses, and demonstrating how clinical trial simulations are useful.
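The exclusion problem described above, a terminal half-life estimated from too short a sampling duration, can be illustrated with a small noncompartmental sketch. The function name and the 20% extrapolation threshold below are illustrative assumptions (the 20% cut-off is a common rule of thumb, not necessarily the criterion established in this thesis):

```python
import numpy as np

def auc_extrapolation_check(times, conc, n_terminal=3, max_extrap=0.20):
    """Flag a subject whose AUC(0-inf) relies too heavily on extrapolation.

    The 20% threshold is an illustrative rule of thumb, not the
    thesis-specific criterion.
    """
    times, conc = np.asarray(times, float), np.asarray(conc, float)
    # AUC to the last measured concentration (linear trapezoidal rule)
    auc_last = np.sum(np.diff(times) * (conc[1:] + conc[:-1]) / 2)
    # Terminal rate constant from a log-linear fit of the last points
    slope, _ = np.polyfit(times[-n_terminal:], np.log(conc[-n_terminal:]), 1)
    lambda_z = -slope
    auc_inf = auc_last + conc[-1] / lambda_z
    extrap = 1.0 - auc_last / auc_inf
    return auc_inf, extrap, extrap > max_extrap

# Sampling stops early relative to the half-life: most of AUC is extrapolated
t = np.array([0.5, 1, 2, 4, 6, 8])
c = 100 * np.exp(-0.05 * t)   # half-life ~ 13.9 h, sampled only to 8 h
auc_inf, frac, exclude = auc_extrapolation_check(t, c)
```

With such a short sampling window the extrapolated fraction dominates AUC(0-inf), so this subject's AUC-to-infinity would be flagged for exclusion from the statistical analysis.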
Abstract:
Documents published by companies, such as press releases, contain a wealth of information on various company activities. They are a valuable source for business intelligence analyses. However, given their large volume, tools must be developed to exploit this source automatically. This thesis describes work within one strand of business intelligence, namely the detection of business relationships between the companies described in press releases. We propose a classification-based approach. Existing classification methods did not give us satisfactory performance, mainly because of two problems: representing the text by all of its words, which does not necessarily help characterize a business relationship, and the imbalance between classes. To address the first problem, we propose a representation based on pivot words, namely the names of the companies involved, in order to better capture the words likely to describe their relationship. For the second problem, we propose a two-stage classification, which proves more appropriate than traditional resampling methods. We tested our approaches on a collection of press releases from the automotive domain. Our experiments show that the proposed approaches can improve classification performance. In particular, the pivot-word document representation allows us to focus better on the words useful for detecting business relationships, and the two-stage classification provides an effective solution to the class-imbalance problem. This work shows that automatic detection of business relationships is a feasible task, and the results of this detection could be used in business intelligence analyses.
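The pivot-word idea, representing a document only by the words surrounding the company names of interest rather than by its full bag of words, can be sketched as follows. The function name, window size and example sentence are illustrative assumptions, not the thesis's actual implementation:

```python
def pivot_word_features(tokens, companies, window=3):
    """Keep only tokens within `window` positions of a mentioned company name.

    A simplified sketch of a pivot-word representation: the document is
    reduced to the context surrounding the companies whose relationship
    we want to classify.
    """
    keep = set()
    for i, tok in enumerate(tokens):
        if tok in companies:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            keep.update(range(lo, hi))
    return [tokens[i] for i in sorted(keep)]

text = ("Toyota announced a joint venture with Subaru to develop "
        "electric vehicles while unrelated market news followed").split()
feats = pivot_word_features(text, {"Toyota", "Subaru"})
```

Words far from either company name ("unrelated market news") drop out of the representation, which is the intended effect: the classifier sees only context plausibly describing the relationship.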
Abstract:
Optimal control theory is a powerful tool for solving control problems in quantum mechanics, ranging from the control of chemical reactions to the implementation of gates in a quantum computer. Gradient-based optimization methods are able to find high fidelity controls, but require considerable numerical effort and often yield highly complex solutions. We propose here to employ a two-stage optimization scheme to significantly speed up convergence and achieve simpler controls. The control is initially parametrized using only a few free parameters, such that optimization in this pruned search space can be performed with a simplex method. The result, considered now simply as an arbitrary function on a time grid, is the starting point for further optimization with a gradient-based method that can quickly converge to high fidelities. We illustrate the success of this hybrid technique by optimizing a geometric phase gate for two superconducting transmon qubits coupled with a shared transmission line resonator, showing that a combination of Nelder-Mead simplex and Krotov’s method yields considerably better results than either one of the two methods alone.
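The two-stage strategy can be mimicked with off-the-shelf optimizers: a low-dimensional pulse parametrization is optimized with a simplex method, and the resulting control, sampled on the time grid, seeds a gradient-based refinement. The toy objective, the Gaussian pulse shape, and the use of L-BFGS-B (standing in for Krotov's method) are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 1, 50)
target = np.sin(np.pi * t) ** 2       # toy "ideal control"; real problems score gate fidelity

def infidelity(u):
    """Toy cost; in quantum control this would be 1 - gate fidelity."""
    return np.sum((u - target) ** 2)

# Stage 1: only two free parameters (amplitude, width), simplex search
def pulse(params):
    amp, width = params
    return amp * np.exp(-((t - 0.5) / width) ** 2)

stage1 = minimize(lambda p: infidelity(pulse(p)), x0=[0.5, 0.3],
                  method="Nelder-Mead")

# Stage 2: treat the stage-1 result as an arbitrary function on the time
# grid and refine every grid point with a gradient-based method
stage2 = minimize(infidelity, x0=pulse(stage1.x), method="L-BFGS-B")
```

The pruned stage-1 search cheaply finds a good region; the high-dimensional stage-2 refinement then converges quickly from that starting point, which is the essence of the hybrid scheme.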
Abstract:
Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variable. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly to one another, and considerably worse than disattenuated regression.
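The core of disattenuation can be sketched for a single predictor: measurement error shrinks the OLS slope towards zero by the reliability factor, so dividing the naive slope by the reliability recovers the structural coefficient. The simulation below (synthetic data, illustrative variable names, interaction-term extension omitted) assumes the reliability is known:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x_true = rng.normal(size=n)
x_obs = x_true + rng.normal(scale=0.7, size=n)   # measurement error attenuates the slope
y = 2.0 * x_true + rng.normal(scale=0.5, size=n)

# Naive OLS slope on the error-laden predictor is biased towards zero
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Reliability = Var(true score) / Var(observed score); in practice estimated
# from a reliability coefficient rather than known exactly
reliability = 1.0 / (1.0 + 0.7 ** 2)
b_disattenuated = b_naive / reliability
```

Here the naive slope is attenuated to roughly two-thirds of the true value 2.0, and the corrected estimate is close to the truth; with interaction terms the same logic applies but the reliability of the product term must also be derived.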
Abstract:
Dynamic optimization methods have become increasingly important in economics over the last years. Among the dynamic optimization techniques employed, optimal control has emerged as the most powerful tool for theoretical economic analysis. However, there is a need to advance further and take into account that many dynamic economic processes also depend on some parameter other than time. One can think of relaxing the assumption of a representative (homogeneous) agent in macro- and micro-economic applications, allowing for heterogeneity among the agents. For instance, the optimal adaptation and diffusion of a new technology over time may depend on the age of the person who adopted it. Economic models must therefore take account of heterogeneity conditions within the dynamic framework. This thesis intends to accomplish two goals. The first is to analyze and revise existing environmental policies that focus on defining the optimal management of natural resources over time, taking into account the heterogeneity of environmental conditions. The thesis thus makes a policy-oriented contribution in the field of environmental policy by defining the changes necessary to transform an environmental policy based on the assumption of homogeneity into one that takes heterogeneity into account. As a result, the newly defined environmental policy will be more efficient, and likely also more politically acceptable, since it is tailored more specifically to the heterogeneous environmental conditions. 
In addition to its policy-oriented contribution, this thesis aims to make a methodological contribution by applying a new optimization technique, the so-called two-stage solution approach, for solving problems where the control variables depend on two or more arguments, and by applying a numerical method, the Escalator Boxcar Train method, for solving distributed optimal control problems, i.e., problems where the state variables, in addition to the control variables, depend on two or more arguments. Chapter 2 presents a theoretical framework to determine the optimal resource allocation over time for the production of a good by heterogeneous producers who generate a stock externality, and derives government policies to modify the behavior of competitive producers in order to achieve optimality. Chapter 3 illustrates the method in a more specific context, integrating the aspects of quality and time, and presents a theoretical model to determine the socially optimal outcome over time and space for the problem of waterlogging in irrigated agricultural production. Chapter 4 concentrates on forestry resources and analyses the optimal selective-logging regime of a size-distributed forest.
Abstract:
We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and the inclusion of covariate information. Standard generalized linear models are used to relate phenotype and its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages in modelling the covariate and gene-covariate interaction terms over recently proposed conditional score tests, which include covariate information via a two-stage modelling approach. We apply our method in a study of human systemic lupus erythematosus and C-reactive protein that includes sex as a covariate.
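A likelihood-ratio test of a genetic effect within a generalized linear model can be sketched generically: fit the model with and without the genetic term and compare twice the log-likelihood difference to a chi-squared distribution. The logistic model, synthetic genotype data and variable names below are illustrative assumptions, not the paper's family-based procedure:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(5)
n = 1000
genotype = rng.integers(0, 3, size=n)   # risk-allele count 0/1/2
sex = rng.integers(0, 2, size=n)        # covariate
logit = -1.0 + 0.5 * genotype + 0.3 * sex
pheno = rng.random(n) < 1 / (1 + np.exp(-logit))

def neg_loglik(beta, X):
    """Negative Bernoulli log-likelihood for a logistic model."""
    eta = X @ beta
    return -np.sum(pheno * eta - np.log1p(np.exp(eta)))

X_full = np.column_stack([np.ones(n), genotype, sex])
X_null = X_full[:, [0, 2]]              # drop the genetic term
ll_full = -minimize(neg_loglik, np.zeros(3), args=(X_full,)).fun
ll_null = -minimize(neg_loglik, np.zeros(2), args=(X_null,)).fun

lr_stat = 2 * (ll_full - ll_null)
p_value = chi2.sf(lr_stat, df=1)        # one constrained parameter
```

Because both the genetic and covariate parameters sit in one likelihood, interactions could be tested the same way by comparing nested models, which is the advantage the abstract claims over two-stage score tests.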
Abstract:
This paper presents a unique two-stage image restoration framework designed for a novel rectangular poor-pixels detector which, with its miniature size, light weight and low power consumption, is of great value in micro vision systems. To meet the demand for fast processing, only a few measured images, shifted at the subpixel level, need to join the fusion operation, fewer than traditional approaches require. A preliminary restored image is linearly interpolated by maximum likelihood estimation with a least squares method. After noise removal via Canny-operator-based level set evolution, the final high-quality restored image is obtained. Experimental results demonstrate the effectiveness of the proposed framework, a sensible step towards subsequent image understanding and object identification.
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is first derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalising the model parameter variance. The use of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high-dimensional inputs.
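The A-optimality criterion that NeuDeC uses as a selection metric is the trace of the inverse information matrix, i.e. the summed parameter variances of the design. A minimal sketch of using it for greedy subset selection (random synthetic design, illustrative names, not the paper's lower-bound refinement):

```python
import numpy as np

def a_criterion(X):
    """A-optimality: trace of (X^T X)^{-1}, the summed parameter variances."""
    return np.trace(np.linalg.inv(X.T @ X))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))            # 40 candidate design points, 3 parameters
selected = [0, 1, 2]                    # minimal starting design
candidates = list(range(3, 40))

# Greedy forward selection: at each step add the candidate point that most
# reduces the A-criterion of the growing design
for _ in range(7):
    best = min(candidates, key=lambda i: a_criterion(X[selected + [i]]))
    selected.append(best)
    candidates.remove(best)
```

Each added row makes X^T X larger in the Loewner order, so the criterion decreases monotonically; the greedy rule simply picks the point giving the biggest drop, mirroring how a design-based subset metric prunes a rule base.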
Abstract:
This paper explores the possibility of combining moderate vacuum frying followed by post-frying high vacuum application during the oil drainage stage, with the aim to reduce oil content in potato chips. Potato slices were initially vacuum fried under two operating conditions (140 °C, 20 kPa and 162 °C, 50.67 kPa) until the moisture content reached 10 and 15 % (wet basis), prior to holding the samples in the head space under high vacuum level (1.33 kPa). This two-stage process was found to lower significantly the amount of oil taken up by potato chips by an amount as high as 48 %, compared to drainage at the same pressure as the frying pressure. Reducing the pressure value to 1.33 kPa reduced the water saturation temperature (11 °C), causing the product to continuously lose moisture during the course of drainage. Continuous release of water vapour prevented the occluded surface oil from penetrating into the product structure and released it from the surface of the product. When frying and drainage occurred at the same pressure, the temperature of the product fell below the water saturation temperature soon after it was lifted out of the oil, which resulted in the oil getting sucked into the product. Thus, lowering the pressure after frying to a value well below the frying pressure is a promising method to lower oil uptake by the product.
Abstract:
This paper proposes a method for describing the distribution of observed temperatures on any day of the year such that the distribution and summary statistics of interest derived from the distribution vary smoothly through the year. The method removes the noise inherent in calculating summary statistics directly from the data thus easing comparisons of distributions and summary statistics between different periods. The method is demonstrated using daily effective temperatures (DET) derived from observations of temperature and wind speed at De Bilt, Holland. Distributions and summary statistics are obtained from 1985 to 2009 and compared to the period 1904–1984. A two-stage process first obtains parameters of a theoretical probability distribution, in this case the generalized extreme value (GEV) distribution, which describes the distribution of DET on any day of the year. Second, linear models describe seasonal variation in the parameters. Model predictions provide parameters of the GEV distribution, and therefore summary statistics, that vary smoothly through the year. There is evidence of an increasing mean temperature, a decrease in the variability in temperatures mainly in the winter and more positive skew, more warm days, in the summer. In the winter, the 2% point, the value below which 2% of observations are expected to fall, has risen by 1.2 °C, in the summer the 98% point has risen by 0.8 °C. Medians have risen by 1.1 and 0.9 °C in winter and summer, respectively. The method can be used to describe distributions of future climate projections and other climate variables. Further extensions to the methodology are suggested.
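The two-stage fit can be sketched with synthetic data: stage one fits a GEV distribution to each period's observations separately; stage two smooths the noisy per-period parameter estimates with a first-order harmonic linear model. Monthly rather than daily periods, the fixed shape parameter, and all names are illustrative assumptions:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
months = np.arange(12)
true_loc = 10 + 5 * np.sin(2 * np.pi * months / 12)   # synthetic seasonal cycle

# Stage 1: fit a GEV to each month's observations independently
fitted_loc = []
for m in months:
    sample = genextreme.rvs(0.0, loc=true_loc[m], scale=2.0, size=300,
                            random_state=rng)
    shape, loc, scale = genextreme.fit(sample)
    fitted_loc.append(loc)
fitted_loc = np.array(fitted_loc)

# Stage 2: smooth the monthly location estimates with a harmonic regression,
# so the parameter (and derived summary statistics) varies smoothly in season
H = np.column_stack([np.ones(12),
                     np.sin(2 * np.pi * months / 12),
                     np.cos(2 * np.pi * months / 12)])
coef, *_ = np.linalg.lstsq(H, fitted_loc, rcond=None)
smooth_loc = H @ coef
```

Quantiles such as the 2% and 98% points quoted in the abstract would then be read off the GEV with these smoothed parameters rather than computed directly from noisy daily data.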
Abstract:
In Sub-Saharan Africa (SSA) the technological advances of the Green Revolution (GR) have not been very successful. However, the efforts being made to re-introduce the revolution call for more socio-economic research into the adoption and the effects of the new technologies. The paper discusses an investigation on the effects of GR technology adoption on poverty among households in Ghana. Maximum likelihood estimation of a poverty model within the framework of Heckman's two-stage method of correcting for sample selection was employed. Technology adoption was found to have positive effects in reducing poverty. Other factors that reduce poverty include education, credit, durable assets, and living in the forest belt and in the south of the country. Technology adoption itself was also facilitated by education, credit, non-farm income and household labour supply, as well as living in urban centres. Clearly, technology adoption can be promoted by increasing the levels of complementary inputs such as credit, extension services and infrastructure. Above all, the fundamental problems of illiteracy, inequality and lack of effective markets must be addressed through increasing the levels of formal and non-formal education, equitable distribution of the 'national cake' and a more pragmatic management of the ongoing Structural Adjustment Programme.
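Heckman's two-stage selection correction can be sketched generically: a probit selection equation is estimated first, its fitted values yield the inverse Mills ratio, and that ratio enters the outcome equation as an extra regressor to absorb the selection bias. The synthetic data and variable names below are illustrative, not the Ghana study's specification:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
z = np.column_stack([np.ones(n), rng.normal(size=n)])   # selection covariates
x = np.column_stack([np.ones(n), rng.normal(size=n)])   # outcome covariates
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
select = (z @ np.array([0.2, 1.0]) + u[:, 0]) > 0       # e.g. adopts the technology
y = x @ np.array([1.0, 2.0]) + u[:, 1]                  # outcome, observed if selected

# Stage 1: probit of the selection indicator on z, by maximum likelihood
def neg_loglik(g):
    q = 2 * select - 1
    return -np.sum(norm.logcdf(q * (z @ g)))
gamma = minimize(neg_loglik, x0=np.zeros(2)).x

# Stage 2: OLS of y on x plus the inverse Mills ratio, selected cases only
imr = norm.pdf(z @ gamma) / norm.cdf(z @ gamma)
X2 = np.column_stack([x[select], imr[select]])
beta, *_ = np.linalg.lstsq(X2, y[select], rcond=None)
```

Because the selection and outcome errors are correlated (0.6 here), naive OLS on the selected sample would be biased; the inverse Mills ratio term soaks up that correlation so the slope on x is recovered.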
Abstract:
This paper studies the signalling effect of the consumption-wealth ratio (cay) on German stock returns via vector error correction models (VECMs). The effect of cay on U.S. stock returns has recently been confirmed by Lettau and Ludvigson with a two-stage method. In this paper, the performance of the VECMs and the two-stage method are compared in both German and U.S. data. It is found that the VECMs are more suitable than the two-stage method for studying the effect of cay on stock returns. Using the Conditional-Subset VECM, cay signals real stock returns and excess returns in both data sets significantly. The estimated coefficient on cay for stock returns turns out to be twice as large in U.S. data as in German data. When the two-stage method is used, cay has no significant effect on German stock returns. Besides, it is also found that cay signals German wealth growth and U.S. income growth significantly.
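The two-stage method referred to here can be sketched: stage one estimates the cointegrating regression of consumption on wealth and income, whose residual is the cay signal; stage two regresses next-period returns on the lagged signal. The synthetic random-walk data and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 400
wealth = np.cumsum(rng.normal(size=T))     # random-walk (log) wealth
income = np.cumsum(rng.normal(size=T))     # random-walk (log) income
cay_true = rng.normal(scale=0.5, size=T)   # stationary deviation from trend
consumption = 0.3 * wealth + 0.6 * income + cay_true

# Stage 1: cointegrating regression; its residual is the cay estimate
X = np.column_stack([np.ones(T), wealth, income])
coef, *_ = np.linalg.lstsq(X, consumption, rcond=None)
cay = consumption - X @ coef

# Stage 2: regress next-period returns on the lagged cay signal
returns = 0.8 * cay_true[:-1] + rng.normal(scale=0.5, size=T - 1)
Z = np.column_stack([np.ones(T - 1), cay[:-1]])
b, *_ = np.linalg.lstsq(Z, returns, rcond=None)
```

Because the stage-1 residual is only an estimate of the true cointegrating deviation, inference in stage two ignores that estimation error, which is one reason a one-step VECM can be preferable, as the paper argues.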
Abstract:
A two-stage linear-in-the-parameter model construction algorithm is proposed aimed at noisy two-class classification problems. The purpose of the first stage is to produce a prefiltered signal that is used as the desired output for the second stage which constructs a sparse linear-in-the-parameter classifier. The prefiltering stage is a two-level process aimed at maximizing a model's generalization capability, in which a new elastic-net model identification algorithm using singular value decomposition is employed at the lower level, and then, two regularization parameters are optimized using a particle-swarm-optimization algorithm at the upper level by minimizing the leave-one-out (LOO) misclassification rate. It is shown that the LOO misclassification rate based on the resultant prefiltered signal can be analytically computed without splitting the data set, and the associated computational cost is minimal due to orthogonality. The second stage of sparse classifier construction is based on orthogonal forward regression with the D-optimality algorithm. Extensive simulations of this approach for noisy data sets illustrate the competitiveness of this approach to classification of noisy data problems.
Abstract:
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition that consists in dividing recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is performed by encoding simple action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers into the recognition of simple actions which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour on a parking lot. Experiments show an improvement of ~ 30% in the recognition of both high-level scenarios and their composing simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method only experiences a limited decrease in performance when moderate amounts of noise are added.
Abstract:
Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, sclera segmentation has not been extensively researched as a separate topic; it has mainly been treated as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at pixel level. Exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier: at the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method ranked first in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a recall of 94.56%.
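The two-stage classifier pattern described above, simple per-feature classifiers whose scores feed a learned second-stage model, can be sketched in miniature. The toy features, the per-channel sigmoid scorers, and the logistic-regression stand-in for the neural network are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
X = rng.normal(size=(n, 3))                   # toy "colour channel" features per pixel
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Stage 1: a bank of simple classifiers, one sigmoid score per channel
def stage1_scores(X):
    return 1 / (1 + np.exp(-X))

S = stage1_scores(X)

# Stage 2: a classifier trained on the stage-1 score space (plain logistic
# regression by gradient descent here, standing in for the neural network)
A = np.column_stack([np.ones(n), S])
w = np.zeros(A.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-(A @ w)))
    w -= 0.1 * A.T @ (p - y) / n              # gradient step on the log-loss

pred = 1 / (1 + np.exp(-(A @ w))) > 0.5
accuracy = (pred == y).mean()
```

The stage-2 model learns how to weight and combine the weak per-channel scores, which is why stacking the two stages is more robust than any single simple classifier alone.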