962 results for Constrained Minimization
Abstract:
Phenomena with a constrained sample space appear frequently in practice. This is the case, for example, with strictly positive data or with compositional data such as percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his 1879 introduction of the lognormal distribution are recovered and updated.
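As a rough, hedged illustration of this idea (not taken from the paper), the Python sketch below works with strictly positive data on the log scale: under a relative measure of difference the natural centre of the data is the geometric mean, and fitting a normal distribution to the logged data is equivalent to fitting a lognormal model to the original data. All parameter values are illustrative.

```python
# A minimal sketch (not from the paper): for strictly positive data, measuring
# differences on the log scale amounts to doing ordinary Euclidean statistics
# after a log transform. The "centre" then becomes the geometric mean and the
# fitted model is lognormal rather than normal.
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.8, size=1000)   # strictly positive sample

arithmetic_mean = x.mean()                  # centre under the absolute difference
geometric_mean = np.exp(np.log(x).mean())   # centre under the relative (log) difference

# Fitting a normal to log(x) is equivalent to fitting a lognormal to x.
mu_hat, sigma_hat = np.log(x).mean(), np.log(x).std(ddof=1)
print(arithmetic_mean, geometric_mean, mu_hat, sigma_hat)
```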
Abstract:
The present thesis is focused on the minimization of experimental effort for the prediction of pollutant propagation in rivers by mathematical modelling and knowledge re-use. Mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge re-use approach employs the methods of case-based reasoning, graphical analysis and text mining. The thesis contributes to the pollutant transport research field with: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques which enable the use of variable parameters along rivers in analytical models; (3) models for the estimation of characteristic pollutant transport parameters (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case-based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the reuse of information in pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers in cases of both ordinary pollution and accidents. They can also be applied to other, similar rivers for modelling pollutant transport when experimental concentration data are scarce, because the parameter estimation models developed in this thesis allow the characteristic transport parameters to be calculated as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case-based reasoning tools, and additional necessary information can be identified using the software for information reuse. Such systems support users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes and can be applied in the pre-design stage and/or in the practical operation of processes.
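The following sketch is not one of the thesis models; it only illustrates the classical analytical solution of the one-dimensional advection-dispersion equation for an instantaneous point release in a uniform channel, the kind of building block the thesis refines with variable parameters. The function name `concentration` and all parameter values are illustrative.

```python
# A minimal sketch, not the thesis models: the classical analytical solution of
# the 1D advection-dispersion equation for an instantaneous point release of
# mass M (per cross-sectional area A) in a uniform channel.
import numpy as np

def concentration(x, t, M=10.0, A=5.0, u=0.4, D=2.0):
    """C(x, t) for an instantaneous release at x = 0, t = 0.

    M : released mass [kg], A : cross-sectional area [m^2],
    u : mean flow velocity [m/s], D : longitudinal dispersion coefficient [m^2/s].
    """
    return M / (A * np.sqrt(4.0 * np.pi * D * t)) * np.exp(-(x - u * t) ** 2 / (4.0 * D * t))

x = np.linspace(0.0, 5000.0, 501)          # distance downstream [m]
for t in (600.0, 3600.0, 7200.0):          # snapshots in time [s]
    c = concentration(x, t)
    print(f"t = {t:6.0f} s  peak C = {c.max():.4g} kg/m^3 at x = {x[c.argmax()]:.0f} m")
```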
Abstract:
This work presents a systematic procedure for constructing the solution of a large class of nonlinear heat conduction problems through the minimization of quadratic functionals like the ones usually employed for linear descriptions. The proposed procedure provides an efficient and easy way of carrying out numerical simulations of nonlinear heat transfer problems by means of finite elements. To illustrate the procedure, a particular problem is simulated by means of a finite element approximation.
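A minimal sketch of the linear counterpart of this idea (not the paper's nonlinear procedure): for one-dimensional steady heat conduction with linear finite elements, the discrete solution minimizes the quadratic functional J(T) = 0.5 T^T K T - f^T T, so direct minimization of J and the usual solve of K T = f must coincide. Mesh size, conductivity and source values below are illustrative.

```python
# A minimal sketch (linear, not the paper's nonlinear procedure): 1D steady heat
# conduction -k T'' = q on (0, L) with T(0) = T(L) = 0, discretized with linear
# finite elements. The FE solution minimizes J(T) = 0.5 T^T K T - f^T T.
import numpy as np
from scipy.optimize import minimize

k, q, L, n = 2.0, 50.0, 1.0, 20            # conductivity, source, length, elements
h = L / n
# Assemble the stiffness matrix K and load vector f for the interior nodes only.
K = (k / h) * (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
f = q * h * np.ones(n - 1)

J = lambda T: 0.5 * T @ K @ T - f @ T      # discrete quadratic functional
T_min = minimize(J, np.zeros(n - 1), jac=lambda T: K @ T - f).x
T_direct = np.linalg.solve(K, f)
print(np.allclose(T_min, T_direct, atol=1e-4))   # both routes give the same field
```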
Abstract:
The aim of this study was to analyse mothers’ working time patterns across 22 European countries. The focus was on three questions: how much mothers prefer to work, how much they actually work, and to what degree their preferred and actual working times are (in)consistent with each other. Attention was paid to cross-national differences in mothers’ working time patterns, to the comparison of mothers’ working times with those of childless women and fathers, and to the individual- and country-level factors that explain the variation between them. The theoretical point of departure was an integrative approach which assumes that there are various kinds of explanations for the differences in mothers’ working time patterns – structural, cultural and institutional – and that these factors operate at two levels: the individual level and the country level. Data were extracted from the European Social Survey (ESS) 2010/2011. The results showed that mothers’ working time patterns, both preferred and actual working times, varied across European countries. Four clusters were formed to illustrate the differences. In the full-time pattern, full-time work was the most important form of work, leaving all other working time forms marginal. The full-time pattern was observed in terms of preferred working times in Bulgaria and Portugal. In polarised pattern countries, full-time work was also important, but it was accompanied by a large share of mothers not working at all. In terms of preferred working times, many Eastern and Southern European countries followed this pattern, whereas in terms of actual working times it included all Eastern and Southern European countries as well as Finland. The combination pattern was characterised by the importance of long part-time hours and full-time work. It was the preferred working time pattern in the Nordic countries, France, Slovenia, and Spain, while Belgium, Denmark, France, Norway, and Sweden followed it in terms of actual working times. The fourth cluster describing mothers’ working times was called the part-time pattern and was characterised by the prevalence of short and long part-time work. In terms of preferred working times, it was followed in Belgium, Germany, Ireland, the Netherlands and Switzerland. With the exception of Belgium, the part-time pattern was followed in the same countries in terms of actual working times. The consistency between preferred and actual working times was rather strong in a majority of countries. However, six countries fell under different working time patterns when preferred and actual working times were compared. Comparison of working mothers’, childless women’s, and fathers’ working times showed that the differences between these groups were surprisingly small. It was only in part-time pattern countries that working mothers worked significantly shorter hours than working childless women and fathers. The results therefore revealed that when mothers’ working times are under study, an important question regarding the population examined is whether it consists of all mothers or only working mothers. The results moreover supported the use of the integrative theoretical approach when studying mothers’ working time patterns. They indicate that mothers’ working time patterns in all countries are shaped by various opportunities and constraints, comprising structural, cultural, institutional, and individual-level factors.
Abstract:
Bioenergy is seen as an important part of the current and future portfolio of domestic energy. Black liquor, bark and forest residues cover more than one fifth of domestic energy use. Production plants do not always operate ideally, however, and a range of gaseous and particulate emissions and tars are produced, which can lead to deposit formation and corrosion. The cause of these problems is often an imbalance in the process: certain compounds become enriched in the process and super-equilibrium states are formed. This doctoral thesis presents a new computational method with which the super-equilibrium state, the most important chemical reactions, the heat production of the process and its state quantities can be described simultaneously. The method is based on a unique constrained free energy method developed at VTT. This so-called CFE method has previously been applied in the paper, metal and chemical industries; the bioenergy applications demonstrated in the thesis are a new field of use for the method. The study showed that the method is well suited to high-temperature energy processes, in which super-equilibrium states can arise and the chemical system can be defined with a small number of constraints. Typical applications are the combustion of biomass and black liquor, biomass gasification and the formation of nitrogen oxides. Different ways of defining the super-equilibrium constraints are also presented: they can be based on industrial measurements, empirical constants and rate expressions, or mechanistic reaction kinetics.
In the future, the results of the thesis can be exploited in process design and in the study of new technical solutions in gasification and combustion technology and in biofuel research. The developed method offers a good alternative to traditional mechanistic and phenomenological models, combining the best parts of both.
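The sketch below is only a schematic illustration of the general idea behind constrained free energy calculations, not VTT's CFE implementation: the Gibbs energy of an ideal-gas mixture is minimized subject to element balances, and one extra constraint freezing a species amount mimics a kinetically limited super-equilibrium state. The three-species system and the dimensionless mu0 values are purely illustrative.

```python
# A minimal sketch of constrained Gibbs energy minimization (not VTT's CFE code):
# minimize G/RT of an ideal-gas mixture subject to element balances, plus an
# optional extra constraint that freezes one species amount (super-equilibrium).
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
mu0 = np.array([0.0, 0.0, -30.0])            # illustrative mu_i0 / RT values
A = np.array([[2, 0, 2],                     # H balance
              [0, 2, 1]])                    # O balance
b = A @ np.array([2.0, 1.0, 0.0])            # feed: 2 mol H2 + 1 mol O2

def gibbs(n):
    n = np.clip(n, 1e-12, None)
    return float(n @ (mu0 + np.log(n / n.sum())))

def solve(extra_constraints=()):
    cons = [{"type": "eq", "fun": lambda n: A @ n - b}, *extra_constraints]
    res = minimize(gibbs, x0=np.array([1.0, 0.5, 1.0]), bounds=[(1e-10, None)] * 3,
                   constraints=cons, method="SLSQP")
    return res.x

n_eq = solve()                                                   # full equilibrium
n_super = solve([{"type": "eq", "fun": lambda n: n[0] - 0.5}])   # H2 frozen at 0.5 mol
print(dict(zip(species, n_eq.round(4))), dict(zip(species, n_super.round(4))))
```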
Abstract:
The first example of a [5+2] cycloaddition reaction in which the olefin of the vinylcyclopropyl moiety is constrained in a carbocycle was explored, and possible reasons for the lack of reactivity of the substrate were studied. A simple model substrate was synthesized and subjected to cycloaddition conditions to determine whether the lack of reactivity was related to the complexity of the substrate, or whether the lack of “conjugative character” of the cyclopropyl ring with respect to the olefin is responsible. A more complex bicyclic substrate possessing an angular methyl group at the ring junction was also synthesized and explored, with evidence supporting the current theory of deconjugation of the cyclopropyl moiety.
Abstract:
We study the problem of provision and cost-sharing of a public good in large economies where exclusion, complete or partial, is possible. We search for incentive-constrained efficient allocation rules that display fairness properties. Population monotonicity says that an increase in population should not be detrimental to anyone. Demand monotonicity states that an increase in the demand for the public good (in the sense of a first-order stochastic dominance shift in the distribution of preferences) should not be detrimental to any agent whose preferences remain unchanged. Under suitable domain restrictions, there exists a unique incentive-constrained efficient and demand-monotonic allocation rule: the so-called serial rule. In the binary public good case, the serial rule is also the only incentive-constrained efficient and population-monotonic rule.
Abstract:
A single object must be allocated to at most one of n agents. Money transfers are possible and preferences are quasilinear. We offer an explicit description of the individually rational mechanisms which are Pareto-optimal in the class of feasible, strategy-proof, anonymous and envy-free mechanisms. These mechanisms form a one-parameter infinite family; the Vickrey mechanism is the only Groves mechanism in that family.
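For reference, the sketch below implements only the familiar Vickrey (second-price) mechanism for a single object with quasilinear preferences, the one Groves member of the family characterized in the paper; the reported valuations are illustrative.

```python
# A minimal sketch of the Vickrey (second-price) mechanism for a single object
# with quasilinear preferences: the object goes to the highest-value agent, who
# pays the second-highest reported value. Values below are illustrative.
from typing import Sequence, Tuple

def vickrey(values: Sequence[float]) -> Tuple[int, float]:
    """Return (winner_index, payment).

    Truthful reporting is a dominant strategy: an agent's payment never
    depends on her own report, only on the highest competing report.
    """
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    winner, runner_up = order[0], order[1]
    return winner, values[runner_up]

winner, price = vickrey([3.0, 7.5, 5.2, 6.1])
print(f"agent {winner} receives the object and pays {price}")  # agent 1 pays 6.1
```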
Abstract:
Master's thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
Abstract:
Master's thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
Abstract:
This thesis describes two main themes: 1) the design, synthesis, and biophysical evaluation of tricyclic nucleosides, and 2) the synthesis of nagilactone B, a norditerpenoid dilactone natural product of the “podolactone” family of natural products. The first chapter describes a rational nucleoside design strategy called “double conformational restriction”, based on structural modelling studies of modified DNA–RNA duplexes. This strategy involves locking the furanose ring in an N- or S-type conformation and restricting torsional rotation about the γ angle. The first constraint was incorporated through a methylene bridge between the 2′-oxygen and the 4′-carbon of the nucleoside, a strategy inspired by locked nucleic acids (LNA). The second constraint was achieved by adding an additional carbocycle to the locked nucleic acid scaffold. The synthetic challenges of preparing the modified nucleotides from carbohydrates are described, as well as the improvements in thermal stability they confer on the oligonucleotide duplexes into which they are incorporated. Chapters two and three describe the development of two complementary synthetic routes to the core of nagilactone B. This natural product has implications for Hutchinson–Gilford syndrome owing to its ability to act as a modulator of lamin A pre-messenger RNA splicing. It contains seven distinct stereocentres, including two quaternary centres and two forming a syn-1,2-diol, as well as five- and six-membered lactones, with the six-membered ring resembling an α-pyrone. The synthesis began with the Wieland–Miescher ketone, which made it possible to address the structural challenges and to explore the functionalization of rings A, B and D of the nagilactone B core.
Abstract:
We study the relaxational dynamics of the one-spin facilitated Ising model introduced by Fredrickson and Andersen. We show the existence of a critical time which separates an initial regime in which the relaxation is exponentially fast and aging is absent from a regime in which relaxation becomes slow and aging effects are present. The presence of this fast exponential process and its associated critical time is in agreement with some recent experimental results on fragile glasses.
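A hedged Monte Carlo sketch of the model under study (not the paper's analysis): in the one-spin facilitated Fredrickson-Andersen model, non-interacting excitations n_i in {0, 1} carry energy H = sum_i n_i, and a site may only flip if at least one nearest neighbour is excited. The lattice size, inverse temperature and number of sweeps below are illustrative.

```python
# A minimal Monte Carlo sketch of the 1D one-spin facilitated Fredrickson-Andersen
# model: Metropolis dynamics for H = sum_i n_i, with the kinetic constraint that a
# site may only flip if at least one nearest neighbour is excited.
import numpy as np

rng = np.random.default_rng(1)
L, beta, sweeps = 200, 1.5, 3000
n = rng.integers(0, 2, size=L)              # random initial configuration

def sweep(n):
    for _ in range(L):
        i = rng.integers(L)
        # Kinetic constraint: at least one neighbour must be excited.
        if n[(i - 1) % L] + n[(i + 1) % L] == 0:
            continue
        dE = 1 - 2 * n[i]                    # +1 to create, -1 to destroy an excitation
        if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis acceptance
            n[i] = 1 - n[i]
    return n

for _ in range(sweeps):
    n = sweep(n)
print(n.mean(), 1.0 / (1.0 + np.exp(beta)))  # measured vs equilibrium excitation density
```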
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem becomes very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, which uses a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained by applying SVM to the problem of detecting frontal human faces in real images.
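As a hedged illustration of the quadratic program mentioned above (not the paper's decomposition algorithm, which targets problems too large for this direct approach), the sketch below solves the dense SVM dual for a toy linearly separable data set with a general-purpose solver; all data and the penalty parameter C are illustrative.

```python
# A minimal sketch of the dense SVM dual QP:
#   minimize 0.5 a^T Q a - sum(a)  s.t.  0 <= a_i <= C,  sum_i a_i y_i = 0,
# with Q_ij = y_i y_j x_i . x_j (linear kernel). Toy data, solved directly.
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, 1.0], [2.0, 2.5], [0.5, 2.0], [4.0, 4.0], [5.0, 3.5], [4.5, 5.0]])
y = np.array([-1.0, -1.0, -1.0, 1.0, 1.0, 1.0])
C = 10.0
Q = (X @ X.T) * np.outer(y, y)

obj = lambda a: 0.5 * a @ Q @ a - a.sum()
grad = lambda a: Q @ a - np.ones_like(a)
res = minimize(obj, np.zeros(len(y)), jac=grad, method="SLSQP",
               bounds=[(0.0, C)] * len(y),
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x
w = (alpha * y) @ X                          # primal weight vector (linear kernel)
sv = (alpha > 1e-6) & (alpha < C - 1e-6)     # margin support vectors
b = np.mean(y[sv] - X[sv] @ w)               # bias recovered from the margin SVs
print(np.sign(X @ w + b) == y)               # all training points classified correctly
```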
Abstract:
Abstract taken from the publication.