894 results for Model-Based Design


Relevance: 90.00%

Publisher:

Abstract:

Malaria continues to infect millions and kill hundreds of thousands of people worldwide each year, despite over a century of research and attempts to control and eliminate this infectious disease. Challenges such as the development and spread of drug-resistant malaria parasites, insecticide resistance in mosquitoes, climate change, behavioural plasticity in the mosquito, and the presence of individuals with subpatent, typically asymptomatic malaria infections hinder the prospects of malaria control and elimination. In this thesis, mathematical models of malaria transmission and control are developed that address the role of drug resistance, immunity, iron supplementation and anemia, immigration and visitation, and the presence of asymptomatic carriers in malaria transmission. A within-host mathematical model of severe Plasmodium falciparum malaria is also developed. First, a deterministic mathematical model for the transmission of antimalarial drug-resistant parasites with superinfection is developed and analyzed. The possibility of an increased risk of superinfection due to iron supplementation and fortification in malaria-endemic areas is discussed. The model results call upon stakeholders to weigh the pros and cons of iron supplementation for individuals living in malaria-endemic regions. Second, a deterministic model of the transmission of drug-resistant malaria parasites, including the inflow of infective immigrants, is presented and analyzed. Optimal control theory is applied to this model to study the impact of various malaria and vector control strategies, such as screening of immigrants, treatment of drug-sensitive infections, treatment of drug-resistant infections, and the use of insecticide-treated bed nets and indoor spraying of mosquitoes. The results of the model emphasize the importance of using a combination of all four control tools for effective malaria intervention. Next, a two-age-class mathematical model for malaria transmission with asymptomatic carriers is developed and analyzed. In this model, four possible control measures are analyzed: the use of long-lasting insecticide-treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic individuals, and screening and treatment of asymptomatic individuals. The numerical results show that a disease-free equilibrium can be attained if all four control measures are used. A common pitfall for most epidemiological models is the absence of real data, so model-based conclusions have to be drawn from uncertain parameter values. In this thesis, an approach to studying the robustness of optimal control solutions under such parameter uncertainty is presented. Numerical analysis of the optimal control problem in the presence of parameter uncertainty demonstrates the robustness of the optimal control approach: when a comprehensive control strategy is used, the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in model parameters. Finally, a separate work modeling within-host Plasmodium falciparum infection in humans is presented. The developed model allows re-infection of already-infected red blood cells. The model hypothesizes that in severe malaria, owing to the parasite's quest for survival and rapid multiplication, Plasmodium falciparum can be absorbed into already-infected red blood cells, which accelerates the rupture rate and consequently causes anemia. Analysis of the model and of parameter identifiability using Markov chain Monte Carlo methods is presented.
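The deterministic transmission models above build on classical host-vector dynamics. As a hedged illustration only (a textbook Ross-Macdonald-style system with made-up parameters, not the thesis model), the core coupled dynamics can be sketched as:

```python
# A hedged sketch only: a Ross-Macdonald-style host-vector system with
# illustrative parameters, NOT the thesis model. x is the infected-human
# fraction, y the infected-mosquito fraction, m mosquitoes per human, a the
# biting rate, b and c transmission probabilities, r the human recovery rate
# and g the mosquito death rate.
def simulate(days=200, dt=0.01, m=2.0, a=0.3, b=0.5, c=0.5, r=0.05, g=0.1):
    x, y = 0.01, 0.0  # start with 1% of humans infected
    for _ in range(int(days / dt)):
        dx = m * a * b * y * (1.0 - x) - r * x  # new human infections - recovery
        dy = a * c * x * (1.0 - y) - g * y      # new mosquito infections - death
        x += dt * dx
        y += dt * dy
    return x, y

x, y = simulate()
```

With these assumed parameters R0 = m·a²·b·c/(r·g) = 9 > 1, so prevalence settles at a positive endemic equilibrium; the thesis models add superinfection, drug resistance and age structure on top of dynamics of this kind.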

Relevance: 90.00%

Publisher:

Abstract:

The objective of this thesis is to understand how to create and develop a successful place brand and how to manage it systematically. The thesis explains the phenomenon of place brands and place branding and presents different sub-categories of place branding. The theoretical part provides a wide overview of the prevailing literature on place branding, place brand development and place brand management, which forms the basis of the thesis' theoretical framework. The empirical evidence is gathered from a case living area, developed by a construction company with a significant role in the construction industry in Finland, through semi-structured in-depth interviews with the new living area's carefully selected stakeholder groups. The empirical data is then analyzed and reflected against the theoretical findings. After examining the case living area, the thesis presents a new living-area branding process model based on the prevailing theories and the empirical findings.

Relevance: 90.00%

Publisher:

Abstract:

Object detection is a fundamental task of computer vision that is utilized as a core part in a number of industrial and scientific applications, for example, in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative), and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application-specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example, pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian mixture model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the scale and orientation shifts in the Gabor feature space used previously. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. A discriminative classifier is therefore used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
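The step that turns features into part probabilities can be sketched with a toy Gaussian mixture whose parameters are hand-set for illustration (the thesis learns a randomized GMM unsupervised from complex-valued Gabor responses; this 1-D sketch is an assumption, not the actual detector):

```python
import math

# Hand-set 1-D sketch of the GMM step: a feature value is converted to
# posterior "part" probabilities. All parameters here are illustrative
# assumptions, not learned values.
def gmm_part_probs(x, weights, means, stds):
    """Posterior P(part k | feature x) under a 1-D Gaussian mixture."""
    dens = [w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
            for w, m, s in zip(weights, means, stds)]
    total = sum(dens)
    return [d / total for d in dens]

# A feature near the first component's mean is assigned to that part.
probs = gmm_part_probs(0.9, weights=[0.5, 0.5], means=[1.0, -1.0], stds=[0.5, 0.5])
```

The responsibilities sum to one, so each local feature contributes a soft assignment over parts rather than a hard label, which is what makes the downstream spatial (constellation) model fully probabilistic.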

Relevance: 90.00%

Publisher:

Abstract:

Building Information Modeling (BIM) is spreading widely in the Architecture, Engineering, and Construction (AEC) industries. Manufacturers of building elements are also starting to provide more and more BIM objects of their products. The ideal availability and distribution of these models has not yet stabilized. A manufacturer's usual goal is to get its model into the design as early as possible, and finding ways to satisfy customer needs with a superior service would help achieve this goal. This study seeks to determine what the case company's customers want from the model, what they consider the ideal way to obtain these models, and what the desired functionalities of such a service are. This master's thesis uses a modified version of the lead user method to gain an understanding of these needs in the longer term. Within this framework, current solutions and their common model functions are also benchmarked. Empirical data is collected with a survey and interviews. As a result, this thesis provides an understanding of what information the customer uses when obtaining a model, what kind of model is expected, and how the process should optimally function. Based on these results, an ideal service is outlined.

Relevance: 90.00%

Publisher:

Abstract:

An active magnetic bearing is a type of bearing that uses a magnetic field to levitate the rotor. These bearings require continuous control of the currents in the electromagnets, based on the measured rotor position and electromagnet currents. Because of this, different identification methods can be implemented with no additional hardware. The focus of this thesis was to implement and test identification methods for an active magnetic bearing system and to update the rotor model. Magnetic center calibration is a method used to locate the magnetic center of the rotor; rotor model identification is used to identify the rotor model; and rotor model update is used to update the rotor model based on identification data. These methods were implemented and tested on a real machine in which the rotor was levitated with active magnetic bearings, and their functionality was verified. The methods were developed with further extension in mind, and with the possibility of applying them easily to different machines.

Relevance: 90.00%

Publisher:

Abstract:

The motivation for this paper stems from the steady decline in the share of consumer expenditures on goods produced in the global South, coupled with the (empirically ambiguous) Singer-Prebisch hypothesis that this can be explained by a secular decline in the southern terms of trade. Drawing on these sources of inspiration, the paper sets out to study the dynamics of the terms of trade using a multi-sector growth model based on the principle of cumulative causation. The upshot is a North-South model of growth and trade in which the evolution of the terms of trade depends on differential rates of productivity growth in different sectors of the economy, and in which terms-of-trade dynamics may not be the best guide as to whether or not there is an uneven development problem.
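As a toy illustration of the mechanism (my own simplification, not the paper's model), assume export prices track unit labour costs; the South's terms of trade then evolve with the productivity-growth differential between the two regions:

```python
# Toy illustration (an assumption, not the paper's model): if export prices
# track unit labour costs, the South's terms of trade T fall whenever southern
# productivity growth q_south outpaces northern growth q_north, since faster
# productivity gains are passed through as lower export prices.
def terms_of_trade(T0=1.0, q_north=0.01, q_south=0.03, years=30):
    path = [T0]
    for _ in range(years):
        path.append(path[-1] * (1.0 + q_north) / (1.0 + q_south))
    return path

path = terms_of_trade()  # steadily declining under these assumed growth rates
```

The point of the paper, though, is that once productivity growth differs across sectors within each region, the sign of this drift is no longer a reliable indicator of uneven development.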

Relevance: 90.00%

Publisher:

Abstract:

The goal of this master's thesis has been to develop, for a food-industry company, a new innovative operating model for the food-service market, based on interviews, observation and the literature. The main research question is: how should an innovative operating model, in terms of product family and distribution, be designed for the company's food-service (i.e. ready-meal) products? The aim is to map the current state of the company's food-service business and to create an operating model that would bring the company clear added value and competitive advantage. The theoretical part examines the state of the food industry, ready meals and food logistics from Finnish and global perspectives. The thesis reviews the current state and future trends of the food sector with respect to products, eating habits and logistics. On the innovation side, the work focuses on developing a food-service concept and a new cooperation-based logistics model. The practical part presents a case study on the current state of the company's food-service products and logistics. The market situation of the industry is mapped in order to identify the most promising markets for food-service products. The goal has been to develop for Hoviruoka Oy an innovative operating model for food-service segmentation and logistics; with this model the company gains competitive advantage and economic benefit over its competitors and becomes a forerunner. The results section discusses the key findings and possible further steps. A new food-service operating model was developed for the company, and the goals set for the work were achieved from the perspective of all three parties: the author, the university and the company.

Relevance: 90.00%

Publisher:

Abstract:

It is well accepted that structural studies with model membranes are of considerable value in understanding the structure of biological membranes. Many studies with models of pure phospholipids have been done, but accounting for the effects of divalent cations and protein on these models would make such studies more applicable to intact membranes. The present study, performed with the above in view, is a structural analysis of divalent ion-cardiolipin complexes using the technique of x-ray diffraction. Cardiolipin, precipitated from dilute solution by the divalent ions calcium, magnesium and barium, contains little water, and the structure formed is similar to that of pure cardiolipin with low water content. The calcium-cardiolipin complex forms a pure hexagonal type II phase that exists from 4° to 40° C. The molar ratio of calcium to cardiolipin in the complex is 1:1. Cardiolipin precipitated with magnesium and barium forms two co-existing phases, lamellar and hexagonal, the relative quantity of the two phases depending on temperature. The hexagonal type II phase, consisting of water-filled channels formed by adding calcium to cardiolipin, may have remarkable permeability properties in the intact membrane. Pure cardiolipin and insulin at pH 3.0 and 4.0 precipitate but form no organised structure. Lecithin/cardiolipin and insulin precipitated at pH 3.0 give a pure lamellar phase. As the lecithin/cardiolipin molar ratio changes from 93/7 to 50/50, (a) the repeat distance of the lamellar phase changes from 72.8 Å to 68.2 Å; (b) the amount of protein bound increases in such a way that the cardiolipin/insulin molar ratio in the complex reaches a maximum constant value at a lecithin/cardiolipin molar ratio of 70/30. A structural model based on these data shows that the molecular arrangement of lipid and protein is a lipid bilayer coated with protein molecules. The lipid-protein interaction is chiefly electrostatic, and little, if any, hydrophobic bonding occurs in this particular system. The proposed model is thus essentially the same as the Davson-Danielli model of the biological membrane.

Relevance: 90.00%

Publisher:

Abstract:

Affiliation: Département de Biochimie, Université de Montréal


Relevance: 90.00%

Publisher:

Abstract:

Trimethoprim (TMP) is an antibiotic in common use since the 1960s. TMP is an inhibitor of the bacterial chromosomal dihydrofolate reductase (DHFR). In bacteria, this enzyme is responsible for the reduction of dihydrofolate (DHF) to tetrahydrofolate (THF), which is essential for purine synthesis and thus for cell proliferation. Bacterial resistance to TMP has been documented for more than 30 years. One cause of this resistance is that some bacterial strains express a plasmid-encoded DHFR, DHFR R67. DHFR R67 is not affected by TMP and can therefore substitute for the chromosomal DHFR when the latter is inhibited by TMP. To date, no specific inhibitor of DHFR R67 is known. Discovering inhibitors of DHFR R67 would make it possible to overcome the TMP resistance that DHFR R67 confers on bacteria. Fragment-based design and virtual screening were chosen as the approaches for discovering DHFR R67 inhibitors. The fragment-based design approach identified seven simple, low-molecular-weight compounds (fragments) that weakly inhibit DHFR R67. From these fragments, more complex, symmetric compounds inhibiting DHFR R67 in the micromolar range were developed. Kinetic studies showed that these inhibitors are competitive and that at least two molecules bind simultaneously in the active site of DHFR R67. A study of analogues of the micromolar inhibitors of DHFR R67 showed that the presence of carboxylate and benzimidazole groups and the length of the molecules influence inhibitor potency. A molecular docking study, supported by the in vitro results, produced a model suggesting that residues Lys32, Gln67 and Ile68 are involved in inhibitor binding. Virtual screening of the 80,000-compound Maybridge library with the Moldock software, followed by in vitro inhibition assays of the best candidates, identified four micromolar inhibitors belonging to families distinct from the previously identified compounds. A second virtual screen, of a library of 6 million compounds, identified three further, again distinct, micromolar inhibitors. These results provide the basis from which it will be possible to develop more potent compounds with pharmacologically acceptable properties, with the aim of developing an antibiotic able to overcome the TMP resistance conferred by DHFR R67.

Relevance: 90.00%

Publisher:

Abstract:

In transportation studies, route choice models describe how a traveller selects a path from origin to destination. More precisely, the problem is to find, in a network of arcs and nodes, the sequence of arcs connecting two nodes according to given criteria. In the present work we apply dynamic programming to represent the choice process, viewing the choice of a path as a sequence of arc choices. Moreover, we use approximation techniques from dynamic programming to represent imperfect knowledge of the network state, in particular for arcs far from the current position. Specifically, each time a traveller reaches an intersection, he considers the utility of a certain number of future arcs, and an estimate is then made for the remainder of the path to the destination. The route choice model is implemented within a discrete-event traffic simulation model. The resulting model is tested on a model of a real road network in order to study its performance.
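The recursive arc-choice idea can be sketched on a hypothetical toy network (the node names, utilities, and logsum recursion below are illustrative assumptions in the spirit of recursive route-choice models, not the thesis implementation):

```python
import math

# Hypothetical toy network: each arc carries a utility (negative cost).
# The value V(k) of node k is the logsum over outgoing arcs of arc utility
# plus downstream value, so path choice decomposes into arc choices.
arcs = {
    'A': [('B', -1.0), ('C', -2.0)],
    'B': [('D', -2.0)],
    'C': [('D', -0.5)],
    'D': [],  # destination
}

def value(node, memo=None):
    if memo is None:
        memo = {}
    if node not in memo:
        if not arcs[node]:
            memo[node] = 0.0  # no further cost at the destination
        else:
            memo[node] = math.log(sum(math.exp(u + value(nxt, memo))
                                      for nxt, u in arcs[node]))
    return memo[node]
```

From A the arc to C is then chosen with probability exp(-2.0 + V('C') - V('A')), about 0.62 here, even though its immediate utility is worse, because the downstream arc is cheap; truncating this recursion after a few arcs and estimating the remainder gives the approximate variant described above.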

Relevance: 90.00%

Publisher:

Abstract:

We study the management of multi-skill call centres with several call types and agent groups. A call centre is a very complex queueing system, and a simulator is generally required to evaluate its performance. We first develop a call-centre simulator based on the simulation of a continuous-time Markov chain (CTMC), which is faster than conventional discrete-event simulation. Using a uniformization of the CTMC, the simulator simulates the embedded discrete-time Markov chain of the CTMC. We propose strategies for using this simulator efficiently when optimizing the staffing of agents, and in particular study the use of common random numbers. Second, we optimize the agents' schedules over several periods with an algorithm based on subgradient cuts and simulation. This problem is generally too large to be solved by integer programming, so we relax the integrality of the variables and propose methods for rounding the solutions, together with a local search to improve the final solution. Next, we study the optimization of the routing of calls to agents. We propose a new routing policy based on weights, the calls' waiting times, and the agents' idle times or the number of free agents, and we develop a modified genetic algorithm to optimize the routing parameters. Instead of performing mutations or crossovers, this algorithm optimizes the parameters of the probability distributions that generate the population of solutions. We then develop a staffing algorithm based on aggregation, queueing theory and the delay probability. This heuristic algorithm is fast because it does not use simulation; the service-level constraint is converted into a constraint on the delay probability. Next, we propose a variant of a CTMC model based on the waiting time of the customer at the head of the queue. Finally, we present an extension of a cutting-plane algorithm for the stochastic optimization with recourse of staffing in a multi-skill call centre.
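The uniformization idea behind the simulator can be sketched on a toy birth-death chain standing in for the call-centre CTMC (the M/M/1 structure and rates below are illustrative assumptions, not the thesis model):

```python
import random

# Toy illustration of uniformization: with a uniformization rate L at least as
# large as every state's total exit rate, the embedded discrete-time chain has
# transition matrix P = I + Q/L, and unused rate becomes a self-loop.
def uniformized_step(state, rates, L):
    out = rates(state)                 # {next_state: rate}
    u = random.random() * L
    acc = 0.0
    for nxt, r in out.items():
        acc += r
        if u < acc:
            return nxt
    return state                       # self-loop with probability 1 - sum/L

def mm1_rates(n, lam=0.8, mu=1.0):     # single-queue stand-in for the call centre
    r = {n + 1: lam}
    if n > 0:
        r[n - 1] = mu
    return r

random.seed(1)
L, n, empties, steps = 2.0, 0, 0, 200_000
for _ in range(steps):
    n = uniformized_step(n, mm1_rates, L)
    empties += (n == 0)
p0 = empties / steps  # should approach the M/M/1 empty-system value 1 - 0.8 = 0.2
```

Because every step of the embedded chain is a cheap discrete draw, with no event calendar to maintain, this is the sense in which CTMC simulation can beat conventional discrete-event simulation.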

Relevance: 90.00%

Publisher:

Abstract:

The liver is a vital organ with an exceptional capacity for regeneration and a crucial role in the functioning of the organism. Liver volume assessment is an important tool that can be used as a biomarker of the severity of liver disease. Liver volumetry is indicated before major hepatectomies, portal vein embolization and transplantation. The most widespread method, based on computed tomography (CT) or magnetic resonance imaging (MRI) examinations, consists in delineating the liver contour on several consecutive slices, a process called segmentation. We present the design and validation strategy of a semi-automated segmentation method developed at our institution. Our method is a model-based approach using variational shape interpolation and Laplacian mesh optimization, and was designed to be compatible with both CT and MRI. We evaluated the repeatability, reliability and efficiency of our semi-automated segmentation method in two retrospectively designed cross-sectional studies. The results of our validation studies suggest that the method offers reliability and repeatability comparable to manual segmentation while significantly reducing interaction time, making it suitable for routine clinical practice. Future studies could incorporate volumetry to derive volume-based biomarkers of liver disease, such as the presence of steatosis or iron, or the measurement of fibrosis per unit volume.
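The mesh-smoothing ingredient can be sketched with a single Laplace-smoothing pass on a toy 2-D contour (far simpler than the variational shape interpolation and Laplacian mesh optimization actually used; all names and numbers below are illustrative assumptions):

```python
# Minimal sketch: one Laplace-smoothing pass moves each vertex part-way toward
# the centroid of its neighbours, regularizing a noisy contour. This stands in
# for, but is much simpler than, the Laplacian mesh optimization in the method.
def laplace_smooth(vertices, neighbours, step=0.5):
    new = []
    for i, (x, y) in enumerate(vertices):
        nbrs = neighbours[i]
        cx = sum(vertices[j][0] for j in nbrs) / len(nbrs)
        cy = sum(vertices[j][1] for j in nbrs) / len(nbrs)
        new.append((x + step * (cx - x), y + step * (cy - y)))
    return new

# A 4-point closed contour with one outlier vertex; smoothing pulls it back.
verts = [(0.0, 0.0), (1.0, 0.0), (1.0, 3.0), (0.0, 1.0)]
ring = {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
smoothed = laplace_smooth(verts, ring)
```

In a segmentation setting such smoothing is balanced against data-attachment terms so the contour stays on the liver boundary; repeated unconstrained passes would shrink the mesh.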

Relevance: 90.00%

Publisher:

Abstract:

Software systems are progressively being deployed in many facets of human life. The implications of the failure of such systems have a varied impact on their customers. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time, and is used to measure quality objectively. Evaluating the reliability of a computing system involves computing both hardware and software reliability. Most earlier works focused on software reliability with no consideration of hardware parts, or vice versa. However, a complete estimate of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying failure data for hardware and software components and building a model on that data to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on modeling and measuring the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and an attempt to present an integrated model for predicting the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed.
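One classic reliability growth model of the kind evaluated in such work is the Goel-Okumoto NHPP; the sketch below, with made-up parameters, shows how its mean value function yields a reliability prediction (an illustrative example, not the thesis model):

```python
import math

# Goel-Okumoto NHPP sketch (parameters a and b are made up for illustration):
# m(t) = a * (1 - exp(-b*t)) is the expected cumulative number of failures
# observed by testing time t, with a the total expected failures.
def expected_failures(t, a=100.0, b=0.05):
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a=100.0, b=0.05):
    """P(no failure in (t, t+x]) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(expected_failures(t + x, a, b) - expected_failures(t, a, b)))
```

Because m(t) is concave, reliability(x, t) grows with testing time t; a series hardware-software system would then multiply such a software reliability by the hardware reliability, matching the combined approach argued for above.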