908 results for Predictive Analytics


Relevance:

100.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

60.00%

Publisher:

Abstract:

Plants are essential for human society, because our daily food, construction materials and sustainable energy are derived from plant biomass. Yet, despite this importance, the multiple developmental aspects of plants are still poorly understood and represent a major challenge for science. With the emergence of high-throughput devices for genome sequencing and high-resolution imaging, data have never been so easy to collect, generating huge amounts of information. Computational analysis is one way to integrate those data and to reduce the apparent complexity towards an appropriate scale of abstraction, with the aim of eventually providing new answers and directing further research. This is the motivation behind this thesis work, i.e. the application of descriptive and predictive analytics combined with computational modeling to answer problems that revolve around morphogenesis at the subcellular and organ scale. One of the goals of this thesis is to elucidate how the auxin-brassinosteroid phytohormone interaction determines cell growth in the root apical meristem of Arabidopsis thaliana (Arabidopsis), the reference plant model for molecular studies. The pertinent information about signaling protein relationships was obtained from the literature to reconstruct the entire hormonal crosstalk. Due to a lack of quantitative information, we employed a qualitative modeling formalism. This work allowed us to confirm the synergistic effect of the hormonal crosstalk on cell elongation, to explain some of our paradoxical mutant phenotypes, and to predict a novel interaction between the BREVIS RADIX (BRX) protein and the transcription factor MONOPTEROS (MP), which turned out to be critical for the maintenance of the root meristem. On the same subcellular scale, another study in the monocot model Brachypodium distachyon (Brachypodium) revealed an alternative wiring of auxin-ethylene crosstalk as compared to Arabidopsis.
In the latter, increasing interference with auxin biosynthesis results in progressively shorter roots. By contrast, a hypomorphic Brachypodium mutant isolated in this study, affected in an enzyme of the auxin biosynthesis pathway, displayed a dramatically longer seminal root. Our morphometric analysis confirmed that more anisotropic cells (thinner and longer) are principally responsible for the mutant root phenotype. Further characterization pointed towards an inverted regulatory logic in the relation between ethylene signaling and auxin biosynthesis in Brachypodium as compared to Arabidopsis, which explains the phenotypic discrepancy. Finally, the morphometric analysis of hypocotyl secondary growth that we applied in this study was performed with the image-processing pipeline of our quantitative histology method. During its secondary growth, the hypocotyl reorganizes its primary bilateral symmetry into a radial symmetry of highly specialized tissues, which start from a few dozen cells and come to comprise several thousand. However, such a scale only permits observations in thin cross-sections, severely hampering a comprehensive analysis of the morphodynamics involved. Our quantitative histology strategy overcomes this limitation. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with an automated cell-type recognition algorithm, this allows precise quantitative characterization of vascular development and reveals developmental patterns that were not evident from visual inspection, for example the steady interspacing of the phloem poles. Further analyses indicated a change in growth anisotropy of cambial and phloem cells, which appeared in phase with the expansion of the xylem.
Combining genetic tools and computational modeling, we showed that the reorientation of the growth anisotropy axis of the peripheral tissue layers only occurs when the growth rate of the central tissue is higher than that of the periphery. This was confirmed by calculating the ratio of xylem to phloem growth rates throughout secondary growth: high ratios are indeed observed, concomitant with the homogenization of cambium anisotropy. These results suggest a self-organization mechanism, promoted by a gradient of division in the cambium, that generates a pattern of mechanical constraints. This, in turn, reorients the growth anisotropy of peripheral tissues to sustain secondary growth.
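The qualitative logical formalism used for the hormone crosstalk can be illustrated with a minimal synchronous Boolean network. The node names and update rules below are simplified assumptions for illustration only, not the thesis' actual model; the point is only how qualitative rules can reproduce a synergy (growth requires both hormone inputs).

```python
# Minimal Boolean-network sketch of hormone crosstalk (illustrative assumptions,
# not the thesis model): states are True/False, all nodes update synchronously.

def step(state, rules):
    """One synchronous update of all nodes."""
    return {node: rule(state) for node, rule in rules.items()}

def fixed_point(state, rules, max_steps=50):
    """Iterate until the state stops changing (or give up)."""
    for _ in range(max_steps):
        nxt = step(state, rules)
        if nxt == state:
            return state
        state = nxt
    return None

# Hypothetical qualitative rules: growth requires both pathways (synergy),
# and MP activity depends on BRX, echoing the predicted BRX-MP interaction.
rules = {
    "auxin":  lambda s: s["auxin"],           # external input, held constant
    "BR":     lambda s: s["BR"],              # external input, held constant
    "BRX":    lambda s: s["auxin"],           # auxin signaling activates BRX
    "MP":     lambda s: s["BRX"],             # BRX sustains MP activity
    "growth": lambda s: s["MP"] and s["BR"],  # synergy: both pathways needed
}

start = {"BRX": False, "MP": False, "growth": False}
both = fixed_point({**start, "auxin": True, "BR": True}, rules)
auxin_only = fixed_point({**start, "auxin": True, "BR": False}, rules)
print(both["growth"], auxin_only["growth"])
```

With both hormones present the network settles into a growth-on steady state; removing either input switches growth off, which is the qualitative behaviour such a formalism is meant to capture without any kinetic parameters.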

Relevance:

60.00%

Publisher:

Abstract:

Traditionally, target groups for vehicle marketing campaigns are formed with simple criteria concerning the characteristics of a person or of his or her vehicle. Predictive analytics makes it possible to produce technically complex yet easy-to-use methods for target group formation. In this work, classification and regression methods were applied to the population of new car buyers. The methods were limited to the support vector machine and the Cox regression model. Cox regression was used to study the suitability of survival analysis for modeling the time of the purchase event. Classification with a support vector machine succeeds in about 72% of cases. The mean error of the purchase time modeled with support vector regression is about four months. Based on the results, survival analysis is also a usable method for modeling the time of the purchase event.
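The support vector machine mentioned above can be sketched in its simplest linear form, trained by sub-gradient descent on the hinge loss. The two-feature synthetic data below is an invented stand-in for real buyer records, and the hyperparameters are illustrative choices, not the thesis' setup.

```python
# Sketch of a linear SVM trained by sub-gradient descent on the hinge loss
# (illustrative stand-in for the thesis' classifier; data is synthetic).
import random

def train_svm(data, epochs=200, lr=0.01, lam=0.01):
    """data: list of (features, label) with label in {-1, +1}."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Regularization shrinks w every step; hinge gradient applies
            # only when the point is inside the margin (margin < 1).
            w = [wi - lr * lam * wi for wi in w]
            if margin < 1:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy "buyer vs non-buyer" data: two well-separated clusters.
random.seed(0)
data = [([random.gauss(2, 0.3), random.gauss(2, 0.3)], 1) for _ in range(30)] + \
       [([random.gauss(-2, 0.3), random.gauss(-2, 0.3)], -1) for _ in range(30)]
w, b = train_svm(data)
accuracy = sum(predict(w, b, x) == y for x, y in data) / len(data)
print(accuracy)
```

On such cleanly separable toy data the classifier reaches near-perfect accuracy; the roughly 72% reported in the thesis reflects real, noisy customer data rather than this idealized setting.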

Relevance:

60.00%

Publisher:

Abstract:

A deep understanding of employees requires companies to invest broadly in information management. Combining this with predictive analytics and data mining gives companies a new dimension for developing human resource management functions in the interest of both employees and the company. The aim of this thesis was to examine the use of data mining in human resource management. The thesis was carried out using a constructive method. The theoretical framework focused on understanding the concepts of predictive analytics and data mining. The empirical part consisted of qualitative and quantitative sections. The qualitative section comprised a preliminary study on the use of predictive analytics and data mining. The quantitative section consisted of a data mining project carried out in human resource management, studying employee turnover. The preliminary study identified, among other things, data ownership, competence and understanding of the possibilities as challenges to the use of data mining. As a result of the data mining project, the correlation analyses and logistic regression analysis applied in the study revealed statistical dependencies with respect to employees leaving voluntarily.
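The logistic regression used in the attrition project above can be sketched with a small gradient-descent implementation. The feature names and data here are invented for illustration and are not from the thesis project.

```python
# Sketch of logistic regression for voluntary-attrition risk (illustrative:
# features and data are invented, not the thesis' HR data).
import math
import random

def train_logreg(data, epochs=500, lr=0.1):
    """data: list of (features, label) with label in {0, 1}."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y                         # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy data: (scaled overtime hours, scaled tenure) -> left (1) / stayed (0).
random.seed(1)
data = [([random.gauss(1.5, 0.4), random.gauss(-1.0, 0.4)], 1) for _ in range(40)] + \
       [([random.gauss(-1.0, 0.4), random.gauss(1.5, 0.4)], 0) for _ in range(40)]
w, b = train_logreg(data)

def prob_leave(x):
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

accuracy = sum((prob_leave(x) >= 0.5) == (y == 1) for x, y in data) / len(data)
print(accuracy)
```

The fitted coefficients play the same interpretive role as in the thesis: the sign and magnitude of each weight indicate how a feature shifts the odds of voluntary departure.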

Relevance:

60.00%

Publisher:

Abstract:

Business analytics is one area of corporate performance management that has recently come to the fore as a key enabler of competitive advantage. The aim of this study was to map the current state and needs of business analytics in companies in Finland. The study is a qualitative comparative study. The empirical material was collected by combining two methods: interviews were conducted with experts in companies more advanced in the use of business analytics, and a questionnaire survey was carried out by e-mail to obtain a more comprehensive view of the analytics market. The study maps how the concept of business analytics is understood by analytics experts in different companies in Finland, in what kinds of decision-making situations business analytics is used, and in what ways. In addition, it examines how the development of business analytics and analytics capabilities are managed in companies. In certain industries in Finland business analytics is highly developed, but in general the country lags behind the leaders in the field, such as Sweden. The use of and need for business analytics have largely focused on decision-making situations in which the company meets its customers. The single biggest obstacle to the use of business analytics is a shortage of resources and expertise.

Relevance:

60.00%

Publisher:

Abstract:

A company seeking competitive advantage must be able to refine information and use it to identify new future opportunities. To form images of the future, the company must know its operating environment and be alert to trends of change and other signals from that environment. These vital signals relate to competitors, technological development, changes in values, global population trends, or even environmental change. Spatial relationships are fundamental building blocks for conceptualizing our world. Pitney (2015) has estimated that 80% of all business data contains some reference to location. Nevertheless, location data is still poorly exploited in support of companies' strategic decisions. Technological development, fast data transfer and the integration of positioning technologies into various devices have made it possible for services and solutions exploiting location data to appear increasingly in the business field. The aim of the study was to find out whether location intelligence can support strategic decision-making and, if so, how. The work was carried out using the constructive research method, which aims to solve a relevant problem. The constructive study was conducted in close cooperation with three SMEs, interviewing six people responsible for strategy. The study found that location intelligence can support strategic decision-making on several levels. In the simplest map solution, the desired data are brought onto a map to create a visual presentation that makes drawing conclusions easier. A second-level map solution contains both location and attribute data combined from different sources; it is often descriptive analytics that enables the analysis of various phenomena.
The third and highest-level map solution offers predictive analytics and models of the future. Intelligence is then coded into the application, with the interrelationships of the information defined using either data mining or statistical analyses. The study concludes that location intelligence can offer added value in support of strategic decision-making if it is useful for the company to understand the geographical differences between various phenomena, customer needs, competitors and market changes. At its best, a location intelligence solution offers a reliable analysis in which information passes unchanged from one decision-maker to another, and the reasons that led to a conclusion can be revisited whenever needed.

Relevance:

60.00%

Publisher:

Abstract:

The machines used in industrial production processes are subject to wear and destined to exhibit malfunctions unless careful preventive maintenance is carried out. This thesis proposes a proof of concept for predictive maintenance which, by analyzing the signals transmitted by the sensors installed on the machine, aims to flag future failures in good time, so that maintenance can be performed before the failure occurs.
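A minimal form of the sensor-signal analysis described above is a rolling z-score check: each new reading is compared with the recent baseline and flagged when it deviates strongly. This is only an illustrative stand-in for the thesis' pipeline; the signal, window and threshold below are invented.

```python
# Simple condition-monitoring sketch: flag sensor readings whose rolling
# z-score exceeds a threshold (illustrative; parameters are invented).
import math
import random

def rolling_zscore_alarms(signal, window=20, threshold=4.0):
    """Return indices where a reading deviates strongly from the recent baseline."""
    alarms = []
    for i in range(window, len(signal)):
        recent = signal[i - window:i]
        mean = sum(recent) / window
        var = sum((v - mean) ** 2 for v in recent) / window
        std = math.sqrt(var) or 1e-9          # avoid division by zero
        if abs(signal[i] - mean) / std > threshold:
            alarms.append(i)
    return alarms

# Synthetic vibration-like signal: steady noise with a spike at t = 60
# standing in for an incipient-fault signature.
random.seed(2)
signal = [random.gauss(0.0, 0.1) for _ in range(100)]
signal[60] += 2.0
print(rolling_zscore_alarms(signal))
```

A real deployment would replace this heuristic with a trained model, but the structure is the same: learn a baseline of healthy behaviour, then raise a maintenance alert before the deviation becomes a failure.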

Relevance:

60.00%

Publisher:

Abstract:

Bisphosphonates represent a unique class of drugs that effectively treat and prevent a variety of bone-related disorders, including metastatic bone disease and osteoporosis. High tolerance and high efficacy rates quickly ranked bisphosphonates as the standard of care for bone-related diseases. However, in the early 2000s, case reports began to surface that linked bisphosphonates with osteonecrosis of the jaw (ONJ). Since that time, studies have corroborated the linkage. However, as with most disease states, many factors can contribute to the onset of disease. The aim of this study was to determine which comorbid factors presented an increased risk for developing ONJ in cancer patients. Using a case-control study design, investigators used a combination of ICD-9 codes and chart review to identify confirmed cases of ONJ at The University of Texas M. D. Anderson Cancer Center (MDACC). Each case was then matched to five controls based on age, gender, race/ethnicity, and primary cancer diagnosis. Data querying and chart review provided information on the variables of interest: bisphosphonate exposure, glucocorticoid exposure, smoking history, obesity, and diabetes. Statistical analysis was conducted using PASW (Predictive Analytics Software) Statistics, Version 18 (SPSS Inc., Chicago, Illinois). One hundred twelve (112) cases were identified as confirmed cases of ONJ. Variables were tested using univariate logistic regression to determine significance (p < .05); significant variables were included in the final conditional logistic regression model. Concurrent use of bisphosphonates and glucocorticoids (OR, 18.60; CI, 8.85 to 39.12; p < .001), current smoking (OR, 2.52; CI, 1.21 to 5.25; p = .014), and presence of diabetes (OR, 1.84; CI, 1.06 to 3.20; p = .030) were found to increase the risk of developing ONJ.
Obesity was not significantly associated with ONJ development. In this study, cancer patients who received bisphosphonates as part of their therapeutic regimen were found to have an 18-fold increase in their risk of developing ONJ; smoking and diabetes were additional risk factors. More studies examining the concurrent use of glucocorticoids and bisphosphonates may strengthen these correlations.
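The odds ratios with 95% confidence intervals reported above are the standard case-control effect measure. As a worked sketch, here is how an odds ratio and its Wald confidence interval are computed from a 2x2 exposure-by-outcome table; the counts are invented for illustration and are not the study's data.

```python
# Odds ratio and 95% Wald CI from a 2x2 table (counts are invented).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a: exposed cases, b: exposed controls, c: unexposed cases, d: unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 30, 20, 60)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The conditional logistic regression in the study generalizes this calculation: it produces the same kind of odds ratio while respecting the case-to-control matching and adjusting for the other covariates.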

Relevance:

60.00%

Publisher:

Abstract:

People go through their life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
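The dynamic-programming core that makes these models expensive can be sketched on a tiny network: a value function of the form V(s) = log Σ_a exp(u(s,a) + V(next(s,a))) is solved by iteration, and link choice probabilities then follow a logit form, as in recursive-logit route choice models. The network and link utilities below are invented for illustration, not taken from the thesis.

```python
# Dynamic-programming sketch in the spirit of a recursive-logit route choice
# model (illustrative network and utilities; not the thesis' formulation).
import math

# links: state -> list of (utility, next_state); "D" is the destination (V = 0).
links = {
    "A": [(-1.0, "B"), (-1.2, "C")],
    "B": [(-1.0, "D")],
    "C": [(-0.5, "D")],
    "D": [],
}

def solve_values(links, iters=100):
    """Iterate the logsum Bellman equation V(s) = log sum_a exp(u + V(next))."""
    V = {s: 0.0 for s in links}
    for _ in range(iters):
        for s, outs in links.items():
            if outs:                       # destination keeps V = 0
                V[s] = math.log(sum(math.exp(u + V[t]) for u, t in outs))
    return V

def choice_probs(s, V):
    """Logit probabilities over outgoing links, using downstream values."""
    exps = [math.exp(u + V[t]) for u, t in links[s]]
    total = sum(exps)
    return [e / total for e in exps]

V = solve_values(links)
p_B, p_C = choice_probs("A", V)
print(round(p_B, 3), round(p_C, 3))
```

The route via C wins despite its costlier first link because the value function folds in the cheaper downstream link; on real networks with cycles and many states, solving this recursion inside every likelihood evaluation is exactly what makes estimation costly.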

Relevance:

60.00%

Publisher:

Abstract:

This study focuses on researching the concept of working in an open space in the new Oporto headquarters of the company Energias de Portugal - SA, given the strategy implemented and the results achieved. For this, we dissected the assumptions presented to workers at the inauguration ceremony of the new space - "The Open Space operates as a platform for communication and information sharing" and "The Open Space responds to the needs of workers, creating modern and functional workplaces". In order to evaluate the empirical side, we built a measurement tool that we called open space (OS) and sent it in electronic format to the e-mail of every worker in the sample. The conclusions are based on the results analyzed and discussed after processing in SPSS - predictive analytics software and solutions - and in graphical reports. The open space of EDP Oporto is a modern and functional place, privileged in relation to fluidity and information sharing, capable of manifesting business strategies and highlighting aspects of the brand and culture of the Company. The training/information on behaviors and basic rules to follow when sharing the same space, and the business reasons that lead the organization to change the workspace, along with the advantages that both parties can draw from the new concept, positively or negatively influence the perception of change and the emotional state of workers. Noise, temperature, concentration and privacy are some of the factors that may vary with the layout and act as determinants of greater or lesser environmental satisfaction. However, there are always questions that remain open, and it was in this context that we left some proposals for further investigation in a line of scientific work that is never exhausted.

Relevance:

40.00%

Publisher:

Abstract:

The idea behind the project is to develop a methodology for analyzing, diagnosing and predicting the state of charge and state of health of lithium-ion batteries for automotive applications. For lithium-ion batteries, residual functionality is measured in terms of state of health; however, this value cannot be measured directly, so it must be estimated. The development of the algorithms is based on identifying the causes of battery degradation in order to model and predict its trend. Models have therefore been developed that are able to predict the electrical, thermal and aging behavior. In addition to the models, it was necessary to develop algorithms capable of monitoring the state of the battery, both online and offline. This was achieved with algorithms based on Kalman filters, which allow the system state to be estimated in real time. Machine learning algorithms, which allow offline analysis of battery deterioration using a statistical approach, make it possible to analyze information from the entire fleet of vehicles. Both systems work in synergy to achieve the best performance. Validation was performed with laboratory tests on different batteries and under different conditions. The development of the models allowed the duration of the experimental tests to be reduced: some specific phenomena were tested in the laboratory, while the remaining cases were generated artificially.
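The Kalman-filter-based online monitoring mentioned above can be sketched in its scalar form: the process model integrates the current draw (Coulomb counting) and the filter corrects the prediction with noisy state-of-charge-related measurements. All parameters and the simulated signal below are invented for illustration; they are not the thesis' battery model.

```python
# Minimal scalar Kalman filter sketch for state-of-charge (SOC) tracking
# (illustrative parameters; not the thesis' battery model).
import random

def kalman_soc(measurements, currents, dt=1.0, capacity=3600.0,
               q=1e-6, r=0.01, soc0=1.0, p0=0.1):
    soc, p = soc0, p0
    estimates = []
    for z, i in zip(measurements, currents):
        # Predict: SOC decreases as charge is drawn (Coulomb counting).
        soc -= i * dt / capacity
        p += q                          # process noise inflates uncertainty
        # Update: blend the prediction with the noisy measurement.
        k = p / (p + r)                 # Kalman gain
        soc += k * (z - soc)
        p *= (1 - k)
        estimates.append(soc)
    return estimates

# Simulate a constant 1 A discharge with noisy SOC observations.
random.seed(3)
true_soc = [1.0 - t / 3600.0 for t in range(600)]
measurements = [s + random.gauss(0, 0.05) for s in true_soc]
currents = [1.0] * 600
est = kalman_soc(measurements, currents)
final_error = abs(est[-1] - true_soc[-1])
print(round(final_error, 4))
```

Even with measurement noise far larger than the per-step SOC change, the filter tracks the true discharge closely because the gain weights the accurate process model over the noisy observations.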

Relevance:

30.00%

Publisher:

Abstract:

An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally-tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state-of-the-art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems thus often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour.
The vision that emerges is one of 'homomimetic deep learning systems', systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.
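The hierarchical prediction scheme the abstract describes can be made concrete with a minimal sketch of error-driven inference (an illustrative, single-level predictive coding loop in the Rao & Ballard style; the weights, input, and step size below are assumptions, not taken from the source):

```python
import numpy as np

# A higher level holds an estimate mu of the hidden causes and predicts
# the lower-level "sensory" signal via a fixed generative mapping W @ mu.
# It then refines mu by descending the squared prediction error.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])          # assumed generative weights
true_causes = np.array([1.0, -0.5])  # hypothetical hidden causes
x = W @ true_causes                  # the sensory input they generate

mu = np.zeros(2)                     # initial higher-level estimate
for _ in range(200):
    err = x - W @ mu                 # prediction error at the lower level
    mu += 0.1 * W.T @ err            # error-driven update of the estimate

# The estimate converges to the true causes: mu ~= [1.0, -0.5]
```

The loop is the core of the "hierarchical prediction machine" idea in miniature: predictions flow down, errors flow up, and only the residual error drives learning and inference.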

Relevância:

30.00% 30.00%

Publicador:

Resumo:

New business and technology platforms are required to sustainably manage urban water resources [1,2]. However, any proposed solutions must be cognisant of security, privacy and other factors that may inhibit adoption and hence impact. The FP7 WISDOM project (funded by the European Commission - GA 619795) aims to achieve a step change in water and energy savings via the integration of innovative Information and Communication Technologies (ICT) frameworks to optimize water distribution networks and to enable change in consumer behaviour through innovative demand management and adaptive pricing schemes [1,2,3]. The WISDOM concept centres on the integration of water distribution, sensor monitoring and communication systems coupled with semantic modelling (using ontologies, potentially connected to BIM, to serve as intelligent linkages throughout the entire framework) and control capabilities to provide for near real-time management of urban water resources. Fundamental to this framework are the needs and operational requirements of users and stakeholders at domestic, corporate and city levels, and this requires the interoperability of a number of demand and operational models, fed with data from diverse sources such as sensor networks and crowdsourced information. This has implications regarding the provenance and trustworthiness of such data and how it can be used, not only in the understanding of system and user behaviours, but more importantly in the real-time control of such systems. Adaptive and intelligent analytics will be used to produce decision support systems that will drive the ability to increase the variability of both supply and consumption [3]. This in turn paves the way for adaptive pricing incentives and a greater understanding of the water-energy nexus. This integration is complex and uncertain, yet typical of a cyber-physical system, and its relevance transcends the water resource management domain.
The WISDOM framework will be modelled and simulated with initial testing at an experimental facility in France (AQUASIM – a full-scale test-bed facility to study sustainable water management), then deployed and evaluated in two pilots in Cardiff (UK) and La Spezia (Italy). These demonstrators will evaluate the integrated concept, providing insight for wider adoption.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Phase I trials use a small number of patients to define a maximum tolerated dose (MTD) and the safety of new agents. We compared data from phase I and registration trials to determine whether early trials predicted later safety and final dose. We searched the U.S. Food and Drug Administration (FDA) website for drugs approved in nonpediatric cancers (January 1990-October 2012). The recommended phase II dose (RP2D) and toxicities from phase I were compared with doses and safety in later trials. In 62 of 85 (73%) matched trials, the dose from the later trial was within 20% of the RP2D. In a multivariable analysis, phase I trials of targeted agents were less predictive of the final approved dose (OR, 0.2 for adopting ±20% of the RP2D for targeted vs. other classes; P = 0.025). Of the 530 clinically relevant toxicities in later trials, 70% (n = 374) were described in phase I. A significant relationship (P = 0.0032) between increasing the number of patients in phase I (up to 60) and the ability to describe future clinically relevant toxicities was observed. Among 28,505 patients in later trials, the drug-related death rate was 1.41%. In conclusion, dosing based on phase I trials was associated with a low toxicity-related death rate in later trials. The ability to predict relevant toxicities correlates with the number of patients on the initial phase I trial. The final dose approved was within 20% of the RP2D in 73% of assessed trials.
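The ±20% concordance criterion used in the abstract is straightforward to make concrete. The sketch below (with hypothetical dose pairs, not values from the FDA dataset) checks whether an approved dose falls within 20% of a phase I RP2D and tallies a concordance rate in the same way the 62/85 (73%) figure is derived:

```python
def within_20pct(rp2d: float, approved: float) -> bool:
    """True if the approved dose is within +/-20% of the phase I RP2D."""
    return abs(approved - rp2d) <= 0.2 * rp2d

# Hypothetical (RP2D, approved dose) pairs in mg/m^2 -- illustrative only:
pairs = [(100.0, 110.0), (100.0, 75.0), (50.0, 55.0), (200.0, 260.0)]
concordant = sum(within_20pct(r, a) for r, a in pairs)
rate = concordant / len(pairs)
print(f"{concordant}/{len(pairs)} trials within 20% of the RP2D ({rate:.0%})")
# -> 2/4 trials within 20% of the RP2D (50%)
```

Note that the band is relative to the RP2D, so it is asymmetric in absolute terms only when the two doses differ; a 100 mg/m^2 RP2D admits approved doses from 80 to 120 mg/m^2 inclusive.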