904 results for Network Analysis Methods
Abstract:
The major objective of this project is to evaluate image analysis for characterizing air voids in Portland cement concrete (PCC) and asphalt concrete (AC), and aggregate gradation in asphalt concrete. Phase 1 of this project has concentrated on evaluating and refining sample preparation techniques, evaluating methods and instruments for conducting image analysis, and finally, analyzing and comparing a select portion of samples. Preliminary results suggest a strong correlation between the results obtained from the linear traverse method and image analysis methods for determining percent air voids in concrete. Preliminary work with asphalt samples has shown that damage caused by the high vacuum of a conventional scanning electron microscope (SEM) may be too disruptive. Alternative solutions have been explored, including confocal microscopy and low-vacuum electron microscopy. Additionally, a conventional high-vacuum SEM operating at a marginal operating vacuum may suffice.
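The comparison between image-analysis and linear-traverse air-void estimates can be made concrete with a minimal sketch. The code below is illustrative only: it assumes a grayscale cross-section image in which voids appear bright, and the threshold value, synthetic image, and function names are all invented for the example.

```python
# A minimal sketch comparing image-analysis and simulated linear-traverse
# estimates of percent air voids on a thresholded cross-section image.
# The threshold and the synthetic slab are hypothetical assumptions.
import numpy as np

def air_void_percent_image(gray, threshold=200):
    """Percent air voids = fraction of pixels classified as void."""
    return 100.0 * (gray >= threshold).mean()

def air_void_percent_traverse(gray, n_lines=50, threshold=200):
    """Simplified linear traverse: sample evenly spaced horizontal
    lines and measure the void fraction along them."""
    rows = np.linspace(0, gray.shape[0] - 1, n_lines).astype(int)
    return 100.0 * (gray[rows, :] >= threshold).mean()

# Synthetic example: a dark slab with bright circular voids.
rng = np.random.default_rng(0)
img = np.full((512, 512), 80, dtype=np.uint8)
yy, xx = np.mgrid[0:512, 0:512]
for _ in range(60):
    cy, cx = rng.integers(0, 512, 2)
    r = rng.integers(5, 15)
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 230

print(f"image analysis:  {air_void_percent_image(img):.2f}%")
print(f"linear traverse: {air_void_percent_traverse(img):.2f}%")
```

On such synthetic data the two estimates agree closely, which is the kind of correlation the abstract reports for real concrete sections.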
Abstract:
The major objective of this research project is to utilize thermal analysis techniques in conjunction with X-ray analysis methods to identify and explain chemical reactions that promote aggregate-related deterioration in Portland cement concrete. The first year of this project has been spent obtaining and analyzing limestone and dolomite samples that exhibit a wide range of field service performance. Most of the samples chosen for the study also had readily available laboratory durability test information (ASTM C 666, method B). Preliminary test results indicate that a strong relationship exists between the average crystallite size of the limestone (calcite) specimens and their apparent decomposition temperatures as measured by thermogravimetric analysis. Also, premature weight loss in the thermogravimetric tests appeared to be related to the apparent decomposition temperature of the various calcite test specimens.
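The abstract does not state how crystallite size was measured; one standard way to estimate mean crystallite size from X-ray diffraction line broadening is the Scherrer equation, sketched below with hypothetical calcite peak values. This is an assumed technique for illustration, not necessarily the project's procedure.

```python
# Scherrer estimate of mean crystallite size from XRD line broadening:
# tau = K * lambda / (beta * cos(theta)). Peak values are hypothetical.
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm; beta is the instrument-corrected FWHM."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Calcite (104) reflection near 29.4 degrees 2-theta (Cu K-alpha),
# with an assumed corrected FWHM of 0.20 degrees:
print(f"crystallite size ~ {scherrer_size(0.20, 29.4):.1f} nm")
```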
Abstract:
Internet governance is a recent issue in global politics. However, it has gradually become a major political and economic issue, and it has recently gained further importance, now appearing regularly in the news. Against this background, this research outlines the history of Internet governance from its emergence as a political issue in the 1980s to the end of the World Summit on the Information Society (WSIS) in 2005. Rather than focusing on one or the other institution involved in Internet governance, this research analyses the emergence and historical evolution of a space of struggle drawing in a growing number of different actors. This evolution is described through the analysis of the dialectical relation between elites and non-elites and through the struggle around the definition of Internet governance. The thesis explores the question of how the relations among the elites of Internet governance, and between these elites and non-elites, explain the emergence, evolution, and structuration of a relatively autonomous field of world politics centred around Internet governance. Against dominant realist and liberal perspectives, this research draws upon a cross-fertilisation of heterodox international political economy and international political sociology. This approach focuses on the concepts of field, elites, and hegemony. The concept of field, as developed by Bourdieu, is increasingly used in International Relations to build a differentiated analysis of globalisation and to describe the emergence of transnational spaces of struggle and domination. Elite sociology allows for a pragmatic actor-centred analysis of the issue of power in the globalisation process. This research particularly draws on Wright Mills' concept of the power elite in order to explore the unification of different elites around shared projects. Finally, this thesis uses the Neo-Gramscian concept of hegemony in order to study both the consensual dimension of domination and the prospect of change contained in any international order. Through the analysis of the documents produced within the analysed period, and through the creation of databases of networks of actors, this research focuses on the debates that followed the commercialisation of the Internet in the early 1990s and on the negotiations during the WSIS. The first period led to the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1998. This creation resulted from consensus-building between the dominant discourses of the time, and from a coalition of interests within an emerging power elite of Internet governance. However, this institutionalisation of Internet governance around ICANN excluded a number of actors and discourses that have since resisted this mode of governance. The WSIS became the institutional framework within which the governance system was questioned by excluded states, scholars, NGOs, and intergovernmental organisations; it therefore constitutes the second historical period studied in this thesis.
The confrontation between the power elite and counter-elites during the WSIS triggered a reconfiguration of the power elite as well as a redefinition of the boundaries of the field. A new hegemonic project emerged around discursive elements such as the idea of multistakeholderism and institutional elements such as the Internet Governance Forum. The relative success of this hegemonic project has allowed for an unprecedented institutional stability since the end of the WSIS and an acceptance of the elite discourse by most actors in the field. It is only recently that this order began to be questioned by the emerging powers of Internet governance. This research provides three main contributions to the scientific debate. On the theoretical level, it contributes to the emergence of a dialogue between international political economy and international political sociology in order to analyse both the structural trends of the globalisation process and the situated practices of actors in a given issue-area. It notably stresses the contribution of the concepts of field and power elite and their compatibility with a Neo-Gramscian framework for analysing hegemony. On the methodological level, this perspective relies on mixed methods, combining qualitative content analysis with social network analysis of actors and statements. Finally, on the empirical level, this research provides an original perspective on Internet governance: it stresses the historical dimension of current Internet governance arrangements, demonstrates the fragility of the notion of multistakeholder governance, and focuses on power dynamics and the relation between Internet governance and globalisation.
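The mixed-methods step, social network analysis of actors and statements, can be sketched as a bipartite network projected onto actors. The sketch below is only indicative of the approach; the actor and statement names are invented, not taken from the thesis's databases.

```python
# Hedged sketch of an actor-statement network: actors are linked to the
# statements they endorsed, and a co-endorsement network among actors is
# derived by projection. All names are invented for illustration.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
actors = ["ICANN", "ITU", "civil_society_coalition", "state_A", "state_B"]
statements = ["WSIS_decl_1", "WSIS_decl_2", "IGF_charter"]
B.add_nodes_from(actors, bipartite=0)
B.add_nodes_from(statements, bipartite=1)
B.add_edges_from([
    ("ICANN", "IGF_charter"), ("ITU", "WSIS_decl_1"),
    ("state_A", "WSIS_decl_1"), ("state_B", "WSIS_decl_1"),
    ("civil_society_coalition", "WSIS_decl_2"),
    ("civil_society_coalition", "IGF_charter"),
    ("state_A", "WSIS_decl_2"),
])

# Project onto actors: two actors are tied if they back a common statement.
G = bipartite.weighted_projected_graph(B, actors)
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))
```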
Abstract:
This doctoral thesis examines the application of infrared spectroscopy and multivariate data analysis methods to the monitoring of crystallization processes and to the analysis of crystalline products. Crystallization research worldwide is currently investigating intensively the application of various measurement techniques to the continuous measurement of crystallization phenomena, both in the liquid phase and in the forming crystals. In addition, product characterization is essential for ensuring product quality. Particularly in pharmaceutical manufacturing, interest in this type of research is driven by the U.S. Food and Drug Administration's (FDA) guidance on process analytical technology (PAT), which broadly defines the measurement requirements for drug manufacturing and product characterization needed to guarantee safe manufacturing processes. Cooling crystallization is a separation method widely used, especially in the pharmaceutical industry, for purifying solid crude products. In this method, the solid raw material to be purified is dissolved in a suitable solvent at a relatively high temperature. The solubility of the substance in the solvent decreases as the temperature falls, so when the system is cooled, the concentration of the dissolved substance exceeds the solubility concentration. In such a supersaturated system, new crystals tend to form or existing crystals grow. Supersaturation is one of the most important factors affecting the quality of the crystalline product. In cooling crystallization, the properties of the product can be influenced by, among other things, the choice of solvent, the cooling profile, and mixing. The start-up phase of the crystallization process, i.e. the moment at which the first crystals form, also affects the product properties. The quality of a crystalline product is defined by the average crystal size, the size and shape distributions, and purity. In the pharmaceutical industry it is often required that the product represents a specific polymorphic form, polymorphism meaning the ability of molecules to arrange themselves in the crystal lattice in several different ways. The aforementioned properties affect the downstream processability of the product, such as filterability, grindability, and tabletability. The polymorphic form also affects many end-use properties of the product, such as the dissolution rate of a drug in the body. In this thesis, the cooling crystallization of sulfathiazole was studied using several solvent mixtures and cooling profiles, and the effects of these factors on the quality attributes of the product were examined. Infrared spectroscopy is a method widely applied in chemical research; it measures the spectral changes caused by molecular vibrations of the sample in the IR region. In this work, in-process measurements were carried out with an in-situ immersion probe placed in the reactor, using attenuated total reflectance (ATR) Fourier transform infrared (FTIR) spectroscopy. Powder samples were measured off-line by diffuse reflectance (DRIFT) FTIR spectroscopy. With multivariate methods (chemometrics), spectral data comprising several hundreds or even thousands of variables can be refined into qualitative or quantitative information describing the process. The thesis broadly examined the application of various multivariate methods to extract as versatile process information as possible from the measured spectral data.
As a result of the thesis, a calibration routine is proposed for measuring the solute concentration, and thereby the level of supersaturation, during a crystallization process. Development of the calibration routine included methods for assessing data quality, data pretreatment methods, the actual calibration modeling, and model validation. This provides real-time information on the driving force of the crystallization process, which in turn improves understanding and controllability of the process. The effects of the supersaturation level on the quality of the resulting crystalline product were followed in several crystallization experiments. The thesis also presents a method based on multivariate statistical process monitoring that can predict the moment of spontaneous primary nucleation from the measured spectral data and possibly infer the polymorphic form produced on nucleation. Using the proposed method, the formation of crystal nuclei can be anticipated, and possible disturbances in the initial moments of the crystallization process can be detected. By predicting the resulting polymorph, the nucleation of an undesired polymorph can be detected, and the control of the crystallization can possibly be changed to obtain the desired polymorphic form. Multivariate methods were also applied to determining batch-to-batch variation from the measured spectral data; the information obtained from this type of analysis can be exploited in the design and optimization of crystallization processes. The thesis also tested the suitability of IR spectroscopy and various multivariate methods for the rapid determination of the polymorphic composition of a crystalline product. Powder samples could be classified according to the polymorphs they contain using suitable multivariate classification methods. This offers a rapid method for a rough evaluation of the polymorph composition of a powder sample, i.e. which single polymorph the sample mainly contains. Actual quantitative analysis, i.e. determining how much of each polymorph a sample contains (e.g. in weight percent), requires a physical calibration set covering all the polymorphs, which can be difficult because of the poor availability of the pure polymorphs.
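The calibration step described above is commonly implemented with partial least squares regression on pretreated spectra. The sketch below shows the general shape of such a routine on synthetic spectra; the pretreatment choice (SNV), component count, and data are assumptions, not the thesis's validated routine.

```python
# Minimal chemometric calibration sketch: predict solute concentration
# from (synthetic) ATR-FTIR spectra with PLS regression, with SNV
# pretreatment and cross-validated error as a simple validation measure.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 40, 300
conc = rng.uniform(0.1, 1.0, n_samples)            # solute concentration
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 12.0) ** 2)
X = conc[:, None] * peak + 0.01 * rng.standard_normal((n_samples, n_wavenumbers))

# Standard normal variate (SNV) pretreatment, row-wise.
X_snv = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X_snv, conc, cv=5).ravel()
rmsecv = np.sqrt(np.mean((y_cv - conc) ** 2))
print(f"RMSECV = {rmsecv:.4f}")
```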
Abstract:
An important task in environmental monitoring is to assess the current state of the environment and the changes caused by human activity, and to analyze and search for their interrelations. Environmental change can be managed by collecting and analyzing information. This master's thesis studies changes detected in aquatic vegetation using remotely sensed measurement data and image analysis methods. Aerial images of Lake Saimaa, the largest lake in Finland, taken in 1996 and 1999, were used for the environmental monitoring. The first step of the image analysis is geometric correction, the purpose of which is to register the acquired images to a common coordinate system. The second step is to match the corresponding local regions and to detect changes in the vegetation. Different approaches, including supervised and unsupervised classification methods, were used for recognizing the vegetation. The study used real, noisy measurement data, and the experiments conducted with it gave good results, indicating the success of the study.
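The detection step can be illustrated with a minimal unsupervised sketch: difference two co-registered images and cluster pixels into "change" and "no change". The images below are synthetic stand-ins for the 1996 and 1999 scenes, and k-means on the difference image is an assumed method, one of several the thesis could cover.

```python
# Illustrative unsupervised change detection between two co-registered
# images: absolute differencing followed by 2-class k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
img_1996 = rng.normal(100, 10, (128, 128))
img_1999 = img_1996.copy()
img_1999[40:70, 50:90] += 40                   # simulated vegetation change
img_1999 += rng.normal(0, 5, img_1999.shape)   # sensor noise

diff = np.abs(img_1999 - img_1996).reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(diff)

# Call the cluster with the larger mean difference the "change" class.
change_label = int(diff[labels == 1].mean() > diff[labels == 0].mean())
change_mask = (labels == change_label).reshape(128, 128)
print(f"changed area: {100 * change_mask.mean():.1f}% of pixels")
```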
Abstract:
The objective of this work was to combine the advantages of the dried blood spot (DBS) sampling process with highly sensitive and selective negative-ion chemical ionization tandem mass spectrometry (NICI-MS-MS) to analyze recent antidepressants, including fluoxetine, norfluoxetine, reboxetine, and paroxetine, from micro whole blood samples (i.e., 10 µL). Before analysis, DBS samples were punched out, and antidepressants were simultaneously extracted and derivatized in a single step by use of pentafluoropropionic acid anhydride and 0.02% triethylamine in butyl chloride for 30 min at 60 °C under ultrasonication. Derivatives were then separated on a gas chromatograph coupled with a triple-quadrupole mass spectrometer operating in negative selected reaction monitoring mode, for a total run time of 5 min. To establish the validity of the method, trueness, precision, and selectivity were determined on the basis of the guidelines of the Société Française des Sciences et des Techniques Pharmaceutiques (SFSTP). The assay was found to be linear in the concentration ranges 1 to 500 ng mL⁻¹ for fluoxetine and norfluoxetine and 20 to 500 ng mL⁻¹ for reboxetine and paroxetine. Despite the small sampling volume, the limit of detection was estimated at 20 pg mL⁻¹ for all the analytes. The stability of DBS was also evaluated at −20 °C, 4 °C, 25 °C, and 40 °C for up to 30 days. Furthermore, the method was successfully applied to a pharmacokinetic investigation performed on a healthy volunteer after oral administration of a single 40-mg dose of fluoxetine. Thus, this validated DBS method combines a single extraction-derivatization step with a fast and sensitive GC-NICI-MS-MS technique. Using microliter blood samples, this procedure offers a patient-friendly tool in many biomedical fields, such as checking treatment adherence, therapeutic drug monitoring, toxicological analyses, and pharmacokinetic studies.
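A small worked example can make the linearity and detection-limit claims tangible. The responses below are invented, and the LOD formula (3.3 × residual SD / slope, an ICH-style convention) is an assumption; the SFSTP validation applied in the paper is broader than this sketch.

```python
# Hypothetical calibration-curve check for fluoxetine over 1-500 ng/mL:
# linear fit, r^2, and an ICH-style detection limit estimate.
import numpy as np

conc = np.array([1, 5, 10, 50, 100, 250, 500], dtype=float)   # ng/mL
resp = 120.0 * conc + np.array([8, -15, 20, -60, 90, -150, 200])  # peak area

slope, intercept = np.polyfit(conc, resp, 1)
resid = resp - (slope * conc + intercept)
sd_resid = resid.std(ddof=2)           # two fitted parameters
lod = 3.3 * sd_resid / slope

r = np.corrcoef(conc, resp)[0, 1]
print(f"slope={slope:.1f}, r^2={r**2:.5f}, LOD~{lod:.3f} ng/mL")
```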
Abstract:
The objective of the thesis was to explore the nature and characteristics of customer-related internal communication in a global industrial matrix organization during a specific customer relationship, and how it could be improved. The theoretical part of the study reviews the concepts of intra-organizational information and knowledge sharing, the influence of internal communication on customer relationships and its problems, and the suggestions made in the literature for improving internal communication. The empirical part of the study was conducted using content analysis and social network analysis as research methods. The data were collected through interviews and a questionnaire. Internal communication was observed first at a general level within the organization, from the point of view of a certain business, and second, during a specific customer relationship, at the personal and departmental levels. The results describe the nature and characteristics of internal communication in the organization and give 13 suggestions for improving it. Although the study was done in one specific organization, it also offers insights for other organizations and managers seeking to improve their internal communication.
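The social network analysis step can be sketched with a directed communication network built from questionnaire answers. The roles, ties, and measures below (density and reciprocity) are invented illustrations of the kind of structure such a study examines, not the thesis's data.

```python
# Hedged sketch: a directed "who shares customer information with whom"
# network, with density and reciprocity as simple structural measures.
import networkx as nx

reported_to = {
    "sales_1":       ["pm_1", "support_1"],
    "sales_2":       ["pm_1"],
    "pm_1":          ["engineering_1", "sales_1"],
    "support_1":     ["pm_1"],
}
G = nx.DiGraph((src, dst) for src, dsts in reported_to.items() for dst in dsts)

print(f"density:     {nx.density(G):.2f}")
print(f"reciprocity: {nx.reciprocity(G):.2f}")
```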
Abstract:
Whether from an urbanistic, a social, or a governance point of view, the evolution of cities is a major challenge for our contemporary societies. By giving the opportunity to analyse existing spatial and social configurations, or by attempting to simulate future ones, geographic information systems have become indispensable in urban planning and management. In five years, the population of the city of Lausanne has grown from 134,700 to 140,570 inhabitants, while public school enrolment has increased from 12,200 to 13,500 students. This demographic rise, together with a broad harmonisation process of compulsory schooling in Switzerland, has driven the schools service, in collaboration with the University of Lausanne, to set up and develop GIS solutions capable of tackling various spatial issues. Established in 1989, the school districts (catchment areas) had to be redefined so that they fit the reality of a continuously changing urban and political landscape. In a context of mobility and sustainability, an attribution system for public transport subsidies, based on the home-to-school distance and on the age of the students, was designed. The implementation of these projects required the construction of geographical databases as well as the elaboration of the new analysis methods presented in this thesis, which thus developed through a constant dialectic between theoretical research and practical necessities. The first part of this work focuses on the analysis of the city's pedestrian network. Its morphology is investigated through multi-scale approaches to the concept of centrality. The first conception, straightness centrality, stipulates that being central is being connected to the others in a straight line. The second, undoubtedly more intuitive, is called closeness centrality and expresses the fact that being central is being close to the others. The methods developed aim to evaluate the connectivity and walkability of the network, while suggesting possible improvements (creation of pedestrian shortcuts). The third and final theoretical section presents and develops an algorithm for regularised optimal transport. By minimising home-to-school distances while respecting school capacities, the algorithm enables the production of student allocation scenarios. The implementation of the Lagrange multipliers offers a visualisation of the spatial cost associated with the schooling infrastructures and with the students' home locations. The second part of this thesis recounts the principal aspects of three projects carried out in the context of school management: the design of an attribution system for public transport subsidies, a school redistricting process, and the simulation of student pedestrian flows to school.
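The allocation algorithm can be illustrated with a minimal entropically regularized optimal-transport (Sinkhorn) sketch: students are assigned to schools by minimizing home-to-school distance under capacity constraints. The thesis's exact regularization may differ from the entropic one assumed here, and the coordinates and capacities are invented.

```python
# Entropic optimal-transport sketch for school allocation: minimize
# total home-to-school distance subject to school capacities.
import numpy as np

rng = np.random.default_rng(3)
students = rng.uniform(0, 10, (200, 2))        # home coordinates (km)
schools = np.array([[2.0, 2.0], [8.0, 3.0], [5.0, 8.0]])
capacity = np.array([80.0, 60.0, 60.0])        # seats per school

C = np.linalg.norm(students[:, None, :] - schools[None, :, :], axis=2)
a = np.full(200, 1.0)                          # one seat per student

# Sinkhorn iterations; eps controls how "soft" the assignment is.
eps = 0.3
K = np.exp(-C / eps)
u = np.ones(200)
for _ in range(500):
    v = capacity / (K.T @ u)
    u = a / (K @ v)

P = u[:, None] * K * v[None, :]                # transport plan
assignment = P.argmax(axis=1)
print(np.bincount(assignment, minlength=3))    # pupils per school
# eps*log(u) and eps*log(v) approximate the dual (Lagrange) potentials,
# i.e. the "spatial cost" attached to home locations and schools.
```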
Abstract:
This thesis focused on statistical analysis methods and proposes the use of Bayesian inference to extract the information contained in experimental data by estimating the parameters of an Ebola model. The model is a system of differential equations expressing the behavior and dynamics of Ebola. Two data sets (onset and death data) were both used to estimate parameters, which had not been done in previous research (Chowell, 2004). To be able to use both data sets, a new version of the model was built. Model parameters were estimated and then used to calculate the basic reproduction number and to study the disease-free equilibrium. The estimates were useful for determining how well the model fits the data and how good the estimates were, in terms of the information they provided about the possible relationship between variables. The solution showed that the Ebola model fits the observed onset data at 98.95% and the observed death data at 93.6%. Since Bayesian inference cannot be performed analytically, the Markov chain Monte Carlo approach was used to generate samples from the posterior distribution over parameters. The samples were used to check the accuracy of the model and other characteristics of the target posteriors.
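The estimation pipeline, an epidemic model integrated forward and sampled with MCMC, can be sketched in a few lines. The sketch below uses a simple SIR model with random-walk Metropolis on synthetic onset counts; the thesis's Ebola model, data, priors, and noise model are richer, so everything below is an assumption for illustration.

```python
# Random-walk Metropolis sampling of transmission (beta) and recovery
# (gamma) rates of an Euler-integrated SIR model fitted to synthetic
# daily onset counts; R0 = beta / gamma for this model.
import numpy as np

rng = np.random.default_rng(4)

def sir_onsets(beta, gamma, days=60, N=1e6, I0=10.0):
    """Euler-integrated SIR; returns daily new cases (onsets)."""
    S, I = N - I0, I0
    onsets = []
    for _ in range(days):
        new = beta * S * I / N
        S -= new
        I += new - gamma * I
        onsets.append(new)
    return np.array(onsets)

data = sir_onsets(0.35, 0.15) + rng.normal(0, 20, 60)   # synthetic data

def log_post(theta):
    beta, gamma = theta
    if not (0 < beta < 2 and 0 < gamma < 2):            # flat priors
        return -np.inf
    resid = data - sir_onsets(beta, gamma)
    return -0.5 * np.sum(resid ** 2) / 20.0 ** 2        # Gaussian likelihood

theta, lp = np.array([0.5, 0.5]), -np.inf
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.01, 2)               # random-walk step
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

beta_hat, gamma_hat = np.mean(samples[10000:], axis=0)  # drop burn-in
print(f"beta~{beta_hat:.3f}, gamma~{gamma_hat:.3f}, R0~{beta_hat/gamma_hat:.2f}")
```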
Abstract:
This work highlights and analyzes the citations and co-citations by different authors, countries, and institutions in a series of studies on biofuel. These relations form a knowledge map that shows the areas of research pursued by different countries and authors; the contributions of different institutions are also shown. With the knowledge map developed, areas of research that still need more attention, as well as the most important studies, are highlighted. The software used for this analysis is CiteSpace, developed by Chaomei Chen; the sources of information are articles retrieved from ISI Web of Science. Biofuel as a renewable form of energy is discussed, covering its sources, types, methods of production, effects, market, and producers. Plans and strategies aimed at boosting the world biofuel market are listed, along with recent research on the topic. Knowledge mapping, its types and methods, and the method and software used for the analysis are also discussed.
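The core operation behind such maps, co-citation counting, is easy to state: two references are co-cited whenever they appear together in a citing article's reference list. The sketch below shows that counting step on invented reference IDs; CiteSpace performs this (and much more, such as clustering and burst detection) on Web of Science records.

```python
# Minimal co-citation counting over (invented) reference lists.
from collections import Counter
from itertools import combinations

reference_lists = [
    ["ref_A", "ref_B", "ref_C"],
    ["ref_B", "ref_C", "ref_D"],
    ["ref_A", "ref_B"],
]

cocitations = Counter()
for refs in reference_lists:
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

for (r1, r2), n in cocitations.most_common(3):
    print(f"{r1} -- {r2}: co-cited {n} times")
```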
Abstract:
Recent years have produced great advances in instrumentation technology. The amount of available data has been increasing due to the simplicity, speed, and accuracy of current spectroscopic instruments. Most of these data are, however, meaningless without proper analysis, which has been one of the reasons for the growing success of multivariate handling of such data. Industrial data are commonly not designed data; in other words, there is no exact experimental design, but rather the data have been collected as a routine procedure during an industrial process. This places certain demands on the multivariate modeling, as the selection of samples and variables can have an enormous effect. Common approaches in the modeling of industrial data are PCA (principal component analysis) and PLS (projection to latent structures, or partial least squares), but there are also other methods that should be considered, among them multi-block modeling and nonlinear modeling. This thesis shows that the results of data analysis vary according to the modeling approach used, making the selection of the approach dependent on the purpose of the model. If the model is intended to provide accurate predictions, the approach should differ from the case where the purpose is mostly to obtain information about the variables and the process. For industrial applicability it is essential that the methods are robust and sufficiently simple to apply; in this way the methods and results can be compared and an approach selected that suits the intended purpose. Differences between data analysis methods are compared in this thesis with data from different fields of industry. In the first two papers, the multi-block method is considered for data originating from the oil and fertilizer industries, and the results are compared to those from PLS and priority PLS. The third paper considers the applicability of multivariate models to process control for a reactive crystallization process. In the fourth paper, nonlinear modeling is examined with a data set from the oil industry; the response has a nonlinear relation to the descriptor matrix, and the results are compared between linear modeling, polynomial PLS, and nonlinear modeling using nonlinear score vectors.
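The distinction between exploratory and predictive modeling drawn above can be shown on one synthetic data set: PCA to inspect the variable structure, PLS to predict a response. This is a generic contrast under assumed data, not a reproduction of the thesis's industrial cases.

```python
# PCA for exploration vs. PLS for prediction on the same synthetic,
# "undesigned" data driven by one latent process variable.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n = 100
t = rng.standard_normal(n)                     # latent process variation
X = np.column_stack([t + 0.1 * rng.standard_normal(n) for _ in range(8)])
y = 2.0 * t + 0.2 * rng.standard_normal(n)     # response driven by t

pca = PCA(n_components=3).fit(X)
print("PCA explained variance:", np.round(pca.explained_variance_ratio_, 3))

pls = PLSRegression(n_components=2).fit(X, y)
print("PLS R^2:", round(pls.score(X, y), 3))
```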
Abstract:
This is a study of team social networks, their antecedents and outcomes. In focusing attention on the structural configuration of the team this research contributes to a new wave of thinking concerning group social capital. The research site was a random sample of Finnish work organisations. The data consisted of 499 employees in 76 teams representing 48 different organisations. A systematic literature review and quantitative methods were used in conducting the research: the former primarily to establish the current theoretical position on the relationships among the variables and the latter to test these relationships. Social network analysis was the primary method used in identifying the social-network relations among the work-team members. The first and key contribution of this study is that it relates the structural network properties of work teams to behavioural outcomes, attitudinal outcomes and, ultimately, team performance. Moreover, it shows that addressing attitudinal outcomes is also important in terms of team performance; attitudinal outcomes (team identity) mediated the relationship between the team's performance and its social network. The second contribution is that it examines the possible antecedents of the social structure. It is thus one response to Salancik's (1995) call for a network theory in that it explains why certain network characteristics exist. It demonstrates that irrespective of whether or not a team is heterogeneous in terms of age or gender, educational diversity may protect it from centralisation. However, heterogeneity in terms of gender turned out to have a negative impact on density. Thirdly, given the observation that the benefits of (team) networks are typically theorised and modelled without reference to the nature of the relationships comprising the structure, the study directly tested whether team knowledge mediated the effects of instrumental and expressive network relationships on team performance. Furthermore, with its focus on expressive networks that link the workplace to a more informal world, which have been rather neglected in previous research, it enhances knowledge of teams and networks. The results indicate that knowledge sharing fully mediates the influence of complementarities between dense and fragmented instrumental network relationships, thus providing empirical validation of the implicit understanding that networks transfer knowledge. Fourthly, the study findings suggest that an optimal configuration of the work-team social-network structure combines both bridging and bonding social relationships.
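Two structural team-network properties recurring above, density and centralisation, can be computed directly. The sketch below uses an invented five-person advice network and the standard Freeman degree-centralization formula; the thesis's own measures may be operationalized differently.

```python
# Density and Freeman degree centralization of an invented team network.
import networkx as nx

G = nx.Graph([("a", "b"), ("a", "c"), ("a", "d"), ("a", "e"), ("b", "c")])

def degree_centralization(G):
    """Freeman degree centralization: sum of (max degree - degree_i),
    normalized by (n - 1) * (n - 2), for an undirected graph."""
    n = G.number_of_nodes()
    degrees = [d for _, d in G.degree()]
    return sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))

print(f"density:        {nx.density(G):.2f}")
print(f"centralization: {degree_centralization(G):.2f}")
```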
Abstract:
Appearance of trust in regional, co-operative networks. In our times, the value of social networks is widely acknowledged: one can say that networking is important for private persons and a must for companies and organizations in business life. This doctoral thesis examines three co-operative regional networks located in the Western Uusimaa (Länsi-Uusimaa) region in southernmost Finland, with both public organizations and private companies as participants (later called 'players'). Initially, all of them were co-financed from public funds, and two of them were still operational at the time of writing. The main target of these networks has been to act as learning networks, a learning network being an ensemble of research and development units and workplaces constituting a common forum for learning. The main focus of this study is on the qualitative and structural characteristics of the networks and how they relate to the trust within them. In addition to the development of trust, the study examines at what level organizational learning takes place within the networks and, lastly, what kinds of factors facilitate the development of social capital. The theoretical framework is built on the analysis of trust and social capital. Finding single definitions for such major concepts is close to impossible; from the point of view of the research questions it has been more relevant to concentrate on the aspects of networking and the relationships between the participating organizations. The overall view of this study is network-centric, and theories with a similar point of view have therefore been prioritized. Such is the theory of structural holes by Ronald S. Burt (1992); his views on the constraints affecting players in networks, in particular, have been widely applied. The purpose of this study has not been to create new theories or to analyse and compare the existing theoretical trends thoroughly. Instead, the existing theories have provided conceptual tools, which have been used to support the empirical results. The aim has been to create an explanatory case study containing a relevant discussion of the relationship between network characteristics and the appearance of trust. The conceptual categorization of confidence vs. trust created by Niklas Luhmann (1979) is another important theoretical building block. In most cases, co-operation in networks is initiated by people who already trust each other and are willing to work together. However, personal trust is not sufficient in the long run to sustain co-operation within the network: the more abstract systemic trust described by Luhmann must also emerge. In networks with different structures and at different development phases, these forms of trust appear at different levels. In this study, Luhmann's term systemic trust has been replaced by the concept of 'trust in the network as a system'. Structural characteristics of a network (density, centrality, structural holes, etc.) have been selected to explain the creation of social capital and trust. The ability to absorb new information is essential for the development of social capital. A qualitative analysis of the development phase has been used, applying the Learning Network Maturity Test by Leenamaija Otala (2000) and her work.
Thus, the qualitative and structural characteristics of the networks are used together when the creation of social capital and the appearance of trust are assessed. Social network analysis, questionnaires, and interviews were the research methods, combining quantitative and qualitative data. The viewpoint on the research data resembles the extensive case study method, in which various cases are explored and certain common features are compared between them and generic models. The development of trust, social capital, and organizational learning is explained in the study by comparing the networks at hand. As a case study, it does not aim to provide generalizable results in the way conventional surveys do; rather, phenomena and the mechanisms related to them are interpreted from the empirical data. The key finding of this study is that networks with high structural equality and clear target setting enable building trust in the network as a system. When systemic trust is present, changes in the personnel involved in the co-operation, for example, will not prevent the network from remaining operational. On the other hand, if the players are not well motivated to co-operate, if the network is structurally very centralized, or if some players hold a much more beneficial position than the others, systemic trust will not develop: trust tends to remain at the personal level and is directed at some players only. Such networks will not generate results and benefits for their players, and most probably will not live very long. In other words, learning networks cannot be based solely on a willingness to learn; they also require a willingness to co-operate.
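Burt's (1992) constraint measure, referred to above, is available directly in networkx: a player whose contacts are all connected to one another is highly constrained (few structural holes), while a player bridging otherwise disconnected contacts is not. The network below is invented for illustration.

```python
# Burt's constraint on an invented regional co-operation network:
# lower constraint = more brokerage across structural holes.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("municipality", "college"), ("municipality", "firm_A"),
    ("college", "firm_A"),                              # a closed triangle
    ("municipality", "firm_B"), ("firm_B", "firm_C"),   # a bridge outward
])

for node, c in sorted(nx.constraint(G).items(), key=lambda kv: kv[1]):
    print(f"{node:12s} constraint = {c:.2f}")
```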
Abstract:
Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category is represented by cardinal preference relations, which are relations that also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision-making methods and in operational research. This thesis aims at showing some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector given a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains a proof that twenty methods already proposed in the literature lead to unsatisfactory results, as they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. The thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented, and the section ends with considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons; this section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
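Two classical baselines for the priority-vector problem described above are the geometric-mean and principal-eigenvector estimators on a multiplicative pairwise comparison matrix, with Saaty's index as a standard consistency check. The sketch below shows these standard methods on an invented matrix; it does not reproduce the thesis's new algorithm or indices.

```python
# Priority vectors from a reciprocal pairwise comparison matrix
# (geometric-mean and eigenvector methods) plus Saaty's consistency index.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])       # reciprocal: A[j, i] = 1 / A[i, j]

# Geometric-mean (row) estimator, normalized to sum to 1.
w_gm = np.prod(A, axis=1) ** (1 / A.shape[0])
w_gm /= w_gm.sum()

# Principal-eigenvector estimator and Saaty's CI = (lambda_max - n)/(n - 1).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w_ev = np.abs(eigvecs[:, k].real)
w_ev /= w_ev.sum()
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)

print("geometric mean :", np.round(w_gm, 3))
print("eigenvector    :", np.round(w_ev, 3))
print(f"consistency index CI = {CI:.4f}")
```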
Abstract:
The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that may eventually lead to a systemic financial crisis. Sound tools are crucial, as they allow early policy actions to decrease or prevent further build-up of risks or to otherwise enhance the shock absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: i) build-up of widespread imbalances, ii) exogenous aggregate shocks, and iii) contagion. Accordingly, the systemic risks are matched by three categories of analytical methods for decision support: i) early-warning, ii) macro stress-testing, and iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus is on a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: i) to function as a display for individual data concerning entities and their time series, and ii) to serve as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data, and methods. The following five questions comprise the subsequent steps addressed in this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. The thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: i) fuzzifications, ii) transition probabilities, and iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of the visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
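The standard SOM that the SOFSM builds on can be written compactly. The sketch below trains a small map on random stand-ins for macro-financial indicators; the SOFSM's data, calibration, and the thesis's three extensions (fuzzifications, transition probabilities, network analysis) are not reproduced here.

```python
# Minimal standard SOM training loop: each observation pulls its
# best-matching unit (and that unit's grid neighbours) toward itself,
# with decaying learning rate and neighbourhood radius.
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 4))            # 4 stand-in indicators

rows, cols = 6, 6
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
W = rng.standard_normal((rows * cols, 4))    # codebook vectors

n_iter = 2000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                     # decaying learning rate
    sigma = 3.0 * (1 - t / n_iter) + 0.5            # decaying radius
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * sigma ** 2))           # neighbourhood kernel
    W += lr * h[:, None] * (x - W)

# Each observation now maps to a unit; an economy's trajectory over time
# can then be drawn on the 2-D grid, as the SOFSM does for crisis states.
bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
print("units occupied:", len(set(bmus.tolist())))
```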