Abstract:
The main objective of this study is to create a general-purpose life-cycle model that can be used to support equipment-level maintenance decisions across different industries. Value thinking was integrated into the life-cycle model, allowing its users to select and weight the value-creating elements according to their own views and to see which factors add value for the other members of the network. Earlier life-cycle models have focused mainly on a single perspective, so there is a clear need for a network-level tool; nor has value thinking previously been combined with models that apply life-cycle costing. The study followed a constructive research approach, and the life-cycle model was developed and tested with two cases from a business network in the mining industry. The theoretical part of the study covered the key topic areas of the work, such as maintenance, value elements and life-cycle costing. The empirical part drew on interviews with company representatives in the network and on data from the companies' information systems; ideas and perspectives from other industrial companies were also used in forming the value elements. The most significant result of the work is the first version of a value-based life-cycle model. For each network member, the model reports the maintenance costs and revenues of a piece of equipment annually and cumulatively over the whole life cycle, in both real and present-value terms. The model also yields the amount of additional value created through collaboration and its distribution among the network actors according to the value elements they have chosen. The model can therefore be used not only for forecasting but also for monitoring past costs and revenues. Overall, the results of the value-based life-cycle model can be used in contract negotiations on equipment-level maintenance, with respect to both the value-creating elements and the cost and revenue factors. Combining value thinking with life-cycle costing is still at an early stage, which makes it a natural topic for further research. Through open and close cooperation between companies, the value-based life-cycle model can be further developed into a highly useful tool for business networks. As competition tightens, the growth and distribution of a network's total value will be central topics, so the use of network tools of this kind is likely to become increasingly popular.
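As a rough illustration of the arithmetic such a model performs, the sketch below accumulates one network member's annual maintenance costs and revenues in both real and present-value terms; the cash flows, the five-year horizon and the 8% discount rate are invented for the example, not taken from the thesis.

```python
# Hypothetical sketch of the life-cycle arithmetic described above: for one
# network member, annual maintenance costs and revenues are accumulated over
# the equipment life cycle in real and present-value terms. All figures are
# illustrative, not the thesis's actual model.

def life_cycle_summary(costs, revenues, discount_rate):
    """costs/revenues: per-year lists of equal length; yields per-year and
    cumulative net cash flows in real and present-value terms."""
    rows, cum_real, cum_pv = [], 0.0, 0.0
    for year, (c, r) in enumerate(zip(costs, revenues), start=1):
        net = r - c
        pv = net / (1 + discount_rate) ** year
        cum_real += net
        cum_pv += pv
        rows.append((year, net, cum_real, pv, cum_pv))
    return rows

# Example: one member's five-year maintenance cash flows for a single machine.
costs = [120_000, 80_000, 80_000, 85_000, 90_000]
revenues = [150_000, 110_000, 110_000, 110_000, 115_000]
for year, net, cum_real, pv, cum_pv in life_cycle_summary(costs, revenues, 0.08):
    print(f"year {year}: net {net:>8.0f}  cum(real) {cum_real:>9.0f}  "
          f"PV {pv:>9.0f}  cum(PV) {cum_pv:>9.0f}")
```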
Abstract:
In accordance with Moore's law, the increasing number of on-chip transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture that can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, thereby providing higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance for achieving higher system reliability and, ultimately, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed implementation alternatives for dynamic voltage and frequency scaling (DVFS) and power gating at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, one of the key factors for NoCs, are also explored for energy saving. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
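To make the energy argument behind DVFS concrete, here is a minimal sketch using the standard first-order CMOS relation (dynamic power proportional to CV²f); the capacitance and the two voltage/frequency operating points are illustrative assumptions, not values from the thesis.

```python
# First-order CMOS model: for a fixed workload, dynamic energy scales with
# V^2 (frequency only sets how long the work takes), which is the saving a
# run-time DVFS controller tries to capture. All constants are assumed.
C_EFF = 1e-9          # effective switched capacitance in farads (assumed)
CYCLES = 1e9          # fixed workload in clock cycles (assumed)

def dynamic_energy(v, f):
    power = C_EFF * v ** 2 * f      # dynamic power in watts
    time = CYCLES / f               # seconds to finish the workload
    return power * time             # energy in joules

for v, f in [(1.1, 2.0e9), (0.8, 1.0e9)]:
    print(f"{v:.1f} V @ {f/1e9:.1f} GHz -> {dynamic_energy(v, f):.2f} J")
```

Running the same cycle count at the lower operating point roughly halves the dynamic energy here, at the cost of doubled execution time; power gating addresses the leakage component that this sketch omits.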
Abstract:
Industrial maintenance can be executed internally, acquired from the original equipment manufacturer, or outsourced to a service provider, which results in many different kinds of business relationships. To maximize the total value in a maintenance business relationship, it is important to know what the partner values. The value of maintenance services can be considered to consist of value elements, and the perceived total value for the customer and the service provider is the sum of these value elements. The specific objectives of this thesis are to identify the most important value elements for the maintenance service customer and provider, and to recognize where the value elements differ. The study was executed as a statistical analysis using the survey method. The data were collected by an online survey sent to 345 maintenance service professionals in Finland. In the survey, four different types of value elements were considered: the customer's high-critical and low-critical items and the service provider's core and support services. The elements most valued by the respondents were reliability, safety at work, environmental safety, and operator knowledge. The least valued elements were asset management factors and access to markets. Statistically significant differences in value elements between service types were also found. As a managerial implication, a value gap profile is presented. This Master's Thesis is part of the MaiSeMa (Industrial Maintenance Services in a Renewing Business Network: Identify, Model and Manage Value) research project, in which network decision models are created to identify, model and manage the value of maintenance services.
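One way to read such a value gap profile is as the per-element difference between customer and provider ratings. The sketch below is a hypothetical illustration only: the element names echo those mentioned above, but the ratings and the gap construction are assumptions, not the thesis's data.

```python
# Hypothetical value gap profile: gap = customer's mean rating minus the
# provider's mean rating for each value element. Ratings are invented.
customer = {"reliability": 6.4, "safety at work": 6.1, "operator knowledge": 5.8,
            "asset management": 3.9, "access to markets": 3.2}
provider = {"reliability": 6.0, "safety at work": 6.3, "operator knowledge": 5.1,
            "asset management": 4.6, "access to markets": 4.4}

for element in customer:
    gap = customer[element] - provider[element]
    side = "customer" if gap > 0 else "provider"
    print(f"{element:>20}: gap {gap:+.1f} (valued more by the {side})")
```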
Abstract:
The aim of this Master's thesis was to examine what a good network model of environmental management looks like for a municipality. The thesis also considered the benefits and drawbacks of a municipality's environmental management being organized as a network, compared with a non-networked arrangement. It further investigated which network governance models are effective and what the parties in the network expect from a municipal environmental management network. The end result of the thesis was a network model of environmental management for a municipality. The theoretical framework of the thesis was formed by public-sector network governance, network management, environmental management and effectiveness; together these concepts formed the basis of the network model developed in the work. The research method was a qualitative case study, and the subject of the study was the environmental management network of a municipal organization, comprising an expert working group on environmental management. The research data were collected through thematic interviews: thirteen members of the expert working group were interviewed individually, in pairs and in groups.
Abstract:
In rapidly changing competitive situations, small and medium-sized software companies in particular must make strategic decisions to gain competitive advantage, and the effects of those decisions can be seen only after a long time. Management therefore needs decision-support systems that both produce information to support decision making and help reduce the risks the decisions entail. The objective of this work was to develop a life-cycle cost model for the software product business of software companies, with which management can assess the total costs of software products over their whole life cycle. The life-cycle cost model was examined within the theoretical frameworks of both life-cycle costing and software production processes. Empirical data were collected with the help of a software company that participated in the study. The life-cycle cost model developed in the study differs from many other cost models examined in that it approaches the problem of life-cycle costs from a strategic perspective, whereas many other models take an information-technology approach. Thus the management of a software company can steer the product business as part of strategic decision making, both by means of the total life-cycle costs of a product and through the indirect effects of the life-cycle cost model.
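As a hypothetical illustration of the whole-life cost view such a model offers management, the sketch below sums invented software product costs by life-cycle phase; the phases and figures are not from the thesis.

```python
# Invented whole-life cost breakdown for a software product: costs per
# life-cycle phase, per year, summed over the product's life. Illustrative only.
PHASES = {
    "development": [400_000, 150_000, 0, 0, 0],
    "maintenance": [0, 60_000, 90_000, 110_000, 120_000],
    "support":     [0, 40_000, 50_000, 50_000, 55_000],
    "retirement":  [0, 0, 0, 0, 30_000],
}

years = len(next(iter(PHASES.values())))
for phase, by_year in PHASES.items():
    print(f"{phase:>12}: total {sum(by_year):>9,} over {years} years")
print(f"{'whole life':>12}: total {sum(sum(v) for v in PHASES.values()):>9,}")
```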
Abstract:
The hyper-star interconnection network was proposed in 2002 to overcome the drawbacks of the hypercube and its variations concerning network cost, which is defined as the product of the degree and the diameter. Properties of the graph such as connectivity, symmetry and embeddings have been studied by other researchers, and routing and broadcasting algorithms have also been designed. This thesis studies the hyper-star graph from both the topological and the algorithmic point of view. On the topological side, we try to establish relationships between hyper-star graphs and other known graphs, give a formal equation for the surface area of the graph, and investigate its Hamiltonicity. On the algorithmic side, we design an all-port broadcasting algorithm and a single-port neighbourhood broadcasting algorithm for the regular form of the hyper-star graph; both algorithms are time-optimal. Furthermore, we prove that the folded hyper-star, a variation of the hyper-star, is maximally fault-tolerant.
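For concreteness, the sketch below computes the network cost (degree times diameter) of a small regular hyper-star by brute force. The adjacency rule used (swap the first bit with any differing bit) follows our reading of the usual HS(m, k) definition and should be treated as an assumption rather than the thesis's exact construction.

```python
# Brute-force network cost (degree * diameter) of a small hyper-star graph,
# under an assumed HS(m, k) adjacency rule: vertices are weight-k binary
# strings of length m; an edge swaps the first bit with a differing bit.
from itertools import combinations
from collections import deque

def hyper_star(m, k):
    verts = [frozenset(c) for c in combinations(range(m), k)]  # positions of 1s
    def neighbours(v):
        return [v ^ frozenset({0, j}) for j in range(1, m)
                if (0 in v) != (j in v)]        # bits at 0 and j differ: swap them
    return verts, neighbours

def degree_and_diameter(verts, neighbours):
    deg = max(len(neighbours(v)) for v in verts)
    diam = 0
    for s in verts:                             # BFS from every vertex
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in neighbours(u):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        diam = max(diam, max(dist.values()))
    return deg, diam

verts, nbrs = hyper_star(6, 3)                  # small regular hyper-star HS(6, 3)
deg, diam = degree_and_diameter(verts, nbrs)
print(f"HS(6,3): degree {deg}, diameter {diam}, network cost {deg * diam}")
print(f"hypercube Q_n for comparison: cost n*n (e.g. n=4 -> {4 * 4})")
```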
Abstract:
In the last decade, the potential macroeconomic effects of intermittent large adjustments in microeconomic decision variables such as prices, investment, consumption of durables or employment – behavior which may be justified by the presence of kinked adjustment costs – have been studied in models where economic agents continuously observe the optimal level of their decision variable. In this paper, we develop a simple model that introduces infrequent information into a kinked adjustment cost model by assuming that agents do not continuously observe the frictionless optimal level of the control variable. Periodic releases of macroeconomic statistics or dividend announcements are examples of such infrequent information arrivals. We first solve for the optimal individual decision rule, which is found to be both state and time dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. Our model has the distinct characteristic that a vast number of agents tend to act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. We show that these results differ substantially from those obtained with full-information adjustment cost models.
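The clustering mechanism can be illustrated with a toy simulation (not the paper's model): agents follow a two-sided inaction band but can only adjust at periodic information dates, so adjustments bunch together at those dates. Band width, shock sizes and the information interval are all assumed for illustration.

```python
# Toy illustration of state- and time-dependent adjustment: agents' gaps from
# the frictionless optimum drift every period, but gaps are only observed (and
# closed, at a fixed cost) when information is released. Parameters assumed.
import random

random.seed(0)
N, T, BAND, INFO_EVERY = 10_000, 60, 1.0, 12   # agents, periods, band, info interval
gap = [0.0] * N                                 # deviation from frictionless optimum

for t in range(1, T + 1):
    shock = random.gauss(0.0, 0.3)              # aggregate shock this period
    for i in range(N):
        gap[i] += shock + random.gauss(0.0, 0.2)   # plus idiosyncratic drift
    if t % INFO_EVERY == 0:                     # information arrives: gaps observed
        adjusters = [i for i in range(N) if abs(gap[i]) > BAND]
        for i in adjusters:
            gap[i] = 0.0                        # pay the fixed cost, reset to optimum
        print(f"t={t}: {len(adjusters)/N:.1%} of agents adjust simultaneously")
```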
Abstract:
This thesis consists of three essays in applied microeconomics. Using models of learning and network externalities, it studies the behaviour of economic agents in different situations. The first essay examines the use of natural resources under uncertainty and learning. Several authors have addressed the subject, but here we study a learning model in which the agents consuming the resource do not hold the same prior beliefs. The second essay addresses the generic problem faced, for example, by a research fund wishing to choose the best among several researchers of different generations and different levels of experience. The third essay studies a particular form of business organization called multi-level marketing (MLM). The first chapter is entitled "Renewable Resource Consumption in a Learning Environment with Heterogeneous Beliefs". We use a learning model with heterogeneous beliefs to study the exploitation of a natural resource under uncertainty. Two types of learning must be distinguished here: adaptive learning and learning proper, terms borrowed from Koulovatianos et al. (2009). We show that, compared with adaptive learning, learning has a negative impact on total consumption of the resource by all its exploiters, but that individually some exploiters may consume more of the resource under learning than under adaptive learning. Indeed, under learning, consumers face two kinds of incentives not to consume the resource (and thus to invest): their own incentive, which always has a negative effect on consumption of the resource, and the heterogeneous incentive, whose effect may be positive or negative. The overall effect of learning on individual consumption therefore depends on the sign and magnitude of the heterogeneous incentive. Moreover, using the absolute and relative variations in consumption following a change in beliefs, we find that the exploiters tend to converge towards a common decision. The second chapter is entitled "A Perpetual Search for Talent across Overlapping Generations". Using a dynamic overlapping-generations model, we study how a research fund should proceed to select the best researchers to finance. The researchers do not all have the same seniority in research. For an optimal decision, the research fund must rely on both the seniority and the past work of the researchers who have applied for a grant, and it should be more lenient towards young researchers in the requirements they must meet to be funded. This work is also a contribution to the analysis of bandit problems: here, instead of trying to compute an index, we propose ranking and progressively eliminating researchers by comparing them two at a time. The third chapter is entitled "Paradox about the Multi-Level Marketing (MLM)". For several decades, a particular form of company has become increasingly common, in which the product is marketed through distributors. Each distributor can sell the product and/or recruit other distributors for the company.
A distributor earns profits on his own sales and also receives commissions on the sales of the distributors he has recruited. This is multi-level marketing (MLM). The structure of these companies is often described by critics as a pyramid scheme or a scam, and therefore unsustainable. But the promoters of multi-level marketing reject these allegations, arguing that the goal of MLMs is to sell, not to recruit; the payoffs and the rules of the game are such that distributors have a greater incentive to sell the product than to recruit. However, if this argument by MLM promoters is valid, a paradox appears. Why would a distributor who genuinely wants to sell the product and make a profit recruit other individuals who will then operate in the same market? How can one understand that an agent would recruit people who could become his competitors, when it is well established that every entrepreneur avoids and even fights competition? This is the kind of question this chapter addresses. To explain the paradox, we use the intrinsic structure of MLM organizations. In reality, to be able to sell well, the distributor must recruit: the commissions received through recruiting confer selling power, in the sense that they allow the recruiter to offer a competitive price for the product he wants to sell. Moreover, MLMs have a structure similar to that of multi-sided markets in the sense of Rochet and Tirole (2003, 2006) and Weyl (2010): recruiting has an external effect on selling, and selling has an external effect on recruiting, all of which is managed by the promoter of the organization. Thus, if the promoter does not take these externalities into account when setting the various commissions, agents may lean more or less towards recruiting.
Abstract:
In image synthesis, reproducing the complex effects of light on translucent materials such as wax, marble, or skin contributes greatly to the realism of an image. Unfortunately, this added realism is computationally expensive. Models based on diffusion theory aim to reduce this cost by simulating the physical behaviour of subsurface light transport while imposing smoothness constraints on the incident and outgoing light. An important component of these models is their use in hierarchically evaluating the numerical integral of the illumination over an object's surface. This thesis first reviews the current literature on realistic rendering of translucency, before investigating in greater depth the application and extension of diffusion models in image synthesis. We propose and evaluate a new hierarchical numerical integration technique that uses a new frequency analysis of the outgoing and incident light to efficiently adapt the sampling rate during integration. We apply this theory to several state-of-the-art diffusion models, offering a potential improvement in their efficiency and accuracy.
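As an example of the diffusion-theory models referred to above, the sketch below evaluates the classical dipole diffusion profile R(r) of Jensen et al. (2001); the marble-like material parameters are illustrative assumptions, and the thesis's models may differ in their details.

```python
# Classical dipole diffusion profile R(r): a real and a mirrored virtual point
# source approximate subsurface scattering below a flat boundary. Material
# parameters below are illustrative, not from the thesis.
import math

def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime          # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coeff.
    Fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + Fdr) / (1.0 - Fdr)                        # boundary mismatch term
    z_r = 1.0 / sigma_t_prime                            # real source depth
    z_v = z_r * (1.0 + 4.0 * A / 3.0)                    # virtual (mirrored) source
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    contrib = lambda z, d: z * (sigma_tr * d + 1.0) * math.exp(-sigma_tr * d) / d**3
    return alpha_prime / (4.0 * math.pi) * (contrib(z_r, d_r) + contrib(z_v, d_v))

# R(r) falls off quickly with distance from the illuminated point, which is
# exactly what hierarchical integration exploits to lower the sampling rate.
for r_mm in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"r = {r_mm} mm: R = {dipole_Rd(r_mm, sigma_a=0.02, sigma_s_prime=2.0):.6f}")
```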
Abstract:
Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNAs called siRNAs. This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, especially in AIDS, neurodegenerative diseases, cholesterol disorders and cancer in mice, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. When designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. So before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods have already been developed that consider both the inhibition efficiency and the off-target possibility of siRNA against a gene; of these, only a few have achieved good inhibition efficiency, specificity and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs, which may be useful in the area of gene silencing and drug design for tumor development. This study aims to investigate the currently available siRNA prediction approaches and to devise a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in developing an improved solution. The approaches proposed in this study extend some of the well-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches and thermodynamic features such as whole stacking energy, to improve the prediction accuracy, inhibition efficiency, sensitivity and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency to degrade target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of siRNA against non-target genes. The models are trained and tested against a large dataset of siRNA sequences. The validations are conducted using the Pearson Correlation Coefficient, the Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity and specificity. We find that OpsiD is capable of predicting the inhibition capacity of siRNA against a target mRNA with improved results over the state-of-the-art techniques, and we are also able to understand the influence of whole stacking energy on siRNA efficiency.
The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since we have taken efforts to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects in gene silencing across various bioinformatics applications can be largely overcome. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing. The approach may prove useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
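A minimal sketch of the SVM classification step, assuming scikit-learn and a toy composition-based feature set: the sequences, labels and features below are invented for illustration and are far simpler than the thermodynamic and statistical features the thesis actually uses.

```python
# Toy SVM classifier for siRNA efficiency: encode sequences as simple
# composition features and classify efficient vs. inefficient. Sequences,
# labels and features are invented; OpsiD's real feature set is richer.
from sklearn.svm import SVC

def features(seq):
    n = len(seq)
    gc = (seq.count("G") + seq.count("C")) / n        # GC content
    return [gc, seq.count("A") / n, seq.count("U") / n,
            1.0 if seq[0] in "AU" else 0.0]           # 5' A/U as a crude cue

train_seqs = ["AUGGCUACGAUCGUAGCUA", "GCGCGGCCGCGGGCCGCGC",
              "AUAUAUGCGAUAUAGCUAU", "GGCGGCCGGCGCGGCCGGC"]
train_labels = [1, 0, 1, 0]                           # 1 = efficient (toy labels)

clf = SVC(kernel="rbf").fit([features(s) for s in train_seqs], train_labels)

candidate = "AUGCAUGGAUCGAUUAGCA"
print("efficient?", bool(clf.predict([features(candidate)])[0]))
```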
Abstract:
Title: Data-Driven Text Generation using Neural Networks
Speaker: Pavlos Vougiouklis, University of Southampton
Abstract: Recent work on neural networks shows their great potential at tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models can be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media, along with the training methods that enable it to capture contextual information and effectively participate in public conversations, will be discussed.
Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015, he joined the Web and Internet Science (WAIS) research group of the University of Southampton, where he is currently working towards a PhD in Neural Network Approaches for Natural Language Processing.

Title: Provenance is Complicated and Boring — Is there a solution?
Speaker: Darren Richardson, University of Southampton
Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse-meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of communicating that provenance to casual users is not trivial: users should not need a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users an insight into the provenance without having to build a bespoke system for each and every provenance installation?
Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
Abstract:
Identification of cost-determinant variables and evaluation of their degree of influence play an essential role in building reliable cost models and in enhancing the competitive edge of quantity surveyors and contracting organisations. Sixty-seven variables affecting pre-tender construction cost estimates were identified through the literature and interviews. These factors are grouped into six categories and a comparative analysis of their impact is conducted. Priority ranking of the cost-influencing factors is carried out using a questionnaire survey administered to quantity surveyors based in the UK. The findings of this survey indicate strong agreement between quantity surveyors in ranking the cost-influencing factors of construction projects. Comparisons between the outcomes of this research and other related studies are presented.
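The abstract does not name the agreement statistic used, but Kendall's coefficient of concordance W is a common choice for this kind of multi-judge ranking; the sketch below, with invented rankings, shows how it would be computed.

```python
# Kendall's W for agreement among rankers: 0 = no agreement, 1 = perfect.
# The three surveyors' rankings of five cost factors below are invented.
def kendalls_w(rankings):
    """rankings: one rank list per judge, each ranking n items with ranks 1..n."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]  # rank sums per item
    mean = sum(totals) / n
    s = sum((t - mean) ** 2 for t in totals)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

surveys = [[1, 2, 3, 4, 5],      # 1 = most influential factor
           [2, 1, 3, 5, 4],
           [1, 3, 2, 4, 5]]
print(f"Kendall's W = {kendalls_w(surveys):.2f}")   # 0.84 here: strong agreement
```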
Abstract:
Constructing a building is a long process that can take several years. Most building services products are installed while a building is constructed, but they are not operated until the building is commissioned. The warranty term for the building services systems may cover the time from their installation to the end of the warranty period, so prior to the commissioning of the building, the building services systems are protected by warranty although they are not operated. The burn-in time for such systems is important when warranty cost is analyzed. In this paper, warranty cost models for products with burn-in periods are presented. Two burn-in policies are developed to optimize the total mean warranty cost. A special case on the relationship between the failure rates of the product in the dormant state and in the operating state is presented.
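As a hedged illustration of the burn-in trade-off (assumptions ours, not the paper's models): under minimal repair with a Weibull hazard whose shape parameter is below one (infant mortality), the expected number of warranty failures after a burn-in of length b is H(b + w) - H(b), so longer burn-in lowers warranty cost while adding test cost.

```python
# Illustrative burn-in optimization under assumed minimal repair and a
# decreasing Weibull hazard; all parameter values are invented.
def weibull_cum_hazard(t, beta, eta):
    return (t / eta) ** beta

def mean_total_cost(b, w, beta, eta, c_burn_in, c_repair):
    expected_failures = (weibull_cum_hazard(b + w, beta, eta)
                         - weibull_cum_hazard(b, beta, eta))
    return c_burn_in * b + c_repair * expected_failures

# Sweep burn-in lengths (months) for a 24-month warranty:
for b in (0.0, 1.0, 2.0, 4.0):
    cost = mean_total_cost(b, w=24.0, beta=0.6, eta=60.0,
                           c_burn_in=10.0, c_repair=400.0)
    print(f"burn-in {b:>4.1f} months: mean total cost {cost:8.2f}")
```

With these invented numbers the total cost is minimized by a short burn-in of one to two months, which is the kind of optimum the paper's policies are designed to find.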
Abstract:
The performance benefit of grid systems comes from different strategies, among which partitioning applications into parallel tasks is the most important. However, in most cases the enhancement from partitioning is smoothed by synchronization overheads, mainly due to the high variability in the execution times of the different tasks, which is in turn accentuated by the large heterogeneity of grid nodes. In this paper we design hierarchical queuing-network performance models able to accurately analyze grid architectures and applications. Based on the model results, we introduce a new allocation policy that combines task partitioning with task replication. The models are used to study two real applications and to evaluate the performance benefits obtained with allocation policies based on task replication.
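A toy Monte Carlo sketch (not the paper's queuing-network models) of why replication pays off under heterogeneity: a job waits for its slowest task, and running each task on several nodes and keeping the fastest replica trims that synchronization tail. The run-time distributions are assumed.

```python
# Toy model: job time = max over tasks of the min over replicas of each
# replica's run time, with node speeds drawn from an assumed distribution.
import random

random.seed(1)

def job_time(n_tasks, replicas):
    # Each replica runs on a node with its own mean service time.
    task = lambda: min(random.expovariate(1.0 / random.uniform(0.5, 3.0))
                       for _ in range(replicas))
    return max(task() for _ in range(n_tasks))   # synchronization: wait for slowest

for r in (1, 2, 3):
    mean = sum(job_time(n_tasks=32, replicas=r) for _ in range(2000)) / 2000
    print(f"{r} replica(s) per task: mean job time {mean:.2f}")
```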
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising considerably fewer parameters than those generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
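The selection idea can be sketched as greedy forward selection of polynomial regressors, adding at each step the candidate that most reduces the residual variance. This illustrates the OLS-style selection step only, on synthetic data; the paper's layer-wise PD construction is more involved.

```python
# Greedy forward selection of polynomial regressors by residual variance
# reduction, in the spirit of OLS term selection. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))
y = 1.5 * x[:, 0] ** 2 - 0.8 * x[:, 0] * x[:, 1] + 0.1 * rng.normal(size=200)

# Candidate polynomial terms (partial-description style, order <= 2):
terms = {"x1": x[:, 0], "x2": x[:, 1], "x1^2": x[:, 0] ** 2,
         "x2^2": x[:, 1] ** 2, "x1*x2": x[:, 0] * x[:, 1]}

selected, cols = [], [np.ones(len(y))]           # start from a bias column
for _ in range(3):                               # pick the 3 best regressors
    best = None
    for name, col in terms.items():
        if name in selected:
            continue
        A = np.column_stack(cols + [col])
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        sse = float(resid @ resid)               # residual sum of squares
        if best is None or sse < best[0]:
            best = (sse, name, col)
    selected.append(best[1]); cols.append(best[2])
    print(f"selected {best[1]:>6}  (residual SSE {best[0]:.3f})")
```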