895 results for cost model
Abstract:
The need for high bandwidth, driven by the explosion of new multimedia-oriented IP-based services as well as by increasing broadband access requirements, is leading to the need for flexible and highly reconfigurable optical networks. While transmission bandwidth does not represent a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network represent the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. In this Ph.D. thesis, solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks, are proposed. In particular, solutions based on devices and components that are expected to mature in the near future are proposed, with the aim of limiting the employment of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching has been carried out within three relevant research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union in the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and to contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in Chapter 2. Then, Chapter 4 presents multi-fiber switches, which employ the wavelength and space domains jointly to solve contention. Chapter 5 shows buffered switches, which solve contention in the time domain in addition to the wavelength domain. Finally, Chapter 6 presents a cost model to compare different switch architectures in terms of cost.
Abstract:
This article examines the factors influencing the energy efficiency of intralogistics resources, using the carrying chain conveyor (Tragkettenförderer) as an example. It discusses which methods can be used to determine energy demand and consumption, and how the dependencies and interactions of the influencing factors on energy consumption can be identified. From the results, measures are derived by which the energy consumption of intralogistics resources can be reduced. In addition, an outlook is given on increases in energy consumption caused by changes over time as a result of wear. The findings form the basis for the development of a cost model that represents the actually occurring life-cycle costs for energy and maintenance more transparently than existing cost models. The model is intended to reduce energy and maintenance costs over the life cycle by planning maintenance measures optimally from a total-cost perspective.
Abstract:
Cross-sectional designs, longitudinal designs in which a single cohort is followed over time, and mixed-longitudinal designs in which several cohorts are followed for a shorter period are compared by their precision, potential for bias due to age, time and cohort effects, and feasibility. Mixed-longitudinal studies have two advantages over longitudinal studies: isolation of time and age effects, and shorter completion time. Though the advantages of mixed-longitudinal studies are clear, choosing an optimal design is difficult, especially given the number of possible combinations of the number of cohorts and the number of overlapping intervals between cohorts. The purpose of this paper is to determine the optimal design for detecting differences in group growth rates. The type of mixed-longitudinal study appropriate for modeling both individual and group growth rates is called a "multiple-longitudinal" design. A multiple-longitudinal study typically requires uniform or simultaneous entry of subjects, each of whom is observed until the end of the study. While recommendations for designing pure-longitudinal studies have been made by Schlesselman (1973b), Lefant (1990) and Helms (1991), design recommendations for multiple-longitudinal studies have never been published. It is shown that by using power analyses to determine the minimum number of occasions per cohort and the minimum number of overlapping occasions between cohorts, in conjunction with a cost model, an optimal multiple-longitudinal design can be determined. An example of systolic blood pressure values for cohorts of males and cohorts of females, ages 8 to 18 years, is given.
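The design search described here can be illustrated with a minimal sketch: enumerate candidate designs, keep those meeting a power target, and pick the cheapest. The linear cost function and its coefficients below are hypothetical placeholders, not the paper's actual cost model.

```python
# Illustrative sketch only: the cost function and power values are
# hypothetical, not those of the paper's cost model.

def design_cost(n_cohorts, n_subjects, n_occasions,
                c_entry=100.0, c_obs=25.0):
    """Hypothetical linear cost model: a fixed entry cost per subject
    plus a cost per measurement occasion."""
    return n_cohorts * n_subjects * (c_entry + c_obs * n_occasions)

def optimal_design(candidates, power_of, target_power=0.80):
    """Among candidate (cohorts, subjects, occasions) triples whose power
    to detect a group growth-rate difference meets the target, pick the
    cheapest one."""
    feasible = [d for d in candidates if power_of(d) >= target_power]
    return min(feasible, key=lambda d: design_cost(*d)) if feasible else None
```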
Abstract:
Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent of the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to compute statically time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on that concrete platform. The approach has been implemented and integrated in the CiaoPP system.
Abstract:
Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for a given platform. These parameters calibrate a cost model which, from then on, is able to compute statically time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
Abstract:
Effective static analyses have been proposed which allow inferring functions which bound the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and such bounds have been shown useful in a number of applications, such as granularity control in parallel execution. On the other hand, in certain distributed computation scenarios where different platforms come into play, each with different capabilities, it is more interesting to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution time. With this objective in mind, we propose a method which allows inferring upper and lower bounds on the execution times of the procedures of a program on a given execution platform. The approach combines compile-time cost bounds analysis with a one-time profiling of the platform in order to determine the values of certain constants for that platform. These constants calibrate a cost model which from then on is able to compute statically time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
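The calibration step these abstracts describe can be illustrated with a minimal sketch. This is not the CiaoPP implementation: the operation names, the example cost functions and the least-squares fit are assumptions for illustration only.

```python
import numpy as np

# Hypothetical low-level operations counted by the compile-time analysis.
OPS = ["unifications", "arithmetic", "calls"]

def calibrate(profile_runs):
    """One-time platform profiling: given benchmark runs with known
    operation counts and measured times, fit one time constant per
    operation by least squares."""
    counts = np.array([[run["counts"][op] for op in OPS] for run in profile_runs])
    times = np.array([run["time"] for run in profile_runs])
    constants, *_ = np.linalg.lstsq(counts, times, rcond=None)
    return dict(zip(OPS, constants))

def time_bound(cost_bounds, constants, n):
    """Evaluate a static time-bound function at input size n:
    sum of (operation-count bound) * (platform constant)."""
    return sum(constants[op] * bound(n) for op, bound in cost_bounds.items())

# Example: a procedure whose analysis yields quadratic unification and
# linear arithmetic bounds (illustrative cost functions).
bounds = {"unifications": lambda n: 3 * n**2,
          "arithmetic":   lambda n: 5 * n,
          "calls":        lambda n: n + 1}
```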
Abstract:
A relation between the Cost of Energy (COE), the maximum allowed tip speed, and the rated wind speed is obtained for wind turbines with a given target rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor (number of blades, the blade radial distributions of local solidity, twist angle, and airfoil type) play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options (DO), related to the technology of the power plant, tower and blades, are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of the limit tip speed, but that this is not the case for small values of the tip speed limit. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: a) reducing the rotor radius for sites with a high Weibull scale parameter, or b) increasing the rotor radius for sites with a low Weibull scale parameter.
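For reference, the Weibull probability density function mentioned above, and one common annualised form of such designers' cost models (the exact model used in the abstract is not specified; this form is illustrative), are:

\[ f(v) = \frac{k}{c}\left(\frac{v}{c}\right)^{k-1} \exp\!\left[-\left(\frac{v}{c}\right)^{k}\right], \qquad \mathrm{COE} = \frac{\mathrm{FCR}\cdot\mathrm{ICC} + \mathrm{AOE}}{\mathrm{AEP}} \]

where k and c are the Weibull shape and scale parameters, FCR is the fixed charge rate, ICC the initial capital cost, AOE the annual operating expenses, and AEP the annual energy production obtained by integrating the power curve against f(v).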
Abstract:
In this paper we describe a hybrid algorithm for an even number of processors, based on an algorithm for two processors and on the Overlapping Partition Method for tridiagonal systems. Moreover, we compare this hybrid method with Wang's partition method on a BSP computer. Finally, we compare the theoretical computation cost of both methods for a Cray T3D computer, using the cost model that the BSP model provides.
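For reference, the standard BSP cost model charges each superstep for local computation, communication and barrier synchronisation; for a program of S supersteps the predicted time is

\[ T = \sum_{s=1}^{S} \left( w_s + g\,h_s + l \right) \]

where w_s is the maximum local work in superstep s, h_s the maximum number of words sent or received by any processor in that superstep, g the communication cost per word, and l the synchronisation cost. The machine parameters g and l are what make the comparison on a concrete machine such as the Cray T3D possible.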
Abstract:
The recreational-use value of hiking in the Bellenden Ker National Park, Australia, has been estimated using a zonal travel cost model. Multiple-destination visitors have been accounted for by converting visitors' own ordinal ranking of the various sites visited into numerical weights, using an expected-value approach. The value of hiking and camping in this national park was found to be $AUS 250,825 per year, or $AUS 144.45 per visitor per year, which is similar to findings from other studies valuing recreational benefits. The management of the park can use these estimates when considering the introduction of a system of user-pays fees. In addition, the estimates might be important when decisions need to be made about the allocation of resources for the maintenance or upgrade of tracks and facilities.
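A minimal sketch of the zonal travel cost method in general follows; the zone data and the semi-log demand form are illustrative placeholders, not the study's actual inputs or specification.

```python
import numpy as np

# Illustrative zonal travel-cost sketch; all figures are placeholders.
zones = {          # zone: (population, visits/year, travel cost $AUS)
    "A": (50_000, 4_000, 10.0),
    "B": (120_000, 5_500, 25.0),
    "C": (300_000, 6_000, 60.0),
}

pop = np.array([z[0] for z in zones.values()])
visits = np.array([z[1] for z in zones.values()])
cost = np.array([z[2] for z in zones.values()])

# Fit ln(visit rate per 1000 population) = a + b * travel_cost.
rate = visits / pop * 1000
b, a = np.polyfit(cost, np.log(rate), 1)

# For a semi-log demand curve, consumer surplus per visit is -1/b;
# total recreational-use value scales this by annual visits.
cs_per_visit = -1.0 / b
total_value = cs_per_visit * visits.sum()
print(f"value per visit: {cs_per_visit:.2f}, total: {total_value:,.0f}")
```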
Abstract:
Multiresolution (or multi-scale) techniques make it possible for Web-based GIS applications to access large datasets. The performance of such systems relies on data transmission over the network and on multiresolution query processing. In the literature the latter has received little research attention so far, and the existing methods are not capable of processing large datasets. In this paper, we aim to improve multiresolution query processing in an online environment. A cost model for such queries is proposed first, followed by three strategies for its optimization. Significant theoretical improvement can be observed when comparing against available methods. The application of these strategies is also discussed, and a similar performance enhancement can be expected if they are implemented in online GIS applications.
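The abstract does not spell out the cost model; a generic decomposition for an online multiresolution query, separating server-side processing from network transfer, would be of the form

\[ C(q) = C_{\mathrm{cpu}}(q) + C_{\mathrm{io}}(q) + \frac{B(q)}{W} \]

where B(q) is the size of the result at the requested resolution and W the available network bandwidth; this is offered only as an illustration of the kind of terms such a model balances.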
Abstract:
The aim of this research was to improve the quantitative support for project planning and control, principally through the use of more accurate forecasting, for which new techniques were developed. This study arose from the observation that in most cases construction project forecasts were based on a methodology (c. 1980) which relied on the DHSS cumulative cubic cost model and network-based risk analysis (PERT). The former of these, in particular, imposes severe limitations, which this study overcomes. Three areas of study were identified, namely growth curve forecasting, risk analysis, and the interface of these quantitative techniques with project management. These fields have been used as a basis for the research programme. In order to give a sound basis for the research, industrial support was sought. This resulted in both the acquisition of cost profiles for a large number of projects and the opportunity to validate practical implementation. The outcome of this research project was deemed successful both in theory and in practice. The new forecasting theory was shown to give major reductions in projection errors. The integration of the new predictive and risk analysis technologies with management principles allowed the development of a viable software management aid which fills an acknowledged gap in current technology.
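For context, cumulative cubic cost models of the DHSS type express cumulative expenditure as a cubic S-curve in normalised time; a generic form (illustrative, not the DHSS parameterisation itself) is

\[ y(x) = S\left(ax^{3} + bx^{2} + cx\right), \qquad x = t/T, \quad a + b + c = 1 \]

where S is the contract sum, T the contract duration, and the constraint ensures the full sum is expended at completion, y(1) = S. The rigidity of such a fixed polynomial shape is the kind of limitation the new growth curve forecasting techniques set out to overcome.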
Abstract:
Biomass-to-Liquid (BTL) is one of the most promising low-carbon processes available to support the expanding transportation sector. This multi-step process produces hydrocarbon fuels from biomass, the so-called "second generation biofuels" that, unlike first generation biofuels, can make use of a wider range of biomass feedstock than just plant oils and sugar/starch components. A BTL process based on gasification has yet to be commercialized. This work focuses on the techno-economic feasibility of nine BTL plants. The scope was limited to hydrocarbon products, as these can be readily incorporated and integrated into conventional markets and supply chains. The evaluated BTL systems were based on pressurised oxygen gasification of wood biomass or bio-oil, and they were characterised by different fuel synthesis processes, including Fischer-Tropsch synthesis, the Methanol-to-Gasoline (MTG) process and the Topsoe Integrated Gasoline (TIGAS) synthesis. This was the first time that these three fuel synthesis technologies were compared in a single, consistent evaluation. The selected process concepts were modelled using the process simulation software IPSEpro to determine mass balances, energy balances and product distributions. For each BTL concept, a cost model was developed in MS Excel to estimate capital, operating and production costs. An uncertainty analysis based on the Monte Carlo statistical method was also carried out to examine how uncertainty in the input parameters of the cost model could affect its output (i.e. the production cost). This was the first time that an uncertainty analysis was included in a published techno-economic assessment study of BTL systems. It was found that bio-oil gasification cannot currently compete with solid biomass gasification, due to the lower efficiencies and higher costs associated with the additional thermal conversion step of fast pyrolysis. Fischer-Tropsch synthesis was the most promising fuel synthesis technology for commercial production of liquid hydrocarbon fuels, since it achieved higher efficiencies and lower costs than TIGAS and MTG. None of the BTL systems were competitive with conventional fossil fuel plants. However, if the government tax take were reduced by approximately 33%, or a subsidy of £55/t dry biomass were available, transport biofuels could be competitive with conventional fuels. Large-scale biofuel production may be possible in the long term through subsidies, fuel price rises and legislation.
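A minimal sketch of the kind of Monte Carlo uncertainty analysis described above follows; the parameter names, distributions and cost formula are hypothetical placeholders, not the study's MS Excel model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Monte Carlo propagation through a production-cost model;
# all distributions and constants below are hypothetical.
N = 100_000
capital = rng.triangular(180e6, 220e6, 280e6, N)   # total capital cost, £
feedstock = rng.normal(60.0, 8.0, N)               # £ per tonne dry biomass
annual_biomass = 400_000                           # t/yr, fixed assumption
annual_fuel = 90e6                                 # litres/yr, fixed assumption
crf = 0.11                                         # capital recovery factor

# Production cost (£/litre) = (annualised capital + feedstock bill) / output.
cost = (crf * capital + feedstock * annual_biomass) / annual_fuel

print(f"mean: {cost.mean():.3f} £/l, "
      f"90% interval: [{np.percentile(cost, 5):.3f}, "
      f"{np.percentile(cost, 95):.3f}] £/l")
```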
Abstract:
For drive systems in particular, energy costs are a focus alongside acquisition costs. However, further follow-on costs that arise over the course of operating a drive system in materials-handling equipment usually go unconsidered. This article describes an approach for forecasting the life-cycle costs of drive systems in continuous conveyor technology. With the help of generally known standards and guidelines, the life cycle of a drive system, from planning through manufacturing to disposal after operation, can be divided into cost types and illustrated. Using direct cost allocation as well as activity-based costing, sufficient accuracy is achieved on the basis of defined process chains. On the basis of these cost calculations, a multi-stage forecasting model can be built. Using the developed model, example installations could thus be analysed and computed.
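A generic discounted life-cycle cost decomposition of the kind such a model builds on (an illustrative form; the article's own cost types and process chains are not reproduced here) is

\[ \mathrm{LCC} = C_{\mathrm{acq}} + \sum_{t=1}^{T} \frac{C_{\mathrm{energy},t} + C_{\mathrm{maint},t}}{(1+r)^{t}} + \frac{C_{\mathrm{disp}}}{(1+r)^{T}} \]

with acquisition cost, yearly energy and maintenance costs, disposal cost, discount rate r, and a service life of T years.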
Abstract:
The central product of the DRAMA (Dynamic Re-Allocation of Meshes for parallel Finite Element Applications) project is a library comprising a variety of tools for dynamic re-partitioning of unstructured Finite Element (FE) applications. The input to the DRAMA library is the computational mesh, and corresponding costs, partitioned into sub-domains. The core library functions then perform a parallel computation of a mesh re-allocation that will re-balance the costs based on the DRAMA cost model. We discuss the basic features of this cost model, which allows a general approach to load identification, modelling and imbalance minimisation. Results from crash simulations are presented which show the necessity for multi-phase/multi-constraint partitioning components.
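A sketch of the kind of imbalance measure such a re-partitioning cost model minimises follows; the split of per-sub-domain cost into computation and communication terms is a generic form, not the DRAMA library's actual cost model or API.

```python
# Illustrative load-imbalance measure for a partitioned FE mesh.

def subdomain_cost(calc_cost, comm_words, g=0.5):
    """Cost of one sub-domain: local computation plus weighted
    communication volume (g = assumed cost per word exchanged)."""
    return calc_cost + g * comm_words

def imbalance(costs):
    """Ratio of the slowest sub-domain to the average; 1.0 is a
    perfectly balanced partition, the target of re-allocation."""
    return max(costs) / (sum(costs) / len(costs))

costs = [subdomain_cost(c, w) for c, w in [(120, 30), (95, 55), (150, 10)]]
print(f"imbalance = {imbalance(costs):.3f}")
```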
Abstract:
The main objective of this study was to create a calculation model for the cost and profit effects of identity and access management systems. The purpose of the model was to serve as a tool for system vendors, with which potential customers can be more convincingly shown the cost benefits of the system in a sales situation. Very few comparable models measuring cost effects have been built, and the model constructed in this study differs from them in that it takes into account both the system vendor's labour costs and information security risks. To verify that the calculation model works, it was tested in two companies that use a centralised identity management system. The testing was carried out by entering each company's data into the model and comparing the model's results with the cost effects observed by the company. Based on both the literature review and the testing of the calculation model, it can be stated that the most significant cost factors in the identity management process are the working time spent on creating and modifying identities, and the reduction in employee productivity these operations cause during the process. According to the study, centralised identity management systems make it possible to achieve significant cost savings in the operations of the identity management process, in licence costs and in IT service costs. Not all of the cost savings are tangible, however; some relate, for example, to increased work efficiency enabled by the system. In addition to the cost effects, identity management systems offer other benefits whose monetary value is very difficult to calculate. The challenges in using the calculation model are therefore identifying and valuing the tangible and indirect cost savings, and the difficulty of assessing the total benefits of the investment.
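A minimal sketch of the kind of savings calculation such a model performs follows; every figure and the formula itself are hypothetical placeholders, not the study's actual calculation model.

```python
# Hypothetical annual-savings estimate for centralised identity management;
# all figures below are illustrative placeholders.

requests_per_year = 5_000      # identity creations/changes per year
manual_minutes = 45            # handling time without the system
automated_minutes = 5          # handling time with the system
hourly_rate = 40.0             # EUR, loaded labour cost
waiting_hours = 0.5            # productivity lost per manual request

labour_saving = (requests_per_year
                 * (manual_minutes - automated_minutes) / 60
                 * hourly_rate)
waiting_saving = requests_per_year * waiting_hours * hourly_rate

print(f"direct labour saving:  {labour_saving:,.0f} EUR/year")
print(f"productivity recovery: {waiting_saving:,.0f} EUR/year (less tangible)")
```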