887 results for network cost models
Abstract:
WDM (Wavelength-Division Multiplexing) optical networks are currently the most popular way to transfer large amounts of data. Each connection is assigned a route and a wavelength for every link. Finding the required route and wavelength is known as the RWA problem. This thesis describes possible cost-model solutions to the RWA problem. Many different optimization objectives exist, and the cost models are based on these objectives. The cost models yield efficient solutions and algorithms. The multicommodity model is treated in this thesis as the basis for the RWA cost model. Heuristic methods for solving the RWA problem are also discussed. The final part of the thesis covers implementations of a few of the models and various possibilities for improving the cost models.
Abstract:
This thesis describes the procedure and results from four years' research undertaken through the IHD (Interdisciplinary Higher Degrees) Scheme at Aston University in Birmingham, sponsored by the SERC (Science and Engineering Research Council) and Monk Dunstone Associates, Chartered Quantity Surveyors. A stochastic networking technique, VERT (Venture Evaluation and Review Technique), was used to model the pre-tender costs of public health, heating, ventilating, air-conditioning, fire protection, lifts and electrical installations within office developments. The model enabled the quantity surveyor to analyse, manipulate and explore complex scenarios which had previously defied ready mathematical analysis. The process involved the examination of historical material costs, labour factors and design performance data. Components and installation types were defined and formatted. Data were updated and adjusted using mechanical and electrical pre-tender cost indices and location, selection of contractor, contract sum, height and site condition factors. Ranges of cost, time and performance data were represented by probability density functions and defined by constant, uniform, normal and beta distributions. These variables and a network of the interrelationships between services components provided the framework for analysis. The VERT program, in this particular study, relied upon Monte Carlo simulation to model the uncertainties associated with pre-tender estimates of all possible installations. The computer generated output in the form of relative and cumulative frequency distributions of current element and total services costs, critical path analyses and details of statistical parameters. From these data, alternative design solutions were compared, the degree of risk associated with estimates was determined, heuristics were tested and redeveloped, and cost-significant items were isolated for closer examination.
The resultant models successfully combined cost, time and performance factors and provided the quantity surveyor with an appreciation of the cost ranges associated with the various engineering services design options.
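The Monte Carlo core of such an estimating model can be sketched in a few lines. The following is a minimal, hypothetical illustration of the sampling step only: three services elements with invented cost distributions (uniform, normal and scaled beta, the distribution families named in the abstract), sampled repeatedly to build a sorted total-cost sample from which cost ranges can be read off. All element names and figures are made up for illustration and are not from the thesis.

```python
import random

# Hypothetical services elements and their cost distributions (GBP); figures invented.
ELEMENTS = {
    "heating_ventilating": lambda: random.uniform(90_000, 130_000),  # uniform range
    "electrical": lambda: random.normalvariate(75_000, 6_000),       # normal
    "lifts": lambda: 40_000 + 20_000 * random.betavariate(2, 5),     # scaled beta
}

def simulate_total_costs(n_runs=10_000, seed=1):
    """Draw every element once per run and return the sorted total-cost sample."""
    random.seed(seed)
    return sorted(sum(draw() for draw in ELEMENTS.values()) for _ in range(n_runs))

def percentile(sorted_samples, p):
    """p-th percentile of an already-sorted sample (nearest-rank, adequate here)."""
    idx = min(len(sorted_samples) - 1, int(p / 100 * len(sorted_samples)))
    return sorted_samples[idx]

totals = simulate_total_costs()
low, high = percentile(totals, 10), percentile(totals, 90)  # cost range for the option
```

A quantity surveyor would read the spread between the 10th and 90th percentiles as the cost range attached to a given design option.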
Abstract:
This paper addresses the problem of allocating the cost of the transmission network to generators and demands. A physically-based network usage procedure is proposed. This procedure exhibits desirable apportioning properties and is easy to implement and understand. A case study based on the IEEE 24-bus system is used to illustrate the working of the proposed technique. Some relevant conclusions are finally drawn.
Abstract:
This work is devoted to studying and discussing the main methods for solving the network cost allocation problem, both for generators and demands. Of the methods presented, compared and discussed, the first is based on power injections, the second deals with proportional sharing factors, the third is based upon Equivalent Bilateral Exchanges, the fourth analyzes the power flow sensitivity in relation to the power injected, and the last is based on the Zbus network matrix. All the methods are initially illustrated using a 4-bus system. In addition, the IEEE 24-bus RTS system is presented for further comparisons and analysis. Appropriate conclusions are finally drawn. (C) 2008 Elsevier B.V. All rights reserved.
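To give a flavour of the simplest family of these methods, here is a minimal sketch of pro-rata allocation by power injection. The 3-bus numbers below are invented (the paper's own illustrations use a 4-bus system and the IEEE 24-bus RTS), and real schemes typically split the cost between separate generator and demand pools, which is omitted here for brevity.

```python
def pro_rata_allocation(total_network_cost, injections_mw):
    """Allocate a network cost among agents in proportion to their power
    injections (MW). A single pool is used here; practical schemes usually
    recover half the cost from generators and half from demands."""
    total_mw = sum(injections_mw.values())
    return {bus: total_network_cost * mw / total_mw
            for bus, mw in injections_mw.items()}

# Hypothetical example: a 1000-unit network cost split over three injecting buses
shares = pro_rata_allocation(1000.0, {"bus1": 50.0, "bus2": 30.0, "bus3": 20.0})
```

The pro-rata scheme ignores network topology entirely, which is exactly the shortcoming the flow-based and Zbus-based methods in the paper address.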
Abstract:
Over the last decade, Brazil has pioneered an innovative model of branchless banking, known as correspondent banking, involving distribution partnerships between banks, several kinds of retailers and a variety of other participants, which has allowed an unprecedented growth in bank outreach and has become a reference worldwide. However, despite the extensive number of studies recently developed focusing on Brazilian branchless banking, there is a clear research gap in the literature. It is still necessary to identify the different business configurations involving network integration through which the branchless banking channel can be structured, as well as the way they relate to the range of bank services delivered. Given this gap, our objective is to investigate the relationship between network integration models and services delivered through the branchless banking channel. Based on twenty interviews with managers involved in the correspondent banking business and data collected on almost 300 correspondent locations, our research is developed in two steps. First, we created a qualitative taxonomy through which we identified three classes of network integration models. Second, we performed a cluster analysis to explain the groups of financial services that fit each model. By contextualizing correspondents' network integration processes through the lens of transaction cost economics, our results suggest that the more suited the channel is to deliver social-oriented, "pro-poor" services, the more it is controlled by banks. This research offers contributions to managers and policy makers interested in better understanding how different correspondent banking configurations are related to specific portfolios of services.
Researchers interested in the subject of branchless banking can also benefit from the taxonomy presented and from the transaction cost analysis of this kind of banking channel, which has now been adopted in a number of developing countries all over the world. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
This paper studies the energy-efficiency and service characteristics of a recently developed energy-efficient MAC protocol for wireless sensor networks in simulation and on a real sensor hardware testbed. This opportunity is seized to illustrate how simulation models can be verified by cross-comparing simulation results with real-world experiment results. The paper demonstrates that by careful calibration of simulation model parameters, the inevitable gap between simulation models and real-world conditions can be reduced. It concludes with guidelines for a methodology for model calibration and validation of sensor network simulation models.
Abstract:
The Slot and van Emde Boas Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that under the Invariance Thesis, complexity classes such as LOGSPACE, P, and PSPACE become robust, i.e. machine-independent. In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient for obtaining a reasonable space cost model. By a fine complexity analysis of this schema, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are able to measure sub-linear space complexities as well. Moreover, we transfer this result to the call-by-value case. Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
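The Krivine machine mentioned above is small enough to sketch directly. The following is a plain, unoptimized call-by-name Krivine machine over de Bruijn-indexed terms, encoded as nested tuples; it is not the space-optimized, pointer-free variant analyzed in the dissertation and only illustrates the evaluation mechanism.

```python
def krivine(term, env=(), stack=()):
    """Weak call-by-name evaluation of de Bruijn-indexed lambda terms.
    term ::= ('var', n) | ('lam', body) | ('app', fun, arg)
    env is a tuple of closures (term, env); stack holds pending arguments."""
    while True:
        tag = term[0]
        if tag == "app":                  # push the argument as a closure
            stack = ((term[2], env),) + stack
            term = term[1]
        elif tag == "lam" and stack:      # pop one argument into the environment
            env, stack = (stack[0],) + env, stack[1:]
            term = term[1]
        elif tag == "var":                # jump to the n-th closure
            term, env = env[term[1]]
        else:                             # a lambda with no pending arguments: done
            return term, env

# (\x. x) (\y. y) reduces to \y. y
identity = ("lam", ("var", 0))
result, _ = krivine(("app", identity, identity))
```

The machine never substitutes; it only pushes closures, which is what makes its space usage amenable to the fine-grained accounting the dissertation performs.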
Abstract:
Electricity distribution network operation (NO) models are challenged as they are expected to continue to undergo changes during the coming decades in the fairly developed and regulated Nordic electricity market. Network asset managers must adapt to competitive techno-economical business models regarding the operation of increasingly intelligent distribution networks. Factors driving the changes towards new business models within network operation include: increased investments in distributed automation (DA), regulative frameworks for annual profit limits and quality through outage cost, increasing end-customer demands, climatic changes and increasing use of data system tools, such as the Distribution Management System (DMS). The doctoral thesis addresses the questions of a) whether there exist conditions and qualifications for competitive markets within electricity distribution network operation and b) if so, the identification of limitations and required business mechanisms. This doctoral thesis aims to provide an analytical business framework, primarily for electric utilities, for evaluating and developing dedicated network operation models to meet future market dynamics within network operation. In the thesis, the generic build-up of a business model has been addressed through the use of the strategic business hierarchy levels of mission, vision and strategy for defining the strategic direction of the business, followed by the planning, management and process execution levels of enterprise strategy execution. Research questions within electricity distribution network operation are addressed at the specified hierarchy levels. The results of the research represent interdisciplinary findings in the areas of electrical engineering and production economics.
The main scientific contributions include further development of extended transaction cost economics (TCE) for governance decisions within electricity networks and validation of the usability of the methodology for the electricity distribution industry. Moreover, DMS benefit evaluations in the thesis based on the outage cost calculations propose theoretical maximum benefits of DMS applications equalling roughly 25% of the annual outage costs and 10% of the respective operative costs in the case electric utility. Hence, the annual measurable theoretical benefits from the use of DMS applications are considerable. The theoretical results in the thesis are generally validated by surveys and questionnaires.
Abstract:
We study the dynamics of a game-theoretic network formation model that yields large-scale small-world networks. So far, mostly stochastic frameworks have been utilized to explain the emergence of these networks. On the other hand, it is natural to seek game-theoretic network formation models in which links are formed due to the strategic behavior of individuals, rather than based on probabilities. Inspired by Even-Dar and Kearns (2007), we consider a more realistic model in which the cost of establishing each link is dynamically determined during the course of the game. Moreover, players are allowed to put transfer payments on the formation of links. They must also pay a maintenance cost to sustain their direct links during the game. We show that the general set of equilibrium networks in our model has a small diameter of at most 4. Unlike earlier models, not only is the existence of equilibrium networks guaranteed in our model, but these networks also coincide with the outcomes of pairwise Nash equilibrium in network formation. Furthermore, we provide a network formation simulation that generates small-world networks. We also analyze the impact of locating players in a hierarchical structure by constructing a strategic model where a complete b-ary tree is the seed network.
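A diameter bound of this kind can be checked mechanically on candidate networks. Below is a simple BFS-based diameter routine, applied to a star network as a stand-in example of a low-diameter topology; the star is purely illustrative and is not claimed to be an equilibrium of this specific model.

```python
from collections import deque

def diameter(adj):
    """Longest shortest path in a connected undirected graph,
    given as a dict mapping node -> set of neighbour nodes."""
    best = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                       # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# A star network: one hub connected to every other player.
# Any two leaves are two hops apart, so the diameter is 2 (within the bound of 4).
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
```

Hub-and-spoke shapes like the star are a typical way strategic models achieve short diameters: every player routes through a small set of well-connected intermediaries.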
Abstract:
Data were collected and analysed from seven field sites in Australia, Brazil and Colombia on weather conditions and the severity of anthracnose disease of the tropical pasture legume Stylosanthes scabra caused by Colletotrichum gloeosporioides. Disease severity and weather data were analysed using artificial neural network (ANN) models developed using data from some or all field sites in Australia and/or South America to predict severity at other sites. Three series of models were developed using different weather summaries. Of these, ANN models with weather for the day of disease assessment and the previous 24 h period had the highest prediction success, and models trained on data from all sites within one continent correctly predicted disease severity in the other continent on more than 75% of days; the overall prediction error was 21.9% for the Australian and 22.1% for the South American model. Of the six cross-continent ANN models trained on pooled data for five sites from two continents to predict severity for the remaining sixth site, the model developed without data from Planaltina in Brazil was the most accurate, with >85% prediction success, and the model without Carimagua in Colombia was the least accurate, with only 54% success. In common with multiple regression models, moisture-related variables such as rain, leaf surface wetness and variables that influence moisture availability such as radiation and wind on the day of disease severity assessment or the day before assessment were the most important weather variables in all ANN models. A set of weights from the ANN models was used to calculate the overall risk of anthracnose for the various sites. Sites with high and low anthracnose risk are present in both continents, and weather conditions at centres of diversity in Brazil and Colombia do not appear to be more conducive than conditions in Australia to serious anthracnose development.
Abstract:
As a new modeling method, support vector regression (SVR) has been regarded as the state-of-the-art technique for regression and approximation. In this study, SVR models were introduced and developed to predict body and carcass-related characteristics of 2 strains of broiler chicken. To evaluate the prediction ability of the SVR models, we compared their performance with that of neural network (NN) models. Evaluation of the prediction accuracy of the models was based on the R-2, MS error, and bias. The variables of interest as model output were BW, empty BW, carcass, breast, drumstick, thigh, and wing weight in the 2 strains of Ross and Cobb chickens, based on intake of dietary nutrients, including ME (kcal/bird per week), CP, TSAA, and Lys, all as grams per bird per week. A data set composed of 64 measurements taken from each strain was used for this analysis, where 44 data lines were used for model training, whereas the remaining 20 lines were used to test the created models. The results of this study revealed that it is possible to satisfactorily estimate the BW and carcass parts of the broiler chickens via their dietary nutrient intake. Through the statistical criteria used to evaluate the performance of the SVR and NN models, the overall results demonstrate that the discussed models can be effective for accurate prediction of the body and carcass-related characteristics investigated here. However, the SVR method achieved better accuracy and generalization than the NN method. This indicates that the new data mining technique (the SVR model) can be used as an alternative modeling tool to NN models. However, further reevaluation of this algorithm in the future is suggested.
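The three evaluation criteria named above (R-2, MS error, and bias) have simple closed forms, sketched below in plain Python. The observed and predicted body weights are hypothetical placeholders, not the study's measurements.

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def ms_error(y_true, y_pred):
    """Mean squared prediction error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def bias(y_true, y_pred):
    """Mean signed error; positive means systematic over-prediction."""
    return sum(p - t for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical observed vs. predicted body weights (g); numbers invented
observed = [1800.0, 1900.0, 2100.0, 2200.0]
predicted = [1820.0, 1880.0, 2090.0, 2230.0]
```

Reporting all three together matters: a model can have a high R-2 yet a non-zero bias, i.e. it ranks birds well while systematically over- or under-predicting their weight.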
Abstract:
In order to model the synchronization of brain signals, a three-node fully-connected network is presented. The nodes are considered to be voltage-controlled oscillator neurons (VCON), allowing one to conjecture about how the whole process depends on synaptic gains, free-running frequencies and delays. The VCON, represented by phase-locked loops (PLL), are fully connected and, as a consequence, an asymptotically stable synchronous state appears. Here, an expression for the synchronous state frequency is derived and the parameter dependence of its stability is discussed. Numerical simulations are performed, providing conditions for the use of the derived formulae. The model differential equations are hard to treat analytically, but some simplifying assumptions combined with simulations provide an alternative formulation for the long-term behavior of the fully-connected VCON network. Regarding this kind of network as a model for brain frequency signal processing, with each PLL representing a neuron (VCON), conditions for their synchronization are proposed, considering the different bands of brain activity signals and relating them to synaptic gains, delays and free-running frequencies. For delta waves, the synchronous state depends strongly on the delays. However, for alpha, beta and theta waves, the free-running individual frequencies determine the synchronous state. (C) 2011 Elsevier B.V. All rights reserved.
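The qualitative behaviour of such a synchronous state can be illustrated with a much simpler stand-in: two mutually coupled phase oscillators with sinusoidal coupling (a Kuramoto-style caricature of the PLL dynamics that ignores delays and second-order effects; all parameters are invented).

```python
import math

def phase_lock(w1, w2, gain, steps=20_000, dt=0.001):
    """Euler-integrate two coupled phase oscillators:
    theta1' = w1 + gain*sin(theta2 - theta1), and symmetrically for theta2.
    Returns the final phase difference theta2 - theta1."""
    th1, th2 = 0.0, 1.0
    for _ in range(steps):
        d = th2 - th1
        th1 += dt * (w1 + gain * math.sin(d))
        th2 += dt * (w2 - gain * math.sin(d))
    return th2 - th1

# With sufficient gain the pair locks: the phase difference settles at
# asin((w2 - w1) / (2 * gain)) and both oscillators run at the mean frequency.
lock = phase_lock(w1=10.0, w2=10.5, gain=2.0)
```

The same mechanism explains the abstract's observation that gains and free-running frequencies jointly determine the synchronous state: locking requires |w2 - w1| <= 2*gain, otherwise the phase difference drifts indefinitely.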
Abstract:
This paper presents a novel approach to WLAN propagation models for use in indoor localization. The major goal of this work is to eliminate the need for in situ data collection to generate the Fingerprinting map; instead, it is generated using analytical propagation models such as COST Multi-Wall, COST 231 average wall and Motley-Keenan. The kNN (K-Nearest Neighbour) and WkNN (Weighted K-Nearest Neighbour) location estimation algorithms were used to determine the accuracy of the proposed technique. This work is based on analytical and measurement tools to determine which path loss propagation models are better for location estimation applications, based on the Received Signal Strength Indicator (RSSI). This study presents different proposals for choosing the most appropriate values for the models' parameters, such as obstacle attenuations and coefficients. Some adjustments to these models, particularly to Motley-Keenan, considering the thickness of walls, are proposed. The best solution found is based on the adjusted Motley-Keenan and COST models, which allows the propagation loss to be estimated for several environments. Results obtained from two testing scenarios showed the reliability of the adjustments, providing smaller errors between the measured and predicted values.
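The Motley-Keenan model referred to above has a simple closed form: a log-distance term plus a fixed attenuation for every wall the direct ray crosses. A minimal sketch follows, with hypothetical reference loss, path loss exponent and wall attenuations; real deployments fit these per environment, as the paper does.

```python
import math

def motley_keenan(distance_m, wall_losses_db, l0_db=40.0, n=2.0):
    """Motley-Keenan indoor path loss (dB): a log-distance term plus a fixed
    attenuation for each wall crossed by the direct ray. l0_db (loss at 1 m),
    n (path loss exponent) and the per-wall losses are environment-dependent
    and must be fitted from measurements; the defaults here are illustrative."""
    return l0_db + 10.0 * n * math.log10(distance_m) + sum(wall_losses_db)

# Hypothetical link: 10 m through one brick wall (7 dB) and one plaster wall (3 dB)
loss_db = motley_keenan(10.0, [7.0, 3.0])  # 40 + 20 + 10 = 70 dB
```

Evaluating such a model over a grid of positions yields a synthetic RSSI fingerprint map, which is exactly what lets the paper's kNN/WkNN stage run without in situ data collection.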
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. Network cost allocation, traditionally used in transmission networks, should be adapted to and used in distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs to all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase of the model consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with large penetration of DER is used to illustrate the application of the proposed model.
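The third phase, the MW-mile method, can be sketched compactly: each line's cost is divided among network users in proportion to the MW flow attributed to them on that line by the tracing phase. The two-line example below, with a DG unit and a V2G vehicle as users, is invented for illustration.

```python
def mw_mile_allocation(line_costs, line_usage):
    """MW-mile allocation: each line's cost is split among network users in
    proportion to the MW flow attributed to them on that line (e.g. by a
    tracing algorithm). line_usage[line][user] is that attributed flow."""
    charges = {}
    for line, cost in line_costs.items():
        total_mw = sum(line_usage[line].values())
        for user, mw in line_usage[line].items():
            charges[user] = charges.get(user, 0.0) + cost * mw / total_mw
    return charges

# Invented two-line example: costs per line, and per-user attributed flows (MW)
charges = mw_mile_allocation(
    {"L1": 120.0, "L2": 80.0},
    {"L1": {"DG": 30.0, "V2G": 10.0}, "L2": {"DG": 5.0, "V2G": 15.0}},
)
# the full line costs are recovered: the charges sum to 120 + 80 = 200
```

Because the shares are normalized per line, the method recovers each line's cost exactly; the fairness question the paper addresses lies in how the usage figures are attributed, which is why phase two compares Kirschen's and Bialek's tracing algorithms.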