799 results for Utility-based performance measures


Relevance: 100.00%

Abstract:

The goal of this paper is to provide some insight into the relations that exist between cell level and message level performance guarantees in the context of ATM networks. Cell level guarantees are typically what the network is capable of providing, while message level guarantees are the ones of interest to users. It is, therefore, important to understand how the two are related, and which factors influence this relation. Many different performance measures are of importance; in this paper we focus on the three most relevant ones: comparing cell and message loss probabilities, average cell and message delays, and cell and message jitter. Specifically, we show that cell and message loss probabilities can exhibit significant differences, which strongly depend on traffic characteristics such as peak rate and burst size, i.e., for a fixed cell loss probability, the message loss probability can greatly vary when peak rate and burst size change. One reason for this sensitivity is that message loss depends on what happens to all the cells in a message. For delay and jitter, we also find that peak rate and burst size play a role in determining the relation between cell and message performance. However, this sensitivity is not as acute as with losses since message delay and jitter are typically determined by the performance seen by only one cell, the last cell in a message. In the paper, we provide quantitative examples that illustrate the range of behaviors and identify the impact of different parameters.
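
As a rough illustration of this sensitivity (not the analysis in the paper), the sketch below assumes that cell losses are independent, so a message of N cells survives only if every cell does; even this crude model shows how quickly message loss grows with burst size for a fixed cell loss probability.

```python
# Minimal sketch (not the paper's model): message loss versus burst size when
# cell losses are assumed independent, for a fixed cell loss probability.
def message_loss_probability(cell_loss_prob: float, cells_per_message: int) -> float:
    """A message is lost if any one of its cells is lost (independence assumed)."""
    return 1.0 - (1.0 - cell_loss_prob) ** cells_per_message

p_cell = 1e-4  # fixed cell loss probability
for burst in (1, 10, 100, 1000):  # hypothetical message sizes in cells
    print(f"{burst:5d} cells/message -> message loss ~ "
          f"{message_loss_probability(p_cell, burst):.2e}")
```

In practice cell losses within a burst tend to be correlated, so the true message loss typically lies below this independent-loss figure; the paper quantifies how peak rate and burst size shape that relation.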

Relevance: 100.00%

Abstract:

Several replacement policies for web caches have been proposed and studied extensively in the literature. Different replacement policies perform better in terms of (i) the number of objects found in the cache (cache hits), (ii) the network traffic avoided by fetching the referenced object from the cache, or (iii) the savings in response time. In this paper, we propose a simple and efficient replacement policy (hereafter known as SE) that improves all three performance measures. Trace-driven simulations were performed to evaluate the performance of SE. We compare SE with two widely used and efficient replacement policies, namely the Least Recently Used (LRU) and Least Unified Value (LUV) algorithms. Our results show that SE performs at least as well as, if not better than, both these replacement policies. Unlike various other replacement policies proposed in the literature, our SE policy does not require parameter tuning or a priori trace analysis, and it has an efficient and simple implementation that can be easily incorporated into any existing proxy server or web server.
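
The internals of SE are not given in the abstract, so the sketch below only illustrates the evaluation setup: a trace-driven simulation of the LRU baseline reporting the three measures named above (object hit ratio, byte hit ratio, and response-time savings). The trace format (url, size_bytes, fetch_latency_s) and all names are hypothetical.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity_bytes):
    """Trace-driven LRU simulation; each trace record is (url, size, latency)."""
    cache, used = OrderedDict(), 0               # url -> (size, latency)
    hits = hit_bytes = saved_latency = 0.0
    total = total_bytes = total_latency = 0.0
    for url, size, latency in trace:
        total += 1; total_bytes += size; total_latency += latency
        if url in cache:
            cache.move_to_end(url)               # mark as most recently used
            hits += 1; hit_bytes += size; saved_latency += latency
            continue
        while used + size > capacity_bytes and cache:
            _, (old_size, _) = cache.popitem(last=False)   # evict LRU object
            used -= old_size
        if size <= capacity_bytes:
            cache[url] = (size, latency); used += size
    return (hits / total,                        # (i) cache hit ratio
            hit_bytes / total_bytes,             # (ii) byte hit ratio
            saved_latency / total_latency)       # (iii) response-time savings

trace = [("/a", 4000, 0.12), ("/b", 8000, 0.30), ("/a", 4000, 0.12), ("/c", 2000, 0.05)]
print(simulate_lru(trace, capacity_bytes=20000))
```

The same harness can be rerun with a different eviction rule in place of the LRU ordering to compare policies on identical traces.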

Relevance: 100.00%

Abstract:

Vehicular ad hoc network (VANET) applications are principally categorized into safety and commercial applications. Efficient traffic management for routing an emergency vehicle is of paramount importance in safety applications of VANETs. In the first case, a typical dense urban scenario is considered to demonstrate the role of the penetration ratio in achieving reduced travel time between source and destination points. The major requirement for testing these VANET applications is a realistic simulation approach that would justify the results prior to actual deployment. A traffic simulator coupled with a network simulator using a feedback loop is apt for realistic simulation of VANETs. Thus, in this paper, we develop the safety application using the traffic control interface (TraCI), which couples SUMO (traffic simulator) and NS2 (network simulator). Likewise, the mean throughput is one of the necessary performance measures for commercial applications of VANETs. In the second case, commercial applications are considered wherein data is transferred among vehicles (V2V) and between roadside infrastructure and vehicles (I2V), for which the throughput is assessed.
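
As a concrete sketch of the coupling idea (not the paper's SUMO-NS2 integration itself), the snippet below uses SUMO's TraCI Python client to step the traffic simulation and poll per-vehicle state that a coupled network simulator would consume; the configuration file name is hypothetical and SUMO is assumed to be installed.

```python
import traci  # TraCI Python client shipped with SUMO

# Step SUMO and extract per-vehicle state for a coupled network simulator.
# "scenario.sumocfg" is a hypothetical configuration file.
traci.start(["sumo", "-c", "scenario.sumocfg"])
try:
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()                        # advance one simulation step
        for veh_id in traci.vehicle.getIDList():
            x, y = traci.vehicle.getPosition(veh_id)
            speed = traci.vehicle.getSpeed(veh_id)
            # Feedback loop: hand (veh_id, x, y, speed) to the network simulator
            # here and apply any rerouting decisions it returns via TraCI.
finally:
    traci.close()
```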

Relevance: 100.00%

Abstract:

Designing and optimizing high-performance microprocessors is an increasingly difficult task due to the size and complexity of the processor design space, the high cost of detailed simulation, and the several constraints that a processor design must satisfy. In this paper, we propose the use of empirical non-linear modeling techniques to assist processor architects in making design decisions and resolving complex trade-offs. We propose a procedure for building accurate non-linear models that consists of the following steps: (i) selection of a small set of representative design points spread across the processor design space using Latin hypercube sampling, (ii) obtaining performance measures at the selected design points using detailed simulation, (iii) building non-linear models for performance using the function approximation capabilities of radial basis function networks, and (iv) validating the models using an independently and randomly generated set of design points. We evaluate our model-building procedure by constructing non-linear performance models for programs from the SPEC CPU2000 benchmark suite with a microarchitectural design space that consists of 9 key parameters. Our results show that the models, built using a relatively small number of simulations, achieve high prediction accuracy (only 2.8% error in CPI estimates on average) across a large processor design space. Our models can potentially replace detailed simulation for common tasks such as the analysis of key microarchitectural trends or searches for optimal processor design points.
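
A compact sketch of steps (i)-(iv) is given below, using SciPy's Latin hypercube sampler and its radial basis function interpolator as a stand-in for the RBF networks described above; the simulator call, parameter bounds, and sample sizes are placeholders rather than the paper's setup.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def run_detailed_simulation(points):
    """Placeholder for the detailed simulator: a synthetic CPI surface used
    only so this sketch executes end-to-end."""
    return 1.0 + 0.5 * np.sin(points).sum(axis=1) ** 2 / points.shape[1]

# (i) Latin hypercube sample over a 9-parameter design space (unit bounds here).
sampler = qmc.LatinHypercube(d=9, seed=0)
lower, upper = np.zeros(9), np.ones(9)            # replace with real parameter bounds
X_train = qmc.scale(sampler.random(n=64), lower, upper)

# (ii) Obtain performance measures (CPI) at the sampled design points.
y_train = run_detailed_simulation(X_train)

# (iii) Fit a radial-basis-function model to the simulated responses.
model = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")

# (iv) Validate on an independently and randomly generated set of design points.
X_val = np.random.default_rng(1).uniform(lower, upper, size=(32, 9))
y_val = run_detailed_simulation(X_val)
rel_err = np.abs(model(X_val) - y_val) / y_val
print(f"mean CPI prediction error: {100 * rel_err.mean():.1f}%")
```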

Relevance: 100.00%

Abstract:

In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads using enhanced distributed channel access (EDCA). We build upon fixed-point analysis and the performance insights it provides. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
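
The saturation attempt probabilities that drive the heuristic come from a fixed-point computation; the sketch below shows a minimal Bianchi-style saturation fixed point for n identical nodes of a single access category (the backoff parameters are illustrative), whereas the paper's analysis handles two EDCA classes and the queue dynamics on top of such probabilities.

```python
# Minimal Bianchi-style saturation fixed point for n identical contending nodes
# (single access category; W and m are illustrative backoff parameters).
def saturation_attempt_probability(n, W=32, m=5, iters=2000):
    tau = 0.05                                        # initial guess
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)              # conditional collision probability
        tau_new = (2.0 * (1.0 - 2.0 * p) /
                   ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))
        tau = 0.5 * tau + 0.5 * tau_new               # damped iteration
    return tau

for n in (2, 5, 10, 20):
    print(n, round(saturation_attempt_probability(n), 4))
```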

Relevance: 100.00%

Abstract:

Artificial Neural Networks (ANNs) have been found to be a robust tool for modelling many non-linear hydrological processes. The present study aims at evaluating the performance of ANNs in simulating and predicting groundwater levels in the uplands of a tropical coastal riparian wetland. The study compares two network architectures, the Feed Forward Neural Network (FFNN) and the Recurrent Neural Network (RNN), each trained under five algorithms, namely the Levenberg-Marquardt, Resilient Backpropagation, BFGS Quasi-Newton, Scaled Conjugate Gradient, and Fletcher-Reeves Conjugate Gradient algorithms, by simulating the water levels in a well in the study area. The analysis considers two cases: one with four inputs to the networks and the other with eight inputs. The network-algorithm combinations are compared in both cases to determine the best-performing combination that could simulate and predict the process satisfactorily. A trial-and-error approach is followed in optimizing the network structure in all cases. On the whole, the results indicate that the networks have simulated and predicted the water levels in the well with fair accuracy. This is evident from the low values of the Normalized Root Mean Square Error and Relative Root Mean Square Error and the high values of the Nash-Sutcliffe Efficiency Index and Correlation Coefficient (which are taken as the performance measures to calibrate the networks) calculated after the analysis. On comparing the predicted groundwater levels with those observed at the well, the FFNN trained with the Fletcher-Reeves Conjugate Gradient algorithm using four inputs outperformed all other combinations.
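
For reference, the four calibration measures named above can be computed as follows; the definitions follow common hydrological usage (NRMSE normalized by the observed range, RRMSE by the observed mean) and may differ in detail from those adopted in the study.

```python
import numpy as np

def performance_measures(obs, sim):
    """NRMSE, RRMSE, Nash-Sutcliffe efficiency and correlation for simulated
    versus observed groundwater levels (definitions as commonly used)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / (obs.max() - obs.min())                          # normalized RMSE
    rrmse = rmse / obs.mean()                                       # relative RMSE
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe
    r = np.corrcoef(obs, sim)[0, 1]                                 # correlation coefficient
    return {"NRMSE": nrmse, "RRMSE": rrmse, "NSE": nse, "R": r}

print(performance_measures([2.1, 2.4, 2.8, 3.0], [2.0, 2.5, 2.7, 3.1]))
```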

Relevance: 100.00%

Abstract:

There have been several studies on the performance of TCP controlled transfers over an infrastructure IEEE 802.11 WLAN, assuming perfect channel conditions. In this paper, we develop an analytical model for the throughput of TCP controlled file transfers over the IEEE 802.11 DCF with different packet error probabilities for the stations, accounting for the effect of packet drops on the TCP window. Our analysis proceeds by combining two models: one is an extension of the usual TCP-over-DCF model for an infrastructure WLAN, where the throughput of a station depends on the probability that the head-of-the-line packet at the Access Point belongs to that station; the second is a model for the TCP window process for connections with different drop probabilities. Iterative calculation between these models yields the head-of-the-line probabilities, from which performance measures such as throughputs and packet failure probabilities can be derived. We find that, due to MAC layer retransmissions, packet losses are rare even with high channel error probabilities, and the stations obtain fair throughputs even when some of them have packet error probabilities as high as 0.1 or 0.2. For some restricted settings we are also able to model tail-drop loss at the AP. Although involving many approximations, the model captures the system behavior quite accurately, as compared with simulations.
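
The observation that MAC-layer retransmissions make packet losses rare can be illustrated with a one-line calculation (not the paper's model): a packet is lost only if every transmission attempt fails, so with the 802.11 retry limit of 7 assumed here, even high per-attempt error probabilities give tiny loss probabilities.

```python
# A packet is lost only if all attempts fail; a retry limit of 7 (up to 8
# attempts) is assumed here for illustration.
retry_limit = 7
for p_err in (0.1, 0.2):
    p_loss = p_err ** (retry_limit + 1)
    print(f"per-attempt error prob {p_err}: packet loss prob ~ {p_loss:.1e}")
```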

Relevance: 100.00%

Abstract:

Establishing functional relationships between multi-domain protein sequences is a non-trivial task. Traditionally, delineating the functional assignment and relationships of proteins requires domain assignments as a prerequisite. This process is sensitive to alignment quality and domain definitions. In multi-domain proteins, the quality of alignments is often poor for multiple reasons. We report the correspondence between the classification of proteins represented as full-length gene products and their functions. Our approach differs fundamentally from traditional methods in not performing the classification at the level of domains. Our method is based on an alignment-free local matching score (LMS) computation at the amino-acid sequence level, followed by hierarchical clustering. As there are no gold standards for full-length protein sequence classification, we resorted to Gene Ontology and domain-architecture based similarity measures to assess our classification. The final clusters obtained using LMS show high functional and domain-architectural similarities. Comparison of the current method with alignment-based approaches at both the domain and full-length protein levels showed the superiority of the LMS scores. Using this method we have recreated objective relationships among different protein kinase sub-families and also classified immunoglobulin-containing proteins, for which sub-family definitions do not currently exist. This method can be applied to any set of protein sequences and hence will be instrumental in the analysis of large numbers of full-length protein sequences.
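
The LMS computation itself is not specified in the abstract; the sketch below illustrates the overall pipeline (an alignment-free pairwise score over full-length sequences followed by hierarchical clustering) with a generic k-mer profile similarity standing in for LMS. The sequences and cluster count are hypothetical.

```python
import numpy as np
from collections import Counter
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def kmer_profile(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(a, b):
    """Generic alignment-free k-mer similarity (NOT the authors' LMS score)."""
    pa, pb = kmer_profile(a), kmer_profile(b)
    shared = sum(min(pa[x], pb[x]) for x in pa.keys() & pb.keys())
    return shared / max(1, min(sum(pa.values()), sum(pb.values())))

def cluster_sequences(seqs, n_clusters=2):
    n = len(seqs)
    dist = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        dist[i, j] = dist[j, i] = 1.0 - similarity(seqs[i], seqs[j])
    Z = linkage(squareform(dist), method="average")   # hierarchical clustering
    return fcluster(Z, t=n_clusters, criterion="maxclust")

seqs = ["MKKLVLSLSLVLAFSSATAA", "MKKLVLALSLVLAFSSATAA", "GSHMGSGSGSHHHHHH"]
print(cluster_sequences(seqs))
```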

Relevance: 100.00%

Abstract:

This work was presented as a paper at the XI International Congress of the Asociación de Dirección y Economía de la Empresa (AEDEM), held in Paris in September 2002.

Relevance: 100.00%

Abstract:

As a component of a three-year cooperative effort of the Washington State Department of Ecology and the National Oceanic and Atmospheric Administration, surficial sediment samples from 100 locations in southern Puget Sound were collected in 1999 to determine their relative quality based on measures of toxicity, chemical contamination, and benthic infaunal assemblage structure. The survey encompassed an area of approximately 858 km2, ranging from East and Colvos Passages south to Oakland Bay, and including Hood Canal. Toxic responses were most severe in some of the industrialized waterways of Tacoma’s Commencement Bay. Other industrialized harbors in which sediments induced toxic responses on smaller scales included the Port of Olympia, Oakland Bay at Shelton, Gig Harbor, Port Ludlow, and Port Gamble. Based on the methods selected for this survey, the spatial extent of toxicity for the southern Puget Sound survey area was 0% of the total survey area for amphipod survival, 5.7% for urchin fertilization, 0.2% for microbial bioluminescence, and 5–38% with the cytochrome P450 HRGS assay. Measurements of trace metals, PAHs, PCBs, chlorinated pesticides, other organic chemicals, and other characteristics of the sediments indicated that 20 of the 100 samples collected had one or more chemical concentrations that exceeded applicable, effects-based sediment guidelines and/or Washington State standards. Chemical contamination was highest in eight samples collected in or near the industrialized waterways of Commencement Bay. Samples from the Thea Foss and Middle Waterways were primarily contaminated with a mixture of PAHs and trace metals, whereas those from Hylebos Waterway were contaminated with chlorinated organic hydrocarbons. The remaining 12 samples with elevated chemical concentrations primarily had high levels of other chemicals, including bis(2-ethylhexyl) phthalate, benzoic acid, benzyl alcohol, and phenol. The characteristics of benthic infaunal assemblages in south Puget Sound differed considerably among locations and habitat types throughout the study area. In general, many of the small embayments and inlets throughout the study area had infaunal assemblages with relatively low total abundance, taxa richness, evenness, and dominance values, although total abundance values were very high in some cases, typically due to high abundance of one organism such as the polychaete Aphelochaeta sp. N1. The majority of the samples collected from passages, outer embayments, and larger bodies of water tended to have infaunal assemblages with higher total abundance, taxa richness, evenness, and dominance values. Two samples collected in the Port of Olympia near a Superfund cleanup site had no living organisms in them. A weight-of-evidence approach, used to simultaneously examine all three “sediment quality triad” parameters, identified 11 stations (representing 4.4 km2, 0.5% of the total study area) with sediment toxicity, chemical contamination, and altered benthos (i.e., degraded sediment quality), 36 stations (493.5 km2, 57.5% total study area) with no toxicity or chemical contamination (i.e., high sediment quality), 35 stations (274.1 km2, 32.0% total study area) with one impaired sediment triad parameter (i.e., intermediate/high sediment quality), and 18 stations (85.7 km2, 10.0% total study area) with two impaired sediment parameters (i.e., intermediate/degraded quality sediments).
Generally, upon comparison, the number of stations with degraded sediments based upon the sediment quality triad of data was slightly greater in the central Puget Sound than in the northern and southern Puget Sound study areas, with the percent of the total study area degraded in each region decreasing from central to north to south (2.8, 1.3 and 0.5%, respectively). Overall, the sediments collected in Puget Sound during the combined 1997-1999 surveys were among the least contaminated relative to other marine bays and estuaries studied by NOAA using equivalent methods.
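
The weight-of-evidence categories reported above follow from how many of the three triad parameters (toxicity, chemical contamination, altered benthos) are impaired at a station; a small illustrative encoding of that rule is sketched below (it is not the survey's scoring procedure, which applies measured thresholds to each parameter).

```python
# Illustrative encoding of the "sediment quality triad" categories: the class
# of a station depends on how many of the three parameters are impaired.
def triad_category(toxicity: bool, chemistry: bool, benthos: bool) -> str:
    impaired = sum((toxicity, chemistry, benthos))
    return {0: "high sediment quality",
            1: "intermediate/high sediment quality",
            2: "intermediate/degraded sediment quality",
            3: "degraded sediment quality"}[impaired]

print(triad_category(True, True, True))   # -> degraded sediment quality
```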

Relevance: 100.00%

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance, and no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.
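
A bare-bones numerical illustration of the central idea, weighting conditional failure probabilities by model probabilities and integrating over the model set, is sketched below. Plain Monte Carlo replaces the asymptotic approximation used in the thesis, and the one-parameter "structure" and its excitation are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_failure_prob(stiffness, n_excitations=2000, threshold=1.0):
    """P(failure | model): fraction of random excitations whose peak response
    of a toy one-parameter system exceeds a threshold."""
    excitation = rng.normal(size=n_excitations)
    response = np.abs(excitation) / stiffness
    return np.mean(response > threshold)

def robust_failure_prob(n_models=500):
    """Average the conditional failure probability over models drawn from a
    probabilistic ("soft") description of the model uncertainty."""
    stiffness_samples = rng.lognormal(mean=0.0, sigma=0.2, size=n_models)
    return float(np.mean([conditional_failure_prob(k) for k in stiffness_samples]))

print("robust failure probability ~", robust_failure_prob())
```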

Relevance: 100.00%

Abstract:

Production processes need to be evaluated continuously so that they operate as effectively and efficiently as possible. A set of tools used for this purpose is called statistical process control (SPC). Using SPC tools, monitoring can be carried out periodically. The most important SPC tool is the control chart. This thesis focuses on monitoring a response variable through the parameters or coefficients of a simple linear regression model. Adaptive χ2 control charts are proposed for monitoring the coefficients of the simple linear regression model. More specifically, seven adaptive χ2 control charts are developed for monitoring linear profiles, namely: a chart with variable sample size; variable sampling interval; variable control and warning limits; variable sample size and sampling interval; variable sample size and limits; variable sampling interval and limits; and, finally, with all design parameters variable. Performance measures for the proposed charts were obtained through Markov chain properties, for both the zero-state and steady-state situations, and a reduction in the mean time to signal was verified for small to moderate shifts in the coefficients of the regression model of the production process. The proposed charts were applied to an example of a semiconductor manufacturing process. In addition, a sensitivity analysis is carried out as a function of shifts of different magnitudes in the process parameters, namely the intercept and the slope, comparing the performance of the developed charts with one another and with the χ2 chart with fixed parameters. The charts proposed in this thesis are suitable for several types of applications. This work also considers quality characteristics that are represented by a nonlinear regression model. For the nonlinear regression model considered, the proposal is to use a method that divides the nonlinear profile into linear parts; more specifically, an algorithm proposed in the literature was used for this purpose. In this way, it was possible to validate the proposed technique, showing that it is robust in the sense that it accommodates different types of nonlinear profiles. A nonlinear profile is thus approximated by piecewise linear profiles, which allows each linear profile to be monitored by control charts such as those developed in this thesis. Moreover, the methodology for decomposing a nonlinear profile into linear parts is presented in a detailed and complete manner, opening the way for its broad use.
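
At the core of each proposed chart is a chi-square statistic on the estimated intercept and slope of every sampled profile; the sketch below shows a minimal fixed-parameter version of that statistic (the adaptive variants change the sample size, sampling interval and/or limits between samples, which is not reproduced here). The in-control coefficients, error standard deviation, and control limit are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def chi2_profile_statistic(x, y, b0_in, b1_in, sigma):
    """Chi-square statistic for a sampled simple linear profile y = b0 + b1*x + e,
    measuring the distance of the estimated coefficients from their in-control
    values (2 degrees of freedom when sigma is known)."""
    X = np.column_stack([np.ones_like(x), x])
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)       # estimated (b0, b1)
    cov = sigma ** 2 * np.linalg.inv(X.T @ X)              # covariance of the estimates
    diff = beta_hat - np.array([b0_in, b1_in])
    return float(diff @ np.linalg.solve(cov, diff))

x = np.array([2.0, 4.0, 6.0, 8.0])
y = 3.0 + 2.0 * x + np.random.default_rng(1).normal(0.0, 1.0, size=4)
stat = chi2_profile_statistic(x, y, b0_in=3.0, b1_in=2.0, sigma=1.0)
limit = chi2.ppf(0.995, df=2)                              # fixed control limit
print(f"chi2 = {stat:.2f}, limit = {limit:.2f}, signal = {stat > limit}")
```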

Relevance: 100.00%

Abstract:

The United States Coral Reef Task Force (USCRTF) was established in 1998 by Presidential Executive Order 13089 to lead U.S. efforts to preserve and protect coral reef ecosystems. Current, accurate, and consistent maps greatly enhance efforts to preserve and manage coral reef ecosystems. With comprehensive maps and habitat assessments, coral reef managers can be more effective in designing and implementing a variety of conservation measures, including:

• Long-term monitoring programs with accurate baselines from which to track changes;
• Place-based conservation measures such as marine protected areas (MPAs); and
• Targeted research to better understand the oceanographic and ecological processes affecting coral reef ecosystem health.

The National Oceanic and Atmospheric Administration’s (NOAA) National Ocean Service (NOS) is tasked with leading the coral ecosystem mapping element of the U.S. Coral Reef Task Force (CRTF) under the authority of the Presidential Executive Order 13089 to map and manage the coral reefs of the United States.

Relevance: 100.00%

Abstract:

Current design codes for floating offshore structures are based on measures of short-term reliability. That is, a design storm is selected via an extreme value analysis of the environmental conditions, and the reliability of the vessel in that design storm is computed. Although this approach yields valuable information on the vessel motions, it does not produce a statistically rigorous assessment of the lifetime probability of failure. An alternative approach is to perform a long-term reliability analysis in which consideration is taken of all sea states potentially encountered by the vessel during the design life. Although this is permitted as a design approach in current design codes, the associated computational expense generally prevents its use in practice. A new efficient approach to long-term reliability analysis is presented here, the results of which are compared with a traditional short-term analysis for the surge motion of a representative moored FPSO in head seas. This serves to illustrate the failure probabilities actually embedded within current design code methods, and the way in which design methods might be adapted to achieve a specified target safety level.
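
The essence of the long-term formulation can be sketched schematically: weight the short-term (per-sea-state) failure probability by the long-term probability of encountering each sea state, then accumulate over all sea states in the design life. The scatter weights, short-term probabilities, and design life below are hypothetical placeholders, and the calculation is not the efficient method proposed in the paper.

```python
import numpy as np

hs_bins = np.array([1.0, 3.0, 5.0, 7.0, 9.0])            # significant wave height bins (m)
p_sea_state = np.array([0.40, 0.35, 0.18, 0.06, 0.01])   # long-term probability of each bin
p_fail_3h = np.array([1e-14, 1e-12, 1e-10, 1e-8, 1e-6])  # short-term P(fail) per 3-h state

p_fail_per_state = float(np.dot(p_sea_state, p_fail_3h)) # expected P(fail) in one sea state
n_states = 20 * 365.25 * 8                                # 3-hour sea states in a 20-year life
p_fail_life = 1.0 - (1.0 - p_fail_per_state) ** n_states
print(f"lifetime failure probability ~ {p_fail_life:.2e}")
```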

Relevance: 100.00%

Abstract:

Computer modelling approaches have significant potential to enable decision-making about various aspects of responsive manufacturing. In order to understand the system prior to the selection of any responsiveness strategy, multiple process segments of organisations need to be modelled. The article presents a novel systematic approach for creating coherent sets of unified enterprise, simulation and other supporting models that collectively facilitate responsiveness. In this approach, enterprise models are used to explicitly define relatively enduring relationships between (i) production planning and control (PPC) processes, which implement a particular strategy, and (ii) process-oriented elements of production systems, which are work loaded by the PPC processes. Coherent simulation models can, in part, be derived from the enterprise models, so that they computationally execute production system behaviours. In this way, time-based performance outcomes can be simulated, so that the impacts of alternative PPC strategies on planning and controlling historical or forecast patterns of workflow, through (current and possible future) production system models, can be analysed. The article describes the unified modelling approach conceived and its application in a furniture-industry case-study small and medium-sized enterprise (SME).
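
As a tiny illustration of deriving time-based performance outcomes from alternative planning and control rules (it is not the article's unified enterprise/simulation modelling approach), the sketch below compares mean flow time on a single work centre under two dispatch rules; the job data are hypothetical.

```python
def mean_flow_time(jobs, rule):
    """jobs: list of (arrival_time, processing_time); 'rule' orders the queue
    and stands in for a PPC dispatching policy. All jobs arrive at t=0 here."""
    ordered = sorted(jobs, key=rule)          # stable sort keeps ties in input order
    t, flow_times = 0.0, []
    for arrival, proc in ordered:
        t = max(t, arrival) + proc            # work centre processes jobs one at a time
        flow_times.append(t - arrival)
    return sum(flow_times) / len(flow_times)

jobs = [(0.0, 5.0), (0.0, 1.0), (0.0, 3.0), (0.0, 8.0)]   # hypothetical orders
print("FIFO mean flow time:", mean_flow_time(jobs, rule=lambda j: j[0]))
print("SPT  mean flow time:", mean_flow_time(jobs, rule=lambda j: j[1]))
```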