986 results for Round Robin Database Measurement Archive


Relevance: 100.00%

Abstract:

Two types of peeling experiments are performed in the present research. One is for the Al film/Al2O3 substrate system with an adhesive layer between the film and the substrate. The other is for the Cu film/Al2O3 substrate system without an adhesive layer, in which the Cu films are electroplated directly onto the Al2O3 substrates. For the case with an adhesive layer, two adhesives are selected, both mixtures of epoxy and polyimide, with mass ratios of 1:1.5 and 1:1, respectively. The relationships between the energy release rate, the film thickness and the adhesive layer thickness are measured during the steady-state peeling process, and the effects of the adhesive layer on the energy release rate are analyzed. Using the experimental results, several analytical criteria for steady-state peeling, based on the beam bending model and on a two-dimensional finite element model, are critically assessed. Through this assessment we find that the cohesive zone criterion based on the beam bending model is suitable for the weak-interface-strength case and describes a macroscale fracture process zone, while the two-dimensional finite element model is effective for both strong and weak interfaces and describes a small-scale fracture process zone. (C) 2007 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

To construct high-performance Web servers, system builders are increasingly turning to distributed designs. An important challenge that arises in distributed Web servers is the need to direct incoming connections to individual hosts. Previous methods for connection routing have employed a centralized node which handles all incoming requests. In contrast, we propose a distributed approach, called Distributed Packet Rewriting (DPR), in which all hosts of the distributed system participate in connection routing. We argue that this approach promises better scalability and fault-tolerance than the centralized approach. We describe our implementation of four variants of DPR and compare their performance. We show that DPR provides performance comparable to centralized alternatives, measured in terms of throughput and delay under the SPECweb96 benchmark. Finally, we argue that DPR is particularly attractive both for small-scale systems and for systems following the emerging trend toward increasingly intelligent I/O subsystems.
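DPR's central point, that connection routing can be computed independently at every host, can be sketched with a deterministic hash over the TCP connection 4-tuple. This is an illustrative simplification, not the paper's actual four variants, and all names here are hypothetical:

```python
import hashlib

def dpr_route(src_ip, src_port, dst_ip, dst_port, hosts):
    """Distributed Packet Rewriting sketch: whichever host receives a
    packet hashes the connection 4-tuple to decide which cluster host
    should serve the connection; if that host is not itself, it would
    rewrite and forward the packet. Because every host applies the same
    deterministic function, all packets of one connection converge on
    one server with no central dispatcher. (Hypothetical simplification
    of DPR; the real variants differ in how they share routing state.)"""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return hosts[digest % len(hosts)]
```

Determinism is what removes the centralized node: every host, given the same packet, picks the same server.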

Relevance: 100.00%

Abstract:

In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host, namely the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see whether it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
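The redirection decision described above, serve the connection locally unless some other host has fewer established connections, can be sketched as follows (illustrative names; in the prototype this state lives in kernel hash tables and is refreshed by the periodic load broadcasts):

```python
def choose_host(receiving_host, established):
    """Sketch of the prototype's redirection policy: when a new
    connection arrives, compare the receiving host's number of
    established connections with the cluster minimum, and redirect
    (via DPR) only if some other host is strictly less loaded.
    `established` maps host -> current connection count.
    (Illustrative simplification of the paper's mechanism.)"""
    target = min(established, key=established.get)
    if established[target] < established[receiving_host]:
        return target          # rewrite the connection toward the lighter host
    return receiving_host      # accept and serve locally
```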

Relevance: 100.00%

Abstract:

We consider a multiple-femtocell deployment in a small area which shares spectrum with the underlaid macrocell. We design a joint energy and radio spectrum scheme which aims not only for co-existence with the macrocell, but also for an energy-efficient implementation of the multiple femtocells. In particular, aggregate energy usage on dense femtocell channels is formulated taking into account the cost of both spectrum and energy usage. We investigate an energy- and spectral-efficient approach to balance the two costs by varying the number of active sub-channels and their energy. The proposed scheme is analyzed by deriving closed-form expressions for the interference towards the macrocell and for the outage capacity. Analytically, we identify discrete regions within which the best outage capacity is achieved with the same number of active sub-channels. Through a joint optimization of the sub-channels and their energy, properties can be found for the maximum outage capacity under realistic constraints. Using asymptotic and numerical analysis, we observe that in a dense femtocell deployment, the optimum utilization of energy and spectrum to maximize the outage capacity converges towards a round-robin scheduling approach for a very small outage threshold. This is the inverse of the traditional greedy approach. © 2012 IEEE.

Relevance: 100.00%

Abstract:

The use of multicores is becoming widespread in the field of embedded systems, many of which have real-time requirements. Hence, ensuring that real-time applications meet their timing constraints is a prerequisite before deploying them on these systems. This necessitates considering the impact of contention for shared low-level hardware resources, such as the front-side bus (FSB), on the Worst-Case Execution Time (WCET) of the tasks. Towards this aim, this paper proposes a method to determine an upper bound on the number of bus requests that tasks executing on a core can generate in a given time interval. We show that our method yields tighter upper bounds in comparison with the state-of-the-art. We then apply our method to compute the extra contention delay incurred by tasks when they are co-scheduled on different cores and access the shared main memory over a shared bus, access to which is granted using a round-robin arbitration (RR) protocol.
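Under round-robin arbitration, each of a core's bus requests waits for at most one pending request from each competing core, which gives the classic (non-tight) contention bound that the paper's per-interval request bound then sharpens. A minimal sketch of the classic bound, with illustrative parameter names:

```python
def rr_contention_bound(n_requests, n_cores, bus_access_latency):
    """Classic contention bound under round-robin (RR) bus arbitration:
    each of a task's own n_requests bus accesses can be delayed by at
    most one access from each of the other (n_cores - 1) cores, so the
    extra delay is at most n_requests * (n_cores - 1) * latency time
    units. (The paper derives a tighter, interval-based bound; this is
    only the baseline it improves on.)"""
    return n_requests * (n_cores - 1) * bus_access_latency
```

With a single core there is no competition and the bound is zero, as expected.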

Relevance: 100.00%

Abstract:

Satellite data are increasingly used to provide observation-based estimates of the effects of aerosols on climate. The Aerosol-cci project, part of the European Space Agency's Climate Change Initiative (CCI), was designed to provide essential climate variables for aerosols from satellite data. Eight algorithms, developed for the retrieval of aerosol properties using data from AATSR (4), MERIS (3) and POLDER, were evaluated to determine their suitability for climate studies. The primary result from each of these algorithms is the aerosol optical depth (AOD) at several wavelengths, together with the Ångström exponent (AE), which describes the spectral variation of the AOD for a given wavelength pair. Other aerosol parameters which can be retrieved from satellite observations are not considered in this paper. The AOD and AE (AE only for Level 2) were evaluated against independent collocated observations from the ground-based AERONET sun photometer network and against "reference" satellite data provided by MODIS and MISR. Tools used for the evaluation were developed for daily products as produced by the retrieval with a spatial resolution of 10 × 10 km² (Level 2) and daily or monthly aggregates (Level 3). These tools include statistics for L2 and L3 products compared with AERONET, as well as scoring based on spatial and temporal correlations. In this paper we describe their use in a round robin (RR) evaluation of four months of data, one month for each season in 2008. The amount of data was restricted to four months because of the large effort made to improve the algorithms and to evaluate the improvement and current status before larger data sets are processed. Evaluation criteria are discussed. Results presented show the current status of the European aerosol algorithms in comparison to both AERONET and the MODIS and MISR data.
The comparison leads to a preliminary conclusion that the scores are similar, including those for the references, but the coverage of AATSR needs to be enhanced and further improvements are possible for most algorithms. None of the algorithms, including the references, outperforms all others everywhere. AATSR data can be used for the retrieval of AOD and AE over land and ocean. PARASOL and one of the MERIS algorithms have been evaluated over ocean only, and both algorithms provide good results.

Relevance: 100.00%

Abstract:

The thesis focused on a hardware-based load-balancing solution for web traffic through an F5 content switch acting as the load balancer. In this project, the implemented scenario for distributing HTTP traffic load is based on different CPU usages (processing speeds) of multiple member servers. Two widely used load-balancing algorithms, Round Robin (RR) and the Ratio model (weighted Round Robin), are implemented through the F5 load balancer. To evaluate the performance of the F5 content switch, experimental tests were conducted on the implemented scenarios using the RR and Ratio model load-balancing algorithms. The performance is examined in terms of throughput (bits/sec) and response time of member servers in a load-balancing pool. From these experiments we observed that the Ratio model load-balancing algorithm is most suitable in an environment of servers with different CPU usages, as it allows assigning weights according to CPU usage in both static and dynamic load balancing of servers.
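The difference between the two algorithms compared in the thesis can be sketched as follows: plain RR cycles through servers uniformly, while the Ratio model (weighted RR) gives a server with weight w a proportionally larger share of requests. This is an illustrative sketch with hypothetical names; F5's actual ratio algorithm interleaves assignments differently:

```python
from itertools import cycle, islice

def round_robin(servers):
    """Plain Round Robin: every server gets requests in equal turns,
    regardless of its CPU capacity."""
    return cycle(servers)

def ratio_model(weights):
    """Ratio model (weighted Round Robin) sketch: a server with weight w
    appears w times per cycle, so a faster CPU receives proportionally
    more requests. `weights` maps server -> integer weight."""
    return cycle([s for s, w in weights.items() for _ in range(w)])
```

For weights {"fast": 2, "slow": 1}, the first six assignments come out as fast, fast, slow, fast, fast, slow, i.e. the faster server takes two thirds of the load.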

Relevance: 100.00%

Abstract:

Load balance is a critical issue in distributed systems, such as server grids. In this paper, we propose a Balanced Load Queue (BLQ) model, which combines queuing theory and hydro-dynamic theory, to model load balance in server grids. Based on the BLQ model, we claim that if the system is in a state of global fairness, then the performance of the whole system is best. We propose a load-balancing algorithm based on this model: the algorithm tries its best to keep the system in the global fairness state using job deviation. We present three strategies for job deviation: best node, best neighbour, and random selection. A number of experiments are conducted to compare the three strategies, and the results show that the best neighbour strategy is the best among those proposed. Furthermore, the proposed algorithm with the best neighbour strategy outperforms the traditional round robin algorithm in terms of processing delay, needs very limited system information, and is robust.
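The three job-deviation strategies compared above can be sketched as a single selection function. Names and data shapes are illustrative; the BLQ model itself is not reproduced here:

```python
import random

def deviation_target(node, loads, neighbours, strategy):
    """Pick the node an excess job is deviated to. `loads` maps every
    node to its current queue length; `neighbours` maps a node to its
    direct neighbours. (Illustrative sketch of the paper's three
    strategies: best node, best neighbour, random selection.)"""
    if strategy == "best_node":          # least-loaded node in the whole grid
        return min(loads, key=loads.get)
    if strategy == "best_neighbour":     # least-loaded direct neighbour only
        return min(neighbours[node], key=lambda n: loads[n])
    if strategy == "random":             # any node, chosen uniformly
        return random.choice(list(loads))
    raise ValueError(strategy)
```

Note the trade-off the paper measures: best node needs global load information, while best neighbour needs only the loads of a node's direct neighbours.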

Relevance: 100.00%

Abstract:

DDoS attack source traceback is an open and challenging problem. Deterministic packet marking (DPM) is a simple and effective traceback mechanism, but current DPM-based traceback schemes are not practical due to their scalability constraints. We noticed that only a limited number of computers and routers are involved in an attack session. Therefore, we only need to mark these involved nodes for traceback purposes, rather than marking every node of the Internet as existing schemes do. Based on this finding, we propose a novel marking on demand (MOD) traceback scheme based on the DPM mechanism. To trace back to the attack sources involved, we only need to mark the involved ingress routers using the traditional DPM strategy. Similar to existing schemes, we require participating routers to install a traffic monitor. When a monitor notices a surge of suspicious network flows, it requests a unique mark from a globally shared MOD server and marks the suspicious flows with that mark. At the same time, the MOD server records the marks and their related requesting IP addresses. Once a DDoS attack is confirmed, the victim can obtain the attack sources by querying the MOD server with the marks extracted from attack packets. Moreover, we use the marking space in a round-robin style, which essentially addresses the scalability problem of the existing DPM-based traceback schemes. We establish a mathematical model for the proposed traceback scheme and thoroughly analyze the system. Theoretical analysis and extensive real-world data experiments demonstrate that the proposed traceback method is feasible and effective.
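The round-robin reuse of the mark space, which is what removes the scalability constraint of earlier DPM schemes, can be sketched as follows. The class and method names are hypothetical and the scheme is heavily simplified:

```python
class MODServer:
    """Sketch of the marking-on-demand (MOD) server: marks are drawn
    from a small finite space and reassigned in round-robin order, and
    the server records which requesting router currently holds each
    mark, so a victim can map a mark extracted from attack packets back
    toward an attack source. (Hypothetical simplification.)"""

    def __init__(self, mark_space_size):
        self.size = mark_space_size
        self.next_mark = 0
        self.holder = {}                     # mark -> requesting router IP

    def request_mark(self, router_ip):
        """A monitor that sees a surge of suspicious flows asks for a mark."""
        mark = self.next_mark
        self.next_mark = (self.next_mark + 1) % self.size   # round-robin reuse
        self.holder[mark] = router_ip
        return mark

    def trace(self, mark):
        """The victim maps a mark from attack packets to a router."""
        return self.holder.get(mark)
```

Because only routers involved in active attack sessions hold marks, a small mark space recycled round-robin suffices, instead of one mark per Internet node.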

Relevance: 100.00%

Abstract:

The material presents process and thread scheduling policies. Process scheduling (or processor scheduling) concerns the decision of which process will run at a given instant and on which processor. The material also presents relevant scheduling algorithms, including examples of preemptive and non-preemptive algorithms, the objectives and criteria of scheduling, and different types of scheduling: FIFO (first-in first-out) scheduling, circular RR (Round-Robin) scheduling, SPF (Shortest Process First) scheduling, SRT (Shortest Remaining Time) scheduling, FSS (Fair Share Scheduling), real-time scheduling, Java thread scheduling (JVM), and scheduling in Windows XP and UNIX.
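Of the policies listed, circular RR (Round-Robin) scheduling is the one most directly related to this search. A minimal preemptive sketch, ignoring arrivals, priorities and I/O:

```python
from collections import deque

def round_robin_schedule(bursts, quantum):
    """Minimal Round-Robin (RR) scheduler sketch: each process runs for
    at most `quantum` time units, then is preempted to the back of the
    ready queue. `bursts` maps pid -> CPU burst length; returns
    pid -> completion time. (Illustrative only; a real scheduler also
    handles arrivals, priorities and blocking I/O.)"""
    ready = deque(bursts.items())            # (pid, remaining time)
    clock, completion = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run                         # process occupies the CPU
        if remaining > run:
            ready.append((pid, remaining - run))   # preempt, requeue at back
        else:
            completion[pid] = clock          # finished within this slice
    return completion
```

With bursts A=3 and B=2 and a quantum of 2, A runs first but is preempted, B finishes at time 4, and A finishes at time 5.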

Relevance: 100.00%

Abstract:

The video lecture presents process scheduling, focusing on processor scheduling policies. It highlights scheduling with priorities (static or dynamic), how thread scheduling works in Java, the levels of scheduling (high level, intermediate level, low level), and the criteria taken into account by the scheduling algorithm. It also presents the objectives and criteria of scheduling and the following types: FIFO (first-in first-out) scheduling, circular RR (Round-Robin) scheduling, SPF (Shortest Process First) scheduling, SRT (Shortest Remaining Time) scheduling, FSS (Fair Share Scheduling), real-time scheduling, Java thread scheduling (JVM), and scheduling in Windows XP and UNIX.

Relevance: 100.00%

Abstract:

The third primary production algorithm round robin (PPARR3) compares output from 24 models that estimate depth-integrated primary production from satellite measurements of ocean color, as well as seven general circulation models (GCMs) coupled with ecosystem or biogeochemical models. Here we compare the global primary production fields corresponding to eight months of 1998 and 1999 as estimated from common input fields of photosynthetically available radiation (PAR), sea-surface temperature (SST), mixed-layer depth, and chlorophyll concentration. We also quantify the sensitivity of the ocean-color-based models to perturbations in their input variables. The pair-wise correlation between ocean-color models was used to cluster them into groups of related output, which reflect the regions and environmental conditions under which they respond differently. The groups do not follow model complexity with regard to wavelength or depth dependence, though they are related to the manner in which temperature is used to parameterize photosynthesis. Global average PP varies by a factor of two between models. The models diverged the most for the Southern Ocean, SST under 10 degrees C, and chlorophyll concentrations exceeding 1 mg Chl m(-3). Based on the conditions under which the model results diverge most, we conclude that current ocean-color-based models are challenged by high-nutrient low-chlorophyll conditions and by extreme temperatures or chlorophyll concentrations. The GCM-based models predict primary production comparable to those based on ocean color: they estimate higher values in the Southern Ocean, at low SST, and in the equatorial band, while they estimate lower values in eutrophic regions (probably because the area of high chlorophyll concentrations is smaller in the GCMs). Further progress in primary production modeling requires improved understanding of the effect of temperature on photosynthesis and better parameterization of the maximum photosynthetic rate.
(c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Abstract:

A methodology for analyzing solar access and its influence on both air temperature and thermal comfort of the urban environment was developed here by applying the capabilities of GIS tools. Urban canyons in a specific area of a medium-sized Brazilian city were studied. First, a computational algorithm was applied to determine sky view factors (SVF) and sun-paths in urban canyons. Then, air temperatures at 40 measurement points were collected within the study area. Solar radiation values of these canyons were determined and subsequently stored in a GIS database. The creation of thermal maps for the whole neighbourhood was made possible by a statistical treatment of the data, through interpolation of values. All data could then be spatially cross-examined. In addition, thermal comfort maps for summer and winter periods were generated. The methodology allowed the identification of thermal tendencies within the neighbourhood, which can be useful in the conception of guidelines for urban planning purposes.

Relevance: 100.00%

Abstract:

In this action research study of my classroom of 10th grade Algebra II students, I investigated three related areas. First, I looked at how heterogeneous cooperative groups, where students in the group are responsible for presenting material, increase the number of students on task and the time on task when compared to individual practice. I noticed that their time on task might have been about the same, but they were communicating with each other mathematically. The second area I examined was the effect heterogeneous cooperative groups had on the teacher's and the students' verbal and nonverbal problem-solving skills and understanding when compared to individual practice. At the end of the action research, students were questioning each other, and the instructor was answering questions only when the entire group had a question. The third area of data collection focused on what effect heterogeneous cooperative groups had on students' listening skills when compared to individual practice. In the research I implemented individual quizzes and individual presentations. Both of these had a positive effect on listening in the groups. As a result of this research, I plan to continue implementing the round robin style of in-class practice with heterogeneous grouping and randomly selected individual presentations. For individual accountability I will continue the practice of individual quizzes one to two times a week.
