963 results for Sharp-tailed grouse.
Abstract:
We examine the question of whether to employ the first-come-first-served (FCFS) discipline or the processor-sharing (PS) discipline at the hosts in a distributed server system. We are interested in the case in which service times are drawn from a heavy-tailed distribution, and so have very high variability. When task sizes are highly variable, traditional wisdom favors the PS discipline, because it allows small tasks to avoid being delayed behind large tasks in a queue. However, we show that system performance can actually be significantly better under FCFS queueing, if each task is assigned to a host based on the task's size. By task assignment, we mean an algorithm that inspects incoming tasks and assigns them to hosts for service. The particular task assignment policy we propose is called SITA-E: Size Interval Task Assignment with Equal Load. Surprisingly, under SITA-E, FCFS queueing typically outperforms the PS discipline by a factor of about two, as measured by mean waiting time and mean slowdown (a task's waiting time divided by its service time). We compare the FCFS/SITA-E policy to the processor-sharing case analytically; in addition we compare it to a number of other policies in simulation. We show that the benefits of SITA-E are present even in small-scale distributed systems (four or more hosts). Furthermore, SITA-E is a static policy that does not incorporate feedback knowledge of the state of the hosts, which allows for a simple and scalable implementation.
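As an illustrative aside (not part of the abstract), the size-interval idea behind SITA-E can be sketched in a few lines of Python. This is a minimal sketch under simplifying assumptions: the cutoffs are estimated empirically from a sample of task sizes rather than derived from the size distribution as in the paper, and the names `sita_e_cutoffs` and `assign_host` are hypothetical.

```python
import bisect
import numpy as np

def sita_e_cutoffs(sample_sizes, num_hosts):
    """Estimate size-interval boundaries so that each interval carries
    roughly an equal share of the total work (sum of task sizes)."""
    sizes = np.sort(np.asarray(sample_sizes, dtype=float))
    cum_work = np.cumsum(sizes)
    total = cum_work[-1]
    cutoffs = []
    for k in range(1, num_hosts):
        # smallest size at which the accumulated work reaches k/num_hosts of the total
        idx = np.searchsorted(cum_work, k * total / num_hosts)
        cutoffs.append(sizes[min(idx, len(sizes) - 1)])
    return cutoffs  # num_hosts - 1 boundaries

def assign_host(task_size, cutoffs):
    """Route a task to the host whose size interval contains it;
    host 0 serves the smallest tasks, the last host the largest."""
    return bisect.bisect_right(cutoffs, task_size)

# Example: a heavy-tailed (Pareto) sample of task sizes, 4 hosts
rng = np.random.default_rng(0)
sample = rng.pareto(1.1, 100_000) + 1.0
cuts = sita_e_cutoffs(sample, 4)
print(cuts, assign_host(2.0, cuts))
```

Each interval then carries roughly the same expected work, so every FCFS host sees equal load but a much narrower range of task sizes, which is what keeps small tasks from waiting behind large ones.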
Abstract:
We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load at the server hosts will result in optimal performance. We show that this conventional wisdom is less true when the task size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more), and the more heavy-tailed the workload, the greater the improvement factor. We evaluate the tradeoff between the improvement in slowdown and the increase in waiting time in a system using SITA-V, and show conditions under which SITA-V represents a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because it has a simple implementation that requires no communication from the server hosts back to the task router.
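For context (this is standard M/G/1 theory rather than a result quoted from the abstract), the leverage that load unbalancing has on mean slowdown under FCFS follows from the fact that a task's waiting time is independent of its own size:

```latex
% Mean slowdown at an M/G/1/FCFS host, with slowdown defined as waiting
% time divided by service time (Pollaczek-Khinchine waiting time):
E[S] \;=\; E[W]\,E\!\left[\tfrac{1}{X}\right],
\qquad
E[W] \;=\; \frac{\lambda\,E[X^2]}{2\,(1-\rho)} .
```

Under a heavy-tailed size distribution, E[1/X] is dominated by the smallest tasks, so lowering the load (and hence E[W]) at the hosts that serve small tasks reduces mean slowdown by far more than the corresponding increase at the heavily loaded hosts costs.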
Abstract:
Fast forward error correction (FEC) codes are becoming an important component in bulk content delivery. They fit naturally into multicast scenarios as a way to deal with losses, and they are now seeing use in peer-to-peer networks as a basis for distributing load. In particular, new irregular sparse parity-check codes have been developed with provable average linear-time performance, a significant improvement over previous codes. In this paper, we present a new heuristic for generating codes with similar performance, based on observing a server with an oracle for client state. This heuristic is easy to implement and provides further intuition into the need for an irregular, heavy-tailed distribution.
Abstract:
Previous studies have shown that giving preferential treatment to short jobs helps reduce the average system response time, especially when the job size distribution possesses the heavy-tailed property. Since it has been shown that the TCP flow length distribution also has this property, it is natural to let short TCP flows enjoy better service inside the network. Analyzing such a discriminatory system requires modification of traditional job scheduling models, since network traffic managers usually do not have detailed knowledge about individual flows, such as their lengths. The Multi-Level (ML) queue, proposed by Kleinrock, can be used to characterize such a system. In an ML queueing system, a flow's priority is reduced the longer the flow stays in the system. We present an approximate analysis of the ML queueing system to obtain a closed-form expression for the average system response time for general flow size distributions. We show that the response time of short flows can be significantly reduced without penalizing long flows.
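As a toy illustration only (the paper's contribution is an approximate closed-form analysis, not a simulation), the demotion mechanism of an ML queue can be sketched as follows; the job sizes, thresholds, and service quantum below are arbitrary:

```python
from collections import deque

def ml_queue_completion_times(sizes, thresholds, quantum=0.1):
    """Toy multi-level (ML) queue with all jobs present at time 0.

    A job starts in the highest-priority level and is demoted to the
    next level once its attained service crosses that level's
    threshold; levels are served in strict priority order, FCFS and in
    small quanta within a level.  Returns each job's completion time."""
    levels = [deque() for _ in range(len(thresholds) + 1)]
    jobs = [[size, 0.0] for size in sizes]          # [remaining, attained]
    for i in range(len(sizes)):
        levels[0].append(i)
    finish = [0.0] * len(sizes)
    clock, unfinished = 0.0, len(sizes)
    while unfinished:
        lvl = next(l for l, q in enumerate(levels) if q)   # highest non-empty priority
        i = levels[lvl].popleft()
        remaining, attained = jobs[i]
        served = min(quantum, remaining)
        clock += served
        remaining -= served
        attained += served
        if remaining <= 1e-12:                      # job done
            finish[i] = clock
            unfinished -= 1
            continue
        new_lvl = lvl                               # demote past any crossed thresholds
        while new_lvl < len(thresholds) and attained >= thresholds[new_lvl]:
            new_lvl += 1
        jobs[i] = [remaining, attained]
        levels[new_lvl].append(i)
    return finish

# Three short flows and one long flow; the long flow is demoted twice.
print(ml_queue_completion_times([1, 1, 1, 20], thresholds=[2, 5]))
```

The short flows finish early, while the long flow, demoted twice, still completes at about the total work in the system; this is the behaviour whose average the paper's closed-form analysis captures for general flow size distributions.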
Abstract:
Internet measurements show that the size distribution of Web-based transactions is usually very skewed; a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms that favor short jobs, we propose to perform differentiated control over Web-based transactions to give preferential service to short web requests. The control is realized through service semantics provided by Internet Traffic Managers, a Diffserv-like architecture. To evaluate the performance of such a control system, it is necessary to have a fast but accurate analytical method. To this end, we model the Internet as a time-shared system and propose a numerical approach that utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to agree closely with those obtained by packet-level simulation, which runs orders of magnitude slower than our numerical method.
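For reference (the abstract invokes the law but does not state it), Kleinrock's conservation law in its standard form for an M/G/1 queue under any work-conserving, non-preemptive scheduling discipline with job classes p = 1, ..., P fixes the load-weighted sum of mean waiting times:

```latex
\sum_{p=1}^{P} \rho_p W_p \;=\; \frac{\rho\, W_0}{1-\rho},
\qquad
W_0 = \sum_{p=1}^{P} \frac{\lambda_p\, \overline{x_p^2}}{2},
\quad
\rho_p = \lambda_p \bar{x}_p,\;\; \rho = \sum_{p} \rho_p .
```

Any scheduling choice that lowers the waiting time of short requests must therefore raise the weighted waiting time of the rest by a known amount, which is what makes the law usable as a constraint when solving the model numerically.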
Abstract:
Under high loads, a Web server may be servicing many hundreds of connections concurrently. In traditional Web servers, the question of the order in which concurrent connections are serviced has been left to the operating system. In this paper we ask whether servers might provide better service by using non-traditional service ordering. In particular, for the case when a Web server is serving static files, we examine the costs and benefits of a policy that gives preferential service to short connections. We start by assessing the scheduling behavior of a commonly used server (Apache running on Linux) with respect to connection size and show that it does not appear to provide preferential service to short connections. We then examine the potential performance improvements of a policy that does favor short connections (shortest-connection-first). We show that mean response time can be improved by factors of four or five under shortest-connection-first, as compared to an (Apache-like) size-independent policy. Finally we assess the costs of shortest-connection-first scheduling in terms of unfairness (i.e., the degree to which long connections suffer). We show that under shortest-connection-first scheduling, long connections pay very little penalty. This surprising result can be understood as a consequence of heavy-tailed Web server workloads, in which most connections are small, but most server load is due to the few large connections. We support this explanation using analysis.
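As a purely illustrative sketch (not the instrumented Apache/Linux mechanism studied in the paper), shortest-connection-first amounts to ordering the pending static-file connections by the number of bytes left to send; the helper below is hypothetical:

```python
import heapq

def service_order(connections):
    """Toy shortest-connection-first ordering: yield connection ids in
    increasing order of the bytes left to send.  A real server would
    interleave this choice with its socket event loop."""
    heap = [(bytes_left, conn_id) for conn_id, bytes_left in connections]
    heapq.heapify(heap)
    while heap:
        _, conn_id = heapq.heappop(heap)
        yield conn_id

# One large, one small, and one medium transfer: smallest goes first.
print(list(service_order([(1, 500_000), (2, 3_000), (3, 80_000)])))  # [2, 3, 1]
```

Because most connections are short but most of the load comes from the few large ones (the heavy-tailed property noted above), the large connections end up waiting behind very little work, which is why the unfairness penalty reported in the paper is small.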
Abstract:
Recent research has exposed new breeds of attacks that are capable of denying service to or inflicting significant damage on TCP flows without the need to sustain the attack traffic. Such attacks are often referred to as "low-rate" attacks, and they stand in sharp contrast to traditional Denial of Service (DoS) attacks, which can completely shut off TCP flows by flooding an Internet link. In this paper, we study the impact of these new breeds of attacks and the extent to which defense mechanisms are capable of mitigating that impact. By adopting a simple discrete-time model with a single TCP flow and a non-oblivious adversary, we were able to expose new variants of these low-rate attacks that could potentially have high attack potency per attack burst. Our analysis focuses on worst-case scenarios, so our results should be regarded as upper bounds on the impact of low-rate attacks rather than a real assessment under a specific attack scenario.
Abstract:
This article presents a new neural pattern recognition architecture based on multichannel data representation. The architecture employs generalized ART modules as building blocks to construct a supervised learning system that generates recognition codes on channels dynamically selected in context, using serial and parallel match tracking led by inter-ART vigilance signals.
Abstract:
A new neural network architecture for spatial pattern recognition using multi-scale pyramidal coding is described here. The network has an ARTMAP structure with a new class of ART module, called the Hybrid ART module, as its front-end processor. The Hybrid ART module, which has processing modules corresponding to each scale channel of the multi-scale pyramid, employs channels of finer scales only if they are necessary to discriminate a pattern from others. This process is effected by serial match tracking. Parallel match tracking is also used to select the spatial location with the most salient feature and to limit attention to that part.
Abstract:
This article compares the performance of Fuzzy ARTMAP with that of Learned Vector Quantization and Back Propagation on a handwritten character recognition task. Training with Fuzzy ARTMAP to a fixed criterion used many fewer epochs. Voting with Fuzzy ARTMAP yielded the highest recognition rates.
Abstract:
A neural network model of 3-D visual perception and figure-ground separation by visual cortex is introduced. The theory provides a unified explanation of how a 2-D image may generate a 3-D percept; how figures pop out from cluttered backgrounds; how spatially sparse disparity cues can generate continuous surface representations at different perceived depths; how representations of occluded regions can be completed and recognized without usually being seen; how occluded regions can sometimes be seen during percepts of transparency; how high spatial frequency parts of an image may appear closer than low spatial frequency parts; how sharp targets are detected better against a figure and blurred targets are detected better against a background; how low spatial frequency parts of an image may be fused while high spatial frequency parts are rivalrous; how sparse blue cones can generate vivid blue surface percepts; how 3-D neon color spreading, visual phantoms, and tissue contrast percepts are generated; and how conjunctions of color and depth may rapidly pop out during visual search. These explanations derive from an ecological analysis of how monocularly viewed parts of an image inherit the appropriate depth from contiguous binocularly viewed parts, as during DaVinci stereopsis. The model predicts the functional role and ordering of multiple interactions within and between the two parvocellular processing streams that join LGN to prestriate area V4. Interactions from cells representing larger scales and disparities to cells representing smaller scales and disparities are of particular importance.
Abstract:
The atom pencil we describe here is a versatile tool that writes arbitrary structures by atomic deposition in a serial lithographic process. This device consists of a transversely laser-cooled and collimated cesium atomic beam that passes through a 4-pole atom-flux concentrator and impinges onto micron- and sub-micron-sized apertures. The aperture translates above a fixed substrate and enables the writing of sharp features with sizes down to 280 nm. We have investigated the writing and clogging properties of an atom pencil tip fabricated from silicon oxide pyramids perforated at the tip apex with a sub-micron aperture.
Abstract:
This thesis concentrates on the historical aspects of the elitist field sports of deer stalking and game shooting, as practiced by four Irish landed ascendancy families in the south west of Ireland. Four great estates were selected for study. Two of these were, by Irish standards, very large: the Kenmare estate of over 136,000 acres in the ownership of the Roman Catholic Earls of Kenmare, and the Herbert estate of over 44,000 acres in the ownership of the Protestant Herbert family. The other two were, in relative terms, small: the Grehan estate of c.7,500 acres in the ownership of the Roman Catholic Grehan family, and the Godfrey estate of c.5,000 acres in the ownership of the Protestant Barons Godfrey. This mixture of contrasting estate sizes, owners' religions, and of nobleman, minor aristocrat and untitled gentry should, it is argued, reveal the diversity of the field sports and lifestyles of their owners, and go some way towards assessing the contributions, good or bad, they have bequeathed to modern Ireland. Equally, it should help in assessing what importance, if any, applied to hunting. In this context, hunting is used in its broadest meaning, and includes deer stalking and game shooting, as well as hunting with dogs and hounds on foot and horseback. Where a specific type of hunting is involved, it is so described; for example, fox hunting, stag hunting, hare hunting. Similarly, the term game is sometimes used in sporting literature to encompass all species of quarry killed, and can include deer, ground game (hares and rabbits), waterfowl, and various species of game birds. Where it refers to specific species, these are so described; for example grouse, pheasants, woodcock, wild duck, etc. Since two of these estates - the Kenmare and the Herbert - each created a deer forest, unique in mid-19th-century Ireland, they form the core study estates; the two smaller estates serve as comparative studies. And, equally uniquely, as these two larger estates held the only remnant population of native Irish red deer, the survival of that herd itself forms a concomitant core area of analysis. The numerical descriptions applied to these animals in popular literature are critically reassessed against primary-source historical evidence, as are the so-called deer forest 'clearances'. The core period, 1840 to 1970, is selected as the seminal period: it spans 130 years, from the creation of the deer forests to the point when a fundamental change in policy and administration was introduced by the state. Comparison is made with similar estates elsewhere, in Britain and especially in Scotland. Their influence on Irish methods and styles of hunting is historically examined.
Abstract:
The use of optical sensor technology for the non-invasive determination of key pack quality parameters improves package/product quality. This technology can be used for the optimization of packaging processes, improvement of product shelf-life and maintenance of quality. In recent years, there has been a major focus on O2 and CO2 sensor development, as these are the key gases used in modified atmosphere packaging (MAP) of food. The first and second experimental chapters (chapters 2 and 3) describe the development of O2, pH and CO2 solid-state sensors and their (potential) use for food packaging applications. A dual-analyte sensor for dissolved O2 and pH, with one bi-functional reporter dye (a meso-substituted Pd- or Pt-porphyrin) embedded in a plasticized PVC membrane, was developed in chapter 2. The CO2 sensor developed in chapter 3 comprised a phosphorescent reporter dye, Pt(II) tetrakis(pentafluorophenyl)porphyrin (PtTFPP), and a colourimetric pH indicator, α-naphtholphthalein (NP), incorporated in a plastic matrix together with a phase-transfer agent, tetraoctyl- or cetyltrimethylammonium hydroxide (TOA-OH or CTA-OH). The third experimental chapter, chapter 4, describes the development of liquid O2 sensors for rapid microbiological determination, which is important for the improvement and assurance of food safety systems. This automated screening assay produced characteristic profiles with a sharp increase in fluorescence above the baseline level at a certain threshold time (TT), which can be correlated with the initial microbial load of the sample, and was applied to various raw fish and horticultural samples. Chapter 5, the fourth experimental chapter, reports on the successful application of the developed O2 and CO2 sensors for quality assessment of MAP mushrooms during storage for 7 days at 4 °C.
Abstract:
In order to widely use Ge and III-V materials instead of Si in advanced CMOS technology, the processing and integration of these materials must be well established so that their high-mobility benefit is not swamped by imperfect manufacturing procedures. In this dissertation, a number of key bottlenecks in the realization of Ge devices are investigated. We address the challenge of forming low-resistivity contacts on n-type Ge, comparing conventional rapid thermal annealing (RTA) and advanced laser thermal annealing (LTA) techniques. LTA appears to be a feasible approach for realizing low-resistivity contacts, with a remarkably sharp germanide-substrate interface and a contact resistivity on the order of 10⁻⁷ Ω·cm². Furthermore, the influence of RTA and LTA on dopant activation and leakage-current suppression in n+/p Ge junctions was compared. While providing a very high active carrier concentration (> 10²⁰ cm⁻³), LTA resulted in a higher leakage current than RTA, which provided a lower carrier concentration (~10¹⁹ cm⁻³). This indicates a trade-off between a high activation level and junction leakage current. A high ION/IOFF ratio of ~10⁷ was obtained, which to the best of our knowledge is the best value reported for n-type Ge so far. Simulations were carried out to investigate how target sputtering, dose retention, and damage formation are produced in thin-body semiconductors by energetic ion impacts, and how they depend on the physical properties of the target material. Solid-phase epitaxy studies in wide and thin Ge fins confirmed the formation of twin-boundary defects and random nucleation growth, as in Si, but here an annealing temperature of 600 °C was found to be effective in reducing these defects. Finally, a non-destructive doping technique was successfully implemented to dope Ge nanowires, whereby nanowire resistivity was reduced by 5 orders of magnitude using a PH3-based in-diffusion process.