Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When a number of such connections share a common endpoint, that endpoint has the opportunity to correlate these end-to-end measurements to better diagnose and control the use of shared resources. A valuable characterization of such shared resources is the "loss topology". From the perspective of a server with concurrent connections to multiple clients, the loss topology is a logical tree rooted at the server in which edges represent lossy paths between a pair of internal network nodes. We develop an end-to-end unicast packet probing technique and an associated analytical framework to: (1) infer loss topologies, (2) identify loss rates of links in an existing loss topology, and (3) augment a topology to incorporate the arrival of a new connection. Correct, efficient inference of loss topology information enables new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that our approach is robust in terms of its accuracy and convergence over a wide range of network conditions.
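To make the inference idea concrete, here is a minimal Python sketch of the classic shared-path loss estimator that this family of tomography techniques builds on: under a tree loss model with independent links, the marginal success probabilities of two receivers and their joint success determine the success rate of the path segment they share. The paper's unicast striped-probing and topology-grouping machinery is not reproduced; the traces and rates below are hypothetical.

```python
import numpy as np

def shared_path_success(receipts_i, receipts_j):
    """Estimate the success probability of the path segment shared by
    receivers i and j from binary packet-receipt traces (True = received).
    Under the tree loss model, p_i = a*b_i, p_j = a*b_j and
    p_ij = a*b_i*b_j, so the shared segment's success rate is
    a = p_i * p_j / p_ij."""
    p_i = receipts_i.mean()
    p_j = receipts_j.mean()
    p_ij = (receipts_i & receipts_j).mean()
    return p_i * p_j / p_ij

# Toy example: two receivers behind a shared link with success rate 0.9,
# plus independent per-branch losses (0.95 and 0.97).
rng = np.random.default_rng(0)
n = 100_000
shared = rng.random(n) < 0.90
ri = shared & (rng.random(n) < 0.95)
rj = shared & (rng.random(n) < 0.97)
print(shared_path_success(ri, rj))  # ~0.9, recovering the shared link
```

Grouping receivers by which pairs show the lossiest shared segments is what lets the logical loss tree be assembled one sibling pair at a time.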
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources declines significantly after the second source from the perspective of interface, node, link and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
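As a toy illustration of the marginal-utility calculation (not the paper's statistical or information-theoretic machinery), one can count how many previously unseen links each additional measurement source contributes; the example link sets are hypothetical.

```python
def marginal_utility(discoveries):
    """discoveries: list of sets, one per measurement source, each holding
    the links (or nodes) seen on that source's traceroutes.
    Returns the number of new items each added source contributes."""
    seen, gains = set(), []
    for d in discoveries:
        new = d - seen
        gains.append(len(new))
        seen |= d
    return gains

# Hypothetical traces from 4 sources to the same destination set:
sources = [{"a-b", "b-c", "c-d"}, {"a-b", "b-e"}, {"a-b", "b-c"}, {"b-e", "e-f"}]
print(marginal_utility(sources))  # -> [3, 1, 0, 1]: utility drops off fast
```

A flat gain curve for added destinations, versus a rapidly decaying one for added sources, is exactly the paper's "destinations matter more than sources" observation.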
Abstract:
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery. If the signer's hand blocks his/her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
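A minimal sketch of the kind of geometric test such a system can apply once template coordinates are tracked; the thresholds and the normalization by face height are hypothetical stand-ins for the paper's anthropometric face model.

```python
import numpy as np

def eyebrow_raised(brow_y, eye_y, neutral_gap, face_height, thresh=0.15):
    """Flag a raised eyebrow when the brow-to-eye distance exceeds the
    signer's neutral-pose gap by a fraction of face height (hypothetical
    threshold). Image y grows downward, so the brow lies above the eye."""
    gap = eye_y - brow_y
    return (gap - neutral_gap) / face_height > thresh

def headshake(head_x, min_reversals=3):
    """Detect a side-to-side headshake as repeated direction reversals in
    the horizontal head-position track (a crude periodicity test)."""
    v = np.diff(np.asarray(head_x, dtype=float))
    reversals = int(np.sum(np.sign(v[1:]) * np.sign(v[:-1]) < 0))
    return reversals >= min_reversals

print(eyebrow_raised(brow_y=90, eye_y=140, neutral_gap=16, face_height=200))  # True
print(headshake([0, 4, 8, 3, -2, 2, 6, 1, -4]))                               # True
```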
Abstract:
Overlay networks have become popular in recent times for content distribution and end-system multicasting of media streams. In the latter case, the motivation is based on the lack of widespread deployment of IP multicast and the ability to perform end-host processing. However, constructing routes between various end-hosts, so that data can be streamed from content publishers to many thousands of subscribers, each having their own QoS constraints, is still a challenging problem. First, any routes between end-hosts using trees built on top of overlay networks can increase stress on the underlying physical network, due to multiple instances of the same data traversing a given physical link. Second, because overlay routes between end-hosts may traverse physical network links more than once, they increase the end-to-end latency compared to IP-level routing. Third, algorithms for constructing efficient, large-scale trees that reduce link stress and latency are typically more complex. This paper therefore compares various methods of constructing multicast trees between end-systems that vary in terms of implementation costs and their ability to support per-subscriber QoS constraints. We describe several algorithms that make trade-offs between algorithmic complexity, physical link stress and latency. While no algorithm is best in all three respects, we show how it is possible to efficiently build trees for several thousand subscribers with latencies within a factor of two of the optimal, and link stresses comparable to, or better than, existing technologies.
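As one concrete point in this design space, the sketch below builds a degree-bounded overlay tree greedily, always attaching the next host where its end-to-end latency is smallest. This is a simplified stand-in for the algorithms compared in the paper, and the latency matrix is hypothetical; the fanout bound is what indirectly limits link stress at each end-host.

```python
def build_tree(latency, root, max_fanout):
    """Greedy degree-bounded tree sketch: repeatedly attach the unattached
    host whose best attachment point yields the smallest end-to-end latency.
    latency[i][j] is the measured unicast delay between end-hosts i and j."""
    n = len(latency)
    dist = {root: 0.0}                      # end-to-end latency from root
    children = {i: [] for i in range(n)}    # the tree being built
    while len(dist) < n:
        best = None
        for u in range(n):
            if u in dist:
                continue
            for p in dist:
                if len(children[p]) >= max_fanout:
                    continue  # parent already at its fanout bound
                d = dist[p] + latency[p][u]
                if best is None or d < best[0]:
                    best = (d, p, u)
        d, p, u = best
        dist[u] = d
        children[p].append(u)
    return children, dist

lat = [[0, 10, 12, 25],
       [10, 0, 5, 18],
       [12, 5, 0, 7],
       [25, 18, 7, 0]]
children, dist = build_tree(lat, root=0, max_fanout=2)
print(children, dist)  # {0: [1, 2], 1: [], 2: [3], 3: []} with host 3 at 19
```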
Abstract:
Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact together in real time, and are released at the appropriate times, during autonomous navigation remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a newer target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior. The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.
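SOVEREIGN itself is specified as interacting subsystems of nonlinear differential equations, none of which is reproduced here. Purely to illustrate the reactive-to-planned transition the abstract describes, the toy sketch below lets rewarded plan "chunks" gradually take over from reactive exploration; all names, thresholds and the update rule are hypothetical.

```python
import random

def reinforce(strengths, chunk_id, reward, lr=0.1):
    """Nudge a plan chunk's strength toward the received reward."""
    strengths[chunk_id] += lr * (reward - strengths[chunk_id])

def next_move(state, chunks, strengths, threshold=0.5):
    """Follow the strongest plan chunk that applies to the current state,
    once it has been reinforced past the threshold; otherwise fall back on
    reactive exploration (alternating approach and orienting movements)."""
    applicable = [c for c in chunks if c["state"] == state]
    if applicable:
        best = max(applicable, key=lambda c: strengths[c["id"]])
        if strengths[best["id"]] > threshold:
            return best["move"]
    return random.choice(["approach", "orient"])

chunks = [{"id": 0, "state": "junction", "move": "turn_left"}]
strengths = {0: 0.0}
for _ in range(20):                      # repeated rewarded traversals
    reinforce(strengths, 0, reward=1.0)
print(next_move("junction", chunks, strengths))  # now reads out "turn_left"
```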
Abstract:
We propose a novel data-delivery method for delay-sensitive traffic that significantly reduces the energy consumption in wireless sensor networks without reducing the number of packets that meet end-to-end real-time deadlines. The proposed method, referred to as SensiQoS, leverages the spatial and temporal correlation between the data generated by events in a sensor network and realizes energy savings through application-specific in-network aggregation of the data. SensiQoS maximizes energy savings by adaptively waiting for packets from upstream nodes to perform in-network processing without missing the real-time deadline for the data packets. SensiQoS is a distributed packet scheduling scheme, where nodes make localized decisions on when to schedule a packet for transmission to meet its end-to-end real-time deadline and to which neighbor they should forward the packet to save energy. We also present a localized algorithm for nodes to adapt to network traffic to maximize energy savings in the network. Simulation results show that SensiQoS improves the energy savings in sensor networks where events are sensed by multiple nodes, and spatial and/or temporal correlation exists among the data packets. Energy savings due to SensiQoS increase with the density of the sensor nodes and the size of the sensed events.
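The core local decision can be pictured as a slack computation: a node may hold a packet for aggregation only as long as the remaining deadline budget, net of its estimate of downstream delay, allows. This is a simplified reading of the scheme; the parameter names and the safety margin below are hypothetical.

```python
def max_hold_time(deadline, elapsed, est_downstream_delay, safety=0.9):
    """Longest time this node can hold a packet for in-network aggregation
    while still meeting its end-to-end deadline. est_downstream_delay is
    the node's estimate of the remaining forwarding delay; safety keeps a
    hypothetical margin for estimation error."""
    slack = deadline - elapsed - est_downstream_delay
    return max(0.0, safety * slack)

# A packet with a 200 ms deadline, 60 ms already spent, ~90 ms still to go:
print(max_hold_time(deadline=0.200, elapsed=0.060, est_downstream_delay=0.090))
# -> 0.045: the node may wait up to ~45 ms for correlated upstream packets.
```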
Abstract:
This study, "Civil Rights on the Cell Block: Race, Reform, and Violence in Texas Prisons and the Nation, 1945-1990," offers a new perspective on the historical origins of the modern prison industrial complex, sexual violence in working-class culture, and the ways in which race shaped the prison experience. This study joins new scholarship that reperiodizes the Civil Rights era while also considering how violence and radicalism shaped the civil rights struggle. It places the criminal justice system at the heart of both an older racial order and within a prison-made civil rights movement that confronted the prison's power to deny citizenship and enforce racial hierarchies. By charting the trajectory of the civil rights movement in Texas prisons, my dissertation demonstrates how the internal struggle over rehabilitation and punishment shaped civil rights, racial formation, and the political contest between liberalism and conservatism. This dissertation offers a close case study of Texas, where the state prison system emerged as a national model for penal management. The dissertation begins with a hopeful story of reform marked by an apparently successful effort by the State of Texas to replace its notorious 1940s plantation/prison farm system with an efficient, business-oriented agricultural enterprise system. When this new system was fully operational in the 1960s, Texas garnered plaudits as a pioneering, modern, efficient, and business oriented Sun Belt state. But this reputation of competence and efficiency obfuscated the reality of a brutal system of internal prison management in which inmates acted as guards, employing coercive means to maintain control over the prisoner population. The inmates whom the prison system placed in charge also ran an internal prison economy in which money, food, human beings, reputations, favors, and sex all became commodities to be bought and sold. I analyze both how the Texas prison system managed to maintain its high external reputation for so long in the face of the internal reality and how that reputation collapsed when inmates, inspired by the Civil Rights Movement, revolted. My dissertation shows that this inmate Civil Rights rebellion was a success in forcing an end to the existing system but a failure in its attempts to make conditions in Texas prisons more humane. The new Texas prison regime, I conclude, utilized paramilitary practices, privatized prisons, and gang-related warfare to establish a new system that focused much more on law and order in the prisons than on the legal and human rights of prisoners. Placing the inmates and their struggle at the heart of the national debate over rights and "law and order" politics reveals an inter-racial social justice movement that asked the courts to reconsider how the state punished those who committed a crime while also reminding the public of the inmates' humanity and their constitutional rights.
Abstract:
BACKGROUND: Anterior cruciate ligament (ACL) reconstruction is associated with a high incidence of second tears (graft tears and contralateral ACL tears). These secondary tears have been attributed to asymmetrical lower extremity mechanics. Knee bracing is one potential intervention during rehabilitation with the potential to normalize lower extremity asymmetry; however, little is known about the effect of bracing on movement asymmetry in patients following ACL reconstruction. HYPOTHESIS: Wearing a knee brace would increase knee joint flexion, and joint mechanics would become more symmetrical in the braced condition. OBJECTIVE: To examine how knee bracing affects knee joint function and symmetry over the course of rehabilitation in patients 6 months following ACL reconstruction. STUDY DESIGN: Controlled laboratory study. LEVEL OF EVIDENCE: Level 3. METHODS: Twenty-three adolescent patients rehabilitating from ACL reconstruction surgery were recruited for the study. All subjects underwent a motion analysis assessment during a stop-jump activity, with and without a functional knee brace that resisted extension on the surgical side, 6 months following ACL reconstruction surgery. Statistical analysis utilized a 2 × 2 (limb × brace) analysis of variance with an alpha level of 0.05. RESULTS: Subjects had increased knee flexion on the surgical side when they were braced. The brace condition increased knee flexion velocity, decreased the initial knee flexion angle, and increased the ground reaction force and knee extension moment on both limbs. Side-to-side asymmetry was present across conditions for the vertical ground reaction force and knee extension moment. CONCLUSION: Wearing a knee brace appears to increase lower extremity compliance and promotes normalized loading on the surgical side. CLINICAL RELEVANCE: Knee extension constraint bracing in postoperative ACL patients may improve symmetry of lower extremity mechanics, which is potentially beneficial in progressing rehabilitation and reducing the incidence of second ACL tears.
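For readers unfamiliar with the design, a 2 × 2 (limb × brace) repeated-measures analysis can be run as below; the flexion values are invented purely to show the data layout, not data from the study.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one peak knee-flexion value per
# subject x limb (surgical/intact) x brace (on/off) cell.
data = pd.DataFrame({
    "subject": [s for s in range(6) for _ in range(4)],
    "limb":    ["surgical", "surgical", "intact", "intact"] * 6,
    "brace":   ["on", "off", "on", "off"] * 6,
    "flexion": [62, 55, 66, 64, 60, 54, 65, 63, 61, 56, 67, 65,
                59, 53, 64, 62, 63, 57, 66, 64, 60, 55, 65, 63],
})
res = AnovaRM(data, depvar="flexion", subject="subject",
              within=["limb", "brace"]).fit()
print(res)  # main effects of limb and brace, plus the limb x brace interaction
```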
Abstract:
Software-based control of life-critical embedded systems has become increasingly complex, and to a large extent has come to determine the safety of the human being. For example, implantable cardiac pacemakers have over 80,000 lines of code which are responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) automatically generated into modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation and testing process for complex software-controlled embedded systems.
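A toy flavor of such a translation pass, reducing a timed automaton to a statechart-like dictionary: locations become states, edges become transitions carrying guard and clock-reset labels. The real UPP2SF rules cover a much larger UPPAAL subset (channels, committed locations, clock semantics under a periodic tick), and the pacemaker-style automaton below is hypothetical.

```python
def to_stateflow(ta):
    """Minimal sketch of a timed-automaton -> statechart translation in the
    spirit of UPP2SF: each location maps to a state, each edge to a
    transition whose guard and clock resets are carried over as a label."""
    chart = {"states": list(ta["locations"]), "transitions": []}
    for src, guard, sync, resets, dst in ta["edges"]:
        label = f"[{guard}]{{{'; '.join(f'{c} = 0' for c in resets)}}}"
        chart["transitions"].append({"from": src, "to": dst,
                                     "event": sync, "label": label})
    return chart

# Toy automaton: a lower-rate-interval pacing timer (hypothetical).
ta = {"locations": ["Wait", "Pace"],
      "edges": [("Wait", "t >= LRI", "pace!", ["t"], "Pace"),
                ("Pace", "t >= 0", "", [], "Wait")]}
print(to_stateflow(ta))
```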
Abstract:
There has been a recent revival of interest in the register insertion (RI) protocol because of its high throughput and low delay characteristics. Several variants of the protocol have been investigated with a view to integrating voice and data applications on a single local area network (LAN). In this paper, the performance of an RI ring with a variable-size buffer is studied by modelling and simulation. The chief advantage of the proposed scheme is that an efficient but simple bandwidth allocation scheme is easily incorporated. Approximate formulas are derived for queue lengths, queueing times, and total end-to-end transfer delays. The results are compared with previous analyses and with simulation estimates. The effectiveness of the proposed protocol in ensuring fairness of access under conditions of heavy and unequal loading is investigated.
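The paper derives ring-specific approximations; purely as a generic first cut (not the paper's formulas), per-node queueing delay can be approximated with an M/M/1 sojourn time and summed across the hops a packet traverses, as in this hypothetical example.

```python
def mm1_sojourn(lam, mu):
    """Mean time in an M/M/1 queue (wait + service): a standard first-cut
    approximation for a station with Poisson arrivals at rate lam and
    exponential service at rate mu."""
    assert lam < mu, "queue must be stable (lam < mu)"
    return 1.0 / (mu - lam)

def transfer_delay(lam, mu, hops, link_delay):
    """Crude end-to-end transfer delay: queueing at each node traversed
    plus a fixed per-hop propagation/transmission delay."""
    return hops * (mm1_sojourn(lam, mu) + link_delay)

# 5 hops, 80% load per node, 1 ms per-hop link delay (hypothetical numbers):
print(transfer_delay(lam=800.0, mu=1000.0, hops=5, link_delay=0.001))  # 0.03 s
```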
Abstract:
Ecosystems can alternate suddenly between contrasting persistent states due to internal processes or external drivers. It is important to understand the mechanisms by which these shifts occur, especially in exploited ecosystems. There have been several abrupt marine ecosystem shifts attributed either to fishing, recent climate change or a combination of these two drivers. We show that temperature has been an important driver of the trophodynamics of the North Sea, a heavily fished marine ecosystem, for nearly 50 years and that a recent pronounced change in temperature established a new ecosystem dynamic regime through a series of internal mechanisms. Using an end-to-end ecosystem approach that included primary producers, primary, secondary and tertiary consumers, and detritivores, we found that temperature modified the relationships among species through nonlinearities in the ecosystem involving ecological thresholds and trophic amplifications. Trophic amplification provides an alternative mechanism to positive feedback to drive an ecosystem towards a new dynamic regime, which in this case favours jellyfish in the plankton and decapods and detritivores in the benthos. Although overfishing is often held responsible for marine ecosystem degeneration, temperature can clearly bring about similar effects. Our results are relevant to ecosystem-based fisheries management (EBFM), seen as the way forward to manage exploited marine ecosystems.
Abstract:
Exploring climate and anthropogenic impacts on marine ecosystems requires an understanding of how trophic components interact. However, integrative end-to-end ecosystem studies (experimental and/or modelling) are rare. Experimental investigations often concentrate on a particular group or individual species within a trophic level, while tropho-dynamic field studies typically employ either a bottom-up approach concentrating on the phytoplankton community or a top-down approach concentrating on the fish community. Likewise the emphasis within modelling studies is usually placed upon phytoplankton-dominated biogeochemistry or on aspects of fisheries regulation. In consequence the roles of zooplankton communities (protists and metazoans) linking phytoplankton and fish communities are typically under-represented, if not (especially in fisheries models) ignored. Where represented in ecosystem models, zooplankton are usually incorporated in an extremely simplistic fashion, using empirical descriptions merging various interacting physiological functions governing zooplankton growth and development, and thus ignoring physiological feedback mechanisms. Here we demonstrate, within a modelled plankton food-web system, how trophic dynamics are sensitive to small changes in parameter values describing zooplankton vital rates, and thus the importance of using appropriate zooplankton descriptors. Through a comprehensive review, we reveal the mismatch between empirical understanding and modelling activities, identifying important issues that warrant further experimental and modelling investigation. These include: food selectivity, kinetics of prey consumption and interactions with assimilation and growth, form of voided material, and mortality rates at different age-stages relative to prior nutrient history. In particular there is a need for dynamic data series in which predator and prey of known nutrient history are studied interacting under varied pH and temperature regimes.
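The sensitivity point can be illustrated with a toy NPZ (nutrient-phytoplankton-zooplankton) model: a few percent change in the zooplankton maximum grazing rate visibly shifts the simulated standing stocks. All parameter values below are hypothetical, and the model is far simpler than those the review discusses.

```python
def npz_step(N, P, Z, dt, g=0.6, k=0.3, m=0.2):
    """One Euler step of a toy NPZ model. g is the zooplankton maximum
    grazing rate, k the half-saturation prey level, m the zooplankton
    mortality rate; part of the grazed material is recycled to nutrient
    and part assimilated (efficiency 0.7). All values are illustrative."""
    uptake = P * N / (N + 0.5)          # phytoplankton nutrient uptake
    grazing = Z * g * P / (P + k)       # saturating zooplankton grazing
    dN = -uptake + m * Z + 0.1 * grazing
    dP = uptake - grazing
    dZ = 0.7 * grazing - m * Z
    return N + dt * dN, P + dt * dP, Z + dt * dZ

# Sensitivity of final zooplankton biomass to a 5% change in grazing rate:
for g in (0.60, 0.63):
    N, P, Z = 5.0, 1.0, 0.5
    for _ in range(2000):               # integrate 20 time units
        N, P, Z = npz_step(N, P, Z, dt=0.01, g=g)
    print(g, round(Z, 3))
```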
Abstract:
BACKGROUND: Hypertension and cognitive impairment are prevalent in older people. It is known that hypertension is a direct risk factor for vascular dementia, and recent studies have suggested hypertension also impacts upon the prevalence of Alzheimer's disease. The question is therefore whether treatment of hypertension lowers the rate of cognitive decline. OBJECTIVES: To assess the effects of blood pressure lowering treatments for the prevention of dementia and cognitive decline in patients with hypertension but no history of cerebrovascular disease. SEARCH STRATEGY: The trials were identified through a search of CDCIG's Specialised Register, CENTRAL, MEDLINE, EMBASE, PsycINFO and CINAHL on 27 April 2005. SELECTION CRITERIA: Randomized, double-blind, placebo-controlled trials in which pharmacological or non-pharmacological interventions to lower blood pressure were given for at least six months. DATA COLLECTION AND ANALYSIS: Two independent reviewers assessed trial quality and extracted data. The following outcomes were assessed: incidence of dementia, cognitive change from baseline, blood pressure level, incidence and severity of side effects, and quality of life. MAIN RESULTS: Three trials including 12,091 hypertensive subjects were identified. Average age was 72.8 years. Participants were recruited from industrialised countries. Mean blood pressure at entry across the studies was 170/84 mmHg. All trials instituted a stepped care approach to hypertension treatment, starting with a calcium-channel blocker, a diuretic or an angiotensin receptor blocker. The combined result of the three trials reporting incidence of dementia indicated no significant difference between treatment and placebo (Odds Ratio (OR) = 0.89, 95% CI 0.69, 1.16). Blood pressure reduction resulted in an 11% relative risk reduction of dementia in patients with no prior cerebrovascular disease, but this effect was not statistically significant (p = 0.38) and there was considerable heterogeneity between the trials. The combined results from the two trials reporting change in Mini Mental State Examination (MMSE) did not indicate a benefit from treatment (Weighted Mean Difference (WMD) = 0.10, 95% CI -0.03, 0.23). Both systolic and diastolic blood pressure levels were reduced significantly in the two trials assessing this outcome (WMD = -7.53, 95% CI -8.28, -6.77 for systolic blood pressure; WMD = -3.87, 95% CI -4.25, -3.50 for diastolic blood pressure). Two trials reported adverse effects requiring discontinuation of treatment and the combined results indicated a significant benefit from placebo (OR = 1.18, 95% CI 1.06, 1.30). When analysed separately, however, more patients on placebo in SCOPE were likely to discontinue treatment due to side effects; the converse was true in SHEP 1991. Quality of life data could not be analysed in the three studies. There was difficulty with the control group in this review as many of the control subjects received antihypertensive treatment because their blood pressures exceeded pre-set values. In most cases the study became a comparison of the study drug against a usual antihypertensive regimen. AUTHORS' CONCLUSIONS: There was no convincing evidence from the trials identified that blood pressure lowering prevents the development of dementia or cognitive impairment in hypertensive patients with no apparent prior cerebrovascular disease. There were significant problems identified with analysing the data, however, due to the number of patients lost to follow-up and the number of placebo patients given active treatment. This introduced bias. More robust results may be obtained by analysing one-year data to reduce differential drop-out, or by conducting a meta-analysis using individual patient data.
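For readers unfamiliar with how pooled estimates such as OR = 0.89 (95% CI 0.69, 1.16) arise, here is a generic fixed-effect inverse-variance pooling of per-trial odds ratios; the counts are invented for illustration, and the review's actual model and data may differ.

```python
import math

def pooled_or(trials):
    """Fixed-effect inverse-variance pooling of per-trial odds ratios.
    trials: list of 2x2 summaries (events_tx, n_tx, events_ctl, n_ctl).
    A standard textbook method; not necessarily the review's exact model."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c                      # non-events per arm
        log_or = math.log((a * d) / (b * c))       # per-trial log odds ratio
        var = 1 / a + 1 / b + 1 / c + 1 / d        # its approximate variance
        num += log_or / var
        den += 1 / var
    est, se = num / den, math.sqrt(1 / den)
    to_or = lambda x: round(math.exp(x), 2)
    return to_or(est), (to_or(est - 1.96 * se), to_or(est + 1.96 * se))

# Hypothetical dementia counts (treated vs placebo) for three trials:
print(pooled_or([(21, 1000, 25, 1000),
                 (43, 2000, 48, 2000),
                 (80, 3000, 85, 3000)]))  # pooled OR with 95% CI
```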
Abstract:
The future convergence of voice, video and data applications on the Internet requires that next-generation technology provide bandwidth and delay guarantees. Current technology trends are moving towards scalable aggregate-based systems where applications are grouped together and guarantees are provided at the aggregate level only. This solution alone is not enough for interactive video applications with sub-second delay bounds. This paper introduces a novel packet marking scheme that controls the end-to-end delay of an individual flow as it traverses a network enabled to supply aggregate-granularity Quality of Service (QoS). IPv6 Hop-by-Hop extension header fields are used to track the packet delay encountered at each network node, and autonomous decisions are made on the best queuing strategy to employ. The results of network simulations are presented and it is shown that when the proposed mechanism is employed the requested delay bound is met with a 20% reduction in resource reservation and no packet loss in the network.
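The per-hop decision can be pictured as a slack check against the delay accumulated in the extension header: a packet that has fallen behind its per-hop allowance is promoted to a faster queue. The field names and the queue-selection rule below are hypothetical simplifications of the proposed marking scheme.

```python
def forward(packet, local_delay, remaining_hops):
    """Sketch of delay-tracking packet marking: a hop-by-hop option
    accumulates the delay seen so far; the node compares the remaining
    budget, spread over the remaining hops, against a nominal per-hop
    delay and expedites only packets that are behind schedule."""
    packet["accumulated_delay"] += local_delay   # update header field
    slack = packet["delay_budget"] - packet["accumulated_delay"]
    per_hop_allowance = slack / max(remaining_hops, 1)
    return "expedited" if per_hop_allowance < packet["nominal_hop_delay"] \
        else "best_effort"

pkt = {"delay_budget": 0.100,        # 100 ms end-to-end bound
       "accumulated_delay": 0.060,   # delay recorded by upstream hops
       "nominal_hop_delay": 0.010}
print(forward(pkt, local_delay=0.015, remaining_hops=3))  # -> "expedited"
```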
Abstract:
This paper presents a new packet scheduling scheme called agent-based WFQ to control and maintain QoS parameters in virtual private networks (VPNs) within the confines of adaptive networks. Future networks are expected to be open heterogeneous environments consisting of more than one network operator. In this adaptive environment, agents act on behalf of users or third-party operators to obtain the best service for their clients and maintain those services through modification of the scheduling scheme in routers and switches spanning the VPN. In agent-based WFQ, an agent on the router monitors the accumulated queuing delay for each service. In order to keep the end-to-end delay within bounds, the weights for services are adjusted dynamically by agents on the routers spanning the VPN. If there is an increase or decrease in the queuing delay of a service, an agent on a downstream router informs the upstream routers to adjust the weights of their queues. This keeps the end-to-end delay of services within the specified bounds and offers better QoS compared to VPNs using static WFQ. This paper also describes the algorithm for agent-based WFQ and presents simulation results.
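A minimal sketch of the kind of control rule such an agent might apply locally: shift scheduling weight toward a service whose measured delay drifts above its bound, reclaim weight when it runs well under, and renormalize. The paper's actual algorithm, thresholds and upstream signalling protocol are more involved; everything below is illustrative.

```python
def adjust_weights(weights, measured_delay, delay_bound, svc, step=0.05):
    """Agent's local WFQ weight update for service svc; upstream agents
    would be asked to apply the same correction (hypothetical rule)."""
    if measured_delay[svc] > delay_bound[svc]:
        weights[svc] += step                         # service is late: boost
    elif measured_delay[svc] < 0.8 * delay_bound[svc]:
        weights[svc] = max(0.05, weights[svc] - step)  # comfortably early
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}  # renormalize

w = {"voice": 0.5, "video": 0.3, "data": 0.2}
print(adjust_weights(w,
                     measured_delay={"voice": 0.012, "video": 0.03, "data": 0.1},
                     delay_bound={"voice": 0.010, "video": 0.04, "data": 0.2},
                     svc="voice"))  # voice exceeded its bound, so it gains weight
```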