886 results for Location-based services
Abstract:
Campus-based publishing partnerships offer the academy greater control over the intellectual products that it helps create. To fully realize this potential, such partnerships will need to evolve from informal working alliances to long-term, programmatic collaborations. SPARC's Campus-based Publishing Partnerships: A Guide to Critical Issues addresses issues relevant to building sound and balanced partnerships, including: Establishing governance and administrative structures; Identifying funding models that accommodate the objectives of both libraries and presses; Defining a partnership's objectives to align the missions of the library and the press; Determining what services to provide; and Demonstrating the value of the collaboration. SPARC's Campus-based Publishing Partnerships will help libraries, presses, and academic units to define effective partnerships capable of supporting innovative approaches to campus-based publishing.
Abstract:
As distributed information services like the World Wide Web become increasingly popular on the Internet, problems of scale are clearly evident. A promising technique that addresses many of these problems is service (or document) replication. However, when a service is replicated, clients then need the additional ability to find a "good" provider of that service. In this paper we report on techniques for finding good service providers without a priori knowledge of server location or network topology. We consider the use of two principal metrics for measuring distance in the Internet: hops and round-trip latency. We show that these two metrics yield very different results in practice. Surprisingly, we show data indicating that the number of hops between two hosts in the Internet is not strongly correlated to round-trip latency. Thus, the distance in hops between two hosts is not necessarily a good predictor of the expected latency of a document transfer. Instead of using known or measured distances in hops, we show that the extra cost at runtime incurred by dynamic latency measurement is well justified based on the resulting improved performance. In addition we show that selection based on dynamic latency measurement performs much better in practice than any static selection scheme. Finally, the difference between the distribution of hops and latencies is fundamental enough to suggest differences in algorithms for server replication. We show that conclusions drawn about service replication based on the distribution of hops need to be revised when the distribution of latencies is considered instead.
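The dynamic-latency selection strategy this abstract argues for can be sketched as follows: probe each candidate replica at runtime and pick the one with the lowest measured round-trip time. The TCP-connect probe, port, and host names are illustrative assumptions, not the paper's actual measurement method:

```python
import socket
import time

def measure_rtt(host, port=80, timeout=2.0):
    """Estimate round-trip latency by timing a TCP connection setup."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable replicas lose the selection

def select_server(replicas):
    """Dynamically probe each replica and return the lowest-latency one."""
    return min(replicas, key=measure_rtt)
```

For example, `select_server(["mirror1.example.org", "mirror2.example.org"])` (hypothetical hosts) probes each candidate at request time rather than consulting a static hop-count table.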
Abstract:
An appearance-based framework for 3D hand shape classification and simultaneous camera viewpoint estimation is presented. Given an input image of a segmented hand, the most similar matches from a large database of synthetic hand images are retrieved. The ground truth labels of those matches, containing hand shape and camera viewpoint information, are returned by the system as estimates for the input image. Database retrieval is done hierarchically, by first quickly rejecting the vast majority of all database views, and then ranking the remaining candidates in order of similarity to the input. Four different similarity measures are employed, based on edge location, edge orientation, finger location and geometric moments.
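The hierarchical retrieval described above (quickly rejecting the vast majority of database views, then ranking survivors) can be sketched generically. The `coarse`/`fine` similarity callables and the 5% survivor fraction are illustrative assumptions; the paper's actual measures are based on edge location, edge orientation, finger location, and geometric moments:

```python
def retrieve(query, database, coarse, fine, keep=0.05):
    """Hierarchical retrieval sketch: reject most database views with a
    cheap coarse measure, then rank the small set of survivors with a
    more expensive fine similarity measure (both supplied by the caller;
    lower scores mean more similar)."""
    scored = sorted(database, key=lambda view: coarse(query, view))
    survivors = scored[:max(1, int(len(scored) * keep))]
    return sorted(survivors, key=lambda view: fine(query, view))
```

The design point is that the fine measure runs on only a few percent of the database, so retrieval cost is dominated by the cheap coarse pass.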
Abstract:
One relatively unexplored question about the Internet's physical structure concerns the geographical location of its components: routers, links and autonomous systems (ASes). We study this question using two large inventories of Internet routers and links, collected by different methods and about two years apart. We first map each router to its geographical location using two different state-of-the-art tools. We then study the relationship between router location and population density; between geographic distance and link density; and between the size and geographic extent of ASes. Our findings are consistent across the two datasets and both mapping methods. First, as expected, router density per person varies widely over different economic regions; however, in economically homogeneous regions, router density shows a strong superlinear relationship to population density. Second, the probability that two routers are directly connected is strongly dependent on distance; our data is consistent with a model in which a majority (up to 75-95%) of link formation is based on geographical distance (as in the Waxman topology generation method). Finally, we find that ASes show high variability in geographic size, which is correlated with other measures of AS size (degree and number of interfaces). Among small to medium ASes, ASes show wide variability in their geographic dispersal; however, all ASes exceeding a certain threshold in size are maximally dispersed geographically. These findings have many implications for the next generation of topology generators, which we envisage as producing router-level graphs annotated with attributes such as link latencies, AS identifiers and geographical locations.
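The Waxman rule mentioned above ties the probability of a direct link to geographic distance. A minimal sketch, with illustrative `alpha`/`beta` parameters (the abstract estimates only that a majority of link formation is distance-based, not these parameter values):

```python
import math
import random

def waxman_link_prob(d, L, alpha=0.15, beta=0.6):
    """Waxman model: probability that two routers at distance d are
    directly connected; L is the maximum pairwise distance, and
    alpha, beta in (0, 1] are illustrative shape parameters."""
    return beta * math.exp(-d / (alpha * L))

def generate_links(coords, alpha=0.15, beta=0.6, seed=0):
    """Sample an edge set over router coordinates using the Waxman rule."""
    rng = random.Random(seed)
    L = max(math.dist(u, v) for u in coords for v in coords if u != v)
    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            d = math.dist(coords[i], coords[j])
            if rng.random() < waxman_link_prob(d, L, alpha, beta):
                edges.append((i, j))
    return edges
```

Note how the connection probability decays exponentially with distance, matching the finding that nearby routers are far more likely to be directly connected.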
Abstract:
Personal communication devices are increasingly equipped with sensors for passive monitoring of encounters and surroundings. We envision the emergence of services that enable a community of mobile users carrying such resource-limited devices to query such information at remote locations in the field in which they collectively roam. One approach to implement such a service is directed placement and retrieval (DPR), whereby readings/queries about a specific location are routed to a node responsible for that location. In a mobile, potentially sparse setting, where end-to-end paths are unavailable, DPR is not an attractive solution as it would require the use of delay-tolerant (flooding-based store-carry-forward) routing of both readings and queries, which is inappropriate for applications with data freshness constraints, and which is incompatible with stringent device power/memory constraints. Alternatively, we propose the use of amorphous placement and retrieval (APR), in which routing and field monitoring are integrated through the use of a cache management scheme coupled with an informed exchange of cached samples to diffuse sensory data throughout the network, in such a way that a query answer is likely to be found close to the query origin. We argue that knowledge of the distribution of query targets could be used effectively by an informed cache management policy to maximize the utility of collective storage of all devices. Using a simple analytical model, we show that the use of informed cache management is particularly important when the mobility model results in a non-uniform distribution of users over the field. We present results from extensive simulations which show that in sparsely-connected networks, APR is more cost-effective than DPR, that it provides extra resilience to node failure and packet losses, and that its use of informed cache management yields superior performance.
Abstract:
Wireless Intrusion Detection Systems (WIDS) monitor 802.11 wireless frames (Layer-2) in an attempt to detect misuse. What distinguishes a WIDS from a traditional Network IDS is the ability to utilize the broadcast nature of the medium to reconstruct the physical location of the offending party, as opposed to its possibly spoofed (MAC addresses) identity in cyber space. Traditional Wireless Network Security Systems are still heavily anchored in the digital plane of "cyber space" and hence cannot be used reliably or effectively to derive the physical identity of an intruder in order to prevent further malicious wireless broadcasts, for example by escorting an intruder off the premises based on physical evidence. In this paper, we argue that Embedded Sensor Networks could be used effectively to bridge the gap between digital and physical security planes, and thus could be leveraged to provide reciprocal benefit to surveillance and security tasks on both planes. Toward that end, we present our recent experience integrating wireless networking security services into the SNBENCH (Sensor Network workBench). The SNBENCH provides an extensible framework that enables the rapid development and automated deployment of Sensor Network applications on a shared, embedded sensing and actuation infrastructure. The SNBENCH's extensible architecture allows an engineer to quickly integrate new sensing and response capabilities into the SNBENCH framework, while high-level languages and compilers allow novice SN programmers to compose SN service logic, unaware of the lower-level implementation details of tools on which their services rely. In this paper we convey the simplicity of the service composition through concrete examples that illustrate the power and potential of Wireless Security Services that span both the physical and digital plane.
Abstract:
Overlay networks have been used for adding and enhancing functionality to the end-users without requiring modifications in the Internet core mechanisms. Overlay networks have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection to overlay protocol design and service provisioning respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distances and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay.
Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST’s neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to the performance on top of existing overlays. In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users.
To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol to migrate servers based on end-users demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
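The "best response" wiring idea above (each selfish node choosing k neighbors to minimize a k-median objective over asymmetric distances) can be sketched with a greedy loop. This greedy heuristic is an illustrative approximation, not the thesis's exact algorithm, and `dist` is a hypothetical asymmetric distance table:

```python
def best_response(node, candidates, dist, k):
    """Greedy sketch of a selfish node's 'best response' wiring: choose k
    neighbors minimizing the total cost of reaching every destination via
    the cheapest chosen neighbor (a k-median-style objective on an
    asymmetric distance table dist[src][dst])."""
    chosen = []
    for _ in range(k):
        best, best_cost = None, float("inf")
        for c in candidates:
            if c in chosen or c == node:
                continue
            trial = chosen + [c]
            # total cost to reach each destination via the nearest chosen neighbor
            cost = sum(min(dist[node][n] + dist[n][t] for n in trial)
                       for t in candidates if t != node)
            if cost < best_cost:
                best, best_cost = c, cost
        chosen.append(best)
    return chosen
```

A node is at a stable point when re-running `best_response` returns its current neighbor set, i.e., no re-wiring improves the performance it receives from the overlay.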
Abstract:
This paper is centered around the design of a thread- and memory-safe language, primarily for the compilation of application-specific services for extensible operating systems. We describe various issues that have influenced the design of our language, called Cuckoo, that guarantees safety of programs with potentially asynchronous flows of control. Comparisons are drawn between Cuckoo and related software safety techniques, including Cyclone and software-based fault isolation (SFI), and performance results suggest our prototype compiler is capable of generating safe code that executes with low runtime overheads, even without potential code optimizations. Compared to Cyclone, Cuckoo is able to safely guard accesses to memory when programs are multithreaded. Similarly, Cuckoo is capable of enforcing memory safety in situations that are potentially troublesome for techniques such as SFI.
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned.
Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described. VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
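The core VITE dynamics described above (DV = TPC − PPC, with the PPC integrating the (DV)·(GO) product until the DV reaches zero) can be simulated in one dimension. The GO value, step size, and step count below are illustrative, not parameters from the article:

```python
def vite_trajectory(tpc, ppc0, go=2.0, dt=0.01, steps=500):
    """Minimal 1-D sketch of the VITE dynamics: the Difference Vector
    DV = TPC - PPC drives the Present Position Command toward the
    Target Position Command at a rate scaled by the GO signal."""
    ppc = ppc0
    trace = [ppc]
    for _ in range(steps):
        dv = tpc - ppc          # continuously computed difference vector
        ppc += go * dv * dt     # PPC integrates the (DV)·(GO) product
        trace.append(ppc)
    return trace
```

Because the update rate is proportional to both DV and GO, the hand decelerates smoothly as it nears the target, and integration effectively stops once the PPC equals the TPC.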
Abstract:
Ultra Wide Band (UWB) transmission has recently been the object of considerable attention in the field of next generation location aware wireless sensor networks. This is due to its fine time resolution, energy efficiency, and robustness to interference in harsh environments. This paper presents a thorough applied examination of prototype IEEE 802.15.4a impulse UWB transceiver technology to quantify the effect of line of sight (LOS) and non line of sight (NLOS) ranging in real indoor and outdoor environments. The results draw on an extensive array of experiments that fully characterize the 802.15.4a UWB transceiver technology, its reliability, and its ranging capabilities for the first time. A new two way (TW) ranging protocol is proposed. The goal of this work is to validate the technology as a dependable wireless communications mechanism for the subset of sensor network localization applications where reliability and positioning precision are key concerns.
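Two-way ranging rests on simple time-of-flight arithmetic: the initiator measures the full round-trip time, the responder's known reply delay is subtracted, and half the remainder times the speed of light gives the distance. A sketch of that arithmetic (the proposed TW protocol itself involves more than this calculation):

```python
C = 299_792_458.0  # speed of light, m/s

def tw_range(t_round, t_reply):
    """Two-way (TW) ranging sketch: the initiator times the full round
    trip (t_round, seconds); the responder reports its reply delay
    (t_reply). The one-way time of flight is half the difference,
    which converts to a distance at the speed of light."""
    tof = (t_round - t_reply) / 2.0
    return tof * C
```

UWB's fine time resolution matters here because one nanosecond of timing error corresponds to roughly 30 cm of ranging error.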
Abstract:
Background and Aims: Caesarean section rates have increased in recent decades and the effects on subsequent pregnancy outcome are largely unknown. Prior research has hypothesised that Caesarean section delivery may lead to an increased risk of subsequent stillbirth, miscarriage, ectopic pregnancy and sub-fertility. Structure and Methods: Papers 1-3 are systematic reviews with meta-analyses. Papers 4-6 are findings from this thesis on the rate of subsequent stillbirth, miscarriage, ectopic pregnancy and live birth by mode of delivery. Results: Systematic reviews and meta-analyses: a 23% increased odds of subsequent stillbirth, no increase in odds of subsequent ectopic pregnancy, and a 10% reduction in the odds of subsequent live birth among women with a previous Caesarean section were found in the various meta-analyses. Danish cohorts: Results from the Danish Civil Registration System (CRS) cohort revealed a small increased rate of subsequent stillbirth and ectopic pregnancy among women with a primary Caesarean section, which remained in the analyses by type of Caesarean. No increased rate of miscarriage was found among women with a primary Caesarean section. In the CRS data, women with a primary Caesarean section had a significantly reduced rate of subsequent live birth, particularly among women with primary elective and maternal-requested Caesarean sections. In the Aarhus Birth Cohort, overall the effect of mode of delivery on the rate and time to next live birth was minimal. Conclusions: Primary Caesarean section was associated with a small increased rate of stillbirth and ectopic pregnancy, which may be in part due to underlying medical conditions. No increased rate of miscarriage was found. A reduced rate of subsequent live birth was found among women with a Caesarean section in the CRS data.
In the smaller ABC cohort, a small reduction in rate of subsequent live birth was found among women with a primary Caesarean section and is most likely due to maternal choice rather than any ill effects of the Caesarean. The findings of this study, the largest and most comprehensive to date, will be of significant interest to health care providers and women globally.
Abstract:
Background: With cesarean section rates increasing worldwide, clarity regarding negative effects is essential. This study aimed to investigate the rate of subsequent stillbirth, miscarriage, and ectopic pregnancy following primary cesarean section, controlling for confounding by indication. Methods and Findings: We performed a population-based cohort study using Danish national registry data linking various registers. The cohort included primiparous women with a live birth between January 1, 1982, and December 31, 2010 (n = 832,996), with follow-up until the next event (stillbirth, miscarriage, or ectopic pregnancy) or censoring by live birth, death, emigration, or study end. Cox regression models for all types of cesarean sections, sub-group analyses by type of cesarean, and competing risks analyses for the causes of stillbirth were performed. An increased rate of stillbirth (hazard ratio [HR] 1.14, 95% CI 1.01, 1.28) was found in women with primary cesarean section compared to spontaneous vaginal delivery, giving a theoretical absolute risk increase (ARI) of 0.03% for stillbirth, and a number needed to harm (NNH) of 3,333 women. Analyses by type of cesarean section showed similarly increased rates for emergency (HR 1.15, 95% CI 1.01, 1.31) and elective cesarean (HR 1.11, 95% CI 0.91, 1.35), although not statistically significant in the latter case. An increased rate of ectopic pregnancy was found among women with primary cesarean overall (HR 1.09, 95% CI 1.04, 1.15) and by type (emergency cesarean, HR 1.09, 95% CI 1.03, 1.15, and elective cesarean, HR 1.12, 95% CI 1.03, 1.21), yielding an ARI of 0.1% and a NNH of 1,000 women for ectopic pregnancy. No increased rate of miscarriage was found among women with primary cesarean, with maternally requested cesarean section associated with a decreased rate of miscarriage (HR 0.72, 95% CI 0.60, 0.85). 
Limitations include incomplete data on maternal body mass index, maternal smoking, fertility treatment, causes of stillbirth, and maternally requested cesarean section, as well as lack of data on antepartum/intrapartum stillbirth and gestational age for stillbirth and miscarriage. Conclusions: This study found that cesarean section is associated with a small increased rate of subsequent stillbirth and ectopic pregnancy. Underlying medical conditions, however, and confounding by indication for the primary cesarean delivery account for at least part of this increased rate. These findings will assist women and health-care providers to reach more informed decisions regarding mode of delivery.
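The ARI and NNH figures quoted above follow from simple arithmetic: the absolute risk increase is the baseline risk scaled by the excess relative risk, and the NNH is its reciprocal. A sketch, with the caveats that treating the hazard ratio as a risk ratio is valid only for rare outcomes and that any baseline risk plugged in below is a hypothetical value, not one taken from the study:

```python
def absolute_risk_increase(baseline_risk, hazard_ratio):
    """Approximate ARI, treating the hazard ratio as a risk ratio
    (a simplifying assumption, reasonable only for rare outcomes)."""
    return baseline_risk * (hazard_ratio - 1.0)

def number_needed_to_harm(ari):
    """NNH: how many exposures are needed, on average, for one
    additional adverse event; the reciprocal of the ARI."""
    return 1.0 / ari
```

For example, the quoted ARI of 0.03% for stillbirth gives `number_needed_to_harm(0.0003)` ≈ 3,333 and the 0.1% ARI for ectopic pregnancy gives ≈ 1,000, matching the NNH figures in the abstract.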
Abstract:
INTRODUCTION: Neurodegenerative diseases (NDD) are characterized by progressive decline and loss of function, requiring considerable third-party care. NDD carers report low quality of life and high caregiver burden. Despite this, little information is available about the unmet needs of NDD caregivers. METHODS: Data from a cross-sectional, whole-of-population study conducted in South Australia were analyzed to determine the profile and unmet care needs of people who identify as having provided care for a person who died an expected death from NDDs, including motor neurone disease and multiple sclerosis. Bivariate analyses using chi-squared tests were complemented with a regression analysis. RESULTS: Two hundred and thirty respondents had a person close to them die from an NDD in the 5 years before responding. NDD caregivers were more likely to have provided care for more than 2 years and were more able to move on after the death than caregivers of people with other disorders such as cancer. The NDD caregivers accessed palliative care services at the same rate as other caregivers at the end of life; however, people with an NDD were almost twice as likely to die in the community (odds ratio [OR] 1.97; 95% confidence interval [CI] 1.30 to 3.01), controlling for relevant caregiver factors. NDD caregivers reported significantly more unmet needs in emotional, spiritual, and bereavement support. CONCLUSION: This study is the first step in better understanding, across the whole population, the consequences of an expected death from an NDD. Assessments need to occur while in the role of caregiver and in the subsequent bereavement phase.
Abstract:
PURPOSE: Little is known about young caregivers of people with advanced life-limiting illness. Better understanding of the needs and characteristics of these young caregivers can inform development of palliative care and other support services. METHODS: A population-based analysis of caregivers was performed from piloted questions included in the 2001-2007 face-to-face annual health surveys of 23,706 South Australians on the death of a loved one, the caregiving provided, and the characteristics of the deceased individual and caregiver. The survey was representative of the population by age, gender, and region of residence. FINDINGS: Most active care was provided by older, close family members, but large numbers of young people (ages 15-29) also provided assistance to individuals with advanced life-limiting illness. They comprised 14.4% of those undertaking "hands-on" care on a daily or intermittent basis, whom we grouped together as active caregivers. Almost as many young males as females participated in active caregiving (men represent 46%); most provided care while employed, including 38% who worked full-time. Over half of those engaged in hands-on care indicated the experience was worse or much worse than expected, with young people reporting dissatisfaction more frequently. Young caregivers also exhibited an increased perception of the need for assistance with grief. CONCLUSION: Young people can be integral to end-of-life care and represent a significant cohort of active caregivers with unique needs and experiences. They may have a more negative experience as caregivers, and increased needs for grief counseling services, compared to other age cohorts of caregivers.
Abstract:
BACKGROUND: Outpatient palliative care, an evolving delivery model, seeks to improve continuity of care across settings and to increase access to services in hospice and palliative medicine (HPM). It can provide a critical bridge between inpatient palliative care and hospice, filling the gap in community-based supportive care for patients with advanced life-limiting illness. Low capacities for data collection and quantitative research in HPM have impeded assessment of the impact of outpatient palliative care. APPROACH: In North Carolina, a regional database for community-based palliative care has been created through a unique partnership between a HPM organization and academic medical center. This database flexibly uses information technology to collect patient data, entered at the point of care (e.g., home, inpatient hospice, assisted living facility, nursing home). HPM physicians and nurse practitioners collect data; data are transferred to an academic site that assists with analyses and data management. Reports to community-based sites, based on data they provide, create a better understanding of local care quality. CURRENT STATUS: The data system was developed and implemented over a 2-year period, starting with one community-based HPM site and expanding to four. Data collection methods were collaboratively created and refined. The database continues to grow. Analyses presented herein examine data from one site and encompass 2572 visits from 970 new patients, characterizing the population, symptom profiles, and change in symptoms after intervention. CONCLUSION: A collaborative regional approach to HPM data can support evaluation and improvement of palliative care quality at the local, aggregated, and statewide levels.