Abstract:
Forwarding in DTNs is a challenging problem. We focus on the specific issue of forwarding in an environment where mobile devices are carried by people in a restricted physical space (e.g. a conference) and contact patterns are not predictable. We show, for the first time, that a path explosion phenomenon occurs between most pairs of nodes. This means that, once the first path reaches the destination, the number of subsequent paths grows rapidly with time, so there usually exist many near-optimal paths. We study the path explosion phenomenon both analytically and empirically. Our results highlight the importance of unequal contact rates across nodes for understanding the performance of forwarding algorithms. We also find that a variety of well-known forwarding algorithms show surprisingly similar performance in our setting, and we interpret this fact in light of the path explosion phenomenon.
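To make the phenomenon concrete, here is a minimal simulation sketch (my own construction, not the paper's model): nodes meet in random pairwise contacts, and we count the time-respecting paths that have reached the destination after each contact. The node count, contact process, and seed are arbitrary illustrative choices.

```python
# A toy sketch of path explosion in a DTN contact trace: once the first
# time-respecting path reaches the destination, the number of such paths
# typically grows very fast.
import random

random.seed(1)
N, T = 20, 2000                      # nodes, contact events (illustrative)
SRC, DST = 0, N - 1

# paths[v] = number of time-respecting paths from SRC that have reached v
paths = [0] * N
paths[SRC] = 1
first_hit = None

for t in range(T):
    u, v = random.sample(range(N), 2)   # a random pairwise contact
    pu, pv = paths[u], paths[v]
    paths[u] += pv                      # every path at v can be relayed to u
    paths[v] += pu                      # and vice versa
    if paths[DST] > 0 and first_hit is None:
        first_hit = t
    if first_hit is not None and t % 200 == 0:
        print(f"event {t}: {paths[DST]} paths have reached the destination")
```

After the first hit, the printed counts typically grow by orders of magnitude between snapshots, which is the explosion the abstract describes.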
Abstract:
In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications, on the other hand, owing to their inherent n-squared nature, are realizable only in small- to medium-scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and, as a consequence, deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.
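As a rough illustration of the two policies (my reading of the abstract, with an invented four-node bandwidth matrix; the paper's actual selection logic also accounts for swarming dynamics), the sketch below scores each candidate neighbor set by the overlay-wide bottleneck to the slowest destination (Max-Min) or by the aggregate direct output rate (Max-Sum):

```python
# Toy peer selection for node 0 under Max-Min vs Max-Sum objectives.
from itertools import combinations

def bottlenecks(adj, bw, src):
    # Widest-path search: best[v] = max over paths of the min edge bandwidth.
    best = {src: float("inf")}
    todo = {src}
    while todo:
        u = max(todo, key=lambda x: best[x])
        todo.remove(u)
        for v in adj[u]:
            b = min(best[u], bw[u][v])
            if b > best.get(v, 0):
                best[v] = b
                todo.add(v)
    return best

def choose(node, bw, fixed_adj, k, policy):
    others = [v for v in range(len(bw)) if v != node]
    def score(cand):
        if policy == "max-sum":
            return sum(bw[node][v] for v in cand)   # aggregate output rate
        adj = {u: set(vs) for u, vs in fixed_adj.items()}
        adj[node] = set(cand)                       # try this neighbor set
        for v in cand:
            adj[v].add(node)                        # overlay links are bidirectional
        b = bottlenecks(adj, bw, node)
        return min(b.get(d, 0) for d in others)     # slowest destination
    return max(combinations(others, k), key=score)

bw = [[0, 10, 9, 2],      # hypothetical end-to-end bandwidths (Mb/s)
      [10, 0, 15, 1],
      [9, 15, 0, 1],
      [2, 1, 1, 0]]
fixed = {0: set(), 1: {2}, 2: {1}, 3: set()}        # other nodes' current edges
print(choose(0, bw, fixed, 2, "max-min"))           # (1, 3)
print(choose(0, bw, fixed, 2, "max-sum"))           # (1, 2)
```

In this toy instance the two objectives pick different neighbor sets: Max-Min spends an edge on the poorly connected node 3 to keep the slowest destination served, while Max-Sum maximizes its aggregate outgoing rate.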
Abstract:
Interdomain routing on the Internet is performed using route preference policies specified independently and arbitrarily by each Autonomous System (AS) in the network. These policies are used in the Border Gateway Protocol (BGP) by each AS when selecting next-hop choices for routes to each destination. Conflicts between policies used by different ASes can lead to routing instabilities that, potentially, cannot be resolved no matter how long BGP is run. The Stable Paths Problem (SPP) is an abstract graph-theoretic model of the problem of selecting next-hop routes for a destination. A stable solution to the problem is a set of next-hop choices, one for each AS, that is compatible with the policies of each AS. In a stable solution each AS has selected its best next hop given that the next-hop choices of all neighbors are fixed. BGP can be viewed as a distributed algorithm for solving SPP. In this report we consider the stable paths problem, as well as a family of restricted variants of the stable paths problem, which we call F stable paths problems. We show that two very simple variants of the stable paths problem are also NP-complete. In addition we show that for networks with a DAG topology, there is an efficient centralized algorithm to solve the stable paths problem, and that BGP always efficiently converges to a stable solution on such networks.
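To illustrate the DAG case, here is a small sketch (my illustration, not the report's algorithm; general SPP instances rank whole paths, while this simplification ranks next hops): when the AS graph is a DAG, processing nodes in topological order from the destination yields a stable assignment in one pass. The topology and preferences below are invented.

```python
# Solving a toy Stable Paths Problem instance on a DAG in topological order.
from graphlib import TopologicalSorter

# succ[u] = u's next-hop candidates (edges point toward the destination "d");
# prefs[u] ranks those neighbors, most preferred first (an arbitrary policy).
succ  = {"a": ["b", "d"], "b": ["d"], "c": ["a", "b"]}
prefs = {"a": ["b", "d"], "b": ["d"], "c": ["b", "a"]}

path = {"d": ("d",)}
# Topological order guarantees every next hop is decided before u is.
for u in TopologicalSorter(succ).static_order():
    if u == "d":
        continue
    for v in prefs[u]:                 # most preferred neighbor with a path wins
        if path.get(v):
            path[u] = (u,) + path[v]
            break
print(path)
```

Each AS commits only after all of its next-hop candidates have committed, so no AS can improve given its neighbors' fixed choices, which is exactly the stability condition described above.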
Abstract:
We propose an economic mechanism to reduce the incidence of malware that delivers spam. Earlier research proposed attention markets as a solution for unwanted messages and showed that they could provide more net benefit than alternatives such as filtering and taxes. Because it uses a currency system, Attention Bonds faces a challenge: zombies, botnets, and various forms of malware might steal valuable currency instead of stealing unused CPU cycles. We resolve this problem by taking advantage of the fact that the spam-bot problem is thereby reduced to financial fraud, so the large body of existing work in that realm can be brought to bear. By drawing an analogy between sending and spending, we show how a market mechanism can detect and prevent spam malware. We prove that, by using a currency, (i) each instance of spam increases the probability of detecting infections, and (ii) the value of eradicating infections can justify insuring users against fraud. This approach attacks spam at the source, a virtue missing from filters that attack spam at the destination. Additionally, the exchange of currency provides signals of interest that can improve the targeting of ads. ISPs benefit from data-management services and consumers benefit from the higher average value of the messages they receive. We explore these and other secondary effects of attention markets and find that they offer, on the whole, attractive economic benefits for all parties, including consumers, advertisers, and ISPs.
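Claim (i) admits a back-of-the-envelope reading (the independence assumption and the numbers below are mine, not the paper's): if each bond-backed message from an infected machine independently triggers a fraud check with probability p, detection becomes near-certain as spam volume grows.

```python
# Hypothetical per-message detection probability; values are illustrative only.
p = 0.01
for n in (10, 100, 1000):
    # Probability at least one of n spam messages triggers a fraud check.
    print(n, 1 - (1 - p) ** n)
# 10 -> ~0.096, 100 -> ~0.634, 1000 -> ~0.99996: every additional spam
# message strictly increases the chance the infection is caught.
```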
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources declines significantly after the second source from the perspective of interface, node, link, and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links, and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared-link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
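The marginal-utility computation itself is simple to sketch (synthetic stand-in data below, not the paper's traces): add sources one at a time and count the new nodes and links each contributes over a fixed destination set.

```python
# Marginal utility of adding measurement sources, on synthetic "traceroutes".
import random

random.seed(0)
ROUTERS = list(range(200))

def trace(src, dst):
    # Stand-in for a real traceroute: a short random router path (hypothetical).
    hops = random.sample(ROUTERS, 5)
    return [("S", src)] + hops + [("D", dst)]

sources, dests = range(8), range(100)
seen_nodes, seen_links = set(), set()
for s in sources:
    before = len(seen_nodes), len(seen_links)
    for d in dests:
        path = trace(s, d)
        seen_nodes.update(path)
        seen_links.update(zip(path, path[1:]))
    print(f"source {s}: +{len(seen_nodes) - before[0]} nodes, "
          f"+{len(seen_links) - before[1]} links")
```

Even with this toy generator the per-source gains shrink rapidly after the first couple of sources, which is the diminishing-returns effect the paper quantifies on real measurements.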
Abstract:
Within a recently developed low-power ad hoc network system, we present a transport protocol (JTP) whose goal is to reduce power consumption without trading off the delivery requirements of applications. JTP has the following features: it is lightweight, with end-nodes controlling in-network actions by encoding delivery requirements in packet headers; it enables applications to specify a range of reliability requirements, thus allocating the right energy budget to packets; it minimizes feedback control traffic from the destination by varying its frequency based on delivery requirements and the stability of the network; it minimizes energy consumption by implementing in-network caching and increasing the chances that data retransmission requests from destinations "hit" these caches, thus avoiding costly source retransmissions; and it fairly allocates bandwidth among flows by backing off the sending rate of a source to account for in-network retransmissions on its behalf. Analysis and extensive simulations demonstrate the energy gains of JTP over one-size-fits-all transport protocols.
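As an illustration of the header-encoding idea (the field layout below is invented for the sketch; the abstract does not specify JTP's actual wire format), an end node might stamp each packet with its reliability requirement, feedback period, and energy budget so in-network nodes can act on them:

```python
# Hypothetical delivery-requirement header in the spirit of JTP.
import struct

# >HBBI : seq (16 bits), reliability 0-100 (%), feedback period (s), energy budget
JTP_HDR = struct.Struct(">HBBI")

def make_packet(seq, reliability_pct, fb_period_s, energy_budget, payload):
    # In-network nodes could read these fields to decide whether a lost packet
    # is worth caching or retransmitting at its stated reliability level.
    return JTP_HDR.pack(seq, reliability_pct, fb_period_s, energy_budget) + payload

pkt = make_packet(seq=7, reliability_pct=60, fb_period_s=10,
                  energy_budget=500, payload=b"sensor reading")
print(JTP_HDR.unpack(pkt[:JTP_HDR.size]))   # (7, 60, 10, 500)
```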
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of this vector field at every location, and the magnitude of the vector field at every location represents the density of data transiting through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce a mathematical machinery based on partial differential equations closely resembling Maxwell's equations in electrostatics. We show that, in order to minimize the cost, routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, similar to positive charges in electrostatics; the destinations are sinks of information, similar to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of this vector field model, we offer a scheme for energy-efficient routing: the permittivity coefficient is set to a higher value where nodes have high residual energy and to a low value where nodes have little energy left. Our simulations show that this method significantly increases network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case of a single destination in the network; we later extend our approach to multiple destinations. In the multiple-destination case, we must partition the network into several areas known as the regions of attraction of the destinations, with each destination responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case lies in defining the regions of attraction and deciding how much communication load to assign to each destination so as to optimize network performance. We use our vector field model to solve this problem: we define a vector field that is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field), and we show that in the optimal assignment of the network's communication load to the destinations, the value of this potential field must be equal at the locations of all destinations. Another application of the vector field model is finding the optimal locations of the destinations. We show that the vector field gives the gradient of the cost function with respect to the destination locations; based on this fact, we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of the proposed schemes is confirmed by several examples and simulation experiments.
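A compact statement of this formulation (notation mine, reconstructed from the description above): let D be the information-flow vector field, rho the net data-generation density (positive at sensors, negative at destinations), and k the permittivity-like coefficient. The problem is then

```latex
% Sketch of the variational formulation described above (notation mine).
\min_{\mathbf D}\ \int_{A} \frac{|\mathbf D(\mathbf x)|^{2}}{k(\mathbf x)}\, dA
\qquad \text{subject to} \qquad
\nabla \cdot \mathbf D(\mathbf x) = \rho(\mathbf x).
% At the optimum, D is a scaled gradient, D = -k \nabla U, so the potential U
% solves the electrostatics-like equation
\nabla \cdot \bigl( k(\mathbf x)\, \nabla U(\mathbf x) \bigr) = -\rho(\mathbf x).
```

Routing along D with k raised where residual energy is plentiful reproduces the energy-aware behaviour described above, and the equal-potential condition at the destinations is the multi-destination load-balancing criterion.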
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates, defined as the degree to which a TCP aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease; this kind of test is not robust when multiple tests are performed simultaneously at different routers. We make the test robust to simultaneous tests by borrowing ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce responsiveness tests for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to estimate the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing its response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to design these signatures; specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade due to cross-interference among simultaneously testing routers. We demonstrate the efficacy of our methods through mathematical analysis and extensive simulation experiments.
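The orthogonal-signature idea can be sketched in a few lines (my reconstruction, not the thesis code; Walsh-Hadamard rows are one standard choice of orthogonal +/-1 CDMA signatures, and the response model below is invented): each router modulates its packet drops with its own signature and recovers its aggregate's response by correlation, unaffected by the other routers' concurrent tests.

```python
# Orthogonal perturbation signatures in the spirit of CAPM.
import numpy as np

def walsh(n):
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])  # Hadamard doubling
    return H

H = walsh(8)                       # 8 mutually orthogonal chip sequences
sig_a, sig_b = H[1], H[2]          # signatures assigned to two routers

rng = np.random.default_rng(0)
# Observed aggregate reaction: each router's true responsiveness (3.0 and
# 1.5, hypothetical) rides on its own signature, plus measurement noise.
resp = 3.0 * sig_a + 1.5 * sig_b + rng.normal(0, 0.3, 8)

# Orthogonality lets each router isolate its own test despite interference.
print(resp @ sig_a / 8)            # ~3.0  (router A's estimate)
print(resp @ sig_b / 8)            # ~1.5  (router B's, unaffected by A)
```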
Abstract:
The process of determining the level of care and specific postacute care facility for stroke patients has not been adequately studied. The objective of this study was to better understand the factors that influence postacute care decisions by surveying stroke discharge planners. Requests were sent to discharge planners at 471 hospitals in the Northeast United States to complete an online survey regarding the factors impacting the selection of postacute care. Seventy-seven (16%) discharge planners completed the online survey. Respondents were mainly nurses and social workers, and 73% reported ≥20 years of healthcare experience. Patients and families were found to be significantly more influential than physicians (P < 0.001) and other clinicians (P = 0.04) in influencing postdischarge care. Other clinicians were significantly more influential than physicians (P < 0.001). Insurance and the quality of postacute care were the factors likely to most affect the selection of a postacute care facility. Insurance was also identified as the greatest barrier in the selection of the level of postacute care (70%; P < 0.001) and the specific postacute care facility (46%; P = 0.02). More than half reported that pressure to discharge patients quickly impacts a patient's final destination. Nonclinical factors are perceived by discharge planners to have a major influence on postacute stroke care decision making.
Abstract:
Within the building evacuation context, wayfinding describes the process in which an individual located within an arbitrarily complex enclosure attempts to find a path that leads them to relative safety, usually the exterior of the enclosure. Within most evacuation modelling tools, wayfinding is completely ignored; agents are either assigned the shortest-distance path or use a potential field to find the shortest path to the exits. In this paper a novel wayfinding technique that attempts to represent the manner in which people wayfind within structures is introduced and demonstrated through two examples. The first step is to encode the spatial information of the enclosure as a graph. The second step is to apply search algorithms to the graph to find possible routes to the destination and to assign a cost to each route based on personal route preferences such as "least time", "least distance", or a combination of criteria. The third step is route execution and refinement: the agent moves along the chosen route, reassesses it at regular intervals, and may decide to take an alternative path if it determines that an alternate route is more favourable, e.g. if the initial path is highly congested or is blocked due to fire.
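The first two steps lend themselves to a small sketch (toy enclosure and weights of my own; the paper's criteria and refinement step are richer): encode rooms and corridors as a weighted graph, enumerate candidate routes to the exit, and rank them by a personal blend of travel time and distance.

```python
# Rooms/corridors as graph nodes; each edge carries (distance m, expected time s).
edges = {
    ("room", "hall"):  (5, 6),
    ("hall", "lobby"): (12, 15),
    ("hall", "stair"): (8, 20),     # shorter route, but congested and slow
    ("stair", "exit"): (6, 18),
    ("lobby", "exit"): (4, 5),
}
adj = {}
for (u, v), w in edges.items():
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

def routes(u, goal, seen=()):
    # Enumerate all simple paths from u to the goal.
    if u == goal:
        yield [u]
        return
    for v, _ in adj[u]:
        if v not in seen:
            for r in routes(v, goal, seen + (u,)):
                yield [u] + r

def cost(route, w_time=0.7, w_dist=0.3):        # the agent's personal weights
    d = t = 0
    for u, v in zip(route, route[1:]):
        dd, tt = edges.get((u, v)) or edges[(v, u)]
        d, t = d + dd, t + tt
    return w_time * t + w_dist * d

for r in sorted(routes("room", "exit"), key=cost):
    print(round(cost(r), 1), " -> ".join(r))
```

Step three would then re-run the ranking at intervals with updated edge times, so a congested or fire-blocked corridor inflates its traversal time and demotes routes through it.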
Abstract:
This article examines the concepts, definitions, policies, and practices of heritage in a contemporary context. Within recent years, there have been significant shifts in our understandings and applications of heritage concepts and policies in the modern world. ‘Heritage’ emerged as a buzz word in international policy arenas in the 1980s and early 1990s, and has since weathered the vagaries of turbulent definitional and governance–nomenclature storms, as traditional debates about ‘what it is and what it is not’ reverberate around academia and state agencies alike. Policy and funding structures for heritage are determined by the classifications used to define them in various countries. Typically, reference is made to ‘built heritage’, ‘natural heritage’, and ‘intangible heritage’, loosely reflecting buildings, landscapes, and culture. Aspects of heritage are used by the cultural and tourism industries to add economic value, through heritage tourism sites, museums, and other activities. The cultural tourism product is often anchored around notions of heritage, and in postmodern, post-tourist societies, boundaries between culture, (travel) space, and identities are increasingly blurred. Issues of authenticity become important in the representation of heritage, and questions are asked about the validity of nostalgia versus realism. The role of heritage is examined in the context of identity formulation at individual and nation-state levels, and the political aspects of this are also discussed. Finally, heritage conservation is assessed through an examination of UNESCO’s World Heritage Site listing and protection strategy. In a changing world, new constructs of heritage, identity, authenticity, and representation will continue to emerge as meanings are constantly renegotiated over time and space.
Abstract:
In Sofia Coppola's 2003 film Lost in Translation, Bill Murray's and Scarlett Johansson's characters find themselves culturally stranded and oddly mismatched as an improvised tourist couple in contemporary Tokyo. This is an urban landscape that they cannot comprehend but only temporarily experience, in a fragmented and surreptitious way that allows no real understanding or categorization, but offers physical inclusion, emotional participation, and momentary embeddedness.
Abstract:
Culloden (BBC, 1964) The Great War (BBC, 1964) 1914-18 (BBC/KCET, 1996) Haig: the Unknown Soldier (BBC, 1996) Veterans: the Last Survivors of the Great War (BBC, 1998) 1900s House (Channel 4, 1999) The Western Front (BBC, 1999) History of Britain (BBC, 2000) 1940s House (Channel 4, 2001) The Ship (BBC, 2002) Surviving the Iron Age (BBC, 2001) The Trench (BBC, 2002) Frontier House (Channel 4, 2002) Lad's Army (BBC, 2002) Edwardian Country House (Channel 4, 2002) Spitfire Ace (Channel 4, 2003) World War One in Colour (Channel 5, 2003) 1914: the War Revolution (BBC, 2003) The First World War (Channel 4, 2003) Dunkirk (BBC, 2004) Dunkirk: The Soldier's Story (BBC, 2004) D-Day to Berlin (BBC, 2004) Bad Lad's Army (ITV, 2004) Destination D-Day: Raw Recruits (BBC, 2004) Bomber Crew (Channel 4, 2004) Battlefield Britain (BBC, 2004) The Last Battle (ARTE/ZDF, 2005) Who Do You Think You Are? (BBC, 2004, 2006) The Somme (Channel 4, 2005) [From the Publisher]
Radio propagation modeling for capacity optimization in wireless relay MIMO systems with partial CSI
Abstract:
The enormous growth of wireless communication systems makes it important to evaluate the capacity of such channels. Multiple Input Multiple Output (MIMO) wireless communication systems have been shown to yield significant data-rate improvements over traditional Single Input Single Output (SISO) wireless systems. Multiple antenna elements at the transmitter and receiver have become central to the research and development of the next generation of mobile communication systems. In this paper we propose the use of relaying MIMO wireless communication systems to improve throughput over long transmission distances. We investigate how relays can be used in a "demodulate-and-forward" operation when the transmitter is equipped with spatially correlated multiple antenna elements and the receiver has only partial knowledge of the statistics of the channel. We show that relays between the source and destination nodes of a MIMO wireless communication system improve throughput compared to typical MIMO systems, or achieve the desired channel capacity with significantly lower power.
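A rough capacity comparison in the spirit of the abstract (assumptions mine: i.i.d. Rayleigh fading rather than the paper's spatially correlated, partial-CSI model; a path-loss exponent of 3.5; and a half-duplex relay placed midway):

```python
# Monte Carlo comparison of direct vs relayed MIMO ergodic capacity.
import numpy as np

rng = np.random.default_rng(1)
NT = NR = 4                              # antennas per node

def mimo_capacity(snr):
    # C = log2 det(I + (snr/NT) H H^H) for one Rayleigh channel draw.
    H = (rng.normal(size=(NR, NT)) + 1j * rng.normal(size=(NR, NT))) / np.sqrt(2)
    return np.log2(np.linalg.det(np.eye(NR) + (snr / NT) * H @ H.conj().T)).real

def avg(snr, trials=500):
    return np.mean([mimo_capacity(snr) for _ in range(trials)])

P, alpha, d = 100.0, 3.5, 2.0            # tx power budget, path loss, src-dst distance
direct = avg(P / d**alpha)
hop = avg(P / (d / 2)**alpha)            # relay halfway: each hop is much shorter
relay = 0.5 * hop                        # half-duplex: two time slots per packet
print(f"direct: {direct:.2f} b/s/Hz   relayed: {relay:.2f} b/s/Hz")
```

The relay halves spectral efficiency through time division but sees a much higher per-hop SNR, which is the trade-off behind the throughput and power claims above.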
Abstract:
The AMT (www.amt-uk.org) is a multidisciplinary programme which undertakes biological, chemical, and physical oceanographic research during an annual voyage between the UK and a destination in the South Atlantic such as the Falkland Islands, South Africa, or Chile. This transect of >12,000 km crosses a range of ecosystems from subpolar to tropical, from euphotic shelf seas and upwelling systems to oligotrophic mid-ocean gyres. The year 2015 saw two milestones in the history of the AMT: the achievement of 20 years of this unique ocean-going programme and the departure of the 25th cruise on the 15th of September. Both events were celebrated in June this year with an open science conference hosted by the Plymouth Marine Laboratory (PML), and will be further documented in a special issue of Progress in Oceanography planned for publication in 2016. Since 1995, the 25 research cruises have involved 242 sea-going scientists from 66 institutes representing 22 countries.