933 results for Planning expansion network
Abstract:
Based on RS and GIS methods, Siping city is selected as a case study, using four remote sensing images spanning 25 years. Urban morphology indices such as fractal dimension and compactness are employed to investigate the characteristics of urban expansion. Through digital processing and interpretation of the images, the process and characteristics of urban expansion are analysed using urban area change, fractal dimension and compactness. The results show three distinct phases in this period: the city expanded fastest during 1979~1991, while in 1992~2001 an emphasis on urban redevelopment slowed expansion. This agrees with the Siping Statistical Yearbook, indicating that combining urban morphology metrics with statistical data can satisfactorily describe the process and characteristics of urban expansion. © 2008 IEEE.
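The two morphology indices named in the abstract can be computed directly from a built-up polygon's area and perimeter. A minimal sketch follows; the abstract does not give the exact formulations the authors used, so these are common textbook definitions, not necessarily theirs:

```python
import math

def compactness(area, perimeter):
    """Circularity-based compactness: equals 1.0 for a circle and
    decreases as the urban boundary becomes more irregular."""
    return 2.0 * math.sqrt(math.pi * area) / perimeter

def fractal_dimension(area, perimeter):
    """Area-perimeter fractal dimension: close to 1 for simple,
    smooth shapes and approaching 2 for highly convoluted boundaries."""
    return 2.0 * math.log(perimeter / 4.0) / math.log(area)
```

For example, a circle of radius 2 (area 4π, perimeter 4π) gives a compactness of 1.0, while a square of side 10 (area 100, perimeter 40) gives a fractal dimension of 1.0, the non-fractal baseline.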
Abstract:
Network traffic arises from the superposition of Origin-Destination (OD) flows. Hence, a thorough understanding of OD flows is essential for modeling network traffic, and for addressing a wide variety of problems including traffic engineering, traffic matrix estimation, capacity planning, forecasting and anomaly detection. However, to date, OD flows have not been closely studied, and very little is known about their properties. We present the first analysis of complete sets of OD flow time series, taken from two different backbone networks (Abilene and Sprint-Europe). Using Principal Component Analysis (PCA), we find that the set of OD flows has small intrinsic dimension. In fact, even in a network with over a hundred OD flows, these flows can be accurately modeled in time using a small number (10 or fewer) of independent components or dimensions. We also show how to use PCA to systematically decompose the structure of OD flow time series into three main constituents: common periodic trends, short-lived bursts, and noise. We provide insight into how the various constituents contribute to the overall structure of OD flows and explore the extent to which this decomposition varies over time.
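The PCA analysis described above can be sketched on synthetic data: arrange the flows as a time × flows matrix, centre it, take the SVD, and check how much variance a handful of components capture. The trend-plus-noise construction and all sizes here are illustrative assumptions, not the Abilene or Sprint-Europe data:

```python
import numpy as np

rng = np.random.default_rng(0)
T, F = 500, 120  # time bins x OD flows (synthetic stand-in)

# Synthetic flows: a few shared periodic trends plus noise, mirroring
# the idea that all flows are driven by a small set of common components
t = np.arange(T)
trends = np.stack([np.sin(2 * np.pi * t / 100),
                   np.cos(2 * np.pi * t / 100)], axis=1)   # (T, 2)
loadings = rng.normal(size=(2, F))
X = trends @ loadings + 0.1 * rng.normal(size=(T, F))

# PCA via SVD of the centred matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Fraction of total variance captured by the first k components
k = 10
explained = (s[:k] ** 2).sum() / (s ** 2).sum()

# Rank-k reconstruction: small intrinsic dimension means small residual
Xk = (U[:, :k] * s[:k]) @ Vt[:k]
rel_err = np.linalg.norm(Xc - Xk) / np.linalg.norm(Xc)
```

With only two true underlying trends, `explained` comes out well above 0.9, matching the paper's observation that ten or fewer dimensions suffice for over a hundred flows.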
Abstract:
In this paper we discuss a new type of query in Spatial Databases, called the Trip Planning Query (TPQ). Given a set of points P in space, where each point belongs to a category, and given two points s and e, TPQ asks for the best trip that starts at s, passes through exactly one point from each category, and ends at e. An example of a TPQ is when a user wants to visit a set of different places while minimising the total travelling cost, e.g. what is the shortest travelling plan for me to visit an automobile shop, a CVS pharmacy outlet, and a Best Buy shop along my trip from A to B? The trip planning query is an extension of the well-known TSP problem and is therefore NP-hard. The difficulty of this query lies in the existence of multiple choices for each category. In this paper, we first study fast approximation algorithms for the trip planning query in a metric space, assuming that the data set fits in main memory, and give a theoretical analysis of their approximation bounds. Then, the trip planning query is examined for data sets that do not fit in main memory and must be stored on disk. For the disk-resident data, we consider two cases. In one case, we assume that the points are located in Euclidean space and indexed with an R-tree. In the other case, we consider the problem of points that lie on the edges of a spatial network (e.g. a road network), where the distance between two points is defined as the shortest distance over the network. Finally, we give an experimental evaluation of the proposed algorithms using synthetic data sets generated on real road networks.
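A natural in-memory baseline for TPQ is a nearest-neighbour heuristic: from the current location, repeatedly walk to the closest point of any not-yet-covered category, then finish at the end point. The sketch below illustrates this idea in the Euclidean case; it is an illustrative heuristic, not necessarily the paper's algorithm:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_trip(start, end, categories):
    """Nearest-neighbour heuristic for TPQ: visit the closest point among
    all uncovered categories, one category at a time, then go to `end`.
    `categories` maps category name -> list of candidate points."""
    trip, cur = [start], start
    remaining = dict(categories)
    while remaining:
        cat, pt = min(((c, p) for c, pts in remaining.items() for p in pts),
                      key=lambda cp: dist(cur, cp[1]))
        trip.append(pt)
        cur = pt
        del remaining[cat]          # exactly one point per category
    trip.append(end)
    cost = sum(dist(trip[i], trip[i + 1]) for i in range(len(trip) - 1))
    return trip, cost
```

On a toy instance with a pharmacy at (1,0) and a shop at (2,0) between s=(0,0) and e=(3,0), the heuristic recovers the obvious straight-line trip of cost 3.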
Abstract:
One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator) generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges to satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.
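One of the empirical constraints listed above, relative file popularity, is commonly modelled in Web workload generators with a Zipf-like distribution, where the r-th most popular file is requested with probability proportional to 1/r^alpha. A small sketch of such a sampler (an illustration of the general idea, not Surge's actual code):

```python
import random

def zipf_popularity(n_files, alpha=1.0, seed=0):
    """Zipf-like popularity sampler: the file at rank r (0 = most popular)
    is drawn with probability proportional to 1 / (r + 1) ** alpha."""
    weights = [1.0 / (r ** alpha) for r in range(1, n_files + 1)]
    total = sum(weights)
    probs = [w / total for w in weights]
    rng = random.Random(seed)
    def sample():
        return rng.choices(range(n_files), weights=probs, k=1)[0]
    return sample

# Draw a reference stream of 10,000 requests over 1,000 files
sampler = zipf_popularity(1000)
hits = [sampler() for _ in range(10000)]
```

In the resulting stream, a small number of popular files receive the bulk of the requests, reproducing the heavy concentration of references that makes caching behaviour realistic.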
Abstract:
Since Wireless Sensor Networks (WSNs) are subject to failures, fault-tolerance becomes an important requirement for many WSN applications. Fault-tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, which can switch from energy-efficient operation in normal monitoring to reliable and fast delivery for emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet delivery from all sensor nodes. Topology design supports fault-tolerance by ensuring that there are alternative acceptable routes to data sinks when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes – those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal-cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays with minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must have length-bounded alternative paths to sinks when an arbitrary sensor node fails.
We then evaluate the fault-tolerance of each topology in data gathering simulations using ER-MAC.
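The (k,l)-sink-connectivity requirement above depends on counting vertex-disjoint paths of bounded length. A simple greedy lower bound (repeatedly extract a shortest path within the length budget, then delete its internal vertices) can be sketched as follows; this illustrates the underlying idea and is not the thesis's Counting-Paths algorithm:

```python
from collections import deque

def shortest_path(adj, src, dst, max_len):
    """BFS for a path from src to dst with at most max_len edges.
    Returns the path as a list of nodes, or None if none exists."""
    prev = {src: None}
    q = deque([(src, 0)])
    while q:
        node, d = q.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        if d < max_len:
            for nb in adj.get(node, ()):
                if nb not in prev:
                    prev[nb] = node
                    q.append((nb, d + 1))
    return None

def count_disjoint_paths(adj, src, dst, max_len):
    """Greedy lower bound on the number of vertex-disjoint src-dst paths
    of length <= max_len: extract a shortest path, delete its internal
    vertices, repeat until no path remains."""
    adj = {u: set(vs) for u, vs in adj.items()}   # mutable local copy
    count = 0
    while True:
        path = shortest_path(adj, src, dst, max_len)
        if path is None:
            return count
        count += 1
        if len(path) == 2:                 # direct edge: remove it
            adj[path[0]].discard(path[1])
            continue
        for v in path[1:-1]:               # delete internal vertices
            adj.pop(v, None)
            for vs in adj.values():
                vs.discard(v)
```

On a diamond graph where a sensor s reaches the sink t via two relays a and b, the count is 2 with a length bound of 2 hops, and drops to 0 when the bound is tightened to 1, which is exactly the kind of check (k,l)-connectivity needs.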
Abstract:
A wireless sensor network can become partitioned due to node failure, requiring the deployment of additional relay nodes in order to restore network connectivity. This introduces an optimisation problem involving a tradeoff between the number of additional nodes that are required and the costs of moving through the sensor field for the purpose of node placement. This tradeoff is application-dependent, influenced for example by the relative urgency of network restoration. In addition, minimising the number of relay nodes might lead to long routing paths to the sink, which may cause problems of data latency. This latency is extremely important in wireless sensor network applications such as battlefield surveillance, intrusion detection, disaster rescue and highway traffic coordination, where real-time constraints must not be violated. Therefore, we also consider the problem of deploying multiple sinks in order to improve network performance. Previous research has only considered parts of this problem in isolation, and has not properly addressed the problems of moving through a constrained environment, discovering changes to that environment during the repair, or network quality after the restoration. In this thesis, we first consider a base problem in which we assume the exploration tasks have already been completed, so our aim is to optimise the use of resources in the static, fully observed problem. In the real world, we would not know the radio and physical environments after damage, which creates a dynamic problem where damage must be discovered. Therefore, we extend to the dynamic problem, in which the network repair problem encompasses both exploration and restoration. We then add a hop-count constraint for network quality, requiring that the desired locations can talk to a sink within a hop-count limit after the network is restored.
For each variant of the network repair problem, we propose different solutions (heuristics and/or complete algorithms) which prioritise different objectives. We evaluate our solutions in simulation, assessing solution quality (node cost, movement cost, computation time, and total restoration time) while varying the problem types and the capability of the agent that makes the repair. We show that the relative importance of the objectives influences the choice of algorithm, and that different movement speeds for the repairing agent have a significant impact on performance and must be taken into account when selecting an algorithm. In particular, the node-based approaches are best in terms of node cost, and the path-based approaches are best in terms of mobility cost. For total restoration time, the node-based approaches are best with a fast-moving agent, while the path-based approaches are best with a slow-moving agent; for a medium-speed agent, the total restoration times of the two families of approaches are almost equal.
Abstract:
As announced in the November 2000 issue of MathStats&OR [1], one of the projects supported by the Maths, Stats & OR Network funds is an international survey of research into pedagogic issues in statistics and OR. I am taking the lead on this and report here on the progress that has been made during the first year. A paper giving some background to the project and describing initial thinking on how it might be implemented was presented at the 53rd session of the International Statistical Institute in Seoul, Korea, in August 2001, in a session on The future of statistics education research [2]. It sounded easy. I considered that I was something of an expert on surveys, having lectured on the topic for many years and having helped students and others who were doing surveys, particularly with the design of their questionnaires. Surely all I had to do was draft a few questions, send them electronically to colleagues in statistical education who would be only too happy to respond, and summarise their responses? I should have learnt from my experience of advising all those students who thought that doing a survey was easy and to whom I had to explain that their ideas were too ambitious. There are several inter-related stages in survey research and it is important to think about these before rushing into the collection of data. In the case of the survey in question, this planning stage revealed several challenges. Surveys are usually done for a purpose, so even before planning how to do them it is advisable to think about the final product and the dissemination of results. This is the route I followed.
Abstract:
Front detection and aggregation techniques were applied to 300m resolution MERIS satellite ocean colour data for the first time, to describe frequently occurring shelf-sea fronts near to the Scottish coast. Medium resolution (1km) thermal and colour data have previously been used to analyse the distribution of surface fronts, though these cannot capture smaller frontal zones or those in close proximity to the coast, particularly where the coastline is convoluted. Seasonal frequent front maps, derived from both chlorophyll and SST data, revealed a number of key frontal zones, a subset of which were based on new insights into the sediment and plankton dynamics provided exclusively by the higher-resolution chlorophyll fronts. The methodology is described for applying colour and thermal front data to the task of identifying zones of ecological importance that could assist the process of defining marine protected areas. Each key frontal zone is analysed to describe its spatial and temporal extent and variability, and possible mechanisms. It is hoped that these tools can provide guidance on the dynamic habitats of marine fauna towards aspects of marine spatial planning and conservation.
Abstract:
Ecosystems consist of complex dynamic interactions among species and the environment, the understanding of which has implications for predicting the environmental response to changes in climate and biodiversity. With the recent adoption of more explorative tools, such as Bayesian networks, in predictive ecology, few assumptions need to be made about the data, and complex, spatially varying interactions can be recovered from collected field data. In this study, we compare Bayesian network modelling approaches that account for latent effects to reveal species dynamics for 7 geographically and temporally varied areas within the North Sea. We also apply structure learning techniques to identify functional relationships, such as prey-predator relationships, between trophic groups of species that vary across space and time. We examine whether the use of a general hidden variable can reflect overall changes in the trophic dynamics of each spatial system and whether the inclusion of a specific hidden variable can model an unmeasured group of species. The general hidden variable appears to capture changes in the variance of the biomass of different groups of species. Models that include both general and specific hidden variables identified similarities with the underlying food web dynamics and modelled spatially unmeasured effects. We predict the biomass of the trophic groups and find that predictive accuracy varies with the models' features and across the different spatial areas; we therefore propose a model that allows for spatial autocorrelation and two hidden variables. Our proposed model was able to produce novel insights into this ecosystem's dynamics and ecological interactions, mainly because we account for the heterogeneous nature of the driving factors within each area and their changes over time.
Our findings demonstrate that accounting for additional sources of variation, by combining structure learning from data with experts' knowledge in the model architecture, has the potential to yield deeper insights into the structure and stability of ecosystems. Finally, we were able to discover meaningful functional networks that were spatially and temporally differentiated, with the particular mechanisms ranging from trophic associations to interactions with climate and commercial fisheries.
Abstract:
Social networks have increasingly become a showcase where media outlets can promote themselves. Like many other media, radio stations have used social networks to promote themselves more effectively and, at times, to obtain more feedback from their listeners. But not all programmes make the same use of them, and not all have managed to reach their followers in the same way. This article discusses the consolidation on social networks of the major radio sports programmes in Spain. Through a comparative analysis between 2010 and 2015, the authors trace the evolution of the programmes and, at the same time, compare the followers these programmes have on social networks with their number of listeners according to the EGM.
Abstract:
Sustainability indicators are useful tools for decision-making. Latin American cities, and especially their areas of expansion without adequate planning, face growing challenges in reversing problems that threaten their sustainability. This study offers a preliminary assessment of the environmental sustainability of the peri-urban area of Mar del Plata (Argentina), taking as a reference some of the indicators proposed by the Inter-American Development Bank model in its Emerging and Sustainable Cities Initiative. A synthetic index (Environmental Sustainability Index, ISA) was constructed, integrating thirteen indicators grouped into eight themes. The most critical situations (ISA: 0.45-0.558) are identified mainly in zones of rural activity and in areas with precarious settlements. The study deepens knowledge of the environmental dimension of sustainability, emphasising the analysis of internal contrasts within the peri-urban area of Mar del Plata.
Abstract:
Globally, priority areas for biodiversity are relatively well known, yet few detailed plans exist to direct conservation action within them, despite urgent need. Madagascar, like other globally recognized biodiversity hot spots, has complex spatial patterns of endemism that differ among taxonomic groups, creating challenges for the selection of within-country priorities. We show, in an analysis of wide taxonomic and geographic breadth and high spatial resolution, that multitaxonomic rather than single-taxon approaches are critical for identifying areas likely to promote the persistence of most species. Our conservation prioritization, facilitated by newly available techniques, identifies optimal expansion sites for the Madagascar government's current goal of tripling the land area under protection. Our findings further suggest that high-resolution multitaxonomic approaches to prioritization may be necessary to ensure protection for biodiversity in other global hot spots.
Abstract:
Ecological coherence is a multifaceted conservation objective that includes some potentially conflicting concepts. These concepts include the extent to which the network maximises diversity (including genetic diversity) and the extent to which protected areas interact with non-reserve locations. To examine the consequences of different selection criteria, the preferred location to complement protected sites was examined using samples taken from four locations around each of two marine protected areas: Strangford Lough and Lough Hyne, Ireland. Three different measures of genetic distance were used: FST, Dest and a measure of allelic dissimilarity, along with a direct assessment of the total number of alleles in different candidate networks. Standardized site scores were used for comparisons across methods and selection criteria. The average score for Castlehaven, a site relatively close to Lough Hyne, was highest, implying that this site would capture the most genetic diversity while ensuring the highest degree of interaction between protected and unprotected sites. Patterns around Strangford Lough were more ambiguous, potentially reflecting the weaker genetic structure around this protected area in comparison to Lough Hyne. Similar patterns were found across species with different dispersal capacities, indicating that methods based on genetic distance could be used to help maximise ecological coherence in reserve networks.
Highlights:
• Ecological coherence is a key component of marine protected area network design.
• Coherence contains a number of competing concepts.
• Genetic information from field populations can help guide assessments of coherence.
• Average choice across different concepts of coherence was consistent among species.
• Measures can be combined to compare the coherence of different network designs.