868 results for Distributed mobility management
Abstract:
Yellowfin sole, Pleuronectes asper, is the second most abundant flatfish in the North Pacific Ocean and is most highly concentrated in the eastern Bering Sea. It has been a target species in the eastern Bering Sea since the mid-1950s, initially by foreign distant-water fisheries but more recently by U.S. fisheries. Annual commercial catches since 1959 have ranged from 42,000 to 554,000 metric tons (t). Yellowfin sole is a relatively small flatfish, averaging about 26 cm in length and 200 g in weight in commercial catches. It is distributed from nearshore waters to depths of about 100 m in the eastern Bering Sea in summer, but moves to deeper water in winter to escape sea ice. Yellowfin sole is a benthopelagic feeder. It is a long-lived species (>20 years) with a correspondingly low natural mortality rate estimated at 0.12. After being overexploited during the early years of the fishery and suffering a substantial decline in stock abundance, the resource has recovered and is currently in excellent condition. The biomass during the 1980s may have been as high as, if not higher than, that at the beginning of the fishery. Based on the results of demersal trawl surveys and two age-structured models, the current exploitable biomass has been estimated to range between 1.9 and 2.6 million t. Appropriate harvest strategies were investigated under a range of possible recruitment levels. The recommended harvest level was calculated by multiplying the yield derived from the F0.1 harvest level (161 g at F = 0.14) by an average recruitment value, resulting in a commercial harvest of 276,900 t, or about 14% of the estimated exploitable biomass.
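The harvest arithmetic can be reproduced from the figures quoted in the abstract. The sketch below (Python) does so; the average recruitment shown is back-calculated from the quoted numbers rather than taken from the source, so treat it as an illustration only.

# Figures quoted in the abstract
yield_per_recruit_g = 161.0        # g per recruit at F = 0.14 (the F0.1 level)
recommended_harvest_t = 276_900.0  # recommended commercial harvest, metric tons
biomass_low_t, biomass_high_t = 1.9e6, 2.6e6   # exploitable biomass range, t

# Average recruitment implied by the quoted numbers (not stated in the abstract)
recruits = recommended_harvest_t * 1e6 / yield_per_recruit_g   # grams / (g per recruit)
print(f"implied average recruitment ~ {recruits:.2e} fish")    # ~1.7e9 recruits

# Check the "about 14% of the estimated exploitable biomass" statement
for b in (biomass_low_t, biomass_high_t):
    print(f"harvest / biomass = {recommended_harvest_t / b:.1%}")   # ~14.6% and ~10.7%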
Abstract:
The growth of population and urban centers during the twentieth century, especially in developing countries, contributed to the increase of impervious areas in drainage basins, with important impacts on urban drainage systems and on the occurrence of the associated floods. Floods cause material, health, and social damage. Recently, conservation practices and compensatory measures have been proposed that seek to contribute to the control of urban floods by delaying the peak and attenuating the hydrographs. Hydrologic-hydraulic mathematical models allow simulating the adoption of these control measures, demonstrating and optimizing their placement. This dissertation presents the results of applying the Storm Water Management Model (SWMM) to the representative study watershed of the rio Morto, located in a peri-urban area of Jacarepaguá, in the city of Rio de Janeiro, with an area of 9.41 km². The SWMM runs were carried out with the support of the Storm and Sanitary Analysis (SSA) interface, integrated into the AutoCAD Civil 3D system. Besides verifying the suitability of the model for representing the hydrologic and hydraulic systems of the basin, studies were developed for two flood-control scenarios: scenario 1, involving the construction of a detention reservoir, and scenario 2, considering the installation of on-lot stormwater reservoirs. The resulting hydrographs were compared with the hydrograph simulated for current conditions. In addition, the costs associated with each scenario were estimated using the budgeting system of Empresa Rio Águas of the PCRJ. The simulations used the cartographic base and the climatological and hydrological data previously observed within the HIDROCIDADES project, BRUM/FINEP Research Network, of which this study is part. The processes of generation and propagation of surface runoff and baseflow were represented. During calibration, a sensitivity analysis of the parameters was performed, showing that the most sensitive parameters are those related to impervious areas, especially the percentage of impervious area of the basin (Ai). Calibration was carried out by manually adjusting seven surface-runoff parameters and five baseflow parameters for three events. Coefficients of determination between 0.52 and 0.64 were obtained, and the difference between simulated and observed runoff volumes ranged from 0.60% to 4.96%. For model validation, an exceptional rainfall event observed in the city in April 2010 was adopted, which at the time caused floods and major disruption in the city. In this case, the coefficient of determination was 0.78 and the volume difference was 15%. The main discrepancies between observed and simulated hydrographs occurred at the peak flows. In both scenarios the floods were controlled. From these studies, it could be concluded that scenario 2 offered the best cost-benefit ratio. For this scenario, the greatest attenuation and delay of the hydrograph peak flow were observed, equal to 21.51% of the flow simulated for the current conditions of the basin. The budgeted construction costs for the on-lot reservoirs were 52% lower than that of the detention reservoir.
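The goodness-of-fit measures used in the calibration (coefficient of determination and the percentage difference between simulated and observed runoff volumes) are standard and easy to compute. A minimal Python sketch follows, with hypothetical hydrograph arrays standing in for the observed and SWMM-simulated series; the squared correlation is assumed as the definition of the coefficient of determination, since the abstract does not specify one.

import numpy as np

def calibration_metrics(observed, simulated):
    """Squared correlation (one common reading of 'coefficient of determination')
    and percentage volume error between observed and simulated hydrographs."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    r = np.corrcoef(observed, simulated)[0, 1]
    vol_err = 100.0 * (simulated.sum() - observed.sum()) / observed.sum()
    return r ** 2, vol_err

# Hypothetical flows (m3/s) at a fixed time step, for illustration only
obs = [0.2, 0.5, 1.8, 3.4, 2.6, 1.1, 0.6, 0.3]
sim = [0.2, 0.6, 1.5, 3.0, 2.8, 1.3, 0.6, 0.3]
r2, vol_err = calibration_metrics(obs, sim)
print(f"R2 = {r2:.2f}, volume difference = {vol_err:+.2f}%")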
Abstract:
Nile perch, Lates niloticus Linnaeus, 1758, is a predatory fish of high commercial and recreational value. It can grow to a length of 2 m and a weight of 200 kg. In Uganda, Nile perch was originally found only in Lake Albert and the River Nile below Murchison Falls. The species is, however, widely distributed in Africa, occurring in the Nile system below Murchison Falls, the Congo, Niger, Volta, Senegal and in Lakes Chad and Turkana (Greenwood 1966).
Abstract:
This paper provides a direct comparison of two stochastic optimisation techniques (Markov Chain Monte Carlo and Sequential Monte Carlo) when applied to the problem of conflict resolution and aircraft trajectory control in air traffic management. The two methods are then compared with an existing Mixed-Integer Linear Programming technique, which is also popular in distributed control. © 2011 IFAC.
On the structure of state-feedback LQG controllers for distributed systems with communication delays
Abstract:
This paper presents explicit solutions for a few distributed LQG problems in which players communicate their states with delays. The resulting control structure is reminiscent of a simple management hierarchy, in which a top level input is modified by newer, more localized information as it gets passed down the chain of command. It is hoped that the controller forms arising through optimization may lend insight into the control strategies of biological and social systems with communication delays. © 2011 IEEE.
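The abstract does not give the controller formula, but the "management hierarchy" description suggests a delayed-sharing feedback of roughly the following schematic form (an illustrative reading, not the paper's exact solution):

u_i(t) \;=\; \underbrace{K\,\hat{x}(t-\tau)}_{\text{top-level term from the }\tau\text{-delayed shared state}} \;+\; \underbrace{K_i\,\big(x_i(t)-\hat{x}_i(t-\tau)\big)}_{\text{local correction from newer, purely local information}}

In words: a coarse input is computed from the state estimate every player can reconstruct after the communication delay, and each player then adjusts it with whatever fresher local information has not yet propagated, mirroring the chain-of-command picture above.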
Abstract:
We demonstrate room-temperature operation of photonic-crystal distributed-feedback quantum cascade lasers emitting at 4.7 μm. A rectangular photonic-crystal lattice perpendicular to the cleaved facet was defined using holographic lithography. The anticrossing of the index- and Bragg-guided dispersions of the rectangular lattice forms a band-edge mode with extended mode volume and reduced group velocity. Utilizing this coupling mechanism, single-mode operation with a near-diffraction-limited divergence angle of 12 degrees is obtained for 33 μm wide devices over a temperature range of 85-300 K. The reduced threshold current densities and improved heat-dissipation management contribute to the realization of the devices' room-temperature operation.
Abstract:
Knowledge management is a critical issue for next-generation web applications, because the next-generation web is becoming a semantic web, a knowledge-intensive network. XML Topic Map (XTM), a new standard, is emerging in this field as one of the structures for the semantic web. It organizes information in a way that can be optimized for navigation. In this paper, a new set of hyper-graph operations on XTM (HyO-XTM) is proposed to manage distributed knowledge resources. HyO-XTM is based on the XTM hyper-graph model. It is well applied upon XTM to simplify the workload of knowledge management. The application of the XTM hyper-graph operations is demonstrated by the knowledge management system of a consulting firm. HyO-XTM shows the potential to lead knowledge management to the next-generation web.
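The abstract does not spell out the HyO-XTM operations themselves. As a rough illustration of the underlying idea, a topic map can be treated as a hyper-graph whose hyperedges are associations over sets of topics, and merging two distributed maps then reduces to a union of topic and edge sets. The Python sketch below is generic and its names are hypothetical, not taken from the paper.

from dataclasses import dataclass, field

@dataclass
class TopicMapHypergraph:
    """Topics are nodes; each association is a hyperedge over a set of topics."""
    topics: set = field(default_factory=set)
    associations: list = field(default_factory=list)  # list of frozensets of topic ids

    def add_association(self, *topic_ids):
        self.topics.update(topic_ids)
        self.associations.append(frozenset(topic_ids))

    def merge(self, other):
        """Hyper-graph union: one way to combine topic maps held at different sites."""
        return TopicMapHypergraph(self.topics | other.topics,
                                  list({*self.associations, *other.associations}))

# Two knowledge resources held at different sites
a, b = TopicMapHypergraph(), TopicMapHypergraph()
a.add_association("consulting", "project-x", "risk-report")
b.add_association("risk-report", "author:alice")
print(a.merge(b).topics)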
Abstract:
The existing interpretation of the T⁻¹ temperature dependence of the low-field miniband conduction is derived from certain concepts of conventional band theory for band structures resulting from spatial periodicities commensurable with the dimensionalities of the system. It is pointed out that such concepts do not apply to the case of miniband conduction, where we are dealing with band structures resulting from a one-dimensional periodicity in a three-dimensional system. It is shown that in the case of miniband conduction, the current carriers are distributed continuously over all energies in a sub-band, but only those with energies within the width of the miniband contribute to the current. The T⁻¹ temperature dependence of the low-field mobility is due to the depletion of these current-carrying carriers with the rise of temperature.
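The depletion argument can be summarised with a back-of-the-envelope estimate (a reading of the abstract, not the paper's derivation): if the carriers are spread thermally over the sub-band but only those within the miniband width Δ carry current, then for k_B T ≫ Δ the current-carrying fraction falls roughly as

\frac{n_{\text{mini}}}{n} \;\approx\; \frac{\Delta}{k_B T} \qquad (k_B T \gg \Delta), \qquad\text{hence}\qquad \mu(T) \;\propto\; \frac{1}{T},

which reproduces the T⁻¹ low-field dependence discussed above.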
Abstract:
It is anticipated that constrained devices in the Internet of Things (IoT) will often operate in groups to achieve collective monitoring or management tasks. For sensitive and mission-critical sensing tasks, securing multicast applications is therefore highly desirable. To secure group communications, several group key management protocols have been introduced. However, the majority of the proposed solutions are not adapted to the IoT and its strong processing, storage, and energy constraints. In this context, we introduce a novel decentralized and batch-based group key management protocol to secure multicast communications. Our protocol is simple and it reduces the rekeying overhead triggered by membership changes in dynamic and mobile groups and guarantees both backward and forward secrecy. To assess our protocol, we conduct a detailed analysis with respect to its communication and storage costs. This analysis is validated through simulation to highlight energy gains. The obtained results show that our protocol outperforms its peers with respect to keying overhead and the mobility of members.
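The abstract does not detail the rekeying algorithm; the sketch below (Python, a generic batch-rekeying scheme rather than the authors' protocol) only illustrates the idea being evaluated: accumulating joins and leaves over an interval and distributing a single new group key per batch, instead of rekeying on every membership event.

import os, hashlib

class BatchRekeyGroup:
    """Generic batch rekeying sketch: one new group key per batch of membership
    changes, giving backward secrecy for joiners and forward secrecy for leavers."""

    def __init__(self):
        self.members = set()
        self.pending_joins, self.pending_leaves = set(), set()
        self.group_key = os.urandom(16)
        self.rekey_messages = 0

    def join(self, node):  self.pending_joins.add(node)
    def leave(self, node): self.pending_leaves.add(node)

    def flush_batch(self):
        """Apply all pending membership changes with a single rekey operation."""
        self.members |= self.pending_joins
        self.members -= self.pending_leaves
        self.group_key = hashlib.sha256(os.urandom(16)).digest()[:16]  # fresh key
        self.rekey_messages += len(self.members)   # one key-delivery message per member (worst case)
        self.pending_joins.clear(); self.pending_leaves.clear()

g = BatchRekeyGroup()
for i in range(10): g.join(f"sensor-{i}")
g.flush_batch()            # one rekey covers 10 joins
g.leave("sensor-3"); g.join("sensor-10")
g.flush_batch()            # one rekey covers a leave and a join
print(g.rekey_messages)    # 20 messages instead of 12 separate rekey rounds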
Abstract:
Programmers of parallel processes that communicate through shared globally distributed data structures (DDS) face a difficult choice. Either they must explicitly program DDS management, by partitioning or replicating it over multiple distributed memory modules, or be content with a high-latency coherent (sequentially consistent) memory abstraction that hides the DDS' distribution. We present Mermera, a new formalism and system that enable a smooth spectrum of noncoherent shared memory behaviors to coexist between the above two extremes. Our approach allows us to define known noncoherent memories in a new simple way, to identify new memory behaviors, and to characterize generic mixed-behavior computations. The latter are useful for programming using multiple behaviors that complement each other's advantages. On the practical side, we show that the large class of programs that use asynchronous iterative methods (AIM) can run correctly on slow memory, one of the weakest, and hence most efficient and fault-tolerant, noncoherence conditions. An example AIM program to solve linear equations is developed to illustrate: (1) the need for concurrently mixing memory behaviors, and (2) the performance gains attainable via noncoherence. Other program classes tolerate weak memory consistency by synchronizing in such a way as to yield executions indistinguishable from coherent ones. AIM computations on noncoherent memory yield noncoherent, yet correct, computations. We report performance data that exemplifies the potential benefits of noncoherence, in terms of raw memory performance as well as application speed.
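A small sketch helps make the AIM claim concrete: asynchronous Jacobi iteration keeps converging even when each update reads arbitrarily stale copies of the other components, which is why a slow (noncoherent) memory suffices. The Python below simulates the staleness explicitly; it is an illustration of the idea, not Mermera code or the paper's example program.

import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.eye(n) * n + rng.random((n, n))        # diagonally dominant, so Jacobi converges
b = rng.random(n)
x = np.zeros(n)
stale = [x.copy() for _ in range(n)]          # each "process" keeps its own stale view

for step in range(200):
    i = rng.integers(n)                       # some process updates its component
    view = stale[i]                           # possibly outdated values of the other x_j
    x[i] = (b[i] - A[i] @ view + A[i, i] * view[i]) / A[i, i]
    if rng.random() < 0.5:                    # views refresh only occasionally (slow memory)
        stale[i] = x.copy()

print(np.max(np.abs(A @ x - b)))              # residual shrinks despite the stale reads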
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
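As a rough illustration of the "abstract class interfaces" idea described above (the names below are hypothetical, not the project's actual API), the framework style amounts to fixing a resource-management interface and letting each model supply its own concretization:

from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Abstract Resource Management Interface: each model (real-time, probabilistic,
    fault-tolerant, ...) provides its own concretization behind the same calls."""

    @abstractmethod
    def register_resource(self, resource_id, capabilities): ...

    @abstractmethod
    def register_task(self, task_id, requirements): ...

    @abstractmethod
    def allocate(self, task_id):
        """Return a resource meeting the task's constraints, or None."""

class BestEffortManager(ResourceManager):
    def __init__(self):
        self.resources, self.tasks = {}, {}
    def register_resource(self, resource_id, capabilities):
        self.resources[resource_id] = capabilities
    def register_task(self, task_id, requirements):
        self.tasks[task_id] = requirements
    def allocate(self, task_id):
        req = self.tasks[task_id]
        # pick the first resource whose advertised capabilities cover the request
        return next((r for r, cap in self.resources.items()
                     if all(cap.get(k, 0) >= v for k, v in req.items())), None)

mgr = BestEffortManager()
mgr.register_resource("server-A", {"cpu": 4, "bandwidth_mbps": 100})
mgr.register_task("render-job", {"cpu": 2, "bandwidth_mbps": 50})
print(mgr.allocate("render-job"))   # -> "server-A"

A real-time or probabilistic manager would implement the same three calls with admission control or stochastic models behind them, which is the point of organizing the family of models around one interface.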
Abstract:
Commonly, research work in routing for delay tolerant networks (DTN) assumes that node encounters are predestined, in the sense that they are the result of unknown, exogenous processes that control the mobility of these nodes. In this paper, we argue that for many applications such an assumption is too restrictive: while the spatio-temporal coordinates of the start and end points of a node's journey are determined by exogenous processes, the specific path that a node may take in space-time, and hence the set of nodes it may encounter, could be controlled in such a way so as to improve the performance of DTN routing. To that end, we consider a setting in which each mobile node is governed by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged for DTN message delivery purposes. We define the Mobility Coordination Problem (MCP) for DTNs as follows: Given a set of nodes, each with its own schedule, and a set of messages to be exchanged between these nodes, devise a set of node encounters that minimize message delivery delays while satisfying all node schedules. The MCP for DTNs is general enough that it allows us to model and evaluate some of the existing DTN schemes, including data mules and message ferries. In this paper, we show that MCP for DTNs is NP-hard and propose two detour-based approaches to solve the problem. The first (DMD) is a centralized heuristic that leverages knowledge of the message workload to suggest specific detours to optimize message delivery. The second (DNE) is a distributed heuristic that is oblivious to the message workload, and which selects detours so as to maximize node encounters. We evaluate the performance of these detour-based approaches using extensive simulations based on synthetic workloads as well as real schedules obtained from taxi logs in a major metropolitan area. Our evaluation shows that our centralized, workload-aware DMD approach yields the best performance, in terms of message delay and delivery success ratio, and that our distributed, workload-oblivious DNE approach yields favorable performance when compared to approaches that require the use of data mules and message ferries.
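A minimal sketch of the workload-oblivious flavour of this idea (in the spirit of DNE, though the actual heuristics are more elaborate): a node takes a detour to meet another node only if the extra travel fits within its schedule slack, and among the feasible detours it picks the one expected to add the most encounters. The Python below is illustrative only.

from math import dist

def feasible_detour(pos, next_stop, due_time, now, speed, meet_point):
    """Can the node visit meet_point and still reach its next scheduled stop on time?"""
    travel = dist(pos, meet_point) + dist(meet_point, next_stop)
    return now + travel / speed <= due_time

def pick_detour(pos, next_stop, due_time, now, speed, candidate_meetings):
    """candidate_meetings: {meet_point: expected number of new encounters}.
    Choose the feasible detour that maximizes new encounters (workload-oblivious)."""
    feasible = {p: k for p, k in candidate_meetings.items()
                if feasible_detour(pos, next_stop, due_time, now, speed, p)}
    return max(feasible, key=feasible.get) if feasible else None

# A node at (0, 0) must be at (10, 0) by t = 15, moving at unit speed (slack = 5)
choice = pick_detour((0, 0), (10, 0), due_time=15, now=0, speed=1.0,
                     candidate_meetings={(5, 2): 2, (5, 6): 3})
print(choice)   # (5, 2): the richer meeting at (5, 6) does not fit within the slack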
Abstract:
As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged to many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints, (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain, (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language, and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
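As a toy illustration of the "tiered layers of abstraction" idea (the instruction names and the static check below are invented for the example; snBench's actual tasking language is not shown in the abstract): a high-level sensing expression is lowered to a small common instruction list, which can then be checked statically for resource needs before deployment.

# High-level service: "every 5 s, if camera motion is detected, notify the operator"
service = ("periodic", 5, ("if", ("sense", "camera.motion"), ("notify", "operator")))

def lower(expr):
    """Lower a tiny high-level expression to a flat 'common tasking' instruction list."""
    kind = expr[0]
    if kind == "periodic":
        return [("TIMER", expr[1])] + lower(expr[2])
    if kind == "if":
        return lower(expr[1]) + [("BRANCH_IF_FALSE", "end")] + lower(expr[2]) + [("LABEL", "end")]
    if kind == "sense":
        return [("READ_SENSOR", expr[1])]
    if kind == "notify":
        return [("SEND", expr[1])]
    raise ValueError(kind)

def required_sensors(program):
    """A static pass inferring which sensors a node must expose to run the program."""
    return {arg for op, arg in program if op == "READ_SENSOR"}

program = lower(service)
print(program)
print(required_sensors(program))   # {'camera.motion'}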
Abstract:
Controlling the mobility pattern of mobile nodes (e.g., robots) to monitor a given field is a well-studied problem in sensor networks. In this setup, absolute control over the nodes' mobility is assumed. Apart from the physical ones, no other constraints are imposed on planning the mobility of these nodes. In this paper, we address a more general version of the problem. Specifically, we consider a setting in which the mobility of each node is externally constrained by a schedule consisting of a list of locations that the node must visit at particular times. Typically, such schedules exhibit some level of slack, which could be leveraged to achieve a specific coverage distribution of a field. Such a distribution defines the relative importance of different field locations. We define the Constrained Mobility Coordination problem for Preferential Coverage (CMC-PC) as follows: given a field with a desired monitoring distribution, and a number of nodes n, each with its own schedule, we need to coordinate the mobility of the nodes in order to achieve the following two goals: 1) satisfy the schedules of all nodes, and 2) attain the required coverage of the given field. We show that the CMC-PC problem is NP-complete (by reduction from the Hamiltonian Cycle problem). Then we propose TFM, a distributed heuristic to achieve field coverage that is as close as possible to the required coverage distribution. We verify the premise of TFM using extensive simulations, as well as taxi logs from a major metropolitan area. We compare TFM to the random mobility strategy; the latter provides a lower bound on performance. Our results show that TFM is very successful in matching the required field coverage distribution, and that it provides at least a two-fold query success ratio for queries that follow the target coverage distribution of the field.
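The abstract leaves TFM's rule unspecified; the sketch below only illustrates the general mechanism at stake: when a node has slack before its next scheduled stop, it spends that slack in the grid cell that is currently most under-covered relative to the target distribution. The Python is hypothetical, not the paper's algorithm.

def most_deficient_cell(target, achieved):
    """target and achieved are dicts cell -> fraction of monitoring time.
    Return the cell with the largest coverage deficit."""
    return max(target, key=lambda c: target[c] - achieved.get(c, 0.0))

def spend_slack(schedule_gap, target, achieved, visit_cost=1.0):
    """Greedily allocate a node's slack time to under-covered cells."""
    plan = []
    achieved = dict(achieved)
    total = sum(achieved.values()) or 1.0
    while schedule_gap >= visit_cost:
        cell = most_deficient_cell(target, {c: v / total for c, v in achieved.items()})
        plan.append(cell)
        achieved[cell] = achieved.get(cell, 0.0) + visit_cost
        total += visit_cost
        schedule_gap -= visit_cost
    return plan

target = {"downtown": 0.5, "harbor": 0.3, "suburb": 0.2}    # desired monitoring distribution
achieved = {"downtown": 4.0, "harbor": 1.0, "suburb": 3.0}  # time spent so far (visit units)
print(spend_slack(3.0, target, achieved))                   # favors 'harbor', the most under-covered cell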