917 results for CONNECTIVITIES HOC PROCEDURES
Abstract:
We consider a dense, ad hoc wireless network confined to a small region. The wireless network is operated as a single cell, i.e., only one successful transmission is supported at a time. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organize into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention-based multiaccess scheme, and it is assumed that every node always has data to send, either originating from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first motivate that for a dense collection of nodes confined to a small region, single cell operation is efficient for single-user decoding transceivers. Then, operating the dense ad hoc wireless network (described above) as a single cell, we study the hop length and power control that maximize the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path-loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterization of the optimal operating point. Simulation results are provided comparing the performance of the optimal strategy derived here with some simple strategies for operating the network.
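As a rough illustration of the scaling stated above, the Python sketch below plugs numbers into the reported form of the optimal transport capacity. The constant k_d and the value of Θ_opt are placeholders not given in the abstract; only the functional form d_opt ∝ P̄_t^(1/η) is taken from the text.

```python
# Illustrative sketch only: k_d and theta_opt are invented placeholders;
# the abstract specifies only the form d_opt ∝ p_bar_t**(1/eta).

def optimal_hop_distance(p_bar_t, eta, k_d=1.0):
    """Optimal hop length (meters), scaling as p_bar_t**(1/eta)."""
    return k_d * p_bar_t ** (1.0 / eta)

def transport_capacity(p_bar_t, eta, theta_opt, k_d=1.0):
    """Transport capacity (bit-meters/second): d_opt(p_bar_t) * theta_opt."""
    return optimal_hop_distance(p_bar_t, eta, k_d) * theta_opt

if __name__ == "__main__":
    # Doubling the power budget raises d_opt (and hence capacity) by 2**(1/eta).
    for p_bar_t in (0.5, 1.0, 2.0):
        print(p_bar_t, transport_capacity(p_bar_t, eta=4.0, theta_opt=1e6))
```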
Abstract:
Different medium access control (MAC) layer protocols, for example the IEEE 802.11 series and others, are used in wireless local area networks. They have limitations in handling bulk data transfer applications, such as video-on-demand and videoconferencing. To address this problem, a cooperative MAC protocol environment has been introduced, which enables the MAC protocol of a node to use the MAC protocols of its nearby nodes as and when required. On various occasions, such a cooperative MAC establishes cooperative transmissions to deliver the specified data to the destination. In this paper we propose the cooperative MAC priority (CoopMACPri) protocol, which exploits the priority values assigned by the upper layers to select different paths to nodes running heterogeneous applications in a wireless ad hoc network environment. The CoopMACPri protocol improves system throughput and minimizes energy consumption. Using a Markov chain, we developed a model to analyse the performance of the CoopMACPri protocol and derived closed-form expressions for the saturated system throughput and energy consumption. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of the CoopMACPri protocol varies with the number of nodes. We observed that the simulation results and analysis reflect the effectiveness of the proposed protocol as per the specifications.
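The abstract does not reproduce the closed-form expressions, but Markov-chain saturation analyses of this kind typically build on Bianchi's model of the 802.11 DCF. The sketch below computes the standard Bianchi saturation throughput as a baseline point of reference; the contention-window and timing parameters are illustrative, and this is not the CoopMACPri closed form itself.

```python
# Hedged sketch: classical Bianchi-style saturation throughput for 802.11 DCF,
# the baseline such Markov-chain analyses usually extend. Slot/packet timings
# and window parameters below are invented, not taken from the paper.

def bianchi_saturation_throughput(n, W=32, m=5, payload=8184.0,
                                  slot=1.0, t_success=100.0, t_collision=100.0):
    """Saturation throughput (payload bits per unit time) for n saturated nodes."""
    # Fixed-point iteration (with damping) for the per-slot transmission
    # probability tau and the conditional collision probability p.
    tau = 0.1
    for _ in range(1000):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * new_tau
    p_tr = 1.0 - (1.0 - tau) ** n                    # prob. some node transmits
    p_s = n * tau * (1.0 - tau) ** (n - 1) / p_tr    # prob. transmission succeeds
    expected_slot = ((1.0 - p_tr) * slot
                     + p_tr * p_s * t_success
                     + p_tr * (1.0 - p_s) * t_collision)
    return p_tr * p_s * payload / expected_slot

for n in (5, 10, 20, 50):
    print(n, round(bianchi_saturation_throughput(n), 2))
```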
Abstract:
Vehicular Ad-hoc Networks (VANETs) are wireless ad-hoc networks that aim to provide communication among vehicles. A key characteristic of VANETs is the very high mobility of nodes, which results in a frequently changing topology and in frequent breakage and re-establishment of the paths among the nodes involved. These characteristics make the Quality of Service (QoS) requirements in VANETs a challenging issue. In this paper we characterize the performance available to applications in infrastructureless VANETs in terms of path holding time, path breakage probability and per-session throughput as a function of vehicle density on the road, data traffic rate and the number of connections formed among vehicles, using table-driven and on-demand routing algorithms. Several QoS constraints on the applications of infrastructureless VANETs are observed in the results obtained.
Abstract:
Index-flood based regional frequency analysis (RFA) procedures are used by hydrologists to estimate design quantiles of hydrological extreme events at data-sparse or ungauged locations in river basins. There is a dearth of attempts to establish which among those procedures is better for RFA in the L-moment framework. This paper evaluates the performance of the conventional index flood (CIF), the logarithmic index flood (LIF), and two variants of the population index flood (PIF) procedures in estimating flood quantiles for ungauged locations by Monte Carlo simulation experiments and a case study on watersheds in Indiana in the U.S. To evaluate the PIF procedure, L-moment formulations are developed for implementing the procedure in situations where the regional frequency distribution (RFD) is the generalized logistic (GLO), generalized Pareto (GPA), generalized normal (GNO) or Pearson type III (PE3), as those formulations are unavailable. Results indicate that one of the variants of the PIF procedure, which utilizes the regional information on the first two L-moments, is more effective than the CIF and LIF procedures. The improvement in quantile estimation using this variant of the PIF procedure as compared with the CIF procedure is significant when the RFD is a generalized extreme value, GLO, GNO, or PE3, and marginal when it is GPA. (C) 2015 American Society of Civil Engineers.
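For readers unfamiliar with the index-flood idea, the sketch below shows the conventional CIF estimator in Python: the T-year quantile at a site is its index flood (commonly the mean annual flood) multiplied by a regional growth factor read from the fitted regional frequency distribution. A GEV regional distribution and all numbers are assumed for illustration; the paper also treats GLO, GPA, GNO and PE3 regional distributions.

```python
# Minimal sketch of the conventional index-flood (CIF) estimator, assuming a
# GEV regional growth curve; parameter values and the example index flood are
# illustrative, not taken from the paper.
from scipy.stats import genextreme

def cif_quantile(index_flood, return_period, shape, loc, scale):
    """T-year quantile = index flood x regional growth factor q(1 - 1/T)."""
    non_exceedance = 1.0 - 1.0 / return_period
    growth_factor = genextreme.ppf(non_exceedance, c=shape, loc=loc, scale=scale)
    return index_flood * growth_factor

# Example: 100-year flood at an ungauged site whose index flood (mean annual
# flood, e.g. regressed from catchment characteristics) is 250 m^3/s.
print(cif_quantile(250.0, 100, shape=-0.1, loc=1.0, scale=0.3))
```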
Abstract:
Hydrilla (Hydrilla verticillata (L.f.) Royle), an invasive aquatic weed, continues to spread to new regions in the United States. Two biotypes, one female dioecious and the other monoecious, have been identified. Management of the spread of hydrilla requires understanding the mechanisms of introduction and transport, an ability to map and make available information on distribution, and tools to distinguish the known U.S. biotypes as well as potential new introductions. A review of the literature and discussions with aquatic scientists and resource managers point to the aquarium and water garden plant trades as the primary past mechanism for the regional dispersal of hydrilla, while local dispersal is primarily carried out by other mechanisms such as boat traffic, intentional introductions, and waterfowl. The Nonindigenous Aquatic Species (NAS) database is presented as a tool for assembling, geo-referencing, and making available information on the distribution of hydrilla. A map of the current range of dioecious and monoecious hydrilla by drainage is presented. Four hydrilla samples, taken from three discrete, non-contiguous regions (Pennsylvania, Connecticut, and Washington State), were examined using two RAPD assays. The first, generated using primer Operon G17 and capable of distinguishing the dioecious and monoecious U.S. biotypes, indicated that all four samples were of the monoecious biotype. The second assay, using the Stoffel fragment and 5 primers, produced 111 markers and indicated that these samples do not represent new foreign introductions. The differences in the monoecious and dioecious growth habits and management are discussed.
Abstract:
This report documents the methods used at the Monterey Bay Aquarium Research Institute (MBARI) for analyzing seawater nutrient samples with an Alpkem Series 300 Rapid Flow Analyzer (RFA) system. The methods have been optimized for the particular requirements of this laboratory. The RFA system has been used to analyze approximately 20,000 samples during the past two years. The methods have been optimized to run nutrient analyses in a routine manner with a detection limit of better than ±1% and a within-run precision of ±1% of the full-scale concentration range. The normal concentration ranges are 0-200 µM silicate, 0-5 µM phosphate, 0-50 µM nitrate, 0-3 µM nitrite, and 0-10 µM ammonium. The memorandum is designed to be used in a loose-leaf binder format. Each page is dated, and as revisions are made they should be inserted into the binder, with the old versions retained in order to maintain a historical record of the procedures. (88 pages)
Abstract:
This report describes the working of the National Centers for Coastal Ocean Science (NCCOS) Wave Exposure Model (WEMo), which is capable of predicting the exposure of a site in estuarine and closed waters to local wind-generated waves. WEMo works in two different modes: the Representative Wave Energy (RWE) mode calculates the exposure using physical parameters like wave energy and wave height, while the Relative Exposure Index (REI) mode empirically calculates exposure as a unitless index. The detailed working of the model in both modes and their procedures is described, along with a few sample runs. WEMo output in RWE mode (wave height and wave energy) is compared against data collected from wave sensors near Harkers Island, North Carolina, for validation purposes. The computed results agreed well with the wave sensor data, indicating that WEMo can be an effective tool for predicting local wave energy in closed estuarine environments. (PDF contains 31 pages)
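As a rough illustration of what a fetch-based relative exposure index looks like, the sketch below computes a Keddy-style REI (sum over compass sectors of wind speed × frequency × effective fetch). This is a common formulation in the coastal literature and is given here only as an assumed stand-in; the abstract does not state WEMo's exact REI formula, and its implementation may differ.

```python
# Hedged sketch of a Keddy-style relative exposure index (REI): for each
# compass sector, multiply mean wind speed, fraction of time the wind blows
# from that sector, and effective fetch, then sum over sectors. WEMo's own
# REI mode may differ; all numbers below are invented.

def relative_exposure_index(sectors):
    """sectors: iterable of (mean_wind_speed_m_s, frequency_fraction, fetch_m)."""
    return sum(speed * freq * fetch for speed, freq, fetch in sectors)

# Eight compass sectors (N, NE, E, SE, S, SW, W, NW) for a hypothetical site.
site_sectors = [
    (5.2, 0.10, 1200.0),
    (4.8, 0.05, 800.0),
    (6.1, 0.20, 3500.0),
    (5.5, 0.15, 2600.0),
    (4.0, 0.10, 900.0),
    (6.8, 0.25, 4200.0),
    (5.0, 0.10, 1500.0),
    (4.5, 0.05, 700.0),
]
print(relative_exposure_index(site_sectors))
```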
Abstract:
A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the number of fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined. Modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, it was observed that size selection by the unloaders induced a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them. The methods were: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method and different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling. Sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-84 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed.
(PDF contains 70 pages)
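The conclusion that increasing the number of primary units n is the most effective way to reduce variance follows from the usual two-stage variance decomposition. The sketch below illustrates the general principle with invented between-well and within-well variance components (finite-population corrections ignored); it is not the paper's estimator.

```python
# Hedged sketch of the two-stage sampling variance decomposition: with n wells
# (primary units) and m fish measured per well (secondary units), the variance
# of the overall mean is roughly s2_between/n + s2_within/(n*m). Variance
# components below are invented; finite-population corrections are ignored.

def two_stage_variance(s2_between, s2_within, n, m):
    """Approximate variance of the mean under two-stage sampling."""
    return s2_between / n + s2_within / (n * m)

baseline = two_stage_variance(100.0, 25.0, n=6, m=50)
double_n = two_stage_variance(100.0, 25.0, n=12, m=50)
double_m = two_stage_variance(100.0, 25.0, n=6, m=100)
# When between-well variation dominates, doubling n roughly halves the
# variance, while doubling m barely changes it.
print(baseline, double_n, double_m)
```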
Abstract:
This thesis comprises three chapters, each of which is concerned with properties of allocational mechanisms that include voting procedures as part of their operation. The theme of interaction between economic and political forces recurs in the three chapters, as described below.
Chapter One demonstrates the existence of a non-controlling interest shareholders' equilibrium for a stylized one-period stock market economy with fewer securities than states of the world. The economy has two decision mechanisms: owners vote to change firms' production plans across states, fixing shareholdings; and individuals trade shares and the current production/consumption good, fixing production plans. A shareholders' equilibrium is a production plan profile and a shares/current-good allocation that is stable under both mechanisms. In equilibrium, no (Kramer direction-restricted) plan revision is supported by a share-weighted majority, and there exists no Pareto-superior reallocation.
Chapter Two addresses efficient management of stationary-site, fixed-budget, partisan voter registration drives. Sufficient conditions obtain for unique optimal registrar deployment within contested districts. Each census tract is assigned an expected net plurality return to registration investment index, computed from estimates of registration, partisanship, and turnout. Optimum registration intensity is a logarithmic transformation of a tract's index. These conditions are tested using a merged data set including both census variables and Los Angeles County Registrar data from several 1984 Assembly registration drives. Marginal registration spending benefits, registrar compensation, and the general campaign problem are also discussed.
The last chapter considers social decision procedures at a higher level of abstraction. Chapter Three analyzes the structure of decisive coalition families, given a quasitransitive-valued social decision procedure satisfying the universal domain and IIA axioms. By identifying those alternatives X* ⊆ X on which the Pareto principle fails, imposition in the social ranking is characterized. Every coalition is weakly decisive for X* over X~X*, and weakly antidecisive for X~X* over X*; therefore, alternatives in X~X* are never socially ranked above X*. Repeated filtering of alternatives causing Pareto failure shows that states in X^(n)*~X^((n+1))* are never socially ranked above X^((n+1))*. Limiting results of iterated application of the *-operator are also discussed.