920 results for "probability distribution"
Abstract:
This thesis is developed within the framework of satellite communications, in the innovative field of small satellites, also known as nanosatellites (<10 kg) or CubeSats, so called because of their cubic form. These nanosatellites are characterized by their low cost, since they use commercial off-the-shelf (COTS) components, and by their small size and mass, such as the 1U CubeSat (10 cm × 10 cm × 10 cm) with a mass of approximately 1 kg. This thesis builds on a proposal by its author to put into orbit the first Peruvian satellite, Chasqui I, which was successfully launched into orbit from the International Space Station in 2014. The experience of this research work led me to propose a constellation of small satellites named Waposat to provide a global water-quality sensor monitoring service, the scenario used in this thesis. In this scenario, and given the limited capabilities of nanosatellites in both power and data rate, I propose to investigate a new communications architecture that optimally addresses the problems of nanosatellites in LEO orbit, caused by the disruptive nature of their communications, with emphasis on the link and application layers. This thesis presents and evaluates a new communications architecture to provide services to terrestrial sensor networks using a space Delay/Disruption Tolerant Networking (DTN) based solution.
In addition, I propose a new multiple access protocol based on an extension of unslotted ALOHA that takes into account the priority of gateway traffic, which we call ALOHA multiple access with gateway priority (ALOHAGP), with an adaptive contention mechanism. It uses satellite feedback to implement congestion control and to dynamically adapt the effective channel throughput in an optimal way. We assume a finite sensor population model and a saturated traffic condition in which every sensor always has frames to transmit. The performance was evaluated in terms of effective throughput, delay and system fairness. In addition, a DTN convergence layer (ALOHAGP-CL) has been defined as a subset of the standard TCP-CL (Transmission Control Protocol Convergence Layer). This thesis shows that ALOHAGP/CL adequately supports the proposed DTN scenario, especially when reactive fragmentation is used. Finally, this thesis investigates optimal DTN message (bundle) transfer using proactive fragmentation strategies to serve a ground sensor network over a nanosatellite communications link that uses the multiple access mechanism with downlink traffic priority (ALOHAGP). The effective throughput has been optimized by adapting the protocol parameters as a function of the current number of active sensors, received from the satellite. Also, there is currently no method for advertising or negotiating the maximum size of a bundle that can be accepted by a bundle agent in satellite communications for storage and delivery, so bundles that are too large are dropped, while bundles that are too small are inefficient. We have characterized this kind of scenario by obtaining a probability distribution for frame arrivals at the nanosatellite, as well as a distribution of the nanosatellite's visibility time, which together provide an optimal proactive fragmentation of DTN bundles. We have found that the effective throughput (goodput) of proactive fragmentation reaches a value slightly lower than that of the reactive fragmentation approach.
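The thesis's actual ALOHAGP design is richer (gateway priority, adaptive contention, satellite feedback), but the core adaptation idea — tuning the per-sensor transmission probability to the number of active sensors announced by the satellite — can be sketched with a textbook finite-population unslotted-ALOHA model. All formulas below are the standard approximation, not the thesis's analysis:

```python
def throughput(n, p):
    """Expected successful frames per frame time for n saturated sensors,
    each starting a transmission with probability p per frame time.
    Finite-population unslotted ALOHA: a frame survives only if no other
    sensor starts during its 2-frame vulnerable window."""
    return n * p * (1.0 - p) ** (2 * (n - 1))

def optimal_p(n):
    """Transmission probability maximizing throughput: p* = 1/(2n - 1),
    obtained by setting d(throughput)/dp = 0."""
    return 1.0 / (2 * n - 1)

for n in (5, 20, 100):
    p = optimal_p(n)
    print(f"n={n:3d}  p*={p:.4f}  throughput={throughput(n, p):.4f}")
```

As n grows, the optimized throughput approaches the classical unslotted-ALOHA ceiling of 1/(2e) ≈ 0.184, which is why broadcasting the active-sensor count from the satellite is enough to keep the channel near its optimum.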
This contribution makes it possible to use proactive fragmentation optimally, with all its advantages, such as supporting the DTN security model and its simplicity of implementation on hardware with severe CPU and memory limitations. The implementation of these contributions was initially contemplated as part of the payload of the QBito nanosatellite, which belongs to the constellation of 50 nanosatellites envisaged under the QB50 project.
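The proactive fragmentation step above sizes bundles before transmission from the visibility-time distribution. A minimal sketch of that idea: choose the largest fragment whose transfer completes within the contact window with high probability. The link rate and the Gaussian contact-time model below are illustrative assumptions, not the distributions characterized in the thesis:

```python
import random
random.seed(42)

LINK_RATE_BPS = 9600    # assumed sensor-to-nanosatellite link rate
# assumed pass-duration model: roughly Gaussian visibility times (seconds)
contacts = [max(60.0, random.gauss(480.0, 90.0)) for _ in range(10000)]

def fragment_size(contacts, rate_bps, completion_prob=0.95):
    """Largest fragment (bytes) whose transfer completes within the
    visibility window with the requested probability: link rate times a
    lower quantile of the contact-time distribution."""
    q = sorted(contacts)[int((1.0 - completion_prob) * len(contacts))]
    return int(rate_bps / 8 * q)

size = fragment_size(contacts, LINK_RATE_BPS)
print(f"proactive fragment size for 95% completion: {size} bytes")
```

Raising the required completion probability pushes the quantile lower and shrinks the fragment, which is exactly the trade-off between dropped oversize bundles and inefficient undersize ones described above.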
Abstract:
One of the aims of the COST C14 action is the assessment and evaluation of pedestrian wind comfort. At present there is no general rule applied across Europe; several criteria have been developed and applied in different countries. These criteria are based on the definition of two independent parameters: a threshold effective wind speed and a probability of exceedance of this threshold speed. The difficulty of comparing the criteria arises from the two-dimensional character of their definition. An effort is being made to compare these criteria, trying both to find commonalities and to clearly identify differences, in order to build the basis for the next step: to define common criteria (perhaps with regional and seasonal variations). The first point is to define clearly the threshold effective wind speed (mean velocity definition parameters: averaging interval and reference height) and the equivalence between different ways of defining it (mean wind speed, gust equivalent mean, etc.) in comparable terms, as far as possible. It can be shown that if the wind speed at a given location is defined in terms of a probability distribution, e.g. a Weibull function, a given criterion is satisfied by an infinite set of wind conditions, that is, of probability distributions. The criterion parameters and the Weibull function parameters are linked to each other, establishing a set called iso-criteria lines (the locus of the Weibull function parameters that fulfil a given criterion). The relative position of iso-criteria lines, when displayed in a suitable two-dimensional plane, facilitates the comparison of comfort criteria. A comparison of several wind comfort criteria from different institutes is performed, showing the feasibility and limitations of the method.
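The iso-criteria construction above follows directly from the Weibull exceedance formula P(U > u) = exp(−(u/c)^k): fixing a criterion (u_thr, p_max) and solving for c as a function of the shape parameter k traces the iso-criteria line. A short sketch, with an illustrative example criterion (the 5 m/s / 5% values are not taken from the paper):

```python
import math

def exceedance(u_thr, c, k):
    """Weibull exceedance probability P(U > u_thr) for scale c, shape k."""
    return math.exp(-((u_thr / c) ** k))

def iso_criterion_c(u_thr, p_max, k):
    """Scale parameter c such that (c, k) lies exactly on the iso-criteria
    line for the comfort criterion (u_thr, p_max):
    c = u_thr / (-ln p_max)^(1/k)."""
    return u_thr / ((-math.log(p_max)) ** (1.0 / k))

u_thr, p_max = 5.0, 0.05   # example criterion: 5 m/s exceeded < 5% of the time
for k in (1.2, 1.6, 2.0):
    c = iso_criterion_c(u_thr, p_max, k)
    print(f"k={k}  c={c:.3f}  check P(U>u_thr)={exceedance(u_thr, c, k):.3f}")
```

Plotting these (k, c) pairs for two different criteria in the same plane is what makes their relative stringency directly comparable.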
Abstract:
As part of our attempts at understanding fundamental principles that underlie the generation of nondividing terminally differentiated progeny from dividing precursor cells, we have developed approaches to a quantitative analysis of proliferation and differentiation of oligodendrocyte type 2 astrocyte (O-2A) progenitor cells at the clonal level. Owing to extensive previous studies of clonal differentiation in this lineage, O-2A progenitor cells represent an excellent system for such an analysis. Previous studies have resulted in two competing hypotheses; one of them suggests that progenitor cell differentiation is symmetric, the other hypothesis introduces an asymmetric process of differentiation. We propose a general model that incorporates both such extreme hypotheses as special cases. Our analysis of experimental data has shown, however, that neither of these extreme cases completely explains the observed kinetics of O-2A progenitor cell proliferation and oligodendrocyte generation in vitro. Instead, our results indicate that O-2A progenitor cells become competent for differentiation after they complete a certain number of critical mitotic cycles that represent a period of symmetric development. This number varies from clone to clone and may be thought of as a random variable; its probability distribution was estimated from experimental data. Those O-2A cells that have undergone the critical divisions then may differentiate into an oligodendrocyte in each of the subsequent mitotic cycles with a certain probability, thereby exhibiting the asymmetric type of differentiation.
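The general model above — a random number of symmetric "critical" cycles followed by per-cycle asymmetric differentiation — can be sketched as a small stochastic simulation. The Poisson choice for the critical number and the differentiation probability q = 0.3 are illustrative assumptions, not the values estimated from the paper's data:

```python
import math
import random
random.seed(1)

def poisson(lam):
    """Poisson sample via Knuth's multiplication method (stdlib only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def simulate_clone(cycles=8, q=0.3, mean_critical=3.0):
    """Track (progenitors, oligodendrocytes) in one clone: all progenitors
    divide symmetrically for m critical cycles (m random), after which each
    progenitor differentiates with probability q or divides again."""
    m = poisson(mean_critical)
    prog, oligo = 1, 0
    for t in range(cycles):
        if t < m:                    # symmetric phase: pure expansion
            prog *= 2
        else:                        # competent phase: differentiate w.p. q
            diff = sum(1 for _ in range(prog) if random.random() < q)
            oligo += diff
            prog = (prog - diff) * 2
    return prog, oligo

clones = [simulate_clone() for _ in range(2000)]
mean_oligo = sum(o for _, o in clones) / len(clones)
print(f"mean oligodendrocytes per clone after 8 cycles: {mean_oligo:.1f}")
```

Varying `mean_critical` and `q` and comparing the simulated clonal compositions with observed ones is the kind of fit the quantitative analysis above performs.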
Abstract:
Because the retinal activity generated by a moving object cannot specify which of an infinite number of possible physical displacements underlies the stimulus, its real-world cause is necessarily uncertain. How, then, do observers respond successfully to sequences of images whose provenance is ambiguous? Here we explore the hypothesis that the visual system solves this problem by a probabilistic strategy in which perceived motion is generated entirely according to the relative frequency of occurrence of the physical sources of the stimulus. The merits of this concept were tested by comparing the directions and speeds of moving lines reported by subjects to the values determined by the probability distribution of all the possible physical displacements underlying the stimulus. The velocities reported by observers in a variety of stimulus contexts can be accounted for in this way.
Abstract:
A molecular model of poorly understood hydrophobic effects is heuristically developed using the methods of information theory. Because primitive hydrophobic effects can be tied to the probability of observing a molecular-sized cavity in the solvent, the probability distribution of the number of solvent centers in a cavity volume is modeled on the basis of the two moments available from the density and radial distribution of oxygen atoms in liquid water. The modeled distribution then yields the probability that no solvent centers are found in the cavity volume. This model is shown to account quantitatively for the central hydrophobic phenomena of cavity formation and association of inert gas solutes. The connection of information theory to statistical thermodynamics provides a basis for clarification of hydrophobic effects. The simplicity and flexibility of the approach suggest that it should permit applications to conformational equilibria of nonpolar solutes and hydrophobic residues in biopolymers.
Abstract:
Using the results of large-scale numerical simulations, we study the probability distribution of the pseudo-critical temperature for the three-dimensional Edwards-Anderson Ising spin glass and for the fully connected Sherrington-Kirkpatrick model. We find that the behaviour of our data is nicely described by straightforward finite-size scaling relations.
Abstract:
We present a massive equilibrium simulation of the three-dimensional Ising spin glass at low temperatures. The Janus special-purpose computer has allowed us to equilibrate, using parallel tempering, L = 32 lattices down to T ≈ 0.64Tc. We demonstrate the relevance of equilibrium finite-size simulations for understanding experimental non-equilibrium spin glasses in the thermodynamic limit by establishing a time-length dictionary. We conclude that non-equilibrium experiments performed on a time scale of one hour can be matched with equilibrium results on L ≈ 110 lattices. A detailed investigation of the probability distribution functions of the spin and link overlap, as well as of their correlation functions, shows that Replica Symmetry Breaking is the appropriate theoretical framework for the physically relevant length scales. In addition, we improve on existing methodologies to ensure equilibration in parallel tempering simulations.
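Parallel tempering, the equilibration workhorse named above, can be illustrated on a toy system far smaller than the spin glasses simulated on Janus: several replicas of a 1D Ising ring run Metropolis sweeps at different temperatures and occasionally attempt to swap configurations between neighbouring temperatures. This is a generic sketch of the method, not the paper's simulation:

```python
import math
import random
random.seed(7)

N = 16                                  # spins on a ring (toy scale)
BETAS = [0.2, 0.4, 0.6, 0.8, 1.0]       # inverse-temperature ladder

def energy(s):
    """1D nearest-neighbour ferromagnetic Ising energy, J = 1."""
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

def sweep(s, beta):
    """One Metropolis sweep over N random sites."""
    for _ in range(N):
        i = random.randrange(N)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            s[i] = -s[i]

replicas = [[random.choice((-1, 1)) for _ in range(N)] for _ in BETAS]
for step in range(2000):
    for s, beta in zip(replicas, BETAS):
        sweep(s, beta)
    j = random.randrange(len(BETAS) - 1)      # attempt one neighbour swap
    d_beta = BETAS[j] - BETAS[j + 1]
    d_e = energy(replicas[j]) - energy(replicas[j + 1])
    # detailed-balance acceptance: min(1, exp(Δβ · ΔE))
    if random.random() < min(1.0, math.exp(d_beta * d_e)):
        replicas[j], replicas[j + 1] = replicas[j + 1], replicas[j]

print("replica energies:", [energy(s) for s in replicas])
```

The swaps let cold replicas escape local minima by borrowing configurations that decorrelated at high temperature — the mechanism that makes equilibrating glassy systems feasible at all.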
Abstract:
We combine multi-wavelength data in the AEGIS-XD and C-COSMOS surveys to measure the typical dark matter halo mass of X-ray selected active galactic nuclei (AGN) [L_X(2–10 keV) > 10^42 erg s^−1] in comparison with far-infrared selected star-forming galaxies detected in the Herschel/PEP survey (PACS Evolutionary Probe; L_IR > 10^11 L_⊙) and quiescent systems at z ≈ 1. We develop a novel method to measure the clustering of extragalactic populations that uses photometric redshift probability distribution functions in addition to any spectroscopy. This is advantageous in that all sources in the sample are used in the clustering analysis, not just the subset with secure spectroscopy. The method works best for large samples. The loss of accuracy because of the lack of spectroscopy is balanced by increasing the number of sources used to measure the clustering. We find that X-ray AGN, far-infrared selected star-forming galaxies and passive systems in the redshift interval 0.6 < z < 1.4 are found in haloes of similar mass, log M_DMH/(M_⊙ h^−1) ≈ 13.0. We argue that this is because the galaxies in all three samples (AGN, star-forming, passive) have similar stellar mass distributions, approximated by the J-band luminosity. Therefore, all galaxies that can potentially host X-ray AGN, because they have stellar masses in the appropriate range, live in dark matter haloes of log M_DMH/(M_⊙ h^−1) ≈ 13.0 independent of their star formation rates. This suggests that the stellar mass of X-ray AGN hosts is driving the observed clustering properties of this population. We also speculate that trends between AGN properties (e.g. luminosity, level of obscuration) and large-scale environment may be related to differences in the stellar mass of the host galaxies.
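The core of using photometric redshift PDFs instead of spectroscopic redshifts is that each pair of sources contributes with a weight given by the overlap of their redshift PDFs, rather than all-or-nothing. A heavily simplified sketch of that weighting (Gaussian toy PDFs; the paper's actual estimator and PDF shapes are more elaborate):

```python
import math

ZGRID = [0.5 + 0.01 * i for i in range(101)]   # redshift bins 0.5..1.5

def normalize(pdf):
    s = sum(pdf)
    return [x / s for x in pdf]

def gaussian_pdz(zphot, sigma=0.05):
    """Toy photometric-redshift PDF: Gaussian around the photo-z point
    estimate (real photo-z PDFs are often asymmetric or multimodal)."""
    return normalize([math.exp(-0.5 * ((z - zphot) / sigma) ** 2)
                      for z in ZGRID])

def pair_weight(p1, p2):
    """Probability that both sources fall in the same redshift bin —
    the overlap of the two PDFs, used to down-weight chance pairs."""
    return sum(a * b for a, b in zip(p1, p2))

near = pair_weight(gaussian_pdz(1.00), gaussian_pdz(1.02))
far = pair_weight(gaussian_pdz(0.70), gaussian_pdz(1.30))
print(f"overlap (close pair): {near:.4f}   overlap (distant pair): {far:.2e}")
```

Pairs whose PDFs barely overlap contribute almost nothing, so every source can enter the analysis without secure spectroscopy diluting the signal.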
Abstract:
A high ³⁷Ar activity concentration in soil gas is proposed as key evidence for the detection of an underground nuclear explosion under the Comprehensive Nuclear-Test-Ban Treaty. However, such detection is challenged by the natural background of ³⁷Ar in the subsurface, mainly due to Ca activation by cosmic rays. A better understanding of, and an improved capability to predict, ³⁷Ar activity concentration in the subsurface and its spatial and temporal variability is thus required. A numerical model integrating ³⁷Ar production and transport in the subsurface is developed, including variable soil water content and water infiltration at the surface. A parameterized equation for ³⁷Ar production in the first 15 m below the surface is studied, taking into account the major production reactions and the moderating effect of soil water content. Using sensitivity analysis and uncertainty quantification, a realistic and comprehensive probability distribution of natural ³⁷Ar activity concentrations in soil gas is proposed, including the effects of water infiltration. Site location and soil composition are identified as the parameters allowing the most effective reduction of the possible range of ³⁷Ar activity concentrations. The influence of soil water content on ³⁷Ar production is shown to be negligible to first order, while ³⁷Ar activity concentration in soil gas and its temporal variability appear to be strongly influenced by transient water infiltration events. These results will be used as a basis for practical CTBTO concepts of operation during an on-site inspection (OSI).
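The uncertainty-quantification step described above amounts to propagating parameter uncertainty through the parameterized production equation by Monte Carlo sampling. A minimal sketch of that workflow — the functional form, constants and parameter ranges below are placeholders for illustration, not the paper's fitted equation:

```python
import math
import random
random.seed(3)

def ar37_rate(ca_frac, depth_m, flux_scale, attenuation_m=1.6):
    """Toy parameterized 37Ar production rate (arbitrary units):
    proportional to the soil's Ca fraction and to the cosmic-ray flux,
    attenuated exponentially with depth (assumed form)."""
    return flux_scale * ca_frac * math.exp(-depth_m / attenuation_m)

# propagate parameter uncertainty by Monte Carlo sampling
samples = []
for _ in range(20000):
    ca = random.uniform(0.005, 0.08)          # Ca mass fraction (site-dependent)
    flux = random.lognormvariate(0.0, 0.3)    # cosmic-ray flux variability
    samples.append(ar37_rate(ca, depth_m=1.0, flux_scale=flux))

samples.sort()
p5, p50, p95 = (samples[int(f * len(samples))] for f in (0.05, 0.50, 0.95))
print(f"37Ar production, 5/50/95 percentiles: {p5:.4f} / {p50:.4f} / {p95:.4f}")
```

Ranking which sampled parameter most narrows the output percentiles when fixed is the sensitivity-analysis step that singles out site location and soil composition.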
Abstract:
The Australian-Indonesian monsoon has a governing influence on agricultural practices and livelihoods in the highly populated islands of Indonesia. However, little is known about the factors that have influenced past monsoon activity in southern Indonesia. Here, we present a ~6000-year high-resolution record of Australian-Indonesian summer monsoon (AISM) rainfall variations based on bulk sediment element analysis of a sediment archive retrieved offshore northwest of Sumba Island (Indonesia). The record suggests lower riverine detrital supply and hence weaker AISM rainfall between 6000 yr BP and ~3000 yr BP compared to the Late Holocene. We find a distinct shift in terrigenous sediment supply at around 2800 yr BP, indicating a reorganization of the AISM from a drier Mid-Holocene to a wetter Late Holocene in southern Indonesia. The abrupt increase in rainfall at around 2800 yr BP coincides with a grand solar minimum. An increase in southern Indonesian rainfall in response to a solar minimum is consistent with climate model simulations that provide a possible explanation of the underlying mechanism responsible for the monsoonal shift. We conclude that variations in solar activity play a significant role in monsoonal rainfall variability at multi-decadal and longer timescales. The combined effect of orbital and solar forcing explains important details in the temporal evolution of AISM rainfall during the last 6000 years. By contrast, we find no evidence for volcanic forcing of AISM variability, nor for control by long-term variations in the El Niño-Southern Oscillation (ENSO).
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
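The quantities discussed above — the position distribution after N steps and its entropy — are straightforward to compute exactly by evolving the coin-position amplitudes. A minimal sketch of the standard Hadamard walk with the symmetric initial coin state (|L⟩ + i|R⟩)/√2 (a common symmetric choice; the paper's specific coin and tracing variants differ):

```python
import math

def hadamard_walk(steps):
    """Discrete-time Hadamard walk on the integers. Returns the position
    probability distribution after `steps` steps for the symmetric
    initial coin state (|L> + i|R>)/sqrt(2) at the origin."""
    h = 1.0 / math.sqrt(2.0)
    amps = {0: (complex(h, 0.0), complex(0.0, h))}  # x -> (left, right) amps
    for _ in range(steps):
        new = {}
        for x, (l, r) in amps.items():
            cl, cr = h * (l + r), h * (l - r)       # Hadamard coin toss
            al, ar = new.get(x - 1, (0j, 0j))       # left component moves left
            new[x - 1] = (al + cl, ar)
            al, ar = new.get(x + 1, (0j, 0j))       # right component moves right
            new[x + 1] = (al, ar + cr)
        amps = new
    return {x: abs(l) ** 2 + abs(r) ** 2 for x, (l, r) in amps.items()}

p = hadamard_walk(20)
entropy = -sum(q * math.log2(q) for q in p.values() if q > 1e-15)
print(f"total probability: {sum(p.values()):.6f}, entropy: {entropy:.3f} bits")
```

The ballistic spread (second moment growing like N² rather than the classical N) is the enhanced spreading rate the abstract contrasts across walk variants.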
Abstract:
Classical metapopulation theory assumes a static landscape. However, empirical evidence indicates many metapopulations are driven by habitat succession and disturbance. We develop a stochastic metapopulation model, incorporating habitat disturbance and recovery, coupled with patch colonization and extinction, to investigate the effect of habitat dynamics on persistence. We discover that habitat dynamics play a fundamental role in metapopulation dynamics. The mean number of suitable habitat patches is not adequate for characterizing the dynamics of the metapopulation. For a fixed mean number of suitable patches, we discover that the details of how disturbance affects patches and how patches recover influence metapopulation dynamics in a fundamental way. Moreover, metapopulation persistence depends not only on the average lifetime of a patch, but also on the variance in patch lifetime and the synchrony in patch dynamics that results from disturbance. Finally, there is an interaction between the habitat and metapopulation dynamics; for instance, declining metapopulations react differently to habitat dynamics than expanding metapopulations. We close by emphasizing the importance of using performance measures appropriate to stochastic systems when evaluating their behavior, such as the probability distribution of the state of the metapopulation, conditional on it being extant (i.e., the quasi-stationary distribution).
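A minimal simulation in the spirit of the model above: patches toggle between suitable and destroyed (disturbance/recovery), occupied patches go locally extinct, and empty suitable patches are colonized at a rate proportional to current occupancy. Tallying the occupied-patch count only while the metapopulation is extant estimates the quasi-stationary distribution. All rates are illustrative assumptions, not fitted values:

```python
import random
random.seed(11)

NP = 40            # number of habitat patches
D, R = 0.05, 0.3   # per-step disturbance and habitat-recovery probabilities
C, E = 0.8, 0.05   # colonization scaling and local-extinction probability

def step(suitable, occupied):
    """One time step: habitat disturbance/recovery, then local extinction
    and occupancy-dependent colonization."""
    frac_occ = sum(occupied) / NP
    ns, no = [], []
    for s, o in zip(suitable, occupied):
        if s and random.random() < D:
            s, o = False, False              # disturbance destroys the patch
        elif not s and random.random() < R:
            s = True                         # habitat recovers, still empty
        if s:
            if o and random.random() < E:
                o = False                    # local extinction
            elif not o and random.random() < C * frac_occ:
                o = True                     # colonization from occupied patches
        ns.append(s)
        no.append(o)
    return ns, no

suitable = [True] * NP
occupied = [i < NP // 2 for i in range(NP)]
counts = {}
for _ in range(5000):
    suitable, occupied = step(suitable, occupied)
    k = sum(occupied)
    if k > 0:
        counts[k] = counts.get(k, 0) + 1     # tally conditional on persistence

total = sum(counts.values())
mean_occ = sum(k * c for k, c in counts.items()) / total
print(f"quasi-stationary mean occupied patches: {mean_occ:.1f} of {NP}")
```

Making disturbance hit patches in correlated blocks, at fixed D, is the kind of synchrony variation the abstract identifies as decisive for persistence.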
Abstract:
We consider a problem of robust performance analysis of linear discrete-time-varying systems on a bounded time interval. The system is represented in state-space form and is driven by a random input disturbance with imprecisely known probability distribution; this distributional uncertainty is described in terms of entropy. The worst-case performance of the system is quantified by its a-anisotropic norm. Computing the anisotropic norm is reduced to solving a set of Riccati and Lyapunov difference equations together with a special-form equation.
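At the core of the recursions mentioned above sits the Lyapunov difference equation P_{k+1} = A_k P_k A_kᵀ + B_k B_kᵀ, which propagates the state covariance of x_{k+1} = A_k x_k + B_k w_k under white-noise input; accumulating trace(C_k P_k C_kᵀ) gives an H2-type norm on the interval. This sketch shows only that Lyapunov building block (toy matrices, assumed), not the full anisotropic-norm computation with its Riccati and special-form equations:

```python
def mm(A, B):
    """Matrix product for lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mT(A):
    """Transpose."""
    return [list(r) for r in zip(*A)]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# toy time-varying system x_{k+1} = A_k x_k + B_k w_k, z_k = C_k x_k
T = 20
A = [[[0.9, 0.1], [0.0, 0.8 + 0.005 * k]] for k in range(T)]
B = [[[1.0], [0.5]] for _ in range(T)]
C = [[[1.0, 0.0]] for _ in range(T)]

P = [[0.0, 0.0], [0.0, 0.0]]          # state covariance, x_0 = 0
h2_sq = 0.0
for k in range(T):
    # accumulate output second moment trace(C P C^T)
    h2_sq += mm(mm(C[k], P), mT(C[k]))[0][0]
    # Lyapunov difference equation: P <- A P A^T + B B^T
    P = madd(mm(mm(A[k], P), mT(A[k])), mm(B[k], mT(B[k])))

print(f"H2-type norm on [0,{T}]: {h2_sq ** 0.5:.3f}")
```

The anisotropic norm interpolates between this H2-like quantity and the H-infinity norm as the entropy-described uncertainty level grows, which is why the same recursions reappear in its computation.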
Abstract:
Stochastic models based on Markov birth processes are constructed to describe the process of invasion of a fly larva by entomopathogenic nematodes. Various forms for the birth (invasion) rates are proposed. These models are then fitted to data sets describing the observed numbers of nematodes that have invaded a fly larva after a fixed period of time. Non-linear birth rates are required to achieve good fits to these data, and their precise form leads to different patterns of invasion being identified for the three populations of nematodes considered. One of these (Nemasys) showed the greatest propensity for invasion. This form of modelling may be useful more generally for analysing data that show variation different from that expected from a binomial distribution.
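A Markov birth process of this kind is easy to simulate exactly with exponential waiting times: with n nematodes already inside, the next invasion occurs at a state-dependent rate. The nonlinear rate form and all parameter values below are illustrative assumptions, not the paper's fitted rates:

```python
import math
import random
import statistics
random.seed(5)

def simulate_invasion(t_end=1.0, pool=30, a=1.0, b=0.15):
    """Markov birth process: with n invaders inside, the next invasion
    occurs at rate (pool - n) * a * exp(-b * n) — a nonlinear per-capita
    rate that falls as the larva fills (assumed form). Returns n at t_end."""
    t, n = 0.0, 0
    while n < pool:
        rate = (pool - n) * a * math.exp(-b * n)
        t += random.expovariate(rate)      # exponential waiting time
        if t > t_end:
            break
        n += 1
    return n

counts = [simulate_invasion() for _ in range(3000)]
m, v = statistics.mean(counts), statistics.variance(counts)
print(f"mean={m:.2f}  variance={v:.2f}  binomial variance={m * (1 - m / 30):.2f}")
```

Comparing the simulated variance against the binomial value m(1 − m/pool) shows how a nonlinear rate produces the non-binomial variation the abstract refers to.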