914 results for distribution (probability theory)


Relevance: 30.00%

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks present the tradeoff between conservation risk and industry profit through synchronized variable and map views in online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights for OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
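
As a rough illustration of the siting tradeoff described above, the sketch below combines hypothetical per-site bird densities, species sensitivity weights, wind speeds and grid distances into a conservation risk score and a profitability proxy, and flags the sites on the risk-profit frontier. All inputs and weights are made-up placeholders, not values from the dissertation.

```python
# Illustrative sketch of the siting tradeoff (not the dissertation's code).
# Site-level inputs, weights and the profit proxy are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_species = 200, 5

bird_density = rng.gamma(shape=2.0, scale=1.0, size=(n_sites, n_species))  # birds / km^2
owed_sensitivity = np.array([0.9, 0.7, 0.5, 0.3, 0.2])   # collision/displacement weights
wind_speed = rng.normal(8.5, 1.0, n_sites)                # mean annual wind at hub height (m/s)
grid_distance = rng.uniform(5, 80, n_sites)               # distance to transmission grid (km)

# Conservation score: species densities weighted by sensitivity to OWED.
conservation_risk = bird_density @ owed_sensitivity

# Profitability proxy: higher wind and shorter cable runs are better.
profitability = wind_speed - 0.05 * grid_distance

def is_efficient(risk, profit):
    """A site is on the frontier if no other site is both less risky and more profitable."""
    eff = np.ones(len(risk), dtype=bool)
    for i in range(len(risk)):
        eff[i] = not np.any((risk < risk[i]) & (profit > profit[i]))
    return eff

frontier = is_efficient(conservation_risk, profitability)
print(f"{frontier.sum()} of {n_sites} sites lie on the risk-profit frontier")
```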

Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
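
The routing idea can be sketched as a shortest-path problem over a cost raster. The toy example below, which is not the chapter's implementation, builds a combined surface from a distance cost and a hypothetical cetacean cost, varies the conservation multiplier, and finds least-cost route totals with a simple Dijkstra search on an 8-connected grid.

```python
# Toy least-cost routing over a cost raster (a sketch of the approach only).
# The cetacean cost surface and the port/entrance coordinates are hypothetical.
import heapq
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 40, 60
distance_cost = np.ones((ny, nx))                 # per-cell transit cost (distance proxy)
cetacean_cost = rng.gamma(2.0, 1.0, (ny, nx))     # per-cell conservation cost

def least_cost_route(cost, start, end):
    """Dijkstra over an 8-connected grid; returns total cost of the cheapest path."""
    dist = np.full(cost.shape, np.inf)
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            return d
        if d > dist[r, c]:
            continue
        for dr, dc in moves:
            rr, cc = r + dr, c + dc
            if 0 <= rr < cost.shape[0] and 0 <= cc < cost.shape[1]:
                nd = d + cost[rr, cc]
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    heapq.heappush(pq, (nd, (rr, cc)))
    return np.inf

start, end = (5, 2), (35, 57)          # e.g. a port and a study-area entrance
for multiplier in (0.0, 0.5, 2.0):     # weight on the conservation cost
    surface = distance_cost + multiplier * cetacean_cost
    print(multiplier, round(least_cost_route(surface, start, end), 1))
```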

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from the two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the parameters necessary to estimate it, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
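
The threshold step can be illustrated with a small example: given observed presences/absences and predicted occurrence probabilities (toy values here, not the chapter's GAM outputs), the ROC curve is computed and the threshold maximizing Youden's J (equivalently, jointly minimizing false positive and false negative rates) is selected.

```python
# Choosing a presence/absence threshold from a ROC curve (illustrative only;
# the fitted GAM probabilities are replaced by toy scores).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 500)                                   # observed presence/absence
scores = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 500), 0, 1)   # predicted P(occurrence)

fpr, tpr, thresholds = roc_curve(y_true, scores)
# Youden's J = TPR - FPR is one common way to trade off the two error rates.
best = np.argmax(tpr - fpr)
print(f"optimal threshold ~ {thresholds[best]:.2f} "
      f"(FPR={fpr[best]:.2f}, FNR={1 - tpr[best]:.2f})")
```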

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful in cases where fewer observations are available for seasonal and inter-annual comparison, particularly for the rarely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased oil spill and ocean noise risks associated with growing container ship and oil tanker traffic in British Columbia’s continental shelf waters.
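
A minimal version of the CDS calculation, assuming a half-normal detection function fitted by maximum likelihood to perpendicular sighting distances, is sketched below with toy data; it ignores stratification, cluster size and the covariates used in the DSM analysis.

```python
# Minimal conventional distance sampling (CDS) estimate with a half-normal
# detection function; perpendicular distances and effort are toy values.
import numpy as np

rng = np.random.default_rng(3)
distances_km = np.abs(rng.normal(0.0, 0.4, 120))   # perpendicular sighting distances
effort_km = 600.0                                   # total transect length L

sigma_hat = np.sqrt(np.mean(distances_km ** 2))     # half-normal MLE of sigma
esw = sigma_hat * np.sqrt(np.pi / 2)                # effective strip half-width (km)

n = len(distances_km)
density = n / (2 * esw * effort_km)                 # animals per km^2
print(f"sigma={sigma_hat:.2f} km, ESW={esw:.2f} km, D={density:.3f} / km^2")
```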

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation interests, industry and other stakeholders to explore scenarios toward optimal marine spatial management, which is fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance: 30.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
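
The connection between latent structure and nonnegative tensor rank can be made concrete with a tiny example: a latent class (PARAFAC-type) model for three categorical variables yields a joint probability tensor whose nonnegative rank is at most the number of classes. The class weights and response probabilities below are arbitrary toy values.

```python
# A latent class model for three categorical variables written as a
# nonnegative PARAFAC-type factorization of the joint probability tensor.
import numpy as np

k = 2                                        # number of latent classes
lam = np.array([0.6, 0.4])                   # class weights
psi1 = np.array([[0.7, 0.2], [0.3, 0.8]])    # P(x1 = c | class h), columns = classes
psi2 = np.array([[0.5, 0.1], [0.5, 0.9]])
psi3 = np.array([[0.6, 0.3], [0.4, 0.7]])

# P(x1, x2, x3) = sum_h lam[h] * psi1[x1, h] * psi2[x2, h] * psi3[x3, h]
P = np.einsum('h,ih,jh,kh->ijk', lam, psi1, psi2, psi3)
assert np.isclose(P.sum(), 1.0)
print(P)   # a rank-k nonnegative factorization of the joint pmf
```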

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
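
The underlying intuition, that temporal dependence in the extremes is encoded in the waiting times between threshold exceedances, can be illustrated with a toy comparison of an independent series and an autocorrelated one; this is only a sketch of the idea, not the max-stable velocity construction of Chapter 5.

```python
# Waiting times between high-threshold exceedances carry information about
# extremal dependence: clustered extremes give many short gaps and a few long ones.
import numpy as np

rng = np.random.default_rng(4)
n = 20000
iid = rng.standard_normal(n)

ar = np.empty(n)                     # AR(1) series with the same marginal scale
ar[0] = rng.standard_normal()
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal()

def waiting_times(x, q=0.98):
    u = np.quantile(x, q)
    idx = np.flatnonzero(x > u)      # times of threshold exceedances
    return np.diff(idx)              # gaps between consecutive exceedances

for name, x in [("iid", iid), ("AR(1)", ar)]:
    w = waiting_times(x)
    print(f"{name}: mean gap {w.mean():.1f}, std {w.std():.1f}, min {w.min()}")
```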

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
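
A compact illustration of this slow-mixing behaviour, assuming an intercept-only probit model and the Albert-Chib truncated-normal augmentation rather than the full models considered in Chapter 7, is sketched below; with many observations but few successes the sampler's lag-1 autocorrelation is close to one.

```python
# Albert-Chib probit Gibbs sampler (intercept-only) on rare-event data,
# showing slow mixing. A sketch, not the chapter's code.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)
n, n_success = 10000, 20                      # large n, few successes
y = np.zeros(n)
y[:n_success] = 1.0

tau2 = 100.0                                  # N(0, tau2) prior on the intercept beta
beta, draws = 0.0, []
for it in range(2000):
    # z_i | beta, y_i: N(beta, 1) truncated to (0, inf) if y=1, (-inf, 0] if y=0
    lower = np.where(y == 1, 0.0, -np.inf)
    upper = np.where(y == 1, np.inf, 0.0)
    z = truncnorm.rvs(lower - beta, upper - beta, loc=beta, scale=1.0,
                      random_state=rng)
    # beta | z: conjugate normal update
    prec = n + 1.0 / tau2
    beta = rng.normal(z.sum() / prec, np.sqrt(1.0 / prec))
    draws.append(beta)

d = np.array(draws[500:])                     # discard burn-in
lag1 = np.corrcoef(d[:-1], d[1:])[0, 1]
print(f"posterior mean {d.mean():.3f}, lag-1 autocorrelation {lag1:.3f}")
```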

Relevance: 30.00%

Abstract:

This paper presents an economic model of the effects of identity and social norms on consumption patterns. By incorporating qualitative studies in psychology and sociology, I propose a utility function that features two components – economic (functional) and identity elements. This setup is extended to analyze a market comprising a continuum of consumers, whose identity distribution along a spectrum of binary identities is described by a Beta distribution. I also introduce the notion of salience in the context of identity and consumption decisions. The key result of the model suggests that fundamental economic parameters, such as price elasticity and market demand, can be altered by identity elements. In addition, it predicts that firms in perfectly competitive markets may associate their products with certain types of identities, in order to reduce product substitutability and attain price-setting power.
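
Since the paper's utility specification is not reproduced here, the following is only a hypothetical sketch of the mechanism described: consumers with Beta-distributed identities buy when a functional surplus net of a salience-weighted identity mismatch is positive, and increasing salience changes both demand and its price elasticity.

```python
# Toy illustration only: the paper's actual utility function is not reproduced.
# Consumers have identity t ~ Beta(a, b); they buy one unit of a product
# positioned at identity g if functional surplus minus a salience-weighted
# identity mismatch is positive.
import numpy as np

rng = np.random.default_rng(6)
t = rng.beta(2, 5, 100_000)          # identity positions of consumers
v, g = 10.0, 0.8                     # functional value; product's identity position

def demand(price, salience):
    utility = (v - price) - salience * np.abs(t - g)
    return np.mean(utility > 0)      # fraction of consumers who purchase

for s in (0.0, 5.0, 15.0):
    q1, q2 = demand(6.0, s), demand(6.6, s)      # response to a 10% price increase
    elasticity = ((q2 - q1) / q1) / 0.1 if q1 > 0 else float('nan')
    print(f"salience={s}: demand={q1:.2f}, point elasticity ~ {elasticity:.2f}")
```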

Relevance: 30.00%

Abstract:

"In this paper we extend the earlier treatment of out-of-equilibrium mesoscopic fluctuations in glassy systems in several significant ways. First, via extensive simulations, we demonstrate that models of glassy behavior without quenched disorder display scalings of the probability of local two-time correlators that are qualitatively similar to that of models with short-ranged quenched interactions. The key ingredient for such scaling properties is shown to be the development of a criticallike dynamical correlation length, and not other microscopic details. This robust data collapse may be described in terms of a time-evolving "extreme value" distribution. We develop a theory to describe both the form and evolution of these distributions based on a effective sigma model approach."

Relevance: 30.00%

Abstract:

As the world population continues to grow past seven billion people and global challenges persist, including resource availability, biodiversity loss, climate change and human well-being, a new science is required that can address the integrated nature of these challenges and the multiple scales on which they are manifest. Sustainability science has emerged to fill this role. In the fifteen years since it was first called for in the pages of Science, it has rapidly matured; however, its place in the history of science and the way it is practiced today must be continually evaluated. In Part I, two chapters address this theoretical and practical grounding. Part II transitions to the applied practice of sustainability science in addressing the urban heat island (UHI) challenge, wherein the climate of urban areas is warmer than that of their surrounding rural environs. The UHI has become increasingly important within the study of the earth sciences, given the increased focus on climate change and because the majority of people now live in urban areas.

Chapter 2 argues for a novel contribution to the historical context of sustainability. Sustainability as a concept characterizing the relationship between humans and nature emerged in the mid to late 20th century as a response to findings also used to characterize the Anthropocene. Evidence is provided that sustainability, emerging from the human-nature relationships that came before it, was enabled by technology and a reorientation of world-view, and is unique in its global boundary, systematic approach and ambition for both well-being and the continued availability of resources and Earth-system function. Sustainability is further an ambition with wide appeal, making it one of the first normative concepts of the Anthropocene.

Despite its widespread emergence and adoption, sustainability science continues to suffer from definitional ambiguity within the academy. In Chapter 3, a review of efforts to provide direction and structure to the science reveals a continuum of approaches anchored at either end by differing visions of how the science interfaces with practice (solutions). At one end, basic science of societally defined problems informs decisions about possible solutions and their application. At the other end, applied research directly affects the options available to decision makers. While this dichotomy is clear in the literature, survey data suggest that it is less apparent in the minds of practitioners.

In Chapter 4, the UHI is first addressed at the synoptic, mesoscale. Urban climate is the most immediate manifestation of the warming global climate for the majority of people on earth. Nearly half of those people live in small to medium sized cities, an understudied scale in urban climate research. Widespread characterization would be useful to decision makers in planning and design. Using a multi-method approach, the mesoscale UHI in the study region is characterized and the secular trend over the last sixty years evaluated. Under isolated ideal conditions the findings indicate a UHI of 5.3 ± 0.97 °C to be present in the study area, the magnitude of which is growing over time.

Although urban heat islands (UHI) are well studied, there remain no panaceas for local scale mitigation and adaptation methods, therefore continued attention to characterization of the phenomenon in urban centers of different scales around the globe is required. In Chapter 5, a local scale analysis of the canopy layer and surface UHI in a medium sized city in North Carolina, USA is conducted using multiple methods including stationary urban sensors, mobile transects and remote sensing. Focusing on the ideal conditions for UHI development during an anticyclonic summer heat event, the study observes a range of UHI intensity depending on the method of observation: 8.7 °C from the stationary urban sensors; 6.9 °C from mobile transects; and, 2.2 °C from remote sensing. Additional attention is paid to the diurnal dynamics of the UHI and its correlation with vegetation indices, dewpoint and albedo. Evapotranspiration is shown to drive dynamics in the study region.

Finally, recognizing that a bridge must be established between the physical science community studying the urban heat island (UHI) effect and the planning community and decision makers implementing urban form and development policies, Chapter 6 evaluates multiple urban form characterization methods. Methods evaluated include local climate zones (LCZ), National Land Cover Database (NLCD) classes and urban cluster analysis (UCA), to determine their utility in describing the distribution of the UHI based on three standard observation types: 1) fixed urban temperature sensors, 2) mobile transects and 3) remote sensing. Bivariate, regression and ANOVA tests are used to conduct the analyses. Findings indicate that the NLCD classes are best correlated to the UHI intensity and distribution in the study area. Further, while the UCA method is not useful directly, the variables included in the method are predictive based on regression analysis, so the potential for better model design exists. Land cover variables including albedo, impervious surface fraction and pervious surface fraction are found to dominate the distribution of the UHI in the study area regardless of observation method.
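
A hedged sketch of this style of analysis, using synthetic stand-in data rather than the study's observations, is an ordinary least squares regression of UHI intensity on land cover variables:

```python
# Sketch of a chapter-6 style analysis: OLS of observed UHI intensity on
# land-cover variables. All data below are synthetic stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 120                                          # observation sites
albedo = rng.uniform(0.05, 0.35, n)
impervious = rng.uniform(0.0, 1.0, n)            # impervious surface fraction
# pervious fraction is 1 - impervious, so it is omitted to avoid collinearity
uhi = 1.5 + 4.0 * impervious - 6.0 * albedo + rng.normal(0, 0.5, n)   # deg C

X = sm.add_constant(np.column_stack([albedo, impervious]))
fit = sm.OLS(uhi, X).fit()
print(fit.params)        # intercept, albedo and impervious-fraction coefficients
```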

Chapter 7 provides a summary of findings and offers a brief analysis of their implications for both the scientific discourse generally and the study area specifically. In general, the work undertaken does not achieve the full ambition of sustainability science; additional work is required to translate findings to practice and to more fully evaluate adoption. The implications for planning and development in the local region are addressed in the context of a major light-rail infrastructure project, including several systems-level considerations such as human health and development. Finally, several avenues for future work are outlined. Within the theoretical development of sustainability science, these pathways include more robust evaluations of theoretical and actual practice. Within the UHI context, they include development of an integrated urban form characterization model, application of the study methodology in other geographic areas and at different scales, and use of novel experimental methods including distributed sensor networks and citizen science.

Relevance: 30.00%

Abstract:

Goodness-of-fit tests have been studied by many researchers. Among them, an alternative statistical test for uniformity was proposed by Chen and Ye (2009). The test was used by Xiong (2010) to test normality for the case in which both the location parameter and the scale parameter of the normal distribution are known. The purpose of the present thesis is to extend this result to the case in which the parameters are unknown. A table of critical values of the test statistic is obtained using Monte Carlo simulation. The performance of the proposed test is compared with the Shapiro-Wilk test and the Kolmogorov-Smirnov test. Monte Carlo simulation results show that the proposed test performs better than the Kolmogorov-Smirnov test in many cases. The Shapiro-Wilk test remains the most powerful overall, although in some cases the test proposed in the present research performs better.
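
The Chen-Ye statistic itself is not reproduced here, but the general Monte Carlo recipe for tabulating critical values when parameters are estimated can be sketched with a stand-in statistic (a Kolmogorov-Smirnov statistic computed with estimated parameters, i.e. Lilliefors-style):

```python
# General recipe for tabulating critical values by Monte Carlo when the normal
# parameters are estimated from the data; a K-S statistic with estimated
# parameters stands in for the Chen-Ye statistic.
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(8)

def statistic(x):
    return kstest(x, norm(loc=x.mean(), scale=x.std(ddof=1)).cdf).statistic

def critical_value(n, alpha=0.05, reps=5000):
    sims = [statistic(rng.standard_normal(n)) for _ in range(reps)]
    return np.quantile(sims, 1 - alpha)

for n in (20, 50, 100):
    print(f"n={n}: 5% critical value ~ {critical_value(n):.3f}")
```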

Relevance: 30.00%

Abstract:

The Dirichlet distribution is a multivariate generalization of the Beta distribution. It is an important multivariate continuous distribution in probability and statistics. In this report, we review the Dirichlet distribution and study its properties, including statistical and information-theoretic quantities involving this distribution. Relationships between the Dirichlet distribution and other distributions are also discussed. There are several ways to generate random variables with a Dirichlet distribution; the stick-breaking approach and the Pólya urn method are discussed. In Bayesian statistics, the Dirichlet distribution and the generalized Dirichlet distribution can both serve as a conjugate prior for the Multinomial distribution. The Dirichlet distribution has many applications in different fields. We focus on the unsupervised learning of a finite mixture model based on the Dirichlet distribution. The Initialization Algorithm and the Dirichlet Mixture Estimation Algorithm are both reviewed for estimating the parameters of a Dirichlet mixture. Experimental results are shown for three applications: estimation of artificial histograms, summarization of image databases, and human skin detection.
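
As a small example of the stick-breaking approach mentioned above, a finite Dirichlet(α) draw can be built from successive Beta variables and checked against numpy's direct sampler:

```python
# Stick-breaking construction of a Dirichlet(alpha) draw, compared with the
# theoretical mean. A short sketch of one of the schemes discussed above.
import numpy as np

rng = np.random.default_rng(9)
alpha = np.array([2.0, 3.0, 1.0, 0.5])

def dirichlet_stick_breaking(alpha, rng):
    k = len(alpha)
    p = np.zeros(k)
    remaining = 1.0
    for j in range(k - 1):
        # V_j ~ Beta(alpha_j, alpha_{j+1} + ... + alpha_k)
        v = rng.beta(alpha[j], alpha[j + 1:].sum())
        p[j] = v * remaining
        remaining *= (1 - v)
    p[-1] = remaining
    return p

draws = np.array([dirichlet_stick_breaking(alpha, rng) for _ in range(20000)])
print("stick-breaking mean:", draws.mean(axis=0).round(3))
print("theoretical mean:   ", (alpha / alpha.sum()).round(3))
```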

Relevance: 30.00%

Abstract:

The pharmaceutical industry wields disproportionate power and control within the medical economy of knowledge, where the desire for profit considerably outweighs health for its own sake. Utilizing the theoretical tools of political philosophy, this project restructures the economy of medical knowledge in order to lessen the oligarchical control possessed by the pharmaceutical industry. Ultimately, this project argues that an economy of medical knowledge structured around communitarian political theory lessens the current power dynamic without taking an anti-capitalist stance. Arising from the core commitments of communitarian liberalism, the production, distribution, and consumption of medical knowledge all become guided processes seeking to realize the common good of quality healthcare. This project also considers two other theoretical approaches: liberalism and egalitarianism. A medical knowledge economy structured around liberal political theory is ultimately rejected, as it empowers the oligarchical status quo. Egalitarian political theory is able to significantly reduce the power imbalance problem but simultaneously renders medical knowledge inconsequential; therefore, it is also rejected.

Relevance: 30.00%

Abstract:

This article proposes a three-step procedure to estimate portfolio return distributions under the multivariate Gram-Charlier (MGC) distribution. The method combines quasi-maximum likelihood (QML) estimation for conditional means and variances with method of moments (MM) estimation for the remaining density parameters, including the correlation coefficients. The procedure yields consistent estimates even under density misspecification and addresses the so-called ‘curse of dimensionality’ of multivariate modelling. Furthermore, the use of an MGC distribution represents a flexible and general approximation to the true distribution of portfolio returns and accounts for all its empirical regularities. As an illustration, the procedure is applied to a portfolio composed of three European indices. The MM estimation of the MGC (MGC-MM) is compared with traditional maximum likelihood estimation of both the MGC and multivariate Student’s t (benchmark) densities. A simulation of Value-at-Risk (VaR) performance for an equally weighted portfolio at the 1% and 5% levels indicates that the MGC-MM method provides reasonable approximations to the true empirical VaR. The procedure therefore seems to be a useful tool for risk managers and practitioners.
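
For intuition only, a univariate Gram-Charlier VaR calculation on toy returns is sketched below; it fits skewness and excess kurtosis by moments and inverts the expansion numerically, and does not reproduce the article's multivariate three-step MGC-MM procedure.

```python
# Univariate Gram-Charlier VaR illustration; moments are estimated from toy returns.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
r = rng.standard_t(df=8, size=2000) * 0.01          # toy daily portfolio returns

mu, sigma = r.mean(), r.std(ddof=1)
z = (r - mu) / sigma
skew = np.mean(z ** 3)
exkurt = np.mean(z ** 4) - 3.0

# Gram-Charlier expansion of the standardized density
grid = np.linspace(-8, 8, 4001)
he3 = grid ** 3 - 3 * grid
he4 = grid ** 4 - 6 * grid ** 2 + 3
dens = norm.pdf(grid) * (1 + skew / 6 * he3 + exkurt / 24 * he4)
dens = np.clip(dens, 0, None)                       # guard against small negative lobes
cdf = np.cumsum(dens) * (grid[1] - grid[0])
cdf /= cdf[-1]

for alpha in (0.01, 0.05):
    z_alpha = grid[np.searchsorted(cdf, alpha)]     # numerical quantile of the expansion
    print(f"{int(alpha * 100)}% VaR ~ {-(mu + sigma * z_alpha):.4f}")
```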

Relevance: 30.00%

Abstract:

The mental logic theory does not accept the disjunction introduction rule of standard propositional calculus as a natural schema of the human mind. The problem I want to highlight in this paper is that, nonetheless, the theory does admit another, much more complex schema in which that rule must be used as a prior step. I argue that this is a significant problem that the mental logic theory needs to solve, and claim that a rival theory, the mental models theory, does not face these difficulties.

Relevance: 30.00%

Abstract:

Aims. Projected rotational velocities (ve sin i) have been estimated for 334 targets in the VLT-FLAMES Tarantula Survey that do not manifest significant radial velocity variations and are not supergiants. They have spectral types from approximately O9.5 to B3. The estimates have been analysed to infer the underlying rotational velocity distribution, which is critical for understanding the evolution of massive stars. Methods. Projected rotational velocities were deduced from the Fourier transforms of spectral lines, with upper limits also being obtained from profile fitting. For the narrower lined stars, metal and non-diffuse helium lines were adopted, and for the broader lined stars, both non-diffuse and diffuse helium lines; the estimates obtained using the different sets of lines are in good agreement. The uncertainty in the mean estimates is typically 4% for most targets. The iterative deconvolution procedure of Lucy has been used to deduce the probability density distribution of the rotational velocities. Results. Projected rotational velocities range up to approximately 450 km s⁻¹ and show a bi-modal structure. This is also present in the inferred rotational velocity distribution, with 25% of the sample having 0 < ve < 100 km s⁻¹ and the high-velocity component having ve ∼ 250 km s⁻¹. There is no evidence from the spatial and radial velocity distributions of the two components that they represent either field and cluster populations or different episodes of star formation. Be-type stars have also been identified. Conclusions. The bi-modal rotational velocity distribution in our sample resembles that found for late-B and early-A type stars. While magnetic braking appears to be a possible mechanism for producing the low-velocity component, we cannot rule out alternative explanations. © ESO 2013.
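
The relation between intrinsic and projected velocities can be illustrated by a forward simulation (not the Lucy deconvolution used in the paper): drawing a hypothetical bi-modal ve distribution and random spin-axis orientations (cos i uniform) shows how the projected ve sin i distribution is smeared toward lower values.

```python
# Forward simulation only: project a hypothetical bi-modal intrinsic rotational
# velocity distribution through random inclinations to get ve*sin(i).
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
slow = rng.uniform(0, 100, int(0.25 * n))        # low-velocity component (km/s)
fast = rng.normal(250, 60, n - len(slow))        # high-velocity component (~250 km/s)
ve = np.concatenate([slow, np.clip(fast, 0, None)])

cos_i = rng.uniform(0, 1, n)                     # random orientation of spin axes
vsini = ve * np.sqrt(1 - cos_i ** 2)

print("median ve      :", round(np.median(ve), 1), "km/s")
print("median ve sin i:", round(np.median(vsini), 1), "km/s")
```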

Relevance: 30.00%

Abstract:

The strong mixing of many-electron basis states in excited atoms and ions with open f shells results in very large numbers of complex, chaotic eigenstates that cannot be computed to any degree of accuracy. Describing the processes which involve such states requires the use of a statistical theory. Electron capture into these “compound resonances” leads to electron-ion recombination rates that are orders of magnitude greater than those of direct, radiative recombination and cannot be described by standard theories of dielectronic recombination. Previous statistical theories considered this as a two-electron capture process which populates a pair of single-particle orbitals, followed by “spreading” of the two-electron states into chaotically mixed eigenstates. This method is similar to a configuration-average approach because it neglects potentially important effects of spectator electrons and conservation of total angular momentum. In this work we develop a statistical theory which considers electron capture into “doorway” states with definite angular momentum obtained by the configuration interaction method. We apply this approach to electron recombination with W²⁰⁺, considering 2×10⁶ doorway states. Despite strong effects from the spectator electrons, we find that the results of the earlier theories largely hold. Finally, we extract the fluorescence yield (the probability of photoemission and hence recombination) by comparison with experiment.

Relevance: 30.00%

Abstract:

In this paper, we consider the transmission of confidential information over a κ-μ fading channel in the presence of an eavesdropper who also experiences κ-μ fading. In particular, we obtain novel analytical solutions for the probability of strictly positive secrecy capacity (SPSC) and a lower bound on the secure outage probability (SOPL) for independent and non-identically distributed channel coefficients without parameter constraints. We also provide a closed-form expression for the probability of SPSC when the μ parameter is assumed to take positive integer values. Monte Carlo simulations are performed to verify the derived results. The versatility of the κ-μ fading model means that the results presented in this paper can be used to determine the probability of SPSC and the SOPL for a large number of other fading scenarios, such as Rayleigh, Rice (Nakagami-n), Nakagami-m, One-Sided Gaussian, and mixtures of these common fading models. In addition, due to the duality of the analysis of secrecy capacity and co-channel interference (CCI), the results presented here have immediate applicability in the analysis of outage probability in wireless systems affected by CCI and background noise (BN). To demonstrate the efficacy of the novel formulations proposed here, we use the derived equations to provide useful insight into the probability of SPSC and the SOPL for a range of emerging wireless applications, such as cellular device-to-device, peer-to-peer, vehicle-to-vehicle, and body-centric communications, using data obtained from real channel measurements.
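
A quick Monte Carlo check of the SPSC probability can be sketched for the Rayleigh special case of the κ-μ model (the general κ-μ sampling is not reproduced here): secrecy capacity is strictly positive exactly when the main channel's instantaneous SNR exceeds the eavesdropper's.

```python
# Monte Carlo check of the probability of strictly positive secrecy capacity,
# using Rayleigh fading (a special case of kappa-mu) as a stand-in.
import numpy as np

rng = np.random.default_rng(12)
n = 1_000_000
snr_main, snr_eve = 10.0, 5.0          # average SNRs of main and eavesdropper links

gamma_m = snr_main * rng.exponential(1.0, n)   # instantaneous SNR under Rayleigh fading
gamma_e = snr_eve * rng.exponential(1.0, n)

# Secrecy capacity is positive iff the main channel is instantaneously better.
p_spsc = np.mean(np.log2(1 + gamma_m) > np.log2(1 + gamma_e))
print("P(SPSC) ~", round(p_spsc, 4))
# For Rayleigh/Rayleigh the closed form is snr_main / (snr_main + snr_eve):
print("closed form:", round(snr_main / (snr_main + snr_eve), 4))
```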

Relevance: 30.00%

Abstract:

Queueing theory is the mathematical study of ‘queues’ or ‘waiting lines’, where an item from inventory is provided to the customer on completion of service. A typical queueing system consists of a queue and a server. Customers arrive in the system from outside and join the queue in a certain way. The server picks up customers and serves them according to a certain service discipline. Customers leave the system immediately after their service is completed. For queueing systems, queue length, waiting time and busy period are of primary interest to applications. The theory permits the derivation and calculation of several performance measures, including the average waiting time in the queue or the system, the mean queue length, the traffic intensity, the expected number waiting or receiving service, the mean busy period, the distribution of queue length, and the probability of encountering the system in certain states, such as empty, full, having an available server or having to wait a certain time to be served.
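
For a concrete example, the standard M/M/1 formulas give several of the measures listed above directly; the arrival and service rates below are arbitrary.

```python
# Standard M/M/1 performance measures (illustrative rates, assuming lam < mu).
lam, mu = 4.0, 5.0                  # arrival and service rates, customers per hour

rho = lam / mu                      # traffic intensity / server utilisation
L = rho / (1 - rho)                 # mean number in system
Lq = rho ** 2 / (1 - rho)           # mean queue length
W = 1 / (mu - lam)                  # mean time in system (Little's law: L = lam * W)
Wq = rho / (mu - lam)               # mean waiting time in queue
p_empty = 1 - rho                   # probability the system is empty

print(f"rho={rho:.2f}, L={L:.2f}, Lq={Lq:.2f}, W={W:.2f} h, Wq={Wq:.2f} h, P0={p_empty:.2f}")
```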