972 results for Overflow probability


Relevance: 20.00%

Abstract:

This data set includes measurements from moored instruments in the Faroe Bank Channel overflow region between 28 May 2012 and 5 June 2013. The data set was collected under the project entitled "Faroe Bank Channel Overflow: Dynamics and Mixing Research", with the objective of describing the structure and variability of the dense oceanic overflow plume from the Faroe Bank Channel on daily to seasonal timescales. Mooring arrays were deployed in two sections: one located 25 km downstream of the main sill, in the channel that geographically confines the overflow plume at both edges (section C), and one 60 km further downstream, over the slope (section S). The measurements delivered with this data set include hourly-averaged data gridded at 5-m vertical separation, after accounting for mooring knockdowns using a mooring dynamics model. A complete set of mooring drawings and a detailed description can be found in the cruise report (Fer et al. 2016, PDF provided). The article by Ullgren et al. (2016) gives further details on the processing of the data set and presents it.
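The two gridding steps the abstract names, hourly averaging and interpolation onto a 5-m vertical grid, can be sketched as follows. This is an illustrative reconstruction only: the function names, sample values, and linear interpolation choice are assumptions, not the data set's actual processing chain.

```python
import numpy as np

def grid_profile(depths, values, dz=5.0):
    """Interpolate scattered (depth, value) samples onto a regular dz-spaced grid."""
    order = np.argsort(depths)
    depths, values = np.asarray(depths, float)[order], np.asarray(values, float)[order]
    grid = np.arange(depths[0], depths[-1] + dz, dz)
    return grid, np.interp(grid, depths, values)

def hourly_average(times_s, series):
    """Average a time series (seconds since start) into 1-hour bins."""
    bins = (np.asarray(times_s) // 3600).astype(int)
    series = np.asarray(series, float)
    return np.array([series[bins == b].mean() for b in np.unique(bins)])

# Toy profile: three instruments between 100 m and 130 m depth.
grid, profile = grid_profile([100, 112, 130], [8.0, 7.5, 7.1])
```

Real mooring processing would additionally correct instrument depths with the knockdown model before gridding; that step is omitted here.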

Relevance: 20.00%

Abstract:

One of the most common forms of reuse is API usage. However, one of the main challenges to effective usage is accessible, easy-to-understand documentation. Several papers have proposed alternatives for making API documentation more understandable, or more detailed. However, these studies have not taken into account the complexity of understanding the examples, which would make the documentation adaptable to developers with different levels of experience. In this work we developed and evaluated four different methodologies for generating API tutorials from Stack Overflow content and organizing them according to complexity of understanding. The methodologies were evaluated through tutorials generated for the Swing API. A survey was conducted to evaluate eight different features of the generated tutorials. The overall assessment of the tutorials was positive on several characteristics, showing the feasibility of automatically generated tutorials. In addition, the criteria for presenting tutorial elements in order of complexity, the separation of the tutorial into basic and advanced parts, the fidelity of the tutorial to the selected posts, and the existence of didactic source code produced significantly different results depending on the chosen generation methodology. A second study compared the official documentation of the Android API with the tutorial generated by the best methodology from the previous study. A controlled experiment was conducted with students having their first contact with Android development. In the experiment these students performed two tasks, one using the official Android documentation and the other using the generated tutorial. The results showed that in most cases students performed better on tasks when they used the tutorial proposed in this work. The main reasons for the students' poor performance on tasks using the official API documentation were the lack of usage examples and its difficulty of use.
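The core ordering step, ranking candidate posts by a "complexity of understanding" score and splitting the tutorial into basic and advanced parts, might look like the sketch below. The scoring features (count of distinct API elements, code length) are invented for illustration; the abstract does not specify the actual metrics.

```python
# Hypothetical sketch: order Stack Overflow posts by a crude complexity score,
# then split into basic/advanced halves. All field names and weights are assumed.

def complexity(post):
    # More distinct API elements and longer code snippets -> harder to understand.
    return 2 * len(post["api_elements"]) + len(post["code"].splitlines())

posts = [
    {"title": "JButton click handler", "api_elements": {"JButton"},
     "code": "b = JButton()\n"},
    {"title": "Custom JTable model", "api_elements": {"JTable", "AbstractTableModel"},
     "code": "class M(AbstractTableModel):\n    pass\n"},
]

ordered = sorted(posts, key=complexity)   # easiest first
mid = len(ordered) // 2
basic, advanced = ordered[:mid], ordered[mid:]
```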

Relevance: 20.00%

Abstract:

Improving lava flow hazard assessment is one of the most important and challenging fields of volcanology, and has an immediate and practical impact on society. Here, we present a methodology for the quantitative assessment of lava flow hazards based on a combination of field data, numerical simulations and probability analyses. With the extensive data available on historic eruptions of Mt. Etna, going back over 2000 years, it has been possible to construct two hazard maps, one for flank and the other for summit eruptions, allowing a quantitative analysis of the most likely future courses of lava flows. The effective use of hazard maps of Etna may help minimize the damage from volcanic eruptions through correct land use in densely urbanized areas with a population of almost one million people. Although this study was conducted on Mt. Etna, the approach used is designed to be applicable to other volcanic areas.
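The probability-combination step behind such hazard maps can be sketched in a few lines: given simulated inundation footprints for a set of eruptive scenarios and an estimated probability for each scenario, the hazard at a map cell is the probability that at least one scenario inundates it. This is a minimal sketch with toy values, not the paper's actual workflow.

```python
import numpy as np

def hazard_map(footprints, scenario_probs):
    """Per-cell probability of inundation by at least one scenario.

    footprints: (n_scenarios, ny, nx) boolean/0-1 grids from flow simulations.
    scenario_probs: probability that each scenario occurs.
    """
    footprints = np.asarray(footprints, dtype=float)
    p = np.asarray(scenario_probs, dtype=float).reshape(-1, 1, 1)
    # P(inundated) = 1 - prod_i (1 - p_i * covered_i), assuming independent scenarios
    return 1.0 - np.prod(1.0 - p * footprints, axis=0)

# Two toy scenarios on a 2x2 grid.
fp = [[[1, 0], [1, 1]],
      [[0, 0], [1, 0]]]
h = hazard_map(fp, [0.2, 0.5])
```

The independence assumption across scenarios is a simplification; a real analysis would condition on vent-opening probabilities and eruption recurrence rates estimated from the historical record.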

Relevance: 20.00%

Abstract:

In this study, the authors propose simple methods to evaluate the achievable rates and outage probability of a cognitive radio (CR) link, taking into account the imperfection of spectrum sensing. In the considered system, the CR transmitter and receiver correlatively sense and dynamically exploit the spectrum pool via dynamic frequency hopping. Under imperfect spectrum sensing, false alarms and missed detections occur, causing impulsive interference that arises from collisions due to simultaneous spectrum access by primary and cognitive users. This makes it very challenging to evaluate the achievable rates. By first examining the static link, where the channel is assumed to be constant over time, they show that the achievable rate using a Gaussian input can be calculated accurately through a simple series representation. In the second part of this study, they extend the calculation of the achievable rate to wireless fading environments. To take into account the effect of fading, they introduce a piecewise linear curve-fitting method to approximate the instantaneous achievable rate curve as a combination of linear segments. It is then demonstrated that the ergodic achievable rate in fast fading and the outage probability in slow fading can be calculated to any given accuracy level.
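The piecewise-linear idea can be illustrated generically: approximate a rate curve r(snr) = log2(1 + snr) by linear segments between knots, then average over fading realizations to estimate an ergodic rate. The knot placement, exponential fading model, and sample count below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Knots of the piecewise-linear approximation of the instantaneous rate curve.
knots = np.linspace(0.0, 20.0, 11)           # SNR breakpoints (linear scale)
rate_at_knots = np.log2(1.0 + knots)

def rate_pwl(snr):
    """Piecewise-linear approximation of log2(1 + snr); saturates beyond the last knot."""
    return np.interp(snr, knots, rate_at_knots)

# Monte Carlo average over an exponential SNR (Rayleigh fading power, mean 5).
snr_samples = rng.exponential(scale=5.0, size=100_000)
ergodic_rate = float(np.mean(rate_pwl(snr_samples)))
```

The paper's point is that, with the rate curve reduced to linear segments, the per-segment expectations can be computed in closed form instead of by Monte Carlo; the sampling here is only to show the approximation is sensible.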

Relevance: 20.00%

Abstract:

The application of custom classification techniques and posterior probability modeling (PPM) to archaeological field survey, using Worldview-2 multispectral imagery, is presented in this paper. Research is focused on the identification of Neolithic felsite stone tool workshops in the North Mavine region of the Shetland Islands in Northern Scotland. Sample data from known workshops surveyed using differential GPS are used alongside known non-sites to train a linear discriminant analysis (LDA) classifier based on a combination of datasets including Worldview-2 bands, band difference ratios (BDR) and topographical derivatives. Principal component analysis is further used to test and reduce dimensionality caused by redundant datasets. Probability models were generated by LDA using principal components and tested against sites identified through geological field survey. Testing shows the prospective ability of this technique, with significance levels between 0.05 and 0.01 and gain statistics between 0.90 and 0.94, higher than those obtained using maximum likelihood and random forest classifiers. Results suggest that this approach is best suited to relatively homogeneous site types, and performs better with correlated data sources. Finally, by combining posterior probability models and least-cost analysis, a survey least-cost efficacy model is generated, showing the utility of such approaches to archaeological field survey.
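The PCA-then-LDA pipeline can be sketched with synthetic stand-ins for the imagery-derived features. Everything below (feature values, class separation, the two-component cut) is invented to show the shape of the computation, not the study's data or tuning.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "site" and "non-site" training samples with 3 features
# (stand-ins for spectral bands, band ratios, topographic derivatives).
sites = rng.normal(loc=[2.0, 1.0, 0.5], scale=0.5, size=(50, 3))
nonsites = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(50, 3))
X = np.vstack([sites, nonsites])
y = np.array([1] * 50 + [0] * 50)

# PCA: project centered data onto the two leading principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Fisher LDA on the components: w = Sw^{-1} (m1 - m0), midpoint threshold.
m1, m0 = Z[y == 1].mean(axis=0), Z[y == 0].mean(axis=0)
Sw = np.cov(Z[y == 1].T) + np.cov(Z[y == 0].T)
w = np.linalg.solve(Sw, m1 - m0)
scores = Z @ w
threshold = (scores[y == 1].mean() + scores[y == 0].mean()) / 2
accuracy = float(np.mean((scores > threshold) == (y == 1)))
```

In the study the discriminant scores are carried further into posterior probability surfaces (the PPM step); the midpoint threshold here is just the simplest decision rule.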

Relevance: 20.00%

Abstract:

The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed, demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
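For small alphabets and short words the quantity in question can be computed exactly by brute force, which also clarifies why the paper's approximations are needed: the word list grows exponentially with length. A minimal sketch, with a toy alphabet:

```python
from itertools import product

def expected_guesses(letter_probs, length):
    """Expected number of guesses when words of the given length over a
    weighted alphabet are guessed in decreasing order of probability
    (letters drawn independently, i.e. a first-order model)."""
    word_probs = []
    for combo in product(letter_probs, repeat=length):
        p = 1.0
        for lp in combo:
            p *= lp
        word_probs.append(p)
    word_probs.sort(reverse=True)          # guess most probable words first
    return sum(rank * p for rank, p in enumerate(word_probs, start=1))

# Uniform letters: every order is optimal and the average is (N + 1) / 2.
print(expected_guesses([0.5, 0.5], 2))     # 2.5 over the 4 equiprobable words
```

With a 100-letter alphabet and 100-letter words this enumeration is hopeless (100^100 words), which is exactly the regime the paper's normal-density approximation targets.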

Relevance: 20.00%

Abstract:

This analysis paper presents previously unknown properties of some special cases of the Wright function whose consideration is necessitated by our work on probability theory and the theory of stochastic processes. Specifically, we establish new asymptotic properties of the particular Wright function 1Ψ1(ρ, k; ρ, 0; x) = Σ_{n=0}^{∞} [Γ(k + ρn)/Γ(ρn)] x^n/n! (|x| < ∞) when the parameter ρ ∈ (−1, 0) ∪ (0, ∞) and the argument x is real. In the probability theory applications, which are focused on studies of the Poisson-Tweedie mixtures, the parameter k is a non-negative integer. Several representations involving well-known special functions are given for certain particular values of ρ. The asymptotics of 1Ψ1(ρ, k; ρ, 0; x) are obtained under numerous assumptions on the behavior of the arguments k and x when the parameter ρ is both positive and negative. We also provide some integral representations and structural properties involving the 'reduced' Wright function 0Ψ1(−−; ρ, 0; x) with ρ ∈ (−1, 0) ∪ (0, ∞), which might be useful for the derivation of new properties of members of the power-variance family of distributions. Some of these imply a reflection principle that connects the functions 0Ψ1(−−; ±ρ, 0; ·) and certain Bessel functions. Several asymptotic relationships for both particular cases of this function are also given. A few of these follow under additional constraints from probability theory results which, although previously available, were unknown to analysts.
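The series above can be checked numerically for ρ > 0 and small |x|; note the n = 0 term vanishes because Γ(ρn) has a pole at 0. The truncation limit and the lgamma trick below are illustrative choices, not a robust evaluator for the full parameter range the paper studies.

```python
import math

def wright_1psi1(rho, k, x, nmax=50):
    """Truncated series for 1Psi1(rho, k; rho, 0; x), valid for rho > 0, moderate |x|."""
    total = 0.0
    for n in range(1, nmax + 1):           # n = 0 term is zero (Gamma(0) pole)
        # Gamma ratio via lgamma to avoid overflow for moderate n.
        ratio = math.exp(math.lgamma(k + rho * n) - math.lgamma(rho * n))
        total += ratio * x**n / math.factorial(n)
    return total

# Sanity check: for k = 1 the ratio collapses to rho*n, so the series sums
# to rho * x * e^x, an elementary closed form.
print(wright_1psi1(1.0, 1, 1.0))           # ≈ e
```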

Relevance: 20.00%

Abstract:

This paper analyzes the inner relations between classical sub-scheme probability and statistical probability, subjective probability and objective probability, prior probability and posterior probability, and transition probability and probability of utility. It further analyzes the goal, the method, and the practical economic purpose represented by these various probabilities from the perspective of mathematics, so as to understand their connotations in depth and their relation to economic decision making, thus paving the way for scientific prediction and decision making.
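The prior/posterior pair the paper discusses is connected by Bayes' rule; a one-step numerical illustration with invented numbers (prior P(H) = 0.3, likelihoods P(E|H) = 0.8 and P(E|¬H) = 0.2):

```python
# Bayes' rule: posterior = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)].
# All numbers here are invented for illustration.
prior = 0.3
p_e_given_h, p_e_given_not_h = 0.8, 0.2
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 4))   # 0.6316: the evidence raises 0.3 to about 0.63
```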