923 results for Error bounds
Abstract:
Integration of biometrics is considered an attractive solution to the issues associated with password-based human authentication, as well as to the secure storage and release of cryptographic keys, one of the critical issues in modern cryptography. However, the widespread adoption of bio-cryptographic solutions is somewhat restricted by the fuzziness associated with biometric measurements. Error control mechanisms must therefore be adopted to ensure that the fuzziness of biometric inputs can be sufficiently countered. In this paper, we outline the existing error control techniques used in bio-cryptography and explain how they are deployed in different types of solutions. Finally, we elaborate on the important factors to be considered when choosing an appropriate error correction mechanism for a particular biometric-based solution.
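One well-known way error correction is combined with biometrics is the fuzzy-commitment construction: a random key is encoded with an error-correcting code, XOR-locked with a biometric template, and recovered later from a noisy re-reading. The sketch below is purely illustrative and uses a toy 3x repetition code; all names and parameters are assumptions, not taken from the paper.

```python
# Minimal fuzzy-commitment-style sketch (illustrative assumption, not the
# paper's scheme): a random key is encoded with a 3x repetition code,
# XOR-locked with a biometric bit string, and later recovered from a noisy
# re-reading via majority decoding.
import random

def encode(key_bits):
    # 3x repetition code: each key bit becomes three identical codeword bits
    return [b for b in key_bits for _ in range(3)]

def decode(code_bits):
    # majority vote over each group of three bits corrects single-bit flips
    return [int(sum(code_bits[i:i + 3]) >= 2) for i in range(0, len(code_bits), 3)]

def commit(key_bits, biometric_bits):
    # lock = codeword XOR biometric template; reveals neither on its own
    return [c ^ b for c, b in zip(encode(key_bits), biometric_bits)]

def release(lock, noisy_biometric_bits):
    # unlock with a fresh (noisy) reading, then correct the residual errors
    return decode([l ^ b for l, b in zip(lock, noisy_biometric_bits)])

random.seed(0)
key = [random.randint(0, 1) for _ in range(8)]
template = [random.randint(0, 1) for _ in range(24)]
lock = commit(key, template)

# a re-reading with a few flipped bits (at most one flip per 3-bit group)
noisy = template[:]
for i in (2, 7, 13):
    noisy[i] ^= 1
assert release(lock, noisy) == key
```

The error-correcting capacity of the chosen code is exactly what bounds the tolerable biometric fuzziness, which is the trade-off the abstract refers to.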
Abstract:
Tourism plays an important role in the development of the Cook Islands. In this paper we examine the nexus between tourism and growth using quarterly data over the period 2009Q1–2014Q2, applying the recently upgraded ARDL bounds test to cointegration implemented in Microfit 5.01, which provides sample-adjusted bounds and is hence more reliable for small-sample studies. We perform the cointegration analysis using the ARDL bounds test and examine the direction of causality. Using visitor arrivals and output in per capita terms as respective proxies for tourism development and growth, we examine the long-run association and report the elasticity coefficient of tourism and the causality nexus accordingly. Using unit root break tests, we find that 2011Q1 and 2011Q2 are two structural break periods in the output series. However, this break period is not statistically significant in the ARDL model and is hence excluded from the estimation. The regression results show that the two series are cointegrated. The long-run elasticity coefficient of tourism is estimated at 0.83 and the short-run coefficient at 0.73. A bidirectional causality between tourism and income is noted for the Cook Islands, indicating that tourism development and income mutually reinforce each other. In light of this, socio-economic policies need to focus on broad-based, inclusive and income-generating tourism development projects, which are expected to have a feedback effect.
Abstract:
Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of the robustness of these cells observed during ageing in rodent models.
Abstract:
Let $G = (V, E)$ be a finite, simple and undirected graph. For $S \subseteq V$, let $\delta(S, G) = \{(u, v) \in E : u \in S,\ v \in V \setminus S\}$ be the edge boundary of $S$. Given an integer $i$, $1 \le i \le |V|$, the edge isoperimetric value of $G$ at $i$ is defined as $b_e(i, G) = \min_{S \subseteq V : |S| = i} |\delta(S, G)|$. The edge isoperimetric peak of $G$ is defined as $b_e(G) = \max_{1 \le j \le |V|} b_e(j, G)$. Let $b_v(G)$ denote the vertex isoperimetric peak, defined in a corresponding way. The problem of determining a lower bound for the vertex isoperimetric peak of complete $t$-ary trees was recently considered in [Y. Otachi, K. Yamazaki, A lower bound for the vertex boundary-width of complete k-ary trees, Discrete Mathematics, in press (doi: 10.1016/j.disc.2007.05.014)]. In this paper we provide bounds which improve those in the above-cited paper. Our results can be generalized to arbitrary (rooted) trees. The depth $d$ of a tree is the number of nodes on the longest path starting from the root and ending at a leaf. In this paper we show that for a complete binary tree of depth $d$ (denoted $T_d^2$), $c_1 d \le b_e(T_d^2) \le d$ and $c_2 d \le b_v(T_d^2) \le d$, where $c_1, c_2$ are constants. For a complete $t$-ary tree of depth $d$ (denoted $T_d^t$) with $d \ge c \log t$, where $c$ is a constant, we show that $c_1 \sqrt{t}\, d \le b_e(T_d^t) \le t d$ and $c_2 d / \sqrt{t} \le b_v(T_d^t) \le d$, where $c_1, c_2$ are constants. At the heart of our proof is the following theorem, which holds for an arbitrary rooted tree and not just for complete $t$-ary trees. Let $T = (V, E, r)$ be a finite, connected and rooted tree, the root being the vertex $r$. Define a weight function $w : V \to \mathbb{N}$, where the weight $w(u)$ of a vertex $u$ is the number of its successors (including itself), and let the weight index $\eta(T)$ be defined as the number of distinct weights in the tree, i.e. $\eta(T) = |\{w(u) : u \in V\}|$. For a positive integer $k$, let $\ell(k) = |\{i \in \mathbb{N} : 1 \le i \le |V|,\ b_e(i, T) \le k\}|$. We show that $\ell(k) \le 2\binom{2\eta + k}{k}$.
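The definitions above are easy to check by brute force on a tiny example. The sketch below (an illustration of the definitions only, exponential in $|V|$ and not the paper's method) computes $b_e(i, G)$ and the edge isoperimetric peak for a complete binary tree of depth 3.

```python
# Brute-force illustration of the definitions: b_e(i, G) is the minimum edge
# boundary over all vertex subsets of size i, and the edge isoperimetric peak
# is the maximum of b_e(i, G) over i. Exponential-time; for tiny graphs only.
from itertools import combinations

def edge_boundary(S, edges):
    S = set(S)
    # count edges with exactly one endpoint inside S
    return sum(1 for u, v in edges if (u in S) != (v in S))

def b_e(i, vertices, edges):
    return min(edge_boundary(S, edges) for S in combinations(vertices, i))

# complete binary tree of depth 3 (7 vertices, root = 0)
vertices = list(range(7))
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]

values = [b_e(i, vertices, edges) for i in range(1, len(vertices) + 1)]
peak = max(values)
print(values, peak)
```

For this tree the peak is 2, consistent with the stated upper bound $b_e(T_d^2) \le d$ at depth $d = 3$.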
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II error rates of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of the treatment lies within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other error rates, which we call Bayesian errors in this article because of their similarity to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. A clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
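For context, the frequentist operating characteristics of a two-stage design of the kind the article builds on (Simon 1989) follow directly from binomial probabilities. The numbers below (n1 = 10, r1 = 1, n = 29, r = 5, with p0 = 0.10 and p1 = 0.30) are illustrative assumptions, not values taken from the article.

```python
# Operating characteristics of a Simon-style two-stage design (illustrative
# parameters, not the article's): stop for futility after stage 1 if responses
# <= r1; declare the treatment promising if total responses > r.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def reject_prob(p, n1, r1, n, r):
    # probability of declaring the treatment promising at true response rate p
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):        # stage-1 responses that continue
        for x2 in range(0, n - n1 + 1):     # stage-2 responses
            if x1 + x2 > r:
                total += binom_pmf(x1, n1, p) * binom_pmf(x2, n - n1, p)
    return total

alpha = reject_prob(0.10, 10, 1, 29, 5)   # Type I error at null p0 = 0.10
power = reject_prob(0.30, 10, 1, 29, 5)   # power at alternative p1 = 0.30
print(round(alpha, 3), round(power, 3))
```

Designs are then chosen so that alpha and 1 - power stay below the desired Type I and Type II error levels; a Bayesian variant replaces the stopping rule with posterior-probability thresholds while still checking these same frequentist quantities.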
Abstract:
Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a target function and its derivatives are first approximated via non-uniform rational B-spline (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, owing to the variation diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper focuses on demonstrating this ability of the ERKM via numerical examples. Comparisons of some of the results with those of the standard reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
Abstract:
The decision of Henry J in Ginn & Anor v Ginn; ex parte Absolute Law Lawyers & Attorneys [2015] QSC 49 provides clarification of the approach to be taken on a default costs assessment under r 708 of the Uniform Civil Procedure Rules 1999.
Abstract:
We obtain stringent bounds in the $\langle r^2 \rangle_S^{K\pi}$–$c$ plane, where these are the scalar radius and the curvature parameter of the scalar $K\pi$ form factor, respectively, using analyticity and dispersion-relation constraints, together with the knowledge of the form factor at the well-known Callan-Treiman point $m_K^2 - m_\pi^2$, as well as at $m_\pi^2 - m_K^2$, which we call the second Callan-Treiman point. The central values of these parameters from a recent determination are accommodated in the allowed region provided the higher-loop corrections to the value of the form factor at the second Callan-Treiman point reduce the one-loop result by about 3% with $F_K/F_\pi = 1.21$. Such a variation in magnitude at the second Callan-Treiman point yields $0.12\ \mathrm{fm}^2 \lesssim \langle r^2 \rangle_S^{K\pi} \lesssim 0.21\ \mathrm{fm}^2$ and $0.56\ \mathrm{GeV}^{-4} \lesssim c \lesssim 1.47\ \mathrm{GeV}^{-4}$, and a strong correlation between them. A smaller value of $F_K/F_\pi$ shifts both bounds to lower values.
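For reference, the scalar radius and curvature parameters quoted above are conventionally defined through the low-energy expansion of the normalised scalar form factor, and the Callan-Treiman point fixes its value away from $t = 0$ (a standard convention in this literature, not restated in the abstract):

```latex
\tilde{f}_0(t) \equiv \frac{f_0(t)}{f_0(0)}
  = 1 + \frac{1}{6}\,\langle r^2 \rangle_S^{K\pi}\, t + c\, t^2 + \mathcal{O}(t^3),
\qquad
f_0\!\left(m_K^2 - m_\pi^2\right) = \frac{F_K}{F_\pi} + \Delta_{CT},
```

where $\Delta_{CT}$ is a small correction that vanishes in the soft-pion limit; the bounds in the abstract constrain the pair $(\langle r^2 \rangle_S^{K\pi}, c)$ appearing in this expansion.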
Abstract:
In Australia, 'Aboriginality' is often defined in constrictive ways that are heavily influenced by the coloniser's epistemological frameworks. An essential component of this is a 'racial' categorisation of peoples that marks sameness and difference, thereby influencing insider and outsider status. In one sense, this categorisation acts to exclude non-Aboriginal 'others' from participation in pre-invasion Indigenous ontologies, ways of living that may not have contained such restrictive identity categories and were thereby highly inclusive of outsiders. One of the effects of this is that Aboriginal peoples' efforts towards 'advancement', whether out of 'disadvantage' and/or towards political independence (i.e., sovereignty), become confined and restricted by what is deemed possible within the coloniser's epistemological frameworks, so much so that Aboriginal people are at risk of only reinforcing and upholding the very systems that resulted in their original and continuing dispossession.
Abstract:
A 59-year-old man was mistakenly prescribed Slow-Na instead of Slow-K due to incorrect selection from a drop-down list in the prescribing software. This error was identified by a pharmacist during a home medicine review (HMR) before the patient began taking the supplement. The reported error emphasizes the need for vigilance due to the emergence of novel look-alike, sound-alike (LASA) drug pairings. This case highlights the important role of pharmacists in medication safety.
Abstract:
We review here classical Bogomolnyi bounds and their generalisation to supersymmetric quantum field theories by Witten and Olive. We also summarise some recent work by several people on whether such bounds are saturated in the quantised theory.
Abstract:
A residual-based strategy to estimate the local truncation error in a finite volume framework for steady compressible flows is proposed. This estimator, referred to as the -parameter, is derived from the imbalance arising from the use of an exact operator on the numerical solution of the conservation laws. The behaviour of the residual estimator for linear and non-linear hyperbolic problems is systematically analysed, and the relationship of the residual to the global error is also studied. The -parameter is used to derive a target length scale and consequently to devise a suitable criterion for refinement and derefinement. This strategy, devoid of any user-defined parameters, is validated using two standard test cases involving smooth flows. A hybrid adaptive strategy for flows involving shocks, based on both error indicators and the -parameter, is also developed. Numerical studies on several compressible flow cases show that the adaptive algorithm performs very well in both two and three dimensions.