948 results for Error threshold
Abstract:
As conventional MOSFET scaling approaches the limit imposed by short channel effects, Double Gate (DG) MOS transistors are emerging as the most feasible technology candidate for sub-45nm technology nodes. As the short channel effect in the DG transistor is controlled by the device geometry, an undoped or lightly doped body is used to sustain the channel. There exists a disparity in the threshold voltage calculation criteria of undoped-body symmetric double gate transistors, for which two definitions are used: one potential based and the other charge based. In this paper, a novel concept of "crossover point" is introduced, which proves that the charge-based definition is more accurate than the potential based definition. The change in threshold voltage with body thickness variation for a fixed channel length is anomalous as predicted by the potential based definition, while it is monotonic for the charge based definition. The threshold voltage is then extracted from drain current versus gate voltage characteristics using linear extrapolation and the "Third Derivative of Drain-Source Current" method, or simply the "TD" method. The trend of threshold voltage variation is found to be the same in both cases, which supports the charge-based definition.
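Not part of the abstract: a minimal numerical sketch of the linear-extrapolation threshold-voltage extraction mentioned above, run on a synthetic (hypothetical) linear-region Id-Vg sweep. The "TD" method would instead locate Vth from the third derivative of Id with respect to Vg (commonly its peak), which repeated numerical differentiation of the same data would give.

```python
import numpy as np

def vth_linear_extrapolation(vg, id_):
    """Extract Vth by extrapolating the Id-Vg tangent at the point of
    maximum transconductance (gm = dId/dVg) down to Id = 0."""
    gm = np.gradient(id_, vg)            # numerical transconductance
    k = int(np.argmax(gm))               # bias point of peak gm
    # Tangent: Id ~ gm[k] * (Vg - Vth)  =>  Vth = Vg[k] - Id[k] / gm[k]
    return vg[k] - id_[k] / gm[k]

# Synthetic, idealized linear-region characteristic with Vth near 0.4 V
vg = np.linspace(0.0, 1.2, 121)
id_ = 1e-5 * np.log1p(np.exp((vg - 0.4) / 0.05))   # smooth turn-on (arbitrary units)
print(f"Extracted Vth ~ {vth_linear_extrapolation(vg, id_):.3f} V")
```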
Abstract:
We show that the cubicity of a connected threshold graph is equal to ⌈log₂ α⌉, where α is its independence number.
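A trivial worked example of the stated formula (the independence number alpha is taken as given; computing it for a concrete threshold graph is a separate task):

```python
import math

def cubicity_connected_threshold_graph(alpha: int) -> int:
    """Cubicity of a connected threshold graph with independence number alpha,
    per the result quoted above: ceil(log2(alpha))."""
    return math.ceil(math.log2(alpha))

print([cubicity_connected_threshold_graph(a) for a in (1, 2, 3, 4, 5, 8, 9)])
# -> [0, 1, 2, 2, 3, 3, 4]
```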
Abstract:
Near threshold fatigue crack growth behavior of a high strength steel under different temper levels was investigated. It is found that the observed variations in ΔKth could predominantly be attributed to roughness induced crack closure. The closure-free component of the threshold stress intensity range, ΔKeff,th showed a systematic variation with monotonic yield strength.
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that the trial stops when the posterior probability of treatment is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which are called Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss implications of different designs on error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate differences of the different designs.
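Not from the article itself: a short sketch of the frequentist operating characteristics (Type I error and power) of a Simon-style two-stage design, which is what both the frequentist and the Bayesian designs discussed above are ultimately calibrated against. The design parameters and response rates below are illustrative assumptions.

```python
from scipy.stats import binom

def reject_prob(p, n1, r1, n, r):
    """P(reject H0 | true response rate p) for a two-stage design:
    stop for futility after stage 1 if responses <= r1 out of n1;
    otherwise enrol n - n1 more patients and reject H0 if total responses > r."""
    n2 = n - n1
    prob = 0.0
    for x1 in range(r1 + 1, n1 + 1):                  # outcomes that continue to stage 2
        # need more than r - x1 further responses among the n2 stage-2 patients
        prob += binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
    return prob

# Hypothetical design: n1 = 10, r1 = 1, n = 29, r = 5; H0: p0 = 0.10 vs H1: p1 = 0.30
p0, p1 = 0.10, 0.30
n1, r1, n, r = 10, 1, 29, 5
print(f"Type I error: {reject_prob(p0, n1, r1, n, r):.4f}")
print(f"Power       : {reject_prob(p1, n1, r1, n, r):.4f}")
```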
Abstract:
Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a targeted function and its derivatives are first approximated via non-uniform rational B-splines (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can even accurately approximate functions with discontinuous derivatives. Moreover, due to the variation diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper is focused on demonstrating this ability of the ERKM via some numerical examples. Comparisons of some of the results with those via the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
Abstract:
For a wide class of semi-Markov decision processes the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on finite-horizon dynamic programming. This paper provides some results on the accuracy of such approximations, and, in particular, gives error bounds for some well-known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
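Not from the paper: a rough sketch of the calibration idea for a Bernoulli reward process. The Gittins index of an arm with a Beta posterior is approximated as the standard-arm reward lam at which a finite-horizon dynamic program is indifferent between pulling the arm and retiring; the discount factor, horizon and prior below are arbitrary choices, and the truncation introduced by the finite horizon is exactly the kind of error the paper's bounds address.

```python
from functools import lru_cache

BETA = 0.9      # discount factor
HORIZON = 50    # truncation depth of the finite-horizon approximation

@lru_cache(maxsize=None)
def value(a, b, lam, depth):
    """Finite-horizon value of a Bernoulli arm with Beta(a, b) posterior
    when a standard arm paying lam per period is always available."""
    retire = lam / (1.0 - BETA)
    if depth == HORIZON:
        return retire                                  # truncate: retire at the horizon
    p = a / (a + b)                                    # posterior mean success probability
    pull = p * (1.0 + BETA * value(a + 1, b, lam, depth + 1)) \
         + (1.0 - p) * BETA * value(a, b + 1, lam, depth + 1)
    return max(retire, pull)

def gittins_index(a, b, tol=1e-4):
    """Bisection on lam: the approximate index is the standard-arm reward
    at which pulling the arm and retiring are equally attractive."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if value(a, b, lam, 0) > lam / (1.0 - BETA) + 1e-9:
            lo = lam                                   # pulling strictly better: index above lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

print(f"Approximate Gittins index of a Beta(1, 1) Bernoulli arm: {gittins_index(1, 1):.3f}")
```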
Abstract:
Two different definitions, one potential based and the other charge based, are used in the literature to define the threshold voltage of undoped body symmetric double gate transistors. This paper, by introducing a novel concept of crossover point, proves that the charge based definition is more accurate than the potential based definition. It is shown that for a given channel length the potential based definition predicts an anomalous change in threshold voltage with body thickness variation, while the charge based definition results in a monotonic change. The threshold voltage is then extracted from drain current versus gate voltage characteristics using linear extrapolation, transconductance and match-point methods. In all three cases it is found that the trend of threshold voltage variation supports the charge based definition.
Abstract:
A geometrical structure called the implied minterm structure (IMS) has been developed from the properties of minterms of a threshold function. The IMS is useful for the manual testing of linear separability of switching functions of up to six variables. This testing is done just by inspection of the plot of the function on the IMS.
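The IMS itself is a manual, visual test; as a programmatic counterpart (not the paper's method), linear separability of a small switching function can be checked with a feasibility linear program: find weights w and threshold t with w·x >= t on the on-set and w·x <= t - 1 on the off-set. The unit margin is safe because any Boolean threshold function admits integer weights.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def is_threshold_function(f, n):
    """Return True iff the Boolean function f on {0,1}^n is linearly separable.
    LP variables are [w_1, ..., w_n, t]; feasibility <=> separability."""
    A_ub, b_ub = [], []
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits, dtype=float)
        if f(x):
            A_ub.append(np.append(-x, 1.0))    # -w.x + t <= 0   (i.e. w.x >= t)
            b_ub.append(0.0)
        else:
            A_ub.append(np.append(x, -1.0))    #  w.x - t <= -1  (i.e. w.x <= t - 1)
            b_ub.append(-1.0)
    res = linprog(c=np.zeros(n + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (n + 1))
    return res.success

print(is_threshold_function(lambda x: x.sum() >= 2, 3))           # majority of 3 -> True
print(is_threshold_function(lambda x: int(x.sum()) % 2 == 1, 3))  # XOR/parity    -> False
```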
Abstract:
Recent studies (1-7) clearly indicate a strong dependence of the fatigue threshold parameter, ΔKth, on grain size in several alloy systems. Attempts to explain these observations on the basis of crack tortuosity (1,8), fracture surface roughness (5,9) and crack closure (6) appear to present a fairly clear picture of the mechanisms that cause a reduction in crack growth rates at threshold. In general, it has been shown that coarse grained microstructures exhibit higher fatigue thresholds in low carbon steels (1,5), aluminium alloys (7) and titanium alloys (6). In spite of these observations, there exists (10-14) considerable uncertainty about the manner in which ΔKth depends on prior austenitic grain size in quenched and tempered steels. Studies in quenched and tempered steels demonstrating both an increase (3,12,14) as well as a decrease (11,12) in ΔKth with an increase in prior austenitic grain size can be cited to illustrate this point. Occasionally, the absence of any sensitivity of ΔKth to variations in prior austenitic grain size has also been reported (11,13). While a few investigators (5-7) comfortably rationalised the grain size effects on ΔKth on the basis of crack closure by a comparison in terms of the closure-free component of the threshold, ΔKeff,th, such an approach has yet to be extended to high strength steels. An attempt has been made in the present study to explain the effect of prior austenitic grain size on ΔKth on the basis of crack closure measurements in a high strength steel.
Abstract:
The statistical minimum risk pattern recognition problem, when the classification costs are random variables of unknown statistics, is considered. Using medical diagnosis as a possible application, the problem of learning the optimal decision scheme is studied for a two-class, two-action case, as a first step. This reduces to the problem of learning the optimum threshold (for taking appropriate action) on the a posteriori probability of one class. A recursive procedure for updating an estimate of the threshold is proposed. The estimation procedure does not require the knowledge of actual class labels of the sample patterns in the design set. The adaptive scheme of using the present threshold estimate for taking action on the next sample is shown to converge, in probability, to the optimum. The results of a computer simulation study of three learning schemes demonstrate the theoretically predictable salient features of the adaptive scheme.
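Not the paper's procedure (which, notably, does not require class labels): a plausible sketch of the two ingredients it builds on, namely the minimum-risk threshold on the posterior probability expressed through the mean action costs, and a recursive running-mean estimate of those means from randomly observed costs. All numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean costs c[i, j] = E[cost of action i | true class j]; unknown in the paper's setting.
TRUE_MEAN_COST = np.array([[0.0, 4.0],
                           [2.0, 0.0]])

def bayes_threshold(c):
    """Minimum-risk rule: take action 1 iff q = P(class 1 | x) >= t*."""
    return (c[0, 1] - c[1, 1]) / ((c[0, 1] - c[1, 1]) + (c[1, 0] - c[0, 0]))

# Recursive (running-mean) estimation of the mean costs from noisy cost observations.
cost_est = np.zeros((2, 2))
counts = np.zeros((2, 2))
for _ in range(20000):
    i, j = rng.integers(2), rng.integers(2)                 # (action taken, true class) of this sample
    observed = TRUE_MEAN_COST[i, j] + rng.normal(0.0, 1.0)  # random cost with unknown mean
    counts[i, j] += 1
    cost_est[i, j] += (observed - cost_est[i, j]) / counts[i, j]   # recursive mean update

print(f"estimated threshold: {bayes_threshold(cost_est):.3f}")
print(f"optimal threshold  : {bayes_threshold(TRUE_MEAN_COST):.3f}")
```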
Abstract:
The decision of Henry J in Ginn & Anor v Ginn; ex parte Absolute Law Lawyers & Attorneys [2015] QSC 49 provides clarification of the approach to be taken on a default costs assessment under r708 of the Uniform Civil Procedure Rules 1999.
Abstract:
Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally absences comprise observations or are inferred from comprehensive sampling. When such information is not available, then pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves analysis highly susceptible to ‘naughty-noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-staged approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. Then SDMs can be fit separately within each envelope, and for this stage, we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the Receiver Operating Characteristic (ROC) curve (AUC), but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species that has previously been published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0), eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence being in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR yet visible in FOR, FNR, and the comparison of predicted probability of presence distributions for presence/absence.
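A small illustration (not from the abstract's study) of the error rates discussed above, computed from presence/absence observations and thresholded predicted probabilities of presence. Note that FNR and FPR condition on the observation, whereas FOR and FDR condition on the prediction, which is why they matter to users of the predicted map. The data are made up.

```python
import numpy as np

def sdm_error_rates(y_obs, p_pred, threshold=0.5):
    """Confusion-matrix error rates for presence (1) / absence (0) predictions."""
    y_hat = (p_pred >= threshold).astype(int)
    tp = np.sum((y_hat == 1) & (y_obs == 1))
    fp = np.sum((y_hat == 1) & (y_obs == 0))
    fn = np.sum((y_hat == 0) & (y_obs == 1))
    tn = np.sum((y_hat == 0) & (y_obs == 0))
    return {
        "FNR": fn / (fn + tp),   # observed presences that are predicted absent
        "FPR": fp / (fp + tn),   # observed absences that are predicted present
        "FOR": fn / (fn + tn),   # predicted absences that are really presences
        "FDR": fp / (fp + tp),   # predicted presences that are really absences
    }

# Made-up example: 6 presences and 14 (pseudo-)absences
y_obs  = np.array([1] * 6 + [0] * 14)
p_pred = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.6] + [0.2] * 10 + [0.7, 0.6, 0.1, 0.1])
print(sdm_error_rates(y_obs, p_pred, threshold=0.5))
```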
Abstract:
A 59-year-old man was mistakenly prescribed Slow-Na instead of Slow-K due to incorrect selection from a drop-down list in the prescribing software. This error was identified by a pharmacist during a home medicine review (HMR) before the patient began taking the supplement. The reported error emphasizes the need for vigilance due to the emergence of novel look-alike, sound-alike (LASA) drug pairings. This case highlights the important role of pharmacists in medication safety.
Abstract:
A residual-based strategy to estimate the local truncation error in a finite volume framework for steady compressible flows is proposed. This estimator, referred to as the -parameter, is derived from the imbalance arising from the use of an exact operator on the numerical solution for conservation laws. The behaviour of the residual estimator for linear and non-linear hyperbolic problems is systematically analysed. The relationship of the residual to the global error is also studied. The -parameter is used to derive a target length scale and consequently devise a suitable criterion for refinement/derefinement. This strategy, devoid of any user-defined parameters, is validated using two standard test cases involving smooth flows. A hybrid adaptive strategy, based on both the error indicators and the -parameter, is also developed for flows involving shocks. Numerical studies on several compressible flow cases show that the adaptive algorithm performs very well in both two and three dimensions.
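Far simpler than the finite-volume estimator in the abstract, but a sketch of the same underlying idea on a 1-D steady model problem: the numerical solution from a low-order scheme is fed into a more accurate discrete operator, and the resulting imbalance (residual) is used as a point-wise refinement indicator. The model problem, scheme and tolerance are all ad hoc choices.

```python
import numpy as np

# 1-D steady model problem: a * du/dx = s(x) on [0, 1], u(0) = 0
a = 1.0
s = lambda x: np.where(np.abs(x - 0.5) < 0.05, 50.0, 1.0)   # sharp source layer near x = 0.5

N = 100
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

# Low-order (first-order upwind) numerical solution
u = np.zeros(N + 1)
for i in range(1, N + 1):
    u[i] = u[i - 1] + h * s(x[i]) / a

# Residual indicator: imbalance left when a more accurate (2nd-order central)
# operator is applied to the low-order numerical solution
R = np.zeros(N + 1)
R[1:-1] = a * (u[2:] - u[:-2]) / (2.0 * h) - s(x[1:-1])

flag = np.abs(R) > 0.2 * np.abs(R).max()        # ad hoc refinement criterion
print("grid points flagged for refinement near:", x[flag])
```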