912 results for "default probability"


Relevance: 20.00%

Publisher:

Abstract:

This volume brings together the work of a group of distinguished scholars and active researchers in the field of media and communication studies to reflect upon the past, present, and future of new media research. The chapters examine the implications of new media technologies for everyday life, existing social institutions, and society at large at various levels of analysis. Macro-level analyses of changing techno-social formations, such as discussions of the rise of the surveillance society and the "fifth estate", are combined with studies of concrete and specific new media phenomena, such as the rise of Pro-Am collaboration and "fan labor" online. In the process, prominent concepts in the field of new media studies, such as social capital, displacement, and convergence, are critically examined, while new theoretical perspectives are proposed and explicated. Reflecting the interdisciplinary nature of the field of new media studies and of communication research in general, the chapters interrogate these problems through a range of theoretical and methodological approaches. The book should offer students and researchers who are interested in the social impact of new media both critical reviews of the existing literature and inspiration for developing new research questions.

Relevance: 20.00%

Publisher:

Abstract:

In Hill v Robertson Suspension Systems Pty Ltd [2009] QDC 165, McGill DCJ considered the procedural requirements for the service of originating process on a company, and for proving that service for the purpose of obtaining default judgment. The judge's views adopt a strict and technical construction of the requirements for an affidavit of service under r 120(1)(b). Though clearly obiter, they may well affect the approach taken on applications to enter or set aside default judgments in the lower courts. Pending further judicial consideration of the issue, it is suggested the prudent course is to ensure that the deponent of an affidavit of service effected under s 109X(1)(a) of the Act deposes not only to the location of the registered office of the company but also, at a minimum, to the source of that information.

Relevance: 20.00%

Publisher:

Abstract:

In condition-based maintenance (CBM), effective diagnostic and prognostic tools are essential for maintenance engineers to identify imminent faults and predict the remaining useful life before components finally fail. This enables remedial actions to be taken in advance and production to be rescheduled if necessary. All machine components are subject to degradation processes in real environments, and they have certain failure characteristics which can be related to the operating conditions. This paper describes a technique for accurate assessment of the remnant life of bearings based on health state probability estimation and historical knowledge embedded in a closed-loop diagnostics and prognostics system. The technique uses a Support Vector Machine (SVM) classifier as a tool for estimating the health state probability of the machine degradation process to provide long-term prediction. To validate the feasibility of the proposed model, real-life historical fault data from bearings of High Pressure Liquefied Natural Gas (HP-LNG) pumps were analysed and used to obtain the optimal prediction of remaining useful life (RUL). The results obtained were very encouraging and showed that the proposed prognosis system based on health state probability estimation has the potential to be used as an estimation tool for remnant life prediction in industrial machinery.
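
As a rough illustration of the classification step, the sketch below trains an SVM with probability outputs on labelled degradation data. It is a minimal sketch only: the feature dimensions, state labels, and data are placeholders rather than the paper's HP-LNG dataset, and scikit-learn is an implementation choice, not something the authors specify.

    # Minimal sketch: health state probability estimation with an SVM.
    # All data, dimensions and state labels are illustrative placeholders.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_train = rng.random((200, 8))        # damage-sensitive features per observation
    y_train = rng.integers(0, 4, 200)     # health state: 0 = healthy ... 3 = near failure

    clf = SVC(probability=True)           # Platt scaling yields per-state probabilities
    clf.fit(X_train, y_train)

    x_now = rng.random((1, 8))            # features from the current inspection
    state_probs = clf.predict_proba(x_now)[0]
    print(state_probs)                    # P(health state | current features)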

Relevance: 20.00%

Publisher:

Abstract:

Dose kernels may be used to calculate dose distributions in radiotherapy (as described by Ahnesjö et al., 1999). Their calculation requires the use of Monte Carlo methods, usually by forcing interactions to occur at a point. The Geant4 Monte Carlo toolkit provides a capability to force interactions to occur in a particular volume. We have modified this capability and created a Geant4 application to calculate dose kernels in Cartesian, cylindrical, and spherical scoring systems. The simulation considers monoenergetic photons incident at the origin of a 3 m x 3 m x 3 m water volume. Photons interact via Compton scattering, the photoelectric effect, pair production, and Rayleigh scattering. By default, Geant4 models photon interactions by sampling a physical interaction length (PIL) for each process. The process returning the smallest PIL is then considered to occur. In order to force the interaction to occur within a given length, L_FIL, we scale each PIL according to the formula: PIL_forced = L_FIL x (1 - exp(-PIL/PIL_0)), where PIL_0 is a constant. This ensures that the process occurs within L_FIL, whilst correctly modelling the relative probability of each process. Dose kernels were produced for incident photon energies of 0.1, 1.0, and 10.0 MeV. In order to benchmark the code, dose kernels were also calculated using the EGSnrc Edknrc user code. Identical scoring systems were used; namely, the collapsed cone approach of the Edknrc code. Relative dose difference images were then produced. Preliminary results demonstrate the ability of the Geant4 application to reproduce the shape of the dose kernels; median relative dose differences of 12.6, 5.75, and 12.6% were found for incident photon energies of 0.1, 1.0, and 10.0 MeV, respectively.
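
To make the scaling concrete, here is a small standalone sketch of the forced-PIL transformation outside Geant4; the lengths and attenuation coefficients are illustrative assumptions.

    # Standalone sketch of the forced-PIL scaling; all values illustrative.
    import math
    import random

    L_FIL = 1.0   # forced interaction length (cm), assumed
    PIL_0 = 1.0   # scaling constant (cm), assumed

    def forced_pil(pil: float) -> float:
        # Monotonically maps an unbounded PIL into [0, L_FIL), so the process
        # with the smallest sampled PIL still "wins", preserving the relative
        # probability of each competing process.
        return L_FIL * (1.0 - math.exp(-pil / PIL_0))

    # Sample PILs for two competing processes from their attenuation
    # coefficients (inverse mean free paths), then pick the winner.
    mu_compton, mu_photoelectric = 0.7, 0.2   # cm^-1, illustrative
    pil_c = random.expovariate(mu_compton)
    pil_pe = random.expovariate(mu_photoelectric)
    winner = "Compton" if forced_pil(pil_c) < forced_pil(pil_pe) else "photoelectric"
    print(winner, forced_pil(pil_c), forced_pil(pil_pe))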

Relevance: 20.00%

Publisher:

Abstract:

Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for the detection of insects over a broad range of insect densities. Although the double logarithmic and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
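
For intuition about why the underlying distribution matters, the sketch below contrasts detection probabilities under the Poisson and negative binomial assumptions; the density, dispersion and sample count are illustrative values, not those of the experimental storages.

    # Minimal sketch: probability of detection under two of the four models.
    import math

    m = 0.5   # mean insects per sample (density), assumed
    k = 0.3   # negative binomial dispersion (small k = strongly clumped), assumed
    n = 10    # number of grain samples taken

    # Probability that a single sample contains zero insects under each model.
    p0_poisson = math.exp(-m)
    p0_negbin = (1.0 + m / k) ** (-k)

    # Probability of detecting at least one insect in n independent samples.
    detect_poisson = 1.0 - p0_poisson ** n
    detect_negbin = 1.0 - p0_negbin ** n
    print(f"Poisson: {detect_poisson:.3f}, negative binomial: {detect_negbin:.3f}")
    # Clumping (small k) raises the zero-insect probability per sample, so more
    # samples are needed for the same detection probability - the motivation for
    # a model flexible enough to span these regimes.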

Relevance: 20.00%

Publisher:

Abstract:

We examine the asset allocation, returns, and expenses of superannuation funds whose assets were mainly invested in default investment options between 2004 and 2012. A majority of these funds fail to earn returns commensurate with their strategic asset allocation policy. It appears that much of the variation in returns between the funds may be a result of their engaging in significant active management of assets. Our results indicate that returns from active management are negatively related to expenses. We also find strong evidence of economies of scale in these superannuation funds across different size categories.

Relevance: 20.00%

Publisher:

Abstract:

Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide performance superior to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems, without the need for prior training or system tuning.
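
The abstract does not spell out the statistical formulation, but one tuning-free way to obtain such likelihoods, sketched here purely as an assumption rather than as the paper's method, is to fit the observed descriptor distances online and score unusually small distances against that fit.

    # Assumed sketch: online distance statistics, no pre-training or tuning.
    import numpy as np
    from scipy.stats import norm

    def match_likelihoods(query: np.ndarray, database: np.ndarray) -> np.ndarray:
        d = np.linalg.norm(database - query, axis=1)  # distances to stored places
        mu, sigma = d.mean(), d.std() + 1e-9          # online fit: most places don't match
        # Distances far below the bulk of the distribution are unlikely under
        # the "non-match" hypothesis, hence probable matches.
        return norm.cdf((mu - d) / sigma)

    rng = np.random.default_rng(0)
    db = rng.random((100, 512))                       # stored GIST-like descriptors
    scores = match_likelihoods(db[42] + 0.01 * rng.standard_normal(512), db)
    print(int(scores.argmax()))                       # expect place 42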

Relevance: 20.00%

Publisher:

Abstract:

This article presents new theoretical and empirical evidence on the forecasting ability of prediction markets. We develop a model that predicts that the time until expiration of a prediction market should negatively affect the accuracy of prices as a forecasting tool, in the direction of a ‘favourite/longshot bias’. That is, high-likelihood events are underpriced, and low-likelihood events are overpriced. We confirm this result using a large data set of prediction market transaction prices. Prediction markets are reasonably well calibrated when the time to expiration is relatively short, but prices are significantly biased for events farther in the future. When the time value of money is considered, the miscalibration can be exploited to earn excess returns only when the trader has a relatively low discount rate.
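
As a toy numerical illustration of the discounting point (all figures assumed):

    # An underpriced favourite that is still unprofitable in present value.
    true_prob = 0.80      # assumed true probability of the event
    price = 0.75          # market price of a contract paying $1 on the event
    years_to_expiry = 2.0
    discount_rate = 0.10  # trader's annual discount rate, assumed

    expected_payoff = true_prob * 1.0
    present_value = expected_payoff / (1.0 + discount_rate) ** years_to_expiry
    print(f"PV of expected payoff: {present_value:.3f} vs price: {price:.2f}")
    # The nominal 5-cent edge becomes a loss once discounted at 10% over two
    # years; only a trader with a sufficiently low discount rate can exploit it.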

Relevance: 20.00%

Publisher:

Abstract:

A known limitation of the Probability Ranking Principle (PRP) is that it does not cater for dependence between documents. Recently, the Quantum Probability Ranking Principle (QPRP) has been proposed, which implicitly captures dependencies between documents through "quantum interference". This paper explores whether this new ranking principle leads to improved performance for subtopic retrieval, where novelty and diversity are required. In a thorough empirical investigation, models based on the PRP, as well as other recently proposed ranking strategies for subtopic retrieval (i.e. Maximal Marginal Relevance (MMR) and Portfolio Theory (PT)), are compared against the QPRP. On the given task, it is shown that the QPRP outperforms these other ranking strategies. Unlike MMR and PT, the QPRP requires no parameter estimation or tuning, making it both simple and effective. This research demonstrates that the application of quantum theory to problems within information retrieval can lead to significant improvements.

Relevance: 20.00%

Publisher:

Abstract:

In this work, we summarise the development of a ranking principle based on quantum probability theory, called the Quantum Probability Ranking Principle (QPRP), and we also provide an overview of the initial experiments performed employing the QPRP. The main difference between the QPRP and the classic Probability Ranking Principle is that the QPRP implicitly captures the dependencies between documents by means of "quantum interference". Consequently, the optimal ranking of documents is based not solely on the documents' probability of relevance but also on their interference with the previously ranked documents. Our research shows that the application of quantum theory to problems within information retrieval can lead to consistently better retrieval effectiveness, while still being simple, elegant and tractable.
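
For concreteness, a greedy QPRP-style ranking might look like the sketch below. The similarity stand-in for the interference phase and the example numbers are assumptions for illustration, not the papers' exact estimator.

    # Minimal sketch of QPRP-style greedy ranking with an assumed
    # similarity-based estimate of the interference term.
    import numpy as np

    def qprp_rank(p_rel: np.ndarray, sim: np.ndarray) -> list[int]:
        # p_rel: probability of relevance per document.
        # sim:   pairwise document similarity in [0, 1], standing in for
        #        -cos(theta) in the term 2 * sqrt(P(a) * P(b)) * cos(theta).
        remaining = list(range(len(p_rel)))
        ranked: list[int] = []
        while remaining:
            def score(d: int) -> float:
                # Negative interference with already-ranked similar documents
                # demotes redundant documents.
                return p_rel[d] - sum(
                    2.0 * np.sqrt(p_rel[d] * p_rel[j]) * sim[d, j] for j in ranked
                )
            best = max(remaining, key=score)
            ranked.append(best)
            remaining.remove(best)
        return ranked

    p = np.array([0.9, 0.85, 0.4])            # relevance probabilities
    s = np.array([[1.0, 0.95, 0.1],           # docs 0 and 1 are near-duplicates
                  [0.95, 1.0, 0.1],
                  [0.1, 0.1, 1.0]])
    print(qprp_rank(p, s))  # [0, 2, 1]: dissimilar doc 2 outranks redundant doc 1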

Relevance: 20.00%

Publisher:

Abstract:

At NDSS 2012, Yan et al. analyzed the security of several challenge-response type user authentication protocols against passive observers, and proposed a generic counting-based statistical attack to recover the secret of some counting-based protocols given a number of observed authentication sessions. Roughly speaking, the attack is based on the fact that secret (pass) objects appear in challenges with a different probability from non-secret (decoy) objects once the responses are taken into account. Although they mentioned that a protocol susceptible to this attack should minimize this difference, they did not give details as to how this can be achieved, beyond a few suggestions. In this paper, we attempt to fill this gap by generalizing the attack with a much more comprehensive theoretical analysis. Our treatment is more quantitative, which enables us to describe a method to theoretically estimate a lower bound on the number of sessions for which a protocol can be safely used against the attack. Our results include 1) two proposed fixes to make counting protocols practically safe against the attack at the cost of usability, 2) the observation that the attack can be used on non-counting-based protocols too, as long as challenge generation is contrived, and 3) two main design principles for user authentication protocols, which can be considered extensions of the principles from Yan et al. This detailed theoretical treatment can be used as a guideline during the design of counting-based protocols to determine their susceptibility to this attack. The Foxtail protocol, one of the protocols analyzed by Yan et al., is used as a representative to illustrate our theoretical and experimental results.
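
A stripped-down version of such a counting attack, with an invented toy protocol standing in for the real schemes, might look like this:

    # Toy counting attack sketch; the protocol here is invented for
    # illustration and is not the actual Foxtail protocol.
    from collections import Counter
    import random

    OBJECTS = list(range(60))
    secret = set(random.sample(OBJECTS, 5))

    def observe_session() -> tuple[list[int], int]:
        # One observed session: a challenge and a counting response. A plain
        # count is used here; real protocols send a function of the count.
        challenge = random.sample(OBJECTS, 20)
        return challenge, sum(1 for o in challenge if o in secret)

    # Objects that co-occur with high responses more often are more likely
    # to be pass objects - the probability difference the attack exploits.
    weighted, appearances = Counter(), Counter()
    for _ in range(2000):
        challenge, response = observe_session()
        for o in challenge:
            weighted[o] += response
            appearances[o] += 1

    scores = {o: weighted[o] / appearances[o] for o in OBJECTS if appearances[o]}
    guess = set(sorted(scores, key=scores.get, reverse=True)[:5])
    print(f"Recovered {len(guess & secret)} of 5 secret objects")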

Relevance: 20.00%

Publisher:

Abstract:

The appropriateness of default investment options in participant-directed retirement plans such as 401(k) plans has been in sharp focus, given that most participants fail to nominate an investment option to direct their contributions. In the United States (US), prior to the Pension Protection Act (PPA) of 2006, plan fiduciaries often selected a money market fund as the default option. Whilst this ‘low risk and low return’ investment option was considered a ‘safe’ choice by many fiduciaries who were fearful of litigation risk, it was heavily criticized for resulting in inadequate wealth at retirement, particularly when retirees were living much longer and facing inflation risk (see, for example, Viceira, 2008; Skinner, 2009)...

Relevance: 20.00%

Publisher:

Abstract:

Effective machine fault prognostic technologies can eliminate unscheduled downtime, increase machine useful life, and consequently reduce maintenance costs, as well as prevent human casualties, in real engineering asset management. This paper presents a technique for accurate assessment of the remnant life of machines based on a health state probability estimation technique and historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. To estimate a discrete machine degradation state which can effectively represent the complex nature of machine degradation, the proposed prognostic model employs a classification algorithm which, unlike conventional time-series analysis techniques, can use a number of damage-sensitive features for accurate long-term prediction. To validate the feasibility of the proposed model, data at five severity levels for four typical faults from High Pressure Liquefied Natural Gas (HP-LNG) pumps were used to compare intelligent diagnostic performance across five different classification algorithms. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of the pump based on estimation of health state probability using the Support Vector Machine (SVM) classifier. The results obtained were very encouraging and showed that the proposed prognostics system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
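
The final prognostic step can be pictured as weighting historical per-state lifetimes by the estimated state probabilities; the sketch below uses invented numbers purely to show the shape of that computation.

    # Minimal sketch: from health-state probabilities to a remnant life
    # estimate. All numbers are illustrative assumptions.
    import numpy as np

    # P(state | current features), e.g. from an SVM classifier.
    state_probs = np.array([0.05, 0.15, 0.60, 0.20])

    # Mean remaining life (hours) historically observed from each state to
    # failure, e.g. from run-to-failure records.
    mean_rul_per_state = np.array([5000.0, 2000.0, 600.0, 100.0])

    rul = float(state_probs @ mean_rul_per_state)
    print(f"Expected remaining useful life: {rul:.0f} h")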

Relevance: 20.00%

Publisher:

Abstract:

The operation of the law rests on the selection of an account of the facts. Whether this involves prediction or postdiction, it is not possible to achieve certainty. Any attempt to model the operation of the law completely will therefore raise the question of how to model the process of proof. In the selection of a model, a crucial question will be whether the model is to be used normatively or descriptively. Focussing on postdiction, this paper presents and contrasts the mathematical model with the story model. The former carries the normative stamp of scientific approval, whereas the latter has been developed by experimental psychologists to describe how humans reason. Neil Cohen's attempt to use a mathematical model descriptively provides an illustration of the dangers of not clearly setting this parameter of the modelling process. It should be kept in mind that the labels 'normative' and 'descriptive' are not eternal. The mathematical model has its normative limits, beyond which we may need to critically assess models with descriptive origins.

Relevance: 20.00%

Publisher:

Abstract:

While the Probability Ranking Principle for Information Retrieval provides the basis for formal models, it makes a very strong assumption regarding the dependence between documents. However, it has been observed that in real situations this assumption does not always hold. In this paper we propose a reformulation of the Probability Ranking Principle based on quantum theory. Quantum probability theory naturally includes interference effects between events. We posit that this interference captures the dependency between judgements of document relevance. The outcome is a more sophisticated principle, the Quantum Probability Ranking Principle, that provides a more sensitive ranking which caters for interference/dependence between documents' relevance.
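
For reference, the interference the QPRP relies on enters through the quantum probability of a disjunction of two events, which in standard notation reads

    P(A or B) = P(A) + P(B) + 2 * sqrt(P(A) * P(B)) * cos(theta_AB)

where theta_AB is the phase between the events' amplitudes. Classical probability is recovered when cos(theta_AB) = 0; a negative interference term between similar documents is what allows the principle to demote redundant results.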