223 results for "Probabilistic approaches"


Relevance:

20.00%

Publisher:

Abstract:

This article explores the use of restorative justice as a response to sexual crime. The management of high-risk sex offenders, particularly in the community post-release, has been a key focus of contemporary popular and political debate on sexual offending. Many offenders never come to the attention of the criminal justice system. For those who do, there is the almost blanket application of recent community control measures, such as sex offender registries and community notification, which have failed to prevent reoffending. The response of the media and the public to the presence of sex offenders in the community may also impede offender rehabilitation. Punishment alone, delivered through formal criminal justice, is therefore an inadequate deterrent for sexual crimes. Although controversial, this article advocates the use of restorative practices with sexual crime as a proactive, holistic response to the problem and, ultimately, as a more effective means of reducing the incidence of sexual offences and sex offender recidivism.

Relevance:

20.00%

Publisher:

Abstract:

Incidence calculus is a mechanism for probabilistic reasoning in which sets of possible worlds, called incidences, are associated with axioms, and probabilities are then associated with these sets. Inference rules are used to deduce bounds on the incidences of formulae which are not axioms, and bounds for the probabilities of such formulae can then be obtained. In practice, an assignment of probabilities directly to the axioms may be given, and it is then necessary to find an assignment of incidences that will reproduce these probabilities. We show that this task of assigning incidences can be viewed as a tree-searching problem, and two techniques for performing this search are discussed. One of these is a new proposal involving a depth-first search, while the other incorporates a random element. A Prolog implementation of both methods has been developed. The two approaches are compared for efficiency, and the significance of their results is discussed. Finally, we discuss a new proposal for applying techniques from linear programming to incidence calculus.
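The basic machinery described above can be illustrated with a small sketch (this is a generic illustration of incidence calculus with invented worlds and axioms, not the paper's Prolog implementation): worlds carry weights, an incidence is a set of worlds, and the incidence of a compound formula is computed set-theoretically from the incidences of its parts.

```python
# Illustrative sketch of incidence calculus (hypothetical worlds and axioms).
# Worlds are weighted; here the weights are uniform by assumption.
worlds = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}

def prob(incidence):
    """Probability of a formula = total weight of its incidence set."""
    return sum(worlds[w] for w in incidence)

# Incidences assigned to two axioms A and B.
i_A = {"w1", "w2", "w3"}
i_B = {"w2", "w3", "w4"}

# Inference rules give incidences of compound formulae:
i_A_and_B = i_A & i_B        # i(A and B) = i(A) intersect i(B)
i_A_or_B = i_A | i_B         # i(A or B)  = i(A) union i(B)
i_not_A = set(worlds) - i_A  # i(not A)   = W minus i(A)

print(prob(i_A), prob(i_A_and_B), prob(i_A_or_B))  # 0.75 0.5 1.0
```

The assignment problem the abstract refers to runs in the opposite direction: given target probabilities for the axioms, search for incidence sets (as above) whose weights reproduce them.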

Relevance:

20.00%

Publisher:

Abstract:

This paper discusses the relations between extended incidence calculus and assumption-based truth maintenance systems (ATMSs). We first prove that managing labels for statements (nodes) in an ATMS is equivalent to producing incidence sets of these statements in extended incidence calculus. We then demonstrate that the justification set for a node is functionally equivalent to the implication relation set for the same node in extended incidence calculus. As a consequence, extended incidence calculus can provide justifications for an ATMS, because implication relation sets are discovered by the system automatically. We also show that extended incidence calculus provides a theoretical basis for constructing a probabilistic ATMS by associating proper probability distributions with assumptions. In this way, we can not only produce labels for all nodes in the system, but also calculate the probability of any such node. The nogood environments can also be obtained automatically. Therefore, extended incidence calculus and the ATMS are equivalent in carrying out inferences at both the symbolic level and the numerical level. This extends a result due to Laskey and Lehner.
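The claimed correspondence can be made concrete with a toy propagation step (an illustrative sketch with invented assumptions and justifications, not the paper's formal construction): a node's incidence is the union, over its justifications, of the intersections of its antecedents' incidences, which is structurally the same computation an ATMS performs on labels.

```python
# Hypothetical incidences of assumptions (sets of possible worlds).
incidence = {
    "A1": {"w1", "w2"},
    "A2": {"w2", "w3"},
    "A3": {"w3", "w4"},
}

# Justifications for a node n: each is a set of antecedents jointly implying n.
justifications = [{"A1", "A2"}, {"A3"}]

def node_incidence(justifications, incidence):
    """i(n) = union over justifications of the intersection of the
    antecedents' incidence sets (the ATMS label computation, set-side)."""
    result = set()
    for just in justifications:
        antecedent_sets = [incidence[a] for a in just]
        result |= set.intersection(*antecedent_sets)
    return result

print(sorted(node_incidence(justifications, incidence)))  # ['w2', 'w3', 'w4']
```

Attaching a probability distribution to the worlds (as in the previous abstract's setting) then yields the probability of the node directly from this set, which is the numerical-level equivalence the paper establishes.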

Relevance:

20.00%

Publisher:

Abstract:

Face recognition with unknown, partial distortion and occlusion is a practical problem, and has a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion. The new approach is based on a probabilistic decision-based neural network, enhanced by a statistical method called the posterior union model (PUM). PUM is an approach for ignoring severely mismatched local features and focusing the recognition mainly on the reliable local features. It thereby improves the robustness while assuming no prior information about the corruption. We call the new approach the posterior union decision-based neural network (PUDBNN). The new PUDBNN model has been evaluated on three face image databases (XM2VTS, AT&T and AR) using testing images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared to other approaches and has demonstrated improved performance.
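The union idea behind PUM can be illustrated with a toy score-combination rule (a hedged sketch under an independence assumption, not the authors' exact formulation): instead of multiplying local-feature match probabilities, which lets one badly mismatched feature veto a match, one can score the probability that at least one local feature matches, so reliable features dominate.

```python
from math import prod

def product_rule(match_probs):
    """Conventional fusion: one occluded feature (prob near 0) vetoes the match."""
    return prod(match_probs)

def union_rule(match_probs):
    """Union-style fusion: probability that at least one local feature
    matches, assuming independence; robust to a few mismatched features."""
    return 1.0 - prod(1.0 - p for p in match_probs)

# Hypothetical per-feature match probabilities; the third local region
# is occluded and scores near zero.
scores = [0.9, 0.8, 0.05]
print(product_rule(scores))  # 0.036 -- the occlusion dominates
print(union_rule(scores))    # 0.981 -- the reliable features dominate
```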

Relevance:

20.00%

Publisher:

Abstract:

Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
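The reliability of probabilistic outputs discussed above is commonly summarised with the Brier score, the mean squared difference between the predicted probability and the 0/1 outcome (a generic sketch with invented numbers; the paper's exact normalisation is not reproduced here):

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes;
    lower is better, and 0 means a perfect, fully confident classifier."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Hypothetical predicted AMI probabilities and true outcomes (1 = AMI).
probs = [0.9, 0.2, 0.8, 0.1]
outcomes = [1, 0, 1, 0]
print(brier_score(probs, outcomes))  # approximately 0.025
```

A normalised variant (as reported in the abstract) rescales this so that higher values indicate better calibrated, sharper predictions.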

Relevance:

20.00%

Publisher:

Abstract:

There are now more than 1200 papers a year describing research results using the 'neoteric' solvents known as ionic liquids (ILs). If ILs are such highly studied solvents, why has there been comparatively little research into their use in crystallization? Here we explore this question and discuss possible strategies for utilizing both the mundane and the unique aspects of ILs for novel crystallization strategies, including crystallization of high- and low-melting solids using thermal shifts; "solvothermal" techniques; slow diffusion; electrocrystallization; and use of a co-solvent. The results presented here and those appearing in the literature indicate both the complex nature of these solvents and their promise in delivering unique solvation, metal ion coordination numbers, coordination polymer motifs, and metal-anion interactions, to name but a few. These complex, but fascinating, results and the promise of much more intimate control over crystallization processes will drive a growing interest in using ILs as crystallization solvents.