195 results for False confession


Relevance: 10.00%

Publisher:

Abstract:

Volunteering is a very important part of life in Australia, with an estimated 36% of the adult population volunteering in 2010. Voluntary work generates economic benefits, addresses community needs and develops the social networks that form the backbone of civil society. Without volunteers, many essential services would either cease to exist or become too expensive for many people to afford. These volunteers, who by definition are not in receipt of any remuneration for their work and services, are exposed to personal injury and to legal liability in the discharge of their functions. It is therefore appropriate that statutory protection is extended to volunteers and that volunteer organisations procure public liability and personal accident cover where possible. However, given the patchwork of circumstances in which statutory or institutional cover is available to volunteers, and the existence of many and diverse exclusions, it is important to have regard also to what scope a volunteer may have to avail themselves of protection against liability for volunteering activity by relying upon their own personal insurance cover. This article considers the extent of private insurance cover and its availability to volunteers under home and contents insurance and under comprehensive motor vehicle insurance. The most common policies in the Australian market are examined and the uncertain nature of the protection against liability afforded by these policies is discussed. This uncertainty could be reduced if the Federal Government, through amendments to the Insurance Contracts Regulations, standardised the circumstances in which, and the extent to which, liability protection is afforded to an insured holding home and contents insurance and comprehensive motor vehicle insurance cover.

Abstract:

Iris-based identity verification is highly reliable, but it can also be subject to attacks. Pupil dilation or constriction stimulated by the application of drugs is an example of a sample presentation security attack that can lead to higher false rejection rates. Suspects on a watch list can potentially circumvent an iris-based system using such methods. This paper investigates a new approach using multiple parts of the iris (instances) and multiple iris samples in a sequential decision fusion framework that can yield robust performance. Results are presented and compared with the standard full-iris approach for a number of iris degradations. An advantage of the proposed fusion scheme is that the trade-off between detection errors can be controlled by setting parameters such as the number of instances and the number of samples used in the system. The system can then be operated to match security threat levels. It is shown that for optimal values of these parameters, the fused system also has a lower total error rate.
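
The instance/sample trade-off described above can be sketched numerically. The fusion rule below (accept a subject only if every instance accepts, where an instance accepts if any of its sample attempts matches) and the per-sample error rates are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

# Hypothetical sketch: sequential fusion of n iris parts (instances),
# each allowed up to m presentation attempts (samples). Per-sample
# error rates below are assumed for illustration only.
rng = np.random.default_rng(0)

def simulate(n_instances, m_samples, p_false_match, p_false_nonmatch, trials=100_000):
    """Estimate (FAR, FRR) of the fused system by Monte Carlo."""
    # Impostor trial: each sample falsely matches with prob p_false_match;
    # an instance passes if any sample matches; accept iff all instances pass.
    imp = rng.random((trials, n_instances, m_samples)) < p_false_match
    far = np.mean(imp.any(axis=2).all(axis=1))
    # Genuine trial: each sample falsely rejects with prob p_false_nonmatch;
    # an instance fails only if all m samples reject; reject if any instance fails.
    gen = rng.random((trials, n_instances, m_samples)) < p_false_nonmatch
    frr = np.mean(gen.all(axis=2).any(axis=1))
    return far, frr

# Raising n pushes false accepts down; raising m pushes false rejects
# down, so the operating point can track a security threat level.
for n, m in [(1, 1), (3, 1), (3, 3)]:
    far, frr = simulate(n, m, p_false_match=0.05, p_false_nonmatch=0.10)
    print(f"n={n} m={m}  FAR~{far:.4f}  FRR~{frr:.4f}")
```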

Abstract:

Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for a certain combination of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved for the 'best combination performance' rule. As the search complexity for this rule increases exponentially with the addition of classifiers, a measure specifically adapted to the characteristics of the sequential fusion architecture, the sequential error ratio (SER), is proposed in this work. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control and to other systems such as identity verification based on multiple fingerprints or multiple handwriting samples.
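
The abstract does not spell out the SER formula itself, so the sketch below illustrates only the surrounding idea: a stage-wise selection rule that ranks classifiers by their likelihood of producing a correct decision (here approximated by held-out error), avoiding the exponential search of the 'best combination performance' rule. All data and rates are hypothetical:

```python
import numpy as np

# Illustrative sketch only: rank base classifiers by validation error
# so that each sequential stage uses the classifier most likely to be
# correct. Exhaustive search over combinations is O(2^n); this ranking
# is O(n log n).
rng = np.random.default_rng(1)

n_classifiers, n_val = 6, 500
labels = rng.integers(0, 2, n_val)                  # held-out ground truth
accuracies = rng.uniform(0.7, 0.95, n_classifiers)  # hypothetical base rates
# Simulated per-classifier decisions on the validation set: each
# classifier outputs the true label with its own accuracy.
decisions = np.where(rng.random((n_classifiers, n_val)) < accuracies[:, None],
                     labels, 1 - labels)

def stage_order(decisions, labels):
    """Order classifiers for the sequential stages, best first."""
    errors = (decisions != labels).mean(axis=1)
    return list(np.argsort(errors))

order = stage_order(decisions, labels)
print("stage order:", order)
```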

Abstract:

In a classification problem we typically face two challenging issues: the diverse characteristics of negative documents, and the fact that many negative documents lie close to positive documents. It is therefore hard for a single classifier to clearly classify incoming documents into classes. This paper proposes a novel gradual problem-solving approach to create a two-stage classifier. The first stage identifies reliable negatives (negative documents with weak positive characteristics). It concentrates on minimizing the number of false negative documents (recall-oriented). We use Rocchio, an existing recall-based classifier, for this stage. The second stage is a precision-oriented “fine tuning” that concentrates on minimizing the number of false positive documents by applying pattern (statistical phrase) mining techniques. In this stage a pattern-based scoring is followed by threshold setting (thresholding). Experiments show that our statistical-phrase-based two-stage classifier is promising.
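
A minimal sketch of the two-stage idea, with a Rocchio-style recall stage followed by a stricter precision stage. The prototype weighting, thresholds and synthetic documents are assumptions, and the paper's second stage uses pattern (statistical phrase) mining rather than the simple re-scoring used here:

```python
import numpy as np

# Hedged sketch of a two-stage classifier. Stage 1 is a Rocchio-style
# centroid classifier with a low (recall-oriented) threshold; stage 2
# applies a strict (precision-oriented) threshold to the survivors.
rng = np.random.default_rng(0)

pos = rng.normal(1.0, 1.0, (50, 20))    # positive training docs (tf-idf-like)
neg = rng.normal(-1.0, 1.0, (200, 20))  # negative training docs

# Rocchio prototype: positive centroid minus a down-weighted negative centroid.
prototype = pos.mean(axis=0) - 0.25 * neg.mean(axis=0)

def stage1(doc, recall_threshold=-5.0):
    # Low threshold -> few false negatives; documents scoring below it
    # are treated as "reliable negatives".
    return doc @ prototype > recall_threshold

def stage2(doc, precision_threshold=5.0):
    # Stand-in for the paper's pattern-based scoring: a stricter
    # threshold on the same score, to cut false positives.
    return doc @ prototype > precision_threshold

def classify(doc):
    return bool(stage1(doc) and stage2(doc))

test_pos = rng.normal(1.0, 1.0, 20)
print("positive doc accepted:", classify(test_pos))
```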

Abstract:

A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.

Abstract:

Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods, including random, stratified, and biologically informed. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of total species from 120 one-minute samples, compared to 34% of total species detected from traditional area search methods. Our results demonstrate that targeted sampling methods can provide an effective means for analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data. Read More: http://www.esajournals.org/doi/abs/10.1890/12-2088.1
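
The best-performing sampling design above can be sketched in a few lines. The dawn time is an assumption for illustration; only the window length (three hours), number of days (five) and sample count (120 one-minute segments) come from the study:

```python
import random

# Sketch of the study's best sampling design: randomly draw 120
# one-minute segments from the three hours after dawn across five
# days of recordings. Dawn at 05:00 is an assumed value.
random.seed(42)

DAYS = 5
DAWN_MINUTE = 5 * 60   # minutes since midnight; assumed dawn time
WINDOW = 3 * 60        # three-hour post-dawn window, in minutes
N_SAMPLES = 120

# Candidate pool: every minute in the post-dawn window on each day.
pool = [(day, DAWN_MINUTE + m) for day in range(DAYS) for m in range(WINDOW)]
samples = random.sample(pool, N_SAMPLES)  # without replacement

print(len(samples), "one-minute samples drawn from", len(pool), "candidates")
```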

Abstract:

Crashes on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing crashes will help address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists, and that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with traffic flow data from the hour prior to each crash using an incident detection algorithm. Traffic flow trends (traffic speed/occupancy time series) revealed that crashes could be clustered with regard to the dominant traffic flow pattern prior to the crash. Using the k-means clustering method allowed the crashes to be clustered based on their flow trends rather than their distance. Four major trends have been found in the clustering results. Based on these findings, crash likelihood estimation algorithms can be fine-tuned based on the monitored traffic flow conditions with a sliding window of 60 minutes to increase the accuracy of the results and minimize false alarms.
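
The clustering step can be sketched with a minimal k-means over synthetic pre-crash speed series. The series below (12 readings per crash, two trend shapes) stand in for the study's data, and the centroids are seeded deterministically so the sketch is reproducible:

```python
import numpy as np

# Minimal k-means sketch for grouping one-hour pre-crash speed time
# series (here 12 five-minute readings per crash). Synthetic data only.
rng = np.random.default_rng(0)

def kmeans(X, k, init_idx, iters=50):
    """Lloyd's algorithm with Euclidean distance, as used in the paper.
    init_idx fixes the initial centroids for reproducibility."""
    centroids = X[init_idx].astype(float).copy()
    for _ in range(iters):
        # Assign each series to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

t = np.linspace(0, 1, 12)
falling = 80 - 40 * t + rng.normal(0, 2, (40, 12))  # speed decaying toward a jam
steady = 75 + rng.normal(0, 2, (40, 12))            # free flow right up to the crash
X = np.vstack([falling, steady])

labels, _ = kmeans(X, k=2, init_idx=[0, len(X) - 1])
print("cluster sizes:", np.bincount(labels))
```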

Abstract:

Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists. We will compare them with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with the corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. Using the k-means clustering method with a Euclidean distance function allowed the crashes to be clustered. Then, normal-situation data was extracted based on the time distribution of crashes and clustered for comparison with the “high risk” clusters. Five major trends have been found in the clustering results for both high-risk and normal conditions. The study discovered differences in speed trends between the traffic regimes. Based on these findings, crash likelihood estimation models can be fine-tuned based on the monitored traffic conditions with a sliding window of 30 minutes to increase the accuracy of the results and minimize false alarms.

Abstract:

Recently, vision-based systems have been deployed in professional sports to track the ball and players to enhance analysis of matches. Due to their unobtrusive nature, vision-based approaches are preferred to wearable sensors (e.g. GPS or RFID sensors) as they do not require players or balls to be instrumented prior to matches. Unfortunately, in continuous team sports where players need to be tracked continuously over long periods of time (e.g. 35 minutes in field hockey or 45 minutes in soccer), current vision-based tracking approaches are not reliable enough to provide fully automatic solutions. As such, human intervention is required to fix up missed or false detections. In instances where a human cannot intervene due to the sheer amount of data being generated, however, the data cannot be used because of the missing/noisy detections. In this paper, we investigate two representations based on raw player detections (and not tracking) which are immune to missed and false detections. Specifically, we show that both team occupancy maps and centroids can be used to detect team activities, while the occupancy maps can be used to retrieve specific team activities. An evaluation on over 8 hours of field hockey data captured at a recent international tournament demonstrates the validity of the proposed approach.
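
Both detection-based representations above can be sketched directly. The pitch dimensions and grid resolution below are assumptions; the point is that binning raw detections onto a coarse grid makes isolated missed or false detections perturb only single cells:

```python
import numpy as np

# Sketch of a team occupancy map and team centroid built from raw
# per-frame detections (x, y). Pitch size and grid shape are assumed.
PITCH_X, PITCH_Y = 91.4, 55.0  # field-hockey pitch in metres (approx.)
GRID = (12, 8)                 # coarse spatial bins

def occupancy_map(detections):
    """detections: (N, 2) array of player positions from the detector."""
    H, _, _ = np.histogram2d(
        detections[:, 0], detections[:, 1],
        bins=GRID, range=[[0, PITCH_X], [0, PITCH_Y]])
    # Normalise so maps from frames with different detection counts
    # (missed/false detections) remain comparable.
    return H / max(len(detections), 1)

rng = np.random.default_rng(0)
frame = rng.uniform([0, 0], [PITCH_X, PITCH_Y], size=(10, 2))  # 10 detections
occ = occupancy_map(frame)
centroid = frame.mean(axis=0)  # the second, even coarser representation
print(occ.shape, round(occ.sum(), 6), centroid.round(1))
```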

Abstract:

In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) is dependent both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation instead of one based on player "identity" can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared the results to manually labelled data.
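
The denoising effect of a compact representation can be illustrated with a generic low-rank (SVD) sketch: stack role-ordered player positions over time into a matrix and keep only the leading components, which suppresses spiky detection errors. The basis dimension and synthetic data are assumptions, not the paper's learned spatiotemporal basis:

```python
import numpy as np

# Illustrative low-rank denoising sketch. Rows are frames, columns are
# role-ordered player slots; team motion is smooth and correlated, so
# it lives in a low-dimensional subspace, while detection errors do not.
rng = np.random.default_rng(0)

T, P = 200, 10                      # frames, role slots
t = np.linspace(0, 2 * np.pi, T)[:, None]
clean = np.sin(t + np.arange(P))    # smooth, correlated team motion (rank 2)
noisy = clean.copy()
spikes = rng.integers(0, T * P, 20)             # gross detection errors
noisy.flat[spikes] += rng.normal(0, 3, 20)

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 2                               # assumed basis dimension
denoised = (U[:, :k] * s[:k]) @ Vt[:k]

mask = np.zeros((T, P), bool)
mask.flat[spikes] = True
print("spike error before:", np.abs(noisy - clean)[mask].mean().round(3))
print("spike error after: ", np.abs(denoised - clean)[mask].mean().round(3))
```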

Abstract:

Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent: it is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated by applying it to text-dependent speaker verification using Hidden Markov Model based digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples).
The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping applications. The tuning of the parameters (the number of instances and samples) serves both the security and user convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
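
Under the statistical-independence assumption mentioned above, the fused error rates have simple closed forms. The concrete fusion rule assumed here (accept iff every one of n instances accepts, where an instance accepts if any of its m attempts accepts) is one instance of the architecture, not necessarily the dissertation's exact rule:

```python
# Closed-form error sketch for sequential multi-instance, multi-sample
# fusion of statistically independent decisions (assumed fusion rule).

def fused_errors(far, frr, n_instances, m_samples):
    """Map per-attempt FAR/FRR to fused FAR/FRR."""
    # An impostor passes one instance if any of m attempts falsely matches.
    far_instance = 1 - (1 - far) ** m_samples
    # A client fails one instance only if all m attempts are rejected.
    frr_instance = frr ** m_samples
    # Accept requires passing every instance; reject if any instance fails.
    fused_far = far_instance ** n_instances
    fused_frr = 1 - (1 - frr_instance) ** n_instances
    return fused_far, fused_frr

# More instances push false accepts down; more samples push false
# rejects down -- the controlled trade-off described above.
print(fused_errors(0.05, 0.10, n_instances=3, m_samples=2))
```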

Abstract:

Taking an empirical, critical approach to the problem of drugs, this thesis explores the interaction of drug policies and young people's drug use in Brisbane. The research argues that criminalising drug users does not usually prevent harmful drug use, but it can exacerbate harm and change how young people use drugs. Contemporary understandings of drug use as either recreational or addictive can create a false binary, and influence how illicit drugs are used. These understandings interact with policy responses to the drug problem, with some very real implications for the lived experiences of drug users. This research opens up possibilities for new directions in drug research and allows for a redefinition of drug related harm.

Abstract:

Numbers, rates and proportions of those remanded in custody have increased significantly in recent decades across a range of jurisdictions. In Australia they have doubled since the early 1980s, such that close to one in four prisoners is currently unconvicted. Taking NSW as a case study and drawing on the recent New South Wales Law Reform Commission Report on Bail (2012), this article will identify the key drivers of this increase in NSW, predominantly a form of legislative hyperactivity involving constant changes to the Bail Act 1978 (NSW), changes which remove or restrict the presumption in favour of bail for a wide range of offences. The article will then examine some of the conceptual, cultural and practice shifts underlying the increase. These include: a shift away from a conception of bail as a procedural issue predominantly concerned with securing the attendance of the accused at trial and the integrity of the trial, to the use of bail for crime prevention purposes; the diminishing force of the presumption of innocence; the framing of a false opposition between an individual interest in liberty and a public interest in safety; a shift from determination of the individual case by reference to its own particular circumstances to determination by its classification within pre‐set legislative categories of offence types and previous convictions; a double jeopardy effect arising in relation to people with previous convictions for which they have already been punished; and an unacknowledged preventive detention effect arising from the increased emphasis on risk. Many of these conceptual shifts are apparent in the explosion in bail conditions and the KPI‐driven policing of bail conditions and consequent rise in revocations, especially in relation to juveniles. The paper will conclude with a note on the NSW Government’s response to the NSW LRC Report in the form of a Bail Bill (2013) and brief speculation as to its likely effects.

Abstract:

Genetic variability in the strength and precision of fear memory is hypothesised to contribute to the etiology of anxiety disorders, including post-traumatic stress disorder. We generated fear-susceptible (F-S) or fear-resistant (F-R) phenotypes from an F8 advanced intercross line (AIL) of C57BL/6J and DBA/2J inbred mice by selective breeding. We identified specific traits underlying individual variability in Pavlovian conditioned fear learning and memory. Offspring of selected lines differed in the acquisition of conditioned fear. Furthermore, F-S mice showed greater cued fear memory and generalised fear in response to a novel context than F-R mice. F-S mice showed greater basal corticosterone levels and hypothalamic corticotrophin-releasing hormone (CRH) mRNA levels than F-R mice, consistent with higher hypothalamic-pituitary-adrenal (HPA) axis drive. Hypothalamic mineralocorticoid receptor and CRH receptor 1 mRNA levels were decreased in F-S mice as compared with F-R mice. Manganese-enhanced magnetic resonance imaging (MEMRI) was used to investigate basal levels of brain activity. MEMRI identified a pattern of increased brain activity in F-S mice that was driven primarily by the hippocampus and amygdala, indicating excessive limbic circuit activity in F-S mice as compared with F-R mice. Thus, selection pressure applied to the AIL population leads to the accumulation of heritable trait-relevant characteristics within each line, whereas non-behaviorally relevant traits remain distributed. Selected lines therefore minimise false-positive associations between behavioral phenotypes and physiology. We demonstrate that intrinsic differences in HPA axis function and limbic excitability contribute to phenotypic differences in the acquisition and consolidation of associative fear memory. Identification of system-wide traits predisposing to variability in fear memory may help direct more targeted and efficacious treatments for fear-related pathology. 
Through short-term selection in a B6D2 advanced intercross line we created mouse populations divergent for the retention of Pavlovian fear memory. Trait distinctions in HPA-axis drive and fear network circuitry could be made between naïve animals in the two lines. These data demonstrate underlying physiological and neurological differences between Fear-Susceptible and Fear-Resistant animals in a natural population. F-S and F-R mice may therefore be relevant to a spectrum of disorders including depression, anxiety disorders and PTSD for which altered fear processing occurs.