190 results for False anglicisms
Abstract:
In a classification problem we typically face two challenging issues: the diverse characteristics of negative documents, and the fact that many negative documents lie close to positive documents. It is therefore hard for a single classifier to clearly classify incoming documents into classes. This paper proposes a novel gradual problem-solving approach that creates a two-stage classifier. The first stage identifies reliable negatives (negative documents with weak positive characteristics) and concentrates on minimizing the number of false negative documents (recall-oriented). We use Rocchio, an existing recall-based classifier, for this stage. The second stage is a precision-oriented "fine tuning" that concentrates on minimizing the number of false positive documents by applying pattern (statistical phrase) mining techniques. In this stage, pattern-based scoring is followed by threshold setting (thresholding). Experiments show that our statistical-phrase-based two-stage classifier is promising.
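The abstract gives only a high-level description of the two stages; as a rough, hedged sketch of the idea (not the authors' implementation), the Python below uses a Rocchio-style centroid score as the recall-oriented first stage and a generic pattern score with a threshold as the precision-oriented second stage. All function names, feature vectors and thresholds are illustrative assumptions.

```python
import numpy as np

def rocchio_scores(doc_vecs, pos_centroid, neg_centroid):
    """Recall-oriented stage 1: score documents by similarity to the
    positive centroid minus similarity to the negative centroid."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.array([cos(d, pos_centroid) - cos(d, neg_centroid) for d in doc_vecs])

def two_stage_classify(doc_vecs, pattern_scores, pos_centroid, neg_centroid,
                       stage1_threshold=0.0, stage2_threshold=0.5):
    """Stage 1 (Rocchio) filters out reliable negatives; stage 2 applies a
    pattern-based score with thresholding to the remaining candidates.
    `pattern_scores` stands in for the paper's statistical-phrase scoring."""
    s1 = rocchio_scores(doc_vecs, pos_centroid, neg_centroid)
    candidates = s1 >= stage1_threshold                             # keep anything with positive traits (high recall)
    positives = candidates & (pattern_scores >= stage2_threshold)   # precision-oriented fine tuning
    return positives

# toy usage with random vectors standing in for TF-IDF features
rng = np.random.default_rng(0)
docs = rng.random((6, 4))
pos_c, neg_c = rng.random(4), rng.random(4)
pat = rng.random(6)
print(two_stage_classify(docs, pat, pos_c, neg_c))
```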
Abstract:
A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.
Abstract:
Acoustic sensors can be used to estimate species richness for vocal species such as birds. They can continuously and passively record large volumes of data over extended periods. These data must subsequently be analyzed to detect the presence of vocal species. Automated analysis of acoustic data for large numbers of species is complex and can be subject to high levels of false positive and false negative results. Manual analysis by experienced surveyors can produce accurate results; however, the time and effort required to process even small volumes of data can make manual analysis prohibitive. This study examined the use of sampling methods to reduce the cost of analyzing large volumes of acoustic sensor data, while retaining high levels of species detection accuracy. Utilizing five days of manually analyzed acoustic sensor data from four sites, we examined a range of sampling frequencies and methods including random, stratified, and biologically informed. We found that randomly selecting 120 one-minute samples from the three hours immediately following dawn over five days of recordings detected the highest number of species. On average, this method detected 62% of total species from 120 one-minute samples, compared to 34% of total species detected by traditional area search methods. Our results demonstrate that targeted sampling methods can provide an effective means for analyzing large volumes of acoustic sensor data efficiently and accurately. Development of automated and semi-automated techniques is required to assist in analyzing large volumes of acoustic sensor data.
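As a hedged illustration of the sampling strategy described above (not the study's analysis code), the sketch below draws 120 one-minute samples at random from the three hours after dawn across five days and reports what share of the full species list those minutes detect. The minute-to-species mapping and the toy data are assumptions made for the example.

```python
import random

def species_detected(minute_to_species, sampled_minutes):
    """Union of species heard in the sampled minutes."""
    detected = set()
    for m in sampled_minutes:
        detected |= minute_to_species.get(m, set())
    return detected

def dawn_random_sample(minute_to_species, days=5, dawn_window=180, n_samples=120, seed=1):
    """Randomly select n_samples one-minute recordings from the first
    `dawn_window` minutes after dawn, pooled across `days` days."""
    pool = [(day, minute) for day in range(days) for minute in range(dawn_window)]
    random.seed(seed)
    sample = random.sample(pool, n_samples)
    return species_detected(minute_to_species, sample)

# toy data: (day, minute) -> set of species codes heard in that minute
toy = {(d, m): {f"sp{(d * m) % 7}"} for d in range(5) for m in range(180)}
all_species = set().union(*toy.values())
found = dawn_random_sample(toy)
print(f"detected {len(found)}/{len(all_species)} species "
      f"({100 * len(found) / len(all_species):.0f}%)")
```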
Abstract:
Crashes on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing crashes will help address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists, and that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with the traffic flow data of the hour prior to each crash using an incident detection algorithm. Traffic flow trends (traffic speed/occupancy time series) revealed that crashes could be clustered with regard to the dominant traffic flow pattern prior to the crash. Using the k-means clustering method allowed the crashes to be clustered based on their flow trends rather than their distance. Four major trends were found in the clustering results. Based on these findings, crash likelihood estimation algorithms can be fine-tuned based on the monitored traffic flow conditions with a sliding window of 60 minutes, to increase the accuracy of the results and minimize false alarms.
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we will show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists. We will compare them with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with the corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. Using the K-Means clustering method with a Euclidean distance function allowed the crashes to be clustered. Then, normal-situation data was extracted based on the time distribution of crashes and was clustered for comparison with the "high risk" clusters. Five major trends were found in the clustering results for both high-risk and normal conditions. The study discovered that traffic regimes differed in their speed trends. Based on these findings, crash likelihood estimation models can be fine-tuned based on the monitored traffic conditions with a sliding window of 30 minutes, to increase the accuracy of the results and minimize false alarms.
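The clustering step described above can be illustrated with a short, hedged sketch: K-Means with a Euclidean distance applied to 30-minute pre-crash speed profiles, using scikit-learn and synthetic speed data in place of the Tokyo Metropolitan Expressway records. Five clusters only mirrors the number of trends reported; nothing here is the authors' actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is a 30-minute pre-crash speed profile (one value per minute),
# taken from the detector upstream of the crash site; values here are synthetic.
rng = np.random.default_rng(42)
n_crashes, window = 824, 30
speed_profiles = 80 - np.cumsum(rng.normal(0, 2, size=(n_crashes, window)), axis=1)

# K-Means with the Euclidean distance groups crashes by the shape of the
# pre-crash speed trend; five clusters mirrors the "five major trends".
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(speed_profiles)

for k in range(5):
    mean_trend = speed_profiles[labels == k].mean(axis=0)
    print(f"cluster {k}: {np.sum(labels == k)} crashes, "
          f"mean speed drop {mean_trend[0] - mean_trend[-1]:.1f} km/h over 30 min")
```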
Abstract:
Recently, vision-based systems have been deployed in professional sports to track the ball and players to enhance analysis of matches. Due to their unobtrusive nature, vision-based approaches are preferred to wearable sensors (e.g. GPS or RFID sensors) as they do not require players or balls to be instrumented prior to matches. Unfortunately, in continuous team sports where players need to be tracked continuously over long periods of time (e.g. 35 minutes in field hockey or 45 minutes in soccer), current vision-based tracking approaches are not reliable enough to provide fully automatic solutions. As such, human intervention is required to fix up missed or false detections. However, in instances where a human cannot intervene due to the sheer amount of data being generated, that data cannot be used because of the missing/noisy detections. In this paper, we investigate two representations based on raw player detections (and not tracking) which are immune to missed and false detections. Specifically, we show that both team occupancy maps and centroids can be used to detect team activities, while the occupancy maps can be used to retrieve specific team activities. An evaluation on over 8 hours of field hockey data captured at a recent international tournament demonstrates the validity of the proposed approach.
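As a hedged sketch of the two detection-based representations (not the paper's code), team occupancy maps and centroids can be computed directly from raw per-frame detections as below; the pitch dimensions, grid size and toy detections are assumptions.

```python
import numpy as np

def team_occupancy_map(detections, pitch=(91.4, 55.0), grid=(10, 6)):
    """2D histogram of raw player detections over a coarse pitch grid.
    Missed or false detections only perturb individual cells, so the
    representation degrades gracefully rather than breaking a track."""
    xs, ys = zip(*detections) if detections else ([], [])
    hist, _, _ = np.histogram2d(xs, ys, bins=grid,
                                range=[[0, pitch[0]], [0, pitch[1]]])
    total = hist.sum()
    return hist / total if total else hist   # normalise so frames are comparable

def team_centroid(detections):
    """Mean (x, y) position of all detections assigned to one team."""
    pts = np.asarray(detections, dtype=float)
    return pts.mean(axis=0)

# toy frame: (x, y) detections for one team on a field-hockey pitch
frame = [(10.2, 20.1), (35.0, 30.4), (60.5, 25.0), (62.0, 27.5)]
print(team_occupancy_map(frame).round(2))
print(team_centroid(frame))
```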
Abstract:
In this paper, we describe a method to represent and discover adversarial group behavior in a continuous domain. In comparison to other types of behavior, adversarial behavior is heavily structured, as the location of a player (or agent) is dependent both on their teammates and adversaries, in addition to the tactics or strategies of the team. We present a method which can exploit this relationship through the use of a spatiotemporal basis model. As players constantly change roles during a match, we show that employing a "role-based" representation, instead of one based on player "identity", can best exploit the playing structure. As vision-based systems currently do not provide perfect detection/tracking (e.g. missed or false detections), we show that our compact representation can effectively "denoise" erroneous detections as well as enable temporal analysis, which was previously prohibitive due to the dimensionality of the signal. To evaluate our approach, we used a fully instrumented field-hockey pitch with 8 fixed high-definition (HD) cameras, evaluated our approach on approximately 200,000 frames of data from a state-of-the-art real-time player detector, and compared it to manually labelled data.
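The abstract describes denoising erroneous detections by projecting role-ordered player positions onto a compact spatiotemporal basis; as a hedged stand-in for that model, the sketch below uses a plain low-rank (SVD/PCA) basis on synthetic trajectories, which is only an approximation of the paper's learned basis model.

```python
import numpy as np

def low_rank_denoise(frames, rank=4):
    """Project each frame of stacked (role-ordered) player coordinates onto
    the top-`rank` principal components of the sequence and reconstruct.
    A stand-in for the paper's learned spatiotemporal basis model."""
    mean = frames.mean(axis=0)
    centred = frames - mean
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:rank]                       # top-rank spatial basis vectors
    coeffs = centred @ basis.T              # per-frame low-dimensional code
    return coeffs @ basis + mean            # reconstruction = denoised frames

# synthetic example: 200 frames x (10 players * 2 coords), smooth motion + noise
t = np.linspace(0, 1, 200)[:, None]
clean = np.hstack([np.sin(2 * np.pi * (t + k / 10)) for k in range(20)])
noisy = clean + np.random.default_rng(0).normal(0, 0.3, clean.shape)
denoised = low_rank_denoise(noisy, rank=4)
print("noisy RMSE   :", np.sqrt(((noisy - clean) ** 2).mean()).round(3))
print("denoised RMSE:", np.sqrt(((denoised - clean) ** 2).mean()).round(3))
```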
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent: it is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated for text-dependent speaker verification using Hidden Markov Model based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validation of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further shown to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters - the number of instances and samples - serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
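The analytical error expressions are not reproduced in the abstract; one plausible, hedged reading, assuming an accept-only-if-every-stage-accepts rule in which a stage is passed when any of its attempts is accepted, and assuming statistically independent decisions, is sketched below. The fusion rule and the numbers are assumptions for illustration, not necessarily the thesis's exact formulation.

```python
def sequential_fusion_errors(stage_far, stage_frr, attempts):
    """Overall false-accept / false-reject rates for a sequential fusion in
    which the claimant must pass every stage (instance), and a stage is
    passed if any of `attempts` samples is accepted. Assumes statistically
    independent base-classifier decisions."""
    far = 1.0
    accept_genuine = 1.0
    for fa, fr in zip(stage_far, stage_frr):
        far *= 1 - (1 - fa) ** attempts       # impostor passes a stage if any attempt is falsely accepted
        accept_genuine *= 1 - fr ** attempts  # genuine user fails a stage only if every attempt is rejected
    return far, 1 - accept_genuine

# three digit-dependent stages with 5% base FAR / 10% base FRR, two attempts each
far, frr = sequential_fusion_errors([0.05] * 3, [0.10] * 3, attempts=2)
print(f"FAR = {far:.4f}, FRR = {frr:.4f}")
# Adding stages drives FAR down; adding attempts drives FRR down: the two trade-off knobs.
```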
Abstract:
Taking an empirical, critical approach to the problem of drugs, this thesis explores the interaction of drug policies and young people's drug use in Brisbane. The research argues that criminalising drug users does not usually prevent harmful drug use, but it can exacerbate harm and change how young people use drugs. Contemporary understandings of drug use as either recreational or addictive can create a false binary, and influence how illicit drugs are used. These understandings interact with policy responses to the drug problem, with some very real implications for the lived experiences of drug users. This research opens up possibilities for new directions in drug research and allows for a redefinition of drug related harm.
Abstract:
Numbers, rates and proportions of those remanded in custody have increased significantly in recent decades across a range of jurisdictions. In Australia they have doubled since the early 1980s, such that close to one in four prisoners is currently unconvicted. Taking NSW as a case study and drawing on the recent New South Wales Law Reform Commission Report on Bail (2012), this article will identify the key drivers of this increase in NSW, predominantly a form of legislative hyperactivity involving constant changes to the Bail Act 1978 (NSW), changes which remove or restrict the presumption in favour of bail for a wide range of offences. The article will then examine some of the conceptual, cultural and practice shifts underlying the increase. These include: a shift away from a conception of bail as a procedural issue predominantly concerned with securing the attendance of the accused at trial and the integrity of the trial, to the use of bail for crime prevention purposes; the diminishing force of the presumption of innocence; the framing of a false opposition between an individual interest in liberty and a public interest in safety; a shift from determination of the individual case by reference to its own particular circumstances to determination by its classification within pre‐set legislative categories of offence types and previous convictions; a double jeopardy effect arising in relation to people with previous convictions for which they have already been punished; and an unacknowledged preventive detention effect arising from the increased emphasis on risk. Many of these conceptual shifts are apparent in the explosion in bail conditions and the KPI‐driven policing of bail conditions and consequent rise in revocations, especially in relation to juveniles. The paper will conclude with a note on the NSW Government’s response to the NSW LRC Report in the form of a Bail Bill (2013) and brief speculation as to its likely effects.
Abstract:
Genetic variability in the strength and precision of fear memory is hypothesised to contribute to the etiology of anxiety disorders, including post-traumatic stress disorder. We generated fear-susceptible (F-S) or fear-resistant (F-R) phenotypes from an F8 advanced intercross line (AIL) of C57BL/6J and DBA/2J inbred mice by selective breeding. We identified specific traits underlying individual variability in Pavlovian conditioned fear learning and memory. Offspring of selected lines differed in the acquisition of conditioned fear. Furthermore, F-S mice showed greater cued fear memory and generalised fear in response to a novel context than F-R mice. F-S mice showed greater basal corticosterone levels and hypothalamic corticotrophin-releasing hormone (CRH) mRNA levels than F-R mice, consistent with higher hypothalamic-pituitary-adrenal (HPA) axis drive. Hypothalamic mineralocorticoid receptor and CRH receptor 1 mRNA levels were decreased in F-S mice as compared with F-R mice. Manganese-enhanced magnetic resonance imaging (MEMRI) was used to investigate basal levels of brain activity. MEMRI identified a pattern of increased brain activity in F-S mice that was driven primarily by the hippocampus and amygdala, indicating excessive limbic circuit activity in F-S mice as compared with F-R mice. Thus, selection pressure applied to the AIL population leads to the accumulation of heritable trait-relevant characteristics within each line, whereas non-behaviorally relevant traits remain distributed. Selected lines therefore minimise false-positive associations between behavioral phenotypes and physiology. We demonstrate that intrinsic differences in HPA axis function and limbic excitability contribute to phenotypic differences in the acquisition and consolidation of associative fear memory. Identification of system-wide traits predisposing to variability in fear memory may help direct more targeted and efficacious treatments for fear-related pathology. Through short-term selection in a B6D2 advanced intercross line we created mouse populations divergent for the retention of Pavlovian fear memory. Trait distinctions in HPA-axis drive and fear network circuitry could be made between naïve animals in the two lines. These data demonstrate underlying physiological and neurological differences between Fear-Susceptible and Fear-Resistant animals in a natural population. F-S and F-R mice may therefore be relevant to a spectrum of disorders including depression, anxiety disorders and PTSD for which altered fear processing occurs.
Abstract:
Suicide is a serious public health issue that results from an interaction between multiple risk factors, including individual vulnerabilities to complex feelings of hopelessness, fear, and stress. Although kinase genes have been implicated in fear and stress, including the consolidation and extinction of fearful memories, the expression profiles of those genes in the brains of suicide victims are less clear. Using gene expression microarray data from the Online Stanley Genomics Database and quantitative PCR, we investigated the expression profiles of multiple kinase genes, including the calcium calmodulin-dependent kinase (CAMK), the cyclin-dependent kinase, the mitogen-activated protein kinase (MAPK), and the protein kinase C (PKC), in the prefrontal cortex (PFC) of mood disorder patients who died by suicide (N = 45) and without suicide (N = 38). We also investigated the expression pattern of the same genes in the PFC of developing humans ranging in age from birth to 49 years (N = 46). The expression levels of CAMK2B, CDK5, MAPK9, and PRKCI were increased in the PFC of suicide victims as compared to non-suicide controls (false discovery rate, FDR-adjusted p < 0.05, fold change > 1.1). Those genes also showed changes in expression pattern during postnatal development (FDR-adjusted p < 0.05). These results suggest that multiple kinase genes undergo age-dependent changes in normal brains as well as pathological changes in suicide brains. These findings may provide an important link to protein kinases known to be important for the development of fear memory, stress-associated neural plasticity, and up-regulation in the PFC of suicide victims. More research is needed to better understand the functional role of these kinase genes that may be associated with the pathophysiology of suicide.
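The significance cutoff above is an FDR-adjusted p < 0.05 with fold change > 1.1; as a generic, hedged illustration of how such an adjustment can be computed (not the study's pipeline), a Benjamini-Hochberg correction looks like this. The raw p-values and fold changes in the example are toy numbers, and GENE_X is hypothetical.

```python
import numpy as np

def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg adjusted p-values (q-values) for FDR control."""
    p = np.asarray(pvalues, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)          # p_(i) * n / i
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]   # enforce monotonicity
    adjusted = np.empty(n)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

# toy screen: raw p-values and fold changes for a handful of kinase genes
genes = ["CAMK2B", "CDK5", "MAPK9", "PRKCI", "GENE_X"]
pvals = [0.001, 0.004, 0.012, 0.020, 0.40]
fold = [1.25, 1.18, 1.15, 1.12, 1.01]
q = benjamini_hochberg(pvals)
hits = [g for g, qv, fc in zip(genes, q, fold) if qv < 0.05 and fc > 1.1]
print(dict(zip(genes, q.round(3))), hits)
```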
Abstract:
Crashes that occur on motorways contribute to a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential basics in crash likelihood estimation studies and still require more attention and investigation. In this paper we will show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare them with normal traffic trends, and show that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and opens the path for new development approaches. The data for the analysis was extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that have been matched with the corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash occurrence. A K-Means clustering algorithm was applied to determine the dominant pre-crash traffic patterns. In the first phase of this research, traffic regimes were identified by analysing crashes and normal traffic situations using half an hour of speed data at locations upstream of the crashes. The second phase then investigated different combinations of speed-related risk indicators to distinguish crashes from normal traffic situations more precisely. Five major trends were found in the first phase for both high-risk and normal conditions, and the study discovered that traffic regimes differed in their speed trends. Moreover, the second phase shows that the spatiotemporal difference of speed is the best risk indicator among the combinations of speed-related risk indicators considered. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimize false alarms.
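As a hedged sketch of how a spatiotemporal speed-difference indicator of the kind described above might be computed (the station layout, window length and threshold are assumptions, not the authors' definition), consider two detector stations around a developing bottleneck:

```python
import numpy as np

def spatiotemporal_speed_difference(speed_upstream, speed_downstream):
    """A simple spatiotemporal risk indicator: the speed difference between
    an upstream and a downstream detector, tracked over the monitoring window.
    A large sustained difference suggests a shockwave / queue forming."""
    return np.asarray(speed_upstream) - np.asarray(speed_downstream)

def risk_flag(indicator, threshold=20.0):
    """Flag the window as high risk if the mean spatial speed gap over the
    last 5 minutes exceeds a threshold (km/h). Threshold is illustrative."""
    return float(np.mean(indicator[-5:])) > threshold

# 30 one-minute speed readings from two stations around a bottleneck
up = np.linspace(80, 75, 30)                                        # upstream still flowing freely
down = np.concatenate([np.full(20, 78), np.linspace(78, 40, 10)])   # queue builds downstream
diff = spatiotemporal_speed_difference(up, down)
print("mean gap last 5 min:", diff[-5:].mean().round(1), "-> high risk:", risk_flag(diff))
```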
Abstract:
During the last three decades, restorative justice has emerged in numerous localities around the world as an accepted approach to responding to crime. This article, which stems from a doctoral study on the history of restorative justice, provides a critical analysis of accepted histories of restorative practices. It revisits the celebrated historical texts of the restorative justice movement, and re-evaluates their contribution to the emergence of restorative justice measures. It traces the emergence of the term 'restorative justice', and reveals that it emerged in much earlier writings than is commonly thought to be the case by scholars in the restorative justice field. It also briefly considers some 'power struggles' in relation to producing an accepted version of the history of restorative justice, and scholars' attempts to 'rewrite history' to align with current views on restorative justice. Finally, this article argues that some histories of restorative justice selectively and inaccurately portray key figures from the history of criminology as restorative justice supporters. This, it is argued, gives restorative justice a false lineage and operates to legitimise the widespread adoption of restorative justice around the globe.