984 results for Fraud Detection
Abstract:
Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. To combat fraud, organizations have traditionally focused on prevention rather than detection. In this paper we present a role-mining-inspired approach to represent user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We have adapted an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour, and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph to represent relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both using SAP R/3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.
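Set-theoretic transaction profiles and their containment relationships can be sketched in a few lines. The sketch below is only illustrative, not the paper's algorithms: the activity log, the transaction codes, and the proper-subset edge rule are assumptions chosen to show the idea of deriving profiles and ordering them in a directed acyclic graph.

```python
# Illustrative sketch: derive transaction profiles as sets of transaction
# codes per user, then link distinct profiles by proper-subset containment.
# Proper containment is irreflexive and transitive, so the graph is acyclic.
from collections import defaultdict

# Hypothetical user activity records: (user, transaction code) pairs.
activity_log = [
    ("alice", "FB60"), ("alice", "F110"),
    ("bob",   "FB60"),
    ("carol", "FB60"), ("carol", "F110"), ("carol", "XK01"),
]

# 1. Each user's transaction profile is the set of codes they executed.
user_profiles = defaultdict(set)
for user, tcode in activity_log:
    user_profiles[user].add(tcode)

# 2. Distinct profiles (several users may share the same profile).
profiles = {frozenset(p) for p in user_profiles.values()}

# 3. DAG edges: profile A -> profile B whenever A is a proper subset of B.
edges = [(a, b) for a in profiles for b in profiles if a < b]

for a, b in edges:
    print(sorted(a), "->", sorted(b))
```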
Abstract:
ERP systems generally implement controls to prevent certain common kinds of fraud. However, there is also an imperative need to detect more sophisticated patterns of fraudulent activity, as evidenced by the legal requirement for company audits and the common incidence of fraud. This paper describes the design and implementation of a framework for detecting patterns of fraudulent activity in ERP systems. We include the description of six fraud scenarios and the process of specifying and detecting the occurrence of those scenarios in ERP user log data using the prototype software which we have developed. The test results for detecting these scenarios in log data have been verified and confirm the success of our approach, which can be generalized to other ERP systems.
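The abstract does not enumerate the six scenarios, so the sketch below only illustrates the general shape of scenario detection over user log data with one commonly cited pattern: the same user both creates a vendor and posts a payment to that vendor. The log schema and field names are assumptions, not the prototype's actual format.

```python
# Illustrative scenario detection over ERP user log entries (assumed schema).
log = [
    {"user": "u01", "action": "CREATE_VENDOR", "vendor_id": "V9", "ts": 1},
    {"user": "u01", "action": "POST_PAYMENT",  "vendor_id": "V9", "ts": 5},
    {"user": "u02", "action": "POST_PAYMENT",  "vendor_id": "V3", "ts": 7},
]

def detect_create_and_pay(entries):
    """Flag (user, vendor) pairs where one user both created and paid a vendor."""
    created = {(e["user"], e["vendor_id"]) for e in entries
               if e["action"] == "CREATE_VENDOR"}
    paid = {(e["user"], e["vendor_id"]) for e in entries
            if e["action"] == "POST_PAYMENT"}
    return sorted(created & paid)

print(detect_create_and_pay(log))  # [('u01', 'V9')]
```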
Abstract:
We find evidence that U.S. auditors increased their attention to fraud detection during or immediately after the economic contractions of the 20th century, based on a content analysis of the 12 volumes of the 20th-century auditing reference series Montgomery’s Auditing. Contractions, however, do not seem to have affected auditors’ attention to the formal goal of fraud detection. The study suggests that auditors’ aversion to the heightened risks of fraud during economic downturns leads them to focus more on fraud detection at those times regardless of the particular guidance in formal audit standards. This study is the first to find some evidence of a recession-influenced difference between fraud detection practices and formal fraud detection goals.
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
Fraud is a global problem that has demanded increasing attention as modern technology and communication have expanded. When statistical techniques are used to detect fraud, whether the fraud detection model is accurate enough to correctly classify a case as fraudulent or legitimate is a critical factor. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets and then combining them into a single predictive classification in order to improve classification accuracy. In this paper we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors. Via a large simulation study and various real datasets, we find that probabilistic networks are a strong modeling option, with high predictive capacity and a substantial gain from the bagging procedure when compared with traditional techniques.
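As a rough illustration of bagging, the sketch below bags a simple probabilistic classifier with scikit-learn. The paper's k-dependence probabilistic networks are not available in scikit-learn, so GaussianNB (the 0-dependence special case) stands in as the base learner, and the data are synthetic; both are assumptions made only to keep the example runnable.

```python
# Bagging sketch: many classifiers fitted to bootstrap replicates, combined
# into one prediction by voting. GaussianNB substitutes for a k-dependence
# network; the imbalanced synthetic data loosely mimic a fraud setting.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = GaussianNB().fit(X_tr, y_tr)
bagged = BaggingClassifier(GaussianNB(), n_estimators=50,
                           random_state=0).fit(X_tr, y_tr)

print("single model accuracy:", accuracy_score(y_te, single.predict(X_te)))
print("bagged models accuracy:", accuracy_score(y_te, bagged.predict(X_te)))
```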
Abstract:
Is Benford's law a good instrument to detect fraud in reports of statistical and scientific data? For a valid test the probability of "false positives" and "false negatives" has to be low. However, it is very doubtful whether the Benford distribution is an appropriate tool to discriminate between manipulated and non-manipulated estimates. Further research should focus more on the validity of the test and test results should be interpreted more carefully.
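A first-digit Benford check is easy to state concretely: compare the observed leading-digit frequencies with the Benford probabilities log10(1 + 1/d) using a goodness-of-fit test. The sketch below is a minimal version with a chi-square test; the data are synthetic, and, as the abstract cautions, a significant statistic is not by itself evidence of manipulation.

```python
# Minimal Benford first-digit test (chi-square goodness of fit).
import numpy as np
from scipy.stats import chisquare

def first_digits(values):
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10 ** np.floor(np.log10(v))).astype(int)

def benford_test(values):
    d = first_digits(values)
    observed = np.array([(d == k).sum() for k in range(1, 10)])
    expected = np.log10(1 + 1 / np.arange(1, 10)) * observed.sum()
    return chisquare(observed, expected)

# Synthetic data spanning whole decades, hence approximately Benford.
rng = np.random.default_rng(0)
data = 10 ** rng.uniform(0, 4, size=5000)
stat, p = benford_test(data)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```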
Abstract:
In today's technological age, fraud has become more complicated and increasingly difficult to detect, especially when it is collusive in nature. Different fraud surveys have shown that the median loss from collusive fraud is much greater than from fraud perpetrated by a single person. Despite its prevalence and potentially devastating effects, collusion is commonly overlooked as an organizational risk, and internal auditors often fail to proactively consider it in their fraud assessment and detection efforts. In this paper, we consider fraud scenarios with collusion. We present six potentially collusive fraudulent behaviors and show their detection process in an ERP system. We have enhanced our fraud detection framework to aggregate different sources of logs in order to detect communication, and have further made it system-agnostic, achieving portability and making it applicable to ERP systems in general.
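The core mechanism, correlating a transaction log with a communication log, can be sketched briefly. The duty pair, log schemas, and matching rule below are illustrative assumptions, not the framework's actual specification: two users who split a conflicting duty pair on the same document and who also communicated are flagged as collusion candidates.

```python
# Illustrative correlation of two log sources to find collusion candidates.
tx_log = [
    {"user": "u1", "action": "CREATE_PO",  "doc": "PO7"},
    {"user": "u2", "action": "APPROVE_PO", "doc": "PO7"},
]
comm_log = [{"from": "u1", "to": "u2"}]        # e.g. aggregated e-mail metadata

conflicting = ("CREATE_PO", "APPROVE_PO")      # one segregation-of-duties pair

def collusion_candidates(tx, comm, pair):
    talked = {frozenset((c["from"], c["to"])) for c in comm}
    by_doc = {}
    for e in tx:
        by_doc.setdefault(e["doc"], {})[e["action"]] = e["user"]
    hits = []
    for doc, acts in by_doc.items():
        if pair[0] in acts and pair[1] in acts:
            u, v = acts[pair[0]], acts[pair[1]]
            if u != v and frozenset((u, v)) in talked:
                hits.append((doc, u, v))
    return hits

print(collusion_candidates(tx_log, comm_log, conflicting))  # [('PO7', 'u1', 'u2')]
```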
Abstract:
Nowadays, fraud detection is important to avoid nontechnical energy losses. Various electric companies around the world have faced such losses, mainly from industrial and commercial consumers. This problem has traditionally been dealt with using artificial intelligence techniques, although their use can result in difficulties such as a high computational burden in the training phase and problems with parameter optimization. A recently developed pattern recognition technique called optimum-path forest (OPF), however, has been shown to be superior to state-of-the-art artificial intelligence techniques. In this paper, we propose to use OPF for nontechnical loss detection, and to apply its learning and pruning algorithms to this purpose. Comparisons against neural networks and other techniques demonstrate the robustness of OPF for the automatic identification of commercial losses.
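For readers unfamiliar with OPF, the sketch below is a compact, didactic supervised version, not the authors' implementation: prototypes are the endpoints of minimum-spanning-tree edges that join different classes, every other training sample is conquered by the prototype offering the cheapest path under the f_max cost (the largest arc on the path), and a test sample takes the label of the training node that extends a path to it most cheaply.

```python
# Didactic supervised optimum-path forest (OPF) sketch on synthetic data.
import heapq
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def opf_fit(X, y):
    d = cdist(X, X)                               # complete graph of distances
    mst = minimum_spanning_tree(d).toarray()
    prototypes = set()                            # MST edges joining two classes
    for i, j in zip(*np.nonzero(mst)):
        if y[i] != y[j]:
            prototypes.update((i, j))
    cost = np.full(len(X), np.inf)
    label = y.copy()
    heap = []
    for p in prototypes:
        cost[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    done = np.zeros(len(X), dtype=bool)
    while heap:                                   # Dijkstra-like with f_max cost
        c, u = heapq.heappop(heap)
        if done[u]:
            continue
        done[u] = True
        for v in range(len(X)):
            new_c = max(c, d[u, v])               # largest arc on the path
            if new_c < cost[v]:
                cost[v], label[v] = new_c, label[u]
                heapq.heappush(heap, (new_c, v))
    return X, cost, label

def opf_predict(model, X_test):
    X_tr, cost, label = model
    d = cdist(X_test, X_tr)
    return label[np.argmin(np.maximum(d, cost), axis=1)]

# Two synthetic Gaussian blobs stand in for real consumption data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = opf_fit(X, y)
print("training accuracy:", (opf_predict(model, X) == y).mean())
```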
Abstract:
2000 Mathematics Subject Classification: 62H30, 62M10, 62M20, 62P20, 94A13.
Abstract:
A Billing Mediation Platform (BMP) in the telecommunications industry is used to process real-time streams of Call Detail Records (CDRs), which can amount to a massive number per day. The records generated by a BMP can be used for billing purposes, fraud detection, spam filtering, traffic analysis, and churn forecasting. Several of these applications are distinguished by real-time processing requiring low-latency analysis of CDRs. Testing such a platform involves diverse aspects, such as stress testing of analytics for scalability and what-if scenarios, which require generating CDRs with realistic volumetric and other appropriate properties. The approach of this project is to build a user-friendly and flexible application which assists the development department in testing their billing solution as needed. Such generator projects have been around for a while; the only differences are the portions they cover and the purposes they are used for. This paper proposes a simulator application to test BMPs by simulating CDRs. The simulated CDRs are modifiable based on user requirements and represent real-world data.
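A CDR generator of this kind reduces to emitting records with a configurable schema, volume, and value distributions. The sketch below is a minimal stand-alone example; the field names, number format, and distributions are illustrative assumptions rather than the project's actual schema.

```python
# Minimal synthetic CDR generator (assumed schema and distributions).
import csv
import random
from datetime import datetime, timedelta

def generate_cdrs(n, start, out_path, avg_duration_s=120):
    """Write n synthetic call detail records spread over one day."""
    fields = ["caller", "callee", "start_time", "duration_s", "cell_id"]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for _ in range(n):
            ts = start + timedelta(seconds=random.uniform(0, 86400))
            writer.writerow({
                "caller": f"+4670{random.randint(1000000, 9999999)}",
                "callee": f"+4670{random.randint(1000000, 9999999)}",
                "start_time": ts.isoformat(timespec="seconds"),
                "duration_s": max(1, int(random.expovariate(1 / avg_duration_s))),
                "cell_id": random.randint(1, 500),
            })

generate_cdrs(10_000, datetime(2024, 1, 1), "cdrs.csv")
```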
Abstract:
This study aims to verify whether the procedures used by Internal Audit to detect fraud at a privately held health plan operator allowed the collection of reliable and sufficient evidence to support the auditors' conclusions about the facts reported in the internal audit reports. The investigative strategy adopted was a single case study. The techniques used throughout the research were documentary and content analysis, based on the objectives proposed in the study and on its theoretical foundation. The research focused on the analysis of internal audit reports that recorded occurrences of fraud at the company studied, issued in 2010, 2011 and 2012; it also describes the routines and operational practices related to the audit department's work, which contributed to a better understanding of the data and of the study's results. The main findings show that the procedures used by Internal Audit to detect fraud allowed the collection of audit evidence that was reliable and sufficient to support the auditors' conclusions. The results also indicate that there is no standard pattern in the use of audit procedures: depending on the type of fraud and the objective, the internal auditor must define which audit procedures should be used to obtain reliable and sufficient audit evidence to support their conclusions.
Abstract:
The CTC algorithm (Consolidated Tree Construction algorithm) is a machine learning paradigm that was designed to solve a class imbalance problem: a fraud detection problem in the area of car insurance [1] where, in addition, an explanation of the classification was required. The algorithm is based on a decision tree construction algorithm, in this case the well-known C4.5, but it extracts knowledge from data using a set of samples instead of a single one as C4.5 does. In contrast to other methodologies that use several samples to build a classifier, such as bagging, the CTC builds a single tree and, as a consequence, obtains comprehensible classifiers. The main motivation of this implementation is to make a public, available implementation of the CTC algorithm. To this end, we have implemented the algorithm within the well-known WEKA data mining environment (http://www.cs.waikato.ac.nz/ml/weka/). WEKA is an open source project that contains a collection of machine learning algorithms written in Java for data mining tasks. J48 is the implementation of the C4.5 algorithm within the WEKA package. We have named our implementation of the CTC algorithm, based on the J48 Java class, J48Consolidated.
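The consolidation step that distinguishes CTC from bagging, one tree built from a majority-voted split across resamples rather than a vote over many trees, can be shown in miniature. The sketch below consolidates only the root split and uses scikit-learn's CART stumps in place of C4.5, so it is only an approximation of the idea, not the J48Consolidated implementation.

```python
# CTC consolidation idea in miniature: resamples vote on a single split.
from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = load_breast_cancer(return_X_y=True)

votes = Counter()
for seed in range(25):                             # 25 bootstrap resamples
    Xs, ys = resample(X, y, random_state=seed)
    stump = DecisionTreeClassifier(max_depth=1, random_state=0).fit(Xs, ys)
    votes[int(stump.tree_.feature[0])] += 1        # feature chosen at the root

feature, count = votes.most_common(1)[0]
print(f"consolidated root split: feature {feature} ({count}/25 votes)")
```

The real CTC applies this vote recursively at every node while the tree is being built, which is what keeps the final classifier a single, comprehensible tree.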
Abstract:
The application of chemometrics in food science has revolutionized the field by allowing the creation of models able to automate a broad range of applications, such as food authenticity assessment and food fraud detection. In order to create effective and general models able to address the complexity of real-life problems, a large number of varied training samples is required: the training dataset has to cover all possible types of sample and instrument variability. However, acquiring such varied samples is a time-consuming and costly process, and collecting samples representative of real-world variation is not always possible, especially in some application fields. To address this problem, a novel framework for the application of data augmentation techniques to spectroscopic data has been designed and implemented. It is a carefully designed pipeline of four complementary and independent blocks which can be finely tuned depending on the desired variance for enhancing the model's robustness: (a) blending spectra, (b) changing the baseline, (c) shifting along the x axis, and (d) adding random noise.
This novel data augmentation solution has been tested in order to obtain a highly efficient, generalised classification model based on spectroscopic data. Fourier transform mid-infrared (FT-IR) spectroscopic data of eleven pure vegetable oils (106 admixtures), used for the rapid identification of vegetable oil species in mixtures of oils, served as a case study to demonstrate the influence of this pioneering approach in chemometrics, obtaining a 10% improvement in classification performance, which is crucial in some food adulteration applications.
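The four augmentation blocks are simple spectrum-level transformations, so they can be sketched directly in numpy. The parameter ranges, the synthetic spectra, and the function names below are illustrative assumptions; the framework's actual tuning of each block is not reproduced here.

```python
# Illustrative numpy versions of the four augmentation blocks:
# (a) blending spectra, (b) baseline change, (c) x-axis shift, (d) random noise.
import numpy as np

rng = np.random.default_rng(0)

def blend(s1, s2, alpha=None):
    """(a) Convex combination of two spectra."""
    alpha = rng.uniform(0.3, 0.7) if alpha is None else alpha
    return alpha * s1 + (1 - alpha) * s2

def change_baseline(s, offset=0.01, tilt=0.005):
    """(b) Add a small random offset plus a linear tilt."""
    x = np.linspace(0.0, 1.0, len(s))
    return s + offset * rng.normal() + tilt * rng.normal() * x

def shift_x(s, max_shift=3):
    """(c) Shift the spectrum a few points along the x axis."""
    return np.roll(s, rng.integers(-max_shift, max_shift + 1))

def add_noise(s, scale=0.002):
    """(d) Add Gaussian noise proportional to the spectrum's range."""
    return s + rng.normal(0.0, scale * np.ptp(s), size=s.shape)

# Two synthetic single-band "spectra" stand in for FT-IR measurements.
wavenumbers = np.linspace(600, 4000, 1700)
s_a = np.exp(-((wavenumbers - 1745) / 15) ** 2)
s_b = np.exp(-((wavenumbers - 2900) / 30) ** 2)
augmented = add_noise(shift_x(change_baseline(blend(s_a, s_b))))
```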