40 results for SOCIETY CLASSIFICATION CRITERIA

in Deakin Research Online - Australia


Relevance:

40.00%

Publisher:

Abstract:

Epoetin-δ (Dynepo™, Shire Pharmaceuticals, Basingstoke, UK) is a synthetic form of erythropoietin (EPO) whose resemblance to endogenous EPO makes it hard to identify using the classical identification criteria. Urine samples collected from six healthy volunteers treated with epoetin-δ injections and from a control population were immuno-purified and analyzed with the usual IEF method. On the basis of the integrated EPO profiles, a linear multivariate model was computed for discriminant analysis. For each sample, a pattern classification algorithm returned a bands distribution and intensity score (bands intensity score) indicating how representative the sample is of one of the two classes, positive or negative. Effort profiles were also integrated into the model. The method yielded a good sensitivity versus specificity relation and was used to determine the detection window of the molecule following multiple injections. The bands intensity score, which can be generalized to epoetin-α and epoetin-β, is proposed as an alternative criterion and supplementary evidence for the identification of EPO abuse.
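
Purely as an illustration of the kind of discriminant scoring described above (not the authors' actual pipeline), the sketch below fits a linear discriminant model to hypothetical band-intensity profiles and returns a probability-like "bands intensity score" for an unknown sample; the band count, the synthetic data, and the use of scikit-learn are assumptions.

```python
# Illustrative sketch: scoring IEF band-intensity profiles with a linear
# discriminant model. Band counts, sample data, and the use of scikit-learn
# are assumptions for demonstration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a profile of relative intensities
# for a fixed number of IEF bands; 1 = treated (positive), 0 = control (negative).
n_bands = 12
positives = rng.normal(loc=0.6, scale=0.1, size=(30, n_bands))
negatives = rng.normal(loc=0.4, scale=0.1, size=(30, n_bands))
X = np.vstack([positives, negatives])
y = np.array([1] * 30 + [0] * 30)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# "Bands intensity score": probability that an unseen profile belongs to
# the positive (treated) class.
unknown_profile = rng.normal(loc=0.55, scale=0.1, size=(1, n_bands))
score = lda.predict_proba(unknown_profile)[0, 1]
print(f"bands intensity score (P(positive)): {score:.3f}")
```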

Relevance:

40.00%

Publisher:

Abstract:

The potential for conservation of individual species has been greatly advanced by the International Union for Conservation of Nature's (IUCN) development of objective, repeatable, and transparent criteria for assessing extinction risk that explicitly separate risk assessment from priority setting. At the IV World Conservation Congress in 2008, the process began to develop and implement comparable global standards for ecosystems. A working group established by the IUCN has begun formulating a system of quantitative categories and criteria, analogous to those used for species, for assigning levels of threat to ecosystems at local, regional, and global levels. A final system will require definitions of ecosystems; quantification of ecosystem status; identification of the stages of degradation and loss of ecosystems; proxy measures of risk (criteria); classification thresholds for these criteria; and standardized methods for performing assessments. The system will need to reflect the degree and rate of change in an ecosystem's extent, composition, structure, and function, and have its conceptual roots in ecological theory and empirical research. On the basis of these requirements and the hypothesis that ecosystem risk is a function of the risk of its component species, we propose a set of four criteria: recent declines in distribution or ecological function, historical total loss in distribution or ecological function, small distribution combined with decline, or very small distribution. Most work has focused on terrestrial ecosystems, but comparable thresholds and criteria for freshwater and marine ecosystems are also needed. These are the first steps in an international consultation process that will lead to a unified proposal to be presented at the next World Conservation Congress in 2012.
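
As a purely hypothetical illustration of how quantitative criteria and classification thresholds of this kind could be combined into a categorical assessment, the sketch below assigns a coarse threat category from a few criterion values; the threshold numbers and category cut-offs are invented placeholders, not the thresholds proposed by the IUCN working group.

```python
# Hypothetical sketch of a rule-based ecosystem risk assessment.
# The criteria echo those proposed in the abstract, but every threshold
# value here is an invented placeholder, not an IUCN-endorsed number.
def assess_ecosystem(recent_decline_pct: float,
                     historical_loss_pct: float,
                     extent_km2: float) -> str:
    """Return a coarse threat category from criterion values (toy thresholds)."""
    if recent_decline_pct >= 80 or historical_loss_pct >= 90 or extent_km2 < 100:
        return "Critically Endangered (hypothetical thresholds)"
    if recent_decline_pct >= 50 or historical_loss_pct >= 70 or extent_km2 < 1000:
        return "Endangered (hypothetical thresholds)"
    if recent_decline_pct >= 30 or historical_loss_pct >= 50 or extent_km2 < 10000:
        return "Vulnerable (hypothetical thresholds)"
    return "Least Concern (hypothetical thresholds)"

print(assess_ecosystem(recent_decline_pct=55, historical_loss_pct=40, extent_km2=2500))
```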

Relevance:

40.00%

Publisher:

Abstract:

Extracting knowledge from the transaction records and the personal data of credit card holders has great profit potential for the banking industry. The challenge is to detect and predict bankrupt customers and to keep and recruit the profitable ones. However, grouping and targeting credit card customers by traditional data-driven mining often does not directly meet the needs of the banking industry, because data-driven mining automatically generates classification outputs that are imprecise, meaningless, and beyond users' control. In this paper, we provide a novel domain-driven classification method that takes advantage of multiple criteria and multiple constraint-level programming for intelligent credit scoring. The method involves credit scoring to produce a set of customers' scores, making the classification results actionable and controllable by human interaction during the scoring process. Domain knowledge and experts' experience parameters are built into the criteria and constraint functions of mathematical programming, and human-machine conversation is employed to generate an efficient and precise solution. Experiments based on various data sets validated the effectiveness and efficiency of the proposed methods. © 2006 IEEE.
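
A toy sketch of the general idea of scoring customers with mathematical programming is given below: it solves a small linear program that chooses attribute weights and a cut-off so that hypothetical "good" customers score above the boundary and "bad" customers below it, minimising total boundary violation. This is a simplified stand-in for the multiple criteria, multiple constraint-level formulation in the paper; the data, margin, and bounds are assumptions.

```python
# Simplified, illustrative credit-scoring sketch: a linear program chooses
# attribute weights w and a cut-off b, minimising the total slack needed to
# keep "good" customers above b and "bad" customers below it. The data and
# constraint settings are invented for demonstration only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
good = rng.normal(0.7, 0.15, size=(20, 3))   # hypothetical attribute vectors
bad = rng.normal(0.3, 0.15, size=(20, 3))

n_good, n_bad, n_attr = len(good), len(bad), good.shape[1]
n_var = n_attr + 1 + n_good + n_bad          # w (3), b (1), one slack per customer

# Objective: minimise the sum of slack (boundary violation) variables.
c = np.zeros(n_var)
c[n_attr + 1:] = 1.0

# good_i . w - b + slack_i >= margin   ->   -(g.w) + b - s_i <= -margin
# bad_j  . w - b - slack_j <= -margin  ->    (d.w) - b - s_j <= -margin
margin = 0.1
A_ub, b_ub = [], []
for i, g in enumerate(good):
    row = np.zeros(n_var)
    row[:n_attr] = -g
    row[n_attr] = 1.0
    row[n_attr + 1 + i] = -1.0
    A_ub.append(row)
    b_ub.append(-margin)
for j, d in enumerate(bad):
    row = np.zeros(n_var)
    row[:n_attr] = d
    row[n_attr] = -1.0
    row[n_attr + 1 + n_good + j] = -1.0
    A_ub.append(row)
    b_ub.append(-margin)

bounds = [(-1, 1)] * n_attr + [(-1, 1)] + [(0, None)] * (n_good + n_bad)
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
w, b = res.x[:n_attr], res.x[n_attr]
print("weights:", np.round(w, 3), "cut-off:", round(b, 3))
print("example customer score:", round(good[0] @ w, 3))
```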

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in technology and new software applications are steadily transforming human civilization into what is called the Information Society. This is manifested by the new terminology appearing in our daily activities. E-Business, E-Government, E-Learning, E-Contracting, and E-Voting are just a few of the ever-growing list of new terms that are shaping the Information Society. Nonetheless, as "Information" gains more prominence in our society, the task of securing it against all forms of threats becomes a vital and crucial undertaking. Addressing the various security issues confronting our new Information Society, this volume is divided into 13 parts covering the following topics: Information Security Management; Standards of Information Security; Threats and Attacks to Information; Education and Curriculum for Information Security; Social and Ethical Aspects of Information Security; Information Security Services; Multilateral Security; Applications of Information Security; Infrastructure for Information Security; Advanced Topics in Security; Legislation for Information Security; Modeling and Analysis for Information Security; and Tools for Information Security. Security in the Information Society: Visions and Perspectives comprises the proceedings of the 17th International Conference on Information Security (SEC2002), which was sponsored by the International Federation for Information Processing (IFIP) and jointly organized by IFIP Technical Committee 11 and the Department of Electronics and Electrical Communications of Cairo University. The conference was held in May 2002 in Cairo, Egypt. This volume is essential reading for scholars, researchers, and practitioners interested in keeping pace with the ever-growing field of Information Security.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a performance study of four statistical test algorithms used to identify smooth image blocks in order to filter the reconstructed image in video coding. The four algorithms considered are the Coefficient of Variation (CV), the Exponential Entropy of Pal and Pal (E), Shannon's (Logarithmic) Entropy (H), and Quadratic Entropy (Q). These statistical algorithms are employed to distinguish between smooth and textured blocks in a reconstructed image. Linear filtering is carried out on the smooth blocks of the image to reduce the blocking artefact. The rationale behind applying the filter on the smooth blocks only is that the blocking artefact is visually more prominent in the smooth regions of an image than in the textured regions.
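
For illustration, the sketch below computes two of the statistics named above, the coefficient of variation and Shannon's entropy, for an 8x8 block and labels the block smooth when both are low; the thresholds and the combination rule are assumptions, not the paper's exact decision criteria.

```python
# Illustrative sketch: classify 8x8 blocks as smooth or textured using the
# coefficient of variation (CV) and Shannon (logarithmic) entropy (H).
# Threshold values are placeholders chosen only for this demonstration.
import numpy as np

def coefficient_of_variation(block: np.ndarray) -> float:
    mean = block.mean()
    return block.std() / mean if mean != 0 else 0.0

def shannon_entropy(block: np.ndarray, bins: int = 16) -> float:
    hist, _ = np.histogram(block, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def is_smooth(block: np.ndarray, cv_thresh: float = 0.05, h_thresh: float = 1.0) -> bool:
    # A block is treated as smooth when both statistics are low; only smooth
    # blocks would then be passed to the deblocking (linear) filter.
    return coefficient_of_variation(block) < cv_thresh and shannon_entropy(block) < h_thresh

flat = np.full((8, 8), 120.0)                                        # nearly uniform block
textured = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(float)
print(is_smooth(flat), is_smooth(textured))                          # expect: True False
```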

Relevance:

30.00%

Publisher:

Abstract:

Decision support tools will be useful in guiding regions to sustainability. These need to be simple but effective at identifying, for regional managers, areas most in need of initiatives to progress sustainability. Multiple criteria analysis (MCA) is often used as a decision support tool for a wide range of applications. This method allows many criteria to be considered at one time. It does this by giving a ranking of possible options based on how closely each option meets the criteria. Thus, it is suited to the assessment of regional sustainability, as it can consider a number of indicators simultaneously and demonstrates how sustainability can vary at small scales across the region. Coupling MCA with GIS to produce maps allows this analysis to become visual, giving the manager a picture of sustainability across the region. To do this, each indicator is standardised to a common scale so that it can be compared with other indicators. A weighting is then applied to each indicator to calculate a weighted summation for each area in the region. This paper argues that this is the critical step in developing a useful decision support tool. A study being conducted in south west Victoria demonstrates that the weights chosen can have a dramatic impact on the results of the sustainability assessment. It is therefore imperative that careful consideration be given to determining indicator weights in a way that is objective and fully considers the impact of each indicator on regional sustainability.
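
A minimal sketch of the standardisation and weighted-summation step is shown below; the indicator names, values, and weights are invented purely to show the mechanics.

```python
# Minimal sketch of the MCA step described above: standardise each indicator
# to a common 0-1 scale, then compute a weighted summation per area.
# All indicator values and weights are hypothetical.
import numpy as np

areas = ["Area A", "Area B", "Area C"]
# rows = areas, columns = indicators (e.g. water quality, soil loss, income)
raw = np.array([
    [42.0, 1.8, 30500.0],
    [55.0, 0.9, 27000.0],
    [38.0, 2.4, 33000.0],
])
weights = np.array([0.5, 0.3, 0.2])   # the choice of weights is the critical step

# Min-max standardisation so every indicator lies on the same 0-1 scale.
standardised = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

scores = standardised @ weights       # weighted summation per area
for area, score in zip(areas, scores):
    print(f"{area}: {score:.2f}")
```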

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an innovative email categorization technique using serialized multi-stage classification ensembles. Many approaches are used in practice for email categorization to control the menace of spam emails in different ways. Content-based email categorization employs filtering techniques that use classification algorithms to learn to predict spam emails given a corpus of training emails. This process achieves substantial performance with some false positive (FP) trade-offs. It has been studied and investigated with different classification algorithms and found that the outputs of the classifiers vary from one classifier to another on the same email corpora. In this paper we propose a multi-stage classification technique using different popular learning algorithms with an analyser, which reduces FP problems substantially and increases classification accuracy compared to similar existing techniques.
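
The sketch below shows one plausible reading of a serialized two-stage ensemble: a first classifier labels an email only when it is confident, and uncertain messages are deferred to a second classifier. The specific learners, features, and confidence threshold are assumptions rather than the configuration used in the paper.

```python
# Illustrative two-stage serialized ensemble for email categorization.
# Stage 1 decides only when confident; uncertain emails fall through to stage 2.
# Classifiers, features, thresholds, and data are assumptions for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

train_texts = ["win money now", "meeting at noon", "cheap pills offer", "project report attached"]
train_labels = [1, 0, 1, 0]            # 1 = spam, 0 = legitimate

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts)

stage1 = MultinomialNB().fit(X_train, train_labels)
stage2 = LogisticRegression().fit(X_train, train_labels)

def classify(text: str, threshold: float = 0.8) -> int:
    x = vec.transform([text])
    p_spam = stage1.predict_proba(x)[0, 1]
    if p_spam >= threshold or p_spam <= 1 - threshold:
        return int(p_spam >= threshold)      # stage 1 is confident enough
    return int(stage2.predict(x)[0])         # otherwise defer to stage 2

print(classify("cheap offer, win now"))
```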

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose a new technique of email classification based on grey list (GL) analysis of user emails. The technique is based on the analysis of the output emails of an integrated model which uses multiple classifiers of statistical learning algorithms. The GL is a list of classifier outputs that are considered neither true positive (TP) nor true negative (TN), but fall in between. Much work has been done to filter spam from legitimate emails using classification algorithms, and substantial performance has been achieved with some false positive (FP) trade-offs. In spam detection, the FP problem is sometimes unacceptable. The proposed technique provides a list of output emails, called the "grey list (GL)", to the analyser for making decisions about the status of these emails. It has been shown that the performance of our proposed technique for email classification is much better than that of existing systems in reducing FP problems and improving accuracy.
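
A minimal sketch of the grey-list idea, assuming a single spam-probability score per email: messages whose score falls between an upper and a lower threshold are neither accepted nor rejected but placed on the grey list for the analyser. The thresholds and scores are illustrative assumptions.

```python
# Minimal grey-list triage sketch: confident decisions are taken directly,
# while uncertain emails are deferred to the analyser via the grey list.
# Threshold values and message scores are invented for illustration.
def triage(probability_spam: float, upper: float = 0.9, lower: float = 0.1) -> str:
    if probability_spam >= upper:
        return "spam"            # treated as a true-positive decision
    if probability_spam <= lower:
        return "legitimate"      # treated as a true-negative decision
    return "grey list"           # deferred to the analyser for a decision

scores = {"msg-1": 0.97, "msg-2": 0.04, "msg-3": 0.55}
for msg_id, p in scores.items():
    print(msg_id, "->", triage(p))
```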

Relevance:

30.00%

Publisher:

Abstract:

Due to the repetitive and lengthy nature of sport video, automatic content-based summarization is essential to extract a more compact and interesting representation of it. State-of-the-art approaches have confirmed that high-level semantics in sport video can be detected based on the occurrences of specific audio and visual features (also known as cinematic features). However, most of them still rely heavily on manual investigation to construct the algorithms for highlight detection. Thus, the primary aim of this paper is to demonstrate how the statistics of cinematic features within play-break sequences can be used to construct highlight classification rules less subjectively. To verify the effectiveness of our algorithms, we present experimental results using six AFL (Australian Football League) matches from different broadcasters. At this stage, we have successfully classified each play-break sequence into goal, behind, mark, tackle, and non-highlight. These events are chosen since they are commonly used for broadcast AFL highlights. The proposed algorithms have also been tested successfully with soccer video.
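
As an illustration of building classification rules from the statistics of cinematic features within play-break sequences, the sketch below derives per-class mean feature vectors from hypothetical training sequences and assigns a new sequence to the nearest class; the feature names, values, and the nearest-centroid rule are assumptions, not the rules constructed in the paper.

```python
# Illustrative sketch: derive per-class feature statistics (centroids) from
# hypothetical play-break sequences and classify a new sequence by nearest
# centroid. Feature names, values, and the decision rule are assumptions.
import numpy as np

# Hypothetical features per play-break sequence:
# [excitement duration (s), break duration (s), near-goal-view ratio]
training = {
    "goal":          np.array([[12.0, 40.0, 0.8], [14.0, 45.0, 0.9]]),
    "behind":        np.array([[6.0, 20.0, 0.7], [7.0, 25.0, 0.6]]),
    "mark":          np.array([[4.0, 10.0, 0.3], [5.0, 12.0, 0.4]]),
    "non-highlight": np.array([[1.0, 5.0, 0.1], [2.0, 6.0, 0.2]]),
}

# "Rules" here are simply per-class mean feature vectors.
centroids = {label: feats.mean(axis=0) for label, feats in training.items()}

def classify_sequence(features: np.ndarray) -> str:
    return min(centroids, key=lambda lbl: np.linalg.norm(features - centroids[lbl]))

print(classify_sequence(np.array([13.0, 42.0, 0.85])))   # expected: goal
```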

Relevance:

30.00%

Publisher:

Abstract:

This paper aims to automatically extract and classify self-consumable sport video highlights. For this purpose, we emphasize the benefits of using play-break sequences as the effective inputs for an HMM-based classifier. The HMM is used to model the stochastic pattern of high-level states during specific sport highlights, which correspond to the sequence of generic audio-visual measurements extracted from raw video data. This paper uses soccer as the domain of study, focusing on the extraction and classification of goal, shot and foul highlights. Experimental work using 183 play-break sequences from 6 soccer matches is presented to demonstrate the performance of our proposed scheme.
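
A minimal sketch of likelihood-based HMM classification of a play-break sequence is given below: one small discrete HMM per highlight class, with an unseen observation sequence assigned to the class whose model gives the highest forward-algorithm likelihood. All parameter values, observation symbols, and the two-state topology are invented for illustration.

```python
# Minimal HMM-classification sketch: one discrete HMM per highlight class;
# an observation sequence is assigned to the class with the highest
# forward-algorithm likelihood. All parameters below are illustrative only.
import numpy as np

def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) via the forward algorithm for discrete observations."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

# Hypothetical observation symbols: 0 = far view, 1 = close-up, 2 = replay, 3 = crowd
models = {
    "goal": (
        np.array([0.8, 0.2]),                                      # start probabilities
        np.array([[0.6, 0.4], [0.3, 0.7]]),                        # state transitions
        np.array([[0.5, 0.2, 0.2, 0.1], [0.1, 0.3, 0.4, 0.2]]),    # emissions
    ),
    "foul": (
        np.array([0.5, 0.5]),
        np.array([[0.7, 0.3], [0.4, 0.6]]),
        np.array([[0.2, 0.5, 0.1, 0.2], [0.3, 0.4, 0.2, 0.1]]),
    ),
}

sequence = [0, 1, 2, 2, 3]   # observations from one play-break sequence
best = max(models, key=lambda lbl: forward_likelihood(sequence, *models[lbl]))
print("classified as:", best)
```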

Relevance:

30.00%

Publisher:

Abstract:

A major challenge facing freshwater ecologists and managers is the development of models that link stream ecological condition to catchment scale effects, such as land use. Previous attempts to make such models have followed two general approaches. The bottom-up approach employs mechanistic models, which can quickly become too complex to be useful. The top-down approach employs empirical models derived from large data sets, and has often suffered from large amounts of unexplained variation in stream condition.

We believe that the lack of success of both modelling approaches may be at least partly explained by scientists considering too wide a breadth of catchment type. Thus, we believe that by stratifying large sets of catchments into groups of similar types prior to modelling, both types of models may be improved. This paper describes preliminary work using a Bayesian classification software package, ‘Autoclass’ (Cheeseman and Stutz 1996) to create classes of catchments within the Murray Darling Basin based on physiographic data.

Autoclass uses a model-based classification method that employs finite mixture modelling and trades off model fit versus complexity, leading to a parsimonious solution. The software provides information on the posterior probability that the classification is ‘correct’ and also probabilities for alternative classifications. The importance of each attribute in defining the individual classes is calculated and presented, assisting description of the classes. Each case is ‘assigned’ to a class based on membership probability, but the probability of membership of other classes is also provided. This feature deals very well with cases that do not fit neatly into a larger class. Lastly, Autoclass requires the user to specify the measurement error of continuous variables.
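
Autoclass itself is a specific package, but the finite mixture modelling it performs can be illustrated with an analogous sketch: a Gaussian mixture model fitted to hypothetical catchment attributes, with BIC used to trade model fit against complexity and posterior membership probabilities reported per catchment. The attribute values and the use of scikit-learn are assumptions, not the actual Autoclass run described here.

```python
# Analogous sketch (not Autoclass itself): fit Gaussian finite mixture models
# to hypothetical catchment attributes, pick the class count by BIC (fit vs
# complexity trade-off), and report per-catchment membership probabilities.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical catchment attributes: [mean annual rainfall (mm), mean slope (%)]
dry_flat = rng.normal([400, 2], [50, 0.5], size=(40, 2))
wet_hilly = rng.normal([1200, 15], [100, 3], size=(40, 2))
X = np.vstack([dry_flat, wet_hilly])

# Choose the number of classes by minimising BIC.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(X))
best = models[best_k]

memberships = best.predict_proba(X)     # per-catchment class membership probabilities
print("classes found:", best_k)
print("catchment 0 memberships:", np.round(memberships[0], 3))
```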

Catchments were derived from the Australian digital elevation model. Physiographic data were derived from national spatial data sets. There was very little information on measurement errors for the spatial data, and so a conservative error of 5% of the data range was adopted for all continuous attributes. The incorporation of uncertainty into spatial data sets remains a research challenge.

The results of the classification were very encouraging. The software found nine classes of catchments in the Murray Darling Basin. The classes grouped together geographically, and followed altitude and latitude gradients, despite the fact that these variables were not included in the classification. Descriptions of the classes reveal very different physiographic environments, ranging from dry and flat catchments (i.e. lowlands), through to wet and hilly catchments (i.e. mountainous areas). Rainfall and slope were two important discriminators between classes. These two attributes, in particular, will affect the ways in which the stream interacts with the catchment, and can thus be expected to modify the effects of land use change on ecological condition. Thus, realistic models of the effects of land use change on streams would differ between the different types of catchments, and sound management practices will differ.

A small number of catchments were assigned to their primary class with relatively low probability. These catchments lie on the boundaries of groups of catchments, with the second most likely class being an adjacent group. The locations of these ‘uncertain’ catchments show that the Bayesian classification dealt well with cases that do not fit neatly into larger classes.

Although the results are intuitive, we cannot yet assess whether the classifications described in this paper would assist the modelling of catchment scale effects on stream ecological condition. It is most likely that catchment classification and modelling will be an iterative process, where the needs of the model are used to guide classification, and the results of classifications used to suggest further refinements to models.