900 results for INTERNATIONAL CLASSIFICATION
Abstract:
Design-build (DB) is a generic form of construction procurement and, rather than representing a single system, it has evolved in practice into a variety of forms, each similar to, yet distinct from, the others. Although the importance of selecting an appropriate DB variant is widely accepted, difficulties arise in practice because of the multiplicity of terms and concepts in use. What is needed is a taxonomy or framework within which the individual variants can be placed and their relative attributes identified and understood. Through a comprehensive literature review and content analysis, this paper establishes a systematic classification framework for DB variants based on their operational attributes. In addition to providing much-needed support for decision-making, this classification framework provides clients/owners with a means of understanding and examining the different categories of DB variants from an operational perspective.
Abstract:
There is limited understanding of the business strategies of parliamentary government departments. This study focuses on the strategies of the departments of two Australian state governments. The strategies are derived from departmental strategic plans available in the public domain and collected from the respective departmental websites. The results of this research indicate that the strategies fall into seven categories: internal, development, political, partnership, environment, reorientation and status quo. Departmental strategies are mainly internal or development, with development strategy chiefly the focus of departments such as transport and infrastructure. Political strategy is prevalent in departments concerned with communities and with education and training. Further, three layers of strategy are identified (kernel, cluster and individual) and mapped to the developed taxonomy.
Abstract:
This article outlines the key recommendations of the Australian Law Reform Commission's review of the National Classification Scheme, as set out in its report Classification – Content Regulation and Convergent Media (ALRC, 2012). It identifies the key contextual factors underpinning the need for reform of media classification laws and policies, including the fragmentation of regulatory responsibilities and the convergence of media platforms, content and services, and discusses the ALRC's approach to law reform.
Abstract:
International comparison is complicated by the use of different terms, classification methods, policy frameworks and system structures, not to mention different languages and terminology. Multi-case studies can assist in understanding the influence wielded by cultural, social, economic, historical and political forces upon educational decisions, policy construction and change over time. But case studies alone are not enough. In this paper, we argue for an ecological or scaled approach that travels through macro, meso and micro levels to build nested case studies, allowing a more comprehensive analysis of the external and internal factors that shape policy-making and education systems. Such an approach allows a deeper understanding of the relationship between globalizing trends and policy developments.
Abstract:
Increasingly, the effectiveness of the present system of taxation of international businesses is being questioned. The problem associated with the taxation of such businesses is twofold. A system of international taxation must be fair and equitable, distributing profits between the relevant jurisdictions and, in doing so, avoiding double taxation. At the same time, the prevention of fiscal evasion must be secured. In an attempt to achieve a fair and equitable system, Australia adopts unilateral, bilateral and multilateral measures to avoid double taxation and restrict the avoidance of tax. The first step in ascertaining the international allocation of business income is to consider the taxation of business income according to domestic law, that is, the unilateral measures. The treatment of international business income under Australian domestic law, that is, the Income Tax Assessment Act 1936 (Cth) and the Income Tax Assessment Act 1997 (Cth), depends on two concepts: first, whether the taxpayer is a resident of Australia and, second, whether the income is sourced in Australia. After the taxation of business profits has been determined according to domestic law, it is necessary to consider the applicability of the bilateral measures, that is, the Double Tax Agreements (DTAs) to which Australia is a party, as the DTAs override the domestic law where there is any conflict. Australia is a party to 40 DTAs, with another seven presently being negotiated. The preamble to Australia's DTAs provides that the purpose of such agreements is 'to conclude an Agreement for the avoidance of double taxation and the prevention of fiscal evasion with respect to taxes on income'. Both purposes, for different reasons, are equally important. It has been said that: 'The taxpayer hopes the treaty will prevent the double taxation of his income; the tax gatherer hopes the treaty will prevent fiscal evasion; and the politician just hopes.' The first purpose, the avoidance of double taxation, is achieved through rules whereby the Contracting States agree on the classification of income and the allocation of that income to a particular State. In this sense, DTAs do not allocate jurisdiction to tax but rather provide an arrangement whereby the States agree to restrict their substantive law. The restriction operates either through not taxing the income or through the provision of a tax credit.
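The order of analysis described above (domestic residence and source rules first, with any applicable DTA then overriding the domestic position) can be pictured with a small decision sketch. This is a toy illustration only; the predicate names and the treaty representation are invented here and carry no legal weight.

```python
# Illustrative sketch of the order of analysis: domestic law first
# (residence and source), then any applicable DTA prevails over the
# domestic outcome where the two conflict. Toy helper, not a
# statement of the law under ITAA 1936/1997 or any treaty.

def australian_tax_treatment(resident, sourced_in_australia, dta=None):
    """Roughly characterise how business income falls to be taxed.
    `dta` is an optional dict describing a hypothetical treaty
    allocation, e.g. {"taxing_state": "other", "relief": "credit"}."""
    # Step 1: unilateral (domestic) position.
    if resident:
        domestic = "taxable in Australia on worldwide income"
    elif sourced_in_australia:
        domestic = "taxable in Australia on Australian-source income"
    else:
        domestic = "not taxable in Australia"

    # Step 2: bilateral position - the DTA overrides domestic law
    # where there is conflict, by allocation or by a tax credit.
    if dta and dta.get("taxing_state") == "other":
        return f"DTA allocates taxing rights away (relief: {dta.get('relief')})"
    return domestic

print(australian_tax_treatment(True, True,
                               dta={"taxing_state": "other",
                                    "relief": "credit"}))
```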
Abstract:
Cardiomyopathies represent a group of diseases of the myocardium and include both diseases primarily of the cardiac muscle and systemic diseases with adverse effects on heart muscle size, shape and function. Traditionally, cardiomyopathies were defined according to phenotypic appearance. Now, as our understanding of the pathophysiology of the different entities classified under each phenotype improves, and as our knowledge of the molecular and genetic basis for these entities progresses, the traditional classifications seem overly simplistic and do not reflect current understanding of this myriad of diseases and disease processes. Although our knowledge of the exact basis of many of the disease processes of cardiomyopathies is still in its infancy, it is important to have a classification system able to incorporate the coming tide of molecular and genetic information. This paper discusses how the traditional, morphology-based classification of cardiomyopathies has evolved in response to rapid advances in our understanding of the genetic and molecular basis for many of these clinical entities.
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management; it has become increasingly important and has attracted wide attention from researchers in different fields. This paper first introduces the main feature selection methods, implementation algorithms and applications of text classification. However, because the knowledge extracted by current data-mining techniques for text classification contains much noise, considerable uncertainty arises in the classification process, stemming from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve performance. Further improving the process of knowledge extraction and the effective utilisation of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed, applying Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and related fields.
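As a concrete illustration of the Rough Set machinery this line of work builds on, the sketch below computes lower and upper approximations of a document class under an indiscernibility relation over term features. The toy documents, features and labels are assumptions for demonstration; the paper's actual feature extraction and decision rules are richer.

```python
# Minimal sketch of Rough Set approximations for text classification.
# All names and the toy data are illustrative, not from the paper.

from collections import defaultdict

def rough_approximations(objects, features, labels, target_label):
    """Lower/upper approximations of the set of documents with
    `target_label`, using indiscernibility over `features`."""
    # Partition objects into equivalence classes: objects with
    # identical feature vectors are indiscernible.
    blocks = defaultdict(set)
    for obj in objects:
        blocks[features[obj]].add(obj)

    target = {o for o in objects if labels[o] == target_label}
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:     # block entirely inside the class:
            lower |= block      # certainly in the class
        if block & target:      # block overlaps the class:
            upper |= block      # possibly in the class
    return lower, upper         # boundary region = upper - lower

# Toy example: three binary term features per document.
docs = ["d1", "d2", "d3", "d4"]
feats = {"d1": (1, 0, 1), "d2": (1, 0, 1), "d3": (0, 1, 0), "d4": (1, 1, 0)}
lbls = {"d1": "sport", "d2": "politics", "d3": "sport", "d4": "sport"}
low, up = rough_approximations(docs, feats, lbls, "sport")
print(low, up)  # d1/d2 are indiscernible but differ in label -> boundary
```

Documents in the boundary region (upper minus lower approximation) are exactly the "difficult to separate" cases the abstract refers to, for which additional decision rules are needed.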
Abstract:
A cell classification algorithm that uses first-, second- and third-order statistics of pixel intensity distributions over pre-defined regions is implemented and evaluated. A cell image is segmented into six regions extending from a boundary layer to an inner circle. First-, second- and third-order statistical features are extracted from histograms of pixel intensities in these regions; the third-order features used are one-dimensional bispectral invariants. A total of 108 features were considered as candidates for AdaBoost-based fusion. The best 10-stage fused classifier was selected for each class and a decision tree constructed for the six-class problem. The classifier is robust, accurate and fast by design.
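A minimal sketch of the feature-extraction step follows: per-region first-, second- and third-order statistics of pixel intensities. For brevity, plain skewness stands in here for the one-dimensional bispectral invariants the paper actually uses; the region shapes and counts are likewise illustrative.

```python
# Hedged sketch: first-, second- and third-order statistics of pixel
# intensities per region, as candidate features for boosted fusion.
# Skewness is a stand-in for the paper's bispectral invariants.

import numpy as np
from scipy.stats import skew

def region_features(image, masks):
    """image: 2-D uint8 array; masks: list of boolean masks, one per
    region (e.g. 6 rings from a boundary layer to an inner circle)."""
    feats = []
    for mask in masks:
        vals = image[mask].astype(float)
        feats += [vals.mean(),   # first order
                  vals.var(),    # second order
                  skew(vals)]    # third order (stand-in)
    return np.array(feats)

# Toy 64x64 "cell" image with two concentric regions.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
masks = [r < 10, (r >= 10) & (r < 20)]
print(region_features(img, masks).shape)  # 2 regions x 3 stats each
```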
Abstract:
Next Generation Sequencing (NGS) has revolutionised molecular biology, resulting in an explosion of data sets and an increasing role in clinical practice. Such applications necessarily require rapid identification of the organism as a prelude to annotation and further analysis. NGS data consist of a substantial number of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. Highly accurate results have been obtained for restricted sets using SVM classifiers, but such methods are difficult to parallelise, and their success depends on careful attention to feature selection. This work examines the problem at very large scale, using a mix of synthetic and real data, with a view to determining the overall structure of the problem and the effectiveness of parallel ensembles of simpler classifiers (principally random forests) in addressing the challenges of large-scale genomics.
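A hedged sketch of the kind of pipeline examined: fixed-length k-mer count features from short reads fed to a random forest, which parallelises naturally over trees. The value of k, the toy reads and the species labels are all assumptions, not the paper's data.

```python
# Sketch: k-mer count features from short reads -> random forest.
# k, reads and labels below are illustrative assumptions.

from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {km: i for i, km in enumerate(KMERS)}

def kmer_counts(read):
    """Fixed-length k-mer count vector for one read."""
    v = np.zeros(len(KMERS))
    for i in range(len(read) - K + 1):
        v[INDEX[read[i:i + K]]] += 1
    return v

reads = ["ACGTACGTAC", "TTGGCCAATT", "ACGTTGCAAC", "GGCCTTAAGG"]
labels = ["speciesA", "speciesB", "speciesA", "speciesB"]
X = np.array([kmer_counts(r) for r in reads])

# Forests parallelise trivially over trees (n_jobs), part of their
# appeal at scale compared with SVMs.
clf = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0)
clf.fit(X, labels)
print(clf.predict([kmer_counts("ACGTACGTTT")]))
```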
Abstract:
The proliferation of news reports published on websites and the sharing of news among social media users necessitate effective techniques for analysing the image, text and video data related to news topics. This paper presents the first study to classify affective facial images on emerging news topics. The proposed system dynamically monitors and selects currently hot (of great interest) news topics with strong affective interestingness, using textual keywords in news articles and social media discussions. Images from the selected hot topics are extracted and classified into three emotion categories, positive, neutral and negative, based on the facial expressions of subjects in the images. Performance evaluations on two facial image datasets collected from real-world sources demonstrate the applicability and effectiveness of the proposed system for affective classification of facial images in news reports. Facial expression shows high consistency with the affective textual content of news reports for positive emotion, while only low correlation is observed for neutral and negative emotions. The system can be used directly in applications such as assisting editors in choosing photos with appropriate affective semantics for a given topic during news report preparation.
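The hot-topic selection step can be pictured as a simple interestingness score combining mention volume with affective keyword hits. The word list, weighting and data below are illustrative assumptions, not the paper's actual scoring function.

```python
# Illustrative sketch of "hot topic" selection: rank topics by mention
# volume weighted by affective keyword hits. The word list and the
# score form are assumptions for demonstration.

AFFECTIVE = {"tragedy", "celebrate", "outrage", "victory", "crisis"}

def hotness(topic_texts):
    """topic_texts: list of article/social-media snippets for a topic."""
    mentions = len(topic_texts)
    affect_hits = sum(
        1 for t in topic_texts for w in t.lower().split() if w in AFFECTIVE
    )
    return mentions * (1 + affect_hits)  # simple interestingness score

topics = {
    "election": ["a narrow victory sparks joy", "voters celebrate"],
    "weather": ["mild week ahead", "sunny spells expected"],
}
ranked = sorted(topics, key=lambda t: hotness(topics[t]), reverse=True)
print(ranked)  # affect-laden topics rank first
```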
Abstract:
A description of a patient's injuries is recorded in narrative text form by hospital emergency departments. For statistical reporting, this text data needs to be mapped to pre-defined codes. Existing research in this field uses the Naïve Bayes probabilistic method to build classifiers for the mapping. In this paper, we focus on providing guidance on the selection of a classification method. We build a number of classifiers belonging to different families: decision tree, probabilistic, neural network, instance-based, ensemble-based and kernel-based linear classifiers. Extensive pre-processing is carried out to ensure the quality of the data and, hence, of the classification outcome. Records with a null entry in the injury description are removed. Misspellings are corrected by finding and replacing the misspelt word with a sound-alike word. Meaningful phrases are identified and kept intact, rather than having parts of a phrase removed as stop words. Abbreviations appearing in many forms of entry are manually identified and normalised to a single form. Clustering is used to discriminate between non-frequent and frequent terms. This process reduced the number of text features dramatically, from about 28,000 to 5,000. The medical narrative text injury dataset under consideration is composed of many short documents. The data can be characterised as high-dimensional and sparse, i.e., few features are irrelevant, but features are correlated with one another. Therefore, matrix factorisation techniques such as Singular Value Decomposition (SVD) and Non-negative Matrix Factorization (NNMF) have been used to map the processed feature space to a lower-dimensional feature space, and classifiers have been built on the reduced feature space. In experiments, a set of tests is conducted to determine which classification method is best for medical text classification. The Non-negative Matrix Factorization with Support Vector Machine method achieves 93% precision, higher than all the tested traditional classifiers. We also found that TF/IDF weighting, which works well for long-text classification, is inferior to binary weighting for short-document classification. Another finding is that the top-n terms should be removed in consultation with medical experts, as their removal affects classification performance.
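The best-performing combination reported (binary term weighting, NNMF dimensionality reduction, then an SVM) maps naturally onto a short scikit-learn pipeline. The corpus, codes and component count below are toy values; this is a sketch of the architecture, not the paper's implementation.

```python
# Hedged sketch of the reported best pipeline: binary term weighting,
# NMF to a lower-dimensional space, then an SVM. Toy data throughout.

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "laceration left hand from broken glass",
    "fell from ladder fracture right wrist",
    "burn to forearm from hot oil",
    "cut finger while slicing vegetables",
]
codes = ["cut", "fall", "burn", "cut"]

pipe = make_pipeline(
    CountVectorizer(binary=True),  # binary beat TF/IDF on short notes
    NMF(n_components=3, init="nndsvda", random_state=0),
    LinearSVC(),
)
pipe.fit(texts, codes)
print(pipe.predict(["deep laceration from knife"]))
```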
Abstract:
Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. HRV analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. Like many bio-signals, HRV signals are non-linear in nature. Higher-order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. A computer-based arrhythmia detection system for cardiac states is very useful in diagnostics and disease management. In this work, we studied the identification of HRV signals using features derived from HOS. These features were fed to a support vector machine (SVM) for classification. Our proposed system can classify normal rhythm and four other classes of arrhythmia with an average accuracy of more than 85%.
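A rough sketch of the HOS-to-SVM idea: estimate the bispectrum B(f1, f2) = X(f1) X(f2) X*(f1 + f2) of an RR-interval series, summarise it into a few scalar features, and feed those to an SVM. The signals, the summary features and the two-class setup below are assumptions for illustration; the paper uses a richer HOS feature set over five classes.

```python
# Sketch of HOS feature extraction for HRV: a direct bispectrum
# estimate summarised into scalar features for an SVM. Toy data.

import numpy as np
from sklearn.svm import SVC

def bispectrum_features(x, nfft=64):
    X = np.fft.fft(x, nfft)
    n = nfft // 2
    B = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            # B(f1, f2) = X(f1) X(f2) X*(f1 + f2)
            B[i, j] = X[i] * X[j] * np.conj(X[(i + j) % nfft])
    mag = np.abs(B)
    return np.array([mag.mean(), mag.max(), np.sum(mag**2)])

rng = np.random.default_rng(1)
# Toy RR-interval series for two "classes" (e.g. normal vs arrhythmia),
# distinguished here only by variability, for demonstration.
X_train = [bispectrum_features(rng.normal(0.8, s, 64))
           for s in (0.02, 0.02, 0.15, 0.15)]
y_train = ["normal", "normal", "arrhythmia", "arrhythmia"]
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(clf.predict([bispectrum_features(rng.normal(0.8, 0.02, 64))]))
```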
Abstract:
In this paper we propose the hybrid use of illuminant-invariant and RGB images to perform image classification of urban scenes despite challenging variation in lighting conditions. Coping with lighting change (and the shadows it invokes) is a non-negotiable requirement for long-term autonomy using vision. One aspect of this is the ability to reliably classify scene components in the presence of marked and often sudden changes in lighting; this is the focus of this paper. Posed with the task of classifying all parts of a scene from a full-colour image, we propose that lighting-invariant transforms can reduce the variability of the scene, resulting in more reliable classification. We leverage the idea of "data transfer" for classification, beginning with full-colour images to obtain candidate scene-level matches using global image descriptors. This is commonly followed by superpixel-level matching with local features. However, we show that if the RGB images are subjected to an illuminant-invariant transform before the superpixel-level features are computed, classification is significantly more robust to scene illumination effects. The approach is evaluated on three datasets: the first is our own dataset and the second is the KITTI dataset, both with manually generated ground truth for quantitative analysis; we qualitatively evaluate the method on a third custom dataset over a 750 m trajectory.
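One common form of a one-channel illuminant-invariant transform is the log-chromaticity construction I = 0.5 + log G - alpha log B - (1 - alpha) log R, with alpha tied to the camera's spectral response. Whether the paper uses exactly this form is not stated here, so treat the sketch below (and the alpha = 0.48 value) as an assumption.

```python
# Hedged sketch of a one-channel illuminant-invariant transform of
# the log-chromaticity kind; the paper's exact transform may differ.

import numpy as np

def illuminant_invariant(rgb, alpha=0.48):
    """rgb: float array (H, W, 3) with values in (0, 1]; returns an
    (H, W) image largely insensitive to illuminant colour/intensity.
    alpha = 0.48 is an assumed camera-dependent value."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1 - alpha) * np.log(r)

rng = np.random.default_rng(0)
img = rng.uniform(0.1, 1.0, (4, 4, 3))  # toy image, no zeros
inv = illuminant_invariant(img)
# A global illumination scaling cancels out of the invariant:
print(np.abs(inv - illuminant_invariant(img * 0.5)).max())  # ~1e-16
```

The final line checks the defining property: uniformly scaling all channels (a global illumination change) leaves the invariant image essentially unchanged, which is why superpixel features computed on it are more stable under lighting variation.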
Abstract:
Calls from 14 species of bat were classified to genus and species using discriminant function analysis (DFA), support vector machines (SVMs) and ensembles of neural networks (ENNs). Both SVMs and ENNs outperformed DFA for every species, while ENNs (mean identification rate 97%) consistently outperformed SVMs (mean identification rate 87%). Correct classification rates produced by the ENNs varied from 91% to 100%; calls from six species were correctly identified with 100% accuracy. Calls from the five species of Myotis, a genus whose species are considered difficult to distinguish acoustically, had correct identification rates of 91–100%. Five parameters were most important for classifying calls correctly, while seven others contributed little to classification performance.
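The classifier comparison can be sketched with off-the-shelf components: a soft-voting ensemble of differently seeded multilayer perceptrons standing in for the ENN, against a single RBF SVM, on toy call-parameter vectors. All data, network sizes and class names below are invented for illustration.

```python
# Sketch: ensemble of neural networks (soft-voting MLPs) vs a single
# SVM on toy "call parameter" vectors. Illustrative assumptions only.

import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy call parameters (e.g. peak frequency, duration, bandwidth, ...)
X = np.vstack([rng.normal(m, 1.0, (30, 5)) for m in (0.0, 2.5, 5.0)])
y = np.repeat(["Myotis_a", "Myotis_b", "Pipistrellus"], 30)

# Ensemble of small, differently seeded networks, combined by
# averaging predicted probabilities (soft voting).
enn = VotingClassifier(
    [(f"mlp{i}", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                               random_state=i)) for i in range(5)],
    voting="soft",
)
svm = SVC(kernel="rbf", gamma="scale")
print(enn.fit(X, y).score(X, y), svm.fit(X, y).score(X, y))
```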