32 results for Information Technologies Classification


Relevance:

80.00%

Publisher:

Abstract:

The availability of affordable, user-friendly audio-visual equipment, software and the Internet allows anyone to become a content creator or media outlet. This exploratory case study examines the adoption of social networking by Victoria Police and Dairy Australia, a non-media organization and a non-profit organization respectively, in corporate communication, public information and community relations. The paper initiates discussion on the implications of this phenomenon for traditional media and audiences. It content-analyzed the two websites over a two-week period and drew on interviews with their moderators about the sites' content, functions and efficacy. The purpose, role and community acceptance of these sites are examined, along with organizational motivations for establishing these channels to reach audiences directly, bypassing traditional media's gatekeeping function. The paper highlights how these organizations may set both media and public agendas when traditional media use this web content in their news gathering and reporting, much as they used press releases in the past.

Relevance:

80.00%

Publisher:

Abstract:

Spam, or unwanted email, is one of the persistent problems of Internet security, and correctly classifying user emails while preventing the penetration of spam is an important research issue for anti-spam researchers. In this paper we present an effective and efficient spam classification technique that uses a clustering approach to categorize features. Our clustering technique applies VAT (Visual Assessment of cluster Tendency) within the training model to categorize the extracted features and then passes this information to the classification engine. We used the WEKA (www.cs.waikato.ac.nz/ml/weka/) interface to classify the data with different classification algorithms, including tree-based classifiers, nearest neighbor algorithms, statistical algorithms and AdaBoost. Our empirical results show that we can achieve a detection rate of over 97%.
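
The abstract gives the pipeline but not the implementation, so the following is a minimal sketch in Python with scikit-learn: the extracted features are first categorized by clustering, and the categorized representation is then passed to a classifier. KMeans stands in for the VAT-based assessment (VAT itself is a visual method for judging cluster tendency), a decision tree stands in for the WEKA classifiers, and the data and feature counts are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 40))    # 1000 emails x 40 extracted features (synthetic)
y = rng.integers(0, 2, 1000)  # 1 = spam, 0 = legitimate (synthetic labels)

# Step 1: categorize the features themselves by clustering the columns
# (a stand-in for the VAT-based feature categorization).
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X.T)

# Step 2: represent each email by the mean of each feature category and
# pass this reduced representation to the classification engine.
X_reduced = np.column_stack([X[:, groups == g].mean(axis=1) for g in range(5)])

X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2%}")
```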

Relevance:

80.00%

Publisher:

Abstract:

Despite significant efforts in natural resource management (NRM), the environmental condition of Victoria’s catchments is mostly ‘poor to moderate’, and continuing to decline in many places. NRM is a complex undertaking involving social, economic and environmental objectives across policy, research and practice dimensions. It is therefore not easy to ensure that the knowledge required to underpin effective NRM is readily available to practitioners. Knowledge brokering is an emerging approach with the potential to improve knowledge sharing and exchange. While it has attracted attention in other areas of public interest (such as health and information technology), its potential in NRM has received relatively little attention. This article reports on a Victorian knowledge brokering case study that was a major element of the Catchment Knowledge Exchange project. A key finding is that knowledge brokering is a role being undertaken informally, without proper acknowledgement or definition, which raises challenges for knowledge management in the context of NRM. We conclude that the ‘people’ component of knowledge brokering is the driving element, although organisational processes and information technologies are critical in enhancing the effectiveness of knowledge brokers. Demonstrating the benefits of knowledge brokering by its ultimate measure, its contribution to improving the condition of catchments, remains a challenge.

Relevance:

80.00%

Publisher:

Abstract:

Severe class distribution skew, which signals the presence of underrepresented data and greatly affects the performance of learning algorithms, remains a challenge in data mining and machine learning. Much current research focuses on experimental comparison of existing re-sampling approaches; we believe new ways of constructing better algorithms are needed to further balance and analyse such data sets. This paper presents a Fuzzy-based Information Decomposition oversampling (FIDoS) algorithm for handling imbalanced data. Generally speaking, this is a new way of addressing imbalanced learning problems from a missing data perspective. First, we assume that there are missing instances in the minority class that result in the imbalanced dataset. Then the proposed algorithm, which takes advantage of a fuzzy membership function, is used to transfer information to the missing minority class instances. Finally, the experimental results demonstrate that the proposed algorithm is more practical and applicable than existing sampling techniques.
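
To make the "missing data" framing concrete, here is a simplified sketch that synthesizes the assumed-missing minority instances as fuzzy-membership-weighted blends of the observed minority samples. The reciprocal-distance membership function and the random anchor choice are illustrative assumptions, not the paper's exact FIDoS formulation.

```python
import numpy as np

def fidos_like_oversample(X_min, n_missing, rng=None):
    """Create n_missing synthetic minority instances as fuzzy-weighted
    blends of the observed minority samples (illustrative scheme only)."""
    if rng is None:
        rng = np.random.default_rng(0)
    synth = np.empty((n_missing, X_min.shape[1]))
    for i in range(n_missing):
        anchor = X_min[rng.integers(len(X_min))]  # seed for a "missing" instance
        dist = np.linalg.norm(X_min - anchor, axis=1)
        membership = 1.0 / (1.0 + dist)           # fuzzy membership in (0, 1]
        weights = membership / membership.sum()
        synth[i] = weights @ X_min                # information transfer step
    return synth

# Usage: restore balance in a toy imbalanced dataset.
rng = np.random.default_rng(1)
X_maj = rng.normal(0.0, 1.0, (500, 8))  # 500 majority samples
X_min = rng.normal(2.0, 1.0, (50, 8))   # 50 minority samples
X_new = fidos_like_oversample(X_min, len(X_maj) - len(X_min), rng)
print(X_new.shape)                       # (450, 8)
```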

Relevance:

80.00%

Publisher:

Abstract:

Ontology-driven systems with reasoning capabilities in the legal field are now better understood. Legal concepts are not discrete, but make up a dynamic continuum between common sense terms, specific technical use, and professional knowledge, in an evolving institutional reality. Thus, the tension between a plural understanding of regulations and a more general understanding of law is bringing into view a new landscape in which general legal frameworks, grounded in well-known legal theories stemming from 20th-century legal positivism or sociological jurisprudence, are made compatible with specific forms of rights management on the Web. In this sense, Semantic Web tools are not only being designed for information retrieval, classification, clustering, and knowledge management. They can also be understood as regulatory tools, i.e. as components of the contemporary legal architecture, to be used by multiple stakeholders: front-line practitioners, policymakers, legal drafters, companies, market agents, and citizens. That is the issue broadly addressed in this Special Issue on the Semantic Web for the Legal Domain, which overviews the work carried out over the last fifteen years and seeks to foster new research in this field, beyond the state of the art.

Relevance:

40.00%

Publisher:

Abstract:

Based on the knowledge sharing model of Nonaka (1994), this study examines the relative efficacy of various Information and Communication Technology (ICT) applications in facilitating the sharing of explicit and tacit knowledge among professional accountants in Malaysia. The results indicate that ICTs generally facilitate all modes of knowledge sharing. Best-practice repositories are effective for sharing both explicit and tacit knowledge, while internet/e-mail facilities are effective for tacit knowledge sharing. Data warehousing/mining, on the other hand, is effective in facilitating self-learning through the tacit-to-tacit and explicit-to-explicit modes. ICT facilities used mainly for office administration are ineffective for knowledge sharing purposes. The implications of these findings are discussed.

Relevance:

40.00%

Publisher:

Abstract:

This article is devoted to an experimental investigation of a novel application of a clustering technique recently introduced by the authors, enabling the use of robust and stable consensus functions in information security, where it is often necessary to process large data sets and monitor outcomes in real time, as required, for example, for intrusion detection. Here we concentrate on the particular case of profiling phishing websites. First, we apply several independent clustering algorithms to a randomized sample of data to obtain independent initial clusterings; the silhouette index is used to determine the number of clusters. Second, rank correlation is used to select a subset of features for dimensionality reduction; we investigate the effectiveness of the Pearson linear correlation coefficient, the Spearman rank correlation coefficient and the Goodman-Kruskal correlation coefficient in this application. Third, we use a consensus function to combine the independent initial clusterings into one consensus clustering. Fourth, we train fast supervised classification algorithms on the resulting consensus clustering so that they can process the whole large data set as well as new data. The precision and recall of the classifiers at this final stage are critical for the effectiveness of the whole procedure, and we investigate various combinations of correlation coefficients, consensus functions, and supervised classification algorithms.
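
The four stages condense into a short sketch with scikit-learn and SciPy. The concrete components below, KMeans and agglomerative clustering as the independent initial clusterings, the Spearman coefficient for feature selection, a co-association matrix as the consensus function, and a random forest as the fast final classifier, are plausible stand-ins rather than the exact combinations evaluated in the article, and the website features are synthetic.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.random((300, 12))  # randomized sample of website features (synthetic)

# Stage 1: pick k by silhouette index, then run independent clusterings.
k = max(range(2, 8), key=lambda kk: silhouette_score(
    X, KMeans(n_clusters=kk, n_init=10, random_state=0).fit_predict(X)))
labelings = [
    KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X),
    AgglomerativeClustering(n_clusters=k).fit_predict(X),
]

# Stage 2: rank-correlation feature selection: drop the second of any
# pair of strongly correlated features.
rho = np.abs(spearmanr(X).correlation)
keep = [j for j in range(X.shape[1])
        if not any(rho[i, j] > 0.9 for i in range(j))]
X = X[:, keep]

# Stage 3: consensus function: cluster the co-association matrix built
# from the initial clusterings.
co = sum((l[:, None] == l[None, :]).astype(float) for l in labelings)
consensus = AgglomerativeClustering(n_clusters=k).fit_predict(co / len(labelings))

# Stage 4: train a fast supervised classifier on the consensus labels so
# it can handle the full data set and new data.
clf = RandomForestClassifier(random_state=0).fit(X, consensus)
```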

Relevance:

40.00%

Publisher:

Abstract:

Information and communication technologies such as email, text messaging and video messaging are commonly used by the general population. However, international research has shown that they are not used routinely by GPs to communicate or consult with patients. Investigating Victorian GPs’ perceptions of doing so is timely given Australia’s new National Broadband Network, which may facilitate web-based modes of doctor-patient interaction. This study therefore aimed to explore Victorian GPs’ experiences of, and attitudes toward, using information and communication technologies to consult with patients. Qualitative telephone interviews were carried out with a maximum variation sample of 36 GPs from across Victoria. GPs reported a range of perspectives on using new consultation technologies within their practice. Common concerns included medico-legal and remuneration issues and perceived levels of patient information technology literacy. Policy makers should incorporate GPs’ perspectives into primary care service delivery planning to promote the effective use of information and communication technologies in improving the accessibility and quality of general practice care.

Relevance:

40.00%

Publisher:

Abstract:

Textural image classification technologies have been extensively explored and widely applied in many areas. It is advantageous to combine both the occurrence and the spatial distribution of local patterns to describe a texture. However, most existing state-of-the-art approaches to textural image classification employ only the occurrence histogram of local patterns, without considering their co-occurrence information, and they are usually very time-consuming because of the vector quantization involved. Moreover, these feature extraction paradigms operate at a single scale. In this paper we propose a novel multi-scale local pattern co-occurrence matrix (MS_LPCM) descriptor that characterizes textural images through four major steps. First, Gaussian filtering pyramid preprocessing is employed to obtain multi-scale images; second, a local binary pattern (LBP) operator is applied to each textural image to create an LBP image; third, the gray-level co-occurrence matrix (GLCM) is used to extract a local pattern co-occurrence matrix (LPCM) from the LBP image as the features; finally, all LPCM features from the same textural image at different scales are concatenated into the final feature vector for classification. Experimental results on three benchmark databases show higher classification accuracy and lower computing cost compared with other state-of-the-art algorithms.
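
The four steps map almost directly onto scikit-image primitives. The sketch below assumes scikit-image 0.19 or later (where the GLCM function is spelled graycomatrix; earlier versions use greycomatrix), and the pyramid depth, LBP parameters (P=8, R=1) and the single GLCM offset are illustrative choices, not necessarily the paper's settings.

```python
import numpy as np
from skimage.data import camera
from skimage.feature import graycomatrix, local_binary_pattern
from skimage.transform import pyramid_gaussian

image = camera()  # stand-in textural image

features = []
# Step 1: the Gaussian filtering pyramid yields the multi-scale images.
for scale in pyramid_gaussian(image, max_layer=2):
    scale_u8 = (scale * 255).astype(np.uint8)  # pyramid returns floats in [0, 1]
    # Step 2: the LBP operator turns each scale into an LBP image.
    lbp = local_binary_pattern(scale_u8, P=8, R=1, method="default")
    # Step 3: a GLCM computed over the LBP image is the local pattern
    # co-occurrence matrix (LPCM) for this scale.
    lpcm = graycomatrix(lbp.astype(np.uint8), distances=[1], angles=[0],
                        levels=256, normed=True)
    features.append(lpcm.ravel())
# Step 4: concatenate the per-scale LPCM features into one vector.
feature_vector = np.concatenate(features)
print(feature_vector.shape)  # (3 * 256 * 256,) = (196608,)
```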

Relevance:

40.00%

Publisher:

Abstract:

Traffic classification has wide applications in network management, from security monitoring to quality-of-service measurement. Recent research tends to apply machine learning techniques to classification methods based on flow statistical features. The nearest neighbor (NN)-based method has exhibited superior classification performance and has several important advantages: it requires no training procedure, carries no risk of overfitting parameters, and naturally handles a huge number of classes. However, the performance of the NN classifier can be severely affected when the training data set is small. In this paper, we propose a novel nonparametric approach to traffic classification that improves classification performance by incorporating correlated information into the classification process. We analyze the new approach and its performance benefit from both theoretical and empirical perspectives. A large number of experiments are carried out on two real-world traffic data sets to validate the proposed approach. The results show that traffic classification performance can be improved significantly, even under the extremely difficult circumstance of very few training samples.
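
The abstract does not spell out the aggregation rule, so the sketch below only illustrates the core idea: classifying a bag of correlated flows (for example, flows sharing a destination host and port) jointly, by pooling nearest-neighbor evidence across the bag, instead of classifying each flow in isolation. The min-distance pooling rule and the synthetic flow statistics are illustrative assumptions.

```python
import numpy as np

def nn_classify_bag(bag, X_train, y_train):
    """Assign one label to every flow in a bag of correlated flows."""
    # Distance from each flow in the bag to each training flow.
    d = np.linalg.norm(bag[:, None, :] - X_train[None, :, :], axis=2)
    # Pool evidence: for each class, take the smallest distance achieved
    # by any flow in the bag; predict the class with the overall minimum.
    classes = np.unique(y_train)
    class_dist = [d[:, y_train == c].min() for c in classes]
    return classes[int(np.argmin(class_dist))]

# Usage with tiny synthetic flow statistics (e.g. packet and byte counts).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(3, 1, (5, 4))])
y_train = np.array([0] * 5 + [1] * 5)
bag = rng.normal(3, 1, (4, 4))  # four correlated flows from one application
print(nn_classify_bag(bag, X_train, y_train))  # expected: 1
```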

Relevance:

40.00%

Publisher:

Abstract:

This alternative event for the 2013 iConference is a combination of lightning talks, a demonstration of an assessment technology for knowledge construction in complex domains, and a hands-on exercise in using the tools discussed. The unifying logic for this presentation is that meaningful learning often involves solving challenging and complex problems that allow for multiple solution approaches and a variety of acceptable solutions. While it is important to prepare students to solve such problems, it is difficult to determine the extent to which various interventions and programs are contributing to the development of appropriate problem-solving strategies and attitudes. Simply testing domain knowledge or the ability to solve simple, single-solution problems may not provide support for improving individual student ability or relevant programs and activities. A reliable and robust methodology for assessing the relevant knowledge constructions of students engaged in solving challenging problems is needed, and that is our focus.