942 results for frequency based knowledge discovery


Relevance:

100.00%

Publisher:

Abstract:

In response to the forces of globalisation, societies and organisations have had to adapt and even proactively transform themselves. Universities, as knowledge-based organisations, have recognised that there are now many other important sites of knowledge construction and use. The apparent monopoly over valued forms of knowledge making and knowledge certification is disappearing. Universities have had to recognise the value of practical working knowledge developed in workplace settings beyond university domains, and promote the value of academic forms of knowledge making to the practical concerns of everyday learning. It is within this broader systems view that professional curriculum development undertaken by universities needs to be examined.

University educational planning responds to these external forces in ways that draw together formal academic capability and competence with practice-based capability and competence. Academic and practice-based forms of knowledge and knowing are being valued equally and related to one another. University planning in turn gives impetus to the development of new forms of professional education curricula. This paper presents a contemporary case of a designed professional curriculum in the field of information technology that situates workplace learning as a central element in the education of Information Technology (IT)/Information Systems (IS) professionals.

The key dimensions of the learning environment of Deakin University’s BIT (Hons) program are considered with a view to identifying areas of strong integration between the worlds of academic and workplace learning from the perspectives of major stakeholders. The dynamic interplay between forms of theorising and practising is seen as critical in educating students for professional capability in their chosen field. From this analysis, an applied research agenda, relating to desired forms of professional learning in higher education, is outlined, with specific reference to the information and communication technology professions.

Relevance:

100.00%

Publisher:

Abstract:

The rapid growth of biological databases not only provides biologists with abundant data but also presents a major challenge for data analysis. Many approaches, such as data mining, information retrieval and machine learning, have been used to extract frequent patterns from diverse biological databases. However, discrepancies arising from differences in database structure and terminology result in a significant lack of interoperability. Although ontology-based approaches have been used to integrate biological databases, inconsistency between biological databases has been largely disregarded. This paper presents a method for measuring the degree of inconsistency between biological databases. It not only offers a guideline for correct and efficient database integration, but also exposes high-quality data for data mining and knowledge discovery.
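The abstract does not reproduce the paper's measure; as a purely hypothetical illustration, one simple way to quantify inconsistency between two databases is the fraction of their shared entries that carry conflicting annotations. The gene names and annotations below are made-up examples, not data from the paper:

```python
# Hypothetical sketch: degree of inconsistency between two biological
# databases, measured as the fraction of shared entries whose
# annotations disagree. (Illustrative only; not the paper's metric.)

def inconsistency(db_a, db_b):
    """Return the share of common keys annotated differently."""
    shared = set(db_a) & set(db_b)
    if not shared:
        return 0.0
    conflicts = sum(1 for k in shared if db_a[k] != db_b[k])
    return conflicts / len(shared)

db_a = {"BRCA1": "dna_repair", "TP53": "apoptosis", "MYC": "transcription"}
db_b = {"BRCA1": "dna_repair", "TP53": "cell_cycle", "EGFR": "signalling"}

print(inconsistency(db_a, db_b))  # 1 of 2 shared entries conflict -> 0.5
```

A real measure would compare ontology concepts rather than raw strings, but the same principle applies: integration is safest where the measured inconsistency is low.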

Relevance:

100.00%

Publisher:

Abstract:

This paper develops a discussion of evidence-based knowledge for mental health nursing, arguing that a historical component should be included in the comprehensive degree programme: historical information offers significant insights into mental health nursing knowledge and carries implications for contemporary practice. Our understanding of the present becomes clearer when we look back and forth in this way, adding meaning (and asking what those meanings mean) to what historically preceded. It allows the history of psychiatry to be a far more productive, useful and continual source of wisdom for the here and now. Blending past knowledge with contemporary inquiry can deepen mental health nursing practice by forming a context of practice for the beginning nurse practitioner.

Relevance:

100.00%

Publisher:

Abstract:

It is not a simple matter to develop an integrative approach that exploits synergies between knowledge management and knowledge discovery in order to monitor and manage the full lifecycle of knowledge and to provide services quickly, reliably and securely. One of the main problems is the heterogeneity of the resources that represent knowledge: data mining systems produce knowledge in a form meant to be understandable to machines, whereas knowledge management systems place priority on the readability and usability of knowledge by humans. The Semantic Web is a promising platform for unifying this heterogeneity and, in conjunction with novel techniques for Web Intelligence, it could offer more than just knowledge: wisdom. The Wisdom Autonomic Grid is an original proposal for a knowledge-based Grid that can configure and reconfigure itself under varying and unpredictable conditions, optimise its operation, perform something akin to self-healing and provide self-protection, as envisioned in the IBM Autonomic Computing initiative. This paper presents an original framework for creating advanced applications that integrate knowledge discovery and knowledge management in Autonomic Grid and Web environments.

Relevance:

100.00%

Publisher:

Abstract:

It has been argued that entrepreneurship researchers do not place sufficient emphasis on making their research findings relevant to entrepreneurs and their advisors. The paper utilises five general principles introduced by Hindle, Anderson and Gibson (2004) to convert a complex range of entrepreneurship research findings into useful action guidelines for practising entrepreneurs. Existing research-based knowledge concerning opportunity assessment is distilled into a diagrammatic framework. This framework, together with a sequence of ten plain-English questions, provides entrepreneurs and SME operators with a strategic tool (nicknamed the '4/10 strategy') for discovering, evaluating and exploiting entrepreneurial opportunities.

Relevance:

100.00%

Publisher:

Abstract:

As one of the primary substances in a living organism, protein defines the character of each cell by interacting with the cellular environment to promote the cell’s growth and function [1]. Previous studies in proteomics indicate that the functions of different proteins can be assigned based upon protein structures [2,3]. Knowledge of protein structures gives us an overview of the protein fold space and helps us understand the evolutionary principles behind structure. By observing the architectures and topologies of protein families, biological processes can be investigated more directly, with much higher resolution and finer detail. For this reason, the analysis of proteins, their structures and their interactions with other materials is emerging as an important problem in bioinformatics. However, the determination of protein structures is experimentally expensive and time-consuming, which at present makes scientists largely dependent on sequence, rather than the more general structure, to infer the function of a protein. For this reason, data mining technology has been introduced into this area to provide more efficient data processing and knowledge discovery approaches.

Unlike many data mining applications that lack available data, the protein structure determination problem and the study of protein interactions can draw on a vast amount of biologically relevant information, such as the Protein Data Bank (PDB) [4], the Structural Classification of Proteins (SCOP) database [5], the CATH database [6], UniProt [7], and others. The difficulty of predicting protein structures, especially their 3D structures, and the interactions between proteins as shown in Figure 6.1, lies in the computational complexity of the data. Although a large number of approaches have been developed to determine protein structures, such as ab initio modelling [8], homology modelling [9] and threading [10], more efficient and reliable methods are still greatly needed.

In this chapter, we introduce a state-of-the-art data mining technique, graph mining, which excels at defining and discovering interesting structural patterns in graphical data sets, and we take advantage of its expressive power to study protein structures, including protein structure prediction and comparison, and protein-protein interaction (PPI). Current graph pattern mining methods are described, and typical algorithms are presented, together with their applications in protein structure analysis.

The rest of the chapter is organized as follows: Section 6.2 gives a brief introduction to the fundamentals of proteins, the publicly accessible protein data resources and the current state of protein analysis research; Section 6.3 focuses on graph mining, one of the state-of-the-art data mining methods; Section 6.4 surveys several existing works from the recent decade that apply advanced graph mining methods to protein structure analysis; finally, Section 6.5 concludes and outlines potential further work.
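As a hedged illustration of the graph representations such methods operate on (not code from the chapter itself), a protein structure is commonly reduced to a contact-map graph, with an edge between residues that lie within a distance threshold. The toy coordinates and the 8 Å threshold below are assumptions:

```python
import math

# Illustrative sketch: a protein structure reduced to a contact-map graph,
# the kind of graph that graph-mining algorithms take as input.
# Residue coordinates and the 8.0 (angstrom) cutoff are made-up examples.

def contact_graph(coords, threshold=8.0):
    """Edges (i, j) between residues whose 3D coordinates lie within threshold."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= threshold:
                edges.add((i, j))
    return edges

# Four toy residue positions; the last one is far from the rest.
residues = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0), (20.0, 0.0, 0.0)]
print(sorted(contact_graph(residues)))  # [(0, 1), (0, 2), (1, 2)]
```

Frequent subgraph mining over many such graphs can then surface recurring structural motifs across a protein family.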

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a framework for justifying generalization in information systems (IS) research. First, using evidence from an analysis of two leading IS journals, we show that the treatment of generalization in many empirical papers in leading IS research journals is unsatisfactory. Many quantitative studies need clearer definition of populations and more discussion of the extent to which ‘significant’ statistics and use of non-probability sampling affect support for their knowledge claims. Many qualitative studies need more discussion of boundary conditions for their sample-based general knowledge claims. Second, the proposed new framework is presented. It defines eight alternative logical pathways for justifying generalizations in IS research. Three key concepts underpinning the framework are the need for researcher judgment when making any claim about the likely truth of sample-based knowledge claims in other settings; the importance of sample representativeness and its assessment in terms of the knowledge claim of interest; and the desirability of integrating a study’s general knowledge claims with those from prior research. Finally, we show how the framework may be applied by researchers and reviewers. Observing the pathways in the framework has potential to improve both research rigour and practical relevance for IS research.

Relevance:

100.00%

Publisher:

Abstract:

Regression lies at the heart of statistics: it is one of the most important branches of the multivariate techniques available for extracting knowledge in almost every field of study and research, and it now attracts huge interest in related fields such as machine learning, pattern recognition and data mining. Investigating outliers (exceptional observations) has been a century-long problem for data analysts and researchers: blind application of data can have dangerous consequences, leading to the discovery of meaningless patterns and to imperfect knowledge. As a result of the digital revolution and the growth of the Internet and intranets, data continue to accumulate at an exponential rate, so detecting outliers and studying their costs and benefits as a tool for reliable knowledge discovery demands close attention. Investigating outliers in regression has been highly valued over the last few decades within two schools of thought: robust regression and regression diagnostics. Robust regression first fits a regression to the majority of the data and then identifies outliers as those points with large residuals from the robust fit, whereas regression diagnostics first finds the outliers, deletes or corrects them, and then fits the remaining regular data by classical (usual) methods. There was much confusion at the beginning, but researchers have now reached a consensus: robustness and diagnostics are two complementary approaches to data analysis, and neither alone is good enough. In this chapter we discuss both under the single umbrella of regression diagnostics. The chapter motivates and surveys regression diagnostics, presenting several contemporary methods in each of the above categories through numerical examples in linear regression, together with current challenges and possible future research directions.
Our aim is to make the chapter self-contained while keeping it generally accessible.
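As a minimal sketch of the diagnostic approach described above (illustrative synthetic data and cutoff, not the chapter's own examples), the following fits ordinary least squares and flags points whose standardized residual exceeds a cutoff:

```python
# A minimal sketch of the diagnostics view described above, on synthetic
# data: fit ordinary least squares, then flag points whose residual
# exceeds a multiple of the residual standard error.
# The data and the 2.5 cutoff are illustrative assumptions.

def ols(xs, ys):
    """Intercept and slope of the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def flag_outliers(xs, ys, cutoff=2.5):
    """Indices whose residual exceeds cutoff residual standard errors."""
    a, b = ols(xs, ys)
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s = (sum(r * r for r in resid) / (len(resid) - 2)) ** 0.5
    return [i for i, r in enumerate(resid) if abs(r) > cutoff * s]

xs = list(range(1, 13))
ys = [1, 2, 3, 4, 5, 20, 7, 8, 9, 10, 11, 12]  # y = x, with one gross outlier
print(flag_outliers(xs, ys))  # the point at index 5 is flagged
```

Note that such a naive diagnostic can suffer masking when several outliers occur together or at high-leverage positions, since they inflate the residual standard error; this is exactly what motivates the complementary robust-regression approach the chapter discusses.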

Relevance:

100.00%

Publisher:

Abstract:

The growing self-organizing map (GSOM) is a knowledge discovery and visualization technique that outshines the traditional self-organizing map (SOM) thanks to its dynamic structure, in which nodes can grow based on the input data. In this paper GSOM is used as a visualization tool to cluster fMRI finger-tapping and non-tapping data, demonstrating its capability to distinguish between tapping and non-tapping. A unique feature of GSOM is a parameter called the spread factor, which controls the spread of the GSOM map. By setting different levels of the spread factor, regions of interest within tapping or non-tapping images can be visualized and analyzed at different granularities. A Euclidean-distance-based similarity calculation is used to quantify the visualized difference between tapping and non-tapping images. Once the differences are identified, the spread factor is used to generate a more detailed view of those regions, providing a better visualization of the brain regions involved.
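In the GSOM literature the spread factor SF is commonly mapped to a growth threshold GT = -D·ln(SF), where D is the input dimensionality, so a lower SF yields a higher threshold and a more compact map. The sketch below (with illustrative values, not the paper's fMRI data) shows this relationship together with the Euclidean distance used in the similarity step:

```python
import math

# Spread factor vs. growth threshold, as commonly stated in the GSOM
# literature: GT = -D * ln(SF). Dimensions and SF values are illustrative.

def growth_threshold(dim, spread_factor):
    """GT = -D * ln(SF); smaller SF -> larger GT -> less node growth."""
    return -dim * math.log(spread_factor)

def euclidean(u, v):
    """Distance of the kind used to compare tapping / non-tapping maps."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# A lower spread factor raises the threshold, restraining map growth.
print(growth_threshold(dim=3, spread_factor=0.5))  # ~2.079
print(growth_threshold(dim=3, spread_factor=0.1))  # ~6.908
print(euclidean([1.0, 2.0, 2.0], [0.0, 0.0, 0.0]))  # 3.0
```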

Relevance:

100.00%

Publisher:

Abstract:

In this paper the Binary Search Tree Imposed Growing Self Organizing Map (BSTGSOM) is presented as an extended version of the Growing Self Organizing Map (GSOM), which has proven advantages in knowledge discovery applications. The binary search tree imposed on the GSOM is mainly used to investigate the dynamic behaviour of the GSOM as inputs arrive; the generated temporal patterns are stored to further analyze the behavior of the GSOM with respect to the input sequence. The performance advantages are also discussed and compared with those of the original GSOM.
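The paper's exact structure is not given in the abstract; as a hypothetical sketch, a minimal binary search tree insert of the kind that could index GSOM nodes (keyed here, as an assumption, by node ids recorded in growth order) looks like this:

```python
# Minimal BST of the kind that could be imposed on GSOM nodes so the
# order and pattern of node growth can be replayed later.
# The keys (hypothetical node ids) are made-up examples.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insert; returns the (possibly new) root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root):
    """In-order traversal yields the keys sorted."""
    return [] if root is None else inorder(root.left) + [root.key] + inorder(root.right)

root = None
for k in [7, 3, 9, 1, 5]:  # e.g. ids of nodes in the order the GSOM grew them
    root = insert(root, k)
print(inorder(root))  # [1, 3, 5, 7, 9]
```

The insertion order is what encodes the temporal pattern; the tree shape itself records which keys arrived before which.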

Relevance:

100.00%

Publisher:

Abstract:

Little is known about how human resource practices contribute to the competitiveness of people-based, knowledge-intensive organisations in developing countries. This paper examines the role of human resource practices in developing knowledge and learning capabilities for innovation in the Indian information technology services sector. The study draws on the experience of a sample of 11 of the largest information technology service providers (ITSPs) in India and is based on in-depth interviews. The main finding suggests that the talent management architecture of ITSPs, comprising human resource practices and the development of knowledge and learning capabilities, is the main driver of innovation. A conceptual framework showing the link between human resource practices, knowledge and learning capabilities, and the innovation of ITSPs is developed, followed by the limitations of the study and avenues for future research.

Relevance:

100.00%

Publisher:

Abstract:

Whichever way you look at it, online crowdfunding is ramifying. From its foundations supporting creative industry initiatives, crowdfunding has branched into almost every aspect of public and private enterprise. Niche crowdfunding platforms and models are burgeoning across the globe faster than you can trill “kerching”. Early adopters have been quick to discover that in addition to money, they also get free market information and an opportunity to develop a relationship with their market base. Despite these evident benefits, universities have been cautious entrants in the crowdfunding space and, more generally, in the emerging ‘collaborative economy’ (Owyang, 2013). There are many cultural and institutional legacies that might explain this reluctance. For example, to date universities have achieved social (and economic) distinction by refining a set of exclusionary practices including, but not limited to, versions of gatekeeping, ranking and credentialing. These practices are reproduced in the expected behaviors of individual academics, who garner social currency and status as experts, legislators and interpreters (Osborne, 2004: 435). Digitalization and the emergent knowledge and collaboration economies have the potential to disrupt the academy’s traditional appeals to distinction and to re-engage universities and academics with their public stakeholders. This chapter examines some of the challenges and benefits arising from public micro-funding of university-based research initiatives during a period of industrial transition in the university sector. Broadly, then, this chapter asks: what does scholarship mean in a digital ecosystem where sociality (rather than traditional systems for assessing academic merit) affords research opportunity and success? How might university research be rethought in a networked world where personal and professional identities are blurred?
What happens when scholars adopt the same pathways as non-scholars for knowledge discovery, development and dissemination, through emerging practices such as crowdfunding? These issues are discussed through a detailed exploration of a successful pilot project to crowdfund university research: Research My World. This project, a collaboration between Deakin University and the crowdfunding platform pozible.com, set out to secure new sources of funding for the ‘long-tail’ of academic research. More generally, it aimed to improve the digital capacity of the participating researchers and to create new opportunities for public engagement for the researchers themselves as well as the university. We examine how crowdfunding and social media platforms alter academic effort (the dis-intermediation or re-intermediation of research funding, reduction of the compliance burden, opportunities for market validation and so on), as well as the particular workflows of scholarly researchers themselves (improvements in “digital presence-building”, provision of cheap alternative funding, opportunities to crowdsource non-academic knowledge). In addressing these questions, the chapter explores the influence that crowdfunding campaigns have in transforming contemporary academic practices across a range of disciplinary instances, providing the basis for a new form of engagement-led research. To support our analysis, we provide an overview of the initiative through quantitative analysis of a dataset generated by the first iteration of Research My World projects.

Relevance:

100.00%

Publisher:

Abstract:

The rise of mobile technologies in recent years has produced large volumes of location information, a valuable resource for knowledge discovery tasks such as travel pattern mining and traffic analysis. However, location datasets raise serious privacy concerns, because adversaries may re-identify a user and his/her sensitive information from these datasets with only a little background knowledge. Several privacy-preserving techniques have recently been proposed to address the problem, but most of them lack a strict privacy notion and can hardly resist the range of possible attacks. This paper proposes a private release algorithm that randomizes a location dataset under a strict privacy notion, differential privacy, with the goal of preserving users’ identities and sensitive information. The algorithm masks the exact locations of each user, as well as the frequency with which the user visits those locations, within a given privacy budget. It includes three privacy-preserving operations: private location clustering shrinks the randomized domain, cluster weight perturbation hides the weights of locations, and private location selection hides the exact locations of a user. Theoretical analysis of privacy and utility confirms an improved trade-off between the privacy and utility of the released location data. Extensive experiments have been carried out on four real-world datasets: GeoLife, Flickr, Div400 and Instagram. The experimental results further suggest that this private release algorithm successfully retains the utility of the datasets while preserving users’ privacy.
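The full algorithm involves clustering and selection steps not sketched here, but the standard differential-privacy building block it rests on is Laplace-noise perturbation of counts. Below is a hedged sketch applied to hypothetical visit counts; the data, epsilon, sensitivity and seed are all assumptions, not the paper's:

```python
import math
import random

# Laplace mechanism sketch: add noise of scale sensitivity/epsilon to each
# location visit count. Counts, epsilon=1.0 and sensitivity=1 (one visit
# changes one count by at most one) are illustrative assumptions.

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_counts(counts, epsilon, sensitivity=1.0, seed=0):
    """Return a differentially private copy of per-location visit counts."""
    rng = random.Random(seed)  # fixed seed for reproducibility of the demo
    scale = sensitivity / epsilon
    return {loc: c + laplace_noise(scale, rng) for loc, c in counts.items()}

visits = {"home": 30, "office": 22, "gym": 5}  # hypothetical user data
noisy = private_counts(visits, epsilon=1.0)
print({loc: round(v, 2) for loc, v in noisy.items()})
```

A smaller epsilon (tighter privacy budget) increases the noise scale, trading utility for privacy, which is exactly the trade-off the paper's analysis quantifies.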

Relevance:

100.00%

Publisher:

Abstract:

In this paper we describe a novel framework for discovering the topical content of a data corpus and tracking its complex structural changes across the temporal dimension. In contrast to previous work, our model neither imposes a prior on the rate at which documents are added to the corpus nor adopts the Markovian assumption, which overly restricts the types of change a model can capture. Our key technical contribution is a framework based on (i) discretization of time into epochs, (ii) epoch-wise topic discovery using a hierarchical Dirichlet process-based model, and (iii) a temporal similarity graph that allows complex topic changes to be modelled: emergence and disappearance, evolution, splitting and merging. The power of the proposed framework is demonstrated on the medical literature corpus concerned with autism spectrum disorder (ASD), an increasingly important research subject of significant social and healthcare relevance. In addition to the collected ASD literature corpus, which we have made freely available, our contributions include two free online tools built as aids for ASD researchers. These can be used for semantically meaningful navigation and searching, as well as for knowledge discovery from this large and rapidly growing corpus of literature.
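As a toy sketch of the temporal similarity graph idea (the topics below are hand-made word-weight dictionaries; the real framework discovers them with a hierarchical Dirichlet process), topics in adjacent epochs can be linked whenever the cosine similarity of their word distributions exceeds a threshold:

```python
import math

# Toy temporal similarity graph: link topic i of one epoch to topic j of
# the next when their word distributions are similar enough. The topics,
# word weights and the 0.5 threshold are illustrative assumptions.

def cosine(t1, t2):
    """Cosine similarity between two sparse word-weight dicts."""
    words = set(t1) | set(t2)
    dot = sum(t1.get(w, 0.0) * t2.get(w, 0.0) for w in words)
    n1 = math.sqrt(sum(v * v for v in t1.values()))
    n2 = math.sqrt(sum(v * v for v in t2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def link_epochs(epoch_a, epoch_b, threshold=0.5):
    """Edges (i, j) between similar topics of two consecutive epochs."""
    return [(i, j) for i, ta in enumerate(epoch_a)
                   for j, tb in enumerate(epoch_b)
                   if cosine(ta, tb) >= threshold]

epoch1 = [{"autism": 0.6, "diagnosis": 0.4}, {"gene": 0.7, "risk": 0.3}]
epoch2 = [{"autism": 0.5, "intervention": 0.5}, {"gene": 0.8, "variant": 0.2}]
print(link_epochs(epoch1, epoch2))  # [(0, 0), (1, 1)]
```

On such a graph, a topic with no outgoing edge has disappeared, one with no incoming edge has emerged, and multiple edges in or out correspond to merging or splitting, the change types the paper models.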