78 results for Data-Information-Knowledge Chain


Relevance:

40.00%

Publisher:

Abstract:

In recent years, the area of data mining has experienced considerable demand for technologies that extract knowledge from large and complex data sources. There has been substantial commercial interest, as well as active research, aimed at developing new and improved approaches for extracting information, relationships, and patterns from large datasets. Artificial neural networks (NNs) are popular biologically inspired intelligent methodologies whose classification, prediction, and pattern recognition capabilities have been used successfully in many areas, including science, engineering, medicine, business, banking, and telecommunications, among other fields. This paper highlights, from a data mining perspective, the implementation of NNs using supervised and unsupervised learning for pattern recognition, classification, prediction, and cluster analysis, and focuses the discussion on their usage in bioinformatics and financial data analysis tasks. © 2012 Wiley Periodicals, Inc.
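To make the supervised/unsupervised distinction the abstract draws more concrete, here is a minimal sketch, assuming scikit-learn and synthetic data rather than the datasets surveyed in the paper: a small multilayer perceptron for classification/prediction and k-means for cluster analysis.

```python
# Minimal sketch of supervised (classification) and unsupervised (clustering)
# uses of neural-network-style data mining; assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans

# Synthetic stand-in for a bioinformatics or financial dataset.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: a multilayer perceptron for classification/prediction.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: cluster analysis on the unlabelled features.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [(clusters == k).sum() for k in (0, 1)])
```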

Relevance:

40.00%

Publisher:

Abstract:

Data from civil engineering projects can inform the operation of built infrastructure. This paper captures lessons for such data handover, from projects into operations, through interviews with leading clients and their supply chain. Clients are found to value receiving accurate and complete data. They recognise opportunities to use high-quality information in decision-making about capital and operational expenditure, as well as in ensuring compliance with regulatory requirements. Providing this value to clients is a motivation for information management in projects. However, data handover is difficult because key people leave before project completion and because different data formats and structures are used in project delivery and operations. Lessons learnt from leading practice include defining data requirements at the outset, getting operations teams involved early, shaping the evolution of interoperable systems and standards, developing handover processes that check data rather than documentation, and fostering skills to use and update project data in operations.

Relevance:

40.00%

Publisher:

Abstract:

Good information and career guidance about what post-compulsory educational routes are available, and where these routes lead, is important in ensuring that young people make choices that are most appropriate to their needs and aspirations. Yet the Association of School and College Leaders (2011) expresses fears that future provision will be inadequate. This paper reports the findings from an online survey of 300 secondary school teachers, and follow-up telephone interviews with 18 of them in the South East of England, which explored teachers' experiences of delivering post-compulsory educational and career guidance and their knowledge and confidence in doing so. Results suggest that teachers lack confidence in delivering information, advice and guidance outside their own area of specialism and experience. In particular, teachers knew little about alternative local provision of post-16 education and lacked knowledge of more non-traditional, vocational routes. This paper therefore raises important policy considerations with respect to supporting teachers' knowledge, ability and confidence in delivering information about future pathways and career guidance.

Relevance:

40.00%

Publisher:

Abstract:

Sri Lanka's participation rates in higher education are low and have risen only slightly in the last few decades; the state university system provides places for only around 3% of the university entrant age cohort. The literature reveals that the highly competitive global knowledge economy increasingly favours workers with high levels of education who are also lifelong learners. This lack of access to higher education for a sizable proportion of the labour force is identified as a severe impediment to Sri Lanka's competitiveness in the global knowledge economy. The literature also suggests that Information and Communication Technologies (ICTs) are increasingly relied upon in many contexts to deliver flexible learning, catering especially for the needs of lifelong learners in today's higher educational landscape. The government of Sri Lanka invested heavily in ICTs for distance education during the period 2003-2009 in a bid to increase access to higher education, but there has been little research into the impact of this. To address this gap, this study investigated the impact of ICTs on distance education in Sri Lanka with respect to increasing access to higher education. The research examined Sri Lanka's effort from three perspectives: policy, implementation and user. A multiple-case study using an ethnographic approach was conducted to observe Orange Valley University's and Yellow Fields University's (pseudonymous) implementation of distance education programmes, using questionnaires, qualitative interviewing and document analysis. In total, data were collected from 129 questionnaires, 33 individual interviews and 2 group interviews. The research revealed that ICTs have indeed increased opportunities for higher education, but mainly for people from affluent families in the Western Province. The issues identified were categorized under the themes of quality assurance, location, language, digital literacies and access to resources, and recommendations were offered to tackle them in accordance with the study findings. The study also revealed the strong presence of a multifaceted digital divide in the country. In conclusion, this research has shown that although ICT-enabled distance education has the potential to increase access to higher education, the present implementation of the system in Sri Lanka has been less than successful.

Relevance:

40.00%

Publisher:

Abstract:

Ensemble-based data assimilation is rapidly proving itself as a computationally efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. This paper presents a data selection strategy, usable with both stochastic and deterministic ensemble filters, that aims to assimilate only the observational components that matter most. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias into the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains, which may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model, using both in situ and remote sounding observations, are discussed.
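A minimal numerical sketch, using plain NumPy and not the paper's own filter or selection strategy, of why ensemble increments are confined to a low-dimensional subspace: in a stochastic EnKF update the increments are linear combinations of the ensemble perturbations, so their rank cannot exceed the ensemble size minus one, however many observations are assimilated.

```python
# Toy stochastic EnKF update illustrating the rank limit of ensemble increments.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens, n_obs = 100, 10, 40

Xb = rng.normal(size=(n_state, n_ens))          # background ensemble
H = np.eye(n_obs, n_state)                      # observe the first n_obs state components
R = 0.5 * np.eye(n_obs)                         # observation-error covariance
y = rng.normal(size=n_obs)                      # observations

Xp = Xb - Xb.mean(axis=1, keepdims=True)        # ensemble perturbations
Pb = Xp @ Xp.T / (n_ens - 1)                    # sample background covariance (rank <= n_ens - 1)
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)  # Kalman gain
Yp = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = Xb + K @ (Yp - H @ Xb)                     # analysis ensemble (perturbed observations)

# Increments lie in the span of the perturbations: rank bounded by n_ens - 1.
increments = Xa - Xb
print("increment rank:", np.linalg.matrix_rank(increments), "<= n_ens - 1 =", n_ens - 1)
```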

Relevance:

40.00%

Publisher:

Abstract:

Much recent research in SLA is guided by the hypothesis of L2 interface vulnerability (see Sorace 2005). This study contributes to this general project by examining the acquisition of two classes of subjunctive complement clauses in L2 Spanish: subjunctive complements of volitional predicates (purely syntactic) and subjunctive vs. indicative complements with negated epistemic matrix predicates, where the mood distinction is discourse dependent (thus involving the syntax-discourse interface). We provide an analysis of the volitional subjunctive in English and Spanish, suggesting that English learners of L2 Spanish need to access the functional projection Mood P and an uninterpretable modal feature on the Force head available to them from their formal English register grammar, and simultaneously must unacquire the structure of English for-to clauses. For negated epistemic predicates, our analysis maintains that they need to revalue the modal feature on the Force head from uninterpretable to interpretable within the L2 grammar. With others (e.g. Borgonovo & Prévost 2003; Borgonovo, Bruhn de Garavito & Prévost 2005) and in line with Sorace's (2000, 2003, 2005) notion of interface vulnerability, we maintain that the latter case is more difficult for L2 learners, which is borne out in the data we present. However, the data also show that the indicative/subjunctive distinction with negated epistemics can be acquired by advanced stages of acquisition, questioning the notion of obligatory residual optionality for all properties which require the integration of syntactic and discourse information.

Relevance:

40.00%

Publisher:

Abstract:

Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
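The paper's pipeline is implemented as an R function; the following Python sketch, on simulated intensities, only illustrates the kind of log2 transformation, per-array normalisation and inter-array SD estimate the abstract refers to.

```python
# Illustrative sketch (not the authors' R function) of transform/normalise/SD steps.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_replicate_arrays = 2000, 4

# Simulated raw intensities for replicate hybridisations of one reference sample.
true_expr = rng.lognormal(mean=6.0, sigma=1.5, size=n_genes)
raw = true_expr[:, None] * rng.lognormal(mean=0.0, sigma=0.3, size=(n_genes, n_replicate_arrays))

# Log2-transform and median-centre each array (a simple normalisation step).
log2 = np.log2(raw)
normalised = log2 - np.median(log2, axis=0, keepdims=True)

# Inter-array variability: per-gene SD across replicate arrays, summarised overall.
per_gene_sd = normalised.std(axis=1, ddof=1)
print(f"median inter-array SD: {np.median(per_gene_sd):.2f} log2 units")
```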

Relevance:

40.00%

Publisher:

Abstract:

Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of data provenance to the confines of the related publications. Detailed knowledge of a dataset's provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. Determining the authenticity and quality of open-access data is increasingly important, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish the detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data-compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
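A schematic sketch of the linking idea described above, not the authors' implementation: a minimal aggregation record, loosely inspired by OAI-ORE, tying a citable DOI to the dataset, the workflow that produced it and the related publication. All identifiers below are hypothetical placeholders.

```python
# Build and print a minimal aggregation/provenance record as plain JSON.
import json

record = {
    "aggregation_doi": "10.0000/example-dataset-doi",       # hypothetical DOI
    "aggregates": [
        {"role": "dataset",     "uri": "https://example.org/data/archive.nc"},
        {"role": "workflow",    "uri": "https://example.org/workflow/run-42"},
        {"role": "publication", "uri": "https://doi.org/10.0000/example-paper"},
    ],
    "provenance": {
        "derived_from": ["https://example.org/data/raw-observations"],
        "generated_by": "https://example.org/workflow/run-42",
    },
}

print(json.dumps(record, indent=2))
```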

Relevance:

40.00%

Publisher:

Abstract:

To overcome divergent estimates produced from the same data, the proposed digital costing process adopts an integrated information system design in which process knowledge and the costing system are designed together. By employing and extending a widely used international standard, Industry Foundation Classes (IFC), the system provides an integrated process that can harvest the information and knowledge embedded in current quantity surveying practice, covering both costing methods and data. Knowledge of quantification is encoded from the literature, a motivating case and standards, and can reduce the time consumed by current manual practice. Further development will represent the pricing process using a Bayesian-network-based knowledge representation approach. This hybrid form of knowledge representation can produce reliable estimates for construction projects. In practical terms, knowledge management in quantity surveying can improve construction estimating systems. The theoretical significance of this study lies in the fact that its content and conclusions make it possible to develop an automatic estimation system based on a hybrid knowledge representation approach.
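As an illustration only of the Bayesian-network idea mentioned above (the network structure, states and probabilities below are invented, not taken from the paper), a toy pricing node conditioned on market state can be marginalised to give an expected unit rate.

```python
# Toy two-node Bayesian network for pricing: market condition -> unit rate.
markets = ["weak", "strong"]
rates = [80.0, 100.0, 120.0]                 # hypothetical unit rates (currency per m2)

p_market = {"weak": 0.4, "strong": 0.6}      # prior over market condition
p_rate_given_market = {                      # CPT: P(rate | market), invented numbers
    "weak":   {80.0: 0.5, 100.0: 0.4, 120.0: 0.1},
    "strong": {80.0: 0.1, 100.0: 0.4, 120.0: 0.5},
}

# Marginalise out the market condition to get P(rate), then its expectation.
p_rate = {r: sum(p_market[m] * p_rate_given_market[m][r] for m in markets) for r in rates}
expected_rate = sum(r * p for r, p in p_rate.items())
print("P(rate):", p_rate)
print("expected unit rate:", expected_rate)
```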

Relevance:

40.00%

Publisher:

Abstract:

Purpose: This paper aims to fill a research and knowledge gap in knowledge management studies in Ghana. Knowledge acquisition is one of the unexploited areas in the knowledge management literature, especially in the Ghanaian context. This study seeks to ascertain the factors affecting knowledge acquisition in Ghanaian universities.

Design/methodology/approach: The study used a quantitative approach, with a cross-sectional survey adopted as the research design. A questionnaire consisting of Likert-scale questions, with items and constructs derived from the extant literature, was used to collect data. The questionnaire was sent to 350 respondents, of whom 250 returned it fully completed. Data were analysed quantitatively using descriptive methods and factor analysis.

Findings: The study provides empirical evidence about the factors affecting knowledge acquisition in Ghanaian universities. Findings show that programme content, lecturers' competence, students' academic background and attitude, and facilities for teaching and learning influence knowledge acquisition in Ghanaian universities.

Research limitations/implications: Although the study seeks to generalize its findings, this should be done cautiously, as some scholars have advocated larger sample sizes; nonetheless, some studies have used smaller samples than the one used here.

Practical implications: The study notes the need for Ghanaian universities to use modern facilities and infrastructure, such as electronic libraries and information technology equipment, and to provide reading rooms to enhance teaching and learning.

Originality/value: Studies looking at knowledge acquisition in Ghanaian universities are virtually non-existent, and this study provides empirical findings on the factors affecting it.
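A minimal sketch of the kind of analysis reported, assuming scikit-learn and simulated Likert-scale responses rather than the study's questionnaire data: descriptive item statistics followed by factor extraction.

```python
# Descriptive statistics and factor analysis on simulated 5-point Likert responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_respondents, n_items, n_factors = 250, 12, 4   # e.g. 250 completed questionnaires

# Responses driven by a few latent factors, rounded and clipped to the 1-5 scale.
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
responses = np.clip(np.round(3 + latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))), 1, 5)

# Descriptive summary, then factor extraction.
print("item means:", responses.mean(axis=0).round(2))
fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(responses)
print("loadings matrix shape (factors x items):", fa.components_.shape)
```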

Relevance:

40.00%

Publisher:

Abstract:

This paper describes an application of Social Network Analysis methods for the identification of knowledge demands in public organisations. Affiliation networks established in a postgraduate programme were analysed. The course was delivered in a distance education mode and its students worked in public agencies. Relations established among course participants were mediated through a Moodle virtual learning environment, and the data available in Moodle may be extracted using knowledge discovery in databases techniques. Potential degrees of closeness among different organisations and among the subjects researched were assessed. This suggests how organisations could cooperate on knowledge management and how their common interests can be identified. The study points out that closeness among organisations and research topics may be assessed through affiliation networks, which opens up opportunities for applying inter-organisational knowledge management and creating communities of practice. Concepts of knowledge management and social network analysis provide the theoretical and methodological basis.
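A minimal sketch of the affiliation-network idea, with invented course data and assuming networkx: organisations are linked to the research topics of their participants, and a one-mode projection weights organisation pairs by shared topics as a simple proxy for closeness.

```python
# Bipartite affiliation network (agencies x topics) and its one-mode projection.
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical affiliations: which agency's participants worked on which topic.
edges = [
    ("Agency A", "knowledge management"),
    ("Agency A", "e-government"),
    ("Agency B", "knowledge management"),
    ("Agency B", "open data"),
    ("Agency C", "e-government"),
]
agencies = {a for a, _ in edges}

B = nx.Graph()
B.add_nodes_from(agencies, bipartite=0)
B.add_nodes_from({t for _, t in edges}, bipartite=1)
B.add_edges_from(edges)

# Project onto agencies: edge weight = number of shared topics (closeness proxy).
projection = bipartite.weighted_projected_graph(B, agencies)
for u, v, data in projection.edges(data=True):
    print(u, "--", v, "shared topics:", data["weight"])
```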

Relevance:

40.00%

Publisher:

Abstract:

As we enter an era of ‘big data’, asset information is becoming a deliverable of complex projects. Prior research suggests digital technologies enable rapid, flexible forms of project organizing. This research analyses practices of managing change in Airbus, CERN and Crossrail through desk-based review, interviews, visits and a cross-case workshop. These organizations deliver complex projects, rely on digital technologies to manage large datasets, and use configuration management, a systems engineering approach with mid-20th-century origins, to establish and maintain integrity. In them, configuration management has become more, rather than less, important. Asset information is structured, with change managed through digital systems using relatively hierarchical, asynchronous and sequential processes. The paper contributes by uncovering limits to flexibility in complex projects where integrity is important. Challenges of managing change are discussed, considering the evolving nature of configuration management, the potential use of analytics on complex projects, and implications for research and practice.

Relevance:

40.00%

Publisher:

Abstract:

Globalization, either directly or indirectly (e.g. through structural adjustment reforms), has called for profound changes in the previously existing institutional order. Some changes adversely affected the production and market environment of many coffee producers in developing countries, resulting in more risky and less remunerative coffee transactions. This paper focuses on the customization of a tropical commodity, fair-trade coffee (FTC), as an approach to mitigating the effects of worsened market conditions for small-scale coffee producers in less developed countries (LDCs). Fair-trade labeling is viewed as a form of “de-commodification” of coffee through product differentiation on ethical grounds. This is significant not only as a solution to the market failure caused by pervasive information asymmetries along the supply chain, but also as a means of revitalizing the agricultural-commodity-based trade of LDCs, which has been languishing under globalization. More specifically, fair trade is an example of how the same strategy adopted by developed countries' producers and processors (i.e. the sequence of product differentiation, institutional certification and advertisement) can be used by LDC producers to increase the reputation content of their outputs by transforming them from mere commodities into “de-commodified” (i.e. customized and more reputed) goods. The resulting segmentation of the world coffee market makes it possible to meet the demand of consumers with a preference for this “(ethically) customized” coffee and to transfer a share of the accruing economic rents backward to fair-trade coffee producers in LDCs. It should, however, be stressed that this outcome cannot be taken for granted, since investments are needed to promote the required institutional innovations. In Italy, FTC is a niche market with very few private brands selling this product; however, an increase in FTC market share could be a big commercial opportunity for farmers in LDCs and for other economic agents involved along the international coffee chain. Hence, this research explores consumers' knowledge of labels promoting quality products, coffee consumption habits, brand loyalty, willingness to pay, and market segmentation according to the heterogeneity of preferences for coffee products. The latter was assessed by developing a D-efficient design, with stimulus refinement tested during two focus groups.
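As a small numerical illustration of the D-efficiency criterion behind the "D-efficient design" mentioned above (the candidate designs are invented, not the study's stimuli), designs can be compared via the determinant of the information matrix X'X.

```python
# Compare two small 2-level designs by their D-efficiency.
import numpy as np

def d_efficiency(X: np.ndarray) -> float:
    """D-efficiency (in %) of a design matrix X with N runs and p coded factors."""
    n, p = X.shape
    return 100.0 * np.linalg.det(X.T @ X) ** (1.0 / p) / n

# Coded (-1/+1) factor levels: a balanced factorial fragment vs. a less balanced one.
balanced = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
unbalanced = np.array([[1, 1], [1, 1], [1, -1], [-1, -1]], dtype=float)

print("balanced   D-efficiency:", round(d_efficiency(balanced), 1))    # 100.0
print("unbalanced D-efficiency:", round(d_efficiency(unbalanced), 1))  # ~86.6
```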

Relevance:

40.00%

Publisher:

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption is growing. Statistical methods, machine learning and data mining algorithms have been adopted successfully to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires the adoption of pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
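A sketch of the kind of import step K-Surfer automates, reading per-region morphometric values into a table ready for downstream mining; the inline data and column names are invented, and the real FreeSurfer output format is assumed rather than reproduced here. Assumes pandas.

```python
# Parse a small whitespace-delimited, comment-prefixed regional stats table.
import io
import pandas as pd

stats_text = """# hypothetical per-subject regional measures (illustrative only)
region volume_mm3
Left-Hippocampus 4100
Right-Hippocampus 4300
Left-Amygdala 1500
"""

table = pd.read_csv(io.StringIO(stats_text), comment="#", sep=r"\s+")
features = table.set_index("region")   # one row per region, ready for a mining workflow
print(features)
```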

Relevance:

40.00%

Publisher:

Abstract:

16th IFIP WG8.1 International Conference on Informatics and Semiotics in Organisations, ICISO 2015