42 results for Data-driven knowledge acquisition
Abstract:
This paper demonstrates that the conventional approach of using official liberalisation dates as the only existing breakdates can lead to inaccurate conclusions about the effect of the underlying liberalisation policies. It also proposes an alternative paradigm for obtaining more robust estimates of volatility changes around official liberalisation dates and/or other important market events. Focusing on five East Asian emerging markets, all of which liberalised their financial markets, and using recent advances in the econometrics of structural change, it shows that (i) the detected breakdates in the volatility of stock market returns can be dramatically different from official liberalisation dates and (ii) the use of official liberalisation dates as breakdates can readily entail inaccurate inference. In contrast, the use of data-driven techniques for the detection of multiple structural changes reveals a richer and, inevitably, more accurate pattern of volatility evolution than focussing on official liberalisation dates alone.
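The data-driven break detection the abstract appeals to can be illustrated with a deliberately simplified single-break least-squares search, in the spirit of (but far simpler than) the multiple-break procedures in the structural-change literature. The simulated series, the variance values, and the `detect_break` helper are illustrative assumptions, not the paper's method:

```python
import random

def detect_break(series, min_seg=5):
    """Return the index k that minimises the total sum of squared
    deviations of the two segments series[:k] and series[k:] from
    their own means (a one-break least-squares search)."""
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    best_k, best_cost = None, float("inf")
    for k in range(min_seg, len(series) - min_seg):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Simulated daily returns whose volatility triples at t = 100
random.seed(0)
returns = [random.gauss(0, 1) for _ in range(100)] + \
          [random.gauss(0, 3) for _ in range(100)]

# A variance break in returns is a mean break in squared returns
k = detect_break([r * r for r in returns])
```

Applying the search to squared returns turns the variance break into a mean break, which is why the detected index lands near the simulated change point rather than at any externally imposed date.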
Abstract:
This paper investigates whether the non-normality typically observed in daily stock-market returns could arise because of the joint existence of breaks and GARCH effects. It proposes a data-driven procedure to credibly identify the number and timing of breaks and applies it to the benchmark stock-market indices of 27 OECD countries. The findings suggest that a substantial element of the observed deviations from normality might indeed be due to the co-existence of breaks and GARCH effects. However, the presence of structural changes is found to be the primary reason for the non-normality, not the GARCH effects. Also, there is still some remaining excess kurtosis that is unlikely to be linked to the specification of the conditional volatility or the presence of breaks. Finally, an interesting sideline result implies that GARCH models have limited capacity in forecasting stock-market volatility.
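The abstract's central mechanism, that variance breaks alone can generate fat tails, is easy to reproduce: pooling two Gaussian regimes with different variances yields pronounced excess kurtosis even though each regime is normal. The snippet below is a minimal sketch of that intuition, with arbitrary sample sizes and variances, and is not the paper's identification procedure:

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: E[(x - mu)^4] / sigma^4 - 3 (zero for a normal)."""
    mu = statistics.fmean(xs)
    var = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / var ** 2 - 3

random.seed(1)
# One normal regime: excess kurtosis close to zero
one_regime = [random.gauss(0, 1) for _ in range(20000)]
# Two regimes separated by a variance break, then pooled: fat tails appear
two_regimes = [random.gauss(0, 1) for _ in range(10000)] + \
              [random.gauss(0, 3) for _ in range(10000)]
```

For this half-and-half mixture of variances 1 and 9, the population excess kurtosis is 123/25 - 3 ≈ 1.92, so the pooled sample looks strongly non-normal despite containing no GARCH effects at all.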
Abstract:
Failure to detect or account for structural changes in economic modelling can lead to misleading policy inferences, which can be perilous, especially for the more fragile economies of developing countries. Using three potential monetary policy instruments (Money Base, M0, and Reserve Money) for 13 member states of the CFA Franc zone over the period 1989:11-2002:09, we investigate how much additional information is extracted by employing data-driven techniques to detect breaks in the time series, rather than following the simplifying practice of imposing policy implementation dates as break dates. The paper also tests Granger's (1980) aggregation theory and highlights some policy implications of the results.
Abstract:
This article focuses on the deviations from normality of stock returns before and after a financial liberalisation reform, and shows the extent to which inference based on statistical measures of stock market efficiency can be affected by not controlling for breaks. Drawing on recent advances in the econometrics of structural change, it compares the distribution of the returns of five East Asian emerging markets when breaks in the mean and variance are either (i) imposed using certain official liberalisation dates or (ii) detected non-parametrically using a data-driven procedure. The results suggest that measuring deviations from normality of stock returns with no provision for potentially existing breaks introduces substantial bias. This is likely to severely affect any inference based on the corresponding descriptive or test statistics.
Abstract:
The realization of the Semantic Web is constrained by a knowledge acquisition bottleneck, i.e. the problem of how to add RDF mark-up to the millions of ordinary web pages that already exist. Information Extraction (IE) has been proposed as a solution to the annotation bottleneck. In the task-based evaluation reported here, we compared the performance of users without access to annotation, users working with annotations which had been produced from manually constructed knowledge bases, and users working with annotations augmented using IE. We looked at retrieval performance, overlap between retrieved items and the two sets of annotations, and usage of annotation options. Automatically generated annotations were found to add value to the browsing experience in the scenario investigated. Copyright 2005 ACM.
Abstract:
The ability to identify early failure in knowledge acquisition amongst students is important because it enables tutors to put in place suitable interventions to help struggling students. We hypothesised that if a reflective learning journal is a useful learning tool, there ought to be a relationship between the type of journal entries and the depth of knowledge acquisition. Our research question is: can reflective journals be used to identify struggling students? Previous work with reflective journals has not related the level of reflection to the module outcomes obtained by the student. In our study, we classified journal entries written by first-year students in a foundational programming module according to the SOLO taxonomy and compared this classification against the outcomes of two module assessments. Our results suggest that there is potential for using reflective journals to identify struggling students in first-year programming.
Abstract:
This paper presents a novel intonation modelling approach and demonstrates its applicability using the Standard Yorùbá language. Our approach is motivated by the theory that abstract and realised forms of intonation and other dimensions of prosody should be modelled within a modular and unified framework. In our model, this framework is implemented using the Relational Tree (R-Tree) technique. The R-Tree is a sophisticated data structure for representing a multi-dimensional waveform in the form of a tree. Our R-Tree for an utterance is generated in two steps. First, the abstract structure of the waveform, called the Skeletal Tree (S-Tree), is generated using tone phonological rules for the target language. Second, the numerical values of the perceptually significant peaks and valleys on the S-Tree are computed using a fuzzy-logic-based model. The resulting points are then joined by applying interpolation techniques. The actual intonation contour is synthesised using the Pitch-Synchronous Overlap-Add (PSOLA) technique in the Praat software. We performed both quantitative and qualitative evaluations of our model. The preliminary results suggest that, although the model does not predict the numerical speech data as accurately as contemporary data-driven approaches, it produces synthetic speech with comparable intelligibility and naturalness. Furthermore, our model is easy to implement, interpret and adapt to other tone languages.
Abstract:
Age-related macular degeneration (AMD) is the leading cause of visual impairment in older adults in the United Kingdom. This study sought to characterise AMD patients who seek the services of the Macular Society, and to determine the level and source of their dietary knowledge. A questionnaire was designed, validated, and administered to 158 participants. The questions covered demographic data and knowledge of nutrition and supplementation. The mean age of participants was 79 years; 61% of them were female, and 27% were registered visually impaired. Only 55% of the participants thought diet was important for eye health, and 63% felt that they had not received enough information about AMD. The participants reported that their information mainly came from non-professional support groups. Most participants identified healthy food, but could not say why, and were not able to identify carotenoid-rich foods. The results of the study will inform the design of education and dissemination methods regarding dietary information. © The Author(s) 2014.
Abstract:
As torrents of new data now emerge from microbial genomics, bioinformatic prediction of immunogenic epitopes remains challenging but vital. In silico methods often produce paradoxically inconsistent results: good prediction rates on certain test sets but not others. The inherent complexity of immune presentation and recognition processes complicates epitope prediction. Two encouraging developments – data-driven, sequence-based artificial intelligence methods for epitope prediction and molecular modeling methods based on three-dimensional protein structures – offer hope for the future.
Abstract:
This study presents a two-stage process to determine suitable areas for growing fuel crops: i) the FAO Agro-Ecological Zones (AEZ) procedure is applied to four Indian states with different geographical characteristics; and ii) the growth of candidate crops is modelled with the GEPIC water and nutrient model, which is used to determine their potential yield in areas where irrigation water is brackish or the soil is saline. The absence of digital soil maps, the paucity of readily available climate data, and limited knowledge of the detailed requirements of candidate crops are some of the major problems; a series of detailed maps will help evaluate the true potential of biofuels in India.
Abstract:
This paper explores a data-driven approach called Sales Resource Management (SRM) that can provide real insight into sales management. The DSMT (Diagnosis, Strategy, Metrics and Tools) framework can be used to solve field sales management challenges. This paper focuses on the 6P's strategy of SRM and illustrates how to use them to solve the CAPS (Concentration, Attrition, Performance and Spend) challenges. © 2010 IEEE.
Abstract:
The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver of their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
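As a rough illustration of the kind of runtime constraint checking described above, the snippet below replaces the paper's SPARQL queries and SPIN rules with named Python predicates over an EPCIS-style event dictionary; the field names, the rule set, and the `validate_event` helper are hypothetical stand-ins, not the actual framework:

```python
# Hypothetical stand-in for SPARQL/SPIN constraints: each rule is a
# named predicate over an EPCIS-like event dictionary.
REQUIRED_FIELDS = {"eventTime", "epcList", "bizStep", "readPoint"}

RULES = {
    "completeness": lambda e: REQUIRED_FIELDS <= e.keys(),
    "non_empty_epcs": lambda e: bool(e.get("epcList")),
    "known_biz_step": lambda e: e.get("bizStep") in {"shipping", "receiving", "commissioning"},
}

def validate_event(event):
    """Return the names of the constraints the event violates."""
    return [name for name, rule in RULES.items() if not rule(event)]

# An incoming event that satisfies every rule
event = {
    "eventTime": "2014-03-01T10:15:00Z",
    "epcList": ["urn:epc:id:sgtin:0614141.107346.2017"],
    "bizStep": "shipping",
    "readPoint": "urn:epc:id:sgln:0614141.00777.0",
}
violations = validate_event(event)  # empty list: the event passes
```

A real deployment would evaluate the corresponding SPARQL/SPIN constraints inside the Storm topology as events arrive; the point here is only the shape of the per-event completeness and consistency checks.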