49 results for semi-autonomous information retrieval
Abstract:
School reform is a major concern in many countries that seek to improve their educational systems and enhance their performance. Consequently, many global schemes, theories, studies, initiatives, and programmes have been introduced in recent years to promote education. Saudi Arabia is one of the countries that has implemented educational change through numerous initiatives. The Tatweer Programme is one of these initiatives and is considered a major recent reform. The main purpose of this study is to investigate this reform in depth by examining the perceptions and experiences of Tatweer leaders and teachers, to find out to what extent they have been enabled to be innovative, and to examine the types of leadership and decision-making undertaken in such schools. The study adopted a qualitative case study design employing interviews, focus groups and documentary analysis, and was divided into two phases: a feasibility study followed by the main study. The sample for the feasibility study comprised head teachers, educational experts and Tatweer Unit members; the sample for the main study comprised three Tatweer schools, Tatweer Unit members and one official of the Tatweer Project in Riyadh. The findings identified the level of autonomy in managing the school: the Tatweer schools' system is semi-autonomous in its internal management, but it lacks autonomy in staff appointment, student assessment, and curriculum development. In addition, managerial work has been distributed across teams and members; the Excellence Team plays a critical role in school effectiveness, leading change efficiently. Moreover, Professional Learning Communities have been used to enhance the work within Tatweer schools. Finally, the findings show that there have been major shifts in the Tatweer schools' system: from centralisation to semi-decentralisation; from a culture of the individual to a culture of community; from the traditional school to one focused on self-evaluation and planning; from management to leadership; and from an isolated school to one open to society. These shifts have had a positive impact on the attitudes of students, parents and staff.
Abstract:
Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special effects video clips based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated in deployment in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper presents its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test bed partners' creative processes.
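The abstract does not disclose DREAM's actual ontology network or interfaces, so the following Python sketch is purely illustrative: the toy concept hierarchy (ONTOLOGY), the clip identifiers and the index_clip helper are all hypothetical names, showing only the general idea of indexing a clip under its concepts and their broader ontology terms so that retrieval by a broader concept still finds it.

```python
# Illustrative sketch only: DREAM's real ontology network and APIs are not
# shown in the abstract, so every name here is hypothetical.
ONTOLOGY = {  # child concept -> parent concept
    "explosion": "pyrotechnics",
    "pyrotechnics": "special_effect",
    "smoke": "special_effect",
}

def ancestors(concept):
    """Walk up the concept hierarchy, collecting broader terms."""
    chain = []
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        chain.append(concept)
    return chain

def index_clip(index, clip_id, concepts):
    """Index a clip under its concepts and all broader ontology terms."""
    for c in concepts:
        for term in [c, *ancestors(c)]:
            index.setdefault(term, set()).add(clip_id)

index = {}
index_clip(index, "clip_042", ["explosion"])
print(index["special_effect"])  # {'clip_042'} - found via a broader concept
```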
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually provide no knowledge about the semantic content of the data. With the increasing amount of multimedia content, this approach is becoming inefficient. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
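As a rough illustration of the annotation pipeline this abstract outlines, the sketch below substitutes a naive keyword counter for the real NLP-backed Automatic Labelling Engine and emits a minimal topic-map-style record; extract_topics, build_topic_map and the stop-word list are hypothetical stand-ins, not the project's API.

```python
# Hedged sketch: a toy stand-in for an NLP labelling pipeline that turns a
# clip transcript into topics plus pairwise associations, topic-map style.
from collections import Counter
import itertools
import re

def extract_topics(transcript, top_n=3):
    """Naive keyword extraction standing in for a real NLP pipeline."""
    words = re.findall(r"[a-z]+", transcript.lower())
    stop = {"the", "a", "of", "and", "in", "is", "with"}
    counts = Counter(w for w in words if w not in stop)
    return [w for w, _ in counts.most_common(top_n)]

def build_topic_map(clip_id, transcript):
    """Return topics and their associations for one media occurrence."""
    topics = extract_topics(transcript)
    associations = [(a, "co-occurs-with", b)
                    for a, b in itertools.combinations(topics, 2)]
    return {"occurrence": clip_id, "topics": topics,
            "associations": associations}

print(build_topic_map(
    "clip_007",
    "A slow explosion with thick smoke; the explosion scatters fire debris"))
```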
Abstract:
Mainframes, corporate and central servers are becoming information servers. The requirement for more powerful information servers is the best opportunity to exploit the potential of parallelism. ICL recognized the opportunity of the 'knowledge spectrum', namely to convert raw data into information and then into high-grade knowledge. Its response to this, and to the underlying search problems, was to introduce the CAFS retrieval engine. The CAFS product demonstrates that it is possible to move functionality within an established architecture, introduce a different technology mix and exploit parallelism to achieve radically new levels of performance. CAFS also demonstrates the benefit of achieving this transparently, behind existing interfaces. ICL is now working with Bull and Siemens to develop the information servers of the future by exploiting new technologies as they become available. The objective of the joint Esprit II European Declarative System project is to develop a smoothly scalable, highly parallel computer system, EDS. EDS will in the main be an SQL server and an information server. It will support the many data-intensive applications which the companies foresee; it will also support application-intensive and logic-intensive systems.
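CAFS performed this kind of filtering in dedicated hardware behind existing database interfaces; the sketch below only mimics the underlying idea in plain Python, scanning partitions of a table in parallel. The helper names and the toy table are hypothetical.

```python
# Illustrative sketch only: CAFS did this filtering in hardware behind the
# database interface; here the same data-parallel idea is mimicked in software.
from concurrent.futures import ProcessPoolExecutor

def scan_partition(args):
    """Filter one partition of records, as a CAFS-style search engine would."""
    records, field, value = args
    return [r for r in records if r.get(field) == value]

def parallel_select(records, field, value, workers=4):
    """Split the table into partitions and scan them in parallel."""
    step = max(1, len(records) // workers)
    chunks = [records[i:i + step] for i in range(0, len(records), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = pool.map(scan_partition, [(c, field, value) for c in chunks])
    return [r for chunk in hits for r in chunk]

if __name__ == "__main__":
    table = [{"id": i, "grade": "high" if i % 3 == 0 else "raw"}
             for i in range(12)]
    print(parallel_select(table, "grade", "high"))  # ids 0, 3, 6, 9
```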
Abstract:
The effect of a prior gist-based versus item-specific retrieval orientation on recognition of objects and words was examined. Prior item-specific retrieval increased item-specific recognition of episodically related but not previously tested objects relative to both conceptual- and perceptual-gist retrieval. An item-specific retrieval advantage also was found when the stimuli were words (synonyms) rather than objects but not when participants overtly named objects during gist-based recognition testing, which suggests that they did not always label objects under general gist-retrieval instructions. Unlike verbal overshadowing, labeling objects during recognition attenuated (but did not eliminate) test- and interference-related forgetting. A full understanding of how retrieval affects subsequent memory, even for events or facts that are not themselves retrieved, must take into account the specificity with which that retrieval occurs.
Abstract:
The artificial grammar (AG) learning literature (see, e.g., Mathews et al., 1989; Reber, 1967) has relied heavily on a single measure of implicitly acquired knowledge. Recent work comparing this measure (string classification) with a more indirect measure in which participants make liking ratings of novel stimuli (e.g., Manza & Bornstein, 1995; Newell & Bright, 2001) has shown that string classification (which we argue can be thought of as an explicit, rather than an implicit, measure of memory) gives rise to more explicit knowledge of the grammatical structure in learning strings and is more resilient to changes in surface features and processing between encoding and retrieval. We report data from two experiments that extend these findings. In Experiment 1, we showed that a divided attention manipulation (at retrieval) interfered with explicit retrieval of AG knowledge but did not interfere with implicit retrieval. In Experiment 2, we showed that forcing participants to respond within a very tight deadline resulted in the same asymmetric interference pattern between the tasks. In both experiments, we also showed that the type of information being retrieved influenced whether interference was observed. The results are discussed in terms of the relatively automatic nature of implicit retrieval and also with respect to the differences between analytic and nonanalytic processing (Whittlesea & Price, 2001).
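For readers unfamiliar with the AG paradigm, the sketch below generates learning strings from a small finite-state grammar of the kind introduced by Reber (1967); the transition table here is invented, since the exact grammar varies across studies.

```python
# Hedged sketch: a finite-state grammar of the kind used in AG learning
# studies. This particular transition table is made up for illustration.
import random

GRAMMAR = {  # state -> list of (letter emitted, next state); None = exit
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 2), ("S", None), ("V", None)],
}

def generate_string(max_len=8):
    """Walk the grammar from the start state, emitting letters until exit
    (or until the length cap cuts a long walk short)."""
    state, letters = 0, []
    while state is not None and len(letters) < max_len:
        letter, state = random.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

print([generate_string() for _ in range(5)])  # e.g. ['TSXV', 'PTV', ...]
```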
Abstract:
Background: Problems with lexical retrieval are common across all types of aphasia, but certain word classes are thought to be more vulnerable in some aphasia types. Traditionally, verb retrieval problems have been considered characteristic of non-fluent aphasias, but there is growing evidence that verb retrieval problems are also found in fluent aphasia. As verbs are retrieved from the mental lexicon with syntactic as well as phonological and semantic information, it is speculated that an improvement in verb retrieval should enhance communicative abilities in this population as in others. We report on an investigation into the effectiveness of verb treatment for three individuals with fluent aphasia. Methods & Procedures: Multiple pre-treatment baselines were established over 3 months in order to monitor language change before treatment. The three participants then received twice-weekly verb treatment over approximately 4 months. All pre-treatment assessments were re-administered immediately after treatment and again 3 months post-treatment. Outcomes & Results: Scores fluctuated in the pre-treatment period. Following treatment, there was a significant improvement in verb retrieval for two of the three participants on the treated items. The increase in scores for the third participant was statistically non-significant, but post-treatment scores moved from below the normal range to within the normal range. All participants were significantly quicker in the verb retrieval task following treatment. There was an increase in well-formed sentences in the sentence construction test and in some samples of connected speech. Conclusions: Repeated systematic treatment can produce a significant improvement in verb retrieval of practised items and generalise to unpractised items for some participants. An increase in well-formed sentences is seen for some speakers. The theoretical and clinical implications of the results are discussed.
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for some years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
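Complementing the indexing sketch after the earlier DREAM abstract, this hypothetical fragment illustrates the retrieval side: expanding a query concept to its narrower ontology terms before looking clips up in the index. ONTOLOGY, INDEX and the clip identifiers are again invented for illustration.

```python
# Hedged sketch of retrieval-side query expansion over a hypothetical
# concept hierarchy; none of these names come from the system described.
ONTOLOGY = {"explosion": "pyrotechnics", "pyrotechnics": "special_effect",
            "smoke": "special_effect"}  # child -> parent
INDEX = {"explosion": {"clip_042"}, "smoke": {"clip_108"}}

def narrower(concept):
    """All concepts that specialise the query concept, walking downward."""
    direct = [c for c, parent in ONTOLOGY.items() if parent == concept]
    return direct + [n for c in direct for n in narrower(c)]

def retrieve(query):
    """Union the clips indexed under the query term and its narrower terms."""
    hits = set(INDEX.get(query, set()))
    for concept in narrower(query):
        hits |= INDEX.get(concept, set())
    return hits

print(retrieve("special_effect"))  # {'clip_042', 'clip_108'}
```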
Abstract:
Search engines exploit the Web's hyperlink structure to help infer information content. The new phenomenon of personal Web logs, or 'blogs', encourages more extensive annotation of Web content. If their resulting link structures bias the Web-crawling applications that search engines depend upon, there are implications for another form of annotation rapidly on the rise: the Semantic Web. We conducted a Web crawl of 160,000 pages in which the link structure of the Web was compared with that of several thousand blogs. Results show that the two link structures are significantly different. We analyse the differences and infer the likely effect upon the performance of existing and future Web agents. The Semantic Web offers new opportunities to navigate the Web, but Web agents should be designed to take advantage of the emerging link structures, or their effectiveness will diminish.
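One concrete way to compare two link structures, as the crawl described here does at much larger scale, is to contrast their out-degree distributions; the sketch below does this for two toy edge lists (web_edges and blog_edges, both invented).

```python
# Illustrative sketch: comparing out-link degree distributions of two crawled
# link graphs, the kind of structural contrast the study draws.
from collections import Counter

def out_degree_distribution(edges):
    """Map out-degree -> fraction of pages having that out-degree."""
    per_page = Counter(src for src, _ in edges)   # out-links per source page
    dist = Counter(per_page.values())             # pages per out-degree
    total = sum(dist.values())
    return {deg: n / total for deg, n in sorted(dist.items())}

web_edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
blog_edges = [("p", "q"), ("p", "r"), ("p", "s"), ("q", "p")]
print(out_degree_distribution(web_edges))   # {1: 0.67, 2: 0.33}
print(out_degree_distribution(blog_edges))  # {1: 0.5, 3: 0.5}
```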
Abstract:
A new Bayesian algorithm for retrieving surface rain rate from Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes’s theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the “true” conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism, but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of the estimators and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
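The abstract's central point, that Bayes's theorem yields a full posterior over rain rate and that saturation weakens the observational constraint at high rain rates, can be made concrete with a small discretised sketch; the exponential prior and the saturating forward model below are invented for illustration and are not the algorithm's actual components.

```python
# Hedged sketch: a discretised Bayes-theorem retrieval. The forward model and
# prior are toy choices, used only to show why saturation biases high rates.
import numpy as np

rain = np.linspace(0.0, 50.0, 501)          # candidate rain rates (mm/h)
prior = np.exp(-rain / 5.0)                  # assumed climatological prior
prior /= prior.sum()

def forward_tb(r):
    """Toy brightness temperature: rises with rain rate, then saturates."""
    return 180.0 + 100.0 * (1.0 - np.exp(-r / 10.0))

def posterior(tb_obs, sigma=2.0):
    """Posterior over rain rate: prior times Gaussian likelihood, renormalised."""
    likelihood = np.exp(-0.5 * ((tb_obs - forward_tb(rain)) / sigma) ** 2)
    post = prior * likelihood
    return post / post.sum()

post = posterior(tb_obs=forward_tb(30.0))    # observation near saturation
mean_rate = float(np.sum(rain * post))       # one possible point estimator
print(mean_rate)  # below 30: saturation leaves the posterior prior-dominated
```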
Abstract:
In the emerging digital economy, the management of information in aerospace and construction organisations faces a particular challenge due to the ever-increasing volume of information and the extensive use of information and communication technologies (ICTs). This paper addresses the problems of information overload and the value of information in both industries by providing cross-disciplinary insights. In particular, it identifies major issues and challenges in current information evaluation practice in these two industries. Interviews were conducted to obtain a spectrum of industrial perspectives (director/strategic, project management and ICT/document management) on these issues, in particular on information storage and retrieval strategies and the contrasting personalisation and codification approaches to knowledge and information management. Industry feedback was collected through a follow-up workshop to strengthen the findings of the research. An information-handling agenda is outlined for the development of a future Information Evaluation Methodology (IEM), which could facilitate the codification of high-value information in order to support through-life knowledge and information management (K&IM) practice.