961 results for schema-based reasoning


Relevance: 30.00%

Abstract:

In recent years, there has been increasing interest in learning distributed representations of word senses. Traditional context-clustering-based models usually require careful tuning of model parameters and typically perform worse on infrequent word senses. This paper presents a novel approach that addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are then used by a context-clustering-based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on half of the metrics in the word similarity task and on 6 out of 13 subtasks in the analogical reasoning task, and give the best overall accuracy in the word sense effect classification task, which shows the effectiveness of our proposed representation learning model.
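For illustration, here is a minimal Python sketch of the two-stage pipeline the abstract describes, with a simple averaging encoder standing in for the CNN gloss encoder; the helper names (`encode_gloss`, `assign_context`) and the update rule are assumptions of this sketch, not the paper's implementation.

```python
# Stage 1: initialise sense embeddings from glosses; stage 2: context clustering.
# The CNN gloss encoder is replaced by an averaging encoder to keep this short.
import numpy as np

DIM = 50
rng = np.random.default_rng(0)
word_vecs = {}  # pre-trained word embeddings (toy: random vectors)

def vec(w):
    if w not in word_vecs:
        word_vecs[w] = rng.normal(size=DIM)
    return word_vecs[w]

def encode_gloss(gloss):
    """Stage 1 stand-in: sentence-level embedding of a WordNet gloss."""
    return np.mean([vec(w) for w in gloss.lower().split()], axis=0)

def assign_context(context, sense_embs):
    """Stage 2: assign a context to the nearest sense embedding (cosine
    similarity), then nudge that centroid toward the context."""
    c = np.mean([vec(w) for w in context], axis=0)
    sims = {s: np.dot(c, e) / (np.linalg.norm(c) * np.linalg.norm(e))
            for s, e in sense_embs.items()}
    best = max(sims, key=sims.get)
    sense_embs[best] = 0.9 * sense_embs[best] + 0.1 * c  # moving-average update
    return best

# Initialise sense embeddings for "bank" from its glosses.
senses = {
    "bank.n.01": encode_gloss("sloping land beside a body of water"),
    "bank.n.02": encode_gloss("a financial institution that accepts deposits"),
}
print(assign_context(["river", "water", "shore"], senses))
```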

Relevance: 30.00%

Abstract:

Due to dynamic variability, identifying the specific conditions under which non-functional requirements (NFRs) are satisfied may only be possible at runtime. Therefore, it is necessary to consider the dynamic treatment of relevant information during requirements specification. The associated data can be gathered by monitoring the execution of the application and its underlying environment, to support reasoning about how the current application configuration is fulfilling the established requirements. This paper presents a dynamic decision-making infrastructure to support both NFR representation and monitoring, and to reason about the degree of satisfaction of NFRs at runtime. The infrastructure is composed of: (i) an extended feature model aligned with a domain-specific language for representing NFRs to be monitored at runtime; (ii) a monitoring infrastructure to continuously assess NFRs at runtime; and (iii) a flexible decision-making process to select the best available configuration based on the satisfaction degree of the NFRs. The evaluation of the approach showed that it is able to choose application configurations that fit user NFRs well based on runtime information. The evaluation also revealed that the proposed infrastructure provides consistent indicators regarding the best application configurations that fit user NFRs. Finally, a benefit of our approach is that it allows us to quantify the level of satisfaction with respect to the NFR specification.
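As a rough illustration of step (iii), the sketch below scores candidate configurations by a weighted satisfaction degree computed from monitored values; the NFR model and the linear satisfaction function are assumptions of this sketch, not the paper's infrastructure.

```python
# Select the configuration with the highest weighted NFR satisfaction.
from dataclasses import dataclass

@dataclass
class NFR:
    metric: str      # monitored quantity, e.g. "latency_ms"
    target: float    # desired bound
    weight: float    # relative importance
    lower_is_better: bool = True

def satisfaction_degree(nfr, observed):
    """1.0 when the target is met, decaying as it is missed."""
    if nfr.lower_is_better:
        return 1.0 if observed <= nfr.target else max(0.0, nfr.target / observed)
    return 1.0 if observed >= nfr.target else max(0.0, observed / nfr.target)

def best_configuration(configs, nfrs, monitor):
    """Pick the best configuration using monitored (config, metric) values."""
    def score(cfg):
        return sum(n.weight * satisfaction_degree(n, monitor[cfg][n.metric])
                   for n in nfrs)
    return max(configs, key=score)

nfrs = [NFR("latency_ms", 200, 0.7), NFR("throughput_rps", 50, 0.3, False)]
monitor = {"cfgA": {"latency_ms": 180, "throughput_rps": 40},
           "cfgB": {"latency_ms": 250, "throughput_rps": 80}}
print(best_configuration(["cfgA", "cfgB"], nfrs, monitor))
```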

Relevance: 30.00%

Abstract:

Because some Web users will be able to design a template to visualize information from scratch, while other users need to visualize information automatically by changing some parameters, providing different levels of customization of the information is a desirable goal. Our system allows both the automatic generation of visualizations given the semantics of the data, and static or pre-specified visualization through an interface language. We address information visualization with the Web in mind, where the presentation of the retrieved information is a challenge.

We provide a model to narrow the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports.

As opposed to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display database objects in a flat view, making it difficult for users to grasp the contents and structure of their result. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that easily accesses different databases across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, which increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely and make the necessary modifications and manipulations of the data using Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
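The following minimal Python sketch illustrates the general idea of generating SQL from user selections and validating it against the database schema before execution; the schema format and the `build_select` helper are invented for illustration, not taken from the dissertation.

```python
# Guided SQL generation: reject anything not present in the schema so only
# well-formed, harmless SQL reaches the database.
schema = {
    "employees": {"id", "name", "dept_id", "salary"},
    "departments": {"id", "name"},
}

def build_select(table, columns, where=None):
    """Build a SELECT, validating table, columns and predicate operator."""
    if table not in schema:
        raise ValueError(f"unknown table: {table}")
    bad = set(columns) - schema[table]
    if bad:
        raise ValueError(f"unknown columns: {bad}")
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        col, op, _ = where
        if col not in schema[table] or op not in {"=", "<", ">", "<=", ">="}:
            raise ValueError("invalid predicate")
        sql += f" WHERE {col} {op} ?"   # parameterised to avoid injection
    return sql

print(build_select("employees", ["name", "salary"], ("salary", ">", 50000)))
```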

Relevance: 30.00%

Abstract:

This study explored the differential effects of single-sex versus coeducational schooling on the cognitive and affective development of young women in their senior year of high school. The basic research question was: What are the differential effects of single-sex versus coed education on the development of mathematical reasoning ability, verbal reasoning ability, or self-concept of high school girls?

The study was composed of two parts. In the first part, the SAT verbal and mathematical ability scores were recorded for the subjects in the two schools from which the sample populations were drawn. The second part of the study required the application of the Piers-Harris Children's Self-Concept Scale to subjects in each of the two sample populations. The sample schools were deliberately selected to minimize between-group differences in the populations: one was an all-girls school, the other coeducational.

The research design employed in this study was the causal-comparative method, used to explore causal relationships between variables that already exist. Based on a comprehensive analysis of the data produced by this research, no significant difference was found between the mean scores of the senior girls in the single-sex school and the coed school on the SAT I verbal reasoning section. Nor was any significant difference found between the mean scores of the two groups on the SAT I mathematical reasoning section. Finally, no significant difference was found between the mean total scores of the senior girls in the single-sex school and the coed school on the Piers-Harris Children's Self-Concept Scale.

Contrary to what many other studies have found about single-sex schools and their advantages for girls, this study found no support for such advantages in the cognitive areas of verbal and mathematical reasoning as measured by the SAT, or in the affective area of self-concept as measured by the Piers-Harris Children's Self-Concept Scale.
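The abstract does not name the statistical test used; an independent-samples t-test (Welch's variant) is the standard way to compare two group means, sketched below with made-up scores that are not the study's data.

```python
# Comparing mean scores of two independent groups; p >= 0.05 would indicate
# no significant difference, as the study reports.
from scipy import stats

single_sex = [520, 540, 610, 480, 570, 590, 555]   # hypothetical SAT scores
coed       = [530, 515, 600, 495, 560, 585, 550]   # hypothetical SAT scores

t, p = stats.ttest_ind(single_sex, coed, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")
```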

Relevance: 30.00%

Abstract:

Science reported in the media is authentic source material for exploring science research and innovation, learning how science works and consolidating science literacy skills.

Media reports intended to communicate science research and innovation provide opportunities for teachers to develop among their pupils the critical reading skills that are essential for promoting literacy in science.

This study focuses on a curricular intervention with upper primary pupils (age 11 years) and uses science reported in the media to facilitate science-directed learning in the primary curriculum.

The study suggests that the use of science based media reports can be a positive learning experience for pupils. Strategies and teaching approaches can be used to boost pupils’ confidence and competence to adopt critical reading strategies when they encounter science-based media.

Critical reading and reasoning strategies vary in their degree of difficulty. This study suggests that, when using media-based resources, teachers need approaches that systematically address the different levels of cognitive challenge presented by media resources and create opportunities within the curriculum to revisit, consolidate and develop pupils’ critical reasoning skills.

Relevance: 30.00%

Abstract:

Background
Medical students transitioning into professional practice feel underprepared to deal with the emotional complexities of real-life ethical situations. Simulation-based learning (SBL) may provide a safe environment for students to probe the boundaries of ethical encounters. Published studies of ethics simulation have not generated sufficiently deep accounts of student experience to inform pedagogy. The aim of this study was to understand students’ lived experiences as they engaged with the emotional challenges of managing clinical ethical dilemmas within a SBL environment.

Methods
This qualitative study was underpinned by an interpretivist epistemology. Eight senior medical students participated in an interprofessional ward-based SBL activity incorporating a series of ethically challenging encounters. Each student wore digital video glasses to capture point-of-view (PoV) film footage. Students were interviewed immediately after the simulation and the PoV footage was played back to them. Interviews were transcribed verbatim. An interpretative phenomenological approach, using an established template analysis technique, was applied to iteratively analyse the data.

Results
Four main themes emerged from the analysis: (1) ‘Authentic on all levels?’, (2) ‘Letting the emotions flow’, (3) ‘Ethical alarm bells’ and (4) ‘Voices of children and ghosts’. Students recognised many explicit ethical dilemmas during the SBL activity but had difficulty navigating more subtle ethical and professional boundaries. In emotionally complex situations, instances of moral compromise were observed (such as telling an untruth). Some participants felt unable to raise concerns or challenge unethical behaviour within the scenarios due to prior negative undergraduate experiences.

Conclusions
This study provided deep insights into medical students’ immersive and embodied experiences of ethical reasoning during an authentic SBL activity. By layering on the human dimensions of ethical decision-making, the activity helps students understand their personal responses to emotion, complexity and interprofessional working. This could assist them in framing and observing appropriate ethical and professional boundaries and help smooth the transition into clinical practice.

Relevance: 30.00%

Abstract:

Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration is achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers is modelled as a time-series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured here through a bottom-up Hierarchical Task Network planner that, along with common-sense knowledge, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers. Four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform a rule-based system and a system containing planning alone.
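As a rough illustration of the common-sense consistency check, the sketch below flags impossible transitions in a passenger's activity time-series; the activity labels and rules are invented for illustration, not the system's actual knowledge base.

```python
# Common-sense rules over a time-series of labelled passenger activities.
IMPOSSIBLE_TRANSITIONS = {
    ("seated", "boarding"),      # cannot board again while already seated
    ("alighted", "seated"),      # cannot sit after leaving the vehicle
}

def find_inconsistencies(track):
    """Return (time, from, to) for each transition that breaks a rule, so a
    higher layer (e.g. the HTN planner) can re-explain the behaviour."""
    errors = []
    for (t0, a), (t1, b) in zip(track, track[1:]):
        if (a, b) in IMPOSSIBLE_TRANSITIONS:
            errors.append((t1, a, b))
    return errors

track = [(0, "boarding"), (3, "walking"), (5, "seated"), (9, "boarding")]
print(find_inconsistencies(track))   # -> [(9, 'seated', 'boarding')]
```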

Relevance: 30.00%

Abstract:

Accessing databases through the universal query language SQL poses a major challenge for non-specialists. As a user-friendly alternative, various visual query languages (VQLs) for classic PCs have therefore been investigated since the 1970s. The goal of this work is to develop and evaluate a generic VQL that enables gesture-based exploration of databases at both the schema and instance-data level on mobile devices, in particular tablets. To this end, different forms of presentation, query strategies and visual hints for foreign-key relationships are examined, which support the user in navigating through the data. In a requirements analysis, visualising the data and relationships by means of a space-saving nested NF2 representation proved particularly advantageous. To control the database exploration, a suitable gesture language consisting of stroke, multitouch and mid-air gestures is presented. The overall concept of presentation and gesture control was examined for its practical feasibility using the GBXT prototype developed in this work, a platform-independent single-page application for various mobile devices built with JavaScript and HTML5/CSS3.
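To make the nested NF2 (non-first-normal-form) idea concrete, here is a minimal Python sketch in which a department row embeds its employees as a sub-relation and is rendered as an indented tree; the data and the rendering are illustrative assumptions, not the GBXT implementation.

```python
# A nested NF2 relation: the department row embeds its employees instead of
# repeating department fields for every employee row.
dept = {
    "name": "Sales",
    "employees": [            # nested sub-relation (deliberately violates 1NF)
        {"name": "Ada", "role": "Lead"},
        {"name": "Ben", "role": "Rep"},
    ],
}

def render(rel, indent=0):
    """Render a nested relation as an indented tree, one atomic attribute
    or sub-relation per line."""
    pad = "  " * indent
    for key, value in rel.items():
        if isinstance(value, list):
            print(f"{pad}{key}:")
            for row in value:
                render(row, indent + 1)
        else:
            print(f"{pad}{key}: {value}")

render(dept)
```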

Relevance: 30.00%

Abstract:

Despite its huge potential in risk analysis, the Dempster–Shafer Theory of Evidence (DST) has not received enough attention in construction management. This paper presents a DST-based approach for structuring personal experience and professional judgment when assessing construction project risk. DST was used innovatively to tackle the problem of insufficient information by enabling analysts to provide incomplete assessments. Risk cost is used as a common scale for measuring risk impact on the various project objectives, and the Evidential Reasoning algorithm is suggested as a novel alternative for aggregating individual assessments. A spreadsheet-based decision support system (DSS) was devised to facilitate the proposed approach. Four case studies were conducted to examine the approach's viability. Senior managers in four British construction companies tried the DSS and gave very promising feedback. The paper concludes that the proposed methodology may contribute to bridging the gap between the theory and practice of construction risk assessment.
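The core DST operation behind aggregating such assessments is Dempster's rule of combination, sketched below in Python; note the paper itself suggests the Evidential Reasoning algorithm, a related but distinct aggregation scheme, and the risk-level frame and mass assignments here are invented for illustration.

```python
# Dempster's rule of combination over mass functions. Mass functions map
# subsets of the frame (frozensets of risk levels) to belief mass; an
# incomplete assessment simply leaves mass on the full frame.
from itertools import product

FRAME = frozenset({"low", "medium", "high"})

def combine(m1, m2):
    """Intersect focal elements and renormalise by 1 - K, where K is the
    mass falling on empty (conflicting) intersections."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Analyst 1 is 60% sure the risk is high; analyst 2 gives an incomplete
# assessment, committing only 50% to {medium, high}.
m1 = {frozenset({"high"}): 0.6, FRAME: 0.4}
m2 = {frozenset({"medium", "high"}): 0.5, FRAME: 0.5}
print(combine(m1, m2))
```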

Relevance: 30.00%

Abstract:

Abstract not available

Relevance: 30.00%

Abstract:

This paper presents a discrete formalism for temporal reasoning about actions and change, which enjoys an explicit representation of time and action/event occurrences. The formalism allows the expression of truth values for given fluents over various times, including non-decomposable points/moments and decomposable intervals. Two major problems which beset most existing interval-based theories of action and change, i.e., the so-called dividing instant problem and the intermingling problem, are absent from this new formalism. The dividing instant problem is overcome by excluding the concept of ending points of intervals, and the intermingling problem is bypassed by characterising the fundamental time structure as a well-ordered discrete set of non-decomposable times (points and moments), from which decomposable intervals are constructed. A comprehensive characterisation of the relationship between the negation of fluents and the negation of the involved sentences is formally provided. The formalism provides a flexible expression of temporal relationships between effects and their causal events, including delayed effects of events, which remain a problematic question in most existing theories of action and change.
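A minimal Python sketch of such a time structure follows: a well-ordered discrete set of points and moments from which contiguous intervals are built, so intervals never carry ending instants that would reintroduce the dividing instant problem. The class names and the Allen-style `meets` relation are illustrative assumptions, not the paper's formalism.

```python
# Points and moments are non-decomposable base times; intervals are
# contiguous runs of base times, identified by what they contain.
from dataclasses import dataclass

@dataclass(frozen=True)
class Time:
    """A non-decomposable time: duration 0 for a point, > 0 for a moment."""
    index: int        # position in the well-ordered base set
    duration: int

class Interval:
    """A decomposable interval, built from base times with no endpoint instants."""
    def __init__(self, times):
        idx = [t.index for t in times]
        assert idx == list(range(idx[0], idx[0] + len(idx))), "must be contiguous"
        self.times = tuple(times)

    def duration(self):
        return sum(t.duration for t in self.times)

    def meets(self, other):
        """Allen-style 'meets': other starts exactly where self ends."""
        return other.times[0].index == self.times[-1].index + 1

base = [Time(0, 0), Time(1, 2), Time(2, 0), Time(3, 1)]
i, j = Interval(base[:2]), Interval(base[2:])
print(i.duration(), j.duration(), i.meets(j))   # -> 2 1 True
```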

Relevance: 30.00%

Abstract:

This paper presents a new formalism for reasoning about change over time. The formalism draws a clean separation between the notions of states and situations. It allows more flexible temporal causal relationships than do other formalisms for reasoning about causal change, such as the situation calculus and the event calculus. It includes effects that start during, immediately after, or some time after their causes, and which end before, simultaneously with, or after their causes. A formal distinction between actions, action-types and events is proposed, which allows the expression of common-sense causal laws at a high level. It is shown how these laws can be used to deduce state change over time at a low level, when events occur and certain preconditions hold. Two problems that beset most interval-based temporal systems, i.e., the so-called dividing instant problem and the intermingling problem, are absent from the formalism.
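To make the flexible effect timing concrete, the sketch below encodes a common-sense causal law whose effect starts some time after its cause on a discrete timeline; the encoding (delay offsets, fluent sets per time step) is an illustrative assumption, not the paper's formalism.

```python
# A causal law fires when its event occurs and its precondition holds; the
# effect is scheduled `delay` steps after the cause, allowing delayed effects.
from dataclasses import dataclass

@dataclass
class CausalLaw:
    event: str
    precondition: str     # fluent that must hold when the event occurs
    effect: str           # fluent the event brings about
    delay: int            # 0 = effect starts with the cause; n > 0 = delayed

def apply_laws(laws, occurrences, state, horizon):
    """Deduce low-level state change from high-level laws over a timeline."""
    timeline = {t: set(state) for t in range(horizon)}
    for t, ev in occurrences:
        for law in laws:
            if law.event == ev and law.precondition in timeline[t]:
                for u in range(min(t + law.delay, horizon - 1), horizon):
                    timeline[u].add(law.effect)
    return timeline

laws = [CausalLaw("turn_on_heater", "powered", "warm", delay=2)]  # delayed effect
timeline = apply_laws(laws, [(1, "turn_on_heater")], {"powered"}, horizon=6)
print({t: sorted(f) for t, f in timeline.items()})   # "warm" holds from t = 3
```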

Relevance: 30.00%

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments.

In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime.

In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics.

Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
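As a rough illustration of sampling-based progressive analytics, the sketch below re-evaluates an aggregate over nested, growing samples so work done for an earlier sample is reused by later ones; the helper is invented for illustration and is not the NOW! implementation.

```python
# Progressive mean over prefix samples of a single shuffle: each step extends
# the running sum instead of recomputing, and yields an early estimate.
import random

def progressive_mean(data, steps=(0.01, 0.1, 0.5, 1.0), seed=42):
    """Yield (fraction, estimate) pairs over nested, growing samples."""
    rng = random.Random(seed)
    shuffled = data[:]            # one shuffle yields nested prefix samples
    rng.shuffle(shuffled)
    total, count = 0.0, 0
    for frac in steps:
        upto = int(len(shuffled) * frac)
        for x in shuffled[count:upto]:   # extend the running sum only
            total += x
        count = upto
        yield frac, total / max(count, 1)

gen = random.Random(7)
data = [gen.gauss(100, 15) for _ in range(100_000)]
for frac, est in progressive_mean(data):
    print(f"{frac:>4.0%} sample -> mean ~ {est:.2f}")
```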

Relevance: 30.00%

Abstract:

Current practices in agricultural management involve the application of rules and techniques to ensure high-quality and environmentally friendly production. Based on their experience, agricultural technicians and farmers make critical decisions affecting crop growth while considering several interwoven agricultural, technological, environmental, legal and economic factors. In this context, decision support systems, and the knowledge models that support them, enable the incorporation of valuable experience into software systems, providing support to agricultural technicians to make rapid and effective decisions for efficient crop growth. Pest control is an important issue in agricultural management due to the crop yield reductions caused by pests, and it involves expert knowledge. This paper presents a formalisation of the pest control problem and the workflow followed by agricultural technicians and farmers in integrated pest management, the crop production strategy that combines different practices for growing healthy crops whilst minimising pesticide use. A generic decision schema for estimating the infestation risk of a given pest on a given crop is defined; it acts as a metamodel for the maintenance and extension of the knowledge embedded in a pest management decision support system, which is also presented. This software tool has been implemented by integrating a rule-based tool into a web-based architecture. An evaluation from validity and usability perspectives concluded that both agricultural technicians and farmers considered it a useful tool in pest control, particularly for training new technicians and inexperienced farmers.
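For illustration, the sketch below shows what a generic, rule-based decision schema for estimating infestation risk might look like; the factors, thresholds and point weights are invented, not taken from the paper's knowledge model.

```python
# Rule-based infestation risk: sum the points of all firing rules and map
# the total to a risk class a technician can act on.
RULES = [
    # (factor, predicate, risk points)
    ("temperature_c", lambda v: 20 <= v <= 30, 2),    # favourable to the pest
    ("humidity_pct",  lambda v: v > 70,        2),
    ("trap_catches",  lambda v: v >= 5,        3),    # direct evidence
    ("crop_stage",    lambda v: v == "fruiting", 1),  # vulnerable stage
]

def infestation_risk(observations):
    """Return (risk class, points) from monitored factor values."""
    points = sum(p for factor, pred, p in RULES
                 if factor in observations and pred(observations[factor]))
    if points >= 6:
        return "high", points
    if points >= 3:
        return "moderate", points
    return "low", points

obs = {"temperature_c": 26, "humidity_pct": 78, "trap_catches": 6,
       "crop_stage": "flowering"}
print(infestation_risk(obs))   # -> ('high', 7)
```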

Relevance: 30.00%

Abstract:

Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation.

Modern object-oriented databases provide features that help programmers deal with object persistence, as well as related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible.

Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as well as later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work.

To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an "aspect" in the aspect-oriented sense. Objects do not need to extend any superclass or implement an interface, nor do they need to contain a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure because the framework will produce a new version of it at the database metadata layer. Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism can be extended, hence keeping the framework oblivious to this problem as well.

The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on programmer productivity, simplifying the entire evolution process at the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility; compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate prototype and meta-model robustness. To perform these tests, we used a small OO7 database, chosen for its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible; however, the developed benchmark is now available for future performance comparisons with equivalent systems.

To test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms; using our framework and minor changes to the application's source code, we added these mechanisms, and we then tested the application in a schema evolution scenario. This real-world experience with our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single-Java-Virtual-Machine concurrency model are the major limitations found in the framework.
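To make the class-versioning idea concrete, here is a minimal Python sketch of a version registry with instance adaptation between versions; the registry and adapter names are illustrative assumptions, not the thesis framework's API (which targets the Java platform).

```python
# Every version of a class's structure is retained, so objects stored under
# an old version can be adapted when loaded by a newer application, and
# vice versa (bidirectional compatibility).
class VersionRegistry:
    def __init__(self):
        self.versions = {}   # class name -> list of field tuples, one per version

    def register(self, name, fields):
        """Record a new schema evolution step; returns the version number."""
        self.versions.setdefault(name, []).append(tuple(fields))
        return len(self.versions[name]) - 1

    def adapt(self, name, obj, from_v, to_v):
        """Instance adaptation: keep shared fields, default the missing ones."""
        target = self.versions[name][to_v]
        return {f: obj.get(f) for f in target}

reg = VersionRegistry()
v0 = reg.register("Customer", ["id", "name"])
v1 = reg.register("Customer", ["id", "name", "email"])   # schema evolution step

old_obj = {"id": 7, "name": "Ada"}                 # stored under v0
print(reg.adapt("Customer", old_obj, v0, v1))      # email defaults to None
```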