68 results for Constraint solving


Relevance: 20.00%

Abstract:

Visualisation of program executions has been used in applications which include education and debugging. However, traditional visualisation techniques often fall short of expectations or are altogether inadequate for new programming paradigms, such as Constraint Logic Programming (CLP), whose declarative and operational semantics differ in some crucial ways from those of other paradigms. In particular, traditional ideas regarding the behaviour of data often cannot be lifted in a straightforward way to (C)LP from other families of programming languages. In this chapter we discuss techniques for visualising data evolution in CLP. We briefly review some previously proposed visualisation paradigms, and also propose a number of (to our knowledge) novel ones. The graphical representations have been chosen based on the perceived needs of a programmer trying to analyse the behaviour and characteristics of an execution. In particular, we concentrate on the representation of the run-time values of the variables, and the constraints among them. Given our interest in visualising large executions, we also pay attention to abstraction techniques, i.e., techniques which are intended to help in reducing the complexity of the visual information.
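A minimal illustration of the kind of data visualisation discussed above is depicting each finite-domain variable's remaining run-time domain. The text-bar rendering and the names below are ours for illustration, not a technique taken from the chapter:

```python
# Hedged sketch: render each finite-domain variable's remaining domain
# as a text bar. Purely illustrative, not from the chapter itself.

def domain_bar(name, domain, lo=0, hi=9):
    """One character per candidate value: '#' if still in the domain."""
    cells = "".join("#" if v in domain else "." for v in range(lo, hi + 1))
    return f"{name}: [{cells}]"

# After constraint propagation, X has been pruned to {2, 3, 4}:
print(domain_bar("X", {2, 3, 4}))
print(domain_bar("Y", {0, 7, 8, 9}))
```

Comparing bars across execution steps gives a simple picture of how propagation shrinks domains over time.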

Relevance: 20.00%

Abstract:

In this report we discuss some of the issues involved in the specialization and optimization of constraint logic programs with dynamic scheduling. Dynamic scheduling, as any other form of concurrency, increases the expressive power of constraint logic programs, but also introduces run-time overhead. The objective of the specialization and optimization is to reduce as much as possible such overhead automatically, while preserving the semantics of the original programs. This is done by program transformation based on global analysis. We present implementation techniques for this purpose and report on experimental results obtained from an implementation of the techniques in the context of the CIAO compiler.

Relevance: 20.00%

Abstract:

The concept of independence has recently been generalized to the constraint logic programming (CLP) paradigm. In addition, several abstract domains specifically designed for CLP languages, whose information can be used to detect the generalized independence conditions, have recently been defined. As a result, we are now in a position where automatic parallelization of CLP programs is feasible. In this paper we study the task of automatically parallelizing CLP programs based on such analyses, by transforming them to explicitly concurrent programs in our parallel CC platform (CIAO) as well as to AKL. We describe the analysis and transformation process, and study its efficiency, accuracy, and effectiveness in program parallelization. The information gathered by the analyzers is evaluated not only in terms of its accuracy, i.e. its ability to determine the actual dependencies among the program variables, but also in terms of its effectiveness, measured as code reduction in the resulting parallelized programs. Given that only a few abstract domains have been defined for CLP so far, and that none of them were specifically designed for dependency detection, the aim of the evaluation is not only to assess the effectiveness of the available domains, but also to study what additional information it would be desirable to infer, and what domains would be appropriate for further improving the parallelization process.
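The independence conditions that the analysers detect can be illustrated with a toy version of strict independence: two goals may safely run in parallel when every variable they share is already ground. The set-based model below is a deliberate simplification (real CLP independence also involves the constraint store) and the names are ours:

```python
# Hedged sketch of a simplified "strict independence" test, treating
# each goal as the set of variables it may touch. Illustrative only.

def strictly_independent(goal_vars_a, goal_vars_b, ground_vars):
    """Goals are strictly independent if every variable they share
    is already ground (bound to a fixed value)."""
    shared = goal_vars_a & goal_vars_b
    return shared <= ground_vars

# Example: p(X, Y) and q(Y, Z) share variable Y.
a = {"X", "Y"}
b = {"Y", "Z"}
print(strictly_independent(a, b, ground_vars=set()))   # False: Y is free
print(strictly_independent(a, b, ground_vars={"Y"}))   # True: Y is ground
```

An automatic parallelizer would emit parallel conjunctions only where such a check (proved at analysis time) succeeds.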

Relevance: 20.00%

Abstract:

This paper performs a further generalization of the notion of independence in constraint logic programs to the context of constraint logic programs with dynamic scheduling. The complexity of this new environment made it necessary to first formally define the relationship between independence and search space preservation in the context of CLP languages. In particular, we show that search space preservation is, in the context of CLP languages, not only a sufficient but also a necessary condition for ensuring that both the intended solutions and the number of transitions performed do not change. These results are then extended to dynamically scheduled languages and used as the basis for extending the concepts of independence. We also propose several a priori sufficient conditions for independence, and give correctness and efficiency results for parallel execution of constraint logic programs based on the proposed notions of independence.

Relevance: 20.00%

Abstract:

This thesis focuses on the study of several numerical procedures used to solve the dynamics of a multibody system subjected to constraints and impact. The system may be composed of rigid and deformable bodies connected by different types of joints. Within this framework, special attention is paid to consistent methods, which preserve the theoretical behavior of the energy at each time step. In other words, a consistent method keeps the total energy constant in a conservative problem and provides a positive decrease of the total energy when dissipative forces are present. A numerical algorithm has been developed for solving the dynamical equations of multibody systems that is energetically consistent. Energetic consistency of contacts and constraints is formulated using Lagrange multipliers, penalty and augmented Lagrange methods. A contact methodology is proposed for rigid bodies whose boundary is represented by implicit surfaces. The method is based on a suitably regularized constraint formulation, adapted both to fulfill the contact constraint exactly and to be consistent with the conservation of the total energy. In this context two different approaches are studied: the first applies to pure elastic contact (without deformation) and is formulated with penalty and augmented Lagrange methods; the second is based on a constitutive model for contact with penetration. In this second approach, a penalty potential is used in the constitutive model which restores the energy stored in the contact when no dissipative effects are present, and dissipates energy consistently with the continuous model when friction and damping are considered.
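One of the constraint-enforcement techniques the abstract names, the augmented Lagrange method, can be sketched on a toy problem: minimise a spring energy E(x) = x²/2 subject to the equality constraint g(x) = x − 1 = 0, updating the multiplier from the constraint residual. All constants and the toy problem are illustrative assumptions, not the thesis formulation:

```python
# Hedged sketch of augmented-Lagrangian constraint enforcement (Uzawa
# iteration) on a 1-DOF toy problem. Illustrative, not from the thesis.

def augmented_lagrangian(rho=10.0, iters=30):
    lam = 0.0
    for _ in range(iters):
        # Inner minimisation of x^2/2 + lam*g + (rho/2)*g^2 is linear here:
        # stationarity gives x + lam + rho*(x - 1) = 0.
        x = (rho - lam) / (1.0 + rho)
        lam += rho * (x - 1.0)   # multiplier update on the residual g(x)
    return x, lam

x, lam = augmented_lagrangian()
print(x, lam)   # x converges to 1 (constraint met), lam to -1 (constraint force)
```

Unlike pure penalty, the multiplier update drives the residual to zero without requiring an ever-stiffer penalty, which is why the combination is attractive for exact constraint satisfaction.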

Relevance: 20.00%

Abstract:

We present in this paper a neural-like membrane system solving the SAT problem in linear time. These neural P systems are nets of cells working with multisets. Each cell has a finite state memory, processes multisets of symbol-impulses, and can send impulses ("excitations") to the neighboring cells. The maximal mode of rule application and the replicative mode of communication between cells are at the core of the efficiency of these systems.
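The linear time bound comes from replication: the membrane system spawns cells so that all 2ⁿ truth assignments are explored in parallel. Sequentially, that exploration corresponds to the exhaustive check below (a plain Python sketch of the underlying search space, not an implementation of the P system itself):

```python
# Hedged sketch: the assignment space that the neural P system explores
# in parallel, enumerated sequentially. Illustrative only.
from itertools import product

def sat(clauses, n):
    """clauses: list of clauses; literal +i means x_i, -i means not x_i."""
    for bits in product([False, True], repeat=n):
        # A clause is satisfied if any of its literals evaluates to True.
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

# (x1 or not x2) and (x2 or x3)
print(sat([[1, -2], [2, 3]], 3))   # satisfiable
```

The membrane system trades this exponential sequential time for an exponential number of cells produced by replicative communication.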

Relevance: 20.00%

Abstract:

The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated taking into account models with different degrees of detail. In particular, we have studied this matter from a two-dimensional (2D) finite element model up to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used for the contact constraint enforcement between wheel and rail. Distributed elevation irregularities are generated based on power spectral density (PSD) distributions, which are taken into account for the interaction. The numerical simulations are performed in the time domain, using a direct integration method for solving the transient problem due to the contact nonlinearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as to assess the validity of evaluating the dynamic loading on the track with simplified 2D models.
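The non-linear Hertz contact coupling mentioned above makes the normal force grow with penetration δ as δ^(3/2) and vanish on separation, which is the source of the contact nonlinearity in the transient problem. A minimal sketch, with an illustrative stiffness value rather than one from the paper:

```python
# Hedged sketch of a Hertz-type normal contact law for wheel-rail
# coupling. The stiffness constant is illustrative, not from the paper.

def hertz_force(delta, k=1.0e11):
    """Normal contact force [N] for penetration delta [m]; zero on separation."""
    return k * delta**1.5 if delta > 0.0 else 0.0

print(hertz_force(1e-4))    # small penetration -> finite force
print(hertz_force(-1e-4))   # separated wheel and rail -> no force
```

The non-smooth switch between contact and separation is why a direct time-integration scheme, rather than a frequency-domain method, is used for the coupled simulations.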

Relevance: 20.00%

Abstract:

During the last decades, the increase in speed and the reduction in weight of high-speed railway vehicles have raised their risk of overturning. In addition, the layout requirements of the lines sometimes demand the construction of very tall viaducts located in areas exposed to strong winds. This combination may endanger traffic safety. This doctoral thesis studies the dynamic effects that appear in railway vehicles when they run over viaducts in the presence of cross winds. To this end, a series of numerical models has been developed and implemented that allows these effects to be studied in a realistic and general way. The models analyse the three-dimensional train-structure dynamic interaction, formulated using absolute coordinates in an inertial reference frame within a non-linear finite element framework, and make it possible to study extreme situations such as vehicle overturning or derailment realistically. They have been implemented in Abaqus, using its capabilities for solving multi-body systems for the vehicle and finite elements for the structure. The interaction between the vehicle and the structure is established through the wheel-rail contact. For this purpose, a constraint has been developed that establishes the kinematic relationship between the railway wheelset and the track, taking into account possible geometric track defects, together with a wheel-rail contact model that couples the vehicle and the structure. The main features of the contact model are: it considers the real three-dimensional geometry of both bodies; it can resolve situations in which wheel-rail contact occurs in more than one zone at a time; and it allows different formulations to be used for computing the tangential stress between the two bodies.
In addition, a methodology has been developed to determine, from stochastic formulations, the time histories of the aerodynamic loads due to turbulent wind on large structures with tall, flexible piers. This methodology takes into account the spatial variability of the wind velocity, considering the correlation between the different points; it considers the three components of the wind velocity; and it allows the computation of the wind velocity incident on the vehicles crossing the structure. The methodology developed in this work has been implemented, validated, and applied to a specific case: the response of a high-speed train, similar to the Siemens Velaro, running over the Ulla River viaduct under cross wind. This study analyses the running safety and comfort, and the dynamic response of the structure, as the train crosses the viaduct.

Relevance: 20.00%

Abstract:

In this paper we propose four approximation algorithms (metaheuristic based) for the Minimum Vertex Floodlight Set problem. Urrutia et al. [9] solved the combinatorial problem, although the algorithmic problem is strongly believed to be NP-hard. We conclude that, on average, the minimum number of vertex floodlights needed to illuminate an orthogonal polygon with n vertices is n/4.29.

Relevance: 20.00%

Abstract:

Knowledge about the quality characteristics (QoS) of service compositions is crucial for determining their usability and economic value. Service quality is usually regulated using Service Level Agreements (SLA). While end-to-end SLAs are well suited for request-reply interactions, more complex, decentralized, multiparticipant compositions (service choreographies) typically involve multiple message exchanges between stateful parties, and the corresponding SLAs thus encompass several cooperating parties with interdependent QoS. The usual approaches to determining QoS ranges structurally (which are by construction easily composable) are not applicable in this scenario. Additionally, the intervening SLAs may depend on the exchanged data. We present an approach to data-aware QoS assurance in choreographies through the automatic derivation of composable QoS models from participant descriptions. Such models are based on a message typing system with size constraints and are derived using abstract interpretation. The models obtained have multiple uses, including run-time prediction, adaptive participant selection, and design-time compliance checking. We also present an experimental evaluation and discuss the benefits of the proposed approach.
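The "easily composable" structural QoS ranges the abstract contrasts with its data-aware models can be sketched with simple interval arithmetic: latency ranges add under sequential composition and the slowest branch dominates a fork-join. The functions and numbers below are illustrative assumptions, not the paper's formalism:

```python
# Hedged sketch of structural QoS composition over (min, max) latency
# ranges. Illustrative only; the paper's data-aware models are richer.

def seq(a, b):
    """Sequential composition: latencies add."""
    return (a[0] + b[0], a[1] + b[1])

def par(a, b):
    """Fork-join composition: the slowest branch dominates."""
    return (max(a[0], b[0]), max(a[1], b[1]))

svc_a = (10, 30)   # min/max latency in ms
svc_b = (5, 50)
print(seq(svc_a, svc_b))   # (15, 80)
print(par(svc_a, svc_b))   # (10, 50)
```

The paper's point is that in stateful, data-dependent choreographies such fixed ranges are too coarse, motivating models derived per message type and size via abstract interpretation.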

Relevance: 20.00%

Abstract:

The point of view of many other applications that modify the computation rules. Second, once the concept of independence has been generalized, an exhaustive study of the effectiveness of the analysis tools in the task of automatic parallelization is required. The results of this evaluation make it possible to establish empirically that the use of global analyzers in automatic parallelization is vital for achieving effective parallelization. Finally, in the light of the good results obtained on the effectiveness of global flow analyzers based on abstract interpretation, the analysis tools are generalized to the context of constraint logic languages with dynamic scheduling.

Relevance: 20.00%

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually, and requires adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from the available training data, and second, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing in the training data (i.e., unlabeled data). Besides this taxonomy, the classification problem can be categorized into uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), and into stationary or streaming depending on the characteristics of the data and the rate of change underlying it. Throughout this thesis, we deal with the classification problem under three different settings, namely supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we mainly use Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data.
They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second one, named MB-MBC, is a filter constraint-based approach based on Markov blankets. Both methods are applied to two important real-world problems, namely, the prediction of the human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from 39-item Parkinson’s Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against commonly used methods for solving the Parkinson’s disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both considered case studies, results are promising in terms of classification accuracy as well as regarding the analysis of the learned MBC graphical structures identifying known and novel interactions among variables. The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from the stationary data sets by their highly rapid generation process and their concept-drifting aspect. That is, the learned concepts and/or the underlying distribution are likely changing and evolving over time, which makes the current classification model out-of-date requiring to be updated. CPL-DS uses the Kullback-Leibler divergence and bootstrapping method to quantify and detect three possible kinds of drift: feature, conditional or dual. Then, if any occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. 
CPL-DS is general, as it can be applied to several classification models. Using two different models, namely the naive Bayes classifier and logistic regression, CPL-DS has been tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams, while maintaining good classification performance. Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
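The Page-Hinkley test used above to monitor the average log-likelihood can be sketched in a few lines. This variant flags a downward drift in the mean of a score stream; the tolerance and threshold values are illustrative assumptions, not the thesis settings:

```python
# Hedged sketch of a Page-Hinkley test for a *decrease* in the mean of
# a stream of scores (e.g. per-instance log-likelihoods). Illustrative.

def page_hinkley(stream, delta=0.05, threshold=2.0):
    """Return the 1-based index at which downward drift is flagged,
    or None. m_t accumulates (running mean - x - delta), so it grows
    when scores fall below the historical mean; drift is flagged when
    m_t exceeds its running minimum by more than the threshold."""
    mean, m_t, m_min = 0.0, 0.0, 0.0
    for t, x in enumerate(stream, start=1):
        mean += (x - mean) / t          # incremental running mean
        m_t += mean - x - delta         # deviation below the mean
        m_min = min(m_min, m_t)
        if m_t - m_min > threshold:
            return t
    return None

# Scores drop sharply half-way through -> drift flagged shortly after.
scores = [1.0] * 50 + [0.0] * 50
print(page_hinkley(scores))
```

In a CPL-DS-style loop, a non-None result would trigger relearning the classifier, while None keeps the current model.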

Relevance: 20.00%

Abstract:

The concept of unreliable failure detector was introduced by Chandra and Toueg as a mechanism that provides information about process failures. This mechanism has been used to solve several agreement problems, such as the consensus problem. In this paper, algorithms that implement failure detectors in partially synchronous systems are presented. First, two simple algorithms of the weakest class to solve the consensus problem, namely the Eventually Strong class (⋄S), are presented. While the first algorithm is wait-free, the second algorithm is f-resilient, where f is a known upper bound on the number of faulty processes. Both algorithms guarantee that, eventually, all the correct processes agree permanently on a common correct process, i.e. they also implement a failure detector of the class Omega (Ω). They are also shown to be optimal in terms of the number of communication links used forever. Additionally, a wait-free algorithm that implements a failure detector of the Eventually Perfect class (⋄P) is presented. This algorithm is shown to be optimal in terms of the number of bidirectional links used forever.
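The Omega (Ω) property mentioned above, that all correct processes eventually trust the same correct process, is often realised by a simple deterministic rule once suspicions stabilise. The rule below (trust the smallest non-suspected identifier) is a common textbook device for illustration, not the paper's algorithm:

```python
# Hedged sketch of the agreement rule behind Omega-style eventual
# leader election: trust the smallest alive process id. Illustrative
# only; the paper's algorithms work with heartbeats and timeouts.

def omega_leader(alive):
    """Each process's candidate leader: the smallest alive id.
    Once suspicion lists stabilise, all correct processes converge."""
    return min(alive)

processes = {0, 1, 2, 3}
crashed = {0, 2}
alive = processes - crashed
print(omega_leader(alive))   # every correct process elects process 1
```

The hard part, which the paper addresses, is making the `alive` estimate eventually accurate in a partially synchronous system while keeping the number of links used forever minimal.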

Relevance: 20.00%

Abstract:

The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help lecturers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with the objective of developing a model for teaching and evaluating core competences and providing support to lecturers. This paper deals with the problem-solving competence. The first step has been to elaborate a guide for teachers to provide a homogeneous way to assess this competence. This guide considers several levels of acquisition of the competence and provides the rubrics to be applied for each one. The guide has subsequently been validated in several pilot experiences. In this paper we explain the problem-solving assessment guide for teachers and present the pilot experiences that have been carried out. We finally justify the validity of the method to assess the problem-solving competence.

Relevance: 20.00%

Abstract:

This article describes a knowledge-based application in the domain of road traffic management that we have developed following a knowledge modeling approach and the notion of problem-solving method. The article first presents a domain-independent model for real-time decision support as a structured collection of problem-solving methods. It then describes how this general model is used to develop an operational version for the domain of traffic management. For this purpose, a particular knowledge modeling tool, called KSM (Knowledge Structure Manager), was applied. Finally, the article shows an application developed for a traffic network of the city of Madrid and compares it with a second application developed for a different traffic area of the city of Barcelona.