44 results for problem-solving algorithms

at Universidad Politécnica de Madrid


Relevance:

50.00%

Publisher:

Abstract:

In this paper we propose four metaheuristic-based approximation algorithms for the Minimum Vertex Floodlight Set problem. Urrutia et al. [9] solved the combinatorial problem, although the algorithmic problem is strongly believed to be NP-hard. We conclude that, on average, the minimum number of vertex floodlights needed to illuminate an orthogonal polygon with n vertices is n/4.29.
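
The four metaheuristics themselves are not detailed in the abstract; purely as a point of reference, the sketch below shows a plain greedy set-cover heuristic for the same vertex-floodlight selection task. It assumes the illuminated region of each candidate vertex has already been discretized into a set of sample-point ids (by some visibility-polygon routine not shown here); the `greedy_vertex_floodlights` function and the toy instance are illustrative, not taken from the paper.

```python
# Greedy set-cover baseline for choosing vertex floodlights.
# Assumes visibility has been precomputed: for each candidate vertex v,
# visible[v] is the set of sample-point ids of the polygon interior
# that a floodlight placed at v would illuminate (hypothetical input).

def greedy_vertex_floodlights(visible: dict[int, set[int]], points: set[int]) -> list[int]:
    """Pick vertices until every sample point is covered (or no progress is possible)."""
    uncovered = set(points)
    chosen: list[int] = []
    while uncovered:
        # Vertex whose floodlight covers the most still-uncovered points.
        best = max(visible, key=lambda v: len(visible[v] & uncovered))
        gain = visible[best] & uncovered
        if not gain:          # remaining points are not coverable from any vertex
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

if __name__ == "__main__":
    # Toy instance: 3 candidate vertices, 6 interior sample points.
    visible = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}}
    print(greedy_vertex_floodlights(visible, {1, 2, 3, 4, 5, 6}))  # [0, 2]
```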

Relevance:

40.00%

Publisher:

Abstract:

The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. The information available on each competence includes its definition, the formulation of indicators providing evidence on the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several of the academic subjects taught at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised, and their results are analysed.

Relevance:

40.00%

Publisher:

Abstract:

At present, many countries allow citizens or entities to interact with the government outside the telematic environment through a legal representative who is granted powers of representation. However, if the interaction takes place through the Internet, only primitive mechanisms of representation are available, and these are mainly based on non-dynamic offline processes that do not enable quick and easy identity delegation. This paper proposes a system of dynamic delegation of identity between two generic entities that can solve the problem of delegated access to the telematic services provided by public authorities. The solution herein is based on the generation of a delegation token created from a proxy certificate that allows the delegating entity to delegate identity to another on the basis of a subset of its attributes as delegator, while also establishing in the delegation token itself restrictions on the services accessible to the delegated entity and the validity period of delegation. Further, the paper presents the mechanisms needed to either revoke a delegation token or to check whether a delegation token has been revoked. Implications for theory and practice and suggestions for future research are discussed.
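
The paper's mechanism is built on proxy certificates; as a deliberately simplified illustration of the same ingredients (a delegator-signed token carrying a subset of attributes, the permitted services, a validity period, and a revocation check), here is a sketch using an HMAC-signed JSON structure. The field names, the shared signing key and the in-memory revocation list are assumptions for the example, not the scheme described above.

```python
# Simplified identity-delegation token: NOT the proxy-certificate scheme from the
# paper, just an HMAC-signed JSON structure illustrating the same fields.
import hmac, hashlib, json, time

SECRET = b"delegator-signing-key"          # stand-in for the delegator's credentials
REVOKED: set[str] = set()                  # stand-in for a revocation service

def issue_token(delegator, delegate, attributes, services, valid_seconds):
    body = {
        "delegator": delegator,
        "delegate": delegate,
        "attributes": attributes,          # subset of the delegator's attributes
        "services": services,              # telematic services the delegate may access
        "not_after": time.time() + valid_seconds,
        "token_id": hashlib.sha256(f"{delegator}{delegate}{time.time()}".encode()).hexdigest()[:16],
    }
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_token(token, service):
    payload = json.dumps(token["body"], sort_keys=True).encode()
    if not hmac.compare_digest(token["sig"], hmac.new(SECRET, payload, hashlib.sha256).hexdigest()):
        return False                       # forged or altered token
    b = token["body"]
    if b["token_id"] in REVOKED:           # revocation check
        return False
    return time.time() <= b["not_after"] and service in b["services"]

tok = issue_token("Alice", "Bob", {"role": "tax-representative"}, ["tax-filing"], 3600)
print(verify_token(tok, "tax-filing"))     # True
REVOKED.add(tok["body"]["token_id"])
print(verify_token(tok, "tax-filing"))     # False after revocation
```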

Relevance:

40.00%

Publisher:

Abstract:

This paper describes a proposal for a language called Link, which has been designed to formalize and operationalize problem-solving strategies. This language is used within a software environment called KSM (Knowledge Structure Manager), which helps developers formulate and operationalize structured knowledge models. The paper presents both its syntax and dynamics, and gives examples of well-known problem-solving reasoning strategies formulated in this language.

Relevance:

40.00%

Publisher:

Abstract:

This paper presents some brief considerations on the role of Computational Logic in the construction of Artificial Intelligence systems and in programming in general. It does not address how the many problems in AI can be solved but, rather more modestly, tries to point out some advantages of Computational Logic as a tool for the AI scientist in his quest. It addresses the interaction between declarative and procedural views of programs (deduction and action), the impact of the intrinsic limitations of logic, the relationship with other apparently competing computational paradigms, and finally discusses implementation-related issues, such as the efficiency of current implementations and their capability for efficiently exploiting existing and future sequential and parallel hardware. The purpose of the discussion is in no way to present Computational Logic as the unique overall vehicle for the development of intelligent systems (in the firm belief that such a panacea is yet to be found) but rather to stress its strengths in providing reasonable solutions to several aspects of the task.

Relevance:

40.00%

Publisher:

Abstract:

The concept of unreliable failure detector was introduced by Chandra and Toueg as a mechanism that provides information about process failures. This mechanism has been used to solve several agreement problems, such as the consensus problem. In this paper, algorithms that implement failure detectors in partially synchronous systems are presented. First, two simple algorithms of the weakest class that allows solving the consensus problem, namely the Eventually Strong class (⋄S), are presented. While the first algorithm is wait-free, the second algorithm is f-resilient, where f is a known upper bound on the number of faulty processes. Both algorithms guarantee that, eventually, all the correct processes agree permanently on a common correct process, i.e. they also implement a failure detector of the class Omega (Ω). They are also shown to be optimal in terms of the number of communication links used forever. Additionally, a wait-free algorithm that implements a failure detector of the Eventually Perfect class (⋄P) is presented. This algorithm is shown to be optimal in terms of the number of bidirectional links used forever.
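
The algorithms of the paper are not reproduced here; the sketch below only illustrates the standard heartbeat-and-timeout idea behind Omega-style detectors in partially synchronous systems, where the timeout is increased after a false suspicion so that eventually all correct processes trust the same correct process (here, the smallest non-suspected id). It is a generic single-machine sketch under these assumptions, not the link-optimal constructions presented in the paper.

```python
# Generic heartbeat-based Omega sketch (single-process simulation), not the
# link-optimal algorithms of the paper: each process is trusted until its
# heartbeats stop arriving within an adaptively increased timeout.
import time

class OmegaDetector:
    def __init__(self, processes, initial_timeout=0.5):
        self.last_beat = {p: time.time() for p in processes}
        self.timeout = {p: initial_timeout for p in processes}
        self.suspected = set()

    def heartbeat(self, p):
        """Called when a heartbeat from process p is received."""
        if p in self.suspected:
            # Premature suspicion: forgive p and enlarge its timeout so that,
            # once the partial-synchrony bounds hold, p is never suspected again.
            self.suspected.discard(p)
            self.timeout[p] *= 2
        self.last_beat[p] = time.time()

    def leader(self):
        """Omega output: eventually a single correct process (min non-suspected id)."""
        now = time.time()
        for p, t in self.last_beat.items():
            if now - t > self.timeout[p]:
                self.suspected.add(p)
        alive = [p for p in self.last_beat if p not in self.suspected]
        return min(alive) if alive else None

detector = OmegaDetector(processes=[1, 2, 3])
detector.heartbeat(2)
print(detector.leader())   # 1, until process 1 misses its timeout
```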

Relevance:

40.00%

Publisher:

Abstract:

The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help lecturers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with the objective of developing a model for teaching and evaluating core competences and providing support to lecturers. This paper deals with the problem-solving competence. The first step has been to elaborate a guide for teachers that provides a homogeneous way to assess this competence. The guide considers several levels of acquisition of the competence and provides the rubrics to be applied for each one. The guide has subsequently been validated in several pilot experiences. In this paper we explain the problem-solving assessment guide for teachers and present the pilot experiences that have been carried out. We finally justify the validity of the method to assess the problem-solving competence.

Relevance:

40.00%

Publisher:

Abstract:

This article describes a knowledge-based application in the domain of road traffic management that we have developed following a knowledge modeling approach and the notion of problem-solving method. The article first presents a domain-independent model for real-time decision support as a structured collection of problem-solving methods. It then describes how this general model is used to develop an operational version for the domain of traffic management. For this purpose, a particular knowledge modeling tool, called KSM (Knowledge Structure Manager), was applied. Finally, the article presents an application developed for a traffic network of the city of Madrid and compares it with a second application developed for a different traffic area of the city of Barcelona.

Relevance:

40.00%

Publisher:

Abstract:

The assessment of learning outcomes is a key concept in the European Credit Transfer and Accumulation System (ECTS), since credits are awarded when the assessment shows that the competences aimed at have been developed at an appropriate level. This paper describes a study which was first part of the Bologna Experts Team-Spain project and then developed as an independent study. It was carried out with the overall goal of gaining experience in the assessment of learning outcomes. More specifically, it aimed at 1) designing procedures for the assessment of learning outcomes related to compulsory generic competences; 2) testing some basic psychometric features that an assessment device with some consequences for the subjects being evaluated needs to prove; 3) testing different procedures of standard setting, and 4) using assessment results as orienting feedback to students and their tutors. The process of developing the tests used to assess the learning outcomes is described, as well as some basic features regarding their reliability and validity. First conclusions on the comparison of the results achieved at two academic levels are also presented.

Relevance:

40.00%

Publisher:

Abstract:

The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help lecturers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with the objective of developing a model for teaching and evaluating core competences and providing support to lecturers. This paper deals with the problem-solving competence. The first step has been to elaborate a guide for teachers that provides a homogeneous way to assess this competence. The guide considers several levels of acquisition of the competence and provides the rubrics to be applied for each one. The guide has subsequently been validated in several pilot experiences. In this paper we explain the problem-solving assessment guide for teachers and present the pilot experiences that have been carried out. We finally justify the validity of the method to assess the problem-solving competence.

Relevance:

40.00%

Publisher:

Abstract:

Description and critical analysis of a postgraduate workshop methodology to be carried out between two universities, conducted in English and supported by new technologies.

Relevance:

40.00%

Publisher:

Abstract:

The family of Boosting algorithms represents a type of classification and regression approach that has proven to be very effective in Computer Vision problems, such as the detection, tracking and recognition of faces, people, deformable objects and actions. The first and most popular algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have been proposed to extend it to more general domains: multi-class, multi-label, cost-sensitive, etc. Our interest is centered on extending AdaBoost to the multi-class field, considering it a first step for upcoming generalizations. In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model introduces an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the Boosting procedure takes class imbalances into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second algorithm proposed, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a new loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost and PIBoost algorithms, we consider it a canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting in standard and unbalanced problems. A set of experiments demonstrates the effectiveness of both methods against other relevant multi-class Boosting algorithms in their respective areas. In the experiments we resort to benchmark data sets used in the Machine Learning community, firstly for minimizing classification errors and secondly for minimizing costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data. We conclude the thesis by justifying the future improvements encompassed in our framework, given its applicability and theoretical flexibility.
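
PIBoost and BAdaCost themselves are not reproduced here; for orientation, the sketch below implements SAMME, the multi-class AdaBoost variant that BAdaCost is stated to generalize, using decision stumps as weak learners. It assumes NumPy and scikit-learn are available; the training loop follows the standard SAMME weighting rule rather than the margin-based losses proposed in the thesis.

```python
# SAMME (multi-class AdaBoost), the baseline that BAdaCost generalizes;
# not an implementation of PIBoost or BAdaCost themselves.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def samme_fit(X, y, n_rounds=50):
    n, classes = len(X), np.unique(y)
    K = len(classes)
    w = np.full(n, 1.0 / n)                    # sample weights
    ensemble = []                              # (alpha, weak learner) pairs
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        miss = pred != y
        err = np.clip(np.dot(w, miss), 1e-10, 1 - 1e-10)
        if err >= 1 - 1.0 / K:                 # weak learner no better than chance
            break
        alpha = np.log((1 - err) / err) + np.log(K - 1)
        w *= np.exp(alpha * miss)              # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble, classes

def samme_predict(ensemble, classes, X):
    votes = np.zeros((len(X), len(classes)))
    for alpha, stump in ensemble:
        pred = stump.predict(X)
        for k, c in enumerate(classes):
            votes[:, k] += alpha * (pred == c)
    return classes[np.argmax(votes, axis=1)]

X, y = load_iris(return_X_y=True)
model, classes = samme_fit(X, y)
print((samme_predict(model, classes, X) == y).mean())   # training accuracy
```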

Relevance:

30.00%

Publisher:

Abstract:

This work discusses an iterative procedure for shaping offset dual-reflector antennas based on geometrical optics, considering both far-field and near-field measurements of amplitude and phase from the feed horn. The synthesized surfaces transform a known radiation field of a feed into a desired aperture distribution. The technique is applied to both circular and elliptical apertures and has the advantage of simplifying the problem compared with existing techniques based on solving nonlinear differential equations. A MATLAB tool has been developed to implement the shaping algorithms. The procedure is applied to the design of a 1.1 m high-gain antenna for ESA’s Solar Orbiter spacecraft. This antenna, operating at X-band, will manage high-data-rate and high-efficiency communications with Earth stations.

Relevance:

30.00%

Publisher:

Abstract:

Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics applications. Genetic algorithms, the most well-known and representative evolutionary search technique, have been the subject of the majority of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that use EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain.
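
As a concrete illustration of the EDA loop described above (select promising solutions, learn a probabilistic model from them, sample the model to produce the next population), here is a minimal univariate marginal distribution algorithm (UMDA) applied to a toy OneMax objective; the problem, parameters and function names are illustrative and not drawn from the reviewed works.

```python
# Minimal UMDA (a univariate EDA): learn per-bit marginals from the best
# individuals and sample the next population from them. Toy OneMax objective.
import numpy as np

def umda(fitness, n_bits=40, pop_size=100, n_select=30, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    probs = np.full(n_bits, 0.5)                       # initial probabilistic model
    best, best_fit = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < probs).astype(int)   # sample the model
        fits = np.array([fitness(ind) for ind in pop])
        top = pop[np.argsort(fits)[-n_select:]]        # select the most promising solutions
        probs = top.mean(axis=0).clip(0.05, 0.95)      # re-estimate per-bit marginals
        if fits.max() > best_fit:
            best_fit, best = fits.max(), pop[fits.argmax()].copy()
    return best, best_fit

best, best_fit = umda(fitness=lambda ind: ind.sum())   # OneMax: maximize the number of ones
print(best_fit)                                        # typically reaches 40
```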