35 results for Contractual penalty
Resumo:
Serious games are starting to attain a greater role as tools for learning in various contexts, in particular in areas such as education and training. Due to their characteristics, such as rules, behavior simulation and feedback to the player's actions, serious games provide a favorable learning environment where errors can occur without real-life penalty and students get instant feedback on challenges. These challenges are in accordance with the intended objectives and self-adapt and repeat according to the student's difficulty level. Through motivating and engaging environments, which serve as a basis for problem solving and for simulating different situations and contexts, serious games have great potential to help players develop professional skills. But how do we certify the acquired knowledge and skills? With this work we propose a methodology to establish a relationship between the game mechanics of serious games and an array of competences for certification, evaluating the applicability of various aspects of game design and development, such as the user interface and the gameplay, so that learning outcomes are obtained within the game itself. Through the definition of game mechanics combined with the necessary pedagogical elements, the game itself will support the certification. This paper presents a matrix of generic skills, based on the European Qualifications Framework, and the definition of the game mechanics necessary for certification in the context of tour guide training. The certification matrix takes as its reference axes skills, knowledge and competencies, which describe what students should learn, understand and be able to do after they complete the learning process. Guide-interpreters welcome and accompany tourists on trips and visits to places of tourist interest and cultural heritage, such as museums, palaces and national monuments, where they provide a variety of information. Tour guide certification requirements include specific skills and knowledge of foreign languages and of the History, Ethnology, Politics, Religion, Geography and Art of the territory in which they operate. The required skills are communication, interpersonal relationships, motivation, organization and management. This certification process aims to validate the skills to plan and conduct guided tours in the territory, to demonstrate knowledge appropriate to the context and, finally, to act as a good group leader. After defining which competences are to be certified, the next step is to delineate the expected learning outcomes and to identify the game mechanics associated with them. The game mechanics, as the methods invoked by agents to interact with the game world, combined with game elements/objects, allow multiple paths through which to explore the game environment and its educational process. Mechanics such as achievements, appointments, progression, reward schedules or status describe how the game can be designed to affect players in unprecedented ways. In order for the game to be able to certify tour guides, the design of the training game will incorporate a set of theoretical and practical tasks for acquiring skills and knowledge on several transversal themes. To this end, patterns of skills and abilities in acquiring different kinds of knowledge will be identified.
Resumo:
This paper presents an approach to the sanctioning powers applicable in proceedings conducted by the European Commission when applying the substantive and procedural rules of European Union competition law.
Resumo:
Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution to understand and analyze the timing behaviour of actual systems. However, care must be taken with the obtained outputs, under penalty of producing results that lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions from simulation output, and the confidence that can be attributed to their mean values. This is the basis for a discussion on the applicability of such approaches to derive confidence on the tail of distributions, where the worst case is expected to be.
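As an illustration of the distinction drawn above, the following minimal Python sketch estimates a deadline-miss probability from simulation output and attaches a confidence interval to that tail probability rather than to the mean; the data, deadline and function names are hypothetical, and independent replications are assumed.

```python
import math
import random
from statistics import mean, stdev

def tail_probability_ci(samples, deadline, z=1.96):
    """Estimate P(response time > deadline) with a normal-approximation
    95% confidence interval on the tail probability itself (not the mean).
    For very extreme tails, far more replications would be needed."""
    n = len(samples)
    p_hat = sum(1 for s in samples if s > deadline) / n
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat, max(0.0, p_hat - half), min(1.0, p_hat + half)

# Example: 10 000 simulated response times from independent replications.
random.seed(1)
samples = [random.gammavariate(2.0, 3.0) for _ in range(10_000)]
print("mean +/- stdev:", mean(samples), stdev(samples))
print("P(RT > 20) and 95% CI:", tail_probability_ci(samples, deadline=20.0))
```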
Resumo:
Consider the problem of scheduling a set of tasks on a single processor such that deadlines are met. Assume that tasks may share data and that linearizability, the most common correctness condition for data sharing, must be satisfied. We find that linearizability can severely penalize schedulability. We identify, however, two special cases where linearizability causes no penalty, or only a small penalty, on schedulability.
Resumo:
In this work we present a classification of some of the existing penalty methods (the so-called exact penalty methods) and describe some of their limitations and estimates. With these methods we can solve optimization problems with continuous, discrete and mixed constraints, without requiring continuity, differentiability or convexity. The approach consists of transforming the original problem into a sequence of unconstrained problems derived from it, making it possible to solve them with the methods known for this type of problem. Thus, penalty methods can be used as a first step for solving constrained problems with methods typically used for unconstrained problems. The work ends by discussing a new class of penalty methods for nonlinear optimization that adjusts the penalty parameter dynamically.
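A rough sketch of the transformation described above, assuming inequality constraints in the form g_i(x) <= 0 and an l1-type exact penalty whose parameter is increased dynamically while the iterate remains infeasible; the inner solver, function names and parameter values are illustrative only, not the specific methods classified in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def exact_penalty_solve(f, ineq_constraints, x0, mu=1.0, factor=10.0,
                        tol=1e-6, max_outer=20):
    """l1 exact-penalty scheme: minimize f(x) + mu * sum(max(0, g_i(x)))
    with a derivative-free inner solver, increasing mu while infeasible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        penalized = lambda v: f(v) + mu * sum(max(0.0, g(v)) for g in ineq_constraints)
        x = minimize(penalized, x, method="Nelder-Mead").x
        violation = sum(max(0.0, g(x)) for g in ineq_constraints)
        if violation < tol:
            break
        mu *= factor                      # dynamic penalty-parameter update
    return x

# Example: minimize (x-2)^2 + (y-1)^2 subject to x + y <= 2 (solution ~ (1.5, 0.5)).
f = lambda v: (v[0] - 2.0) ** 2 + (v[1] - 1.0) ** 2
g = [lambda v: v[0] + v[1] - 2.0]         # constraints written as g(v) <= 0
print(exact_penalty_solve(f, g, x0=[0.0, 0.0]))
```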
Resumo:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems obtained from the original problem, has been shown to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filter method. The filter method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, whereas the filter method treats the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filter method with derivative-free algorithms, the authors have developed work in which other direct search methods were used, combining their potential with the filter method. More recently, a new variant of these methods was presented, in which some alternative constraint aggregations for the construction of the filters were proposed. This paper presents a variant of the filter method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and the constraints are interlinked and not treated completely independently.
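The bi-objective treatment mentioned above can be sketched as follows, assuming the filter is stored as pairs (h, f) of aggregated constraint violation and objective value; the dominance rule, the envelope margin and the function names are simplified assumptions, not the authors' exact formulation.

```python
def constraint_violation(x, ineq_constraints):
    """Aggregate constraint violation h(x) as the sum of squared violations."""
    return sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)

def filter_accepts(filter_entries, h_new, f_new, gamma=1e-5):
    """A trial point (h_new, f_new) is acceptable if it is not dominated by
    any stored pair (h, f), with a small sloped envelope gamma."""
    return all(h_new < (1.0 - gamma) * h or f_new < f - gamma * h
               for (h, f) in filter_entries)

def filter_add(filter_entries, h_new, f_new):
    """Insert the accepted pair and drop any entries it dominates."""
    kept = [(h, f) for (h, f) in filter_entries if h < h_new or f < f_new]
    kept.append((h_new, f_new))
    return kept

# Tiny usage example: one stored entry, two trial points.
flt = [(0.5, 10.0)]
print(filter_accepts(flt, 0.2, 12.0))   # less infeasible -> acceptable
print(filter_accepts(flt, 0.6, 11.0))   # dominated -> rejected
flt = filter_add(flt, 0.2, 12.0)
```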
Resumo:
Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative for solving constrained nonlinear optimization problems is the filter method. Filter methods, introduced by Fletcher and Leyffer in 2002, have been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as a bi-objective problem, attempting to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filter method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we have developed a new direct search method, based on simplex methods, for general constrained optimization, that combines the features of the simplex method and the filter method. This work presents a new variant of these methods, which combines the filter method with other direct search methods, and proposes some alternatives for aggregating the constraint violation functions.
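As a hedged illustration of what alternatives for aggregating the constraint violation functions may look like in practice, the sketch below compares l1, l2 and max aggregations of the individual violations v_i = max(0, g_i(x)); these are common choices, not necessarily the specific alternatives proposed in the paper.

```python
import math

def aggregate_violations(violations, scheme="l2"):
    """Alternative aggregations of the individual constraint violations
    v_i = max(0, g_i(x)) into a single violation measure h(x)."""
    if scheme == "l1":
        return sum(violations)
    if scheme == "l2":
        return math.sqrt(sum(v * v for v in violations))
    if scheme == "max":
        return max(violations, default=0.0)
    raise ValueError(f"unknown aggregation scheme: {scheme}")

# Example with two violated constraints and one satisfied one.
v = [0.3, 0.0, 1.2]
print(aggregate_violations(v, "l1"),
      aggregate_violations(v, "l2"),
      aggregate_violations(v, "max"))
```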
Resumo:
The structural integrity of multi-component structures is usually determined by the strength and durability of their unions. Adhesive bonding is often chosen over welding, riveting and bolting due to the reduction of stress concentrations, the reduced weight penalty and the ease of manufacturing, amongst other advantages. In the past decades, the Finite Element Method (FEM) has been used for the simulation and strength prediction of bonded structures, with strength-of-materials or fracture-mechanics-based criteria. Cohesive-zone models (CZMs) have already proved to be an effective tool for modelling damage growth, surpassing a few limitations of the aforementioned techniques. Despite this, they still suffer from the restriction that damage can only grow along predefined paths. The eXtended Finite Element Method (XFEM) is a recent improvement of the FEM, developed to allow the growth of discontinuities within bulk solids along an arbitrary path by enriching degrees of freedom with special displacement functions, thus overcoming the main restriction of CZMs. These two techniques were tested in the simulation of adhesively bonded single- and double-lap joints. The comparative evaluation of the two methods showed their capabilities and/or limitations for this specific purpose.
Resumo:
Personalised video can be achieved by inserting objects into a video play-out according to the viewer's profile. Content which has been authored and produced for general broadcast can take on additional commercial service features when personalised either for individual viewers or for groups of viewers participating in entertainment, training, gaming or informational activities. Although several scenarios and use-cases can be envisaged, we are focussed on the application of personalised product placement. Targeted advertising and product placement are currently garnering intense interest in the commercial networked media industries. Personalisation of product placement is a relevant and timely service for next generation online marketing and advertising and for many other revenue generating interactive services. This paper discusses the acquisition and insertion of media objects into a TV video play-out stream where the objects are determined by the profile of the viewer. The technology is based on MPEG-4 standards using object based video and MPEG-7 for metadata. No proprietary technology or protocol is proposed. To trade the objects into the video play-out, a Software-as-a-Service brokerage platform based on intelligent agent technology is adopted. Agencies, libraries and service providers are represented in a commercial negotiation to facilitate the contractual selection and usage of objects to be inserted into the video play-out.
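Purely as an illustration of profile-driven object selection (the actual system relies on MPEG-4 object-based video, MPEG-7 metadata and an agent-based brokerage platform, none of which is modelled here), a hypothetical matching step might look like this; all names and fields are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MediaObject:
    object_id: str
    keywords: set  # descriptive metadata, e.g. keywords derived from MPEG-7 tags

def select_placement(viewer_profile: set, candidates: list) -> Optional[MediaObject]:
    """Pick the candidate object whose metadata best overlaps the viewer's
    interest profile; return None if nothing matches at all."""
    scored = [(len(viewer_profile & c.keywords), c) for c in candidates]
    best_score, best = max(scored, key=lambda t: t[0], default=(0, None))
    return best if best_score > 0 else None

profile = {"sports", "outdoor", "energy-drinks"}
objects = [MediaObject("obj-001", {"cars", "luxury"}),
           MediaObject("obj-002", {"sports", "energy-drinks"})]
print(select_placement(profile, objects))   # -> obj-002
```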
Resumo:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto to obtain the degree of Master in Auditing, under the supervision of Mestre Adalmiro Álvaro Malheiro de Castro Andrade Pereira.
Resumo:
The elastic behavior of demand, used jointly with other available resources such as distributed generation (DG), can play a crucial role in the success of smart grids. The intensive use of Distributed Energy Resources (DER) and the technical and contractual constraints result in large-scale nonlinear optimization problems that require computational intelligence methods to be solved. This paper proposes a Particle Swarm Optimization (PSO) based methodology to support the minimization of the operation costs of a virtual power player that manages the resources in a distribution network and the network itself. Resources include the DER available in the considered time period and the energy that can be bought from external energy suppliers. Network constraints are considered. The proposed approach uses Gaussian mutation of the strategic parameters and contextual self-parameterization of the maximum and minimum particle velocities. The case study considers a real 937-bus distribution network with 20,310 consumers and 548 distributed generators. The obtained solutions are compared with those of a deterministic approach, of PSO without mutation, and of Evolutionary PSO, both of the latter using self-parameterization.
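A compact sketch of PSO with Gaussian mutation of the strategic parameters (inertia and acceleration coefficients) and clamped velocities, run on a toy cost function; the actual methodology also self-parameterizes the velocity limits and handles DER and network constraints, which are omitted here, and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-10.0, 10.0)):
    """Plain PSO where each particle's strategic parameters (w, c1, c2)
    receive Gaussian mutation at every iteration; velocities are clamped."""
    lo, hi = bounds
    v_max = 0.2 * (hi - lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    strat = np.tile([0.7, 1.5, 1.5], (n_particles, 1))   # w, c1, c2 per particle
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        strat += rng.normal(0.0, 0.05, strat.shape)       # Gaussian mutation
        w, c1, c2 = strat[:, 0:1], strat[:, 1:2], strat[:, 2:3]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    -v_max, v_max)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(cost, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

print(pso_minimize(lambda z: float(np.sum(z ** 2)), dim=5))
```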
Resumo:
In this paper we address the problem of computing multiple roots of a system of nonlinear equations through the global optimization of an appropriate merit function. The search procedure for a global minimizer of the merit function is carried out by a metaheuristic, known as harmony search, which does not require any derivative information. The multiple roots of the system are sequentially determined along several iterations of a single run, where the merit function is accordingly modified by penalty terms that aim to create repulsion areas around previously computed minimizers. A repulsion algorithm based on a multiplicative-type penalty function is proposed. Preliminary numerical experiments with a benchmark set of problems show the effectiveness of the proposed method.
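The repulsion idea can be sketched as follows: the merit function (here simply ||F(x)||^2) is multiplied by a factor greater than one inside a radius around each previously found root, so the metaheuristic is pushed away from regions already explored. The radius, strength and exact penalty form below are assumptions for illustration, not the paper's specific multiplicative penalty.

```python
import numpy as np

def repulsive_merit(base_merit, found_roots, radius=0.5, strength=100.0):
    """Wrap the merit function with multiplicative penalty terms that
    create repulsion areas around previously located roots."""
    def merit(x):
        value = base_merit(x)
        for r in found_roots:
            d = np.linalg.norm(np.asarray(x) - np.asarray(r))
            if d < radius:
                value *= 1.0 + strength * (radius - d)   # inflate near old roots
        return value
    return merit

# Merit for F(x) = 0 taken as ||F(x)||^2; the system x^2 - 1 = 0 has roots +1 and -1.
F = lambda x: np.array([x[0] ** 2 - 1.0])
base = lambda x: float(F(x) @ F(x))
penalized = repulsive_merit(base, found_roots=[[1.0]])
# Near the already-found root the merit is inflated; near the other root it is not.
print(base([0.95]), penalized([0.95]), penalized([-0.95]))
```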
Resumo:
Artificial Intelligence has been applied to dynamic games for many years. The ultimate goal is to create virtual entities whose responses display human-like reasoning in the definition of their behaviors. However, virtual entities that can be mistaken for real persons are still very far from being achieved. This paper presents an adaptive-learning-based methodology for the definition of players' profiles, with the purpose of supporting decisions of virtual entities. The proposed methodology is based on reinforcement learning algorithms, which are responsible for choosing, over time and as experience is gathered, the most appropriate of a set of different learning approaches. These learning approaches have very distinct natures, from mathematical to artificial intelligence and data analysis methodologies, so that the methodology is prepared for very distinct situations. In this way it is equipped with a variety of tools, each of which can be useful in a particular situation. The proposed methodology is first tested on two simpler computer-versus-human games: the rock-paper-scissors game and a penalty-shootout simulation. Finally, the methodology is applied to the definition of action profiles of electricity market players, who compete in a dynamic, game-like environment in which the main goal is to achieve the highest possible profits in the market.
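A minimal sketch of the selection layer described above, assuming an epsilon-greedy reinforcement learning rule that learns, from accumulated experience, which of several prediction approaches performs best against a given player; rock-paper-scissors is used as the toy domain, and all class and approach names are hypothetical.

```python
import random

class ApproachSelector:
    """Epsilon-greedy selector over a set of candidate learning approaches."""
    def __init__(self, approaches, epsilon=0.1):
        self.approaches = approaches            # name -> callable(history) -> predicted move
        self.value = {name: 0.0 for name in approaches}
        self.count = {name: 0 for name in approaches}
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:      # occasional exploration
            return random.choice(list(self.approaches))
        return max(self.value, key=self.value.get)

    def update(self, name, reward):
        """Incremental average of the reward obtained by each approach."""
        self.count[name] += 1
        self.value[name] += (reward - self.value[name]) / self.count[name]

# Two toy prediction approaches for rock-paper-scissors.
approaches = {
    "repeat-last": lambda hist: hist[-1] if hist else "rock",
    "most-frequent": lambda hist: max(set(hist), key=hist.count) if hist else "rock",
}
selector = ApproachSelector(approaches)
name = selector.choose()
predicted = approaches[name](["rock", "rock", "paper"])
selector.update(name, reward=1.0 if predicted == "rock" else 0.0)
```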
Resumo:
In recent years, vehicular cloud computing (VCC) has emerged as a new technology which is being used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines which can be used to collect and transfer healthcare data to local or global sites for storage and computation purposes, as vehicles have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. These security breaches can result in disastrous consequences such as loss of life or financial fraud. To address these issues, a learning-automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of applications, we have taken a multimedia-based healthcare application to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles, which take clustering decisions intelligently and select one of the members of the group as a cluster-head. The cluster-heads then assist in efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, and the automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. A reward or penalty is given by the stochastic environment in which the automaton performs its actions, and the automaton updates its action probability vector after receiving the reinforcement signal from the environment. The proposed scheme was evaluated using extensive simulations on ns-2 with SUMO. The results obtained indicate that the proposed scheme yields an improvement of 10% in the detection rate of malicious nodes when compared with existing schemes.
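The reward/penalty update of the action probability vector can be illustrated with a standard linear reward-penalty scheme; the learning rates below, and the assumption that this particular scheme matches the one used in the paper, are illustrative only.

```python
def update_probabilities(p, chosen, reinforcement, a=0.1, b=0.05):
    """Linear reward-penalty update of a learning automaton's action
    probability vector p after the environment returns a reinforcement
    signal (1 = reward, 0 = penalty) for the chosen action."""
    n = len(p)
    q = list(p)
    if reinforcement == 1:                      # reward: favour the chosen action
        for i in range(n):
            q[i] = p[i] + a * (1.0 - p[i]) if i == chosen else (1.0 - a) * p[i]
    else:                                       # penalty: shift probability away
        for i in range(n):
            q[i] = (1.0 - b) * p[i] if i == chosen else b / (n - 1) + (1.0 - b) * p[i]
    return q

# Example: three actions; action 0 is chosen, first rewarded, then penalised.
p = [1 / 3, 1 / 3, 1 / 3]
p = update_probabilities(p, chosen=0, reinforcement=1)
p = update_probabilities(p, chosen=0, reinforcement=0)
print(p, sum(p))   # probabilities still sum to 1
```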
Resumo:
The scope of this work involves testing the BIM model on a construction project under way by Mota-Engil – Engenharia, through the experimental extraction of preparation drawings that support the execution of the works. Chapter 1 of this report defines the scope and objectives of the work, provides a historical framing of the subject and covers the concepts and activities of construction preparation in its traditional form. The state of the art of construction preparation, and more specifically of BIM technology at national and international level, is addressed in chapter 2, which seeks to define the main concepts underlying this new methodology by identifying and characterizing the technology involved and its level of development. Based on practical cases of construction preparation in its traditional form, identified and developed in chapter 3, a standard set of supporting drawings, frequent and common to the execution of several types of building works, was compiled, identified and characterized in chapter 4. Drawing on this compilation of practical cases and on the study of the execution design of the contract that underpins this work, from which the BIM model was built, a set of 2D preparation and execution-support drawings to be extracted from the model was identified. Chapter 5 describes how the project design was studied, highlighting the most relevant factors and specifying the drawings to be extracted. Supported by the ArchiCAD modelling program, the extraction of the set of drawings identified above was achieved using the functionalities available in the software, which allows the creation of 2D drawings that may or may not be automatically updated from the model. Any change introduced in the virtual model is automatically propagated to the two-dimensional drawings, if the user so wishes. Throughout this work, the constraints inherent to the extraction process were detected and analysed, as reported in chapter 6, in order to establish standard modelling rules to be adopted in future contracts that can simplify the production of the preparation drawings needed for their execution. Section 6.3 identifies improvements to be introduced in the model. In conclusion, chapter 7 addresses specific characteristics of the construction sector that increasingly support and highlight the need to use new technologies towards the adoption of standard practices and tools to support the execution of works. Since BIM technology cuts across the whole sector, its use with standard rules for the design of models and the extraction of data enhances the optimization of cost, time, resources and the final quality of a development throughout its entire life cycle, in addition to supporting decision-making with high reliability over that period. BIM technology makes it possible to preview the building to be constructed with a high degree of detail, with all the advantages that this entails.