34 results for Strictly hyperbolic polynomial
at Instituto Politécnico do Porto, Portugal
Abstract:
The main goal of this work is to solve mathematical programs with complementarity constraints (MPCC) using nonlinear programming (NLP) techniques. A hyperbolic penalty function is used to solve MPCC problems by including the complementarity constraints in the penalty term. This penalty function [1] is twice continuously differentiable and combines features of both exterior and interior penalty methods. A set of AMPL problems from MacMPEC [2] is tested and a comparative study is performed.
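As a rough illustration of the approach, and not the authors' implementation, the hyperbolic penalty of Xavier, p(y) = −λy + √(λ²y² + τ²) for a constraint y ≥ 0, can absorb both the standard constraints and the complementarity condition into the objective; the penalty behaves like an interior penalty inside the feasible region and like an exterior penalty outside it. The toy problem and the parameter schedule below are assumptions:

```python
# A minimal sketch (not the authors' implementation) of the hyperbolic
# penalty applied to an MPCC. Hypothetical toy instance:
#   min x1 + x2  s.t.  x1, x2 >= 0,  x1 + x2 >= 1,  x1 * x2 = 0.
import numpy as np
from scipy.optimize import minimize

def hyp(y, lam, tau):
    """Hyperbolic penalty for a constraint y >= 0: smooth, ~0 when y >> 0,
    ~ -2*lam*y when y << 0."""
    return -lam * y + np.sqrt((lam * y) ** 2 + tau ** 2)

def penalized(x, lam, tau):
    f = x[0] + x[1]                         # objective
    pen = hyp(x[0], lam, tau)               # x1 >= 0
    pen += hyp(x[1], lam, tau)              # x2 >= 0
    pen += hyp(x[0] + x[1] - 1, lam, tau)   # x1 + x2 >= 1
    pen += hyp(-x[0] * x[1], lam, tau)      # complementarity: x1*x2 <= 0
    return f + pen

x, lam, tau = np.array([0.8, 0.2]), 1.0, 1.0
for _ in range(20):                         # grow lam, shrink tau
    x = minimize(penalized, x, args=(lam, tau)).x
    lam, tau = 2.0 * lam, 0.5 * tau
print(x)                                    # expected to approach (1, 0)
```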
Abstract:
Mathematical Programs with Complementarity Constraints (MPCC) find many applications in fields such as engineering design, economic equilibrium and mathematical programming theory itself. A queueing system model resulting from a single signalized intersection regulated by pre-timed control in a traffic network is considered. The model is formulated as an MPCC problem. A MATLAB implementation based on a hyperbolic penalty function is used to solve this practical problem, computing the total average waiting time of the vehicles in all queues and the green split allocation. The problem was coded in AMPL.
Abstract:
In this work we solve Mathematical Programs with Complementarity Constraints using the hyperbolic smoothing strategy. Under this approach, the complementarity condition is relaxed through the use of the hyperbolic smoothing function, involving a positive parameter that can be decreased to zero. An iterative algorithm is implemented in the MATLAB language and a set of AMPL problems from the MacMPEC database is tested.
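One common hyperbolic-type smoothing of the complementarity condition 0 ≤ a ⊥ b ≥ 0 is the CHKS function φτ(a, b) = a + b − √((a − b)² + 4τ²), whose zero set is the hyperbola ab = τ² and which recovers the exact condition as τ → 0. A minimal sketch of the iterative strategy described above on a toy problem follows; the specific smoothing function, solver and toy instance are assumptions, not necessarily the authors' choices:

```python
# Sketch: solve a smoothed NLP repeatedly while driving tau to zero.
# Hypothetical toy MPCC: min (x1 - 2)^2 + (x2 - 1)^2  s.t. 0 <= x1 _|_ x2 >= 0,
# whose global solution is (2, 0).
import numpy as np
from scipy.optimize import minimize

def phi(a, b, tau):
    # CHKS smoothing: phi = 0 is equivalent to a*b = tau^2 with a + b >= 0.
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * tau ** 2)

def solve_smoothed(x0, tau):
    cons = ({'type': 'eq',   'fun': lambda x: phi(x[0], x[1], tau)},
            {'type': 'ineq', 'fun': lambda x: x[0]},
            {'type': 'ineq', 'fun': lambda x: x[1]})
    obj = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
    return minimize(obj, x0, method='SLSQP', constraints=cons).x

x, tau = np.array([1.0, 0.5]), 1.0
while tau > 1e-8:            # decrease the smoothing parameter to zero
    x = solve_smoothed(x, tau)
    tau *= 0.1
print(x)                     # expected to approach (2, 0)
```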
Abstract:
We prove that the stable holonomies of a proper codimension 1 attractor Λ, for a C^r diffeomorphism f of a surface, are not C^{1+θ} for θ greater than the Hausdorff dimension of the stable leaves of f intersected with Λ. To prove this result we show that there are no diffeomorphisms of surfaces, with a proper codimension 1 attractor, that are affine on a neighbourhood of the attractor and have affine stable holonomies on the attractor.
Abstract:
The attached document is the post-print version (the version corrected by the publisher).
Abstract:
Introduction: A major focus of the data mining process, and of machine learning research in particular, is to automatically learn to recognize complex patterns and to help make adequate decisions based strictly on the acquired data. Since imaging techniques such as MPI (Myocardial Perfusion Imaging in Nuclear Cardiology) can take up a large part of the daily workflow and generate gigabytes of data, computerized analysis of the data may offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc.
Objectives: The aim of this study is to evaluate the efficacy of this methodology in the assessment of MPI stress studies and in the decision process concerning whether or not to continue the evaluation of each patient. The objective pursued is to automatically classify a patient test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" tests would proceed directly to the rest part of the exam, "Negative" tests would be exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinicians' effort, increasing workflow fluidity at the technologists' level and probably sparing patients' time.
Methods: The WEKA v3.6.2 open-source software was used to carry out a comparative analysis of three WEKA algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study on the "SPECT Heart Dataset", available at the University of California, Irvine Machine Learning Repository, using the corresponding clinical results, signed off by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered.
Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms.
Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained from MPI, namely after stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both technologists and nuclear cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve the system's accuracy.
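The study used WEKA; purely as an illustration of the same comparative methodology, a rough Python/scikit-learn analogue might look like the sketch below, where J48 is approximated by a CART decision tree, OneR by a depth-1 stump, and the file name and layout of the SPECT data (first column = class label, 22 binary features) are assumptions based on the UCI repository description:

```python
# Hedged sketch of a OneR/J48/Naive-Bayes-style comparison with 10-fold
# cross-validated ROC areas; not the study's WEKA pipeline.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_score

data = np.loadtxt("SPECT.train", delimiter=",")  # hypothetical local copy
y, X = data[:, 0], data[:, 1:]                   # label, then binary features

models = {
    "OneR-like stump": DecisionTreeClassifier(max_depth=1),
    "J48-like tree":   DecisionTreeClassifier(),
    "Naive Bayes":     BernoulliNB(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name:16s} mean ROC area = {auc.mean():.3f}")
```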
Abstract:
This work presents an overview of the economic and industrial reality of the ornamental granite processing sector in Portugal and analyses the sawing process with multi-blade gang saws and steel shot, since this is the method of sectioning granite blocks most widely used by the large industries of the sector. Given the economic importance of this production operation for the industry in question, the aims of this project were defined as: the statistical analysis of production costs; the definition of calculation formulas to predict the average sawing cost; and the study of economically viable and environmentally sustainable solutions for the problem of the sludge resulting from purging the gang saws. To carry out this project, data were collected by implementing control and recording routines, using standardized production charts that the operators of this equipment could fill in easily. This data collection made it possible to isolate, quantify and formulate the factors affecting the profitability of the sawing process by selecting, within the study sample obtained, a set of sawing runs with similar characteristics and with values close to the statistical mean. From the data of these sawing runs, polynomial trend curves were generated and used to analyse the variations in the average sawing cost caused by variations in the factor under study. The formulation of the profitability factors and the statistical data obtained then allowed the development of formulas for calculating the average sawing cost, which establish the production cost differentiated by slab thickness, with or without the incorporation of the profitability factors. As a result of the project, a set of conclusions useful to the industrial sector in question was obtained, highlighting the importance, for processing costs, of the occupancy of the gang saws and the profitable use of a confined space, of the resistance offered to sawing by the granites, and of the height difference between the blocks of the same load.
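The polynomial trend curves mentioned above relate a single profitability factor to the average sawing cost; a minimal sketch of that kind of fit, with made-up illustrative data rather than the study's measurements, is:

```python
# Fit a polynomial trend curve of average sawing cost against one
# profitability factor (here: gang-saw occupancy, in %). All numbers are
# illustrative placeholders, not values from the study.
import numpy as np

occupancy = np.array([60.0, 70.0, 80.0, 90.0, 95.0])   # factor under study
avg_cost  = np.array([14.2, 12.1, 10.8, 10.1,  9.8])   # mean cost, EUR/m^2

coeffs = np.polyfit(occupancy, avg_cost, deg=2)         # 2nd-degree trend
trend = np.poly1d(coeffs)
print(trend)            # fitted polynomial
print(trend(85.0))      # predicted average cost at 85% occupancy
```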
Abstract:
Consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct kinds of processors. We propose a polynomial-time approximation scheme (PTAS) for this problem. It offers the following guarantee: for a given task set and a given platform, if there exists a feasible task-to-processor assignment, then, given an input parameter ϵ, our PTAS succeeds, in polynomial time, in finding such a feasible task-to-processor assignment on a platform in which each processor is 1+3ϵ times faster. In our simulations, the PTAS outperforms the state-of-the-art PTAS [1] and, for the vast majority of task sets, requires a significantly smaller processor speedup than (its upper bound of) 1+3ϵ to successfully determine a feasible task-to-processor assignment.
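Since the tasks have implicit deadlines, a candidate task-to-processor assignment is feasible exactly when each processor's total utilization does not exceed its (possibly speeded-up) capacity; on a two-type platform a task's utilization depends on the processor type. A minimal sketch of that feasibility check, not the paper's PTAS, with hypothetical data (the names proc_type, util and feasible are illustrative):

```python
# Implicit-deadline sporadic tasks are EDF-schedulable on one processor
# iff their total utilization does not exceed the processor's speed.
def feasible(assignment, util, speedup=1.0):
    """assignment: list of (task_id, proc_id); util[task_id][ptype]:
    utilization of the task on each processor type (0 or 1)."""
    load = {}
    for task, proc in assignment:
        ptype = proc_type[proc]
        load[proc] = load.get(proc, 0.0) + util[task][ptype]
    return all(l <= speedup for l in load.values())

proc_type = {0: 0, 1: 1}                       # one processor of each type
util = [(0.6, 0.3), (0.5, 0.9), (0.4, 0.4)]    # per-type utilizations
assignment = [(0, 1), (1, 0), (2, 1)]          # tasks 0,2 -> proc 1; 1 -> proc 0
print(feasible(assignment, util))              # True: loads 0.5 and 0.7
```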
Abstract:
Composition is a practice of key importance in software engineering. When real-time applications are composed, it is necessary that their timing properties (such as meeting deadlines) are guaranteed. The composition is performed by establishing an interface between the application and the physical platform. Such an interface typically contains information about the amount of computing capacity needed by the application. On multiprocessor platforms, the interface should also convey information about the degree of parallelism. Recently there have been quite a few interface proposals; however, they are either too complex to be handled or too pessimistic. In this paper we propose the Generalized Multiprocessor Periodic Resource model (GMPR), which is strictly superior to the MPR model without requiring an overly detailed description. We describe a method to generate the interface from the application specification. All these methods have been implemented in MATLAB routines that are publicly available.
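The abstract does not spell out the interface format; purely as an illustration of an interface that records both computing capacity and degree of parallelism, one might encode it as below. The field names and the consistency check are assumptions for illustration, not the paper's GMPR notation:

```python
# Hypothetical encoding of a multiprocessor periodic-resource interface:
# a replenishment period plus, for each degree of parallelism, a budget
# guaranteed within that period.
from dataclasses import dataclass

@dataclass
class MultiprocessorInterface:
    period: float    # replenishment period of the supply
    budgets: list    # budgets[k-1] = time guaranteed with parallelism <= k

    def is_consistent(self):
        # Budgets must be non-decreasing in the parallelism level, and the
        # budget at parallelism k cannot exceed k processors for one period.
        return all(b1 <= b2 for b1, b2 in zip(self.budgets, self.budgets[1:])) \
            and all(b <= (k + 1) * self.period
                    for k, b in enumerate(self.budgets))

iface = MultiprocessorInterface(period=10.0, budgets=[6.0, 11.0, 14.0])
print(iface.is_consistent())   # True: 6 <= 11 <= 14, and 14 <= 3 * 10
```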
Abstract:
Consider the problem of deciding whether a set of n sporadic message streams meets its deadlines on a Controller Area Network (CAN) bus for a specified priority assignment. It is assumed that message streams have implicit deadlines and no release jitter. An algorithm to solve this problem is well known, but unfortunately its time complexity is non-polynomial. We present an algorithm with polynomial time complexity for computing an upper bound on the response times. Clearly, if the upper bound on the response time does not exceed the deadline, then all deadlines are met. The pessimism of our approach is proven: if the upper bound of the response time exceeds the deadline, then the response time exceeds the deadline as well on a CAN network of half the speed.
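One standard way to obtain such a polynomial-time upper bound, shown here as a hedged sketch rather than the paper's exact method, is to relax the ceiling in the classical fixed-priority recurrence R = C + B + Σ_{j∈hp} ⌈R/T_j⌉·C_j using ⌈x⌉ ≤ x + 1, which yields a closed form:

```python
# Closed-form upper bound on fixed-priority response time, obtained by
# linearizing the ceiling: R <= (C + B + sum C_j) / (1 - sum C_j/T_j).
# This is the standard linearization technique, not necessarily the
# paper's exact bound.
def response_time_ub(C, B, higher_prio):
    """C: transmission time of the message; B: blocking from lower priority;
    higher_prio: list of (C_j, T_j) for the higher-priority streams."""
    interference = sum(Cj for Cj, _ in higher_prio)
    hp_util = sum(Cj / Tj for Cj, Tj in higher_prio)
    assert hp_util < 1.0, "bound only valid when higher-priority load < 1"
    return (C + B + interference) / (1.0 - hp_util)

# Hypothetical streams, times in milliseconds:
print(response_time_ub(C=0.5, B=0.5, higher_prio=[(0.5, 10.0), (0.5, 5.0)]))
# -> (0.5 + 0.5 + 1.0) / (1 - 0.15) ~= 2.35 ms; compare against the deadline.
```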
Abstract:
Within the pedagogical community, Serious Games have arisen as a viable alternative to traditional course-based learning materials. Until now, they have been based strictly on software solutions. Meanwhile, research into Remote Laboratories has shown that they are a viable, low-cost solution for experimentation in an engineering context, providing uninterrupted access, low maintenance requirements, and a heightened sense of reality compared to simulations. This paper proposes a solution in which both approaches are combined to deliver a Remote Laboratory-based Serious Game for use in engineering and school education. The platform for this system is the WebLab-Deusto Framework, already well tested within the remote laboratory context and based on open standards. The laboratory allows users to control a mobile robot in a labyrinth environment and take part in an interactive game where they must locate and correctly answer several questions, the subject of which can be adapted to educators' needs. It also integrates the Google Blockly graphical programming language, allowing students to learn basic programming and logic principles without needing to understand complex syntax.
Abstract:
Prepared for presentation at the Portuguese Finance Network International Conference 2014, Vilamoura, Portugal, June 18-20
Abstract:
Nutritional management is essential for the treatment of Phenylketonuria (PKU), consisting of a semi-synthetic, low-phenylalanine (Phe) diet, which includes strictly controlled amounts of low-protein natural foods (essentially fruits and vegetables) supplemented with Phe-free protein substitutes and dietetic low-protein products. The PKU diet has to be carefully planned, providing the best ingredient combinations, so that patients can achieve good metabolic control and an adequate nutritional status. It is therefore essential to know the detailed composition of natural and/or cooked foodstuffs prepared specifically for these patients. We aimed to evaluate sixteen dishes specifically prepared for PKU patients with regard to nutritional composition, Phe and tyrosine (Tyr) contents, fatty acid profile, and vitamin E and B12 amounts. The nutritional composition of the cooked samples was 15.5–92.0 g/100 g for moisture; 0.7–3.2 g/100 g for protein; 0.1–25.0 g/100 g for total fat; and 5.0–62.0 g/100 g for total carbohydrates. The fatty acid profile and the vitamin E amount reflected the type of fat used. All samples were poor in vitamin B12 (0.3–0.8 μg/100 g). Boiled rice presented the highest Phe content: 50.3 mg/g of protein. These data allow a more accurate calculation of the diet portions to be ingested by the patients according to their individual tolerance.
Abstract:
Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two low-degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) using SA, it is guaranteed to find such an assignment, where the same restriction on task migration applies, but given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative), but given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate the average-case performance of both algorithms by generating task sets randomly and measuring the processor speedup the algorithms need (which is upper-bounded by 1+α/2 for SA and 1+α for SA-P) in order to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case in which no task utilization in the given task set can exceed one, and for this case we (re-)prove the performance guarantees of SA and SA-P. We show, for both algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring a much smaller processor speedup and by running orders of magnitude faster.
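The parameter α is defined directly in the abstract, so its computation is straightforward; a minimal sketch, with made-up utilization values and treating all task utilizations as one flat list, is:

```python
# Compute alpha (the maximum over all task utilizations that are no
# greater than 1) and the resulting speedup bounds for SA and SA-P.
def alpha(utilizations):
    eligible = [u for u in utilizations if u <= 1.0]
    return max(eligible)

utils = [0.2, 0.75, 0.4, 1.6]      # 1.6 exceeds one, so it is ignored
a = alpha(utils)                    # alpha = 0.75
print(f"SA   speedup bound: {1 + a / 2:.3f}")   # 1 + alpha/2 = 1.375
print(f"SA-P speedup bound: {1 + a:.3f}")       # 1 + alpha   = 1.750
```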