964 results for multiple objective programming
Abstract:
Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates both to the applications' class structure and to the database structure. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as related problems such as database evolution, concurrency and error handling. Most systems provide transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and the instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that only know the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects do not need to extend any superclass, implement an interface, or carry a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment that supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure because the framework will produce a new version for it at the database metadata layer.
Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism is extended, hence keeping the framework also oblivious to this problem. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the mechanisms' transparency has positive repercussions on the programmer's productivity, simplifying the entire evolution process at both the application and the database level. The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate the robustness of the prototype and the meta-model. To perform these tests, we used a small-size OO7 database, chosen for its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible. However, the developed benchmark is now available to perform future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms. Using our framework and minor changes to the application's source code, we added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience with our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
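To make the idea of persistence "introduced as an aspect" concrete, the following AspectJ-style sketch (a Java extension) shows how object creation and field updates of application classes could be intercepted and routed to a persistence layer without touching the classes themselves. It is an illustration only, not the thesis' actual framework: the PersistenceManager class and the com.example.app package are invented names.

    // Hypothetical AspectJ sketch: persistence woven in as an aspect, so the
    // application classes need no superclass, interface or annotation.
    public aspect TransparentPersistence {

        // Every constructor execution of an application class (package is illustrative).
        pointcut newAppObject(): execution(com.example.app..*.new(..));

        // Every assignment to a field of an application class.
        pointcut fieldUpdate(Object owner): set(* com.example.app..*.*) && target(owner);

        // Register freshly created objects with the (hypothetical) persistence layer.
        after(Object obj) returning: newAppObject() && this(obj) {
            PersistenceManager.instance().makePersistent(obj);
        }

        // Mark the owning object dirty so it is written back when the transaction commits.
        after(Object owner): fieldUpdate(owner) {
            PersistenceManager.instance().markDirty(owner);
        }
    }

Because the join points are selected purely by pointcut patterns, the domain classes stay oblivious to persistence, which is the property the abstract emphasizes.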
Abstract:
Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM). Our main objective is to provide high performance while maintaining ease of programming. GPRM supports native parallelism; it provides a modular way of expressing parallel tasks and the communication patterns between them. Compiling a GPRM program results in an Intermediate Representation (IR) containing useful information about tasks and their dependencies, as well as the initial mapping information. This compile-time information helps reduce the overhead of runtime task scheduling and is key to high performance. Generally speaking, the granularity and the number of tasks are major factors in achieving high performance. These factors are even more important in the case of GPRM, as it is highly dependent on tasks rather than threads. We use three basic benchmarks to provide a detailed comparison of GPRM with Intel OpenMP, Cilk Plus, and Threading Building Blocks (TBB) on the Intel Xeon Phi, and with GNU OpenMP on the Tilera TILEPro64. GPRM shows superior performance in almost all cases, merely by controlling the number of tasks. GPRM also provides a low-overhead mechanism, called “Global Sharing”, which improves performance in multiprogramming situations. We use OpenMP, the most popular model for shared-memory parallel programming, as the main GPRM competitor for solving three well-known problems on both platforms: LU factorisation of Sparse Matrices, Image Convolution, and Linked List Processing. We focus on proposing solutions that best fit GPRM’s model of execution. GPRM outperforms OpenMP in all cases on the TILEPro64. On the Xeon Phi, our solution for the LU Factorisation results in notable performance improvement for sparse matrices with large numbers of small blocks. We investigate the overhead of GPRM’s task creation and distribution for very short computations using the Image Convolution benchmark. We show that this overhead can be mitigated by combining smaller tasks into larger ones. As a result, GPRM can outperform OpenMP for convolving large 2D matrices on the Xeon Phi. Finally, we demonstrate that our parallel worksharing construct provides an efficient solution for Linked List Processing and performs better than the OpenMP implementations on the Xeon Phi. The results are very promising, as they verify that our parallel programming framework for manycore processors is flexible and scalable, and can provide high performance without sacrificing productivity.
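The point about mitigating task-creation overhead by merging many tiny work items into fewer, larger tasks is a general granularity-control technique. The plain-Java sketch below (unrelated to GPRM's actual runtime; all names are invented) illustrates it by splitting a large loop into one chunk per core, so that scheduling overhead is paid per chunk rather than per item.

    // Illustrative sketch of granularity control: many tiny work items are
    // grouped into a fixed number of larger tasks to amortise per-task overhead.
    import java.util.concurrent.*;

    public class ChunkedTasks {
        public static void main(String[] args) throws Exception {
            int items = 1_000_000;                                    // many tiny work items
            int tasks = Runtime.getRuntime().availableProcessors();   // few large tasks
            int chunk = (items + tasks - 1) / tasks;
            double[] out = new double[items];

            ExecutorService pool = Executors.newFixedThreadPool(tasks);
            Future<?>[] futures = new Future<?>[tasks];
            for (int t = 0; t < tasks; t++) {
                final int lo = t * chunk, hi = Math.min(items, lo + chunk);
                futures[t] = pool.submit(() -> {
                    for (int i = lo; i < hi; i++) out[i] = Math.sqrt(i); // tiny computation
                });
            }
            for (Future<?> f : futures) f.get();   // wait for all chunks
            pool.shutdown();
            System.out.println(out[items - 1]);
        }
    }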
Abstract:
These are the instructions for a programming assignment in the course Programming 3, taught at the University of Alicante, Spain. The objective of the assignment is to build an object-oriented version of Conway's Game of Life in Java. The assignment is divided into four sub-assignments.
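For context, the core of Conway's Game of Life is a simple neighbour-counting update rule. The minimal Java sketch below shows one generation step on a bounded grid; it only illustrates the rule and is not the assignment's required object-oriented design.

    // One Game of Life step: a live cell survives with 2 or 3 live neighbours;
    // a dead cell becomes alive with exactly 3; everything else dies or stays dead.
    public class LifeSketch {
        static boolean[][] step(boolean[][] grid) {
            int rows = grid.length, cols = grid[0].length;
            boolean[][] next = new boolean[rows][cols];
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    int live = 0;
                    for (int dr = -1; dr <= 1; dr++)
                        for (int dc = -1; dc <= 1; dc++) {
                            if (dr == 0 && dc == 0) continue;
                            int nr = r + dr, nc = c + dc;
                            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc]) live++;
                        }
                    next[r][c] = grid[r][c] ? (live == 2 || live == 3) : (live == 3);
                }
            }
            return next;
        }

        public static void main(String[] args) {
            // A "blinker": three live cells in a row oscillate between horizontal and vertical.
            boolean[][] g = new boolean[5][5];
            g[2][1] = g[2][2] = g[2][3] = true;
            g = step(g);
            System.out.println(g[1][2] && g[2][2] && g[3][2]); // prints true
        }
    }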
Abstract:
Macroeconomic policy makers are typically concerned with several indicators of economic performance. We thus propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multiobjective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model. We chose a CGE model since it has the dual advantage of being consistent with standard economic theory while allowing one to measure the effect(s) of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that policy in 1995 Spain displayed some degree of inefficiency with respect to these two policy objectives. We then offer two sets of policy recommendations that, ostensibly, could have helped Spain at the time. The first deals with efficiency independently of the importance given to growth and inflation by policy makers (we label this set general policy recommendations). The second depends on which policy objective policy makers see as more important: increasing growth or controlling inflation (we label this set objective-specific recommendations).
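As a schematic illustration only (not the paper's actual CGE specification), the bi-objective design problem described above can be written as a multiobjective programme over the set X of fiscal policies consistent with the CGE equilibrium, with g(x) denoting real growth and pi(x) inflation:

\[
\max_{x \in X} \ \bigl( g(x),\, -\pi(x) \bigr),
\]

where a policy \(x^{*}\) is efficient if no feasible \(x\) improves one objective without worsening the other. The efficient frontier can then be traced, for instance, by solving a family of scalarized problems \(\max_{x \in X}\, w_1 g(x) - w_2 \pi(x)\) for weights \(w_1, w_2 \ge 0\); this is one standard MP device, and the specific technique used in the paper may differ.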
Abstract:
Object recognition has long been a core problem in computer vision. To improve object spatial support and speed up object localization for object recognition, generating high-quality, category-independent object proposals as the input to object recognition systems has drawn attention recently. Given an image, we generate a limited number of high-quality, category-independent object proposals in advance, to be used as inputs for many computer vision tasks. We also present an efficient dictionary-based model for the image classification task, and further extend the work to a discriminative dictionary learning method for tensor sparse coding. In the first part, a multi-scale, greedy-based object proposal generation approach is presented. Based on the multi-scale nature of objects in images, our approach is built on top of a hierarchical segmentation. We first identify the representative and diverse exemplar clusters within each scale. Object proposals are obtained by selecting a subset from the multi-scale segment pool via maximizing a submodular objective function, which consists of a weighted coverage term, a single-scale diversity term and a multi-scale reward term. The weighted coverage term forces the selected set of object proposals to be representative and compact; the single-scale diversity term encourages choosing segments from different exemplar clusters so that they cover as many object patterns as possible; the multi-scale reward term encourages the selected proposals to be discriminative and drawn from multiple layers generated by the hierarchical image segmentation. The experimental results on the Berkeley Segmentation Dataset and the PASCAL VOC2012 segmentation dataset demonstrate the accuracy and efficiency of our object proposal model. Additionally, we validate our object proposals in simultaneous segmentation and detection, where they outperform the state of the art. To classify the object in the image, we design a discriminative, structural low-rank framework for image classification. We use a supervised learning method to construct a discriminative and reconstructive dictionary. By introducing an ideal regularization term, we perform low-rank matrix recovery for contaminated training data from all categories simultaneously without losing structural information. A discriminative low-rank representation for images with respect to the constructed dictionary is obtained. With semantic structure information and strong identification capability, this representation is well suited to classification tasks, even when using a simple linear multi-class classifier.
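In generic form (the exact terms and weights are the authors'), the proposal selection step described above maximizes a submodular objective over subsets S of the multi-scale segment pool V under a budget of k proposals:

\[
\max_{S \subseteq V,\ |S| \le k} \ F(S) \;=\; \mathcal{C}_{w}(S) \;+\; \lambda_1\, \mathcal{D}(S) \;+\; \lambda_2\, \mathcal{R}(S),
\]

where \(\mathcal{C}_{w}\) is the weighted coverage term, \(\mathcal{D}\) the single-scale diversity term, \(\mathcal{R}\) the multi-scale reward term, and \(\lambda_1, \lambda_2\) trade-off weights. When F is monotone and submodular, the standard greedy algorithm carries a (1 - 1/e) approximation guarantee under a cardinality constraint, which is what motivates greedy-based selection in general.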
Abstract:
The aim of this paper is to extend the classical envelope theorem from scalar to vector differential programming. The result obtained allows us to measure the quantitative behaviour of a certain set of optimal values (not necessarily a singleton), characterized as becoming minimal when the objective function is composed with a positive function, with respect to changes in any of the parameters appearing in the constraints. We show that the sensitivity of the program depends on a Lagrange multiplier and on its sensitivity.
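For reference, the classical scalar envelope theorem that the paper generalizes can be stated as follows: for the parametric programme

\[
V(\theta) \;=\; \min_{x} \;\{\, f(x) \;:\; g(x) \le \theta \,\},
\qquad
L(x,\lambda,\theta) \;=\; f(x) + \lambda^{\top}\bigl(g(x)-\theta\bigr),
\]

the value function satisfies, at a regular optimum \((x^{*}(\theta), \lambda^{*}(\theta))\),

\[
\frac{\partial V}{\partial \theta_j}(\theta) \;=\; \frac{\partial L}{\partial \theta_j}\Big|_{(x^{*}(\theta),\,\lambda^{*}(\theta))} \;=\; -\lambda_j^{*}(\theta),
\]

so the Lagrange multipliers measure the sensitivity of the optimal value to the constraint parameters. The paper extends this statement to vector-valued objectives, where the optimal values form a set rather than a singleton.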
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach for dealing with two-stage and multistage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we review some of the popular methods and propose a new adaptive strategy that aims to better follow the progress of the algorithm. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques may exhibit premature convergence to a suboptimal solution, or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear term. Although we have yet to test this method, our intuition is that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
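For reference, the progressive hedging iteration whose penalty parameter and quadratic term are discussed above is, in its standard two-stage form (scenario probabilities p_s, penalty rho > 0, multipliers w_s):

\[
x_s^{k+1} = \arg\min_{x \in X_s} \; f_s(x) + (w_s^{k})^{\top} x + \tfrac{\rho}{2}\,\lVert x - \bar{x}^{k}\rVert^{2},
\qquad
\bar{x}^{k+1} = \sum_{s} p_s\, x_s^{k+1},
\qquad
w_s^{k+1} = w_s^{k} + \rho\,\bigl(x_s^{k+1} - \bar{x}^{k+1}\bigr).
\]

The adaptive strategy proposed in the thesis concerns how rho is chosen and updated across iterations, and the proposed linearisation would replace the quadratic proximal term; the exact rules are those of the thesis and are not reproduced here.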
Abstract:
Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years, and is still investigated nowadays. The goal of Pitch Estimation is to find the pitch, or fundamental frequency, of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation of real instruments is a very hard task to address. Each instrument has its own physical characteristics, which are reflected in different spectral characteristics. Furthermore, the recording conditions can vary from studio to studio, and background noises must be considered. This dissertation presents a novel approach to the problem of Pitch Estimation, using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help us with the codification of the problem, we built a highly flexible CGP Toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the one known as 1 + λ, and we can choose the value for λ. The toolbox is very simple to use. Settings such as the mutation probability, number of runs and number of generations are configurable. The cartesian representation of CGP can take multiple forms and is able to encode function parameters. It is prepared to handle different types of fitness functions, minimization of f(x) and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each of the classifiers: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). The F-measure was used as the fitness function. Signals with the same pitch as the classifier that were correctly identified by the classifier count as true positives. Signals with the same pitch as the classifier that were not identified by the classifier count as false negatives. Signals with a different pitch from the classifier that were not identified by the classifier count as true negatives. Signals with a different pitch from the classifier that were identified by the classifier count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth and square waves. Our function set is basically composed of filtering operations on vectors and of arithmetic operations with constants and vectors. All the classifiers correctly identified true positive signals and did not identify true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach, the obtained results were very promising, but could be improved. We made slight changes to our approach, and the number of false positives was reduced by 33% compared to the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
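For reference, the F-measure used as the fitness function combines precision and recall over the true-positive (TP), false-positive (FP) and false-negative (FN) counts defined above in the usual way:

\[
P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F = \frac{2\,P\,R}{P + R}.
\]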
Abstract:
Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), where Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, which includes multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at a second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem using an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Using case studies taken from renewable resource and fossil-fuel based applications in process systems engineering, it can be seen that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
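As background, the scenario-based (extensive-form) two-stage stochastic program that these decomposition schemes target has the generic shape below (notation illustrative: x are first-stage decisions, y_s are second-stage recourse decisions for scenario s with probability p_s):

\[
\min_{x,\; y_s} \;\; c^{\top} x \;+\; \sum_{s \in S} p_s\, q_s^{\top} y_s
\quad \text{s.t.} \quad
A x \le b, \qquad T_s x + W_s y_s \le h_s \ \ \forall s \in S, \qquad x \in X,\; y_s \in Y_s,
\]

where integrality requirements on x (and possibly y_s) make the problem an MILP, and nonlinear scenario constraints lead to the MINLP case. It is the block structure across scenarios, coupled only through x, that BD, DWD and Lagrangian schemes exploit.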
Abstract:
String searching within a large corpus of data is an important component of digital forensic (DF) analysis techniques such as file carving. The continuing increase in capacity of consumer storage devices requires corresponding improvements to the performance of string searching techniques. As string searching is a trivially parallelisable problem, GPGPU approaches are a natural fit – but previous studies have found that local storage presents an insurmountable performance bottleneck. We show that this need not be the case with modern hardware, and demonstrate substantial performance improvements from the use of single and multiple GPUs when searching for strings within a typical forensic disk image.
Abstract:
Remote sensing is a promising approach for above ground biomass estimation, as forest parameters can be obtained indirectly. Analysis in space and time is quite straightforward, owing to the flexibility of the method in determining forest crown parameters with remote sensing. It can be used, for example, to evaluate and monitor the development of a forest area over time and the impact of disturbances, such as silvicultural practices or deforestation. Vegetation indices, which condense data in a quantitative numeric manner, have been used to estimate several forest parameters, such as volume, basal area and above ground biomass. The objective of this study was the development of allometric functions to estimate above ground biomass using vegetation indices as independent variables. The vegetation indices used were the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Simple Ratio (SR) and Soil-Adjusted Vegetation Index (SAVI). QuickBird satellite data, with 0.70 m spatial resolution, were orthorectified and geometrically and atmospherically corrected, and the digital numbers were converted to top-of-atmosphere (ToA) reflectance. Forest inventory data and published allometric functions at tree level were used to estimate above ground biomass per plot. Linear functions were fitted for the monospecies and multispecies stands of two evergreen oaks (Quercus suber and Quercus rotundifolia) in multiple-use systems (montados). The allometric above ground biomass functions were fitted considering the mean and the median of each vegetation index per grid as the independent variable. Species composition, as a dummy variable, was also considered as an independent variable. The linear functions with the best performance are those with mean NDVI or mean SR as the independent variable. Noteworthy is that the two best functions for monospecies cork oak stands have median NDVI or median SR as the independent variable. When species composition dummy variables are included in the function (with stepwise regression), the best model has median NDVI as the independent variable. The vegetation indices with the worst model performance were EVI and SAVI.
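For reference, these indices are computed from the red, near-infrared (NIR) and blue reflectances in the standard way (the SAVI soil factor L and the EVI coefficients shown are the commonly used values, which may differ from those adopted in the study), and the fitted allometric functions take a simple linear form with coefficients estimated from the inventory data:

\[
\mathrm{NDVI} = \frac{\rho_{NIR} - \rho_{Red}}{\rho_{NIR} + \rho_{Red}}, \quad
\mathrm{SR} = \frac{\rho_{NIR}}{\rho_{Red}}, \quad
\mathrm{SAVI} = (1+L)\,\frac{\rho_{NIR} - \rho_{Red}}{\rho_{NIR} + \rho_{Red} + L},\ L \approx 0.5, \quad
\mathrm{EVI} = 2.5\,\frac{\rho_{NIR} - \rho_{Red}}{\rho_{NIR} + 6\rho_{Red} - 7.5\rho_{Blue} + 1},
\]

\[
\widehat{\mathrm{AGB}} \;=\; \beta_0 + \beta_1\,\mathrm{VI} \;(+\; \beta_2\, d_{species}),
\]

where VI is the mean or median of the chosen index per grid and d_species is the optional species-composition dummy variable.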
Abstract:
Introduction: A higher frequency of sleep and breathing disorders in Multiple System Atrophy (MSA) populations is documented in the literature. The analysis of disease progression and prognosis in patients with sleep and breathing disorders could shed light on the specific neuropathology and pathophysiology of MSA. Objective: To characterize sleep disorders and their longitudinal modifications during the disease course in MSA patients, and to determine their prognostic value. Methods: This is a retrospective and prospective cohort study including 182 MSA patients (58.8% males). Type of onset was defined by the first reported motor or autonomic symptom/sign related to MSA. The occurrence of symptoms/signs and milestones of disease progression and their latency were collected. REM sleep behaviour disorder (RBD) and stridor were video-polysomnography (VPSG)-confirmed. VPSG recordings were analysed in a standardized fashion during the disease course. Survival data were based on time to death from the first symptom of disease. Results: Isolated RBD represented the first MSA symptom in 30% of patients, preceding disease onset according to international criteria by a median of 3 (1–5) years. Patients developing early stridor or presenting with RBD at disease onset showed a more rapid and severe disease progression. These features had independent negative prognostic value for survival. Sleep architecture was characterized by peculiar features which could represent negative markers in MSA prognosis. Patients with stridor treated with tracheostomy showed a reduced risk of death. Conclusions: This is one of the first studies focusing on the longitudinal progression of sleep in MSA. Sleep disorders are key features of the disease, playing a role in presentation, prognosis and progression. In our MSA cohort, RBD represented the most frequent mode of disease presentation. Moreover, some specific clinical and instrumental sleep features could represent a hallmark of MSA and could be involved in prognosis and, in particular, in sudden death and death during sleep.
Abstract:
The cerebellum is an important site for cortical demyelination in multiple sclerosis, but the functional significance of this finding is not fully understood. Our aim was to evaluate the clinical and cognitive impact of cerebellar grey-matter pathology in multiple sclerosis patients. Forty-two relapsing-remitting multiple sclerosis patients and 30 controls underwent clinical assessment including the Multiple Sclerosis Functional Composite, the Expanded Disability Status Scale (EDSS) and the cerebellar functional system (FS) score, and cognitive evaluation, including the Paced Auditory Serial Addition Test (PASAT) and the Symbol-Digit Modalities Test (SDMT). Magnetic resonance imaging was performed with a 3T scanner, and the variables of interest were: brain white-matter and cortical lesion load, cerebellar intracortical and leukocortical lesion volumes, and brain cortical and cerebellar white-matter and grey-matter volumes. After multivariate analysis, a high burden of cerebellar intracortical lesions was the only predictor of the EDSS (p<0.001), cerebellar FS (p = 0.002), arm function (p = 0.049), and leg function (p<0.001). Patients with a high burden of cerebellar leukocortical lesions had lower PASAT scores (p = 0.013), while patients with greater volumes of cerebellar intracortical lesions had worse SDMT scores (p = 0.015). Cerebellar grey-matter pathology is widely present and contributes to clinical dysfunction in relapsing-remitting multiple sclerosis patients, independently of brain grey-matter damage.
Abstract:
The Fourier transform infrared (FT-IR) signature of dry samples of DNA and DNA-polypeptide complexes, as studied by IR microspectroscopy using a diamond attenuated total reflection (ATR) objective, has revealed important discriminatory characteristics relative to the PO2(-) vibrational stretchings. However, DNA IR markers that provide information on the sample's richness in hydrogen bonds have not been resolved in the spectral profiles obtained with this objective. Here we investigated the performance of an all-reflecting objective (ARO) for analysis of the FT-IR signal of hydrogen bonds in DNA samples differing in base richness type (salmon testis vs calf thymus). The results obtained using the ARO indicate prominent band peaks at the spectral region representative of the vibration of nitrogenous base hydrogen bonds and of NH and NH2 groups. The band areas at this spectral region differ in agreement with the DNA base richness type when using the ARO. A peak assigned to adenine was more evident in the AT-rich salmon DNA using either the ARO or the ATR objective. It is concluded that, for the discrimination of DNA IR hydrogen bond vibrations associated with varying base type proportions, the use of an ARO is recommended.