974 results for: software packages selection
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on highly efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
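As a hedged illustration of the kind of parallelism discussed above (and not the algorithm of any of the six workshop papers), the sketch below distributes candidate support counting for association rule mining across worker processes, so that only per-partition counts, not the transactions themselves, are exchanged:

```python
# Minimal count-distribution-style parallel support counting over a
# horizontally partitioned transaction database. Illustrative only.
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations
from collections import Counter

def local_counts(partition, candidates):
    """Count candidate itemsets within one local data partition."""
    counts = Counter()
    for transaction in partition:
        t = set(transaction)
        for c in candidates:
            if c <= t:                # candidate contained in transaction
                counts[c] += 1
    return counts

def global_support(partitions, candidates):
    """Each worker scans only its partition; only counts are communicated."""
    total = Counter()
    with ProcessPoolExecutor() as pool:
        for counts in pool.map(local_counts, partitions,
                               [candidates] * len(partitions)):
            total.update(counts)
    return total

if __name__ == "__main__":
    data = [[1, 2, 3], [1, 3], [2, 3], [1, 2, 3, 4]]
    partitions = [data[:2], data[2:]]
    candidates = [frozenset(c) for c in combinations({1, 2, 3, 4}, 2)]
    print(global_support(partitions, candidates))
```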
Abstract:
The Franches-Montagnes is an indigenous Swiss horse breed, with approximately 2500 foalings per year. The stud book is closed, and no introgression from other horse breeds has taken place since 1998. Since 2006, breeding values for 43 different traits (conformation, performance and coat colour) have been estimated with a best linear unbiased prediction (BLUP) multiple-trait animal model. In this study, we evaluated the genetic diversity of the breeding population, considering the years from 2003 to 2008. Only horses with at least one progeny during that time span were included. Results were obtained based on pedigree information as well as on molecular markers. A series of software packages was screened to best combine the BLUP methodology with optimal genetic contribution theory. We looked for stallions with the highest breeding values and the lowest average relationship to the dam population. Breeding with such stallions is expected to lead to a selection gain, while lowering the future increase in inbreeding within the breed.
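As a rough sketch of the stallion-ranking idea described above (an illustrative index only; actual optimal-contribution methods solve a constrained optimization problem, and all data and the penalty weight below are hypothetical):

```python
# Rank stallions by breeding value penalized for average relationship to
# the dam population. `lam` and all numbers are hypothetical.
import numpy as np

def rank_stallions(ebv, kinship_to_dams, lam=2.0):
    """ebv[i]: breeding value of stallion i.
    kinship_to_dams[i, j]: additive relationship of stallion i to dam j."""
    mean_rel = kinship_to_dams.mean(axis=1)   # average relationship per stallion
    index = ebv - lam * mean_rel              # selection gain minus inbreeding penalty
    return np.argsort(index)[::-1], index     # best stallion first

ebv = np.array([110.0, 104.0, 118.0])
A = np.array([[0.05, 0.10], [0.01, 0.02], [0.20, 0.25]])
order, idx = rank_stallions(ebv, A)
print(order, idx)
```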
Resource-allocation capabilities of commercial project management software. An experimental analysis
Abstract:
When project managers determine schedules for resource-constrained projects, they commonly use commercial project management software packages. Which resource-allocation methods are implemented in these packages is proprietary information. The resource-allocation problem is in general computationally difficult to solve to optimality. Hence, the question arises whether and how various project management software packages differ in quality with respect to their resource-allocation capabilities. None of the few existing papers on this subject uses a sizeable data set and recent versions of common software packages. We experimentally analyze the resource-allocation capabilities of Acos Plus.1, AdeptTracker Professional, CS Project Professional, Microsoft Office Project 2007, Primavera P6, Sciforma PS8, and Turbo Project Professional. Our analysis is based on 1560 instances of the precedence- and resource-constrained project scheduling problem (RCPSP). The experiment shows that using the resource-allocation feature of these packages may lead to a project duration increase of almost 115% above the best known feasible schedule. The increase gets larger with increasing resource scarcity and with an increasing number of activities. We investigate the impact of different complexity scenarios and priority rules on the project duration obtained by the software packages. We provide a decision table to support managers in selecting a software package and a priority rule.
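For context, resource allocation in such packages is widely believed to rely on priority-rule heuristics of the following kind. The sketch below is a minimal serial schedule-generation scheme (SGS) for a single renewable resource; the shortest-processing-time rule and the instance data are assumptions for illustration:

```python
# Minimal serial SGS: schedule the highest-priority eligible activity at the
# earliest precedence- and resource-feasible time. One renewable resource.
def serial_sgs(durations, demands, capacity, preds, priority):
    n = len(durations)
    horizon = sum(durations) + 1
    usage = [0] * horizon                 # resource units in use per period
    start = {}
    while len(start) < n:
        eligible = [j for j in range(n)
                    if j not in start and all(p in start for p in preds[j])]
        j = min(eligible, key=priority)   # priority rule decides the order
        t = max((start[p] + durations[p] for p in preds[j]), default=0)
        while any(usage[tau] + demands[j] > capacity
                  for tau in range(t, t + durations[j])):
            t += 1                        # shift right until resource-feasible
        start[j] = t
        for tau in range(t, t + durations[j]):
            usage[tau] += demands[j]
    makespan = max(start[j] + durations[j] for j in range(n))
    return start, makespan

durations = [3, 2, 4, 2]
demands   = [2, 3, 2, 2]
capacity  = 4
preds     = [[], [0], [0], [1, 2]]
print(serial_sgs(durations, demands, capacity, preds,
                 priority=lambda j: durations[j]))   # SPT rule
```

Note that the rule only chooses among activities whose predecessors are already scheduled, and the inner loop shifts each activity right until capacity suffices in every period it occupies.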
Abstract:
We present results of a benchmark test evaluating the resource-allocation capabilities of the project management software packages Acos Plus.1 8.2, CA SuperProject 5.0a, CS Project Professional 3.0, MS Project 2000, and Scitor Project Scheduler 8.0.1. The tests are based on 1560 instances of precedence- and resource-constrained project scheduling problems. For different complexity scenarios, we analyze the deviation of the makespan obtained by the software packages from the best feasible makespan known. Among the tested software packages, Acos Plus.1 and Project Scheduler show the best resource-allocation performance. Moreover, our numerical analysis reveals a considerable performance gap between the implemented methods and state-of-the-art project scheduling algorithms, especially for large-sized problems. Thus, there is still a significant potential for improving solutions to resource allocation problems in practice.
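The quality measure used in such benchmarks can be stated compactly; the numbers in this sketch are hypothetical:

```python
# Percentage deviation of a package's makespan from the best known
# feasible makespan of the same instance (illustrative values).
def deviation_pct(package_makespan, best_known):
    return 100.0 * (package_makespan - best_known) / best_known

print(deviation_pct(48, 43))   # about 11.6% above the best known schedule
```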
Abstract:
Most commercial project management software packages include planning methods to devise schedules for resource-constrained projects. As it is proprietary information of the software vendors which planning methods are implemented, the question arises how the software packages differ in quality with respect to their resource-allocation capabilities. We experimentally evaluate the resource-allocation capabilities of eight recent software packages by using 1,560 instances with 30, 60, and 120 activities of the well-known PSPLIB library. In some of the analyzed packages, the user may influence the resource allocation by means of multi-level priority rules, whereas in other packages, only a few options can be chosen. We study the impact of various complexity parameters and priority rules on the project duration obtained by the software packages. The results indicate that the resource-allocation capabilities of these packages differ significantly. In general, the relative gap between the packages gets larger with increasing resource scarcity and with an increasing number of activities. Moreover, the selection of the priority rule has a considerable impact on the project duration. Surprisingly, when selecting a priority rule in the packages where this is possible, both the mean and the variance of the project duration are in general worse than for the packages which do not offer the selection of a priority rule.
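A multi-level priority rule can be sketched as a lexicographic sort key; the particular combination below (minimum slack, then most successors, then lowest activity number) is an assumption for illustration:

```python
# Multi-level priority rule: later levels only break ties left by earlier ones.
def multilevel_key(j, slack, n_successors):
    return (slack[j], -n_successors[j], j)

slack = {1: 4, 2: 0, 3: 0, 4: 2}
n_successors = {1: 1, 2: 3, 3: 1, 4: 0}
order = sorted(slack, key=lambda j: multilevel_key(j, slack, n_successors))
print(order)   # [2, 3, 4, 1]: tie on slack between 2 and 3 broken by successors
```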
Abstract:
Navigation of deep space probes is most commonly operated using the spacecraft Doppler tracking technique. Orbital parameters are determined from a series of repeated measurements of the frequency shift of a microwave carrier over a given integration time. Currently, both ESA and NASA operate antennas at several sites around the world to ensure the tracking of deep space probes. Only a small number of software packages are nowadays used to process Doppler observations. The Astronomical Institute of the University of Bern (AIUB) has recently started the development of Doppler data processing capabilities within the Bernese GNSS Software. This software has been extensively used for precise orbit determination of Earth-orbiting satellites using GPS data collected by on-board receivers and for the subsequent determination of the Earth gravity field. In this paper, we present the current status of the Doppler data modeling and orbit determination capabilities in the Bernese GNSS Software, using GRAIL data. In particular, we focus on the implemented orbit determination procedure used for the combined analysis of Doppler and intersatellite Ka-band data. We show that even at this early stage of development we can achieve an accuracy of a few mHz on two-way S-band Doppler observations and of 2 µm/s on KBRR data from the GRAIL primary mission phase.
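For orientation, the first-order model of the two-way Doppler observable is sketched below; this is the standard textbook relation, not necessarily the exact formulation implemented in the Bernese GNSS Software:

```latex
% Two-way Doppler observable to first order in v/c: the received carrier f_obs
% is shifted in proportion to the line-of-sight range-rate \dot{\rho}.
\[
  f_{\mathrm{obs}} \approx f_0 \left( 1 - \frac{2\dot{\rho}}{c} \right),
  \qquad
  \Delta f = f_{\mathrm{obs}} - f_0 \approx -\frac{2 f_0\, \dot{\rho}}{c}
\]
```

At an S-band carrier of roughly 2.2 GHz, a Doppler accuracy of a few mHz thus corresponds to a range-rate accuracy of about c|Δf|/(2f₀) ≈ 0.2 mm/s over the integration time.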
Abstract:
The Imbrie and Kipp transfer function method (IKM) and the modern analog technique (MAT) are accepted tools for quantitative paleoenvironmental reconstructions. However, no uncomplicated, flexible software has been available to apply these methods on modern computer systems. For this reason, the software packages PaleoToolBox, MacTransfer, WinTransfer, MacMAT, and PanPlot have been developed. The PaleoToolBox package provides a flexible tool for the preprocessing of microfossil reference and downcore data as well as hydrographic reference parameters. It includes procedures to randomize the raw databases; to switch specific species in or out of the total species list; to establish individual ranking systems and apply them to the reference and downcore databases; and to convert the prepared databases into the file formats of IKM and MAT software for the estimation of paleohydrographic parameters.
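To make the MAT estimation step concrete, here is a minimal sketch; the distance measure, weighting scheme, and k are typical choices assumed for illustration, not prescriptions of the packages above:

```python
# Modern analog technique: compare a downcore assemblage with modern reference
# assemblages via the squared chord distance and average the hydrographic
# parameter of the k best analogs. Data are synthetic.
import numpy as np

def squared_chord(a, b):
    return np.sum((np.sqrt(a) - np.sqrt(b)) ** 2)

def mat_estimate(sample, reference_assemblages, reference_sst, k=5):
    d = np.array([squared_chord(sample, r) for r in reference_assemblages])
    best = np.argsort(d)[:k]                 # k most similar modern samples
    w = 1.0 / np.maximum(d[best], 1e-12)     # inverse-distance weights
    return np.sum(w * reference_sst[best]) / np.sum(w)

rng = np.random.default_rng(0)
ref = rng.dirichlet(np.ones(10), size=50)    # 50 modern assemblages, 10 taxa
sst = rng.uniform(2, 25, size=50)            # e.g. summer sea-surface temperature
core_sample = rng.dirichlet(np.ones(10))
print(mat_estimate(core_sample, ref, sst))
```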
Abstract:
Species extinctions and the deterioration of other biodiversity features worldwide have led to the adoption of systematic conservation planning in many regions of the world. As a consequence, various software tools for conservation planning have been developed over the past twenty years. These tools implement algorithms designed to identify conservation area networks for the representation and persistence of biodiversity features. Budgetary, ethical, and other sociopolitical constraints dictate that the prioritized sites represent biodiversity with minimum impact on human interests. Planning tools are typically also used to satisfy these criteria. This chapter reviews both the concepts and the technical choices that underlie the development of these tools. Conservation planning problems can be formulated as optimization problems, and we evaluate the suitability of different algorithms for their solution. Finally, we also review some key issues associated with the use of these tools, such as computational efficiency, the effectiveness of taxa and abiotic parameters as surrogates for biodiversity, and the process of setting explicit targets of representation for biodiversity surrogates.
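Since many of these planning problems reduce to variants of minimum set cover, a greedy sketch conveys the core idea; real tools such as Marxan optimize richer objectives (cost, boundary length) with metaheuristics rather than this bare rule, and the data here are hypothetical:

```python
# Greedy reserve selection: repeatedly add the site covering the most
# still-unrepresented features until every feature is represented once.
def greedy_reserve(sites, features):
    """sites: {site_id: set of features present}; features: all features."""
    uncovered, chosen = set(features), []
    while uncovered:
        best = max(sites, key=lambda s: len(sites[s] & uncovered))
        if not sites[best] & uncovered:
            raise ValueError("some features occur in no site")
        chosen.append(best)
        uncovered -= sites[best]
    return chosen

sites = {"A": {1, 2}, "B": {2, 3, 4}, "C": {4, 5}, "D": {1, 5}}
print(greedy_reserve(sites, {1, 2, 3, 4, 5}))   # ['B', 'D']
```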
Abstract:
Suboptimal maternal nutrition during gestation results in the establishment of long-term phenotypic changes and an increased disease risk in the offspring. To elucidate how such environmental sensitivity results in physiological outcomes, the molecular characterisation of these offspring has become the focus of many studies. However, the likely modification of key cellular processes such as metabolism in response to maternal undernutrition raises the question of whether the genes typically used as reference constants in gene expression studies are suitable controls. Using a mouse model of maternal protein undernutrition, we have investigated the stability of seven commonly used reference genes (18s, Hprt1, Pgk1, Ppib, Sdha, Tbp and Tuba1) in a variety of offspring tissues including liver, kidney, heart, retro-peritoneal and inter-scapular fat, extra-embryonic placenta and yolk sac, as well as in the preimplantation blastocyst and blastocyst-derived embryonic stem cells. We find that although the selected reference genes are all highly stable within this system, they show tissue-, treatment- and sex-specific variation. Furthermore, software-based selection approaches rank reference genes differently and do not always identify genes which differ between conditions. Therefore, we recommend that reference gene selection for gene expression studies be thoroughly validated for each tissue of interest. © 2011 Elsevier Inc.
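As a toy illustration of what such software-based selection approaches compute (a deliberately simplified stability measure, not the actual geNorm or NormFinder algorithms; the expression data are synthetic):

```python
# Rank candidate reference genes by the standard deviation of their
# log-expression across samples: lower variation = more stable control.
import numpy as np

def rank_by_stability(expression, genes):
    """expression: samples x genes matrix of log2 relative quantities."""
    sd = expression.std(axis=0, ddof=1)
    order = np.argsort(sd)
    return [(genes[i], round(float(sd[i]), 3)) for i in order]

rng = np.random.default_rng(1)
genes = ["18s", "Hprt1", "Pgk1", "Ppib", "Sdha", "Tbp", "Tuba1"]
expr = rng.normal(0, [0.2, 0.5, 0.3, 0.4, 0.25, 0.35, 0.6], size=(12, 7))
print(rank_by_stability(expr, genes))
```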
Abstract:
OpenLab ESEV is a project of the School of Education of the Polytechnic Institute of Viseu (ESEV), Portugal, that aims to promote, foster and support the use of Free/Libre Software and Open Source Software, Open Educational Resources, Free Culture, free file formats and more flexible copyright licenses for creative and educational purposes in ESEV's domains of activity (education, arts, media). Most of the OpenLab ESEV activities are related to the teacher education and the arts and multimedia programs, with a special focus on the latter. In this paper, the project and some of its activities are presented, starting with its origins and conceptual framework. The overview is intended as background for the examination of the use of Free/Libre Software and Free Culture in educational settings, especially at the higher education level, and for creative purposes. The activities developed with students and professionals generated pipelines and workflows implemented for different creative purposes, software packages used for different tasks, and choices of file formats and copyright licenses. Finished and ongoing multimedia and arts projects are presented as real case scenarios.
Abstract:
Ecological niche modelling combines species occurrence points with environmental raster layers in order to obtain models for describing the probabilistic distribution of species. The process to generate an ecological niche model is complex. It requires dealing with a large amount of data and using different software packages for data conversion, model generation, and different types of processing and analyses, among other functionalities. A software platform that integrates all requirements under a single and seamless interface would be very helpful for users. Furthermore, since biodiversity modelling is constantly evolving, new requirements are constantly being added in terms of functions, algorithms and data formats. This evolution must be accompanied by any software intended to be used in this area. In this scenario, a Service-Oriented Architecture (SOA) is an appropriate choice for designing such systems. According to SOA best practices and methodologies, the design of a reference business process must be performed prior to the architecture definition. The purpose is to understand the complexities of the process (business process in this context refers to the ecological niche modelling problem) and to design an architecture able to offer a comprehensive solution, called a reference architecture, that can be further detailed when implementing specific systems. This paper presents a reference business process for ecological niche modelling, as part of a larger work focused on the definition of a reference architecture based on SOA concepts that will be used to evolve the openModeller software package for species modelling. The basic steps that are performed while developing a model are described, highlighting important aspects, based on the knowledge of modelling experts. In order to illustrate the steps defined for the process, an experiment was developed, modelling the distribution of Ouratea spectabilis (Mart.) Engl. (Ochnaceae) using openModeller. As a consequence of the knowledge gained with this work, many desirable improvements to the modelling software packages have been identified and are presented. Also, a discussion on the potential for large-scale experimentation in ecological niche modelling is provided, highlighting opportunities for research. The results obtained are very important for those involved in the development of modelling tools and systems, for requirement analysis and to provide insight on new features and trends for this category of systems. They can also be very helpful for beginners in modelling research, who can use the process and the experiment example as a guide to this complex activity. (c) 2008 Elsevier B.V. All rights reserved.
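To make the basic model-generation steps tangible, the sketch below samples environmental layers at occurrence points, fits a simple BIOCLIM-style climate envelope, and projects it back onto the raster stack. It stands in for the workflow discussed above and does not use the openModeller API itself; all data are synthetic:

```python
# Climate-envelope (BIOCLIM-style) niche model: suitable cells are those
# whose environment lies within per-layer quantile bounds of the occurrences.
import numpy as np

def fit_envelope(env_at_points, lo_q=0.05, hi_q=0.95):
    lo = np.quantile(env_at_points, lo_q, axis=0)   # per-layer lower bound
    hi = np.quantile(env_at_points, hi_q, axis=0)   # per-layer upper bound
    return lo, hi

def project(env_stack, lo, hi):
    """env_stack: layers x rows x cols; returns 1 inside the envelope."""
    inside = (env_stack >= lo[:, None, None]) & (env_stack <= hi[:, None, None])
    return inside.all(axis=0).astype(np.uint8)

rng = np.random.default_rng(2)
stack = rng.normal(size=(3, 40, 40))                 # 3 environmental layers
rows, cols = rng.integers(0, 40, 20), rng.integers(0, 40, 20)
occurrences = stack[:, rows, cols].T                 # 20 presence points x 3 layers
lo, hi = fit_envelope(occurrences)
print(project(stack, lo, hi).sum(), "suitable cells")
```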
Abstract:
The branch of engineering responsible for structural design is continually searching for the solution that best satisfies several simultaneous parameters, such as aesthetics, cost, quality, and weight, among others. In practice, one cannot claim that the best design was actually executed, since designs are produced mainly on the basis of the designer's experience, without exhausting all possible hypotheses. It is in this sense that optimization processes become necessary in the field of structural design. From a given objective, such as cost, it is possible to obtain the design that best meets this parameter. Some studies exist in this area, but further research is still needed. One advancing topic in structural optimization is the design of columns according to ABNT NBR 6118:2014 covering a wider range of possible geometries. The most suitable optimization method for this type of problem, among the many currently available, must also be studied. Thus, the present work provides the conceptual grounding on column design and optimization methods in its literature review, indicating the references and methods used in the optimized column design software, programmed with the aid of MATLAB and its packages, using deterministic optimization methods. This research was carried out to obtain the degree of Master in Civil Engineering at the Universidade Federal do Espírito Santo.
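A hedged sketch of the kind of deterministic sizing problem described above is given below in Python (the dissertation itself uses MATLAB): minimize the material cost of a rectangular column cross-section subject to a simplified axial-capacity constraint. The capacity model, loads, and cost coefficients are placeholder assumptions, not the NBR 6118:2014 design checks:

```python
# Deterministic sizing of a rectangular column section with SLSQP.
from scipy.optimize import minimize

N_d = 1500e3                        # design axial load [N] (assumed)
fcd, fyd = 25e6 / 1.4, 500e6 / 1.15 # design strengths [Pa] (assumed C25 / CA-50)
rho = 0.02                          # fixed reinforcement ratio (assumed)

def cost(x):                        # relative material cost per unit length
    b, h = x
    return b * h * (1.0 + 80.0 * rho)

def capacity(x):                    # simplified squashing resistance, must be >= 0
    b, h = x
    Ac = b * h
    return 0.85 * fcd * Ac + rho * Ac * fyd - N_d

res = minimize(cost, x0=[0.30, 0.50],
               method="SLSQP",
               bounds=[(0.19, 1.0), (0.19, 1.0)],   # NBR-style 19 cm minimum side
               constraints=[{"type": "ineq", "fun": capacity}])
print(res.x, res.fun)
```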
Abstract:
Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge in programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.
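As an illustration of the computation such a dialog ultimately performs (a one-step unrotated principal-component-style extraction on synthetic data, not the paper's R implementation, which for ordinal data would start from a polychoric rather than a Pearson correlation matrix):

```python
# Unrotated factor loadings from an eigendecomposition of the correlation
# matrix: loading_j = sqrt(lambda_j) * v_j for the top eigenpairs.
import numpy as np

def efa_loadings(data, n_factors):
    R = np.corrcoef(data, rowvar=False)            # correlation matrix
    eigval, eigvec = np.linalg.eigh(R)
    idx = np.argsort(eigval)[::-1][:n_factors]     # largest eigenvalues first
    return eigvec[:, idx] * np.sqrt(eigval[idx])

rng = np.random.default_rng(3)
f = rng.normal(size=(200, 2))                      # two latent factors
data = f @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(200, 6))
print(np.round(efa_loadings(data, 2), 2))
```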
Abstract:
Conference: 2nd Experiment at International Conference (Exp at), Univ Coimbra, Coimbra, Portugal, Sep 18-20, 2013
Abstract:
In nonlinear optimization, penalty and barrier methods are commonly used to solve constrained problems. Several penalty/barrier methods exist, and they are applied in many areas, from engineering to economics, as well as biology, chemistry, and physics, among others. In these areas, optimization problems often arise in which the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known. In this work, some penalty/barrier functions are tested and compared, using derivative-free methods, namely direct search, in the inner iterations. This work is part of a larger project involving the development of an Application Programming Interface (API) that implements several optimization methods, to be used in applications that need to solve constrained and/or unconstrained nonlinear optimization problems. Besides its use in applied mathematics research, it is also intended for use in engineering software packages.
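A minimal sketch of the combination described above: a quadratic penalty function whose unconstrained subproblems are solved by a derivative-free direct-search method (Nelder-Mead here). The test problem, penalty form, and update schedule are illustrative assumptions, not the API under development:

```python
# Quadratic penalty method with a derivative-free inner solver.
import numpy as np
from scipy.optimize import minimize

def f(x):                          # non-smooth objective (|x1| term)
    return (x[0] - 2) ** 2 + abs(x[1])

def g(x):                          # inequality constraint g(x) <= 0
    return x[0] + x[1] - 1

def penalized(x, mu):              # penalize constraint violation quadratically
    return f(x) + mu * max(0.0, g(x)) ** 2

x, mu = np.array([0.0, 0.0]), 1.0
for _ in range(8):                 # increase the penalty weight geometrically
    res = minimize(penalized, x, args=(mu,), method="Nelder-Mead")
    x, mu = res.x, mu * 10.0
print(x, f(x), g(x))               # converges near the constrained optimum
```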