873 results for Feature evaluation and selection


Relevance: 100.00%

Abstract:

Recurrent selection is one of the most efficient methods for breeding outcrossing species, especially when S1 progenies are used. Since summer squash generally shows no loss of vigor under inbreeding, this method may be suitable for breeding the species. In this work, experiments were carried out to evaluate the efficiency of recurrent selection in the summer squash 'Piramoita'. Three cycles of recurrent selection were performed starting from the cultivar Piramoita (population P0), with evaluation and selection of S1 progenies. New populations were obtained by recombining plants from the selected progenies, using remnant seeds. In the first cycle 74 progenies were evaluated and 14 selected; in the second, 60 were evaluated and 10 selected; and in the third cycle 77 were evaluated and 12 selected. Improved populations were obtained after one (PI), two (PII) and three (PIII) cycles of recurrent selection. The four populations (P0, PI, PII and PIII) were evaluated in a randomized complete block design with eight replicates and five plants per plot. In all experiments the following traits were evaluated: total and marketable fruit number per plant, percentage of marketable fruits, total and marketable fruit yield (mass) per plant, and mean marketable fruit mass. Significant linear yield gains were observed across the selection cycles. For total and marketable fruit number and for total and marketable yield (mass), the PIII population showed gains over the initial population of 32, 63, 24 and 57%, respectively. Mean marketable fruit mass was not affected by recurrent selection. It is concluded that recurrent selection was efficient for improving the summer squash 'Piramoita'…
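
Linear gains of the kind reported here are typically quantified by regressing population trait means on the cycle number. A minimal sketch of that calculation; the means below are made-up placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Illustrative linear-trend test across selection cycles.
# The trait means are placeholders, NOT the study's data.
cycles = np.array([0, 1, 2, 3])                    # P0, PI, PII, PIII
total_fruits = np.array([10.0, 11.2, 12.1, 13.2])  # hypothetical means per plant

fit = stats.linregress(cycles, total_fruits)
print(f"gain per cycle: {fit.slope:.2f} fruits/plant (p = {fit.pvalue:.3f})")
print(f"relative gain after 3 cycles: {100 * 3 * fit.slope / total_fruits[0]:.0f}%")
```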

Relevance: 100.00%

Abstract:

Internationally, there are validated instruments used on a large scale to assess the information literacy of individuals in different contexts; their results serve as a diagnostic for planning and implementing information literacy activities. In Brazil there are no validated information literacy assessment instruments. A content analysis was carried out of four international instruments aimed at higher-education institutions, duly recognized and validated in the literature of the field. The results concerning the information literacy skills addressed by these instruments are presented. It was found that the instruments assess skills focused on identifying the terms of the information need and on preparing and constructing search strategies (parameter two of the Association of College and Research Libraries); differentiating information sources (parameter one); evaluating and selecting information sources and selecting information relevant to the searched topic (parameter three); and, finally, ethical issues related to the use of information (parameter five). Parameter four, which concerns the communication of information, was not addressed by the instruments. It was concluded that those responsible for preparing the instruments are concerned more with the traditional technical aspects of user training and information literacy at the expense of ethical and aesthetic aspects.

Relevance: 100.00%

Abstract:

This thesis analyzes the factors affecting performance evaluation in positron emission tomography (PET) imaging, focusing on preclinical scanners. It explores the possibilities of standard evaluation protocols in three respects: their use as tools to validate Monte Carlo simulation programs, their usefulness as a method for comparing scanners, and their validity for studying the effect of alternative radioisotopes on image quality. Initially, we study performance-evaluation methods oriented to validating PET simulations. For this we present the GAMOS program as a simulation framework and show the results of its validation based on the NEMA NU 4-2008 standard for preclinical PET scanners. This was accomplished by comparing simulated results against experimental acquisitions on the ClearPET scanner, describing the methodology for the evaluation and selection of the NEMA parameters. This section also covers the contributions developed in GAMOS for PET applications, such as the inclusion of tools for image reconstruction. Furthermore, the NEMA evaluation of the ClearPET scanner is used to compare its performance against another preclinical scanner, the rPET-1 system. This is the first complete NEMA NU 4 characterization of both systems; at the same time, we analyze how the significant design differences between the two systems, especially the axial size of the field of view and the detector configuration, affect their performance characteristics. 68Ga is one of the unconventional radioisotopes in PET imaging whose use is currently increasing significantly; however, it has the disadvantage of a long positron range (the distance traveled by the emitted positron before annihilating with an electron). Besides positron range, the emission of additional gamma photons is another physical property of PET radioisotopes that can affect the reconstructed image quality, as happens with the isotope 48V. In this thesis we assess these effects through NEMA studies of spatial resolution and image quality. Finally, we analyze the scope of the NEMA NU 4-2008 protocol when used for this purpose, adapting it and proposing possible modifications.
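
NEMA NU 4-2008 characterizes spatial resolution as the full width at half maximum (FWHM) of a point-source response profile, obtained by interpolating at the half-maximum crossings. A minimal sketch of that measurement follows; the synthetic Gaussian profile stands in for real data, and this is not the thesis's code:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a 1-D point-source profile,
    using linear interpolation at the half-maximum crossings
    (in the spirit of the NEMA NU 4-2008 procedure)."""
    half = y.max() / 2.0
    idx = np.where(y >= half)[0]
    l, r = idx[0], idx[-1]
    # left crossing: y rises through half between l-1 and l
    xl = np.interp(half, [y[l - 1], y[l]], [x[l - 1], x[l]])
    # right crossing: y falls through half between r and r+1
    xr = np.interp(half, [y[r + 1], y[r]], [x[r + 1], x[r]])
    return xr - xl

# illustrative use on a synthetic Gaussian profile (sigma = 0.8 mm)
x = np.linspace(-5, 5, 201)
y = np.exp(-x**2 / (2 * 0.8**2))
print(f"FWHM = {fwhm(x, y):.2f} mm")   # ~1.88 mm = 2.355 * sigma
```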

Relevance: 100.00%

Abstract:

The economic efficiency of dairy cattle production depends on using animals that simultaneously perform well in production, reproduction, health and longevity. The selection index is therefore an important tool for increasing profitability in this system, since it allows breeding animals to be selected for several traits simultaneously, taking into account the relationships among the traits as well as their economic relevance. With the recent availability of genomic data, it has also become possible to expand the scope and accuracy of selection indices by increasing the number and quality of the sources of information considered. In this context, two studies were developed. In the first, the objective was to estimate genetic parameters and breeding values (VG) for traits related to milk production and quality, including genomic information in the genetic evaluation. Records of age at first calving (IPP), milk yield (PROD), fat content (GOR), protein (PROT), lactose, casein, somatic cell score (ECS) and fatty acid profile from 4,218 cows were used, together with the genotypes of 755 cows for 57,368 single nucleotide polymorphisms (SNP). Variance components and VG were obtained with an animal mixed model including the effects of contemporary group, lactation order, days in milk, and the additive genetic, permanent environmental and residual effects. Two approaches were developed: a traditional one, in which the relationship matrix is based on pedigree, and a genomic one, in which this matrix is built by combining pedigree and SNP information. Heritabilities ranged from 0.07 to 0.39. Genetic correlations between PROD and milk components ranged from -0.45 to -0.13, while high positive correlations were estimated between GOR and the fatty acids. The genomic approach did not change the estimates of genetic parameters; however, the accuracy of the VG increased by 1.5% to 6.8%, except for IPP, for which there was a reduction of 1.9%. In the second study, the objective was to incorporate genomic information into the development of economic selection indices. Here, the VG for PROD, GOR, PROT, unsaturated fatty acid content (INSAT), ECS and productive life were combined into selection indices weighted by economic values estimated under three payment scenarios: payment exclusively for milk volume (PAG1); for volume and milk components (PAG2); and for volume and milk components including INSAT (PAG3). These VG were predicted from the phenotypes of 4,293 cows and the genotypes of 755 animals in a multiple-trait model under the traditional and genomic approaches. Using genomic information affected the variance components, the VG and the response to selection. However, rank correlations between the two approaches were high in all three scenarios, with values between 0.91 and 0.99. Differences were mainly observed between PAG1 and the other scenarios, with correlations between 0.67 and 0.88. The relative importance of the traits and the profile of the top-ranked animals were sensitive to the payment scenario considered. Considering the economic values of the traits in genetic evaluation and selection decisions thus proved essential.
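
A selection index of the kind described in the second study combines each animal's breeding values into a single score using economic weights. A minimal sketch with hypothetical weights and breeding values (the trait names follow the abstract; the numbers do not come from the study):

```python
import numpy as np

# Hypothetical economic values per unit of each trait's breeding value
# under one payment scenario; illustrative numbers, not the study's estimates.
traits = ["PROD", "GOR", "PROT", "INSAT", "ECS", "productive life"]
econ_values = np.array([0.15, 4.0, 6.0, 1.5, -20.0, 30.0])

# Breeding values for three candidate animals (rows), one column per trait
vg = np.array([
    [500.0,  8.0, 6.0, 1.0, -0.2, 0.5],
    [300.0, 12.0, 9.0, 0.5,  0.1, 0.2],
    [650.0,  5.0, 4.0, 1.5, -0.4, 0.8],
])

index = vg @ econ_values           # I = sum_i w_i * VG_i per animal
ranking = np.argsort(index)[::-1]  # best candidates first
for k in ranking:
    print(f"animal {k}: index = {index[k]:.1f}")
```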

Relevance: 100.00%

Abstract:

Purpose – The purpose of this paper is to measure the performance of commercial virtual learning environment (VLE) systems, which helps decision makers to select the appropriate system for their institutions. Design/methodology/approach – This paper develops an integrated multiple criteria decision making approach, combining the analytic hierarchy process (AHP) and quality function deployment (QFD), to evaluate and select the best system. The evaluation criteria are derived from the requirements of those who use the system, and a case study demonstrates how the integrated approach works. Findings – The major advantage of the integrated approach is that the evaluation criteria are of interest to the stakeholders; this ensures that the selected system achieves the requirements and best satisfies the stakeholders. Another advantage is that the approach guarantees consistent and reliable benchmarking. The case study showed that the VLE system already in use at the university performed best, so the university should continue to run it to support and facilitate both teaching and learning. Originality/value – To the authors' knowledge, no previous study measures the performance of VLE systems, so decision makers may have difficulty with system evaluation and selection for their institutions.
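
In AHP, criteria weights are derived from a pairwise comparison matrix of stakeholder judgments. A minimal sketch of the standard principal-eigenvector derivation with a consistency check; the criteria and judgments are illustrative, not the paper's case data:

```python
import numpy as np

# Pairwise comparison matrix for three evaluation criteria
# (usability, content tools, cost); illustrative judgments on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                     # priority weights, summing to 1

# Saaty's consistency ratio; judgments are usually accepted if CR < 0.1
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = 0.58                        # random index for n = 3
print("weights:", np.round(w, 3), " CR:", round(ci / ri, 3))
```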

Relevance: 100.00%

Abstract:

Conventionally, oil pipeline projects are evaluated thoroughly by the owner before an investment decision is made, using market, technical and financial analysis in sequence. The market analysis determines the pipeline's throughput and the supply and demand points. Technical analysis then identifies the technological options, and economic and financial analysis derives the least-cost option among all technically feasible options. The subsequent impact assessment tries to justify the selected option by addressing environmental and social issues, and often suggests alternative sites, technologies and/or implementation methodologies, necessitating revision of the technical and financial analysis. This study addresses these issues via an integrated project evaluation and selection model. The model uses the analytic hierarchy process, a multiple-attribute decision-making technique. The effectiveness of the model is demonstrated through a case application to a cross-country petroleum pipeline project in India.

Relevance: 100.00%

Abstract:

The joint sentiment-topic (JST) model was previously proposed to detect sentiment and topic simultaneously from text. The only supervision required for JST model learning is domain-independent polarity word priors. In this paper, we modify the JST model by incorporating word polarity priors through modified topic-word Dirichlet priors. We study the polarity-bearing topics extracted by JST and show that, by augmenting the original feature space with polarity-bearing topics, in-domain supervised classifiers learned from the augmented feature representation achieve state-of-the-art performance of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset. Furthermore, using feature augmentation and selection according to the information gain criterion for cross-domain sentiment classification, our proposed approach performs better than or comparably to previous approaches, while being much simpler and not requiring difficult parameter tuning.
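
The information gain criterion mentioned above scores each feature by how much it reduces class-label entropy, and the top-scoring features are kept. A minimal sketch for binary term features (illustrative, not the paper's implementation):

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(x, y):
    """IG(y; x) for a binary feature x and class labels y."""
    h = entropy(y)
    for v in (0, 1):
        mask = x == v
        if mask.any():
            h -= mask.mean() * entropy(y[mask])
    return h

# illustrative use: rank binary term features against sentiment labels
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 50))      # documents x features
y = rng.integers(0, 2, size=200)            # sentiment labels
gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
top = np.argsort(gains)[::-1][:10]          # keep the 10 most informative features
print("selected feature indices:", top)
```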

Relevance: 100.00%

Abstract:

Purpose – The purpose of this research is to develop a holistic approach to maximizing the customer service level while minimizing the logistics cost, using an integrated multiple criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike the prevalent optimization techniques, the proposed integrated approach considers both quantitative and qualitative factors in order to maximize the benefits to service deliverers and customers under uncertain environments. Design/methodology/approach – This paper proposes a fuzzy-based integer linear programming model, based on the existing literature and validated with an example case. The model integrates the fuzzy modification of the analytic hierarchy process (FAHP) developed here, and solves the multi-criteria transshipment problem. Findings – This paper provides several novel insights into how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain maintains and increases its competitiveness in two respects: optimizing cost and providing the best service simultaneously. Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Given the complexity of transshipment service networks, more cases across multiple industries are necessary to further generalize and validate the research output. Practical implications – The paper has implications for the evaluation and selection of transshipment service suppliers, and for the construction and management of an optimal transshipment network. Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously, and that the viewpoints of both service deliverers and customers are taken into account. It is therefore believed to be useful and applicable for transshipment service network design.
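
The quantitative core of such a model is an integer linear program over the transshipment network. A toy sketch using the PuLP modeling library; the network, costs and demands are invented, and the FAHP scores of qualitative factors, which the paper integrates into the model, are omitted here:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

# Toy transshipment network: one supplier S, one hub H, two customers C1, C2.
# Unit shipping costs, supply and demands are illustrative.
arcs = {("S", "H"): 2.0, ("S", "C1"): 6.0, ("H", "C1"): 3.0, ("H", "C2"): 2.5}
supply = {"S": 30}
demand = {"C1": 12, "C2": 18}

x = {a: LpVariable(f"x_{a[0]}_{a[1]}", lowBound=0, cat="Integer") for a in arcs}
prob = LpProblem("transshipment", LpMinimize)
prob += lpSum(c * x[a] for a, c in arcs.items())          # total shipping cost

# flow conservation at the hub, supply limit, and demand satisfaction
prob += x[("S", "H")] == x[("H", "C1")] + x[("H", "C2")]
prob += x[("S", "H")] + x[("S", "C1")] <= supply["S"]
prob += x[("S", "C1")] + x[("H", "C1")] >= demand["C1"]
prob += x[("H", "C2")] >= demand["C2"]

prob.solve()
for a, var in x.items():
    print(a, var.value())
print("min cost:", value(prob.objective))
```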

Relevance: 100.00%

Abstract:

This report examines important issues pertaining to the different ways in which methods of compression affect the information security of file objects under information attacks. Accordingly, the report analyzes the three-way relationships that may exist among a selected set of attacks, methods and objects. On this basis, a methodology for evaluating information security is proposed and a coefficient of information security is defined. With respect to this coefficient, the lowest-risk methods of compression are selected using different criteria and methods for the evaluation and selection of alternatives.
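
The report's coefficient itself is not reproduced here, but the generic selection step it supports, scoring compression methods against weighted risk criteria and choosing the lowest-risk one, can be sketched as follows (all criteria names, weights and scores are illustrative assumptions):

```python
# Generic weighted-criteria selection of the lowest-risk compression method.
# Criteria weights and risk scores (0 = no risk, 1 = maximal risk) are illustrative.
criteria_weights = {"attack_resilience": 0.5, "integrity_loss": 0.3, "leak_surface": 0.2}

risk_scores = {   # hypothetical per-method risk under each criterion
    "method_A": {"attack_resilience": 0.2, "integrity_loss": 0.4, "leak_surface": 0.3},
    "method_B": {"attack_resilience": 0.5, "integrity_loss": 0.1, "leak_surface": 0.2},
    "method_C": {"attack_resilience": 0.3, "integrity_loss": 0.3, "leak_surface": 0.1},
}

def weighted_risk(scores):
    return sum(criteria_weights[c] * r for c, r in scores.items())

best = min(risk_scores, key=lambda m: weighted_risk(risk_scores[m]))
for m, s in risk_scores.items():
    print(f"{m}: weighted risk = {weighted_risk(s):.2f}")
print("lowest-risk method:", best)
```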

Relevance: 100.00%

Abstract:

This research project is a feasibility study of importing women's footwear from China for the Colombian company Kenzo Jeans, through which product, price and distribution strategies are evaluated in depth so that the company can assess the suitability of the process. The objective of this research is to provide the tools and strategies the company needs to gain a more complete view of importing footwear from China for distribution in the Colombian market. The study was conducted to give management the information needed to make sound decisions, eliminating the lack of knowledge that could create greater uncertainty when entering an import process. To carry out this process, minimum evaluation and selection criteria were defined regarding product design, price, quality, minimum order quantity, packaging and labeling that prospective suppliers in China had to meet. This was done by approaching potential suppliers, which made it possible to filter those that could meet the criteria required by Kenzo Jeans. Once the classification and selection process was completed, it was determined that there is potential in importing women's footwear from China. With this done, it is suggested that Kenzo Jeans make direct contact with these companies through a possible business trip.

Relevance: 100.00%

Abstract:

This paper studies feature subset selection in classification using a multiobjective estimation of distribution algorithm. We consider six functions, namely area under the ROC curve, sensitivity, specificity, precision, F1 measure and Brier score, for the evaluation of feature subsets; these serve as the objectives of the problem. A characteristic of these objective functions is the noise in their values, which should be appropriately handled during optimization. Our proposed algorithm consists of two major techniques specially designed for the feature subset selection problem. The first is a solution ranking method based on interval values to handle the noise in the objectives. The second is a model estimation method for learning a joint probabilistic model of objectives and variables, which is used to generate new solutions and advance through the search space. To simplify model estimation, l1-regularized regression is used to select a subset of problem variables before model learning. The proposed algorithm is compared with a well-known ranking method for interval-valued objectives and with a standard multiobjective genetic algorithm; in particular, the effects of the two new techniques are experimentally investigated. The experimental results show that the proposed algorithm obtains comparable or better performance on the tested datasets.
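
The l1-regularized screening step can be illustrated with off-the-shelf tools: features whose penalized coefficients are nonzero are retained before model estimation. A minimal sketch on synthetic data (not the paper's implementation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic dataset: 100 features, only a handful informative
X, y = make_classification(n_samples=300, n_features=100,
                           n_informative=5, random_state=0)

# l1 penalty drives most coefficients exactly to zero
screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
screen.fit(X, y)

selected = np.flatnonzero(screen.coef_[0])
print(f"{selected.size} of {X.shape[1]} features kept:", selected)
```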

Relevance: 100.00%

Abstract:

Feature selection is one of the most important and frequently used techniques in data preprocessing. It can improve the efficiency and effectiveness of data mining by reducing the dimensionality of the feature space and removing irrelevant and redundant information. Feature selection can be viewed as a global optimization problem of finding a minimum set of M relevant features that describes the dataset as well as the original N attributes do. In this paper, we apply the adaptive partitioned random search strategy to our feature selection algorithm. Under this search strategy, a partition structure and an evaluation function are proposed for the feature selection problem. The algorithm guarantees the globally optimal solution in theory while avoiding complete randomness in the search direction. This good property of our algorithm is shown through theoretical analysis.
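
Partitioned random search divides the search space into regions, samples each region at random, and concentrates further sampling on the most promising region. A deliberately simplified sketch for feature subsets follows; the partitioning rule (by inclusion of feature 0) and the toy objective are illustrative assumptions, not the paper's construction:

```python
import random

def prs_feature_selection(n_features, evaluate, rounds=20, samples_per_region=10):
    """Toy partitioned random search: partition subsets by whether they
    include feature 0, sample each region, then keep sampling the more
    promising region. `evaluate` maps a feature subset to a score."""
    regions = [True, False]                  # include feature 0 or not
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        region_best = {}
        for include0 in regions:
            for _ in range(samples_per_region):
                subset = {0} if include0 else set()
                subset |= {j for j in range(1, n_features) if random.random() < 0.3}
                s = evaluate(subset)
                region_best[include0] = max(region_best.get(include0, s), s)
                if s > best_score:
                    best, best_score = subset, s
        # focus subsequent sampling on the most promising region
        regions = [max(region_best, key=region_best.get)]
    return best, best_score

# illustrative objective: prefer small subsets that contain features 0 and 3
score = lambda S: (0 in S) + (3 in S) - 0.05 * len(S)
print(prs_feature_selection(12, score))
```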

Relevance: 100.00%

Abstract:

The aim of this thesis was to create guidelines for supplier selection and supplier performance evaluation for the use of the case company, Exel Oyj. The guidelines were intended as a starting point for developing the supplier selection and performance evaluation processes. The thesis focuses on presenting supplier selection criteria and supplier performance evaluation criteria. The criteria were chosen and analyzed with the help of theory and empirical data, and clear lists of the criteria were compiled. These lists were then used when considering the new selection and performance evaluation criteria that the case company can apply in the future. The thesis also reviews the supplier selection process and the tools and metrics related to supplier evaluation. The empirical material was collected by interviewing the purchasing manager and by gathering information from the annual report and the company's web pages. The results of the thesis were lists of criteria that the company can utilize going forward, as well as lists of the criteria preliminarily chosen for the company's use.

Relevance: 100.00%

Abstract:

Aircraft manufacturing industries are looking for solutions to increase their productivity. One solution is to apply metrology systems during the production and assembly processes. The Metrology Process Model (MPM) (Maropoulos et al., 2007) has been introduced, which ties metrology applications to assembly planning, manufacturing processes and product design. Measurability analysis is part of the MPM, and its aim is to check the feasibility of measuring the designed large-scale components. Measurability analysis has been integrated in order to provide an efficient matching system. The metrology database is structured by developing a Metrology Classification Model, and a feature-based selection model is also explained. By combining the two classification models, a novel approach and selection process for an integrated measurability analysis system (MAS) are introduced; such an integrated MAS can provide much more meaningful matching results for operators.

Relevance: 100.00%

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication, theoretical guarantees and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel for each subset using regularized regression or a Bayesian variable selection method, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves very minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
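
A rough single-machine sketch of the message recipe, assuming lasso as the per-subset selector (the thesis also allows Bayesian variable selection, and a real deployment would run the per-subset fits on separate workers):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def message(X, y, n_subsets=4, alpha=0.1):
    """Sketch of the message estimator: lasso selection per row-subset,
    a median feature-inclusion vote, then per-subset refits averaged."""
    n, p = X.shape
    parts = np.array_split(np.random.permutation(n), n_subsets)

    # 1) feature selection in parallel on each subset
    inclusion = np.zeros((n_subsets, p))
    for k, idx in enumerate(parts):
        sel = Lasso(alpha=alpha).fit(X[idx], y[idx])
        inclusion[k] = sel.coef_ != 0

    # 2) median inclusion index: keep features selected on most subsets
    keep = np.flatnonzero(np.median(inclusion, axis=0) >= 0.5)

    # 3) refit on the selected features per subset, then average coefficients
    coefs = np.array([LinearRegression().fit(X[idx][:, keep], y[idx]).coef_
                      for idx in parts])
    return keep, coefs.mean(axis=0)

# illustrative run on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 30))
beta = np.zeros(30); beta[[2, 7, 11]] = [1.5, -2.0, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(400)
keep, coef = message(X, y)
print("selected:", keep, "\ncoefficients:", np.round(coef, 2))
```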

While sample space partitioning is useful in handling datasets with large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In the thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
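
A sketch of the decorrelation idea as I read it: premultiplying the data by an inverse square root of the row-space Gram matrix makes the columns nearly orthogonal, so that column blocks can be fitted independently. The ridge term and scaling below are stability assumptions, not prescribed by the thesis:

```python
import numpy as np

def decorrelate(X, y, ridge=1e-6):
    """Sketch of a DECO-style decorrelation: premultiply (X, y) by
    (X X'/p)^{-1/2} so that feature partitions can be fitted almost
    independently. The ridge term is an assumption for stability."""
    n, p = X.shape
    G = X @ X.T / p + ridge * np.eye(n)
    # inverse square root via eigendecomposition of the n x n Gram matrix
    vals, vecs = np.linalg.eigh(G)
    G_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return G_inv_sqrt @ X, G_inv_sqrt @ y

# after decorrelation, each worker would lasso-fit its own column block and
# the selected variables/coefficients would be concatenated across workers
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 500))
y = X[:, 0] * 2.0 + rng.standard_normal(100)
Xd, yd = decorrelate(X, y)
print(Xd.shape, yd.shape)   # decorrelated data, same shapes as the input
```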

For datasets with both large sample sizes and high dimensionality, I propose a new "divide-and-conquer" framework, DEME (DECO-message), that leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message, and then partitions the feature space of the cubes using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted in a computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.