869 results for selection model
Abstract:
Smallholder farmers in Africa practice traditional cropping techniques such as intercropping. Intercropping is thought to offer higher productivity and resource utilisation than sole cropping. In this study, risk associated with maize-bean intercropping was evaluated by quantifying long-term yield in both intercropping and sole cropping in a semi-arid region of South Africa (Bloemfontein, Free State) with reference to rainfall variability. The crop simulation model was run with different cultural practices (planting date and plant density) for 52 summer crop growing seasons (1950/1951-2001/2002). Eighty-one scenarios, consisting of three levels each of initial soil water, planting date, maize population, and bean population, were simulated. From the simulation outputs, the total land equivalent ratio (LER) was greater than one. The intercrop (equivalent to sole maize) had greater energy value (EV) than sole beans, and the intercrop (equivalent to sole beans) had greater monetary value (MV) than sole maize. From these results, it can be concluded that maize-bean intercropping is advantageous for this semi-arid region. Soil water at planting was the most important of all scenario factors, followed by planting date. Irrigation application at planting, November/December planting, and high plant density (of maize for EV and of beans for MV) can be among the most effective cultural practices in the study region. With regard to rainfall variability, seasonal (October-April) rainfall positively affected EV and MV, but not LER. There was more intercrop production in La Niña years than in El Niño years. Thus, better cultural practices may be selected to maximize maize-bean intercrop yields for specific seasons in the semi-arid region based on the global seasonal outlook. (c) 2004 Elsevier B.V. All rights reserved.
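The study's headline quantities can be made concrete with a short sketch of how total LER, EV and MV are typically computed from simulated yields. All yield, energy-content and price figures below are hypothetical placeholders, not values from the study.

```python
def total_ler(intercrop_yields, sole_yields):
    """Total land equivalent ratio: sum over crops of
    (intercrop yield / sole-crop yield). A value > 1 indicates an
    intercropping advantage in land use."""
    return sum(intercrop_yields[c] / sole_yields[c] for c in intercrop_yields)

def weighted_value(yields, weights):
    """Generic weighted sum of yields: energy value (with MJ/t weights)
    or monetary value (with price-per-tonne weights)."""
    return sum(yields[c] * weights[c] for c in yields)

# Hypothetical single-season yields (t/ha) and conversion factors.
inter = {"maize": 2.4, "bean": 0.6}
sole = {"maize": 3.0, "bean": 1.0}
energy = {"maize": 15000, "bean": 14000}  # MJ/t, illustrative
price = {"maize": 150, "bean": 400}       # $/t, illustrative

ler = total_ler(inter, sole)        # 2.4/3.0 + 0.6/1.0 = 1.4 (> 1)
ev = weighted_value(inter, energy)  # energy value of the intercrop
mv = weighted_value(inter, price)   # monetary value of the intercrop
```

In the study these quantities are aggregated over 52 simulated seasons per scenario, rather than computed for a single hypothetical season as here.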
Abstract:
Mating preferences are common in natural populations, and their divergence among populations is considered an important source of reproductive isolation during speciation. Although mechanisms for the divergence of mating preferences have received substantial theoretical treatment, complementary experimental tests are lacking. We conducted a laboratory evolution experiment, using the fruit fly Drosophila serrata, to explore the role of divergent selection between environments in the evolution of female mating preferences. Replicate populations of D. serrata were derived from a common ancestor and propagated in one of three resource environments: two novel environments and the ancestral laboratory environment. Adaptation to both novel environments involved changes in cuticular hydrocarbons, traits that predict mating success in these populations. Furthermore, female mating preferences for these cuticular hydrocarbons also diverged among populations. A component of this divergence occurred among treatment environments, accounting for at least 17.4% of the among-population divergence in linear mating preferences and 17.2% of the among-population divergence in nonlinear mating preferences. The divergence of mating preferences in correlation with environment is consistent with the classic by-product model of speciation in which premating isolation evolves as a side effect of divergent selection adapting populations to their different environments.
Abstract:
Document classification is a supervised machine learning process in which predefined category labels are assigned to documents based on a hypothesis derived from a training set of labelled documents. Documents cannot be directly interpreted by a computer system unless they have been modelled as a collection of computable features. Rogati and Yang [M. Rogati and Y. Yang, Resource selection for domain-specific cross-lingual IR, in SIGIR 2004: Proceedings of the 27th Annual International Conference on Research and Development in Information Retrieval, ACM Press, Sheffield, United Kingdom, pp. 154-161.] pointed out that the effectiveness of a document classification system may vary across domains. This implies that the quality of the document model contributes to the effectiveness of document classification. Conventionally, model evaluation is accomplished by comparing the effectiveness scores of classifiers on model candidates. However, this kind of evaluation method may encounter either under-fitting or over-fitting problems, because the effectiveness scores are restricted by the learning capacities of the classifiers. We propose a model fitness evaluation method to determine whether a model is sufficient to distinguish positive and negative instances while still competent to provide satisfactory effectiveness with a small feature subset. Our experiments demonstrate how the fitness of models is assessed. The results of our work contribute to research on feature selection, dimensionality reduction, and document classification.
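The abstract does not specify the fitness criterion, so the sketch below only illustrates the general idea: rank features by how well they separate positive from negative instances and check whether a small subset already suffices. The separation score used here (absolute difference of per-feature class means) is an assumed stand-in for the paper's actual measure.

```python
def feature_scores(pos, neg):
    """Score each feature by the absolute difference of its mean value in
    the positive vs. negative class (illustrative separation criterion)."""
    def mean(rows, j):
        return sum(r[j] for r in rows) / len(rows)
    return [abs(mean(pos, j) - mean(neg, j)) for j in range(len(pos[0]))]

def separating_features(pos, neg, threshold=0.5):
    """Return feature indices, best first, whose separation score meets the
    threshold: a small such subset suggests the model is 'fit' in the sense
    of distinguishing the classes with few features."""
    scores = feature_scores(pos, neg)
    ranked = sorted(range(len(scores)), key=lambda j: -scores[j])
    return [j for j in ranked if scores[j] >= threshold]

# Toy documents as binary term-presence vectors (hypothetical data).
pos = [[1, 0, 1], [1, 1, 1]]
neg = [[0, 0, 1], [0, 1, 0]]
useful = separating_features(pos, neg)  # features 0 and 2 separate the classes
```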
Abstract:
The ‘leading coordinate’ approach to computing an approximate reaction pathway, with subsequent determination of the true minimum energy profile, is applied to a two-proton chain transfer model based on the chromophore and its surrounding moieties within the green fluorescent protein (GFP). Using an ab initio quantum chemical method, a number of different relaxed energy profiles are found for several plausible guesses at leading coordinates. The results obtained for different trial leading coordinates are rationalized through the calculation of a two-dimensional relaxed potential energy surface (PES) for the system. Analysis of the 2-D relaxed PES reveals that two of the trial pathways are entirely spurious, while two others contain useful information and can be used to furnish starting points for successful saddle-point searches. Implications for selection of trial leading coordinates in this class of proton chain transfer reactions are discussed, and a simple diagnostic function is proposed for revealing whether or not a relaxed pathway based on a trial leading coordinate is likely to furnish useful information.
Abstract:
Topological measures of large-scale complex networks are applied to a specific artificial regulatory network model created through a whole genome duplication and divergence mechanism. This class of networks shares topological features with natural transcriptional regulatory networks. Specifically, these networks display scale-free and small-world topology and possess subgraph distributions similar to those of natural networks. Thus, the topologies inherent in natural networks may be in part due to their method of creation rather than being exclusively shaped by subsequent evolution under selection. The evolvability of the dynamics of these networks is also examined by evolving networks in simulation to obtain three simple types of output dynamics. The networks obtained from this process show a wide variety of topologies and numbers of genes, indicating that it is relatively easy to evolve these classes of dynamics in this model. (c) 2006 Elsevier Ireland Ltd. All rights reserved.
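A duplication-and-divergence growth process of the kind described above can be sketched in a few lines, assuming a simple directed-graph representation; the seed network, retention probability, and duplication details are illustrative choices, not the paper's exact model.

```python
import random

def duplication_divergence(n_genes, p_keep=0.6, seed=1):
    """Grow a directed regulatory-style network by gene duplication:
    copy a randomly chosen gene together with its regulatory links, then
    drop each copied link with probability 1 - p_keep (divergence)."""
    rng = random.Random(seed)
    edges = {0: {1}, 1: {0}}  # seed network: two mutually regulating genes
    while len(edges) < n_genes:
        src = rng.choice(sorted(edges))  # gene to duplicate
        new = len(edges)
        # Copy outgoing regulatory links, each retained with prob. p_keep.
        edges[new] = {t for t in edges[src] if rng.random() < p_keep}
        # Copy incoming links from every gene that regulates src.
        for g in list(edges):
            if g != new and src in edges[g] and rng.random() < p_keep:
                edges[g].add(new)
    return edges

net = duplication_divergence(12, p_keep=0.5, seed=7)
```

Topological measures (degree distribution, clustering, subgraph counts) would then be computed on `net` to compare against natural regulatory networks.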
Abstract:
Racing algorithms have recently been proposed as a general-purpose method for performing model selection in machine learning algorithms. In this paper, we present an empirical study of the Hoeffding racing algorithm for selecting the k parameter in a simple k-nearest neighbor classifier. Fifteen widely used classification datasets from UCI are used, and experiments are conducted across different confidence levels for racing. The results reveal a significant amount of sensitivity of the k-nn classifier to its model parameter value. The Hoeffding racing algorithm also varies widely in its performance, in terms of the computational savings gained over an exhaustive evaluation. While in some cases the savings gained are quite small, the racing algorithm proved to be highly robust to the possibility of erroneously eliminating the optimal models. All results were strongly dependent on the datasets used.
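A Hoeffding race can be sketched generically: maintain a confidence interval on each candidate's mean loss and eliminate any candidate whose best case is worse than the current leader's worst case. The implementation below is a simplified illustration (losses assumed in [0, 1], a single shared delta, a hard step cap), not the exact algorithm evaluated in the paper.

```python
import math

def hoeffding_radius(n, delta, loss_range=1.0):
    """Hoeffding bound: with probability at least 1 - delta, the true mean
    loss lies within this radius of the empirical mean after n samples."""
    return loss_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def race(models, loss_stream, delta=0.05, max_steps=500):
    """Race candidate models on a stream of per-instance losses in [0, 1].

    loss_stream(m, i) returns model m's loss on instance i (e.g. a
    leave-one-out error for a k-nn candidate). A model is eliminated once
    its lower confidence bound on mean loss exceeds the smallest upper
    bound among the surviving models."""
    alive = set(models)
    sums = {m: 0.0 for m in models}
    i = 0
    while len(alive) > 1 and i < max_steps:
        for m in alive:
            sums[m] += loss_stream(m, i)
        i += 1
        eps = hoeffding_radius(i, delta)
        best_upper = min(sums[m] / i + eps for m in alive)
        alive = {m for m in alive if sums[m] / i - eps <= best_upper}
    return alive, i
```

The computational saving the paper measures corresponds to the gap between the step count returned here and a full exhaustive evaluation of every candidate.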
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To show that the formalism is of more general interest than the test cases they consider, in this paper the technique is applied to the subset sum problem, a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single spin-flip dynamics. It is a problem which exhibits interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
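The macroscopic quantities tracked by the formalism can be illustrated concretely. The sketch below computes the subset-sum energy of a binary string, the first two cumulants of a population's energies, and the mean pairwise correlation of the population; the +/-1 spin encoding is the usual statistical-mechanics convention, and the normalisation choices are illustrative.

```python
def energy(bits, weights, target):
    """Subset-sum energy (fitness): |sum of selected weights - target|;
    lower is better, zero means a perfect subset was found."""
    return abs(sum(w for b, w in zip(bits, weights) if b) - target)

def cumulants(values):
    """First two cumulants of the population's energy distribution:
    mean and (biased) variance, describing the population's shape."""
    n = len(values)
    k1 = sum(values) / n
    k2 = sum((v - k1) ** 2 for v in values) / n
    return k1, k2

def mean_correlation(population):
    """Average pairwise overlap of the members' +/-1 spin encodings:
    the microscopic similarity measure that supplements the cumulants."""
    spins = [[2 * b - 1 for b in bits] for bits in population]
    length, total, pairs = len(spins[0]), 0.0, 0
    for a in range(len(spins)):
        for b in range(a + 1, len(spins)):
            total += sum(x * y for x, y in zip(spins[a], spins[b])) / length
            pairs += 1
    return total / pairs
```

In the formalism these quantities are not measured from a simulated population but evolved directly through difference equations from one generation to the next.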
Abstract:
Data visualization algorithms and feature selection techniques are both widely used in bioinformatics but as distinct analytical approaches. Until now there has been no method of measuring feature saliency while training a data visualization model. We derive a generative topographic mapping (GTM) based data visualization approach which estimates feature saliency simultaneously with the training of the visualization model. The approach not only provides a better projection by modeling irrelevant features with a separate noise model but also gives feature saliency values which help the user to assess the significance of each feature. We compare the quality of projection obtained using the new approach with the projections from traditional GTM and self-organizing maps (SOM) algorithms. The results obtained on a synthetic and a real-life chemoinformatics dataset demonstrate that the proposed approach successfully identifies feature significance and provides coherent (compact) projections. © 2006 IEEE.
Abstract:
Multi-agent algorithms inspired by the division of labour in social insects are applied to a problem of distributed mail retrieval in which agents must visit mail-producing cities and choose between mail types under certain constraints. The efficiency (i.e. the average amount of mail retrieved per time step) and the flexibility (i.e. the capability of the agents to react to changes in the environment) are investigated both in static and dynamic environments. New rules for mail selection and specialisation are introduced and are shown to exhibit improved efficiency and flexibility compared to existing ones. We employ a genetic algorithm which allows the various rules to evolve and compete. Apart from obtaining optimised parameters for the various rules for any environment, we also observe extinction and speciation. From a more theoretical point of view, in order to avoid finite size effects, most results are obtained for large population sizes. However, we do analyse the influence of population size on the performance. Furthermore, we critically analyse the causes of efficiency loss, derive the exact dynamics of the model in the large system limit under certain conditions, derive theoretical upper bounds for the efficiency, and compare these with the experimental results.
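For orientation, the classic response-threshold rule from the division-of-labour literature, on which rules of this kind build, can be sketched as follows; the roulette-wheel choice over mail types is an illustrative simplification, not the paper's new selection rule.

```python
import random

def response_probability(stimulus, threshold):
    """Classic response-threshold rule: an agent engages a task with
    probability s^2 / (s^2 + theta^2). Low thresholds make an agent a
    specialist for that task; this is the textbook rule, shown only as
    background for the improved rules the paper introduces."""
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

def choose_mail_type(stimuli, thresholds, rng=random):
    """Pick a mail type in proportion to each type's response probability
    (simple roulette-wheel selection over the per-type probabilities)."""
    probs = [response_probability(s, t) for s, t in zip(stimuli, thresholds)]
    total = sum(probs)
    r = rng.random() * total
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

Specialisation emerges when each agent's thresholds adapt downward for the mail types it handles and upward for the others.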
Abstract:
The survival of organisations, especially SMEs, depends, to the greatest extent, on those who supply them with the required material input. This is because if the supplier fails to deliver the right materials at the right time and place, and at the right price, then the recipient organisation is bound to fail in its obligations to satisfy the needs of its customers, and to stay in business. Hence, the task of choosing a supplier(s) from a list of vendors, that an organisation will trust with its very existence, is not an easy one. This project investigated how purchasing personnel in organisations solve the problem of vendor selection. The investigation went further to ascertain whether an Expert Systems model could be developed and used as a plausible solution to the problem. An extensive literature review indicated that very little research has been conducted in the area of Expert Systems for Vendor Selection, whereas many research theories in expert systems and in purchasing and supply chain management, respectively, had been reported. A survey questionnaire was designed and circulated to people in the industries who actually perform the vendor selection tasks. Analysis of the collected data confirmed the various factors which are considered during the selection process, and established the order in which those factors are ranked. Five of the factors, namely, Production Methods Used, Vendor's Financial Background, Manufacturing Capacity, Size of Vendor Organisation, and Supplier's Position in the Industry, appeared to have similar patterns in the way organisations ranked them. These patterns suggested that the bigger the organisation, the more important it considered the above factors. Further investigations revealed that respondents agreed that the most important factors were: Product Quality, Product Price and Delivery Date. The most apparent pattern was observed for the Vendor's Financial Background.
This finding generated curiosity, which led to the design and development of a prototype expert system for assessing the financial profile of a potential supplier(s). This prototype was called ESfNS. It determines whether a prospective supplier(s) has a good financial background or not. ESfNS was tested by the potential users, who then confirmed that expert systems have great prospects and commercial viability in the domain for solving vendor selection problems.
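A rule-based check of the kind ESfNS performs might be sketched as below; the financial ratios, thresholds, and two-way "good"/"poor" verdict are hypothetical illustrations, not the actual knowledge base elicited from the survey.

```python
def assess_financial_profile(ratios):
    """Apply simple if-then rules to a supplier's financial ratios and
    return a verdict plus the rules that failed. The rules and cut-offs
    below are invented examples of expert-system knowledge."""
    rules = {
        "current_ratio":  lambda v: v >= 1.5,   # liquidity: can pay short-term debts
        "debt_to_equity": lambda v: v <= 1.0,   # leverage: not over-borrowed
        "profit_margin":  lambda v: v >= 0.05,  # profitability: sustainable margins
    }
    failed = [name for name, ok in rules.items() if not ok(ratios[name])]
    return ("good" if not failed else "poor"), failed

verdict, failed = assess_financial_profile(
    {"current_ratio": 2.0, "debt_to_equity": 0.4, "profit_margin": 0.12})
```

A real expert system would also chain rules, weight evidence, and explain its reasoning to the purchasing officer rather than return a bare verdict.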
Abstract:
Construction projects are risky. However, the characteristics of the risk highly depend on the type of procurement being adopted for managing the project. A build-operate-transfer (BOT) project is recognized as one of the most risky project schemes. There are instances of project failure where a BOT scheme was employed. Risks in such projects are increasingly being managed using various risk management tools and techniques. However, application of those tools depends on the nature of the project, the organization's policy, the project management strategy, the risk attitude of the project team members, and the availability of resources. Understanding of the contents and contexts of BOT projects, together with a thorough understanding of risk management tools and techniques, helps select processes of risk management for effective project implementation in a BOT scheme. This paper studies the application of risk management tools and techniques in BOT projects through reviews of the relevant literature and develops a model for selecting a risk management process for BOT projects. The application to BOT projects is considered from the viewpoints of the major project participants. Discussion is also made with regard to political risks. This study would contribute to the establishment of a framework for systematic risk management in BOT projects.
Abstract:
Selection of a power market structure from the available alternatives is an important activity within an overall power sector reform programme. The evaluation criteria for selection are both subjective and objective in nature, and the alternatives are characterised by their conflicting attributes. This study demonstrates a methodology for power market structure selection using the analytic hierarchy process (AHP), a multiple-attribute decision-making technique, with the active participation of relevant stakeholders in a workshop environment. The methodology is applied to a hypothetical case of a State Electricity Board reform in India.
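The core AHP computation, deriving a priority vector from a stakeholder-elicited pairwise-comparison matrix, can be sketched briefly; the row-geometric-mean approximation used here is a standard stand-in for the principal-eigenvector calculation.

```python
import math

def ahp_priorities(matrix):
    """Approximate the AHP priority vector of a pairwise-comparison matrix
    via normalised row geometric means. matrix[i][j] holds the judged
    importance of alternative i relative to alternative j (reciprocal
    matrix: matrix[j][i] == 1 / matrix[i][j])."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical workshop judgement: market structure A is judged 3x as
# preferable as structure B on some criterion.
weights = ahp_priorities([[1.0, 3.0], [1.0 / 3.0, 1.0]])  # -> [0.75, 0.25]
```

In a full AHP exercise, priority vectors are computed per criterion and then aggregated up the hierarchy, with a consistency-ratio check on each comparison matrix.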
Abstract:
This paper introduces a compact form for the maximum value of the non-Archimedean epsilon in Data Envelopment Analysis (DEA) models applied to technology selection, without the need to solve a linear program (LP). Using this method, the computational performance of the common-weight multi-criteria decision-making (MCDM) DEA model proposed by Karsak and Ahiska (International Journal of Production Research, 2005, 43(8), 1537-1554) is improved. This improvement is significant when computational issues and complexity analysis are a concern.
Abstract:
This paper extends previous analyses of the choice between internal and external R&D to consider the costs of internal R&D. The Heckman two-stage estimator is used to estimate the determinants of internal R&D unit cost (i.e. cost per product innovation) allowing for sample selection effects. Theory indicates that R&D unit cost will be influenced by scale issues and by the technological opportunities faced by the firm. Transaction costs encountered in research activities are allowed for and, in addition, consideration is given to issues of market structure which influence the choice of R&D mode without affecting the unit cost of internal or external R&D. The model is tested on data from a sample of over 500 UK manufacturing plants which have engaged in product innovation. The key determinants of R&D mode are the scale of plant and R&D input, and market structure conditions. In terms of the R&D cost equation, scale factors are again important and have a non-linear relationship with R&D unit cost. Specificities in physical and human capital also affect unit cost, but have no clear impact on the choice of R&D mode. There is no evidence of technological opportunity affecting either R&D cost or the internal/external decision.
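The selection correction at the heart of the Heckman two-stage estimator is the inverse Mills ratio, evaluated at each observation's first-stage probit index and added as a regressor in the second-stage cost equation. A minimal sketch of that term (the surrounding probit and OLS stages are omitted):

```python
import math

def inverse_mills(z):
    """Inverse Mills ratio lambda(z) = phi(z) / Phi(z), where phi and Phi
    are the standard normal pdf and cdf. In the second Heckman stage this
    ratio, computed from the first-stage probit index for each plant that
    selected internal R&D, corrects the unit-cost regression for sample
    selection."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pdf / cdf
```

A significant coefficient on this added regressor in the second stage indicates that selection into internal R&D is correlated with unobserved determinants of unit cost, i.e. that ignoring selection would bias the estimates.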