980 results for Artificial Selection
Abstract:
Fish-farming ponds in the state of Goiás are numerous and support intense recreational activity. However, studies on the cyanobacteria of these environments are scarce, which is worrying, since intense phytoplankton proliferation is commonly observed in fishing ponds, mainly as a result of anthropogenic activity. The danger lies in the formation of blooms of potentially toxic species, mainly cyanobacteria. This work aims to inventory the planktonic cyanobacteria species occurring in a fishing pond (Lake Jaó, a shallow artificial lake) in the municipal area of Goiânia, Goiás state (16º39'13" S, 49º13'26" W). Sampling was carried out during the dry (2003 to 2008) and rainy (2009) seasons, when the occurrence of blooms was visually evident. Climatological, morphometric and limnological variables were measured. The dry season was representative in the sampled years, with a maximum monthly precipitation of 50 mm in 2005. Thirty-one cyanobacteria taxa were recorded, belonging to the genera Dolichospermum (5 spp.), Aphanocapsa (4 spp.), Microcystis (3 spp.), Pseudanabaena (3 spp.), Radiocystis (2 spp.), Oscillatoria (2 spp.), and Bacularia, Coelosphaerium, Cylindrospermopsis, Geitlerinema, Glaucospira, Limnothrix, Pannus, Phormidium, Planktolyngbya, Planktothrix, Sphaerocavum and Synechocystis, the latter with one species each. From 2003 to 2005 blooms of Dolichospermum species predominated, while in 2006 species of Microcystis, Radiocystis and Aphanocapsa predominated. Of the species inventoried in this study, 21 are first records for the state of Goiás and 13 are reported in the literature as potentially toxic.
Abstract:
Seven selection indexes based on the phenotypic value of the individual and the mean performance of its family were assessed for their application in the breeding of self-pollinated plants. There is no clear superiority of one index over another, although some show one or more negative aspects, such as favoring the selection of a top-performing plant from an inferior family to the detriment of an excellent plant from a superior family.
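The family-index idea can be sketched as follows. The weights and phenotypic values below are hypothetical, and the paper's seven indexes differ in exactly how the two components are combined:

```python
# Combined selection index: weight an individual's own phenotype together
# with the mean performance of its family (hypothetical weights and data).
families = {
    "F1": [8.2, 7.9, 8.5, 8.1],   # phenotypic values of plants in family F1
    "F2": [6.0, 9.4, 5.8, 6.1],   # F2 has one outstanding plant in a weak family
}

W_IND, W_FAM = 0.5, 0.5  # assumed index weights for individual and family mean

index = {}
for fam, values in families.items():
    fam_mean = sum(values) / len(values)
    for i, p in enumerate(values):
        index[f"{fam}-plant{i+1}"] = W_IND * p + W_FAM * fam_mean

# Rank plants by the combined index (highest first)
ranking = sorted(index, key=index.get, reverse=True)
```

With equal weights, the family-mean term keeps the outstanding plant of the weak family F2 from outranking the excellent plants of the strong family F1; an index that weights the individual term more heavily would reverse this, which is the trade-off the abstract describes.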
Abstract:
Data on corn ear production (kg/ha) of 196 half-sib progenies (HSP) of the maize population CMS-39, obtained from experiments carried out in four environments, were used to adapt and assess the BLP method (best linear predictor) in comparison with selection among and within half-sib progenies (SAWHSP). The 196 HSP of the CMS-39 population, developed by the National Center for Maize and Sorghum Research (CNPMS-EMBRAPA), were related through their pedigree with the recombined progenies of the previous selection cycle. The two methodologies used for the selection of the twenty best half-sib progenies, BLP and SAWHSP, led to similar expected genetic gains. The BLP methodology tended to select a greater number of progenies related through the previous generation (pedigree) than the other method, which implies that greater care must be taken with the effective population size under this method. The SAWHSP methodology was efficient in isolating the additive genetic variance component from the phenotypic component. The pedigree system, although unnecessary for the routine use of the SAWHSP methodology, allowed the prediction of an increase in the inbreeding of the population under long-term SAWHSP selection when recombination is simultaneous with the creation of new progenies.
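A highly simplified sketch of the predictor idea: a progeny's predicted genetic value shrinks its observed family mean toward the population mean by a reliability factor. The data and the fixed heritability-like weight are hypothetical, and the paper's BLP additionally exploits pedigree relationships between selection cycles:

```python
# Simplified best-linear-predictor-style shrinkage (hypothetical data).
h2 = 0.25  # assumed reliability of a half-sib family mean

family_means = {"HSP-001": 5200.0, "HSP-002": 4800.0, "HSP-003": 5600.0}  # kg/ha
mu = sum(family_means.values()) / len(family_means)  # population mean

# Predicted genetic value: shrink each family mean toward the population mean
blp = {p: mu + h2 * (m - mu) for p, m in family_means.items()}

# Select the best progeny by predicted value
best = max(blp, key=blp.get)
```

The shrinkage makes selection less responsive to extreme family means than raw phenotypic ranking, which is one reason a pedigree-aware BLP can favor clusters of related progenies.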
Abstract:
Demand for energy systems entailing high efficiency as well as the ability to harness renewable energy sources is a key issue in tackling the threat of global warming and saving natural resources. Organic Rankine cycle (ORC) technology has been identified as one of the most promising technologies for recovering low-grade heat sources and for harnessing renewable energy sources that cannot be efficiently utilized by more conventional power systems. The ORC is based on the working principle of the Rankine process, but an organic working fluid is adopted in the cycle instead of steam. This thesis presents numerical and experimental results of a study on the design of small-scale ORCs. Two main applications were selected for the thesis: waste heat recovery from small-scale diesel engines, concentrating on the utilization of the exhaust gas heat, and waste heat recovery in large industrial-scale engine power plants, considering the utilization of both the high and low temperature heat sources. The main objective of this work was to identify suitable working fluid candidates and to study the process and turbine design methods that can be applied when power plants based on non-conventional working fluids are considered. The computational work included the use of thermodynamic analysis methods and turbine design methods based on highly accurate fluid properties. In addition, the design and loss mechanisms of supersonic ORC turbines were studied by means of computational fluid dynamics. The results indicated that the design of an ORC is highly influenced by the selection of the working fluid and the cycle operating conditions. The results for the turbine designs indicated that working fluid selection should not be based only on thermodynamic analysis but also requires consideration of the turbine design. The turbines tend to be fast rotating, entailing small blade heights at the turbine rotor inlet and highly supersonic flow in the turbine flow passages, especially when power systems with low power outputs are designed. The results indicated that the ORC is a potential solution for utilizing waste heat streams both at high and low temperatures and in both micro and larger scale applications.
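The first-law analysis behind such cycle studies reduces to an enthalpy balance over the four processes; a minimal sketch with hypothetical state-point enthalpies for an unspecified organic working fluid (real studies of this kind use accurate equation-of-state property libraries):

```python
# First-law analysis of a simple ORC (hypothetical enthalpies, kJ/kg).
# States: 1 pump inlet, 2 evaporator inlet, 3 turbine inlet, 4 condenser inlet.
h1, h2, h3, h4 = 250.0, 255.0, 520.0, 470.0

w_pump = h2 - h1                    # specific pump work input
q_in   = h3 - h2                    # specific heat added in the evaporator
w_turb = h3 - h4                    # specific turbine work output
eta    = (w_turb - w_pump) / q_in   # cycle thermal efficiency
```

For low-grade heat sources the temperature (and hence enthalpy) spread is small, so efficiencies in the range illustrated here are typical, which is why working-fluid and turbine design choices matter so much.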
Abstract:
The aim of this research is to examine pricing anomalies in the U.S. market from 1986 to 2011. The sample of stocks is divided into decile portfolios based on seven individual valuation ratios (E/P, B/P, S/P, EBIT/EV, EBITDA/EV, D/P, and CE/P) and price momentum, to investigate the efficiency of individual valuation ratios and their combinations as portfolio formation criteria. This is the first time in the financial literature that CE/P is employed as a constituent of a composite value measure. The combinations are based on median-scaled composite value measures and the TOPSIS method. During the sample period, value portfolios significantly outperform both the market portfolio and comparable glamour portfolios. The results show the highest return for the value portfolio based on the combination of the S/P and CE/P ratios. The outcome of this research will increase the understanding of the suitability of different methodologies for portfolio selection. It will help managers to take advantage of the results of different methodologies in order to gain returns above the market.
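A median-scaled composite value measure can be sketched as follows: each ratio is divided by its cross-sectional median, the scaled ratios are averaged into a composite score, and the top-scoring stocks form the value portfolio. The six stocks and their ratios are hypothetical:

```python
from statistics import median

# Median-scaled composite value measure (hypothetical ratios for six stocks).
stocks = {
    "A": {"S/P": 2.0, "CE/P": 0.30},
    "B": {"S/P": 1.0, "CE/P": 0.10},
    "C": {"S/P": 0.5, "CE/P": 0.05},
    "D": {"S/P": 3.0, "CE/P": 0.20},
    "E": {"S/P": 1.5, "CE/P": 0.15},
    "F": {"S/P": 0.8, "CE/P": 0.08},
}

ratios = ["S/P", "CE/P"]
med = {r: median(s[r] for s in stocks.values()) for r in ratios}

# Scale each ratio by its median, then average into a composite score
composite = {
    name: sum(s[r] / med[r] for r in ratios) / len(ratios)
    for name, s in stocks.items()
}

# Highest composite scores form the "value" portfolio (top third here)
value_portfolio = sorted(composite, key=composite.get, reverse=True)[:2]
```

Median scaling makes the ratios comparable across very different units before combining them; the TOPSIS method mentioned in the abstract is an alternative, distance-to-ideal way of aggregating the same inputs.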
Abstract:
Appropriate supplier selection and its profound effects on increasing the competitive advantage of companies have been widely discussed in the supply chain management (SCM) literature. With rising environmental awareness, companies and industries attach more importance to sustainable and green activities in their selection procedures for raw material providers. The current thesis uses the data envelopment analysis (DEA) technique to evaluate the relative efficiency of suppliers in the presence of carbon dioxide (CO2) emissions for green supplier selection. We incorporate the pollution of suppliers as an undesirable output into DEA. However, in doing so, two problems of conventional DEA models arise: the lack of discrimination power among decision making units (DMUs) and the flexibility of the input and output weights. To overcome these limitations, we use multiple criteria DEA (MCDEA) as one alternative. By applying MCDEA, the number of suppliers identified as efficient is decreased, leading to a better ranking and selection of the suppliers. Besides, in order to compare the performance of the suppliers with an ideal supplier, a “virtual” best practice supplier is introduced. The presence of the ideal virtual supplier also increases the discrimination power of the model for a better ranking of the suppliers. Therefore, a new MCDEA model is proposed to simultaneously handle undesirable outputs and a virtual DMU. The developed model is applied to the green supplier selection problem, and a numerical example illustrates its applicability.
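The conventional DEA baseline can be sketched as a linear program per supplier. The data below are hypothetical, CO2 emission is treated as an input (one common way to penalize an undesirable output), and the sketch is the classical input-oriented CCR multiplier model, not the MCDEA extension the thesis develops:

```python
from scipy.optimize import linprog

# Input-oriented CCR DEA (multiplier form) with CO2 emission treated as an
# input.  Hypothetical data: per supplier, inputs = [cost, CO2 emission],
# outputs = [delivered quantity].
inputs  = [[100.0, 50.0], [120.0, 30.0], [90.0, 70.0]]
outputs = [[80.0],        [85.0],        [75.0]]

def ccr_efficiency(o):
    """Efficiency of supplier o: max u.y_o  s.t.  v.x_o = 1, u.y_j - v.x_j <= 0."""
    m, s = len(inputs[0]), len(outputs[0])
    # Decision variables: output weights u (s of them), then input weights v (m)
    c = [-y for y in outputs[o]] + [0.0] * m          # linprog minimizes, so negate
    A_ub = [outputs[j] + [-x for x in inputs[j]] for j in range(len(inputs))]
    b_ub = [0.0] * len(inputs)
    A_eq = [[0.0] * s + inputs[o]]                    # normalization v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

scores = [ccr_efficiency(o) for o in range(len(inputs))]
```

The weight flexibility visible here (each supplier picks the weights most favorable to itself) is exactly what lets many DMUs reach efficiency 1 and motivates the MCDEA and virtual-ideal-supplier devices described in the abstract.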
Abstract:
The present study describes an auxiliary tool in the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LVWM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LVWM from two-dimensional images and scored them as: 1) normal, 2) mild hypokinesia, 3) moderate hypokinesia, 4) severe hypokinesia, 5) akinesia, and 6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LVWM, and this ANN was then tested in the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with measured area under the curve of 0.975. An excellent correlation was also obtained for global LV segmental WM index by expert and ANN analysis (R² = 0.99). In conclusion, ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LVWM.
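The quantitation step described above reduces to a fractional area change (FAC) per segment. A minimal sketch with hypothetical segmental areas; the segment names and grading cut-offs are illustrative, not the trained ANN of the study:

```python
# Fractional area change (FAC) per LV segment from end-diastolic (ED) and
# end-systolic (ES) segmental areas (hypothetical values, cm^2).
ed_area = {"basal-anterior": 4.0, "mid-septal": 3.5, "apical-lateral": 3.0}
es_area = {"basal-anterior": 2.4, "mid-septal": 3.1, "apical-lateral": 2.9}

fac = {seg: (ed_area[seg] - es_area[seg]) / ed_area[seg] for seg in ed_area}

def grade(f):
    """Illustrative semi-quantitative grading from FAC (assumed cut-offs)."""
    if f >= 0.35:
        return "normal"
    if f >= 0.20:
        return "mild hypokinesia"
    if f >= 0.05:
        return "moderate to severe hypokinesia"
    return "akinesia/dyskinesia"

grades = {seg: grade(f) for seg, f in fac.items()}
```

In the study the mapping from CK-derived measurements to the six-point score was learned by a multilayer perceptron rather than fixed thresholds like these; the sketch only shows what quantity the network grades.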
Abstract:
Coronary artery disease (CAD) is a worldwide leading cause of death. The standard method for evaluating critical partial occlusions is coronary arteriography, a catheterization technique that is invasive, time consuming, and costly. There are noninvasive approaches for the early detection of CAD. The basis for the noninvasive diagnosis of CAD has been laid in a sequential analysis of the risk factors and the results of the treadmill test and myocardial perfusion scintigraphy (MPS). Many investigators have demonstrated that the diagnostic applications of MPS are appropriate for patients who have an intermediate likelihood of disease. Although this information is useful, it is only partially utilized in clinical practice due to the difficulty of properly classifying the patients. Since the seminal work of Lotfi Zadeh, fuzzy logic has been applied in numerous areas. In the present study, we proposed and tested a model to select patients for MPS based on fuzzy sets theory. A group of 1053 patients was used to develop the model and another group of 1045 patients was used to test it. Receiver operating characteristic curves were used to compare the performance of the fuzzy model against expert physician opinions, and showed that the performance of the fuzzy model was equal or superior to that of the physicians. Therefore, we conclude that the fuzzy model could be a useful tool to assist the general practitioner in the selection of patients for MPS.
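The core fuzzy-set machinery can be sketched with triangular membership functions over the pre-test likelihood of CAD. The break-points and the single rule below are hypothetical; the study's actual rule base over sequential risk factors is much richer:

```python
# Triangular fuzzy membership functions (hypothetical break-points).
def tri(x, a, b, c):
    """Triangular membership: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy partition of the pre-test likelihood of CAD (0..1)
def mu_low(x):          return tri(x, -0.001, 0.0, 0.35)
def mu_intermediate(x): return tri(x, 0.15, 0.5, 0.85)
def mu_high(x):         return tri(x, 0.65, 1.0, 1.001)

# Rule in the spirit of the abstract: refer a patient to MPS to the
# degree that their likelihood of disease is "intermediate".
def mps_referral_degree(x):
    return mu_intermediate(x)

d_mid = mps_referral_degree(0.5)   # peak of the intermediate set
d_low = mps_referral_degree(0.1)   # clearly low likelihood
```

Unlike a hard threshold, the referral degree decays gradually toward the low- and high-likelihood regions, which is what makes the fuzzy classification of borderline patients tractable.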
Abstract:
The significance and impact of services in the modern global economy have grown, and for decades the academic community of international business has called for further research into better understanding the internationalisation of services. Theories based on the internationalisation of manufacturing firms have long been questioned for their applicability to services. This study aims to contribute to the understanding of service internationalisation by examining how market selection decisions are made for new service products within the existing markets of a multinational financial service provider. The study focused on the factors influencing market selection and was conducted as a case study of a multinational financial service firm and two of its new service products. Two directors responsible for the development and internationalisation of the case service products were interviewed in guided semi-structured interviews based on themes adopted from the literature review and the resulting theoretical framework. The main empirical findings suggest that the most significant factors influencing market selection for new service products within a multinational financial service firm’s existing markets are: commitment to the new service products by both the management and the rest of the product-related organisation; capability and competence of the local country organisations to adopt new services; market potential, which combines market size, market structure and competitive environment; product fit to the market requirements; and enabling partnerships. Based on the empirical findings, this study suggests a framework of factors influencing market selection for new service products, and proposes further research issues and methods to test and extend the findings of this research.
Abstract:
This thesis studies metamaterial-inspired mirrors which provide the most general control over the amplitude and phase of the reflected wavefront. The goal is to explore practical possibilities in designing fully reflective electromagnetic structures with full control over the reflection phase. The first part of the thesis describes a planar focusing metamirror with a focal distance smaller than the operating wavelength. Its practical applicability from the viewpoint of aberrations, when the incidence angle deviates from normal, is verified numerically and experimentally. The results indicate that the proposed focusing metamirror can be efficiently employed in many different applications due to its advantages over conventional mirrors. In the second part of the thesis, a new theoretical concept of reflecting metasurface operation is introduced based on Huygens’ principle. This concept, in contrast to known approaches, takes into account all the requirements of perfect metamirror operation. The theory shows a route to improving the previously proposed metamirrors by tilting the individual inclusions of the structure at a chosen angle from normal. It is numerically tested, and the results demonstrate improvements over the previous design.
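For the focusing part, the reflection-phase profile a flat focusing mirror must impose follows from simple path-length compensation: every reflected ray should arrive in phase at the focal point. A minimal sketch with illustrative numbers (note the focal distance is chosen smaller than the wavelength, as in the thesis):

```python
import math

# Reflection phase required at position x on a flat mirror so that all
# reflected rays arrive in phase at a focal point at distance f on axis.
wavelength = 0.03          # 3 cm (illustrative, microwave range)
f = 0.02                   # focal distance, smaller than the wavelength
k = 2 * math.pi / wavelength

def reflection_phase(x):
    """Phase (rad, wrapped to [0, 2*pi)) compensating the extra path length."""
    extra_path = math.sqrt(x ** 2 + f ** 2) - f
    return (k * extra_path) % (2 * math.pi)

profile = [reflection_phase(x) for x in (-0.05, 0.0, 0.05)]
```

The profile is symmetric about the axis and zero at the center; realizing this continuous phase with discrete sub-wavelength inclusions, and doing so without spurious losses, is the design problem the metamirror work addresses.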
Abstract:
In the present study, we modeled a reaching task as a two-link mechanism. The upper arm and forearm motion trajectories during vertical arm movements were estimated from angular accelerations measured with dual-axis accelerometers. A data set of reaching synergies from able-bodied individuals was used to train a radial basis function artificial neural network with upper arm/forearm tangential angular accelerations. For the specific movements, the trained network predicted forearm motion from new upper arm trajectories with high correlation (mean, 0.9149-0.941). For all other movements, prediction was low (range, 0.0316-0.8302). The results suggest that the proposed algorithm generalizes successfully over similar motions and subjects. Such networks may be used as a high-level controller that predicts forearm kinematics from voluntary movements of the upper arm. This methodology is suitable for restoring upper limb function in individuals with motor disabilities of the forearm, but not of the upper arm. The developed control paradigm is applicable to upper-limb orthotic systems employing functional electrical stimulation, and the proposed approach is of particular significance for humans with spinal cord injuries in a free-living environment. The measurement system with dual-axis accelerometers developed for this study may further be applied to the evaluation of movement during the course of rehabilitation. For this purpose, training-related changes in synergies, apparent from movement kinematics during rehabilitation, would characterize the extent and course of recovery. As such, a simple system using this methodology is of particular importance for stroke patients. The results underline the important issue of upper-limb coordination.
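A radial basis function network of the kind used here can be sketched in one dimension: Gaussian kernels centered at the training inputs and weights obtained by exact interpolation. The acceleration data are hypothetical stand-ins, and a one-dimensional mapping is a simplification of the actual upper-arm/forearm trajectories:

```python
import math

# Minimal Gaussian RBF network: centers at the training inputs, weights
# from exact interpolation (hypothetical upper-arm -> forearm data).
X = [0.0, 0.5, 1.0, 1.5, 2.0]     # upper-arm tangential angular acceleration
Y = [0.0, 0.4, 0.9, 1.2, 1.3]     # corresponding forearm angular acceleration
SIGMA = 0.5

def phi(x, c):
    return math.exp(-((x - c) ** 2) / (2 * SIGMA ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

G = [[phi(x, c) for c in X] for x in X]   # interpolation (Gram) matrix
w = solve(G, Y)                           # network output weights

def predict(x):
    """Network output: weighted sum of Gaussian basis functions."""
    return sum(wi * phi(x, c) for wi, c in zip(w, X))
```

Generalization then depends on how close a new trajectory lies to the training synergies, which matches the abstract's observation of high correlation for similar movements and low correlation otherwise.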
Abstract:
The mortality rate of older patients with intertrochanteric fractures has been increasing with the aging of populations in China. The purpose of this study was: 1) to develop an artificial neural network (ANN) using clinical information to predict the 1-year mortality of elderly patients with intertrochanteric fractures, and 2) to compare the ANN's predictive ability with that of logistic regression models. The ANN model was tested against actual outcomes of an intertrochanteric femoral fracture database in China. The ANN model was generated with eight clinical inputs and a single output, and its performance was compared with a logistic regression model created with the same inputs in terms of accuracy, sensitivity, specificity, and discriminability. The study population was composed of 2150 patients (679 males and 1471 females): 1432 in the training group and 718 new patients in the testing group. The ANN model with eight neurons in the hidden layer had the highest accuracies among the four ANN models: 92.46 and 85.79% in the training and testing datasets, respectively. The areas under the receiver operating characteristic curves of the automatically selected ANN model for the two datasets were 0.901 (95%CI=0.814-0.988) and 0.869 (95%CI=0.748-0.990), higher than the 0.745 (95%CI=0.612-0.879) and 0.728 (95%CI=0.595-0.862) of the logistic regression model. The ANN model can be used to predict 1-year mortality in elderly patients with intertrochanteric fractures. It outperformed logistic regression on multiple performance measures when given the same variables.
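The ROC comparison at the heart of this study reduces to the Mann-Whitney statistic: the area under the ROC curve equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with hypothetical risk scores for the same seven patients:

```python
# AUC via the Mann-Whitney statistic (ties count 1/2).  Scores are
# hypothetical; labels: 1 = died within 1 year, 0 = survived.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels    = [1, 1, 1, 0, 0, 0, 0]
ann_score = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2, 0.1]   # hypothetical ANN outputs
lr_score  = [0.7, 0.5, 0.4, 0.8, 0.6, 0.3, 0.2]   # hypothetical LR outputs

auc_ann = auc(ann_score, labels)
auc_lr  = auc(lr_score, labels)
```

A higher AUC means better discriminability regardless of any single decision threshold, which is why the study reports AUCs (with confidence intervals) alongside accuracy, sensitivity and specificity.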
Abstract:
This work presents the results of a hybrid neural network (HNN) technique applied to modeling SCFE curves obtained from two Brazilian vegetable matrices. A series hybrid neural network was employed to estimate the parameters of the phenomenological model. A small set of SCFE data for each vegetable was used to generate an extended data set, sufficient to train the network. Afterwards, other sets of experimental data, not used in the network training, were used to validate the approach. The series HNN correlates the experimental data well, and it is shown that the predictions obtained with this technique may be promising for SCFE purposes.
Abstract:
Artificial neural networks are computational techniques that use a mathematical model capable of acquiring knowledge through experience; this intelligent behaviour of the network arises from the interactions between processing units called artificial neurons. The objective of this work was to build a neural network capable of predicting the stability of vegetable oils from data on their chemical composition, aiming at a model for predicting the shelf life of vegetable oils using only chemical composition data as parameters. The first steps in developing the network consisted of collecting data relevant to the problem and splitting it into a training set and a test set. These sets had chemical composition data as variables, including the total contents of fatty acids, phenols and tocopherols, and the individual fatty acid composition. The next step was training, in which the input pattern presented to the network as the stability parameter was the peroxide value, determined experimentally over a period of 16 days of storage in the absence of light at 65 ºC. After training, the predictive capacity acquired by the network was tested, in terms of the adopted stability parameter, on a new group of oils. Following the test, the linear correlation between the stability values predicted by the network and those determined experimentally was calculated. The results confirm the feasibility of predicting the stability of vegetable oils with a neural network from chemical composition data, using the peroxide value as the stability parameter.
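The final validation step, linear correlation between network-predicted and experimentally determined stability, can be sketched as follows (the peroxide values are hypothetical):

```python
import math

# Pearson linear correlation between network-predicted and measured
# peroxide values (hypothetical data for illustration).
measured  = [2.1, 4.8, 7.5, 10.2, 13.0]
predicted = [2.4, 4.5, 7.9,  9.8, 12.6]

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(measured, predicted)
```

A correlation close to 1 on oils not used in training is the evidence the study relies on to claim the network has genuinely learned the composition-stability relationship.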
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining these data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended for implementation on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
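Of the three families of methods, the filter approach is the simplest to sketch: score each variant independently and keep the top k. The toy genotypes and the allele-frequency-difference score below are illustrative; wrapper and embedded methods would instead score variant subsets through a predictive model:

```python
# Univariate "filter" feature selection on toy GWAS-style data.
# Genotypes coded as minor-allele counts 0/1/2; rows = individuals.
cases = [
    [2, 0, 1, 0],
    [1, 0, 2, 1],
    [2, 1, 2, 0],
]
controls = [
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]

def allele_freq(genotypes, j):
    """Minor-allele frequency of variant j (two alleles per individual)."""
    return sum(row[j] for row in genotypes) / (2 * len(genotypes))

n_variants = len(cases[0])
score = [abs(allele_freq(cases, j) - allele_freq(controls, j))
         for j in range(n_variants)]

k = 2  # number of variants to keep
selected = sorted(range(n_variants), key=lambda j: score[j], reverse=True)[:k]
```

Because each variant is scored in isolation, filters scale easily to half a million features, but they miss interactions between variants; that trade-off is one reason the work finds no universally optimal selection algorithm, and why any choice of k must itself be validated inside the cross-validation loop.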