927 results for Negative Selection Algorithm
Abstract:
Background: The ratio of the rates of non-synonymous and synonymous substitution (dN/dS) is commonly used to estimate selection in coding sequences. It is often suggested that, all else being equal, dN/dS should be lower in populations with large effective size (Ne), due to the increased efficacy of purifying selection. As Ne is difficult to measure directly, life history traits such as body mass, which is typically negatively associated with population size, have commonly been used as proxies in empirical tests of this hypothesis. However, evidence for the expected positive correlation between body mass and dN/dS is conflicting. Results: Employing whole-genome sequence data from 48 avian species, we assess the relationship between rates of molecular evolution and life history in birds. We find a negative correlation between dN/dS and body mass, contrary to nearly neutral expectation. This raises the question of whether the correlation might be a methodological artefact. We therefore consider non-stationary base composition, divergence time, and saturation as possible explanations, but find no clear patterns. However, in striking contrast to dN/dS, the ratio of radical to conservative amino acid substitutions (Kr/Kc) correlates positively with body mass. Conclusions: Our results in principle accord with the notion that non-synonymous substitutions causing radical amino acid changes are more efficiently removed by selection in large populations, consistent with nearly neutral theory. These findings have implications for the use of dN/dS and suggest that caution is warranted when drawing conclusions about lineage-specific modes of protein evolution using this metric.
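The dN/dS statistic discussed above can be illustrated with a minimal counting sketch in Python. This is illustrative only: real estimators such as Nei-Gojobori also correct for multiple substitutions at a site, which this sketch omits, and the counts below are invented.

```python
def dn_ds(nonsyn_subs, nonsyn_sites, syn_subs, syn_sites):
    """Crude counting estimate of omega = dN/dS.

    dN = non-synonymous substitutions per non-synonymous site,
    dS = synonymous substitutions per synonymous site.
    No correction for multiple hits is applied.
    """
    dn = nonsyn_subs / nonsyn_sites
    ds = syn_subs / syn_sites
    return dn / ds

# omega < 1 suggests purifying selection, omega > 1 positive selection.
print(dn_ds(10, 400, 20, 200))  # 0.25
```

Under nearly neutral theory, a larger effective population size should push this ratio further below 1 for the same mutational input, which is the expectation the study tests.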
Abstract:
The objective of this work was to select surviving Litopenaeus vannamei breeders from a white spot syndrome virus (WSSV) outbreak, adapted to local climatic conditions and diagnosed negative for WSSV and infectious hypodermal and hematopoietic necrosis virus (IHHNV), and to evaluate whether this strategy is a viable alternative for production in Santa Catarina, Brazil. A total of 800 males and 800 females were phenotypically selected in a farm pond. Nested-PCR analyses of 487 sexually mature females and 231 sexually mature males showed that 63% of the females and 55% of the males were infected with IHHNV. Animals free of IHHNV were tested for WSSV, and those considered double negative were used for breeding. The post-larvae produced were stocked in nine nursery tanks for analysis. Of the 45 samples, with 50 post-larvae each, only two were positive for IHHNV and none for WSSV. Batches of larvae diagnosed virus-free by nested PCR were sent to six farms. A comparative analysis between local post-larvae and post-larvae from Northeast Brazil was carried out in growth ponds. Crabs (Chasmagnathus granulata), blue crabs (Callinectes sapidus), and sea hares (Aplysia brasiliana), which are possible vectors of these viruses, were also evaluated. Mean survival was 55% for local post-larvae against 23.4% for post-larvae from the Northeast. Sea hares showed a WSSV prevalence of 50%, and crabs, of 67%.
Abstract:
STUDY DESIGN: Retrospective database query to identify all anterior spinal approaches. OBJECTIVES: To assess all patients with pharyngo-cutaneous fistulas (PCF) after anterior cervical spine surgery (ACSS). SUMMARY OF BACKGROUND DATA: Patients treated at the University of Heidelberg Spine Medical Center, Spinal Cord Injury Unit, and Department of Otolaryngology (Germany), between 2005 and 2011, with the diagnosis of pharyngo-cutaneous fistula. METHODS: We conducted a retrospective study of 5 patients treated between 2005 and 2011 for PCF after ACSS, their therapy management, and their outcome according to radiologic data and patient charts. RESULTS: Upon presentation, 4 patients were paraplegic. Two had PCF arising from one piriform sinus, two from the posterior pharyngeal wall and piriform sinus combined, and one only from the posterior pharyngeal wall. Two had undergone previous unsuccessful surgical repair elsewhere, and 1 had received prior radiation therapy. In 3 patients, speech and swallowing were completely restored; 2 patients, both paraplegic, died. Patients needed an average of 2-3 procedures for complete functional recovery, consisting of primary closure with various vascularised regional flaps and refining laser procedures, supplemented with negative pressure wound therapy where needed. CONCLUSION: Based on our experience, we provide a treatment algorithm indicating that chronic, as opposed to acute, fistulas require primary surgical closure combined with a vascularised flap, accompanied by immediate application of negative pressure wound therapy. We also conclude that, particularly in paraplegic patients, the risk of a fatal outcome from this complication is substantial.
Abstract:
The objective of this work was to isolate strains of lactic acid bacteria with probiotic potential from the digestive tract of marine shrimp (Litopenaeus vannamei) and to carry out in vitro selection based on multiple characters. The ideotype (ideal proposed strain) was defined by the highest averages for the traits maximum growth velocity, final count of viable cells, and inhibition halo against nine freshwater and marine pathogens, and by the lowest averages for the traits duplication time and resistance of strains to NaCl (1.5 and 3%), pH (6, 8, and 9), and bile salts (5%). The Mahalanobis distance (D²) was estimated among the evaluated strains, and the best strains were those with the shortest distances to the ideotype. Ten bacterial strains were isolated and biochemically identified as Lactobacillus plantarum (3), L. brevis (3), Weissella confusa (2), Lactococcus lactis (1), and L. delbrueckii (1). Lactobacillus plantarum strains showed a wide spectrum of action and the largest inhibition halos against both Gram-positive and Gram-negative pathogens, a high growth rate, and tolerance to all evaluated parameters. In relation to the ideotype, L. plantarum showed the lowest Mahalanobis distance (D²), followed by the strains of W. confusa, L. brevis, L. lactis, and L. delbrueckii. Among the analyzed bacterial strains, those of Lactobacillus plantarum have the greatest potential for use as a probiotic for marine shrimp.
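The ranking by Mahalanobis distance (D²) to an ideotype can be sketched numerically as follows. The trait values, trait set, and the `mahalanobis_to_ideotype` helper are invented for illustration; the study used more traits and strains.

```python
import numpy as np

def mahalanobis_to_ideotype(strains, ideotype):
    """Mahalanobis D^2 from each strain to the ideotype.

    strains:  (n_strains, n_traits) trait matrix
    ideotype: (n_traits,) vector of ideal trait values
    Smaller D^2 means closer to the ideal strain.
    """
    cov = np.cov(strains, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singular cov
    diff = strains - ideotype
    # D^2_i = diff_i . cov_inv . diff_i for each strain i
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# hypothetical traits: [growth rate, inhibition halo (mm), doubling time (h)]
strains = np.array([[0.9, 12.0, 1.1],
                    [0.5,  6.0, 2.0],
                    [0.8, 11.0, 1.2]])
ideotype = np.array([1.0, 13.0, 1.0])
d2 = mahalanobis_to_ideotype(strains, ideotype)
```

Selecting the strain with the smallest D² (`np.argmin(d2)`) mirrors the "shortest distance to the ideotype" criterion described above.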
Abstract:
The objective of this work was to list potential candidate bee species for environmental risk assessment (ERA) of genetically modified (GM) cotton and to identify the bee species most suited for this task, according to their abundance and geographical distribution. Field inventories of bees on cotton flowers were performed in the states of Bahia and Mato Grosso, and in Distrito Federal, Brazil. During 344 hours of sampling, 3,470 bees from 74 species were recovered at eight sites. Apis mellifera dominated the bee assemblages at all sites. Sampling at two sites that received no insecticide application was sufficient to identify the three most common and geographically widespread wild species: Paratrigona lineata, Melissoptila cnecomola, and Trigona spinipes, which could be useful indicators of pollination services in the ERA. Indirect ordination of common wild species revealed that insecticides reduced the number of native bee species and that interannual variation in bee assemblages may be low. Accumulation curves of rare bee species did not saturate, as expected in tropical and megadiverse regions. Species-based approaches are of limited use for analyzing negative impacts of GM cotton on pollinator biological diversity. The accumulation rate of rare bee species, however, may be useful for evaluating possible negative effects of GM cotton on bee diversity.
Abstract:
The objective of this work was to evaluate carbon isotope fractionation as a phenomic facility for cotton selection in contrasting environments and to assess its relationship with yield components. The experiments were carried out in a randomized block design, with four replicates, in the municipalities of Santa Helena de Goiás (SHGO) and Montividiu (MONT), in the state of Goiás, Brazil. The analysis of carbon isotope discrimination (Δ) was performed on 15 breeding lines and three cultivars. Subsequently, the root growth kinetics and root system architecture of the selected genotypes were determined. In both locations, Δ analyses were suitable for discriminating cotton genotypes. There was a positive correlation between Δ and seed-cotton yield in SHGO, where water deficit was more severe. At this site, the negative correlations found between Δ and fiber percentage indicate an integrative effect of gas exchange on Δ and its association with yield components. As for root robustness and growth kinetics, the performance of the GO 05 809 genotype helps to explain the highest values of Δ found in MONT, where edaphoclimatic conditions were more suitable for cotton. The use of Δ analysis as a phenomic facility can help to select cotton genotypes with higher efficiency of gas exchange and water use.
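Carbon isotope discrimination Δ is conventionally derived from δ¹³C measurements of air and plant tissue using the standard Farquhar-style relation. A minimal sketch follows; the input values are typical illustrative figures for C3 plants, not the study's measurements.

```python
def carbon_discrimination(delta_air, delta_plant):
    """Carbon isotope discrimination (per mil):

        Delta = (delta_air - delta_plant) / (1 + delta_plant / 1000)

    delta_air:   d13C of atmospheric CO2, roughly -8 per mil
    delta_plant: d13C of plant tissue; C3 species such as cotton
                 are typically around -27 per mil
    """
    return (delta_air - delta_plant) / (1 + delta_plant / 1000.0)

print(round(carbon_discrimination(-8.0, -27.0), 2))  # 19.53
```

Higher Δ generally reflects a higher ratio of intercellular to ambient CO₂, which links the measurement to the gas-exchange and water-use efficiency traits discussed above.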
Abstract:
Summary Background: The combination of the Pulmonary Embolism Severity Index (PESI) and troponin testing could help physicians identify appropriate patients with acute pulmonary embolism (PE) for early hospital discharge. Methods: This prospective cohort study included a total of 567 patients from a single-center registry with objectively confirmed acute symptomatic PE. On the basis of the PESI, each patient was classified into 1 of 5 classes (I to V). At the time of hospital admission, patients had troponin I (cTnI) levels measured. The endpoint of the study was all-cause mortality within 30 days after diagnosis. We calculated the mortality rates in 4 patient groups: group 1: PESI class I-II plus cTnI < 0.1 ng mL⁻¹; group 2: PESI classes III-V plus cTnI < 0.1 ng mL⁻¹; group 3: PESI classes I-II plus cTnI ≥ 0.1 ng mL⁻¹; and group 4: PESI classes III-V plus cTnI ≥ 0.1 ng mL⁻¹. Results: The study cohort had a 30-day mortality of 10% (95% confidence interval [CI], 7.6 to 12.5%). Mortality rates in the 4 groups were 1.3%, 14.2%, 0%, and 15.4%, respectively. Compared to non-elevated cTnI, the low-risk PESI had a higher negative predictive value (NPV) (98.9% vs 90.8%) and a lower negative likelihood ratio (NLR) (0.1 vs 0.9) for predicting mortality. The addition of non-elevated cTnI to low-risk PESI did not improve the NPV or the NLR compared to either test alone. Conclusions: Compared to cTnI testing, PESI classification more accurately identified patients with PE who are at low risk of all-cause death within 30 days of presentation.
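The NPV and negative likelihood ratio used to compare the two tests come from a standard 2x2 table. A minimal sketch follows; the counts are invented for illustration (not the study's actual table), chosen so the low-risk-PESI figures land near the reported NPV of 98.9% and NLR of 0.1.

```python
def npv_nlr(tn, fn, tp, fp):
    """Negative predictive value and negative likelihood ratio for a
    binary risk test, where 'test positive' means 'classified high risk'
    and the event is 30-day death.

        NPV = TN / (TN + FN)
        NLR = (1 - sensitivity) / specificity
    """
    npv = tn / (tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return npv, (1.0 - sensitivity) / specificity

# Illustrative counts only: 'negative' = low-risk PESI class (I-II).
npv, nlr = npv_nlr(tn=180, fn=2, tp=55, fp=330)
print(round(npv, 3), round(nlr, 2))  # 0.989 0.1
```

A low NLR means a negative (low-risk) result strongly reduces the post-test probability of death, which is why PESI outperformed cTnI here.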
Abstract:
Intensity Modulated RadioTherapy (IMRT) is a treatment technique that uses beams with modulated radiation fluence. IMRT, now widespread in industrialized countries, achieves better dose homogeneity inside the target volume and lowers the dose to organs at risk in complex clinical cases. A common way to carry out beam modulation is to sum small beams (segments) that share the same incidence; this technique is called step-and-shoot IMRT. In a clinical context, treatment plans must be verified before the first irradiation, and this verification is still not addressed satisfactorily. An independent calculation of the monitor units (representative of the weight of each segment) cannot be performed for step-and-shoot IMRT, because the segment weights are not known a priori but are computed during inverse planning. Moreover, verifying treatment plans by comparison with measurements is time consuming and is usually performed in a simplified geometry, typically a cubic water phantom with all machine angles set to zero, which does not reproduce the exact treatment geometry. In this work, an independent monitor unit calculation method for step-and-shoot IMRT is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc, whose model of the linear accelerator head was validated against measured dose distributions over a large range of situations. The segments of an IMRT treatment plan are simulated individually in the exact geometry of the treatment, and the resulting dose distributions are converted into absorbed dose to water per monitor unit. The total treatment dose in each volume element (voxel) of the patient can then be expressed as a linear matrix equation of the monitor units and the dose per monitor unit of each segment. This equation is solved using the Non-Negative Least Squares (NNLS) algorithm. Because computational limitations prevent using every voxel of the patient volume, several voxel-selection strategies were tested; the best choice is to use the voxels contained in the Planning Target Volume (PTV). The method was tested on eight clinical cases representative of usual radiotherapy treatments, and the monitor units obtained lead to global dose distributions clinically equivalent to those of the treatment planning system. This independent monitor unit calculation method for step-and-shoot IMRT is therefore validated for clinical use. By analogy, a similar method could be applied to other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
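The final step above, solving the linear matrix equation for the monitor units under a non-negativity constraint, can be sketched with SciPy's `nnls` routine. The dose matrix and monitor units below are invented toy numbers, not clinical data.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical example: 4 PTV voxels, 3 segments.
# A[i, j] = Monte Carlo dose to voxel i per monitor unit of segment j.
A = np.array([[0.8, 0.1, 0.05],
              [0.2, 0.7, 0.10],
              [0.1, 0.2, 0.60],
              [0.3, 0.3, 0.30]])
true_mu = np.array([120.0, 80.0, 150.0])  # monitor units to recover
d_total = A @ true_mu                     # planned dose in each voxel

# Solve min ||A @ mu - d_total|| subject to mu >= 0.
mu, residual = nnls(A, d_total)
```

The non-negativity constraint matters physically: a segment cannot deliver negative monitor units, so an unconstrained least-squares fit could produce meaningless solutions.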
Abstract:
OBJECTIVE: To review the available knowledge on the epidemiology and diagnosis of acute infections in children aged 2 to 59 months in the primary care setting, and to develop an electronic algorithm for the Integrated Management of Childhood Illness (IMCI) to reach optimal clinical outcomes and rational use of medicines. METHODS: A structured literature review in Medline, Embase, and the Cochrane Database of Systematic Reviews (CDSR) looked for available estimates of disease prevalence in outpatients aged 2-59 months, and for available evidence on i) the accuracy of clinical predictors and ii) the performance of point-of-care tests for targeted diseases. A new algorithm for the management of childhood illness (ALMANACH) was designed based on the evidence retrieved and on the results of a study on etiologies of fever in Tanzanian outpatient children. FINDINGS: The major changes in ALMANACH compared to IMCI (2008 version) are the following: i) assessment of 10 danger signs; ii) classification of non-severe children into febrile and non-febrile illness, the latter receiving no antibiotics; iii) classification of pneumonia based on a respiratory rate threshold of 50, assessed twice, for febrile children aged 12-59 months; iv) a malaria rapid diagnostic test performed for all febrile children; and, in the absence of an identified source of fever at the end of the assessment, v) a urine dipstick performed for febrile children <2 years to consider urinary tract infection; vi) classification of 'possible typhoid' for febrile children >2 years with abdominal tenderness; and lastly vii) classification of 'likely viral infection' in case of negative results. CONCLUSION: This smartphone-run algorithm, based on new evidence and two point-of-care tests, should improve the quality of care of children under 5 years and lead to more rational use of antimicrobials.
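The branching logic summarized in points i-vii can be sketched as a simple decision function. This is a drastically simplified illustration of the ordering of the checks; the real ALMANACH assesses far more signs and conditions, and the function name and flags are invented.

```python
def almanach_classify(age_months, danger_signs, fever, resp_rate_high_twice,
                      malaria_rdt_pos, abdominal_tenderness, urine_dipstick_pos):
    """Toy sketch of the ALMANACH triage order described above."""
    if danger_signs:                       # i) any of the 10 danger signs
        return 'severe: refer'
    if not fever:                          # ii) non-febrile: no antibiotics
        return 'non-febrile illness: no antibiotics'
    if resp_rate_high_twice:               # iii) RR > 50, measured twice
        return 'pneumonia'
    if malaria_rdt_pos:                    # iv) rapid test for all febrile
        return 'malaria'
    if age_months < 24 and urine_dipstick_pos:   # v) dipstick if < 2 years
        return 'urinary tract infection'
    if age_months > 24 and abdominal_tenderness:  # vi) > 2 years
        return 'possible typhoid'
    return 'likely viral infection'        # vii) all tests negative
```

Encoding the pathway as explicit branches is what makes such an algorithm runnable on a smartphone and auditable against the guideline.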
Abstract:
As wireless communications evolve towards heterogeneous networks, mobile terminals have been enabled to hand over seamlessly from one network to another. At the same time, the continuous increase in terminal power consumption has resulted in an ever-decreasing battery lifetime. Network selection is therefore expected to play a key role in minimizing energy consumption, and thus in extending terminal lifetime. Hitherto, terminals select the network that provides the highest received power. However, it has been proved that this solution does not provide the highest energy efficiency. Thus, this paper proposes an energy-efficient vertical handover algorithm that selects the most energy-efficient network, i.e., the one that minimizes uplink power consumption. The performance of the proposed algorithm is evaluated through extensive simulations and is shown to achieve high energy-efficiency gains compared to the conventional approach.
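The contrast between the two selection rules can be sketched in a few lines. The network names and power figures below are hypothetical, and the paper's actual algorithm involves a fuller uplink energy model than this comparison of fixed numbers.

```python
def select_network(candidates):
    """Compare the conventional rule (maximize received downlink power)
    with the energy-efficient rule (minimize required uplink power).

    candidates: list of (name, received_power_dBm, uplink_power_mW)
    Returns (conventional_choice, energy_efficient_choice).
    """
    conventional = max(candidates, key=lambda c: c[1])[0]
    energy_efficient = min(candidates, key=lambda c: c[2])[0]
    return conventional, energy_efficient

nets = [('LTE-macro', -70, 200.0),   # strong downlink, costly distant uplink
        ('WLAN',      -80,  40.0),   # weaker downlink, cheap nearby uplink
        ('femtocell', -75,  60.0)]
print(select_network(nets))  # ('LTE-macro', 'WLAN')
```

The example captures the paper's point: the strongest received signal (LTE-macro) is not the network that drains the battery least on the uplink.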
Abstract:
In this study, feature selection in classification problems is highlighted. The role of feature selection methods is to select important features by discarding redundant and irrelevant features in the data set; we investigated this using fuzzy entropy measures. We developed a fuzzy entropy based feature selection method using Yu's similarity and tested it with a similarity classifier, also based on Yu's similarity, on a real-world dermatology data set. Performing feature selection based on fuzzy entropy measures before classification gave very promising empirical results: the highest classification accuracy achieved was 98.83%. These results were then compared with results previously obtained using other similarity classifiers, and they show better accuracy than those achieved before. The methods used helped to reduce the dimensionality of the data set and to speed up the computation time of the learning algorithm, thereby simplifying the classification task.
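The core idea, ranking features by the fuzzy entropy of their similarity values and discarding the most ambiguous ones, can be sketched as follows. This uses the classic De Luca-Termini entropy as a stand-in; the study's exact Yu's-similarity formulation is not reproduced here, and the example values are invented.

```python
import math

def de_luca_termini_entropy(memberships):
    """De Luca-Termini fuzzy entropy of membership values in [0, 1]:
    H = -sum(m*ln(m) + (1-m)*ln(1-m)). Near-crisp values (close to 0
    or 1) contribute little; values near 0.5 contribute the most."""
    h = 0.0
    for m in memberships:
        if 0.0 < m < 1.0:
            h -= m * math.log(m) + (1.0 - m) * math.log(1.0 - m)
    return h

def rank_features(similarities_per_feature):
    """Rank features by fuzzy entropy of their similarities to the class
    ideal vectors: low entropy = the feature separates classes crisply;
    high entropy = ambiguous feature, a candidate for removal."""
    scores = {f: de_luca_termini_entropy(s)
              for f, s in similarities_per_feature.items()}
    return sorted(scores, key=scores.get)  # lowest entropy first

sims = {'f1': [0.95, 0.9, 0.05, 0.1],   # crisp similarities -> keep
        'f2': [0.5, 0.55, 0.45, 0.5]}   # ambiguous -> discard
print(rank_features(sims))  # ['f1', 'f2']
```

Dropping the tail of this ranking before training is what reduces dimensionality and speeds up the classifier, as described above.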
Abstract:
Seven selection indexes based on the phenotypic value of the individual plant and the mean performance of its family were assessed for their application in breeding of self-pollinated plants. There is no clear superiority of one index over another, although some show one or more negative aspects, such as favoring the selection of a top-performing plant from an inferior family to the detriment of an excellent plant from a superior family.
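A generic combined index of the kind described above weighs the plant's own phenotype against its family mean. The weights below are hypothetical; each of the seven indexes in the study combines the two components differently.

```python
def combined_index(plant_value, family_mean, w_individual=0.5, w_family=0.5):
    """Generic combined selection index I = w1*P + w2*F, where P is the
    plant's phenotypic value and F is its family mean (weights invented)."""
    return w_individual * plant_value + w_family * family_mean

# A top plant from an inferior family vs. a good plant from a superior family:
i1 = combined_index(10.0, 4.0)   # 7.0
i2 = combined_index(8.0, 9.0)    # 8.5
```

With equal weights, the family-mean term keeps the index from favoring the outstanding plant from the poor family (i1 < i2), which is exactly the failure mode some of the assessed indexes exhibit when the individual term dominates.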
Abstract:
The paper-and-pencil digit-comparison task for assessing negative priming (NP) was introduced, using a referent-size-selection procedure that was demonstrated to enhance the effect. NP is indicated by slower responses to recently ignored items, and is proposed within the clinical-experimental framework as a major cognitive index of active suppression of distracting information, critical to executive functioning. The digit-comparison task requires circling digits in a list with digit-asterisk pairs (a baseline measure for digit selection), and the larger of two digits in each pair of the unrelated list (with different digits in successive digit pairs) and the related list (in which the smaller digit subsequently became a target). A total of 56 students (18-38 years) participated in two experiments that explored practice effects across lists and demonstrated reliable NP, i.e., slowing to complete the related list relative to the unrelated list (F(2, 44) = 52.42, P < 0.0001). A third experiment examined age-related effects. In the paper-and-pencil digit-comparison task, NP was reliable for the younger (N = 8, 18-24 years) and middle-aged adults (N = 8, 31-54 years), but absent for the older group (N = 8, 68-77 years). NP was also reduced with aging in a computer-implemented digit-comparison task, and preserved in a task typically used to test location-specific NP, accounting for the dissociation between identity- and spatial-based suppression of distractors (Rao R(3, 12) = 16.02, P < 0.0002). Since the paper-and-pencil digit-comparison task can be administered easily, it can be useful for neuropsychologists seeking practical measures of NP that do not require cumbersome technical equipment.
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining these data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper, and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
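The nested cross-validation safeguard mentioned above can be sketched with scikit-learn: feature selection and hyperparameter tuning live inside the pipeline, so the inner loop that picks the number of features never sees the outer test folds. The toy data set, filter method (univariate F-test), and classifier are stand-ins, not the work's actual models.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

# Toy stand-in for a genotype matrix: 200 'individuals' x 50 'variants'.
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# Feature selection INSIDE the pipeline: refit on every training fold,
# never on data used for evaluation, avoiding selection bias.
pipe = Pipeline([('select', SelectKBest(f_classif)),
                 ('clf', LogisticRegression(max_iter=1000))])

inner = GridSearchCV(pipe, {'select__k': [5, 10, 20]}, cv=3)  # tunes k
outer_scores = cross_val_score(inner, X, y, cv=5)             # honest estimate
print(round(outer_scores.mean(), 3))
```

Selecting features on the full data set before cross-validating, by contrast, leaks test-fold information into the selection step and produces the overly optimistic accuracies the text warns about.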
Abstract:
The issue of selecting an appropriate healthcare information system is essential. If an implemented healthcare information system does not fit a particular healthcare institution (for example, because it contains unnecessary functions), the institution wastes its resources and its efficiency decreases. The purpose of this research is to develop a healthcare information system selection model to assist the decision-making process of choosing a healthcare information system. An appropriate healthcare information system helps healthcare institutions to become more effective and efficient and to keep up with the times. The research is based on a comparative analysis of 50 healthcare information systems and 6 interviews with experts from St. Petersburg healthcare institutions that already have experience in healthcare information system utilization. Thirteen characteristics of healthcare information systems, 5 key and 7 additional features, are identified and considered in the development of the selection model. Variables are used in the selection model to narrow the decision algorithm and to avoid duplication of branches. The questions in the healthcare information system selection model are designed to be easy to understand for a typical decision-maker in a healthcare institution without a permanent IT establishment.