859 results for Fuzzy c-means algorithm
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one to determine the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using an s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all ELISA-positive or -inconclusive samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were elucidated by PCR (expanded algorithm), which detected two more positive samples. The estimated costs of algorithms A and B were US$21,299.39 and US$32,397.40, respectively, 43.5% and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
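The reflex-testing logic of algorithms A and B can be sketched as simple decision functions. This is an illustrative sketch only: the abstract states the s/co cutoff was chosen for ≥95% concordance with immunoblot positivity but does not give the numeric value, so `SCO_CUTOFF` below is a placeholder, not the study's cutoff.

```python
# Illustrative sketch of the two reflex-testing schemes described above.
SCO_CUTOFF = 8.0  # hypothetical value, not taken from the study


def algorithm_a(sco_ratio, ib_positive=None):
    """Algorithm A: high s/co ratios are reported positive without
    supplemental testing; the remainder are reflexed to immunoblot (IB)."""
    if sco_ratio >= SCO_CUTOFF:
        return "positive"
    if ib_positive is None:
        return "needs_IB"
    return "positive" if ib_positive else "negative"


def algorithm_b(pcr_positive, ib_positive=None):
    """Algorithm B: reflex PCR first; PCR-negative samples go to IB."""
    if pcr_positive:
        return "positive"
    if ib_positive is None:
        return "needs_IB"
    return "positive" if ib_positive else "negative"


print(algorithm_a(12.0))         # high s/co: no supplemental test needed
print(algorithm_a(1.5))          # low s/co: reflex to immunoblot
print(algorithm_b(False, True))  # PCR-negative but IB-positive
```

The practical difference reported in the abstract falls out of this structure: algorithm A short-circuits on high s/co ratios, so only a fraction of samples (54% in the study) ever reach a supplemental test.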
Abstract:
Chronic hepatitis B virus (HBV) and hepatitis C virus (HCV) infections are the most important factors associated with hepatocellular carcinoma (HCC), but tumor prognosis remains poor due to the lack of diagnostic biomarkers. In order to identify novel diagnostic markers and therapeutic targets, the gene expression profiles associated with viral and non-viral HCC were assessed in 9 tumor samples by oligo-microarrays. The differentially expressed genes were examined using z-scores and KEGG pathway analysis to search for ontological biological processes. We selected a non-redundant set of 15 genes with the lowest P values for clustering samples into three groups using the non-supervised k-means algorithm. Fisher’s linear discriminant analysis was then applied in an exhaustive search for trios of genes that could be used to build classifiers for class distinction. Different transcriptional levels of genes were identified in HCC of different etiologies and from different HCC samples. When comparing HBV-HCC vs HCV-HCC, HBV-HCC/HCV-HCC vs non-viral (NV)-HCC, HBV-HCC vs NV-HCC, and HCV-HCC vs NV-HCC among the 58 non-redundant differentially expressed genes, only 6 genes (IKBKβ, CREBBP, WNT10B, PRDX6, ITGAV, and IFNAR1) were found to be associated with hepatic carcinogenesis. By combining trios, classifiers could be generated which correctly classified 100% of the samples. This expression profiling may provide a useful tool for research into the pathophysiology of HCC. A detailed understanding of how these distinct genes are involved in molecular pathways is of fundamental importance to the development of effective HCC chemoprevention and treatment.
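The non-supervised clustering step can be sketched with a plain k-means loop: assign each sample to its nearest center, then recompute centers as cluster means. The data and initial centers below are toy values for illustration, not the study's expression profiles.

```python
# Minimal k-means sketch: alternate nearest-center assignment and
# center recomputation. Initial centers are passed explicitly here
# so the toy example converges deterministically.

def kmeans(points, centers, iters=50):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        # recompute each center as the mean of its cluster
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# six toy 2-D "samples" forming three well-separated groups
samples = [(0.1, 0.0), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9),
           (10.0, 0.1), (9.9, -0.1)]
centers, clusters = kmeans(samples, [(0.1, 0.0), (5.0, 5.1), (10.0, 0.1)])
print([len(cl) for cl in clusters])  # -> [2, 2, 2]
```

With well-separated groups, as here, the assignment stabilizes after the first pass; on real expression data the choice of initial centers matters far more.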
Abstract:
Exposure to air pollutants is associated with hospitalizations due to pneumonia in children. We hypothesized that the length of hospitalization due to pneumonia may depend on air pollutant concentrations. Therefore, we built a computational model using fuzzy logic tools to predict the mean length of hospitalization due to pneumonia in children living in São José dos Campos, SP, Brazil. The model was built with four inputs related to pollutant concentrations and effective temperature, and the output was related to the mean length of hospitalization. Each input had two membership functions and the output had four membership functions, generating 16 rules. The model was validated against real data, and a receiver operating characteristic (ROC) curve was constructed to evaluate model performance. The values predicted by the model were significantly correlated with the real data. Sulfur dioxide and particulate matter significantly predicted the mean length of hospitalization at lags 0, 1, and 2. This model can contribute to the care provided to children with pneumonia.
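A fuzzy model of this shape (two membership functions per input, rule strengths combined by min, a crisp output recovered by weighted average) can be sketched as follows. This is a reduced, illustrative version: only two of the four inputs are shown, the universes and rule consequents are invented, and singleton outputs stand in for the published model's four output membership functions.

```python
# Illustrative two-input fuzzy sketch (the published model uses four
# inputs and 16 rules; all numeric values here are invented).

def mf_low(x, lo, hi):
    """Linear 'low' shoulder: 1 at or below lo, 0 at or above hi."""
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def mf_high(x, lo, hi):
    """Complement of the 'low' shoulder."""
    return 1.0 - mf_low(x, lo, hi)

def predict_stay(so2, pm):
    """Predict mean hospitalization length (days) from pollutant levels."""
    so2_low, so2_high = mf_low(so2, 5, 40), mf_high(so2, 5, 40)
    pm_low, pm_high = mf_low(pm, 10, 80), mf_high(pm, 10, 80)
    rules = [  # (firing strength, consequent in days) - illustrative
        (min(so2_low, pm_low), 2.0),
        (min(so2_low, pm_high), 5.0),
        (min(so2_high, pm_low), 5.0),
        (min(so2_high, pm_high), 8.0),
    ]
    # weighted-average defuzzification
    return sum(w * y for w, y in rules) / sum(w for w, _ in rules)

print(predict_stay(5, 10))   # both pollutants low  -> 2.0 days
print(predict_stay(40, 80))  # both pollutants high -> 8.0 days
```

Intermediate pollutant levels interpolate smoothly between the rule consequents, which is what lets a model like this track a continuous outcome such as mean length of stay.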
Abstract:
Low-level lasers are used at low power densities and doses according to clinical protocols supplied with laser devices or based on professional practice. Although use of these lasers is increasing in many countries, the molecular mechanisms involved in the effects of low-level lasers, mainly on DNA, are controversial. In this study, we evaluated the effects of low-level red lasers on the survival, filamentation, and morphology of Escherichia coli cells exposed to ultraviolet C (UVC) radiation. Exponential- and stationary-phase wild-type and uvrA-deficient E. coli cells were exposed to a low-level red laser and subsequently to UVC radiation. Bacterial survival was evaluated to determine the laser protection factor (the ratio between the number of viable cells after exposure to the red laser and UVC and the number of viable cells after exposure to UVC alone). Bacterial filaments were counted to obtain the percentage of filamentation. Area-perimeter ratios were calculated to evaluate cellular morphology. Experiments were carried out in duplicate and the results are reported as the means of three independent assays. Pre-exposure to the red laser protected wild-type and uvrA-deficient E. coli cells against the lethal effect of UVC radiation, and increased the percentage of filamentation and the area-perimeter ratio, depending on UVC fluence and the physiological conditions of the cells. Thus, therapeutic low-level red laser radiation can induce DNA lesions at a sub-lethal level. The consequences for cells and tissues should be considered when clinical protocols based on this laser are carried out.
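The two quantitative endpoints defined above reduce to simple ratios; a direct transcription, with made-up counts purely for illustration:

```python
def protection_factor(viable_laser_then_uvc, viable_uvc_only):
    """Laser protection factor as defined in the text: viable cells after
    red laser + UVC over viable cells after UVC alone (>1 = protection)."""
    return viable_laser_then_uvc / viable_uvc_only

def percent_filamentation(filament_count, total_cells):
    """Percentage of filamented cells in a counted population."""
    return 100.0 * filament_count / total_cells

# made-up counts for illustration only
print(protection_factor(3.2e5, 1.6e5))  # -> 2.0
print(percent_filamentation(30, 200))   # -> 15.0
```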
Abstract:
During the enzymatic process of cheese manufacturing, rennin cleaves κ-casein, releasing two fractions: para-κ-casein and glycomacropeptide (GMP), which remains soluble in milk whey. GMP is a peptide with structural particularities, such as carbohydrate chains linked to specific threonine residues, to which a great variety of biological activities is attributed. Worldwide cheese production has increased, generating high volumes of milk whey that could be efficiently used as an alternative source of high-quality peptide or protein in foodstuff formulations. In order to evaluate the isolation and recovery of whey GMP by means of thermal treatment (90 °C), 18 samples (2 L each) of sweet whey, resuspended commercial whey (positive control), and acid whey (negative control) were processed. The presence of GMP was verified indirectly using chemical tests and 15% SDS-PAGE. In sweet whey treated at 90 °C, bands at 14, 20, and 41 kDa were observed; these bands may correspond to oligomers of GMP. Peptide recovery averaged 1.5 g/L (34.08%). The results indicate that industrial-scale GMP production is feasible; however, further research must be carried out for the biological and nutritional evaluation of the incorporation of GMP into foodstuffs as a supplement.
Abstract:
Modifications to the commercial hydride generator, manufactured by Spectrametrics, resulted in an improved operating procedure and enhancement of the arsenic and germanium signals. Experiments with arsenic(III) and arsenic(V) showed that identical results could be produced from both oxidation states. However, since arsenic(V) is reduced more slowly than arsenic(III), peak areas and not peak heights must be measured when the arsine is immediately stripped from the system (approximately 5 seconds reaction). When the reduction is allowed to proceed for 20 seconds before the arsine is stripped, peak heights may be used. For a 200 ng/mL solution, the relative standard deviation is 2.8% for As(III) and 3.8% for As(V). The detection limit for arsenic using the modified system is 0.50 ng/mL. Studies performed on As(V) standards show that the interferences from 1000 mg/L of nickel(II), cobalt(II), iron(III), copper(II), cadmium(II), and zinc(II) can be eliminated with the aid of 5 M HCl and 3% L-cysteine. Conditions for the reduction of germanium to the corresponding hydride were investigated. The effect of different concentrations of HCl on the reduction of germanium to the covalent hydride in aqueous media by means of NaBH4 solutions was assessed. Results show that the best response is accomplished at a pH of 1.7. The use of buffer solutions was similarly characterized. In both cases, results showed that the element is best reduced when the final pH of the solution after reaction is almost neutral. In addition, a more sensitive method, which includes the use of (NH4)2S2O8, has been developed. A 20% increase in the germanium signal is registered when compared to the signal achieved with HCl alone. Moreover, under these conditions, reduction of germanium could be accomplished even when the solution's pH is neutral. For a 100 ng/mL germanium standard the relative standard deviation is 3%.
The detection limit for germanium in 0.05 M HCl medium (pH 1.7) is 0.10 ng/mL, and 0.09 ng/mL when ammonium persulphate is used in conjunction with HCl. Interferences from 1000 mg/L of iron(III), copper(II), cobalt(II), nickel(II), cadmium(II), lead(II), mercury(II), aluminum(III), tin(IV), arsenic(III), arsenic(V), and zinc(II) were studied and characterized. In this regard, the use of (NH4)2S2O8 and HCl at a pH of 1.7 proved to be a successful mixture for the suppression of the interferences caused by iron, copper, aluminum, tin, lead, and arsenic. The method was applied to the determination of germanium in cherts and iron ores. In addition, experiments with tin(IV) showed that a 15% increase in the tin signal can be accomplished in the presence of 1 mL of 10% (m/V) (NH4)2S2O8.
Abstract:
At head of title: [78].
Abstract:
This letter mentions that Eleanore Celeste has not heard from Arthur in nearly a month. She hopes that this means he will be coming home soon from France. It is labelled number 175.
Abstract:
An uncertainty factor of 10 is applied by default when deriving toxicological reference values in environmental health, in order to account for interindividual variability in the population. The toxicokinetic component of this variability corresponds to the square root of 10, i.e., 3.16. Its validity has previously been studied on the basis of pharmaceutical data collected from various populations (adults, children, the elderly). The value of 3.16 can thus be compared with the human kinetic adjustment factor (HKAF), which is the ratio between a high percentile (e.g., the 95th) of the internal dose distribution in presumed-sensitive subgroups and its median in adults, or within a general population. However, experimental human data on environmental pollutants are scarce. Moreover, these substances generally have properties appreciably different from those of drugs. It is therefore difficult to validate, for pollutants, estimates made from drug data. To address this problem, physiologically based toxicokinetic (PBTK) modeling has been used to simulate the interindividual variability of internal doses during exposure to pollutants. However, studies to date have provided little assessment of the impact of exposure conditions (i.e., route, duration, intensity), of the physico/biochemical properties of pollutants, and of the characteristics of the exposed population on the value of the HKAF, and hence on the validity of the default value of 3.16. The work in this thesis aims to fill these gaps. Using Monte Carlo simulations, a PBTK model was first used to simulate the interindividual variability of internal doses (i.e., in adults, the elderly, children, and pregnant women) of drinking-water contaminants during oral, inhalation, or dermal exposure.
Secondly, such a model was used to simulate this variability during inhalation of contaminants at varying intensities and durations. Next, a probabilistic steady-state toxicokinetic algorithm was used to estimate the interindividual variability of internal doses during chronic exposure to hypothetical contaminants with varying physico/biochemical properties. Volatility, fraction metabolized, metabolic pathway, and oral bioavailability were each the subject of specific analyses. Finally, the impact of the chosen reference group and of demographic characteristics on the value of the human kinetic adjustment factor (HKAF) during chronic inhalation was evaluated, again using a steady-state toxicokinetic algorithm. The internal dose distributions generated under the various scenarios were used to compute the HKAF in each case, following the approach described above. This study shed light on the various determinants of toxicokinetic sensitivity according to the subgroup and the internal dose metric considered. It characterized the determinants of the HKAF, and hence the cases in which it exceeds the default value of 3.16 (up to 28.3), observed almost exclusively in neonates and for the parent compound. This thesis contributes to improving knowledge in the field of toxicological risk analysis by characterizing the HKAF under various considerations.
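The HKAF computation described above, a high percentile of the internal-dose distribution in a presumed-sensitive subgroup divided by the adult median, can be sketched with a toy Monte Carlo draw. The lognormal parameters below are invented for illustration and are not taken from the thesis.

```python
import math
import random

def hkaf(sensitive_doses, adult_doses, percentile=0.95):
    """Human kinetic adjustment factor: a high percentile of the
    internal-dose distribution in a sensitive subgroup divided by
    the median of the adult distribution."""
    s = sorted(sensitive_doses)
    a = sorted(adult_doses)
    return s[int(percentile * (len(s) - 1))] / a[len(a) // 2]

rng = random.Random(1)
# toy lognormal internal-dose distributions (parameters are illustrative)
adults = [math.exp(rng.gauss(0.0, 0.3)) for _ in range(10_000)]
neonates = [math.exp(rng.gauss(0.7, 0.5)) for _ in range(10_000)]

print(hkaf(adults, adults))    # within-adult spread: well under 3.16
print(hkaf(neonates, adults))  # shifted, wider subgroup: exceeds 3.16
```

This mirrors the thesis's finding qualitatively: when a subgroup's dose distribution is both shifted and wider than the adults' (as reported for neonates), the HKAF can exceed the default factor of 3.16.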
Abstract:
The simulation movies accompanying the document were produced with PyMOL.
Abstract:
In the 19th century, economic debates in the United States, far from being confined to academia, were among the main points of contention between the political parties and echoed widely in the public sphere. Newspapers were then the principal means of communicating the views of the different parties. This study aims to contextualize and delineate the positions taken during the 1850s, in the most influential newspaper of the period, the New York Tribune, by the most important American economist of his era, Henry Charles Carey (1793-1879), recognized as such in their own time by J.S. Mill and Karl Marx. Doing so first required identifying Carey's unsigned articles in the newspaper, something that had never been done before. When he wrote for the leading American organ defending protection in the United States as a means of industrializing the country, Carey was the most preeminent representative of the American System of economics. The latter, founded on the writings of Alexander Hamilton, advocated the industrialization of the United States and state intervention in defense of the common good, in opposition to the English liberal school based on the writings of Adam Smith. Conceptually, Carey's economic thought belongs to the tradition of the Other Canon, grounded in production and innovation. This led him to oppose vigorously both Malthusianism and the international division of labor, justified theoretically by Ricardo's thesis of comparative advantage. Indeed, in his analysis, the desire expressed by England in the mid-19th century to become the workshop of the world, and to make the remaining nations producers of raw materials under a free-trade regime, was nothing other than the continuation of colonial policy by other means.
For Carey, specialization in the export of raw materials, notably advocated by the planters of the American South, far from benefiting the country, was a sure guarantee of poverty, as the cases of Ireland and India demonstrated.
Abstract:
Ordering in a binary alloy is studied by means of a molecular-dynamics (MD) algorithm which makes it possible to reach the domain-growth regime. Results are compared with Monte Carlo simulations using a realistic vacancy-atom (MC-VA) mechanism. At low temperatures, fast growth with a dynamical exponent x > 1/2 is found for both MD and MC-VA. The study of a nonequilibrium ordering process with the two methods shows the importance of the nonhomogeneity of the excitations in the system in determining its macroscopic kinetics.
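The dynamical exponent quoted above is the slope of a log-log fit of characteristic domain size against time, L(t) ~ t^x. A minimal least-squares estimate of that slope, run on synthetic data with a known exponent (the data are illustrative, not from the simulations):

```python
import math

def growth_exponent(times, sizes):
    """Least-squares slope of log(size) vs. log(time),
    i.e. the exponent x in L(t) ~ t**x."""
    xs = [math.log(t) for t in times]
    ys = [math.log(s) for s in sizes]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# synthetic growth data with a known exponent of 0.6 (i.e. > 1/2)
times = [10.0, 100.0, 1000.0]
sizes = [t ** 0.6 for t in times]
print(growth_exponent(times, sizes))  # -> 0.6 (up to rounding)
```

An exponent above 1/2 extracted this way is what distinguishes the fast growth reported for MD and MC-VA from the classical Allen-Cahn value of 1/2.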
Abstract:
It is believed that every fuzzy generalization should be formulated in such a way that it contains the ordinary set-theoretic notion as a special case. Therefore, the definition of fuzzy topology along the lines of C.L. Chang [9], with an arbitrary complete and distributive lattice as the membership set, is adopted. Almost all the results proved and presented in this thesis can, in a sense, be called generalizations of corresponding results in ordinary set theory and set topology. However, the tools and the methods have to be, in many cases, new. Here an attempt is made to solve the problem of complementation in the lattice of fuzzy topologies on a set. It is proved that, in general, the lattice of fuzzy topologies is not complemented. Complements of some fuzzy topologies are determined. It is observed that (L,X) is not uniquely complemented. However, a complete analysis of the problem of complementation in the lattice of fuzzy topologies remains open.
Abstract:
MicroRNAs (miRNAs) are short non-coding RNAs that can regulate gene expression during various crucial cell processes such as differentiation, proliferation, and apoptosis. Changes in miRNA expression profiles play an important role in the development of many cancers, including colorectal cancer (CRC). Therefore, the identification of cancer-related miRNAs and their target genes is important for cancer biology research. In this paper, we applied a TSK-type recurrent neural fuzzy network (TRNFN) to infer the miRNA–mRNA association network from paired miRNA and mRNA expression profiles of CRC patients. We demonstrate that the proposed method achieved good performance in recovering known, experimentally verified miRNA–mRNA associations. Moreover, our approach proved successful in identifying 17 validated cancer miRNAs directly involved in CRC-related pathways. Targeting such miRNAs may help not only to prevent the recurrence of disease but also to control the growth of advanced metastatic tumors. Our regulatory modules provide valuable insights into the pathogenesis of cancer.
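The TSK (Takagi-Sugeno-Kang) inference at the core of a network like TRNFN combines Gaussian rule antecedents with linear consequents, the output being the firing-strength-weighted average. The sketch below shows only that inference step; the recurrent feedback of TRNFN is omitted, and all rule parameters are invented for illustration.

```python
import math

def gauss_mf(x, center, sigma):
    """Gaussian membership function used as a rule antecedent."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def tsk_infer(x, rules):
    """First-order TSK inference: firing-strength-weighted average of
    linear consequents. rules: list of ((center, sigma), (a, b)),
    each rule's consequent being y = a*x + b."""
    num = den = 0.0
    for (center, sigma), (a, b) in rules:
        w = gauss_mf(x, center, sigma)
        num += w * (a * x + b)
        den += w
    return num / den

# two toy rules mapping a scalar "expression level" to a regulatory score
rules = [((0.0, 1.0), (0.0, 0.0)),   # near 0  -> score ~0
         ((10.0, 1.0), (0.0, 1.0))]  # near 10 -> score ~1
print(tsk_infer(0.0, rules))
print(tsk_infer(10.0, rules))
```

In a trained network, the antecedent centers/widths and the consequent coefficients are learned from the paired expression profiles; here they are fixed by hand to keep the mechanics visible.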