902 results for Dynamic search fireworks algorithm with covariance mutation
Abstract:
Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA associated with Mycobacterium tuberculosis. This problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to predict the free energy of binding correctly, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been used successfully in drug-design-related applications, especially considering that decision trees are simple to understand, interpret, and validate. Several decision-tree induction algorithms are available for general use, but each has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide a biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor.
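The hyper-heuristic machinery behind this automatic design is not given in the abstract; as a minimal sketch of the general idea, evolving decision-tree induction configurations scored by cross-validation on the target data set, consider the following (the search space, the scikit-learn tree as a stand-in inducer, and the placeholder data set are illustrative assumptions, not the authors' method):

```python
# Minimal sketch: evolutionary search over decision-tree configurations.
# The search space and the scikit-learn tree standing in for a full
# induction algorithm are illustrative assumptions, not the paper's method.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

SPACE = {
    "criterion": ["gini", "entropy"],   # split-criterion component
    "max_depth": [3, 5, 8, None],       # stopping-criterion component
    "min_samples_leaf": [1, 5, 10],     # pruning-like component
}

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(config, X, y):
    # Cross-validated accuracy of the candidate algorithm on the target data.
    return cross_val_score(DecisionTreeClassifier(**config), X, y, cv=5).mean()

def mutate(config):
    key, values = random.choice(list(SPACE.items()))
    child = dict(config)
    child[key] = random.choice(values)
    return child

def evolve(X, y, pop_size=12, generations=8):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, X, y), reverse=True)
        elite = pop[: pop_size // 2]                 # keep the best half
        pop = elite + [mutate(random.choice(elite))  # mutate to refill
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda c: fitness(c, X, y))

X, y = load_breast_cancer(return_X_y=True)  # placeholder for docking data
print(evolve(X, y))
```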
Abstract:
The influence of the partial pressure of carbon dioxide (CO2) on the thermal decomposition of a calcite (CI) and a dolomite (DP) is investigated in this paper using a thermogravimetric analyser. The tests were non-isothermal, at five different heating rates, in a dynamic atmosphere of air with 0% and 15% CO2. In the atmosphere without CO2, the average activation energies (Eα) were 197.4 kJ mol⁻¹ and 188.1 kJ mol⁻¹ for CI and DP, respectively. For the DP with 15% CO2, two decomposition steps were observed, indicating a change of mechanism. The values of Eα for 15% CO2 were 378.7 kJ mol⁻¹ for the CI, and 299.8 kJ mol⁻¹ (first decomposition) and 453.4 kJ mol⁻¹ (second decomposition) for the DP, showing that the determination of Eα for DP should in this case be carried out separately in those two distinct regions. The results obtained in this study are relevant to understanding how CO2 partial pressure changes the thermal decomposition behaviour of limestones in technologies, such as carbon capture and storage (CCS), in which carbon dioxide is present in high concentrations.
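The abstract does not name the kinetic method used to extract Eα from runs at several heating rates; one common isoconversional choice is the Ozawa-Flynn-Wall approximation, in which, at fixed conversion, ln β plotted against 1/T is a line of slope -1.052 Eα/R. A minimal sketch under that assumption (all numbers illustrative, not data from the paper):

```python
# Minimal Ozawa-Flynn-Wall sketch (an assumption: the abstract does not
# name the isoconversional method used). At a fixed conversion alpha,
# ln(beta) vs 1/T is approximately linear with slope -1.052*E/R.
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def ofw_activation_energy(betas_K_per_min, T_at_alpha_K):
    """E_alpha (kJ/mol) from heating rates and the temperatures at which
    a fixed conversion alpha is reached in each run."""
    x = 1.0 / np.asarray(T_at_alpha_K)
    y = np.log(np.asarray(betas_K_per_min))
    slope = np.polyfit(x, y, 1)[0]      # least-squares line ln(beta) vs 1/T
    return -slope * R / 1.052 / 1000.0  # kJ/mol

# Hypothetical demo values for five heating rates (not data from the paper):
betas = [2.5, 5, 10, 20, 40]
T_at_50pct = [950, 970, 992, 1015, 1040]  # K, illustrative only
print(f"E_alpha ~ {ofw_activation_energy(betas, T_at_50pct):.1f} kJ/mol")
```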
Abstract:
To analyze the musculoskeletal alterations of individuals with ankylosing spondylitis (AS) and their repercussions on postural control, a literature review was carried out in the BIREME and EBSCOhost databases and on the PubMed site using the keywords "ankylosing spondylitis", "postural balance", and "posture". Articles were selected that involved human subjects and analyzed the postural control and biomechanics of individuals with AS, in English and Portuguese, published between 1999 and 2010. Of all the articles found, only four met the requirements. Of these, three compared the results of patients with AS with data obtained from healthy individuals, and one analyzed only individuals with AS. No two articles used the same method of postural analysis. Balance was assessed with the Berg Balance Scale, a force platform, and magnetometry. The main postural deviations found were increased thoracic kyphosis and hip flexion, which shift the body's center of gravity forward, with knee flexion and ankle plantar flexion appearing as compensations to maintain balance. Only one author found worse functional balance in subjects with AS. All the assessment methods used were considered capable of measuring balance, and no scale specific to patients with AS exists.
Abstract:
OBJECTIVE: To review the psychometric properties of the Beck Depression Inventory-II (BDI-II) as a self-report measure of depression in a variety of settings and populations. METHODS: Relevant studies of the BDI-II were retrieved through a search of electronic databases, a hand search, and contact with authors. Retained studies (k = 118) were allocated into three groups: non-clinical, psychiatric/institutionalized, and medical samples. RESULTS: The internal consistency was around 0.9 and the retest reliability ranged from 0.73 to 0.96. The correlation between the BDI-II and the Beck Depression Inventory (BDI-I) was high, and substantial overlap with measures of depression and anxiety was reported. The criterion-based validity showed good sensitivity and specificity for detecting depression in comparison with the adopted gold standard. However, the cutoff score to screen for depression varied according to the type of sample. Factor analysis showed a robust dimension of general depression composed of two constructs: cognitive-affective and somatic-vegetative. CONCLUSIONS: The BDI-II is a relevant psychometric instrument, showing high reliability, capacity to discriminate between depressed and non-depressed subjects, and improved concurrent, content, and structural validity. Based on the available psychometric evidence, the BDI-II can be viewed as a cost-effective questionnaire for measuring the severity of depression, with broad applicability for research and clinical practice worldwide.
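Internal-consistency figures like the 0.9 quoted above are usually Cronbach's alpha, although the abstract does not name the coefficient; a minimal computation over an items-by-respondents score matrix might look like this (toy data, not BDI-II responses):

```python
# Minimal Cronbach's alpha sketch (assumption: the abstract's "internal
# consistency" is Cronbach's alpha; the coefficient is not named there).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical toy data: 5 respondents x 4 items (not BDI-II data).
demo = [[1, 2, 1, 2], [0, 1, 0, 1], [3, 3, 2, 3], [2, 2, 2, 1], [1, 0, 1, 0]]
print(f"alpha = {cronbach_alpha(demo):.2f}")
```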
Abstract:
This work proposes a system for the classification of industrial steel pieces by means of a magnetic nondestructive device. The proposed classification system comprises two main stages: an online stage and an off-line optimization stage. In the online stage, the system classifies inputs and saves misclassification information for later analysis. In the off-line optimization stage, the topology of a Probabilistic Neural Network is optimized by a Feature Selection algorithm combined with the Probabilistic Neural Network to increase the classification rate. The proposed Feature Selection algorithm searches the signal spectrogram by combining three basic elements: a Sequential Forward Selection algorithm, a Feature Cluster Grow algorithm with classification-rate gradient analysis, and a Sequential Backward Selection algorithm. Also, a trash-data recycling algorithm is proposed to obtain optimal feedback samples selected from the misclassified ones.
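Only the names of the selection elements are given in the abstract; as a sketch of the Sequential Forward Selection element wrapped around a minimal Probabilistic Neural Network (a Gaussian Parzen-window classifier), with hypothetical data standing in for the spectrogram features:

```python
# Sketch: Sequential Forward Selection around a minimal Probabilistic
# Neural Network (Gaussian Parzen-window classifier). The pipeline's other
# elements (Feature Cluster Grow, SBS, trash-data recycling) are omitted.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0):
    preds = []
    for x in X_test:
        scores = {}
        for c in np.unique(y_train):
            d2 = ((X_train[y_train == c] - x) ** 2).sum(axis=1)
            scores[c] = np.exp(-d2 / (2 * sigma ** 2)).mean()  # class PDF estimate
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

def accuracy(feats, X_tr, y_tr, X_va, y_va):
    return (pnn_predict(X_tr[:, feats], y_tr, X_va[:, feats]) == y_va).mean()

def sfs(X_tr, y_tr, X_va, y_va, n_feats):
    """Greedily add the feature that most improves validation accuracy."""
    selected, remaining = [], list(range(X_tr.shape[1]))
    while len(selected) < n_feats:
        best = max(remaining,
                   key=lambda f: accuracy(selected + [f], X_tr, y_tr, X_va, y_va))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical spectrogram features: 100 samples x 12 features, 2 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
y = (X[:, 3] + X[:, 7] > 0).astype(int)
print(sfs(X[:70], y[:70], X[70:], y[70:], n_feats=3))
```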
Abstract:
This thesis concerns optimization procedures for damped systems. In particular, the systems studied are shear-type structures subjected to seismic actions applied at the base. The optimization acts on the story stiffnesses and the damping coefficients, redistributing these quantities among the stories of the structure. Optimizing damped systems is of interest for seismic design, since it reduces the deformed shape of the structure and, consequently, the internal forces acting on it. The work consists of six chapters covering three numerical-analytical procedures for the optimization of shear-type systems. The first chapter studies the optimization of shear-type systems by acting on suitably constrained transfer functions. Here the design variables are the story stiffnesses, while the damping coefficients and the story masses are known and constant throughout the iterative computation; dynamic control of the structure is pursued by seeking an almost rectilinear deformed shape. This condition is reached by setting the amplitudes of the transfer functions of the interstory drifts equal to the amplitude of the transfer function of the first story. At the end of the procedure, the total stiffness is redistributed among the stories: stiffness increases in the lower stories, which are the most stressed by an action applied at the base, and progressively decreases in the upper stories. The second chapter applies this procedure numerically, using a Matlab program, to systems with three and five degrees of freedom. The second numerical-analytical procedure is presented in the third chapter. It concerns the optimization of damped systems by acting simultaneously on stiffness and damping, and consists of two phases. The first phase finds the optimal design of the structure for specific values of the total stiffness and total damping, while the second phase examines a series of optimal designs as functions of different values of total stiffness and total damping. In the first phase, dynamic control of the structure is achieved by minimizing the sum of the mean-square interstory drifts. The design variables, updated after each iteration, are the story stiffnesses and the damping coefficients. A constraint is placed on the total amounts of stiffness and damping, and the stiffness and damping coefficient of each story must not exceed an upper bound set at the start of the procedure. Here too, stiffnesses and damping coefficients are redistributed among the stories until the objective function is minimized. The first phase reduces the deformed shape of the structure by minimizing the sum of the mean-square interstory drifts, but it increases the mean-square absolute acceleration of the top story.

To keep this last quantity within acceptable limits, the second phase reduces the acceleration by increasing the total amount of damping. The procedure for optimizing damped systems by acting simultaneously on stiffness and damping is applied numerically in chapter four, using a Matlab program, to systems with two and five degrees of freedom. The last part of the thesis, reported in the fifth chapter, generalizes the procedure to a system equipped with base isolators. Seismic isolation of a building (a passive control system) means inserting, between the structure and its foundations, devices that are very flexible horizontally yet rigid vertically. These devices reduce the transmission of ground motion to the structure by decoupling the motion of the superstructure from that of the ground. Inserting the isolators increases the natural period of the structure, moving it away from the region of the response spectrum with the highest accelerations. The main advantage of base isolation is the possibility of completely eliminating, or at least markedly reducing, damage to all structural and non-structural parts of a building. This is crucial for buildings that must remain operational after a violent earthquake, such as hospitals and emergency-management centers. In isolated structures, a substantial reduction in interstory drifts and relative accelerations is observed. The optimization procedure is modified to account for LRB base isolators. These consist of elastomer layers (which dissipate energy, decouple the motion, and keep displacements acceptable) alternating with steel plates (which provide good resistance to crushing), making their vertical deformability negligible; the elastomer layers have low stiffness against horizontal displacements. The optimization procedure is applied to an N-degree-of-freedom shear-type frame with added viscous dampers. With the introduction of the base isolator, the N-degree-of-freedom system becomes an (N+1)-degree-of-freedom system, since the isolator is modeled as an additional story with an equivalent stiffness and damping. For the base-isolated shear-type system, since the isolator acts both on the interstory drifts and on the accelerations transmitted to the structure, a new objective function is considered that minimizes the augmented sum of the mean-square interstory drifts and accelerations. The design quantities are the damping coefficients and the story stiffnesses of the superstructure. At the end of the procedure, the design variables are again redistributed among the stories; in this case, however, the superstructure is much less stressed, since all the deformations are absorbed by the isolation system.

Finally, the displacement at the base of the isolator is checked, since it could reach excessive values: the design code indicates 25 cm as the limit value of the base displacement, and larger displacements create problems, above all for the realization of adequate seismic joints. The optimization procedure for base-isolated systems is applied numerically, using a Matlab program, in the sixth chapter to systems with three and five degrees of freedom. The base displacements are also checked by exciting the structure with the El Centro and Northridge earthquakes. The results show that the procedure is effective and that the base displacements remain within the code limit. It is worth noting that the isolation system markedly reduces the quantities affecting the superstructure, which behaves as a rigid body above the isolator. Future work could study isolated structures with other types of base isolators, not only elastomeric devices, and could model the base isolator with a bilinear hysteretic model, comparing the results with those already obtained for the linear model.
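In symbols, the first phase of the simultaneous stiffness-damping procedure described above can be stated as a constrained minimization; the symbols k_i, c_i, and σ_{d_i} are introduced here for illustration and follow directly from the description:

```latex
% Phase one, as described above: minimize the sum of the mean-square
% interstory drifts over the story stiffnesses k_i and damping
% coefficients c_i (symbols introduced here for illustration).
\begin{aligned}
\min_{k_1,\dots,k_N,\; c_1,\dots,c_N} \quad & \sum_{i=1}^{N} \sigma_{d_i}^{2}
  && \text{(mean-square interstory drifts)} \\
\text{subject to} \quad & \sum_{i=1}^{N} k_i = K_{\mathrm{tot}}, \qquad
  \sum_{i=1}^{N} c_i = C_{\mathrm{tot}}
  && \text{(fixed totals)} \\
& 0 \le k_i \le k_{\max}, \qquad 0 \le c_i \le c_{\max}
  && \text{(story upper bounds)}
\end{aligned}
```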
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, sequencing alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. Annotation is done at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods must be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable, and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work was focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is independence from the biases present in the training dataset, which cause over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by my modification of the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was shown to greatly improve the prediction of GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis proposed in the literature that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available: BaCelLo at http://gpcr.biocomp.unibo.it/bacello, eSLDB at http://gpcr.biocomp.unibo.it/esldb, and GPIPE at http://gpcr.biocomp.unibo.it/gpipe.
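The Balanced SVM modification itself is not detailed in this abstract; a standard way to make an SVM insensitive to class imbalance is to weight the misclassification penalty per class, as in this hedged scikit-learn sketch (synthetic data, not BaCelLo's implementation):

```python
# Sketch: class-weighted SVM as a stand-in for BaCelLo's Balanced SVM
# (the actual modification is not detailed in the abstract). Per-class
# penalty weighting is a standard way to offset training-set imbalance.
from sklearn.datasets import make_classification
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical imbalanced data standing in for localization classes.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = SVC().fit(X_tr, y_tr)
balanced = SVC(class_weight="balanced").fit(X_tr, y_tr)  # per-class C weighting

for name, model in [("plain", plain), ("balanced", balanced)]:
    print(name, balanced_accuracy_score(y_te, model.predict(X_te)))
```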
Abstract:
Water distribution network optimization is a challenging problem due to the dimension and the complexity of these systems. Since the second half of the twentieth century this field has been investigated by many authors. Recently, to cope with the discrete nature of the variables and the nonlinearity of the equations, research has focused on the development of heuristic algorithms. These algorithms do not require continuity or linearity of the problem functions because they are coupled to an external hydraulic simulator that solves the mass-continuity and energy-conservation equations of the network. In this work, an NSGA-II (Non-dominated Sorting Genetic Algorithm) has been used. This is a heuristic multi-objective genetic algorithm based on the analogy of evolution in nature. Starting from an initial random set of solutions, called a population, it evolves them towards a front of solutions that minimize all the objectives separately and simultaneously. This can be very useful in practical problems, where multiple, conflicting goals are common. Usually, one of the main drawbacks of these algorithms is their computational cost: being a stochastic search, many solutions must be analyzed before good ones are found. The results of this thesis on the classical optimal design problem show that it is possible to improve results by modifying the mathematical definition of the objective functions and the survival criterion, by inserting good solutions created by a Cellular Automaton, and by using rules created by a classifier algorithm (C4.5). This part has been tested using the version of NSGA-II supplied by the Centre for Water Systems (University of Exeter, UK) in the MATLAB® environment. Even though guiding the search in this way can constrain the algorithm, at the risk of missing the optimal set of solutions, it can greatly improve the results. Subsequently, thanks to the help of CINECA, a version of NSGA-II has been implemented in the C language and parallelized: results on the global parallelization show the speed-up, while results on the island parallelization show that communication among islands can improve the optimization. Finally, some tests on the optimization of pump scheduling have been carried out. In this case, good results are found for a small network, while the solutions of a big problem are affected by the lack of constraints on the number of pump switches. Possible future research concerns the insertion of further constraints and the guidance of the evolution. In the end, the optimization of water distribution systems is still far from a definitive solution, but improvements in this field can be very useful in reducing the cost of solutions to practical problems, where the high number of variables makes manual management very difficult.
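The core of NSGA-II named above is the non-dominated sorting of the population into successive Pareto fronts; a minimal sketch for minimization problems (toy objective values, not the Centre for Water Systems code):

```python
# Minimal non-dominated sorting sketch, the core of NSGA-II (toy data;
# not the Centre for Water Systems implementation used in the thesis).
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical (cost, supply deficit) pairs for six network designs.
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (1.5, 4.5), (4.0, 4.0)]
print(non_dominated_sort(objs))  # indices grouped by Pareto front
```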
Abstract:
Traditional morphological examinations are no longer sufficient for a complete evaluation of tumoral tissue, and the use of neoplastic markers is of utmost importance. Neoplastic markers can be classified into diagnostic, prognostic, and predictive markers. Three markers were analyzed. 1) Insulin-like growth factor binding protein 2 (IGFBP2) was immunohistochemically examined in prostatic tissues: 40 radical prostatectomies from hormonally untreated patients with their preoperative biopsies, 10 radical prostatectomies from patients under complete androgen ablation before surgery, and 10 simple prostatectomies from patients with bladder outlet obstruction. Results were compared with α-methylacyl-CoA racemase (AMACR). IGFBP2 was expressed in the cytoplasm of untreated adenocarcinomas and, to a lesser extent, in HG-PIN; the expression was markedly lower in patients after complete androgen ablation. AMACR was expressed at similar levels in both adenocarcinoma and HG-PIN; its expression was slightly lower in patients after complete androgen ablation. IGFBP2 may be used as a diagnostic marker of prostatic adenocarcinomas. 2) Heparan sulfate proteoglycan immunohistochemical expression was examined in 150 oral squamous cell carcinomas. Follow-up information was available for 93 patients (range: 6-34 months, mean: 19±7). After surgery, chemotherapy was performed in 8 patients and radiotherapy in 61 patients. Multivariate and univariate overall survival analyses showed that high expression of syndecan-1 (SYN-1) was associated with a poor prognosis; in patients treated with radiotherapy, this association was stronger. SYN-1 is a prognostic marker in oral squamous cell carcinomas; it may also represent a predictive factor for responsiveness to radiotherapy. 3) EGFR was studied in 33 pulmonary adenocarcinomas with traditional DNA sequencing methods and with two mutation-specific antibodies. Overall, the two antibodies had 61.1% sensitivity and 100% specificity in detecting EGFR mutations. EGFR mutation-specific antibodies may represent a predictive marker to identify patients who are candidates for tyrosine kinase inhibitor therapy.
Abstract:
This doctoral dissertation aims to establish fiber-optic technologies that overcome the limiting issues of data communications in indoor environments. Specific applications are broadband mobile distribution in different in-building scenarios and high-speed digital transmission over short-range wired optical systems. Two key enabling technologies are considered: Radio over Fiber (RoF) techniques over standard silica fibers for distributed antenna systems (DAS), and plastic optical fibers (POFs) for short-range communications. Hence, the objectives and achievements of this thesis relate to the application of RoF and POF technologies in different in-building scenarios. On one hand, a theoretical and experimental analysis combined with demonstration activities has been performed on cost-effective RoF systems. Extensive modeling of the impact of modal noise on both the linear and non-linear characteristics of RoF links over silica multimode fiber was performed to derive link design rules for an optimum choice of the transmitter, receiver, and launching technique. Successful transmission of Long Term Evolution (LTE) mobile signals over the resulting optimized RoF system on silica multimode fiber, employing a Fabry-Perot laser diode, the central launch technique, and a photodiode with a built-in ball lens, was demonstrated up to 525 m with performance well compliant with standard requirements. On the other hand, digital signal processing techniques to overcome the bandwidth limitation of POF have been investigated. An uncoded net bit rate of 5.15 Gbit/s was obtained on a 50 m-long POF link employing an eye-safe transmitter, a silicon photodiode, and DMT modulation with a bit- and power-loading algorithm. With the insertion of 3×2^N quadrature amplitude modulation constellation formats, an uncoded net bit rate of 5.4 Gbit/s was obtained on a 50 m-long POF link employing an eye-safe transmitter and a silicon avalanche photodiode. Moreover, simultaneous transmission of 2 Gbit/s baseband DMT and a 200 Mbit/s ultra-wideband radio signal has been validated over a 50 m-long POF link.
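The bit- and power-loading step mentioned for the DMT links is typically a greedy allocation that gives each additional bit to the subcarrier needing the least extra power; a Levin-Campello-style sketch (the thesis's exact algorithm is not stated in the abstract, and the SNR values and power model are illustrative assumptions):

```python
# Greedy bit-loading sketch in the Levin-Campello spirit (the thesis's
# exact algorithm is not given in the abstract). At a fixed error rate,
# each extra bit on a subcarrier roughly doubles the required power.
import heapq

def bit_loading(snr_linear, total_bits, max_bits=10):
    """Distribute total_bits across subcarriers, minimizing total power."""
    bits = [0] * len(snr_linear)
    # Incremental power to go from b to b+1 bits: (2^(b+1)-2^b)/snr = 2^b/snr.
    heap = [(1.0 / snr, ch) for ch, snr in enumerate(snr_linear)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        cost, ch = heapq.heappop(heap)       # cheapest next bit
        bits[ch] += 1
        if bits[ch] < max_bits:
            heapq.heappush(heap, (2 ** bits[ch] / snr_linear[ch], ch))
    return bits

# Hypothetical per-subcarrier SNRs for a bandwidth-limited POF channel.
snrs = [300, 250, 120, 60, 20, 8, 3]
print(bit_loading(snrs, total_bits=18))  # bits per subcarrier
```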
Abstract:
This work investigates the slamming phenomenon experienced during the water entry of deformable bodies. Wedges are chosen as the reference geometry due to their similarity to a generic hull section. Hull slamming is a phenomenon occurring when a ship re-enters the water after having been partially or completely lifted out of the water. While the water entry of rigid structures has been extensively studied in the past, and there are analytical solutions capable of correctly predicting the hydrodynamic pressure distribution and the overall impact dynamics, the effect of the structural deformation on the structural force is still a challenging problem. In fact, in the case of the water impact of deformable bodies, the dynamic deflection can interact with the fluid flow, changing the hydrodynamic load. This work investigates the hull-slamming problem through experiments and numerical simulations of the water entry of elastic wedges impacting an initially calm surface. The effect of asymmetry, due to a horizontal velocity component or an initial tilt angle, on the impact dynamics is also studied. The objective of this work is to determine an accurate model to predict the overall dynamics of the wedge and its deformations. More than 1200 experiments were conducted by varying wedge structural stiffness, deadrise angle, impact velocity, and mass. Of interest are the overall impact dynamics and the local structural deformation of the panels composing the wedge. Alongside the experimental analysis, numerical simulations based on a coupled Smoothed Particle Hydrodynamics (SPH) and FEM method are developed. The experimental results provide evidence of the mutual interaction between hydrodynamic load and structural deformation. A simple criterion for the onset of fluid-structure interaction (FSI) is found, giving reliable information on the cases where FSI should be taken into account.
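As background for the analytical rigid-wedge solutions that the abstract contrasts with the deformable case, von Karman's classical momentum estimate can be quoted (β is the deadrise angle, V the constant entry speed, ρ the water density; quoted here as a textbook result, not as the thesis's model):

```latex
% von Karman's classical rigid-wedge water-entry estimate, quoted as
% background for the analytical solutions mentioned in the abstract.
% beta = deadrise angle, V = entry speed, rho = water density.
\begin{aligned}
c(t) &= \frac{V t}{\tan\beta}
  && \text{(wetted half-width, flat free surface)} \\
m_a(t) &= \tfrac{1}{2}\,\rho\,\pi\,c(t)^2
  && \text{(added mass per unit length)} \\
F(t) &= \frac{\mathrm{d}}{\mathrm{d}t}\bigl[m_a(t)\,V\bigr]
      = \rho\,\pi\,\frac{V^{3} t}{\tan^{2}\beta}
  && \text{(slamming force per unit length, constant } V\text{)}
\end{aligned}
```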
Abstract:
Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach is conducted in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated by use of a hybrid spectral classifier combining the Gaussian Maximum Likelihood algorithm (GML) and the ISODATA classifier. Third, a probabilistic relaxation algorithm was performed on the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by use of a region-growing algorithm with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. This software package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as a test site. High-resolution IRS-1C imagery was used as the principal input data.
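Step three, probabilistic relaxation, iteratively pulls each pixel's class probabilities toward agreement with its neighbours; a minimal sketch using 4-neighbour averaging as the compatibility model (illustrative only, not the thesis software):

```python
# Minimal probabilistic relaxation sketch for step three (4-neighbour
# mean as the compatibility model; illustrative, not the thesis code).
import numpy as np

def relax(prob, iters=10, weight=0.5):
    """prob: H x W x C array of per-pixel class probabilities."""
    p = prob.copy()
    for _ in range(iters):
        # Average of the four neighbours (edges padded by replication).
        pad = np.pad(p, ((1, 1), (1, 1), (0, 0)), mode="edge")
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                 + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
        p = (1 - weight) * p + weight * p * neigh  # support from neighbours
        p /= p.sum(axis=2, keepdims=True)          # renormalize per pixel
    return p

# Toy 2-class map: noisy "urban" blob on a "rural" background.
rng = np.random.default_rng(1)
p_urban = np.clip(rng.normal(0.3, 0.2, (32, 32)), 0.01, 0.99)
p_urban[8:24, 8:24] = np.clip(rng.normal(0.7, 0.2, (16, 16)), 0.01, 0.99)
prob = np.stack([p_urban, 1 - p_urban], axis=2)
smoothed = relax(prob)
print((smoothed.argmax(2) != prob.argmax(2)).sum(), "pixels relabelled")
```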
Abstract:
Background: The aim of this study is to analyse CDKN2A methylation using pyrosequencing on a large cohort of colorectal cancers and corresponding non-neoplastic tissues. In a second step, the effect of methylation on clinical outcome is addressed. Methods: Primary colorectal cancers and matched non-neoplastic tissues from 432 patients underwent CDKN2A methylation analysis by pyrosequencing (PyroMark Q96). Methylation was then related to clinical outcome, microsatellite instability (MSI), and BRAF and KRAS mutation. Different amplification conditions (35 to 50 PCR cycles) using a range of 0-100% methylated DNA were tested. Results: Background methylation was at most 10% with ≥35 PCR cycles. Correlation of observed and expected values was high, even at low methylation levels (0.02%, 0.6%, 2%). Accuracy of detection was optimal with 45 PCR cycles. Methylation in normal mucosa ranged from 0 to >90% in some cases. Based on the maximum background value of 10%, positivity was defined as a ≥20% difference in methylation between tumor and normal tissue, which occurred in 87 cases. CDKN2A methylation positivity was associated with MSI (p = 0.025), BRAF mutation (p < 0.0001), higher tumor grade (p < 0.0001), and mucinous histology (p = 0.0209), but not with KRAS mutation. CDKN2A methylation had an independent adverse effect on prognosis (p = 0.0058). Conclusion: The non-negligible CDKN2A methylation of normal colorectal mucosa may confound the assessment of tumor-specific hypermethylation, suggesting that corresponding non-neoplastic tissue should be used as a control. CDKN2A methylation is robustly detected by pyrosequencing, even at low levels, suggesting that this unfavorable prognostic biomarker warrants investigation in prospective studies.
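The positivity rule stated in the Results, a difference of at least 20 percentage points between tumor and matched normal mucosa given the ≤10% background, translates directly into code; a trivial sketch with hypothetical values:

```python
# The positivity rule as stated in the abstract: a tumor is
# CDKN2A-methylation positive when its methylation exceeds the matched
# normal mucosa by >= 20 percentage points. Values below are hypothetical.
def methylation_positive(tumor_pct, normal_pct, threshold=20.0):
    return (tumor_pct - normal_pct) >= threshold

pairs = [(45.0, 5.0), (25.0, 15.0), (60.0, 35.0)]  # (tumor %, normal %)
for t, n in pairs:
    print(t, n, "->", "positive" if methylation_positive(t, n) else "negative")
```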
Abstract:
Objectives: Recent anatomical-functional studies have transformed our understanding of cerebral motor control away from a hierarchical structure and toward parallel and interconnected specialized circuits. Subcortical electrical stimulation during awake surgery provides a unique opportunity to identify white matter tracts involved in motor control. For the first time, this study reports findings on motor modulatory responses evoked by subcortical stimulation and investigates the cortico-subcortical connectivity of cerebral motor control. Experimental design: Twenty-one selected patients were operated on while awake for frontal, insular, and parietal diffuse low-grade gliomas. Subcortical electrostimulation mapping was used to search for interference with voluntary movements. The corresponding stimulation sites were localized on brain schemas using the anterior and posterior commissures method. Principal observations: Subcortical negative motor responses were evoked in 20/21 patients, whereas acceleration of voluntary movements and positive motor responses were observed in three and five patients, respectively. The majority of the stimulation sites were detected rostral to the corticospinal tract near the vertical anterior-commissural line, and additional sites were seen in the frontal and parietal white matter. Conclusions: The diverse interferences with motor function, resulting in inhibition and acceleration, imply a modulatory influence of the detected fiber network. The subcortical stimulation sites were distributed veil-like, anterior to the primary motor fibers, suggesting descending pathways originating from premotor areas known for negative motor response characteristics. Further stimulation sites in the parietal white matter as well as in the anterior limb of the internal capsule indicate a large-scale fronto-parietal motor control network.
Abstract:
OBJECTIVE: Autopsy determination of fatal hemorrhage as the cause of death is often a difficult diagnosis in forensic medicine. No quantitative system for accurately measuring the blood volume in a corpse has been developed. MATERIALS AND METHODS: This article describes the measurement and evaluation of the cross-sectional areas of major blood vessels, the diameter of the right pulmonary artery, the volumes of the thoracic aorta and spleen on MDCT, and the volumes of the heart chambers on MRI in 65 autopsy-verified cases with or without fatal hemorrhage. RESULTS: Most cases with "fatal hemorrhage" as the cause of death had collapsed vessels. The finding of a collapsed superior vena cava, main pulmonary artery, or right pulmonary artery was 100% specific for fatal hemorrhage. The mean volumes of the thoracic aorta and of each of the heart chambers, and the mean cross-sectional areas of all vessels except the inferior vena cava and abdominal aorta, were significantly smaller in cases of fatal hemorrhage than in those without. CONCLUSION: For the quantitative differentiation of fatal hemorrhage from other causes of death, we propose a three-step algorithm based on the diameter of the right pulmonary artery, the cross-sectional area of the main pulmonary artery, and the volume of the right atrium (specificity, 100%; sensitivity, 95%). However, this algorithm must be corroborated in a prospective study, which would eliminate the limitations of this study. Quantitative postmortem cross-sectional imaging might become a reliable objective method to assess the question of fatal hemorrhage in forensic medicine.
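The proposed three-step algorithm checks the right pulmonary artery diameter, the main pulmonary artery cross-sectional area, and the right atrium volume; the abstract reports the resulting specificity and sensitivity but no numeric cutoffs, and does not say how the steps combine, so this sketch uses a simple conjunction with placeholder thresholds:

```python
# Sketch of the three-step screening structure described in the abstract.
# The abstract reports specificity/sensitivity but no numeric cutoffs:
# every threshold below is a hypothetical placeholder, not study data.
def fatal_hemorrhage_suspected(rpa_diameter_mm, mpa_area_mm2, ra_volume_ml,
                               rpa_cut=10.0, mpa_cut=300.0, ra_cut=30.0):
    """Flag a case when all three measurements fall below their cutoffs,
    i.e. vessels and right atrium appear collapsed/small."""
    return (rpa_diameter_mm < rpa_cut
            and mpa_area_mm2 < mpa_cut
            and ra_volume_ml < ra_cut)

print(fatal_hemorrhage_suspected(6.0, 150.0, 12.0))   # hypothetical case
print(fatal_hemorrhage_suspected(18.0, 520.0, 60.0))  # hypothetical case
```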