959 results for CMF, molecular cloud, extraction algorithm


Relevance: 30.00%

Abstract:

A simple, sensitive and selective cloud point extraction procedure is described for the preconcentration and atomic absorption spectrometric determination of Zn2+ and Cd2+ ions in water and biological samples, after complexation with 3,3',3'',3'''-tetraindolyl(terephthaloyl)dimethane (TTDM) in basic medium, using Triton X-114 as the nonionic surfactant. Detection limits of 3.0 and 2.0 µg L-1 and quantification limits of 10.0 and 7.0 µg L-1 were obtained for Zn2+ and Cd2+ ions, respectively. Relative standard deviations were 2.9% and 3.3%, and enrichment factors 23.9 and 25.6, for Zn2+ and Cd2+, respectively. The method enabled the determination of low levels of Zn2+ and Cd2+ ions in urine, blood serum and water samples.
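For orientation, detection and quantification limits of this kind are conventionally derived from replicate blank measurements and the calibration slope (the generic 3σ/10σ convention; the abstract does not state which criterion was used):

\mathrm{LOD} = \frac{3\,s_{\text{blank}}}{m}, \qquad \mathrm{LOQ} = \frac{10\,s_{\text{blank}}}{m}

where s_blank is the standard deviation of the blank signal and m the slope of the calibration curve. The reported LOQ/LOD pairs (10.0/3.0 and 7.0/2.0 µg L-1) are roughly consistent with the 10/3 ratio this convention implies.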

Relevance: 30.00%

Abstract:

A method has been developed for the extraction and spectrophotometric determination of Hg2+ in the concentration range 0.2-1.0 mg L-1, following the Lambert-Beer law, using high-molecular-weight quaternary ammonium salts dissolved in chloroform. The metal complex anion was determined in the extract in the UV region (260 nm).
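For reference, the Lambert-Beer relation underlying the determination (the standard form; symbols are generic, not values from this work):

A = \varepsilon \ell c

where A is the absorbance measured at 260 nm, \varepsilon the molar absorptivity of the complex, \ell the optical path length, and c the concentration of the Hg2+ complex.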

Relevance: 30.00%

Abstract:

The number of molecular diagnostic assays has increased tremendously in recent years. Nucleic acid diagnostic assays have been developed, especially for the detection of human pathogenic microbes and of genetic markers predisposing to certain diseases. Closed-tube methods are preferred because they are usually faster and easier to perform than heterogeneous methods; in addition, target nucleic acids are commonly amplified, creating a risk of contaminating subsequent reactions with the amplification product if the tubes are opened. The present study introduces a new closed-tube, switchable, complementation-probe-based PCR assay concept in which two non-fluorescent probes form a fluorescent lanthanide chelate complex in the presence of the target DNA. In this dual-probe PCR assay, one oligonucleotide probe carries a non-fluorescent lanthanide chelate and the other probe a light-absorbing antenna ligand. The fluorescent lanthanide chelate complex is formed only when the non-fluorescent probes hybridize to adjacent positions on the target DNA, bringing the reporter moieties into close proximity. The complex is formed by self-assembled lanthanide chelate complementation, in which the antenna ligand coordinates to the lanthanide ion captured in the chelate. The complementation-probe assays with time-resolved fluorescence measurement showed a low background signal level and hence relatively high nucleic acid detection sensitivity (low picomolar target concentrations). Different lanthanide chelate structures were explored, and a new cyclic seven-dentate lanthanide chelate was found suitable for the complementation probe method. It was also found to resist the relatively high PCR reaction temperatures, which was essential for the PCR assay applications. A seven-dentate chelate with two unoccupied coordination sites must be used instead of a more stable eight- or nine-dentate chelate because the antenna ligand needs to coordinate to the free coordination sites of the lanthanide ion. The previously used linear seven-dentate lanthanide chelate was found to be unstable under PCR conditions; hence the new cyclic chelate was needed. The complementation probe PCR method showed a high signal-to-background ratio, up to 300, due to the low background fluorescence level, and the results (threshold cycles) in real-time PCR were reached approximately 6 amplification cycles earlier than with the commonly used FRET-based closed-tube PCR method. The suitability of the complementation probe method for different nucleic acid assay applications was studied. 1) A duplex complementation-probe C. trachomatis PCR assay with a simple 10-minute urine sample preparation was developed to study the suitability of the method for clinical diagnostics. The performance of the C. trachomatis assay was equal to that of the commercial C. trachomatis nucleic acid amplification assay, which uses a more complex sample preparation based on DNA extraction. 2) A PCR assay for the detection of the HLA-DQA1*05 allele, which is used to predict the risk of type 1 diabetes, was developed to study the performance of the method in genotyping. A simple blood sample preparation was used, in which the nucleic acids were released from dried blood sample punches using high temperature and alkaline reaction conditions. The complementation-probe HLA-DQA1*05 PCR assay showed good genotyping performance, correlating 100% with the routinely used heterogeneous reference assay.
3) To study the suitability of the complementation probe method for direct measurement of the target organism, e.g., in culture media, the complementation probes were applied to amplification-free closed-tube bacteriophage quantification by measuring M13 bacteriophage ssDNA. A low picomolar bacteriophage concentration was detected in a rapid 20-minute assay. The assay provides a quick and reliable alternative to the commonly used but relatively unreliable UV photometry and to time-consuming culture-based bacteriophage detection methods, and indicates that the method could also be used for direct measurement of other micro-organisms. The complementation probe PCR method has a low background signal level, leading to a high signal-to-background ratio and relatively sensitive nucleic acid detection. The method is compatible with simple sample preparation and was shown to tolerate residues of urine, blood, bacteria and bacterial culture media. The common trend in nucleic acid diagnostics is to create easy-to-use assays suitable for rapid near-patient analysis. The complementation probe PCR assays with a brief sample preparation should be relatively easy to automate and would therefore allow the development of high-performance nucleic acid amplification assays with a short overall assay time.

Relevance: 30.00%

Abstract:

A novel trypsin protease inhibitor (CqTI) was purified from Chenopodium quinoa seeds. The optimal extraction solvent was 0.1 M NaCl, pH 6.8 (p < 0.05). An extraction time of 5 h at 90 °C was optimal for recovery of the trypsin inhibitor from C. quinoa seeds. Purification was carried out by gel-filtration and reverse-phase chromatography. CqTI was active against commercial bovine trypsin and chymotrypsin and had a specific activity of 5,033.00 TIU/mg, corresponding to a 333.5-fold purification. The extent of purification was determined by SDS-PAGE. CqTI had an apparent molecular weight of approximately 12 kDa and showed two bands under reducing conditions, as determined by Tricine-SDS-PAGE. MALDI-TOF showed two peaks at 4,246.5 and 7,908.18 m/z. CqTI presented high levels of essential amino acids. The N-terminal amino acid sequence of this protein did not show similarity to any known protease inhibitor. Its activity was stable over a wide pH range (2-12) and temperature range (20-100 °C), and in the presence of reducing agents.
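As a consistency check (an inference from the figures above, not a value reported in the abstract), a 333.5-fold purification ending at 5,033.00 TIU/mg implies a crude-extract specific activity of roughly

5{,}033.00 / 333.5 \approx 15.1\ \text{TIU/mg}.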

Relevance: 30.00%

Abstract:

Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), so that only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether the local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by, for example, variations in tree species and season of data acquisition. These algorithms are adaptive with respect to point cloud characteristics and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparisons with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small terrain features (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications whose primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on a moving-voxel idea, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles of the airborne laser scanner data. Being based on the moving-voxel idea, however, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
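The normalization step described at the start of the abstract can be sketched as follows (a minimal illustration with hypothetical function names; the thesis's DTM extraction algorithms are far more elaborate than the plain interpolation used here):

```python
import numpy as np
from scipy.interpolate import griddata

def normalize_point_cloud(points, ground_points):
    """Convert elevations to heights above ground.

    points        : (N, 3) array of x, y, elevation for all returns
    ground_points : (M, 3) array of x, y, elevation for classified ground returns
    """
    # Interpolate a ground elevation (a simple stand-in for a real DTM)
    # at each point's horizontal position.
    ground_z = griddata(ground_points[:, :2], ground_points[:, 2],
                        points[:, :2], method="linear")
    # Fall back to nearest-neighbour where linear interpolation is undefined
    # (points outside the convex hull of the ground returns).
    nan_mask = np.isnan(ground_z)
    if nan_mask.any():
        ground_z[nan_mask] = griddata(ground_points[:, :2], ground_points[:, 2],
                                      points[nan_mask, :2], method="nearest")
    normalized = points.copy()
    normalized[:, 2] = points[:, 2] - ground_z  # height above ground
    return normalized
```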

Relevance: 30.00%

Abstract:

The capability of molecular mechanics for modeling the wide distribution of bond angles and bond lengths characteristic of coordination complexes was investigated. This was the preliminary step for future modeling of solvent extraction. Several tin-phosphine oxide complexes were selected as the test group for the desired range of geometry they exhibited, as well as the ligands they contained, which were of interest in connection with solvation. A variety of adjustments were made to Allinger's MM2 force field in order to improve its performance in the treatment of these systems. A set of unique force constants was introduced for those terms representing the metal-ligand bond lengths, bond angles, and torsion angles. These were significantly smaller than those traditionally used with organic compounds. The Morse potential energy function was incorporated for the M-X bond lengths, and the cosine harmonic potential energy function was invoked for the M-O-P bond angle. These functions were found to accommodate the wide distribution of observed values better than the traditional harmonic approximations. Crystal packing influences on the M-O-P angle were explored through the inclusion of the isolated molecule within a shell containing the nearest neighbors during energy minimization experiments. This was found to further improve the fit of the M-O-P angle.
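The two functional forms mentioned, in their standard textbook shapes (the force constants fitted in the work itself are not reproduced here):

E_{\text{Morse}}(r) = D_e\left[1 - e^{-\alpha (r - r_0)}\right]^2, \qquad E_{\text{cos-harm}}(\theta) = \frac{k}{2}\left[\cos\theta - \cos\theta_0\right]^2

The Morse form softens for stretched bonds, and the cosine harmonic form stays smooth over wide angle ranges, which is why both accommodate broadly distributed metal-ligand geometries better than simple harmonic terms.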

Relevance: 30.00%

Abstract:

Molecular mechanics calculations were performed on tetrahedral phosphine oxide zinc complexes in simulated water, benzene and hexane phases using the DREIDING II force field in the BIOGRAF molecular modeling program. A SUN workstation (SUN 4c, SPARCstation 1) was used for the calculations. Experimental structural information used in the parameterization was obtained from the September 1989 version of the Cambridge Structural Database. Steric and solvation energies were calculated for complexes of the type ZnCl2(R3PO)2. The calculations were done with and without inclusion of electrostatic interactions; more reliable simulation results were obtained without inclusion of charges. In the simulated gas phase, the steric energies increase regularly with the number of carbons in the alkyl group, whereas they go through a maximum when solvent shells are included in the calculation. Simulated distribution ratios vary with chain length and type of chain branching, and the complexes are found to be more favorable for extraction by benzene than by hexane, in accord with experimental data. Also, in line with what would be expected for a favorable extraction, calculations without electrostatics predict that the complexes are better solvated by the organic solvents than by water.
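One way to read the solvent comparison (a generic thermodynamic sketch, not the formulation stated in this work): the preference of a complex for the organic phase can be summarized by the solvation energy difference

\Delta E = E_{\text{solv}}^{\text{org}} - E_{\text{solv}}^{\text{aq}}

with more negative \Delta E favoring extraction; in a Boltzmann picture, \log D \propto -\Delta E / (2.303\,RT).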

Relevance: 30.00%

Abstract:

The goal of most clustering algorithms is to find the optimal (i.e., fewest) number of clusters. However, analysis of molecular conformations of biological macromolecules obtained from computer simulations may benefit from a larger array of clusters. The Self-Organizing Map (SOM) clustering method has the advantage of generating large numbers of clusters, but often gives ambiguous results. In this work, SOMs have been shown to be reproducible when the same conformational dataset is independently clustered multiple times (~100), with the help of the Cramér's V index (C_v). The ability of C_v to determine which SOMs are reproduced is generalizable across different SOM source codes. The conformational ensembles produced from MD (molecular dynamics) and REMD (replica exchange molecular dynamics) simulations of the pentapeptide Met-enkephalin (MET) and the 34-amino-acid protein human Parathyroid Hormone (hPTH) were used to evaluate SOM reproducibility. The training length of the SOM has a large impact on reproducibility. Analysis of MET conformational data definitively showed that toroidal SOMs cluster data better than bordered maps because toroidal maps have no edge effect. For the MATLAB source code, it was determined that the learning rate function should be LINEAR with an initial learning rate factor of 0.05, and that the SOM should be trained with a sequential algorithm. The trained SOMs can be used as a supervised classifier for another dataset. The toroidal 10×10 hexagonal SOMs produced by the MATLAB program for hPTH conformational data yielded three sets of reproducible clusters (27%, 15%, and 13% of 100 independent runs), which find partitionings similar to those of smaller 6×6 SOMs. The χ^2 values produced as part of the C_v calculation were used to locate clusters with identical conformational memberships on independently trained SOMs, even those with different dimensions. The χ^2 values could also relate the different SOM partitionings to each other.
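The reproducibility comparison rests on Cramér's V between two cluster assignments; a minimal sketch of that standard computation (variable names are illustrative, and at least two clusters per run are assumed):

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(labels_a, labels_b):
    """Cramér's V between two clusterings of the same conformations."""
    # Contingency table: counts of conformations assigned to cluster i
    # in run A and cluster j in run B.
    a_vals, a_idx = np.unique(labels_a, return_inverse=True)
    b_vals, b_idx = np.unique(labels_b, return_inverse=True)
    table = np.zeros((len(a_vals), len(b_vals)), dtype=int)
    np.add.at(table, (a_idx, b_idx), 1)
    # Chi-squared statistic of the table, then normalize:
    # V = sqrt(chi2 / (n * (min(rows, cols) - 1))), in [0, 1].
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k))
```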

Relevance: 30.00%

Abstract:

Supervised learning of large-scale hierarchical networks is currently enjoying tremendous success. Despite this excitement, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition function estimation, optimization, and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the Markov chains under simulation in order to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate as well as better convergence speed. Our results are presented in the domain of BMs, but the method is general and applicable to the learning of any probabilistic model that relies on Markov chain sampling. While the maximum likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches that treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in log-partition incurred at each parameter update. The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption was limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids the explicit computation of the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation unfortunately remains inefficient in computation time. This work also explores the mechanisms underlying the learning of invariant representations. To that end, we use the family of "spike & slab" restricted Boltzmann machines (ssRBM), which we modify so that they can model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the representation level and a better classification rate when little labeled data is available.
We conclude this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
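The matrix-free idea behind MFNG can be sketched as follows (a generic reconstruction under my own assumptions, not the thesis implementation: the Fisher metric is estimated from per-sample gradients as an empirical outer product, and the linear system is solved by conjugate gradient, so the full matrix is never formed):

```python
import numpy as np

def metric_free_natural_gradient(grad, sample_grads, damping=1e-4,
                                 tol=1e-6, max_iter=50):
    """Solve (F + damping*I) x = grad without forming F explicitly.

    grad         : (D,) mean gradient of the log-likelihood
    sample_grads : (S, D) per-sample gradients used to estimate the
                   Fisher metric, F ~ (1/S) * G^T G  (an assumed estimator)
    Returns the natural-gradient direction x.
    """
    S = sample_grads.shape[0]

    def fisher_vec(v):
        # Matrix-free Fisher-vector product: F v = (1/S) G^T (G v).
        return sample_grads.T @ (sample_grads @ v) / S + damping * v

    # Plain conjugate gradient: only Fisher-vector products are needed,
    # so the D x D Fisher matrix is never materialized.
    x = np.zeros_like(grad)
    r = grad - fisher_vec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Fp = fisher_vec(p)
        alpha = rs / (p @ Fp)
        x += alpha * p
        r -= alpha * Fp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```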

Relevance: 30.00%

Abstract:

Understanding complex biological processes requires sophisticated experimental and computational approaches. Recent advances in functional genomic strategies now put at our disposal powerful tools for collecting data on the interconnectivity of genes, proteins and small molecules, with the aim of studying the organizational principles of their cellular networks. Integrating this knowledge within a systems biology framework would allow the prediction of new functions for genes that remain uncharacterized to date. To make such predictions at the genomic scale in the yeast Saccharomyces cerevisiae, we developed an innovative strategy that combines high-throughput interactome screening of protein-protein interactions, in silico prediction of gene function, and validation of these predictions by high-throughput lipidomics. First, we performed a large-scale screen of protein-protein interactions using protein-fragment complementation. This method detects in vivo interactions between proteins expressed from their natural promoters. Moreover, no bias against membrane interactions could be demonstrated with this method, in contrast to other existing techniques for detecting protein-protein interactions. Consequently, we discovered several new interactions and increased the coverage of a lipid homeostasis interactome whose understanding is still incomplete. Next, we applied a machine learning algorithm to identify eight uncharacterized genes with a potential role in lipid metabolism. Finally, we investigated whether these genes, and a distinct group of transcriptional regulators not previously implicated with lipids, play a role in lipid homeostasis. To this end, we analyzed the lipidomes of deletion mutants of selected genes. To examine a large number of strains, we developed a high-throughput platform for high-content lipidomic screening of yeast mutant libraries. This platform consists of high-resolution Orbitrap mass spectrometry and a dedicated data-processing framework supporting lipid phenotyping of hundreds of Saccharomyces cerevisiae mutants. The experimental lipidomic methods confirmed the functional predictions by demonstrating differences in the lipid metabolic phenotypes of deletion mutants lacking the genes YBR141C and YJR015W, known for their involvement in lipid metabolism. An altered lipid phenotype was also observed for a deletion mutant of the transcription factor KAR4, which had not previously been linked to lipid metabolism. All these results demonstrate that a process integrating the acquisition of new molecular interactions, computational prediction of gene function, and an innovative high-throughput lipidomic platform constitutes an important addition to existing systems biology methodologies. Developments in functional genomic methodologies and lipidomic technologies thus provide new means of studying the biological networks of higher eukaryotes, including mammals.
Consequently, the strategy presented here has potential for application in more complex organisms.

Relevance: 30.00%

Abstract:

Cerebral glioma is the most prevalent primary brain tumor; gliomas are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis. Low-grade gliomas are slow-growing, less malignant, and respond better to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for automatic extraction of low- and high-grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess the growth rate of tumors. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and gray matter from T1 FLAIR and T2-weighted images. The methods were validated against manually produced ground-truth images, with promising results. The developed methods were compared with the widely used Fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was checked for different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumor texture grading and detection. Based on thresholds of discriminant first-order and gray-level co-occurrence matrix based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using ROC curves showed 99.03% accuracy for distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can provide an idea of the growth rate of the tumor; this can be used for assessing response to therapy and patient prognosis.
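A minimal sketch of the kind of first- and second-order statistical texture features described, using scikit-image's standard GLCM utilities (illustrative only; the thesis's exact feature sets, thresholds and decision system are not reproduced):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_slice, levels=32):
    """First- and second-order texture features for one MRI slice.

    image_slice : 2D array of intensities (e.g., a T2-weighted slice)
    """
    # Quantize intensities to a small number of gray levels.
    img = np.asarray(image_slice, dtype=float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    q = np.clip((img * levels).astype(int), 0, levels - 1)

    # Gray-level co-occurrence matrix over 4 directions at distance 1.
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    return {
        "mean": img.mean(),          # first-order statistics
        "variance": img.var(),
        "contrast": graycoprops(glcm, "contrast").mean(),
        "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        "energy": graycoprops(glcm, "energy").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
    }
```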

Relevance: 30.00%

Abstract:

Pseudomonas aeruginosa MCCB 123 was grown in a synthetic medium for β-1,3 glucanase production. From the culture filtrate, a β-1,3 glucanase with a molecular mass of 45 kDa was purified. The enzyme was a metalloenzyme, as its β-1,3 glucanase activity was inhibited by the metal chelator EDTA. The optimum pH and temperature for β-1,3 glucanase activity on laminarin were found to be 7 and 50 °C, respectively. The MCCB 123 β-1,3 glucanase was found to have good lytic action on a wide range of fungal isolates, and hence its application in fungal DNA extraction was evaluated. β-1,3 glucanase purified from the culture supernatant of P. aeruginosa MCCB 123 could be used for the extraction of fungal DNA without the addition of the other reagents generally used. The optimum pH and temperature of the enzyme for fungal DNA extraction were found to be 7 and 65 °C, respectively. This is the first report of a β-1,3 glucanase employed in fungal DNA extraction.

Relevance: 30.00%

Abstract:

Molecular Quantum Similarity Measures (MSQM) require maximizing the overlap of the electron densities of the molecules being compared. This work presents a maximization algorithm for MSQM that is global in the limit where the electron densities are deformed into Dirac delta functions. From this algorithm, the equivalent algorithm for undeformed densities is derived.
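For context, the overlap-type similarity measure being maximized has the standard form (the general definition, not this work's specific algorithm):

Z_{AB} = \int \rho_A(\mathbf{r})\, \rho_B(\mathbf{r})\, d\mathbf{r}

maximized over the relative rotation and translation of one molecule. When each density is deformed toward a sum of Dirac deltas centered on the nuclei, the integral is dominated by coincident atomic positions, so the maximization reduces to a point-matching problem; this is the limit in which the algorithm presented here is global.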

Relevance: 30.00%

Abstract:

McLachlan's algorithm for the alignment of two sets of atomic coordinates is interpreted from the viewpoint of Multivariate Analysis, which shows that the formulation of this problem is equivalent to Procrustes analysis and that the solution proposed by Kabsch is analogous to that of Sibson, developed independently.
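A minimal sketch of the SVD-based solution to this orthogonal Procrustes problem, in the Kabsch formulation (the textbook version, not the original papers' derivations):

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation aligning point set P onto Q (both (N, 3)),
    minimizing RMSD after centering. Returns rotation R and
    translation t such that P @ R.T + t best matches Q."""
    # Center both coordinate sets.
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    P0, Q0 = P - p_mean, Q - q_mean
    # Covariance matrix and its SVD.
    H = P0.T @ Q0
    U, S, Vt = np.linalg.svd(H)
    # Correct for a possible reflection (det = -1), keeping a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = q_mean - p_mean @ R.T
    return R, t
```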

Relevance: 30.00%

Abstract:

This thesis, although framed within the theory of Molecular Quantum Similarity Measures (MQSM), branches into three clearly defined areas:
- The construction of Molecular IsoDensity COntours (MIDCOs) from fitted electron densities.
- The development of a molecular superposition method, as an alternative to the maximum-similarity rule.
- Quantitative Structure-Activity Relationships (QSAR).

The objective in the MIDCO field is to apply fitted density functions, originally devised to cheapen MQSM calculations, to the generation of MIDCOs. A comparative graphical study is carried out between density functions fitted to different basis sets and densities obtained from ab initio calculations. The visual agreement between the fitted and ab initio functions across the range of density representations obtained, together with the previously obtained and fully comparable similarity values, justifies the use of these fitted functions. Beyond this initial purpose, two studies complementary to the simple representation of densities were carried out: curvature analysis and the extension to macromolecules. The first consists in verifying not only the similarity of the MIDCOs but also the coherence of their behavior at the level of curvature, making it possible to observe inflection points in the density representations and to see graphically the regions where the density is concave or convex. This first study reveals that the fitted densities behave entirely analogously to those computed at the ab initio level. In the second part of this work, the method was extended to larger molecules, of up to about 2,500 atoms. Finally, part of the MEDLA philosophy is applied: since the electron density decays rapidly away from the nuclei, its calculation can be skipped at large distances from them. It was therefore proposed to partition space and to evaluate the fitted functions of each atom only within a small region surrounding that atom. With this procedure the computation time decreases, and the process becomes linear in the number of atoms in the molecule treated.

The part devoted to molecular superposition concerns the creation of an algorithm, and its implementation as a program named the Topo-Geometrical Superposition Algorithm (TGSA), designed to provide alignments that agree with chemical intuition. The result is a computer program, coded in Fortran 90, which aligns molecules pairwise considering only atomic numbers and interatomic distances. The complete absence of theoretical parameters yields a general molecular superposition method that provides an intuitive superposition and, just as importantly, does so quickly and with little user intervention. TGSA has mainly been used to compute similarities for later use in QSAR; these mostly do not correspond to the value that would be obtained by applying the maximum-similarity rule, especially when heavy atoms are involved.

Finally, the last part, devoted to Quantum Similarity in the QSAR framework, treats three different aspects:
- The use of similarity matrices. Here the so-called similarity matrix comes into play, computed from the pairwise similarities over a set of molecules. Suitably processed, this matrix is then used as a source of molecular descriptors for QSAR studies. In this area, several correlation studies of pharmacological and toxicological interest, as well as of various physical properties, have been carried out.
- The use of the electron-electron interaction energy, treated as a form of self-similarity. This modest contribution consists briefly in taking the value of this quantity and, by analogy with the notation of molecular quantum self-similarity, treating it as a particular case of that measure. This interaction energy is easily obtained from quantum-chemistry software and is ideal for a first preliminary correlation study in which it is used as the sole descriptor.
- The computation of self-similarities in which the density has been modified to enhance the role of a substituent. Previous work with fragment densities, despite giving very good results, lacks a certain conceptual rigor in isolating a fragment, supposedly responsible for the molecular activity, from the rest of the molecular structure, even though the densities associated with that fragment already differ because they belong to skeletons with different substitutions. A procedure that fills the gap left by simple fragment separation, thus considering the whole molecule (computing its self-similarity) while avoiding the unwanted self-similarity values caused by heavy atoms, is the use of Fermi hole densities defined around the fragment of interest. This procedure modifies the density so that it is mostly concentrated in the region of interest, while still yielding a density function that behaves mathematically like the regular electron density and can therefore be incorporated into the molecular similarity framework. Self-similarities computed with this methodology have led to good correlations for substituted aromatic acids, providing an explanation for their behavior.

From another standpoint, conceptual contributions have also been made. A new similarity measure, kinetic-energy similarity, has been implemented; it takes the recently developed kinetic-energy density function which, behaving mathematically like the regular electron density, has been incorporated into the similarity framework. Satisfactory QSAR models have been obtained from this measure for several molecular sets. Within the treatment of similarity matrices, the so-called stochastic transformation has been implemented as an alternative to the use of the Carbó index. This transformation of the similarity matrix yields a new non-symmetric matrix, which can subsequently be processed to build QSAR models.
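A minimal sketch of the two similarity-matrix treatments mentioned (the Carbó index formula is standard; the row-normalization shown for the stochastic transformation is an assumption consistent with the non-symmetric matrix described, not a verified reproduction):

```python
import numpy as np

def carbo_index_matrix(Z):
    """Carbó indices C_AB = Z_AB / sqrt(Z_AA * Z_BB) from a matrix of
    overlap similarity measures Z (symmetric, with self-similarities
    on the diagonal). Values lie in (0, 1], with C_AA = 1."""
    d = np.sqrt(np.diag(Z))
    return Z / np.outer(d, d)

def stochastic_transform(Z):
    """Row-stochastic transformation (assumed form): each row of the
    similarity matrix is divided by its sum, giving a non-symmetric
    matrix whose rows can serve as molecular descriptors."""
    return Z / Z.sum(axis=1, keepdims=True)
```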