902 results for Density-based Scanning Algorithm
Abstract:
The chemotherapeutic drug Taxol is known to interact within a specific site on β-tubulin. Although the general location of the site has been defined by photoaffinity labeling and electron crystallography, the original data were insufficient to make an absolute determination of the bound conformation. We have now correlated the crystallographic density with analysis of Taxol conformations and have found the unique solution to be a T-shaped Taxol structure. This T-shaped or butterfly structure is optimized within the β-tubulin site and exhibits functional similarity to a portion of the B9-B10 loop in the α-tubulin subunit. The model provides structural rationalization for a sizeable body of Taxol structure–activity relationship data, including binding affinity, photoaffinity labeling, and acquired mutation in human cancer cells.
Abstract:
We have developed a technique, based on restriction landmark genomic scanning (RLGS) and named RLGS spot-bombing (RLGS-SB), for isolating DNA markers tightly linked to a target region. RLGS-SB allows us to scan the genome of higher organisms quickly and efficiently to identify loci that are linked to either a target region or a gene of interest. The method was initially tested by analyzing a C57BL/6-GusS mouse congenic strain. We identified 33 variant markers out of 10,565 total loci in a 4.2-centimorgan (cM) interval surrounding the Gus locus in 4 days of laboratory work. The validity of RLGS-SB for finding DNA markers linked to a target locus was also tested on pooled DNA from segregating backcross progeny by analyzing the spot intensity of already mapped RLGS loci. Finally, we used RLGS-SB to identify DNA markers closely linked to the mouse reeler (rl) locus on chromosome 5 by phenotypic pooling. A total of 31 RLGS loci were identified and mapped to the target region after screening 8856 loci. These 31 loci were mapped within 11.7 cM surrounding rl. The average density of RLGS loci in the rl region was one locus per 0.38 cM. Three loci were closely linked to rl, showing a recombination frequency of 0/340, which places them < 1 cM from rl. Thus, RLGS-SB provides an efficient and rapid method for the detection and isolation of polymorphic DNA markers linked to a trait or gene of interest.
Abstract:
We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to differ in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm, which finds the geometry of the sources by evolving an initial population of models toward the optimum solution over successive iterations. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which promotes solutions consisting of plausible, compact sources while seeking the best fit to the observed data. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints on the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data from the volcanic island of Gran Canaria (Canary Islands).
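To make the evolutionary loop concrete, here is a minimal Python sketch of a genetic algorithm with the three operators named above (selection, crossover, mutation), applied to a toy 1-D "anomaly" forward model. The Gaussian-shaped model, parameter bounds, and operator settings are illustrative assumptions, not the authors' implementation, and the smoothing operator is omitted.

```python
# Toy GA inversion: recover the parameters of a stand-in forward model
# from noisy "observed" data by selection, crossover, and mutation.
import numpy as np

rng = np.random.default_rng(0)

def forward(params, x):
    # Hypothetical stand-in for the magnetic forward problem.
    amp, center, width = params
    return amp * np.exp(-((x - center) / width) ** 2)

x = np.linspace(-10, 10, 200)
observed = forward((5.0, 2.0, 3.0), x) + rng.normal(0, 0.1, x.size)

def misfit(params):
    return np.sum((forward(params, x) - observed) ** 2)

pop = rng.uniform([0, -10, 0.5], [10, 10, 5], size=(50, 3))  # initial population
for generation in range(200):
    fitness = np.array([misfit(p) for p in pop])
    order = np.argsort(fitness)
    parents = pop[order[:25]]                         # selection: keep best half
    mates = parents[rng.permutation(25)]
    alpha = rng.uniform(size=(25, 1))
    children = alpha * parents + (1 - alpha) * mates  # arithmetic crossover
    children += rng.normal(0, 0.05, children.shape)   # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(p) for p in pop])]
print("recovered parameters:", best)
```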
Abstract:
In this study, a novel kind of hybrid pigment based on nanoclays and dyes was synthesized and characterized. These nanoclay-based pigments (NCPs) were prepared in the laboratory from sodium montmorillonite nanoclay (NC) and methylene blue (MB). The fraction of the NC cation-exchange capacity exchanged with MB was varied to obtain a wide color gamut. The synthesized nanopigments were thoroughly characterized. The NCPs were melt-mixed with linear low-density polyethylene (PE) in an internal mixer, and reference samples with conventional colorants were prepared in the same way. The mechanical, thermal, and colorimetric properties of the mixtures were then assessed. The PE–NCP samples developed better color properties than the references containing conventional colorants, and their other properties were maintained or improved, even at lower dye contents than the conventional colorants.
Abstract:
We propose and discuss a new centrality index for urban street patterns represented as networks in geographical space. This measure, which we call ranking-betweenness centrality, combines the idea behind random-walk betweenness centrality with a ranking of the network's nodes produced by an adapted PageRank algorithm. We first use an adapted PageRank algorithm in which information about the network under analysis is transformed into numerical values, associated with each node by means of a data matrix. Running this algorithm yields a ranking of the nodes according to their importance in the network. This ranking is the starting point for applying an algorithm based on random-walk betweenness centrality. A detailed example of a real urban street network illustrates the procedure for evaluating the proposed ranking-betweenness centrality, including comparisons with other classical centrality measures.
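A hedged sketch of the two-stage idea follows: rank nodes with a personalized PageRank driven by per-node data, then combine that ranking with random-walk (current-flow) betweenness. The graph, the per-node data, and the combination rule (a simple product) are illustrative assumptions, not the authors' exact formulation.

```python
# Two-stage centrality sketch: data-driven PageRank ranking feeding a
# random-walk betweenness computation.
import networkx as nx

G = nx.grid_2d_graph(5, 5)  # stand-in for a street network

# Per-node data matrix collapsed to one value per node (e.g., activity level).
data = {n: 1.0 + (n[0] + n[1]) % 3 for n in G}
total = sum(data.values())
personalization = {n: v / total for n, v in data.items()}

rank = nx.pagerank(G, alpha=0.85, personalization=personalization)
rw_betweenness = nx.current_flow_betweenness_centrality(G)

# Illustrative combination: weight the random-walk betweenness by the ranking.
ranking_betweenness = {n: rank[n] * rw_betweenness[n] for n in G}
top = sorted(ranking_betweenness, key=ranking_betweenness.get, reverse=True)[:5]
print("top-5 nodes:", top)
```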
Abstract:
We present an extension of the logic outer-approximation algorithm for dealing with disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included so that only those parts of the underlying model that become relevant under certain processing conditions are taken into account. This improves the numerical robustness of the optimization algorithm, since inactive parts of the model are discarded, reducing the problem size and avoiding potential model singularities. We test the proposed algorithm on three examples with differing complex dynamic behavior. In all case studies the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.
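The role of the disjunctions, activating only the model parts relevant under given conditions, can be made concrete with a small modeling sketch. The following is a minimal, assumed illustration using Pyomo's GDP extension (not the authors' logic outer-approximation code, and without the DAE dynamics): a disjunction selects between an active regime, whose performance equation applies, and a bypassed regime, whose equations are dropped.

```python
# Disjunctive model sketch: only one regime's constraints are enforced.
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           TransformationFactory)
from pyomo.gdp import Disjunct, Disjunction

m = ConcreteModel()
m.x = Var(bounds=(0, 10))
m.y = Var(bounds=(0, 10))

# Regime A: the unit is active, so its performance equation applies.
m.on = Disjunct()
m.on.perf = Constraint(expr=m.y == 2 * m.x - 1)

# Regime B: the unit is bypassed; its equations are discarded entirely.
m.off = Disjunct()
m.off.idle = Constraint(expr=m.y == 0)

m.regime = Disjunction(expr=[m.on, m.off])
m.obj = Objective(expr=(m.x - 3) ** 2 + (m.y - 4) ** 2)

# Reformulate the disjunction (big-M) before handing it to a MINLP solver.
TransformationFactory('gdp.bigm').apply_to(m)
```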
Abstract:
Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance. In specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor. Therefore, the ability to detect these elements can make the difference between a correct and an incorrect decision. This task is complicated by the large sizes of typical databases. Because of their importance for searching large volumes of data, efficient outlier detection techniques have received special attention from researchers. This article presents a computationally efficient algorithm for detecting outliers in large volumes of information. The proposal extends the mathematical framework on which the basic theory of outlier detection, founded on Rough Set Theory, is built. From this starting point, current problems are analyzed, a detection method is proposed, and a computational algorithm is given that performs outlier detection with almost-linear complexity. To illustrate its viability, we present the results of applying the outlier-detection algorithm to a concrete example of a large database.
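As a minimal, assumed sketch of the rough-set flavor of the idea (an illustration of the general principle, not the specific method of the article): group records into indiscernibility classes by their attribute values, and flag records whose class is rare. Hash-based grouping keeps the cost near-linear in the number of records.

```python
# Rough-set-style outlier candidates: records indiscernible from very few
# others (tiny equivalence classes) are flagged.
from collections import defaultdict

def rough_set_outliers(records, attributes, min_class_size=2):
    classes = defaultdict(list)  # indiscernibility classes keyed by attribute tuple
    for i, rec in enumerate(records):
        key = tuple(rec[a] for a in attributes)
        classes[key].append(i)
    return [i for members in classes.values() if len(members) < min_class_size
            for i in members]

data = [{"temp": "high", "load": "low"}, {"temp": "high", "load": "low"},
        {"temp": "high", "load": "low"}, {"temp": "low", "load": "high"}]
print(rough_set_outliers(data, ["temp", "load"]))  # -> [3]
```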
Abstract:
Two-stage expander-implant reconstruction is the most widespread technique for post-mastectomy breast reconstruction. The formation of a periprosthetic capsule is a universal physiological response to any foreign body present in the human body; however, the formation of a pathological capsule often leads to complications and, consequently, to suboptimal aesthetic results. The scanning electron microscope (SEM) is a powerful tool that allows an unparalleled evaluation of the ultrastructural topography of specimens. The first objective of this thesis is to compare conventional SEM (Hi-Vac) with a more recent technology, environmental SEM (ESEM), to determine whether the latter provides a superior evaluation of breast capsular tissue. The second objective is to apply the superior SEM modality to study the ultrastructural modifications of periprosthetic capsules in women undergoing different tissue-expansion protocols in the context of prosthetic breast reconstruction. Two prospective studies were carried out to address our research objectives. Ten patients were included in the first and 48 in the second. The Hi-Vac modality proved superior for the comprehensive analysis of breast capsular tissue. Using Hi-Vac mode in our established research protocol, more pronounced 3-D relief was observed around BIOCELL® expanders in the delayed-intervention group (6 weeks). No significant changes were observed in SILTEX® capsules in either the early-intervention (2 weeks) or delayed-intervention groups.
Abstract:
Research into new field-grading materials for cable accessories has recently begun to study nanodielectric materials whose electrical properties vary nonlinearly with voltage and that offer improved properties over the base material. For this reason, this work studied nanostructured materials based on low-density polyethylene (LDPE) containing nanopowders of functionalized graphene (G*), graphene oxide (GO), and carbon black (CB). The first objective was to select and optimize the specimen-fabrication methods. The production procedure is divided into two parts: ball-milling in the first, and thermal pressing in the second. Broadband dielectric spectroscopy (BDS) was used to measure the real and imaginary components of the permittivity and the magnitude of the conductivity of the material under AC voltage. The improvement in properties relative to the neat-polyethylene reference specimen was obtained at higher nanopowder contents. Measurements were performed at both 3 V and 1 kV. Thermogravimetric analysis (TGA) showed an increase in the thermal resistance of all specimens, especially those with higher nanopowder percentages. For the LDPE + 0.3 wt% GO and LDPE + 0.3 wt% G* specimens, resistance to partial discharges was measured by evaluating the surface erosion of the specimens. The specimen containing G* showed a 22% reduction in eroded volume relative to the base material, while the one containing GO showed no significant variation. Finally, the breakdown strength of these last three specimens was investigated, using the Weibull distribution for characterization. The scale parameter α increased only for the LDPE + 0.3 wt% G* specimen.
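The Weibull characterization step can be sketched briefly: fit a two-parameter Weibull distribution to breakdown-strength data and report the scale parameter α and shape parameter β. The data values below are made up for illustration; they are not the thesis measurements.

```python
# Two-parameter Weibull fit of (hypothetical) breakdown-strength data.
import numpy as np
from scipy.stats import weibull_min

breakdown_kv_mm = np.array([78.0, 82.5, 85.1, 88.9, 91.2, 95.4, 99.8, 104.3])

# floc=0 pins the location at zero, giving the usual 2-parameter Weibull.
beta, _, alpha = weibull_min.fit(breakdown_kv_mm, floc=0)
print(f"scale alpha = {alpha:.1f} kV/mm, shape beta = {beta:.1f}")
```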
Abstract:
A specialised reconfigurable architecture targeted at wireless base-band processing is presented. It is built to cater for multiple wireless standards, has lower power consumption than processor-based solutions, and can be scaled to run in parallel for processing multiple channels. Test resources are embedded in the architecture and testing strategies are included. The architecture is functionally partitioned according to the common operations found in wireless standards, such as CRC error detection, convolution and interleaving. These modules are linked via Virtual Wire hardware modules and route-through switch matrices, so data can be processed in any order through this interconnect structure. Virtual Wire ensures the same flexibility as normal interconnects, but the area occupied and the number of switches needed are reduced. The testing algorithm scans all possible paths within the interconnection network exhaustively and searches for faults in the processing modules. It starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each switch; the local switch matrix is tested in the same way. Next the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. This paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of the Bluetooth and GSM base-bands onto the platform.
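A hedged sketch of the exhaustive interconnect scan described above: model the route-through switch matrix as a graph, enumerate every simple path from the shared memory to each switch or module, and apply a loopback check along each path. The topology, test vector, and fault model are illustrative assumptions, not the platform's actual test hardware.

```python
# Exhaustive path enumeration over a toy interconnect graph with a loopback
# check per path.
def all_paths(graph, src, dst, path=None):
    """Enumerate all simple paths from src to dst in an adjacency-dict graph."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph[src]:
        if nxt not in path:
            yield from all_paths(graph, nxt, dst, path)

# Toy interconnect: shared memory feeding two switches and a processing module.
interconnect = {
    "mem": ["sw0", "sw1"],
    "sw0": ["sw1", "crc"],
    "sw1": ["sw0", "crc"],
    "crc": [],
}

def loopback_ok(path):
    # Stand-in for driving a test vector down the path and reading it back;
    # a real implementation would program the switches and compare outputs.
    return True

for target in ("sw0", "sw1", "crc"):
    for path in all_paths(interconnect, "mem", target):
        status = "ok" if loopback_ok(path) else "FAULT"
        print(" -> ".join(path), status)
```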
Abstract:
Adsorption of nitrogen, argon, methane, and carbon dioxide on activated carbon Norit R1 over a wide range of pressure (up to 50 MPa) at temperatures from 298 to 343 K (supercritical conditions) is analyzed by means of the density functional theory modified by incorporating the Bender equation of state, which describes the bulk phase properties with very high accuracy. This has allowed us to precisely describe the experimental data of carbon dioxide adsorption slightly above and below its critical temperature. The pore size distribution (PSD) obtained with supercritical gases at ambient temperatures compares reasonably well with the PSD obtained with subcritical nitrogen at 77 K. Our approach does not require the skeletal density of activated carbon from helium adsorption measurements to calculate excess adsorption. Instead, this density is treated as a fitting parameter, and in all cases its values are found to fall into a very narrow range close to 2000 kg/m³. It was shown that in the case of high-pressure adsorption of supercritical gases the PSD can be reliably obtained for pore widths between 0.6 and 3 nm. All wider pores can be reliably characterized only in terms of surface area, as their corresponding excess local isotherms are the same over the practical range of pressure.
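The idea of treating the skeletal density as a fitting parameter can be illustrated with a small sketch (not the authors' code): the void volume seen by the gas depends on the skeletal density, so the excess adsorption computed from dosed amounts shifts with it, and one can fit it jointly with the isotherm model. The Langmuir stand-in, units, and rig values below are made-up assumptions.

```python
# Fit the skeletal density jointly with a toy isotherm model so that excess
# adsorption derived from raw dosing data matches the model.
import numpy as np
from scipy.optimize import least_squares

P = np.linspace(0.5, 50, 20)            # pressure, MPa
rho_bulk = 0.004 * P                    # toy bulk gas density
V_cell, m_s = 10.0, 2.0                 # cell volume (cm3), sample mass (g)
true_skel = 2.0                         # "true" skeletal density, g/cm3

# Synthetic dosed amounts consistent with a Langmuir-like excess isotherm.
n_dosed = (3.0 * P / (5.0 + P)) * m_s + rho_bulk * (V_cell - m_s / true_skel)

def residuals(theta):
    n_max, b, rho_skel = theta
    void = V_cell - m_s / rho_skel                 # void volume depends on rho_skel
    n_excess = (n_dosed - rho_bulk * void) / m_s   # excess from raw dosing data
    model = n_max * P / (b + P)                    # toy stand-in for the DFT model
    return n_excess - model

fit = least_squares(residuals, x0=[2.0, 3.0, 1.5],
                    bounds=([0, 0, 1.0], [10, 50, 3.0]))
n_max, b, rho_skel = fit.x
print(f"fitted skeletal density: {rho_skel:.2f} g/cm3")  # ~2.0, i.e. 2000 kg/m3
```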
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are increasingly used in a wide range of pattern recognition problems such as image segmentation. However, the EM algorithm requires considerable computational time when applied to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be sped up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain.
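For reference, here is a compact sketch of the standard EM updates for a one-dimensional Gaussian mixture, the building block that the sparse/incremental and kd-tree variants accelerate. The data and component count are illustrative.

```python
# Plain EM for a 2-component 1-D Gaussian mixture.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])

K = 2
mu = np.array([-1.0, 1.0])
var = np.ones(K)
pi = np.full(K, 1.0 / K)

for iteration in range(50):
    # E-step: responsibilities of each component for each point.
    dens = (pi / np.sqrt(2 * np.pi * var)) * np.exp(
        -(x[:, None] - mu) ** 2 / (2 * var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances.
    Nk = resp.sum(axis=0)
    pi = Nk / x.size
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print("means:", mu, "variances:", var)
```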
Abstract:
Power systems rely greatly on ancillary services to maintain operational security. As one of the most important ancillary services, spinning reserve must be provided effectively in the deregulated market environment. This paper focuses on the design of an integrated market for both electricity and spinning reserve, with particular emphasis on the coordinated dispatch of bulk power and spinning reserve services. A new market dispatching mechanism has been developed to minimize the ISO's total payment while ensuring system security. Genetic algorithms are used to find global optimal solutions for this dispatching problem. Case studies and corresponding analyses have been carried out to demonstrate and discuss the efficiency and usefulness of the proposed market.
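The paper solves the joint dispatch with a genetic algorithm; purely to illustrate the problem structure, here is a tiny linear co-optimization instead: minimize total payment for energy plus spinning reserve subject to demand balance, a reserve requirement, and unit capacity coupling. All prices and limits are made-up numbers.

```python
# Joint energy/reserve dispatch as a small LP; x = [P1..P3, R1..R3].
import numpy as np
from scipy.optimize import linprog

energy_price = np.array([20.0, 25.0, 40.0])    # $/MWh bids of 3 units
reserve_price = np.array([3.0, 2.0, 5.0])      # $/MW reserve bids
p_max = np.array([100.0, 80.0, 60.0])          # unit capacities, MW
demand, reserve_req = 150.0, 30.0              # MW

c = np.concatenate([energy_price, reserve_price])
# Capacity coupling: P_i + R_i <= Pmax_i for each unit.
A_ub = np.hstack([np.eye(3), np.eye(3)])
b_ub = p_max
# Total reserve must meet the requirement: -(R1+R2+R3) <= -reserve_req.
A_ub = np.vstack([A_ub, [0, 0, 0, -1, -1, -1]])
b_ub = np.append(b_ub, -reserve_req)
# Demand balance as an equality.
A_eq = np.array([[1, 1, 1, 0, 0, 0]])
b_eq = [demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
P, R = res.x[:3], res.x[3:]
print("dispatch MW:", P.round(1), "reserve MW:", R.round(1),
      "total payment $:", round(res.fun, 1))
```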
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean-shift clustering framework. Experimental results demonstrate the dynamics of the new algorithm on a set of simple test problems.
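A small, hedged sketch of the central idea: maintain an explicit Gaussian model of the search distribution and update its parameters from an objective-weighted population, so the model density drifts toward high-objective regions. The exponential weighting and step size are illustrative choices, not the paper's exact KL-gradient update rule.

```python
# Population-based optimization by updating an explicit Gaussian search model.
import numpy as np

rng = np.random.default_rng(2)

def objective(x):
    return -np.sum((x - 3.0) ** 2, axis=1)   # toy maximization target at x = 3

mu, sigma = np.zeros(2), np.ones(2) * 2.0
for generation in range(100):
    pop = mu + sigma * rng.normal(size=(64, 2))   # sample the model density
    w = np.exp(objective(pop))                    # density-like weights
    w /= w.sum()
    # Stochastic update of the model parameters toward the weighted population.
    lr = 0.5
    mu += lr * (w @ pop - mu)
    sigma += lr * (np.sqrt(w @ (pop - mu) ** 2) - sigma)

print("model mean:", mu.round(2))  # drifts toward the optimum at (3, 3)
```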