991 results for Electromagnetism-like algorithm
Abstract:
The Electromagnetism-like (EM) algorithm is a population-based stochastic global optimization algorithm that uses an attraction-repulsion mechanism to move sample points towards the optimum. In this paper, an implementation of the EM algorithm in the Matlab environment is proposed as a useful function for practitioners and for those who want to experiment with a new global optimization solver. A set of benchmark problems is solved in order to evaluate the performance of the implemented method when compared with other stochastic methods available in the Matlab environment. The results confirm that our implementation is a competitive alternative both in terms of numerical results and performance. Finally, a case study based on a parameter estimation problem of a biological system shows that the EM implementation could be applied with promising results in the control optimization area.
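The attraction-repulsion mechanism described above can be sketched in a few lines. The following is a minimal, illustrative Python version, not the paper's Matlab implementation: the charge formula follows Birbil and Fang's original EM scheme, while the 10%-of-range step scaling and the elitism rule are assumptions of this sketch.

```python
import numpy as np

def em_like_minimize(f, bounds, n_points=20, n_iters=100, seed=0):
    """Minimal electromagnetism-like (EM) minimizer: sample points carry
    charges derived from their objective values and move along the net
    attraction-repulsion force (charge formula as in Birbil & Fang's EM
    method; the step scaling below is an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_points, dim))
    for _ in range(n_iters):
        fx = np.array([f(p) for p in x])
        best_i = int(fx.argmin())
        # Better points carry larger charge.
        q = np.exp(-dim * (fx - fx.min()) / (np.sum(fx - fx.min()) + 1e-12))
        force = np.zeros_like(x)
        for i in range(n_points):
            for j in range(n_points):
                if i == j:
                    continue
                d = x[j] - x[i]
                coef = q[i] * q[j] / (d @ d + 1e-12)
                # Attraction towards better points, repulsion from worse ones.
                force[i] += coef * d if fx[j] < fx[i] else -coef * d
        for i in range(n_points):
            if i == best_i:
                continue  # elitism: the incumbent best point stays put
            step = rng.uniform(0.0, 1.0)
            x[i] = np.clip(x[i] + step * force[i]
                           / (np.linalg.norm(force[i]) + 1e-12)
                           * 0.1 * (hi - lo), lo, hi)
    fx = np.array([f(p) for p in x])
    return x[fx.argmin()], float(fx.min())
```

For example, minimizing the sphere function over [-5, 5]^2 drives the population towards the origin; because the best point is never moved, the incumbent value improves monotonically.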
Abstract:
The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study was done for clinical cases originally calculated with pencil beam convolution (PBC) where large density inhomogeneities were present. Three other dose algorithms were used: a pencil-beam-like algorithm, the anisotropic analytic algorithm (AAA); a convolution-superposition algorithm, collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The dose calculation algorithms were compared under static field irradiations at 6 MV and 15 MV using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy, the CCC algorithm performed better overall than AAA compared to VMC++, but AAA remains an attractive option for routine use in the clinic due to its short computation times. Dose differences between the different algorithms and VMC++ for the median value of the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the few cases we analysed. As expected, PTV coverage and dose homogeneity turned out to be more critical in the lung than in the breast cases with respect to the accuracy of the dose calculation. This was observed in the dose volume histograms obtained from the Monte Carlo simulations.
Abstract:
The Three-Layer distributed mediation architecture, designed by the Secure System Architecture laboratory, employed a layered framework of presence, integration, and homogenization mediators. The architecture does not have any central component that might affect the system's reliability. A distributed search technique was adopted in the system to increase its reliability. An Enhanced Chord-like algorithm (E-Chord) was designed and deployed in the integration layer. E-Chord is a skip-list algorithm based on a Distributed Hash Table (DHT), which is a distributed but structured architecture. A DHT is distributed in the sense that no central unit is required to maintain indexes, and structured in the sense that indexes are distributed over the nodes in a systematic manner. Each node maintains three kinds of routing information: a frequency list, a successor/predecessor list, and a finger table. None of the nodes in the system maintains all indexes, and each node knows about some other nodes in the system. These nodes, also called composer mediators, were connected in a P2P fashion. A special composer mediator called a global mediator initiates the keyword-based matching decomposition of the request using E-Chord. It generates an Integrated Data Structure Graph (IDSG) on the fly, creates association and dependency relations between nodes in the IDSG, and then generates a Global IDSG (GIDSG). The GIDSG graph is a plan which guides the global mediator in how to integrate data. It is also used to stream data from the mediators in the homogenization layer, which are connected to the data sources. The connectors start sending data to the global mediator just after the global mediator creates the GIDSG and just before it sends the answer to the presence mediator.
Using E-Chord and the GIDSG made the mediation system more scalable than using a central global schema repository, since all the composers in the integration layer are capable of handling and routing requests. Also, when a composer fails, it only minimally affects the entire mediation system.
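The Chord-style routing that E-Chord builds on can be sketched briefly. The toy Python model below implements classic Chord finger-table lookup only; E-Chord's frequency lists, mediator roles, and enhancements are omitted, and the node ids, ring size, and global node map are illustrative assumptions of this sketch.

```python
import hashlib

M = 8  # identifier bits; the ring has 2**M positions (small, for illustration)

def h(key: str) -> int:
    """Hash a key onto the identifier ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** M)

def in_interval(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

class Node:
    def __init__(self, ident, nodes):
        self.id = ident
        self.nodes = nodes  # shared id -> Node map (global view, sketch only)

    @property
    def succ(self):
        ids = sorted(self.nodes)
        return self.nodes[ids[(ids.index(self.id) + 1) % len(ids)]]

    def fingers(self):
        """Finger i points at the successor of id + 2**i on the ring."""
        ids = sorted(self.nodes)
        return [self.nodes[next((n for n in ids
                                 if n >= (self.id + 2 ** i) % (2 ** M)),
                                ids[0])]
                for i in range(M)]

    def closest_preceding(self, key_id):
        for f in reversed(self.fingers()):  # farthest finger first
            if f.id != key_id and in_interval(f.id, self.id, key_id):
                return f
        return self

    def find_successor(self, key_id):
        """Iterative lookup: O(log N) hops in expectation via the fingers."""
        node = self
        while not in_interval(key_id, node.id, node.succ.id):
            nxt = node.closest_preceding(key_id)
            node = nxt if nxt is not node else node.succ
        return node.succ
```

In a real deployment each node would know only its own fingers and successor list; the shared map here simply keeps the sketch self-contained.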
Abstract:
The authors propose a mathematical model to minimize the total project cost where there are multiple resources constrained by maximum availability. They assume the resources are renewable and that the activities can use any subset of resources, requiring any quantity from a limited real interval. The stochastic nature is introduced by means of a stochastic work content, defined per resource within an activity and following a known distribution; the total cost is the sum of the resource allocation cost and the tardiness cost or earliness bonus, in case the project finishes after or before the due date, respectively. The model was computationally implemented relying upon an interchange of two global optimization metaheuristics: the electromagnetism-like mechanism and evolutionary strategies. Two experiments were conducted, testing the implementation on projects with single and multiple resources, and with or without maximum availability constraints. The set of collected results shows good behavior in general and provides a tool to further assist project managers' decision making in the planning phase.
Abstract:
Moving-least-squares (MLS) surfaces undergoing large deformations need periodic regeneration of the point set (point-set resampling) so as to keep the point-set density quasi-uniform. Previous work by the authors dealt with algebraic MLS surfaces, and proposed a resampling strategy based on defining the new points at the intersections of the MLS surface with a suitable set of rays. That strategy has very low memory requirements and is easy to parallelize. In this article new resampling strategies with reduced CPU-time cost are explored. The basic idea is to choose as the set of rays the lines of a regular Cartesian grid, and to fully exploit this grid: as a data structure for search queries, as a spatial structure for traversing the surface in a continuation-like algorithm, and as an approximation grid for an interpolated version of the MLS surface. It is shown that in this way a very simple and compact resampling technique is obtained, which cuts the resampling cost by half with affordable memory requirements.
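The grid-ray idea can be illustrated with a plain implicit surface standing in for the MLS surface: sample f once on a regular Cartesian grid, then place a new point at the linearly interpolated zero crossing of every grid edge whose endpoints straddle the surface. This is a hedged sketch of the idea only, not the authors' resampling code.

```python
import numpy as np

def resample_on_grid(f, lo, hi, n):
    """Place new sample points where the implicit surface f(p)=0 crosses the
    edges of a regular n x n x n Cartesian grid on [lo, hi]^3; each crossing
    is located by linear interpolation along the grid edge. The analytic
    level set f stands in for the article's interpolated MLS surface."""
    xs = np.linspace(lo, hi, n)
    F = np.empty((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                F[i, j, k] = f(np.array([xs[i], xs[j], xs[k]]))
    pts = []
    steps = ((1, 0, 0), (0, 1, 0), (0, 0, 1))  # the three grid-line directions
    for i in range(n):
        for j in range(n):
            for k in range(n):
                for di, dj, dk in steps:
                    i2, j2, k2 = i + di, j + dj, k + dk
                    if i2 >= n or j2 >= n or k2 >= n:
                        continue
                    f0, f1 = F[i, j, k], F[i2, j2, k2]
                    if f0 * f1 < 0:  # the edge straddles the surface
                        t = f0 / (f0 - f1)  # linear zero crossing
                        p0 = np.array([xs[i], xs[j], xs[k]])
                        p1 = np.array([xs[i2], xs[j2], xs[k2]])
                        pts.append((1 - t) * p0 + t * p1)
    return np.array(pts)
```

Because the field is sampled once, the same array doubles as the search structure: every candidate intersection is found by scanning grid edges, with no per-ray surface queries.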
Abstract:
A recently proposed colour-based tracking algorithm has been established to track objects in real circumstances [Zivkovic, Z., Krose, B., 2004. An EM-like algorithm for color-histogram-based object tracking. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 798-803]. To improve the performance of this technique in complex scenes, in this paper we propose a new algorithm for optimally adapting the ellipse outlining the objects of interest. This paper presents a Lagrangian-based method to integrate a regularising component into the covariance matrix to be computed. Technically, we intend to reduce the residuals between the estimated probability distribution and the expected one. We argue that, by doing this, the shape of the ellipse can be properly adapted in the tracking stage. Experimental results show that the proposed method has favourable performance in shape adaptation and object localisation.
Abstract:
The paper begins with a new characterization of (k,τ)-regular sets. Then, using this result as well as the theory of star complements, we derive a simplex-like algorithm for determining whether or not a graph contains a (0,τ)-regular set. When τ=1, this algorithm can be applied to solve the efficient dominating set problem which is known to be NP-complete. If −1 is not an eigenvalue of the adjacency matrix of the graph, this particular algorithm runs in polynomial time. However, although it does not work in polynomial time in general, we report on its successful application to a vast set of randomly generated graphs.
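For small graphs the (0,1)-regular-set notion can be checked directly from its definition. The sketch below verifies the (k,τ) conditions and finds an efficient dominating set by brute force; it is exponential-time and is not the spectral simplex-like algorithm of the paper.

```python
from itertools import combinations

def is_k_tau_regular(adj, S, k, tau):
    """Check that S is a (k,tau)-regular set of the graph given as adjacency
    lists: every vertex in S has exactly k neighbours in S, and every vertex
    outside S has exactly tau neighbours in S."""
    S = set(S)
    for v, nbrs in adj.items():
        inside = sum(1 for u in nbrs if u in S)
        if inside != (k if v in S else tau):
            return False
    return True

def efficient_dominating_set(adj):
    """Brute-force search for a (0,1)-regular set, i.e. an efficient
    dominating set. Exponential time, usable only on small graphs; it is
    not the paper's spectral simplex-like algorithm."""
    verts = list(adj)
    for r in range(1, len(verts) + 1):
        for S in combinations(verts, r):
            if is_k_tau_regular(adj, S, 0, 1):
                return set(S)
    return None
```

On the 6-cycle this finds {0, 3}, while the 4-cycle has no efficient dominating set, matching the fact that cycles admit one exactly when the length is divisible by 3.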
Abstract:
Context. B[e] supergiants are luminous, massive post-main sequence stars exhibiting non-spherical winds, forbidden lines, and hot dust in a disc-like structure. The physical properties of their rich and complex circumstellar environment (CSE) are not well understood, partly because these CSE cannot be easily resolved at the large distances found for B[e] supergiants (typically ≳ 1 kpc). Aims. From mid-IR spectro-interferometric observations obtained with VLTI/MIDI we seek to resolve and study the CSE of the Galactic B[e] supergiant CPD-57° 2874. Methods. For a physical interpretation of the observables (visibilities and spectrum) we use our ray-tracing radiative transfer code (FRACS), which is optimised for thermal spectro-interferometric observations. Results. Thanks to the short computing time required by FRACS (<10 s per monochromatic model), best-fit parameters and uncertainties for several physical quantities of CPD-57° 2874 were obtained, such as inner dust radius, relative flux contribution of the central source and of the dusty CSE, dust temperature profile, and disc inclination. Conclusions. The analysis of VLTI/MIDI data with FRACS allowed one of the first direct determinations of physical parameters of the dusty CSE of a B[e] supergiant based on interferometric data and using a full model-fitting approach. In a larger context, the study of B[e] supergiants is important for a deeper understanding of the complex structure and evolution of hot, massive stars.
Abstract:
We present Monte Carlo simulations for a molecular motor system found in virtually all eukaryotic cells, the acto-myosin motor system, composed of a group of organic macromolecules. Cell motors were mapped to an Ising-like model, where the interaction field is transmitted through a tropomyosin polymer chain. The presence of Ca(2+) induces tropomyosin to block or unblock binding sites of the myosin motor leading to its activation or deactivation. We used the Metropolis algorithm to find the transient and the equilibrium states of the acto-myosin system composed of solvent, actin, tropomyosin, troponin, Ca(2+), and myosin-S1 at a given temperature, including the spatial configuration of tropomyosin on the actin filament surface. Our model describes the short- and long-range cooperativity during actin-myosin binding which emerges from the bending stiffness of the tropomyosin complex. We found all transition rates between the states using only the interaction energy of the constituents. The agreement between our model and experimental data also supports the recent theory of flexible tropomyosin.
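The Metropolis step used in such an Ising-like mapping is easy to sketch for a one-dimensional chain. The toy version below uses a nearest-neighbour coupling J (standing in for the cooperativity transmitted along the tropomyosin chain) and a field h (standing in for the Ca(2+) bias); it is an illustrative sketch, not the paper's full acto-myosin model.

```python
import math
import random

def metropolis_ising_chain(n, J, h, beta, sweeps, seed=0):
    """Metropolis sampling of a periodic 1D Ising-like chain of n spins
    s = +/-1. Energy: E = -J * sum(s_i * s_{i+1}) - h * sum(s_i).
    J models cooperativity along the chain, h an external bias; both are
    stand-ins for the paper's microscopic interaction energies."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(sweeps):
        for _ in range(n):
            i = rng.randrange(n)
            neighbours = s[(i - 1) % n] + s[(i + 1) % n]  # periodic chain
            # Energy cost of flipping spin i:
            dE = 2 * s[i] * (J * neighbours + h)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i] = -s[i]
        # (transient states could be recorded here, once per sweep)
    return s
```

With a strong positive field and low temperature the chain magnetises almost completely, the 1D analogue of the cooperative all-blocked/all-unblocked switching described above.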
Abstract:
Several phenomena present in electrical systems motivated the development of comprehensive models based on the theory of fractional calculus (FC). Bearing these ideas in mind, in this work the FC concepts are applied to define, and to evaluate, the electrical potential of fractional order, based on a genetic algorithm optimization scheme. The feasibility and the convergence of the proposed method are evaluated.
Abstract:
Recent integrated circuit technologies have opened the possibility to design parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to design the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to design a many-core architecture. Instead of doing the design space exploration of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect is related to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and external memory, and the memory hierarchy. To exemplify the approach we did a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles with the architectural parameters. Based on this equation a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double precision floating-point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%.
Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
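The kind of relation the paper derives between execution cycles and architectural parameters can be illustrated with a simple roofline-style estimate for blocked matrix multiplication. The formula and the peak, bandwidth, and block-size numbers below are illustrative assumptions of this sketch, not the paper's derived equation.

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, b, word_bytes=8):
    """Roofline-style estimate for blocked dense matrix multiplication:
    multiplying b x b blocks costs 2*b**3 flops while streaming about
    2*b**2 words from external memory (the C block stays resident in local
    memory), so the arithmetic intensity is b/word_bytes flops per byte and
    the sustainable rate is min(peak, intensity * bandwidth)."""
    intensity = (2.0 * b ** 3) / (2.0 * b ** 2 * word_bytes)  # flops per byte
    return min(peak_gflops, intensity * bandwidth_gbs)
```

For instance, with a hypothetical 650 GFLOPs peak, 16 GB/s of external bandwidth, and 256 x 256 double-precision blocks, the estimate is bandwidth-bound at 512 GFLOPs; enlarging the blocks (i.e. the per-core local memory) raises the intensity until the design becomes compute-bound, which is the trade-off the formal analysis above captures.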
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation submitted in fulfilment of the requirements for the degree of Master in Biomedical Engineering
Abstract:
Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) which summarize the most relevant information. The method was specifically developed for drug discovery, is fast, and does not require human supervision, making it suitable for application to very large series of compounds. The quality of the results has been tested by running the method on the ligand structure of a large number of ligand-receptor complexes and then comparing the position of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to obtain GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
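One plausible way to reduce a field of interaction energies to a handful of hot spots is to keep local energy minima below a favourability threshold and then prune near-duplicates. The sketch below illustrates that generic idea on a gridded field; the function, its parameters, and the selection rule are assumptions of this sketch, not the paper's actual algorithm.

```python
import numpy as np

def extract_hot_spots(field, coords, threshold, min_dist):
    """Generic hot-spot extraction sketch: keep grid points whose interaction
    energy is a local minimum below `threshold`, then greedily prune points
    closer than `min_dist`, preferring the most favourable energies.
    Illustrative only; not the paper's method."""
    nx, ny, nz = field.shape
    cand = []
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                v = field[i, j, k]
                if v >= threshold:
                    continue
                nb = field[i - 1:i + 2, j - 1:j + 2, k - 1:k + 2]
                if v <= nb.min():  # local minimum of the energy
                    cand.append((v, coords[i, j, k]))
    cand.sort(key=lambda t: t[0])  # most favourable first
    spots = []
    for v, p in cand:
        if all(np.linalg.norm(p - q) >= min_dist for _, q in spots):
            spots.append((v, p))
    return spots
```

Applied to a field with two well-separated energy wells, the sketch returns exactly two points, one per well, ordered by favourability.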
Abstract:
Introduction: Diffuse large B-cell lymphomas (DLBCL) represent a heterogeneous disease with variable clinical outcome. Identifying phenotypic biomarkers of tumor cells on paraffin sections that predict different clinical outcomes remains an important goal that may also help to better understand the biology of this lymphoma. Differentiating non-germinal centre B-cell-like (non-GCB) from germinal centre B-cell-like (GCB) DLBCL according to the Hans algorithm has been considered an important immunohistochemical biomarker with prognostic value among patients treated with R-CHOP, although not reproducibly found by all groups. Gene expression studies have also shown that IgM expression might be used as a surrogate for the GCB and ABC subtypes, with a strong preferential expression of IgM in the ABC DLBCL subtype. An ImmunoFISH index based on the differential expression of MUM1 and FOXP1 by immunohistochemistry and on the BCL6 rearrangement by FISH has been previously reported (C. Copie-Bergman, J Clin Oncol. 2009;27:5573-9) as prognostic in a homogeneous series of DLBCL treated with R-CHOP. In addition, oncogenic MYC protein overexpression by immunohistochemistry may represent an easy tool to identify the consequences of MYC deregulation in DLBCL. Our aim was to analyse by immunohistochemistry the prognostic relevance of MYC, IgM, the GCB/non-GCB subtype and the ImmunoFISH index in a large series of de novo DLBCL treated with Rituximab (R)-chemotherapy (anthracyclin based) included in the 2003 program of the Groupe d'Etude des Lymphomes de l'Adulte (GELA) trials. Methods: The 2003 program included patients with de novo CD20+ DLBCL enrolled in 6 different LNH-03 GELA trials (LNH-03-1B, -B, -3B, -39B, -6B, -7B), stratifying patients according to age and age-adjusted IPI. Tumor samples were analyzed by immunohistochemistry using CD10, BCL6, MUM1, FOXP1 (according to the Barrans threshold), MYC and IgM antibodies on tissue microarrays, and by FISH using BCL6 split-signal DNA probes.
Considering evaluable Hans scores, 670 patients were included in the study, with 237 (35.4%) receiving the intensive R-ACVBP regimen and 433 (64.6%) R-CHOP/R-mini-CHOP. Results: 304 (45.4%) DLBCL were classified as GCB and 366 (54.6%) as non-GCB according to the Hans algorithm. 337/567 cases (59.4%) were positive for the ImmunoFISH index (i.e. two out of the three markers positive: MUM1 protein positive, FOXP1 protein Variable or Strong, BCL6 rearrangement). The ImmunoFISH index was preferentially positive in the non-GCB subtype (81.3%) compared to the GCB subtype (31.2%) (p<0.001). IgM was recorded as positive in tumor cells in 351/637 (52.4%) DLBCL cases, with preferential expression in the non-GCB subtype, 195 (53.3%), vs the GCB subtype, 100 (32.9%) (p<0.001). MYC was positive in 170/577 (29.5%) cases with a 40% cut-off and in 44/577 (14.2%) cases with a cut-off of 70%. There was no preferential expression of MYC among the GCB or non-GCB subtype (p>0.4) for either cut-off. Progression-free survival (PFS) was significantly worse among patients with a high IPI score (p<0.0001), an IgM-positive tumor (p<0.0001), a MYC-positive tumor with a 40% threshold (p<0.001), a positive ImmunoFISH index (p<0.002), and the non-GCB DLBCL subtype (p<0.0001). Overall survival (OS) was also significantly worse among patients with a high IPI score (p<0.0001), an IgM-positive tumor (p=0.02), a MYC-positive tumor with a 40% threshold (p<0.01), a positive ImmunoFISH index (p=0.02), and the non-GCB DLBCL subtype (p<0.0001). All significant parameters were included in a multivariate analysis using the Cox model and, in addition to IPI, only the GCB/non-GCB subtype according to the Hans algorithm significantly predicted a worse PFS in the non-GCB subgroup (HR 1.9 [1.3-2.8], p=0.002) as well as a worse OS (HR 2.0 [1.3-3.2], p=0.003). This strong prognostic value of non-GCB subtyping was confirmed considering only patients treated with R-CHOP, for PFS (HR 2.1 [1.4-3.3], p=0.001) and for OS (HR 2.3 [1.3-3.8], p=0.002).
Conclusion: Our study on a large series of patients included in trials confirmed the relevance of immunohistochemistry as a useful tool to identify significant prognostic biomarkers for clinical use. We show here that IgM and MYC might be useful prognostic biomarkers. In addition, we confirmed in this series the prognostic value of the ImmunoFISH index. Above all, we fully validated the strong and independent prognostic value of the Hans algorithm, used daily by pathologists to subtype DLBCL.