53 results for virtual topology, decomposition, hex meshing algorithms
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
This paper presents a strategy for solving the WDM optical network planning problem, specifically the problem of Routing and Wavelength Allocation (RWA) with the objective of minimizing the number of wavelengths used; in this form the problem is known as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality with high performance. The key point is to allow the maximum load on the virtual links to degrade in exchange for a reduction in the number of wavelengths used; the objective is to find a good compromise between the virtual-topology metric (load in Gb/s) and the physical-topology metric (number of wavelengths). The simulations yield good results when compared to existing ones in the literature.
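As a minimal sketch (not the authors' implementation; the cost function, neighborhood move and parameters below are assumptions for illustration), the acceptance step at the heart of a Simulated Annealing search for Min-RWA could look like this:

    import math
    import random

    def anneal(initial, neighbor, cost, t0=1.0, cooling=0.95, steps=1000):
        # Generic simulated annealing loop. For Min-RWA, cost(s) would combine
        # the number of wavelengths with a penalty on the maximum virtual-link
        # load, and neighbor(s) might reroute one lightpath (both hypothetical).
        current = best = initial
        t = t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            # accept improvements always; accept worse moves with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
            if cost(current) < cost(best):
                best = current
            t *= cooling  # geometric cooling schedule
        return best

A Tabu Search variant would replace the probabilistic acceptance with a best-admissible move filtered by a tabu list of recently visited configurations.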
Abstract:
Modal filters may be obtained by a properly designed weighted sum of the output signals of an array of sensors distributed on the host structure. Although several research groups have been interested in techniques for designing and implementing modal filters based on a given array of sensors, the effect of the array topology on the effectiveness of the modal filter has received much less attention. In particular, it is known that parameters such as the size, shape and location of a sensor are very important in determining the observability of a vibration mode. Hence, this paper presents a methodology for the topological optimization of an array of sensors in order to maximize the effectiveness of a set of selected modal filters. This is done using a genetic algorithm to select, from an array of 36 piezoceramic sensors regularly distributed on an aluminum plate, the 12 sensors that maximize the filtering performance, over a given frequency range, of a set of modal filters, each one aiming to isolate one of the first vibration modes. The vectors of weighting coefficients for each modal filter are evaluated using QR decomposition of the complex frequency response function matrix. Results show that the array topology is not very important for lower frequencies but greatly affects the filter effectiveness for higher frequencies. Therefore, it is possible to improve the effectiveness and frequency range of a set of modal filters by optimizing the topology of an array of sensors. Indeed, it is shown that, using 12 properly located piezoceramic sensors bonded on an aluminum plate, the frequency range of a set of modal filters may be enlarged by 25-50%.
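For illustration only (matrix names and shapes below are assumed, not taken from the paper), least-squares weighting coefficients for one modal filter can be obtained from a complex FRF matrix via QR factorization along these lines:

    import numpy as np

    def modal_filter_weights(H, target):
        # Least-squares weights w minimizing ||H w - target||, via a thin QR
        # factorization. H is a complex FRF matrix (n_freqs x n_sensors);
        # target is the desired modal response over the same frequency grid.
        Q, R = np.linalg.qr(H)
        return np.linalg.solve(R, Q.conj().T @ target)

    # toy usage with random data standing in for measured FRFs
    rng = np.random.default_rng(0)
    H = rng.standard_normal((200, 12)) + 1j * rng.standard_normal((200, 12))
    target = np.zeros(200)
    target[10] = 1.0  # hypothetical single-mode response to isolate
    w = modal_filter_weights(H, target)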
Abstract:
This paper presents a computational implementation of an evolutionary algorithm (EA) to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. In the proposed EA codification, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed herein. Several selection procedures are examined: tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation to exchange genetic material. The topologies in the initial population are randomly produced so that radial configurations result from the Prim and Kruskal algorithms, which rapidly build minimum spanning trees, as sketched below. (C) 2009 Elsevier B.V. All rights reserved.
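A minimal sketch of the Kruskal-based step (function and variable names are assumptions; the paper's encoding is richer): shuffling the branch list and keeping only loop-free branches yields a random spanning tree, i.e. a radial configuration for the initial population.

    import random

    class DisjointSet:
        # union-find structure used by Kruskal's algorithm
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]  # path halving
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return False  # this branch would close a loop
            self.parent[ra] = rb
            return True

    def random_radial_topology(n_buses, branches):
        # Kruskal on a randomly ordered branch list: the result is a spanning
        # tree of the bus graph, hence a radial operating configuration.
        edges = list(branches)
        random.shuffle(edges)
        ds, tree = DisjointSet(n_buses), []
        for u, v in edges:
            if ds.union(u, v):
                tree.append((u, v))
        return tree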
Abstract:
Load cells are used extensively in engineering fields. This paper describes a novel structural optimization method for single- and multi-axis load cell structures. First, we briefly explain the topology optimization method based on the solid isotropic material with penalization (SIMP) approach. Next, we clarify the mechanical requirements and design specifications of the single- and multi-axis load cell structures, which are formulated as an objective function. In the case of multi-axis load cell structures, a methodology based on singular value decomposition is used. The sensitivities of the objective function with respect to the design variables are then formulated. On the basis of these formulations, an optimization algorithm is constructed using finite element methods and the method of moving asymptotes (MMA). Finally, we examine the characteristics of the optimization formulations and the resulting optimal configurations, confirming the usefulness of the proposed methodology for the optimization of single- and multi-axis load cell structures.
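As a hedged aside on the SVD-based view (the matrix below is invented for illustration; the paper's exact objective may differ), the singular values of a multi-axis load cell's sensitivity matrix expose cross-axis coupling, from which a scalar objective can be built:

    import numpy as np

    # Hypothetical 3x3 sensitivity matrix of a multi-axis load cell:
    # rows = bridge outputs, columns = applied load components.
    S = np.array([[1.00, 0.05, 0.02],
                  [0.04, 0.95, 0.03],
                  [0.01, 0.06, 1.10]])

    U, sigma, Vt = np.linalg.svd(S)
    # A well-decoupled cell has singular values of similar magnitude; the
    # condition number is one candidate scalar objective for the optimizer.
    print("singular values:", sigma)
    print("condition number:", sigma[0] / sigma[-1])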
Abstract:
Electrical impedance tomography is a technique to estimate the impedance distribution within a domain, based on measurements on its boundary. In other words, given the mathematical model of the domain, its geometry and boundary conditions, a nonlinear inverse problem of estimating the electric impedance distribution can be solved. Several impedance estimation algorithms have been proposed to solve this problem. In this paper, we present a three-dimensional algorithm, based on the topology optimization method, as an alternative. With this method, a sequence of linear programming problems, allowing for constraints, is solved; in each iteration, the finite element method provides the electric potential field within the model of the domain. An electrode model is also proposed, thus increasing the accuracy of the finite element results. The algorithm is tested using both numerically simulated and experimental data, and absolute resistivity values are obtained. These results, corresponding to phantoms with two different conductive materials, exhibit relatively well-defined boundaries between them, and show that this is a practical and potentially useful technique for monitoring lung aeration, including the possibility of imaging a pneumothorax.
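One linearized iteration of such a sequential-linear-programming loop might be set up as below (a sketch only: J and r are made-up stand-ins for the FEM sensitivity matrix and measurement residual, and the l1 objective and box constraints are illustrative assumptions):

    import numpy as np
    from scipy.optimize import linprog

    J = np.array([[0.8, 0.1],
                  [0.2, 0.9],
                  [0.5, 0.4]])              # mock sensitivity (Jacobian) matrix
    r = np.array([0.05, -0.02, 0.01])       # mock measurement residual
    m, n = J.shape

    # Minimize the l1 misfit |J d - r| via auxiliary variables t >= |J d - r|,
    # with box constraints keeping the resistivity update d small per iteration.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[J, -np.eye(m)], [-J, -np.eye(m)]])
    b_ub = np.concatenate([r, -r])
    bounds = [(-0.1, 0.1)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    d = res.x[:n]                           # resistivity update for this step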
Abstract:
Several popular Machine Learning techniques are originally designed for two-class problems. However, many classification problems have more than two classes. One approach to dealing with multiclass problems using binary classifiers is to decompose the multiclass problem into multiple binary sub-problems arranged in a binary tree. This approach requires a binary partition of the classes at each node of the tree, which defines the tree structure. This paper presents two algorithms that determine the tree structure taking into account information collected from the dataset used. This approach allows the tree structure to be determined automatically for any multiclass dataset.
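A minimal sketch of one such data-driven decomposition (the centroid-distance heuristic here is an assumption for illustration, not necessarily either of the paper's two algorithms):

    import numpy as np

    def build_class_tree(classes, centroids):
        # Recursively split a set of classes into two groups using distances
        # between class centroids; each internal node would then be handled
        # by one binary classifier (left group vs. right group).
        if len(classes) == 1:
            return classes[0]  # leaf: a single class
        d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
        i, j = np.unravel_index(np.argmax(d), d.shape)  # most distant pair
        left, right = [i], [j]
        for k in range(len(classes)):
            if k not in (i, j):
                (left if d[k, i] <= d[k, j] else right).append(k)
        split = lambda idx: build_class_tree(
            [classes[k] for k in idx], centroids[np.array(idx)])
        return (split(left), split(right))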
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both scenarios, though the applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computer cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
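For orientation (a dense sketch, not the paper's matrix-free variants; the toy residual below is invented), Broyden's "good" method for the interface system F(x) = 0, with x stacking the M flow rates and M pressures, proceeds as follows:

    import numpy as np

    def broyden_solve(F, x0, tol=1e-10, max_iter=50):
        # Broyden's "good" method for F(x) = 0, with a dense approximate
        # Jacobian B updated by a rank-one secant correction at each step.
        x = np.asarray(x0, dtype=float)
        f = F(x)
        B = np.eye(len(x))                 # initial Jacobian approximation
        for _ in range(max_iter):
            if np.linalg.norm(f) < tol:
                break
            dx = np.linalg.solve(B, -f)    # quasi-Newton step
            x_new = x + dx
            f_new = F(x_new)
            B += np.outer(f_new - f - B @ dx, dx) / (dx @ dx)  # secant update
            x, f = x_new, f_new
        return x

    # toy usage: two coupled nonlinear residual equations standing in for
    # the per-coupling-point equations contributed by two sub-networks
    F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
    print(broyden_solve(F, np.array([1.0, 1.0])))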
Abstract:
Cellulose acetates with different degrees of substitution (DS, from 0.6 to 1.9) were prepared from previously mercerized linter cellulose, in a homogeneous medium, using N,N-dimethylacetamide/lithium chloride as the solvent system. The influence of the degree of substitution on the properties of the cellulose acetates was investigated using thermogravimetric analysis (TGA). Quantitative methods were applied to the thermogravimetric curves in order to determine the apparent activation energy (Ea) related to the thermal decomposition of untreated and mercerized celluloses and of the cellulose acetates. Ea values were calculated using Broido's method under dynamic conditions. Ea values of 158 and 187 kJ mol-1 were obtained for untreated and mercerized cellulose, respectively. A previous study showed that C6OH is the most reactive site for acetylation, probably due to the steric hindrance at C2 and C3. C6OH takes part in the first step of cellulose decomposition, leading to the formation of levoglucosan; when it is converted to C6OCOCH3, the results indicate that the mechanism of thermal decomposition changes to one with a lower Ea. A linear correlation between Ea and the DS of the acetates prepared in the present work was identified.
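For reference, Broido's method extracts the apparent activation energy from the slope of a double-logarithmic plot of the residual mass fraction against inverse temperature:

    \ln\left[\ln\left(\frac{1}{y}\right)\right] = -\frac{E_a}{R}\,\frac{1}{T} + \text{const}

where y is the fraction of material not yet decomposed, so Ea follows directly from the slope of ln[ln(1/y)] versus 1/T.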
Abstract:
The thermoanalytical behavior of the sodium and potassium salts of pyrrolidinedithiocarbamate (pyr), piperidinedithiocarbamate (pip), morpholinedithiocarbamate (mor) and hexamethyleneiminedithiocarbamate (hex) was investigated. In the first step, the salts were synthesized and characterized by infrared spectroscopy (FTIR), ¹H and ¹³C nuclear magnetic resonance (NMR) and elemental analysis. Then, thermoanalytical (TG/DTG and DSC) studies were performed in order to evaluate the thermal stability, as well as the pathways of thermal decomposition, based on the intermediate and final decomposition products.
Abstract:
The thermal behavior of two polymorphic forms of rifampicin was studied by DSC and TG/DTG. The thermoanalytical results clearly showed the differences between the two crystalline forms. Polymorph I was the more thermally stable form; its DSC curve showed no fusion event, and the thermal decomposition process occurred around 245 °C. The DSC curve of polymorph II showed two consecutive events, an endothermic one (Tpeak = 193.9 °C) and an exothermic one (Tpeak = 209.4 °C), due to a melting process followed by recrystallization, which was attributed to the conversion of form II into form I. Isothermal and non-isothermal thermogravimetric methods were used to determine the kinetic parameters of the thermal decomposition process. For the non-isothermal experiments, the activation energy (Ea) was derived from the plot of log β vs. 1/T, yielding values for polymorphs I and II of 154 and 123 kJ mol-1, respectively. In the isothermal experiments, Ea was obtained from the plot of ln t vs. 1/T at a constant conversion level. The mean values found for form I and form II were 137 and 144 kJ mol-1, respectively.
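The log β versus 1/T plot is the basis of the Ozawa (Flynn-Wall-Ozawa) isoconversional method, in which, using Doyle's approximation,

    \log \beta \approx \text{const} - 0.4567\,\frac{E_a}{RT}

so that, at a fixed conversion level, Ea is obtained from the slope as Ea = -(slope) x R / 0.4567. (This is the standard relation; the abstract itself does not name the method.)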
Abstract:
Introduction. The ToLigado Project - Your School Interactive Newspaper is an interactive virtual learning environment conceived, developed, implemented and supported by researchers at the School of the Future Research Laboratory of the University of Sao Paulo, Brazil. Method. This virtual learning environment aims to motivate trans-disciplinary research among public school students and teachers in 2,931 schools equipped with Internet-access computer rooms. Within this virtual community, students produce collective multimedia research documents that are immediately published in the portal. The project also aims to increase students' autonomy for research, collaborative work and Web authorship. Main sections of the portal are presented and described. Results. Partial results of the first two years' implementation are presented and indicate a strong motivation among students to produce knowledge despite the fragile hardware and software infrastructure at the time. Discussion. In this new environment, students should be seen as 'knowledge architects' and teachers as facilitators, or 'curiosity managers'. The ToLigado portal may constitute a repository for future studies regarding student attitudes in virtual learning environments, students' behaviour as 'authors', Web authorship involving collective knowledge production, teachers' behaviour as facilitators, and virtual learning environments as digital repositories of students' knowledge construction and social capital in virtual learning communities.
Abstract:
Background and Purpose: Several different methods of teaching laparoscopic skills have been advocated, with virtual reality surgical simulation (VRSS) being the most popular. There is not yet a consensus on its effectiveness in improving surgical performance, however. The purpose of this study was to determine whether practicing surgical skills in a virtual reality simulator results in improved surgical performance. Materials and Methods: Fifteen medical students recruited for the study were divided into three groups. Group I (control) did not receive any VRSS training. For 10 weeks, group II practiced basic laparoscopic skills (camera handling, cutting, peg transfer, and clipping) in a VRSS laparoscopic skills simulator. Group III practiced the same skills and, in addition, performed a simulated cholecystectomy. All students then performed a cholecystectomy in a swine model. Their performance was reviewed by two experienced surgeons, who evaluated the following parameters: gallbladder pedicle dissection time, clipping time, time for cutting the pedicle, gallbladder removal time, total procedure time, and blood loss. Results: With practice, most of the evaluated parameters improved for each individual. There were, however, no statistical differences in any of the evaluated parameters between those who did and did not undergo VRSS training. Conclusion: VRSS training is assumed to be an effective tool for learning and practicing laparoscopic skills, but in this study we could not demonstrate that it resulted in improved surgical performance. It may nonetheless be useful in familiarizing surgeons with laparoscopic surgery. More effective methods of teaching laparoscopic skills should be evaluated to help improve surgical performance.
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 21 (85.2%) and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 <= r <= 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.
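Under the usual definitions (the paper's completeness function may be binned in magnitude, but the per-bin quantities are the standard ones), completeness and contamination for, e.g., the galaxy class are

    \text{completeness} = \frac{N(\text{true galaxies classified as galaxies})}{N(\text{true galaxies})}, \qquad
    \text{contamination} = \frac{N(\text{true stars classified as galaxies})}{N(\text{objects classified as galaxies})}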
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models, and compare their performance with that of the well-known Baldi-Chauvin algorithm. Using the Kullback-Leibler divergence as a measure of generalization, we draw learning curves for these algorithms in simplified situations and compare their performances.
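For reference, the Kullback-Leibler divergence between the true distribution P and the learned distribution Q over sequences x is the standard

    D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \ln \frac{P(x)}{Q(x)}

which vanishes exactly when Q = P, so lower values on the learning curve indicate better generalization.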