960 results for Improved sequential algebraic algorithm


Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

A technique for constructing finite point constellations in n-dimensional spaces from ideals in rings of algebraic integers is described. An algorithm is presented for finding constellations with minimum average energy from a given lattice. For comparison, a numerical table of lattice constellations and group codes is computed for spaces of dimension two, three, and four. © 2001.
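
To illustrate the kind of search such an algorithm performs, here is a minimal sketch. It assumes a lattice given by a generator matrix and takes "average energy" to be the mean squared distance of the constellation points to their centroid; the enumeration bound, the choice of candidate centres, and the example lattice are illustrative assumptions, not the paper's method.

```python
# Sketch: choose M points of a lattice with low average energy.
# Assumptions (illustrative, not the paper's method): the lattice is
# given by a generator matrix G, points are enumerated in a bounded
# integer box, candidate centres are lattice points, and "average
# energy" is the mean squared distance of the points to their centroid.
import itertools
import numpy as np

def lattice_points(G, radius=4):
    """Enumerate points u @ G for integer vectors u in [-radius, radius]^n."""
    n = G.shape[0]
    coeffs = itertools.product(range(-radius, radius + 1), repeat=n)
    return np.array([np.array(u) @ G for u in coeffs])

def best_constellation(G, M, radius=4):
    """Return the minimum-average-energy M-point constellation found."""
    pts = lattice_points(G, radius)
    best, best_energy = None, np.inf
    for centre in pts:
        d = np.sum((pts - centre) ** 2, axis=1)
        chosen = pts[np.argsort(d)[:M]]        # M points nearest the centre
        centroid = chosen.mean(axis=0)
        energy = np.mean(np.sum((chosen - centroid) ** 2, axis=1))
        if energy < best_energy:
            best, best_energy = chosen, energy
    return best, best_energy

# Example: a 16-point constellation from the hexagonal lattice A2.
G = np.array([[1.0, 0.0], [0.5, np.sqrt(3) / 2]])
constellation, energy = best_constellation(G, M=16)
print(energy)
```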

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of processing biological signals, such as cardiac beats, audio, and ultrasonic-range data, calculating wavelet coefficients in real time with the processor clock running at the frequencies of present-day ASICs and FPGAs. The Parallel Filter Architecture for the DWT has been improved, calculating wavelet coefficients in real time with the hardware reduced to 60% of its original size. The new architecture, which also processes the IDWT, is implemented with either Radix-2 or Booth-Wallace constant multipliers. Including series memory register banks, a single-integrated-circuit signal analyzer for the ultrasonic range is presented.
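
As a point of reference for the computation being accelerated, the following is a minimal sketch of one DWT level as a two-channel filter bank; the db2 filter pair and the plain convolve-and-downsample formulation are assumptions for illustration, not the paper's fixed-point hardware design.

```python
# Sketch: one level of the DWT as a two-channel filter bank, the core
# computation the parallel hardware architecture accelerates. The db2
# (Daubechies-2) filter pair is assumed for illustration; the paper's
# filters and fixed-point details are not specified in the abstract.
import numpy as np

H0 = np.array([0.48296, 0.83652, 0.22414, -0.12941])    # db2 low-pass
H1 = np.array([-0.12941, -0.22414, 0.83652, -0.48296])  # db2 high-pass

def dwt_level(x):
    """Filter, then keep every second sample (downsample by 2)."""
    approx = np.convolve(x, H0)[1::2]   # approximation coefficients
    detail = np.convolve(x, H1)[1::2]   # detail coefficients
    return approx, detail

# Example on a short test signal.
x = np.sin(2 * np.pi * 0.05 * np.arange(64))
approx, detail = dwt_level(x)
print(approx.shape, detail.shape)
```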

Relevance:

30.00%

Publisher:

Abstract:

Optimized allocation of Phasor Measurement Units (PMUs) enables control, monitoring, and accurate operation of electric power distribution systems, improving reliability and service quality. Fault location techniques based on voltage measurements have produced good, consistent results for transmission systems. Building on these techniques and on optimized PMU allocation, it is possible to develop a fault locator for electric power distribution systems that provides accurate results. The PMU allocation problem has a combinatorial character, involving both the number of devices that can be allocated and the candidate locations for placing them. A tabu search algorithm is proposed to carry out the PMU allocation. Applied to a real-life 141-bus urban distribution feeder, the technique significantly improved the fault location results. © 2004 IEEE.
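
A minimal sketch of tabu search applied to PMU placement follows; the coverage objective, the swap neighbourhood, and the tenure value are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch of tabu search for PMU placement. The coverage objective,
# swap neighbourhood and tenure are illustrative assumptions; here a
# PMU is taken to observe its own bus and its immediate neighbours.
import random

def coverage(solution, adjacency):
    """Number of buses observed by the placed PMUs."""
    observed = set()
    for bus in solution:
        observed.add(bus)
        observed.update(adjacency[bus])
    return len(observed)

def tabu_search(adjacency, n_pmus, iterations=200, tenure=7, seed=1):
    random.seed(seed)
    buses = list(adjacency)
    current = set(random.sample(buses, n_pmus))
    best, best_val = set(current), coverage(current, adjacency)
    tabu = {}                    # swap move -> iteration it stays banned
    for it in range(iterations):
        moves = [(b_out, b_in) for b_out in current
                 for b_in in buses if b_in not in current]
        def value(move):
            b_out, b_in = move
            return coverage((current - {b_out}) | {b_in}, adjacency)
        for move in sorted(moves, key=value, reverse=True):
            # Aspiration: a tabu move is allowed if it beats the best.
            if tabu.get(move, -1) < it or value(move) > best_val:
                current = (current - {move[0]}) | {move[1]}
                tabu[(move[1], move[0])] = it + tenure  # ban reverse swap
                break
        val = coverage(current, adjacency)
        if val > best_val:
            best, best_val = set(current), val
    return best, best_val

# Tiny example: a 6-bus feeder as an adjacency map, 2 PMUs to place.
feeder = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2, 6], 6: [5]}
print(tabu_search(feeder, n_pmus=2))
```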

Relevance:

30.00%

Publisher:

Abstract:

An improvement to a quality two-dimensional Delaunay mesh generation algorithm, combining the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral-lens criterion, introduced by L. P. Chew, to eliminate extremely obtuse triangles at the mesh boundary. The method splits the boundary segments to obtain an initial pre-refinement, thereby reducing the number of iterations needed to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement. © 2008 IEEE.
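
A minimal sketch of the diametral-lens encroachment test follows, assuming the usual angle formulation (a point lies inside the lens of a segment when it subtends more than 120 degrees at the segment, versus 90 degrees for the diametral circle); the paper's exact splitting rule is not given in the abstract.

```python
# Sketch of the diametral-lens encroachment test used to decide whether
# a boundary segment must be split. A point p lies inside the diametral
# lens of segment (a, b) when the angle a-p-b exceeds a threshold
# (120 degrees for Chew's lens; 90 degrees gives the diametral circle).
import math

def in_diametral_lens(p, a, b, threshold_deg=120.0):
    """True if p subtends an angle greater than the threshold at (a, b)."""
    ux, uy = a[0] - p[0], a[1] - p[1]
    vx, vy = b[0] - p[0], b[1] - p[1]
    nu, nv = math.hypot(ux, uy), math.hypot(vx, vy)
    if nu == 0 or nv == 0:           # p coincides with an endpoint
        return False
    cos_angle = (ux * vx + uy * vy) / (nu * nv)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle > threshold_deg

# A point near the midpoint encroaches; a distant point does not.
print(in_diametral_lens((0.5, 0.1), (0.0, 0.0), (1.0, 0.0)))  # True
print(in_diametral_lens((0.5, 2.0), (0.0, 0.0), (1.0, 0.0)))  # False
```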

Relevance:

30.00%

Publisher:

Abstract:

A significant proportion (up to 62%) of oral squamous cell carcinomas (OSCCs) may arise from oral potentially malignant lesions (OPMLs), such as leukoplakia. Patient outcomes may thus be improved through detection of lesions at risk of malignant transformation, by identifying and categorizing genetic changes in sequential, progressive OPMLs. We conducted array comparative genomic hybridization analysis of 25 sequential, progressive OPMLs and same-site OSCCs from five patients. Recurrent DNA copy number gains were identified on 1p in 20/25 cases (80%), with minimal, high-level amplification regions on 1p35 and 1p36. Other regions of gain were frequently observed: 11q13.4 (68%), 9q34.13 (64%), 21q22.3 (60%), 6p21 and 6q25 (56%), and 10q24, 19q13.2, 22q12, 5q31.2, 7p13 and 14q22 (48%). DNA losses were observed in 20% of samples and were mainly detected on 5q31.2 (35%), 16p13.2 (30%), 9q33.1 and 9q33.2 (25%), and 17q11.2, 3p26.2, 18q21.1, 4q34.1 and 8p23.2 (20%). Such copy number alterations (CNAs) were mapped in all grades of dysplasia that progressed, and in their corresponding OSCCs, in 70% of patients, indicating that these CNAs may be associated with disease progression. Amplified genes mapping within recurrent CNAs (KHDRBS1, PARP1, RAB1A, HBEGF, PAIP2, BTBD7) were selected for validation, by quantitative real-time PCR, in an independent set of 32 progressive leukoplakia, 32 OSCC and 21 non-progressive leukoplakia samples. Amplification of BTBD7, KHDRBS1, PARP1 and RAB1A was exclusively detected in progressive leukoplakia and the corresponding OSCCs; BTBD7, KHDRBS1, PARP1 and RAB1A may therefore be associated with OSCC progression. Protein-protein interaction networks were created to identify possible pathways associated with OSCC progression.

Relevance:

30.00%

Publisher:

Abstract:

The increasing number of sequences stored in genomic databases has made purely sequential analysis infeasible. Parallel computing has therefore been brought to bear on bioinformatics, through parallel algorithms for aligning and analyzing sequences that improve, above all, the running time of these analyses. In many situations, the parallel strategy also helps reduce the computational cost of large problems. This work presents results obtained with an implementation of a parallel score-estimating technique for the score matrix calculation stage, the first stage of a progressive multiple sequence alignment. The performance and quality of the parallel score estimation are compared with those of a dynamic programming approach, also implemented in parallel. The comparison shows a significant reduction in running time. Moreover, the quality of the final alignment produced with the new strategy is analyzed and compared with the quality of the dynamic programming approach.
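
A minimal sketch of the parallel score matrix stage follows; the shared-k-mer estimator is a stand-in (the abstract does not specify the paper's estimating technique), and the point is the embarrassingly parallel distribution of the pairwise computations.

```python
# Sketch of the score matrix stage computed in parallel. The estimator
# below (shared k-mer counting) is a stand-in for the paper's score
# estimating technique; it replaces full dynamic programming with a
# cheap approximation of pairwise similarity.
from itertools import combinations
from multiprocessing import Pool

def kmer_set(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def estimate_score(pair):
    """Approximate similarity: number of shared k-mers."""
    s1, s2 = pair
    return len(kmer_set(s1) & kmer_set(s2))

def score_matrix(sequences, workers=4):
    pairs = list(combinations(range(len(sequences)), 2))
    with Pool(workers) as pool:
        scores = pool.map(
            estimate_score,
            [(sequences[i], sequences[j]) for i, j in pairs])
    matrix = [[0] * len(sequences) for _ in sequences]
    for (i, j), s in zip(pairs, scores):
        matrix[i][j] = matrix[j][i] = s
    return matrix

if __name__ == "__main__":
    seqs = ["ACGTACGT", "ACGTTGCA", "TTGCAACG", "GGGGCCCC"]
    print(score_matrix(seqs))
```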

Relevance:

30.00%

Publisher:

Abstract:

Active machine learning algorithms are used when large numbers of unlabeled examples are available and obtaining labels for them is costly (e.g., requiring consultation with a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We propose a new active learning algorithm that balances such exploration with refinement of the decision boundary by dynamically adjusting the probability of exploring at each step. Our experimental results demonstrate improved performance on data sets that require extensive exploration, while remaining competitive on data sets that do not. Our algorithm also shows significant tolerance to noise.
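
A minimal sketch of the explore-versus-refine loop follows; the decaying exploration probability is a stand-in for the paper's dynamic adjustment rule, and the logistic model, seeding strategy, and margin criterion are illustrative assumptions.

```python
# Sketch of exploration-balanced active learning. With probability
# explore_p a random unlabeled point is queried (exploration); otherwise
# the point closest to the decision boundary is queried (refinement).
# The decay schedule is a stand-in for the paper's dynamic adjustment.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learn(X, oracle, n_queries=30, explore_p=0.5, decay=0.95):
    rng = np.random.default_rng(0)
    y = {}
    while len(set(y.values())) < 2:       # seed until both classes appear
        i = int(rng.integers(len(X)))
        y[i] = oracle(X[i])
    model = LogisticRegression()
    for _ in range(n_queries):
        labeled = sorted(y)
        model.fit(X[labeled], [y[i] for i in labeled])
        pool = [i for i in range(len(X)) if i not in y]
        if not pool:
            break
        if rng.random() < explore_p:      # explore a new region
            pick = int(rng.choice(pool))
        else:                             # refine the decision boundary
            margin = np.abs(model.predict_proba(X[pool])[:, 1] - 0.5)
            pick = pool[int(np.argmin(margin))]
        y[pick] = oracle(X[pick])
        explore_p *= decay                # shift from exploring to refining
    return model
```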

Relevance:

30.00%

Publisher:

Abstract:

This paper provides an improved NSGA-II (Non-Dominated Sorting Genetic Algorithm, version II) that incorporates a parameter-free self-tuning approach based on a reinforcement learning technique, called the Non-Dominated Sorting Genetic Algorithm Based on Reinforcement Learning (NSGA-RL). The proposed method is compared in particular with the classical NSGA-II when applied to a satellite coverage problem. Furthermore, the optimization results are not only compared with those obtained by other multiobjective optimization methods; the approach also offers the advantage of avoiding time-consuming and complex parameter tuning.
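
As a sketch of the self-tuning idea, the following epsilon-greedy bandit (a stand-in; the paper's exact reinforcement learning scheme is not given in the abstract) selects crossover and mutation rates each generation and is rewarded when the generation improves a quality indicator such as hypervolume.

```python
# Sketch of the self-tuning idea: a simple epsilon-greedy bandit
# (a stand-in for the paper's exact reinforcement learning scheme)
# picks crossover/mutation rates each generation and is rewarded when
# the generation improves a quality indicator.
import random

ACTIONS = [(0.6, 0.01), (0.8, 0.05), (0.9, 0.10)]  # (crossover, mutation)

class RateTuner:
    def __init__(self, epsilon=0.1):
        self.q = [0.0] * len(ACTIONS)   # estimated reward per action
        self.n = [0] * len(ACTIONS)
        self.epsilon = epsilon
        self.last = None

    def choose(self):
        if random.random() < self.epsilon:
            self.last = random.randrange(len(ACTIONS))
        else:
            self.last = max(range(len(ACTIONS)), key=lambda a: self.q[a])
        return ACTIONS[self.last]

    def reward(self, r):
        a = self.last
        self.n[a] += 1
        self.q[a] += (r - self.q[a]) / self.n[a]  # incremental mean

# Pseudo-usage inside the GA loop:
#   pc, pm = tuner.choose()
#   ... run one NSGA-II generation with rates pc, pm ...
#   tuner.reward(new_hypervolume - old_hypervolume)
```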

Relevance:

30.00%

Publisher:

Abstract:

Piezoelectric ceramics, such as PZT, can generate subnanometric displacements, but in order to generate multi-micrometric displacements they must either be driven by high electric voltages (hundreds of volts), operate at a mechanical resonance frequency (in a narrow band), or have large dimensions (tens of centimeters). A piezoelectric flextensional actuator (PFA) is a device with small dimensions that can be driven by reduced voltages and can operate on the nano- and microscales. Interferometric techniques are well suited to the characterization of these devices, because there is no mechanical contact in the measurement process, and they offer high sensitivity, bandwidth and dynamic range. A low-cost, open-loop homodyne Michelson interferometer is used in this work to experimentally detect the nanovibrations of PFAs, based on spectral analysis of the interferometric signal. Building on the well-known J1...J4 phase demodulation method, a new and improved version is proposed with the following characteristics: it is direct and self-consistent, is immune to fading, and does not present phase ambiguity problems. The proposed method has a resolution similar to that of the modified J1...J4 method (0.18 rad); however, unlike the former, its dynamic range is 20% larger, it does not demand Bessel-function algebraic-sign correction algorithms, and there are no singularities when the static phase shift between the interferometer arms is equal to an integer multiple of π/2 rad. Electronic noise and random phase drifts due to ambient perturbations are taken into account in the analysis of the method. The PFA nanopositioner characterization was based on the analysis of linearity between the applied voltage and the resulting displacement, on the displacement frequency response, and on the determination of the main resonance frequencies.
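
For context on the method being improved: the classical J1...J4 relation recovers the modulation depth x from the first four harmonic amplitudes of the interferometric signal via x = sqrt(24 V2 V3 / ((V1 + V3)(V2 + V4))), which follows from the Bessel recurrence and cancels the fading-dependent factors. The sketch below demonstrates it on a simulated signal; working with magnitude spectra is sign-blind, which is precisely the kind of limitation the proposed improved method removes.

```python
# Sketch of the classical J1...J4 phase demodulation on a simulated
# homodyne signal. From the Bessel recurrence
#   J(n-1, x) + J(n+1, x) = (2n / x) J(n, x),
# the modulation depth obeys x = sqrt(24 V2 V3 / ((V1+V3)(V2+V4))),
# where Vn is the n-th harmonic amplitude; the fading factors sin(phi0)
# and cos(phi0) cancel in the ratio. Magnitude spectra are sign-blind,
# so this plain version is valid only while J1..J4 are all positive
# (x below about 3.8 rad).
import numpy as np

fs, f0, x_true, phi0 = 100_000, 1_000, 2.0, 0.7
N = 5000                      # chosen so f0 falls exactly on bin f0*N/fs
t = np.arange(N) / fs
# Homodyne interferometer output: I = A + B*cos(x*sin(wt) + phi0)
signal = 1.0 + 0.8 * np.cos(x_true * np.sin(2 * np.pi * f0 * t) + phi0)

spectrum = np.abs(np.fft.rfft(signal))
bin0 = int(f0 * N / fs)       # fundamental bin (= 50 here)
V1, V2, V3, V4 = (spectrum[n * bin0] for n in (1, 2, 3, 4))

x_est = np.sqrt(24 * V2 * V3 / ((V1 + V3) * (V2 + V4)))
print(f"true x = {x_true}, estimated x = {x_est:.3f}")
```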

Relevance:

30.00%

Publisher:

Abstract:

Process algebraic architectural description languages provide a formal means for modeling software systems and assessing their properties. In order to bridge the gap between system modeling and system implementation, in this thesis an approach is proposed for automatically generating multithreaded object-oriented code from process algebraic architectural descriptions, in a way that preserves, under certain assumptions, the properties proved at the architectural level. The approach is divided into three phases, which are illustrated by means of a running example based on an audio processing system. First, we develop an architecture-driven technique for thread coordination management, which is completely automated through a suitable package. Second, we address the translation of the algebraically-specified behavior of the individual software units into thread templates, which have to be filled in by the software developer according to certain guidelines. Third, we discuss performance issues related to the suitability of synthesizing monitors rather than threads from software unit descriptions that satisfy specific constraints. In addition to the running example, we present two case studies, about a video animation repainting system and the implementation of a leader election algorithm, in order to summarize the whole approach. The outcome of this thesis is the implementation of the proposed approach in a translator called PADL2Java and its integration in the architecture-centric verification tool TwoTowers.
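
A minimal sketch of the thread-template idea follows, written in Python as a stand-in for the Java code that PADL2Java actually emits; the queue-based coordination skeleton and the pipeline wiring are illustrative assumptions.

```python
# Minimal sketch (Python stand-in for the Java emitted by PADL2Java) of
# a generated thread template: the coordination skeleton mirrors the
# actions of the process-algebraic description, while the body marked
# "fill in" is completed by the developer following the guidelines.
import queue
import threading

class SoftwareUnitThread(threading.Thread):
    """Template for one software unit; queues mirror its action set."""

    def __init__(self, inbox: queue.Queue, outbox: queue.Queue):
        super().__init__(daemon=True)
        self.inbox = inbox       # input actions of the unit
        self.outbox = outbox     # output actions of the unit

    def run(self):
        while True:
            item = self.inbox.get()      # synchronize on an input action
            if item is None:             # termination signal
                break
            result = self.process(item)
            self.outbox.put(result)      # synchronize on an output action

    def process(self, item):
        # --- fill in: behavior of the unit between its actions ---
        return item

# Usage: wire a unit into a pipeline, as an architectural topology would.
a_to_b, b_out = queue.Queue(), queue.Queue()
unit = SoftwareUnitThread(a_to_b, b_out)
unit.start()
a_to_b.put("sample")
print(b_out.get())
a_to_b.put(None)
```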

Relevance:

30.00%

Publisher:

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications, but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to increase the convergence basin of the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may suffice for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics. A mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
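
A minimal sketch of the memetic (global-plus-local) optimization scheme follows; the quadratic cost function is a pure placeholder for the real image-to-model similarity metric, and the population sizes and mutation scale are illustrative assumptions.

```python
# Sketch of the unsupervised memetic idea for 6-DoF pose estimation:
# a small evolutionary population explores the search domain globally,
# and each candidate is refined by a local optimizer. The cost function
# here is a placeholder; in the real method it measures the mismatch
# between projected 3D-model contours and the fluoroscopic image.
import numpy as np
from scipy.optimize import minimize

def cost(pose):
    """Placeholder similarity metric (lower is better)."""
    target = np.array([1.0, -2.0, 0.5, 10.0, -5.0, 3.0])  # hypothetical
    return float(np.sum((pose - target) ** 2))

def memetic_search(bounds, pop_size=20, generations=15, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 6))
    for _ in range(generations):
        # Local refinement (the "memetic" step) on every individual.
        pop = np.array([minimize(cost, p, method="Nelder-Mead",
                                 options={"maxiter": 50}).x for p in pop])
        scores = np.array([cost(p) for p in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]
        # Mutate the survivors to keep exploring globally.
        children = parents + rng.normal(0, 0.5, size=parents.shape)
        pop = np.vstack([parents, children])
    return pop[np.argmin([cost(p) for p in pop])]

best = memetic_search(bounds=(-20.0, 20.0))
print(np.round(best, 2))
```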

Relevance:

30.00%

Publisher:

Abstract:

In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, partially applying graph theory, algebraic geometry and number theory. The three main topics are the graph theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to be present in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
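
A minimal sketch of the matrix-tree construction described above: weight each edge by the inverse of its Feynman parameter, build the graph Laplacian, take the determinant of a principal minor, and rescale by the product of all parameters. The one-loop bubble graph serves as a check, for which the first Symanzik polynomial is x1 + x2.

```python
# Sketch of the matrix-tree construction of the first Symanzik
# polynomial U: weight each edge e by 1/x_e, assemble the graph
# Laplacian, delete one row and column, take the determinant, and
# multiply by the product of all x_e. Checked on the one-loop bubble
# graph (two vertices joined by two edges), where U = x1 + x2.
import sympy as sp

def symanzik_u(n_vertices, edges):
    """edges: list of (u, v, x) with vertex indices and the edge's symbol."""
    L = sp.zeros(n_vertices, n_vertices)
    for u, v, x in edges:
        w = 1 / x
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    minor = L[1:, 1:]                       # delete row 0 and column 0
    prod_x = sp.Mul(*(x for _, _, x in edges))
    return sp.simplify(minor.det() * prod_x)

x1, x2 = sp.symbols("x1 x2", positive=True)
bubble = [(0, 1, x1), (0, 1, x2)]
print(symanzik_u(2, bubble))                # prints x1 + x2
```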

Relevance:

30.00%

Publisher:

Abstract:

Recent research has shown that a portfolio of (possibly on-average slower) algorithms can significantly outperform a single, arbitrarily efficient algorithm. Within the Constraint Programming (CP) context, a portfolio solver can be seen as a particular constraint solver that exploits the synergy between the constituent solvers of its portfolio to predict which is (or which are) the best solver(s) to run on a new, unseen instance. In this thesis we examine the benefits of portfolio solvers in CP. Although portfolio approaches have been extensively studied for Boolean Satisfiability (SAT) problems, in the more general CP field these techniques have been only marginally studied and used. We conducted this work through the investigation, analysis and construction of several portfolio approaches for solving both satisfaction and optimization problems. We focused in particular on sequential approaches, i.e., single-threaded portfolio solvers always running on the same core. We started from a first empirical evaluation of portfolio approaches for solving Constraint Satisfaction Problems (CSPs), and then improved on it by introducing new data, solvers, features, algorithms, and tools. Afterwards, we addressed the more general Constraint Optimization Problems (COPs) by implementing and testing a number of models for dealing with COP portfolio solvers. Finally, we came full circle by developing sunny-cp: a sequential CP portfolio solver that turned out to be competitive also in the MiniZinc Challenge, the reference competition for CP solvers.
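
A minimal sketch in the spirit of a k-NN-based sequential portfolio scheduler follows; the feature space, the proportional time slicing, and the solver names are illustrative assumptions, not the exact algorithm of sunny-cp.

```python
# Sketch of a sequential portfolio scheduler in the spirit of sunny-cp's
# k-NN approach (details here are illustrative, not the thesis's exact
# algorithm): find the k training instances most similar to the new one
# and allocate each solver a time slice proportional to how many of
# those neighbours it solved. Solvers run one after another (sequential,
# single-threaded).
import numpy as np

def schedule(features, train_feats, train_solved, solvers,
             timeout=1800, k=10):
    """train_solved[i] is the set of solvers that solved instance i."""
    dist = np.linalg.norm(train_feats - features, axis=1)
    neighbours = np.argsort(dist)[:k]
    wins = {s: sum(s in train_solved[i] for i in neighbours)
            for s in solvers}
    total = sum(wins.values()) or 1
    ordered = sorted(solvers, key=lambda s: -wins[s])  # best solver first
    return [(s, timeout * wins[s] / total) for s in ordered if wins[s]]

# Hypothetical data: 3 solvers, 5 training instances with 2 features.
train_feats = np.array([[1, 2], [1, 3], [4, 0], [5, 1], [2, 2]])
train_solved = [{"gecode"}, {"gecode", "chuffed"}, {"or-tools"},
                {"or-tools"}, {"chuffed"}]
print(schedule(np.array([1.5, 2.5]), train_feats, train_solved,
               ["gecode", "chuffed", "or-tools"], k=3))
```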

Relevance:

30.00%

Publisher:

Abstract:

The effect on rat lung allograft rejection of prolonged electroporation-mediated human interleukin-10 (hIL-10) overexpression, started 24 hours before transplantation and combined with sequential overexpression of human hepatocyte growth factor (HGF) in skeletal muscle on day 5, was evaluated. Left lung allotransplantation was performed from Brown-Norway to Fischer-F344 rats. Gene transfer into skeletal muscle was enhanced by electroporation. Three groups were studied: group I animals (n = 5) received 2.5 μg pCIK-hIL-10 (hIL-10/CMV [cytomegalovirus] early promoter enhancer) on day -1 and 80 μg pCIK-HGF (HGF/CMV early promoter enhancer) on day 5. Group II animals (n = 4) received 2.5 μg pCIK-hIL-10 and pUbC-hIL-10 (hIL-10/pUbC promoter) on day -1. Control group III animals (n = 4) were treated by sham electroporation on days -1 and 5. All animals received a daily nontherapeutic intraperitoneal dose of cyclosporin A (2.5 mg/kg) and were sacrificed on day 15. Graft oxygenation and allograft rejection were evaluated. Significant differences were found between study groups in graft oxygenation (PaO2) (P = .0028; group I vs. groups II and III, P < .01 each). PaO2 was low in group II (31 ± 1 mm Hg) and in group III controls (34 ± 10 mm Hg), with no statistically significant difference between these two groups (P = .54). In contrast, in group I, the PaO2 of recipients sequentially transduced with IL-10 and HGF plasmids was much improved, at 112 ± 39 mm Hg (vs. groups II and III, P < .01 each), paralleled by reduced vascular and bronchial rejection (group I vs. groups II and III, P < .021 each). Sequential overexpression of the anti-inflammatory cytokine IL-10, followed by overlapping HGF overexpression on day 5, preserves lung function and reduces acute lung allograft rejection up to day 15 post-transplant, as compared to prolonged IL-10 overexpression alone.