974 results for Symmetry Ratio Algorithm


Relevance:

100.00%

Abstract:

Study Design. Development of an automatic measurement algorithm and comparison with manual measurement methods. Objectives. To develop a new computer-based method for automatic measurement of vertebral rotation in idiopathic scoliosis from computed tomography images and to compare the automatic method with two manual measurement techniques. Summary of Background Data. Techniques have been developed for vertebral rotation measurement in idiopathic scoliosis using plain radiographs, computed tomography, or magnetic resonance images. All of these techniques require manual selection of landmark points and are therefore subject to interobserver and intraobserver error. Methods. We developed a new method for automatic measurement of vertebral rotation in idiopathic scoliosis using a symmetry ratio algorithm. The automatic method provided values comparable with Aaro's and Ho's manual measurement methods for a set of 19 transverse computed tomography slices through apical vertebrae, and with Aaro's method for a set of 204 reformatted computed tomography images through vertebral endplates. Results. Confidence intervals (95%) for intraobserver and interobserver variability using manual methods were in the range 5.5 to 7.2 degrees. The mean (± SD) difference between automatic and manual rotation measurements for the 19 apical images was -0.5 ± 3.3 degrees for Aaro's method and 0.7 ± 3.4 degrees for Ho's method. The mean (± SD) difference between automatic and manual rotation measurements for the 204 endplate images was 0.25 ± 3.8 degrees. Conclusions. The symmetry ratio algorithm allows automatic measurement of vertebral rotation in idiopathic scoliosis without intraobserver or interobserver error due to landmark point selection.
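The abstract does not spell out the algorithm's details, but the underlying idea — score how mirror-symmetric an axial slice is for each candidate symmetry-axis angle, and take the angle that maximizes the score — can be sketched as follows. This is an illustrative stand-in, not the authors' implementation; the overlap-based score and the nearest-neighbour rotation are assumptions.

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbour rotation of a 2-D array about its centre."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: where each output pixel samples the source image
    sx = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    sy = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    inside = (sx >= 0) & (sx <= w - 1) & (sy >= 0) & (sy <= h - 1)
    out = img[np.clip(np.round(sy).astype(int), 0, h - 1),
              np.clip(np.round(sx).astype(int), 0, w - 1)]
    return out * inside  # zero out samples that fell outside the frame

def symmetry_score(img, deg):
    """Overlap between the slice and its left-right mirror, after rotating
    the candidate symmetry axis to the vertical."""
    r = rotate_nn(img, deg)
    m = r[:, ::-1]
    return np.minimum(r, m).sum() / (np.maximum(r, m).sum() + 1e-12)

def estimate_rotation(img, angles=np.arange(-30.0, 30.5, 0.5)):
    """Rotation estimate: the angle at which mirror symmetry is maximal."""
    scores = [symmetry_score(img, a) for a in angles]
    return -angles[int(np.argmax(scores))]
```

Because the estimate comes from a whole-image symmetry score rather than from operator-selected landmarks, it carries no interobserver or intraobserver landmark error, which is the point made in the conclusions.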

Relevance:

80.00%

Abstract:

Work presented in the context of the European Master in Computational Logics, as a partial requirement for graduation as Master in Computational Logics

Relevance:

80.00%

Abstract:

Our objective is to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%). The latter finding had a maximal area under the ROC curve of 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%). The former finding had a maximal area under the ROC curve of 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs, while the V2 transition ratio, and the V2 R wave duration and R/S wave amplitude indices are the most sensitive and specific algorithms, respectively. Amongst all of the patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the maximum area under the ROC curve.
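Of the three indices, the V2 transition ratio is the simplest to express numerically: it compares the percentage R-wave in lead V2 during the arrhythmia with the same quantity during sinus rhythm. The sketch below follows the published description of that index; the argument names and units (amplitudes in mV) are illustrative, and the decision cutoff is not taken from this abstract.

```python
def v2_transition_ratio(r_vt, s_vt, r_sinus, s_sinus):
    """V2 transition ratio: [R/(R+S)] in lead V2 during the outflow-tract
    arrhythmia, divided by [R/(R+S)] in V2 during sinus rhythm.
    Amplitudes in mV; argument names are illustrative."""
    pct_vt = r_vt / (r_vt + s_vt)        # percentage R-wave during the VA
    pct_sinus = r_sinus / (r_sinus + s_sinus)  # percentage R-wave in sinus
    return pct_vt / pct_sinus
```

In the index's original description, larger values (a cutoff around 0.6 is commonly quoted) were taken to suggest a left-sided outflow-tract origin; the exact threshold should be checked against the cited source.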

Relevance:

80.00%

Abstract:

Objective: To compare the effects of treadmill training with partial body-weight support (TPBWS) and the Proprioceptive Neuromuscular Facilitation (PNF) method on the gait of subjects with chronic stroke. Design: Quasi-experimental study. Setting: Laboratory research. Participants: Twenty-three subjects (13 men and 10 women), with a mean age of 56.7 ± 8.0 years and a mean time since stroke onset of 27.7 ± 20.3 months, able to walk with personal assistance or assistive devices. Interventions: Two experimental groups underwent gait training based on the PNF method (PNF group, n=11) or using the TPBWS - Gait Trainer System 2, Biodex, USA (TPBWS group, n=12), in three weekly sessions for four weeks. Measures: Evaluation of motor function - using the Stroke Rehabilitation Assessment of Movement (STREAM) and the motor subscale of the Functional Independence Measure (motor FIM) - and kinematic gait analysis with the Qualisys system (Qualisys Medical AB, Gothenburg, Sweden) were carried out before and after the interventions. Results: Increases in STREAM scores (F=49.189, P<0.001) and motor FIM scores (F=7.093, P=0.016), as well as improvement in the symmetry ratio (F=7.729, P=0.012), were observed for both groups. Speed, stride length, and double-support time showed no change after training. Differences between groups were observed only for maximum ankle dorsiflexion during the swing phase (F=6.046, P=0.024), which increased for the PNF group. Other angular parameters remained unchanged. Conclusion: Improvement in motor function and gait symmetry was observed in both groups, suggesting the interventions are similar in effect. The cost-effectiveness of each treatment should be considered when choosing between them.
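The abstract does not give its exact formula for the symmetry ratio, but such ratios are typically built by comparing a spatiotemporal parameter (step length, single-support time, etc.) between the two sides. A minimal sketch, under that assumption:

```python
def symmetry_ratio(side_a, side_b):
    """Symmetry ratio of a spatiotemporal gait parameter measured on the
    two sides (e.g. paretic vs. non-paretic step length): smaller value
    over larger value, so 1.0 means perfect symmetry. Illustrative
    definition only; the study's exact formula is not in the abstract."""
    lo, hi = sorted((side_a, side_b))
    return lo / hi
```

With this bounded form, an improvement after training shows up as the ratio moving toward 1.0, matching the direction of the reported group effect.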

Relevance:

40.00%

Abstract:

Traditional Monte Carlo simulations of QCD in the presence of a baryon chemical potential are plagued by the complex phase problem, and new numerical approaches are necessary for studying the phase diagram of the theory. In this work we consider a ℤ3 Polyakov loop model for the deconfining phase transition in QCD and discuss how a flux representation of the model in terms of dimer and monomer variables solves the complex action problem. We present results of numerical simulations using a worm algorithm for the specific heat and the two-point correlation function of Polyakov loops. Evidence of a first-order deconfinement phase transition is discussed. © 2013 American Institute of Physics.
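The specific heat quoted above is conventionally obtained from energy fluctuations in the Monte Carlo stream. The snippet below shows only that standard fluctuation estimator, not the worm-algorithm updates or the ℤ3 flux model themselves:

```python
import numpy as np

def specific_heat(energies, temperature):
    """Fluctuation estimator C = (<E^2> - <E>^2) / T^2, applied to a
    sequence of energy measurements from a Monte Carlo run. Standard
    estimator; autocorrelation/error analysis is omitted here."""
    e = np.asarray(energies, dtype=float)
    return (np.mean(e * e) - np.mean(e) ** 2) / temperature ** 2
```

Near a first-order transition this quantity develops a peak that sharpens with volume, which is one of the signals the study looks for.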

Relevance:

40.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

40.00%

Abstract:

Emerging vehicular comfort applications pose a completely new set of requirements, such as maintaining end-to-end connectivity, packet routing, and reliable communication for Internet access while on the move. One of the biggest challenges is to provide good quality of service (QoS), such as low packet delay, while coping with fast topological changes. In this paper, we propose a clustering algorithm based on the minimal path loss ratio (MPLR), which should improve spectrum efficiency and reduce data congestion in the network. The vehicular nodes that experience minimal path loss are selected as the cluster heads. The performance of the MPLR clustering algorithm is evaluated by the rate of change of cluster heads, the average number of clusters, and the average cluster size. Vehicular traffic models derived from the Traffic Wales data are fed as input to the motorway simulator. A mathematical analysis for the rate of change of cluster heads is derived, which validates the MPLR algorithm and is compared with the simulated results. The mathematical and simulated results are in good agreement, indicating the stability of the algorithm and the accuracy of the simulator. The MPLR system is also compared with a V2R system, with the MPLR system performing better. © 2013 IEEE.
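The cluster-head rule stated in the abstract — the node experiencing minimal path loss becomes cluster head — can be sketched as below. The data layout (a mapping from node id to the per-neighbour path losses in dB) and the use of the *average* loss as the ranking score are assumptions for illustration; the paper's own loss values come from its motorway simulator.

```python
def pick_cluster_head(path_loss_db):
    """MPLR-style cluster-head choice: return the node whose average path
    loss (dB) to its neighbours is minimal. path_loss_db maps
    node id -> list of per-neighbour path losses; layout is assumed."""
    return min(path_loss_db,
               key=lambda n: sum(path_loss_db[n]) / len(path_loss_db[n]))
```

Re-running this selection as vehicles move gives exactly the "rate of change of cluster heads" metric the paper uses: a stabler criterion changes heads less often.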


Relevance:

30.00%

Abstract:

A planar k-restricted structure is a simple graph whose blocks are planar and each has at most k vertices. Planar k-restricted structures are used by approximation algorithms for Maximum Weight Planar Subgraph, which motivates this work. The planar k-restricted ratio is the infimum, over simple planar graphs H, of the ratio of the number of edges in a maximum k-restricted structure subgraph of H to the number of edges of H. We prove that, as k tends to infinity, the planar k-restricted ratio tends to 1/2. The same result holds for the weighted version. Our results are based on analyzing the analogous ratios for outerplanar and weighted outerplanar graphs. Here both ratios tend to 1 as k goes to infinity, and we provide good estimates of the rates of convergence, showing that they differ between the weighted and the unweighted case.
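Written out, the definition and the main result read as follows (the notation e(·) for the edge count is mine, not the abstract's):

```latex
\rho_k \;=\; \inf_{H\ \text{simple planar}}
\frac{\max\{\, e(S) \;:\; S \subseteq H \text{ a planar } k\text{-restricted structure} \,\}}{e(H)},
\qquad
\lim_{k \to \infty} \rho_k = \tfrac{1}{2}.
```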

Relevance:

30.00%

Abstract:

In this work, the applicability of a new algorithm for the estimation of mechanical properties from instrumented indentation data was studied for thin films. The applicability was analyzed with the aid of both three-dimensional finite element simulations and experimental indentation tests. The numerical approach allowed studying the effect of the substrate on the estimation of the mechanical properties of the film, which was conducted based on the ratio h(max)/l between maximum indentation depth and film thickness. For the experimental analysis, indentation tests were conducted on AISI H13 tool steel specimens, plasma nitrided and coated with TiN thin films. Results indicated that, for the conditions analyzed in this work, the elastic deformation of the substrate limited the extraction of the mechanical properties of the film/substrate system. This limitation occurred even at low h(max)/l ratios, especially for the estimation of the yield strength and the strain-hardening exponent. At indentation depths lower than 4% of the film thickness, the proposed algorithm estimated the mechanical properties of the film accurately. For hardness in particular, precise values were estimated at h(max)/l lower than 0.1, i.e. 10% of the film thickness. (C) 2010 Published by Elsevier B.V.
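The reported depth limits can be turned into a simple screening check. The cutoffs below (h_max/l < 0.1 for hardness, < 0.04 for yield strength and the strain-hardening exponent) are taken from this abstract's specific film/substrate system and should not be read as general-purpose rules:

```python
def film_property_reliable(h_max, thickness, quantity="hardness"):
    """Screen an indentation depth against the depth-to-thickness limits
    reported in this study: hardness was precise for h_max/l < 0.1,
    while yield strength and the strain-hardening exponent required
    depths below ~4% of the film thickness. Limits are study-specific."""
    ratio = h_max / thickness
    limit = 0.10 if quantity == "hardness" else 0.04
    return ratio < limit
```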

Relevance:

30.00%

Abstract:

The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
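For reference, the CMA baseline mentioned above is the standard stochastic-gradient recursion on the constant-modulus cost E[(|y|² − R₂)²]; one update step looks like this (the paper's dual-mode SWA and its Newton-like recursion are not reproduced here):

```python
import numpy as np

def cma_step(w, x, mu, r2):
    """One constant-modulus algorithm (CMA) update for a complex FIR
    equalizer: y = w^H x, e = y(|y|^2 - r2), w <- w - mu * conj(e) * x.
    r2 is the dispersion constant of the constellation; mu is the step."""
    y = np.vdot(w, x)                  # equalizer output (vdot conjugates w)
    e = y * (np.abs(y) ** 2 - r2)      # constant-modulus error term
    return w - mu * np.conj(e) * x
```

SWA reaches the same stationary points faster by using a recursive (RLS-like) estimate of the autocorrelation matrix, which is exactly the quantity whose loss of positiveness the paper identifies as a finite-precision cause of divergence.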

Relevance:

30.00%

Abstract:

The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit the use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the stability of the trapezoidal rule becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, the unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Use of a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low-frequency response.
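For context, a single uniform-timestep Newmark step for m·u'' + c·u' + k·u = f is shown below; with β = 1/4, γ = 1/2 this is the (unconditionally stable) trapezoidal rule the abstract discusses. The subcycling partition itself — different timesteps in different regions — is the paper's contribution and is not sketched here.

```python
def newmark_step(m, c, k, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    """One Newmark step for a single-DOF system m*u'' + c*u' + k*u = f.
    beta=1/4, gamma=1/2 is the average-acceleration (trapezoidal) rule."""
    # effective stiffness and effective load of the implicit update
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
    f_eff = (f_next
             + m * (u / (beta * dt ** 2) + v / (beta * dt)
                    + (1.0 / (2.0 * beta) - 1.0) * a)
             + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2.0 * beta) - 1.0) * a))
    u_next = f_eff / k_eff
    a_next = ((u_next - u) / (beta * dt ** 2) - v / (beta * dt)
              - (1.0 / (2.0 * beta) - 1.0) * a)
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next
```

For an undamped linear oscillator this scheme conserves the discrete energy exactly, which is why, in the subcycled variants, any energy growth signals the narrow instability bands the paper analyses.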

Relevance:

30.00%

Abstract:

Aging affects peripheral nerve function and regeneration in experimental models, but few literature reports deal with animals aged more than one year. We investigated morphological and morphometric aspects of the sural nerve in aging rats. Female Wistar rats 360, 640 and 720 days old were killed, and proximal and distal segments of the right and left sural nerves were prepared for light microscopy and computerized morphometry. No morphometric differences between proximal and distal segments, or between right and left sides at the same levels, were found in any experimental group. No increase in fiber or axon size was observed from 360 to 720 days. Likewise, no difference in total myelinated fiber number was observed between groups. The myelinated fiber population distribution was bimodal, with the 720-day-old animals' distribution shifted to the left, indicating a reduction in fiber diameters. The g-ratio distribution of the 720-day-old animals' myelinated fibers was also shifted to the left, which suggests axonal atrophy. Morphological alterations due to aging were observed, mainly related to the myelin sheath, which suggests demyelination. Large fibers were more affected than smaller ones. Axon abnormalities were not as common or as obvious as the myelin changes, and Wallerian degeneration was rarely found. These alterations were observed in all experimental groups but were much less pronounced in rats 360 days old, and their severity increased with aging. In conclusion, the present study indicates that the aging neuropathy present in the sural nerve of female rats is both axonal and demyelinating. (C) 2008 Elsevier B.V. All rights reserved.
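The g-ratio used in this kind of morphometry is a standard quantity, directly computable from the two diameters measured per fiber:

```python
def g_ratio(axon_diameter, fiber_diameter):
    """g-ratio of a myelinated fiber: axon diameter over total fiber
    diameter (axon + myelin sheath), so 0 < g <= 1. A distribution
    shifted toward lower values, as reported for the oldest animals,
    is consistent with axonal atrophy inside relatively preserved
    myelin sheaths."""
    if not 0.0 < axon_diameter <= fiber_diameter:
        raise ValueError("need 0 < axon diameter <= fiber diameter")
    return axon_diameter / fiber_diameter
```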

Relevance:

30.00%

Abstract:

Integrated master's dissertation in Biomedical Engineering

Relevance:

30.00%

Abstract:

Research project carried out during a stay at the University of Groningen, the Netherlands, between 2007 and 2009. Direct numerical simulation of turbulence (DNS) is a key tool in computational fluid dynamics. On the one hand, it allows a better understanding of the physics of turbulence; on the other, the results obtained are essential for the development of turbulence models. However, DNS is not a viable technique for the vast majority of industrial applications because of its high computational cost, so some degree of turbulence modelling is necessary. In this context, significant improvements have been introduced based on modelling the (nonlinear) convective term using symmetry-preserving regularizations: the convective term is modified appropriately so as to reduce the production of smaller and smaller scales (vortex-stretching) while preserving all the invariants of the original equations. So far, these models have been used successfully for relatively high Rayleigh numbers (Ra). At this point, DNS results for more complex configurations and higher Ra numbers are crucial. In this context, DNS simulations of a differentially heated cavity with Ra=1e11 and Pr=0.71 were carried out on the MareNostrum supercomputer during the first of the project's two years. In addition, the code was adapted to simulate the flow around a wall-mounted cube at Re=10000. These DNS simulations are the largest performed to date for these configurations, and modelling them correctly is a major challenge due to the complexity of the flows. These new DNS simulations are providing new insight into the physics of turbulence and results that are indispensable for the progress of symmetry-preserving regularization modelling.