13 results for spine motion segment stiffness
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimate of the original frame created at the decoder. This paper characterizes WZVC efficiency when motion-compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC, especially because the decoder only has some decoded reference frames available. The proposed WZVC compression-efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. From this model, conclusions can be drawn about the impact of motion-field smoothness, and of the correlation to the true motion trajectories, on compression performance.
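To make the MCFI idea concrete, the following is a deliberately minimal 1-D sketch (not the paper's method): a block motion vector is estimated between the two decoded reference frames by exhaustive search, and the block is then compensated halfway along the trajectory, under a linear-motion assumption, to build the interpolated frame.

```python
# Toy sketch of motion-compensated frame interpolation (MCFI) between two
# decoded reference frames. 1-D "frames" and exhaustive block matching keep
# it minimal; real MCFI works on 2-D blocks with sub-pel refinement.

def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def mcfi_1d(f0, f2, block=4, search=3):
    """Interpolate the middle frame f1 from references f0 and f2."""
    n = len(f0)
    f1 = [0] * n
    for start in range(0, n, block):
        ref = f2[start:start + block]
        # Exhaustive search in f0 for the best match of the f2 block.
        best_mv, best_cost = 0, float("inf")
        for mv in range(-search, search + 1):
            s = start + mv
            if 0 <= s and s + block <= n:
                cost = sad(f0[s:s + block], ref)
                if cost < best_cost:
                    best_cost, best_mv = cost, mv
        # Compensate halfway along the trajectory (linear motion assumption).
        half = best_mv // 2
        for i in range(block):
            f1[start + i] = (f0[start + half + i] + f2[start + i]) // 2
    return f1

f0 = [0, 0, 10, 20, 30, 40, 0, 0]   # an "edge" starting at position 2
f2 = [0, 0, 0, 0, 10, 20, 30, 40]   # the same edge shifted right by 2
print(mcfi_1d(f0, f2))
```

The ghosting visible in the demo output is typical of naive MCFI when the motion field does not follow the true trajectory, which is exactly the sensitivity the rate model above captures.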
Abstract:
Wyner-Ziv (WZ) video coding is a particular case of distributed video coding (DVC), the recent video coding paradigm based on the Slepian-Wolf and Wyner-Ziv theorems, which exploits the source temporal correlation at the decoder rather than at the encoder as in predictive video coding. Although some progress has been made in recent years, WZ video coding still falls short of the compression performance of predictive video coding, especially for high and complex motion content. The WZ video codec adopted in this study is based on a transform-domain WZ video coding architecture with feedback-channel-driven rate control, whose modules have been improved with several recent coding tools. This study proposes a novel motion learning approach that successively improves the rate-distortion (RD) performance of the WZ video codec as decoding proceeds, using the already decoded transform bands to improve the decoding of the remaining transform bands. The results reveal gains of up to 2.3 dB in the RD curves over the same codec without the proposed motion learning approach, for high-motion sequences and long group of pictures (GOP) sizes.
Abstract:
The study of economic systems has generated deep interest in exploring the complexity of chaotic motions in the economy. Owing to important developments in nonlinear dynamics, the last two decades have witnessed a strong revival of interest in nonlinear endogenous chaotic business models. The inability to predict the behavior of dynamical systems in the presence of chaos suggests applying chaos control methods when regular behavior is desired. In the present article, we study a specific economic model from the literature. More precisely, a system of three ordinary differential equations gathers the variables of profits, reinvestments and financial flow of borrowings in the structure of a firm. First, using results from symbolic dynamics, we characterize the topological entropy and the parameter-space ordering of kneading sequences associated with one-dimensional maps that reproduce significant aspects of the model dynamics. The analysis of the variation of this numerical invariant, in a realistic region of the system's parameter space, allows us to quantify and distinguish different chaotic regimes. Finally, we show that the complicated behavior arising from the chaotic firm model can be controlled without changing its original properties, and that the dynamics can be turned into a desired attracting time-periodic motion (a stable steady state or a regular cycle). The orbit stabilization is illustrated by applying a feedback control technique initially developed by Romeiras et al. [1992]. This work provides another illustration of how our understanding of economic models can be enhanced by the theoretical and numerical investigation of nonlinear dynamical systems modeled by ordinary differential equations.
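The numerical investigation mentioned above rests on integrating a three-variable ODE system. The sketch below shows the standard fixed-step Runge-Kutta 4 machinery for such a system; the coupling coefficients are illustrative placeholders, not the firm model from the paper (the real model is nonlinear and chaotic in parts of its parameter space).

```python
# Minimal sketch: fixed-step RK4 integration of a three-variable ODE system
# (profits x, reinvestments y, borrowings z). The right-hand side below is a
# hypothetical linear stand-in, NOT the chaotic firm model from the paper.

def rk4_step(f, state, t, h):
    """One classical Runge-Kutta 4 step for dstate/dt = f(t, state)."""
    k1 = f(t, state)
    k2 = f(t + h / 2, [s + h / 2 * k for s, k in zip(state, k1)])
    k3 = f(t + h / 2, [s + h / 2 * k for s, k in zip(state, k2)])
    k4 = f(t + h, [s + h * k for s, k in zip(state, k3)])
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def firm_model(t, s):
    # Illustrative stable linear coupling only (all eigenvalues negative).
    x, y, z = s
    return [-0.5 * x + 0.1 * y, 0.2 * x - 0.4 * y, 0.1 * y - 0.3 * z]

state, t, h = [1.0, 0.5, 0.2], 0.0, 0.01
for _ in range(2000):
    state = rk4_step(firm_model, state, t, h)
    t += h
print(state)  # decays toward the stable steady state [0, 0, 0]
```

A chaos-control study would replace the right-hand side with the actual nonlinear model and add a state-dependent feedback term, but the integration loop stays the same.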
Abstract:
The rapid growth of genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging, most often of mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to serve as their own controls, reducing inter-animal variability. This enables longitudinal studies on the same animal and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amounts of radiotracer needed, limited scanner sensitivity, image resolution and image quantification issues could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, and the detection of these quanta and particles in different materials, the Monte Carlo method is an important simulation tool in both nuclear medicine research and clinical practice. To optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation because they account for all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement for a wide range of problems that cannot be addressed by experimental or analytical approaches.
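As a flavor of the Monte Carlo approach described above, here is a toy 2-D coincidence-detection sketch (illustrative only, nothing like a validated simulator such as GATE): each annihilation emits two back-to-back photons, each detected independently with an assumed efficiency, so the coincidence sensitivity should approach the efficiency squared.

```python
import random

# Toy 2-D Monte Carlo sketch of PET coincidence detection. With a full
# detector ring both back-to-back photons always reach a crystal, so only
# the assumed single-photon detection efficiency EPS matters here; the
# estimated coincidence sensitivity should approach EPS ** 2 = 0.64.
# A real simulation would also track positron range, attenuation, scatter
# and detector geometry.

random.seed(42)
EPS = 0.8          # assumed single-photon detection efficiency
N = 100_000        # number of simulated annihilation events

coincidences = 0
for _ in range(N):
    hit_a = random.random() < EPS   # first photon detected?
    hit_b = random.random() < EPS   # opposite photon detected?
    if hit_a and hit_b:
        coincidences += 1

sensitivity = coincidences / N
print(round(sensitivity, 3))  # close to 0.64
```

Even this trivial version shows the core appeal of the method: every physical random process is sampled directly, and the ground truth (here, EPS) is known exactly.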
Abstract:
Master's degree in Physiotherapy
Abstract:
A 9.9 kb DNA fragment from the right arm of chromosome VII of Saccharomyces cerevisiae has been sequenced and analysed. The sequence contains four open reading frames (ORFs) longer than 100 amino acids. One gene, PFK1, had already been cloned and sequenced; another is probably the yeast gene coding for the beta-subunit of succinyl-CoA synthetase. The two remaining ORFs share homology with the deduced amino acid sequences of the YHR161c and YHR162w ORFs from chromosome VIII, and their physical arrangement is similar.
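The "ORFs longer than 100 amino acids" criterion used in analyses like this one can be sketched as a simple three-frame scan; the demo below uses a toy sequence and a low threshold, and real annotation would also scan the reverse complement and run homology searches.

```python
# Minimal sketch of an ORF scan: find forward-strand open reading frames
# (ATG ... stop) with at least min_codons coding codons, in all three
# reading frames. Internal ATGs after the first are ignored.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=100):
    """Return (start, end) index pairs of qualifying ORFs, end exclusive."""
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in STOPS and start is not None:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))  # include the stop codon
                start = None
    return orfs

# Tiny demo: ATG + 3 more codons + TAA, so 4 coding codons in frame 2.
demo = "CCATGAAACCCGGGTAACC"
print(find_orfs(demo, min_codons=3))  # [(2, 17)]
```

With the chromosome VII fragment as input and the default `min_codons=100`, such a scan is the first step toward the four ORFs reported above.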
Abstract:
A 17.6 kb DNA fragment from the right arm of chromosome VII of Saccharomyces cerevisiae has been sequenced and analysed. The sequence contains twelve open reading frames (ORFs) longer than 100 amino acids. Three genes had already been cloned and sequenced: CCT, ADE3 and TR-I. Two ORFs are similar to other yeast genes: G7722 with the YAL023 (PMT2) and PMT1 genes, encoding two integral membrane proteins, and G7727 with the first half of the genes encoding elongation factors 1gamma, TEF3 and TEF4. Two other ORFs, G7742 and G7744, are most probably yeast orthologues of the human and Paracoccus denitrificans electron-transferring flavoproteins (beta chain) and of the Escherichia coli phosphoserine phosphohydrolase. The five remaining identified ORFs do not show detectable homology with other protein sequences deposited in data banks. The sequence has been deposited in the EMBL data library under Accession Number Z49133.
Abstract:
Purpose - To verify the effects of a diaphragmatic breathing technique (DBT) on diaphragmatic range of motion in healthy subjects. Methods - A total of 51 healthy subjects (10 male; 41 female), with a mean age of 20 years and a body mass index (BMI) ranging from 15.6 to 34.9 kg/m2, were enrolled in this study. Diaphragmatic range of motion was assessed by M-mode ultrasound imaging. Measurements were made before and after DBT implementation in a standard protocol, based on 3 seconds of inspiration starting from a maximum expiration. Differences between assessments were analyzed by descriptive statistics and t-test (p < 0.05). Results - The mean range of motion was 55.3 ± 13.4 mm before DBT and 63.8 ± 13.2 mm after, a significant improvement of 8.5 ± 14.7 mm (p < 0.001). A strong correlation between the slope and the range of motion was found (r = 0.71, p < 0.001). Conclusions - Based on ultrasound measurements, DBT contributes to a greater diaphragmatic range of motion. Future studies are needed to understand the influence of protocol parameters (e.g. inspiration time). Clinical implications - In the context of evidence-based practice in physiotherapy, objective measurements show that DBT improves diaphragm range of motion, translating into more efficient ventilatory function, and it can therefore be used in a clinical setting. To our knowledge this is the first study to assess the effects of DBT on diaphragm range of motion with ultrasound imaging.
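The before/after comparison above is a paired t-test. The sketch below shows the computation on hypothetical values (not the study's measurements): the test statistic is the mean of the paired differences divided by its standard error.

```python
import math
import statistics

# Sketch of a paired t-test on before/after measurements.
# The data below are hypothetical, NOT the study's values.

def paired_t(before, after):
    """Return (mean_difference, t_statistic) for paired samples; df = n - 1."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation of differences
    t = mean_d / (sd_d / math.sqrt(n))      # compare against t distribution, df = n - 1
    return mean_d, t

before = [50.1, 55.0, 48.2, 60.3, 52.4]    # hypothetical pre-DBT excursions (mm)
after = [58.0, 63.1, 55.9, 68.2, 61.0]     # hypothetical post-DBT excursions (mm)
mean_d, t = paired_t(before, after)
print(round(mean_d, 2), round(t, 2))
```

In practice one would look the t statistic up against the t distribution with n - 1 degrees of freedom (or use a library such as SciPy's `ttest_rel`) to obtain the p-value.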
Abstract:
The diaphragm is the principal inspiratory muscle. Different techniques have been used to assess diaphragm motion. Among them, M-mode ultrasound has gained particular interest since it is non-invasive and accessible. However, it is operator-dependent, and no objective acquisition protocol has been established. Purpose: to establish a reliable method for the assessment of diaphragmatic motion via M-mode ultrasound.
Abstract:
Functionally graded materials are composite materials tailored to provide continuously varying properties according to specific constituent mixing distributions. These materials are known to provide superior thermal and mechanical performance compared to traditional laminated composites because of this continuous property variation, which enables, among other advantages, smoother stress distribution profiles. The growing use of these materials therefore brings with it the need to find optimum configurations for each specific application. This work studies the use of the particle swarm optimization technique for maximizing the bending stiffness of a functionally graded sandwich beam. To this end, a set of case studies is analyzed to understand in detail how the tuning of the different optimization parameters can influence the whole process. A re-initialization strategy is also considered, which is not a common approach in particle swarm optimization as far as could be concluded from the published research. As will be shown, this strategy can provide good results and presents some advantages under certain conditions. This work was developed and programmed on the symbolic computation platform Maple 14. (C) 2013 Elsevier B.V. All rights reserved.
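The particle swarm optimization with re-initialization described above can be sketched as follows (the paper's implementation is in Maple; this Python version uses a toy 1-D objective standing in for the beam stiffness model): when the global best stagnates for a number of iterations, particle positions and velocities are re-seeded while the best solution found so far is kept.

```python
import random

# Minimal PSO sketch with a stagnation-triggered re-initialization strategy.
# The objective is a toy stand-in (maximum at x = 3), NOT the functionally
# graded sandwich-beam stiffness model. Parameter values are illustrative.

random.seed(1)

def objective(x):
    return -(x - 3.0) ** 2  # to be maximized

def pso(n_particles=20, iters=200, lo=-10.0, hi=10.0,
        w=0.7, c1=1.5, c2=1.5, stall_limit=25):
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                        # personal best positions
    gbest = max(xs, key=objective)       # global best position
    stall = 0
    for _ in range(iters):
        improved = False
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if objective(xs[i]) > objective(pbest[i]):
                pbest[i] = xs[i]
            if objective(xs[i]) > objective(gbest):
                gbest, improved = xs[i], True
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:
            # Re-initialization: re-seed the swarm, keep the global best.
            xs = [random.uniform(lo, hi) for _ in range(n_particles)]
            vs = [0.0] * n_particles
            stall = 0
    return gbest

best = pso()
print(round(best, 3))  # close to 3.0
```

The design rationale for re-initialization is diversity recovery: a collapsed swarm regains exploratory power without discarding the best design found so far.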
Abstract:
Aim: To optimise a set of exposure factors, with the lowest effective dose, to delineate spinal curvature with the modified Cobb method in a full-spine examination using computed radiography (CR) for a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired by varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), kilovoltage peak (kVp) (66-90), source-to-image distance (SID) (150 to 200 cm), broad focus and the use of a grid (grid in/out), to analyse the impact on effective dose (E) and image quality (IQ). IQ was analysed with two approaches: objective (contrast-to-noise ratio (CNR)) and perceptual, using 5 observers. Monte Carlo modelling was used for dose estimation. Cohen's Kappa coefficient was used to calculate inter-observer variability. The angle was measured using Cobb's method on lateral projections under different imaging conditions. Results: PA positioning gave the lowest effective dose (0.013 mSv) compared to AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ and thirty-two were classified as adequate. Cobb angle measurements varied between 16°±2.9° and 19.9°±0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose with a low contrast-to-noise ratio. The variation in measurements, ±2.9°, is within the range of acceptable clinical error without impact on clinical diagnosis. Further work is recommended on improving the sample size and on a more robust perceptual IQ assessment protocol for observers.
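The objective IQ measure used above, CNR, can be computed from pixel samples of a region of interest and of the background; one common definition (of several in use) is sketched below with illustrative values, not the study's data.

```python
import statistics

# Sketch of a contrast-to-noise ratio (CNR) computation. Definitions vary;
# this one divides the ROI/background contrast by the background noise only.

def cnr(roi, background):
    """CNR = |mean_roi - mean_bg| / stdev_bg."""
    contrast = abs(statistics.mean(roi) - statistics.mean(background))
    noise = statistics.stdev(background)
    return contrast / noise

roi = [210, 205, 215, 208, 212]   # hypothetical ROI pixel values
bg = [100, 104, 98, 102, 96]      # hypothetical background pixel values
print(round(cnr(roi, bg), 2))
```

A low-dose protocol lowers CNR (more quantum noise for the same contrast), which is why the study checks that Cobb angle measurements remain clinically acceptable despite the reduced CNR.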
Abstract:
The formulation of a bending vibration problem of an elastically restrained Bernoulli-Euler beam carrying a finite number of concentrated elements along its length is presented. In this study, the authors apply the differential evolution optimization technique to identify the torsional stiffness properties of the elastic supports of a Bernoulli-Euler beam. This hybrid strategy allows the determination of the natural frequencies and mode shapes of continuous beams, taking into account the effect of attached concentrated masses and rotational inertias, followed by a reconciliation step between the theoretical model results and the experimental ones. The proposed optimal identification of the elastic support parameters is computationally demanding if the exact eigenproblem is solved; hence, a Gaussian process regression is used as a meta-model. An experimental application is used to assess the accuracy of the estimated parameters through the comparison of the experimentally obtained natural frequencies, from impact tests, with the corresponding computed eigenfrequencies.
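The identification loop above can be illustrated with a deliberately simplified model: here differential evolution recovers a stiffness from a "measured" natural frequency of a single-degree-of-freedom oscillator (f = sqrt(k/m)/2π), a hypothetical stand-in for the beam eigenproblem, which is far more expensive to evaluate (hence the meta-model in the paper).

```python
import math
import random

# Sketch of differential evolution identifying a support stiffness from a
# measured natural frequency. Single-DOF model and all values are
# illustrative stand-ins, NOT the paper's beam formulation.

random.seed(7)

M = 2.0                        # assumed mass [kg]
K_TRUE = 5000.0                # "unknown" stiffness used to fake a measurement
F_MEASURED = math.sqrt(K_TRUE / M) / (2 * math.pi)   # about 7.96 Hz

def error(k):
    """Squared mismatch between model frequency and measured frequency."""
    f = math.sqrt(k / M) / (2 * math.pi)
    return (f - F_MEASURED) ** 2

def differential_evolution(lo=100.0, hi=20000.0, pop_size=15,
                           f_weight=0.6, cr=0.9, gens=150):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            mutant = min(hi, max(lo, a + f_weight * (b - c)))
            # With one parameter, binomial crossover reduces to this choice.
            trial = mutant if random.random() < cr else pop[i]
            if error(trial) < error(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=error)

k_est = differential_evolution()
print(round(k_est, 1))  # close to 5000.0
```

In the paper's setting each `error` evaluation would require solving (or emulating, via the Gaussian process meta-model) the beam eigenproblem, which is what makes the surrogate worthwhile.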
Abstract:
In the framework of multibody dynamics, the path motion constraint enforces that a body follows a predefined curve, with its rotations with respect to the curve's moving frame also prescribed. The kinematic constraint formulation requires the evaluation of the fourth derivative of the curve with respect to its arc length. Even though higher-order polynomials lead to unwanted curve oscillations, polynomials of at least fifth order are required to formulate this constraint, whereas from the point of view of geometric control lower-order polynomials are preferred. This work shows that for multibody dynamic formulations with dependent coordinates the use of cubic polynomials is possible, with a dynamic response similar to that obtained with higher-order polynomials. The stabilization of the equations of motion, always required to control the constraint violations during long analysis periods due to the inherent numerical errors of the integration process, is enough to correct the error introduced by the lower-order polynomial interpolation, thus obviating the analytical requirement for higher-order polynomials.
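The derivative-order issue can be made concrete with a short sketch: the fourth derivative of any cubic polynomial is identically zero, while a quintic keeps it non-trivial, which is why the analytical formulation asks for at least fifth order. Coefficients below are arbitrary examples.

```python
# Sketch of why a quintic is the minimum order for a formulation needing the
# fourth arc-length derivative: for a cubic that derivative vanishes.

def poly_derivative(coeffs, order):
    """Differentiate a polynomial given as [a0, a1, a2, ...] (a0 + a1*s + ...)."""
    for _ in range(order):
        coeffs = [i * c for i, c in enumerate(coeffs)][1:]
    return coeffs

def poly_eval(coeffs, s):
    """Evaluate the polynomial at arc-length parameter s."""
    return sum(c * s ** i for i, c in enumerate(coeffs))

cubic = [0.0, 1.0, -0.5, 0.25]        # up to s**3
quintic = cubic + [0.1, -0.02]        # up to s**5

print(poly_eval(poly_derivative(cubic, 4), 2.0))    # 0 for any cubic
print(poly_eval(poly_derivative(quintic, 4), 2.0))  # non-zero in general
```

The paper's point is that this analytically missing fourth derivative can be tolerated in practice: the constraint-violation stabilization already present in the integration compensates for it.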