983 results for function approximation


Relevance:

20.00%

Publisher:

Abstract:

Introduction - Cerebrovascular diseases, and among them cerebral vascular accidents, are one of the main causes of morbidity and disability in European Union countries. The clinical picture resulting from these diseases includes important limitations in the functional ability of these patients. Postural control dysfunctions are among the most common and devastating consequences of a stroke, interfering with function and autonomy, affecting different aspects of people's lives and contributing to a decreased quality of life. Neurological physiotherapy plays a central role in the recovery of movement and posture; however, it is necessary to study the efficacy of the techniques that physiotherapists use to treat these problems. Objectives - The aim of this study was to investigate the effects of a physiotherapy intervention program, based on oriented tasks and strengthening of the affected lower limb, on the balance and functionality of individuals who have suffered a stroke. In addition, our study aimed to investigate the effect of strength training of the affected lower limb on muscle tone.


A crucial method for investigating patients with coronary artery disease (CAD) is the calculation of the left ventricular ejection fraction (LVEF). It is, consequently, imperative to estimate the value of LVEF precisely, a process that can be done with myocardial perfusion scintigraphy. The present study therefore aimed to establish and compare the estimation performance of the quantitative parameters of the reconstruction methods filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM). Methods: A beating-heart phantom with known values of end-diastolic volume, end-systolic volume, and LVEF was used. Quantitative gated SPECT/quantitative perfusion SPECT software was used to obtain these quantitative parameters in a semiautomatic mode. The Butterworth filter was used in FBP, with cutoff frequencies between 0.2 and 0.8 cycles per pixel combined with orders of 5, 10, 15, and 20. Sixty-three reconstructions were performed using 2, 4, 6, 8, 10, 12, and 16 OSEM subsets, combined with several numbers of iterations: 2, 4, 6, 8, 10, 12, 16, 32, and 64. Results: With FBP, the values of the end-diastolic, end-systolic, and stroke volumes rise as the cutoff frequency increases, whereas the value of LVEF diminishes. The same pattern is verified with the OSEM reconstruction. However, OSEM gives a more precise estimation of the quantitative parameters, especially with the combinations 2 iterations × 10 subsets and 2 iterations × 12 subsets. Conclusion: The OSEM reconstruction presents better estimations of the quantitative parameters than does FBP. This study recommends the use of 2 iterations with 10 or 12 subsets for OSEM, and a cutoff frequency of 0.5 cycles per pixel with orders 5, 10, or 15 for FBP, as giving the best estimations for left ventricular volume and ejection fraction quantification in myocardial perfusion scintigraphy.
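The Butterworth cutoff settings discussed above can be visualized with a short sketch. The amplitude form below is one common convention for the SPECT Butterworth filter (implementations differ on whether the amplitude or its square is specified), and the parameter values mirror the abstract's recommendation:

```python
import numpy as np

def butterworth(f, cutoff, order):
    """Amplitude response of a Butterworth low-pass filter.

    One common form used in SPECT filtered backprojection.
    f and cutoff are spatial frequencies in cycles per pixel.
    """
    return 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))

# Frequency axis up to the Nyquist limit of 0.5 cycles per pixel.
freqs = np.linspace(0.0, 0.5, 101)

# The recommended setting: cutoff 0.5 cycles/pixel with order 5, 10, or 15.
response = butterworth(freqs, cutoff=0.5, order=10)
```

At the cutoff frequency the amplitude drops to 1/sqrt(2) ≈ 0.707, and raising the order sharpens the roll-off around the cutoff without moving it, which is why the same cutoff works across orders 5-15.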


OBJECTIVE: To examine the effects of the length and timing of nighttime naps on performance and physiological functions, an experimental study was carried out under simulated night shift schedules. METHODS: Six students were recruited for this study, which comprised 5 experiments. Each experiment involved 3 consecutive days with one night shift (22:00-8:00) followed by daytime sleep and night sleep. The experiments had 5 conditions in which the length and timing of naps were manipulated: 0:00-1:00 (E60), 0:00-2:00 (E120), 4:00-5:00 (L60), 4:00-6:00 (L120), and no nap (No-nap). During the night shifts, participants underwent performance tests. A questionnaire on subjective fatigue and a critical flicker fusion frequency test were administered after the performance tests. Heart rate variability and rectal temperature were recorded continuously during the experiments. Polysomnography was also recorded during the naps. RESULTS: Sleep latency was shorter and sleep efficiency was higher in the naps in L60 and L120 than in E60 and E120. Slow-wave sleep in the naps in E120 and L120 was longer than in E60 and L60. The mean reaction time in L60 became longer after the nap, but became faster in E60 and E120. Earlier naps served to counteract the decrement in performance and physiological functions during the night shift. Performance was somewhat improved by taking a 2-hour nap later in the shift, but deteriorated after a 1-hour nap. CONCLUSIONS: Naps in the latter half of the night shift were superior to earlier naps in terms of sleep quality. However, performance declined after a 1-hour nap taken later in the night shift because of sleep inertia. This study suggests that the timing of a short nap during the night shift, such as a 60-minute nap, must be chosen carefully.


Renal scintigraphy with 99mTc-dimercaptosuccinic acid (99mTc-DMSA) is performed with the aim of detecting cortical abnormalities related to urinary tract infection and of accurately quantifying relative renal function (RRF). For this quantitative assessment, the nuclear medicine technologist must draw regions of interest (ROIs) around each kidney (KROI) and a peri-renal background (BKG) ROI, although controversy still exists about the BKG-ROI. The aim of this work was to evaluate the effect of the normalization procedure and of the number and location of the BKG-ROIs on the RRF in 99mTc-DMSA scintigraphy.
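The RRF computation that the BKG-ROIs feed into can be sketched as follows. This is a generic, illustrative background-correction scheme: the count and pixel figures are hypothetical, and the normalization variants under study in this work are not modeled.

```python
def corrected_counts(kidney_counts, kidney_pixels, bkg_counts, bkg_pixels):
    """Subtract the per-pixel background estimate, scaled to the kidney ROI size."""
    bkg_per_pixel = bkg_counts / bkg_pixels
    return kidney_counts - bkg_per_pixel * kidney_pixels

def relative_renal_function(left, right):
    """RRF of each kidney as a percentage of the summed background-corrected counts.

    Each argument is (kidney_counts, kidney_pixels, bkg_counts, bkg_pixels).
    """
    lc = corrected_counts(*left)
    rc = corrected_counts(*right)
    total = lc + rc
    return 100.0 * lc / total, 100.0 * rc / total

# Hypothetical data: left kidney 52,000 counts over 400 px, right 48,000 over
# 390 px, each with a peri-renal BKG-ROI of ~2,000 counts over 100 px.
rrf_left, rrf_right = relative_renal_function((52000, 400, 2000, 100),
                                              (48000, 390, 2000, 100))
```

Because the background estimate is scaled by the kidney ROI area, both the location of the BKG-ROI (which changes its per-pixel counts) and the KROI delineation directly shift the resulting RRF split, which is exactly the sensitivity this work evaluates.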


The main goal of this work is to solve mathematical programs with complementarity constraints (MPCC) using nonlinear programming (NLP) techniques. A hyperbolic penalty function is used to solve MPCC problems by including the complementarity constraints in the penalty term. This penalty function [1] is twice continuously differentiable and combines features of both exterior and interior penalty methods. A set of AMPL problems from MacMPEC [2] is tested and a comparative study is performed.
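One common form of the hyperbolic penalty can be sketched as below; this is an assumption about the usual one-parameter hyperbolic smoothing in the literature, not necessarily the exact function of [1], and the parameter values are illustrative.

```python
import math

def hyperbolic_penalty(y, lam=1.0, tau=1e-3):
    """Hyperbolic penalty for a constraint written as y <= 0.

    Twice continuously differentiable: it behaves like an exterior penalty
    (~2*lam*y) for violated constraints (y > 0) and vanishes like an interior
    penalty for strictly feasible points (y < 0). lam and tau are illustrative.
    """
    return lam * y + math.sqrt((lam * y) ** 2 + tau ** 2)

# For an MPCC, the complementarity condition G(x) >= 0, H(x) >= 0,
# G(x)*H(x) = 0 can be folded into the penalty term as, e.g.,
#   hyperbolic_penalty(G(x) * H(x))
#   + hyperbolic_penalty(-G(x)) + hyperbolic_penalty(-H(x)),
# driving tau -> 0 between outer iterations to tighten the smoothing.
```

The smoothing parameter tau controls the trade-off: a larger tau keeps the penalized subproblem well-conditioned for NLP solvers, while tau -> 0 recovers the exact penalty behavior.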


The mathematical program with complementarity constraints (MPCC) finds many applications in fields such as engineering design, economic equilibrium, and mathematical programming theory itself. A queueing system model resulting from a single signalized intersection regulated by pre-timed control in a traffic network is considered. The model is formulated as an MPCC problem. A MATLAB implementation based on a hyperbolic penalty function is used to solve this practical problem, computing the total average waiting time of the vehicles in all queues and the green split allocation. The problem was coded in AMPL.


We consider a fluid of hard boomerangs, each composed of two hard spherocylinders joined at their ends at an angle Psi. The resulting particle is nonconvex and biaxial. The occurrence of nematic order in such a system has been investigated using Straley's theory, which is a simplification of Onsager's second-virial treatment of long hard rods, and by bifurcation analysis. The excluded volume of two hard boomerangs has been approximated by the sum of the excluded volumes of the pairs of constituent spherocylinders, and the angle-dependent second-virial coefficient has been replaced by a low-order interpolating function. At the so-called Landau point, Psi_Landau ≈ 107.4°, the fluid undergoes a continuous transition from the isotropic to a biaxial nematic (B) phase. For Psi ≠ Psi_Landau, ordering is via a first-order transition into a rod-like uniaxial nematic (N+) phase if Psi > Psi_Landau, or a plate-like uniaxial nematic (N-) phase if Psi < Psi_Landau. The B phase is separated from the N+ and N- phases by two lines of continuous transitions meeting at the Landau point. This topology of the phase diagram is in agreement with previous studies of spheroplatelets and biaxial ellipsoids. We have checked the accuracy of our theory by performing numerical calculations of the angle-dependent second-virial coefficient, which yield Psi_Landau ≈ 110° for very long rods and Psi_Landau ≈ 90° for short rods. In the latter case, the I-N transitions occur at unphysically high packing fractions, reflecting the inappropriateness of the second-virial approximation in this limit.
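The pairwise-spherocylinder approximation for the boomerang excluded volume can be sketched as below, using Onsager's excluded volume for two hard spherocylinders of length L and diameter D at mutual angle gamma, 2 L² D |sin γ| + 2π L D² + (4π/3) D³. The arm geometry and parameter values are illustrative, and end effects between joined arms are ignored, as in the approximation described above.

```python
import math

def sc_excluded_volume(u1, u2, L, D):
    """Onsager excluded volume of two hard spherocylinders with axes u1, u2."""
    # |u1 x u2| = sin(gamma) for unit vectors
    cross = (u1[1]*u2[2] - u1[2]*u2[1],
             u1[2]*u2[0] - u1[0]*u2[2],
             u1[0]*u2[1] - u1[1]*u2[0])
    sin_gamma = math.sqrt(sum(c * c for c in cross))
    return 2*L*L*D*sin_gamma + 2*math.pi*L*D*D + (4*math.pi/3)*D**3

def boomerang_excluded_volume(arms_a, arms_b, L, D):
    """Approximate the boomerang pair excluded volume as the sum over the four arm pairs."""
    return sum(sc_excluded_volume(ua, ub, L, D)
               for ua in arms_a for ub in arms_b)

def boomerang(psi):
    """Unit vectors of the two arms of a planar boomerang with opening angle psi (radians)."""
    half = psi / 2.0
    return [(math.cos(half),  math.sin(half), 0.0),
            (math.cos(half), -math.sin(half), 0.0)]
```

For two boomerangs in the same orientation, the |sin γ| terms vanish for like arms and grow with the opening angle for unlike arms, which is the geometric origin of the Psi-dependence of the second-virial coefficient.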


The Schwinger proper-time method is an effective calculation method, explicitly gauge-invariant and nonperturbative. We make use of this method to investigate the radiatively induced Lorentz- and CPT-violating effects in quantum electrodynamics when an axial-vector interaction term is introduced in the fermionic sector. The induced Lorentz- and CPT-violating Chern-Simons term coincides with the one obtained using a covariant derivative expansion but differs from the result usually obtained in other regularization schemes. A possible ambiguity in the approach is also discussed. (C) 2001 Published by Elsevier Science B.V.


The authors extend their earlier work on the stability of a reacting binary polymer blend with respect to demixing [D. J. Read, Macromolecules 31, 899 (1998); P. I. C. Teixeira, Macromolecules 33, 387 (2000)] to the case where one of the polymers is rod-like and may order nematically. As before, the authors combine the random phase approximation for the free energy with a Markov chain model for the chemistry to obtain the spinodal as a function of the relevant degrees of reaction. These are then calculated by assuming a simple second-order chemical kinetics. Results are presented, for linear systems, which illustrate the effects of varying the proportion of coils and rods, their relative sizes, and the strength of the nematic interaction between the rods. (c) 2007 American Institute of Physics.
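The bare Flory-Huggins backbone of such a spinodal calculation can be sketched as follows. This is a deliberately simplified illustration (ideal mixing entropy and a simple step-growth chain-length law), not the authors' RPA/Markov-chain model; it shows only why advancing the reaction, which lengthens the chains, destabilizes the mixed state.

```python
def spinodal_chi(phi, n_a, n_b):
    """Flory-Huggins spinodal value of chi at composition phi:
    chi_s = (1/2) * [1/(N_A*phi) + 1/(N_B*(1-phi))].
    Below this chi the blend is locally stable against demixing."""
    return 0.5 * (1.0 / (n_a * phi) + 1.0 / (n_b * (1.0 - phi)))

def chain_length(n0, p):
    """Number-average degree of polymerization for ideal linear step growth
    at conversion p (Carothers): N = N0 / (1 - p). Illustrative only."""
    return n0 / (1.0 - p)

# As the reaction proceeds, chains grow and the spinodal chi drops,
# so a fixed-chi blend can cross into the unstable region.
chi_unreacted = spinodal_chi(0.5, 100, 100)
chi_reacted = spinodal_chi(0.5, chain_length(100, 0.5), 100)
```

The same qualitative mechanism operates in the full treatment, with the additional twist here that nematic ordering of the rod component provides a second instability channel.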


Aims - To compare reading performance in children with and without visual function anomalies and to identify the influence of abnormal visual function and other variables on reading ability. Methods - A cross-sectional study was carried out in 110 school-age children (6-11 years) with abnormal visual function (AVF) and 562 children with normal visual function (NVF). An orthoptic assessment (visual acuity, ocular alignment, near point of convergence and accommodation, stereopsis, and vergences) and autorefraction were carried out. Oral reading was analyzed (a list of 34 words). Number of errors, accuracy (percentage of success), and reading speed (words per minute - wpm) were used as reading indicators. Sociodemographic information from parents (n=670) and teachers (n=34) was obtained. Results - Children with AVF had a higher number of errors (AVF=3.00 errors; NVF=1.00 errors; p<0.001), lower accuracy (AVF=91.18%; NVF=97.06%; p<0.001), and lower reading speed (AVF=24.71 wpm; NVF=27.39 wpm; p=0.007). Reading speed in the 3rd school grade was not statistically different between the two groups (AVF=31.41 wpm; NVF=32.54 wpm; p=0.113). Children with uncorrected hyperopia (p=0.003) and astigmatism (p=0.019) had worse reading performance. Children in the 2nd, 3rd, or 4th grade presented a lower risk of reading impairment than those in the 1st grade. Conclusion - Children with AVF showed reading impairment in the first school grade. It seems that reading abilities vary widely and that this disparity lessens in older children. The slow reading characteristics of children with AVF are similar to those of dyslexic children, which suggests the need for an eye evaluation before classifying a child as dyslexic.


This article addresses the problem of obtaining reduced-complexity models of multi-reach water delivery canals that are suitable for robust and linear parameter varying (LPV) control design. In the first stage, by applying a method known from the literature, a finite-dimensional rational transfer function of a priori defined order is obtained for each canal reach by linearizing the Saint-Venant equations. Then, using block diagram algebra, these different models are combined with linearized gate models in order to obtain the overall canal model. With regard to the control design objectives, this approach has the advantages of providing a model with a prescribed order and of quantifying the high-frequency uncertainty due to model approximation. A case study with a 3-reach canal is presented, and the resulting model is compared with experimental data. © 2014 IEEE.
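The block diagram algebra step can be illustrated with a toy cascade of transfer functions. The reach and gate models below are hypothetical first-order stand-ins with made-up gains and time constants, not the actual linearized Saint-Venant dynamics; the point is only how per-reach and gate models compose into an overall canal model.

```python
import numpy as np

def series(tf1, tf2):
    """Series (cascade) connection of two transfer functions given as
    (numerator, denominator) polynomial coefficient lists, highest degree first."""
    n1, d1 = tf1
    n2, d2 = tf2
    return np.polymul(n1, n2), np.polymul(d1, d2)

def dc_gain(tf):
    """Steady-state gain: ratio of the constant polynomial terms."""
    num, den = tf
    return num[-1] / den[-1]

# Illustrative first-order stand-ins (NOT Saint-Venant linearizations):
reach1 = ([2.0], [30.0, 1.0])   # 2 / (30 s + 1)
gate   = ([0.8], [1.0])         # static gate gain
reach2 = ([1.5], [45.0, 1.0])   # 1.5 / (45 s + 1)

# Overall canal model: reach 1 -> gate -> reach 2 in cascade.
canal = series(series(reach1, gate), reach2)
```

Composing the models this way keeps the overall order equal to the sum of the reach model orders, which is what makes the prescribed-order property of the per-reach approximation carry over to the full canal model.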


This paper presents a single-precision floating-point arithmetic unit with support for multiplication, addition, fused multiply-add, reciprocal, square root, and inverse square root, with high performance and low resource usage. The design uses a piecewise 2nd-order polynomial approximation to implement the reciprocal, square root, and inverse square root. The unit can be configured with any number of operations and is capable of calculating any function with a throughput of one operation per cycle. The floating-point multiplier of the unit is also used to implement the polynomial approximation and the fused multiply-add operation. We have compared our implementation with other state-of-the-art proposals, including the Xilinx Core-Gen operators, and conclude that the approach has a high relative performance/area efficiency. © 2014 Technical University of Munich (TUM).
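A piecewise 2nd-order polynomial approximation of, say, the reciprocal can be prototyped in software to gauge the achievable accuracy. The table construction below is a generic sketch (simple 3-point interpolation per interval; the paper's actual coefficient-generation method, interval count, and fixed-point quantization are not modeled):

```python
import numpy as np

def build_table(f, lo, hi, n_intervals):
    """Fit one quadratic per interval through 3 sample points,
    mimicking a small per-interval coefficient ROM."""
    edges = np.linspace(lo, hi, n_intervals + 1)
    table = []
    for a, b in zip(edges[:-1], edges[1:]):
        xs = np.array([a, (a + b) / 2.0, b])
        table.append(np.polyfit(xs, f(xs), 2))  # c2*x^2 + c1*x + c0
    return edges, table

def evaluate(x, edges, table):
    """Look up the interval and evaluate its quadratic in Horner form,
    i.e. two multiply-add steps - a natural fit for an FMA-capable multiplier."""
    i = min(int(np.searchsorted(edges, x, side='right')) - 1, len(table) - 1)
    c2, c1, c0 = table[i]
    return (c2 * x + c1) * x + c0

# Reciprocal on the binade [1, 2): 64 intervals of quadratics.
edges, table = build_table(lambda x: 1.0 / x, 1.0, 2.0, 64)
xs = np.linspace(1.0, 2.0, 10001)
max_err = max(abs(evaluate(x, edges, table) - 1.0 / x) for x in xs)
```

Restricting the argument to one binade (as hardware does after exponent handling) and using 64 intervals already pushes the interpolation error well below single-precision resolution, which is why a modest coefficient table plus the shared multiplier suffices.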


Penalty and barrier methods are normally used to solve constrained nonlinear optimization problems. Such problems appear in areas such as engineering and are often characterized by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known, which means that optimization methods based on derivatives cannot be used. A Java-based API, including only derivative-free optimization methods, was implemented to solve both constrained and unconstrained problems, and it includes penalty and barrier methods. In this work a new penalty function, based on fuzzy logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: the penalization is low when the violation of the constraints is low and heavy when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two of the classic penalty/barrier functions are presented. From these results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
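A fuzzy-style progressive penalty can be sketched as below. The membership functions, rule weights, and weighted-average defuzzification are illustrative stand-ins for the inference engine described in this work, not its actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 for x = b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_penalty_weight(v):
    """Map a constraint violation magnitude v >= 0 to a penalty weight via
    three illustrative fuzzy sets (low/medium/high) and rule weights 1/10/100,
    defuzzified as a membership-weighted average."""
    low = max(0.0, 1.0 - v / 0.5) if v < 0.5 else 0.0
    med = tri(v, 0.0, 0.5, 1.0)
    high = min(1.0, (v - 0.5) / 0.5) if v > 0.5 else 0.0
    m = low + med + high
    return (1.0 * low + 10.0 * med + 100.0 * high) / m if m > 0 else 100.0

def penalized_objective(f_value, violations, scale=1.0):
    """Objective plus a progressively weighted penalty per constraint violation;
    zero violation contributes zero penalty."""
    return f_value + scale * sum(fuzzy_penalty_weight(v) * v for v in violations)
```

Because the weight itself grows with the violation, the total penalty rises faster than linearly: small infractions are tolerated while large ones are punished heavily, which is the progressive behavior the abstract describes.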