9 results for Aperture height index

in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance: 20.00%

Abstract:

The exposure index (lgM) obtained from a radiographic image may be a useful feedback indicator to the radiographer about the appropriate exposure level in routine clinical practice. This study aims to evaluate lgM in orthopaedic radiography performed in the standard clinical environment. We analysed the lgM of 267 exposures performed with an AGFA CR system. The mean lgM in our sample is 2.14, a significant difference (P < 0.001) from the 1.96 lgM reference. Data show that 72% of exposures are above the 1.96 lgM reference and 42% are above the 2.26 limit. Median lgM values are above 1.96 and below 2.26 for speed class (SC) 200 (2.16) and SC400 (2.13). The interquartile range is lower in SC400 than in SC200. The data seem to indicate that lgM values are above the manufacturer's reference of 1.96. Departmental exposure charts should be optimised to reduce the dose given to patients.
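A minimal Python sketch of the kind of comparison reported above: a one-sample test of measured lgM values against the 1.96 reference and the proportions above the reference and the 2.26 limit. The lgM values below are simulated for illustration only; they are not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical lgM readings from a CR system (illustrative only, not the study data).
rng = np.random.default_rng(0)
lgm_values = rng.normal(loc=2.14, scale=0.25, size=267)

reference = 1.96    # manufacturer's lgM reference
upper_limit = 2.26  # upper limit cited in the abstract

# One-sample t-test against the manufacturer's reference.
t_stat, p_value = stats.ttest_1samp(lgm_values, popmean=reference)
print(f"mean lgM = {lgm_values.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3g}")

# Proportion of exposures above the reference and above the upper limit.
print(f"above reference:   {np.mean(lgm_values > reference):.0%}")
print(f"above upper limit: {np.mean(lgm_values > upper_limit):.0%}")
```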

Relevance: 20.00%

Abstract:

In this paper we analyze the relationship between volatility in index futures markets and the number of open and closed positions. We observe that, although in general both positions are positively correlated with contemporaneous volatility, in the case of the S&P 500 only the number of open positions influences volatility. Additionally, we observe a stronger positive relationship on days characterized by extreme movements, when these contracting movements dominate the market. Finally, our findings suggest that day traders are not associated with an increase in volatility, whereas uninformed traders, whether opening or closing their positions, are.
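A minimal sketch of the kind of regression used to relate contemporaneous volatility to the number of open and closed positions. The series and coefficients are simulated for illustration and are not the data or the specification estimated in the paper.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical daily series (illustrative only): volatility plus the number of
# newly opened and newly closed futures positions.
rng = np.random.default_rng(1)
n = 500
open_pos = rng.poisson(1000, n)
closed_pos = rng.poisson(950, n)
volatility = 0.5 + 0.0003 * open_pos + 0.0001 * closed_pos + rng.normal(0, 0.1, n)

# Regress contemporaneous volatility on both position series.
X = sm.add_constant(np.column_stack([open_pos, closed_pos]))
model = sm.OLS(volatility, X).fit()
print(model.params)   # intercept, open-position and closed-position coefficients
print(model.pvalues)  # significance of each coefficient
```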

Relevance: 20.00%

Abstract:

This article presents a Markov chain framework to characterize the behavior of the CBOE Volatility Index (VIX index). Two possible regimes are considered: high volatility and low volatility. The specification accounts for deviations from normality and the existence of persistence in the evolution of the VIX index. Since the time evolution of the VIX index seems to indicate that its conditional variance is not constant over time, I consider two different versions of the model. In the first one, the variance of the index is a function of the volatility regime, whereas the second version includes an autoregressive conditional heteroskedasticity (ARCH) specification for the conditional variance of the index.
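A minimal sketch of the first version of such a model: a two-state Markov chain (low vs. high volatility) in which the conditional variance of the index depends only on the current regime. Transition probabilities, regime volatilities and the long-run level are illustrative assumptions, not the article's estimates, and the ARCH extension is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-regime Markov chain: state 0 = low volatility, state 1 = high volatility.
P = np.array([[0.97, 0.03],    # transition probabilities (hypothetical)
              [0.10, 0.90]])
sigma = np.array([0.8, 3.0])   # regime-dependent standard deviations (hypothetical)
mu = 20.0                      # long-run VIX-like level (hypothetical)

n = 1000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])

# In this first version of the model the conditional variance is purely regime-driven.
vix_like = mu + rng.normal(0.0, sigma[states])
print("fraction of time in high-volatility regime:", np.mean(states == 1))
```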

Relevance: 20.00%

Abstract:

We describe the Lorenz links generated by renormalizable Lorenz maps with reducible kneading invariant (K_f^-, K_f^+) = (X, Y) * (S, W) in terms of the links corresponding to each factor. This gives a new kind of operation that permits us to generate new knots and links from those corresponding to the factors of the *-product. Using this result we obtain explicit formulas for the genus and the braid index of these renormalizable Lorenz knots and links. We then obtain explicit formulas for sequences of these invariants, associated with sequences of renormalizable Lorenz maps with kneading invariant (X, Y) * (S, W)^{*n}, concluding that both grow exponentially. This is especially relevant, since it is known that topological entropy is constant on the archipelagoes of renormalization.
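As background for the genus and braid-index computations mentioned above, two standard facts about Lorenz links, stated here under the assumption that they arise as closures of positive braids (Birman-Williams correspondence); these are background identities, not the paper's explicit *-product formulas.

```latex
% Background identities (assumed context, not the paper's explicit formulas).
% A Lorenz link is the closure of a positive braid on n strands with c crossings;
% for such closures the Seifert (Bennequin) surface has minimal genus, so for a
% closure with k components
\[
  g \;=\; \frac{c - n + 2 - k}{2},
\]
% and the braid index of a Lorenz link equals the trip number t of the
% associated Lorenz braid (Franks--Williams, Birman).
\[
  \text{braid index} \;=\; t.
\]
```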

Relevance: 20.00%

Abstract:

Objective - To evaluate the effect of prepregnancy body mass index (BMI), energy and macronutrient intakes during pregnancy, and gestational weight gain (GWG) on the body composition of full-term appropriate-for-gestational age neonates. Study Design - This is a cross-sectional study of a systematically recruited convenience sample of mother-infant pairs. Food intake during pregnancy was assessed by food frequency questionnaire and its nutritional value by the Food Processor Plus (ESHA Research Inc, Salem, OR). Neonatal body composition was assessed both by anthropometry and air displacement plethysmography. Explanatory models for neonatal body composition were tested by multiple linear regression analysis. Results - A total of 100 mother-infant pairs were included. Prepregnancy overweight was positively associated with offspring weight, weight/length, BMI, and fat-free mass in the whole sample; in males, it was also positively associated with midarm circumference, ponderal index, and fat mass. Higher energy intake from carbohydrate was positively associated with midarm circumference and weight/length in the whole sample. Higher GWG was positively associated with weight, length, and midarm circumference in females. Conclusion - Positive adjusted associations were found between both prepregnancy BMI and energy intake from carbohydrate and offspring body size in the whole sample. Positive adjusted associations were also found between prepregnancy overweight and adiposity in males, and between GWG and body size in females.
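A minimal sketch of the explanatory-model step described above: a multiple linear regression of a neonatal body-composition outcome on prepregnancy BMI, energy intake from carbohydrate and gestational weight gain. The data frame, column names and coefficients are hypothetical, introduced only to illustrate the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical mother-infant data (illustrative only; variable names are assumptions).
rng = np.random.default_rng(3)
n = 100
df = pd.DataFrame({
    "prepreg_bmi": rng.normal(24, 4, n),       # prepregnancy BMI (kg/m^2)
    "carb_energy_pct": rng.normal(50, 6, n),   # % of energy intake from carbohydrate
    "gwg_kg": rng.normal(13, 4, n),            # gestational weight gain (kg)
})
df["neonatal_fat_free_mass"] = (2.6 + 0.02 * df["prepreg_bmi"]
                                + 0.005 * df["carb_energy_pct"]
                                + 0.01 * df["gwg_kg"]
                                + rng.normal(0, 0.2, n))

# Explanatory model tested by multiple linear regression, as in the abstract.
model = smf.ols("neonatal_fat_free_mass ~ prepreg_bmi + carb_energy_pct + gwg_kg",
                data=df).fit()
print(model.params)
```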

Relevance: 20.00%

Abstract:

Introduction - The estimate of relative renal function (RRF) by renal scintigraphy (RS) with technetium-99m-labelled dimercaptosuccinic acid (99mTc-DMSA) can be influenced by renal depth (RD), owing to the attenuation effect of the soft tissues surrounding the kidneys. Since RD is rarely known, different attenuation correction (AC) methods have been developed, namely those using empirical formulas, such as the Raynaud, Taylor or Tonnesen methods, or the direct application of the geometric mean (GM). Objectives - To identify the influence of the different AC methods on the quantification of relative renal function by 99mTc-DMSA RS and to assess the variability of the resulting RD values. Methodology - Thirty-one patients referred for 99mTc-DMSA RS underwent the same acquisition protocol. Processing was performed by two independent operators, three times per examination, varying only the method used to determine RRF: Raynaud, Taylor, Tonnesen, GM or no attenuation correction (NAC). The Friedman test was applied to study the influence of the different AC methods, and Pearson correlation was used to assess the association and significance of RD values with age, weight and height. Results - The Friedman test revealed statistically significant differences among the methods (p < 0.001), except for the NAC/Raynaud, Tonnesen/GM and Taylor/GM comparisons (p = 1.000), for both kidneys. Pearson correlation shows that weight has a strong positive correlation with all RD calculation methods. Conclusions - Of the three RD calculation methods, the Taylor method yields RRF values closest to the GM. The choice of AC method significantly influences the quantitative RRF parameters.
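A minimal sketch of the geometric-mean approach referred to above: the GM of anterior and posterior counts is computed per kidney and the relative function is the left/right split of the two GMs. The count values are hypothetical, and the empirical depth formulas (Raynaud, Taylor, Tonnesen) are not reproduced here.

```python
import numpy as np

def geometric_mean_counts(anterior, posterior):
    """Geometric mean of background-corrected anterior and posterior kidney counts."""
    return np.sqrt(anterior * posterior)

# Hypothetical background-corrected counts (illustrative only).
left_ant, left_post = 41_000.0, 52_000.0
right_ant, right_post = 38_000.0, 47_000.0

gm_left = geometric_mean_counts(left_ant, left_post)
gm_right = geometric_mean_counts(right_ant, right_post)

# Relative renal function from the geometric means (no explicit depth estimate needed).
rrf_left = 100.0 * gm_left / (gm_left + gm_right)
print(f"Left kidney RRF:  {rrf_left:.1f}%")
print(f"Right kidney RRF: {100.0 - rrf_left:.1f}%")
```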

Relevance: 20.00%

Abstract:

The main goals of the present work are the evaluation of the influence of several variables and test parameters on the melt flow index (MFI) of thermoplastics, and the determination of the uncertainty associated with the measurements. To evaluate the influence of test parameters on the measurement of MFI, the design of experiments (DOE) approach has been used. The uncertainty has been calculated using the "bottom-up" approach given in the Guide to the Expression of Uncertainty in Measurement (GUM). Since an analytical expression relating the output response (MFI) to the input parameters does not exist, it has been necessary to build mathematical models by fitting the experimental observations of the response variable against each input parameter. Subsequently, the uncertainty associated with the measurement of MFI has been determined by applying the law of propagation of uncertainty to the uncertainties of the input parameters. Finally, the activation energy (Ea) of melt flow at around 200 °C and its respective uncertainty have also been determined.
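A minimal sketch of two calculations of the kind described above: GUM-style propagation of uncertainty through a model using numerical partial derivatives, applied here to an Arrhenius-type estimate of the activation energy from MFI measured at two temperatures. The MFI values, temperatures and input uncertainties are illustrative assumptions, not the study's data or its fitted models.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def propagate_uncertainty(f, x, u):
    """GUM law of propagation: u_c^2 = sum_i (df/dx_i)^2 * u(x_i)^2 (uncorrelated inputs)."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    grad = np.empty_like(x)
    for i in range(len(x)):
        h = 1e-6 * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)  # central-difference partial derivative
    return np.sqrt(np.sum((grad * u) ** 2))

def activation_energy(x):
    """Arrhenius-type Ea from MFI measured at two absolute temperatures (K)."""
    mfi1, mfi2, t1, t2 = x
    return R * np.log(mfi2 / mfi1) / (1.0 / t1 - 1.0 / t2)

x = [3.5, 5.2, 463.15, 483.15]  # MFI at 190 C and 210 C (hypothetical values)
u = [0.1, 0.15, 0.5, 0.5]       # standard uncertainties of the inputs (hypothetical)
Ea = activation_energy(x)
u_Ea = propagate_uncertainty(activation_energy, x, u)
print(f"Ea = {Ea / 1000:.1f} kJ/mol +/- {u_Ea / 1000:.1f} kJ/mol")
```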

Relevance: 20.00%

Abstract:

This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance using two different GPU architectures by NVIDIA: GeForce GTX 590 and GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.
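A minimal sketch of the compressive-sensing measurement step that HYCA-style methods build on: each pixel's spectrum of n_bands values is compressed into m < n_bands random measurements, y = Hx, applied pixel by pixel. The cube, its dimensions and the measurement matrix are illustrative; the unmixing-based reconstruction itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative hyperspectral cube: rows x cols pixels, n_bands spectral bands.
rows, cols, n_bands = 64, 64, 224
cube = rng.random((rows, cols, n_bands)).astype(np.float32)

# Random measurement matrix H (m << n_bands), applied per pixel: y = H x.
m = 30
H = rng.standard_normal((m, n_bands)).astype(np.float32) / np.sqrt(m)

pixels = cube.reshape(-1, n_bands)  # (rows*cols, n_bands)
measurements = pixels @ H.T         # (rows*cols, m) compressed data

print("original values:", pixels.size, "measured values:", measurements.size)
print("compression ratio:", pixels.size / measurements.size)
```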

Relevance: 20.00%

Abstract:

The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation existing among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations have been developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, this library has been replaced by a fast iterative method in the P-HYCA-FAST and P-CHYCA-FAST implementations, which leads to very significant speedup factors that help meet real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance using two different GPU architectures by NVIDIA: 1) GeForce GTX 590; and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
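A minimal sketch of how a CPU-versus-GPU performance comparison of a pixel-wise operator can be set up: time the same matrix product with NumPy on the CPU and, when CuPy is installed, on the GPU, then report the speedup. This is a generic timing harness under assumed array sizes, not the P-HYCA/P-CHYCA code or its CUDA kernels.

```python
import time
import numpy as np

def time_op(xp, pixels, H, repeats=10):
    """Best-of-N timing of y = H x applied to all pixels with the given array module."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = pixels @ H.T
        if xp.__name__ == "cupy":
            xp.cuda.Stream.null.synchronize()  # wait for the GPU work to finish
        best = min(best, time.perf_counter() - t0)
    return best

rng = np.random.default_rng(5)
pixels = rng.random((256 * 256, 224)).astype(np.float32)   # illustrative data
H = rng.standard_normal((30, 224)).astype(np.float32)

cpu_time = time_op(np, pixels, H)
print(f"CPU (NumPy): {cpu_time * 1e3:.2f} ms")

try:
    import cupy as cp  # optional GPU path
    gpu_time = time_op(cp, cp.asarray(pixels), cp.asarray(H))
    print(f"GPU (CuPy): {gpu_time * 1e3:.2f} ms, speedup: {cpu_time / gpu_time:.1f}x")
except ImportError:
    print("CuPy not available; GPU timing skipped.")
```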