995 results for "Gaussian probability function"
Abstract:
Introduction – Gated Single Photon Emission Computed Tomography (SPECT) studies are among the cardiac imaging techniques that have evolved most over recent decades. In the analysis of the acquired images, the use of quantification software increases the reproducibility and accuracy of interpretations. The aim of this study is to evaluate, in Gated-SPECT studies, the intra- and inter-operator variability of quantitative myocardial function and perfusion parameters obtained with the Quantitative Gated SPECT (QGS) and Quantitative Perfusion SPECT (QPS) software packages. Material and methods – A non-probabilistic convenience sample of 52 patients was used; all had undergone myocardial Gated-SPECT studies for clinical reasons and their data were held in the database of the ESTeSL Xeleris processing station. The fifty-two studies were divided into two distinct groups: Group I (GI), 17 patients with normal myocardial perfusion images; Group II (GII), 35 patients with perfusion defects on the Gated-SPECT images. Every study was processed 5 times by each of 4 independent operators (each with 3 years of experience in Nuclear Medicine departments and an average caseload of 15 Gated-SPECT studies per week). Intra- and inter-operator variability was assessed with Friedman's test, considering α=0.01. Results and discussion – For all evaluated parameters, the respective p-values showed no statistically significant differences (p>α). Thus, no significant intra- or inter-operator variability was found in the processing of myocardial Gated-SPECT studies. Conclusion – The QGS and QPS software packages are reproducible in quantifying the function and perfusion parameters evaluated, with no variability introduced by the operator.
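The variability assessment above rests on Friedman's test applied to repeated processings. The following is an illustrative sketch only, with hypothetical data standing in for the study's measurements, showing how one such intra-operator comparison could be run with SciPy's `friedmanchisquare`:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical example: one quantitative parameter (e.g. LVEF, %) on 10
# studies, each processed 5 times by the same operator (intra-operator).
rng = np.random.default_rng(7)
base = rng.uniform(45, 70, size=10)                 # per-study "true" values
runs = [base + rng.normal(0, 0.3, size=10) for _ in range(5)]  # 5 repeats

stat, p = friedmanchisquare(*runs)
alpha = 0.01
# Interpretation as in the study: p > alpha means no significant
# operator-introduced variability across the repeated processings.
no_variability = p > alpha
```

Each argument to `friedmanchisquare` is one repeat of the full set of studies, mirroring the "5 processings per operator" design.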
Abstract:
Aims - To compare reading performance in children with and without visual function anomalies and to identify the influence of abnormal visual function and other variables on reading ability. Methods - A cross-sectional study was carried out in 110 children of school age (6-11 years) with Abnormal Visual Function (AVF) and 562 children with Normal Visual Function (NVF). An orthoptic assessment (visual acuity, ocular alignment, near point of convergence and accommodation, stereopsis and vergences) and autorefraction were carried out. Oral reading was analyzed (list of 34 words). Number of errors, accuracy (percentage of success) and reading speed (words per minute - wpm) were used as reading indicators. Sociodemographic information from parents (n=670) and teachers (n=34) was obtained. Results - Children with AVF had a higher number of errors (AVF=3.00 errors; NVF=1.00 errors; p<0.001), lower accuracy (AVF=91.18%; NVF=97.06%; p<0.001) and lower reading speed (AVF=24.71 wpm; NVF=27.39 wpm; p=0.007). Reading speed in the 3rd school grade was not statistically different between the two groups (AVF=31.41 wpm; NVF=32.54 wpm; p=0.113). Children with uncorrected hyperopia (p=0.003) and astigmatism (p=0.019) had worse reading performance. Children in the 2nd, 3rd, or 4th grades presented a lower risk of reading impairment compared with those in the 1st grade. Conclusion - Children with AVF showed reading impairment in the first school grade. It seems that reading ability varies widely and that this disparity lessens in older children. The slow reading of children with AVF resembles that of dyslexic children, which suggests the need for an eye examination before classifying a child as dyslexic.
Abstract:
Master's degree in Cardiovascular Diagnostic and Intervention Technology - Specialization branch: Cardiovascular Ultrasonography
Abstract:
Retinal imaging with a confocal scanning laser ophthalmoscope (cSLO) involves scanning a small laser beam over the retina and constructing an image from the reflected light. By applying the confocal principle, tomographic images can be produced by measuring a sequence of slices at different depths. However, the thickness of such slices, when compared with the retinal thickness, is too large to give useful 3D retinal images if no processing is done. In this work, a prototype cSLO was modified in terms of hardware and software to enable tomographic measurements at the maximum theoretical axial resolution. A model eye was built to test the performance of the system. A novel algorithm was developed which fits a double Gaussian curve to the axial intensity profiles generated from a stack of image slices. The underlying assumption is that the laser light has mainly been reflected by two structures in the retina: the internal limiting membrane and the retinal pigment epithelium. From the fitted curve, topographic images and novel thickness images of the retina can be generated. Deconvolution algorithms were also developed to improve the axial resolution of the system, using a theoretically predicted cSLO point spread function. The technique was evaluated using measurements made on a model eye, four normal eyes and seven eyes containing retinal pathology. The reproducibility, accuracy and physiological measurements obtained were compared with available published data and showed good agreement. The difference in the measurements when using a double rather than a single Gaussian model was also analysed.
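The double-Gaussian fit described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the synthetic profile, peak positions, amplitudes and noise level are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(z, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians: reflections from the ILM and the RPE."""
    return (a1 * np.exp(-0.5 * ((z - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((z - mu2) / s2) ** 2))

# Synthetic axial intensity profile (depth in arbitrary units)
z = np.linspace(0, 600, 200)
true_params = (1.0, 150.0, 40.0, 0.8, 400.0, 50.0)
rng = np.random.default_rng(0)
profile = double_gaussian(z, *true_params) + rng.normal(0, 0.02, z.size)

# Fit; the initial guess brackets the two expected reflecting layers
p0 = (1.0, 100.0, 50.0, 1.0, 450.0, 50.0)
popt, _ = curve_fit(double_gaussian, z, profile, p0=p0)

# "Retinal thickness" at this pixel: distance between the two fitted peaks
thickness = abs(popt[4] - popt[1])
```

Repeating the fit for every (x, y) position of the image stack yields the topographic and thickness maps the abstract mentions.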
Abstract:
This technical report describes the probability density functions (PDFs) implemented to model the behaviour of certain parameters of the Repeater-Based Hybrid Wired/Wireless PROFIBUS Network Simulator (RHW2PNetSim) and the Bridge-Based Hybrid Wired/Wireless PROFIBUS Network Simulator (BHW2PNetSim).
Abstract:
In this paper, we analyze the performance limits of the slotted CSMA/CA mechanism of IEEE 802.15.4 in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is due to its flexibility for WSN applications as compared to the non-beacon enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent) on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We introduce the concept of utility (U) as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs in the range of 35% to 60% with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
Abstract:
The IEEE 802.15.4 has been adopted as a communication protocol standard for Low-Rate Wireless Personal Area Networks (LR-WPANs). While it appears as a promising candidate solution for Wireless Sensor Networks (WSNs), its adequacy must be carefully evaluated. In this paper, we analyze the performance limits of the slotted CSMA/CA medium access control (MAC) mechanism in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is due to its flexibility and potential for WSN applications as compared to the non-beacon enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent), the number of nodes and the data frame size on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We also analytically evaluate the impact of the slotted CSMA/CA overheads on the saturation throughput. We introduce the concept of utility (U) as a combination of two or more metrics, to determine the best offered load range for an optimal behavior of the network. We show that the optimal network performance using slotted CSMA/CA occurs in the range of 35% to 60% with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
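The utility metric can be illustrated with a toy search over the offered load. The throughput and delay curves below are hypothetical stand-ins, not the paper's simulation output; they are shaped only to show how U = S/D selects an operating region:

```python
import math

def utility(S, D):
    """U proportional to throughput divided by average delay."""
    return S / D

# Hypothetical stand-in curves: throughput peaks and then falls with
# offered load G, while average delay keeps growing with G.
loads = [g / 100 for g in range(5, 101, 5)]       # offered load, 5%..100%
S = [g * math.exp(-g) for g in loads]             # throughput (toy model)
D = [0.5 + g ** 3 for g in loads]                 # average delay (toy model)

U = [utility(s, d) for s, d in zip(S, D)]
best_load = loads[U.index(max(U))]                # load that maximises U
```

Scanning U over the offered load range is exactly how an operating point such as the paper's 35%-60% window would be identified from real simulation results.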
Abstract:
Penalty and barrier methods are normally used to solve constrained Nonlinear Optimization Problems. Such problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known. This means that optimization methods based on derivatives cannot be used. A Java-based API was implemented, including only derivative-free optimization methods, to solve both constrained and unconstrained problems, which includes penalty and barrier methods. In this work a new penalty function, based on Fuzzy Logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: the penalization is light when the constraint violation is low and heavy when the violation is high. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two classic penalty/barrier functions are presented. From the presented results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
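A minimal sketch of the progressive-penalty idea follows. The paper's actual penalty is produced by a fuzzy inference engine; here two membership functions ("low" and "high" violation) are blended into a single weight, which is only a simplified stand-in:

```python
def violation(x, constraints):
    """Total constraint violation for constraints of the form g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def fuzzy_penalty(v, v_max=10.0, w_low=1.0, w_high=100.0):
    """Blend a light and a heavy penalty weight by the degree of violation."""
    mu_high = min(1.0, v / v_max)     # membership in "high violation"
    mu_low = 1.0 - mu_high            # membership in "low violation"
    return (mu_low * w_low + mu_high * w_high) * v

def penalised_objective(f, constraints):
    """Unconstrained surrogate suitable for a derivative-free optimizer."""
    return lambda x: f(x) + fuzzy_penalty(violation(x, constraints))

# Example: minimise f(x) = x^2 subject to x >= 1 (i.e. 1 - x <= 0)
f = lambda x: x * x
g = [lambda x: 1.0 - x]
F = penalised_objective(f, g)
```

Feasible points are left untouched, small violations cost little, and large violations are penalised heavily, matching the progressive behaviour described above.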
Abstract:
Wind resource evaluation at two sites located in Portugal was performed using the mesoscale modelling system Weather Research and Forecasting (WRF) and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted at the selected sites, allowing for a comparison between in situ measurements and simulated wind, in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of the benefits and limitations of these methodologies for wind resource estimation. In the first methodology, the mesoscale model acts as a "virtual" wind measuring station: wind data computed by WRF for both sites was inserted directly as input in WAsP. In the second approach, the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data was extracted at the top of the planetary boundary layer for both sites, aiming to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can bring any improvement in the models' performance. The results obtained for the abovementioned methodologies were compared with those from in situ measurements, in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine at each site. Results showed that the second approach produces values closest to the measured ones, and fairly acceptable deviations were found using this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly due to the poor resolution of the mesoscale model terrain data.
Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data mainly for preliminary wind resource assessments, although the application of mesoscale and microscale coupling in areas with complex topography should be done with extreme caution.
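The Weibull comparison mentioned above can be sketched with a moment-based estimator. The Justus approximation used here is a common shortcut in wind resource work, and the synthetic sample below stands in for a measured wind speed series:

```python
import math
import random

def weibull_params(speeds):
    """Moment-based estimate of Weibull shape k and scale c
    (Justus approximation, valid roughly for 1 < k < 10)."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((v - mean) ** 2 for v in speeds) / n
    k = (math.sqrt(var) / mean) ** -1.086      # shape from coeff. of variation
    c = mean / math.gamma(1.0 + 1.0 / k)       # scale from the mean
    return k, c

# Synthetic "measured" series drawn from a known Weibull(k=2, c=8) by
# inverse-transform sampling, so the estimate can be checked.
random.seed(1)
k_true, c_true = 2.0, 8.0
speeds = [c_true * (-math.log(1.0 - random.random())) ** (1.0 / k_true)
          for _ in range(5000)]

k_est, c_est = weibull_params(speeds)
```

Fitting the measured and the WRF-simulated series with the same estimator, then comparing (k, c) pairs, is one simple way to quantify the deviations the study reports.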
Abstract:
OBJECTIVE: To analyze the prevalence of individuals at risk of dependence and its associated factors. METHODS: The study was based on data from the Catalan Health Survey (Spain), conducted in 2010 and 2011. Logistic regression models from a random sample of 3,842 individuals aged ≥ 15 years were used to classify individuals according to the state of their personal autonomy. Predictive models were proposed to identify indicators that helped distinguish dependent individuals from those at risk of dependence. Variables on health status, social support, and lifestyles were considered. RESULTS: We found that 18.6% of the population presented a risk of dependence, especially after age 65. Compared with this group, individuals who reported dependence (11.0%) had difficulties performing activities of daily living and had to receive support to perform them. Habits such as smoking, excessive alcohol consumption, and being sedentary were associated with a higher probability of dependence, particularly for women. CONCLUSIONS: Difficulties in carrying out activities of daily living precede the onset of dependence. Preserving personal autonomy and functioning without receiving support appears to be a preventive factor. Adopting an active and healthy lifestyle helps reduce the risk of dependence.
Abstract:
3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most structures are built from planar surfaces, recognizing them paves the way for automating the generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results obtained show the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud, with only a minor distance to the actual configurations. © 2014 IEEE.
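The abstract does not give the exact form of the logarithmically proportional objective, so the following is only an illustrative log-scaled plane-fitness score of the kind a GA individual (a candidate plane) could be evaluated with over a point cloud:

```python
import math

def point_plane_distance(p, plane):
    """Distance from point p = (x, y, z) to the plane ax + by + cz + d = 0."""
    a, b, c, d = plane
    norm = math.sqrt(a * a + b * b + c * c)
    return abs(a * p[0] + b * p[1] + c * p[2] + d) / norm

def log_objective(plane, cloud, eps=0.05):
    """Log-scaled inlier score: points near the plane contribute ~log(2),
    far points contribute ~0. Illustrative stand-in only, not the paper's
    actual objective function."""
    return sum(math.log1p(eps / (eps + point_plane_distance(p, plane)))
               for p in cloud)

# Synthetic cloud: a 10x10 grid lying on the plane z = 0
cloud = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
true_plane = (0.0, 0.0, 1.0, 0.0)
wrong_plane = (0.0, 0.0, 1.0, -1.0)   # parallel plane at z = 1
```

A GA would evolve the four plane coefficients to maximise this score, with `eps` playing the role of a density-dependent tolerance.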
Abstract:
This paper proposes a Genetic Algorithm (GA) for the design of combinational logic circuits. The fitness function is evaluated using Fractional Calculus. This approach extends the classical fitness function by including a fractional-order dynamical evaluation. The experiments reveal superior results when compared with the classical method.
Fractional derivatives: probability interpretation and frequency response of rational approximations
Abstract:
The theory of fractional calculus (FC) is a useful mathematical tool in many applied sciences. Nevertheless, only in the last decades have researchers been motivated to adopt FC concepts. There are several reasons for this state of affairs, namely the co-existence of different definitions and interpretations, and the need for approximation methods for the real-time calculation of fractional derivatives (FDs). In the first part, this paper introduces a probabilistic interpretation of the fractional derivative based on the Grünwald-Letnikov definition. In the second part, the calculation of fractional derivatives through Padé fraction approximations is analyzed. It is observed that the probabilistic interpretation and the frequency response of the fraction approximations of FDs reveal a clear correlation between the two concepts.
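The probabilistic reading of the Grünwald-Letnikov definition can be checked numerically: for 0 < α < 1, the magnitudes of the weights applied to past samples are positive and sum to 1 in the limit, so they behave as a probability distribution over the signal's history. A sketch, with the truncation length as an assumption:

```python
def gl_coefficients(alpha, n):
    """First n+1 Grünwald-Letnikov weights (-1)^k * binom(alpha, k),
    computed with the standard recursion c_k = c_{k-1} * (k - 1 - alpha) / k."""
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (k - 1 - alpha) / k)
    return c

alpha = 0.5
c = gl_coefficients(alpha, 20000)

# For 0 < alpha < 1, every coefficient beyond the first is negative, so the
# negated values are positive "probabilities" attached to past samples.
past_weights = [-ck for ck in c[1:]]
total = sum(past_weights)   # approaches 1 as the truncation length grows
```

The slow (power-law) decay of these weights is what gives the fractional derivative its long-memory character.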