917 results for Power Sensitivity Model
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and to assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and of squared fit coefficients during curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of the noise components. While the polynomial model has limitations, when used with care and with appropriate data weighting it offers a simple and robust means of examining the detector noise components as a function of detector exposure.
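As a rough illustration of the polynomial route described above, the sketch below fits pixel variance as a weighted second-order polynomial of DAK and splits it into electronic (constant), quantum (linear) and fixed-pattern (quadratic) terms. The 1/variance weighting is one simple choice that emphasises low-exposure points; the DAK and variance values are synthetic placeholders, not data from the six units studied.

```python
import numpy as np

def decompose_noise(dak, variance):
    """Fit variance(K) = e + q*K + f*K^2 (electronic, quantum, fixed-pattern)."""
    # Relative weighting so low-exposure points are not swamped by the large
    # variances at high DAK (one simple choice, not the only possible one).
    w = 1.0 / variance
    f, q, e = np.polyfit(dak, variance, deg=2, w=w)
    return e, q, f

def noise_fractions(dak, e, q, f):
    """Fractional contribution of each noise component at a given DAK."""
    total = e + q * dak + f * dak**2
    return e / total, q * dak / total, f * dak**2 / total

# Hypothetical DAK (µGy) and pixel-variance values, generated from a known model.
dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])
var = 4.0 + 0.8 * dak + 2e-4 * dak**2
e, q, f = decompose_noise(dak, var)
print("electronic, quantum, fixed-pattern coefficients:", e, q, f)
print("fractions at 100 µGy:", noise_fractions(100.0, e, q, f))
```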
Abstract:
The vulnerability of subpopulations of retinal neurons delineated by their content of cytoskeletal or calcium-binding proteins was evaluated in the retinas of cynomolgus monkeys in which glaucoma was produced with an argon laser. We quantitatively compared the number of neurons containing neurofilament (NF) protein, parvalbumin, calbindin or calretinin immunoreactivity in central and peripheral portions of the nasal and temporal quadrants of the retina from glaucomatous and fellow non-glaucomatous eyes. There was no significant difference between the two eyes in the proportion of amacrine, horizontal and bipolar cells labeled with antibodies to the calcium-binding proteins. NF triplet immunoreactivity was present in a subpopulation of retinal ganglion cells, many of which, but not all, likely correspond to large ganglion cells that subserve the magnocellular visual pathway. Loss of NF protein-containing retinal ganglion cells was widespread throughout the central (59-77% loss) and peripheral (96-97% loss) nasal and temporal quadrants and was associated with the loss of NF-immunoreactive optic nerve fibers in the glaucomatous eyes. Comparison of counts of NF-immunoreactive neurons with total cell loss evaluated by Nissl staining indicated that NF protein-immunoreactive cells represent a large proportion of the cells that degenerate in the glaucomatous eyes, particularly in the peripheral regions of the retina. Such data may be useful in determining the cellular basis for sensitivity to this pathologic process and in the design of diagnostic tests sensitive to the loss of this subset of NF-immunoreactive ganglion cells.
Abstract:
The study of the movement of organisms is essential for understanding how ecosystems function. In the case of exploited marine ecosystems, this leads to an interest in the spatial strategies of fishers. One of the most widely used approaches for modelling the movement of top predators is the Lévy random walk. A random walk is a mathematical model composed of random displacements. In the Lévy case, the displacement lengths follow a Lévy stable law. In that case, too, the lengths, as they tend to infinity (in practice when they are large, large relative to the median or to the third quartile, for example), follow a power law characteristic of the type of Lévy random walk (Cauchy, Brownian or strictly Lévy). In practice, besides the fact that this property is used in its converse form without theoretical justification, the distribution tails, a notion which is moreover imprecise, are modelled by power laws without any discussion of the sensitivity of the results to the definition of the tail, or of the relevance of the goodness-of-fit tests and model-selection criteria. In this work, which deals with the observed movements of three Peruvian anchovy fishing vessels, several models of distribution tails (log-normal, exponential, truncated exponential, power law and truncated power law) were compared, as well as two possible definitions of the distribution tail (from the median to infinity, or from the third quartile to infinity). In terms of the statistical criteria and tests used, the truncated laws (exponential and power) proved to be the best. They also incorporate the fact that, in practice, the vessels do not exceed a certain limit in displacement length. The choice of model proved sensitive to the choice of the start of the distribution tail: for the same vessel, the choice of one truncated model or the other depends on the range of values of the variable over which the model is fitted. Finally, we discuss the ecological implications of the results of this work.
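A minimal sketch of the kind of tail-model comparison described above: candidate distributions are fitted by maximum likelihood to the step lengths above a chosen tail threshold (median or third quartile) and ranked by AIC. For brevity only two untruncated candidates supported on [threshold, ∞) are used; the truncated exponential, truncated power law and log-normal models of the study would need custom (conditioned) likelihoods. The step lengths here are simulated, not the vessel tracks.

```python
import numpy as np
from scipy import stats

def compare_tail_models(steps, threshold):
    """Fit candidate tail models to step lengths above `threshold`, rank by AIC.
    Both candidates live on [threshold, inf), so their likelihoods are comparable."""
    tail = steps[steps > threshold]
    candidates = {
        "exponential": (stats.expon,  stats.expon.fit(tail, floc=threshold)),
        "power law":   (stats.pareto, stats.pareto.fit(tail, floc=0, fscale=threshold)),
    }
    aic = {}
    for name, (dist, params) in candidates.items():
        loglik = dist.logpdf(tail, *params).sum()
        aic[name] = 2 * 1 - 2 * loglik            # one free parameter each
    return dict(sorted(aic.items(), key=lambda kv: kv[1]))

# Simulated step lengths (km); in the study these would come from vessel trajectories.
rng = np.random.default_rng(0)
steps = 1.0 + rng.pareto(1.8, 5000)
for q in (0.50, 0.75):                            # tail starting at the median or at Q3
    thr = np.quantile(steps, q)
    print(f"tail from quantile {q}:", compare_tail_models(steps, thr))
```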
Abstract:
Analysis of variance is commonly used in morphometry to ascertain differences in parameters between several populations. Failure to detect significant differences between populations (type II error) may be due to suboptimal sampling and lead to erroneous conclusions; the concept of statistical power allows one to avoid such failures by means of adequate sampling. Several examples are given from the morphometry of the nervous system, showing the use of the power of a hierarchical analysis of variance test for the choice of appropriate sample and subsample sizes. In the first case chosen, neuronal densities in the human visual cortex, we find the number of observations to have little effect. For dendritic spine densities in the visual cortex of mice and humans, the effect is somewhat larger. A substantial effect is shown in our last example, dendritic segmental lengths in the monkey lateral geniculate nucleus. It is in the nature of the hierarchical model that sample size is always more important than subsample size. The relative weight to be attributed to subsample size thus depends on the magnitude of the between-observations variance relative to the between-individuals variance.
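The point about sample versus subsample size can be illustrated with the standard two-level variance formula: assuming a balanced random-effects design with n individuals and m observations per individual, Var(grand mean) = σ²_ind/n + σ²_obs/(n·m). Increasing m can never push this below σ²_ind/n, while increasing n always helps. The sketch below uses hypothetical variance components, not the morphometric data of the abstract.

```python
import numpy as np

def grand_mean_variance(sigma2_ind, sigma2_obs, n_individuals, m_obs):
    """Variance of the grand mean in a balanced two-level (individuals/observations) design."""
    return sigma2_ind / n_individuals + sigma2_obs / (n_individuals * m_obs)

# Hypothetical between-individuals and between-observations variance components.
sigma2_ind, sigma2_obs = 1.0, 4.0
for n, m in [(5, 5), (5, 50), (10, 5), (50, 5)]:
    v = grand_mean_variance(sigma2_ind, sigma2_obs, n, m)
    print(f"n={n:2d}, m={m:2d}: Var(grand mean) = {v:.3f}  (floor = {sigma2_ind / n:.3f})")
```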
Abstract:
Computed tomography (CT) is the standard imaging modality for tumor volume delineation in radiotherapy treatment planning of retinoblastoma, despite some inherent limitations. CT is very useful in providing physical density information for dose calculation and morphological volumetric information, but it has low sensitivity in assessing tumor viability. On the other hand, 3D ultrasound (US) allows a highly accurate definition of the tumor volume thanks to its high spatial resolution, but it is not currently integrated into treatment planning and is used only for diagnosis and follow-up. Our ultimate goal is the automatic segmentation of the gross tumor volume (GTV) in 3D US, the segmentation of the organs at risk (OAR) in CT, and the registration of both modalities. In this paper, we present some preliminary results in this direction. We present a 3D active-contour-based segmentation of the eye ball and the lens in CT images; the approach incorporates prior knowledge of the anatomy by using a 3D geometrical eye model. The automated segmentation results are validated by comparison with manual segmentations. We then present two approaches for the fusion of 3D CT and US images: (i) a landmark-based transformation, and (ii) an object-based transformation that makes use of eye ball contour information in the CT and US images.
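For the landmark-based fusion mentioned in (i), a common choice is the least-squares rigid transform between corresponding landmark sets (the Kabsch/Procrustes solution). The sketch below is a generic implementation with made-up landmark coordinates; the paper does not specify which landmarks or which solver were actually used.

```python
import numpy as np

def landmark_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Purely illustrative corresponding landmarks (mm) picked on US and CT.
us_pts = np.array([[0.0, 0, 0], [11.5, 0, 0], [5.5, 8.0, 0], [5.5, 0, 8.0]])
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
ct_pts = us_pts @ R_true.T + np.array([30.0, -12.0, 55.0])

R, t = landmark_rigid_transform(us_pts, ct_pts)
print("landmarks recovered exactly:", np.allclose(us_pts @ R.T + t, ct_pts))
```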
Abstract:
Intensification of agricultural production without sound management and regulation can lead to severe environmental problems, as in Western Santa Catarina State, Brazil, where intensive swine production has caused large accumulations of manure and, consequently, water pollution. Natural resource scientists are asked by decision-makers for advice on management and regulatory decisions. Distributed environmental models are useful tools, since they can be used to explore the consequences of various management practices. However, in many areas of the world, quantitative data for model calibration and validation are lacking. The data-intensive distributed environmental model AgNPS was applied in a data-poor environment, the upper catchment (2,520 ha) of the Ariranhazinho River, near the city of Seara, in Santa Catarina State. Steps included data preparation, cell size selection, sensitivity analysis, model calibration and application to different management scenarios. The model was calibrated based on a best guess for model parameters and on a pragmatic sensitivity analysis. The parameters were adjusted to match model outputs (runoff volume, peak runoff rate and sediment concentration) closely with the sparse observed data. A modelling grid cell resolution of 150 m produced appropriate results at an acceptable computational cost. The rainfall-runoff response of the AgNPS model was calibrated using three separate rainfall ranges (< 25, 25-60, > 60 mm). Predicted sediment concentrations were consistently six to ten times higher than observed, probably due to sediment trapping along vegetated channel banks. Predicted N and P concentrations in stream water ranged from just below to well above regulatory norms. Expert knowledge of the area, in addition to experience reported in the literature, was able to compensate in part for the limited calibration data. Several scenarios (actual, recommended and excessive manure applications, and point-source pollution from swine operations) could be compared with the model, using relative ranking rather than quantitative predictions.
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element is obtained from the analysis of the temperature transient at node i after a power step at node j. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
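A minimal sketch of one of the options mentioned above: fitting a Foster-type multiexponential step response, Zth(t) = Σ_k R_k (1 − exp(−t/τ_k)), by constrained nonlinear least squares with non-negativity bounds on R_k and τ_k. The transient is synthetic and the model order is fixed at 3; proper order selection against validation signals, as the abstract describes, is not shown.

```python
import numpy as np
from scipy.optimize import least_squares

def zth_model(t, params):
    """Foster-type step response: Zth(t) = sum_k R_k * (1 - exp(-t / tau_k))."""
    p = params.reshape(-1, 2)                     # rows of (R_k, tau_k)
    return np.sum(p[:, 0] * (1.0 - np.exp(-t[:, None] / p[:, 1])), axis=1)

def fit_zth(t, zth, order):
    """Constrained NLSQ fit of a multiexponential model of the given order."""
    # Spread the initial time constants logarithmically over the measured window.
    tau0 = np.logspace(np.log10(t[1]), np.log10(t[-1]), order)
    r0 = np.full(order, zth[-1] / order)
    x0 = np.column_stack([r0, tau0]).ravel()
    res = least_squares(lambda p: zth_model(t, p) - zth, x0,
                        bounds=(0.0, np.inf))     # enforce R_k, tau_k >= 0
    return res.x.reshape(-1, 2)

# Synthetic transient (K/W versus s); real data would come from FEM or measurement.
t = np.logspace(-4, 1, 200)
true = [(0.5, 1e-3), (1.2, 0.05), (0.8, 1.0)]
zth = sum(R * (1 - np.exp(-t / tau)) for R, tau in true)
params = fit_zth(t, zth, order=3)
print(np.round(params[np.argsort(params[:, 1])], 4))   # recovered (R_k, tau_k) pairs
```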
Abstract:
BACKGROUND/OBJECTIVES: To assess the distribution of interleukin (IL)-1β, IL-6, tumour necrosis factor (TNF)-α and C-reactive protein (CRP) according to the different definitions of metabolically healthy obesity (MHO). SUBJECTS/METHODS: A total of 881 obese (body mass index (BMI) ≥30 kg/m²) subjects from the population-based CoLaus Study participated in this study. MHO was defined using six sets of criteria including different combinations of waist circumference, blood pressure, total, high-density lipoprotein or low-density lipoprotein cholesterol, triglycerides, fasting glucose, homeostasis model assessment, high-sensitivity CRP, and personal history of cardiovascular, respiratory or metabolic diseases. IL-1β, IL-6 and TNF-α were assessed by multiplexed flow cytometric assay. CRP was assessed by immunoassay. RESULTS: On bivariate analysis, some, but not all, definitions of MHO led to significantly lower levels of IL-6, TNF-α and CRP compared with non-MHO subjects. Most of these differences became nonsignificant after multivariate analysis. An a posteriori analysis showed a statistical power between 9 and 79%, depending on the inflammatory biomarker and the MHO definition considered. Further increasing the sample size to overweight+obese individuals (BMI ≥25 kg/m², n=2917) showed metabolically healthy status to be significantly associated with lower levels of CRP, while no association was found for IL-1β. Significantly lower IL-6 and TNF-α levels were also found with some but not all MHO definitions, the differences in IL-6 becoming nonsignificant after adjusting for abdominal obesity or percent body fat. CONCLUSIONS: MHO individuals present with decreased levels of CRP and, depending on the MHO definition, also with decreased levels of IL-6 and TNF-α. Conversely, no association with IL-1β levels was found.
Abstract:
The objective of this work is to study the impact of unions' bargaining power on production and wages. We present a model where a competitive final good is produced through two substitutable intermediate goods, one produced by unskilled labor and the other by skilled labor. Potential workers decide, at a cost, to become skilled or unskilled and, thus, labor supplies are determined endogenously. We find that the reallocation of labor supplies due to changes in the unskilled (or skilled) unions' bargaining power may have a positive impact on final-good production. At the same time, total labor earnings increase with the unskilled unions' bargaining power if final-good production increases too. We also show that minimum wage legislation has effects similar to an increase in the bargaining power of the unskilled unions.
Abstract:
A numerical study is presented of the three-dimensional Gaussian random-field Ising model at T=0 driven by an external field. Standard synchronous relaxation dynamics is employed to obtain the magnetization versus field hysteresis loops. The focus is on the analysis of the number and size distribution of the magnetization avalanches. They are classified as nonspanning, one-dimensional-spanning, two-dimensional-spanning or three-dimensional-spanning depending on whether or not they span the whole lattice in the different space directions. Moreover, finite-size scaling analysis enables the identification of two different types of nonspanning avalanches (critical and noncritical) and two different types of three-dimensional-spanning avalanches (critical and subcritical), whose numbers increase with L as power laws with different exponents. We conclude by giving a scenario for avalanche behavior in the thermodynamic limit.
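A compact sketch of the driven T=0 dynamics described above, assuming the standard setup (ferromagnetic nearest-neighbour coupling J = 1, periodic boundaries, adiabatic field ramp along the rising branch): all unstable spins are flipped synchronously until the lattice is stable, and the number of flips per field increment is recorded as the avalanche size. The parameters (L = 16, σ = 2.5) are illustrative only; the spanning classification and finite-size scaling analysis of the paper are not reproduced here.

```python
import numpy as np

def rfim_avalanches(L=16, sigma=2.5, seed=0):
    """Hysteresis avalanches of the T=0 Gaussian RFIM on an L^3 lattice (J=1,
    periodic boundaries), driven adiabatically upward from saturation at H=-inf.
    Relaxation is synchronous: all unstable spins flip at once until stability."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, sigma, (L, L, L))         # quenched random fields
    s = -np.ones((L, L, L), dtype=int)            # start fully magnetized down

    def neighbour_sum(s):
        return sum(np.roll(s, shift, axis) for axis in range(3) for shift in (1, -1))

    sizes = []
    while np.any(s < 0):
        # Raise H just past the point where the weakest down spin destabilises.
        local = neighbour_sum(s) + h              # local field excluding H
        H = -np.max(local[s < 0]) + 1e-9
        size = 0
        while True:
            unstable = s * (neighbour_sum(s) + h + H) < 0
            if not unstable.any():
                break
            s[unstable] *= -1                     # synchronous update
            size += int(unstable.sum())
        sizes.append(size)
    return sizes

sizes = rfim_avalanches()
print(len(sizes), "avalanches; largest size:", max(sizes))
```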
Abstract:
In this paper, we study dynamical aspects of the two-dimensional (2D) gonihedric spin model using both numerical and analytical methods. This spin model has vanishing microscopic surface tension and actually describes an ensemble of loops living on a 2D surface. The self-avoidance of loops is parametrized by a parameter κ. The κ=0 model can be mapped to one of the six-vertex models discussed by Baxter, and it does not have critical behavior. We have found that allowing for κ≠0 does not lead to critical behavior either. Finite-size effects are rather severe, and in order to understand these effects, a finite-volume calculation for non-self-avoiding loops is presented. This model, like its 3D counterpart, exhibits very slow dynamics, but a careful analysis of dynamical observables reveals nonglassy evolution (unlike its 3D counterpart). We also find, in this κ=0 case, the law that governs the long-time, low-temperature evolution of the system, through a dual description in terms of defects. A power law, rather than a logarithmic law, has been found for the approach to equilibrium.
Abstract:
We study the exact ground state of the two-dimensional random-field Ising model as a function of both the external applied field B and the standard deviation σ of the Gaussian random-field distribution. The equilibrium evolution of the magnetization consists of a sequence of discrete jumps. These are very similar to the avalanche behavior found in the out-of-equilibrium version of the same model with local relaxation dynamics. We compare the statistical distributions of the magnetization jumps and find that both exhibit power-law behavior for the same value of σ. The corresponding exponents are compared.
Abstract:
The development of side-branching in solidifying dendrites in a regime of large values of the Péclet number is studied by means of a phase-field model. We have compared our numerical results with the experiments of the preceding paper and obtain good qualitative agreement. The growth rate of each side branch shows power-law behavior from the early stages of its life. From their birth, branches that finally succeed in the competition process of side-branching development have a larger growth exponent than branches that are stopped. Coarsening of branches is entirely determined by their geometrical position relative to their dominant neighbors. The winning branches escape from the diffusive field of the main dendrite and become independent dendrites.
Abstract:
We introduce two coupled map lattice models with nonconservative interactions and a continuous nonlinear driving. Depending on both the degree of conservation and the convexity of the driving, we find different behaviors, ranging from self-organized criticality, in the sense that the distribution of events (avalanches) obeys a power law, to a macroscopic synchronization of the population of oscillators, with avalanches of the size of the system.
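The abstract does not give the specific maps or driving, so the sketch below uses a generic Olami-Feder-Christensen-style nonconservative lattice as a stand-in: a slow uniform drive, threshold relaxation, and a conservation parameter alpha (conservative only at alpha = 0.25, with open boundaries). It only illustrates how avalanche sizes are collected in such a dissipative driven lattice, not the authors' two coupled map lattice models.

```python
import numpy as np

def ofc_like_avalanches(L=32, alpha=0.2, threshold=1.0, n_events=2000, seed=0):
    """OFC-style nonconservative lattice: slow uniform drive, threshold relaxation,
    each toppling passes a fraction alpha of its load to each of 4 neighbours."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, threshold, (L, L))
    sizes = []
    for _ in range(n_events):
        u += threshold - u.max()                  # drive until one site reaches threshold
        size = 0
        while True:
            over = np.argwhere(u >= threshold)
            if over.size == 0:
                break
            for i, j in over:
                load = u[i, j]
                u[i, j] = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:   # open (dissipative) boundaries
                        u[ni, nj] += alpha * load
                size += 1
        sizes.append(size)
    return sizes

sizes = ofc_like_avalanches()
print("mean avalanche size:", np.mean(sizes), " largest:", np.max(sizes))
```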
Abstract:
A dynamical model based on the continuous addition of colored shot noises is presented. The resulting process is colored and non-Gaussian. A general expression for the characteristic function of the process is obtained which, after a scaling assumption, takes on a form that is the basis of the results derived in the rest of the paper. One of these is an expansion for the cumulants, which are all finite, subject to mild conditions on the functions defining the process. This is in contrast with the Lévy distribution, which can be obtained from our model in certain limits and which has no finite moments. The evaluation of the spectral density and of the form of the probability density function in the tails of the distribution shows that the model exhibits a power-law spectrum and long tails in a natural way. A careful analysis of the characteristic function shows that it may be separated into a part representing a Lévy process and another part representing the deviation of our model from the Lévy process.