986 results for Square-law nonlinearity symbol timing estimation


Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

This paper evaluates the rate of exposure to ionizing radiation to which professionals working in surgical procedures requiring radiological examinations are subjected. Real-time readings of the exposure rate were initially taken in four distinct operating rooms during four surgical procedures that used fluoroscopy equipment (three orthopedic surgeries: one on the shoulder, one on the arm, and one for implantation of a metal pin in the leg region; and a fourth, vascular, procedure); an ionization chamber detector and an electrometer were used in these surgeries. To check the values obtained, the distribution of the radiation exposure rate during surgical procedures was re-evaluated with thermoluminescent dosimeters (TLDs). For this, thirty TLDs were distributed in the operating rooms, arranged at points of interest occupied by professionals. The TLDs were left in place for thirty consecutive days, after which they were removed and replaced with new, unexposed dosimeters. The dosimeters were then read out for exposure rate; this procedure was repeated for four months without interruption. The quantification of the results primarily sought to convert exposure rate into equivalent dose rate, both for the ionization chamber and for the TLD measurements, in order to highlight the biological effect of ionizing radiation for comparison within the scientific context. The results were then plotted to establish the relationship between equivalent dose and distance to the central axis of the x-ray source, confirming the inverse square law for distance. Finally, the values were compared with the maximum limit recommended by legislation for occupationally exposed individuals. The methodology for the analysis and quantification of the data in this work aims at implementing a work plan that meets ...
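The inverse square law for distance that the study confirms can be illustrated with a short numerical check; the reference dose rate and distances below are hypothetical, chosen only to show the relationship, not values from the paper.

```python
import numpy as np

# Hypothetical reference: equivalent dose rate H0 at distance d0 from the
# x-ray source's central axis (illustrative numbers, not the paper's data).
d0, H0 = 0.5, 2.0  # metres, mSv/h

def dose_rate(d):
    """Inverse square law: dose rate falls off as (d0/d)**2."""
    return H0 * (d0 / d) ** 2

distances = np.array([0.5, 1.0, 2.0, 4.0])
rates = dose_rate(distances)

# On log-log axes the relationship is a straight line of slope -2.
slope = np.polyfit(np.log(distances), np.log(rates), 1)[0]
print(rates)   # each doubling of distance quarters the dose rate
print(slope)   # -2
```

Plotting equivalent dose against distance on double-logarithmic axes, as the study does, turns this power law into a straight line whose slope of -2 is the signature of the inverse square law.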

Relevance:

100.00%

Publisher:

Abstract:

The validity of Newton's law of gravitation, or ISL (inverse square law), has been amply demonstrated by astronomical observations within the solar system (ranges of roughly 10^7 to 10^9 km). Experiments carried out on geological scales (ranges between centimetres and kilometres), heirs of the Cavendish experiment, have provided an experimental value of the ISL constant G, though one affected by considerable uncertainty (the precision with which G is known is of the order of 10^-4). Interest in determining a more precise value of the constant G has grown in recent decades, driven by the need to test emerging non-Newtonian theories of gravitation, and by technological advances in measurement apparatus, which can now detect the gravitational interaction even over very short distances (below a millimetre). This work briefly presents some of the theories advanced in recent decades that have made reducing the uncertainty on the measurement of G urgent, then lists some important experiments conducted to determine a value of G, briefly reviewing the experimental methods they followed. Among the experiments presented, two significant ones are finally analysed in detail: the short-range measurement of the gravitational constant using cold atoms within the MAGIA experiment in Florence, and the observation of a putative variation of G on relatively long time scales, carried out through 21 years of observation of the binary pulsar PSR J1713+0747.

Relevance:

100.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance:

100.00%

Publisher:

Abstract:

How do signals from the two eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by: (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), where the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision. © 2007 VSP.
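The two-stage architecture described above can be sketched schematically as follows. The functional form (nearly linear monocular transduction with interocular gain control, binocular summation, then steep transduction with a second gain control) follows the abstract's description, but the parameter values here are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np

def two_stage_response(cl, cr, m=1.3, S=1.0, p=8.0, q=6.5, Z=0.08):
    """Schematic two-stage binocular contrast model (illustrative parameters).

    Stage 1: nearly linear monocular transduction (exponent m ~ 1.3),
    divisively suppressed by the contrast in both eyes.
    Stage 2: steeply accelerating transduction with binocular gain
    control, applied after summation across the eyes.
    """
    pool = S + cl + cr                       # interocular suppression pool
    stage1 = cl**m / pool + cr**m / pool     # binocular summation of stage-1 outputs
    return stage1**p / (Z + stage1**q)       # stage-2 gain control

# Binocular presentation yields a larger response than monocular
# presentation at the same contrast, giving the binocular advantage
# the model is built to capture.
c = 1.0  # contrast in percent
print(two_stage_response(c, c) > two_stage_response(c, 0.0))  # True
```

Discrimination thresholds in the model then depend on the local gradient of this response function, while matching equates its magnitude across test and standard, as the abstract states.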

Relevance:

100.00%

Publisher:

Abstract:

Over the last ten years our understanding of early spatial vision has improved enormously. The long-standing model of probability summation amongst multiple independent mechanisms with static output nonlinearities responsible for masking is obsolete. It has been replaced by a much more complex network of additive, suppressive, and facilitatory interactions and nonlinearities across eyes, area, spatial frequency, and orientation that extend well beyond the classical receptive field (CRF). A review of a substantial body of psychophysical work performed by ourselves (20 papers), and others, leads us to the following tentative account of the processing path for signal contrast. The first suppression stage is monocular, isotropic, non-adaptable, accelerates with RMS contrast, most potent for low spatial and high temporal frequencies, and extends slightly beyond the CRF. Second and third stages of suppression are difficult to disentangle but are possibly pre- and post-binocular summation, and involve components that are scale invariant, isotropic, anisotropic, chromatic, achromatic, adaptable, interocular, substantially larger than the CRF, and saturated by contrast. The monocular excitatory pathways begin with half-wave rectification, followed by a preliminary stage of half-binocular summation, a square-law transducer, full binocular summation, pooling over phase, cross-mechanism facilitatory interactions, additive noise, linear summation over area, and a slightly uncertain decision-maker. The purpose of each of these interactions is far from clear, but the system benefits from area and binocular summation of weak contrast signals as well as area and ocularity invariances above threshold (a herd of zebras doesn't change its contrast when it increases in number or when you close one eye). One of many remaining challenges is to determine the stage or stages of spatial tuning in the excitatory pathway.

Relevance:

100.00%

Publisher:

Abstract:

Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (eg a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle/deg sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984 Vision Research 24 385 - 394; Meese et al, 2004 Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2).
Only the two-stage model predicted the observed pattern of slope variation, providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]

Relevance:

100.00%

Publisher:

Abstract:

Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.

Relevance:

100.00%

Publisher:

Abstract:

The slope of the two-interval, forced-choice psychometric function (e.g. the Weibull parameter, β) provides valuable information about the relationship between contrast sensitivity and signal strength. However, little is known about how or whether β varies with stimulus parameters such as spatiotemporal frequency and stimulus size and shape. A second unresolved issue concerns the best way to estimate the slope of the psychometric function. For example, if an observer is non-stationary (e.g. their threshold drifts between experimental sessions), β will be underestimated if curve fitting is performed after collapsing the data across experimental sessions. We measured psychometric functions for 2 experienced observers for 14 different spatiotemporal configurations of pulsed or flickering grating patches and bars on each of 8 days. We found β ≈ 3 to be fairly constant across almost all conditions, consistent with a fixed nonlinear contrast transducer and/or a constant level of intrinsic stimulus uncertainty (e.g. a square-law transducer and a low level of intrinsic uncertainty). Our analysis showed that estimating a single β from results averaged over several experimental sessions was slightly more accurate than averaging multiple estimates from several experimental sessions. However, the small levels of non-stationarity (SD ≈ 0.8 dB) meant that the difference between the estimates was, in practice, negligible.
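The Weibull function whose slope parameter β is at issue here can be written down directly; a minimal sketch for the two-interval forced-choice case (guessing rate 0.5), with illustrative parameter values:

```python
import numpy as np

def weibull_2ifc(c, alpha, beta):
    """Two-interval forced-choice Weibull psychometric function.

    P(correct) rises from the guessing rate 0.5 towards 1 as contrast c
    increases; alpha sets the threshold and beta sets the slope.
    """
    return 1.0 - 0.5 * np.exp(-(c / alpha) ** beta)

# At c = alpha, performance is 1 - 0.5/e ~ 0.816 regardless of beta;
# beta only controls how steeply P(correct) climbs around that point.
alpha, beta = 1.0, 3.0
print(weibull_2ifc(alpha, alpha, beta))
```

The non-stationarity problem the abstract describes follows from this form: if alpha drifts between sessions, pooling trials across sessions mixes functions centred at different thresholds, flattening the aggregate curve and biasing the fitted β downwards.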

Relevance:

100.00%

Publisher:

Abstract:

To extend our understanding of the early visual hierarchy, we investigated the long-range integration of first- and second-order signals in spatial vision. In our first experiment we performed a conventional area summation experiment where we varied the diameter of (a) luminance-modulated (LM) noise and (b) contrast-modulated (CM) noise. Results from the LM condition replicated previous findings with sine-wave gratings in the absence of noise, consistent with long-range integration of signal contrast over space. For CM, the summation function was much shallower than for LM suggesting, at first glance, that the signal integration process was spatially less extensive than for LM. However, an alternative possibility was that the high spatial frequency noise carrier for the CM signal was attenuated by peripheral retina (or cortex), thereby impeding our ability to observe area summation of CM in the conventional way. To test this, we developed the "Swiss cheese" stimulus of Meese and Summers (2007) in which signal area can be varied without changing the stimulus diameter, providing some protection against inhomogeneity of the retinal field. Using this technique and a two-component subthreshold summation paradigm we found that (a) CM is spatially integrated over at least five stimulus cycles (possibly more), (b) spatial integration follows square-law signal transduction for both LM and CM and (c) the summing device integrates over spatially-interdigitated LM and CM signals when they are co-oriented, but not when cross-oriented. The spatial pooling mechanism that we have identified would be a good candidate component for a module involved in representing visual textures, including their spatial extent.

Relevance:

100.00%

Publisher:

Abstract:

Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al, 2012) and found it to be well-described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target and then sums over signal area. Modelling shows our results to be consistent with a wide field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
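The -1/4 log-log slope falls out of the noisy energy model directly: square-law transduction makes the summed signal grow in proportion to area times contrast squared, while independent noise added at each location grows only as the square root of area. A sketch under exactly those assumptions (the constant k is an arbitrary criterion, not a fitted value):

```python
import numpy as np

def threshold(area, k=1.0):
    """Contrast threshold predicted by a noisy energy model.

    Signal after square-law transduction and summation over area:
    area * c**2. Independent noise summed over area has standard
    deviation ~ sqrt(area), so d-prime = area * c**2 / sqrt(area).
    Setting d-prime = k and solving for c gives
    c = (k / sqrt(area)) ** 0.5, i.e. threshold ~ area ** -0.25.
    """
    return (k / np.sqrt(area)) ** 0.5

areas = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
slope = np.polyfit(np.log10(areas), np.log10(threshold(areas)), 1)[0]
print(slope)  # -0.25: fourth-root summation on double log axes
```

This is why the straight -1/4 slope extending to 33 or more cycles, once retinal inhomogeneity is compensated, is consistent with a wide-field contrast integrator rather than requiring probability summation.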

Relevance:

100.00%

Publisher:

Abstract:

This research project falls within the field of scintillation dosimetry in radiotherapy, more specifically in high-dose-rate (HDR) brachytherapy. In this type of treatment the dose is delivered locally, which implies high dose gradients around the source. The goal of this work is to obtain a detector that measures dose at two distinct points and is optimized for dose measurement in HDR brachytherapy. To this end, the research project is divided into two studies: the spectral characterization of the 2-point detector and the characterization of the photodetector system leading to the dose measurement. First, the optical chain of a 2-point scintillation detector is characterized with a spectrometer in order to determine the optimal scintillating components. This study makes it possible to build a few detectors from the chosen components and then test them with the multi-point photodetector system. The photodetector system is also characterized so as to evaluate the sensitivity limits for the 2-point detector chosen previously. The final objective is to measure the dose rate precisely and accurately at the two measurement points of the multi-point detector during an HDR brachytherapy treatment.

Relevance:

50.00%

Publisher:

Abstract:

The symbol transition density in a digitally modulated signal affects the performance of practical synchronization schemes designed for timing recovery. This paper focuses on the derivation of simple performance limits for the estimation of the time delay of a noisy linearly modulated signal in the presence of various degrees of symbol correlation produced by the various transition densities in the symbol streams. The paper develops high- and low-signal-to-noise ratio (SNR) approximations of the so-called (Gaussian) unconditional Cramér–Rao bound (UCRB), as well as general expressions that are applicable in all ranges of SNR. The derived bounds are valid only for the class of quadratic, non-data-aided (NDA) timing recovery schemes. To illustrate the validity of the derived bounds, they are compared with the actual performance achieved by some well-known quadratic NDA timing recovery schemes. The impact of the symbol transition density on the classical threshold effect present in NDA timing recovery schemes is also analyzed. Previous work on performance bounds for timing recovery from various authors is generalized and unified in this contribution.
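The best-known member of the quadratic NDA class to which such bounds apply is the square-law (Oerder-Meyr style) timing estimator, which recovers a spectral line at the symbol rate from |x|² of the oversampled signal. A minimal noiseless sketch (raised-cosine pulse, QPSK, and all parameter values chosen purely for illustration):

```python
import numpy as np

def rc_pulse(t, beta):
    """Raised-cosine pulse (symbol period T = 1), with the removable
    singularity at |t| = 1/(2*beta) replaced by its limit value."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t) ** 2
    sing = np.isclose(denom, 0.0)
    h = np.sinc(t) * np.cos(np.pi * beta * t) / np.where(sing, 1.0, denom)
    return np.where(sing, (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta)), h)

rng = np.random.default_rng(0)
L, N, beta, tau = 2000, 4, 0.5, 0.3  # symbols, samples/symbol, roll-off, delay

# QPSK symbols, upsampled and shaped by the pulse delayed by tau.
a = (rng.choice([-1, 1], L) + 1j * rng.choice([-1, 1], L)) / np.sqrt(2)
span = 8
m = np.arange(-span * N, span * N + 1)
g = rc_pulse(m / N - tau, beta)          # pulse taps carrying the delay
up = np.zeros(L * N, dtype=complex)
up[::N] = a
x = np.convolve(up, g, mode="same")      # x[k] = sum_n a_n p(k/N - n - tau)

# Square-law nonlinearity: |x|^2 contains a spectral line at the symbol
# rate whose phase encodes the timing offset.
k = np.arange(L * N)
X = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * k / N))
tau_hat = (-np.angle(X) / (2.0 * np.pi)) % 1.0
print(tau_hat)  # close to the true delay, 0.3
```

Even without channel noise the estimate is not exact: the random symbol pattern produces self-noise in |x|², which is precisely the transition-density-dependent effect whose limits the bounds in the paper quantify.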

Relevance:

40.00%

Publisher:

Abstract:

The adhesive bonding technique enables both weight and complexity reduction in structures that require some joining technique to be used on account of fabrication/component shape issues. Because of this, adhesive bonding is also one of the main repair methods for metal and composite structures, in the strap and scarf configurations. The availability of strength prediction techniques for adhesive joints is essential for their generalized application, and these can rely on different approaches, such as mechanics of materials, conventional fracture mechanics or damage mechanics. The last two techniques depend on the measurement of the fracture toughness (GC) of materials. Within the framework of damage mechanics, a valid option is the use of Cohesive Zone Modelling (CZM) coupled with Finite Element (FE) analyses. In this work, CZM laws for adhesive joints considering three adhesives with varying ductility were estimated. The End-Notched Flexure (ENF) test geometry was selected based on overall test simplicity and the accuracy of its results. The adhesives Araldite® AV138, Araldite® 2015 and Sikaforce® 7752 were studied between high-strength aluminium adherends. Estimation of the CZM laws was carried out by an inverse methodology based on a curve-fitting procedure, which enabled a precise estimation of the adhesive joints' behaviour. The work led to the conclusion that a unique set of shear fracture toughness (GIIC) and shear cohesive strength (ts0) exists for each specimen that accurately reproduces the adhesive layer's behaviour. With this information, the accurate strength prediction of adhesive joints in shear is made possible by CZM.
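A cohesive law of the common triangular type, whose parameters an inverse curve-fitting procedure like the one above would estimate, can be sketched as follows; the stiffness, strength and toughness values are hypothetical, not the measured properties of the adhesives studied.

```python
import numpy as np

def triangular_czm(delta, K, t0, Gc):
    """Triangular cohesive law in shear (illustrative parameters).

    Linear elastic loading with stiffness K up to the cohesive strength
    t0 (reached at separation d0), then linear softening to zero
    traction at delta_f; the area under the whole traction-separation
    curve equals the fracture toughness Gc = t0 * delta_f / 2.
    """
    d0 = t0 / K                  # separation at damage onset
    df = 2.0 * Gc / t0           # separation at complete failure
    delta = np.asarray(delta, dtype=float)
    soften = t0 * (df - delta) / (df - d0)
    return np.where(delta <= d0, K * delta,
                    np.where(delta <= df, soften, 0.0))

# Hypothetical values: K in N/mm^3, t0 in MPa, Gc in N/mm.
K, t0, Gc = 1e4, 20.0, 1.0
d = np.linspace(0.0, 2.5 * Gc / t0, 20001)
y = triangular_czm(d, K, t0, Gc)
area = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(d))  # trapezoidal integral
print(area)  # ~Gc: area under the law recovers the toughness
```

In an inverse fit of the ENF test, t0 and Gc (here standing in for ts0 and GIIC) would be adjusted until the simulated load-displacement curve matches the measured one.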