545 results for SNR maximisation


Relevance: 10.00%

Abstract:

One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation to recover independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not correctly unmix all sources. This conclusion is based on a study of the mutual information. Nevertheless, some sources may be well separated, mainly when the number of sources is large and the signal-to-noise ratio (SNR) is high.
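
A minimal sketch of the setting described above, under assumed sizes and distributions (Dirichlet abundances enforce the sum-to-one constraint, so the sources are dependent by construction); it uses scikit-learn's FastICA rather than any method from the paper:

```python
# Illustrative only: simulate sum-to-one abundance fractions, mix them with
# random endmember spectra, add noise, and check how well FastICA recovers them.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pixels, n_sources, n_bands = 5000, 3, 50

# Abundances drawn from a Dirichlet distribution: each pixel sums to 1,
# so the sources are statistically dependent by construction.
abundances = rng.dirichlet(alpha=np.ones(n_sources), size=n_pixels)   # (pixels, sources)
endmembers = rng.uniform(0.0, 1.0, size=(n_sources, n_bands))         # signatures
snr_db = 30.0
signal = abundances @ endmembers
noise_power = signal.var() / (10 ** (snr_db / 10))
observations = signal + rng.normal(scale=np.sqrt(noise_power), size=signal.shape)

# ICA assumes independent sources; the sum-to-one constraint violates that,
# so recovery is typically imperfect.
ica = FastICA(n_components=n_sources, random_state=0)
estimated = ica.fit_transform(observations)

# Correlate each estimated source with its best-matching true abundance.
corr = np.abs(np.corrcoef(abundances.T, estimated.T)[:n_sources, n_sources:])
print("best |correlation| per true source:", corr.max(axis=1))
```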

Relevance: 10.00%

Abstract:

The separation of control and financing of the firm, highlighted by Berle and Means in the 1930s, called into question the microeconomic theory of the firm, more precisely the neoclassical hypothesis of profit maximisation. The central argument summarising the thinking of Berle & Means (1932) is that when ownership of the firm is diffuse, particularly when there is a large number of minority shareholders, managers will employ resources to satisfy their own ambitions rather than to maximise shareholder wealth. Thus, according to agency theory, managers (the agents) may pursue goals other than maximising the wealth of the shareholders (the principal). It is also true that, for various reasons, shareholders may have aims other than profit maximisation; shareholders may, for example, be risk-averse…

Relevance: 10.00%

Abstract:

Purpose: To evaluate whether physical measures of noise predict image quality at high and low noise levels. Method: Twenty-four images were acquired on a DR system using a Pehamed DIGRAD phantom at three kVp settings (60, 70 and 81) across a range of mAs values. The image acquisition setup consisted of 14 cm of PMMA slabs with the phantom placed in the middle at 120 cm SID. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for each image using ImageJ software, and 14 observers performed image scoring. Images were scored according to the observer's evaluation of the objects visualised within the phantom. Results: The R2 values of the non-linear relationship between object visibility score and CNR (60 kVp R2 = 0.902; 70 kVp R2 = 0.913; 81 kVp R2 = 0.757) demonstrate a better fit for all three kVp settings than the linear R2 values. As CNR increases at all kVp settings, object visibility also increases. The largest increase in SNR at low exposure values (up to 2 mGy) is observed at 60 kVp, compared with 70 or 81 kVp; the CNR response to exposure is similar. Pearson r was calculated to assess the correlation between score, OV, SNR and CNR. None of the correlations reached statistical significance (p > 0.01). Conclusion: For object visibility and SNR, tube potential variations may play a role in object visibility. Higher-energy X-ray beam settings give lower SNR but higher object visibility. Object visibility and CNR are similar at all three tube potentials, resulting in a strong positive relationship between CNR and object visibility score. At low doses the impact of radiographic noise does not have a strong influence on object visibility scores, because objects could still be identified in noisy images.
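
The ROI-based measures named above can be expressed compactly; the sketch below shows one plausible way to compute SNR and CNR from region statistics (the ROI sizes and pixel values are illustrative, and the study's actual ImageJ workflow is not reproduced):

```python
# Illustrative ROI-based SNR and CNR calculations with synthetic pixel data.
import numpy as np

def snr(signal_roi: np.ndarray) -> float:
    """SNR of a uniform region: mean pixel value over its standard deviation."""
    return signal_roi.mean() / signal_roi.std()

def cnr(object_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR: contrast between object and background, normalised by background noise."""
    return abs(object_roi.mean() - background_roi.mean()) / background_roi.std()

# Toy example with made-up ROIs.
rng = np.random.default_rng(1)
background = rng.normal(loc=100.0, scale=5.0, size=(50, 50))
disk       = rng.normal(loc=120.0, scale=5.0, size=(50, 50))
print(f"SNR = {snr(background):.1f}, CNR = {cnr(disk, background):.1f}")
```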

Relevance: 10.00%

Abstract:

The hierarchical forest planning process currently in place on public lands risks failing at two levels. At the upper level, the current process does not provide sufficient evidence of the sustainability of the current harvest level. At the lower level, the current process does not support realisation of the full value-creation potential of the forest resource, sometimes needlessly constraining short-term harvest planning. These failures are attributable to certain assumptions implicit in the allowable-cut optimisation model, which may explain why this problem is not well documented in the literature. We use agency theory to model the hierarchical forest planning process on public lands. We develop a two-stage iterative simulation framework to estimate the long-term effect of the interaction between the government and the fibre consumer, allowing us to identify conditions that can lead to stock-outs. We then propose an improved formulation of the allowable-cut optimisation model. The classical formulation of the allowable-cut optimisation model (i.e., maximisation of sustained fibre yield) does not consider that the industrial fibre consumer seeks to maximise its profit, but instead assumes total consumption of the fibre supply in every period, regardless of its value-creation potential. We extend the classical formulation of the allowable-cut optimisation model to anticipate the behaviour of the fibre consumer, thereby increasing the probability that the fibre supply will be fully consumed and restoring the validity of the total-consumption assumption implicit in the optimisation model. We model the principal-agent relationship between the government and industry using a bilevel formulation of the optimisation model, in which the upper level represents the allowable-cut determination process (the government's responsibility) and the lower level represents the fibre-consumption process (industry's responsibility). We show that the bilevel formulation can mitigate the risk of stock-outs, thereby improving the credibility of the hierarchical forest planning process. Together, the bilevel allowable-cut optimisation model and the methodology we developed to solve it to optimality represent an alternative to the methods currently in use. Our bilevel model and the iterative simulation framework are a step forward in value-driven forest planning technology. The explicit integration of industrial objectives and constraints into the forest planning process, starting with the determination of the allowable cut, should foster closer collaboration between government and industry, making it possible to exploit the full value-creation potential of the forest resource.
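
A minimal, generic sketch of such a bilevel structure, in illustrative notation rather than the thesis' own symbols: the upper level chooses the harvest schedule while anticipating the profit-maximising fibre-consumption decision made at the lower level.

\[
\begin{aligned}
\max_{x \in X}\quad & F\bigl(x,\, y^{*}(x)\bigr) && \text{upper level: allowable-cut determination (government)}\\
\text{s.t.}\quad & y^{*}(x) \in \arg\max_{y \in Y(x)} \pi(y) && \text{lower level: fibre consumption maximising industrial profit } \pi
\end{aligned}
\]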

Relevance: 10.00%

Abstract:

Two socio-economic concepts that at first seem contradictory are the subject of this essay. One author notes, for instance, that as the "daughter" of poverty and necessity, cooperation originally aimed at the abolition of profit, whereas marketing, the "son" of abundance, was long oriented toward maximising profit. In this research, however, we find that there now exists Social Marketing, or Societal Marketing. This socially engaged marketing aims instead at raising the quality of life of society as a whole. We have also tried to show that any cooperative enterprise could develop more easily, both economically and socially, by making direct use of marketing techniques. Applying marketing principles to the management of cooperative associations allowed us to bring out the needs, possible alternatives, threats or risks, and advantages available to these types of enterprises (in the "Third World"), particularly in the context of a small agricultural cooperative association in Peru. Based on a case study of the agricultural producers' association "Los Incas", in the central Peruvian region of Satipo, we attempted to apply modern marketing principles in order to design a plan or strategy for the economic and social development of this small enterprise, outlining in general terms the desired regional development and, finally, suggesting a longer-term national emergency plan.

Relevance: 10.00%

Abstract:

Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the DACs can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up. This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all values of DAC output bits. This suggests that a nearly N-fold gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels for all frequencies tested. This means that there is no benefit to using an array of DDSs for the problem of in-band quantizer nonlinearities. As a result, alternate methods of harmonic spur management must be employed.
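
A small numerical sketch of the combining argument above, under the idealised assumption of fully uncorrelated channel noise (tone frequency, noise level and channel count are made up): summing N channels should raise the SNR by roughly 10·log10(N) dB.

```python
# Illustrative only: combine N noisy copies of the same tone and compare SNR.
import numpy as np

rng = np.random.default_rng(2)
fs, f0, n_samples, n_channels = 1e6, 97e3, 2**14, 8
t = np.arange(n_samples) / fs
tone = np.sin(2 * np.pi * f0 * t)

def snr_db(x, clean):
    # Project out the tone; the residual is treated as noise.
    noise = x - clean * (x @ clean) / (clean @ clean)
    return 10 * np.log10(np.sum((x - noise) ** 2) / np.sum(noise ** 2))

channels = [tone + 0.1 * rng.standard_normal(n_samples) for _ in range(n_channels)]
combined = np.sum(channels, axis=0)

print(f"single channel SNR ≈ {snr_db(channels[0], tone):.1f} dB")
print(f"{n_channels}-channel sum SNR ≈ {snr_db(combined, tone):.1f} dB "
      f"(expected gain ≈ {10 * np.log10(n_channels):.1f} dB)")
```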

Relevance: 10.00%

Abstract:

We consider an LTE network where a secondary user acts as a relay, transmitting data to the primary user using a decode-and-forward mechanism, transparent to the base station (eNodeB). Clearly, the relay can decode symbols more reliably if the employed precoder matrix indicators (PMIs) are known. However, for the closed-loop spatial multiplexing (CLSM) transmit mode, this information is not always embedded in the downlink signal, leading to a need for effective methods to determine the PMI. In this thesis, we consider 2x2 MIMO and 4x4 MIMO downlink channels corresponding to CLSM and formulate two techniques to estimate the PMI at the relay using a hypothesis-testing framework. We evaluate their performance via simulations for various ITU channel models over a range of SNR values and for different channel quality indicators (CQIs). We compare them to the case when the true PMI is known at the relay and show that the performance of the proposed schemes is within 2 dB at 10% block error rate (BLER) in almost all scenarios. Furthermore, the techniques add minimal computational overhead to the existing receiver structure. Finally, we also identify scenarios where using the proposed precoder detection algorithms in conjunction with the cooperative decode-and-forward relaying mechanism benefits the primary user equipment (PUE) and improves its BLER performance. We therefore conclude that the proposed algorithms, as well as the cooperative relaying mechanism at the CMR, can be gainfully employed in a variety of real-life scenarios in LTE networks.
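
A hedged illustration of PMI detection by hypothesis testing, not the thesis' algorithm: for each candidate precoder in an assumed codebook, score how well it explains the received pilot observations and select the best hypothesis.

```python
# Illustrative residual-based PMI detection with a made-up 2x2 codebook.
import numpy as np

def detect_pmi(y, H, s, codebook):
    """y: received vector, H: estimated channel, s: known pilot symbols,
    codebook: list of candidate precoder matrices. Returns the index of the
    precoder minimising the residual energy ||y - H W s||^2."""
    residuals = [np.linalg.norm(y - H @ W @ s) ** 2 for W in codebook]
    return int(np.argmin(residuals))

rng = np.random.default_rng(3)
codebook = [np.eye(2) / np.sqrt(2),
            np.array([[1, 1], [1, -1]]) / 2]
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
s = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)
true_pmi = 1
y = H @ codebook[true_pmi] @ s + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print("detected PMI:", detect_pmi(y, H, s, codebook), "(true:", true_pmi, ")")
```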

Relevance: 10.00%

Abstract:

This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM reported to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm2, while the estimated area of the calibration circuits is 0.03 mm2. The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm2 silicon area and dissipating 9.14 mW from a 1.2-V supply with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm2 and dissipates 30.3 mW, including the synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
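
As a quick cross-check of the figure of merit quoted above, the conventional Walden-style formula FoM = P / (2^ENOB · f_s), with ENOB = (SNDR − 1.76)/6.02, can be evaluated directly; note that the sketch below uses the low-frequency SNDR from the text, whereas the 50.8 fJ/conversion-step value is specified at Nyquist, where the SNDR is somewhat lower.

```python
# Back-of-the-envelope FoM check with the numbers quoted in the abstract.
power_w = 3.0e-3      # 3.0 mW
fs_hz   = 22.5e6      # 22.5 MS/s
sndr_db = 71.1        # low-frequency SNDR

enob = (sndr_db - 1.76) / 6.02
fom_j_per_step = power_w / (2 ** enob * fs_hz)
print(f"ENOB ≈ {enob:.2f} bits, FoM ≈ {fom_j_per_step * 1e15:.1f} fJ/conversion-step")
```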

Relevance: 10.00%

Abstract:

The social landscape is filled with an intricate web of species-specific desired objects and courses of action. Humans are highly social animals and, as they navigate this landscape, they need to produce adapted decision-making behaviour. Traditionally, social and non-social neural mechanisms affecting choice have been investigated using different approaches. Recently, in an effort to unite these findings, two main theories have been proposed to explain how the brain might encode social and non-social motivational decision-making: the extended common currency and the social valuation specific schema (Ruff & Fehr 2014). One way to test these theories is to directly compare neural activity related to social and non-social decision outcomes within the same experimental setting. Here we address this issue by focusing on the neural substrates of social and non-social forms of uncertainty. Using functional magnetic resonance imaging (fMRI) we directly compared the neural representations of reward and risk prediction errors (RePE and RiPE) in social and non-social situations using gambling games. We used a trust betting game to vary uncertainty along a social dimension (trustworthiness), and a card game (Preuschoff et al. 2006) to vary uncertainty along a non-social dimension (pure risk). The trust game was designed to maintain the same structure as the card game. In a first study, we exposed a divide between subcortical and cortical regions when comparing the way these regions process social and non-social forms of uncertainty during outcome anticipation. Activity in subcortical regions reflected social and non-social RePE, while activity in cortical regions correlated with social RePE and non-social RiPE. The second study focused on outcome delivery and integrated the concept of RiPE in non-social settings with that of fairness and monetary utility maximisation in social settings. In particular, these results corroborate recent models of anterior insula function (Singer et al. 2009; Seth 2013), and expose a possible neural mechanism that weights fairness and uncertainty but not monetary utility. The third study focused on functionally defined regions of the early visual cortex (V1), showing how activity in these areas, traditionally considered only visual, might reflect motivational prediction errors in addition to known perceptual prediction mechanisms (den Ouden et al. 2012). On the whole, while our results do not unilaterally support one theory or the other in modeling the underlying neural dynamics of social and non-social forms of decision making, they provide a working framework in which both general mechanisms might coexist.

Relevance: 10.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
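
For the dynamic-programming step mentioned above, a minimal recursive-logit-style sketch (toy network and utilities, not the thesis' models) shows how the value function and link choice probabilities can be computed by fixed-point iteration.

```python
# Toy value-function iteration: V(k) = log( sum_a exp( v(k,a) + V(succ(k,a)) ) ).
import numpy as np

# arcs[node] = list of (instantaneous utility, successor node); "D" is the destination.
arcs = {
    "A": [(-1.0, "B"), (-1.5, "C")],
    "B": [(-1.0, "D")],
    "C": [(-0.5, "D")],
    "D": [],
}

V = {k: 0.0 for k in arcs}            # V(D) = 0 by convention
for _ in range(100):                  # fixed-point iteration
    for k, out in arcs.items():
        if out:
            V[k] = np.log(sum(np.exp(u + V[j]) for u, j in out))

# Link choice probabilities at node A follow the logit form.
num = [np.exp(u + V[j]) for u, j in arcs["A"]]
probs = np.array(num) / sum(num)
print({j: round(p, 3) for (_, j), p in zip(arcs["A"], probs)})
```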

Relevance: 10.00%

Abstract:

Cardiac function is generally monitored by analyzing the action potentials generated in the cells responsible for the contraction and relaxation of this organ. The monitoring process consists of several stages. First, the signals associated with the activity of the cardiac cells are acquired. The connection between the human body and the conditioning system can be implemented with different types of electrodes: metal-plate, suction, top-hat, among others. Before acquisition, the electrical signal picked up by the electrodes must be conditioned to match the input specifications of the data acquisition card (DAQ). Essentially, the signal must be amplified so that the dynamic range of the quantizer is used as fully as possible. The noise characteristics of the required amplifier must be designed so that the amplifier's internal noise does not affect the interpretation of the original electrocardiogram (ECG). Several requirements were taken into account in the amplifier design. The signal-to-noise ratio (SNR) between the ECG signal and the quantization noise must be optimized. In addition, the level of the ECG signal at the DAQ input should reach the full-scale level of the quantizer. Also, the total noise at the quantizer input must be negligible compared with the smallest discernible ECG signal. In order to carry out an electronic design with these noise specifications, a thorough study of the fundamentals of noise characterization is necessary. Topics covered include the basic theory of random signals, spectral analysis, and their application to the characterization of electronic systems. Finally, all these concepts have been applied to the characterization of the different noise sources in circuits with operational amplifiers. Many amplifier prototypes corresponding to different designs have been implemented on printed circuit boards (PCBs). Although the bandwidth of the operational amplifier would allow implementation on a breadboard, the noise specifications require the use of a PCB. Indeed, circuits implemented on a PCB are less sensitive to noise and interference than breadboard circuits, given the physical characteristics of the two types of prototype.
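
A hedged numerical sketch of the sizing argument above: choose the amplifier gain so that a representative ECG amplitude fills the ADC range, then compare the quantization noise with the amplifier noise referred to the ADC input. All component values are illustrative, not those of the thesis.

```python
# Illustrative gain and noise budget for an ECG front end (made-up values).
import numpy as np

vref         = 3.3        # ADC full-scale range [V]
n_bits       = 12
ecg_peak     = 1.5e-3     # representative ECG amplitude at the electrodes [V]
amp_noise_in = 2e-6       # assumed amplifier input-referred RMS noise [V]

gain = vref / (2 * ecg_peak)                 # map ±ecg_peak onto the full scale
lsb = vref / 2 ** n_bits
q_noise_rms = lsb / np.sqrt(12)              # quantization noise of an ideal ADC
amp_noise_at_adc = amp_noise_in * gain       # amplifier noise seen by the ADC

print(f"gain ≈ {gain:.0f}, LSB = {lsb*1e3:.2f} mV")
print(f"quantization noise ≈ {q_noise_rms*1e6:.0f} µV RMS, "
      f"amplifier noise at ADC ≈ {amp_noise_at_adc*1e6:.0f} µV RMS")
```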

Relevance: 10.00%

Abstract:

The quality of 18F-FDG PET/CT images in overweight patients is commonly degraded. This study retrospectively evaluates the relation between SNR, weight and injected dose in 65 patients, with weights ranging from 35 to 120 kg, scanned on a Biograph mCT using a standardized protocol in the Nuclear Medicine Department at Radboud University Medical Centre in Nijmegen, The Netherlands. Five ROIs were drawn in the liver, assumed to be an organ of homogeneous metabolism, at the same location in five consecutive slices of the PET/CT scans to obtain the mean uptake (signal) values and their standard deviation (noise); the ratio of the two gives the signal-to-noise ratio in the liver. Using a spreadsheet, weight, height, SNR and body mass index were calculated, and graphs were produced to examine the relation between these factors. The graphs showed that SNR decreases as body weight and/or BMI increases, and that the SNR decreases even though the injected dose increases; this is because heavier patients receive higher doses and, as reported, heavier patients have lower SNR. These findings suggest that the quality of the images acquired in heavier patients, measured by SNR, is worse than in thinner patients, even though higher FDG doses are given. Taking all this into consideration, a new formula was needed to calculate the dose to administer so as to obtain a good and constant SNR in every patient. Through mathematical calculation, it was possible to arrive at two new equations (power and exponential) that yield an SNR, referenced to a scan made at a specific reference weight (86 kg was chosen), that is independent of body mass. The study implies that with these new formulas, patients heavier than the reference weight will receive higher doses and lighter patients will receive lower doses. With the median at 86 kg, the new dose and new SNR were calculated, and it was concluded that image quality remains almost constant as weight increases while the quantity of FDG needed remains almost the same, without increasing the cost of the total amount of FDG used across these patients.
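
The two quantities discussed above can be sketched as follows: liver SNR from ROI statistics, and a weight-based dose rescaling around the 86 kg reference. The power-law exponent and reference dose below are placeholders; the study derives its own power and exponential equations.

```python
# Illustrative liver-SNR measure and a placeholder weight-based dose scaling.
import numpy as np

def liver_snr(roi_means, roi_stds):
    """SNR from liver ROIs: mean uptake over mean standard deviation."""
    return np.mean(roi_means) / np.mean(roi_stds)

def scaled_dose(weight_kg, reference_dose_mbq=250.0, reference_weight_kg=86.0, p=1.0):
    """Hypothetical power-law dose scaling around the 86 kg reference weight."""
    return reference_dose_mbq * (weight_kg / reference_weight_kg) ** p

print(f"SNR example: {liver_snr([5.1, 5.0, 5.2, 4.9, 5.0], [0.42, 0.40, 0.45, 0.41, 0.43]):.1f}")
for w in (60, 86, 110):
    print(f"{w:>3} kg -> {scaled_dose(w):.0f} MBq (placeholder parameters)")
```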

Relevance: 10.00%

Abstract:

Atmospheric scattering plays a crucial role in degrading the performance of electro-optical imaging systems operating in the visible and infrared spectral bands, and hence limits the quality of the acquired images, either through reduction of contrast or increase of image blur. The exact nature of light scattering by atmospheric media is highly complex and depends on the types, orientations, sizes and distributions of the particles constituting these media, as well as the wavelengths, polarization states and directions of the propagating radiation. Here we follow the common approach for solving imaging and propagation problems by treating the light propagating through atmospheric media as composed of two main components: a direct (unscattered) and a scattered component. In this work we developed a detailed model of the effects of absorption and scattering by haze and fog atmospheric aerosols on the optical radiation propagating from the object plane to an imaging system, based on the classical theory of EM scattering. This detailed model is then used to compute the average point spread function (PSF) of an imaging system, properly accounting for the effects of diffraction, scattering, and the appropriate optical power levels of both the direct and the scattered radiation arriving at the pupil of the imaging system. The calculated PSF, properly weighted for the energy contributions of the direct and scattered components, is then used in combination with a radiometric model to estimate the average number of direct and scattered photons detected at the sensor plane, which are in turn used to calculate the image-spectrum signal-to-noise ratio (SNR) in the visible, near-infrared (NIR) and mid-infrared (MIR) spectral bands. Reconstruction of images degraded by atmospheric scattering and measurement noise is then performed, up to the limit imposed by the noise-effective cutoff spatial frequency of the image-spectrum SNR. Key results of this research are as follows: a mathematical model, based on Mie scattering theory, of how scattering from aerosols affects the overall point spread function (PSF) of an imaging system was developed, coded in MATLAB, and demonstrated. This model, along with radiometric theory, was used to predict the limiting resolution of an imaging system as a function of the optics, the scattering environment, and measurement noise. Finally, image reconstruction algorithms were developed and demonstrated which mitigate the effects of scattering-induced blurring to within the limits imposed by noise.
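
A one-dimensional toy sketch of the PSF-weighting and cutoff ideas above (widths, energy split and noise level are made up, and the full Mie/radiometric model is not reproduced): the total PSF is an energy-weighted sum of a narrow direct term and a broad scattered term, and the usable band is where the image-spectrum SNR stays above 1.

```python
# Illustrative 1-D direct-plus-scattered PSF and noise-effective cutoff frequency.
import numpy as np

x = np.linspace(-50, 50, 1001)                       # spatial axis [arbitrary units]

def gaussian(x, sigma):
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return g / g.sum()

w_direct, w_scattered = 0.6, 0.4                     # assumed energy split
psf = w_direct * gaussian(x, 1.0) + w_scattered * gaussian(x, 15.0)

mtf = np.abs(np.fft.rfft(psf))                       # |OTF|, normalised to 1 at DC
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])

signal_spectrum = 1.0 * mtf                          # flat object spectrum, attenuated by the MTF
noise_floor = 5e-3                                   # assumed measurement-noise level
snr = signal_spectrum / noise_floor
cutoff = freqs[np.argmax(snr < 1.0)] if np.any(snr < 1.0) else freqs[-1]
print(f"noise-effective cutoff frequency ≈ {cutoff:.3f} cycles/unit")
```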

Relevance: 10.00%

Abstract:

In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are first processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel will have the highest ratio of signal power to noise power at its output. According to this, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered on a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment, and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, while high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel shape (symmetric and asymmetric). Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent off-frequency looking, (2) white noise satisfactorily prevents off-frequency looking independently of the shape and bandwidth of the visual channel, and, interestingly, we show for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for the six masking noises used in the simulations and in the experiments.
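
A small sketch of the model's core computation, in illustrative units rather than the paper's implementation: for each candidate channel, the output SNR is the signal power passed by the channel divided by the noise power passed by it, and the mediating channel is the one maximising that ratio; with high-pass masking noise the maximum can fall below the signal frequency (off-frequency looking).

```python
# Illustrative channel-selection-by-maximum-output-SNR computation.
import numpy as np

f = np.linspace(0.25, 16, 400)                      # spatial frequency axis [c/deg]

def channel(f, f0, bw_oct=1.0):
    """Log-Gaussian channel tuning centred on f0 with ~1-octave bandwidth."""
    return np.exp(-0.5 * (np.log2(f / f0) / (bw_oct / 2)) ** 2)

signal_f0 = 4.0
signal_power = np.exp(-0.5 * ((f - signal_f0) / 0.1) ** 2)    # narrowband signal
noise_power = np.where(f > signal_f0, 1.0, 1e-3)              # high-pass masking noise

centres = np.geomspace(0.5, 12, 50)
snr_out = [np.sum(signal_power * channel(f, c) ** 2) /
           np.sum(noise_power * channel(f, c) ** 2) for c in centres]
best = centres[int(np.argmax(snr_out))]
print(f"channel maximising output SNR is centred near {best:.2f} c/deg "
      f"(signal at {signal_f0} c/deg) -> off-frequency looking")
```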

Relevance: 10.00%

Abstract:

The general reserve, which may not be divided among the members, is a mandatory holding that remains indivisible throughout the cooperative's existence and is subject to "disinterested devolution in the event of liquidation or dissolution". This reserve acts as a lever supporting the development of the cooperative and of the cooperative movement as a whole. The principle of the indivisibility of the reserve prohibits all Québec cooperatives from dividing the general reserve among their members and from reducing it, notably through the payment of a patronage refund, at any time during the cooperative's existence. Indeed, the indivisibility of the reserve rests on the idea that the purpose of the cooperative is not to accumulate capital in order to distribute it among the members, but to create a collective capital that benefits all present and future members. While the concept of the indivisibility of the reserve thus prohibits dividing the reserve during the cooperative's existence, the same prohibition takes the name of disinterested devolution of the net assets when the cooperative ceases to exist. Disinterested devolution means that all non-financial cooperatives are prohibited from dividing the remaining assets upon the disappearance (dissolution or liquidation) of the cooperative, with the exception of agricultural cooperatives, which may in that case decide to distribute the remaining assets to the members, although the reasons for this exception are not known. Moreover, the indivisibility of the reserve is regarded as a mere legal inconvenience for the members and has been rewritten several times in cooperative legislation without the reasons for these amendments being truly known. The aim of this thesis is to open a critical discussion around the following central question: given the current legal framework for cooperatives, should the principle of the indivisibility of the reserve be retained as such in the Cooperatives Act, or simply removed, as in the business corporation, where it does not exist, without this removal undermining the legal notion of the cooperative? More precisely, what is this legal framework, and what reasons can be advanced in favour of retaining or removing the principle of the indivisibility of the reserve? To answer this question, the thesis is divided into two parts. The first part explores the legal framework of non-financial cooperatives in Québec in comparison with certain legal concepts drawn from other legislation. It examines the legal foundations underlying the indivisibility of the reserve in Québec law on non-financial cooperatives. The second part develops a critical discussion around the history of the principle of the indivisibility of the reserve (ch. 3), the various legal arguments available (ch. 4), and hypotheses built around the observable practical effects (ch. 5). It explores these dimensions in support of retaining or not retaining the indivisibility of the reserve in the current legislation on non-financial cooperatives. Although the research leads to a nuanced answer, the overall results argue rather in favour of retaining the principle of the indivisibility of the reserve.
As a preliminary step, examining the legal foundations of the concepts underlying the indivisibility of the reserve in Québec law on non-financial cooperatives made it possible to understand those concepts before answering the question of whether the principle should be retained in or removed from the current cooperative legislation. The discussion highlighted the importance of a fairly obvious basic reality: this principle preserves the reserve, which is useful for the development of the cooperative and of the cooperative movement as a whole. Moreover, the principle of the indivisibility of the reserve is part of the social vocation of the cooperative, whose purpose is not the maximisation of monetary profit. The indivisibility of the reserve also keeps Québec cooperative law coherent with the notion of the cooperative as defined by the Québec cooperative movement and the ICA, while responding to the historical aims of intergenerational equity and solidarity. Finally, although the discussion of the arguments drawn from illustrative accounting data and from a few interviews conducted with active members of the cooperative movement does not support any firm conclusion, it appears that the indivisibility of the reserve would not slow the upward trend in the investments and revenues of non-financial cooperatives. This prohibition may even constitute a self-financing mechanism for the cooperative and a symbol of solidarity.